447625
https://en.wikipedia.org/wiki/Ray%20casting
Ray casting
Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics, where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "Ray Casting" was introduced by Scott Roth while at the General Motors Research Labs from 1978 to 1980. His paper, "Ray Casting for Modeling Solids", describes how to model solid objects by combining primitive solids, such as blocks and cylinders, using the set operators union (+), intersection (&), and difference (-). The general idea of using these binary operators for solid modeling is largely due to Voelcker and Requicha's geometric modelling group at the University of Rochester. See Solid modeling for a broad overview of solid modeling methods. The figure on the right shows a U-Joint modeled from cylinders and blocks in a binary tree using Roth's ray casting system, circa 1979. Before ray casting (and ray tracing), computer graphics algorithms projected surfaces or edges (e.g., lines) from the 3D world to the image plane, where visibility logic had to be applied. The world-to-image plane projection is a 3D homogeneous coordinate system transformation (also known as 3D projection, affine transformation, or projective transform (homography)). Rendering an image in that way is difficult to achieve with hidden surface/edge removal. In addition, silhouettes of curved surfaces have to be explicitly solved for, whereas they are an implicit by-product of ray casting, so there is no need to solve for them whenever the view changes. Ray casting greatly simplified image rendering of 3D objects and scenes because a line transforms to a line. So, instead of projecting curved edges and surfaces in the 3D scene to the 2D image plane, transformed lines (rays) are intersected with the objects in the scene. A homogeneous coordinate transformation is represented by a 4x4 matrix. The mathematical technique is common to computer graphics and geometric modeling. A transform includes rotations around the three axes, independent scaling along the axes, translations in 3D, and even skewing. Transforms are easily concatenated via matrix arithmetic. For use with a 4x4 matrix, a point is represented by [X, Y, Z, 1] and a direction vector is represented by [Dx, Dy, Dz, 0]. (The fourth term is for translation, which does not apply to direction vectors.) While simplifying the mathematics, the ray casting algorithm is very computer-processing intensive. Pixar has large render farms, buildings with thousands of CPUs, to make their animations using ray tracing [aka "ray casting"] as a core technique. Concept Ray casting is the most basic of many computer graphics rendering algorithms that use the geometric algorithm of ray tracing. Ray tracing-based rendering algorithms operate in image order to render three-dimensional scenes to two-dimensional images. Geometric rays are traced from the eye of the observer to sample the light (radiance) travelling toward the observer from the ray direction. The speed and simplicity of ray casting comes from computing the color of the light without recursively tracing additional rays that sample the radiance incident on the point that the ray hit.
This eliminates the possibility of accurately rendering reflections, refractions, or the natural falloff of shadows; however, all of these elements can be faked to a degree, by creative use of texture maps or other methods. The high speed of calculation made ray casting a handy rendering method in early real-time 3D video games. The idea behind ray casting is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray – think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modelling techniques and easily rendered. From the abstract for the paper "Ray Casting for Modeling Solids": To visualize and analyze the composite solids modeled, virtual light rays are cast as probes. By virtue of its simplicity, ray casting is reliable and extensible. The most difficult mathematical problem is finding line-surface intersection points. So, surfaces as planes, quadrics, tori, and probably even parametric surface patches may bound the primitive solids. The adequacy and efficiency of ray casting are issues addressed here. A fast picture generation capability for interactive modeling is the biggest challenge. Light rays and the camera geometry form the basis for all geometric reasoning here. This figure shows a pinhole camera model for perspective effect in image processing and a parallel camera model for mass analysis. The simple pinhole camera model consists of a focal point (or eye point) and a square pixel array (or screen). Straight light rays pass through the pixel array to connect the focal point with the scene, one ray per pixel. To shade pictures, the rays’ intensities are measured and stored as pixels. The reflecting surface responsible for a pixel’s value intersects the pixel’s ray. When the focal length, the distance between the focal point and the screen, is infinite, then the view is called “parallel” because all light rays are parallel to each other, perpendicular to the screen. Although the perspective view is natural for making pictures, some applications need rays that can be uniformly distributed in space. For modeling convenience, a typical standard coordinate system for the camera has the screen in the X-Y plane, the scene in the +Z half space, and the focal point on the -Z axis. A ray is simply a straight line in the 3D space of the camera model. It is best defined in parameterized form by a point (X0, Y0, Z0) and a direction vector (Dx, Dy, Dz). In this form, points on the line are ordered and accessed via a single parameter t. For every value of t, a corresponding point (X, Y, Z) on the line is defined: X = X0 + t · Dx, Y = Y0 + t · Dy, Z = Z0 + t · Dz. If the vector is normalized, then the parameter t is distance along the line.
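The parameterized ray just described can be written down directly. The following C sketch is illustrative only; the struct and function names are not taken from Roth's paper, and the normalization step it includes is the one discussed in the next paragraph.

```c
#include <math.h>

/* A ray in the camera's 3D space: an origin point and a direction vector. */
typedef struct { double x, y, z; } Vec3;
typedef struct { Vec3 origin; Vec3 dir; } Ray;

/* Point on the ray at parameter t: (X0 + t*Dx, Y0 + t*Dy, Z0 + t*Dz). */
static Vec3 ray_point(const Ray *r, double t) {
    Vec3 p = { r->origin.x + t * r->dir.x,
               r->origin.y + t * r->dir.y,
               r->origin.z + t * r->dir.z };
    return p;
}

/* Normalize the direction so that t measures distance along the ray. */
static void ray_normalize(Ray *r) {
    double dist = sqrt(r->dir.x * r->dir.x +
                       r->dir.y * r->dir.y +
                       r->dir.z * r->dir.z);
    if (dist > 0.0) {
        r->dir.x /= dist;
        r->dir.y /= dist;
        r->dir.z /= dist;
    }
}
```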
The vector can be normalized easily with the following computation: Dist = √(Dx² + Dy² + Dz²) Dx = Dx / Dist Dy = Dy / Dist Dz = Dz / Dist Given geometric definitions of the objects, each bounded by one or more surfaces, the result of computing one ray’s intersection with all bounded surfaces in the scene is defined by two arrays, Ray parameters: t[1], t[2], ..., t[n] Surface pointers: S[1], S[2], ..., S[n] where n is the number of ray-surface intersections. The ordered list of ray parameters t[i] denotes the enter-exit points. The ray enters a solid at point t[1], exits at t[2], enters a solid at t[3], etc. Point t[1] is closest to the camera and t[n] is furthest. In association with the ray parameters, the surface pointers contain a unique address for the intersected surface’s information. The surface can have various properties such as color, specularity, transparency with/without refraction, translucency, etc. The solid associated with the surface may have its own physical properties such as density. This could be useful, for instance, when an object consists of an assembly of different materials and the overall center of mass and moments of inertia are of interest. Applying the information Three algorithms using ray casting are to make line drawings, to make shaded pictures, and to compute volumes and other physical properties. Each algorithm, given a camera model, casts one ray per pixel in the screen. For computing volume, the resolution of the pixel screen to use depends on the desired accuracy of the solution. For line drawings and picture shading, the resolution determines the quality of the image. LINE DRAWINGS. To draw the visible edges of a solid, generate one ray per pixel moving top-down, left-right in the screen. Evaluate each ray in order to identify the visible surface S[1], the first surface pointer in the sorted list of ray-surface intersections. If the visible surface at pixel location (X, Y) is different from the visible surface at pixel (X-1, Y), then display a vertical line one pixel long centered at (X-½, Y). Similarly, if the visible surface at (X, Y) is different from the visible surface at pixel (X, Y-1), then display a horizontal line one pixel long centered at (X, Y-½). The resulting drawing will consist of horizontal and vertical edges only, looking jagged at coarse resolutions. Roth's ray casting system generated the images of solid objects on the right. Box enclosures, dynamic bounding, and coherence were used for optimization. For each picture, the screen was sampled with a density of about 100x100 (i.e., 10,000) rays and new edges were located via binary searches. Then all edges were followed by casting additional rays at one-pixel increments on the two sides of the edges. Each picture was drawn on a Tektronix tube at 780x780 resolution. SHADED PICTURES. To make a shaded picture, again cast one ray per pixel in the screen. This time, however, use the visible surface pointer S[1] at each pixel to access the description of the surface. From this, compute the surface normal at the visible point t[1]. The pixel’s value, the displayable light intensity, is proportional to the cosine of the angle formed by the surface normal and the light-source-to-surface vector. Processing all pixels this way produces a raster-type picture of the scene.
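That cosine shading rule can be sketched in C as follows. This is a minimal illustration of the rule stated above, not code from Roth's system; it re-declares a small vector type so the sketch stands alone, and clamps back-facing results to zero.

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalized(Vec3 v) {
    double len = sqrt(dot(v, v));
    Vec3 n = { v.x / len, v.y / len, v.z / len };
    return n;
}

/* Diffuse intensity at a visible point: proportional to the cosine of the
   angle between the surface normal and the direction toward the light.
   Surfaces facing away from the light (negative cosine) get intensity 0. */
static double diffuse_intensity(Vec3 normal, Vec3 point, Vec3 light_pos,
                                double light_brightness) {
    Vec3 to_light = { light_pos.x - point.x,
                      light_pos.y - point.y,
                      light_pos.z - point.z };
    double cos_angle = dot(normalized(normal), normalized(to_light));
    return cos_angle > 0.0 ? light_brightness * cos_angle : 0.0;
}
```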
COMPUTING VOLUME AND MOMENTS OF INERTIA. The volume (and similar properties) of a solid bounded by curved surfaces is easily computed by the “approximating sums” integration method, by approximating the solid with a set of rectangular parallelepipeds. This is accomplished by taking an "in-depth" picture of the solid in a parallel view. Casting rays through the screen into the solid partitions the solid into volume elements. Two dimensions of the parallelepipeds are constant, defined by the 2D spacing of rays in the screen. The third dimension is variable, defined by the enter-exit points computed. Specifically, if the horizontal and vertical distances between rays in the screen are S, then the volume “detected” by each ray is S × S × (t[2]-t[1] + t[4]-t[3] + ∙∙∙ + t[n]-t[n-1]) / L where L is defined as the length of the direction vector. (If already normalized, this is equal to 1.) L = √(Dx² + Dy² + Dz²) Each (t[i]-t[i-1])/L is the length of a ray segment that is inside of the solid. This figure shows the parallelepipeds for a modeled solid using ray casting. This uses the parallel-projection camera model. In-out ray classification This figure shows an example of the binary operators in a composition tree using + and – where a single ray is evaluated. The ray casting procedure starts at the top of the solid composition tree, recursively descends to the bottom, classifies the ray with respect to the primitive solids, and then returns up the tree combining the classifications of the left and right subtrees. This figure illustrates the combining of the left and right classifications for all three binary operators. Realistic shaded pictures Ray casting is a natural modeling tool for making shaded pictures. The grayscale ray-casting system developed by Scott Roth and Daniel Bass at GM Research Labs produced pictures on a Ramtek color raster display around 1979. To compose pictures, the system provided the user with the following controls: View Viewing direction and position Focal length: wide-angle perspective to parallel Zoom factor Illumination Number of light sources Locations and intensities of lights Optional shadows Intensities of ambient light and background Surface reflectance % reflected diffusely % reflected specularly % transmitted This figure shows a table scene with shadows from two point light sources. Shading algorithms that implement all of the realistic effects are computationally expensive, but relatively simple. For example, the following figure shows the additional rays that could be cast for a single light source. For a single pixel in the image to be rendered, the algorithm casts a ray starting at the focal point and determines that it intersects a semi-transparent rectangle and a shiny circle. An additional ray must then be cast starting at that point in the direction symmetrically opposite the surface normal at the ray-surface intersection point in order to determine what is visible in the mirrored reflection. That ray intersects the triangle, which is opaque. Finally, each ray-surface intersection point is tested to determine if it is in shadow. The “shadow feeler” ray is cast from the ray-surface intersection point to the light source to determine if any other surface blocks that pathway. Turner Whitted calls the secondary and additional rays “Recursive Ray Tracing”. [A room of mirrors would be costly to render, so limiting the number of recursions is prudent.]
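The shadow-feeler test described above can be sketched as follows. This is an illustrative C fragment, not code from Roth's or Whitted's systems; intersect_nearest is a hypothetical helper standing in for whatever ray–surface intersection routine the renderer already provides.

```c
#include <stdbool.h>

typedef struct { double x, y, z; } Vec3;
typedef struct { Vec3 origin; Vec3 dir; } Ray;

/* Hypothetical helper: returns true and the nearest hit parameter (t > 0)
   if the ray intersects any surface in the scene. */
bool intersect_nearest(const Ray *ray, double *t_hit);

/* Cast a "shadow feeler" from a visible point toward the light and report
   whether any surface blocks the path before the light is reached. */
static bool in_shadow(Vec3 point, Vec3 light_pos) {
    Ray feeler;
    feeler.origin = point;
    feeler.dir.x = light_pos.x - point.x;
    feeler.dir.y = light_pos.y - point.y;
    feeler.dir.z = light_pos.z - point.z;

    double t;
    /* With this unnormalized direction, t = 1 corresponds to the light
       itself, so only hits with 0 < t < 1 block the path; the small epsilon
       avoids re-hitting the surface the point lies on. */
    if (intersect_nearest(&feeler, &t))
        return t > 1e-6 && t < 1.0;
    return false;
}
```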
Whitted modeled refraction for transparencies by generating a secondary ray from the visible surface point at an angle determined by the solid’s index of refraction. The secondary ray is then processed as a specular ray. For the refraction formula and pictorial examples, see Whitted’s paper. Enclosures and efficiency Ray casting qualifies as a brute force method for solving problems. The minimal algorithm is simple, particularly in consideration of its many applications and ease of use, but applications typically cast many rays. Millions of rays may be cast to render a single frame of an animated film. Computer processing time increases with the resolution of the screen and the number of primitive solids/surfaces in the composition. By using minimum bounding boxes around the solids in the composition tree, the exhaustive search for a ray-solid intersection resembles an efficient binary search. The brute force algorithm does an exhaustive search because it always visits all the nodes in the tree—transforming the ray into primitives’ local coordinate systems, testing for ray-surface intersections, and combining the classifications—even when the ray clearly misses the solid. In order to detect a “clear miss”, a faster algorithm uses the binary composition tree as a hierarchical representation of the space that the solid composition occupies. But all position, shape, and size information is stored at the leaves of the tree where primitive solids reside. The top and intermediate nodes in the tree only specify combine operators. Characterizing with enclosures the space that all solids fill gives all nodes in the tree an abstract summary of position and size information. Then, the quick “ray intersects enclosure” tests guide the search in the hierarchy. When the test fails at an intermediate node in the tree, the ray is guaranteed to classify as out of the composite, so recursing down its subtrees to further investigate is unnecessary. Accurately assessing the cost savings for using enclosures is difficult because it depends on the spatial distribution of the primitives (the complexity distribution) and on the organization of the composition tree. The optimal conditions are: No primitive enclosures overlap in space Composition tree is balanced and organized so that sub-solids near in space are also nearby in the tree In contrast, the worst condition is: All primitive enclosures mutually overlap The following are miscellaneous performance improvements made in Roth’s paper on ray casting, but there have been considerable improvements subsequently made by others. Early Outs. If the operator at a composite node in the tree is – or & and the ray classifies as out of the composite’s left sub-solid, then the ray will classify as out of the composite regardless of the ray’s classification with respect to the right sub-solid. So, classifying the ray with respect to the right sub-solid is unnecessary and should be avoided for efficiency. Transformations. By initially combining the screen-to-scene transform with the primitive’s scene-to-local transform and storing the resulting screen-to-local transforms in the primitive’s data structures, one ray transform per ray-surface intersection is eliminated. Recursion. Given a deep composition tree, recursion can be expensive in combination with allocating and freeing up memory. Recursion can be simulated using static arrays as stacks. Dynamic Bounding. 
If only the visible edges of the solid are to be displayed, the ray casting algorithm can dynamically bound the ray to cut off the search. That is, after finding that a ray intersects a sub-solid, the algorithm can use the intersection point closest to the screen to tighten the depth bound for the “ray intersects box” test. This only works for the + part of the tree, starting at the top. With – and &, nearby “in” parts of the ray may later become “out”. Coherence. The principle of coherence is that the surfaces visible at two neighboring pixels are more likely to be the same than different. Developers of computer graphics and vision systems have applied this empirical truth for efficiency and performance. For line drawings, the image area containing edges is normally much less than the total image area, so ray casting should be concentrated around the edges and not in the open regions. This can be effectively implemented by sparsely sampling the screen with rays and then locating, when neighboring rays identify different visible surfaces, the edges via binary searches. Anti-aliasing The jagged edges caused by aliasing are an undesirable effect of point sampling techniques and a classic problem with raster display algorithms. Linear or smoothly curved edges will appear jagged and are particularly objectionable in animations because movement of the image makes the edges appear fuzzy or look like little moving escalators. Also, details in the scene smaller than the spacing between rays may be lost. The jagged edges in a line drawing can be smoothed by edge following. The purpose of such an algorithm is to minimize the number of lines needed to draw the picture to within one-pixel accuracy. Smooth edges result. The line drawings above were drawn this way. To smooth the jagged edges in a shaded picture with subpixel accuracy, additional rays should be cast for information about the edges. (See Supersampling for a general approach.) Edges are formed by the intersection of surfaces or by the profile of a curved surface. Applying "Coherence" as described above via binary search, if the visible surface at pixel (X,Y) is different from the visible surface at pixel (X+1,Y), then a ray could be generated midway between them at (X+½,Y) and the visible surface there identified. The distance between sample points could be further subdivided, but the search need not be deep. The primary search depth to smooth jagged edges is a function of the intensity gradient across the edge. Since (1) the area of the image that contains edges is usually a small percentage of the total area and (2) the extra rays cast in binary searches can be bounded in depth (that of the visible primitives forming the edges), the cost for smoothing jagged edges is affordable. History of ray casting For the history of ray casting, see ray tracing (graphics), as both are essentially the same technique under different names. Scott Roth had invented the term "ray casting" before having heard of "ray tracing". Additionally, Scott Roth's development of ray casting at GM Research Labs occurred concurrently with Turner Whitted's ray tracing work at Bell Labs. Ray casting in early computer games In early first-person games, raycasting was used to efficiently render a 3D world from a 2D playing field using a simple one-dimensional scan over the horizontal width of the screen. Early first-person shooters used 2D ray casting as a technique to create a 3D effect from a 2D world.
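A minimal sketch of this column-per-ray approach follows. It is illustrative only: the map layout, screen dimensions, field of view, and step size are invented for the example, and the simple fixed-step march stands in for the faster grid-stepping schemes actual engines use.

```c
#include <math.h>

#define MAP_W 8
#define MAP_H 8
#define SCREEN_W 320
#define SCREEN_H 200
#define FOV 1.0471975512   /* 60 degrees in radians (illustrative) */

/* Hypothetical 2D map: 1 = wall cell, 0 = empty space. */
static const int map[MAP_H][MAP_W] = {
    {1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,1,1,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,1,0,0,1,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,1,1,1,1,1,1,1},
};

/* For one screen column, march a ray through the grid in small steps until
   it enters a wall cell, then convert the distance to a wall-slice height. */
static int column_wall_height(double px, double py, double view_angle, int column) {
    double ray_angle = view_angle - FOV / 2.0 + FOV * column / (double)SCREEN_W;
    double dx = cos(ray_angle), dy = sin(ray_angle);
    double dist = 0.0;
    const double step = 0.01;

    while (dist < 64.0) {                       /* arbitrary far limit */
        dist += step;
        int cx = (int)(px + dx * dist);
        int cy = (int)(py + dy * dist);
        if (cx < 0 || cy < 0 || cx >= MAP_W || cy >= MAP_H)
            break;
        if (map[cy][cx] == 1) {
            /* Project the distance onto the view direction to avoid the
               "fishbowl" distortion, then scale the slice inversely. */
            double perp = dist * cos(ray_angle - view_angle);
            return (int)(SCREEN_H / perp);
        }
    }
    return 0;   /* no wall hit within range */
}
```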
While the world appears 3D, the player cannot look up or down, or can do so only at limited angles with shearing distortion. This style of rendering eliminates the need to fire a ray for each pixel in the frame, as is the case with modern engines; once the hit point is found, the projection distortion is applied to the surface texture and an entire vertical column is copied from the result into the frame. This style of rendering also imposes limitations on the type of rendering which can be performed: for example, depth sorting can be performed, but depth buffering may not. That is, polygons must be fully in front of or behind one another; they may not partially overlap or intersect. Wolfenstein 3D The video game Wolfenstein 3D was built from a square-based grid of uniform-height walls meeting solid-colored floors and ceilings. In order to draw the world, a single ray was traced for every column of screen pixels and a vertical slice of wall texture was selected and scaled according to where in the world the ray hit a wall and how far it traveled before doing so. The purpose of the grid-based levels was twofold: ray-wall collisions can be found more quickly since the potential hits become more predictable, and memory overhead is reduced. However, encoding wide-open areas takes extra space. ShadowCaster The Raven Software game ShadowCaster uses an improved Wolfenstein-based engine with added floor and ceiling texturing and variable wall heights. Comanche series The Voxel Space engine developed by NovaLogic for the Comanche games traced a ray through each column of screen pixels and tested each ray against points in a heightmap. Then it transformed each element of the heightmap into a column of pixels, determined which are visible (that is, have not been occluded by pixels that have been drawn in front), and drew them with the corresponding color from the texture map. Beyond raycasting Later DOS games like id Software's DOOM kept many of the raycasting 2.5D restrictions for speed but went on to switch to alternative rendering techniques (like BSP), making them no longer raycasting engines. Computational geometry setting In computational geometry, the ray casting problem is also known as the ray shooting problem and may be stated as the following query problem: given a set of objects in d-dimensional space, preprocess them into a data structure so that for each query ray, the initial object hit by the ray can be found quickly. The problem has been investigated for various settings: space dimension, types of objects, restrictions on query rays, etc. One technique is to use a sparse voxel octree. See also Ray Tracing A more sophisticated ray-casting algorithm which considers global illumination Photon mapping Radiosity (computer graphics) Path Tracing Volume ray casting 2.5D References External links Raycasting example in the browser. (unavailable) Raycasting planes in WebGL with source code Interactive raycaster for the Commodore 64 in 254 bytes (with source code) Interactive raycaster for MSDOS in 64 bytes (with source code) Computer graphics algorithms
23655391
https://en.wikipedia.org/wiki/Kqueue
Kqueue
Kqueue is a scalable event notification interface introduced in FreeBSD 4.1 in July 2000, also supported in NetBSD, OpenBSD, DragonFly BSD, and macOS. Kqueue was originally authored in 2000 by Jonathan Lemon, then involved with the FreeBSD Core Team. Kqueue makes it possible for software like nginx to solve the c10k problem. Kqueue provides efficient input and output event pipelines between the kernel and userland. Thus, it is possible to modify event filters as well as receive pending events while using only a single system call to kevent(2) per main event loop iteration. This contrasts with older traditional polling system calls such as poll(2) and select(2), which are less efficient, especially when polling for events on numerous file descriptors. Kqueue not only handles file descriptor events but is also used for various other notifications such as file modification monitoring, signals, asynchronous I/O events (AIO), child process state change monitoring, and timers which support nanosecond resolution. Furthermore, kqueue provides a way to use user-defined events in addition to the ones provided by the kernel. Some other operating systems which traditionally only supported select(2) and poll(2) also currently provide more efficient polling alternatives, such as epoll on Linux and I/O completion ports on Windows and Solaris. libkqueue is a user space implementation of kqueue(2), which translates calls to an operating system's native backend event mechanism. API The function prototypes and types are found in sys/event.h. int kqueue(void); Creates a new kernel event queue and returns a descriptor. int kevent(int kq, const struct kevent *changelist, int nchanges, struct kevent *eventlist, int nevents, const struct timespec *timeout); Used to register events with the queue, then wait for and return any pending events to the user. In contrast to epoll, kqueue uses the same function to register and wait for events, and multiple event sources may be registered and modified using a single call. The changelist array can be used to pass modifications (changing the type of events to wait for, registering new event sources, etc.) to the event queue, which are applied before waiting for events begins. nevents is the size of the user-supplied eventlist array that is used to receive events from the event queue. EV_SET(kev, ident, filter, flags, fflags, data, udata); A macro that is used for convenient initialization of a struct kevent object. See also OS-independent libraries with support for kqueue: libevent libuv Kqueue equivalents for other platforms: on Solaris, Windows and AIX: I/O completion ports. Note that completion ports notify when a requested operation has completed, whereas kqueue can also notify when a file descriptor is ready to perform an I/O operation. on Linux: epoll system call has similar but not identical semantics. inotify is a Linux kernel subsystem that notices changes to the filesystem and reports those to applications. References External links libbrb_core implements an abstraction for an event-oriented base, using kqueue() system call FreeBSD source code of the kqueue() system call OpenBSD source code of the kqueue() system call NetBSD source code of the kqueue() system call DragonFly BSD source code of the kqueue() system call Events (computing) BSD software FreeBSD OpenBSD NetBSD DragonFly BSD MacOS Operating system APIs Operating system technology System calls
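A minimal example of the calls described in the API section above follows: it watches a single file descriptor for readability with one kevent(2) call that both registers the filter and waits for events. The function name and the 5-second timeout are illustrative, error handling is abbreviated, and the descriptor fd is assumed to have been opened elsewhere.

```c
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>
#include <unistd.h>

/* Wait up to 5 seconds for fd to become readable using kqueue/kevent. */
int wait_readable(int fd) {
    int kq = kqueue();                     /* create the kernel event queue */
    if (kq == -1) return -1;

    struct kevent change;
    EV_SET(&change, fd, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, NULL);

    struct kevent event;
    struct timespec timeout = { 5, 0 };    /* 5 s, 0 ns */

    /* One kevent() call both applies the changelist and waits for events. */
    int n = kevent(kq, &change, 1, &event, 1, &timeout);
    if (n > 0 && (int)event.ident == fd && event.filter == EVFILT_READ)
        printf("%ld bytes ready to read on fd %d\n", (long)event.data, fd);

    close(kq);
    return n;                              /* >0 ready, 0 timeout, -1 error */
}
```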
28636169
https://en.wikipedia.org/wiki/Repton%20Priory
Repton Priory
Repton Priory was a priory in Repton, Derbyshire, England. It was established in the 12th century and was originally under the control of Calke Priory. It was dissolved in 1538. The priory became a place of pilgrimage on account of the shrine of St Guthlac, and his bell. Pilgrims believed that placing their head upon it would cure headaches. History In the 12th century Maud of Gloucester, Countess of Chester, held the manor of Repton. When her husband Ranulf de Gernon, 4th Earl of Chester, died in 1153, she granted St Wystan's Church to the Augustinian canons at Calke Priory. Maud then had a new priory built at Repton, dedicated to the Holy Trinity. Repton Priory was originally a cell to Calke Priory; however, Countess Maud's donation was made on the condition that most of the canons should transfer to the new Repton Priory as soon as convenient. This happened in 1172, with the two priories' roles thus reversed and Calke becoming a cell to Repton. The canons did not abandon Calke entirely though, and the priory was for the next few centuries referred to as a joint priory of both Repton and Calke. Contemporary charters refer to the "Prior and Canonry of Holy Trinity of Repton and the Canonry of St. Giles of Calke". The medieval priory buildings included the priory church, a cloister flanked by a chapter house, refectory, prior's lodgings, a hall and cellars, plus ancillary buildings a short distance away. In 1220 Nicholas de Willington granted the advowson of the church at Willington to the priory on the condition that the canons of the priory would pray for him and his heirs. In January 1263 Pope Urban IV ordered the priory to pay his papal subdeacon and chaplain, John De Ebulo, a very large pension of forty silver marks a year. It is unclear why the pope ordered the priory to pay this expense. The priory was granted a charter of confirmation by Roger de Meyland, Bishop of Coventry and Lichfield, in 1271 and a second by King Henry III in 1272. These charters confirmed the priory's control of St Wystan's Church (which the priory had left without a vicar) and St. Wystan's eight chapelries at Bretby, Foremark, Ingleby, Measham, Milton, Newton, Smisby and Ticknall. The charters also confirmed Repton Priory's control of the churches of Croxall and Willington in Derbyshire and Baddow in Essex. The Taxation Roll of 1291 reveals the priory received an annual income of £38 0s. 3½d. from their secular properties, and £28 from their control of St Wystan's Church. As they held land with an income of over £20, in 1297 the prior was summoned to a muster at Nottingham to perform military service. The priory had originally remained under the patronage of the founding family: the descendants of Maud of Gloucester, Countess of Chester. However, the election of a new prior in 1336 revealed the priory's advowson had passed to the King. The advowson had originally passed through the Chester family to Ranulf de Blondeville, 6th Earl of Chester. Upon his death his property was shared between his four sisters, with Matilda of Chester, Countess of Huntingdon, receiving the advowson of the priory, which passed through her descendants to John Balliol, former King of Scotland. The control of the priory passed to King Edward I upon the forfeiture of all of John Balliol's land. The 1535 Valor Ecclesiasticus records the priory as having an annual income of £118 8s., after expenses.
Repton thus failed to escape the first wave of King Henry VIII's dissolutions, and was dissolved in 1536, along with the other small monasteries (those with incomes of £200 or less). Repton was, however, among a minority of priories which were reinstated after the payment of a bribe or fine. The year after it was first dissolved, on 12 June 1537, John Young was reappointed as prior, having paid the King a "very heavy fine" of £266 13s. 4d. The fine only saved the priory for another year, however, as on 25 October 1538 the priory was surrendered to the crown for dissolution for a second (and final) time. The prior, John Young, died three days before the formal surrender was signed. The sub-prior was allotted a pension of £6 annually; four of the canons were awarded £5 6s. 8d. annually; three canons were awarded £5 annually; and a further two canons were awarded £4 annually. Following dissolution the priory was awarded to Thomas Thacker, who retained the priory buildings. After his death in 1548 it passed to his son Gilbert Thacker. Following the accession of Catholic Queen Mary I, Gilbert was concerned that the priory might be put back into religious use, and so ordered that it be completely destroyed, a task that was almost entirely completed within a single day. Gilbert Thacker claimed "He would destroy the nest, for fear the birds should build therein again." On 6 June 1557 Sir John Port of Etwall died without a male heir, and his bequests included not only funds to provide almshouses at Etwall but also the means to found a "Grammar School in Etwalle or Reptone", where the scholars every day were to pray for the souls of his parents and other relatives. In 1559 the executors of the will purchased from the Thacker family, for £37 10s, the former priory site, which was developed into Repton School. Remains Of the original priory building only fragments survive. Fragments of the foundations of the prior's lodgings, dated c.1438, were incorporated into a later building at Repton School; the majority of this building dates from the 17th century, however, and was comprehensively altered in the 19th century. Parts of the foundations of other areas of the priory remain in several areas, having been uncovered during construction work at Repton School in 1922: the bases of a cluster of columns remain of the former chancel and chapels; fragments of an arch remain, belonging to the former pulpitum, which were moved to their current position in 1906; and fragments of the door surrounds of both the chapter house and warming room also survive. Priors Priors of Repton: Robert, c.1153–c.1160 Nicholas, c.1172–c.1181 Albred, c.1200 Richard, c.1208 Nicholas, c. 1215 John, c.1220 Reginald, c. 1230 Peter, c.1252 Robert, c.1289 Ralph, 1316–36 John de Lichfield, 1336–46 Simon de Sutton, 1346–56 Ralph of Derby, 1356–99 William of Tutbury, 1399– William Maynesin, c.1411 Wystan Porter, ?–1436; resigned John Overton, 1436–38; died in office John Wylne, 1438–71 Thomas Sutton, 1471–86 Henry Prest, 1486–1503 William Derby, 1503–08 John Young, 1508–36, and 1537–38; died in office References Pevsner, Nikolaus; Williamson, Elizabeth (1978), The Buildings of England: Derbyshire, Penguin Books. Monasteries in Derbyshire Monasteries dissolved under the English Reformation
153754
https://en.wikipedia.org/wiki/Nat%20Friedman
Nat Friedman
Nathaniel Dourif Friedman is an American technology executive and investor. From October 2018 to November 2021, he was the chief executive officer (CEO) of GitHub. Life and career In 1996, while a freshman at the Massachusetts Institute of Technology, Friedman befriended Miguel de Icaza on LinuxNet, the IRC network that Friedman had created to discuss Linux. As an intern at Microsoft, Friedman worked on the IIS web server. At MIT he studied Computer Science and Mathematics and graduated with a Bachelor of Science in 1999. Friedman co-founded Ximian (originally called International Gnome Support, then Helix Code) with de Icaza to develop applications and infrastructure for GNOME, the project de Icaza had started with the aim of producing a free software desktop environment. The company was later bought by Novell in 2003. At Novell, Friedman was the Chief Technology and Strategy Officer for Open Source until January 2010. There he launched the Hula Project, which began with the release of components of Novell NetMail as open source. During his tenure, Novell began an effort to migrate 6,000 employees away from Microsoft Windows to SUSE Linux and from Microsoft Office to OpenOffice.org. Friedman's final project before his departure was work on SUSE Studio. During his sabbatical, Friedman created and hosted a podcast called Hacker Medley. In May 2011, Friedman and de Icaza together founded Xamarin, and Friedman was made CEO. The company was created to offer commercial support for Mono, a project that de Icaza had initiated at Ximian to provide a free software implementation of Microsoft's .NET software stack. At Xamarin they focused on continuing to develop Mono and MonoDevelop and marketing the cross-platform Xamarin SDK to developers targeting mobile computing devices and video game consoles. In 2016, Xamarin was acquired by Microsoft. With the June 2018 announcement of Microsoft's $7.5 billion acquisition of GitHub, the companies simultaneously announced that Friedman would become GitHub's new CEO. GitHub's co-founder and then-current CEO Chris Wanstrath had been leading a search for his replacement since August 2017. Friedman assumed the role of CEO on 29 October 2018. On 3 November 2021, Friedman announced that he was stepping down as CEO to move on to his next adventures. He has been married to Stephanie Friedman (née Schatz) since 2009. References External links 1977 births American male bloggers American bloggers American computer programmers 21st-century American businesspeople GNOME developers Living people GitHub people
582146
https://en.wikipedia.org/wiki/OpenMosix
OpenMosix
openMosix was a free cluster management system that provided single-system image (SSI) capabilities, e.g. automatic work distribution among nodes. It allowed program processes (not threads) to migrate to machines in the node's network that would be able to run that process faster (process migration). It was particularly useful for running parallel applications having low to moderate input/output (I/O). It was released as a Linux kernel patch, but was also available on specialized Live CDs. openMosix development has been halted by its developers, but the LinuxPMI project is continuing development of the former openMosix code. History openMosix was originally forked from MOSIX by Moshe Bar on February 10, 2002, when MOSIX became proprietary software. openMosix was considered stable on Linux kernel 2.4.x for the x86 architecture, but porting to the Linux 2.6 kernel remained in the alpha stage. Support for the 64-bit AMD64 architecture only started with the 2.6 version. On July 15, 2007, Bar announced that the openMosix project would reach its end of life on March 1, 2008, due to the decreasing need for SSI clustering as low-cost multi-core processors increase in availability. openMosix used to be distributed as a Gentoo Linux kernel choice, but it was removed from Gentoo Linux's Portage tree in February 2007. As of March 1, 2008, openMosix read-only source code is still hosted at SourceForge. The LinuxPMI project is continuing development of the former openMosix code. ClusterKnoppix ClusterKnoppix is a specialized Linux distribution based on the Knoppix distribution, but which uses the openMosix kernel. Traditionally, clustered computing could only be achieved by setting up individual rsh keys, creating NFS shares, editing host files, setting static IPs, and applying kernel patches manually. ClusterKnoppix effectively renders most of this work unnecessary. The distribution contains an autoconfiguration system where new ClusterKnoppix-running computers attached to the network automatically join the cluster. ClusterKnoppix is a modified Knoppix distro using the openMosix kernel. See also Kerrighed OpenSSI Live CDs Linux Live CDs with openMosix include: CHAOS (a very small boot CD) dyne:bolic Quantian, a scientific distribution based on clusterKnoppix References External links openMosixWiki Original page at wayback archive ClusterKnoppix at sourceforge.net Clusterknoppix at distrowatch Load-Balancing cluster HowTo using ClusterKnoppix http://ftp.fi.muni.cz/pub/linux/clusterKnoppix/ http://ftp.linux.cz/pub/linux/clusterKnoppix/ openMosix cluster sites Cluster at National Taras Shevchenko University of Kyiv Hydra MBG Cluster Cluster computing Internet Protocol based network software Parallel computing
16771
https://en.wikipedia.org/wiki/Kernel
Kernel
Kernel may refer to: Computing Kernel (operating system), the central component of most operating systems Kernel (image processing), a matrix used for image convolution Compute kernel, in GPGPU programming Kernel method, in machine learning Kernelization, a technique for designing efficient algorithms Kernel, a routine that is executed in a vectorized loop, for example in general-purpose computing on graphics processing units KERNAL, the Commodore operating system Mathematics Objects Kernel (algebra), a general concept that includes: Kernel (linear algebra) or null space, a set of vectors mapped to the zero vector Kernel (category theory), a generalization of the kernel of a homomorphism Kernel (set theory), an equivalence relation: partition by image under a function Difference kernel, a binary equalizer: the kernel of the difference of two functions Functions Kernel (geometry), the set of points within a polygon from which the whole polygon boundary is visible Kernel (statistics), a weighting function used in kernel density estimation to estimate the probability density function of a random variable Integral kernel or kernel function, a function of two variables that defines an integral transform Heat kernel, the fundamental solution to the heat equation on a specified domain Convolution kernel Stochastic kernel, the transition function of a stochastic process Transition kernel, a generalization of a stochastic kernel Pricing kernel, the stochastic discount factor used in mathematical finance Positive-definite kernel, a generalization of a positive-definite matrix Kernel trick, in statistics Reproducing kernel Hilbert space Science Seed, inside the nut of most plants or the fruitstone of drupes, especially: Apricot kernel Corn kernel Palm kernel Wheat kernel Atomic nucleus, the center of an atom Companies Kernel (neurotechnology company), a developer of neural interfaces The Kernel Brewery, a craft brewery in London The Kernel, an Internet culture website, now part of The Daily Dot Other uses Kernel (EP), by the band Seam Kernel Fleck, a character in The Demonata series of books Brigitte Kernel (born 1959), French journalist and writer See also Colonel, a senior military officer
40511698
https://en.wikipedia.org/wiki/Red%20Hat%20Gluster%20Storage
Red Hat Gluster Storage
Red Hat Gluster Storage, formerly Red Hat Storage Server, is a computer storage product from Red Hat. It is based on open source technologies such as GlusterFS and Red Hat Enterprise Linux. The latest release, RHGS 3.1, combines Red Hat Enterprise Linux (RHEL 6 and also RHEL 7) with the latest GlusterFS community release, oVirt, and XFS File System. In April 2014, Red Hat re-branded GlusterFS-based Red Hat Storage Server to "Red Hat Gluster Storage". Description Red Hat Gluster Storage, a scale-out NAS product, uses as its basis GlusterFS, a distributed file-system. Red Hat Gluster Storage also exemplifies software-defined storage (SDS). History In June 2012, Red Hat Gluster Storage was announced as a commercially supported integration of GlusterFS with Red Hat Enterprise Linux. Releases 3.5 3.4 3.3 Release Notes 3.2 3.1 3.0 2.1 References Distributed data storage Distributed file systems Distributed file systems supported by the Linux kernel Network file systems Red Hat software Software using the GPL license Userspace file systems Virtualization-related software for Linux
3116
https://en.wikipedia.org/wiki/Accumulator%20%28computing%29
Accumulator (computing)
In a computer's central processing unit (CPU), the accumulator is a register in which intermediate arithmetic logic unit results are stored. Without a register like an accumulator, it would be necessary to write the result of each calculation (addition, multiplication, shift, etc.) to main memory, perhaps only to be read right back again for use in the next operation. Access to main memory is slower than access to a register like an accumulator because the technology used for the large main memory is slower (but cheaper) than that used for a register. Early electronic computer systems were often split into two groups, those with accumulators and those without. Modern computer systems often have multiple general-purpose registers that can operate as accumulators, and the term is no longer as common as it once was. However, to simplify their design, a number of special-purpose processors still use a single accumulator. Basic concept Mathematical operations often take place in a stepwise fashion, using the results from one operation as the input to the next. For instance, a manual calculation of a worker's weekly payroll might look something like: look up the number of hours worked from the employee's time card look up the pay rate for that employee from a table multiply the hours by the pay rate to get their basic weekly pay multiply their basic pay by a fixed percentage to account for income tax subtract that number from their basic pay to get their weekly pay after tax multiply that result by another fixed percentage to account for retirement plans subtract that number from their basic pay to get their weekly pay after all deductions A computer program carrying out the same task would follow the same basic sequence of operations, although the values being looked up would all be stored in computer memory. In early computers, the number of hours would likely be held on a punch card and the pay rate in some other form of memory, perhaps a magnetic drum. Once the multiplication is complete, the result needs to be placed somewhere. On a "drum machine" this would likely be back to the drum, an operation that takes considerable time. And then the very next operation has to read that value back in, which introduces another considerable delay. Accumulators dramatically improve performance in systems like these by providing a scratchpad area where the results of one operation can be fed to the next one for little or no performance penalty. In the example above, the basic weekly pay would be calculated and placed in the accumulator, which could then immediately be used by the income tax calculation. This removes one save and one read operation from the sequence, operations that generally took tens to hundreds of times as long as the multiplication itself. Accumulator machines An accumulator machine, also called a 1-operand machine, or a CPU with accumulator-based architecture, is a kind of CPU where, although it may have several registers, the CPU mostly stores the results of calculations in one special register, typically called "the accumulator". Almost all early computers were accumulator machines, with only the high-performance "supercomputers" having multiple registers. Then as mainframe systems gave way to microcomputers, accumulator architectures were again popular, with the MOS 6502 being a notable example. Many 8-bit microcontrollers that are still popular as of 2014, such as the PICmicro and 8051, are accumulator-based machines. Modern CPUs are typically 2-operand or 3-operand machines.
The additional operands specify which one of many general purpose registers (also called "general purpose accumulators") is used as the source and destination for calculations. These CPUs are not considered "accumulator machines". The characteristic which distinguishes one register as being the accumulator of a computer architecture is that the accumulator (if the architecture were to have one) would be used as an implicit operand for arithmetic instructions. For instance, a CPU might have an instruction like: ADD memaddress that adds the value read from memory location memaddress to the value in the accumulator, placing the result back in the accumulator. The accumulator is not identified in the instruction by a register number; it is implicit in the instruction and no other register can be specified in the instruction. Some architectures use a particular register as an accumulator in some instructions, but other instructions use register numbers for explicit operand specification. History of the computer accumulator Any system that uses a single "memory" to store the result of multiple operations can be considered an accumulator. J. Presper Eckert refers to even the earliest adding machines of Gottfried Leibniz and Blaise Pascal as accumulator-based systems. Percy Ludgate was the first to conceive a multiplier-accumulator (MAC) in his Analytical Machine of 1909. Historical convention dedicates a register to "the accumulator", an "arithmetic organ" that literally accumulates its number during a sequence of arithmetic operations: "The first part of our arithmetic organ ... should be a parallel storage organ which can receive a number and add it to the one already in it, which is also able to clear its contents and which can store what it contains. We will call such an organ an Accumulator. It is quite conventional in principle in past and present computing machines of the most varied types, e.g. desk multipliers, standard IBM counters, more modern relay machines, the ENIAC" (Goldstine and von Neumann, 1946; p. 98 in Bell and Newell 1971). A few of the instructions, for example (with some modern interpretation), are: Clear accumulator and add number from memory location X Clear accumulator and subtract number from memory location X Add number copied from memory location X to the contents of the accumulator Subtract number copied from memory location X from the contents of the accumulator Clear accumulator and shift contents of register into accumulator No convention exists regarding the names for operations from registers to accumulator and from accumulator to registers. Tradition (e.g. Donald Knuth's (1973) hypothetical MIX computer) uses two instructions called load accumulator from register/memory (e.g. "LDA r") and store accumulator to register/memory (e.g. "STA r"). Knuth's model has many other instructions as well. Notable accumulator-based computers The 1945 configuration of ENIAC had 20 accumulators, which could operate in parallel. Each one could store an eight decimal digit number and add to it (or subtract from it) a number it received. Most of IBM's early binary "scientific" computers, beginning with the vacuum tube IBM 701 in 1952, used a single 36-bit accumulator, along with a separate multiplier/quotient register to handle operations with longer results. The IBM 650, a decimal machine, had one 10 digit distributor and two ten-digit accumulators; the IBM 7070, a later, transistorized decimal machine, had three accumulators.
The IBM System/360, and Digital Equipment Corporation's PDP-6, had 16 general purpose registers, although the PDP-6 and its successor, the PDP-10, call them accumulators. The 12-bit PDP-8 was one of the first minicomputers to use accumulators, and inspired many later machines. The PDP-8 had but one accumulator. The HP 2100 and Data General Nova had 2 and 4 accumulators, respectively. The Nova was created after a proposed follow-on to the PDP-8 was rejected in favor of what would become the PDP-11. The Nova provided four accumulators, AC0-AC3, although AC2 and AC3 could also be used to provide offset addresses, tending towards more generality of usage for the registers. The PDP-11 had 8 general purpose registers, along the lines of the System/360 and PDP-10; most later CISC and RISC machines provided multiple general purpose registers. Early 4-bit and 8-bit microprocessors such as the 4004, 8008, and numerous others typically had single accumulators. The 8051 microcontroller has two, a primary accumulator and a secondary accumulator, where the second is used by instructions only when multiplying (MUL AB) or dividing (DIV AB); the former splits the 16-bit result between the two 8-bit accumulators, whereas the latter stores the quotient in the primary accumulator A and the remainder in the secondary accumulator B. As a direct descendant of the 8008, the 8080, and the 8086, the modern ubiquitous Intel x86 processors still use the primary accumulator EAX and the secondary accumulator EDX for multiplication and division of large numbers. For instance, MUL ECX will multiply the 32-bit registers ECX and EAX and split the 64-bit result between EAX and EDX. However, MUL and DIV are special cases; other arithmetic-logical instructions (ADD, SUB, CMP, AND, OR, XOR, TEST) may specify any of the eight registers EAX, ECX, EDX, EBX, ESP, EBP, ESI, EDI as the accumulator (i.e. left operand and destination). This is also supported for multiply if the upper half of the result is not required. x86 is thus a fairly general register architecture, despite being based on an accumulator model. The 64-bit extension of x86, x86-64, has been further generalized to 16 instead of 8 general registers. References Goldstine, Herman H., and von Neumann, John, "Planning and Coding of the Problems for an Electronic Computing Instrument", Rep. 1947, Institute for Advanced Study, Princeton. Reprinted on pp. 92–119 in Bell, C. Gordon and Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw-Hill Book Company, New York. A veritable treasure-trove of detailed descriptions of ancient machines, including photos. Central processing unit Digital registers
61059265
https://en.wikipedia.org/wiki/2019%20Holiday%20Bowl
2019 Holiday Bowl
The 2019 Holiday Bowl was a college football bowl game played on December 27, 2019. Kickoff was at 8:07 p.m. EST (5:07 p.m. local PST). The game was aired on FS1. It was the 42nd edition of the Holiday Bowl, and was one of the 2019–20 bowl games concluding the 2019 FBS football season. This was the third season in which the Holiday Bowl was held at SDCCU Stadium. The game was sponsored by San Diego County Credit Union and officially known as the San Diego County Credit Union Holiday Bowl. This game also marked the last Holiday Bowl with a conference tie-in with the Big Ten, as starting in 2020, the bowl would have a conference tie-in with the ACC. This was also the last game played at SDCCU Stadium, as the 2020 edition was cancelled due to the COVID-19 pandemic and the site was demolished to make way for a new stadium. Teams The game featured the USC Trojans of the Pac-12 Conference and Iowa Hawkeyes of the Big Ten Conference. This was the 10th meeting between the two programs, with USC holding a 7–2 edge in previous meetings. This was Iowa's fourth Holiday Bowl, going 2–0–1 in previous appearances in 1986 (won against SDSU Aztecs 39–38), 1987 (won against Wyoming Cowboys 20–19) and 1991 (tied against BYU Cougars 13–13 – this was the last time a postseason game was tied in the NCAA). This was USC's third Holiday Bowl, going 1–1 in previous appearances in 2014 (won against Nebraska Cornhuskers 45–42) and 2015 (lost against Wisconsin Badgers 21–23). The teams had previously met in the 2003 Orange Bowl, where USC won by a score of 38–17. USC Trojans The Trojans entered the game with an 8–4 record (7–2 in conference) and ranked 22nd in the AP Poll. The Trojans finished in second place in the Pac-12's South Division. USC was 2–3 against ranked teams. Iowa Hawkeyes The Hawkeyes entered the game ranked 19th in the AP Poll (16th in the CFP poll), with a 9–3 record (6–3 in conference). The Hawkeyes finished in third place in the Big Ten's West Division. Iowa was 1–3 against ranked teams. Game summary Statistics References External links Media guide Game statistics at statbroadcast.com Holiday Bowl Holiday Bowl USC Trojans football bowl games Iowa Hawkeyes football bowl games Holiday Bowl Holiday Bowl
1069443
https://en.wikipedia.org/wiki/Lighting%20control%20console
Lighting control console
A lighting control console (also called a lightboard, lighting board, or lighting desk) is an electronic device used in theatrical lighting design to control multiple stage lights at once. They are used throughout the entertainment industry and are normally placed at the front of house (FOH) position or in a control booth. All lighting control consoles can control dimmers which control the intensity of the lights. Many modern consoles can control intelligent lighting (lights that can move, change colors and gobo patterns), fog machines and hazers, and other special effects devices. Some consoles can also interface with other electronic performance hardware (e.g. sound boards, projectors, media servers, automated winches and motors, etc.) to improve synchronization or unify their control. Lighting consoles communicate with the dimmers and other devices in the lighting system via an electronic control protocol. The most common protocol used in the entertainment industry today is DMX512, although other protocols (e.g. 0-10 V analog lighting control) may still be found in use, and newer protocols such as ACN and DMX-512-A are evolving to meet the demands of ever increasing device sophistication. Some lighting consoles can communicate over a network via a switch to have more control over more complex systems. Common protocols for this are sACN (pronounced "streaming ACN") and Art-Net. Types of control consoles Consoles vary in size and complexity, from small preset boards to dedicated moving light consoles. The purpose of all lighting consoles, however, is the same: to consolidate control of the lights into an organized, easy-to-use system, so that the lighting designer can concentrate on producing a good show. Most consoles accept MIDI Show Control signals and commands to allow show control systems to integrate their capabilities into more complex shows. Preset boards Preset boards are the most basic lighting consoles—and also the most prevalent in smaller installations. They consist of two or more identical fader banks, called scenes. The faders (control slides) on these scenes can be manually adjusted. Each scene has the same number of channels which control the same dimmers. So that the console operator can build a scene offline or in "blind", a cross-fader or submaster is used to selectively mix or fade between the different scenes. Generally, at least with a preset board, the operator has a cue sheet for each scene, which is a diagram of the board with the faders in their positions, as determined by the lighting designer. The operator sets the faders into their positions based on the cue sheets. Typically during a cue, the operator sets the next scene. Then the operator makes the transition between the scenes using the cross-fader. Preset boards are not as prevalent since the advent of digital memory consoles, which can store scenes digitally, and are generally much less cumbersome but more expensive than preset boards. However, for small setups such as that of a DJ, they remain the board of choice for their simple-to-use interface and relative flexibility. Preset boards generally control only conventional lights; though some advanced hybrid consoles can be patched to operate intelligent lights in a roundabout way by setting the control channels of the light to channels the preset board can control. However, this is not recommended since it is a cumbersome process. Memory consoles Memory-based consoles have become very popular in almost all larger installations, particularly theatres.
This type of controller has almost completely replaced preset consoles as the controller of choice. Memory consoles are preferable in productions where scenes do not change from show to show, such as a theatre production: because scenes are designed and recorded digitally, there is less room for human error, and less time is needed between lighting cues to produce the same result. They also allow lighting cues to contain larger channel counts, due to the same time savings gained from not physically moving individual channel faders. Many memory consoles have a bank of faders. These faders can be programmed to control a single channel (a channel is a lighting designer's numerical name for a dimmer or group of dimmers) or a group of channels (known as a "submaster"). The console may also have provision to operate analogously to a manual desk for programming scenes or for live control. On more advanced consoles, faders can be used to control effects, chases (sequences of cues), and moving light effects (if the console can control moving lights). Moving light controllers Moving light controllers are another step up in sophistication from memory consoles. As well as being capable of controlling ordinary luminaires via dimmers, they provide additional controls for intelligent fixtures. On midrange controllers, these are usually provided as a section separate from the main preset and cue stack controls. They include an array of buttons allowing the operator to select the fixture or fixtures to control, and a joystick or a number of wheels or rotary encoders to control fixture attributes such as orientation (pan and tilt), focus, colour and gobos found in this type of light. Unlike a fader, which shows its value by the position of its slider, a wheel is continuously variable and provides no visual feedback for the value of a particular control. Some form of display, such as an LCD or LED readout, is therefore vital for presenting this information. The more advanced desks typically have one or more touchscreens and present a GUI that integrates all aspects of the lighting. As there is no standard way of controlling an intelligent light, an important function of this type of desk is to consolidate the various ways in which the hundreds of types of intelligent lights are controlled into a single abstract interface for the user. By integrating knowledge of different fixtures and their attributes into the lighting desk software, the detail of how an attribute such as pan or tilt is controlled for one device versus another can be hidden from the operator. This frees the operator to think in terms of what they want to achieve (e.g. pan 30 degrees clockwise) instead of how it is achieved for any given fixture (e.g. send value 137 down channel 23). Furthermore, should a lighting fixture need to be replaced with one from a different vendor that has different control sequences, no change need be apparent to the operator. For some further discussion of how intelligent fixtures are controlled, see Digital MultipleX (DMX). Personal computer-based controllers Personal computer (PC) based controllers are relatively new, and use software with a feature set similar to that found in a hardware-based console. As dimmers, automated fixtures and other standard lighting devices do not generally have standard computer interfaces, options such as DMX512 ports and fader/submaster panels connected via USB are commonplace.
This system allows a "build-to-fit" approach: the end user initially provides a PC that fits their budget and any other needs with future options to improve the system, for example, by increasing the number of DMX outputs or additional console style panels. Many lightboard vendors offer a PC software version of their consoles. Commercial lighting control software often requires a specific, and possibly expensive, hardware DMX interface. However, inexpensive (<$150) DMX PC interfaces and free or Open source software are available. Many console vendors also make a software simulator or "offline editor" for their hardware consoles, and these are often downloadable for free. The simulator can be used to pre-program a show, and the cues then loaded into the actual console. In addition, lighting visualization software is available to simulate and approximate how lighting will appear on stage, and this can be useful for programming effects and spotting obvious programming errors such as incorrect colour changes. Remote focus unit Many memory consoles have an optional Remote Focus Unit (RFU) controller that can be attached to the light board and used to control the board's functions (though usually in some limited capacity). They are usually small enough to be handheld. This is ideal in situations where moving the light board is impractical, but control is needed away from where the board is located. That is, if the light board is in a control room that is located far from the fixtures, such as a catwalk, an RFU can be attached and an electrician or the lighting designer can bring it to a location which is close to the lights. Some of the newer and more advanced boards have RFUs that can be connected through USB or even wirelessly. Various manufacturers offer software for devices such as Android and iPhones that cause the devices to act as remote controllers for their consoles. Also, independent software developers have released applications that can send Art-Net packets from an iPhone, thus enabling an iPhone to serve as a fully featured console when used in conjunction with an Art-Net to DMX converter or Art-Net compatible luminaries and dimmers. An example of this is ETC's (electronic theater controls) app called iRFR for Apple devices or aRFR for Android devices. The Controller Interface Transport Protocol, or CITP, is a network protocol used between visualizers, lighting control consoles and media servers to transport non-show critical information during pre-production. The protocol is used for a number of purposes including SDMX, browsing media and thumbnails, and streaming media among different devices. See also Avolites Compulite Frederick Bentham Genlyte Group George Izenour Light board operator, the person working the lightboard. Lighting control systems for a building or residence. Non-dim circuit Strand Lighting References External links Stage lighting Stagecraft software
2854731
https://en.wikipedia.org/wiki/University%20of%20Kelaniya
University of Kelaniya
The University of Kelaniya (, , abbreviated UOK) is a public university in Sri Lanka. Just outside the municipal limits of Colombo, in the city of Kelaniya, the university has two major campuses, seven locations, six faculties and four institutions. History The University of Kelaniya has its origin in the historic Vidyalankara Pirivena, founded in 1875 by Ratmalane Sri Dharmaloka Thera as a centre of learning for Buddhist monks. With the establishment of modern universities in Sri Lanka in the 1940s and 1950s, the Vidyalankara Pirivena became the Vidyalankara University in 1959, later the Vidyalankara Campus of the University of Ceylon in 1972, and, ultimately, the University of Kelaniya in 1978. The University of Kelaniya has pioneered a number of new developments in higher education. It was one of the first universities to begin teaching science in Sinhala and the first to restructure the traditional Arts Faculty into three faculties: Humanities, Social Sciences, and Commerce and Management. It has several departments not generally found in the Sri Lankan University system and some Kelaniya innovations have been adopted subsequently by other universities. These include the Departments of Industrial Management and Microbiology in the Faculty of Science; Departments of Linguistics, Fine Arts, Modern Languages and Hindi in the Faculty of Humanities; and Mass Communication and Library and Information Sciences in the Faculty of Social Sciences. Symbols Coat of arms The coat of arms of the University of Kelaniya is circular and consists of three concentric bands, the outermost of which contains the name of the University in Sinhala and English. The motto of the institution, "Pannaya Parisujjhati" ("Self-purification is by insight"), is a quotation from the Alavaka-sutta of the Samyutta Nikaya, given in Sinhala characters in the same band. The middle band containing a creeper design encloses the innermost, which shows a full-blown lotus, signifying purity. These two designs are reminiscent of those occurring in the well-known moonstones at Anuradhapura. Faculties Faculty of Science The Faculty of Science started functioning in October 1967 with Prof. Charles Dahanayake as the Dean of Science. The intake of the first batch of students was 57. Formal approval for the Faculty was given by the Minister of Education in 1968. The science faculty was housed in the main building known as the "Science Block". Due to the continued increase in the student intake from year to year, a new lecture theatre complex and an auditorium were constructed in 1992, which enabled the intake of students to be increased to 450 in 2003. A new laboratory complex for the Chemistry Department and three buildings for the Departments for Industrial Management, Microbiology and Zoology have now been completed. The science faculty was the first among the Sri Lankan universities to initiate the changeover from the traditional three subject (General) degree with end of year examinations to a more flexible course unit system, i.e., a modularized credit-based system in a two-semester academic year with the end of semester examinations. It offers a variety of course pathways designed to provide flexibility in the choice of subjects. Under this system students have the option of reading for a traditional three subject degree or for a degree consisting of two principal subjects and a selection of course units drawn from other subject areas. The BSc (Special) degree courses, begun in 1974, adopted the course unit system in 1986. 
Currently, Prof. Sudath Kalingamudali is the dean of the faculty of Science. The faculty consists of eight departments. Department of Botany Department of Chemistry Department of Industrial Management Department of Mathematics Department of Microbiology Department of Physics and Electronics Department of Statistics & Computer Science Department of Zoology & Environmental Management Faculty of Medicine The Faculty of Medicine of the University of Kelaniya is on a campus at Ragama. It is one of eight medical schools in Sri Lanka. The faculty began classes with the admission of 120 students in September 1991 after the government, in 1989, nationalised the North Colombo Medical College (NCMC), the first privately funded medical school in Sri Lanka established in 1980. The first batch of students, of the Faculty of Medicine, University of Kelaniya completed their five-year course and graduated MBBS in September 1996. Prof. Carlo Fonseka was the first dean of the faculty. Subsequent deans were Prof. Janaka de Silva, Prof. Rajitha Wickremasinghe and Prof. Nilanthi de Silva. The current dean is Prof. Prasantha S. Wijesinghe. The faculty now has over 1,000 medical students. This number includes international students, mainly from other South Asian countries, who have been admitted on a fee-levying basis. The faculty also welcomes students for elective appointments. Students from medical schools in Europe, USA and Australia have spent their elective periods with the university. In addition to the MBBS course, it conducts a BSc programme in speech and hearing sciences. There is a permanent academic staff of over 120 and in addition 40 temporary academic staff and over 60 visiting staff that includes consultants who are based in the affiliated teaching hospitals. Faculty of Social Science The Faculty of Social Sciences, in student population, is the largest faculty in the University of Kelaniya. Department of Library and Information Science Faculty of Humanities The faculty includes disciplines associated with Buddhist and Asian cultures, such as Pali and Buddhist Studies, Sinhala, Tamil, Sanskrit, Hindi, Japanese and Chinese, while teaching courses in modern European languages such as English, French, German and Russian. Faculty of Commerce and Management The faculty consists of four departments: Department of Commerce Department of Accountancy Department of Marketing Management Department of Human Resource Management Department of Finance The Department of Commerce was the first to be established and contribute graduates to the industry. BCOM degree is unique to the Department of Commerce. Department provides special degree in Financial Management(BCOM (special) Degree in Financial management), Business technology(BCOM (special) Degree in Business Technology), Entrepreneurship(BCOM (special) Degree in Entrepreneurship) and Commerce(BCOM (special) Degree in Commerce). Faculty of Computing and Technology The University of Kelaniya established its 7th Faculty - the Faculty of Computing and Technology (FCT) on 30 December 2015 and the Faculty commenced its operations on 18 January 2016. The Faculty will offer Postgraduate Programmes in the areas of Computer Science, Software Engineering, Information Technology and Engineering Technology. The Master of Information Technology in Education programme is currently being developed for the Ministry of Education to train ICT teachers in the national education system. The faculty will conduct research in diverse fields of significant impact. 
The research enterprise at the Faculty of Computing and Technology will expand from fundamental Computer Science research to the development of new technologies with applications to the industry and society as a whole. The Faculty is planning to propose the following Research and Development Centres: Centre for Nanotechnology Centre for eLearning Language Engineering Research Centre Centre for Geo-informatics Centre for Computational Mathematics Centre for Data Science Centre for Cyber Security and Digital Forensics Faculty of Graduate Studies Twenty-three postgraduate degree programmes and six postgraduate diploma programmes are coordinated by the Faculty of Graduates Studies. Postgraduate diplomas Regional Planning Information Technology Industrial and Business Management Environmental Management Mathematics Human Resource Management Postgraduate degrees M.A. in Sinhala M.A. in Linguistics M.A. in Drama and Theatre M. S. Sc. in Mass Communication M. S. Sc. in Sociology M. S. Sc. in Geography M. S. Sc. in Library Science M. S. Sc. in Economics M. S. Sc. in Political Science M. S. Sc. in History M. S. Sc. in Philosophy MSc in Applied Microbiology MSc in Food and Nutrition MSc in Aquaculture and Fisheries Management MSc in Biodiversity and Integrated Environmental Management MSc in Industrial and Environment Chemistry MSc in Management and Information Technology MSc in Computer Science Master of Commerce Master of Business Management Master of Philosophy Doctor of Philosophy Doctor of Medicine Libraries and ICT services University Library The University of Kelaniya Library, has a history spanning over 60 years. In parallel to Vidyalankara Pirivena became Vidyalankara University in 1959, the Library was started with a collection of books belonging to Vidyalankara Pirivena. Subsequently, it was shifted to the current premises in 1977. Later, as this old building was not sufficient enough to accommodate its growing collection and student population, the newly built four-storied building was added to the Library system in 2013. Presently, the Library owns a collection of over 245,000 books and monographs relevant to various study programmes and research activities in the University. Furthermore, the Library has subscribed to EBSCO HOST, JSTOR, Emerald, Oxford University Press, Taylor and Francis databases with access facilities for more than 20,000 academic e-journals and more than 100,000 e-books. Professional librarians have been appointed for each faculty to fulfil the informational requirements and Information Literacy training needs of the users. The services provided by the Subject Liaison Librarians are : Reference Sources and quick reference answers Information Literacy Courses Plagiarism Checking and consultations for avoiding plagiarism Library Orientation/induction Programmes including library tours Inter Library Loan Service Document Delivery Service Writing Help Services Subject Guide Services Literature Review Services Remote access via VPN and Shibboleth Services Special Software Training for Researchers e.g. Turnitin, Urkund, LaTeX, SPSS, Grammarly, Mendeley etc. Citation analysis service Information and Communication Technology Centre The ICT Centre provides support services in IT-related teaching, research, internet services, staff development and hardware maintenance for the entire university. 
It conducts a computer literacy course open to all students and advanced courses in Visual Basic, web designing and hardware technology for the students who have successfully completed the computer literacy course. On-the-job training in the IT arena is provided for the young people just out of the university who work in the ICT Centre. The maintenance unit provides network, hardware and software support to the clients in the university, and IT for academic departments and administrative branches of the university. Video filming of special events of the university since 2005 is an additional service. Currently, the ICT Centre hosts web and mail services for all students and staff. They have a university-wide wifi network called "Kelani-WiFi". The ICT Centre is the first institutional Sri Lankan member of Eduroam. Historically related institutions The Vidyodaya University was created at the same time as the Vidyalankara University. Today Vidyodaya University is known as the University of Sri Jayewardenepura. Students Societies and Clubs Sports Club Gavel Club Art Society Inventors Club AIESEC Haritha Kawaya Buddhist Students' Society Leo club Humane Society CSM Centre for Gender Studies(CGS) Centre for Gender studies was founded by Prof. Maitree Wickremasinghe with the objective of initiating and conducting research on Gender issues and conducting educational programs on gender studies. Currently, Dr Sagarika Kannangara acts as the director of this centre. Faculty staff Vice chancellors Thilak Rathnakara (1978–1982) S. L. Kekulawala M. P. Perera (1983–1985) I. Balasooriya (1985–1987) M. M. J. Marasinghe K. Dharmasena H. H. Costa (1994–1997) Senaka Bandaranaike (1997–1999) Thilakaratne Kapugamage (1999–2005) M. J. S. Wijeyaratne (2005–2008) Sarath Amunugama (2008–2012) Sunanda Madduma Bandara (2012–2017) Semasinghe Dissanayake (2017–2020) Nilanthi de Silva (2020 - To date) Notable alumni Janaka de Silva Nalin de Silva Saman Gunatilake Jagdish Kashyap Karunasena Kodituwakku Sunanda Mahendra Kollupitiye Mahinda Sangharakkhitha Thera Polwatte Buddhadatta Mahanayake Thera Witiyala Seewalie Thera Jagath Weerasinghe Maitree Wickramasinghe Harischandra Wijayatunga Ashin Nandamalabhivamsa Anura Kumara Dissanayaka Sevvandi Jayakody Facilities Scholarships and Bursaries Hostel Facilities for Students Hostel facilities are provided for selected numbers of students. Hostels are located inside and outside the university premises. Kiriwaththuduwe Sri Prangnasara Hostel for Clergy C.W.W. Kannangara Boys' Hostel Yakkaduwe Pannarama Boys' Hostel Bandaranayake Girls' Hostel Sangamitta Girls' Hostel Viharamahadevi Girls' Hostel Ediriweera Sarachchandra Girls' Hostel E.W. Adikarama Girls Hostel Gunapala Malalasekara Girls' Hostel Hemachandra Rai Girls' Hostel Soma Guna Mahal (External) Girls' Hostel Bulugaha Junction (External) Girls' Hostel See also List of split up universities References External links 1959 establishments in Ceylon Educational institutions established in 1959 Statutory boards of Sri Lanka Education in Kelaniya Universities and colleges in Western Province, Sri Lanka Universities in Sri Lanka
58733407
https://en.wikipedia.org/wiki/2001%20Troy%20State%20Trojans%20football%20team
2001 Troy State Trojans football team
The 2001 Troy State Trojans football team represented Troy State University in the 2001 NCAA Division I-A football season. The Trojans played their home games at Veterans Memorial Stadium in Troy, Alabama and competed as a Division I-A Independent. The 2001 season was Troy State's transitional year of moving to Division I-A. Schedule References Troy State Troy Trojans football seasons Troy State Trojans football
9728830
https://en.wikipedia.org/wiki/AIPS%2B%2B
AIPS++
Astronomical Image Processing System ++ is a software package whose development was started in the early nineties, written almost entirely in C++, and whose initial goal was to replace the then already aging AIPS software. It has now been reborn as CASA and is the basis of the image processing systems for several next-generation radio telescopes including ALMA, eVLA, and ASKAP. Early history In 1988–89, the Director of the National Radio Astronomy Observatory (NRAO), Paul Vanden Bout, convened an independent review panel, the Software Advisory Group (SWAG), to produce recommendations for the future of processing software for NRAO. SWAG was chaired by Tim Cornwell, and its members included Geoff Croes, Gareth Hunt, Jan Noordam, and Ray Norris. SWAG's recommendations were that all data processing in NRAO should be coordinated by a new Assistant Director for Computing; that AIPS should be re-designed and re-implemented following certain general guidelines; and that an equal amount of attention should be devoted to single-dish software. In late 1990 the NRAO Director accepted the recommendations, and the task of defining the new package began. The project was originally an effort by several astronomical institutes working together in a consortium: the Australia Telescope National Facility (ATNF), the Jodrell Bank Observatory (JBO) and the MERLIN/VLBI National Facility, the Berkeley-Illinois-Maryland Association (BIMA), the National Radio Astronomy Observatory (NRAO), and the Netherlands Foundation for Research in Astronomy (ASTRON). Features AIPS++ provides facilities for calibration, editing, image formation, image enhancement, and analysis of images and other astronomical data. A major focus is on reduction of data from both single-dish and aperture synthesis radio telescopes. Although the tools provided in AIPS++ are mainly designed for processing data from a variety of radio telescopes, the package is expected to also be useful for processing other types of astronomical data and images. However, the reduction of most data from imaging array detectors is performed using IRAF instead. At the lower levels AIPS++ is structured as a library of tools, in contrast to the more monolithic applications of AIPS. In general, the counterpart of an AIPS task is an AIPS++ tool function, although the toolkit structure of AIPS++ generally means that these functions are more fine-grained, except for the more integrated tools at the higher levels (such as map). The counterparts of AIPS adverbs are the parameters of AIPS++ tool functions. The command-line interpreter in AIPS is POPS, while the counterpart in AIPS++ is Glish. The code used as standard in most astronomical institutes is still AIPS, as AIPS++ is usually not yet considered sufficiently reliable and usable. Like most research astronomy software, it is available for all major operating systems except Microsoft Windows. AIPS++/CASA On August 25, 2004, the AIPS++ code base was reorganized into a more modular structure; since then it has been referred to as CASA ("Common Astronomy Software Applications"). CASA consists of a suite of C++ libraries derived from the original AIPS++ tasks. The Glish scripting system is being replaced by Python bindings, a system known as "CASApy". The CASA software is no longer developed by the consortium, but mainly within NRAO for use on the Atacama Large Millimeter Array. The core of the old AIPS++ libraries, now known as CasaCore, is still being maintained and developed by the original consortium members.
A separate Python interface is available known as python-casacore (formerly "Pyrap"). Python-casacore is mainly developed within ATNF and ASTRON for replacing Glish for the Australia Telescope Compact Array, WSRT and LOFAR. CASA also uses these core libraries but not python-casacore. References AIPS++ Mission Statement The AIPS++ System Manual describes the "nuts & bolts" of the AIPS++ System; how it is put together and how it is intended to run. Software Dictionary AIPS/AIPS++ Croes, G., 1993, "On AIPS++ A New Astronomical Image Processing System" External links AIPS++ homepage CASA homepage Usenet newsgroup alt.sci.astro.aips CasaCore and PyRap Computing at the ATNF Radio astronomy Interferometry C++ software Astronomical imaging Astronomy software
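As a small illustration of the python-casacore interface mentioned above, the following sketch opens a casacore table and reads one column. It assumes python-casacore is installed and that a MeasurementSet named "example.ms" exists; the file name is purely illustrative.

# Minimal python-casacore sketch (assumes the package is installed).
from casacore.tables import table

t = table("example.ms")                        # open a casacore table, read-only by default
print(t.nrows())                               # number of rows in the table
print(t.colnames())                            # column names, e.g. DATA, UVW, ANTENNA1
data = t.getcol("DATA", startrow=0, nrow=10)   # read the first ten rows of one column
t.close()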
4928985
https://en.wikipedia.org/wiki/Cryptographic%20primitive
Cryptographic primitive
Cryptographic primitives are well-established, low-level cryptographic algorithms that are frequently used to build cryptographic protocols for computer security systems. These routines include, but are not limited to, one-way hash functions and encryption functions. Rationale When creating cryptographic systems, designers use cryptographic primitives as their most basic building blocks. Because of this, cryptographic primitives are designed to do one very specific task in a precisely defined and highly reliable fashion. Since cryptographic primitives are used as building blocks, they must be very reliable, i.e. perform according to their specification. For example, if an encryption routine claims to be breakable only with a certain number of computer operations, and it is broken with significantly fewer operations, then that cryptographic primitive has failed. If a cryptographic primitive is found to fail, almost every protocol that uses it becomes vulnerable. Since creating cryptographic routines is very hard, and testing them to be reliable takes a long time, it is essentially never sensible (nor secure) to design a new cryptographic primitive to suit the needs of a new cryptographic system. The reasons include: The designer might not be competent in the mathematical and practical considerations involved in cryptographic primitives. Designing a new cryptographic primitive is very time-consuming and very error-prone, even for experts in the field. Since algorithms in this field are not only required to be designed well but also need to be tested well by the cryptologist community, even if a cryptographic routine looks good from a design point of view it might still contain errors. Successfully withstanding such scrutiny gives some confidence (in fact, so far, the only confidence) that the algorithm is indeed secure enough to use; security proofs for cryptographic primitives are generally not available. Cryptographic primitives are similar in some ways to programming languages. A computer programmer rarely invents a new programming language while writing a new program; instead, they will use one of the already established programming languages to program in. Cryptographic primitives are one of the building blocks of every crypto system, e.g., TLS, SSL, SSH, etc. Crypto system designers, not being in a position to definitively prove their security, must take the primitives they use as secure. Choosing the best primitive available for use in a protocol usually provides the best available security. However, compositional weaknesses are possible in any crypto system and it is the responsibility of the designer(s) to avoid them. Combining cryptographic primitives Cryptographic primitives, on their own, are quite limited. They cannot be considered, properly, to be a cryptographic system. For instance, a bare encryption algorithm will provide no authentication mechanism, nor any explicit message integrity checking. Only when combined in security protocols can more than one security requirement be addressed. For example, to transmit a message that is not only encrypted but also protected from tampering (i.e. it is confidential and integrity-protected), an encryption routine such as DES and a hash routine such as SHA-1 can be used in combination. If the attacker does not know the encryption key, they cannot modify the message in such a way that the message digest value(s) would remain valid. Combining cryptographic primitives to make a security protocol is itself an entire specialization.
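As a rough illustration of this combination idea (not of the DES/SHA-1 pairing itself), the sketch below shows the general shape of pairing an encryption step with a hash-based integrity check in Python. The XOR "cipher" is a deliberately insecure stand-in for a real primitive such as DES or AES, and the key handling is simplified; only the structure of combining two primitives is the point.

import hashlib, hmac

def toy_encrypt(key, message):
    # Stand-in for a real cipher; XOR with a repeating key is NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))

def protect(enc_key, mac_key, message):
    ciphertext = toy_encrypt(enc_key, message)
    # Integrity tag computed over the ciphertext with HMAC-SHA-256.
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext, tag

def verify_and_decrypt(enc_key, mac_key, ciphertext, tag):
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("integrity check failed")   # the message was tampered with
    return toy_encrypt(enc_key, ciphertext)          # XOR is its own inverse

Getting even this structure right in a real protocol (key separation, the ordering of encryption and authentication, and so on) is part of the specialization discussed next.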
Most exploitable errors (i.e., insecurities in crypto systems) are due not to design errors in the primitives (assuming always that they were chosen with care), but to the way they are used, i.e. bad protocol design and buggy or not careful enough implementation. Mathematical analysis of protocols is, at the time of this writing, not mature. There are some basic properties that can be verified with automated methods, such as BAN logic. There are even methods for full verification (e.g. the SPI calculus) but they are extremely cumbersome and cannot be automated. Protocol design is an art requiring deep knowledge and much practice; even then mistakes are common. An illustrative example, for a real system, can be seen on the OpenSSL vulnerability news page here. Commonly used primitives One-way hash function, sometimes also called as one-way compression function—compute a reduced hash value for a message (e.g., SHA-256) Symmetric key cryptography—compute a ciphertext decodable with the same key used to encode (e.g., AES) Public-key cryptography—compute a ciphertext decodable with a different key used to encode (e.g., RSA) Digital signatures—confirm the author of a message Mix network—pool communications from many users to anonymize what came from whom Private information retrieval—get database information without server knowing which item was requested Commitment scheme—allows one to commit to a chosen value while keeping it hidden to others, with the ability to reveal it later Cryptographically secure pseudorandom number generator See also :Category:Cryptographic primitives – a list of cryptographic primitives Cryptographic agility References Levente Buttyán, István Vajda : Kriptográfia és alkalmazásai (Cryptography and its applications), Typotex 2004, Menezes, Alfred J : Handbook of applied cryptography, CRC Press, , October 1996, 816 pages. Crypto101 is an introductory course on cryptography, freely available for programmers of all ages and skill levels.
61744055
https://en.wikipedia.org/wiki/The%20Sumerian%20Game
The Sumerian Game
The Sumerian Game is a text-based strategy video game of land and resource management. It was developed as part of a joint research project between the Board of Cooperative Educational Services of Westchester County, New York and IBM in 1964–1966 for investigation of the use of computer-based simulations in schools. It was designed by Mabel Addis, then a fourth-grade teacher, and programmed by William McKay for the IBM 7090 time-shared mainframe computer. The first version of the game was played by a group of 30 sixth-grade students in 1964, and a revised version featuring refocused gameplay and added narrative and audiovisual elements was played by a second group of students in 1966. The game is composed of three segments, representing the reigns of three successive rulers of the city of Lagash in Sumer around 3500 BC. In each segment the game asks the players how to allocate workers and grain over a series of rounds while accommodating the effects of their prior decisions, random disasters, and technological innovations, with each segment adding complexity. At the conclusion of the project the game was abandoned; a description of it was given to Doug Dyment in 1968, however, and he recreated a version of the first segment of the game as King of Sumeria. This game was expanded on in 1971 by David H. Ahl as Hamurabi, which in turn led to many early strategy and city-building games. The Sumerian Game has been described as the first video game with a narrative, as well as the first edutainment game. As a result, Mabel Addis has been called the first female video game designer and the first writer for a video game. Gameplay The Sumerian Game is a largely text-based strategy video game centered on resource management. The game, set around 3500 BC, has players act as three successive rulers of the city of Lagash in Sumer—Luduga I, II, and III—over three segments of increasingly complex economic simulation. Two versions of the game were created, both intended for play by a classroom of students with a single person inputting commands into a teleprinter, which would output responses from the mainframe computer. The second version had a stronger narrative component to the game's text and interspersed the game with taped audio lectures, presented as the discussions of the ruler's court of advisors, corresponding with images on a slide projector. In both versions, the player enters numbers in response to questions posed by the game. In the first segment of the game, the player plays a series of rounds—limited to 30 in the second version of the game—in which they are given information about the current population, acres of farmland, number of farmers, grain harvested that round, and stored grain. The rounds start in 3500 BC, and are meant to represent seasons. The player then selects how much grain will be used as food, seed for planting, and storage. After making their selections, the game calculates the effect of the player's choices on the population for the next round. Additionally, after each round, the game selects whether to report several events. The city may be struck with a random disaster, such as a fire or flood, which destroys a percentage of the city's population and harvest. Independent of disasters, a percentage of the stored grain may also be lost to rot and rats. 
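The original program has not survived, so the following minimal Python sketch only illustrates the shape of one such round as described above; the feeding formula, disaster probability and crop yield are invented for illustration and are not taken from Addis's design.

import random

def play_round(population, stored_grain, harvest):
    # The player allocates this round's grain between food, seed and storage.
    food = int(input("Grain for food? "))
    seed = int(input("Grain for planting? "))
    stored_grain = stored_grain + harvest - food - seed

    # The population responds to how well it was fed (invented formula).
    fed_fraction = min(1.0, food / (population * 2))
    population = int(population * (0.8 + 0.4 * fed_fraction))

    # A random disaster may destroy part of the city and its grain (invented odds).
    if random.random() < 0.15:
        loss = random.uniform(0.1, 0.3)
        population = int(population * (1 - loss))
        stored_grain = int(stored_grain * (1 - loss))

    # A percentage of the stored grain is lost to rot and rats.
    stored_grain = int(stored_grain * 0.95)

    # Next season's harvest depends on the grain planted (invented yield).
    harvest = seed * 3
    return population, stored_grain, harvest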
Additionally, the game may report a technological innovation which has a positive effect on subsequent rounds, such as reducing the amount of grain that may spoil or reducing the number of farmers needed for each acre of land. Several of these innovations require the player to have first "exhibited some good judgement", such as by adequately feeding their population for multiple rounds. In the second and third segments of the game, the city's population and grain are adjusted to preset levels, regardless of the player's performance in the prior segment, to represent that some time has passed since the decisions of the prior ruler. The player then again plays through a series of rounds. In the second segment, the player can also apply workers towards the development of several crafts—which in turn can result in innovations—while the third increases the complexity of the simulation by adding trade and expansion choices. In the original version of the game, the second and third segments were expansions on the first, requiring the same choices around grain in addition to the new choices. In the second version of the game, the second segment was refocused. The rounds were limited to 10 and the player was no longer required to make choices around grain allocation, instead only making decisions about applying workers to farming or crafts. The third segment was not changed, though plans were made to either also remove the grain allocation choices and add more choices around trade, colonization, and war, or else to instead make the third segment a combination of the first two segments. Development In 1962, the Board of Cooperative Educational Services (BOCES) of Westchester County, New York, began a series of discussions with researchers at IBM about the use of computers in education research. The BOCES system had been established in New York to help rural school districts pool resources, and the Westchester BOCES Superintendent Dr. Noble Gividen believed that computers, along with computer simulation games like the Carnegie Tech Management Game being used in colleges, could be used to improve educational outcomes at small districts in Westchester. BOCES and IBM held a joint workshop, led by Bruse Moncreiff and James Dinneen of IBM along with Dr. Richard Wing, curriculum research coordinator for BOCES, in June 1962, involving ten teachers from the area to discuss ways of using simulations in classroom curricula. Based on the result of the workshop, BOCES applied for a grant from the U.S. Office of Education that December to continue to study the concept for 18 months; the grant awarded was designated "Cooperative Research Project 1948". The project began in February 1963 under the direction of Dr. Wing, who asked for proposals from nine teachers. One of the teachers, Mabel Addis, proposed an expansion of an idea Moncreiff had raised at the summer workshop: an economic model of a civilization, intended to teach basic economic theory. Moncreiff had been inspired by prior research, especially the paper "Teaching through Participation in Micro-simulations of Social Organization" by Richard Neier, and by the board game Monopoly, and wanted to use the ancient Sumerian civilization as the setting to counter what he saw as a trend in school curricula to ignore pre-Greek civilizations, despite evidence of their importance to early history.
Addis, a fourth-grade teacher at Katonah Elementary School, agreed with Moncreiff about the undervaluation of pre-Greek civilizations in schools, and had studied Mesopotamian civilizations in college. Her proposal was approved, and she began work with IBM programmer William McKay to develop the game. The game itself, The Sumerian Game, was designed and written by Addis and programmed by McKay in the Fortran programming language for an IBM 7090 time-shared mainframe computer. Like many early mainframe games, it was only run on a single computer. Commands were entered and results printed with an IBM 1050 teleprinter. The researchers ran one play session with 30 sixth-grade students. Project 1948 concluded in August 1964, and a report on its outcome was given to the Office of Education in 1965, listing the eight "subprojects" that had been proposed in it, of which The Sumerian Game was the only game. Two weeks after its conclusion a new project was started as Cooperative Research Project 2148, with two further grants; it focused on continuing the first project's progress with the game and ran through 1967. This project created three games: The Sierra Leone Game, The Free Enterprise Game, and an expansion of The Sumerian Game. Addis rewrote and expanded the game in the summer of 1966 by adding a stronger narrative flow to how the advisor tells the player about the events of the city, refocusing the second segment of the game on the new concepts introduced, and interspersing the game with taped audio lectures corresponding with images on a slide projector. These have been described as the first cutscenes. The researchers conducted a playtest of the new version of the game with another 30 sixth-grade students the following school year, and produced a report in 1967. Legacy Following the creation of the second version of The Sumerian Game, the first segment of the game was reprogrammed by Jimmer Leonard, a graduate student in Social Relations at Johns Hopkins University, for the IBM 1401, to be used at demonstrations at a terminal in the BOCES Research Center in Yorktown Heights, New York. The project was mentioned in Time and Life magazines in 1966. After the conclusion of the second project in 1967, however, BOCES did not receive funds to extend the project further, and as per the agreement with IBM all three games became the property of the company. IBM did not attempt to use them as part of any further educational initiatives. In 1968, however, Digital Equipment Corporation (DEC) employee Doug Dyment gave a talk about computers in education at the University of Alberta, and after the talk a woman who had once seen The Sumerian Game described it to him. Dyment decided to recreate the game as an early program for the FOCAL programming language, recently developed at DEC, and programmed it for a DEC PDP-8 minicomputer. He named the result King of Sumeria. Needing the game to run in the smallest memory configuration available for the computer, he included only the first segment of the game. He also chose to rename the ruler to the more famous Babylonian king Hammurabi, misspelled as "Hamurabi". Dyment's game, sometimes retitled The Sumer Game, proved popular in the programming community: Jerry Pournelle recalled in 1989 that "half the people I know wrote a Hammurabi program back in the 1970s; for many, it was the first program they'd ever written in their lives". Around 1971, DEC employee David H. Ahl wrote a version of The Sumer Game in the BASIC programming language.
Unlike FOCAL, BASIC was run not just on mainframe computers and minicomputers, but also on personal computers, then termed microcomputers, making it a much more popular language. In 1973, Ahl published BASIC Computer Games, a best-selling book of games written in BASIC, which included his version of The Sumer Game. The expanded version was renamed Hamurabi and added an end-of-game performance appraisal. In addition to the multiple versions of Hamurabi, several simulation games have been created as expansions of the core game. These include Kingdom (1974) by Lee Schneider and Todd Voros, which was then expanded to Dukedom (1976). Other derivations include King (1978) by James A. Storer, and Santa Paravia en Fiumaccio (1978) by George Blank; Santa Paravia added the concept of city building management to the basic structure of Hamurabi, making The Sumerian Game an antecedent to the city-building genre as well as an early strategy game. As The Sumerian Game was created during the early history of video games as part of research into new uses for computer simulations, it pioneered several developments in the medium. In addition to being a prototype of the strategy and city-building genres, The Sumerian Game has been described as the first video game with a narrative, as well as the first edutainment game. As a result, Mabel Addis has been called the first female video game designer and the first writer for a video game. The original code for The Sumerian Game is lost, but the projector slides and three printouts of individual game sessions were found in 2012 and donated to The Strong National Museum of Play, where they are kept in the Brian Sutton-Smith Library and Archives of Play. References External links 1964 video games Early history of video games Educational video games Fiction set in the 4th millennium BC Mainframe games Strategy video games Sumer in fiction Video games developed in the United States Video games set in antiquity Video games with textual graphics
359263
https://en.wikipedia.org/wiki/The%20SWORD%20Project
The SWORD Project
The SWORD Project is the CrossWire Bible Society's free software project. Its purpose is to create cross-platform open-source tools—covered by the GNU General Public License—that allow programmers and Bible societies to write new Bible software more quickly and easily. Overview The core of The SWORD Project is a cross-platform library written in C++, providing access, search functions and other utilities to a growing collection of over 200 texts in over 50 languages. Any software based on their API can use this collection. JSword is a separate implementation, written in Java, which reproduces most of the API features of the C++ API and supports most SWORD data content. The project is one of the primary implementers of and contributors to the Open Scripture Information Standard (OSIS), a standardized XML language for the encoding of scripture. The software is also capable of utilizing certain resources encoded in using the Text Encoding Initiative (TEI) format and maintains deprecated support for Theological Markup Language (ThML) and General Bible Format (GBF). Bible study front-end applications A variety of front ends based on The SWORD Project are available: And Bible And Bible, based on JSword, is an Android application. Alkitab Bible Study Alkitab Bible Study, based on JSword, is a multiplatform application with binaries available for Windows, Linux, and OS X. It has been described as "an improved Windows front-end for JSword". The Bible Tool The Bible Tool is a web front end to SWORD. One instance of the tool is hosted at CrossWire's own site. BibleDesktop BibleDesktop is built on JSword featuring binaries for Windows (98SE and later), OS X, and Linux (and other Unix-like OSes). BibleTime BibleTime is a C++ SWORD front end using the Qt GUI toolkit, with binaries for Linux, Windows, FreeBSD, and OS X. BibleTime Mini BibleTime Mini is a multiplatform application for Android, BlackBerry, jailbroken iOS, MeeGo, Symbian, and Windows Mobile. BPBible BPBible is a SWORD front end written in Python, which supports Linux and Windows. A notable feature is that a PortableApps version of BPBible is available. Eloquent Eloquent (formerly MacSword) is a free open-source application for research and study of the Bible, developed specifically for Macintosh computers running macOS. It is a native OS X app built in Objective-C. Eloquent allows users to read and browse different bible translations in many languages, devotionals, commentaries, dictionaries and lexicons. It also supports searching and advanced features such as services enabling users to access the Bible within other application programs. Eloquent is one of About.com's top 10 Bible programs. Version 2.3.5 of Eloquent continues with the Snow Leopard development. However, starting with the version 2.4.0, Eloquent has started with the OS X Lion testing, implementing features that are specific only to the Lion operating system. Ezra Project Ezra Project is an open source bible study tool focussing on topical study based on keywords/tags. It is based on Electron and works on Windows, Linux, and OS X. FireBible FireBible is a Firefox extension that works on Windows, Linux, and OS X. PocketSword PocketSword is an iOS front end supporting iPad, iPhone, and iPod Touch available in Apple's App Store. STEPBible STEPBible (STEP - Scripture Tools for Every Person) is an initiative by Tyndale House, Cambridge to build an online Bible study tool based on The SWORD Project. 
The first public release (Beta launch) of the software as an online platform was on 25 July 2013. The desktop version runs in any browser on the desktop computer. Additionally, the STEPBible app can be installed on an iOS device such as phones or tablets running iOS, or Android, and on a Chrome book. The SWORD Project for Windows The SWORD Project for Windows (known internally as BibleCS) is a Windows application built in C++Builder. Xiphos Xiphos (formerly GnomeSword) is a C++ SWORD front end using GTK+, with binaries available for Linux, UNIX, and Windows (2000 and later). It has been described as "a top-of-the-line Bible study program." xulsword xulsword is a XUL-based front end for Windows and Linux. Portable versions of the application, intended to be run from a USB stick, are also available. Others Additional front ends to SWORD exist to support a number of legacy and niche platforms, including: diatheke (CLI & CGI) SwordReader (Windows Mobile) Rapier (Maemo) Reviews It is one of About.com's top 10 bible programs. Bible Software Review, Review of MacSword version 1.2, June 13, 2005. Foster Tribe SwordBible Review November 25, 2008 Michael Hansen, Studying the Bible for Free, Stimulus, Volume 12 Number 3, August 2004, page 33 - 38 See also Biblical software Go Bible – a free Bible viewer for the Java ME platform Palm Bible Plus – a free Bible viewer for Palm OS List of free and open-source software packages References External links The SWORD Project JSword Electronic Bibles Electronic publishing Text Encoding Initiative Online Scripture Search Engine
1013768
https://en.wikipedia.org/wiki/LAPACK
LAPACK
LAPACK ("Linear Algebra Package") is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK was designed as the successor to the linear equations and linear least-squares routines of LINPACK and the eigenvalue routines of EISPACK. LINPACK, written in the 1970s and 1980s, was designed to run on the then-modern vector computers with shared memory. LAPACK, in contrast, was designed to effectively exploit the caches on modern cache-based architectures, and thus can run orders of magnitude faster than LINPACK on such machines, given a well-tuned BLAS implementation. LAPACK has also been extended to run on distributed memory systems in later packages such as ScaLAPACK and PLAPACK. Netlib LAPACK is licensed under a three-clause BSD style license, a permissive free software license with few restrictions. Naming scheme Subroutines in LAPACK have a naming convention which makes the identifiers very compact. This was necessary as the first Fortran standards only supported identifiers up to six characters long, so the names had to be shortened to fit into this limit. A LAPACK subroutine name is in the form pmmaaa, where: p is a one-letter code denoting the type of numerical constants used. S, D stand for real floating-point arithmetic respectively in single and double precision, while C and Z stand for complex arithmetic with respectively single and double precision. The newer version, LAPACK95, uses generic subroutines in order to overcome the need to explicitly specify the data type. mm is a two-letter code denoting the kind of matrix expected by the algorithm. The codes for the different kind of matrices are reported below; the actual data are stored in a different format depending on the specific kind; e.g., when the code DI is given, the subroutine expects a vector of length n containing the elements on the diagonal, while when the code GE is given, the subroutine expects an array containing the entries of the matrix. aaa is a one- to three-letter code describing the actual algorithm implemented in the subroutine, e.g. SV denotes a subroutine to solve linear system, while R denotes a rank-1 update. For example, the subroutine to solve a linear system with a general (non-structured) matrix using real double-precision arithmetic is called DGESV. Details on this scheme can be found in the Naming scheme section in LAPACK Users' Guide. Use with other programming languages Many programming environments today support the use of libraries with C binding. The LAPACK routines can be used like C functions if a few restrictions are observed. Several alternative language bindings are also available: Armadillo for C++ IT++ for C++ LAPACK++ for C++ Lacaml for OCaml CLapack for C SciPy for Python Gonum for Go NLapack for .NET Implementations As with BLAS, LAPACK is frequently forked or rewritten to provide better performance on specific systems. Some of the implementations are: Accelerate Apple's framework for macOS and iOS, which includes tuned versions of BLAS and LAPACK. Netlib LAPACK The official LAPACK. 
Netlib ScaLAPACK Scalable (multicore) LAPACK, built on top of PBLAS. Intel MKL Intel's Math routines for their x86 CPUs. OpenBLAS Open-source reimplementation of BLAS and LAPACK. Gonum LAPACK A partial native Go implementation. Since LAPACK uses BLAS for the heavy-lifting, just linking to a better-tuned BLAS implementation usually improves the performance sufficiently. As a result, LAPACK is not reimplemented as often as BLAS is. Similar projects These projects provide a similar functionality to LAPACK, but the main interface differs from that of LAPACK: Libflame A dense linear algebra library. Has a LAPACK-compatible wrapper. Can be used with any BLAS, although BLIS is the preferred implementation. Eigen A header library for linear algebra. Has a BLAS and a partial LAPACK implementation for compatibility. MAGMA Matrix Algebra on GPU and Multicore Architectures (MAGMA) project develops a dense linear algebra library similar to LAPACK but for heterogeneous and hybrid architectures including multicore systems accelerated with GPGPUs. PLASMA The Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) project is a modern replacement of LAPACK for multi-core architectures. PLASMA is a software framework for development of asynchronous operations and features out of order scheduling with a runtime scheduler called QUARK that may be used for any code that expresses its dependencies with a directed acyclic graph. See also List of numerical libraries Math Kernel Library (MKL) NAG Numerical Library SLATEC, a FORTRAN 77 library of mathematical and statistical routines QUADPACK, a FORTRAN 77 library for numerical integration References Further reading External links Fortran libraries Free software programmed in Fortran Numerical linear algebra Numerical software Software using the BSD license
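To see the naming scheme in action, here is a short sketch that calls DGESV through SciPy's low-level LAPACK wrappers; it assumes NumPy and SciPy are installed, and the matrix values are arbitrary. Higher-level calls such as numpy.linalg.solve typically end up in the same routine.

import numpy as np
from scipy.linalg import lapack   # low-level wrappers around the Fortran routines

# DGESV: Double precision, GEneral matrix, SolVe a linear system A x = b.
a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([[9.0],
              [8.0]])

lu, piv, x, info = lapack.dgesv(a, b)   # LU factors, pivot indices, solution, status
assert info == 0                        # info == 0 means the routine succeeded
print(x)                                # [[2.], [3.]]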
5381963
https://en.wikipedia.org/wiki/Resolution%20independence
Resolution independence
Resolution independence is where elements on a computer screen are rendered at sizes independent from the pixel grid, resulting in a graphical user interface that is displayed at a consistent physical size, regardless of the resolution of the screen. Concept As early as 1978, the typesetting system TeX due to Donald Knuth introduced resolution independence into the world of computers. The intended view can be rendered beyond the atomic resolution without any artifacts, and the automatic typesetting decisions are guaranteed to be identical on any computer up to an error less than the diameter of an atom. This pioneering system has a corresponding font system, Metafont, which provides suitable fonts of the same high standards of resolution independence. The terminology device independent file format (DVI) is the file format of Donald Knuth's pioneering TeX system. The content of such a file can be interpreted at any resolution without any artifacts, even at very high resolutions not currently in use. Implementation macOS Apple included some support for resolution independence in early versions of macOS, which could be demonstrated with the developer tool Quartz Debug that included a feature allowing the user to scale the interface. However, the feature was incomplete, as some icons did not show (such as in System Preferences), user interface elements were displayed at odd positions and certain bitmap GUI elements were not scaled smoothly. Because the scaling feature was never completed, macOS's user interface remained resolution-dependent. On June 11, 2012, Apple introduced the 2012 MacBook Pro with a resolution of 2880×1800 or 5.2 megapixels – doubling the pixel density in both dimensions. The laptop shipped with a version of macOS that provided support to scale the user interface twice as big as it has previously been. This feature is called HighDPI mode in macOS and it uses a fixed scaling factor of 2 to increase the size of the user interface for high-DPI screens. Apple also introduced support for scaling the UI by rendering the user interface on higher or smaller resolution that the laptop's built-in native resolution and scaling the output to the laptop screen. One obvious downside of this approach is either a decreased performance on rendering the UI on a higher than native resolution or increased blurriness when rendering lower than native resolution. Thus, while the macOS's user interface can be scaled using this approach, the UI itself is not resolution-independent. Microsoft Windows The GDI system in Windows is pixel-based and thus not resolution-independent. To scale up the UI, Microsoft Windows has supported specifying a custom DPI from the Control Panel since Windows 95. (In Windows 3.1, the DPI setting is tied to the screen resolution, depending on the driver information file.) When a custom system DPI is specified, the built-in UI in the operating system scales up. Windows also includes APIs for application developers to design applications that will scale properly. GDI+ in Windows XP adds resolution-independent text rendering however, the UI in Windows versions up to Windows XP is not completely high-DPI aware as displays with very high resolutions and high pixel densities were not available in that time frame. Windows Vista and Windows 7 scale better at higher DPIs. Windows Vista also adds support for programs to declare themselves to the OS that they are high-DPI aware via a manifest file or using an API. 
For programs that do not declare themselves as DPI-aware, Windows Vista supports a compatibility feature called DPI virtualization, so that system metrics and UI elements are presented to applications as if they were running at 96 DPI; the Desktop Window Manager then scales the resulting application window to match the DPI setting. Windows Vista retains the Windows XP style scaling option, which, when enabled, turns off DPI virtualization (the source of blurry text) for all applications globally. Windows Vista also introduces Windows Presentation Foundation. WPF applications are vector-based, not pixel-based, and are designed to be resolution-independent. Windows 7 adds the ability to change the DPI with only a log off rather than a full reboot, and makes it a per-user setting. Additionally, Windows 7 reads the monitor DPI from the EDID and automatically sets the DPI value to match the monitor's physical pixel density, unless the effective resolution is less than 1024 × 768. In Windows 8, only the DPI scaling percentage is shown in the DPI changing dialog and the display of the raw DPI value has been removed. In Windows 8.1, the global setting to disable DPI virtualization (only use XP-style scaling) is removed. At pixel densities higher than 120 PPI (125%), DPI virtualization is enabled for all applications without a DPI-aware flag (manifest) set inside the EXE. Windows 8.1 retains a per-application option to disable DPI virtualization of an app. Windows 8.1 also adds the ability for each display to use an independent DPI setting, although it calculates this automatically for each display. Windows 8.1 prevents a user from forcibly enabling DPI virtualization of an application. Therefore, if an application wrongly claims to be DPI-aware, it will look too small on high-DPI displays in 8.1, and a user cannot correct that. Windows 10 adds manual control over DPI for individual monitors. In addition, Windows 10 version 1703 brings back the XP-style GDI scaling under a "System (Enhanced)" option. This option combines GDI+'s text rendering at a higher resolution with the usual scaling of other elements, so that text appears crisper than in the normal "System" virtualization mode. Android Since Android 1.6 "Donut" (September 2009), Android has provided support for multiple screen sizes and densities. Android expresses layout dimensions and position via the density-independent pixel or "dp", which is defined as one physical pixel on a 160 dpi screen. At runtime, the system transparently handles any scaling of the dp units, as necessary, based on the actual density of the screen in use. To aid in the creation of underlying bitmaps, Android categorizes resources into buckets based on screen size and density (for example ldpi, mdpi, hdpi and xhdpi). X Window System The Xft library, the font rendering library for the X11 system, has a dpi setting that defaults to 75. This is simply a wrapper around the FC_DPI system in fontconfig, but it suffices for scaling the text in Xft-based applications. The mechanism is also detected by desktop environments to set their own DPI, usually in conjunction with the EDID-based family of Xlib functions. The latter has been rendered ineffective in Xorg Server 1.7; since then EDID information is only exposed to XRandR. In 2013, the GNOME desktop environment began efforts to bring resolution independence ("hi-DPI" support) to various parts of the graphics stack. Developer Alexander Larsson initially wrote about changes required in GTK+, Cairo, Wayland and the GNOME themes.
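As an illustration of the Android dp arithmetic described above: a dp value is converted to physical pixels by multiplying it by the screen density divided by the 160 dpi reference. A minimal sketch follows; the helper is hypothetical and not part of the Android SDK.

# Illustrative dp-to-pixel conversion (hypothetical helper, not part of the Android SDK).
REFERENCE_DPI = 160  # 1 dp is defined as one physical pixel on a 160 dpi screen

def dp_to_px(dp: float, screen_dpi: float) -> int:
    # Convert density-independent pixels to physical pixels for a given screen density.
    return round(dp * screen_dpi / REFERENCE_DPI)

if __name__ == "__main__":
    # A 48 dp element on common density buckets (mdpi, hdpi, xhdpi, xxhdpi):
    for dpi in (160, 240, 320, 480):
        print(dpi, dp_to_px(48, dpi))  # 48, 72, 96, 144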
At the end of the BoF sessions at GUADEC 2013, GTK+ developer Matthias Clasen mentioned that hi-DPI support would be "pretty complete" in GTK 3.10 once work on Cairo was completed. As of January 2014, hi-DPI support for Clutter and GNOME Shell was ongoing work. GTK supports scaling all UI elements by integer factors, and all text by any non-negative real factor. As of 2019, fractional scaling of the UI by scaling up and then down is experimental. Other Although not related to true resolution independence, some other operating systems use GUIs that are able to adapt to changed font sizes. From Windows 95 onwards, Microsoft Windows used the Marlett TrueType font in order to scale some window controls (close, maximize, minimize, resize handles) to arbitrary sizes. AmigaOS from version 2.04 (1991) was able to adapt its window controls to any font size. Video games are often resolution-independent; an early example is Another World for DOS, which used polygons to draw its 2D content and was later remade using the same polygons at a much higher resolution. 3D games are resolution-independent since the perspective is calculated every frame, and so the resolution can vary. See also Adobe Illustrator CorelDRAW Direct2D Display PostScript Himetric Inkscape Page zooming Responsive Web Design Retina display Scalable Vector Graphics Synfig Twips Vector-based graphical user interface Vector graphics References External links Declaration of resolution-independence by John Siracusa Digital imaging
484889
https://en.wikipedia.org/wiki/University%20of%20Michigan%20Executive%20System
University of Michigan Executive System
The University of Michigan Executive System (UMES), a batch operating system developed at the University of Michigan in 1958, was widely used at many universities. Based on the General Motors Executive System for the IBM 701, UMES was revised to work on the mainframe computers in use at the University of Michigan during this time (IBM 704, 709, and 7090) and to work better for the small student jobs that were expected to be the primary workload at the University. UMES was in use at the University of Michigan until 1967, when MTS was phased in to take advantage of the newer virtual memory time-sharing technology that became available on the IBM System/360 Model 67. Programming languages available FORTRAN MAD (programming language) See also Timeline of operating systems History of IBM mainframe operating systems FORTRAN Monitor System Bell Operating System (BESYS) or Bell Monitor (BELLMON) SHARE Operating System (SOS) IBM 7090/94 IBSYS Compatible Time-Sharing System (CTSS) Michigan Terminal System (MTS) Hardware: IBM 701, IBM 704, IBM 709, IBM 7090 External links University of Michigan Executive System for the IBM 7090 Computer, volumes 1 (General, Utilities, Internal Organization), 2 (Translators), and 3 (Subroutine Libraries), Computing Center, University of Michigan, September 1965, 1050 pp. The IBM 7094 and CTSS, Tom Van Vleck University of Michigan Executive System (UMES) subseries, Computing Center publications, 1965-1999, Bentley Historical Library, University of Michigan, Ann Arbor, Michigan "A Markovian model of the University of Michigan Executive System", James D. Foley, Communications of the ACM, 1967, No. 6 Discontinued operating systems University of Michigan 1958 software IBM mainframe operating systems
7490299
https://en.wikipedia.org/wiki/PSXLinux
PSXLinux
PSXLinux (also known as Runix) is a Linux kernel and development kit for the PlayStation (MIPS-NOMMU). PSXLinux is based on the μClinux 2.4.x kernel and contains specific support for the Sony PlayStation. Features Serial console over RS232 SIO Virtual console over the PlayStation GPU Multiple memory cards as a storage device, via a block device driver USB host driver capable of keyboard and mouse support using a Cypress Semiconductor SL811 Compiling the kernel Various attempts to compile the kernel resulted in either errors or an inability to run the kernel from the suggested boot methods. A cross compiler is required to make the kernel compatible with the PlayStation's CPU. Execution methods Over SIO (Serial) Loading the compiled RUNIX binary (PS-EXE) into a PlayStation may be done by using a serial adapter (such as the Net Yaroze Serial Cable) or a parallel port device (Xplorer, Caetla). Another method is installing a modchip within the PlayStation and burning a CD-ROM containing the executable data, which will allow the system to boot burned discs. Runix did, however, supply some tools on its website to transfer files if one had obtained or built a serial cable. The filename was: psx-serial.0.9.7.tar.gz From CD Because the PlayStation does not support the Linux ELF format, conversion is required; a suggested path is to first convert the Linux kernel file to ECOFF, the Net Yaroze native executable format. This can be done with an enclosed tool called elf2ecoff inside the kernel source. The next step is to convert the ECOFF file to a PS-EXE file, the format found on PlayStation game discs, after which a valid CD-ROM disc image can be mastered. Multiple Memory Cards as Storage From Beta 1 onwards, the Runix sources support single or multiple memory cards of the PlayStation's default size or larger. Memory cards could be formatted as Ext2 using a tool Runix supplied on its website. The tools to do so appear to be lost and no sources can be found, only the name: psx-mcard.0.8.2.tar.gz References External links μClinux website BetaArchive website, thread on PSXLinux Lightweight Unix-like systems
53842698
https://en.wikipedia.org/wiki/Trojan%3AWin32/Agent
Trojan:Win32/Agent
Trojan:Win32/Agent is the definition (from Microsoft or Apple) of a Trojan downloader, Trojan dropper, or Trojan spy. Its first known detection was January 2018, according to the Microsoft Malware Protection Center. Trojans may allow an attacker to access users' personal information such as banking information, passwords, or personal identity. They can also delete a user's files or infect other devices connected to the network. They can be removed by a virus scanning and removal tool such as Microsoft Defender. Additional info Win32/Agent Trojans have been observed to perform any, or all, of the following actions: Redirecting web traffic to malicious/compromised websites/domains Manipulating certain Windows or other installed applications, including the specific settings and/or configurations Dropping and/or installing additional malicious scripts or programs, as well as downloading and starting separate malicious programs Other aliases Trojan.Win32.Agent (Kaspersky Labs) Trojan:Generic.dx!tus (McAfee) References Windows trojans 2008 in computing
12099744
https://en.wikipedia.org/wiki/CrushFTP%20Server
CrushFTP Server
CrushFTP is a proprietary multi-protocol, multi-platform file transfer server originally developed in 1999. CrushFTP is shareware with a tiered pricing model. It is targeted at users ranging from home users up to enterprise users. Features CrushFTP supports the following protocols: FTP, FTPS, SFTP, HTTP, HTTPS, WebDAV and WebDAV SSL. Additionally, although not a protocol, it has both AJAX/HTML5 and Java applet web interfaces for end users to manage their files from a web browser. CrushFTP uses a GUI for administration, but also installs as a daemon on Mac OS X, Linux, Unix, and as a service in Windows. It supports multihoming, multiple websites with distinct branding, hot configuration changes, attachment redirection, and GUI-based management of users and groups from a browser. Plugins are included for authentication against SQL databases, LDAP, Active Directory, and other custom methods. All settings are stored in XML files that can be edited directly, or with the web UI. If edited directly, CrushFTP notices the modification timestamp change and loads the settings immediately without needing a server restart. History of CrushFTP CrushFTP was first published publicly around 1998. Initial versions were FTP only. There were no connection restrictions in version 1.x. CrushFTP 2.x brought about virtual directories in a sense, while CrushFTP 3.x brought about a full virtual file system. It supported the ability to merge and mangle several file systems together regardless of whether they were local folders or another FTP site. It could even act as a proxy for other FTP servers. However, the complications from all the potential issues that could arise from this were confusing. CrushFTP 3 introduced tiered pricing models. CrushFTP 4 focused primarily on a cleaner interface and a less confusing virtual file system. While it still seems to have some support for merging FTP sites with a local file system, the support seems limited. Updates in version 4 included a full HTTP server as well as the other supported protocols. Later updates began recognizing connection differences between web browsers and FTP/SFTP clients, counting four web browser connections as only one user against the licensed limit. CrushFTP 5 continued the evolution of the WebInterface with various iterations. It used a Flash interface briefly before replacing it with an HTML/AJAX interface. CrushFTP 5 was the last version to still use a thick-client Java Swing UI. Version 6 moved to an all web browser UI. CrushFTP 6, released in 2012, brought about major changes as the management and monitoring interface became entirely web based. Its interface is based on jQuery and jQuery UI. Multiple administrators can work concurrently, fixing the single-admin limitation of prior versions. It had image thumbnail support and file replication and syncing. CrushFTP 7 was released in early 2014. According to the "what's new" page, it adds a dashboard for server information, delegated role based administration, a graphical job / event designer, MP4 movie streaming support using HTML5, UPnP / PMP port forwarding and automatic external port validation testing, among many other features. Some features are available only to enterprise customers, such as user synchronization and DMZ prefs synchronization between internal servers. CrushFTP 8 was released in late 2016. The "what's new" page lists a new, faster HTML5 browser uploading system (4x faster) with resume support, a limited filesystem server mode, and data replication as key new features.
There is a revision system on files, a new reports UI, and a stand-alone client UI as part of the release as well. CrushFTP 9 was released in late 2018. The "what's new" page lists a new CrushBalance load balancer, a new Citrix protocol for the VFS, use of fewer threads, Let's Encrypt plugin support, and automated expiration reminder emails for passwords, accounts, and shares. Additionally, it lists Proxy Protocol v2 support for AWS load balancers, and an enhanced Job management system. CrushFTP 10 was released in early 2021. Features DMZ feature to separate internal and external server interfaces. High availability, session replication, data replication and VIP capabilities. Event based actions to trigger emails. Job scheduler, visual flow designer, manage and move files across protocols. Pass a list of found files from one step to the next, filtering items out, multithreading multiple steps simultaneously, and monitoring in realtime the progress of the job visually and with realtime logging. Scriptable command line CrushClient with support for FTP(ES)/SFTP/HTTP(S) CrushBalance load balancer included for a software based load balancer that can be put in front of the main CrushFTP server. Supports many back end protocols for file storage, including FTP(ES), SMB, SFTP, HTTP(S), WebDAV, Google Drive, Azure, Hadoop and S3 WebInterface allowing on the fly zipped uploads and downloads WebInterface supports image thumbnail generation for live image previews Drill down into folders on the WebInterface, delete, or rename. API for configuring users and VFS items over HTTP(S) Custom usage reports that can be run on demand, or scheduled. Live realtime dashboard UI for monitoring server health, active users, and their activity. Web server supports Server Side Includes, and virtual domains. SQL integration to store users and permissions in SQL database tables. LDAP / Active Directory authentication integration. SAML SSO authentication integration. Radius authentication integration. Ability to launch custom shell scripts passing in arguments. DDoS protection Detailed audit logging and log rolling. Syslog or DB logging for a secondary server with replicated log data (audit purposes) Custom web upload forms for collecting additional information with file uploads which can be passed to jobs and events. Bandwidth limiters. Internal statistic gathering. User and group inheritance on a per setting level. Max login time, idle time. Max upload, download, and minimum download speed. Quotas and ratios. Max download amount per session, day, or month. Auto account expirations. Restricted IP ranges for connections. Custom events including running a plugin or sending an email. Supports various encodings including UTF-8. Can do Virtual File System (VFS) linking to merge several file systems. Supports FTP's MODE Z for compressed transfers. Plugins CrushLDAPGroup authenticates against LDAP servers, including Active Directory. CrushTask has a long list of tasks it can perform: AS2, Copy, Delete, Email, Execute, Find, Jump, HTTP, MakeDirectory, Move, PGP, PopImap, Preview, Rename, SQL, Unzip, Wait, WriteFile, Zip and a Custom task. MagicDirectory allows creating users by just making a folder. Non-administrator personnel can create users easily. Authentication options Built-in user database consisting of XML files describing the user and Virtual File System access.
Active Directory / LDAP Web Application POST and retrieval of XML configurations SAML SQL tables HTTP Basic Authentication HTTP Form Based Authentication MagicDirectory folder-name-based user authentication Security Encryption is supported for files "at rest" using PGP, as well as for passwords using a non-reversible MD5, MD4, SHA, SHA-512, or SHA-3 hash. SFTP uses SSH for encryption, and FTPS uses SSL/TLS for encryption. SHA-2 hashing algorithms are supported. Hashes can be salted with random salt values. As of August 2021, there have been six published vulnerabilities in CrushFTP. See also Comparison of FTP server software References External links CrushFTP Server Home Page CrushFTP Documentation FTP server software
22426256
https://en.wikipedia.org/wiki/Eli%20Upfal
Eli Upfal
Eli Upfal is a computer science researcher, currently the Rush C. Hawkins Professor of Computer Science at Brown University. He completed his undergraduate studies in mathematics and statistics at the Hebrew University, Israel in 1978, received an M.Sc. in computer science from the Feinberg Graduate School of the Weizmann Institute of Science, Israel in 1980, and completed his PhD in computer science at the Hebrew University in 1983 under Eli Shamir. He has made contributions in a variety of areas. Most of his work involves randomized and/or online algorithms, stochastic processes, or the probabilistic analysis of deterministic algorithms. Particular applications include routing and communications networks, computational biology, and computational finance. He is responsible for a large body of work, including, as of May 2012, more than 150 publications in journals and conferences as well as many patents. He has won several prizes, including the IBM Outstanding Innovation Award and the Levinson Prize in Mathematical Sciences. In 2002, Upfal was inducted as a Fellow of the Institute of Electrical and Electronics Engineers, and in 2005 he was inducted as a Fellow of the Association for Computing Machinery. He received, together with Yossi Azar, Andrei Broder, Anna Karlin, and Michael Mitzenmacher, the 2020 ACM Paris Kanellakis Award. Upfal is a coauthor, with Michael Mitzenmacher, of the textbook Probability and Computing. References External links Eli Upfal's website Living people Theoretical computer scientists Fellows of the Association for Computing Machinery Fellow Members of the IEEE Israeli computer scientists Brown University faculty Year of birth missing (living people)
82418
https://en.wikipedia.org/wiki/TRIPOS
TRIPOS
TRIPOS (TRIvial Portable Operating System) is a computer operating system. Development started in 1976 at the Computer Laboratory of Cambridge University and it was headed by Dr. Martin Richards. The first version appeared in January 1978 and it originally ran on a PDP-11. Later it was ported to the Computer Automation LSI4 and the Data General Nova. Work on a Motorola 68000 version started in 1981 at the University of Bath. MetaComCo acquired the rights to the 68000 version and continued development until TRIPOS was chosen by Commodore Amiga in March 1985 to form part of an operating system for their new computer; it was also used at Cambridge as part of the Cambridge Distributed Computing System. Students in the Computer Science department at Cambridge affectionately refer to TRIPOS as the Terribly Reliable, Incredibly Portable Operating System. The name TRIPOS also refers to the Tripos system of undergraduate courses and examinations, which is unique to Cambridge University. Influences on the Amiga computer In July 1985, the Amiga was introduced, incorporating TRIPOS in the AmigaDOS module of AmigaOS. AmigaDOS included a command line interface and the Amiga File System. The entire AmigaDOS module was originally written in BCPL (an ancestor of the C programming language), the same language used to write TRIPOS. AmigaDOS would later be rewritten in C from AmigaOS 2.x onwards, retaining backwards compatibility with 1.x up until AmigaOS 4 (completely rewritten in C) when AmigaDOS abandoned its BCPL legacy. Features TRIPOS provided features such as pre-emptive multi-tasking (using strict-priority scheduling), a hierarchical file system and multiple command line interpreters. The most important TRIPOS concepts have been the non-memory-management approach (meaning no checks are performed to stop programs from using unallocated memory) and message passing by means of passing pointers instead of copying message contents. Those two concepts together allowed for sending and receiving over 1250 packets per second on a 10 MHz Motorola 68010 CPU. Most of TRIPOS was implemented in BCPL. The kernel and device drivers were implemented in assembly language. One notable feature of TRIPOS/BCPL was its cultural use of shared libraries, untypical at the time, resulting in small and therefore fast loading utilities. For example, many of the standard system utilities were well below 0.5 Kbytes in size, compared to a typical minimum of about 20 Kbytes for functionally equivalent code on a modern Unix or Linux. TRIPOS was ported to a number of machines, including the Data General Nova 2, the Computer Automation LSI4, Motorola 68000 and Intel 8086- based hardware. It included support for the Cambridge Ring local area network. More recently, Martin Richards produced a port of TRIPOS to run under Linux, using Cintcode BCPL virtual machine. As of February 2020, TRIPOS is still actively maintained by Open G I Ltd. (formerly Misys Financial Systems) in Worcestershire, UK. Many British insurance brokers have a Linux/Intel based TRIPOS system serving networked workstations over a TCP/IP connection - the systems are used to run Open G I's BROOMS Application suite. Open G I have added a number of features to support the modern office such as the ability to integrate into many mainstream applications and services such as SQL server, Citrix XENAPP, terminal servers, etc. Commands The following list of commands is supported by the TRIPOS CLI. 
ALINK ASSEM ASSIGN BREAK C CD CONSOLE COPY DATE DELETE DIR DISKCOPY DISKDOCTOR ECHO ED EDIT ENDCLI FAILAT FAULT FILENOTE FORMAT IF INFO INSTALL JOIN LAB LIST MAKEDIR MOUNT NEWCLI PATH PROMPT PROTECT QUIT RELABEL RENAME RUN SEARCH SKIP SORT STACK STATUS TYPE VDU WAIT WHY Cintpos Cintpos is an experimental interpretive version of TRIPOS which runs on the Cintcode BCPL virtual machine, also developed by Martin Richards. References Reference manuals Martin Richards' Cintpos page A brief informal history of the Computer Laboratory In the beginning was CAOS Further reading External links Amiga history guide: TripOS/68k CBG Stallone Computer Amiga Computer-related introductions in 1978 Discontinued operating systems History of computing in the United Kingdom University of Cambridge Computer Laboratory
9956553
https://en.wikipedia.org/wiki/Larry%20Blakeney
Larry Blakeney
Larry Blakeney (born September 21, 1947) is a former American football player and coach. He served as the head football coach at Troy University from 1991 to 2014, compiling a record of 178–113–1 in 24 seasons. He is one of only two coaches to have taken a college football program from NCAA Division II to the NCAA Division I Football Bowl Subdivision, the other being UCF's Gene McDowell. Blakeney was the recipient of the Johnny Vaught Lifetime Achievement Award by the All-American Football Foundation in 2000. He was inducted into the Wiregrass Sports Hall of Fame in 2008 and was inducted into the Alabama Sports Hall of Fame on May 30, 2009. On December 21, 2010, he received the Sun Belt Conference 10th Anniversary Most Outstanding Head Coach award. In the spring of 2011, Troy University honored Blakeney by naming the football playing surface Larry Blakeney Field at Veterans Memorial Stadium. On August 10, 2012, Blakeney was inducted into the Troy University Sports Hall of Fame. He was part of the inaugural class along with DeMarcus Ware, Don Maestri, Chase Riddle, Bill Atkins, Sim Byrd, Denise Monroe, Vergil McKinley, Ralph Adams, Mike Turk, and Charles Oliver. Playing career Blakeney was the first sophomore to start at quarterback for Ralph Jordan at Auburn. A three-year letterman, he started eight games in 1966, scoring five touchdowns in his first four games. Blakeney lost the starting job in 1967, however, and moved to the defensive backfield in 1968. He missed the entire season with a shoulder injury, but resumed play in 1969 as Auburn posted an 8–3 record. He lettered twice in baseball, in 1968 and 1969. Blakeney graduated in 1970 with a bachelor's degree in business administration. Coaching career Blakeney became a head coach at three high schools after graduation: Southern Academy (1970–71), Walker High School (1972–74) and Vestavia Hills High School (1975–76). He compiled a 50–24–2 record as a high school head coach. He was hired at his alma mater, Auburn, in 1977 as the offensive line assistant coach. In 1979, he became the tight ends and wide receivers coach for two years, and then coached just the wide receivers from 1981 to 1990. He took on offensive play-calling duties in 1986. During his 14 seasons at Auburn, the Tigers were 110–50–3, won four Southeastern Conference championships and were 6–2–1 in bowl games. Troy Blakeney became the 20th head football coach at what was then known as Troy State University on December 3, 1990—the school did not become Troy University until 2004. The Troy State Trojans were still an NCAA Division II program, but were approved to transition to NCAA Division I-AA the following season. Blakeney took over a program that had won NCAA Division II Football Championships in 1984 and 1987, but was 13–17 the previous three years. In the first full year at Division I-AA, the Trojans made it to the semifinal game and finished 12–1–1, going 10–0–1 in the regular season. This marked the first undefeated full regular season in Troy State football history, and the team finished ranked first in the end-of-season poll by The Sports Network. In 1995, the team improved on that record, finishing 11–0 in the regular season for the first undefeated and untied season in program history. During the eight seasons the team was a member of I-AA football, they made the playoffs in seven of them, won the Southland Conference championship three times, and made the playoff semifinals twice. Troy State transitioned to Division I-A in 2001.
During that season they defeated three Division I-A schools, including their first win over a BCS conference school, Mississippi State. The transition made Blakeney one of two coaches to ever take a football team from Division II to I-A—the other is UCF's Gene McDowell. In 2004, Troy's first year in the Sun Belt Conference, Blakeney coached his team to one of the biggest victories in the school's and the Sun Belt's history after defeating then #17 ranked Missouri, 24–14, at home, in front of a national audience on ESPN2. He once again coached his team to a victory over a BCS school in 2007 at home, routing Oklahoma State 41–23. Blakeney earned his first bowl game win in 2006, beating the Rice Owls, 41–17, in the New Orleans Bowl. The team won their first Sun Belt Conference title that year. After losing the 2008 New Orleans Bowl in overtime against Southern Miss and the 2010 GMAC Bowl in double-overtime against Central Michigan, Blakeney earned his second bowl victory in the 2010 New Orleans Bowl, defeating Ohio, 48–21. ESPN recognized Blakeney as one of the top five non-AQ recruiting closers in 2009. Blakeney retired at the end of the 2014 season after serving 24 years as head coach at Troy. Personal Blakeney is married to the former Janice Powell and they have three daughters, Kelley, and twins Julie and Tiffany. All three daughters graduated from Troy. Tiffany is married and lives in Atlanta, Georgia with her husband Jason Rash and two daughters, Madeline Ann Rash and Danielle Avery Rash. Head coaching record College References 1947 births Living people American football quarterbacks Auburn Tigers baseball players Auburn Tigers football coaches Auburn Tigers football players Troy Trojans football coaches High school football coaches in Alabama Sportspeople from Birmingham, Alabama Players of American football from Birmingham, Alabama
605869
https://en.wikipedia.org/wiki/Spell%20checker
Spell checker
In software, a spell checker (or spelling checker or spell check) is a software feature that checks for misspellings in a text. Spell-checking features are often embedded in software or services, such as a word processor, email client, electronic dictionary, or search engine. Design A basic spell checker carries out the following processes: It scans the text and extracts the words contained in it. It then compares each word with a known list of correctly spelled words (i.e. a dictionary). This might contain just a list of words, or it might also contain additional information, such as hyphenation points or lexical and grammatical attributes. An additional step is a language-dependent algorithm for handling morphology. Even for a lightly inflected language like English, the spell checker will need to consider different forms of the same word, such as plurals, verbal forms, contractions, and possessives. For many other languages, such as those featuring agglutination and more complex declension and conjugation, this part of the process is more complicated. It is unclear whether morphological analysis—allowing for many forms of a word depending on its grammatical role—provides a significant benefit for English, though its benefits for highly synthetic languages such as German, Hungarian, or Turkish are clear. As an adjunct to these components, the program's user interface allows users to approve or reject replacements and modify the program's operation. Spell checkers can use approximate string matching algorithms such as Levenshtein distance to find correct spellings of misspelled words. An alternative type of spell checker uses solely statistical information, such as n-grams, to recognize errors instead of correctly-spelled words. This approach usually requires a lot of effort to obtain sufficient statistical information. Key advantages include needing less runtime storage and the ability to correct errors in words that are not included in a dictionary. In some cases, spell checkers use a fixed list of misspellings and suggestions for those misspellings; this less flexible approach is often used in paper-based correction methods, such as the see also entries of encyclopedias. Clustering algorithms have also been used for spell checking combined with phonetic information. History Pre-PC In 1961, Les Earnest, who headed the research on this budding technology, saw it necessary to include the first spell checker that accessed a list of 10,000 acceptable words. Ralph Gorin, a graduate student under Earnest at the time, created the first true spelling checker program written as an applications program (rather than research) for general English text: SPELL for the DEC PDP-10 at Stanford University's Artificial Intelligence Laboratory, in February 1971. Gorin wrote SPELL in assembly language, for faster action; he made the first spelling corrector by searching the word list for plausible correct spellings that differ by a single letter or adjacent letter transpositions and presenting them to the user. Gorin made SPELL publicly accessible, as was done with most SAIL (Stanford Artificial Intelligence Laboratory) programs, and it soon spread around the world via the new ARPAnet, about ten years before personal computers came into general use. SPELL, its algorithms and data structures inspired the Unix ispell program. The first spell checkers were widely available on mainframe computers in the late 1970s. 
A group of six linguists from Georgetown University developed the first spell-check system for the IBM corporation. Henry Kučera invented one for the VAX machines of Digital Equipment Corp in 1981. PCs The first spell checkers for personal computers appeared in 1980, such as "WordCheck" for Commodore systems which was released in late 1980 in time for advertisements to go to print in January 1981. Developers such as Maria Mariani and Random House rushed OEM packages or end-user products into the rapidly expanding software market. On the pre-Windows PCs, these spell checkers were standalone programs, many of which could be run in TSR mode from within word-processing packages on PCs with sufficient memory. However, the market for standalone packages was short-lived, as by the mid-1980s developers of popular word-processing packages like WordStar and WordPerfect had incorporated spell checkers in their packages, mostly licensed from the above companies, who quickly expanded support from just English to many European and eventually even Asian languages. However, this required increasing sophistication in the morphology routines of the software, particularly with regard to heavily-agglutinative languages like Hungarian and Finnish. Although the size of the word-processing market in a country like Iceland might not have justified the investment of implementing a spell checker, companies like WordPerfect nonetheless strove to localize their software for as many national markets as possible as part of their global marketing strategy. When Apple developed "a system-wide spelling checker" for Mac OS X so that "the operating system took over spelling fixes," it was a first: one "didn't have to maintain a separate spelling checker for each" program. Mac OS X's spellcheck coverage includes virtually all bundled and third party applications. Visual Tools VT Speller, introduced in 1994, was "designed for developers of applications that support Windows." It came with a dictionary but had the ability to build and incorporate use of secondary dictionaries. Browsers Firefox 2.0, a web browser, has spell check support for user-written content, such as when editing Wikitext, writing on many webmail sites, blogs, and social networking websites. The web browsers Google Chrome, Konqueror, and Opera, the email client Kmail and the instant messaging client Pidgin also offer spell checking support, transparently using previously GNU Aspell and currently Hunspell as their engine. Specialties Some spell checkers have separate support for medical dictionaries to help prevent medical errors. Functionality The first spell checkers were "verifiers" instead of "correctors." They offered no suggestions for incorrectly spelled words. This was helpful for typos but it was not so helpful for logical or phonetic errors. The challenge the developers faced was the difficulty in offering useful suggestions for misspelled words. This requires reducing words to a skeletal form and applying pattern-matching algorithms. It might seem logical that where spell-checking dictionaries are concerned, "the bigger, the better," so that correct words are not marked as incorrect. In practice, however, an optimal size for English appears to be around 90,000 entries. If there are more than this, incorrectly spelled words may be skipped because they are mistaken for others. 
For example, a linguist might determine on the basis of corpus linguistics that the word baht is more frequently a misspelling of bath or bat than a reference to the Thai currency. Hence, it would typically be more useful if a few people who write about Thai currency were slightly inconvenienced than if the spelling errors of the many more people who discuss baths were overlooked. The first MS-DOS spell checkers were mostly used in proofing mode from within word processing packages. After preparing a document, a user scanned the text looking for misspellings. Later, however, batch processing was offered in such packages as Oracle's short-lived CoAuthor and allowed a user to view the results after a document was processed and correct only the words that were known to be wrong. When memory and processing power became abundant, spell checking was performed in the background in an interactive way, such as has been the case with the Sector Software-produced Spellbound program released in 1987 and Microsoft Word since Word 95. In recent years, spell checkers have become increasingly sophisticated; some are now capable of recognizing simple grammatical errors. However, even at their best, they rarely catch all the errors in a text (such as homophone errors) and will flag neologisms and foreign words as misspellings. Nonetheless, spell checkers can be considered a type of foreign-language writing aid that non-native language learners can rely on to detect and correct their misspellings in the target language. Spell-checking for languages other than English English is unusual in that most words used in formal writing have a single spelling that can be found in a typical dictionary, with the exception of some jargon and modified words. In many languages, words are often concatenated into new combinations of words. In German, compound nouns are frequently coined from other existing nouns. Some scripts do not clearly separate one word from another, requiring word-splitting algorithms. Each of these presents unique challenges to non-English language spell checkers. Context-sensitive spell checkers There has been research on developing algorithms that are capable of recognizing a misspelled word, even if the word itself is in the vocabulary, based on the context of the surrounding words. Not only does this allow words such as those in the poem above to be caught, but it mitigates the detrimental effect of enlarging dictionaries, allowing more words to be recognized. For example, baht in the same paragraph as Thai or Thailand would not be recognized as a misspelling of bath. The most common examples of errors caught by such a system are homophone errors, such as the bold words in the following sentence: Their coming too sea if its reel. The most successful algorithm to date is Andrew Golding and Dan Roth's "Winnow-based spelling correction algorithm", published in 1999, which is able to recognize about 96% of context-sensitive spelling errors, in addition to ordinary non-word spelling errors. A context-sensitive spell checker appears in Microsoft Office 2007, and also appeared in the now-defunct Google Wave. Grammar checkers attempt to fix problems with grammar beyond spelling errors, including incorrect choice of words.
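The following is a minimal sketch of the approach outlined in the Design section above: a dictionary lookup, edit-distance-1 candidate generation, and a toy bigram-based context check in the spirit of the baht/bath example. The word list and bigram counts are placeholders, not data from any real product.

# Toy spell checker illustrating the design above: dictionary lookup, edit-distance-1
# suggestions, and a small bigram-based context check. All data here are placeholders.
import string

DICTIONARY = {"bath", "bat", "baht", "thai", "currency", "the", "a", "hot"}
BIGRAM_COUNTS = {("hot", "bath"): 20, ("thai", "baht"): 15}  # hypothetical corpus counts

def edits1(word):
    # All strings one edit away from word: deletes, adjacent transposes, replaces, inserts.
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def suggest(word):
    # Return dictionary words within edit distance 1 (or the word itself if it is correct).
    if word in DICTIONARY:
        return {word}
    return edits1(word) & DICTIONARY

def best_in_context(previous, word):
    # Pick the candidate forming the most frequent bigram with the preceding word.
    candidates = suggest(word) or {word}
    return max(candidates, key=lambda c: BIGRAM_COUNTS.get((previous, c), 0))

if __name__ == "__main__":
    print(suggest("bathe"))                  # {'bath'}
    print(best_in_context("hot", "bahth"))   # 'bath'  (a misspelling one edit from both 'bath' and 'baht')
    print(best_in_context("thai", "bahth"))  # 'baht'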
See also Cupertino effect Grammar checker Record linkage problem Spelling suggestion Words (Unix) Autocorrection LanguageTool References External links Norvig.com, "How to Write a Spelling Corrector", by Peter Norvig BBK.ac.uk, "Spellchecking by computer", by Roger Mitton CBSNews.com, Spell-Check Crutch Curtails Correctness, by Lloyd de Vries History and text of "Candidate for a Pullet Surprise" by Mark Eckman and Jerrold H. Zar Text editor features Spelling Natural language processing
2268410
https://en.wikipedia.org/wiki/Trainwreck%20%28band%29
Trainwreck (band)
Trainwreck is an American southern and comedy rock band formed in Los Angeles in 2002. It was founded by Tenacious D guitarist Kyle Gass and actor JR Reed. The band originally started out as a three-piece, with Reed on vocals, Gass on guitar and Kevin Weisman as the drummer. The band adopted pseudonyms for its members: Reed was "Darryl Lee Donald", Gass was "Klip Calhoun" and Weisman was "Kenny Bob Thornton". Gass and Reed began to search for backing musicians, as they felt Trainwreck's musical style was "too close to Tenacious D". The band met electric guitarist John Konesky and bassist John Spiker in Ohio through a mutual friend, with the two relocating to Los Angeles to join Trainwreck as "John Bartholomew Shredman" and "Boy Johnny" respectively. Nate Rothacker replaced Weisman as the drummer in the mid-2000s, going under the pseudonym "Dallas St. Bernard". The band released one studio album, one live album, one EP and two singles before splitting up in 2010. The band would reform in 2018 and announce a reunion tour, as well as work on a second studio album. In December 2021, the band announced their second studio album would be called "Do You Wanna Get Wrecked?" and that a European tour would commence in May 2022. History Beginnings Gass started the rock band Tenacious D along with Jack Black in 1994. Black became very popular around the turn of the millennium with many film and TV roles, which left him less time to play gigs with Gass, so Gass created Trainwreck to keep himself musically active when Black was busy. In December 2001, Tenacious D played a concert with Weezer and Jimmy Eat World in Value City Arena, where Kyle Gass befriended Erin Robinson, who described herself as a "huge D fan." In 2002, Trainwreck was formed, but just as an occasional band with JR Reed on vocals, Kevin Weisman on drums and Gass on guitar. The group played their first concert at Highland Grounds in Los Angeles on August 2, 2002. In 2003, Gass asked Erin Robinson to recruit electric backing musicians for the "Trainwreck side project", so she found bassist John Spiker and electric guitarist John Konesky. At this point, Gass, Reed and Weisman also added Chris D'Arienzo on keyboard. The band released their first single '2 Tracks' sometime in 2003, and their live album, Trainwreck Live, was released by Epic Records in 2004, exclusively through ShopBootlegs.com. Trainwreck made their first TV appearance on Jimmy Kimmel Live in 2004 and featured on Current TV's 2005 Halloween special performing "TV Theme" as the musical guest. The EP (2006–2007) The band self-released their first 5-song EP, "The EP", in 2006. Two songs from the EP ("Caveman" and "I Wanna Know") were used on the official soundtrack for Tenacious D in The Pick of Destiny, and were later released on More Rocktastic Music From The Film. Gass wears a Trainwreck T-shirt during the vast majority of the film, as well as appearing in it on posters. For the concert tour to support the movie and soundtrack, Konesky and Spiker were recruited to play guitar for the shows based on their work with Trainwreck. Rothacker worked as a band assistant. Because The Pick of Destiny Tour featured all the members of Trainwreck, Trainwreck played shows in the cities they were in on their days off. This is notable because the band performed at the Scala in England, the Annandale Hotel in Australia and the Mod Club Theater in Canada, which were the band's first international dates.
In late 2006, Black expressed a wish to take a break from the entertainment industry in 2007; this allowed Trainwreck to tour during 2007, especially in the summer, to keep Gass musically active. The Wreckoning (2008–2010) In September 2008, Trainwreck released their first music video, "Tim Blankenship", directed by Nick Simon. The song itself would later feature on their 2009 debut album. The band released their debut 15-track album on December 2, 2009 at The Roxy in Los Angeles. There was also a music video created in support of the album – "Brodeo". The band also released a music video for "Baby, Let's Rock" from Trainwreck's 2003 "2 Tracks" single. The band went on a tour to support the new album from March through May in various cities of the United States. In June, during Tenacious D's Bonnaroo Music Festival slot, Rothacker played for Tenacious D for the first time, replacing Brooks Wackerman, who was on tour with Bad Religion. Later on, in September, they began touring again after a short absence. The band played their last show on September 25, 2010 at the Beat Kitchen in Chicago. This show was in the middle of their Transcontinental Railroad tour – the Chicago show was their tenth show of the tour, and due to "unforeseen circumstances" the band cancelled the remainder of the tour. After five months of inactivity, in March 2011, they announced their split on their Facebook page as Gass, Konesky and Rothacker formed the Kyle Gass Band. Reunion (2018–present) The band reunited at a Wynchester show (which features Konesky) at the Maui Sugar Mill Saloon on February 2, 2018, making a short on-stage appearance. This came about because both Gass and Reed were in attendance at the show, and Spiker was performing with Wynchester as a guest. The drumming position was filled by Wynchester's Matt Lesser instead of Nate Rothacker. A couple of days after this, the band announced a show on their Facebook page at DiPiazza's in Long Beach, California for February 24, opening for the Kyle Gass Band. This show was a full-band reunion with Rothacker on drums. In April, the band posted a photo of themselves in a recording studio on their Facebook and Instagram pages. The band also announced a September tour – playing nine dates throughout California. A few days after the announcement of the tour, the band revealed they were working on a second studio album. In July 2020, the band made a virtual appearance for Rootstock Music Festival, and in August 2020, the band made an appearance for Stand Up For America. In December 2021, the band announced their second studio album would be called "Do You Wanna Get Wrecked" and that their first ever European tour would commence in May 2022. Band members Darryl Lee Donald (JR Reed) – lead vocals, percussion (2002–2010, 2018–present) Klip Calhoun (Kyle Gass) – acoustic guitar, backing vocals, flute (2002–2010, 2018–present) John Bartholomew Shredman (John Konesky) – guitars (2003–2010, 2018–present) Boy Johnny (John Spiker) – bass, vocals (2003–2010, 2018–present) Touring musicians T-Bone MacGruthers (Tim Spier) – drums (2018) Former members Kenny Bob Thornton (Kevin Weisman) – drums (2002–2005) Slim Watkins (Steve McDonald) – bass (2003) Lance Branson (Chris D'Arienzo) – keyboard, vocals (2003–2006) Dallas St. Bernard (Nate Rothacker) – drums (2004–2010, 2018) Discography Albums: Trainwreck Live (2004) The Wreckoning (2009) Do You Wanna Get Wrecked?
(2022) Singles / EPs: 2 Tracks (2003) (later re-released as "2 Wreck-n-Roll Tracks") The EP (2006) Trainwreck (2006) External links References Tenacious D Hard rock musical groups from California Comedy rock musical groups Musical groups established in 2002 Musical groups disestablished in 2011 Musical groups from Los Angeles Musical groups reestablished in 2018
27860109
https://en.wikipedia.org/wiki/Chamilo
Chamilo
Chamilo is a free software (under GNU/GPL licensing) e-learning and content management system, aimed at improving access to education and knowledge globally. It is backed by the Chamilo Association, which has goals including the promotion of the software, the maintenance of a clear communication channel and the building of a network of service providers and software contributors. The Chamilo project aims to ensure the availability and quality of education at a reduced cost, through the distribution of its software free of charge, the improvement of its interface for portability to the devices common in developing countries and the provision of a free-access public e-learning campus. History The Chamilo project was officially launched on 18 January 2010 by a considerable part of the contributing community of the (also GNU/GPL) Dokeos software, after growing discontent with the communication policy inside the Dokeos community and a series of choices that made parts of the community uncertain about the future of development. As such, it is considered a fork of Dokeos (at least in its 1.8 series). The reaction to the fork was immediate, with more than 500 active users registering on the Chamilo forums in the first fortnight and more contributions collected in one month than in the whole previous year. The origins of Chamilo's code date back to 2000, with the start of the Claroline project, which was forked in 2004 to launch the Dokeos project. In 2010, it was forked again with the publication of Chamilo 1.8.6.2. Chamilo used to come in two versions. The LMS (or "1.*") version directly builds on Dokeos. Chamilo LCMS (or 3.0) is a completely new software platform for e-learning and collaboration. However, due to frequent structural changes, the lack of a migration workflow from LMS, the complexity of its interface and a certain lack of leadership, support for the project was abandoned by the association in 2015 to focus on improved LMS development. Community Due to Chamilo's educational purpose, most of the community is related to the educational or the human resources sectors. The community itself works together to offer an easy-to-use e-learning system. Active Community members are considered active when they start contributing to the project (through documentation, forum contributions, development, design). In 2009, members of the Dokeos community started working actively on the One Laptop Per Child project together with a primary school in the city of Salto in Uruguay. One of the founding members of the Chamilo Association then registered as a contributing project for the OLPC, in which his company would make efforts to ensure the portability of the platform to the XO laptop. The effort has since been continued as part of the Chamilo project. Passive Members of the community are considered passive when they use the software but do not contribute directly to it. As of February 2016, the passive community was estimated to be more than 11,000,000 users around the world. Chamilo Association Since June 2010, the Chamilo Association has been a legally registered non-profit association (VZW) under Belgian law. The association was created to serve the general goal of improving the Chamilo project's organization and to avoid a conflict of interest between the organization controlling the software project decision process and the best interests of the community using the software.
Its founding members, also its first board of directors, were originally 7, of which 3 are from the private e-learning sector and 4 were from the public educational sector. The current board of directors is composed of 5 members. Main features of Chamilo LMS courses, users and training cycles (including SOAP web services to manage remotely) social network for learning SCORM 1.2 compatibility and authoring tool LTI 1.1 support multi-institutions mode (with central management portal) time-controlled exams international characters (UTF-8) automated generation of certificates tracking of users progress competence based training (CBT) integrated with Mozilla Open Badges multiple time zones proven support for more than 700,000 users (single portal on a single server) Technical details Chamilo is developed mainly in PHP and relies on a LAMP or WAMP system on the server side. On the client side, it only requires a modern web browser (versions younger than 3 years old) and optionally requires the Flash plugin to make use of advanced features. Interoperability The Chamilo LMS (1.*) series benefits from third party implementations that allows easy connexion to Joomla (through JFusion plugin), Drupal (through Drupal-Chamilo module), OpenID (secure authentication framework) and Oracle (through specific PowerBuilder implementations). Extensions Chamilo offers a connector to videoconferencing systems (like BigBlueButton or OpenMeetings) as well as a presentations to learning paths converter, which require advanced system administration skills to install. Releases You can get more information on releases from the original website. Chamilo LMS and Chamilo LCMS are two separate products of the Chamilo Association, which is why the releases history is split below. Chamilo LMS 2021-08 - LMS v1.11.16: Maintenance version on top of 1.11.14 introducing support for IMS/CC 1.3 and IMS/LTI provider mode 2020-11 - LMS v1.11.14: Maintenance version on top of 1.11.12 introducing xAPI compatibility 2020-08 - LMS v1.11.12: Maintenance version on top of 1.11.10 2019-05 - LMS v1.11.10: Maintenance version on top of 1.11.8 2018-08 - LMS v1.11.8: Maintenance version on top of 1.11.6, introducing GDPR features 2018-01 - LMS v1.11.6: Maintenance version on top of 1.11.4 2017-05 - LMS v1.11.4: Maintenance version for 1.11.2 introducing Google Maps connector to help communities of learners find close-by students, maintenance mode, SEPE standards integration, ODF online editor 2016-11 - LMS v1.11.2: Maintenance version for 1.11.0 2016-05 - LMS v1.11.0: This version introduces a basic course importer from Moodle, the management of skills levels, beta IMS/LTI support and the vChamilo plugin 2016-07 - LMS v1.10.8: Maintenance version for 1.10.6 2016-05 - LMS v1.10.6: Maintenance version for 1.10.4 2016-05 - LMS v1.9.10.4: Maintenance version for 1.9.10.2 2016-03 - LMS v1.10.4: Maintenance version for 1.10.2 2015-12 - LMS v1.10.2: Maintenance version for 1.10 2015-10 - LMS v1.10: First version to introduce OpenBadges and vCard features. 2015-01 - LMS v1.9.10: This version is a bugfix and minor-improvements release. It is the first version to comply with accessibility standard WAI WCAG Level AAA. 2014-06 - LMS v1.9.8: This version is a bugfix and minor-improvements release. First version to integrate a support tickets and a payment systems. 2014-04 - LMS v1.9.6.1: This version is a security-patch release. 2013-06 - LMS v1.9.6: This version is a bugfix and minor-improvements release. 
2013-01 - LMS v1.9.4: This version is a bugfix and minor-improvements release. 2012-09 - LMS v1.9.2: This version of Chamilo comes with new features and improvements, including versatile mobile-friendly design features, question categories and the option to include voice recording in tests. 2012-08 - LMS v1.9.0: Chamilo LMS 1.9.0 is the first version of Chamilo (and arguably the first overall LMS platform) to fully support HTML5 (with the exception of a small mistake in the login field) and offer an adaptive HTML/CSS design. It adds a series of features like voice recording as a test answer, webcam capture, question categories, videoconference recording and an improved plugin system to extend global and course-specific features without touching the upstream Chamilo code. In the same month as this release, Chamilo passed 1.2 million registered users around the world. 2011-08 - v1.8.8.4: Although announced a bit later than its real release date, Chamilo 1.8.8.4 was released mostly as a fix version for 1.8.8.2. During the adoption period of this version, Chamilo reached 700,000 reported users. This version also considerably improved certificate generation. 2011-05 - v1.8.8.2: After a slightly flawed 1.8.8 that was never officially released, version 1.8.8.2 was released with new features like speech to text, online audio recording, photo editing, an SVG diagram drawer, full-text indexing and certificate generation. 2010-07 - v1.8.7.1: Version 1.8.7.1, codename Palmas, was launched at the end of July 2010. It included security fixes to the wiki tool, many fixes to bugs found in 1.8.7 and a series of minor global improvements and new features. 2010-05 - v1.8.7: Version 1.8.7, codename Istanbul, was launched in May 2010 with major internationalization (language and time) improvements over the previous version, taking a first major step away from Dokeos. It also added new pedagogical tools over the previous version. This version was the first to be released officially as GNU/GPL version 3. 2010-01 - v1.8.6.2: Version 1.8.6.2 of Chamilo was originally meant to be released as Dokeos 1.8.6.2 in January 2010. Because of the community schism, it was left incomplete and continued (starting November 2009) as the Chamilo project. Chamilo LCMS 2015: The LCMS project was discontinued (or continued outside the realm of the Chamilo Association) 2013-07 - LCMS v3.1: This version is a bugfix and minor-improvements release on top of LCMS v3.0. 2013-05 - LCMS v3.0: This version is a refactored version of LCMS v2.1. 2012-01 - v2.1: Chamilo LCMS 2.1 is the first Chamilo 2 release that has been extensively tested in a variety of production environments. It can be considered to be stable. Chamilo 2 is user-centred and repository-based. All data reside in the repository, thus doing away with data duplication to a major extent. It includes a portfolio application and access from the user's repository to external repositories such as Google Docs, YouTube, Vimeo, Slideshare and many more. 2010-12 - v2.0: The first version 2.0 of Chamilo. It was considered stable software with experimental Web 2.0 and 3.0 aspects, intended to analyze the impact of brand-new technology on education. Apart from introducing the concept of true content, object and document management, Chamilo 2.0 also focuses on integration with existing repository systems (Fedora, YouTube, Google Docs, etc.) and supports some of the most popular authentication systems (among others LDAP, CAS, Shibboleth).
Its modular and dynamic architecture provides a basis for a multitude of extensions that can be added upon installation or at a later date by means of a repository of additional functionality packages. 2010-06 - v2.0 beta: Chamilo 2.0 beta is not considered production-safe (as its name implies) but implements a series of improvements towards a more stable and usable release. 2010-06 - v2.0 alpha: Chamilo 2.0 was originally (first plans date back to the 2006 Dokeos Users Day in Valence, France) meant to be released as Dokeos 2.0, as a completely new backend for the LMS. The complete team of developers working on this version decided, in 2009, to move to the Chamilo project, thus leaving the Dokeos project repository with incomplete sources. Although Dokeos has since promised to release version 2.0 on 10 October 2010 (with a corresponding counter counting down from more than 200 days before that), it is not the total remake it was supposed to be, and it is actually expected to be equivalent in features to 1.8.6.1, mostly adding valuable visual and usability improvements. Statistics The free-to-use Chamilo campus registered 100,000 users in October 2011 (15 months after its launch), compared to 38,000 users in December 2010 (11 months after its launch). The private Peruvian Universidad San Ignacio de Loyola reported 1,700 users connected within the same 120-second time frame in August 2011. Globally, Chamilo registered 700,000 users in October 2011, more than 5,000,000 users in June 2013 and more than 20,000,000 users in August 2018. Worldwide adoption Chamilo is backed by a number of small and medium-sized companies and universities, which are required to register as members of the association and contribute to the open-source software to be recognized as official providers. One of the prerequisites to become a member is to show an understanding of the concept of free software for the benefit of worldwide education. One of the prerequisites to become an official provider is to contribute something to the community. Chamilo is also used in public administrations, including Spanish, Belgian, Dutch and Peruvian ministries, as well as unemployment services and NGOs. As of October 2012, it was freely used by more than 2,000 organizations worldwide, 11,000 as of May 2014, 31,000 as of April 2016 and 53,000 as of August 2019. See also Learning management system Online learning community References Content management systems Free software Virtual learning environments Educational software Free application software Cross-platform free software Free educational software Free learning management systems Free learning support software Free software programmed in PHP Learning management systems Free content management systems
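The SOAP web services mentioned in the Chamilo LMS feature list above can be exercised from any SOAP-capable client. The sketch below is purely illustrative and assumes a hypothetical endpoint URL, operation name and parameters; it is not Chamilo's documented API, whose real contract should be taken from a portal's own WSDL.

```python
# Hedged sketch: posting a hand-built SOAP envelope to a hypothetical
# Chamilo-style web-service endpoint using the "requests" library.
# The URL, the <CreateUser> operation and its fields are placeholders,
# not Chamilo's real interface.
import requests

ENDPOINT = "https://lms.example.org/main/webservices/soap.php"  # placeholder URL

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <CreateUser>  <!-- placeholder operation name -->
      <secret_key>CHANGE_ME</secret_key>
      <firstname>Ada</firstname>
      <lastname>Lovelace</lastname>
      <email>ada@example.org</email>
    </CreateUser>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
    timeout=30,
)
response.raise_for_status()
print(response.text[:500])  # inspect the raw SOAP response
```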
20758293
https://en.wikipedia.org/wiki/Don%20Cassel
Don Cassel
Don Cassel (born April 4, 1942) is the author/coauthor of 60 US/Canadian college textbooks and was a Humber College professor for 30 years, responsible for developing the college's first Computer Programming curriculum. Career Cassel was a Professor of Information Technology at Humber College in Toronto, Ontario from 1968 to 1998. In 1968, after arriving at Humber from IBM, he developed the college's first Computer Programming program, which is still part of the curriculum. During this time he specialized in computer programming and application software courses. He was a founding member of the Information Systems department in 1968 where he was department head for 10 years. During his tenure at Humber Cassel developed numerous courses and was active in curriculum development for the School of Business and later for the School of Information Technology. He developed one of the first online interactive courses at Humber for Microsoft Access using WebCT. WebCT became a significant tool for developing and delivering distance learning for the college. Cassel was a computer programmer and system analyst with IBM Canada from 1961 to 1968. He received an undergraduate degree in Computer Science from York University in 1975 and was accepted into the Ontario Institute for Studies in Education at the University of Toronto for a Master of Education program, completing the first year of the two-year program. At this point writing college textbooks began to require his full attention. In 1972 his first book, Programming Language One, was published by Reston Publishing Company of Reston, Virginia, a subsidiary of Prentice Hall Inc. Thus began a long period of textbook writing for college programs across North America. Publications Canadian Internet Handbook - Educational Edition, Prentice Hall Canada, 1999, coauthored with Jim Carroll and Rick Broadhead. Surfing for Success in Business and Economics. Prentice Hall Canada, 1999 coauthored with Andrew T. Stull Canadian Internet Handbook - Educational Edition, Prentice Hall Canada, 1998, coauthored with Jim Carroll and Rick Broadhead. Internet Handbook - U.S. Edition, Prentice Hall Canada, 1997, coauthored with Jim Carroll and Rick Broadhead. Canadian Internet Handbook - Educational Edition, Prentice Hall Canada, 1997, coauthored with Jim Carroll and Rick Broadhead. Computing Essentials - Introducing Visual Basic 4 for Windows 95, Prentice Hall, Inc. 1996 Computing Essentials - Introducing Microsoft Access for Windows 95, Prentice Hall, Inc. 1996 Computing Essentials - Introducing Microsoft Excel for Windows 95, Prentice Hall, Inc. 1996 Computing Essentials - Introducing Microsoft Word for Windows 95, Prentice Hall, Inc. 1996 Computing Essentials - Introducing Windows 95, Prentice Hall, Inc. 1996 Canadian Internet Handbook - Educational Edition, Prentice Hall Canada, 1995, coauthored with Jim Carroll and Rick Broadhead. Source 1 - Computing Essentials, Microsoft Excel 5, Prentice Hall, Inc 1995. Source 1 - Computing Essentials, Microsoft Word 6, Prentice Hall, Inc 1995. Source 1 - Computing Essentials, QBASIC, Prentice Hall, Inc 1995, coauthored with Sherry Newell. Source 1 - Computing Essentials, DOS 6, Prentice Hall, Inc 1995. Source 1 - Computing Essentials, Microsoft Excel 4.0, Prentice Hall, Inc. 1993, coauthored with Sherry Newell. Source 1 - Computing Essentials, dBASE IV Release 1.1, Prentice Hall, Inc. 1993 Source 1 - Computing Essentials, dBASE III Plus, Prentice Hall, Inc. 1993 Source 1 - Computing Essentials, Quattro Pro 3.0, Prentice Hall, Inc. 
1993 Source 1 - Computing Essentials, Quattro 1.01, Prentice Hall, Inc. 1993 Source 1 - Computing Essentials, Lotus 1-2-3 Release 2.3, Prentice Hall, Inc. 1993 Source 1 - Computing Essentials, Lotus 1-2-3 Release 2.2, Prentice Hall, Inc. 1993 Source 1 - Computing Essentials, WordPerfect 5.1, Prentice Hall, Inc. 1993 Source 1 - Computing Essentials, WordPerfect 4.2, Prentice Hall, Inc. 1993 Source 1 - Computing Essentials, PC DOS/MS-DOS, Prentice Hall, Inc. 1993 Source 1 - Computing Essentials, DOS 5.0, Prentice Hall, Inc. 1993 Using DOS, WordPerfect 5.1, Lotus 1-2-3 Release 2.2, and dBASE III Plus, Prentice Hall Inc. 1992. Learning Lotus 1-2-3 Releases 2.3 and 2.4, Prentice Hall Canada, Inc. 1992. Learning Lotus 1-2-3 Release 2.2, Prentice Hall Canada, Inc. 1991. Understanding Computers, Prentice Hall, Inc. 1990 Learning DOS, WordPerfect 5.0, Lotus 1-2-3 2.2, dBASE III Plus Prentice Hall Inc. 1990 Learning DOS, WordPerfect 4.2, Lotus 1-2-3/TWIN , dBASE III Plus Prentice Hall Inc. 1990 Advanced Structured COBOL and Program Design, International Edition, Prentice-Hall Inc. 1987 Advanced Structured COBOL and Program Design, Prentice-Hall Inc. 1987 Introduction to Structured COBOL and Program Design, Prentice-Hall Inc. 1987 WATCOM BASIC Made Easy, Prentice-Hall Canada, 1986 WordStar 3.3 Simplified, Prentice-Hall Inc., 1984 Lotus 1,2,3 Simplified for the IBM Personal Computer, Prentice-Hall Inc., 1985–86 dBASE II Simplified for the IBM Personal Computer, Prentice-Hall Inc., 1985 BASIC and Problem Solving Made Easy, Reston, 1985 BASIC Programming for the Commodore PLUS/4 and Commodore 16 Wm. C. Brown, 1985 Commodore 64 Graphics, Sound and Music, Wm.C. Brown, 1985 Introduction to Computers and Information Processing - 2nd Edition, Reston 1985 - with Martin Jackson BASIC Made Easy 2nd Edition, Reston, 1985 - coauthored with Richard Swanson EasyWriter Simplified for the IBM Personal Computer, Prentice-Hall Inc., 1984 WordStar Simplified for the IBM Personal Computer, Prentice-Hall Inc., 1984 Computers Made Easy, Reston, 1984 BASIC Programming for the Commodore 64, Wm.C. Brown, 1984 BASIC Programming for the Commodore VIC-20, Wm.C. Brown, 1984 BASIC 4.0 Programming for the PET/CBM, Wm.C. Brown, 1983 FORTRAN Made Easy, Reston, 1983 - coauthored with Richard Swanson An Introduction to Microcomputers - Audio/Visual Presentation, Prentice-Hall Media, 1983 The Structured Alternative: An Introduction to Program Design, Coding, Style, Debugging and Testing, Reston, 1983 Introduction to Computers and Information Processing - BASIC, COBOL, FORTRAN, Pascal, Reston 1981 - coauthored with Martin Jackson Introduction to Computers and Information Processing - Language Free Edition, Reston 1981 - coauthored with Martin Jackson BASIC Made Easy, Reston, 1980 - coauthored with Richard Swanson Introduction to Computers and Information Processing, Reston, 1980 - coauthored with Martin Jackson PL/I: A Structured Approach, Reston, 1978 BASIC Programming in Real Time, Reston, 1975 Programming Language One, Reston, 1972 References External links Don Cassel website Don Cassel personal website Canadian textbook writers IBM employees Living people Humber College faculty 1942 births
4868970
https://en.wikipedia.org/wiki/Strata%203D
Strata 3D
Strata Design 3D CX is a commercial 3D modeling, rendering and animation program developed in St. George, Utah by Corastar, Inc. dba Strata Software. Strata is a pioneer and developer of 3D design software. Strata Design 3D CX 8 is the latest incarnation of a program that was originally named StrataVision 3D. It is best known as an all-purpose 3D modeling application with photo-real rendering ability, ease of use and tight integration with Adobe Photoshop. Strata 3D is targeted at the illustration/multimedia market rather than at the movie/games market. Strata 3D software in its various iterations has received awards and praise from many sources including MacUser UK, Digit Magazine, Layers Magazine, DigitalArts, MacWorld, and Photoshop User. History Strata was one of the first makers of desktop 3D graphics applications, releasing its first application, StrataVision 3D, in 1989. The company was formed by brothers Ken and Gary Bringhurst. By 1996, the company was among the top five private employers in southern Utah. The Bringhurst brothers and their company have stayed in their scenic red rock home near St. George, Utah. Ken Bringhurst serves as the company's Executive Chairman, while Gary Bringhurst is the Chief Technology Officer. As of September 2016, John Wright is President and Chief Executive Officer. Move into VR/AR Strata announced initial funding for its move into augmented reality (AR) and virtual reality (VR) on September 22, 2016. Greg Kofford, a co-founder of Lanstead Investors PTY Limited, managed the funding; company changes include the appointment of John Wright as president and managing director. Strata plans to give users the ability to view and sell design projects using VR and AR headsets, and to offer custom VR/AR development. Strata Design 3D CX features Strata Design 3D CX 8.1 is the latest version of the product and contains numerous upgrades, improvements, and enhancements. Rendering features. Strata Design 3D CX is known for its high-quality rendering ability – rendering being the creation of a finished, final image. Rendering features include: Embree Raycasting by Intel, added in v8.1, improves rendering speed by 300 to 800 percent. Preview renderers include LiveRay, which generates a fully raytraced rendering of the object or scene. Several variations of OpenGL and Toon rendering are also available for previewing a model or scene. Raydiosity and raytracing rendering options include many customization options such as rendering to an alpha channel, rendering to Photoshop layers, gamma control, and the ability to load or save custom render settings. Other options include blurry transparency and reflectivity, instance rendering, shadows, soft shadows, MIP mapping, specular highlights, anti-matter effects and stereoscopic rendering. Texturing features include a palette of hundreds of premade surface textures and the ability to create new ones using features like live-linking to native Adobe Photoshop and Illustrator files – make a change in Photoshop or Illustrator and the model surface is automatically updated. Texture channels include diffuse color, reflection color, specular, reflection amount, embedded amount, anisotropy, opacity, smoothness, index of refraction, glow factor, bump, normal, shadow cast map and mask. Other features include native UV mapping, scripting, hierarchical control, normal maps, and a variety of premade and customizable volumetric and solid textures including fog, haze, mist, and clouds.
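All of the raytracing and raycasting renderers listed above rest on the same basic operation: testing a ray against scene geometry and keeping the nearest hit. The following is a minimal, generic sketch of a ray–sphere intersection in Python; it is purely illustrative and is not code from Strata Design 3D or from Intel's Embree library.

```python
# Generic ray-sphere intersection test -- an illustrative sketch of the kind of
# ray/object query a raycasting or raytracing renderer performs for every pixel.
# Not Strata Design 3D or Embree code; all names are made up for illustration.
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the smallest positive ray parameter t at which the ray
    origin + t*direction hits the sphere, or None if it misses."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = center
    lx, ly, lz = ox - cx, oy - cy, oz - cz   # vector from sphere center to ray origin
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                          # ray misses the sphere entirely
    sqrt_disc = math.sqrt(disc)
    t_near = (-b - sqrt_disc) / (2.0 * a)
    t_far = (-b + sqrt_disc) / (2.0 * a)
    for t in (t_near, t_far):                # nearest hit in front of the origin
        if t > 1e-6:
            return t
    return None

# Example: a ray from the origin along +Z toward a unit sphere centered at z = 5
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # prints 4.0
```

A production renderer repeats this kind of test (usually against triangles, accelerated by a spatial index such as a BVH) for every pixel's ray, which is the step that acceleration kernels such as Embree are designed to speed up.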
Conform to Mesh UV mapping uses LSCM (least squares conformal maps) technology. LSCM unwrap options include unwrapping entire objects, a single UV island or only selected polygons. Commands available for unwrapping are pin, fit, fit each, rotate connected, move to, select perimeter and assign UV edge seam. LiveRay texture preview allows the user to see texture changes directly in the modeling view. Quick texture settings include glossiness and transparency, allowing a rapid way to adjust basic texture attributes. Premade display scenes included with the application allow users to stage an object or model in one of 29 premade indoor and outdoor templates, including white and dark studios and premade shelving displays. Premade templates include optimized lights and lightdomes. Modeling features include Bézier splines, polygons, quad polysplines (subdivision surfaces), metasurfaces, extrude, lathe, Boolean, skin and mirror. Viewing features include Quick OpenGL previews and LiveRay photo-real previews, a familiar interface, split view, camera view, multiple views, depth cueing, image/movie backdrop, and spotlight views. Environment features include a ground plane, air refraction, visible and reflective backgrounds, atmosphere, gravity, and wind. Lighting features available in Design 3D CX 8.x include point lights, spot lights, global lights, glowing surfaces, Lightdome HDRI, intensity, soft edges, host objects, ambient light, animatable lights, gels, light effects, and HDR Light Studio integration (optional). Special effects include lens flare, auras, particles, scriptable effects, fountains, fire, smoke, hair, hotspots and pixie dust. Also included are global gravity, wind force and air control. File import/export capabilities include U3D (in), Collada (in/out), Illustrator/EPS (in), Photoshop PSD (in/out), STL (in/out), XMM (out), QuickTime (in/out), PICT (in/out), QuickTime VR (out), TGA (in/out), TIFF (in/out), BMP (in/out), JPEG (in/out), 3DS (in), PDF (in), DXF (in/out), AVI (in/out), MiniCAD (in), Amapi (in/out), OBJ (in/out), VRML 1 & 2 (in/out), Flash SWF (out), TrueType fonts (in), and PostScript Type fonts (in). Animation features. In Design 3D, everything can be animated. Features include scripting, hierarchical animation, visible paths, animation previews, event-based convert to path, key frames, velocity graphs, 'life' control, align to path, inverse and forward kinematics, and proportional event scaling. Path types available are TCB, spline and natural. Animation effects allow users to shatter, explode, and jiggle objects. Drawing features include 2D/3D text, 2D drawing tools, 3D primitives and spline curves. Versions The current version of Strata 3D is Strata Design 3D CX 8.1, which adds Embree Raycasting from Intel. Design 3D CX 8.0 added a 64-bit renderer with expanded memory handling and better handling of very large renderings. Other rendering improvements included a new dialog that added control of gamma, brightness and black point, along with the ability to render to High Dynamic Range images (HDRI). Other notable additions included a Publish command for exporting objects to 3D print services including Augment, Sketchfab, and iMaterialise. Lighting improvements include integration with HDR Light Studio. New features in Design 3D CX 7.5 included UV editing tools such as Conform (unwrap), seam marking in poly meshes, and new UV edit tools.
Modeling enhancements include a Decimate command, new poly editing selection methods and tools, STL (.stl) file import and export for 3D printing, and support for bump/normal maps in Collada import/export. The Design 3D CX 7.0 release in 2012 included numerous texture enhancements such as anisotropy and normal map support, blurry transparency and subsets for applying different textures to different groups of polygons in the same object. Also included in v7 are a full UV editor, new polygon selection and editing tools, and render and speed improvements. Rendering enhancements include support for blurry reflections and blurry transparency, and adaptive surface sampling. In 2009, Design 3D CX 6 added HDRI lighting, new grid and guide functionality, multiple polygon editing tools, edit tools for lathe, Bézier, extrude, and path-extruded objects, new texture channels and controls including Fresnel interpolation, new photon rendering, and rendering quality and speed improvements. Version 6 also added tighter integration with Adobe Photoshop CS4 Extended via a set of plug-ins. Design 3D's Model, TexturePaint, Match and Render plug-ins allowed users to easily send models back and forth between Design 3D and Photoshop to create, edit and texture 3D content. The Design 3D CX 5.x releases beginning in 2006 included subdivision surfaces (Catmull–Clark algorithm–based polygon smoothing), scripting via the Lua programming language, and rendering to Photoshop layers. This release offered new subdivision tools, scripting support, a bones and IK system, and a history palette. The Strata 3D CX v4.x releases in 2004 included polygon modeling tools and subdivision surface (SDS) modeling. In 2002, Strata 3Dbase and Strata 3Dpro (version 3) added functionality such as toon rendering and photon mapping. By 1999, Strata StudioPro 2.53 offered numerous new features, including a choice of QuickDraw 3D or OpenGL for onscreen rendering, multiple viewing options to speed up redraws, and the ability to convert 3D primitives into skin, Bézier, and polygon mesh objects. Other features included texture previews, path extrude, Boolean operations, skin (loft) and extrude, and special effects such as fountains, lens flare, fog and mist. Strata StudioPro 1.75 Blitz in 1996 added support for QuickDraw 3D, multiprocessor support, VRML export and the Raytracing renderer. In 1993, Strata StudioPro 1.5 was added to the product line, with StrataVision remaining a reduced-feature "light" version. StrataVision 2.0 introduced the Raydiosity rendering algorithm (a variant of radiosity) along with more powerful modeling features, faster rendering, basic animation and extensibility. StrataVision was released as the first product of the Strata company in 1988 to facilitate professional 3D graphics on regular desktop Macs. This first release provided high-end modeling and 3D rendering tools. References External links Strata Software Support home Strata 3D User Community Strata 3D University 3D graphics software Animation software Lua (programming language)-scriptable software 3D computer graphics software for Linux Proprietary commercial software for Linux
42678226
https://en.wikipedia.org/wiki/Madanapalle%20Institute%20of%20Technology%20and%20Science
Madanapalle Institute of Technology and Science
Madanapalle Institute of Technology & Science, also known as MITS, is an Indian engineering college. It was established in 1998 in Madanapalle, India. MITS is an affiliate of JNTUA and is approved by AICTE, New Delhi. It offers engineering, management, and computer application studies. History Madanapalle Institute of Technology and Science was established in 1998 in Madanapalle, Chittoor district of Andhra Pradesh, India. It is located on a campus near Angallu, outside Madanapalle. MITS originated under the auspices of the Ratakonda Ranga Reddy Educational Academy and is now under the supervision of Dr. N. Vijaya Bhaskar Choudary. Accomplishments 59 funded research projects worth Rs. 550.06 lakh Rated "Gold" in the AICTE–CII Survey of Industry-Linked Technical Institutes, 2017 Graded 'AAA' by Careers 360 for the year 2016 World Bank (TEQIP-II) funded institute MoU with IIT Hyderabad for academic collaboration State-of-the-art learning management system through Moodle and MOOCs 100% of the faculty hold a Ph.D. and the faculty–student ratio is 1:12 Foreign language training by experts in Japanese, German, French and Spanish Siemens "Technical Skill Development Institution" sanctioned by the Govt. of AP Courses Undergraduate Engineering B.Tech. - Civil Engineering B.Tech. - Computer Science & Engineering B.Tech. - Computer Science & Technology B.Tech. - Internet of Things B.Tech. - Artificial Intelligence B.Tech. - Cyber Security B.Tech. - Data Science B.Tech. - Electronics & Communication Engineering B.Tech. - Electrical & Electronics Engineering B.Tech. - Mechanical Engineering Post-graduate Engineering M.Tech. - Advanced Manufacturing Systems M.Tech. - Computer Science & Engineering M.Tech. - Digital Electronics & Communication Systems M.Tech. - Electrical Power Systems M.Tech. - Structural Engineering Management studies Master of Business Administration (Specializations: HR, Finance, Marketing, Systems) [Eligibility: 10+2+3/4] Computer applications Master of Computer Applications [Eligibility: 10+2+3] Doctoral programs Engineering Computer Science & Engineering Electronics & Communication Engineering Mechanical Engineering Electrical & Electronics Engineering Basic sciences and humanities Chemistry English Mathematics Physics Management Management Sciences MoU Madanapalle Institute of Technology & Science has MoUs with institutions in the following countries: • Brno University of Technology - Czech Republic • Innopolis University - Russia • Maharishi Vedic University - Holland Taiwan Universities • Providence University • Ming Chuan University • Ming Chi University • Asia University • National United University • National Pingtung University of Science & Technology • National Yunlin University of Science & Technology • I-Shou University South Korea Universities • Kookmin University • Chungnam National University - Korea Germany Universities • European Education and Research Council (GEMS) / Indo-Euro Synchronization • Steinbeis Institute for Sustainable Resource Usage & Energy Management, Tübingen, Germany. Discussions are ongoing with the following universities in Japan regarding B.Tech. internships and M.S. programmes.
Japan Universities • Osaka Institute of Technology • Nagoya Institute of Technology • Iwate University • Kansai University • Kwansei Gakuin University Associations Indian Society for Technical Education (ISTE) Institute of Electrical and Electronics Engineers (IEEE) Industry-Institute Interaction Cell (IIIC) Entrepreneurship Development Cell (ED) Computer Society of India (CSI) National Service Scheme (NSS) National Cadet Corps (NCC) The Confederation of Indian Industry (CII) IUCEE – Indo Universal Collaboration for Engineering Education Affiliations Recognized by AICTE, New Delhi Permanently affiliated to JNTUA, Anantapuramu Approved by UGC under sections 2(f) & 12(B) NBA accreditation for UG (ECE, EEE, CSE, MECH, Civil) & PG (MBA & MCA) programmes Accredited by NAAC An ISO 9001:2015 certified institution DSIR/DST recognition for scientific & industrial research 'A' Grade status conferred by the Govt. of Andhra Pradesh Recognized research centre for Engineering, Management & Basic Sciences and Humanities under JNTUA, Anantapuramu NSS & NCC programs References Engineering colleges in Andhra Pradesh Universities and colleges in Chittoor district Educational institutions established in 1998 1998 establishments in Andhra Pradesh
5378489
https://en.wikipedia.org/wiki/List%20of%20Cowboy%20Bebop%20characters
List of Cowboy Bebop characters
The following is a list of major and minor characters from the anime series Cowboy Bebop, directed by Shinichiro Watanabe and written by Keiko Nobumoto, its manga series adaptation, written by Kuga Cain and Yutaka Nanten, and its live-action adaptation, developed by André Nemec and written by Christopher Yost. Bebop crew Spike Spiegel Portrayed by: John Cho is a tall, lean, and slightly muscular 27-year-old bounty hunter born on Mars. Spike has a history of violent activity, seen through flashbacks and dialogue with the Red Dragon Syndicate. He is often depicted with a cavalier attitude, but occasionally shows signs of compassion when dealing with strangers. The inspiration for Spike's martial arts is found in Bruce Lee, who uses the style of Jeet Kune Do as depicted in Session 8, "Waltz for Venus". He has fluffy, blackish green hair (inspired by Yūsaku Matsuda's role as Shunsaku Kudō in Tantei Monogatari) and reddish brown eyes, one of which is artificial and lighter than the other. He is usually dressed in a blue lounge suit, black skinny tie, with a yellow shirt and Lupin III-inspired boots. A flashback in Session 6 revealed that his apparently fully functioning right eye was surgically replaced by a cybernetic one (although Spike himself may not have conscious recollection of the procedure since he claims to have lost his natural eye in an "accident"). A recurring device throughout the entire show is a closeup on Spike's fully natural left eye before dissolving to a flashback of his life as part of the syndicate. As said by Spike himself in the last episode, his right eye "only sees the present" and his left eye "only sees the past". The purpose of this cybernetic eye is never explicitly stated, though it apparently gives him exceptional hand–eye coordination – particularly with firearms (Spike's gun of choice is a Jericho 941, as seen throughout the series). He is also a talented pilot in his personal fighter, the Swordfish II, a modified racer. In the final episode, Spike kills Vicious, but his fate after the battle has never been officially confirmed. Spike does go from seeing his beloved and recently departed Julia with his left eye, the eye that sees his past to seeing her with his right eye, the eye that sees his present. In a May 2013 interview, director Shinichiro Watanabe stated "I want the audience to interpret it however they want to. I want them to interpret it themselves. Just because I put something there does not mean they have to believe it. If I say something in an interview that tends to make it official so I try to avoid a definite answer. In the past, people watching my shows have come up with better ideas than my original intention for the story. So I think it's good to let people use their imaginations." Jet Black Portrayed by: Mustafa Shakir Known on his home satellite as the "Black Dog" for his tenacity, is a 36-year-old former cop from Ganymede (a Jovian satellite) and acts as Spike's foil during the series. Physically, Jet is very tall with a muscular build. He wears a beard with no mustache, and is completely bald save for the back of his head. Spike acts lazy and uninterested, whereas Jet is hard working and a jack-of-all-trades. Jet was once an investigator in the Intra Solar System Police (ISSP) for many years until he lost his arm in an investigation that went awry when his corrupt partner betrayed him. His arm was replaced with a cybernetic limb—an operation later revealed to be by choice as biological replacements were possible. 
He wanted the fake arm as a reminder of the consequences of his actions. His loss of one of his limbs coupled with the general corruption of the police force prompted Jet to quit the ISSP in disgust and become a freelance bounty hunter. Jet also considers himself something of a renaissance man: he cultivates bonsai trees, cooks, enjoys jazz/blues music (he named his ship the Bebop, referring to a type of jazz), especially Charlie Parker, and even has interest in Goethe. As a character, Jet is the quintessential "dad" even though he often wishes people would view him as a more brotherly figure (so as not to seem old). Of the crew he shows the most obvious affection when dealing with Edward, most obviously shown when he tells her a story in Session 18; he is also shown attempting to (perhaps falsely) reassure himself after she and Faye leave the crew of the Bebop. Jet is skilled with handguns, typically carrying a pre-2004 Walther P99, and also uses the netgun. He is proficient in hand-to-hand combat as well. Compared to Spike, Jet tends to use more raw muscle than technique. He is also a skilled mechanic and pilot. Aside from the converted interplanetary fishing trawler vessel Bebop, Jet flies a smaller ship called Hammerhead. The Hammerhead appears to be a modified salvage-craft, to which Jet has added larger engines and fuel tanks. It features a mechanical arm equipped with a harpoon as its main weapon, which is somewhat analogous to Jet's own mechanical arm. Both the Hammerhead and the Bebop are able to land on water, and have a fishing theme. It is later revealed that the Bebop was originally a fishing ship that Jet "customized" with larger engines. He is very protective of the Bebop, often being reluctant to bring it into situations where it could be damaged, and taking great offense when someone insults it. Jet once lived with a woman named Alisa, who left him, claiming that he was overprotective towards her. They meet when the Bebop stops on Ganymede, Jet's homeworld and Jet goes to find her. He talks to her and then leaves, but later he finds out that Alisa's new boyfriend, Rhint, is wanted for murder. Jet detains Rhint and later hands him over to police. Faye Valentine Portrayed by: Daniella Pineda is one of the members of the bounty hunting crew in the anime series Cowboy Bebop. She is often seen with a cigarette and in a revealing outfit complete with bright yellow hot pants and a matching, revealing top (and, on occasion, a bikini). She sports violet hair and green eyes. Although appearing to be no more than 23 years old, Faye is actually around 77 years old, having been put into cryogenic freeze after a space shuttle accident, wherein she spent fifty-four years in suspended animation. During the course of the series (set in 2071), Faye crosses paths with Spike and Jet twice and makes herself at home aboard their ship the second time around, much to the consternation and disapproval of the two men, both of whom have their own reservations about women in general. Seemingly little more than a thorn in her partners' sides, Faye is actually a well-rounded member of the team. She can handle herself exceptionally well in spite of her slight appearance, displaying at least once in the series (in "Cowboy Funk") that she has a powerful punch. Adept at flying, Faye has stood her ground just as well as Spike has in an aerial dogfight in her ship Red Tail, and at times even against Spike in an aerial dogfight (though Spike eventually proved the better pilot). 
She also excels with guns, and is first seen in the series completely destroying a shop with a Heckler & Koch MP5K, though she is immediately apprehended afterward. In the movie, she is seen with the same gun, in addition to her normal companion: a Glock 30. Faye has an almost unstoppable attitude, and even her sometimes innocent smile can be seen as dangerous. She has many bad habits, such as drinking, habitual gambling, smoking cigarettes and occasionally cigars, becoming unnecessarily violent, and turning on partners when the profits seem too skimpy. Sarcastic and presumptuous, she rarely appears weak or in need of support. She brags and takes care of herself, never trusting others, cheating and lying her way from one day to the next. She also shows herself capable of unpredictable behavior, as when she kissed Ed on the mouth to snap Ed from one of her rambling moments. She is a woman who is skilled at getting what she wants; her indomitable exterior hides a more delicate interior. Upon awakening from her 54-year cryogenic sleep, not only was she saddled with a massive amount of debt that she had no means to pay, but she was also diagnosed with total amnesia, a stranger in a mysterious world that she was not a part of and did not understand, surrounded by people who claimed to be helping her but were only there to take advantage of her naiveté. The surname "Valentine" was merely a name given to her by the doctor who woke her; the circumstances of her accident, her previous life, and even her real name all remain a mystery, and are only gradually revealed as the series progresses. It has been hinted that she came from Singapore on Earth, and was the daughter of a very wealthy family, as the city's famous Merlion statue features prominently in scenes of her childhood, and that memories and a film from her childhood showed her living in a large mansion. Faye is supposedly her real name, as a high school classmate (by now an old disabled woman) recognises her and calls her by that name. In her debut episode, she claims to be descended from Romani people, but it later becomes apparent that that was likely a lie. Utterly betrayed by someone she thought she could trust after waking, Faye found herself burdened with even more money to pay, and the situation resulted in the hardening of her personality to an extreme degree. She even says in Session 11: "we deceive or we are deceived", and that "nothing good ever happened to me when I trusted others". By the end of the series she learns to value her comrades, coming back to the Bebop when she realizes that it is the only home that she has left, naming it as the "only place I could return to". She grows to understand the disadvantages of being a loner, and that even though her "family" is somewhat dysfunctional it is still a place where she will always belong. Throughout the series, though she grows to care for Jet and even Edward in her own way, it is her relationship with Spike that remains a cause for consideration by most. In one episode Spike teases her and asks if she will come to help him if he gets into trouble, and though she scoffs at his remark, she eventually does. Faye even points her gun at him in a threatening gesture in the last episode, as Spike is walking away to what she and Jet both realize is his possible death. After he leaves, Faye cries. When asked, Watanabe stated in an interview: "Sometimes I'm asked the question, 'What does Spike think of Faye?' I think that actually he likes her quite a bit. 
But he's not a very straightforward person so he makes sure he doesn't show it." Ed Portrayed by: Eden Perkins is an elite hacker prodigy from Earth. "Radical Edward" is a very strange, somewhat androgynous, and extremely intelligent teenage girl of around 13 years of age. Her mannerisms include walking around barefoot, performing strange postures, and her gangling walk. "Radical Ed" could be considered a "free spirit"; she is fond of silly exclamations and childish rhymes, is easily distracted, has the habit of "drifting off" from reality sometimes in mid-sentence. Ed's generally carefree attitude and energy act as a counterpoint to the more solemn and dark aspects of the show. Ed remains a part of the Bebop crew until the 24th episode, when she, along with Ein, leaves the crew. She almost always refers to herself in the third person. Not much is known about her origins, only that she spent some of her earlier childhood in an orphanage after being left there by her father, who appears in episode 24. Her father, Appeldelhi siniz Hesap Lutfen, recognizes her immediately by her birth name of "Françoise Lütfen" and while initially unsure as to her gender, leaves shortly after to continue his unending quest to document every asteroid that falls to Earth from the wreckage of the Moon. In the manga, she was a friend of a timid young boy in the orphanage known simply as "Tomato" (the name given to her PC in the anime), who, like Ed, knew a great deal about computers and the net. Ed's primary use to the Bebop crew is as a hacker; she is widely known to be a whiz kid behind the computer. Ed's computer of choice is a carry-along desktop, and when traveling by foot she will balance it on her head. Her goggles can interact with it to give her a virtual reality environment in which she can browse an entire network at once. Originally, Ed's character was inspired by the "inner behavior" of the shows' music composer, Yoko Kanno ("a little weird, catlike, but a genius at creating music"), and was first developed as a dark-skinned boy. It was changed to even the gender ratio on the Bebop, which was, with Ed as a boy, three males and one female. The original character design appears in session 5 as a young boy that steals an adult magazine from Annie's bookstore by smuggling it under his shirt which eventually he takes out and reads. Ein is a Pembroke Welsh Corgi brought aboard the Bebop by Spike after a failed attempt to capture a bounty. He often shows heightened awareness of events going on around him. Over the course of the series, Ein answers the telephone, steers a car, uses the SSW, plays shogi, operates the "Brain Dream" gaming device, and generally performs tasks that an average canine would not be able to accomplish. While the televised series only briefly hints that Ein's brain was somehow enhanced, the manga shows Ed accessing data stored in Ein's brain via a virtual reality-type interface with which she has a conversation with a human proprietor. Ein is able to "speak" to other species, as demonstrated in Session 17: "Mushroom Samba" (he speaks to a cow with a subtitled bark of "Thanks", to which the cow has a subtitled moo back of "Oh, it's no problem"). Ein initially takes a shine to Jet, but when Ed joins the crew he comes around to her as well. He follows Ed when she leaves the crew. Red Dragon Crime Syndicate An East Asian triad organization led by a group called The Van. The Van are usually seen wearing imperial Manchurian-Chinese clothing of the Qing dynasty. 
The syndicate specializes in assassinations, but are also involved in the trafficking of narcotics, Red Eye in particular. The rules of the syndicate states that members who attempt to leave, or fail to complete tasks, are punished by death. Mao Yenrai served as a captain or Capo to the Elders and was a mentor to both Vicious and Spike. After leaving the Syndicate, Spike considers himself in Mao's "debt", and is motivated to confront Vicious for the first time when Mao is killed by two men in Vicious' employ. It takes place immediately after Mao signs a peace treaty with a rival crime syndicate, the White Tiger, expressing a desire for relief from the hypervigilance of gang warfare. The Van later refers to Mao's death as "bad luck" and decline to pursue the issue when confronting Vicious. The Van is also shown to be indulgent toward Vicious initially, which eventually creates their demise. Vicious kills the Van and becomes the head of the Syndicate. Vicious Portrayed by: Alex Hassell is Spike's archenemy. He is a ruthless, cunning, and power-hungry member of the Red Dragon Crime Syndicate in Tharsis, and is often referred to or depicted as a venomous snake (as opposed to Spike who is referred to as a swimming bird and the Syndicate Elders who see themselves as a dragon). His weapon of choice is a katana which he wields skillfully, even against gun-wielders. He was an infantry rifleman during the Titan War and is shown firing a semi-automatic pistol in a Session 5 flashback, as well as in the Session 26 flashback of him and Spike fighting back-to-back. Vicious is usually seen accompanied by a black cormorant-like bird. He eventually hides explosives in its stomach and detonates them as a distraction during an escape. Vicious was Spike's partner in the Red Dragon crime syndicate until they fell into conflict over Julia. After Spike's supposed death, Vicious left the Red Dragons briefly to fight in the Titan War of 2068. Although his precise motivations for enlisting are debated, his testimony helped frame Gren, his squadmate in the war, for spying, which raises the possibility that he himself might have been involved in military espionage on behalf of the Syndicate and chose to pin it on his admirer. However, in the Titan flashbacks he is also seen to be remembering Julia. Vicious believes that he is the only one who can kill, or "awaken" Spike, as Spike is the only one who can do the same for Vicious. Vicious's real age is revealed in the official guidebook The After: at 27, he is the same age as Spike. The age 27 is significant in the series because of the connotations it has to some legendary musicians passing away at that age, who are called the 27 Club. He appears much older due to his gray hair and the heavy, ever-present bags under his eyes. Julia Portrayed by: Elena Satine is a beautiful and mysterious woman from Spike's past. Initially Vicious' girlfriend and a Syndicate member herself, she and Spike started an affair that led to Spike offering to abandon the Syndicate and elope with her, despite the fact that the Syndicate punishes desertion with death. Arranging to meet at a graveyard, Spike goes to confront the Syndicate with his resignation, resulting in a violent gun battle where he is presumed to have died. Vicious discovers the affair, however, and confronts Julia, telling her that she would have to kill Spike at the graveyard, or else they would both be killed. 
To protect not only herself but also the man she loved, Julia goes into hiding, never meeting Spike as both of them had planned; Spike is never aware of Vicious' threats until the very end of the series. Despite being among the main driving points of the series, Julia only appears in flashbacks until the final two episodes. After meeting Faye Valentine by coincidence, Julia is reunited with Spike. However, their reunion coincides with Vicious' first attempt to stage a coup within the Red Dragon Syndicate. When he fails and is imprisoned, the Syndicate's Old Guard launches a campaign to find and kill anyone who was or had ever been loyal to Vicious' group. This includes Spike, Julia and their friend Annie, who distributes munitions under cover of a convenience store. The store is ambushed by the Syndicate while Spike and Julia are there, and Julia is shot and killed as she and Spike try to escape across the rooftops. Her last words to Spike are "It's all a dream...". Lin Portrayed by: Hoa Xuande. Lin is a young and loyal member of the Red Dragon Crime Syndicate who is asked by Wang Long to accompany Vicious on a drug deal on the moon Callisto. When Spike Spiegel confronts Vicious in a back alleyway at night, Lin steps in and shoots Spike with a tranquilizer bullet. Lin used to work under Spike, but since Spike left the Red Dragons, he works under Vicious. Lin accompanies Vicious to the Red Eye deal atop a roof, where they encounter Gren. When fighting between the two starts, Lin throws himself in front of a bullet meant for Vicious. Lin dies, but is mentioned in "The Real Folk Blues, Part I" when his brother, Shin, shows up. Shin Portrayed by: Ann Truong. Shin is the younger brother of Lin. He appears in "The Real Folk Blues, Part I" to rescue Spike and Jet from Syndicate assassins, which leads to him revealing Vicious's coup against the Red Dragon leaders. He appears in "The Real Folk Blues, Part II" during Spike's attack on the Red Dragon headquarters, aiding him in the running gunfight against the Syndicate minions. Shin is killed shortly before Spike reaches Vicious. With his last words, he asks Spike to kill Vicious and tells him that he had been hoping for him to return. Annie is the owner of a convenience store on Mars, and an old friend of Spike, Julia and Mao Yenrai. Her name is short for "Anastasia". First introduced in "Ballad of Fallen Angels", Annie informs Spike of Mao's assassination by Vicious. She carries a variety of small arms and supplies Spike with a Beretta pistol and a large carton of ammunition. She also chides Spike for seeking to avenge his mentor by picking a fight with Vicious. Annie is fatally wounded prior to Spike and Julia's arrival in "The Real Folk Blues, Part II". Recurring characters Gren Eckener Portrayed by: Mason Alexander Park. Gren Eckener, also simply referred to as Gren, was a soldier in the war on Titan, and appears in the two-part episode "Jupiter Jazz". On Titan he fought beside Vicious, whom he admired and found encouragement in. After the war, Gren came back hoping to be a jazz musician, but he was arrested as a spy. In prison, Gren heard that Vicious had testified against him; this and the isolation drove him mad. The prison conducted drug experiments on him. In some translations, he suffered from insomnia while in prison and started using drugs to deal with it. In either case, the drugs severely imbalanced his hormones, causing him to develop a feminine figure, including breasts.
After escaping from jail, Gren worked as a saxophone player at Rester House, a bar in a sector called "The Blue Crow", which is located on one of Jupiter's moons, Callisto. He met Julia there and found out from her how Vicious had betrayed him. Two years later, Gren rescues Faye from a street fight and takes her to his apartment. While Faye is there, Vicious calls, raising suspicions about Gren. Intruding on him while he is showering, Faye discovers Gren's secret. Gren explains his background, and tells her that he is going to see if Vicious really framed him. Disguising himself as a woman, Gren meets Vicious and Lin. While exchanging Red Eye for Titan Opal, Gren suspects a trap; he shoots the case open, setting off the explosive, and then reveals who he is. In the ensuing battle, Lin dies to protect Vicious. Spike arrives and attacks Vicious. Gren had planted an explosive in the bag of Red Eye, which damages Vicious' ship. In the ensuing dogfight with Vicious and Spike, Gren's ship is severely damaged, forcing him to land. Spike lands next to Gren's ship to find Gren lying in the snow, badly wounded. Gren guesses who Spike is by his eyes: "Julia was always talking about you; your eyes are different colors. I remember her saying that". Gren requests that Spike help him back into his ship and tow it out into space, allowing him to die on a final voyage to Titan. Punch and Judy Portrayed by: Ira Munn (Punch) and Lucy Carrey (Judy). Punch and Judy are the hosts of the TV show Big Shot. They are named after the traditional English puppet show. The show provides information on various bounty heads, but is often unreliable. The Bebop crew often has the show playing in the background, but seldom pays close attention (they usually get their information from close contacts). Punch and Judy play the "cowboy" persona in a characteristic, over-the-top fashion. Punch adopts a mid-western drawl mixed with a Mexican accent (both faked), and uses random old-West sayings. Judy plays the stereotypical dumb blonde, and always appears in an open bolero jacket with nothing underneath, frequently wiggling her hips with excitement. Big Shot is canceled towards the end of the series. Punch, lacking accent and costume, makes a cameo revealing his and Judy's fates: Punch, whose real name is Alfredo, moves to Mars to take care of his mother, and Judy is engaged to her agent, Cameron Wilson. Punch and Judy's appearances had no specific model; the characters had the style of typical television hosts. Antônio, Carlos and Jobim Throughout the series and the movie, three rude, foul-mouthed, crotchety old men make frequent appearances, as speaking characters or in the background during scenes. They make various claims about what they did before becoming old-timers, including bounty hunting, building the stargates, farming, piloting planes in a war, sinking the Bismarck, digging ditches, and crop-dusting. They seem to be on speaking terms with many supporting characters, and though they run into the main characters often, not much attention is paid to them (or even mention made that the main characters have seen them before). They do the preview of the episode "Mushroom Samba". According to the movie credits, they are called Antônio, Carlos, and Jobim. This is a reference to the famed Brazilian musician Antônio Carlos Jobim. In the film, they help Jet and Faye distribute the antidote for a deadly, hallucinogenic nanovirus by flying 20th-century antique planes over Alba City.
Cowboy Bebop Anime Guide Volume 4 states that since the names of the three old men appear only once, it is not certain whether the names Antônio, Carlos, and Jobim are their real names. In episode 22, "Cowboy Funk", Antônio is briefly seen walking past a water fountain without Carlos and Jobim. All three make a cameo appearance in episode 11 of Blood Blockade Battlefront, another series by the same animation studio as Cowboy Bebop. Laughing Bull is a kind old shaman, apparently of Native American descent, who lives on Mars. Spike goes to Laughing Bull for advice in Session 1 while looking for bounty head Asimov. He appears briefly at the beginning of "Jupiter Jazz, Part I" and at the end of "Jupiter Jazz, Part II". In "The Real Folk Blues, Part II", Jet goes to him for information on Spike's whereabouts. Laughing Bull is seen with a small child in "Jupiter Jazz" and with a young man in the movie; their identities have never been revealed. As a shaman, he dresses in classic Native American wear and lives in a teepee-like tent surrounded by relics of old, discarded technology. Laughing Bull refers to Spike as "Swimming Bird", and calls Jet "Running Rock". Bob is a mustache-wearing ISSP policeman based on Ganymede to whom Jet frequently goes for inside information when looking for bounty heads. Throughout the series, and especially in the film, Bob provides (sometimes reluctantly) crucial information. Other characters Victoria "V.T." Terpsichore Victoria "V.T." Terpsichore is a tough-talking space trucker whose deceased husband, Ural Terpsichore, was a legendary bounty hunter. Always with her cat, Zeros, she appears in the episode "Heavy Metal Queen". Spike meets her in a bar while on the hunt for an explosive-smuggling criminal named Decker. After a bar brawl with several stooges, Spike and V.T. seem to become fast friends until she learns Spike is a bounty hunter. Although she regards Spike as "lowlife bounty hunter scum", she puts their differences aside and reluctantly works with him when their paths cross again as V.T. begins searching for Decker, who has committed a ship hit-and-run against one of her fellow truck drivers. Her full name is largely a secret, which has prompted many to bet money and guess what her initials stand for. She is also known as the "Heavy Metal Queen" for her love of heavy metal music, which she considers "very soothing". Able to adapt to various situations, she lives by the philosophy "When in Rome, do as the Romans do". Considering her disdain for bounty hunters, it is believed that her husband was killed while pursuing a bounty head. Rocco Bonnaro is a member of Piccaro Calvino's gang. He is involved in organized crime in order to support his blind younger sister, Stella. Rocco sees Spike effortlessly take out several hijackers on a spaceliner and begs Spike to teach him how to fight. He befriends Spike, although he does not tell him about the bounty on his own head. Rocco gives Spike a package to hold onto, which contains a plant called "Grey Ash" that he stole from Calvino. This plant, worth millions of woolongs, is capable of curing "Venus Sickness", the disease which has blinded Stella. Rocco has a rendezvous with Spike and they fight Calvino's gang. Rocco pulls off one of Spike's Jeet Kune Do maneuvers and topples one of the gangsters, but is gunned down. Later, Spike pays his respects and visits Stella in the hospital where she is receiving treatment to tell her that Rocco has died. Before he leaves, Stella asks Spike about the type of person her brother really was.
Spike responds, "You know better than anyone, without looking. He was a terrific guy – exactly the person you thought he was." Chessmaster Hex Hex is a talented programmer widely considered to be a genius due to his long-standing hold of the Champion Seat of the CosmoNet Chess tournament series. At the age of 30 he joined the Hyperspace Gate Project and, ultimately, played a key role in the development of the central control system used in all gates. However, Hex soon began to have doubts about the functionality of the control system, believing it to have defects. Upon discovering that these defects were intentionally added by the Gate Corporation to ensure further revenue, Hex developed a plan to be executed 50 years in the future that would allow criminals to hijack the Astral Gate toll booths. In the episode "Bohemian Rhapsody", Spike, Jet and Faye track down Hex following the failed toll booth hijackings. Hex, now old and senile, is living peacefully inside of a bohemian junk heap floating in outer space. Given that he had completely forgotten about his prearranged sting, the crew strikes a deal with the Gate Corporation to ensure his safety. Andy von de Oniyate Andy von de Oniyate is a rich, egotistical bounty hunter who completely embraces the cowboy aspect of his job; he dresses like a cowboy, rides a horse named Onyx, uses six-shooters as his primary weapons and a cowboy whip to capture his bounties. The Bebop crew insists that Spike and Andy act exactly the same as each other, to Spike's increasing consternation. Despite his bumbling behavior, he is quite resourceful and intelligent, as well as being on par with Spike in fighting ability. Andy eventually gives up the cowboy persona, choosing instead to take up a samurai persona and call himself Musashi. Vincent Volaju Vincent Volaju is the main antagonist of Cowboy Bebop: The Movie, is the only survivor of a series of experiments conducted during the Titan War to build immunity to the lethal nanomachines that were secretly developed by the military. His plan is to release the nanomachines throughout the world, leaving only a handful of survivors. He holds the rare distinction of being one of a select few characters in Cowboy Bebop who has been able to match Spike in close combat. Watanabe said that he believes that many people would say that they empathize with Vincent and that "I even understand him". The interviewer, describing Vincent as the "most evil character in the Bebop series", asked Watanabe if Vincent was his opportunity to "show something you couldn't get away with on TV". Watanabe responded by saying that such a thing was not the case, and that Vincent is "nothing more than my dark side". Watanabe added that he does not see this as a "particularly unique feature" of Cowboy Bebop: The Movie, and that all people have moments when they "lose our temper and want to destroy everything". Elektra Ovilo is a veteran of the Titan War who first appears in Cowboy Bebop: The Movie. Her love for Vincent caused them to have a short-term relationship, during which Vincent transferred the vaccine to Elektra. She is unaware of this until Vincent sets free the Nanomachines on the Monorail and she survives. She meets Spike by chance when he infiltrates a bio-weapon lab fronting as a pharmaceutical company where she works. After a few more chance meetings, and witnessing his being shot and thrown from a monorail by Vincent, she teams up with the crew of the Bebop to put an end to Vincent's intent to destroy the population of Mars. 
The samples of her blood are used to make the vaccine that is spread over Alba City. In the end, it is she who shoots and kills Vincent. She cries for him when, as he is dying, he admits that he remembers her and their love for one another. Rashid appears during Cowboy Bebop: The Movie. An ethnic Arab with a considerable knowledge of "beans", he is really Doctor Mendelo al-Hedia, the man who developed the nano-machinery that was to be used as a virus for the military and who vaccinated Vincent in an attempt to keep it under control. He then apparently escaped from the medical facility and took refuge in Mars' Moroccan street, assuming a new identity. He provides Spike with a sample of the nano-machine virus in an attempt to atone for creating it. In a later scene, after Rashid reveals to Spike the nature of the nanomachine virus and the vaccine given to Vincent, armed men show up and Rashid runs off, followed by the sounds of gunfire. His fate is unclear, though a scene played during the credits of the movie seems to show him alive and well in Moroccan street. Lee Sampson is a teenage hacker and Vincent's accomplice who is very interested in video games from the 20th century (as shown by him playing an alternate version of Pac-Man in a car while talking to Vincent). He is later betrayed by Vincent and is killed with the nanoweapons Vincent was using in his plot to eliminate mankind. In an interview with Watanabe, the interviewer referred to Lee Sampson as a character in the film who is "unable to distinguish" death in real life from death in a video game, responding to the death of a video game avatar and the death of a security guard in an equally detached manner; when the interviewer asked Watanabe whether he wanted to "question society's desensitization to violence" with a character who "truly feels the pain of death", Watanabe responded by saying that he did not intend to "make it a 'statement', as such". Watanabe added that he does not create films to convey a "particular message" and that films "naturally reflect the way we feel at the time". Mad Pierrot Tongpu Portrayed by: Josh Randall. Mad Pierrot Tongpu (real name unknown) was part of an experiment by a secret organization, referred to only as Section 13, to create the perfect assassin. While Tongpu was made into a rotund and virtually indestructible living weapon, the procedures caused him to begin regressing mentally, ruining his capacity as a weapon. While being transported to a secure facility for observation, Tongpu escaped with the intention of exacting revenge, but eventually came to enjoy the act of killing. Spike happens to witness Tongpu killing someone, making him a target of Tongpu as well. Spike escapes when a cat distracts Tongpu and gives him time to blow up a gas canister. Spike is later sent a personal invitation to Spaceland, a theme park, by Tongpu. In the ensuing fight, Spike throws a knife into Tongpu's leg. Tongpu is then crushed underfoot by a giant robot in an animatronic parade. References External links Official website Cowboy Bebop characters Lists of anime and manga characters
63438
https://en.wikipedia.org/wiki/Jack%20Tramiel
Jack Tramiel
Jack Tramiel ( ; born Idek Trzmiel; December 13, 1928 – April 8, 2012) was an American businessman and Holocaust survivor, best known for founding Commodore International. The Commodore PET, Commodore VIC-20 and Commodore 64 are some home computers produced while he was running the company. Tramiel later formed Atari Corporation after he purchased the remnants of the original Atari, Inc. from its parent company. Early years Tramiel was born as Idek Trzmiel (some sources also list Juda Trzmiel, Jacek Trzmiel, or Idek Tramielski) into a Jewish family, the son of Abram Josef Trzmiel and Rifka Bentkowska. After the German invasion of Poland in 1939 his family was transported by German occupiers to the Jewish ghetto in Łódź, where he worked in a garment factory. When the ghettos were liquidated, his family was sent to the Auschwitz concentration camp. He was examined by Josef Mengele and selected for a work party, after which he and his father were sent to the labor camp Ahlem near Hanover, while his mother remained at Auschwitz. Like many other inmates, his father was reported to have died of typhus in the work camp; however, Tramiel believed he was killed by an injection of gasoline. Tramiel was rescued from the labor camp in April 1945 by the 84th Infantry Division of the U.S. Army. On November 10, 1947, Tramiel immigrated to the United States. He soon joined the U.S. Army, where he learned how to repair office equipment, including typewriters. Commodore Typewriters and calculators In 1953, while working as a taxi driver, Tramiel bought a shop in the Bronx to repair office machinery, securing a $25,000 loan for the business from a U.S. Army entitlement. He named it Commodore Portable Typewriter. Tramiel wanted a military-style name for his company, but names such as Admiral and General were already taken, so he settled on the Commodore name. In 1956, Tramiel signed a deal with a Czechoslovak typewriter manufacturer Zbrojovka Brno NP to assemble and sell their typewriters in North America. However, as Czechoslovakia was part of the Warsaw Pact, they could not be imported directly into the U.S., so Tramiel used parts from Zbrojovka's Consul typewriters and set up Commodore Business Machines in Toronto, Canada. After Zbrojovka began developing their own hardware Commodore signed an agreement in 1962 with Rheinmetall-Borsig AG and began to sell Commodore portable typewriters made from the parts of older Rheinmetall-Borsig typewriters. In 1962, Commodore went public, but the arrival of Japanese typewriters in the U.S. market made the selling of Czechoslovakian typewriters unprofitable. Struggling for cash, the company sold 17% of its stock to Canadian businessman Irving Gould, taking in $400,000 and using the money to re-launch the company in the adding machine business, which was profitable for a time before the Japanese entered that field as well. Stung twice by the same source, Gould suggested that Tramiel travel to Japan to learn why they were able to outcompete North Americans in their own local markets. It was during this trip that Tramiel saw the first digital calculators, and decided that the mechanical adding machine was a dead end. When Commodore released its first calculators, combining an LED display from Bowmar and an integrated circuit from Texas Instruments (TI), it found a ready market. However, after slowly realizing the size of the market, TI decided to cut Commodore out of the middle, and released their own calculators at a price point below Commodore's cost of just the chips. 
Gould once again rescued the company, injecting another $3 million, which allowed Commodore to purchase MOS Technology, Inc. an IC design and semiconductor manufacturer, a company which had also supplied Commodore with calculator ICs. When their lead designer, Chuck Peddle, told Tramiel that calculators were a dead end and computers were the future, Tramiel told him to build one to prove the point. Home computers Peddle responded with the Commodore PET, based on his company's MOS Technology 6502 processor. It was first shown, privately, at the Chicago Consumer Electronics Show in 1977, and soon the company was receiving 50 calls a day from dealers wanting to sell the computer. The PET became a success—especially in the education field, where its all-in-one design was a major advantage. Much of their success with the PET came from the business decision to sell directly to large customers, instead of selling to them through a dealer network. The first PET computers were sold primarily in Europe, where Commodore had also introduced the first wave of digital handheld calculators. As prices dropped and the market matured, the monochrome (green text on black screen) PET was at a disadvantage in the market when compared to machines like the Apple II and Atari 800, which offered color graphics and could be hooked to a television as an inexpensive display. Commodore responded with the VIC-20, and then the Commodore 64, which became the best-selling home computer of all time. The Commodore VIC-20 was the first computer to sell one million units. The Commodore 64 sold several million units. It was during this time that Tramiel coined the phrase, "We need to build computers for the masses, not the classes." An industry executive attributed to Tramiel the discontinuation of the TI-99/4A home computer in 1983, after the company had lost hundreds of millions of dollars, stating that "TI got suckered by Jack". Departure Gould had controlled the company since 1966. He and Tramiel often argued, but Gould usually let Tramiel run Commodore by himself. Tramiel was considered by many to be a micromanager who did not believe in budgets; he wanted to approve every expense greater than $1,000, which meant that operations stopped when Tramiel went on vacation. Adam Osborne wrote in 1981: Tramiel angrily left a January 13, 1984 meeting of Commodore's board of directors led by chairman Gould, and never returned to the company. What happened at the meeting remains unclear. Neil Harris, editor of Commodore Magazine at the time, recalled: Tramiel later said that he had resigned from Commodore because he disagreed with Gould "on the basic principles – how to run the company". Their disagreement was so bitter that, after Tramiel's departure, Commodore Magazine was forbidden to quote Tramiel or mention his name. Ahoy! wrote after his departure that although Tramiel's "obsession with controlling the cost of every phase of the manufacturing process" had led to record profits during the home computer price war, his "inflexible one-man rule" had resulted in poor dealer relations and "a steady turnover of top executives at Commodore". The magazine concluded "it has become increasingly clear that the company is just too big for one man, however talented, to run". 
During a question and answer session at CommVEx v11 (July 18, 2015), Jack's son, Leonard Tramiel, stated that, now that both Irving Gould and his father Jack were deceased, he could finally reveal to the crowd what really transpired between Jack and Irving Gould during the 1984 Consumer Electronics Show that resulted in Tramiel leaving Commodore: On January 13, 1984, during a meeting with Irving, Jack told Irving that treating the assets of the company as his own and using them for personal use was wrong. He said to Irving, "you can't do that while I'm still president", to which Irving responded by saying "Goodbye". Three days after the show, Jack announced to the public that he was resigning from the company. Whilst acknowledging this description of events, David Pleasance (the eventual managing director of Commodore UK) also stated that Irving Gould told him the falling out was due to Jack's insistence on his three sons joining the board. In an interview with Fortune magazine on April 13, 1998, Tramiel said "Business is war, I don't believe in compromising, I believe in winning." Atari After a short break from the computer industry, he formed a new company named Tramel Technology, Ltd., in order to design and sell a next-generation home computer. The company was named "Tramel" to help ensure that it would be pronounced correctly (i.e., "tra – mel" instead of "tra – meal"). In July 1984, Tramel Technology bought the Consumer Division of Atari Inc. from Warner Communications. The division had fallen on hard times due to the video game crash of 1983. TTL was then renamed Atari Corporation, and went on to produce the 16-bit Atari ST computer line based on Motorola's MC68000 CPU, directly competing with Apple's Macintosh, which used the same CPU. Under Tramiel's direction, the Atari ST was a considerable success in Europe, and globally in the professional music market. Despite successfully shipping the ST, Tramiel's poor personal reputation hurt Atari. One retailer said in 1985 about the ST that, because of its prior experience with Tramiel, "Our interest in Atari is zero, zilch". A software company executive said "Dealing with Commodore was like dealing with Attila the Hun. I don't know if Tramiel will be following his old habits ... I don't see a lot of people rushing to get software on the machine." (One ex-Commodore employee said that to Tramiel "software wasn't tangible—you couldn't hold it, feel it, or touch it—so it wasn't worth spending money for".) Steve Arnold of LucasArts said after meeting with Tramiel that he reminded him of Jabba the Hutt, while within Atari, Darth Vader was often the comparison. Another executive was more positive, stating "Jack Tramiel is a winner. I wouldn't bet against him." In 1988, Stewart Alsop II called Tramiel and Alan Sugar "the world's two leading business-as-war entrepreneurs". In the late 1980s, Tramiel decided to step away from day-to-day operations at Atari, naming his son, Sam, President and CEO. In 1995, Sam suffered a heart attack, and his father returned to oversee operations. In 1996, Tramiel sold Atari to disk-drive manufacturer Jugi Tandon Storage in a reverse merger deal. The newly merged company was named JTS Corporation, and Tramiel joined the JTS board. Later years Michael Tomczyk recalled Tramiel once asking the German government for financial incentives for Commodore to take over a factory there. Tramiel was a co-founder of the United States Holocaust Memorial Museum, which opened in 1993. 
He was among many other survivors of the Ahlem labor camp who tracked down U.S. Army veteran Vernon Tott, who was among the 84th Division which rescued survivors from the camp and had taken and stored photographs of at least 16 of the survivors. Tott, who died of cancer in 2003, was personally commemorated by Tramiel with an inscription on one of the Holocaust Museum's walls saying "To Vernon W. Tott, My Liberator and Hero". Tramiel retired in 1996 and moved to Monte Sereno, California. He died of heart failure on April 8, 2012, aged 83. References Further reading The Home Computer Wars: An Insider's Account of Commodore and Jack Tramiel by Michael Tomczyk, Compute, 1984, On the Edge: The Spectacular Rise and Fall of Commodore by Brian Bagnall, Variant Press, 2005, External links 1985 episode of The Computer Chronicles featuring an extended interview with Tramiel You Don't Know Jack at a Commodore history site Biography about Jack Tramiel at History Corner (in German) The story of Commodore and the 8-bit generation | Leonard Tramiel | TEDxMidAtlantic via YouTube 1928 births 2012 deaths Atari people Commodore people Polish emigrants to the United States American people of Polish-Jewish descent American company founders American taxi drivers Auschwitz concentration camp survivors Łódź Ghetto inmates Technology company founders People from Monte Sereno, California
17039380
https://en.wikipedia.org/wiki/Manuela%20M.%20Veloso
Manuela M. Veloso
Manuela Maria Veloso (born August 12, 1957) is the Head of J.P. Morgan AI Research & Herbert A. Simon University Professor in the School of Computer Science at Carnegie Mellon University, where she was previously Head of the Machine Learning Department. She served as president of the Association for the Advancement of Artificial Intelligence (AAAI) until 2014, and is a co-founder and past president of the RoboCup Federation. She is a fellow of AAAI, the Institute of Electrical and Electronics Engineers (IEEE), the American Association for the Advancement of Science (AAAS), and the Association for Computing Machinery (ACM). She is an international expert in artificial intelligence and robotics. Education Manuela Veloso received her Licenciatura and Master of Science degrees in electrical engineering from Lisbon's Instituto Superior Técnico in 1980 and 1984, respectively. She then attended Boston University, and received a Master of Arts in computer science in 1986. She moved to Carnegie Mellon University and received her Ph.D. in computer science there in 1992. Her thesis Learning by Analogical Reasoning in General Purpose Problem Solving was supervised by Jaime Carbonell. Career and research Shortly after receiving her Ph.D., Manuela Veloso joined the faculty of the Carnegie Mellon School of Computer Science as an assistant professor. She was promoted to the rank of associate professor in 1997, and full professor in 2002. Veloso was a visiting professor at the Massachusetts Institute of Technology for the academic year 1999-2000, a Radcliffe Fellow of the Radcliffe Institute for Advanced Study, Harvard University, for the academic year 2006-2007, and a visiting professor at the Center for Urban Science and Progress (CUSP) at New York University (NYU) for the academic year 2013-2014. She is the winner of the 2009 ACM/SIGART Autonomous Agents Research Award. She was the Program Chair for IJCAI-07, held January 6–12, 2007, in Hyderabad, India, and was program co-chair of AAAI-05, held July 9–13, 2005, in Pittsburgh. She was a member of the Editorial Board of CACM and the AAAI Magazine. She is the author of one book on Planning by Analogical Reasoning. As of 2015, Veloso had graduated 32 PhD students. She was appointed as the head of Carnegie Mellon's Machine Learning Department in 2016. Veloso describes her research goals as the "effective construction of autonomous agents where cognition, perception, and action are combined to address planning, execution, and learning tasks". Veloso and her students have researched and developed a variety of autonomous robots, including teams of soccer robots and mobile service robots. Her robot soccer teams have been RoboCup world champions several times, and the CoBot mobile robots have autonomously navigated for more than 1,000 km in university buildings. In a November 2016 interview, Veloso discussed the ethical responsibility inherent in developing autonomous systems, and expressed her optimism that the technology would be put to use for the good of humankind. Honors and awards National Science Foundation CAREER Award in 1995. Allen Newell Medal for Excellence in Research in 1997. 
2003 AAAI Fellow 2006/2007 Radcliffe Fellow at the Radcliffe Institute for Advanced Study, Harvard University 2010 Institute of Electrical and Electronics Engineers (IEEE) Fellow 2010 American Association for the Advancement of Science (AAAS) Fellow 2009 ACM/SIGART Autonomous Agents Research Award 2012 Einstein Chair Professor, Chinese Academy of Sciences 2016 ACM Fellow, for "contributions to the field of artificial intelligence, in particular in planning, learning, multi-agent systems, and robotics." Veloso is featured in the Notable Women in Computing cards. References 1957 births Living people American roboticists Women roboticists Portuguese roboticists Artificial intelligence researchers Portuguese computer scientists Portuguese emigrants to the United States Boston University alumni Carnegie Mellon University alumni Carnegie Mellon University faculty Fellows of the Association for the Advancement of Artificial Intelligence Fellows of the Association for Computing Machinery Portuguese women computer scientists Women systems scientists Instituto Superior Técnico alumni Presidents of the Association for the Advancement of Artificial Intelligence Portuguese women scientists
28703
https://en.wikipedia.org/wiki/Session%20Announcement%20Protocol
Session Announcement Protocol
The Session Announcement Protocol (SAP) is an experimental protocol for advertising multicast session information. SAP typically uses the Session Description Protocol (SDP) as the format for Real-time Transport Protocol (RTP) session descriptions. Announcement data is sent using IP multicast and the User Datagram Protocol (UDP). Under SAP, senders periodically transmit SDP descriptions to a well-known multicast address and port number (9875). A listening application constructs a guide of all advertised multicast sessions. SAP was published by the IETF as RFC 2974. Announcement interval The announcement interval is cooperatively modulated such that all SAP announcements in the multicast delivery scope, by default, consume at most 4000 bits per second. Regardless, the minimum announcement interval is 300 seconds (5 minutes). Announcements automatically expire after 10 times the announcement interval or one hour, whichever is greater. Announcements may also be explicitly withdrawn by the original issuer. Authentication, encryption and compression SAP features separate methods for authenticating and encrypting announcements. Use of encryption is not recommended. Authentication prevents unauthorized modification of announcements and other denial-of-service attacks. Authentication is optional. Two authentication schemes are supported: Pretty Good Privacy as defined in RFC 2440, and Cryptographic Message Syntax as defined in RFC 5652. The message body may optionally be compressed using the zlib format as defined in RFC 1950. Applications and implementations VLC media player monitors SAP announcements and presents the user with a list of available streams. SAP is one of the optional discovery and connection management techniques described in the AES67 audio-over-Ethernet interoperability standard. References External links Session Announcement Protocol (SAP) SAP/SDP Listener Internet protocols Internet Standards
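As a rough illustration of the announcement mechanism described above, the sketch below joins a SAP multicast group and prints the SDP payload of each announcement it receives. It is a minimal sketch rather than a complete RFC 2974 implementation: the conventional global-scope group address 224.2.127.254 is an assumption (only the port 9875 is named above), announcements are assumed to carry IPv4 origins, and authentication, encryption and zlib compression are ignored.

import socket
import struct

SAP_GROUP = "224.2.127.254"   # assumed global-scope SAP group address
SAP_PORT = 9875               # well-known SAP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", SAP_PORT))
# Join the multicast group on the default interface.
membership = struct.pack("4s4s", socket.inet_aton(SAP_GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

while True:
    packet, source = sock.recvfrom(65535)
    if len(packet) < 8:
        continue                          # too short to hold a SAP header
    auth_words = packet[1]                # authentication data length, in 32-bit words
    header_len = 8 + 4 * auth_words       # 4-byte header + 4-byte IPv4 origin + auth data
    payload = packet[header_len:]
    # An optional MIME type string ("application/sdp" plus a NUL byte) may precede the SDP text.
    if payload.startswith(b"application/sdp\x00"):
        payload = payload[len(b"application/sdp\x00"):]
    print("announcement from", source[0])
    print(payload.decode("utf-8", errors="replace"))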
38153065
https://en.wikipedia.org/wiki/David%20Gunness
David Gunness
David W. Gunness (born November 7, 1960) is an American audio engineer, electrical engineer and inventor. He is known for his work on loudspeaker design, especially high-output professional horn loudspeakers for public address, studio, theater, nightclub, concert and touring uses. Gunness worked with Electro-Voice in Michigan for 11 years, filing three patents related to horn technology. He worked at Eastern Acoustic Works (EAW) in Massachusetts for 12 years, filing three patents in the process of creating a wide variety of loudspeaker products. For EAW, Gunness developed "Gunness Focusing"—a system for decreasing temporal response distortion in loudspeakers, involving the processing of the audio signal before it reaches the loudspeaker drivers, applying a reverse image of the expected distortion to cancel out the loudspeaker's idiosyncrasies. Gunness co-founded Fulcrum Acoustic in 2008: a loudspeaker company with the aim of designing loudspeakers based on digital signal processing (DSP), innovative components and high quality construction. Early life Gunness was born November 7, 1960; he grew up in Janesville, Wisconsin, enjoying outdoor activities such as bicycling, camping, hunting and fishing. At Joseph A. Craig High School he participated in gymnastics and played guitar. Two of his sisters entered the University of Wisconsin–Madison (UW-Madison), but Gunness chose Purdue University in Indiana. After one year there, he returned to Wisconsin to enroll at UW-Madison as an electrical engineering major. During his college years he made extra money as a musician; singing and playing acoustic guitar. For these gigs he fabricated his own loudspeakers, and he determined to continue in this field, shifting his studies to focus on acoustics and electronics. In June 1984, Gunness graduated UW-Madison with a degree in Electrical and Computer Engineering. He immediately accepted an engineering job in Buchanan, Michigan, and relocated there. On September 29, 1984, he married Kathryn A. Sessions, a nursing student who had finished one year ahead of him at UW-Madison. Electro-Voice Directly after graduating UW-Madison, Gunness obtained a research and development position in the engineering department at the Electro-Voice (EV) factory in Michigan. Under Chief Engineer Ray Newman, Gunness worked on loudspeaker design, combining traditional empirical R&D methodologies with the emerging capabilities of computer analysis. His first assignment was to help develop the Musicaster 100, an all-weather 2-way coaxial loudspeaker; an update of the classic 1959 Musicaster design. In 1984, Gunness filed a patent for a better way to use a manifold to combine the outputs of multiple compression drivers for increased sound power level (SPL), using two to four flat reflecting surfaces in the throat of a horn to redirect sound waves for a more coherent summation. This low-distortion manifold design made it possible for EV to produce its first high-power concert and touring loudspeaker: the MT-4, with "MT" standing for "Manifold Technology". This was a 4-way system split between two enclosures, with four speaker drivers summed in each bandpass; a total of 16 drivers. The upper two bandpasses used the Gunness manifold design for compression drivers, each manifold formed of two zinc castings. The medium low frequencies were carried by four cone drivers summed using a larger embodiment of the Gunness manifold concept based on ray tracing and reflection. 
The MT-4 was a very heavy system, but it put more power into a smaller package, and it was quicker to position and connect. The MT-4 proved popular and was used on major tours and festivals such as the 1995 Monsters of Rock at Donington Park in central England and the main stage of the 1996 Lollapalooza tour featuring Metallica and Soundgarden. Together, EV engineer David Carlson and Gunness presented a paper to the Audio Engineering Society (AES) in November 1986, describing the methods they used to sum four drivers in each bandpass. In 1986, Gunness developed the EV HP series of horn loudspeakers based on the constant directivity (CD) characteristics described by EV engineer Don Keele in the mid-1970s. Gunness recognized that relatively large 2-inch (51 mm) horn throats, commonly used for greater SPL, produced an undesirable narrowing of the output pattern above 10 kHz. His patented design used two longitudinal ribs or vanes to form three "pseudo horns" within the horn flare. In 1989, Gunness developed an asymmetric horn with an output pattern shaped to suit a typical small-to-mid-sized rectangular auditorium, so that people sitting near the enclosure heard sound that was not too loud while others sitting farther away heard sound that was loud enough. In both cases, the sound pattern was intended to minimize sound energy bouncing off walls, since such reflections create unwanted multi-path cancellations. The horn featured a vertical diffraction slot, narrower at the bottom, which reduced the output for people sitting below the enclosure in the nearfield, and increased the output for those sitting farther away. EV's sister company, Altec Lansing, marketed this product as the "Vari-Intense" horn. Gunness researched automated methods for analyzing the performance of a loudspeaker. In 1990, he delivered a paper to the AES describing a system which used pink noise and a filtered receiver to generate polar response curves plotting loudspeaker output patterns. 
The KF900 was deployed in mid-1998 for 11 Promise Keepers tour dates, and its response was measured during the shows as part of an iterative product optimization plan. In 2001, Eric Clapton used the KF900 system while touring in support of his Reptile album. In his work to predict the performance of various KF900 loudspeaker configurations, Gunness used acoustical measurement and modeling software called FChart that he started developing while still at Electro-Voice. Heinz Field, home of the Pittsburgh Steelers football team received an installed KF900 system, as did Fenway Park, home of the Boston Red Sox baseball team. The final system tuning at Fenway was performed using Smaart software. Gunness also designed the following EAW loudspeaker models: the long-throw MH433 trapezoid with rigging points, the install-only BH822 twin 12-inch "super" subwoofer, the LA400 touring subwoofer, and the large format arrayable MQ series. Gunness Focusing For years, Gunness had been looking for various electronic solutions to the undesirable characteristics of horns. At EV in 1985, Gunness noticed the performance differences between various shapes of horns, and theorized that an electronic filter might allow optimization. In early 1995, EV gained access to Altec Lansing's 1987 Acousta-CADD acoustic modeling software which revealed more loudspeaker performance characteristics than had previously been observed, but DSP programming tools were still inadequate for audio signal correction. In 2000, Greek electroacousticians John Mourjopoulos and Panagiotis 'Panos' Hatziantoniou described a method for smoothing precise audio analysis filters. Building on this work, Gunness led a team of EAW engineers to develop a proprietary wavelet transform spectrogram for internal research. The EAW spectrogram reduced visual complexity by applying a zero-phase-shift low-pass filter to the audio signal under test using mirror-image infinite impulse response (IIR) filters. The spectrogram revealed loudspeaker performance anomalies, allowing the engineering team to identify mechanisms they characterized as "two-port systems"; i.e. mechanisms demonstrating a single input, a single transfer function, and a single output. Such two-port systems were of interest because they could possibly be corrected with electronic filtering. Because of their variability the methodology would not be used on any of the mechanisms which appeared to be non-linear relative to signal level, spatial distribution ("coverage"), or over time, such as cone stiffness or surround compliance. This left several substantial "linear, time-invariant" (LTI) mechanisms that would yield to correction by digital filtering. These included 1) time-smear from the compression driver/phase plug interface, 2) horn resonance, 3) cone resonance, and 4) crossover phase linearity between adjacent bandpasses. In April 2005, EAW announced the NT Series, a line of 2-way bi-amplified self-powered loudspeakers incorporating the "new technology" which was initially called "Digital Transduction Correction" (DTC). Mix magazine quoted Gunness identifying compression driver "time smear" as a longstanding loudspeaker problem that was countered by preconditioning in the audio signal. Later that year, EAW dropped the DTC acronym and began promoting the technology as "Gunness Focusing". At the AES convention in October 2005 in New York City, EAW project engineer William "Bill" Hoy and Gunness presented a paper describing the mathematics of the spectrogram. 
At the same convention, Gunness spoke about the research and development which culminated in the new technology. He described how the spectrogram allowed the EAW engineering team to observe the mechanism of time smear occurring in the small space between the compression driver diaphragm and the phase plug. He discovered that only half of the compression driver's energy, at best, goes directly from the diaphragm through a phase plug slot or port and into the horn throat. The rest of the sound waves either reflect back to the compression driver surface or travel to another phase plug slot or port; in both cases the result is wave energy leaving the phase plug after the initial impulse. Gunness modeled this behavior mathematically and applied an inverted signal to cancel out the later wave energy. Gunness filed a patent for the technology in March 2006. Later that year, EAW introduced the UX8800, a DSP-based loudspeaker management system with four inputs and eight outputs. The UX8800 was offered to allow Gunness Focusing to be applied to selected pre-existing EAW products such as the KF700 line array series. Gunness Focusing was nominated for but did not win a TEC Award in 2006. Line arrays Gunness joined with EAW co-founder Kenton Forsythe and engineer Jeff Rocha to design the KF760 and KF730 series line array systems. The KF760 was a full-size 3-way system and the KF730 was a compact 3-way system. Either system could be augmented with ground-stacked or flown subwoofers. Common to the two different sizes of KF700 series products was the principle of "divergence shading" rather than the more usual intensity shading. The vertical output pattern of the individual line array elements was adjusted to optimize SPL received by near- and far-field audience areas. This method avoided what Gunness said was a discontinuity between adjacent loudspeaker enclosures driven at different signal levels; he observed smeared transients and frequency response problems. Gunness wrote about divergence shading and general line array issues in August 2000. The KF760 product was revealed in May 2001. Major concerts using the KF760 included The Boston Globe's Jazz and Blues Festival in 2001, Usher's 2002 Evolution tour, Pearl Jam's 2003 Riot Act Tour, the North American dates of Iron Maiden's 2003 Give Me Ed... 'Til I'm Dead Tour, and Sir Paul McCartney playing in Moscow's Red Square in 2003. Usher also used flown KF761 boxes for his vocal stage monitoring system; monitor engineer Maceo Price and sound company owner Tim Cain sat down with Gunness and Rocha, looking at prediction software results to determine which EAW product line would best suit the purpose of loud monitors that would not be in the way of Usher's dancing and set changes. By late 2006, the KF760 and KF730 line array products had been augmented with optional Gunness Focusing by way of the UX8800 loudspeaker management system. THC Audio in Sofia used the UX8800/KF760 combination for Snoop Dogg's Bulgarian performance in 2008. Gunness appeared as a panelist at an AES line array tutorial and workshop held in Los Angeles in October 2002. Don Keele, whose 1970s CD horn discoveries formed a basis for Gunness's later research, shared the panel. In October 2003, Gunness wrote an article about "Digitally Steerable Array" (DSA) technology for Live Sound International magazine. He expanded on the DSA concept the next month for the British Institute of Acoustics (IOA). 
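Steering of the kind described in these articles is conventionally achieved by delaying each driver in the column so that the individual wavefronts add in phase along the chosen direction. The sketch below is a generic delay-and-sum illustration of that idea, not EAW's DSA processing; the driver count, spacing, steering angle and speed of sound are illustrative values.

import math

SPEED_OF_SOUND = 343.0  # metres per second, approximately, at room temperature

def steering_delays(num_drivers, spacing_m, steer_angle_deg):
    # Per-driver delays (in seconds) that tilt the main lobe of an equally spaced
    # vertical column of drivers by steer_angle_deg; the beam tilts toward the end
    # of the column whose drivers receive the largest delays.
    step = spacing_m * math.sin(math.radians(steer_angle_deg)) / SPEED_OF_SOUND
    delays = [i * step for i in range(num_drivers)]
    offset = min(delays)                  # shift so that every delay is non-negative
    return [d - offset for d in delays]

# Example: eight drivers spaced 9 cm apart, steered 10 degrees off the horizontal.
print(steering_delays(8, 0.09, 10.0))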
DSA allowed for adjustments to the vertical output width and vertical direction of a column of mid- to high-frequency loudspeaker drivers, in a frequency range from about 500 Hz to 16 kHz; a range critical to voice intelligibility. Gunness wrote that his research into DSA began in the 1990s and was largely based on the observations gained in developing the KF900 series. The proprietary FChart software was leveraged to create "DSA Pilot" to supply prediction and adjustment software for DSA installations. DSA Pilot allowed the installer to change the vertical pattern of a DSA product from 15 to 120 degrees high, and to change the main direction up or down by 30 degrees, without changing the position of the enclosure. Gunness told the IOA that each transducer in the vertical column must have its own DSP and amplifier for proper steering of the output pattern. For high frequency control, physically small drivers are required. One of the benefits of DSA was that the loudspeaker enclosure could be mounted flat against a vertical wall rather than tilted. The flat position eliminated the problem of acoustic energy radiating from the back of the enclosure, smearing the forward output with multi-path arrival times. In February 2000, Mackie Designs bought EAW but retained the EAW brand. In 2003, Mackie Designs changed its name to LOUD Technologies and moved previously Seattle-based Mackie manufacturing to Asia. In late 2006, LOUD moved EAW's loudspeaker production to China; the Massachusetts factory which had employed 100 assembly and woodshop workers was greatly reduced. EAW's plant retained the ability to fill some custom loudspeaker orders, they kept a number of management and clerical positions, and also the design team of Kenton Forsythe, David Gunness and Jeff Rocha. Gunness continued to research and prototype loudspeakers, and he checked Chinese production examples for quality of workmanship. In January 2007, EAW co-founder Kenneth Berger, a senior vice president of LOUD, left the company. Fulcrum Acoustic Gunness left EAW in January 2008 to join with partners Stephen Siegel and Chris Alfiero in the establishment of Fulcrum Acoustic, a loudspeaker design and manufacturing company. Gunness became Vice President of R&D, and Lead Product Designer. The goal of Fulcrum Acoustic was to produce loudspeakers with "advanced DSP algorithms as integral to their designs" which had become Gunness's signature style. Gunness soon noticed that the time from initial concept to product launch was much faster at a small company. Most of the employees of Fulcrum Acoustic are former EAW coworkers. Temporal Equalization Gunness and Siegel turned their attention to coaxial loudspeakers, known for their desired single-point-source characteristics but also for various problems associated with intermodulation distortion—the low frequencies modulating the highs—and undesirable sonic variations in off-axis frequency response. Other negative aspects of traditional coaxial designs were their bulky weight and their lengthy axis requiring deep enclosures. Gunness and Siegel set about designing a coaxial with a common magnet for both low and high frequency drivers for weight savings and for reduced axial length, and a horn was developed to direct as much high frequency energy as possible away from the low frequency cone. A DSP solution called "Temporal Equalization" (TQ) was used to cancel out any remaining high frequency energy arriving at the moving cone. 
TQ was also used to cancel out high frequency horn reflections that returned to the compression driver. Gunness further developed his proprietary FChart software, renamed "Rayliegh" in honor of Lord Rayleigh, to enhance its capabilities for developing these and future products. Gunness helped specify and design a 16-zone, 100-loudspeaker installation at the Haze nightclub at Aria Resort and Casino in Las Vegas, and he joined with Jamie Anderson of Rational Acoustics to discuss the loudspeaker performance targets and system tuning process via Smaart software, the talk given at a technical tour held in June 2010 during the Infocomm convention. Gunness said that system designer John Lyons asked for a subwoofer that would "crush" at all locations on the dance floor. Gunness responded by creating the US221 subwoofer with two drivers. After hearing Haze's 130 dB SPL results, with a reported 10 dB of extra headroom because ten US221s were used, Lyons quipped that the system surpassed "crush" to establish "punish" as a benchmark. The same month, the Surrender nightclub at Encore Las Vegas opened with a Fulcrum Acoustic installation combining outdoor and indoor areas. Gunness aided in setting up and tuning the system. He noted that three US221 subwoofers supplied sufficiently high energy sound for the small dance floor. In December 2012, Wired magazine wrote about how temporal corrections developed by Gunness cleaned up "the smear of sound" present in normal nightclub loudspeakers. References External links List of white papers at Fulcrum Acoustic 1960 births American audio engineers American electrical engineers Living people People from Buchanan, Michigan People from Janesville, Wisconsin People from Worcester, Massachusetts Sound designers University of Wisconsin–Madison College of Engineering alumni 21st-century American inventors Joseph A. Craig High School alumni
27416836
https://en.wikipedia.org/wiki/Mobile%20data%20offloading
Mobile data offloading
Mobile data offloading is the use of complementary network technologies for delivering data originally targeted for cellular networks. Offloading reduces the amount of data being carried on the cellular bands, freeing bandwidth for other users. It is also used in situations where local cell reception may be poor, allowing the user to connect via wired services with better connectivity. Rules triggering the mobile offloading action can be set by either an end-user (mobile subscriber) or an operator. The code operating on the rules resides in an end-user device, in a server, or is divided between the two. End users offload data to control data service costs and to gain access to higher bandwidth. The main complementary network technologies used for mobile data offloading are Wi-Fi, femtocell and Integrated Mobile Broadcast. It is predicted that mobile data offloading will become a new industry segment due to the surge of mobile data traffic. Mobile data surge The increasing need for offloading solutions is driven by the explosion of Internet data traffic, especially the growing portion of traffic going through mobile networks. This has been enabled by smartphone devices possessing Wi-Fi capabilities together with large screens and different Internet applications, from browsers to video and audio streaming applications. In addition to smartphones, laptops with 3G access capabilities are also seen as a major source of mobile data traffic. Additionally, Wi-Fi is typically much less costly to build than cellular networks. It has been estimated that total Internet traffic would pass 235.7 exabytes per month in 2021, up from 73.1 exabytes per month in 2016. An annual growth rate of 50% is expected to continue, and it will keep outpacing the corresponding revenue growth. Alternatives Wi-Fi and femtocell technologies are the primary offload technologies used by the industry. In addition, WiMAX and terrestrial networks (LAN) are also candidates for offloading of 3G mobile data. Femtocells use standard cellular radio technologies, so any mobile device is capable of participating in the data offloading process, though some modification is needed to accommodate the different backhaul connection. On the other hand, cellular radio technologies are founded on the ability to do network planning within licensed spectrum. Hence, it may turn out to be difficult, both technically and commercially, to mass-deploy femtocell access points. Self-Organizing Network (SON) is an emerging technology for tackling unplanned femtocell deployment (among other applications). Wi-Fi is a different radio technology from cellular, but most Internet-capable mobile devices now come with Wi-Fi capability. There are already millions of installed Wi-Fi networks, mainly in congested areas such as airports, hotels and city centers, and the number is growing rapidly. Wi-Fi networks are very fragmented, but recently there have been efforts to consolidate them. The consolidation of Wi-Fi networks is proceeding both through a community approach, with Fon as the prime example, and through the consolidation of Wi-Fi network operators. Wi-Fi Wi-Fi offloading is an emerging business domain, with multiple companies entering the market with proprietary solutions. As standardization has focused on the degree of coupling between the cellular and Wi-Fi networks, the competing solutions can be classified based on the minimum needed level of network interworking. 
Besides standardization, research communities have been exploring more open and programmable designs in order to fix the deployment dilemma (X. Kang et al., "Mobile Data Offloading Through A Third-Party WiFi Access Point: An Operator's Perspective", IEEE Transactions on Wireless Communications, vol. 13, no. 10, pp. 5340–5351, Oct. 2014). A further classification criterion is the initiator of the offloading procedure. Cellular and Wi-Fi network interworking Depending on the services to be offloaded and the business model, there may be a need for interworking standardization. Standardization efforts have focused on specifying tight or loose coupling between the cellular and the Wi-Fi networks, especially in a network-controlled manner. The 3GPP-based Enhanced Generic Access Network architecture applies tight coupling, as it specifies rerouting of cellular network signaling through Wi-Fi access networks. Wi-Fi is considered to be a non-3GPP WLAN radio access network (RAN). 3GPP has also specified an alternative loosely coupled solution for Wi-Fi. The approach is called the Interworking Wireless LAN (IWLAN) architecture, and it is a solution for transferring IP data between a mobile device and the operator's core network through Wi-Fi access. In the IWLAN architecture, a mobile device opens a VPN/IPsec tunnel from the device to a dedicated IWLAN server in the operator's core network to provide the user either access to the operator's walled-garden services or a gateway to the public Internet. With loose coupling between the networks, the only integration and interworking point is the common authentication architecture. The most straightforward way to offload data to Wi-Fi networks is to have a direct connection to the public Internet. This no-coupling alternative removes the need for interworking standardization. For the majority of web traffic there is no added value in routing the data through the operator core network. In this case the offloading can simply be carried out by switching the IP traffic to use the Wi-Fi connection in the mobile client instead of the cellular data connection. In this approach the two networks are in practice totally separate, and network selection is done by a client application. Studies show that a significant amount of data can be offloaded in this manner to Wi-Fi networks even when users are mobile (Kyunghan Lee, Joohyun Lee, Yung Yi, Injong Rhee and Song Chong, "Mobile Data Offloading: How Much Can WiFi Deliver?", Proc. CoNEXT 2010). However, offloading does not always mean a reduction of resource consumption (required system capacity) in the network of the operator. Under certain conditions, and due to an increase in the burstiness of the non-offloaded traffic (i.e. traffic that eventually reaches the network of the operator in the regular way), the amount of network resources needed to offer a given level of QoS is increased. In this context, the distribution of offloading periods turns out to be the main design parameter for deploying effective offloading strategies in the networks of mobile network operators (MNOs), making non-offloaded traffic less heavy-tailed and hence reducing the resources needed in the network of the operator. The energy consumption of offloading is another concern. Initiation of offloading procedure There are three main initiation schemes: WLAN scanning initiation, user initiation and remotely managed initiation. In the WLAN scanning-based initiation, the user device periodically performs WLAN scanning. 
When a known or an open Wi-Fi network is found, an offloading procedure is initiated. In the user-initiated mode, a user is prompted to select which network technology is used. This usually happens once per network access session. In the remotely managed approach, a network server initiates each offloading procedure by prompting the connection manager of a specific user device. The operator-managed scheme is a subclass of the remotely managed approach. In the operator-managed approach, the operator monitors its network load and user behavior. In the case of impending network congestion, the operator initiates the offloading procedure. ANDSF The Access Network Discovery and Selection Function (ANDSF) is the most complete 3GPP approach to date for controlling offloading between 3GPP and non-3GPP access networks (such as Wi-Fi). The purpose of the ANDSF is to assist user devices in discovering access networks in their vicinity and to provide rules (policies) to prioritize and manage connections to all networks. ATSSS 3GPP has started to standardize the Access Traffic Steering, Switching & Splitting (ATSSS) function to enable 5G devices to use different types of access networks, including Wi-Fi. The ATSSS service leverages the Multipath TCP protocol to enable 5G devices to utilize different access networks simultaneously. Experience with the use of Multipath TCP on iPhones has shown that the ability to simultaneously use Wi-Fi and cellular was key to supporting seamless handovers. The first version of the ATSSS specification leverages the 0-RTT convert protocol developed within the IETF. A prototype implementation of this service was demonstrated in August 2019. Operating system connection manager Many operating systems provide a connection manager that can automatically switch to a Wi-Fi network if it detects a known Wi-Fi network. Such functionality can be found in most modern operating systems (for example, in all Windows versions since XP SP3, Ubuntu, the Nokia N900, Android and the Apple iPhone). The connection managers use various heuristics to detect the best-performing network connections. These include performing DNS requests for known names over the newly activated network interfaces, sending queries to specific servers, and so on. When both the Wi-Fi and the cellular interfaces are active, Android smartphones will usually prefer the Wi-Fi one, since it is usually unmetered. When such a smartphone decides to switch from one interface to another, all the active TCP connections need to be reestablished. Multipath TCP solves this handover problem in a clean way. With Multipath TCP, TCP connections can use both the Wi-Fi interface and the cellular one during the handover. This means that ongoing TCP connections are not stopped when the smartphone decides to switch from one network to another. As of January 2020, Multipath TCP is natively supported on iPhones, but less frequently used on Android smartphones except in South Korea. On iPhones since iOS 9, the Wi-Fi Assist subsystem monitors the quality of the underlying network connection. If the quality drops below a given threshold, Wi-Fi Assist may decide to move established Multipath TCP connections to another interface. Initially, this feature was used for the Siri application. Since iOS 12, any Multipath TCP-enabled application can benefit from this feature. Since iOS 13, Apple Maps and Apple Music can also be offloaded from Wi-Fi to cellular and vice versa without any interruption. 
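On Linux, an application can opt in to Multipath TCP simply by requesting the MPTCP protocol when it creates a socket; the kernel then manages the subflows over the available interfaces. The sketch below is a minimal illustration assuming a Linux 5.6 or newer kernel with MPTCP enabled (net.mptcp.enabled set to 1); the host name is only a placeholder, and the code falls back to ordinary TCP where MPTCP is unavailable.

import socket

# Protocol number 262 is Linux's IPPROTO_MPTCP; the named constant was added in Python 3.10.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def connect(host, port):
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel or C library without MPTCP support: fall back to plain TCP.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
    sock.connect((host, port))
    return sock

conn = connect("example.com", 80)  # placeholder host
conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(conn.recv(200))
conn.close()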
Opportunistic offloading With the increasing availability of inter-device networks (e.g. Bluetooth or WifiDirect) there is also the possibility of offloading delay tolerant data to the ad hoc network layer. In this case, the delay tolerant data is sent to only a subset of data receivers via the 3G network, with the rest forwarded between devices in the ad hoc'' layer in a multi-hop fashion. As a result, the traffic on the cellular network is reduced, or gets shifted to inter-device networks. See also LTE in unlicensed spectrum Generic Access Network References External links Global Wi-Fi Offload Summit Metropolitan area networks Network access Wi-Fi
37035
https://en.wikipedia.org/wiki/Conway%27s%20Game%20of%20Life
Conway's Game of Life
The Game of Life, also known simply as Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. It is Turing complete and can simulate a universal constructor or any other Turing machine. Rules The universe of the Game of Life is an infinite, two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead (or populated and unpopulated, respectively). Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur: Any live cell with fewer than two live neighbours dies, as if by underpopulation. Any live cell with two or three live neighbours lives on to the next generation. Any live cell with more than three live neighbours dies, as if by overpopulation. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction. These rules, which compare the behavior of the automaton to real life, can be condensed into the following: Any live cell with two or three live neighbours survives. Any dead cell with three live neighbours becomes a live cell. All other live cells die in the next generation. Similarly, all other dead cells stay dead. The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed, live or dead; births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick. Each generation is a pure function of the preceding one. The rules continue to be applied repeatedly to create further generations. Origins In late 1940, John von Neumann defined life as a creation (as a being or organism) which can reproduce itself and simulate a Turing machine. Von Neumann was thinking about an engineering solution which would use electromagnetic components floating randomly in liquid or gas. This turned out not to be realistic with the technology available at the time. Stanislaw Ulam invented cellular automata, which were intended to simulate von Neumann's theoretical electromagnetic constructions. Ulam discussed using computers to simulate his cellular automata in a two-dimensional lattice in several papers. In parallel, von Neumann attempted to construct Ulam's cellular automaton. Although successful, he was busy with other projects and left some details unfinished. His construction was complicated because it tried to simulate his own engineering design. Over time, simpler life constructions were provided by other researchers, and published in papers and books. Motivated by questions in mathematical logic and in part by work on simulation games by Ulam, among others, John Conway began doing experiments in 1968 with a variety of different two-dimensional cellular automaton rules. Conway's initial goal was to define an interesting and unpredictable cell automaton. For example, he wanted some configurations to last for a long time before dying and other configurations to go on forever without allowing cycles. 
It was a significant challenge and an open problem for years before experts on cellular automata managed to prove that, indeed, the Game of Life admitted of a configuration which was alive in the sense of satisfying von Neumann's two general requirements. While the definitions before the Game of Life were proof-oriented, Conway's construction aimed at simplicity without a priori providing proof the automaton was alive. Conway chose his rules carefully, after considerable experimentation, to meet these criteria: There should be no explosive growth. There should exist small initial patterns with chaotic, unpredictable outcomes. There should be potential for von Neumann universal constructors. The rules should be as simple as possible, whilst adhering to the above constraints. The game made its first public appearance in the October 1970 issue of Scientific American, in Martin Gardner's "Mathematical Games" column. Theoretically, the Game of Life has the power of a universal Turing machine: anything that can be computed algorithmically can be computed within the Game of Life. Gardner wrote, "Because of Life's analogies with the rise, fall and alterations of a society of living organisms, it belongs to a growing class of what are called 'simulation games' (games that resemble real-life processes)." Since its publication, the Game of Life has attracted much interest because of the surprising ways in which the patterns can evolve. It provides an example of emergence and self-organization. Scholars in various fields, such as computer science, physics, biology, biochemistry, economics, mathematics, philosophy, and generative sciences, have made use of the way that complex patterns can emerge from the implementation of the game's simple rules. The game can also serve as a didactic analogy, used to convey the somewhat counter-intuitive notion that design and organization can spontaneously emerge in the absence of a designer. For example, philosopher Daniel Dennett has used the analogy of the Game of Life "universe" extensively to illustrate the possible evolution of complex philosophical constructs, such as consciousness and free will, from the relatively simple set of deterministic physical laws which might govern our universe. The popularity of the Game of Life was helped by its coming into being at the same time as increasingly inexpensive computer access. The game could be run for hours on these machines, which would otherwise have remained unused at night. In this respect, it foreshadowed the later popularity of computer-generated fractals. For many, the Game of Life was simply a programming challenge: a fun way to use otherwise wasted CPU cycles. For some, however, the Game of Life had more philosophical connotations. It developed a cult following through the 1970s and beyond; current developments have gone so far as to create theoretic emulations of computer systems within the confines of a Game of Life board. Examples of patterns Many different types of patterns occur in the Game of Life, which are classified according to their behaviour. Common pattern types include: still lifes, which do not change from one generation to the next; oscillators, which return to their initial state after a finite number of generations; and spaceships, which translate themselves across the grid. The earliest interesting patterns in the Game of Life were discovered without the use of computers. 
The simplest still lifes and oscillators were discovered while tracking the fates of various small starting configurations using graph paper, blackboards, and physical game boards, such as those used in Go. During this early research, Conway discovered that the R-pentomino failed to stabilize in a small number of generations. In fact, it takes 1103 generations to stabilize, by which time it has a population of 116 and has generated six escaping gliders; these were the first spaceships ever discovered. Frequently occurring examples (in that they emerge frequently from a random starting configuration of cells) of the three aforementioned pattern types are shown below, with live cells shown in black and dead cells in white. Period refers to the number of ticks a pattern must iterate through before returning to its initial configuration. The pulsar is the most common period-3 oscillator. The great majority of naturally occurring oscillators have a period of 2, like the blinker and the toad, but oscillators of many periods are known to exist, and oscillators of periods 4, 8, 14, 15, 30, and a few others have been seen to arise from random initial conditions. Patterns which evolve for long periods before stabilizing are called Methuselahs, the first-discovered of which was the R-pentomino. Diehard is a pattern that eventually disappears, rather than stabilizing, after 130 generations, which is conjectured to be maximal for patterns with seven or fewer cells. Acorn takes 5206 generations to generate 633 cells, including 13 escaped gliders. Conway originally conjectured that no pattern can grow indefinitely—i.e. that for any initial configuration with a finite number of living cells, the population cannot grow beyond some finite upper limit. In the game's original appearance in "Mathematical Games", Conway offered a prize of fifty dollars to the first person who could prove or disprove the conjecture before the end of 1970. The prize was won in November by a team from the Massachusetts Institute of Technology, led by Bill Gosper; the "Gosper glider gun" produces its first glider on the 15th generation, and another glider every 30th generation from then on. For many years, this glider gun was the smallest one known. In 2015, a gun called the "Simkin glider gun", which releases a glider every 120th generation, was discovered that has fewer live cells but which is spread out across a larger bounding box at its extremities. Smaller patterns were later found that also exhibit infinite growth. All three of the patterns shown below grow indefinitely. The first two create a single block-laying switch engine: a configuration that leaves behind two-by-two still life blocks as it translates itself across the game's universe. The third configuration creates two such patterns. The first has only ten live cells, which has been proven to be minimal. The second fits in a five-by-five square, and the third is only one cell high. Later discoveries included other guns, which are stationary, and which produce gliders or other spaceships; puffer trains, which move along leaving behind a trail of debris; and rakes, which move and emit spaceships. Gosper also constructed the first pattern with an asymptotically optimal quadratic growth rate, called a breeder or lobster, which worked by leaving behind a trail of guns. It is possible for gliders to interact with other objects in interesting ways. 
For example, if two gliders are shot at a block in a specific position, the block will move closer to the source of the gliders. If three gliders are shot in just the right way, the block will move farther away. This sliding block memory can be used to simulate a counter. It is possible to construct logic gates such as AND, OR, and NOT using gliders. It is possible to build a pattern that acts like a finite-state machine connected to two counters. This has the same computational power as a universal Turing machine, so the Game of Life is theoretically as powerful as any computer with unlimited memory and no time constraints; it is Turing complete. In fact, several different programmable computer architectures have been implemented in the Game of Life, including a pattern that simulates Tetris. Furthermore, a pattern can contain a collection of guns that fire gliders in such a way as to construct new objects, including copies of the original pattern. A universal constructor can be built which contains a Turing complete computer, and which can build many types of complex objects, including more copies of itself. In 2018, the first truly elementary knightship, Sir Robin, was discovered by Adam P. Goucher. A knightship is a spaceship that moves two squares left for every one square it moves down (like a knight in chess), as opposed to moving orthogonally or along a 45° diagonal. This is the first new spaceship movement pattern for an elementary spaceship found in forty-eight years. "Elementary" means that it cannot be decomposed into smaller interacting patterns such as gliders and still lifes. Undecidability Many patterns in the Game of Life eventually become a combination of still lifes, oscillators, and spaceships; other patterns may be called chaotic. A pattern may stay chaotic for a very long time until it eventually settles to such a combination. The Game of Life is undecidable, which means that given an initial pattern and a later pattern, no algorithm exists that can tell whether the later pattern is ever going to appear. This is a corollary of the halting problem: the problem of determining whether a given program will finish running or continue to run forever from an initial input. Indeed, since the Game of Life includes a pattern that is equivalent to a universal Turing machine (UTM), this deciding algorithm, if it existed, could be used to solve the halting problem by taking the initial pattern as the one corresponding to a UTM plus an input, and the later pattern as the one corresponding to a halting state of the UTM. It also follows that some patterns exist that remain chaotic forever. If this were not the case, one could progress the game sequentially until a non-chaotic pattern emerged, then compute whether a later pattern was going to appear. Self-replication On May 18, 2010, Andrew J. Wade announced a self-constructing pattern, dubbed "Gemini", that creates a copy of itself while destroying its parent. This pattern replicates in 34 million generations, and uses an instruction tape made of gliders oscillating between two stable configurations made of Chapman–Greene construction arms. These, in turn, create new copies of the pattern, and destroy the previous copy. Gemini is also a spaceship, and is the first spaceship constructed in the Game of Life that is an oblique spaceship, which is a spaceship that is neither orthogonal nor purely diagonal. In December 2015, diagonal versions of the Gemini were built. 
On November 23, 2013, Dave Greene built the first replicator in the Game of Life that creates a complete copy of itself, including the instruction tape. In October 2018, Adam P. Goucher finished his construction of the 0E0P metacell, a metacell capable of self-replication. This differed from previous metacells, such as the OTCA metapixel by Brice Due, which only worked with already constructed copies near them. The 0E0P metacell works by using construction arms to create copies that simulate the programmed rule. The actual simulation of the Game of Life or other Moore neighbourhood rules is done by simulating an equivalent rule using the von Neumann neighbourhood with more states. The name 0E0P is short for "Zero Encoded by Zero Population", which indicates that instead of a metacell being in an "off" state simulating empty space, the 0E0P metacell removes itself when the cell enters that state, leaving a blank space. Iteration From most random initial patterns of living cells on the grid, observers will find the population constantly changing as the generations tick by. The patterns that emerge from the simple rules may be considered a form of mathematical beauty. Small isolated subpatterns with no initial symmetry tend to become symmetrical. Once this happens, the symmetry may increase in richness, but it cannot be lost unless a nearby subpattern comes close enough to disturb it. In a very few cases, the society eventually dies out, with all living cells vanishing, though this may not happen for a great many generations. Most initial patterns eventually burn out, producing either stable figures or patterns that oscillate forever between two or more states; many also produce one or more gliders or spaceships that travel indefinitely away from the initial location. Because of the nearest-neighbour based rules, no information can travel through the grid at a greater rate than one cell per unit time, so this velocity is said to be the cellular automaton speed of light and denoted c. Algorithms Early patterns with unknown futures, such as the R-pentomino, led computer programmers to write programs to track the evolution of patterns in the Game of Life. Most of the early algorithms were similar: they represented the patterns as two-dimensional arrays in computer memory. Typically, two arrays are used: one to hold the current generation, and one to calculate its successor. Often 0 and 1 represent dead and live cells, respectively. A nested for loop considers each element of the current array in turn, counting the live neighbours of each cell to decide whether the corresponding element of the successor array should be 0 or 1. The successor array is displayed. For the next iteration, the arrays may swap roles so that the successor array in the last iteration becomes the current array in the next iteration, or one may copy the values of the second array into the first array then update the second array from the first array again. A variety of minor enhancements to this basic scheme are possible, and there are many ways to save unnecessary computation. A cell that did not change at the last time step, and none of whose neighbours changed, is guaranteed not to change at the current time step as well, so a program that keeps track of which areas are active can save time by not updating inactive zones. 
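The two-array scheme just described can be written down in a few lines. The following Python sketch is purely illustrative (the function and pattern names are not taken from any particular historical program) and treats every cell outside the grid as dead:

def step(current):
    """Compute the next generation from a 2D list of 0/1 cells.

    Cells outside the grid are treated as permanently dead, the simplest
    (if slightly inaccurate) boundary strategy discussed below.
    """
    rows, cols = len(current), len(current[0])
    successor = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbours among the (up to) eight surrounding cells.
            live = sum(
                current[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
                if (rr, cc) != (r, c)
            )
            # Standard B3/S23 rules: birth on exactly 3, survival on 2 or 3.
            successor[r][c] = 1 if live == 3 or (current[r][c] == 1 and live == 2) else 0
    return successor

# A glider in a 6x6 universe; calling step repeatedly advances the pattern.
grid = [[0] * 6 for _ in range(6)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
grid = step(grid)

Repeatedly calling step and rebinding the result plays the role of swapping the current and successor arrays described above.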
To avoid decisions and branches in the counting loop, the rules can be rearranged from an egocentric approach of the inner field regarding its neighbours to a scientific observer's viewpoint: if the sum of all nine fields in a given neighbourhood is three, the inner field state for the next generation will be life; if the all-field sum is four, the inner field retains its current state; and every other sum sets the inner field to death. To save memory, the storage can be reduced to one array plus two line buffers. One line buffer is used to calculate the successor state for a line, then the second line buffer is used to calculate the successor state for the next line. The first buffer is then written to its line and freed to hold the successor state for the third line. If a toroidal array is used, a third buffer is needed so that the original state of the first line in the array can be saved until the last line is computed. In principle, the Game of Life field is infinite, but computers have finite memory. This leads to problems when the active area encroaches on the border of the array. Programmers have used several strategies to address these problems. The simplest strategy is to assume that every cell outside the array is dead. This is easy to program but leads to inaccurate results when the active area crosses the boundary. A more sophisticated trick is to consider the left and right edges of the field to be stitched together, and the top and bottom edges also, yielding a toroidal array. The result is that active areas that move across a field edge reappear at the opposite edge. Inaccuracy can still result if the pattern grows too large, but there are no pathological edge effects. Techniques of dynamic storage allocation may also be used, creating ever-larger arrays to hold growing patterns. The Game of Life on a finite field is sometimes explicitly studied; some implementations, such as Golly, support a choice of the standard infinite field, a field infinite only in one dimension, or a finite field, with a choice of topologies such as a cylinder, a torus, or a Möbius strip. Alternatively, programmers may abandon the notion of representing the Game of Life field with a two-dimensional array, and use a different data structure, such as a vector of coordinate pairs representing live cells. This allows the pattern to move about the field unhindered, as long as the population does not exceed the size of the live-coordinate array. The drawback is that counting live neighbours becomes a hash-table lookup or search operation, slowing down simulation speed. With more sophisticated data structures this problem can also be largely solved. For exploring large patterns at great time depths, sophisticated algorithms such as Hashlife may be useful. There is also a method for implementation of the Game of Life and other cellular automata using arbitrary asynchronous updates whilst still exactly emulating the behaviour of the synchronous game. Source code examples that implement the basic Game of Life scenario in various programming languages, including C, C++, Java and Python can be found at Rosetta Code. Variations Since the Game of Life's inception, new, similar cellular automata have been developed. The standard Game of Life is symbolized as B3/S23. A cell is born if it has exactly three neighbours, survives if it has two or three living neighbours, and dies otherwise. The first number, or list of numbers, is what is required for a dead cell to be born. 
The second set is the requirement for a live cell to survive to the next generation. Hence B6/S16 means "a cell is born if there are six neighbours, and lives on if there are either one or six neighbours". Cellular automata on a two-dimensional grid that can be described in this way are known as Life-like cellular automata. Another common automaton, HighLife, is described by the rule B36/S23, because having six neighbours, in addition to the original game's B3/S23 rule, causes a birth. HighLife is best known for its frequently occurring replicators. Additional Life-like cellular automata exist. The vast majority of these 2^18 (262,144) different rules produce universes that are either too chaotic or too desolate to be of interest, but a large subset do display interesting behavior. A further generalization produces the isotropic rulespace, with 2^102 possible cellular automaton rules (the Game of Life again being one of them). These are rules that use the same square grid as the Life-like rules and the same eight-cell neighbourhood, and are likewise invariant under rotation and reflection. However, in isotropic rules, the positions of neighbour cells relative to each other may be taken into account in determining a cell's future state—not just the total number of those neighbours. Some variations on the Game of Life modify the geometry of the universe as well as the rule. The above variations can be thought of as a two-dimensional square, because the world is two-dimensional and laid out in a square grid. One-dimensional square variations, known as elementary cellular automata, and three-dimensional square variations have been developed, as have two-dimensional hexagonal and triangular variations. A variant using aperiodic tiling grids has also been made. Conway's rules may also be generalized such that instead of two states, live and dead, there are three or more. State transitions are then determined either by a weighting system or by a table specifying separate transition rules for each state; for example, Mirek's Cellebration's multi-coloured Rules Table and Weighted Life rule families each include sample rules equivalent to the Game of Life. Patterns relating to fractals and fractal systems may also be observed in certain variations. For example, the automaton B1/S12 generates four very close approximations to the Sierpinski triangle when applied to a single live cell. The Sierpinski triangle can also be observed in the Game of Life by examining the long-term growth of an infinitely long single-cell-thick line of live cells, as well as in HighLife, Seeds (B2/S), and Wolfram's Rule 90. Immigration is a variation that is very similar to the Game of Life, except that there are two on states, often expressed as two different colours. Whenever a new cell is born, it takes on the on state that is the majority in the three cells that gave it birth. This feature can be used to examine interactions between spaceships and other objects within the game. Another similar variation, called QuadLife, involves four different on states. When a new cell is born from three different on neighbours, it takes the fourth value, and otherwise, like Immigration, it takes the majority value. Except for the variation among on cells, both of these variations act identically to the Game of Life. Music Various musical composition techniques use the Game of Life, especially in MIDI sequencing. A variety of programs exist for creating sound from patterns generated in the Game of Life.
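The B/S rule strings introduced under Variations above lend themselves to a generic simulator. The sketch below is a minimal, illustrative Python implementation (the parsing helper and names are assumptions of this sketch, not a standard library interface); it stores live cells as a set of coordinates and accepts any Life-like rule such as B3/S23, B36/S23 (HighLife) or B2/S (Seeds):

from collections import Counter

def parse_rule(rule):
    """Turn a rule string like 'B36/S23' into (birth, survival) digit sets."""
    birth, survival = rule.upper().split("/")
    return ({int(d) for d in birth[1:]}, {int(d) for d in survival[1:]})

def step(live, rule="B3/S23"):
    """Advance a set of live (x, y) cells by one generation under a Life-like rule.

    Rules with birth on zero neighbours are not handled by this sparse scheme.
    """
    birth, survival = parse_rule(rule)
    # Count live neighbours for every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        if (cell in live and n in survival) or (cell not in live and n in birth)
    }

blinker = {(1, 0), (1, 1), (1, 2)}
assert step(step(blinker)) == blinker          # period-2 oscillator under B3/S23
print(step({(0, 0), (0, 1)}, rule="B2/S"))     # Seeds: both live cells die, four cells are born

Because the live cells are kept in a hash-based set, patterns can wander an effectively unbounded field, at the cost of the per-neighbour lookups mentioned earlier.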
Notable programs Computers have been used to follow Game of Life configurations since it was first publicized. When John Conway was first investigating how various starting configurations developed, he tracked them by hand using a go board with its black and white stones. This was tedious and prone to errors. The first interactive Game of Life program was written in an early version of ALGOL 68C for the PDP-7 by M. J. T. Guy and S. R. Bourne. The results were published in the October 1970 issue of Scientific American, along with the statement: "Without its help, some discoveries about the game would have been difficult to make." A color version of the Game of Life was written by Ed Hall in 1976 for Cromemco microcomputers, and a display from that program filled the cover of the June 1976 issue of Byte. The advent of microcomputer-based color graphics from Cromemco has been credited with a revival of interest in the game. Two early implementations of the Game of Life on home computers were by Malcolm Banthorpe written in BBC BASIC. The first was in the January 1984 issue of Acorn User magazine, and Banthorpe followed this with a three-dimensional version in the May 1984 issue. Susan Stepney, Professor of Computer Science at the University of York, followed this up in 1988 with Life on the Line, a program that generated one-dimensional cellular automata. There are now thousands of Game of Life programs online, so a full list will not be provided here. The following is a small selection of programs with some special claim to notability, such as popularity or unusual features. Most of these programs incorporate a graphical user interface for pattern editing and simulation, the capability for simulating multiple rules including the Game of Life, and a large library of interesting patterns in the Game of Life and other cellular automaton rules. Golly is a cross-platform (Windows, Macintosh, Linux, iOS, and Android) open-source simulation system for the Game of Life and other cellular automata (including all Life-like cellular automata, the Generations family of cellular automata from Mirek's Cellebration, and John von Neumann's 29-state cellular automaton) by Andrew Trevorrow and Tomas Rokicki. It includes the Hashlife algorithm for extremely fast generation, and Lua or Python scriptability for both editing and simulation. Mirek's Cellebration is a freeware one- and two-dimensional cellular automata viewer, explorer, and editor for Windows. It includes powerful facilities for simulating and viewing a wide variety of cellular automaton rules, including the Game of Life, and a scriptable editor. Xlife is a cellular-automaton laboratory by Jon Bennett. The standard UNIX X11 Game of Life simulation application for a long time, it has also been ported to Windows. It can handle cellular automaton rules with the same neighbourhood as the Game of Life, and up to eight possible states per cell. Google implemented an easter egg of the Game of Life in 2012. Users who search for the term are shown an implementation of the game in the search results page. See also , is set in a future society where the Game of Life is played in a competitive two-player mode , a "human" Game of Life. Notes References External links Life Lexicon, extensive lexicon with many patterns LifeWiki Conway Life forums Catagolue, an online database of objects in Conway's Game of Life and similar cellular automata Cellular Automata FAQ – Conway's Game of Life Algebraic formula, recurrence relation for iterating Conway's Game of Life. 
Ancestral reconstruction
Ancestral reconstruction (also known as Character Mapping or Character Optimization) is the extrapolation back in time from measured characteristics of individuals (or populations) to their common ancestors. It is an important application of phylogenetics, the reconstruction and study of the evolutionary relationships among individuals, populations or species to their ancestors. In the context of evolutionary biology, ancestral reconstruction can be used to recover different kinds of ancestral character states of organisms that lived millions of years ago. These states include the genetic sequence (ancestral sequence reconstruction), the amino acid sequence of a protein, the composition of a genome (e.g., gene order), a measurable characteristic of an organism (phenotype), and the geographic range of an ancestral population or species (ancestral range reconstruction). This is desirable because it allows us to examine parts of phylogenetic trees corresponding to the distant past, clarifying the evolutionary history of the species in the tree. Since modern genetic sequences are essentially a variation of ancient ones, access to ancient sequences may identify other variations and organisms which could have arisen from those sequences. In addition to genetic sequences, one might attempt to track the changing of one character trait to another, such as fins turning to legs. Non-biological applications include the reconstruction of the vocabulary or phonemes of ancient languages, and cultural characteristics of ancient societies such as oral traditions or marriage practices. Ancestral reconstruction relies on a sufficiently realistic statistical model of evolution to accurately recover ancestral states. These models use the genetic information already obtained through methods such as phylogenetics to determine the route that evolution has taken and when evolutionary events occurred. No matter how well the model approximates the actual evolutionary history, however, one's ability to accurately reconstruct an ancestor deteriorates with increasing evolutionary time between that ancestor and its observed descendants. Additionally, more realistic models of evolution are inevitably more complex and difficult to calculate. Progress in the field of ancestral reconstruction has relied heavily on the exponential growth of computing power and the concomitant development of efficient computational algorithms (e.g., a dynamic programming algorithm for the joint maximum likelihood reconstruction of ancestral sequences). Methods of ancestral reconstruction are often applied to a given phylogenetic tree that has already been inferred from the same data. While convenient, this approach has the disadvantage that its results are contingent on the accuracy of a single phylogenetic tree. In contrast, some researchers advocate a more computationally intensive Bayesian approach that accounts for uncertainty in tree reconstruction by evaluating ancestral reconstructions over many trees. History The concept of ancestral reconstruction is often credited to Emile Zuckerkandl and Linus Pauling. Motivated by the development of techniques for determining the primary (amino acid) sequence of proteins by Frederick Sanger in 1955, Zuckerkandl and Pauling postulated that such sequences could be used to infer not only the phylogeny relating the observed protein sequences, but also the ancestral protein sequence at the earliest point (root) of this tree. 
However, the idea of reconstructing ancestors from measurable biological characteristics had already been developing in the field of cladistics, one of the precursors of modern phylogenetics. Cladistic methods, which appeared as early as 1901, infer the evolutionary relationships of species on the basis of the distribution of shared characteristics, of which some are inferred to be descended from common ancestors. Furthermore, Theodosius Dobzhansky and Alfred Sturtevant articulated the principles of ancestral reconstruction in a phylogenetic context in 1938, when inferring the evolutionary history of chromosomal inversions in Drosophila pseudoobscura. Thus, ancestral reconstruction has its roots in several disciplines. Today, computational methods for ancestral reconstruction continue to be extended and applied in a diversity of settings, so that ancestral states are being inferred not only for biological characteristics and molecular sequences, but also for the structure or catalytic properties of ancient versus modern proteins, the geographic location of populations and species (phylogeography) and the higher-order structure of genomes. Methods and algorithms Any attempt at ancestral reconstruction begins with a phylogeny. In general, a phylogeny is a tree-based hypothesis about the order in which populations (referred to as taxa) are related by descent from common ancestors. Observed taxa are represented by the tips or terminal nodes of the tree that are progressively connected by branches to their common ancestors, which are represented by the branching points of the tree that are usually referred to as the ancestral or internal nodes. Eventually, all lineages converge to the most recent common ancestor of the entire sample of taxa. In the context of ancestral reconstruction, a phylogeny is often treated as though it were a known quantity (with Bayesian approaches being an important exception). Because there can be an enormous number of phylogenies that are nearly equally effective at explaining the data, reducing the subset of phylogenies supported by the data to a single representative, or point estimate, can be a convenient and sometimes necessary simplifying assumption. Ancestral reconstruction can be thought of as the direct result of applying a hypothetical model of evolution to a given phylogeny. When the model contains one or more free parameters, the overall objective is to estimate these parameters on the basis of measured characteristics among the observed taxa (sequences) that descended from common ancestors. Parsimony is an important exception to this paradigm: though it has been shown that there are circumstances under which it is the maximum likelihood estimator, at its core, it is simply based on the heuristic that changes in character state are rare, without attempting to quantify that rarity. There are three different classes of method for ancestral reconstruction. In chronological order of discovery, these are maximum parsimony, maximum likelihood, and Bayesian inference. Maximum parsimony considers all evolutionary events equally likely; maximum likelihood accounts for the differing likelihood of certain classes of event; and Bayesian inference relates the conditional probability of an event to the likelihood of the tree, as well as the amount of uncertainty that is associated with that tree.
Maximum parsimony and maximum likelihood yield a single most probable outcome, whereas Bayesian inference accounts for uncertainties in the data and yields a sample of possible trees. Maximum parsimony Parsimony, known colloquially as "Occam's razor", refers to the principle of selecting the simplest of competing hypotheses. In the context of ancestral reconstruction, parsimony endeavours to find the distribution of ancestral states within a given tree which minimizes the total number of character state changes that would be necessary to explain the states observed at the tips of the tree. This method of maximum parsimony is one of the earliest formalized algorithms for reconstructing ancestral states, as well as one of the simplest. Maximum parsimony can be implemented by one of several algorithms. One of the earliest examples is Fitch's method, which assigns ancestral character states by parsimony via two traversals of a rooted binary tree. The first stage is a post-order traversal that proceeds from the tips toward the root of a tree by visiting descendant (child) nodes before their parents. Initially, we are determining the set of possible character states Si for the i-th ancestor based on the observed character states of its descendants. Each assignment is the set intersection of the character states of the ancestor's descendants; if the intersection is the empty set, then it is the set union. In the latter case, it is implied that a character state change has occurred between the ancestor and one of its two immediate descendants. Each such event counts towards the algorithm's cost function, which may be used to discriminate among alternative trees on the basis of maximum parsimony. Next, a pre-order traversal of the tree is performed, proceeding from the root towards the tips. Character states are then assigned to each descendant based on which character states it shares with its parent. Since the root has no parent node, one may be required to select a character state arbitrarily, specifically when more than one possible state has been reconstructed at the root. For example, consider a phylogeny recovered for a genus of plants containing 6 species A - F, where each plant is pollinated by either a "bee", "hummingbird" or "wind". One obvious question is what the pollinators at deeper nodes were in the phylogeny of this genus of plants. Under maximum parsimony, an ancestral state reconstruction for this clade reveals that "hummingbird" is the most parsimonious ancestral state for the lower clade (plants D, E, F), that the ancestral states for the nodes in the top clade (plants A, B, C) are equivocal and that both "hummingbird" or "bee" pollinators are equally plausible for the pollination state at the root of the phylogeny. Supposing we have strong evidence from the fossil record that the root state is "hummingbird". Resolution of the root to "hummingbird" would yield the pattern of ancestral state reconstruction depicted by the symbols at the nodes with the state requiring the fewest changes circled. Parsimony methods are intuitively appealing and highly efficient, such that they are still used in some cases to seed maximum likelihood optimization algorithms with an initial phylogeny. However, the underlying assumption that evolution attained a certain end result as fast as possible is inaccurate. Natural selection and evolution do not work towards a goal, they simply select for or against randomly occurring genetic changes. 
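Fitch's two-pass method described above is compact enough to sketch directly. The following Python code is illustrative only (the tree encoding, function names and the toy pollinator data are assumptions of this sketch, not a fixed interface); it works on a rooted binary tree whose leaves carry observed character states and returns one most-parsimonious assignment together with the number of state changes counted during the bottom-up pass:

def fitch(tree, leaf_states):
    """Assign ancestral states on a rooted binary tree by Fitch parsimony.

    tree: dict mapping each internal node to its pair of children.
    leaf_states: dict mapping each leaf to its observed character state.
    Returns (assignment, number_of_changes).
    """
    sets, changes = {}, 0

    def down(node):                       # post-order pass: leaves toward the root
        nonlocal changes
        if node in leaf_states:
            sets[node] = {leaf_states[node]}
            return
        left, right = tree[node]
        down(left)
        down(right)
        common = sets[left] & sets[right]
        if common:
            sets[node] = common           # intersection when it is non-empty
        else:
            sets[node] = sets[left] | sets[right]   # otherwise union; one change counted
            changes += 1

    def up(node, parent_state):           # pre-order pass: root toward the leaves
        state = parent_state if parent_state in sets[node] else min(sets[node])
        assignment[node] = state
        for child in tree.get(node, ()):
            up(child, state)

    root = next(n for n in tree if all(n not in kids for kids in tree.values()))
    down(root)
    assignment = {}
    up(root, min(sets[root]))             # arbitrary (here: lexicographically smallest) root choice
    return assignment, changes

# Hypothetical pollinator data in the spirit of the example above: two clades of three species.
tree = {"root": ("n1", "n2"), "n1": ("A", "BC"), "BC": ("B", "C"),
        "n2": ("D", "EF"), "EF": ("E", "F")}
states = {"A": "bee", "B": "bee", "C": "hummingbird",
          "D": "hummingbird", "E": "hummingbird", "F": "wind"}
print(fitch(tree, states))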
Parsimony methods impose six general assumptions: that the phylogenetic tree you are using is correct, that you have all of the relevant data, in which no mistakes were made in coding, that all branches of the phylogenetic tree are equally likely to change, that the rate of evolution is slow, and that the chance of losing or gaining a characteristic is the same. In reality, assumptions are often violated, leading to several issues: Variation in rates of evolution. Fitch's method assumes that changes between all character states are equally likely to occur; thus, any change incurs the same cost for a given tree. This assumption is often unrealistic and can limit the accuracy of such methods. For example, transitions tend to occur more often than transversions in the evolution of nucleic acids. This assumption can be relaxed by assigning differential costs to specific character state changes, resulting in a weighted parsimony algorithm. Rapid evolution. The upshot of the "minimum evolution" heuristic underlying such methods is that such methods assume that changes are rare, and thus are inappropriate in cases where change is the norm rather than the exception. Variation in time among lineages. Parsimony methods implicitly assume that the same amount of evolutionary time has passed along every branch of the tree. Thus, they do not account for variation in branch lengths in the tree, which are often used to quantify the passage of evolutionary or chronological time. This limitation makes the technique liable to infer that one change occurred on a very short branch rather than multiple changes occurring on a very long branch, for example. In addition, it is possible that some branches of the tree could be experiencing higher selection and change rates than others, perhaps due to changing environmental factors. Some periods of time may represent more rapid evolution than others, when this happens parsimony becomes inaccurate. This shortcoming is addressed by model-based methods (both maximum likelihood and Bayesian methods) that infer the stochastic process of evolution as it unfolds along each branch of a tree. Statistical justification. Without a statistical model underlying the method, its estimates do not have well-defined uncertainties. Convergent evolution. When considering a single character state, parsimony will automatically assume that two organisms that share that characteristic will be more closely related than those who do not. For example, just because dogs and apes have fur does not mean that they are more closely related than apes are to humans. Maximum likelihood Maximum likelihood (ML) methods of ancestral state reconstruction treat the character states at internal nodes of the tree as parameters, and attempt to find the parameter values that maximize the probability of the data (the observed character states) given the hypothesis (a model of evolution and a phylogeny relating the observed sequences or taxa). In other words, this method assumes that the ancestral states are those which are statistically most likely, given the observed phenotypes. Some of the earliest ML approaches to ancestral reconstruction were developed in the context of genetic sequence evolution; similar models were also developed for the analogous case of discrete character evolution. The use of a model of evolution accounts for the fact that not all events are equally likely to happen. 
For example, a transition, which is a type of point mutation from one purine to another, or from one pyrimidine to another, is much more likely to happen than a transversion, which is the substitution of a purine for a pyrimidine, or vice versa. These differences are not captured by maximum parsimony. However, just because some events are more likely than others does not mean that they always happen. We know that throughout evolutionary history there have been times when there was a large gap between what was most likely to happen, and what actually occurred. When this is the case, maximum parsimony may actually be more accurate because it is more willing to make large, unlikely leaps than maximum likelihood is. Maximum likelihood has been shown to be quite reliable in reconstructing character states, but it does not do as good a job of giving accurate estimates of the stability of proteins. Maximum likelihood always overestimates the stability of proteins, which makes sense since it assumes that the proteins that were made and used were the most stable and optimal. The merits of maximum likelihood have been subject to debate, with some having concluded that the maximum likelihood approach represents a good medium between accuracy and speed. However, other studies have complained that maximum likelihood takes too much time and computational power to be useful in some scenarios. These approaches employ the same probabilistic framework as used to infer the phylogenetic tree. In brief, the evolution of a genetic sequence is modelled by a time-reversible continuous-time Markov process. In the simplest of these, all characters undergo independent state transitions (such as nucleotide substitutions) at a constant rate over time. This basic model is frequently extended to allow different rates on each branch of the tree. In reality, mutation rates may also vary over time (due, for example, to environmental changes); this can be modelled by allowing the rate parameters to evolve along the tree, at the expense of having an increased number of parameters. A model defines transition probabilities from state i to state j along a branch of length t (in units of evolutionary time). The likelihood of a phylogeny is computed from a nested sum of transition probabilities that corresponds to the hierarchical structure of the proposed tree. At each node, the likelihood of its descendants is summed over all possible ancestral character states at that node:

L_x(σ_x) = [ Σ_{σ_y ∈ S} P(σ_y | σ_x, t_xy) · L_y(σ_y) ] · [ Σ_{σ_z ∈ S} P(σ_z | σ_x, t_xz) · L_z(σ_z) ]

where L_x(σ_x) is the likelihood of the subtree rooted at node x with direct descendants y and z, σ_i denotes the character state of the i-th node, t_ij is the branch length (evolutionary time) between nodes i and j, and S is the set of all possible character states (for example, the nucleotides A, C, G, and T). Thus, the objective of ancestral reconstruction is to find the assignment of σ_x for all internal nodes x that maximizes the likelihood of the observed data for a given tree. Marginal and joint likelihood Rather than compute the overall likelihood for alternative trees, the problem for ancestral reconstruction is to find the combination of character states at each ancestral node with the highest marginal maximum likelihood. Generally speaking, there are two approaches to this problem. First, one can assign the most likely character state to each ancestor independently of the reconstruction of all other ancestral states. This approach is referred to as marginal reconstruction.
It is akin to summing over all combinations of ancestral states at all of the other nodes of the tree (including the root node), other than those for which data is available. Marginal reconstruction is finding the state at the current node that maximizes the likelihood integrating over all other states at all nodes, in proportion to their probability. Second, one may instead attempt to find the joint combination of ancestral character states throughout the tree which jointly maximizes the likelihood of the entire dataset. Thus, this approach is referred to as joint reconstruction. Not surprisingly, joint reconstruction is more computationally complex than marginal reconstruction. Nevertheless, efficient algorithms for joint reconstruction have been developed with a time complexity that is generally linear in the number of observed taxa or sequences. ML-based methods of ancestral reconstruction tend to provide greater accuracy than MP methods in the presence of variation in rates of evolution among characters (or across sites in a genome). However, these methods are not yet able to accommodate variation in rates of evolution over time, otherwise known as heterotachy. If the rate of evolution for a specific character accelerates on a branch of the phylogeny, then the amount of evolution that has occurred on that branch will be underestimated for a given length of the branch and assuming a constant rate of evolution for that character. In addition to that, it is difficult to distinguish heterotachy from variation among characters in rates of evolution. Since ML (unlike maximum parsimony) requires the investigator to specify a model of evolution, its accuracy may be affected by the use of a grossly incorrect model (model misspecification). Furthermore, ML can only provide a single reconstruction of character states (what is often referred to as a "point estimate") — when the likelihood surface is highly non-convex, comprising multiple peaks (local optima), then a single point estimate cannot provide an adequate representation, and a Bayesian approach may be more suitable. Bayesian inference Bayesian inference uses the likelihood of observed data to update the investigator's belief, or prior distribution, to yield the posterior distribution. In the context of ancestral reconstruction, the objective is to infer the posterior probabilities of ancestral character states at each internal node of a given tree. Moreover, one can integrate these probabilities over the posterior distributions over the parameters of the evolutionary model and the space of all possible trees. This can be expressed as an application of Bayes' theorem:

P(S | D, θ) = P(D | S, θ) · P(S | θ) / P(D | θ)

where S represents the ancestral states, D corresponds to the observed data, and θ represents both the evolutionary model and the phylogenetic tree. P(D | S, θ) is the likelihood of the observed data, which can be computed by Felsenstein's pruning algorithm as given above. P(S | θ) is the prior probability of the ancestral states for a given model and tree. Finally, P(D | θ) is the probability of the data for a given model and tree, integrated over all possible ancestral states. Averaging over the uncertainty in the model and tree gives a second, fully hierarchical formulation:

P(S | D) = ∫ P(S | D, θ) · P(θ | D) dθ

Bayesian inference is the method that many have argued is the most accurate. In general, Bayesian statistical methods allow investigators to combine pre-existing information with new hypotheses. In the case of evolution, it combines the likelihood of the data observed with the likelihood that the events happened in the order they did, while recognizing the potential for error and uncertainty.
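As a toy numerical illustration of the first formulation above (the numbers are invented for illustration and do not come from any study), the posterior probability of each candidate ancestral state is simply the normalized product of its prior and its likelihood:

# Hypothetical two-state example: prior and likelihood for one ancestral node.
prior = {"bee": 0.5, "hummingbird": 0.5}              # P(S | theta)
likelihood = {"bee": 0.010, "hummingbird": 0.030}     # P(D | S, theta), e.g. from the pruning algorithm
unnormalized = {s: prior[s] * likelihood[s] for s in prior}
evidence = sum(unnormalized.values())                  # P(D | theta)
posterior = {s: v / evidence for s, v in unnormalized.items()}
print(posterior)   # {'bee': 0.25, 'hummingbird': 0.75}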
Overall, it is the most accurate method for reconstructing ancestral genetic sequences, as well as protein stability. Unlike the other two methods, Bayesian inference yields a distribution of possible trees, allowing for more accurate and easily interpretable estimates of the variance of possible outcomes. We have given two formulations above to emphasize the two different applications of Bayes' theorem, which we discuss in the following section. Empirical and hierarchical Bayes One of the first implementations of a Bayesian approach to ancestral sequence reconstruction was developed by Yang and colleagues, where the maximum likelihood estimates of the evolutionary model and tree, respectively, were used to define the prior distributions. Thus, their approach is an example of an empirical Bayes method to compute the posterior probabilities of ancestral character states; this method was first implemented in the software package PAML. In terms of the above Bayesian rule formulation, the empirical Bayes method fixes θ to the empirical estimates of the model and tree obtained from the data, effectively dropping θ from the posterior, likelihood, and prior terms of the formula. Moreover, Yang and colleagues used the empirical distribution of site patterns (i.e., assignments of nucleotides to tips of the tree) in their alignment of observed nucleotide sequences in the denominator, in place of exhaustively computing P(D | θ) over all possible values of S given θ. Computationally, the empirical Bayes method is akin to the maximum likelihood reconstruction of ancestral states except that, rather than searching for the ML assignment of states based on their respective probability distributions at each internal node, the probability distributions themselves are reported directly. Empirical Bayes methods for ancestral reconstruction require the investigator to assume that the evolutionary model parameters and tree are known without error. When the size or complexity of the data makes this an unrealistic assumption, it may be more prudent to adopt the fully hierarchical Bayesian approach and infer the joint posterior distribution over the ancestral character states, model, and tree. Huelsenbeck and Bollback first proposed a hierarchical Bayes method for ancestral reconstruction by using Markov chain Monte Carlo (MCMC) methods to sample ancestral sequences from this joint posterior distribution. A similar approach was also used to reconstruct the evolution of symbiosis with algae in fungal species (lichenization). For example, the Metropolis-Hastings algorithm for MCMC explores the joint posterior distribution by accepting or rejecting parameter assignments on the basis of the ratio of posterior probabilities. Put simply, the empirical Bayes approach calculates the probabilities of various ancestral states for a specific tree and model of evolution. By expressing the reconstruction of ancestral states as a set of probabilities, one can directly quantify the uncertainty for assigning any particular state to an ancestor. On the other hand, the hierarchical Bayes approach averages these probabilities over all possible trees and models of evolution, in proportion to how likely these trees and models are, given the data that has been observed. Whether the hierarchical Bayes method confers a substantial advantage in practice remains controversial, however.
Moreover, this fully Bayesian approach is limited to analyzing relatively small numbers of sequences or taxa because the space of all possible trees rapidly becomes too vast, making it computationally infeasible for chain samples to converge in a reasonable amount of time. Calibration Ancestral reconstruction can be informed by the observed states in historical samples of known age, such as fossils or archival specimens. Since the accuracy of ancestral reconstruction generally decays with increasing time, the use of such specimens provides data that are closer to the ancestors being reconstructed and will most likely improve the analysis, especially when rates of character change vary through time. This concept has been validated by an experimental evolutionary study in which replicate populations of bacteriophage T7 were propagated to generate an artificial phylogeny. In revisiting these experimental data, Oakley and Cunningham found that maximum parsimony methods were unable to accurately reconstruct the known ancestral state of a continuous character (plaque size); these results were verified by computer simulation. This failure of ancestral reconstruction was attributed to a directional bias in the evolution of plaque size (from large to small plaque diameters) that required the inclusion of "fossilized" samples to address. Studies of both mammalian carnivores and fishes have demonstrated that without incorporating fossil data, the reconstructed estimates of ancestral body sizes are unrealistically large. Moreover, Graham Slater and colleagues showed using caniform carnivorans that incorporating fossil data into prior distributions improved both the Bayesian inference of ancestral states and evolutionary model selection, relative to analyses using only contemporaneous data. Models Many models have been developed to estimate ancestral states of discrete and continuous characters from extant descendants. Such models assume that the evolution of a trait through time may be modelled as a stochastic process. For discrete-valued traits (such as "pollinator type"), this process is typically taken to be a Markov chain; for continuous-valued traits (such as "brain size"), the process is frequently taken to be a Brownian motion or an Ornstein-Uhlenbeck process. Using this model as the basis for statistical inference, one can now use maximum likelihood methods or Bayesian inference to estimate the ancestral states. Discrete-state models Suppose the trait in question may fall into one of k states, labelled 1, 2, ..., k. The typical means of modelling evolution of this trait is via a continuous-time Markov chain, which may be briefly described as follows. Each state has associated to it rates of transition to all of the other states. The trait is modelled as stepping between the states; when it reaches a given state, it starts an exponential "clock" for each of the other states that it can step to. It then "races" the clocks against each other, and it takes a step towards the state whose clock is the first to ring. In such a model, the parameters are the pairwise transition rates, which can be estimated using, for example, maximum likelihood methods, where one maximizes over the set of all possible configurations of states of the ancestral nodes.
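The continuous-time Markov chain just described is convenient to work with numerically: given a rate matrix, the transition probabilities along a branch of length t are obtained with a matrix exponential. The sketch below is a minimal two-state illustration using NumPy and SciPy (the rates, branch lengths and tip states are made up for the example; this is an illustration of the model, not any particular software package), and it also shows the maximum-likelihood choice of a root state for a two-tip tree with fixed rates, anticipating the procedure described next:

import numpy as np
from scipy.linalg import expm

# Hypothetical two-state trait (0 and 1) with asymmetric transition rates.
q01, q10 = 0.3, 0.1
Q = np.array([[-q01, q01],
              [ q10, -q10]])            # rows sum to zero; off-diagonals are the rates

def transition_probabilities(t):
    """P[i, j] = probability of ending in state j after time t, starting in state i."""
    return expm(Q * t)

# A root with two tips observed in states 0 and 1, at branch lengths 0.5 and 2.0.
P_left, P_right = transition_probabilities(0.5), transition_probabilities(2.0)
tip_left, tip_right = 0, 1
likelihood = {s: P_left[s, tip_left] * P_right[s, tip_right] for s in (0, 1)}
ml_root_state = max(likelihood, key=likelihood.get)
print(likelihood, ml_root_state)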
In order to recover the state of a given ancestral node in the phylogeny (call this node x) by maximum likelihood, the procedure is: find the maximum likelihood estimates of the transition rates; then compute the likelihood of each possible state for x, conditioning on those estimated rates; finally, choose the ancestral state which maximizes this likelihood. One may also use this substitution model as the basis for a Bayesian inference procedure, which would consider the posterior belief in the state of an ancestral node given some user-chosen prior. Because such models may have as many as k(k - 1) parameters, overfitting may be an issue. Some common choices that reduce the parameter space are: Markov k-state 1 parameter model: this model is the reverse-in-time k-state counterpart of the Jukes-Cantor model. In this model, all transitions have the same single rate, regardless of their start and end states. Some transitions may be disallowed by declaring that their rates are simply 0; this may be the case, for example, if certain states cannot be reached from other states in a single transition. Asymmetrical Markov k-state 2 parameter model: in this model, the state space is ordered (so that, for example, state 1 is smaller than state 2, which is smaller than state 3), and transitions may only occur between adjacent states. This model contains two parameters: one for the rate of increase of state (e.g. 0 to 1, 1 to 2, etc.), and one for the rate of decrease in state (e.g. from 2 to 1, 1 to 0, etc.). Example: Binary state speciation and extinction model The binary state speciation and extinction model (BiSSE) is a discrete-space model that does not directly follow the framework of those mentioned above. It allows estimation of ancestral binary character states jointly with diversification rates associated with different character states; it may also be straightforwardly extended to a more general multiple-discrete-state model. In its most basic form, this model involves six parameters: two speciation rates (one each for lineages in states 0 and 1); similarly, two extinction rates; and two rates of character change. This model allows for hypothesis testing on the rates of speciation/extinction/character change, at the cost of increasing the number of parameters. Continuous-state models In the case where the trait instead takes non-discrete values, one must instead turn to a model where the trait evolves as some continuous process. Inference of ancestral states by maximum likelihood (or by Bayesian methods) would proceed as above, but with the likelihoods of transitions in state between adjacent nodes given by some other continuous probability distribution. Brownian motion: in this case, if nodes u and v are adjacent in the phylogeny (say u is the ancestor of v) and separated by a branch of length t, the likelihood of a transition from u being in state x to v being in state y is given by a Gaussian density with mean x and variance σ²·t. In this case, there is only one parameter (σ²), and the model assumes that the trait evolves freely without a bias toward increase or decrease, and that the rate of change is constant throughout the branches of the phylogenetic tree. Ornstein-Uhlenbeck process: in brief, an Ornstein-Uhlenbeck process is a continuous stochastic process that behaves like a Brownian motion, but attracted toward some central value, where the strength of the attraction increases with the distance from that value. This is useful for modelling scenarios where the trait is subject to stabilizing selection around a certain value (say θ).
Under this model, the above-described transition of the ancestor being in state x to the descendant being in state y would have a likelihood defined by the transition density of an Ornstein-Uhlenbeck process with two parameters: σ², which describes the variance of the driving Brownian motion, and α, which describes the strength of its attraction to θ. As α tends to 0, the process is less and less constrained by its attraction to θ and the process becomes a Brownian motion. Because of this, the models may be nested, and log-likelihood ratio tests discerning which of the two models is appropriate may be carried out. Stable models of continuous character evolution: though Brownian motion is appealing and tractable as a model of continuous evolution, it does not permit non-neutrality in its basic form, nor does it provide for any variation in the rate of evolution over time. Instead, one may use a stable process, one whose values at fixed times are distributed as stable distributions, to model the evolution of traits. Stable processes, roughly speaking, behave as Brownian motions that also incorporate discontinuous jumps. This allows one to appropriately model scenarios in which short bursts of fast trait evolution are expected. In this setting, maximum likelihood methods are poorly suited due to a rugged likelihood surface and because the likelihood may be made arbitrarily large, so Bayesian methods are more appropriate. Applications Character evolution Ancestral reconstruction is widely used to infer the ecological, phenotypic, or biogeographic traits associated with ancestral nodes in a phylogenetic tree. All methods of ancestral trait reconstruction have pitfalls, as they use mathematical models to predict how traits have changed with large amounts of missing data. This missing data includes the states of extinct species, the relative rates of evolutionary changes, knowledge of initial character states, and the accuracy of phylogenetic trees. In all cases where ancestral trait reconstruction is used, findings should be justified with an examination of the biological data that supports model-based conclusions (Griffith O.W. et al.). Ancestral reconstruction allows for the study of evolutionary pathways, adaptive selection, developmental gene expression, and functional divergence of the evolutionary past. For a review of biological and computational techniques of ancestral reconstruction see Chang et al. For criticism of ancestral reconstruction computation methods see Williams P.D. et al. Behavior and life history evolution In horned lizards (genus Phrynosoma), viviparity (live birth) has evolved multiple times, based on ancestral reconstruction methods. Diet reconstruction in Galapagos finches Both phylogenetic and character data are available for the radiation of finches inhabiting the Galapagos Islands. These data allow testing of hypotheses concerning the timing and ordering of character state changes through time via ancestral state reconstruction. During the dry season, the diets of the 13 species of Galapagos finches may be assorted into three broad diet categories: first, those that consume grain-like foods are considered "granivores"; those that ingest arthropods are termed "insectivores"; and those that consume vegetation are classified as "folivores". Dietary ancestral state reconstruction using maximum parsimony recovers two major shifts from an insectivorous state: one to granivory, and one to folivory.
Maximum-likelihood ancestral state reconstruction recovers broadly similar results, with one significant difference: the common ancestor of the tree finch (Camarhynchus) and ground finch (Geospiza) clades are most likely granivorous rather than insectivorous (as judged by parsimony). In this case, this difference between ancestral states returned by maximum parsimony and maximum likelihood likely occurs as a result of the fact that ML estimates consider branch lengths of the phylogenetic tree. Morphological and physiological character evolution Phrynosomatid lizards show remarkable morphological diversity, including in the relative muscle fiber type composition in their hindlimb muscles. Ancestor reconstruction based on squared-change parsimony (equivalent to maximum likelihood under Brownian motion character evolution) indicates that horned lizards, one of the three main subclades of the lineage, have undergone a major evolutionary increase in the proportion of fast-oxidative glycolytic fibers in their iliofibularis muscles. Mammalian body mass In an analysis of the body mass of 1,679 placental mammal species comparing stable models of continuous character evolution to Brownian motion models, Elliot and Mooers showed that the evolutionary process describing mammalian body mass evolution is best characterized by a stable model of continuous character evolution, which accommodates rare changes of large magnitude. Under a stable model, ancestral mammals retained a low body mass through early diversification, with large increases in body mass coincident with the origin of several Orders of large body massed species (e.g. ungulates). By contrast, simulation under a Brownian motion model recovered a less realistic, order of magnitude larger body mass among ancestral mammals, requiring significant reductions in body size prior to the evolution of Orders exhibiting small body size (e.g. Rodentia). Thus stable models recover a more realistic picture of mammalian body mass evolution by permitting large transformations to occur on a small subset of branches. Correlated character evolution Phylogenetic comparative methods (inferences drawn through comparison of related taxa) are often used to identify biological characteristics that do not evolve independently, which can reveal an underlying dependence. For example, the evolution of the shape of a finch's beak may be associated with its foraging behaviour. However, it is not advisable to search for these associations by the direct comparison of measurements or genetic sequences because these observations are not independent because of their descent from common ancestors. For discrete characters, this problem was first addressed in the framework of maximum parsimony by evaluating whether two characters tended to undergo a change on the same branches of the tree. Felsenstein identified this problem for continuous character evolution and proposed a solution similar to ancestral reconstruction, in which the phylogenetic structure of the data was accommodated statistically by directing the analysis through computation of "independent contrasts" between nodes of the tree related by non-overlapping branches. Molecular evolution On a molecular level, amino acid residues at different locations of a protein may evolve non-independently because they have a direct physicochemical interaction, or indirectly by their interactions with a common substrate or through long-range interactions in the protein structure. 
Conversely, the folded structure of a protein could potentially be inferred from the distribution of residue interactions. One of the earliest applications of ancestral reconstruction, to predict the three-dimensional structure of a protein through residue contacts, was published by Shindyalov and colleagues. Phylogenies relating 67 different protein families were generated by a distance-based clustering method (unweighted pair group method with arithmetic mean, UPGMA), and ancestral sequences were reconstructed by parsimony. The authors reported a weak but significant tendency for co-evolving pairs of residues to be co-located in the known three-dimensional structure of the proteins. The reconstruction of ancient proteins and DNA sequences has only recently become a significant scientific endeavour. The developments of extensive genomic sequence databases in conjunction with advances in biotechnology and phylogenetic inference methods have made ancestral reconstruction cheap, fast, and scientifically practical. This concept has been applied to identify co-evolving residues in protein sequences using more advanced methods for the reconstruction of phylogenies and ancestral sequences. For example, ancestral reconstruction has been used to identify co-evolving residues in proteins encoded by RNA virus genomes, particularly in HIV. Ancestral protein and DNA reconstruction allows for the recreation of protein and DNA evolution in the laboratory so that it can be studied directly. With respect to proteins, this allows for the investigation of the evolution of present-day molecular structure and function. Additionally, ancestral protein reconstruction can lead to the discoveries of new biochemical functions that have been lost in modern proteins. It also allows insights into the biology and ecology of extinct organisms. Although the majority of ancestral reconstructions have dealt with proteins, it has also been used to test evolutionary mechanisms at the level of bacterial genomes and primate gene sequences. Vaccine design RNA viruses such as the human immunodeficiency virus (HIV) evolve at an extremely rapid rate, orders of magnitude faster than mammals or birds. For these organisms, ancestral reconstruction can be applied on a much shorter time scale; for example, in order to reconstruct the global or regional progenitor of an epidemic that has spanned decades rather than millions of years. A team around Brian Gaschen proposed that such reconstructed strains be used as targets for vaccine design efforts, as opposed to sequences isolated from patients in the present day. Because HIV is extremely diverse, a vaccine designed to work on one patient's viral population might not work for a different patient, because the evolutionary distance between these two viruses may be large. However, their most recent common ancestor is closer to each of the two viruses than they are to each other. Thus, a vaccine designed for a common ancestor could have a better chance of being effective for a larger proportion of circulating strains. Another team took this idea further by developing a center-of-tree reconstruction method to produce a sequence whose total evolutionary distance to contemporary strains is as small as possible. Strictly speaking, this method was not ancestral reconstruction, as the center-of-tree (COT) sequence does not necessarily represent a sequence that has ever existed in the evolutionary history of the virus. 
However, Rolland and colleagues did find that, in the case of HIV, the COT virus was functional when synthesized. Similar experiments with synthetic ancestral sequences obtained by maximum likelihood reconstruction have likewise shown that these ancestors are both functional and immunogenic, lending some credibility to these methods. Furthermore, ancestral reconstruction can potentially be used to infer the genetic sequence of the transmitted HIV variants that have gone on to establish the next infection, with the objective of identifying distinguishing characteristics of these variants (as a non-random selection of the transmitted population of viruses) that may be targeted for vaccine design. Genome rearrangements Rather than inferring the ancestral DNA sequence, one may be interested in the larger-scale molecular structure and content of an ancestral genome. This problem is often approached in a combinatorial framework, by modelling genomes as permutations of genes or homologous regions. Various operations are allowed on these permutations, such as an inversion (a segment of the permutation is reversed in-place), deletion (a segment is removed), transposition (a segment is removed from one part of the permutation and spliced in somewhere else), or gain of genetic content through recombination, duplication or horizontal gene transfer. The "genome rearrangement problem", first posed by Watterson and colleagues, asks: given two genomes (permutations) and a set of allowable operations, what is the shortest sequence of operations that will transform one genome into the other? A generalization of this problem applicable to ancestral reconstruction is the "multiple genome rearrangement problem": given a set of genomes and a set of allowable operations, find (i) a binary tree with the given genomes as its leaves, and (ii) an assignment of genomes to the internal nodes of the tree, such that the total number of operations across the whole tree is minimized. This approach is similar to parsimony, except that the tree is inferred along with the ancestral sequences. Unfortunately, even the single genome rearrangement problem is NP-hard, although it has received much attention in mathematics and computer science (for a review, see Fertin and colleagues). The reconstruction of ancestral genomes is also called karyotype reconstruction. Chromosome painting is currently the main experimental technique. Recently, researchers have developed computational methods to reconstruct the ancestral karyotype by taking advantage of comparative genomics. Furthermore, comparative genomics and ancestral genome reconstruction has been applied to identify ancient horizontal gene transfer events at the last common ancestor of a lineage (e.g. Candidatus Accumulibacter phosphatis) to identify the evolutionary basis for trait acquisition. Spatial applications Migration Ancestral reconstruction is not limited to biological traits. Spatial location is also a trait, and ancestral reconstruction methods can infer the locations of ancestors of the individuals under consideration. Such techniques were used by Lemey and colleagues to geographically trace the ancestors of 192 Avian influenza A-H5N1 strains sampled from twenty localities in Europe and Asia, and for 101 rabies virus sequences sampled across twelve African countries. Treating locations as discrete states (countries, cities, etc.) allows for the application of the discrete-state models described above. 
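As a minimal illustration of such a discrete-state model, the sketch below builds a small continuous-time Markov chain over three locations and computes transition probabilities along a branch with SciPy's matrix exponential; the locations and migration rates are invented for illustration, whereas real analyses estimate the rate parameters from the sequence and location data.

import numpy as np
from scipy.linalg import expm

# Invented three-state rate matrix Q for migration between locations;
# each row sums to zero, as required for a CTMC generator.
states = ["Guangdong", "Hong Kong", "Europe"]
Q = np.array([[-0.30,  0.25,  0.05],
              [ 0.20, -0.22,  0.02],
              [ 0.01,  0.01, -0.02]])

t = 5.0                       # branch length (e.g. in years)
P = expm(Q * t)               # P[i, j] = probability of ending in state j given start in i

for state, p in zip(states, P[0]):
    print("P(start Guangdong -> %s | t=%.1f) = %.3f" % (state, t, p))

On a full tree, such per-branch transition matrices are combined by the pruning algorithm to compute the likelihood of ancestral locations at each internal node.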
However, unlike in a model where the state space for the trait is small, there may be many locations, and transitions between certain pairs of states may rarely or never occur; for example, migration between distant locales may never happen directly if air travel between the two places does not exist, so such migrations must pass through intermediate locales first. This means that there could be many parameters in the model which are zero or close to zero. To this end, Lemey and colleagues used a Bayesian procedure to not only estimate the parameters and ancestral states, but also to select which migration parameters are not zero; their work suggests that this procedure does lead to more efficient use of the data. They also explore the use of prior distributions that incorporate geographical structure or hypotheses about migration dynamics, finding that those they considered had little effect on the findings. Using this analysis, Lemey's team found that the most likely hub of diffusion of A-H5N1 is Guangdong, with Hong Kong also receiving posterior support. Further, their results support the hypothesis of a long-standing presence of African rabies in West Africa.
Species ranges
Inferring historical biogeographic patterns often requires reconstructing ancestral ranges of species on phylogenetic trees. For instance, a well-resolved phylogeny of plant species in the genus Cyrtandra was used together with information on their geographic ranges to compare four methods of ancestral range reconstruction. The team compared Fitch parsimony (FP; parsimony), stochastic mapping (SM; maximum likelihood), dispersal-vicariance analysis (DIVA; parsimony), and dispersal-extinction-cladogenesis (DEC; maximum likelihood). Results indicated that both parsimony methods performed poorly, likely because parsimony methods do not consider branch lengths. Both maximum-likelihood methods performed better; however, DEC analyses that additionally allow incorporation of geological priors gave more realistic inferences about range evolution in Cyrtandra relative to other methods. Another maximum likelihood method recovers the phylogeographic history of a gene by reconstructing the ancestral locations of the sampled taxa. This method assumes a spatially explicit random walk model of migration to reconstruct ancestral locations given the geographic coordinates of the individuals represented by the tips of the phylogenetic tree. When applied to a phylogenetic tree of the chorus frog Pseudacris feriarum, this method recovered recent northward expansion, higher per-generation dispersal distance in the recently colonized region, a non-central ancestral location, and directional migration.
The first consideration of the multiple genome rearrangement problem, long before its formalization in terms of permutations, was presented by Sturtevant and Dobzhansky in 1936. They examined genomes of several strains of fruit fly from different geographic locations, and observed that one configuration, which they called "standard", was the most common throughout all the studied areas. Remarkably, they also noticed that four different strains could be obtained from the standard sequence by a single inversion, and two others could be related by a second inversion. This allowed them to hypothesize a phylogeny for the sequences, and to infer that the standard sequence was probably also the ancestral one.
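The inversion reasoning in the Sturtevant and Dobzhansky example can be made concrete with a minimal Python sketch that treats gene orders as permutations; the gene labels and arrangements below are invented, and real rearrangement analyses use more sophisticated distances (for example, signed reversal distance).

def breakpoints(a, b):
    # Count adjacencies of a that are not preserved (in either orientation) in b.
    adjacencies_b = {frozenset(pair) for pair in zip(b, b[1:])}
    return sum(1 for pair in zip(a, a[1:]) if frozenset(pair) not in adjacencies_b)

def one_inversion_apart(a, b):
    # True if reversing a single contiguous segment of a yields b.
    n = len(a)
    return any(a[:i] + a[i:j][::-1] + a[j:] == b
               for i in range(n) for j in range(i + 2, n + 1))

standard = ["A", "B", "C", "D", "E", "F"]
variant  = ["A", "E", "D", "C", "B", "F"]      # the segment B..E has been reversed

print(breakpoints(standard, variant))           # 2
print(one_inversion_apart(standard, variant))   # True

Finding the minimum number of inversions separating two arbitrary gene orders, and doing so simultaneously over a whole tree, is the much harder optimization problem described above.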
Linguistic evolution
Reconstructions of the words and phonemes of ancient proto-languages such as Proto-Indo-European have been performed based on the observed analogues in present-day languages. Typically, these analyses are carried out manually using the "comparative method". First, words from different languages with a common etymology (cognates) are identified in the contemporary languages under study, analogous to the identification of orthologous biological sequences. Second, correspondences between individual sounds in the cognates are identified, a step similar to biological sequence alignment, although performed manually. Finally, likely ancestral sounds are hypothesised by manual inspection and various heuristics (such as the fact that most languages have both nasal and non-nasal vowels).
Software
There are many software packages available which can perform ancestral state reconstruction. Generally, these software packages have been developed and maintained through the efforts of scientists in related fields and released under free software licenses. The following is not meant to be a comprehensive itemization of all available packages, but provides a representative sample of the extensive variety of packages that implement methods of ancestral reconstruction with different strengths and features.
Package descriptions
Molecular evolution
The majority of these software packages are designed for analyzing genetic sequence data. For example, PAML is a collection of programs for the phylogenetic analysis of DNA and protein sequence alignments by maximum likelihood. Ancestral reconstruction can be performed using the codeml program. In addition, LAZARUS is a collection of Python scripts that wrap the ancestral reconstruction functions of PAML for batch processing and greater ease-of-use. Software packages such as MEGA, HyPhy, and Mesquite also perform phylogenetic analysis of sequence data, but are designed to be more modular and customizable. HyPhy implements a joint maximum likelihood method of ancestral sequence reconstruction that can be readily adapted to reconstructing a more generalized range of discrete ancestral character states, such as geographic locations, by specifying a customized model in its batch language. Mesquite provides ancestral state reconstruction methods for both discrete and continuous characters using both maximum parsimony and maximum likelihood methods. It also provides several visualization tools for interpreting the results of ancestral reconstruction. MEGA is a modular system, too, but places greater emphasis on ease of use than on customization of analyses. As of version 5, MEGA allows the user to reconstruct ancestral states using maximum parsimony, maximum likelihood, and empirical Bayes methods. The Bayesian analysis of genetic sequences may confer greater robustness to model misspecification. MrBayes allows inference of ancestral states at ancestral nodes using the full hierarchical Bayesian approach. The PREQUEL program distributed in the PHAST package performs comparative evolutionary genomics using ancestral sequence reconstruction. SIMMAP stochastically maps mutations on phylogenies. BayesTraits analyses discrete or continuous characters in a Bayesian framework to evaluate models of evolution, reconstruct ancestral states, and detect correlated evolution between pairs of traits.
Other character types
Other software packages are more oriented towards the analysis of qualitative and quantitative traits (phenotypes).
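Several of the packages above (for example, Mesquite and MEGA) implement maximum-parsimony reconstruction of discrete characters; the following is a minimal Python sketch of the underlying per-character step, Fitch's small-parsimony algorithm, on a toy rooted binary tree with invented tip states.

def fitch_sets(node, sets):
    # Bottom-up pass: store the candidate state set for every node in `sets`
    # and return the minimum number of state changes in the subtree.
    if isinstance(node, str):                      # tip: its observed state
        sets[id(node)] = {node}
        return 0
    changes = fitch_sets(node[0], sets) + fitch_sets(node[1], sets)
    s1, s2 = sets[id(node[0])], sets[id(node[1])]
    sets[id(node)] = (s1 & s2) or (s1 | s2)        # intersection if possible, else union
    return changes if s1 & s2 else changes + 1

def fitch_assign(node, sets, parent_state=None):
    # Top-down pass: pick one most-parsimonious state for each internal node.
    if isinstance(node, str):
        return []
    s = sets[id(node)]
    state = parent_state if parent_state in s else min(s)
    return [(node, state)] + fitch_assign(node[0], sets, state) + fitch_assign(node[1], sets, state)

# Rooted toy tree ((A,G),(G,(G,T))) written as nested pairs of tip nucleotides
tree = (("A", "G"), ("G", ("G", "T")))
sets = {}
print("minimum number of changes:", fitch_sets(tree, sets))
for clade, state in fitch_assign(tree, sets):
    print("ancestral state", state, "assigned to clade", clade)

Unlike the likelihood methods discussed earlier, this reconstruction ignores branch lengths, which is precisely the limitation of parsimony noted above.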
For example, the ape package in the statistical computing environment R also provides methods for ancestral state reconstruction for both discrete and continuous characters through its 'ace' function, including maximum likelihood. Phyrex implements a maximum parsimony-based algorithm to reconstruct ancestral gene expression profiles, in addition to a maximum likelihood method for reconstructing ancestral genetic sequences (by wrapping around the baseml function in PAML). Several software packages also reconstruct phylogeography. BEAST (Bayesian Evolutionary Analysis by Sampling Trees) provides tools for reconstructing ancestral geographic locations from observed sequences annotated with location data, using Bayesian MCMC sampling methods. Diversitree is an R package providing methods for ancestral state reconstruction under the Mk2 (a continuous-time Markov model of binary character evolution) and BiSSE (Binary State Speciation and Extinction) models. Lagrange performs analyses of geographic range evolution on phylogenetic trees. Phylomapper is a statistical framework for estimating historical patterns of gene flow and ancestral geographic locations. RASP infers ancestral states using statistical dispersal-vicariance analysis, Lagrange, Bayes-Lagrange, BayArea and BBM methods. VIP infers historical biogeography by examining disjunct geographic distributions. Genome rearrangements provide valuable information in comparative genomics between species. ANGES compares extant related genomes through ancestral reconstruction of genetic markers. BADGER uses a Bayesian approach to examining the history of gene rearrangement. Count reconstructs the evolution of the size of gene families. EREM analyses the gain and loss of genetic features encoded by binary characters. PARANA performs parsimony-based inference of ancestral biological networks that represent gene loss and duplication.
Web applications
Finally, there are several web-server based applications that allow investigators to use maximum likelihood methods for ancestral reconstruction of different character types without having to install any software. For example, Ancestors is a web-server for ancestral genome reconstruction by the identification and arrangement of syntenic regions. FastML is a web-server for probabilistic reconstruction of ancestral sequences by maximum likelihood that uses a gap character model for reconstructing indel variation. MLGO is a web-server for maximum likelihood gene order analysis.
Future directions
The development and application of computational algorithms for ancestral reconstruction continues to be an active area of research across disciplines. For example, the reconstruction of sequence insertions and deletions (indels) has lagged behind the more straightforward application of substitution models. Bouchard-Côté and Jordan recently described a new model (the Poisson Indel Process) which represents an important advance on the archetypal Thorne-Kishino-Felsenstein model of indel evolution. In addition, the field is being driven forward by rapid advances in the area of next-generation sequencing technology, where sequences are generated from millions of nucleic acid templates by extensive parallelization of sequencing reactions in a custom apparatus. These advances have made it possible to generate a "deep" snapshot of the genetic composition of a rapidly evolving population, such as RNA viruses or tumour cells, in a relatively short amount of time.
At the same time, the massive amount of data and platform-specific sequencing error profiles has created new bioinformatic challenges for processing these data for ancestral sequence reconstruction. See also Evolutionary biology Origin of life Enzyme promiscuity References Evolutionary biology
234921
https://en.wikipedia.org/wiki/Mobile%20payment
Mobile payment
Mobile payment (also referred to as mobile money, mobile money transfer, and mobile wallet) generally refers to payment services operated under financial regulation and performed from or via a mobile device. Instead of paying with cash, cheque, or credit cards, a consumer can use a mobile device to pay for a wide range of services and digital or hard goods. Although the concept of using non-coin-based currency systems has a long history, it is only in the 21st century that the technology to support such systems has become widely available. Mobile payment is being adopted all over the world in different ways. The first patent that exclusively defined a "Mobile Payment System" was filed in 2000. In developing countries, mobile payment solutions have been deployed as a means of extending financial services to the community known as the "unbanked" or "underbanked", which is estimated to be as much as 50% of the world's adult population, according to Financial Access' 2009 Report "Half the World is Unbanked". These payment networks are often used for micropayments. The use of mobile payments in developing countries has attracted public and private funding by organizations such as the Bill & Melinda Gates Foundation, the United States Agency for International Development and Mercy Corps. Mobile payments are becoming a key instrument for payment service providers (PSPs) and other market participants in order to achieve new growth opportunities, according to the European Payments Council (EPC). The EPC states that "new technology solutions provide a direct improvement to the operations efficiency, ultimately resulting in cost savings and in an increase in business volume".
Models
There are four primary models for mobile payments: the bank-centric model, the operator-centric model, the collaborative model, and the independent service provider (ISP) model. In the bank-centric and operator-centric models, a bank or the operator is the central node of the model, managing the transactions and distributing the property rights. In the collaborative model, financial intermediaries and telephone operators collaborate on the management tasks and cooperatively share the proprietary rights. In the ISP model, a trusted third party operates as an independent and "neutral" intermediary between financial agents and operators; Apple Pay and PayPal are the ISPs most frequently associated with this model in recent months. There can also be combinations of two models, such as the operator/bank co-operation emerging in Haiti. Financial institutions and credit card companies, as well as Internet companies such as Google, a number of mobile communication companies such as mobile network operators, major telecommunications infrastructure providers such as w-HA from Orange, and smartphone multinationals such as Ericsson and BlackBerry, have implemented mobile payment solutions.
Mobile wallets
A mobile wallet is an app that contains the user's debit and credit card information, letting them pay for goods and services digitally with their mobile devices. Notable mobile wallets include Alipay, Apple Pay, BHIM, Cloud QuickPass, Google Pay, Gyft, LG Pay, Mi Pay, Line Pay, Samsung Pay, Venmo, WeChat Pay, Touch 'n Go eWallet, PhonePe, Paytm, and Amazon Pay.
Credit card
A simple mobile web payment system can also include a credit card payment flow allowing a consumer to enter their card details to make purchases. This process is familiar, but any entry of details on a mobile phone is known to reduce the success rate (conversion) of payments.
In addition, if the payment vendor can automatically and securely identify customers, then card details can be recalled for future purchases, turning credit card payments into simple single-click purchases and giving higher conversion rates for additional purchases. However, there are concerns regarding information and payment privacy when cards are used during online transactions. If a website is not secure, for example, then personal credit card information can leak online.
Carrier billing
The consumer uses the mobile billing option during checkout at an e-commerce site—such as an online gaming site—to make a payment. After two-factor authentication involving the consumer's mobile number and a PIN or one-time password (often abbreviated as OTP), the consumer's mobile account is charged for the purchase. It is a true alternative payment method that does not require the use of credit/debit cards or pre-registration at an online payment solution such as PayPal, thus bypassing banks and credit card companies altogether. This type of mobile payment method, which is prevalent in Asia, provides the following benefits:
Security – two-factor authentication and a risk management engine prevent fraud.
Convenience – no pre-registration and no new mobile software is required.
Easy – it is just another option during the checkout process.
Fast – most transactions are completed in less than 10 seconds.
Proven – 70% of all digital content purchased online in some parts of Asia uses the direct mobile billing method.
Remote payment by SMS and credit card tokenization
Even as the volume of Premium SMS transactions has flattened, many cloud-based payment systems continue to use SMS for presentment, authorization, and authentication, while the payment itself is processed through existing payment networks such as credit and debit card networks. These solutions combine the ubiquity of the SMS channel with the security and reliability of existing payment infrastructure. Since SMS lacks end-to-end encryption, such solutions employ higher-level security strategies known as 'tokenization' and 'target removal', whereby payment occurs without transmitting any sensitive account details, username, password, or PIN. To date, point-of-sale mobile payment solutions have not relied on SMS-based authentication as a payment mechanism, but remote payments such as bill payments, seat upgrades on flights, and membership or subscription renewals are commonplace. In comparison to premium short code programs, which often exist in isolation, relationship marketing and payment systems are often integrated with CRM, ERP, marketing-automation platforms, and reservation systems. Many of the problems inherent with premium SMS have been addressed by solution providers. Remembering keywords is not required, since sessions are initiated by the enterprise to establish a transaction-specific context. Reply messages are linked to the proper session and authenticated either synchronously, through a very short expiry period (every reply is assumed to be to the last message sent), or by tracking the session according to varying reply addresses and/or reply options.
Direct operator billing
Direct operator billing, also known as mobile content billing, WAP billing, and carrier billing, requires integration with the mobile network operator. It provides certain benefits:
Mobile network operators already have a billing relationship with consumers; the payment will be added to their bill.
Provides instantaneous payment
Protects payment details and consumer identity
Better conversion rates
Reduced customer support costs for merchants
Alternative monetization option in countries where credit card usage is low
One of the drawbacks is that the payout rate will often be much lower than with other mobile payment options. Examples from a popular provider:
92% with PayPal
85 to 86% with credit card
45 to 91.7% with operator billing in the US, UK and some smaller European countries, but usually around 60%
More recently, direct operator billing is being deployed in an in-app environment, where mobile application developers are taking advantage of the one-click payment option that direct operator billing provides for monetising mobile applications. This is a logical alternative to credit card and Premium SMS billing. In 2012, Ericsson and Western Union partnered to expand the direct operator billing market, making it possible for mobile operators to include Western Union mobile money transfers as part of their mobile financial service offerings. Given the international reach of both companies, the partnership is meant to accelerate the interconnection between the m-commerce market and the existing financial world.
Contactless near-field communication
Near-field communication (NFC) is used mostly for paying for purchases made in physical stores or for transportation services. A consumer using a special mobile phone equipped with a smartcard waves his/her phone near a reader module. Most transactions do not require authentication, but some require authentication using a PIN before the transaction is completed. The payment could be deducted from a pre-paid account or charged to a mobile or bank account directly. Mobile payment via NFC faces significant challenges to wide and fast adoption, due to the lack of supporting infrastructure and the complex ecosystem of stakeholders and standards. Some phone manufacturers and banks, however, are enthusiastic. Ericsson and Aconite are examples of businesses that make it possible for banks to create consumer mobile payment applications that take advantage of NFC technology. NFC vendors in Japan are closely related to mass-transit networks, like the Mobile Suica used since 28 January 2006 on the JR East rail network. The mobile wallet Osaifu-Keitai system, used since 2004 for Mobile Suica and many others including Edy and nanaco, has become the de facto standard method for mobile payments in Japan. Its core technology, Mobile FeliCa IC, is partially owned by Sony, NTT DoCoMo and JR East. Mobile FeliCa utilizes Sony's FeliCa technology, which itself is the de facto standard for contactless smart cards in the country. NFC was first used in public transport by China Unicom and the Yucheng Transportation Card on the tramways and buses of Chongqing on 19 January 2009, then on those of Nice on 21 May 2010, and in Seoul after its introduction in Korea by the discount retailer Homeplus in March 2010; it was tested and then adopted or added to existing systems in Tokyo from May 2010 to the end of 2012. After an experiment in the Rennes metro in 2007, the NFC standard was implemented for the first time in a metro network by China Unicom in Beijing on 31 December 2010. Other NFC vendors, mostly in Europe, use contactless payment over mobile phones to pay for on- and off-street parking in specially demarcated areas. Parking wardens may enforce the parking by license plate, transponder tags, or barcode stickers.
In Europe, the first experiments with mobile payment took place in Germany over six months from May 2005, with deferred payment at the end of each month, on the tramways and buses of Hanau with the Nokia 3220, using the NFC standard of Philips and Sony. In France, immediate contactless payment was trialled for six months from October 2005 in some Cofinoga shops (Galeries Lafayette, Monoprix) and Vinci car parks in Caen, with a Samsung NFC smartphone provided by Orange in collaboration with Philips Semiconductors; for the first time, thanks to "Fly Tag", the system also allowed users to receive audiovisual information, such as bus timetables or cinema trailers, from the services concerned. From 19 November 2007 to 2009, this experiment was extended in Caen to more services and to three additional mobile phone operators (Bouygues Telecom, SFR and NRJ Mobile), as well as to Strasbourg; on 5 November 2007, Orange and the transport companies SNCF and Keolis joined forces for a two-month smartphone trial in the metro, buses and TER trains of Rennes. After a test conducted from October 2005 to November 2006 with 27 users, on 21 May 2010 the Nice transport authority Régie Lignes d'Azur became the first public transport provider in Europe to permanently add to its offering contactless payment on its tramway and bus network, either with an NFC bank card or with a smartphone application (notably on the Samsung Player One, with the same mobile phone operators as in Caen and Strasbourg), together with on-board validation of tickets and the loading of those tickets onto the smartphone, in addition to the contactless season-ticket card. This service was likewise trialled and then implemented for NFC smartphones on 18 and 25 June 2013 on the tramways and buses of Caen and Strasbourg, respectively. On the Paris transport network, after a four-month test with Bouygues Telecom and 43 people from November 2006, and finally with users from July 2018, contactless mobile payment and direct validation at the turnstile readers with a smartphone was adopted on 25 September 2019 in collaboration with the companies Orange, Samsung, Wizway Solutions, Worldline and Conduent. First conceptualized in the early 2010s, the technology has also seen commercial use in this century in Scandinavia and Estonia. End users benefit from the convenience of being able to pay for parking from the comfort of their car with their mobile phone, and parking operators are not obliged to invest in either existing or new street-based parking infrastructure. Parking wardens maintain order in these systems by license plate, transponder tags or barcode stickers, or they read a digital display in the same way as they read a pay-and-display receipt. Other vendors use a combination of both NFC and a barcode on the mobile device for mobile payment, because many mobile devices in the market do not yet support NFC.
Others
QR code payments
A QR code is a square two-dimensional bar code. QR codes have been in use since 1994. Originally used to track products in warehouses, QR codes were designed to replace the older one-dimensional bar codes. The older bar codes just represent numbers, which can be looked up in a database and translated into something meaningful. QR, or "quick response", bar codes were designed to contain the meaningful information directly in the bar code.
QR codes fall into two main categories:
The QR code is presented on the mobile device of the person paying and is scanned by a POS terminal or by another mobile device of the payee.
The QR code is presented by the payee, either statically or generated one time, and is scanned by the person executing the payment.
Mobile self-checkout allows one to scan a QR code or barcode of a product inside a brick-and-mortar establishment in order to purchase the product on the spot. This theoretically eliminates or reduces the incidence of long checkout lines, even at self-checkout kiosks.
Cloud-based mobile payments
Google, PayPal, GlobalPay and GoPago use a cloud-based approach to in-store mobile payment. The cloud-based approach places the mobile payment provider in the middle of the transaction, which involves two separate steps. First, a cloud-linked payment method is selected and payment is authorized via NFC or an alternative method. During this step, the payment provider automatically covers the cost of the purchase with issuer-linked funds. Second, in a separate transaction, the payment provider charges the purchaser's selected, cloud-linked account in a card-not-present environment to recoup its losses on the first transaction.
Audio signal-based payments
The audio channel of the mobile phone is another wireless interface that is used to make payments. Several companies have created technology to use the acoustic features of cell phones to support mobile payments and other applications that are not chip-based. Technologies like near sound data transfer (NSDT), data over voice and NFC 2.0 produce audio signatures that the microphone of the cell phone can pick up to enable electronic transactions.
Direct carrier/bank co-operation
In the T-Cash model, the mobile phone and the phone carrier are the front-end interface to consumers. The consumer can purchase goods, transfer money to a peer, cash out, and cash in. A 'mini wallet' account can be opened simply by entering *700# on the mobile phone, presumably by depositing money at a participating local merchant along with the mobile phone number. Presumably, other transactions are similarly accomplished by entering special codes and the phone number of the other party on the consumer's mobile phone.
Magnetic secure transmission
In magnetic secure transmission (MST), a smartphone emits a magnetic signal that resembles the one created by swiping a magnetic credit card through a traditional credit card terminal. No changes to the terminal or a new terminal are required.
Bank transfer systems
Swish is the name of a system established in Sweden. It was established through a collaboration between major banks in 2012 and has been very successful, with 66 percent of the population as users in 2017. It is mainly used for peer-to-peer payments between private people, but is also used for church collections, by street vendors and by small businesses. A person's account is tied to his or her phone number, and the connection between the phone number and the actual bank account number is registered in the internet bank. The electronic identification system mobile BankID, issued by several Swedish banks, is used to verify the payment. Users with a simple phone or without the app can still receive money if the phone number is registered in the internet bank. Like many other mobile payment systems, its main obstacle has been getting people to register and download the app, but it has managed to reach a critical mass and has become part of everyday life for many Swedes.
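To illustrate the payee-presented, one-time QR codes described in the QR code payments subsection above, here is a minimal Python sketch; the payload format is invented for illustration (real deployments follow schemes such as EMVCo merchant-presented QR or UPI), and the optional third-party qrcode library is used only if it is installed.

import json, secrets, time

# Invented payload fields for a hypothetical merchant; not a real QR payment standard.
payload = json.dumps({
    "merchant_id": "DEMO-MERCHANT-42",
    "amount": "12.50",
    "currency": "EUR",
    "nonce": secrets.token_hex(8),          # makes the code one-time
    "expires": int(time.time()) + 120,      # short validity window
})
print(payload)

try:
    import qrcode                            # pip install qrcode[pil]
    qrcode.make(payload).save("payment_qr.png")
except ImportError:
    pass                                     # the payload can still be encoded by any QR tool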
Swedish payments company Trustly also enables mobile bank transfers, but is used mainly for business-to-consumer transactions that occur solely online. If an e-tailer integrates with Trustly, its customers can pay directly from their bank account. As opposed to Swish, users don't need to register a Trustly account or download software to pay with it. The Danish MobilePay and Norwegian Vipps are also popular in their countries. They use direct and instant bank transfers but, for users not connected to a participating bank, also credit card billing. In India, a new direct bank transfer system called the Unified Payments Interface (UPI) has emerged. This system enables users to transfer money to other users and businesses in real time, directly from their bank accounts. Users download a UPI-supporting app from the app store on their Android or iOS device, link and verify their mobile number with their bank account by sending one outgoing SMS to the app provider, create a virtual payment address (VPA) that auto-generates a QR code, and then set a banking PIN by generating a one-time password (OTP) for secure transactions. The VPA and QR codes are meant to ensure ease of use and privacy, allowing peer-to-peer (P2P) transactions without disclosing any user details. Fund transfers can then be initiated to other users or businesses. Settlement of funds happens in real time: money is debited from the payer's bank account and credited to the recipient's bank account in real time. The UPI service works 24x7, including weekends and holidays. This is slowly becoming a very popular service in India and was processing monthly payments worth approximately $10 billion as of October 2018. In Poland, Blik is a mobile payment system created in February 2015 by the Polish Payment Standard (PSP) company. To pay with Blik, a user needs a smartphone, a personal account, and the mobile application of one of the banks that cooperate with it. The principle of operation is to generate a six-digit code in the bank's mobile application. The Blik code is used only to connect the parties to the transaction: it is an identifier that associates the user with a specific bank at a given moment. For two minutes, it points to a specific mobile application, to which a request to accept a transaction in a specific store or at an ATM is sent via that string of digits. Blik allows users to pay in online and brick-and-mortar stores, make transfers to a phone number, and withdraw money from ATMs.
Mobile payment service provider model
There are four potential mobile payment models:
Operator-centric model: The mobile operator acts independently to deploy mobile payment services. The operator may provide a mobile wallet that is independent of the user's mobile (airtime) account. A large deployment of the operator-centric model is severely challenged by the lack of connection to existing payment networks. The mobile network operator must handle the interfacing with the banking network to provide advanced mobile payment services in banked and underbanked environments. Pilot projects using this model have been launched in emerging countries, but they did not cover most of the mobile payment service use cases. Payments were limited to remittance and airtime top-up.
Bank-centric model: A bank deploys mobile payment applications or devices to customers and ensures merchants have the required point-of-sale (POS) acceptance capability.
Mobile network operators are used as simple carriers; they bring their experience to provide quality of service (QoS) assurance.
Collaboration model: This model involves collaboration among banks, mobile operators and a trusted third party.
Peer-to-peer model: The mobile payment service provider acts independently from financial institutions and mobile network operators to provide mobile payment.
See also
Contactless payment
Cryptocurrency wallet
Diem (digital currency)
Digital wallets
Electronic money
Financial cryptography
Mobile ticketing
Point of sale
Point-of-sale malware
SMS banking
Universal credit card
References
Financial technology
ja:非接触型決済
52629834
https://en.wikipedia.org/wiki/Henri%20Barki
Henri Barki
Henri Barki is a Turkish-Canadian social scientist, and was a Canada Research Chair at HEC Montréal, Université de Montréal until he retired in 2017.
Partial Bibliography
Scholarly publications with Henri Barki as the lead author
A Contingency Model of DSS Success: An Empirical Investigation. Barki, Henri. ProQuest Dissertations Publishing; 1984.
Implementing Decision Support Systems: A Research Framework. Barki, Henri; Huff, Sid L. Canadian Journal of Administrative Sciences / Revue Canadienne des Sciences de l'Administration, June 1984, 1(1): 95-110.
Change, attitude to change, and decision support system success. Barki, Henri; Huff, Sid L. Information & Management, 1985, 9(5): 261-268.
An Information Systems Keyword Classification Scheme. Barki, Henri; Rivard, Suzanne; Talbot, Jean. MIS Quarterly, June 1988, 12(2): 299.
Rethinking The Concept Of User Involvement. Barki, Henri; Hartwick, Jon. MIS Quarterly, March 1989, 13(1): 53.
Implementing Decision Support Systems: Correlates of User Satisfaction and System Usage. Barki, Henri; Huff, Sid. INFOR, May 1990, 28(2): 89.
The Measurement of Risk in an Information System Development Project. Barki, Henri; Rivard, Suzanne; Talbot, Jean. Revue Canadienne des Sciences de l'Administration / Canadian Journal of Administrative Sciences, September 1992, 9(3): 213-228.
A Keyword Classification Scheme for IS Research Literature: An Update. Barki, Henri; Rivard, Suzanne; Talbot, Jean. MIS Quarterly, June 1993, 17(2): 209-226.
Toward an Assessment of Software Development Risk. Barki, Henri; Rivard, Suzanne; Talbot, Jean. Journal of Management Information Systems, September 1993, 10(2): 203-225.
Measuring User Participation, User Involvement, and User Attitude. Barki, Henri; Hartwick, Jon. MIS Quarterly, 1 March 1994, 18(1): 59-82.
User participation, conflict, and conflict resolution: The mediating roles of influence. Barki, Henri; Hartwick, Jon. Information Systems Research, December 1994, 5(4): 422.
An Integrative Contingency Model of Software Project Risk Management. Barki, Henri; Rivard, Suzanne; Talbot, Jean. Journal of Management Information Systems, March 2001, 17(4): 37-69.
Small Group Brainstorming and Idea Quality: Is Electronic Brainstorming the Most Effective Approach? Barki, Henri; Pinsonneault, Alain. Small Group Research, April 2001, 32(2): 158-205.
Interpersonal Conflict and Its Management in Information System Development. Barki, Henri; Hartwick, Jon. MIS Quarterly, June 2001, 25(2): 195-228.
Conceptualizing the Construct of Interpersonal Conflict. Barki, Henri; Hartwick, Jon. International Journal of Conflict Management, March 2004, 15(3): 216-244.
A Model of Organizational Integration, Implementation Effort, and Performance. Barki, Henri; Pinsonneault, Alain. Organization Science, 2005, 16(2): 165-179.
EIS Implementation Research: An Assessment and Suggestions for the Future (book chapter). Barki, Henri; in Chen, Chin-Sheng; Filipe, Joaquim; Seruca, Isabel; Cordeiro, José (eds.), Enterprise Information Systems VII, Dordrecht: Springer Netherlands, 2006, pp. 3-10.
Information System Use–Related Activity: An Expanded Behavioral Conceptualization of Individual-Level Information System Use. Barki, Henri; Titah, Ryad; Boffo, Céline. Information Systems Research, 2007, 18(2): 173-192.
Thar's gold in them thar constructs. Barki, Henri. ACM SIGMIS Database: The Database for Advances in Information Systems, October 2008, 39(4): 90.
Linking IT Implementation and Acceptance via the Construct of Psychological Ownership of Information Technology. Barki, Henri; Pare, Guy; Sicotte, Claude. Journal of Information Technology, December 2008, 23(4): 269-280.
Managing Illusions of Control. Barki, Henri. Journal of Information Technology, December 2011, 26(4): 280-281.
Reconceptualizing trust: A non-linear Boolean model. Barki, Henri; Robert, Jacques; Dulipovici, Alina. Information & Management, 2015, 52(4): 483-495.
References
Year of birth missing (living people) Living people Université de Montréal faculty Canadian social scientists University of Western Ontario alumni
15215
https://en.wikipedia.org/wiki/Internet%20Explorer
Internet Explorer
Internet Explorer (formerly Microsoft Internet Explorer and Windows Internet Explorer; from August 16, 1995 to March 30, 2021; commonly abbreviated IE or MSIE) is a discontinued series of graphical web browsers developed by Microsoft and included in the Microsoft Windows line of operating systems, starting in 1995. It was first released as part of the add-on package Plus! for Windows 95 that year. Later versions were available as free downloads or in service packs, and were included in the original equipment manufacturer (OEM) service releases of Windows 95 and later versions of Windows. New feature development for the browser was discontinued in 2016 in favor of the new browser Microsoft Edge. Since Internet Explorer is a Windows component and is included in long-term lifecycle versions of Windows such as Windows Server 2019, it will continue to receive security updates until at least 2029. Microsoft 365 ended support for Internet Explorer on August 17, 2021, and Microsoft Teams ended support for IE on November 30, 2020. Internet Explorer is set for discontinuation on June 15, 2022, after which the alternative will be Microsoft Edge with IE mode for legacy sites. Internet Explorer was once the most widely used web browser, attaining a peak of about 95% usage share by 2003. This came after Microsoft used bundling to win the first browser war against Netscape, which was the dominant browser in the 1990s. Its usage share has since declined with the launch of Firefox (2004) and Google Chrome (2008), and with the growing popularity of mobile operating systems such as Android and iOS that do not support Internet Explorer. Estimates for Internet Explorer's market share in 2022 are about 0.45% across all platforms, or, by StatCounter's numbers, 9th place. On traditional PCs, the only platform on which it has ever had significant share, it is ranked 6th at 1.06%, after Opera. Microsoft Edge, IE's successor, first overtook Internet Explorer in terms of market share in November 2019. Microsoft spent over US$100 million per year on Internet Explorer in the late 1990s, with over 1,000 people involved in the project by 1999. Versions of Internet Explorer for other operating systems have also been produced, including an Xbox 360 version called Internet Explorer for Xbox, versions for platforms Microsoft no longer supports (Internet Explorer for Mac and Internet Explorer for UNIX, covering Solaris and HP-UX), and an embedded OEM version called Pocket Internet Explorer, later rebranded Internet Explorer Mobile, made for Windows CE and Windows Phone and, previously, based on Internet Explorer 7, for Windows Phone 7. On March 17, 2015, Microsoft announced that Microsoft Edge would replace Internet Explorer as the default browser for certain versions of Windows 10. This makes Internet Explorer 11 the last release. Internet Explorer, however, remains on Windows 10 LTSC and Windows Server 2019, primarily for enterprise purposes. Since January 12, 2016, only Internet Explorer 11 has official support for consumers; extended support for Internet Explorer 10 ended on January 31, 2020. Support varies based on the operating system's technical capabilities and its support life cycle. On May 20, 2021, it was announced that full support for Internet Explorer would be discontinued on June 15, 2022, after which the alternative will be Microsoft Edge with IE mode for legacy sites. Microsoft is committed to supporting Internet Explorer in that way until at least 2029, with one year's notice before it is discontinued. The IE mode "uses the Trident MSHTML engine", i.e.
the rendering code of Internet Explorer. The browser has been scrutinized throughout its development for use of third-party technology (such as the source code of Spyglass Mosaic, used without royalty in early versions) and security and privacy vulnerabilities, and the United States and the European Union have alleged that integration of Internet Explorer with Windows has been to the detriment of fair browser competition.
History
Internet Explorer 1
The Internet Explorer project was started in the summer of 1994 by Thomas Reardon, who, according to a 2003 article in the Massachusetts Institute of Technology's Technology Review, used source code from Spyglass, Inc. Mosaic, which was an early commercial web browser with formal ties to the pioneering National Center for Supercomputing Applications (NCSA) Mosaic browser. In late 1994, Microsoft licensed Spyglass Mosaic for a quarterly fee plus a percentage of Microsoft's non-Windows revenues for the software. Although bearing a name like NCSA Mosaic, Spyglass Mosaic had used the NCSA Mosaic source code sparingly. The first version, dubbed Microsoft Internet Explorer, was installed as part of the Internet Jumpstart Kit in the Microsoft Plus! pack for Windows 95. The Internet Explorer team began with about six people in early development. Internet Explorer 1.5 was released several months later for Windows NT and added support for basic table rendering. By including it free of charge with its operating system, Microsoft did not have to pay royalties to Spyglass Inc, resulting in a lawsuit and a US$8 million settlement on January 22, 1997. Microsoft was sued by SyNet Inc. in 1996 for trademark infringement, with SyNet claiming that it owned the rights to the name "Internet Explorer". The case ended with Microsoft paying $5 million to settle the lawsuit.
Internet Explorer 2
Internet Explorer 2 is the second major version of Internet Explorer, released on November 22, 1995, for Windows 95 and Windows NT, and on April 23, 1996, for Apple Macintosh and Windows 3.1.
Internet Explorer 3
Internet Explorer 3 is the third major version of Internet Explorer, released on August 13, 1996 for Microsoft Windows and on January 8, 1997 for Apple Mac OS.
Internet Explorer 4
Internet Explorer 4 is the fourth major version of Internet Explorer, released in September 1997 for Microsoft Windows, Mac OS, Solaris, and HP-UX.
Internet Explorer 5
Internet Explorer 5 is the fifth major version of Internet Explorer, released on March 18, 1999 for Windows 3.1, Windows NT 3, Windows 95, Windows NT 4.0 SP3, Windows 98, Mac OS X (up to v5.2.3), Classic Mac OS (up to v5.1.7), Solaris and HP-UX (up to 5.01 SP1).
Internet Explorer 6
Internet Explorer 6 is the sixth major version of Internet Explorer, released on August 24, 2001 for Windows NT 4.0 SP6a, Windows 98, Windows 2000, Windows ME and as the default web browser for Windows XP and Windows Server 2003.
Internet Explorer 7
Internet Explorer 7 is the seventh major version of Internet Explorer, released on October 18, 2006 for Windows XP SP2, Windows Server 2003 SP1 and as the default web browser for Windows Vista, Windows Server 2008 and Windows Embedded POSReady 2009.
Internet Explorer 8
Internet Explorer 8 is the eighth major version of Internet Explorer, released on March 19, 2009 for Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008 and as the default web browser for Windows 7 (the later default was Internet Explorer 11) and Windows Server 2008 R2.
Internet Explorer 9
Internet Explorer 9 is the ninth major version of Internet Explorer, released on March 14, 2011 for Windows 7, Windows Server 2008 R2, Windows Vista Service Pack 2 and Windows Server 2008 SP2 with the Platform Update.
Internet Explorer 10
Internet Explorer 10 is the tenth major version of Internet Explorer, released on October 26, 2012 for Windows 7, Windows Server 2008 R2 and as the default web browser for Windows 8 and Windows Server 2012.
Internet Explorer 11
Internet Explorer 11 is featured in Windows 8.1, which was released on October 17, 2013. It includes an incomplete mechanism for syncing tabs, a major update to its developer tools, enhanced scaling for high-DPI screens, HTML5 prerender and prefetch, hardware-accelerated JPEG decoding, closed captioning, and HTML5 full screen, and it is the first version of Internet Explorer to support WebGL and Google's SPDY protocol (starting at v3). This version of IE has features dedicated to Windows 8.1, including cryptography (WebCrypto), adaptive bitrate streaming (Media Source Extensions) and Encrypted Media Extensions. Internet Explorer 11 was made available for Windows 7 users to download on November 7, 2013, with Automatic Updates in the following weeks. Internet Explorer 11's user agent string now identifies the agent as "Trident" (the underlying browser engine) instead of "MSIE". It also announces compatibility with Gecko (the browser engine of Firefox). Microsoft claimed that Internet Explorer 11, running the WebKit SunSpider JavaScript Benchmark, was the fastest browser as of October 15, 2013. Internet Explorer 11 was made available for Windows Server 2012 and Windows Embedded 8 Standard in the spring of 2019.
End of life
Microsoft Edge, officially unveiled on January 21, 2015, has replaced Internet Explorer as the default browser on Windows 10. Internet Explorer is still installed in Windows 10 to maintain compatibility with older websites and intranet sites that require ActiveX and other Microsoft legacy web technologies. According to Microsoft, the development of new features for Internet Explorer has ceased. However, it will continue to be maintained as part of the support policy for the versions of Windows with which it is included. On June 1, 2020, the Internet Archive removed the latest version of Internet Explorer from its list of supported browsers, citing its dated infrastructure that makes it hard to work with, following the suggestion of Microsoft Chief of Security Chris Jackson that users not use it as their default browser, but use it only for websites that require it. Since November 30, 2020, the web version of Microsoft Teams can no longer be accessed using Internet Explorer 11, followed by the remaining Microsoft 365 applications since August 17, 2021. The browser itself will continue to be supported for the lifecycle of the Windows version on which it is installed until June 15, 2022. Microsoft recommends that Internet Explorer users migrate to Edge and use the built-in "Internet Explorer mode", which enables support for legacy internet applications.
Features
Internet Explorer has been designed to view a broad range of web pages and provide certain features within the operating system, including Microsoft Update. During the height of the browser wars, Internet Explorer superseded Netscape only when it caught up technologically to support the progressive features of the time.
Standards support
Internet Explorer, using the MSHTML (Trident) browser engine:
Supports HTML 4.01, parts of HTML5, CSS Level 1, Level 2, and Level 3, XML 1.0, and DOM Level 1, with minor implementation gaps.
Fully supports XSLT 1.0 as well as an obsolete Microsoft dialect of XSLT often referred to as WD-xsl, which was loosely based on the December 1998 W3C Working Draft of XSL. Support for XSLT 2.0 lies in the future: semi-official Microsoft bloggers have indicated that development is underway, but no dates have been announced.
Achieves almost full conformance to CSS 2.1, added in the Internet Explorer 8 release. The MSHTML browser engine in Internet Explorer 9, released in 2011, scored highest of all major browsers in the official W3C conformance test suite for CSS 2.1.
Supports XHTML in Internet Explorer 9 (MSHTML Trident version 5.0). Prior versions can render XHTML documents authored with HTML compatibility principles and served with a text/html MIME-type.
Supports a subset of SVG in Internet Explorer 9 (MSHTML Trident version 5.0), excluding SMIL, SVG fonts and filters.
Internet Explorer uses DOCTYPE sniffing to choose between standards mode and a "quirks mode" in which it deliberately mimics nonstandard behaviors of old versions of MSIE for HTML and CSS rendering on screen (Internet Explorer always uses standards mode for printing). It also provides its own dialect of ECMAScript called JScript. Internet Explorer was criticized by Tim Berners-Lee for its limited support for SVG, which is promoted by the W3C.
Non-standard extensions
Internet Explorer has introduced an array of proprietary extensions to many of the standards, including HTML, CSS, and the DOM. This has resulted in several web pages that appear broken in standards-compliant web browsers and has introduced the need for a "quirks mode" in these other browsers to allow for rendering improper elements meant for Internet Explorer. Internet Explorer has introduced several extensions to the DOM that have been adopted by other browsers. These include the innerHTML property, which provides access to the HTML string within an element (part of IE 5, it was standardized as part of HTML5 roughly 15 years later, after all other browsers had implemented it for compatibility); the XMLHttpRequest object, which allows the sending of an HTTP request and the receiving of an HTTP response, and may be used to perform AJAX; and the designMode attribute of the contentDocument object, which enables rich-text editing of HTML documents. Some of these functionalities were not possible until the introduction of the W3C DOM methods. Its Ruby character extension to HTML is also accepted as a module in W3C XHTML 1.1, though it is not found in all versions of W3C HTML. Microsoft submitted several other features of IE for consideration by the W3C for standardization. These include the 'behavior' CSS property, which connects HTML elements with JScript behaviors (known as HTML Components, HTC); the HTML+TIME profile, which adds timing and media synchronization support to HTML documents (similar to the W3C XHTML+SMIL); and the VML vector graphics file format. However, all were rejected, at least in their original forms; VML was subsequently combined with PGML (proposed by Adobe and Sun), resulting in the W3C-approved SVG format, one of the few vector image formats being used on the web, which IE did not support until version 9.
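The mode-switching behaviour described above can be caricatured with a small Python sketch; this is a deliberately simplified illustration of the idea behind DOCTYPE sniffing, not Internet Explorer's actual mode-selection logic.

import re

def rendering_mode(html):
    # Toy mode chooser: no doctype -> quirks; a transitional doctype without a
    # system identifier -> quirks; anything else -> standards mode.
    match = re.match(r"\s*<!DOCTYPE\s+([^>]*)>", html, re.IGNORECASE)
    if not match:
        return "quirks mode"
    doctype = match.group(1).lower()
    if "transitional" in doctype and "http://" not in doctype:
        return "quirks mode"
    return "standards mode"

print(rendering_mode("<html><body>no doctype</body></html>"))
print(rendering_mode("<!DOCTYPE html><html></html>"))
print(rendering_mode('<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"><html></html>'))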
Other non-standard behaviors include: support for vertical text, but in a syntax different from the W3C CSS3 candidate recommendation; support for a variety of image effects and page transitions, which are not found in W3C CSS; support for obfuscated script code, in particular JScript.Encode; as well as support for embedding EOT fonts in web pages.
Favicon
Support for favicons was first added in Internet Explorer 5. Internet Explorer supports favicons in PNG, static GIF and native Windows icon formats. In Windows Vista and later, Internet Explorer can display native Windows icons that have embedded PNG files.
Usability and accessibility
Internet Explorer makes use of the accessibility framework provided in Windows. Internet Explorer is also a user interface for FTP, with operations similar to Windows Explorer. Internet Explorer 5 and 6 had a side bar for web searches, enabling jumps through pages from results listed in the side bar. Pop-up blocking and tabbed browsing were added in Internet Explorer 6 and Internet Explorer 7, respectively. Tabbed browsing can also be added to older versions by installing the MSN Search Toolbar or the Yahoo Toolbar.
Cache
Internet Explorer caches visited content in the Temporary Internet Files folder to allow quicker access (or offline access) to previously visited pages. The content is indexed in a database file known as Index.dat. Multiple Index.dat files exist which index different content—visited content, web feeds, visited URLs, cookies, etc. Prior to IE7, clearing the cache used to clear the index, but the files themselves were not reliably removed, posing a potential security and privacy risk. In IE7 and later, when the cache is cleared, the cache files are more reliably removed, and the index.dat file is overwritten with null bytes. Caching has been improved in IE9.
Group Policy
Internet Explorer is fully configurable using Group Policy. Administrators of Windows Server domains (for domain-joined computers) or of the local computer can apply and enforce a variety of settings on computers that affect the user interface (such as disabling menu items and individual configuration options), as well as underlying security features such as downloading of files, zone configuration, per-site settings, ActiveX control behavior and others. Policy settings can be configured for each user and for each machine. Internet Explorer also supports Integrated Windows Authentication.
Architecture
Internet Explorer uses a componentized architecture built on the Component Object Model (COM) technology. It consists of several major components, each of which is contained in a separate dynamic-link library (DLL) and exposes a set of COM programming interfaces hosted by the Internet Explorer main executable, iexplore.exe: WinInet.dll is the protocol handler for HTTP, HTTPS, and FTP. It handles all network communication over these protocols. URLMon.dll is responsible for MIME-type handling and download of web content, and provides a thread-safe wrapper around WinInet.dll and other protocol implementations. MSHTML.dll houses the MSHTML (Trident) browser engine introduced in Internet Explorer 4, which is responsible for displaying the pages on-screen and handling the Document Object Model (DOM) of the web pages. MSHTML.dll parses the HTML/CSS file and creates the internal DOM tree representation of it. It also exposes a set of APIs for runtime inspection and modification of the DOM tree. The DOM tree is further processed by a browser engine which then renders the internal representation on screen.
IEFrame.dll contains the user interface and window of IE in Internet Explorer 7 and above. ShDocVw.dll provides the navigation, local caching and history functionalities for the browser. BrowseUI.dll is responsible for rendering the browser user interface such as menus and toolbars.
Internet Explorer does not include any native scripting functionality. Rather, MSHTML.dll exposes an API that permits a programmer to develop a scripting environment to be plugged in and to access the DOM tree. Internet Explorer 8 includes the bindings for the Active Scripting engine, which is a part of Microsoft Windows and allows any language implemented as an Active Scripting module to be used for client-side scripting. By default, only the JScript and VBScript modules are provided; third-party implementations like ScreamingMonkey (for ECMAScript 4 support) can also be used. Microsoft also makes available the Microsoft Silverlight runtime, which allows CLI languages, including DLR-based dynamic languages like IronPython and IronRuby, to be used for client-side scripting.
Internet Explorer 8 introduced some major architectural changes, called loosely coupled IE (LCIE). LCIE separates the main window process (frame process) from the processes hosting the different web applications in different tabs (tab processes). A frame process can create multiple tab processes, each of which can be of a different integrity level; each tab process can host multiple web sites. The processes use asynchronous inter-process communication to synchronize themselves. Generally, there will be a single frame process for all web sites. In Windows Vista with protected mode turned on, however, opening privileged content (such as local HTML pages) will create a new tab process, as it will not be constrained by protected mode.
Extensibility
Internet Explorer exposes a set of Component Object Model (COM) interfaces that allows add-ons to extend the functionality of the browser. Extensibility is divided into two types: browser extensibility and content extensibility. Browser extensibility involves adding context menu entries, toolbars, menu items or Browser Helper Objects (BHO). BHOs are used to extend the feature set of the browser, whereas the other extensibility options are used to expose that feature in the user interface. Content extensibility adds support for non-native content formats. It allows Internet Explorer to handle new file formats and new protocols, e.g. WebM or SPDY. In addition, web pages can integrate widgets known as ActiveX controls, which run on Windows only but have vast potential to extend the content capabilities; Adobe Flash Player and Microsoft Silverlight are examples. Add-ons can be installed either locally, or directly by a web site.
Since malicious add-ons can compromise the security of a system, Internet Explorer implements several safeguards. Internet Explorer 6 with Service Pack 2 and later feature an Add-on Manager for enabling or disabling individual add-ons, complemented by a "No Add-Ons" mode. Starting with Windows Vista, Internet Explorer and its BHOs run with restricted privileges and are isolated from the rest of the system. Internet Explorer 9 introduced a new component, the Add-on Performance Advisor, which shows a notification when one or more installed add-ons exceed a pre-set performance threshold. The notification appears in the Notification Bar when the user launches the browser. Windows 8 and Windows RT introduce a Metro-style version of Internet Explorer that is entirely sandboxed and does not run add-ons at all.
In addition, Windows RT cannot download or install ActiveX controls at all; although existing ones bundled with Windows RT still run in the traditional version of Internet Explorer. Internet Explorer itself can be hosted by other applications via a set of COM interfaces. This can be used to embed the browser functionality inside a computer program or create Internet Explorer shells. Security Internet Explorer uses a zone-based security framework that groups sites based on certain conditions, including whether it is an Internet- or intranet-based site as well as a user-editable whitelist. Security restrictions are applied per zone; all the sites in a zone are subject to the restrictions. Internet Explorer 6 SP2 onwards uses the Attachment Execution Service of Microsoft Windows to mark executable files downloaded from the Internet as being potentially unsafe. Accessing files marked as such will prompt the user to make an explicit trust decision to execute the file, as executables originating from the Internet can be potentially unsafe. This helps in preventing the accidental installation of malware. Internet Explorer 7 introduced the phishing filter, which restricts access to phishing sites unless the user overrides the decision. With version 8, it also blocks access to sites known to host malware. Downloads are also checked to see if they are known to be malware-infected. In Windows Vista, Internet Explorer by default runs in what is called Protected Mode, where the privileges of the browser itself are severely restricted—it cannot make any system-wide changes. One can optionally turn this mode off, but this is not recommended. This also effectively restricts the privileges of any add-ons. As a result, even if the browser or any add-on is compromised, the damage the security breach can cause is limited. Patches and updates to the browser are released periodically and made available through the Windows Update service, as well as through Automatic Updates. Although security patches continue to be released for a range of platforms, most feature additions and security infrastructure improvements are only made available on operating systems that are in Microsoft's mainstream support phase. On December 16, 2008, Trend Micro recommended users switch to rival browsers until an emergency patch was released to fix a potential security risk which "could allow outside users to take control of a person's computer and steal their passwords.” Microsoft representatives countered this recommendation, claiming that "0.02% of internet sites" were affected by the flaw. A fix for the issue was released the following day with the Security Update for Internet Explorer KB960714, on Microsoft Windows Update. In 2010, Germany's Federal Office for Information Security, known by its German initials, BSI, advised "temporary use of alternative browsers" because of a "critical security hole" in Microsoft's software that could allow hackers to remotely plant and run malicious code on Windows PCs. In 2011, a report by Accuvant, funded by Google, rated the security (based on sandboxing) of Internet Explorer worse than Google Chrome but better than Mozilla Firefox. A 2017 browser security white paper comparing Google Chrome, Microsoft Edge, and Internet Explorer 11 by X41 D-Sec in 2017 came to similar conclusions, also based on sandboxing and support of legacy web technologies. 
Security vulnerabilities
Internet Explorer has been subjected to many security vulnerabilities and concerns, such that the volume of criticism for IE is unusually high. Much of the spyware, adware, and computer viruses across the Internet are made possible by exploitable bugs and flaws in the security architecture of Internet Explorer, sometimes requiring nothing more than viewing of a malicious web page to install themselves. This is known as a "drive-by install". There are also attempts to trick the user into installing malicious software by misrepresenting the software's true purpose in the description section of an ActiveX security alert.
A number of security flaws affecting IE originated not in the browser itself, but in ActiveX-based add-ons used by it. Because the add-ons have the same privilege as IE, the flaws can be as critical as browser flaws. This has led to the ActiveX-based architecture being criticized for being fault-prone. By 2005, some experts maintained that the dangers of ActiveX had been overstated and there were safeguards in place. In 2006, new techniques using automated testing found more than a hundred vulnerabilities in standard Microsoft ActiveX components. Security features introduced in Internet Explorer 7 mitigated some of these vulnerabilities.
In 2008, Internet Explorer had a number of published security vulnerabilities. According to research done by the security research firm Secunia, Microsoft did not respond as quickly as its competitors in fixing security holes and making patches available. The firm also reported 366 vulnerabilities in ActiveX controls, an increase from the previous year. According to an October 2010 report in The Register, researcher Chris Evans had detected a known security vulnerability which, then dating back to 2008, had not been fixed for at least six hundred days. Microsoft said that it had known about this vulnerability, but that it was of exceptionally low severity, as the victim web site must be configured in a peculiar way for this attack to be feasible at all. In December 2010, researchers were able to bypass the "Protected Mode" feature in Internet Explorer.
Vulnerability exploited in attacks on U.S. firms
In an advisory on January 14, 2010, Microsoft said that attackers targeting Google and other U.S. companies used software that exploits a security hole in Internet Explorer; the hole was subsequently patched. The vulnerability affected Internet Explorer 6 on Windows XP and Server 2003, IE6 SP1 on Windows 2000 SP4, IE7 on Windows Vista, XP, Server 2008, and Server 2003, and IE8 on Windows 7, Vista, XP, Server 2003, and Server 2008 (R2). The German government warned users against using Internet Explorer and recommended switching to an alternative web browser, due to the major security hole described above that was exploited in Internet Explorer. The Australian and French governments issued similar warnings a few days later.
Major vulnerability across versions
On April 26, 2014, Microsoft issued a security advisory relating to a use-after-free vulnerability in Microsoft Internet Explorer 6 through 11, a vulnerability that could allow "remote code execution" in the affected versions. On April 28, 2014, the United States Department of Homeland Security's United States Computer Emergency Readiness Team (US-CERT) released an advisory stating that the vulnerability could result in "the complete compromise" of an affected system.
US-CERT recommended reviewing Microsoft's suggestions to mitigate an attack or using an alternate browser until the bug was fixed. The UK National Computer Emergency Response Team (CERT-UK) published an advisory announcing similar concerns and advising users to take the additional step of ensuring their antivirus software was up to date. Symantec, a cyber security firm, confirmed that "the vulnerability crashes Internet Explorer on Windows XP". The vulnerability was resolved on May 1, 2014, with a security update.
Market adoption and usage share
The adoption rate of Internet Explorer seems to be closely related to that of Microsoft Windows, as it is the default web browser that comes with Windows. Since the integration of Internet Explorer 2.0 with Windows 95 OSR 1 in 1996, and especially after version 4.0's release in 1997, adoption was greatly accelerated: from below 20% in 1996, to about 40% in 1998, and over 80% in 2000. This made Microsoft the winner in the infamous 'first browser war' against Netscape. Netscape Navigator was the dominant browser from 1995 until 1997, but rapidly lost share to IE starting in 1998, and eventually slipped behind in 1999. The integration of IE with Windows led to a lawsuit by AOL, Netscape's owner, accusing Microsoft of unfair competition. The case was eventually won by AOL, but by then it was too late, as Internet Explorer had already become the dominant browser.
Internet Explorer peaked during 2002 and 2003, with about 95% share. Its first notable competitor after beating Netscape was Firefox from Mozilla, which itself was an offshoot from Netscape. Firefox 1.0 surpassed Internet Explorer 5 in early 2005, at 8 percent market share. Approximate usage over time, based on various usage share counters, is averaged for the year overall, for the fourth quarter, or for the last month in the year, depending on the availability of references. According to StatCounter, Internet Explorer's market share fell below 50% in September 2010. In May 2012, Google Chrome overtook Internet Explorer as the most used browser worldwide, according to StatCounter. By September 2021, usage share was low globally, while a bit higher in Africa, at 2.61%.
Industry adoption
Browser Helper Objects are also used by many search engine companies and third parties for creating add-ons that access their services, such as search engine toolbars. Because of the use of COM, it is possible to embed web-browsing functionality in third-party applications. Hence, there are several Internet Explorer shells, and several content-centric applications like RealPlayer also use Internet Explorer's web browsing module for viewing web pages within the applications.
Removal
While a major upgrade of Internet Explorer can be uninstalled in a traditional way if the user has saved the original application files for installation, the matter of uninstalling the version of the browser that has shipped with an operating system remains a controversial one. The idea of removing a stock install of Internet Explorer from a Windows system was proposed during the United States v. Microsoft Corp. case. One of Microsoft's arguments during the trial was that removing Internet Explorer from Windows may result in system instability. Indeed, programs that depend on libraries installed by IE, including the Windows help and support system, fail to function without IE.
Before Windows Vista, it was not possible to run Windows Update without IE because the service used ActiveX technology, which no other web browser supports. Impersonation by malware The popularity of Internet Explorer has led to the appearance of malware abusing its name. On January 28, 2011, a fake Internet Explorer browser calling itself "Internet Explorer – Emergency Mode" appeared. It closely resembles the real Internet Explorer but has fewer buttons and no search bar. If a user attempts to launch any other browser such as Google Chrome, Mozilla Firefox, Opera, Safari, or the real Internet Explorer, this browser will be loaded instead. It also displays a fake error message, claiming that the computer is infected with malware and Internet Explorer has entered "Emergency Mode.” It blocks access to legitimate sites such as Google if the user tries to access them. See also Bing Bar History of the web browser List of web browsers Month of bugs Web 2.0 Windows Filtering Platform Winsock Notes References Further reading External links Internet Explorer Architecture 1995 software FTP clients History of the Internet News aggregator software Proprietary software Windows components Windows web browsers Computer-related introductions in 1995 Products and services discontinued in 2015 Discontinued Microsoft software Web browsers Xbox One software Xbox 360 software
12595218
https://en.wikipedia.org/wiki/Basic%20access%20control
Basic access control
Basic access control (BAC) is a mechanism specified to ensure that only authorized parties can wirelessly read personal information from passports with an RFID chip. It uses data such as the passport number, date of birth and expiration date to negotiate a session key. This key can then be used to encrypt the communication between the passport's chip and a reading device. This mechanism is intended to ensure that the owner of a passport can decide who can read the electronic contents of the passport. This mechanism was first introduced into the German passport on 1 November 2005 and is now also used in many other countries (e.g., United States passports since August 2007).
Inner workings
The data used to encrypt the BAC communication can be read electronically from the bottom of the passport, called the machine readable zone. Because physical access to the passport is assumed to be needed to know this part of the passport, it is assumed that the owner of the passport has given permission to read it. Equipment for optically scanning this part of the passport is already widely used. It uses an OCR system to read the text, which is printed in a standardized format.
Security
There is a replay attack against the basic access control protocol that allows an individual passport to be traced. The attack is based on being able to distinguish a failed nonce check from a failed MAC check, and works against passports with randomized unique identifiers and hard-to-guess keys.
The basic access control mechanism has been criticized as offering too little protection from unauthorized interception. Researchers claim that because only a limited number of passports have been issued, many theoretically possible passport numbers will not be in use in practice. The limited range of human ages further reduces the space of possibilities. In other words, the data used as an encryption key has low entropy, meaning that guessing the session key is possible via a modest brute force attack. This effect increases when passport numbers are issued sequentially or contain a redundant checksum. Both have been shown to be the case in passports issued by the Netherlands. There are other factors that can potentially be used to speed up a brute force attack: dates of birth are typically not distributed randomly in populations, and may be distributed even less randomly for the segments of a population that pass, for example, a check-in desk at an airport; and because passports are often not issued on all days of the week and during all weeks of a year, not all theoretically possible expiration dates may get used. In addition, the fact that real existing dates are used further limits the number of possible combinations: the month makes up two of the digits used for generating the key. Usually, two digits would mean 100 (00−99) combinations in decimal code or (36×36=1296) combinations in alphanumeric code, but as there are only 12 months, there are only 12 combinations. It is the same with the day (two digits and 31 combinations or fewer, depending on the month).
The German passport serial-number format (previously 10-digit, all-numeric, sequentially assigned) was modified on 1 November 2007, in response to concerns about the low entropy of BAC session keys. The new 10-character serial number is alphanumeric and generated with the help of a specially-designed block cipher, to avoid a recognizable relationship with the expiry date and to increase entropy.
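The difference such a change makes to the raw size of the key space can be sketched with a rough count of the possible key inputs. The following is only a back-of-the-envelope estimate under stated assumptions: a 9-character document number (the length of the machine-readable-zone document number field), birth dates spread over roughly 100 years, and expiry dates spread over roughly 10 years. It does not implement the actual ICAO key-derivation algorithm, and it gives upper bounds only; sequential numbering, redundant check digits and skewed date distributions shrink the space an attacker actually has to search, as described above.

// Rough upper-bound estimate of the BAC key space under the stated assumptions.
function log2KeySpace(alphabetSize: number, docNumberLength: number): number {
  const documentNumbers = Math.pow(alphabetSize, docNumberLength);
  const birthDates = 100 * 365;   // ~100 years of plausible dates of birth
  const expiryDates = 10 * 365;   // ~10 years of plausible expiry dates
  return Math.log2(documentNumbers * birthDates * expiryDates);
}

// Purely numeric, 9-digit document numbers versus an alphanumeric scheme such as
// the one Germany adopted in 2007:
console.log(log2KeySpace(10, 9).toFixed(1)); // about 56.9 bits (upper bound)
console.log(log2KeySpace(36, 9).toFixed(1)); // about 73.5 bits (upper bound)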
In addition, a public-key based extended access control mechanism is now used to protect any information in the RFID chip that goes beyond the minimum ICAO requirements, in particular fingerprint images. See also Predictable serial number attack References Sources "Security and Privacy Issues in E-passports" by Ari Juels, David Molnar, and David Wagner, retrieved March 15, 2006 "A Security Review of the Biometric Passport" by Bart Jacobs, retrieved March 15, 2006 (presentation slides) Security Mechanisms of the Biometrically Enhanced (EU) Passport by Dennis Kügler, Federal Office for Information Security, Germany (presentation slides from the 2nd International Conference on Security in Pervasive Computing 2005-04-07) External links 2 fired over Obama passport breach NBC March 20, 2008 Contactless smart cards Passports Access control
36114589
https://en.wikipedia.org/wiki/Trinity%20Desktop%20Environment
Trinity Desktop Environment
The Trinity Desktop Environment (TDE) is a complete software desktop environment designed for Linux and Unix-like operating systems, intended for computer users preferring a traditional desktop model, and is free/libre software. Born as a fork of KDE 3.5 in 2010, it was originally created by Timothy Pearson, who had coordinated Kubuntu remixes featuring KDE 3.5 after Kubuntu switched to KDE Plasma 4. TDE is now a fully independent project with its own personality and development team, available for various Linux distros, BSD and DilOS. It is currently led by Slávek Banko.
TDE releases aim to provide a stable and highly customizable desktop, with continuing bug fixes, additional features, and compatibility with recent hardware. Trinity is packaged for Debian, Ubuntu, Devuan, Raspbian, Fedora, RedHat, Mageia, OpenSUSE, Slackware and various other distributions and architectures. It is also used as the default desktop environment of at least two Linux distributions, Q4OS and Exe GNU/Linux. Since version 3.5.12 (its second official release), it uses its own fork of Qt3, known as TQt3, so as to make it easier to eventually make TQt installable alongside later Qt releases.
Releases
Early releases of Trinity used a versioning scheme based on that of K Desktop Environment 3.5, from which it was forked. The R14.0 release adopted a new versioning scheme, to prevent comparisons with KDE based on version number alone, and a new visual theme. The new theme, named "Trinity Lineart", was based on the "KDE Lineart" background included in the wallpapers package for KDE 3.4; it covered the desktop background along with the splash screen, the "application info screens" (for some apps like Konqueror and the Trinity Control Center), and banners (for some other apps like KPersonalizer and Kate). The window, widget, and icon themes were left intact, aside from replacing all KDE logos with Trinity logos. Prior to this, Trinity kept the KDE 3.5 visual theme, but replaced the "KDE 3.5" branding with "TDE" branding, in a font that is not the "Kabel Book" font KDE used, although the K-Menu had its side image branded as just "Trinity" instead of "TDE". Kubuntu versions, on the other hand, used the included "Crystal Fire" background as the default desktop background, along with the K-Menu "side image", larger menu items, and menu layout from Kubuntu 8.04.
History
References
External links
Official Git repository
Free desktop environments KDE
952894
https://en.wikipedia.org/wiki/HP%209000
HP 9000
HP 9000 is a line of workstation and server computer systems produced by the Hewlett-Packard (HP) Company. The native operating system for almost all HP 9000 systems is HP-UX, which is based on UNIX System V. The HP 9000 brand was introduced in 1984 to encompass several existing technical workstation models first launched in the early 1980s. Most of these were based on the Motorola 68000 series, but there were also entries based on HP's own FOCUS designs. From the mid-1980s, the line was transitioned to HP's new PA-RISC architecture. Finally, in the 2000s, systems using the IA-64 were added. The HP 9000 line was discontinued in 2008, being superseded by Itanium-based HPE Integrity Servers running HP-UX.
History
The first HP 9000 models comprised the HP 9000 Series 200 and Series 500 ranges. These were rebadged existing models, the Series 200 including various Motorola 68000 (68k) based workstations such as the HP 9826 and HP 9836, and the Series 500 using HP's FOCUS microprocessor architecture introduced in the HP 9020 workstation. These were followed by the HP 9000 Series 300 and Series 400 workstations, which also used 68k-series microprocessors. From the mid-1980s onward, HP began changing to its own microprocessors based on its proprietary PA-RISC instruction set architecture (ISA), for the Series 600, 700, 800, and later lines. More recent models use either the PA-RISC or its successor, the HP–Intel IA-64 ISA.
All of the HP 9000 line run various versions of the HP-UX operating system, except earlier Series 200 models, which ran standalone applications or the Basic Workstation / Pascal 3.1 Workstation operating systems. HP released the Series 400, also known as the Apollo 400, after acquiring Apollo Computer in 1989. These models had the ability to run either HP-UX or Apollo's Domain/OS.
From the early 1990s onward, HP replaced the HP 9000 Series numbers with an alphabetical Class nomenclature. In 2001, HP again changed the naming scheme for their HP 9000 servers. The A-class systems were renamed as the rp2400s, the L-class became the rp5400s, and the N-class the rp7400s. The rp prefix signified a PA-RISC architecture, while rx was used for IA-64-based systems, later rebranded HPE Integrity Servers.
On 30 April 2008, HP announced end of sales for the HP 9000. The last order date for HP 9000 systems was 31 December 2008 and the last ship date was 1 April 2009. The last order date for new HP 9000 options was 31 December 2009, with a last ship date of 1 April 2010. HP intends to support these systems through to 2013, with possible extensions.
The end of life for the HP 9000 also marks the end of an era, as it essentially represents HP's withdrawal from the Unix workstation market: the HP 9000 workstations are end of life, and there are no HP Integrity workstations, so there is no longer a solution which targets HP-UX at the desktop. When the move from PA-RISC (9000) to Itanium (Integrity) was announced, Integrity workstations running either HP-UX or Windows were initially announced and offered, but were moved to end of sales life relatively quickly, with no replacement, arguably because x86-64 made IA-64 uncompetitive on the desktop, and HP-UX does not support x86-64, with HP offering desktop Linux as an alternative, not fully compatible, solution.
Workstation models Prior to January 1985 (see also HP 9800 series): Series 200 16 (HP 9816), 20 (HP 9920), 26 (HP 9826), 36 (HP 9836) Series 500 20 (HP 9020), 30 (HP 9030), 40 (HP 9040) After 1985: Series 200 216 (HP 9816), 217 (HP 9817), 220 (HP 9920), 226 (HP 9826), 236 (HP 9836), 237 (HP 9837) Series 300 310, 318, 319, 320, 322, 330, 332, 340, 345, 350, 360, 362, 370, 375, 380, 382, 385 Series 400 (HP Apollo 9000 Series 400) 400dl, 400s, 400t, 425dl, 425e, 425s, 425t, 433dl, 433s, 433t Series 500 520 (HP 9020), 530 (HP 9030), 540 (HP 9040), 550, 560 Series 600 635SV, 645SV Series 700 705, 710, 712, 715, 720, 725, 730, 735, 742, 743, 744, 745, 747, 748, 750, 755 B-class B132L, B160L, B132L+, B180L, B1000, B2000, B2600 C-class C100, C110, C132L, C160, C160L, C180, C180L, C180XP, C200, C240, C360, C3000, C3600, C3650, C3700, C3750, C8000 J-class J200, J210, J210XC, J280, J282, J2240, J5000, J5600, J6000, J6700, J6750, J7000 Series 200 The Series 200 workstations originated before there were any "Series" at HP. The first model was the HP 9826A, followed by the HP 9836A. Later, a color version of the 9836 (9836C) was introduced. There was also a rack-mount version, the HP 9920A. These were all based on the Motorola 68000 chip. There were 'S' versions of the models that included memory bundled in. When HP-UX was included as an OS, there was a 'U' version of the 9836s and 9920 that used the 68012 processor. The model numbers included the letter 'U' (9836U, 9836CU, and 9920U). Later versions of the Series 200's included the 9816, 9817, and 9837. These systems were soon renamed as the HP Series 200 line, before being renamed again as part HP 9000 family, the HP 9000 Series 200. There was also a "portable" version of the Series 200 called the Integral. The official model was the HP9807. This machine was about the size of a portable sewing machine, contained a MC68000 processor, ROM based HP-UX, 3½ inch floppy disk drive, inkjet printer, a keyboard, mouse, and an electroluminescent display similar to the early GRiD Compass computers. It was not battery powered, and unlike the other Series 200's that were manufactured in Fort Collins, Colorado, it was made in Corvallis, Oregon. Series 300/400 The Series 300 workstations were based around Motorola 68000-series processors, ranging from the 68010 (Model 310, introduced 1985) to the Motorola 68040 (Model 38x, introduced 1991). The Series 400 (introduced 1990) were intended to supersede the Apollo/Domain workstations and were also based on the 68030/040. They were branded "HP Apollo" and added Apollo Domain/OS compatibility. The suffix 's' and 't' used on the Series 400 represented "Side" (as in Desk side) and "Top" (as in Desk top) model. The last two digits of the Series 400 originally was the clock frequency of the processor in MHz (e.g. 433 was 33 MHz). At introduction, the Series 400 had a socket for the MC68040, but since they were not available at the time, an emulator card with an MC68030 and additional circuitry was installed. Customers who purchased systems were given a guaranteed upgrade price of $5,000USD to the MC68040, when they became available. The Series 300 and 400 shared the same I/O interface as the Series 200. The 32-bit DIO-II bus is rated at 6 MB/s. Series 500 The Series 500s were based on the HP FOCUS microprocessor. They began as the HP 9020, HP 9030, and HP 9040, were renamed the HP Series 500 Model 20, 30, and 40 shortly after introduction, and later renamed again as the HP 9000 Model 520, 530 and 540. 
The 520 was a complete workstation with built-in keyboard, display, 5.25-inch floppy disk, and optional thermal printer and 5 MB hard disk. The 520 could run BASIC or HP-UX and there were three different models based on the displays attached (two color and one monochrome). The 530 was a rackmount version of the Series 500, could only run HP-UX, and used a serial interface console. The 540 was a 530 mounted inside a cabinet, similar to the disk drives offered then and included a serial multiplexer (MUX). Later models of the Series 500s were the 550 and 560, which had a completely different chassis and could be connected to graphics processors. The processors in the original Series 500s ran at 20 MHz, and could reach a benchmark speed of 1 million instructions per second (MIPS), equivalent to a VAX-11/780, then a common benchmark standard. They could be networked together and with 200 and 300 series using the Shared Resource Manager (SRM). Because of their performance, the US government placed the 500 series on its export restricted list. The computers were only permitted to be sold in Western Europe, Canada, Australia, and New Zealand, with any other country needing written approval. Series 700 The first workstations in the series, the Model 720, Model 730 and Model 750 systems were introduced on 26 March 1991 and were code-named "Snakes". The models used the PA-7000 microprocessor, with the Model 720 using a 50 MHz version and the Model 730 and Model 750 using a 66 MHz version. The PA-7000 is provided with 128 KB of instruction cache on the Model 720 and 730 and 256 KB on the Model 750. All models are provided with 256 KB of data cache. The Model 720 and Model 730 supported 16 to 64 MB of memory, while the Model 750 supported up to 192 MB. Onboard SCSI was provided by an NCR 53C700 SCSI controller. These systems could use both 2D and 3D graphics options, with 2D options being the greyscale GRX and the color CRX. 3D options were the Personal VRX and the Turbo GRX. In early January 1992, HP introduced the Model 705, code-named "Bushmaster Snake", and the Model 710, code-named "Bushmaster Junior". Both systems are low-end diskless workstations, with the Model 705 using a 32 MHz PA-7000 and the Model 710 using a 50 MHz version. At introduction, the Model 705 was priced at under US$5,000, and the Model 710 under US$10,000. The first Series 700 workstations were superseded by the Model 715/33, 715/50, 725/50 low-end workstations and the Model 735/99, 735/125, 755/99 and 755/125 high-end workstations on 10 November 1992. The existing Model 715 and Model 725 were later updated with the introduction of the Model 715/75 and 725/75 in September 1993. The new models used a 75 MHz PA-7100. Increasing integration led to the introduction of the Model 712/60 and Model 712/80i workstations on 18 January 1994. Code-named "Gecko", these models were intended to compete with entry-level workstations from Sun Microsystems and high-end personal computers. They used the PA-7100LC microprocessor operating at 60 and 80 MHz, respectively. The Model 712/80i was an integer only model, with the floating point-unit disabled. Both supported 16 to 128 MB of memory. The Model 715/64, 715/80, 715/100 and 725/100 were introduced in May 1994, targeted at the 2D and 3D graphics market. These workstations use the PA-7100LC microprocessor and supported 32 to 128 MB of memory, except for the Model 725/100, which supported up to 512 MB. 
The Model 712/100 (King Gecko), an entry-level workstation, and Model 715/100 XC, a mid-range workstation, were introduced in June 1995. The Model 712/100 is a Model 712 with a 100 MHz PA-7100LC and 256 KB of cache while the Model 715/100 XC is a Model 715/100 with 1 MB of cache. The Model 712 and 715 workstations feature the Lasi ASIC, connected by the GSC bus. The Lasi ASIC provided an integrated NCR 53C710 SCSI controller, an Intel Apricot 10 Mbit Ethernet interface, CD-quality sound, PS/2 keyboard and mouse, a serial and a parallel port. All models, except for the 712 series machines also use the Wax ASIC to provide an EISA adapter, a second serial port and support for the HIL bus. The SGC bus (System Graphics Connect), which is used in the earlier series 700 workstations, has similar specifications as PCI with 32-bit/33 MHz and a typical bandwidth of about 100 MB/s . VME Industrial Workstations Models 742i, 743i, 744, 745/745i, 747i, 748i. B, C, J class The C100, C110, J200, J210 and J210XC use the PA-7200 processor, connected to the UTurn IOMMU via the Runway bus. The C100 and C110 are single processor systems, and the J200 and J210 are dual processor systems. The Uturn IOMMU has two GSC buses. These machines continue to use the Lasi and Wax ASICs. The B132L (introduced 1996), B160L, B132L+, B180L, C132L, C160L and C180L workstations are based on the PA-7300LC processor, a development of the PA-7100LC with integrated cache and GSC bus controller. Standard graphics is the Visualize EG. These machines use the Dino GSC to PCI adapter which also provides the second serial port in place of Wax; they optionally have the Wax EISA adapter. The C160, C180, C180-XP, J280 and J282 use the PA-8000 processor and are the first 64-bit HP workstations. They are based on the same Runway/GSC architecture as the earlier C and J class workstations. The C200, C240 and J2240 offer increased speed with the PA-8200 processor and the C360 uses the PA-8500 processor. The B1000, B2000, C3000, J5000 and J7000 were also based on the PA-8500 processor, but had a very different architecture. The U2/Uturn IOMMU and the GSC bus is gone, replaced with the Astro IOMMU, connected via Ropes to several Elroy PCI host adapters. The B2600, C3600 and J5600 upgrade these machines with the PA-8600 processor. The J6000 is a rack-mountable workstation which can also be stood on its side in a tower configuration. The C3650, C3700, C3750, J6700 and J6750 are PA-8700-based. The C8000 uses the dual-core PA-8800 or PA-8900 processors, which uses the same bus as the McKinley and Madison Itanium processors and shares the same zx1 chipset. The Elroy PCI adapters have been replaced with Mercury PCI-X adapters and one Quicksilver AGP 8x adapter. 
Server models 800 Series 807, 817, 822, 825, 827, 832, 835, 837, 840, 842, 845, 847, 850,855, 857, 867, 877, 887, 897 1200 FT Series 1210, 1245, 1245 PLUS A-class A180, A180C (Staccato), A400, A500 D-class D200, D210, D220, D230, D250, D260, D270, D280, D300, D310, D320, D330, D350, D360, D370, D380, D390 E-class E25, E35, E45, E55 F-class F10, F20, F30 (Nova) G-class G30, G40, G50, G60, G70 (Nova / Nova64) H-class H20, H30, H40, H50, H60, H70 I-class I30, I40, I50, I60, I70 K-class K100, K200, K210, K220, K250, K260, K360, K370, K380, K400, K410, K420, K450, K460, K570, K580 L-class L1000, L1500, L2000, L3000 N-class N4000 N-class N4004 N-class N4005 N-class N4006 R-class R380, R390 S-class rebadged Convex Exemplar SPP2000 (single-node) T-class T500, T520, T600 V-class V2200, V2250, V2500, V2600 X-class rebadged Convex Exemplar SPP2000 (multi-node) rp2400 rp2400 (A400), rp2405 (A400), rp2430 (A400), rp2450 (A500), rp2470 (A500) (former A-class) rp3400 rp3410-2, rp3440-4 (1-2 PA-8800/8900 processors) rp4400 rp4410-4, rp4440-8 rp5400 rp5400, rp5405, rp5430, rp5450, rp5470 (former L-class) rp7400 rp7400 (former N-class) rp7405 rp7405, rp7410, rp7420-16, rp7440-16 rp8400 rp8400, rp8410, rp8420-32, rp8440-32 HP 9000 Superdome SD-32, SD-64, SD-128 (PA-8900 processors) D-class (Codename: Ultralight) The D-class are entry-level and mid-range servers that succeeded the entry-level E-class servers and the mid-range G-, H-, I-class servers. The first models were introduced in late January 1996, consisting of the Model D200, D210, D250, D310 and D350. The Model D200 is a uniprocessor with a 75 MHz PA-7100LC microprocessor, support for up to 512 MB of memory and five EISA/HP-HSC slots. The Model D210 is similar, but it used a 100 MHz PA-7100LC. The Model D250 is dual-processor model and it used the 100 MHz PA-7100LC. It supported up to 768 MB of memory and had five EISA/HP-HSC slots. The Model D310 is a uniprocessor with a 100 MHz PA-7100LC, up to 512 MB of memory and eight EISA/HP-HSC slots. The Model D350 is a high-end D-class system, a dual-processor, it had two 100 MHz PA-7100LCs, up to 768 MB of memory and eight EISA/HP-HSC slots. In mid-September 1996, two new D-class servers were introduced to utilize the new 64-bit PA-8000 microprocessor, the Model D270 uniprocessor and the Model D370 dual-processor. Both were positioned as entry-level servers. They used the 160 MHz PA-8000 and supported 128 MB to 1.5 GB of memory. In January 1997, the low-end Model D220, D230, D320 and D330 were introduced, using 132 and 160 MHz versions of the PA-7300LC microprocessor. The D-class are tower servers with up to two microprocessors and are architecturally similar to the K-class. They sometimes masquerade as larger machines as HP shipped them mounted vertically inside a large cabinet containing a power supply and multiple disks with plenty of room for air to circulate. R-class The R-class is simply a D-class machine packaged in a rack-mount chassis. Unlike the D-class systems, it does not support hot-pluggable disks. N-class The N-class is a 10U rackmount server with up to eight CPUs and 12 PCI slots. It uses two Merced buses, one for every four processor slots. It is not a NUMA machine, having equal access to all memory slots. The I/O is unequal though; having one Ike IOMMU per bus means that one set of CPUs are closer to one set of I/O slots than the other. The N-class servers were marketed as "Itanium-ready", although when the Itanium shipped, no Itanium upgrade was made available for the N class. 
The N class did benefit from using the Merced bus, bridging the PA-8x00 microprocessors to it via a special adapter called DEW. The N4000 was upgraded with newer processors throughout its life, with models called N4000-36, N4000-44 and N4000-55 indicating microprocessor clock frequencies of 360, 440, and 550 MHz, respectively. It was renamed to the rp7400 series in 2001. L-class The L-class servers are 7U rackmount machines with up to 4 CPUs (depending on model). They have 12 PCI slots, but only 7 slots are enabled in the entry-level L1000 system. Two of the PCI slots are occupied by factory integrated cards and cannot be utilized for I/O expansion by the end-user. The L1000 and L2000 are similar to the A400 and A500, being based on an Astro/Elroy combination. They initially shipped with 360 MHz and 440 MHz PA-8500 and were upgraded with 540 MHz PA-8600. The L3000 is similar to the N4000, being based on a DEW/Ike/Elroy combination. It shipped only with 550 MHz PA-8600 CPUs. The L-class family was renamed to the rp5400 series in 2001. A-class The A180 and A180C were 32-bit, single-processor, 2U servers based on the PA-7300LC processor with the Lasi and Dino ASICs. The A400 and A500 servers were 64-bit, single and dual-processor 2U servers based on the PA-8500 and later processors, using the Astro IOMMU and Elroy PCI adapters. The A400-36 and A500-36 machines used the PA-8500 processor running at 360 MHz; the A400-44 and A500-44 are clocked at 440 MHz. The A500-55 uses a PA-8600 processor running at 550 MHz and the A500-75 uses a PA-8700 processor running at 750 MHz. The A-class was renamed to the rp2400 series in 2001. S/X-class The S- and X-class were Convex Exemplar SPP2000 supercomputers rebadged after HP's acquisition of Convex Computer in 1995. The S-class was a single-node SPP2000 with up to 16 processors, while the X-class name was used for multi-node configurations with up to 512 processors. These machines ran Convex's SPP-UX operating system. V-class The V-class servers were based on the multiprocessor technology from the S-class and X-class. The V2200 and V2250 support a maximum of 16 processors, and the V2500 and V2600 support a maximum of 32 processors. The V-class systems are physically large systems that need extensive cooling and three-phase electric power to operate. They provided a transitional platform between the T-class and the introduction of the Superdome. Operating systems Apart from HP-UX and Domain/OS (on the 400), many HP 9000s can also run the Linux operating system. Some PA-RISC-based models are able to run NeXTSTEP. Berkeley Software Distribution (BSD) Unix was ported to the HP 9000 as HPBSD; the resulting support code was later added to 4.4BSD. Its modern variants NetBSD and OpenBSD also support various HP 9000 models, both Motorola 68k and PA-RISC based. In the early 1990s, several Unix R&D systems were ported to the PA-RISC platform, including several attempts of OSF/1, various Mach ports and systems that combined parts of Mach with other systems (MkLinux, Mach 4/Lites). The origin of these ports were mostly either internal HP Labs projects or HP products, or academic research, mostly at the University of Utah. One project conducted at HP Laboratories involved replacing core HP-UX functionality, specifically the virtual memory and process management subsystems, with Mach functionality from Mach 2.0 and 2.5. 
This effectively provided a vehicle for porting Mach to the PA-RISC architecture, as opposed to starting with the Berkeley Software Distribution configured to use the Mach kernel infrastructure and porting that to PA-RISC, and it thereby delivered a version of HP-UX 2.0 based on Mach, albeit with certain features missing from both Mach and HP-UX. The motivation for the project was to investigate performance issues with Mach related to the cache architecture of PA-RISC, along with potential remedies for these issues.
See also
HP 3000
HPE Integrity Servers
HP Superdome
HP 9800 series, prior series of scientific computer workstations
HP 7935 disc drive
Notes
External links
HP 9000 evolution, HP 9000 evolution to HP Integrity
Official HP Mission-Critical Musings Blog
HP 9836 at old-computers.com
HP Computer Museum
OpenPA.net Information resource on HP PA-RISC-based computers, including HP 9000/700, 800 and later systems
Community site about hp9000 workstations and servers, gathering information, part numbers and documentation in PDF format (in French).
9000 9000 Computer workstations Computer-related introductions in 1984 32-bit computers 64-bit computers
45868
https://en.wikipedia.org/wiki/National%20Center%20for%20Supercomputing%20Applications
National Center for Supercomputing Applications
The National Center for Supercomputing Applications (NCSA) is a state-federal partnership to develop and deploy national-scale cyberinfrastructure that advances research, science and engineering based in the United States. NCSA operates as a unit of the University of Illinois Urbana-Champaign, and provides high-performance computing resources to researchers across the country. Support for NCSA comes from the National Science Foundation, the state of Illinois, the University of Illinois, business and industry partners, and other federal agencies. NCSA provides leading-edge computing, data storage, and visualization resources. NCSA computational and data environment implements a multi-architecture hardware strategy, deploying both clusters and shared memory systems to support high-end users and communities on the architectures best-suited to their requirements. Nearly 1,360 scientists, engineers and students used the computing and data systems at NCSA to support research in more than 830 projects. NCSA is led by Bill Gropp. History NCSA is one of the five original centers in the National Science Foundation's Supercomputer Centers Program. The idea for NCSA and the four other supercomputer centers arose from the frustration of its founder, Larry Smarr, who wrote an influential paper, "The Supercomputer Famine in American Universities", in 1982, after having to travel to Europe in summertime to access supercomputers and conduct his research. Smarr wrote a proposal to address the future needs of scientific research. Seven other University of Illinois professors joined as co-principal investigators, and many others provided descriptions of what could be accomplished if the proposal were accepted. Known as the Black Proposal (after the color of its cover), it was submitted to the NSF in 1983. It met the NSF's mandate and its contents immediately generated excitement. However, the NSF had no organization in place to support it, and the proposal itself did not contain a clearly defined home for its implementation. The NSF established an Office of Scientific Computing in 1984 and, with strong congressional support, it announced a national competition that would fund a set of supercomputer centers like the one described in the Black Proposal. The result was that four supercomputer centers would be chartered (Cornell, Illinois, Princeton, and San Diego), with a fifth (Pittsburgh) added later. The Black Proposal was approved in 1985 and marked the foundation of NCSA, with $42,751,000 in funding from 1 January 1985 through 31 December 1989. This was also noteworthy in that the NSF's action of approving an unsolicited proposal was unprecedented. NCSA opened its doors in January 1986. In 2007, NCSA was awarded a grant from the National Science Foundation to build "Blue Waters", a supercomputer capable of performing quadrillions of calculations per second, a level of performance known as petascale. Black Proposal The 'Black Proposal' was a short, ten-page proposal for the creation of a supercomputing center that eventually led to funding from the National Science Foundation (NSF) to create supercomputing centers, including the National Center for Supercomputing Applications (NCSA) at the University of Illinois. In this sense, the significant role played by the U.S. Government in funding the center, and the first widely popular web browser (NCSA's Mosaic), cannot be denied. 
The Black Proposal described the limitations on any scientific research that required computer capabilities, and it described a future world of productive scientific collaboration, centered on universal computer access, in which technical limitations on scientific research would not exist. Significantly, it expressed a clear vision of how to get from the present to the future. The proposal was titled "A Center for Scientific and Engineering Supercomputing", and was ten pages long. The proposal's vision of the computing future were then unusual or non-existent, but elements of it are now commonplace, such as visualization, workstations, high-speed I/O, data storage, software engineering, and close collaboration with the multi-disciplinary user community. Modern readers of the Black Proposal may gain insight into a world that no longer exists. Today's computers are easy to use, and the web is omnipresent. Employees in high-tech endeavors are given supercomputer accounts simply because they are employees. Computers are universally available and can be used by almost anyone of any age, applicable to almost anything. At the time the proposal was written, computers were available to almost no one. For scientists who needed computers in their research, access was difficult if available at all. The effect on research was crippling. Reading publications from that time gives no hint that scientists were required to learn the arcane technical details of whatever computer facilities were available to them, a time-consuming limitation on their research, and an exceedingly tedious distraction from their professional interests. The implementation of the Black Proposal had a primary role in shaping the computer technology of today, and its impact on research (both scientific and otherwise) has been profound. The proposal's description of the leading edge of scientific research may be sobering, and the limitations on computer usage at major universities may be surprising. A comprehensive list of the world's supercomputers shows the best resources that were then available. The thrust of the proposal may seem obvious now, but was then novel. The National Science Foundation announced funding for the supercomputer centers in 1985; The first supercomputer at NCSA came online in January 1986. NCSA quickly came to the attention of the worldwide scientific community with the release of NCSA Telnet in 1986. A number of other tools followed, and like NCSA Telnet, all were made available to everyone at no cost. In 1993, NCSA released the Mosaic web browser, the first popular graphical Web browser, which played an important part in expanding the growth of the World Wide Web. NCSA Mosaic was written by Marc Andreessen and Eric Bina, who went on to develop the Netscape Web browser. Mosaic was later licensed to Spyglass, Inc. which provided the foundation for Internet Explorer. The server-complement was called NCSA HTTPd, which later became known as Apache HTTP Server. Other notable contributions by NCSA were the black hole simulations supporting the development of LIGO in 1992, the tracking of Comet Hale–Bopp in 1997, the creation of a PlayStation 2 Cluster in 2003, and the monitoring of the COVID-19 pandemic and creation of a COVID-19 vaccine. Facilities Initially, NCSA's administrative offices were in the Water Resources Building and employees were scattered across the campus. 
NCSA is now headquartered within its own building directly north of the Siebel Center for Computer Science, on the site of a former baseball field, Illini Field. NCSA's supercomputers are at the National Petascale Computing Facility. Movies/Visualization NCSA's visualization department is internationally well-known. Donna Cox, leader of the Advanced Visualization Laboratory at NCSA and a professor in the School of Art and Design at the University of Illinois Urbana-Champaign, and her team created visualizations for the Oscar-nominated IMAX film "Cosmic Voyage", the PBS NOVA episodes "Hunt for the Supertwister" and "Runaway Universe", as well as Discovery Channel documentaries and pieces for CNN and NBC Nightly News. Cox and NCSA worked with the American Museum of Natural History to produce high-resolution visualizations for the Hayden Planetarium's 2000 Millennium show, "Passport to the Universe", and for "The Search for Life: Are We Alone?" She produced visualizations for the Hayden's "Big Bang Theatre" and worked with the Denver Museum of Nature and Science to produce high-resolution data-driven visualizations of terabytes of scientific data for "Black Holes: The Other Side of Infinity", a digital dome program on black holes. Private Business Partners Referred to as the Industrial Partners program when it began in 1986, NCSA's collaboration with major corporations ensured that its expertise and emerging technologies would be relevant to major challenges outside of the academic world, as those challenges arose. Business partners had no control over research or the disposition of its results, but they were well-situated to be early adopters of any benefits of the research. This program is now called NCSA Industry. Past and current business partners include: Abaqus Abbvie ACNielsen Allstate Insurance American Airlines AT&T Inc. Boeing Phantom Works Caterpillar Inc. Dell Inc. Dow Chemical Eastman Kodak Eli Lilly and Company Exxon Mobil FMC Corporation Ford IBM Illinois Rocstar LLC Innerlink John Deere JPMorgan Chase Kellogg McDonnell Douglas (now part of Boeing) Motorola Phillips Petroleum Company Schlumberger Sears, Roebuck and Company Shell Oil State Farm Tribune Media Company United Technologies See also Blue Waters NCSA Brown Dog Coordinated Science Laboratory Cyberinfrastructure Mosaic (web browser) NCSA HTTPd NCSA Telnet The Beckman Institute References External links Buildings and structures of the University of Illinois at Urbana–Champaign Cyberinfrastructure E-Science History of the Internet National Science Foundation Research institutes established in 1986 Supercomputer sites 1986 establishments in the United States Computer science institutes in the United States
13757883
https://en.wikipedia.org/wiki/Charnwood%20Forest%20Railway
Charnwood Forest Railway
The Charnwood Forest Railway was a branch line in Leicestershire constructed by the Charnwood Forest Company between 1881 and 1883. The branch line ran from Coalville (joined from the Ashby and Nuneaton Joint Railway (ANJR)) to the town of Loughborough. It should not be confused with the much earlier railway that was part of the Charnwood Forest Canal. Stations on the Charnwood Forest Railway were located at Coalville East, Whitwick, Shepshed and Loughborough Derby Road. By 1885, the company had been placed in receivership; under this supervision, in 1907 three halts were opened, these being Thringstone Halt, Grace Dieu Halt and Snells Nook Halt. These were an attempt to improve the profitability of the line by increasing the customer base. The line was worked by the London and North Western Railway (LNWR) and was taken over by the London Midland & Scottish Railway (LMS) in 1923. Passenger services ceased to operate on 13 April 1931, with freight services ceasing to operate on 12 December 1963. The line was known as the 'Bluebell Line' due to the flowers growing along much of the length of the line during the spring.
History
Formation
According to Hadfield, in 1828 the owners of the disused Charnwood Forest Canal turned down an approach by Leicestershire coal owners for permission to lay rails along the now dry canal bed. The idea was to bring coal by this rail route from Whitwick and Swannington to Loughborough, where it could then be transferred on to boats which would bring the coal into Leicester at West Bridge Wharf. Not to be deterred, the thwarted coal owners then promoted a Bill in 1829 which resulted in the construction of the Leicester and Swannington Railway, opened in 1832 — Leicestershire's first railway.
Interest then came from the London and North Western Railway, which saw the line as a way of getting a foothold in the coal mining area. The line was authorised by an Act of Parliament passed in 1874 to lay a single-track railway from Nuneaton Junction near on the Ashby and Nuneaton Joint Railway to Loughborough. The intention was to link it to the Midland Main Line, but this never happened and the terminus was at Loughborough Derby Road.
A ceremony marking the beginning of work was held on a very rainy day, 31 August 1881. The first turf was cut by Lady Packe (wife of Hussey Packe) of Prestwold Hall. Squire de Lisle of Garendon Hall wheeled the first barrow load of soil over a plank, but due to the rain he slipped and spilled it all, causing great amusement.
The line was long with four stations serving Coalville (East), Whitwick, Shepshed and Loughborough (Derby Road). There was a rock face of through which to go at Thringstone and in order to swing the curve at Grace Dieu a cant of was required. The line also had a steep gradient of 1 in 66 between Whitwick and Coalville East. The line opened on 16 April 1883 and was worked by the LNWR, which had subscribed a third of the capital in exchange for 50% of the gross receipts.
Decline
As a result of being unable to pay the interest on debenture stocks, and partly due to financial malfeasance by the Secretary, the line went into bankruptcy in 1885. In 1906 there was a move to join up with the Great Central Railway at Loughborough, but nothing came of it and the terminus remained at Loughborough's Derby Road. The company then came up with two initiatives designed to improve profitability: cheap-to-run railmotor services were introduced between Loughborough and Shackerstone, and on 2 April 1907 three halts were opened for use with them.
On leaving the hands of the receiver in 1909, it remained a separate company until the LMS absorbed it in the 1923 grouping. Passenger trains were operated by an LNWR steam railmotor, though 2-4-2 tank, 0-6-0 freight and 0-6-2 Coal Tank locomotives also worked passenger services as well as freights. The LNWR ran nine passenger trains from Derby Road station in Loughborough, with most going through to Shackerstone and two continuing to Nuneaton. In 1922 a passenger could leave Euston at 5:35, change at Nuneaton and Shackerstone and be in Whitwick at 8:32. The line was never successful and went into a decline after World War I. The LMS withdrew passenger services on 13 April 1931.
Final Days
During World War II the line enabled large amounts of road stone from quarries to be conveyed to new aerodromes throughout the country. Additionally, the line served a number of ammunition dumps, the army ambulance train was kept at Loughborough, rubber was stored at Shepshed and the USA Post Office was based at Coalville East. After the war excursion trains ran on the line until 1951, and Loughborough goods yard closed on 31 October 1955. On 14 April 1957 "The Charnwood Forester" was the last train to run through to Loughborough. The last excursions on the line took place in 1962, when the Manchester Railway Society ran a series of trips. Pulled by loco '43728', the service ran from Charnwood Junction to Shepshed and back. The remaining goods services closed on 7 October 1963, except for Shepshed quarry traffic, which lasted to 12 December 1963.
Route
The branch was very picturesque, passing through the north-western corner of Charnwood Forest, which was a mass of bluebells in the spring, resulting in the epithet "The Bluebell Line" in passenger days, although it was not the only line to be so termed. It was also known as the 'Bread and Herring Line' by the drivers and firemen.
The halts opened in 1907 at Thringstone, Grace Dieu and Snells Nook were an attempt to attract passengers and enable effective competition with new omnibus services. All of the halts were merely platforms six feet wide, thirty-three inches high and sixty feet long, and made up of old sleepers. Waiting huts were added later. Originally, passengers boarding at the halts paid on the train, but when the huts were provided the guard issued the tickets from the huts. It was also the guard's duty to tend the oil lamps at the platforms.
Route of the Charnwood Forest Line continuing from the Ashby and Nuneaton Joint Railway (ANJR):
Charnwood Junction
Coalville East Station (not to be confused with Coalville Town railway station)
Whitwick Station
Thringstone Halt. Thringstone halt was located only three quarters of a mile from the station at Whitwick, in a cutting on the south side of the Gracedieu Road Bridge. The short platform was on the village side of the line and reached by steps down from the road. In 1914, some seven years after the halt had been opened, a hut was provided at the back of the platform following several requests by local residents. This was a standard LNWR 'portable' type, 16 ft x 18 ft, of timber construction and with a plain pitched roof. Following its closure in 1931, the hut was rented by a Mr Ottey of Bauble Yard, Thringstone, for use as a cobbler's shop.
Grace Dieu Viaduct. A feature along the route is the Six Arch Viaduct in Gracedieu wood, spanning 40 yards, with 3" by 15" coping stones from nearby Mountsorrel.
Grace Dieu Halt Shepshed Station Snells Nook Halt Loughborough Derby Road Remains of the line Structures There are very few buildings still in existence which were once used by the railway. However, one still exists in Whitwick, and now serves as the home of the "Whitwick Historical Group". This is in the old station building near the market place. The goods shed at Loughborough Derby Road stood until 2018, latterly in use as part of an industrial estate, when it was demolished to make way for a supermarket. The only other structures still standing (and this is a tenuous link) are the numerous bridges still carrying road traffic dotted amongst the local countryside. One of the best still remains intact within Thringstone woods, near Grace Dieu Priory ruins. This is the Grace Dieu Viaduct, a grand and imposing structure for such a small line. The station buildings at Coalville East have been built upon (housing estate). The same has happened in Shepshed (industrial estate) and Loughborough. There is still a post near the site of Grace Dieu halt. The site of the halt itself was removed completely when the A512 was realigned and the CFR 3 arched bridge over the old road was demolished. The Trackbed The trackbed remains remarkably intact, although some is now on private land. The most convenient place to start is near Coalville's Morrisons outlet (at ) where the trackbed into Whitwick has been made a public right of way. This footpath meanders along the edge of the Hermitage Lake (a former clay quarry), past the modern leisure centre and then under the South Street bridge before passing the Whitwick station building and platform (although unfortunately the platform is unkempt and overgrown). This footpath along the trackbed ends at a T-junction just after Whitwick station, while the line went straight on over another bridge. The trackbed is less clear here, as it now runs through a private garden whose owner has carried out many alterations. At the other side of the garden, however, the trackbed still retains its original ballast and is in remarkably good condition for a short distance, passing through an area known as 'Happy Valley', until the growth of vegetation starts again. It is still clear where it went, but less easy to follow due to vegetation. The line still has ballast here. For reference, we are now passing under Whitwick's "Dumps Road" bridge. The trackbed is still obvious here, yet it becomes less clear as we pass through Thringstone. Alterations to the surroundings make it hard to tell. The trackbed can be readily picked up near Grace Dieu Wood, which will take you across the aforementioned viaduct, and past Grace Dieu Priory to the edge of a missing bridge at the side of the A512. The bridge here was demolished in 1967. Across this gap, the trackbed continues on private land, where it is used for farm access, though it is still an obvious railway trackbed. West of Shepshed the trackbed has been converted to a footpath, popular with dog walkers and boys practising their skills on mountain bicycles. The footpath starts at Charnwood Road in Shepshed () and finishes in a dead-end about 2 km to the west (). In places along the way, derelict remains of the Charnwood Forest Canal can be identified. Through Shepshed the trackbed has been obliterated, but to the east it is distinguishable again and passes behind a lorry park before the M1 motorway cuts across the path of the line. 
After the motorway, the trackbed ran through the southern edge of Garendon Park, after which the line has again been converted to a footpath and cycleway near Old Ashby Road, with a dead-end at and on towards Loughborough. The footpath along the trackbed leads all the way to Thorpe Hill, where a community centre has been constructed over the trackbed. After the community centre the trackbed is followable again down to Loughborough Fire Station, which again has been built on the trackbed. From here the route of the line is difficult to follow, but the footpath follows the course of the line closely. A care home and industrial estate have been built upon the rest of the former trackbed to Loughborough Derby Road station. The Station Hotel, now converted to a funeral home, is the only remaining structure of the small terminus constructed at this location. See also Loughborough Derby Road Whitwick railway station London and North Western Railway Ashby and Nuneaton Joint Railway Charnwood Forest Canal References Closed railway lines in the East Midlands Rail transport in Leicestershire Railway lines opened in 1883 London, Midland and Scottish Railway constituents
83203
https://en.wikipedia.org/wiki/Antiphates
Antiphates
In Greek mythology, Antiphates (; Ancient Greek: Ἀντιφάτης) is the name of five characters. Antiphates, son of Melampus and Iphianeira, the daughter of Megapenthes. He married Zeuxippe, the daughter of Hippocoon. Their children were Oecles and Amphalces. Antiphates, one of the Greek warriors who hid in the Trojan horse. Antiphates, a Trojan warrior, slain by Leonteus, commander of the Lapiths, during the Trojan War. Antiphates, King of the Laestrygones, a mythological tribe of gigantic cannibals. He was married and had a daughter. When he was visited by a scouting party sent by Odysseus, he ate one of the men on the spot and raised a hue and cry to ensure that most of the rest of Odysseus' company would be hunted down. Antiphates, son of Sarpedon, who accompanied Aeneas to Italy, where he was killed by Turnus. Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Diodorus Siculus, The Library of History translated by Charles Henry Oldfather. Twelve volumes. Loeb Classical Library. Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd. 1989. Vol. 3. Books 4.59–8. Online version at Bill Thayer's Web Site. Diodorus Siculus, Bibliotheca Historica. Vol 1-2. Immanuel Bekker. Ludwig Dindorf. Friedrich Vogel. in aedibus B. G. Teubneri. Leipzig. 1888–1890. Greek text available at the Perseus Digital Library. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library. Homer, The Odyssey with an English Translation by A.T. Murray, PH.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library. Greek text available from the same website. Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library. Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library. Tryphiodorus, Capture of Troy translated by Mair, A. W. Loeb Classical Library Volume 219. London: William Heinemann Ltd, 1928. Online version at theoi.com. Tryphiodorus, Capture of Troy with an English Translation by A.W. Mair. London, William Heinemann, Ltd.; New York: G.P. Putnam's Sons. 1928. Greek text available at the Perseus Digital Library. Princes in Greek mythology Kings in Greek mythology Achaeans (Homer) Trojans Characters in the Aeneid Characters in the Iliad Characters in the Odyssey Characters in Greek mythology
60603853
https://en.wikipedia.org/wiki/G0v
G0v
The g0v movement, or g0v, (pronounced gov-zero ) is an open source, open government collaboration started by Chia-liang Kao ("clkao"), ipa, kirby and others in late 2012 in Taiwan. Originally driven by a bimonthly hackathon, the community has expanded to include different professional and non-information technology background members. Symbolizing the community's efforts to "rethink the role of government from zero," and borrowing the parlance of binary from the digital world of 1s and 0s, the O in "gov" is replaced with a 0 to make "g0v"; for many government agencies in Taiwan which have URLs ending .gov.tw, replacing .gov with .g0v redirects the user to the so-called shadow government, a "forked" version of that agency with contributions by civic hackers. Continuing this inspiration from the software development world, the forked content can then be "merged" back into the government agency's website. g0v is a community that promotes the transparency of government information and is committed to developing information platforms and tools for citizens to participate in society. As of the beginning of 2014, there have been contributors across three continents, and the results have been released in a free software model that embraces knowledge sharing. Origin The "Real Price" Incident Amidst popular unrest regarding speculative housing inflation, Taiwanese President Ma Ying-jeou made housing justice a key component of his 2012 re-election platform. In an attempt to counter speculation and enable fair taxation, the parliament passed a bipartisan bill mandating that all real estate transactions register the actual price. As part of the mandate, the Ministry of the Interior commissioned a website on which people can find transaction records by street address. The site went live on October 16 to a flood of requests and remained only intermittently accessible for most of October. Three days after the launch, a team of four Google.tw engineers incorporated the Ministry's data into their Real-Price Maps website, overlaying aggregated pricing information on Google Maps with a plethora of filtering features. Their remix was well-received, successfully serving hundreds of requests per second from Google App Engine. A week later, Minister without portfolio Simon Chang (a Google alum himself) invited the remixers to a round table. The team responded amiably, offering detailed suggestions about how they would like to collaborate with the government. However, after media coverage pitted the team's shoestring budget of NTD$500 against the official site's “million-dollar disaster”, the relationship between the two soon turned sour. The Ministry claimed the crawling activity contributed to their server's downtime, while critics of the engineers' overlay questioned the legality of scraping and remixing government data. The incident came to a head on November 14, when the official site replaced all street addresses with image files, dramatically increasing the burden of crawling. Despite the fact that a civic hacker eventually published parsed data using OCR techniques, the Real-Price Maps site closed shortly thereafter. "Power-Up Plan for the Economy" While the Real Price incident was still unfolding, a new government production took the spotlight: A 40-second video advertisement titled “What's the Economy Power-Up Plan?” Critics found the advertisement, "Entirely devoid of information, the clip simply repeated the following monotonous refrain: 'We have a very complex plan. It is too complicated to explain. 
Never mind the details — just follow instructions and go along with it!'" The video faced criticism from the public, but went viral; many viewers on YouTube protested the advertisement by clicking “report abuse" on the video. The automated system quickly classified the video as spam and banned the government's YouTube account for two days. The video went on-air again on October 19, just before Yahoo! Open Hack Day 2012, an annual 24-hour event in which 64 teams demonstrate innovative creations. Infuriated by the controversial ad, the four members of the “Hacker #15” team made a last-minute pivot from their “online window shopping” project. Rather than displaying merchandise, they resolved to create a bird's-eye view of how taxes are spent. The resulting Budget Maps project presented each agency's annual spending in the form of geometric shapes of proportional sizes, inviting participants to review and rate each item's usefulness. Calling upon citizens to “strike out rip-off spending (e.g. the Power-Up ad)”, the two-minute demo won NTD$50,000 in Hack Day prizes. In an attempt to bolster interest in this and future projects past the demonstration day, team member CL Kao registered the domain name g0v.tw, dedicated to citizens’ remixes of government websites. The Real-Price Maps thus became accessible at lvr.land.moi.g0v.tw before its shutdown, with only one ASCII character of difference from its official counterpart at lvr.land.moi.gov.tw; meanwhile, the Budget Maps lived on at budget.g0v.tw as the inaugural g0v.tw project. Hackath0n Equipped with the new g0v.tw domain, the four hackers agreed to spend the NTD$50k prize on their own hackathon to enlist more projects into the syndicate of civic remixes. Modeled after participant-driven BarCamp events, they named the event 0th Hackathon of Martial Mobilization, or Hackath0n, invoking a rebellious image from Taiwan's 1949-era civil war. Registrants soon exceeded the initial venue's capacity. A lab director at Academia Sinica offered to host the event at the Institute of Information Science. On December 1, civic hackers filled the institute's 80-person auditorium and presented their projects, covering a wide range of government functions, including Congress, Tenders, Geography, Weather, Electricity, Healthcare and many other areas. Discussion continued online at Hackpad and IRC well after the daylong event. In support of the coding efforts, writers and bloggers formed a Facebook group offering on-demand copywriting skills to any project that asked for assistance. Designer Even Wu also initiated an on-demand design group, providing hackers with various visual assets. Dissatisfied with the makeshift logo banner, Wu would continue to work on several iterations of the logotypes, eventually completing a set of Visual Identity guidelines aimed at helping elevate g0v into an easily recognizable brand. Declaration g0v states, "[We] have demonstrated a way to combine online and offline activism. Following the model established by the Free Software community over the past two decades, we transformed social media into a platform for social production, with a fully open and decentralized cultural & technological framework." g0v summarizes its collaborative governance philosophy thus: "Ask not why nobody is doing this. You are the 'nobody'!" Activities and projects g0v community activities are both online and offline, including hackathons, speeches, sharing sessions, teaching, conference, and other activities. 
As of 2020 there are active g0v communities working on open government data in Taiwan and Hong Kong. g0v.tw projects Prominent applications and widgets created by g0v include: MoeDict, a digital Chinese dictionary developed by Audrey Tang Cofacts, a collaborative fact-checking bot g0vhk The Hong Kong branch of g0v, known as g0vhk, was founded in 2016 by data scientist Ho Wa Wong. The g0v movement gained popularity in Hong Kong due to its work in aggregating candidate information in the 2019 Hong Kong local elections and disseminating disease prevention news during the 2019–20 coronavirus pandemic. Prominent gadgets produced by the g0vhk community include: Hong Kong Address Parser Vote 4 Hong Kong (data aggregation for the 2019 and 2020 general elections) Covid-19 in HK dashboard g0v.it The Italian branch of g0v was founded in 2019 by the Copernicani NPO. The projects are listed on their website. See also Open government Radical transparency Sunflower Movement e-participation References External links Official English language website of the international g0v movement g0v Taiwan g0v Hong Kong Anonymity Hacker groups Information society Internet-based activism Internet culture Internet vigilantism Organizations established in 2012 2012 establishments in Taiwan
7383016
https://en.wikipedia.org/wiki/Communications%20and%20Information%20Technology%20Commission%20%28Saudi%20Arabia%29
Communications and Information Technology Commission (Saudi Arabia)
The Communications and Information Technology Commission (CITC) (, Hai'at al-Ittisalat wa Tiqniyyat al-Ma`lumat) is the Saudi communications authority. It was first established under the name of the Saudi Communications Commission in accordance with a decision of the Council of Ministers. The name was changed after the commission was assigned new tasks related to information technology. Since October 2006, CITC has been handling the DNS structure and filtering in Saudi Arabia in place of KACST (King Abdulaziz City for Science and Technology). Censorship The Communications and Information Technology Commission is responsible for regulating the Internet and for hosting a firewall which blocks access to thousands of websites, mainly due to sexual and political content. Access to Megaupload has been intermittently blocked by the Internet authorities in Saudi Arabia. ICT sector CITC is responsible for regulating the ICT sector in Saudi Arabia. The Telecommunications Act, issued by Royal Decree in 2001, provides the legal framework for organizing this sector. This Act involves a number of objectives such as: providing advanced and adequate telecommunication services at affordable prices, creating an appropriate atmosphere to encourage fair competition, using frequencies effectively, localization of telecommunication technology and managing recent advancements, clarity and transparency in procedures, equality and neutrality, and protection of the public interest as well as the interest of users and investors. Postal services At the end of 2019, CITC became responsible for regulating the postal sector in Saudi Arabia. This involved studying the current market, improving the regulatory environment and supporting service providers to enhance the quality of their services. New technology In alignment with the Saudi Vision 2030 and the National Transformation Program 2020, CITC is keen on facilitating the growth and localization of the IT and tech sector in the Kingdom. By 2023, CITC aims to increase the IT and emerging tech market size by regulating and licensing these technologies and driving global investment. Services Complaints Report Argami Coverage maps Spectrum Domain Name Registration Equipment licensing Meqyas Boniah References External links Communications and Information Technology Commission Communications authorities Telecommunications in Saudi Arabia Government agencies of Saudi Arabia Censorship in Saudi Arabia Regulation in Saudi Arabia
27353323
https://en.wikipedia.org/wiki/NeuroML
NeuroML
NeuroML is an XML (Extensible Markup Language) based model description language that aims to provide a common data format for defining and exchanging models in computational neuroscience. The focus of NeuroML is on models which are based on the biophysical and anatomical properties of real neurons. History The idea of creating NeuroML as a language for describing neuroscience models was first introduced by Goddard et al. (2001) following meetings in Edinburgh where initial templates for the language structures were discussed. This initial proposal was based on general purpose structures proposed by Gardner et al. (2001). At that time, the concept of NeuroML was closely linked with the idea of developing a software architecture in which a base application loads a range of plug-in components to handle different aspects of a simulation problem. Neosim (2003) was developed based on this goal, and early NeuroML development was closely aligned to this approach. Along with creating Neosim, Howell and Cannon developed a software library, the NeuroML Development Kit (NDK), to simplify the process of serializing models in XML. The NeuroML Development Kit implemented a particular dialect of XML, including the "listOfXXX" structure, which also found its way into SBML(Systems Biology Markup Language), but did not define any particular structures at the model description level. Instead, developers of plug-ins for Neosim were free to invent their own structures and serialize them via the NDK, in the hope that some consensus would emerge around the most useful ones. In practice, few developers beyond the Edinburgh group developed or used such structures and the resulting XML was too application specific to gain wider adoption. The Neosim project ended in 2005. Based on the ideas in Goddard et al. (2001) and discussions with the Edinburgh group, Sharon Crook began a collaborative effort to develop a language for describing neuronal morphologies in XML called MorphML. From the beginning, the idea behind MorphML was to develop a format for describing morphological structures that would include all of the necessary components to serve as a common data format with the added advantages of XML. At the same time, Padraig Gleeson and Angus Silver were developing neuroConstruct for generating neuronal simulations for the NEURON and GENESIS simulators. At that time, neuroConstruct utilized an internal simulator-independent representation for morphologies, channel and networks. It was agreed that these efforts should be merged under the banner of NeuroML, and the current structure of NeuroML was created. The schema was divided into levels (e.g. MorphML, ChannelML, and NetworkML) to allow different applications to support different part of the language. Since 2006 the XML Schema files for this version of the standard have been available from the NeuroML development site. The language Aims The main aims of the NeuroML initiative are to: To create specifications for a language (in XML) to describe the biophysics, anatomy and network architecture of neuronal systems at multiple scales To facilitate the exchange of complex neuronal network models between researchers, allowing for greater transparency and accessibility of models To promote software tools supporting NeuroML and to support the development of new software and databases To encourage researchers who create models within the scope of NeuroML to exchange and publish their models in this format. 
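As a rough illustration of the kind of XML serialization these aims point to, the following Python sketch (standard library only) assembles a small morphology: a cell built from two connected segments, each defined by proximal and distal 3D points with diameters. The element and attribute names are simplified assumptions chosen for readability, not the normative NeuroML schema.

# Illustrative sketch only: builds a MorphML-style XML fragment describing a
# two-segment neuronal morphology. Element and attribute names are simplified
# assumptions for readability, not the normative NeuroML schema.
import xml.etree.ElementTree as ET

cell = ET.Element("cell", name="example_cell")
segments = ET.SubElement(cell, "segments")

# Soma segment: proximal and distal points carry 3D position and diameter.
soma = ET.SubElement(segments, "segment", id="0", name="soma")
ET.SubElement(soma, "proximal", x="0", y="0", z="0", diameter="20")
ET.SubElement(soma, "distal", x="0", y="20", z="0", diameter="20")

# Dendrite segment attached to the soma via its parent id.
dend = ET.SubElement(segments, "segment", id="1", name="dend_0", parent="0")
ET.SubElement(dend, "proximal", x="0", y="20", z="0", diameter="3")
ET.SubElement(dend, "distal", x="0", y="120", z="0", diameter="3")

print(ET.tostring(cell, encoding="unicode"))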
Structure NeuroML is focused on biophysically and anatomically detailed models, i.e. incorporating real neuronal morphologies and membrane conductances (conductance based models), and network models based on known anatomical connectivity. The NeuroML structure is composed of Levels, where each Level deals with a particular biophysical scale. The modular nature of the specifications makes them easier to develop, understand, and use since one can focus on one module at a time; however, the modules are designed to fit together seamlessly. There are currently three Levels of NeuroML defined: Level 1 focuses on the anatomical aspects of cells and consists of a schema for Metadata and the main MorphML schema. Tools which model detailed neuronal morphologies (such as NeuronLand) can use the information contained in this Level. Level 2 describes the biophysical properties of cells and also the properties of channel and synaptic mechanisms using ChannelML. Software which simulates neuronal spiking behaviour (such as NEURON and MOOSE) can use this Level of model description. Level 3 describes the positions of cells in space and the network connectivity. This kind of information in NetworkML can be used by software (such as CX3D and PCSIM) to exchange details on network architecture. Level 3 files containing cell morphology and connectivity can also be used by applications such as neuroConstruct for reproducing and analysing networks of conductance based cell models. Current schemas in readable form are available on the NeuroML specifications page. Application support for NeuroML A list of software packages which support all or part of NeuroML is available on the NeuroML website. Community NeuroML is an international, free and open community effort. The NeuroML Team implements the NeuroML specifications, maintains the website and the validator, organizes annual workshops and other events, and manages specific funding for coordinating the further development of NeuroML. Version 2.0 of the NeuroML language is being developed by the Specification Committees. NeuroML also participates in the International Neuroinformatics Coordinating Facility Program on Multiscale Modeling. See also OpenXDF References External links neuroml.org XML-based standards Neuroinformatics
39477018
https://en.wikipedia.org/wiki/Tailored%20Access%20Operations
Tailored Access Operations
The Office of Tailored Access Operations (TAO), now Computer Network Operations, structured as S32 is a cyber-warfare intelligence-gathering unit of the National Security Agency (NSA). It has been active since at least 1998, possibly 1997, but was not named or structured as TAO until "the last days of 2000," according to General Michael Hayden. TAO identifies, monitors, infiltrates, and gathers intelligence on computer systems being used by entities foreign to the United States. History TAO is reportedly "the largest and arguably the most important component of the NSA's huge Signals Intelligence Directorate (SID), consisting of more than 1,000 military and civilian computer hackers, intelligence analysts, targeting specialists, computer hardware and software designers, and electrical engineers". Snowden leak A document leaked by former NSA contractor Edward Snowden describing the unit's work says TAO has software templates allowing it to break into commonly used hardware, including "routers, switches, and firewalls from multiple product vendor lines". TAO engineers prefer to tap networks rather than isolated computers, because there are typically many devices on a single network. Organization TAO's headquarters are termed the Remote Operations Center (ROC) and are based at the NSA headquarters at Fort Meade, Maryland. TAO also has expanded to NSA Hawaii (Wahiawa, Oahu), NSA Georgia (Fort Gordon, Georgia), NSA Texas (Joint Base San Antonio, Texas), and NSA Colorado (Buckley Space Force Base, Denver). S321 – Remote Operations Center (ROC) In the Remote Operations Center, 600 employees gather information from around the world. S323 – Data Network Technologies Branch (DNT) : develops automated spyware S3231 – Access Division (ACD) S3232 – Cyber Networks Technology Division (CNT) S3233 – S3234 – Computer Technology Division (CTD) S3235 – Network Technology Division (NTD) Telecommunications Network Technologies Branch (TNT) : improve network and computer hacking methods Mission Infrastructure Technologies Branch: operates the software provided above S328 – Access Technologies Operations Branch (ATO): Reportedly includes personnel seconded by the CIA and the FBI, who perform what are described as "off-net operations", which means they arrange for CIA agents to surreptitiously plant eavesdropping devices on computers and telecommunications systems overseas so that TAO's hackers may remotely access them from Fort Meade. Specially equipped submarines, currently the USS Jimmy Carter, are used to wiretap fibre optic cables around the globe. S3283 – Expeditionary Access Operations (EAO) S3285 – Persistence Division Virtual locations Details on a program titled QUANTUMSQUIRREL indicate NSA ability to masquerade as any routable IPv4 or IPv6 host. This enables an NSA computer to generate false geographical location and personal identification credentials when accessing the Internet utilizing QUANTUMSQUIRREL. NSA ANT catalog The NSA ANT catalog is a 50-page classified document listing technology available to the United States National Security Agency (NSA) Tailored Access Operations (TAO) by the Advanced Network Technology (ANT) Division to aid in cyber surveillance. Most devices are described as already operational and available to US nationals and members of the Five Eyes alliance. 
According to Der Spiegel, which released the catalog to the public on December 30, 2013, "The list reads like a mail-order catalog, one from which other NSA employees can order technologies from the ANT division for tapping their targets' data." The document was created in 2008. Security researcher Jacob Appelbaum gave a speech at the Chaos Communications Congress in Hamburg, Germany, in which he detailed techniques that the simultaneously published Der Spiegel article he coauthored disclosed from the catalog. QUANTUM attacks The TAO has developed an attack suite they call QUANTUM. It relies on a compromised router that duplicates internet traffic, typically HTTP requests, so that they go both to the intended target and to an NSA site (indirectly). The NSA site runs FOXACID software which sends back exploits that load in the background in the target web browser before the intended destination has had a chance to respond (it's unclear if the compromised router facilitates this race on the return trip). Prior to the development of this technology, FOXACID software made spear-phishing attacks the NSA referred to as spam. If the browser is exploitable, further permanent "implants" (rootkits etc.) are deployed in the target computer, e.g. OLYMPUSFIRE for Windows, which give complete remote access to the infected machine. This type of attack is part of the man-in-the-middle attack family, though more specifically it is called man-on-the-side attack. It is difficult to pull off without controlling some of the Internet backbone. There are numerous services that FOXACID can exploit this way. The names of some FOXACID modules are given below: alibabaForumUser doubleclickID rocketmail hi5 HotmailID LinkedIn mailruid msnMailToken64 qq Facebook simbarid Twitter Yahoo Gmail YouTube By collaboration with the British Government Communications Headquarters (GCHQ) (MUSCULAR), Google services could be attacked too, including Gmail. Finding machines that are exploitable and worth attacking is done using analytic databases such as XKeyscore. A specific method of finding vulnerable machines is interception of Windows Error Reporting traffic, which is logged into XKeyscore. QUANTUM attacks launched from NSA sites can be too slow for some combinations of targets and services as they essentially try to exploit a race condition, i.e. the NSA server is trying to beat the legitimate server with its response. As of mid-2011, the NSA was prototyping a capability codenamed QFIRE, which involved embedding their exploit-dispensing servers in virtual machines (running on VMware ESX) hosted closer to the target, in the so-called Special Collection Sites (SCS) network worldwide. The goal of QFIRE was to lower the latency of the spoofed response, thus increasing the probability of success. COMMENDEER is used to commandeer (i.e. compromise) untargeted computer systems. The software is used as a part of QUANTUMNATION, which also includes the software vulnerability scanner VALIDATOR. The tool was first described at the 2014 Chaos Communication Congress by Jacob Appelbaum, who characterized it as tyrannical. QUANTUMCOOKIE is a more complex form of attack which can be used against Tor users. Known targets and collaborations Suspected and confirmed targets of the Tailored Access Operations unit include national and international entities like China, OPEC, and Mexico's Secretariat of Public Security. 
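The QUANTUM technique described above is, at bottom, a race: an injected response is only useful if it reaches the target before the legitimate server's reply. The following Python sketch is a purely conceptual illustration of why proximity matters in such a race; it estimates how often one of two competing responses with assumed latency distributions arrives first. All latency figures are arbitrary assumptions, and the code models nothing beyond two competing delays.

# Conceptual illustration only: estimates how often a nearby responder's reply
# arrives before a distant server's, given assumed latency spreads.
import random

def win_rate(responder_ms, server_ms, jitter_ms=20.0, trials=100_000):
    wins = 0
    for _ in range(trials):
        responder = random.gauss(responder_ms, jitter_ms)
        server = random.gauss(server_ms, jitter_ms)
        if responder < server:
            wins += 1
    return wins / trials

# Arbitrary example latencies (milliseconds): a distant responder versus one
# hosted close to the target, each racing a server that is 120 ms away.
print("distant responder:", win_rate(150.0, 120.0))
print("nearby responder: ", win_rate(40.0, 120.0))

Under these made-up numbers the nearby responder wins the race far more often, which is the same reasoning given above for moving exploit-dispensing servers closer to the target under QFIRE.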
The group has also targeted global communication networks via SEA-ME-WE 4 – an optical fibre submarine communications cable system that carries telecommunications between Singapore, Malaysia, Thailand, Bangladesh, India, Sri Lanka, Pakistan, United Arab Emirates, Saudi Arabia, Sudan, Egypt, Italy, Tunisia, Algeria and France. Additionally, Försvarets radioanstalt (FRA) in Sweden gives access to fiber optic links for QUANTUM cooperation. TAO's QUANTUM INSERT technology was passed to UK services, particularly to GCHQ's MyNOC, which used it to target Belgacom and GPRS roaming exchange (GRX) providers like the Comfone, Syniverse, and Starhome. Belgacom, which provides services to the European Commission, the European Parliament and the European Council discovered the attack. In concert with the CIA and FBI, TAO is used to intercept laptops purchased online, divert them to secret warehouses where spyware and hardware is installed, and send them on to customers. TAO has also targeted internet browsers Tor and Firefox. According to a 2013 article in Foreign Policy, TAO has become "increasingly accomplished at its mission, thanks in part to the high-level cooperation it secretly receives from the 'big three' American telecom companies (AT&T, Verizon and Sprint), most of the large US-based Internet service providers, and many of the top computer security software manufacturers and consulting companies." A 2012 TAO budget document claims that these companies, on TAO's behest, "insert vulnerabilities into commercial encryption systems, IT systems, networks and endpoint communications devices used by targets". A number of US companies, including Cisco and Dell, have subsequently made public statements denying that they insert such back doors into their products. Microsoft provides advance warning to the NSA of vulnerabilities it knows about, before fixes or information about these vulnerabilities is available to the public; this enables TAO to execute so-called zero-day attacks. A Microsoft official who declined to be identified in the press confirmed that this is indeed the case, but said that Microsoft cannot be held responsible for how the NSA uses this advance information. Leadership Since 2013, the head of TAO is Rob Joyce, a 25-plus year employee who previously worked in the NSA's Information Assurance Directorate (IAD). In January 2016, Joyce had a rare public appearance when he gave a presentation at the Usenix’s Enigma conference. See also Advanced persistent threat Cyberwarfare in the United States Equation Group Magic Lantern (software) MiniPanzer and MegaPanzer PLA Unit 61398 Stuxnet Syrian Electronic Army WARRIOR PRIDE References External links Inside TAO: Documents Reveal Top NSA Hacking Unit NSA 'hacking unit' infiltrates computers around the world – report NSA Tailored Access Operations https://www.wired.com/threatlevel/2013/09/nsa-router-hacking/ https://www.nytimes.com/2014/01/15/us/nsa-effort-pries-open-computers-not-connected-to-internet.html Getting the 'Ungettable' Intelligence: An Interview with TAO's Teresa Shea Computer surveillance Cyberwarfare in the United States Hacker groups Intelligence agency programmes revealed by Edward Snowden National Security Agency American advanced persistent threat groups
60789907
https://en.wikipedia.org/wiki/The%20Voice%20Senior%20%28Polish%20TV%20series%29
The Voice Senior (Polish TV series)
The Voice Senior is a Polish reality talent show that premiered on December 7, 2019 on the TVP 2 television network. The Voice Senior is part of the international syndication The Voice and The Voice Kids, based on the reality singing competition launched in the Netherlands as The Voice of Holland, created by Dutch television producer John de Mol. However, participation is only open to candidates over 60 years old. Format The show consists of four different phases: production audition, blind audition, the "sing off" and the live finale. The production auditions are not filmed; here only the good singers are selected by the program to go to the blind auditions. The Blind auditions The blind auditions are similar to The Voice of Poland and The Voice Kids. The contestants sing during the blind auditions while the chairs of the four judges/coaches are turned away from the stage. Each candidate has the chance to sing a song of their choice for about a minute and a half. The coaches can choose a contestant only on the basis of musicality and voice, by pressing the button, which causes their chairs to turn around and face the artist. If two or more coaches want the same artist, the artist can choose which coach he or she wants to continue with in the program. The Blind Auditions end when all teams are full. The Semifinal Each coach pairs two or three singers from their team who have to compete against each other by performing a song chosen by the coach. After the performances, the coach chooses one contestant from each pair to advance to the next round. In the end, every coach retains two contestants. The Live Finale The remaining two contestants from each team will be in the final. The live finale will be broadcast live on TVP 2. The contestants are mentored by their coach and choose a song that they want to sing in the final. The coach then chooses one act to remain; the other act is then eliminated. The final winner is chosen by the public at home by televoting. Coaches and presenters On July 19, 2019, it was announced that Marek Piekarczyk, Urszula Dudziak, Alicja Majewska and Andrzej Piaseczny would become coaches for the show's first season, with Tomasz Kammel joining Marta Manowska as hosts. On August 17, 2020, it was announced that Majewska and Piaseczny would return as coaches, while new coaches Izabela Trojanowska and Witold Paszt would replace Marek Piekarczyk and Urszula Dudziak in the second season. On August 18, 2021, it was announced that Majewska and Paszt would return as the coaches, while Andrzej Piaseczny and Izabela Trojanowska would be replaced by Piotr Cugowski and Maryla Rodowicz in the third season. Coaches timeline Timeline of hosts Key Main presenter Backstage presenter Coaches and finalists Winners are in bold, other finalists in italic, eliminated artists in smaller font. Season summary Colour key Artist from Team Marek Artist from Team Ula Artist from Team Andrzej Artist from Team Alicja Artist from Team Izabela Artist from Team Witold Artist from Team Piotr Artist from Team Maryla Season 1 (2019) The show premiered on December 7, 2019. The judges for season 1 were Andrzej Piaseczny, Alicja Majewska, Urszula Dudziak and Marek Piekarczyk. The show was hosted by Tomasz Kammel, Marta Manowska and Janina Busk. The winners of the first series were Jola, Krystyna & Ela Szydłowskie. Season 2 (2021) The second season of the show premiered on January 2, 2021. 
Alicja Majewska and Andrzej Piaseczny returned as the coaches in the second season, while Urszula Dudziak and Marek Piekarczyk were replaced by Izabela Trojanowska and Witold Paszt as new coaches. The show was hosted by Rafał Brzozowski and Marta Manowska. Tomasz Kammel stepped down as presenter. The winner of the second series was Barbara Parzeczewska. Season 3 (2022) The third season of the show premiered on January 1, 2022. Alicja Majewska and Witold Paszt returned as the coaches in the third season, while Izabela Trojanowska and Andrzej Piaseczny were replaced by Maryla Rodowicz and Piotr Cugowski as new coaches. The show was hosted by Rafał Brzozowski and Marta Manowska. The winner of the third series was Krzysztof Prusik. References The Voice of Poland Telewizja Polska original programming 2010s Polish television series
26465409
https://en.wikipedia.org/wiki/5119%20Imbrius
5119 Imbrius
5119 Imbrius, provisional designation: , is a Jupiter trojan from the Trojan camp, approximately in diameter. It was discovered on 8 September 1988 by Danish astronomer Poul Jensen at the Brorfelde Observatory near Holbæk, Denmark. The dark Jovian asteroid has a rotation period of 12.8 hours. It was numbered in March 1992, and named from Greek mythology after Imbrius, who was killed by the Greek archer Teucer during the Trojan War. Discovery On the night this minor planet was discovered at Brorfelde Observatory, Poul Jensen also discovered the Jupiter trojan , the 12-kilometer size main-belt asteroid , as well as , , , and , all main-belt asteroids of the inner, middle and outer regions of the asteroid belt, respectively. A first precovery was taken at Palomar Observatory in December 1954, extending the asteroid's observation arc by 34 years prior to its discovery. Orbit and classification Imbrius is a dark Jovian asteroid in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the Gas Giant's Lagrangian point, 60° behind its orbit. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 4.6–5.8 AU once every 11 years and 10 months (4,333 days; semi-major axis of 5.2 AU). Its orbit has an eccentricity of 0.11 and an inclination of 16° with respect to the ecliptic. Numbering and naming This minor planet was numbered on 18 March 1992 (). On 29 November 2021, the IAU's Working Group Small Body Nomenclature named it from Greek mythology after Imbrius, son of Mentor and husband of King Priam's daughter Medesicaste. Imbrius was killed by the Greek archer Teucer during the Trojan War. Physical characteristics Imbrius is an assumed carbonaceous C-type asteroid, while most larger Jupiter trojans are D-types. It has a typical V–I color index of 0.97. Lightcurve In February 1994, Imbrius was observed by Stefano Mottola and Anders Erikson at La Silla Observatory in Chile, using the ESO 1-metre telescope and its DLR MkII CCD-camera. The photometric observations were used to build a lightcurve showing a rotation period of hours with a brightness variation of magnitude (). Diameter and albedo According to the survey carried out by NASA's Wide-field Infrared Survey Explorer with its subsequent NEOWISE mission, the Trojan asteroid measures 49.25 kilometers in diameter and its surface has a low albedo of 0.061, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for carbonaceous asteroids of 0.057 and calculates a diameter of 48.48 kilometers based on an absolute magnitude of 10.3. References External links Asteroid Lightcurve Database (LCDB), query form (info ) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (5001)-(10000) – Minor Planet Center 005119 Discoveries by Poul Jensen (astronomer) Minor planets named from Greek mythology Named minor planets 19880908
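The 48.48-kilometer figure quoted above for the Collaborative Asteroid Lightcurve Link can be reproduced with the standard relation between an asteroid's diameter, geometric albedo and absolute magnitude, D (km) = (1329 / sqrt(pV)) · 10^(-H/5). The short Python check below only plugs in the albedo of 0.057 and absolute magnitude of 10.3 given in the article; nothing else is assumed.

# Standard asteroid size relation: D [km] = (1329 / sqrt(albedo)) * 10**(-H / 5)
from math import sqrt

def diameter_km(albedo, abs_magnitude):
    return 1329.0 / sqrt(albedo) * 10.0 ** (-abs_magnitude / 5.0)

# Values quoted above for 5119 Imbrius (CALL assumptions).
print(round(diameter_km(0.057, 10.3), 2))   # prints about 48.5, matching the 48.48 km figure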
32499
https://en.wikipedia.org/wiki/Vector%20graphics
Vector graphics
Vector graphics, as a form of computer graphics, is the set of mechanisms for creating visual images directly from geometric shapes defined on a Cartesian plane, such as points, lines, curves, and polygons. These mechanisms may include vector display and printing hardware, vector data models and file formats, and software based on these data models (especially graphic design software, Computer-aided design, and Geographic information systems). Vector graphics are an alternative to raster graphics, each having advantages and disadvantages in general and in specific situations. While vector hardware has largely disappeared in favor of raster-based monitors and printers, vector data and software continue to be widely used, especially when a high degree of geometric precision is required, and when complex information can be decomposed into simple geometric primitives. Thus, it is the preferred model for domains such as engineering, architecture, surveying, 3D rendering, and typography, but is entirely inappropriate for applications such as photography and remote sensing, where raster is more effective and efficient. Some application domains, such as Geographic information systems (GIS) and graphic design, use both vector and raster graphics at times, depending on purpose. Vector graphics are based on the mathematics of analytic or coordinate geometry, and are not related to other mathematical uses of the term vector, including Vector fields and Vector calculus. This can lead to some confusion in disciplines in which both meanings are used. Data model The logical data model of vector graphics is based on the mathematics of coordinate geometry, in which shapes are defined as a set of points in a two- or three-dimensional Cartesian coordinate system, as p = (x, y) or p = (x, y, z). Because almost all shapes consist of an infinite number of points, the vector model defines a limited set of geometric primitives that can be specified using a finite sample of salient points called vertices. For example, a square can be unambiguously defined by the locations of its four corners, from which the software can interpolate the connecting boundary lines and the interior space. Because it is a regular shape, a square could also be defined by the location of one corner, a size (width=height), and a rotation angle. The fundamental geometric primitives are: A single point A Line segment, defined by two end points, allowing for a simple linear interpolation of the intervening line. A Polygonal chain or polyline, a connected set of line segments, defined by an ordered list of points A Polygon, representing a region of space, defined by its boundary, a polyline with coincident starting and ending vertices. A variety of more complex shapes may be supported: Parametric curves, in which polylines or polygons are augmented with parameters to define a non-linear interpolation between vertices, including circular arcs, cubic splines, Catmull–Rom splines, Bézier curves and bezigons Standard parametric shapes in two or three dimensions, such as Circles, ellipses, squares, superellipses, spheres, tetrahedrons, superellipsoids, etc. Irregular three-dimensional surfaces and solids, usually defined as a connected set of polygons (e.g., a Polygon mesh) or as parametric surfaces (e.g., NURBS) Fractals, often defined as an iterated function system In many vector datasets, each shape can be combined with a set of properties. The most common are visual characteristics, such as color, line weight, or dash pattern. 
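A minimal sketch, in Python, of how the primitives and per-shape properties just described might be represented in a vector data model; the class and field names are illustrative assumptions rather than any particular file format or library.

# Minimal illustrative vector data model: geometric primitives defined by
# vertices, plus drawing properties attached to each shape.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]          # a single point (x, y)

@dataclass
class Style:
    stroke: str = "black"            # visual properties shared by all shapes
    stroke_width: float = 1.0
    fill: str = "none"

@dataclass
class Polyline:
    vertices: List[Point]            # a connected chain of line segments
    style: Style = field(default_factory=Style)

@dataclass
class Polygon:
    boundary: List[Point]            # closed ring: first and last vertex coincide
    style: Style = field(default_factory=Style)

# A unit square defined unambiguously by its four corners.
square = Polygon(
    boundary=[(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)],
    style=Style(fill="red"),
)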
In systems in which shapes represent real-world features, such as GIS and BIM, a variety of attributes of each represented feature can be stored, such as name, age, size, and so on. In some vector data, especially in GIS, information about topological relationships between objects may be represented in the data model, such as tracking the connections between road segments in a transport network. If a dataset stored in one vector file format is converted to another file format that supports all the primitive objects used in that particular image, then the conversion can be lossless. Vector display hardware Vector-based devices, such as the vector CRT and the pen plotter, directly control a drawing mechanism to produce geometric shapes. Since vector display devices can define a line by dealing with just two points (that is, the coordinates of each end of the line), the device can reduce the total amount of data it must deal with by organizing the image in terms of pairs of points. Vector graphic displays were first used in 1958 by the US SAGE air defense system. Vector graphics systems were retired from U.S. en route air traffic control in 1999. Vector graphics were also used on the TX-2 at the MIT Lincoln Laboratory by computer graphics pioneer Ivan Sutherland to run his program Sketchpad in 1963. Subsequent vector graphics systems, most of which iterated through dynamically modifiable stored lists of drawing instructions, include the IBM 2250, Imlac PDS-1, and DEC GT40. The Vectrex was a video game console that used vector graphics, and various arcade games such as Asteroids, Space Wars, and many Cinematronics titles such as Rip-Off and Tail Gunner used vector monitors. Storage scope displays, such as the Tektronix 4014, could display vector images but not modify them without first erasing the display. However, these were never as widely used as the raster-based scanning displays used for television, and had largely disappeared by the mid-1980s except for specialized applications. Plotters used in technical drawing still draw vectors directly to paper by moving a pen as directed through the two-dimensional space of the paper. However, as with monitors, these have largely been replaced by wide-format printers that print a raster image (which may be rendered from vector data). Software Because this model is useful in a variety of application domains, many different software programs have been created for drawing, manipulating, and visualizing vector graphics. While these are all based on the same basic vector data model, they can interpret and structure shapes very differently, using very different file formats. Graphic design and illustration, using a Vector graphics editor or Graphic art software such as Adobe Illustrator. See Comparison of vector graphics editors for capabilities. Geographic information systems (GIS), which can represent a geographic feature by a combination of a vector shape and a set of attributes. GIS includes vector editing, mapping, and vector spatial analysis capabilities. Computer-aided design (CAD), used in engineering, architecture, and surveying. Building information modeling (BIM) models add attributes to each shape, similar to a GIS. 3D computer graphics software, including Computer animation. 
File formats Vector graphics are commonly found today in the SVG, WMF, EPS, PDF, CDR or AI types of graphic file formats, and are intrinsically different from the more common raster graphics file formats such as JPEG, PNG, APNG, GIF, WebP, BMP and MPEG4. The World Wide Web Consortium (W3C) standard for vector graphics is Scalable Vector Graphics (SVG). The standard is complex and has been relatively slow to be established, at least in part owing to commercial interests. Many web browsers now have some support for rendering SVG data but full implementations of the standard are still comparatively rare. In recent years, SVG has become a significant format that is completely independent of the resolution of the rendering device, typically a printer or display monitor. SVG files are essentially printable text that describes both straight and curved paths, as well as other attributes. Wikipedia prefers SVG for images such as simple maps, line illustrations, coats of arms, and flags, which generally are not like photographs or other continuous-tone images. Rendering SVG requires conversion to a raster format at a resolution appropriate for the current task. SVG is also a format for animated graphics. There is also a version of SVG for mobile phones. In particular, the specific format for mobile phones is called SVGT (SVG Tiny version). These images can contain links and also exploit anti-aliasing. They can also be displayed as wallpaper. CAD software uses its own vector data formats, usually proprietary formats created by the software vendors, such as Autodesk's DWG and public exchange formats such as DXF. Hundreds of distinct vector file formats have been created for GIS data over its history, including proprietary formats like the Esri file geodatabase, proprietary but public formats like the Shapefile and the original KML, open source formats like GeoJSON, and formats created by standards bodies like Simple Features and GML from the Open Geospatial Consortium. Conversion The list of image file formats covers proprietary and public vector formats. To raster Modern displays and printers are raster devices; vector formats have to be converted to a raster format (bitmaps – pixel arrays) before they can be rendered (displayed or printed). The size of the bitmap/raster-format file generated by the conversion will depend on the resolution required, but the size of the vector file generating the bitmap/raster file will always remain the same. Thus, it is easy to convert from a vector file to a range of bitmap/raster file formats but it is much more difficult to go in the opposite direction, especially if subsequent editing of the vector picture is required. It might be an advantage to save an image created from a vector source file as a bitmap/raster format, because different systems have different (and incompatible) vector formats, and some might not support vector graphics at all. However, once a file is converted from the vector format, it is likely to be bigger, and it loses the advantage of scalability without loss of resolution. It will also no longer be possible to edit individual parts of the image as discrete objects. The file size of a vector graphic image depends on the number of graphic elements it contains; it is a list of descriptions. From raster Printing Vector art is ideal for printing since the art is made from a series of mathematical curves; it will print very crisply even when resized. 
For instance, one can print a vector logo on a small sheet of copy paper, and then enlarge the same vector logo to billboard size and keep the same crisp quality. A low-resolution raster graphic would blur or pixelate excessively if it were enlarged from business card size to billboard size. (The precise resolution of a raster graphic necessary for high-quality results depends on the viewing distance; e.g., a billboard may still appear to be of high quality even at low resolution if the viewing distance is great enough.) If we regard typographic characters as images, then the same considerations that we have made for graphics apply even to the composition of written text for printing (typesetting). Older character sets were stored as bitmaps. Therefore, to achieve maximum print quality they had to be used at a given resolution only; these font formats are said to be non-scalable. High-quality typography is nowadays based on character drawings (fonts) which are typically stored as vector graphics, and as such are scalable to any size. Examples of these vector formats for characters are PostScript fonts and TrueType fonts. Operation Advantages to this style of drawing over raster graphics: Because vector graphics consist of coordinates with lines/curves between them, the size of representation does not depend on the dimensions of the object. This minimal amount of information translates to a much smaller file size compared to large raster images which are defined pixel by pixel. That said, a vector graphic with a small file size is often said to lack detail compared with a real world photo. Correspondingly, one can infinitely zoom in on e.g., a circle arc, and it remains smooth. On the other hand, a polygon representing a curve will reveal that it is not really curved. On zooming in, lines and curves need not get wider proportionally. Often the width is either not increased or less than proportional. On the other hand, irregular curves represented by simple geometric shapes may be made proportionally wider when zooming in, to keep them looking smooth and not like these geometric shapes. The parameters of objects are stored and can be later modified. This means that moving, scaling, rotating, filling etc. does not degrade the quality of a drawing. Moreover, it is usual to specify the dimensions in device-independent units, which results in the best possible rasterization on raster devices. From a 3-D perspective, rendering shadows is also much more realistic with vector graphics, as shadows can be abstracted into the rays of light from which they are formed. This allows for photorealistic images and renderings. For example, consider a circle of radius r. The main pieces of information a program needs in order to draw this circle are: an indication that what is to be drawn is a circle; the radius r; the location of the center point of the circle; stroke line style and color (possibly transparent); and fill style and color (possibly transparent). Vector formats are not always appropriate in graphics work and also have numerous disadvantages. For example, devices such as cameras and scanners produce essentially continuous-tone raster graphics that are impractical to convert into vectors, and so for this type of work, an image editor will operate on the pixels rather than on drawing objects defined by mathematical expressions. 
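Returning to the circle example above, the sketch below stores exactly those pieces of information (center, radius, stroke and fill), emits the corresponding standard SVG circle element, and then rasterizes the same description at two different resolutions, making the point that the vector description stays the same size while the raster output depends on the resolution chosen. Everything here beyond the standard SVG circle element is an illustrative assumption, not any particular format's data structure.

# A circle held as a resolution-independent description, then rasterized.
from dataclasses import dataclass

@dataclass
class Circle:
    cx: float
    cy: float
    r: float
    stroke: str = "black"
    fill: str = "none"

    def to_svg(self) -> str:
        # Standard SVG circle element built from the stored parameters.
        return (f'<circle cx="{self.cx}" cy="{self.cy}" r="{self.r}" '
                f'stroke="{self.stroke}" fill="{self.fill}"/>')

    def rasterize(self, width: int, height: int, scale: float) -> str:
        # Very coarse rasterization: mark pixels whose center lies inside the circle.
        rows = []
        for py in range(height):
            row = ""
            for px in range(width):
                x, y = (px + 0.5) / scale, (py + 0.5) / scale
                inside = (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.r ** 2
                row += "#" if inside else "."
            rows.append(row)
        return "\n".join(rows)

c = Circle(cx=5, cy=5, r=4, fill="red")
print(c.to_svg())                      # the same description at any output size
print(c.rasterize(10, 10, scale=1.0))  # coarse bitmap
print(c.rasterize(20, 20, scale=2.0))  # finer bitmap from the same vector data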
Comprehensive graphics tools will combine images from vector and raster sources, and may provide editing tools for both, since some parts of an image could come from a camera source, and others could have been drawn using vector tools. Some authors have criticized the term vector graphics as being confusing. In particular, vector graphics does not simply refer to graphics described by Euclidean vectors. Some authors have proposed to use object-oriented graphics instead. However this term can also be confusing as it can be read as any kind of graphics implemented using object-oriented programming. Vector operations Vector graphics editors typically allow translation, rotation, mirroring, stretching, skewing, affine transformations, changing of z-order (loosely, what's in front of what) and combination of primitives into more complex objects. More sophisticated transformations include set operations on closed shapes (union, difference, intersection, etc.). Vector graphics are ideal for simple or composite drawings that need to be device-independent, or do not need to achieve photo-realism. For example, the PostScript and PDF page description languages use a vector graphics model. See also Animation Anti-Grain Geometry Cairo (graphics) Comparison of vector graphics editors Comparison of graphics file formats Computer-aided design Direct2D Illustration Javascript graphics library Raster to vector Raster graphics Resolution independence Turtle graphics Vector game Vector graphics file formats Vector monitor Vector packs Vexel Wire frame model 3D modeling Notes References External links Graphic design Graphics file formats Design
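The vector operations listed above (translation, rotation, scaling, skewing and their combinations) are typically implemented as affine transformations applied to every stored vertex; because parameters rather than pixels are transformed, repeated edits do not degrade the drawing. A small sketch, assuming 2D homogeneous coordinates with 3x3 matrices, which is one common way to implement this rather than something mandated by any particular format:

# 2D affine transforms in homogeneous coordinates: points are (x, y, 1),
# transforms are 3x3 matrices, and concatenation is matrix multiplication.
from math import cos, sin, radians

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, point):
    x, y = point
    v = [x, y, 1.0]
    out = [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]
    return (out[0], out[1])

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(degrees):
    c, s = cos(radians(degrees)), sin(radians(degrees))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

# Rotate a square's corners by 45 degrees, then move them 10 units right:
# the combined matrix is built once and reused for every vertex, so the
# stored shape itself is never degraded.
combined = mat_mul(translation(10, 0), rotation(45))
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print([apply(combined, p) for p in square])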
1164933
https://en.wikipedia.org/wiki/Windows%20Registry
Windows Registry
The Windows Registry is a hierarchical database that stores low-level settings for the Microsoft Windows operating system and for applications that opt to use the registry. The kernel, device drivers, services, Security Accounts Manager, and user interfaces can all use the registry. The registry also allows access to counters for profiling system performance. In other words, the registry or Windows Registry contains information, settings, options, and other values for programs and hardware installed on all versions of Microsoft Windows operating systems. For example, when a program is installed, a new subkey containing settings such as the program's location, its version, and how to start it is added to the Windows Registry. When introduced with Windows 3.1, the Windows Registry primarily stored configuration information for COM-based components. Windows 95 and Windows NT extended its use to rationalize and centralize the information in the profusion of INI files, which held the configurations for individual programs, and were stored at various locations. It is not a requirement for Windows applications to use the Windows Registry. For example, .NET Framework applications use XML files for configuration, while portable applications usually keep their configuration files with their executables. Rationale Prior to the Windows Registry, .INI files stored each program's settings as a text file or binary file, often located in a shared location that did not provide user-specific settings in a multi-user scenario. By contrast, the Windows Registry stores all application settings in one logical repository (but a number of discrete files) and in a standardized form. According to Microsoft, this offers several advantages over .INI files. Since file parsing is done much more efficiently with a binary format, it may be read from or written to more quickly than a text INI file. Furthermore, strongly typed data can be stored in the registry, as opposed to the text information stored in .INI files. This is a benefit when editing keys manually using regedit.exe, the built-in Windows Registry Editor. Because user-based registry settings are loaded from a user-specific path rather than from a read-only system location, the registry allows multiple users to share the same machine, and also allows programs to work for less privileged users. Backup and restoration are also simplified, as the registry can be accessed over a network connection for remote management/support, including from scripts, using the standard set of APIs, as long as the Remote Registry service is running and firewall rules permit this. Because the registry is a database, it offers improved system integrity with features such as atomic updates. If two processes attempt to update the same registry value at the same time, one process's change will precede the other's and the overall consistency of the data will be maintained. Where changes are made to .INI files, such race conditions can result in inconsistent data that does not match either attempted update. Windows Vista and later operating systems provide transactional updates to the registry by means of the Kernel Transaction Manager, extending the atomicity guarantees across multiple key and/or value changes, with traditional commit–abort semantics. (Note however that NTFS provides such support for the file system as well, so the same guarantees could, in theory, be obtained with traditional configuration files.)
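To illustrate the older per-program .INI mechanism described above, the following minimal C sketch uses the Win32 private-profile functions (GetPrivateProfileStringA and WritePrivateProfileStringA), which Windows still provides for compatibility; the file name, section, and key used here are purely illustrative and not taken from any real product.

#include <windows.h>
#include <stdio.h>

/* Minimal sketch of the legacy .INI approach that predates the registry.
   The file name, section, and key below are illustrative only; a full
   path would normally be supplied for the .INI file. */
int main(void)
{
    char value[64];

    /* Writes [Settings] Language=English into example.ini. */
    WritePrivateProfileStringA("Settings", "Language", "English",
                               ".\\example.ini");

    /* Reads it back; everything is stored and parsed as plain text. */
    GetPrivateProfileStringA("Settings", "Language", "unknown",
                             value, sizeof(value), ".\\example.ini");
    printf("Language = %s\n", value);
    return 0;
}

Because the data is untyped text in a single shared file, per-user settings and concurrent updates are awkward with this approach, which is precisely what the registry's typed values and per-user hives were intended to address.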
Structure Keys and values The registry contains two basic elements: keys and values. Registry keys are container objects similar to folders. Registry values are non-container objects similar to files. Keys may contain values and subkeys. Keys are referenced with a syntax similar to Windows' path names, using backslashes to indicate levels of hierarchy. Keys must have a case insensitive name without backslashes. The hierarchy of registry keys can only be accessed from a known root key handle (which is anonymous but whose effective value is a constant numeric handle) that is mapped to the content of a registry key preloaded by the kernel from a stored "hive", or to the content of a subkey within another root key, or mapped to a registered service or DLL that provides access to its contained subkeys and values. E.g. HKEY_LOCAL_MACHINE\Software\Microsoft\Windows refers to the subkey "Windows" of the subkey "Microsoft" of the subkey "Software" of the HKEY_LOCAL_MACHINE root key. There are seven predefined root keys, traditionally named according to their constant handles defined in the Win32 API, or by synonymous abbreviations (depending on applications): HKEY_LOCAL_MACHINE or HKLM HKEY_CURRENT_CONFIG or HKCC HKEY_CLASSES_ROOT or HKCR HKEY_CURRENT_USER or HKCU HKEY_USERS or HKU HKEY_PERFORMANCE_DATA (only in Windows NT, but invisible in the Windows Registry Editor) HKEY_DYN_DATA (only in Windows 9x, and visible in the Windows Registry Editor) Like other files and services in Windows, all registry keys may be restricted by access control lists (ACLs), depending on user privileges, or on security tokens acquired by applications, or on system security policies enforced by the system (these restrictions may be predefined by the system itself, and configured by local system administrators or by domain administrators). Different users, programs, services or remote systems may only see some parts of the hierarchy or distinct hierarchies from the same root keys. Registry values are name/data pairs stored within keys. Registry values are referenced separately from registry keys. Each registry value stored in a registry key has a unique name whose letter case is not significant. The Windows API functions that query and manipulate registry values take value names separately from the key path and/or handle that identifies the parent key. Registry values may contain backslashes in their names, but doing so makes them difficult to distinguish from their key paths when using some legacy Windows Registry API functions (whose usage is deprecated in Win32). The terminology is somewhat misleading, as each registry key is similar to an associative array, where standard terminology would refer to the name part of each registry value as a "key". The terms are a holdout from the 16-bit registry in Windows 3, in which registry keys could not contain arbitrary name/data pairs, but rather contained only one unnamed value (which had to be a string). In this sense, the Windows 3 registry was like a single associative array, in which the keys (in the sense of both 'registry key' and 'associative array key') formed a hierarchy, and the registry values were all strings. When the 32-bit registry was created, so was the additional capability of creating multiple named values per key, and the meanings of the names were somewhat distorted. For compatibility with the previous behavior, each registry key may have a "default" value, whose name is the empty string. 
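As a concrete illustration of the key paths and named values described above, the following C sketch opens a subkey of HKEY_LOCAL_MACHINE and queries one string value using the documented Win32 registry functions. The particular key path and value name are only examples of commonly present data, and the program must be linked against advapi32.

#include <windows.h>
#include <stdio.h>

/* Sketch: open a subkey by path and read a named REG_SZ value.
   The key path and value name are examples only. */
int main(void)
{
    HKEY hKey;
    char data[256];
    DWORD type = 0, size = sizeof(data);

    /* Open the subkey read-only, starting from the HKLM root key handle. */
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
                      0, KEY_READ, &hKey) != ERROR_SUCCESS)
        return 1;

    /* Value names are looked up case-insensitively within the key. */
    if (RegQueryValueExA(hKey, "ProductName", NULL, &type,
                         (LPBYTE)data, &size) == ERROR_SUCCESS &&
        type == REG_SZ)
        printf("ProductName: %s\n", data);

    RegCloseKey(hKey); /* Release the key handle. */
    return 0;
}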
Each value can store arbitrary data with variable length and encoding, associated with a symbolic type (defined as a numeric constant) that indicates how to parse the data. The standard types include REG_NONE, REG_SZ (a character string), REG_EXPAND_SZ (an expandable string that may contain environment-variable references), REG_BINARY, REG_DWORD (a 32-bit integer), REG_DWORD_BIG_ENDIAN, REG_LINK (a symbolic link to another key), REG_MULTI_SZ (a list of strings), REG_RESOURCE_LIST, REG_FULL_RESOURCE_DESCRIPTOR, REG_RESOURCE_REQUIREMENTS_LIST, and REG_QWORD (a 64-bit integer). Root keys The keys at the root level of the hierarchical database are generally named by their Windows API definitions, which all begin "HKEY". They are frequently abbreviated to a three- or four-letter short name starting with "HK" (e.g. HKCU and HKLM). Technically, they are predefined handles (with known constant values) to specific keys that are either maintained in memory, or stored in hive files stored in the local filesystem and loaded by the system kernel at boot time and then shared (with various access rights) between all processes running on the local system, or loaded and mapped in all processes started in a user session when the user logs on to the system. The HKEY_LOCAL_MACHINE (local machine-specific configuration data) and HKEY_CURRENT_USER (user-specific configuration data) nodes have a similar structure to each other; user applications typically look up their settings by first checking for them in "HKEY_CURRENT_USER\Software\Vendor's name\Application's name\Version\Setting name", and, if the setting is not found, looking instead in the same location under the HKEY_LOCAL_MACHINE key. However, the converse may apply for administrator-enforced policy settings where HKLM may take precedence over HKCU. The Windows Logo Program has specific requirements for where different types of user data may be stored, and that the concept of least privilege be followed so that administrator-level access is not required to use an application. HKEY_LOCAL_MACHINE (HKLM) Abbreviated HKLM, HKEY_LOCAL_MACHINE stores settings that are specific to the local computer. The key located by HKLM is actually not stored on disk, but maintained in memory by the system kernel in order to map all the other subkeys. Applications cannot create any additional subkeys. On Windows NT, this key contains four subkeys, "SAM", "SECURITY", "SYSTEM", and "SOFTWARE", that are loaded at boot time within their respective files located in the %SystemRoot%\System32\config folder. A fifth subkey, "HARDWARE", is volatile and is created dynamically, and as such is not stored in a file (it exposes a view of all the currently detected Plug-and-Play devices). On Windows Vista and above, a sixth and seventh subkey, "COMPONENTS" and "BCD", are mapped in memory by the kernel on-demand and loaded from %SystemRoot%\system32\config\COMPONENTS or from boot configuration data, \boot\BCD on the system partition. The "HKLM\SAM" key usually appears as empty for most users (unless they are granted access by administrators of the local system or administrators of domains managing the local system). It is used to reference all "Security Accounts Manager" (SAM) databases for all domains into which the local system has been administratively authorized or configured (including the local domain of the running system, whose SAM database is stored in a subkey also named "SAM": other subkeys will be created as needed, one for each supplementary domain).
Each SAM database contains all builtin accounts (mostly group aliases) and configured accounts (users, groups and their aliases, including guest accounts and administrator accounts) created and configured on the respective domain, for each account in that domain, it notably contains the user name which can be used to log on that domain, the internal unique user identifier in the domain, a cryptographic hash of each user's password for each enabled authentication protocol, the location of storage of their user registry hive, various status flags (for example if the account can be enumerated and be visible in the logon prompt screen), and the list of domains (including the local domain) into which the account was configured. The "HKLM\SECURITY" key usually appears empty for most users (unless they are granted access by users with administrative privileges) and is linked to the Security database of the domain into which the current user is logged on (if the user is logged on the local system domain, this key will be linked to the registry hive stored by the local machine and managed by local system administrators or by the builtin "System" account and Windows installers). The kernel will access it to read and enforce the security policy applicable to the current user and all applications or operations executed by this user. It also contains a "SAM" subkey which is dynamically linked to the SAM database of the domain onto which the current user is logged on. The "HKLM\SYSTEM" key is normally only writable by users with administrative privileges on the local system. It contains information about the Windows system setup, data for the secure random number generator (RNG), the list of currently mounted devices containing a filesystem, several numbered "HKLM\SYSTEM\Control Sets" containing alternative configurations for system hardware drivers and services running on the local system (including the currently used one and a backup), a "HKLM\SYSTEM\Select" subkey containing the status of these Control Sets, and a "HKLM\SYSTEM\CurrentControlSet" which is dynamically linked at boot time to the Control Set which is currently used on the local system. Each configured Control Set contains: an "Enum" subkey enumerating all known Plug-and-Play devices and associating them with installed system drivers (and storing the device-specific configurations of these drivers), a "Services" subkey listing all installed system drivers (with non device-specific configuration, and the enumeration of devices for which they are instantiated) and all programs running as services (how and when they can be automatically started), a "Control" subkey organizing the various hardware drivers and programs running as services and all other system-wide configuration, a "Hardware Profiles" subkey enumerating the various profiles that have been tuned (each one with "System" or "Software" settings used to modify the default profile, either in system drivers and services or in the applications) as well as the "Hardware Profiles\Current" subkey which is dynamically linked to one of these profiles. The "HKLM\SOFTWARE" subkey contains software and Windows settings (in the default hardware profile). It is mostly modified by application and system installers. 
It is organized by software vendor (with a subkey for each), but also contains a "Windows" subkey for some settings of the Windows user interface, a "Classes" subkey containing all registered associations from file extensions, MIME types, Object Classes IDs and interfaces IDs (for OLE, COM/DCOM and ActiveX), to the installed applications or DLLs that may be handling these types on the local machine (however these associations are configurable for each user, see below), and a "Policies" subkey (also organized by vendor) for enforcing general usage policies on applications and system services (including the central certificates store used for authenticating, authorizing or disallowing remote systems or services running outside the local network domain). The "HKLM\SOFTWARE\Wow6432Node" key is used by 32-bit applications on a 64-bit Windows OS, and is equivalent to but separate from "HKLM\SOFTWARE". The key path is transparently presented to 32-bit applications by WoW64 as HKLM\SOFTWARE (in a similar way that 32-bit applications see %SystemRoot%\Syswow64 as %SystemRoot%\System32) HKEY_CLASSES_ROOT (HKCR) Abbreviated HKCR, HKEY_CLASSES_ROOT contains information about registered applications, such as file associations and OLE Object Class IDs, tying them to the applications used to handle these items. On Windows 2000 and above, HKCR is a compilation of user-based HKCU\Software\Classes and machine-based HKLM\Software\Classes. If a given value exists in both of the subkeys above, the one in HKCU\Software\Classes takes precedence. The design allows for either machine- or user-specific registration of COM objects. HKEY_USERS (HKU) Abbreviated HKU, HKEY_USERS contains subkeys corresponding to the HKEY_CURRENT_USER keys for each user profile actively loaded on the machine, though user hives are usually only loaded for currently logged-in users. HKEY_CURRENT_USER (HKCU) Abbreviated HKCU, HKEY_CURRENT_USER stores settings that are specific to the currently logged-in user. The HKEY_CURRENT_USER key is a link to the subkey of HKEY_USERS that corresponds to the user; the same information is accessible in both locations. The specific subkey referenced is "(HKU)\(SID)\..." where (SID) corresponds to the Windows SID; if the "(HKCU)" key has the following suffix "(HKCU)\Software\Classes\..." then it corresponds to "(HKU)\(SID)_CLASSES\..." i.e. the suffix has the string "_CLASSES" is appended to the (SID). On Windows NT systems, each user's settings are stored in their own files called NTUSER.DAT and USRCLASS.DAT inside their own Documents and Settings subfolder (or their own Users sub folder in Windows Vista and above). Settings in this hive follow users with a roaming profile from machine to machine. HKEY_PERFORMANCE_DATA This key provides runtime information into performance data provided by either the NT kernel itself, or running system drivers, programs and services that provide performance data. This key is not stored in any hive and not displayed in the Registry Editor, but it is visible through the registry functions in the Windows API, or in a simplified view via the Performance tab of the Task Manager (only for a few performance data on the local system) or via more advanced control panels (such as the Performances Monitor or the Performances Analyzer which allows collecting and logging these data, including from remote systems). HKEY_DYN_DATA This key is used only on Windows 95, Windows 98 and Windows ME. 
It contains information about hardware devices, including Plug and Play and network performance statistics. The information in this hive is also not stored on the hard drive. The Plug and Play information is gathered and configured at startup and is stored in memory. Hives Even though the registry presents itself as an integrated hierarchical database, branches of the registry are actually stored in a number of disk files called hives. (The word hive constitutes an in-joke.) Some hives are volatile and are not stored on disk at all. An example of this is the hive of the branch starting at HKLM\HARDWARE. This hive records information about system hardware and is created each time the system boots and performs hardware detection. Individual settings for users on a system are stored in a hive (disk file) per user. During user login, the system loads the user hive under the HKEY_USERS key and sets the HKCU (HKEY_CURRENT_USER) symbolic reference to point to the current user. This allows applications to store/retrieve settings for the current user implicitly under the HKCU key. Not all hives are loaded at any one time. At boot time, only a minimal set of hives are loaded, and after that, hives are loaded as the operating system initializes and as users log in or whenever a hive is explicitly loaded by an application. File locations The registry is physically stored in several files, which are generally obfuscated from the user-mode APIs used to manipulate the data inside the registry. Depending upon the version of Windows, there will be different files and different locations for these files, but they are all on the local machine. The location for system registry files in Windows NT is %SystemRoot%\System32\Config; the user-specific HKEY_CURRENT_USER user registry hive is stored in Ntuser.dat inside the user profile. There is one of these per user; if a user has a roaming profile, then this file will be copied to and from a server at logout and login respectively. A second user-specific registry file named UsrClass.dat contains COM registry entries and does not roam by default. Windows NT Windows NT systems store the registry in a binary file format which can be exported, loaded and unloaded by the Registry Editor in these operating systems. The following system registry files are stored in %SystemRoot%\System32\Config\: Sam – HKEY_LOCAL_MACHINE\SAM Security – HKEY_LOCAL_MACHINE\SECURITY Software – HKEY_LOCAL_MACHINE\SOFTWARE System – HKEY_LOCAL_MACHINE\SYSTEM Default – HKEY_USERS\.DEFAULT Userdiff – Not associated with a hive. Used only when upgrading operating systems. The following file is stored in each user's profile folder: %USERPROFILE%\Ntuser.dat – HKEY_USERS\<User SID> (linked to by HKEY_CURRENT_USER) For Windows 2000, Server 2003 and Windows XP, the following additional user-specific file is used for file associations and COM information: %USERPROFILE%\Local Settings\Application Data\Microsoft\Windows\Usrclass.dat (path is localized) – HKEY_USERS\<User SID>_Classes (HKEY_CURRENT_USER\Software\Classes) For Windows Vista and later, the path was changed to: %USERPROFILE%\AppData\Local\Microsoft\Windows\Usrclass.dat (path is not localized) alias %LocalAppData%\Microsoft\Windows\Usrclass.dat – HKEY_USERS\<User SID>_Classes (HKEY_CURRENT_USER\Software\Classes) Windows 2000 keeps an alternate copy of the registry hives (.ALT) and attempts to switch to it when corruption is detected. 
Windows XP and Windows Server 2003 do not maintain a System.alt hive because NTLDR on those versions of Windows can process the System.log file to bring up to date a System hive that has become inconsistent during a shutdown or crash. In addition, the %SystemRoot%\Repair folder contains a copy of the system's registry hives that were created after installation and the first successful startup of Windows. Each registry data file has an associated file with a ".log" extension that acts as a transaction log that is used to ensure that any interrupted updates can be completed upon next startup. Internally, Registry files are split into 4 kB "bins" that contain collections of "cells". Windows 9x The registry files are stored in the %WINDIR% directory under the names USER.DAT and SYSTEM.DAT with the addition of CLASSES.DAT in Windows ME. Also, each user profile (if profiles are enabled) has its own USER.DAT file which is located in the user's profile directory in %WINDIR%\Profiles\<Username>\. Windows 3.11 The only registry file is called REG.DAT and it is stored in the %WINDIR% directory. Windows 10 Mobile To access the registry files, the phone needs to be set in a special mode using either WpInternals (which puts the mobile device into flash mode) or InterOp Tools (which mounts the MainOS partition with MTP). If either of these methods works, the device registry files can be found in the following location: {Phone}\EFIESP\Windows\System32\config. Note: InterOp Tools also includes a registry editor. Editing Registry editors The registry contains important configuration information for the operating system, for installed applications as well as individual settings for each user and application. A careless change to the operating system configuration in the registry could cause irreversible damage, so it is usually only installer programs which perform changes to the registry database during installation/configuration and removal. If a user wants to edit the registry manually, Microsoft recommends that a backup of the registry be performed before the change. When a program is removed from Control Panel, it may not be completely removed and, in case of errors or glitches caused by references to missing programs, the user might have to manually check inside directories such as Program Files. After this, the user might need to manually remove any reference to the uninstalled program in the registry. This is usually done by using RegEdit.exe. Editing the registry is sometimes necessary when working around Windows-specific issues; e.g., problems when logging onto a domain can be resolved by editing the registry. The Windows Registry can be edited manually using programs such as RegEdit.exe, although these tools do not expose some of the registry's metadata, such as the last modified date. The registry editor for the 3.1/95 series of operating systems is RegEdit.exe and for Windows NT it is RegEdt32.exe; the functionalities are merged in Windows XP. Optional and/or third-party tools similar to RegEdit.exe are available for many Windows CE versions.
Registry Editor allows users to perform the following functions:
Creating, manipulating, renaming and deleting registry keys, subkeys, values and value data
Importing and exporting .REG files, exporting data in the binary hive format
Loading, manipulating and unloading registry hive format files (Windows NT systems only)
Setting permissions based on ACLs (Windows NT systems only)
Bookmarking user-selected registry keys as Favorites
Finding particular strings in key names, value names and value data
Remotely editing the registry on another networked computer
.REG files .REG files (also known as Registration entries) are text-based human-readable files for exporting and importing portions of the registry using an INI-based syntax. On Windows 2000 and later, they contain the string Windows Registry Editor Version 5.00 at the beginning and are Unicode-based. On Windows 9x and NT 4.0 systems, they contain the string REGEDIT4 and are ANSI-based. Windows 9x format .REG files are compatible with Windows 2000 and later. The Registry Editor on these systems also supports exporting .REG files in Windows 9x/NT format. Data is stored in .REG files using the following syntax:

[<Hive name>\<Key name>\<Subkey name>]
"Value name"=<Value type>:<Value data>

The Default Value of a key can be edited by using "@" instead of "Value Name":

[<Hive name>\<Key name>\<Subkey name>]
@=<Value type>:<Value data>

String values do not require a <Value type> (see example), but backslashes ('\') need to be written as a double-backslash ('\\'), and quotes ('"') as backslash-quote ('\"'). For example, to add the values "Value A", "Value B", "Value C", "Value D", "Value E", "Value F", "Value G", "Value H", "Value I", "Value J", "Value K", "Value L", and "Value M" to the HKLM\SOFTWARE\Foobar key:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Foobar]
"Value A"="<String value data with escape characters>"
"Value B"=hex:<Binary data (as comma-delimited list of hexadecimal values)>
"Value C"=dword:<DWORD value integer>
"Value D"=hex(0):<REG_NONE (as comma-delimited list of hexadecimal values)>
"Value E"=hex(1):<REG_SZ (as comma-delimited list of hexadecimal values representing a UTF-16LE NUL-terminated string)>
"Value F"=hex(2):<Expandable string value data (as comma-delimited list of hexadecimal values representing a UTF-16LE NUL-terminated string)>
"Value G"=hex(3):<Binary data (as comma-delimited list of hexadecimal values)> ; equal to "Value B"
"Value H"=hex(4):<DWORD value (as comma-delimited list of 4 hexadecimal values, in little endian byte order)>
"Value I"=hex(5):<DWORD value (as comma-delimited list of 4 hexadecimal values, in big endian byte order)>
"Value J"=hex(7):<Multi-string value data (as comma-delimited list of hexadecimal values representing UTF-16LE NUL-terminated strings)>
"Value K"=hex(8):<REG_RESOURCE_LIST (as comma-delimited list of hexadecimal values)>
"Value L"=hex(a):<REG_RESOURCE_REQUIREMENTS_LIST (as comma-delimited list of hexadecimal values)>
"Value M"=hex(b):<QWORD value (as comma-delimited list of 8 hexadecimal values, in little endian byte order)>

Data from .REG files can be added/merged with the registry by double-clicking these files or using the /s switch in the command line. REG files can also be used to remove registry data. To remove a key (and all subkeys, values and data), the key name must be preceded by a minus sign ("-").
For example, to remove the HKLM\SOFTWARE\Foobar key (and all subkeys, values and data), [-HKEY_LOCAL_MACHINE\SOFTWARE\Foobar] To remove a value (and its data), the values to be removed must have a minus sign ("-") after the equal sign ("="). For example, to remove only the "Value A" and "Value B" values (and their data) from the HKLM\SOFTWARE\Foobar key: [HKEY_LOCAL_MACHINE\SOFTWARE\Foobar] "Value A"=- "Value B"=- To remove only the Default value of the key HKLM\SOFTWARE\Foobar (and its data): [HKEY_LOCAL_MACHINE\SOFTWARE\Foobar] @=- Lines beginning with a semicolon are considered comments: ; This is a comment. This can be placed in any part of a .reg file [HKEY_LOCAL_MACHINE\SOFTWARE\Foobar] "Value"="Example string" Group policies Windows group policies can change registry keys for a number of machines or individual users based on policies. When a policy first takes effect for a machine or for an individual user of a machine, the registry settings specified as part of the policy are applied to the machine or user settings. Windows will also look for updated policies and apply them periodically, typically every 90 minutes. Through its scope a policy defines which machines and/or users the policy is to be applied to. Whether a machine or user is within the scope of a policy or not is defined by a set of rules which can filter on the location of the machine or user account in organizational directory, specific users or machine accounts or security groups. More advanced rules can be set up using Windows Management Instrumentation expressions. Such rules can filter on properties such as computer vendor name, CPU architecture, installed software, or networks connected to. For instance, the administrator can create a policy with one set of registry settings for machines in the accounting department and policy with another (lock-down) set of registry settings for kiosk terminals in the visitors area. When a machine is moved from one scope to another (e.g. changing its name or moving it to another organizational unit), the correct policy is automatically applied. When a policy is changed it is automatically re-applied to all machines currently in its scope. The policy is edited through a number of administrative templates which provides a user interface for picking and changing settings. The set of administrative templates is extensible and software packages which support such remote administration can register their own templates. Command line editing The registry can be manipulated in a number of ways from the command line. The Reg.exe and RegIni.exe utility tools are included in Windows XP and later versions of Windows. Alternative locations for legacy versions of Windows include the Resource Kit CDs or the original Installation CD of Windows. Also, a .REG file can be imported from the command line with the following command: RegEdit.exe /s file The /s means the file will be silent merged to the registry. If the /s parameter is omitted the user will be asked to confirm the operation. In Windows 98, Windows 95 and at least some configurations of Windows XP the /s switch also causes RegEdit.exe to ignore the setting in the registry that allows administrators to disable it. When using the /s switch RegEdit.exe does not return an appropriate return code if the operation fails, unlike Reg.exe which does. 
RegEdit.exe /e file exports the whole registry in V5 format to a UNICODE .REG file, while any of RegEdit.exe /e file HKEY_CLASSES_ROOT[\<key>] RegEdit.exe /e file HKEY_CURRENT_CONFIG[\<key>] RegEdit.exe /e file HKEY_CURRENT_USER[\<key>] RegEdit.exe /e file HKEY_LOCAL_MACHINE[\<key>] RegEdit.exe /e file HKEY_USERS[\<key>] export the specified (sub)key (which has to be enclosed in quotes if it contains spaces) only. RegEdit.exe /a file exports the whole registry in V4 format to an ANSI .REG file. RegEdit.exe /a file <key> exports the specified (sub)key (which has to be enclosed in quotes if it contains spaces) only. It is also possible to use Reg.exe. Here is a sample to display the value of the registry value Version: Reg.exe QUERY HKLM\Software\Microsoft\ResKit /v Version Other command line options include a VBScript or JScript together with CScript, WMI or WMIC.exe and Windows PowerShell. Registry permissions can be manipulated through the command line using RegIni.exe and the SubInACL.exe tool. For example, the permissions on the HKEY_LOCAL_MACHINE\SOFTWARE key can be displayed using: SubInACL.exe /keyreg HKEY_LOCAL_MACHINE\SOFTWARE /display PowerShell commands and scripts Windows PowerShell comes with a registry provider which presents the registry as a location type similar to the file system. The same commands used to manipulate files and directories in the file system can be used to manipulate keys and values of the registry. Also like the file system, PowerShell uses the concept of a current location which defines the context on which commands by default operate. The Get-ChildItem (also available through the aliases ls, dir or gci) retrieves the child keys of the current location. By using the Set-Location (or the alias cd) command the user can change the current location to another key of the registry. Commands which rename items, remove items, create new items or set content of items or properties can be used to rename keys, remove keys or entire sub-trees or change values. Through PowerShell scripts files, an administrator can prepare scripts which, when executed, make changes to the registry. Such scripts can be distributed to administrators who can execute them on individual machines. The PowerShell Registry provider supports transactions, i.e. multiple changes to the registry can be bundled into a single atomic transaction. An atomic transaction ensures that either all of the changes are committed to the database, or if the script fails, none of the changes are committed to the database. Programs or scripts The registry can be edited through the APIs of the Advanced Windows 32 Base API Library (advapi32.dll). Many programming languages offer built-in runtime library functions or classes that wrap the underlying Windows APIs and thereby enable programs to store settings in the registry (e.g. Microsoft.Win32.Registry in VB.NET and C#, or TRegistry in Delphi and Free Pascal). COM-enabled applications like Visual Basic 6 can use the WSH WScript.Shell object. Another way is to use the Windows Resource Kit Tool, Reg.exe by executing it from code, although this is considered poor programming practice. Similarly, scripting languages such as Perl (with Win32::TieRegistry), Python (with winreg), TCL (which comes bundled with the registry package), Windows Powershell and Windows Scripting Host also enable registry editing from scripts. 
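As a minimal programmatic counterpart to the command-line and scripting methods above, the following C sketch creates a per-user key and stores one typed value through the advapi32 functions; the vendor and application names are placeholders rather than real products.

#include <windows.h>

/* Sketch: create (or open) a key under HKEY_CURRENT_USER and store one
   typed (REG_DWORD) value in it. The key path and value name are
   placeholders. Link against advapi32. */
int main(void)
{
    HKEY hKey;
    DWORD enabled = 1;

    /* HKCU is writable by the logged-on user, so no administrative
       rights are needed for this location. */
    if (RegCreateKeyExA(HKEY_CURRENT_USER,
                        "Software\\ExampleVendor\\ExampleApp",
                        0, NULL, REG_OPTION_NON_VOLATILE, KEY_SET_VALUE,
                        NULL, &hKey, NULL) != ERROR_SUCCESS)
        return 1;

    /* Store the setting under the value name "Enabled". */
    RegSetValueExA(hKey, "Enabled", 0, REG_DWORD,
                   (const BYTE *)&enabled, sizeof(enabled));

    RegCloseKey(hKey);
    return 0;
}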
Offline editing The offreg.dll available from the Windows Driver Kit offers a set of APIs for the creation and manipulation of currently not loaded registry hives similar to those provided by advapi32.dll. It is also possible to edit the registry (hives) of an offline system from Windows PE or Linux (in the latter case using open source tools). COM self-registration Prior to the introduction of registration-free COM, developers were encouraged to add initialization code to in-process and out-of-process binaries to perform the registry configuration required for that object to work. For in-process binaries such as .DLL and .OCX files, the modules typically exported a function called DllInstall() that could be called by installation programs or invoked manually with utilities like Regsvr32.exe; out-of-process binaries typically support the commandline arguments /Regserver and /Unregserver that created or deleted the required registry settings. COM applications that break because of DLL Hell issues can commonly be repaired with RegSvr32.exe or the /RegServer switch without having to re-invoke installation programs. Advanced functionality Windows exposes APIs that allows user-mode applications to register to receive a notification event if a particular registry key is changed. APIs are also available to allow kernel-mode applications to filter and modify registry calls made by other applications. Windows also supports remote access to the registry of another computer via the RegConnectRegistry function if the Remote Registry service is running, correctly configured and its network traffic is not firewalled. Security Each key in the registry of Windows NT versions can have an associated security descriptor. The security descriptor contains an access control list (ACL) that describes which user groups or individual users are granted or denied access permissions. The set of registry permissions include 10 rights/permissions which can be explicitly allowed or denied to a user or a group of users. As with other securable objects in the operating system, individual access control entries (ACE) on the security descriptor can be explicit or inherited from a parent object. Windows Resource Protection is a feature of Windows Vista and later versions of Windows that uses security to deny Administrators and the system WRITE access to some sensitive keys to protect the integrity of the system from malware and accidental modification. Special ACEs on the security descriptor can also implement mandatory integrity control for the registry key and subkeys. A process running at a lower integrity level cannot write, change or delete a registry key/value, even if the account of the process has otherwise been granted access through the ACL. For instance, Internet Explorer running in Protected Mode can read medium and low integrity registry keys/values of the currently logged on user, but it can only modify low integrity keys. Outside security, registry keys cannot be deleted or edited due to other causes. Registry keys containing NUL characters cannot be deleted with standard registry editors and require a special utility for deletion, such as RegDelNull. Backups and recovery Different editions of Windows have supported a number of different methods to back up and restore the registry over the years, some of which are now deprecated: System Restore can back up the registry and restore it as long as Windows is bootable, or from the Windows Recovery Environment (starting with Windows Vista). 
NTBackup can back up the registry as part of the System State and restore it. Automated System Recovery in Windows XP can also restore the registry. On Windows NT, the Last Known Good Configuration option in startup menu relinks the HKLM\SYSTEM\CurrentControlSet key, which stores hardware and device driver information. Windows 98 and Windows ME include command line (Scanreg.exe) and GUI (Scanregw.exe) registry checker tools to check and fix the integrity of the registry, create up to five automatic regular backups by default and restore them manually or whenever corruption is detected. The registry checker tool backs up the registry, by default, to %Windir%\Sysbckup Scanreg.exe can also run from MS-DOS. The Windows 95 CD-ROM included an Emergency Recovery Utility (ERU.exe) and a Configuration Backup Tool (Cfgback.exe) to back up and restore the registry. Additionally Windows 95 backs up the registry to the files system.da0 and user.da0 on every successful boot. Windows NT 4.0 included RDISK.EXE, a utility to back up and restore the entire registry. Windows 2000 Resource Kit contained an unsupported pair of utilities called Regback.exe and RegRest.exe for backup and recovery of the registry. Periodic automatic backups of the registry are now disabled by default on Windows 10 May 2019 Update (version 1903). Microsoft recommends System Restore be used instead. Policy Group policy Windows 2000 and later versions of Windows use Group Policy to enforce registry settings through a registry-specific client extension in the Group Policy processing engine. Policy may be applied locally to a single computer using gpedit.msc, or to multiple users and/or computers in a domain using gpmc.msc. Legacy systems With Windows 95, Windows 98, Windows ME and Windows NT 4.0, administrators can use a special file to be merged into the registry, called a policy file (POLICY.POL). The policy file allows administrators to prevent non-administrator users from changing registry settings like, for instance, the security level of Internet Explorer and the desktop background wallpaper. The policy file is primarily used in a business with a large number of computers where the business needs to be protected from rogue or careless users. The default extension for the policy file is .POL. The policy file filters the settings it enforces by user and by group (a "group" is a defined set of users). To do that the policy file merges into the registry, preventing users from circumventing it by simply changing back the settings. The policy file is usually distributed through a LAN, but can be placed on the local computer. The policy file is created by a free tool by Microsoft that goes by the filename poledit.exe for Windows 95/Windows 98 and with a computer management module for Windows NT. The editor requires administrative permissions to be run on systems that uses permissions. The editor can also directly change the current registry settings of the local computer and if the remote registry service is installed and started on another computer it can also change the registry on that computer. The policy editor loads the settings it can change from .ADM files, of which one is included, that contains the settings the Windows shell provides. The .ADM file is plain text and supports easy localisation by allowing all the strings to be stored in one place. 
Virtualization INI file virtualization Windows NT kernels support redirection of INI file-related APIs into a virtual file in a registry location such as HKEY_CURRENT_USER using a feature called "InifileMapping". This functionality was introduced to allow legacy applications written for 16-bit versions of Windows to be able to run under Windows NT platforms on which the System folder is no longer considered an appropriate location for user-specific data or configuration. Non-compliant 32-bit applications can also be redirected in this manner, even though the feature was originally intended for 16-bit applications. Registry virtualization Windows Vista introduced limited registry virtualization, whereby poorly written applications that do not respect the principle of least privilege and instead try to write user data to a read-only system location (such as the HKEY_LOCAL_MACHINE hive), are silently redirected to a more appropriate location, without changing the application itself. Similarly, application virtualization redirects all of an application's invalid registry operations to a location such as a file. Used together with file virtualization, this allows applications to run on a machine without being installed on it. Low integrity processes may also use registry virtualization. For example, Internet Explorer 7 or 8 running in "Protected Mode" on Windows Vista and above will automatically redirect registry writes by ActiveX controls to a sandboxed location in order to frustrate some classes of security exploits. The Application Compatibility Toolkit provides shims that can transparently redirect HKEY_LOCAL_MACHINE or HKEY_CLASSES_ROOT Registry operations to HKEY_CURRENT_USER to address "LUA" bugs that cause applications not to work for users with insufficient rights. Disadvantages Critics labeled the registry in Windows 95 a single point of failure, because re-installation of the operating system was required if the registry became corrupt. However, Windows NT uses transaction logs to protect against corruption during updates. Current versions of Windows use two levels of log files to ensure integrity even in the case of power failure or similar catastrophic events during database updates. Even in the case of a non-recoverable error, Windows can repair or re-initialize damaged registry entries during system boot. Equivalents and alternatives In Windows, use of the registry for storing program data is a matter of developer's discretion. Microsoft provides programming interfaces for storing data in XML files (via MSXML) or database files (via SQL Server Compact) which developers can use instead. Developers are also free to use non-Microsoft alternatives or develop their own proprietary data stores. In contrast to Windows Registry's binary-based database model, some other operating systems use separate plain-text files for daemon and application configuration, but group these configurations together for ease of management. In Unix-like operating systems (including Linux) that follow the Filesystem Hierarchy Standard, system-wide configuration files (information similar to what would appear in HKEY_LOCAL_MACHINE on Windows) are traditionally stored in files in /etc/ and its subdirectories, or sometimes in /usr/local/etc. Per-user information (information that would be roughly equivalent to that in HKEY_CURRENT_USER) is stored in hidden directories and files (that start with a period/full stop) within the user's home directory. 
However XDG-compliant applications should refer to the environment variables defined in the Base Directory specification. In macOS, system-wide configuration files are typically stored in the /Library/ folder, whereas per-user configuration files are stored in the corresponding ~/Library/ folder in the user's home directory, and configuration files set by the system are in /System/Library/. Within these respective directories, an application typically stores a property list file in the Preferences/ sub-directory. RISC OS (not to be confused with MIPS RISC/os) uses directories for configuration data, which allows applications to be copied into application directories, as opposed to the separate installation process that typifies Windows applications; this approach is also used on the ROX Desktop for Linux. This directory-based configuration also makes it possible to use different versions of the same application, since the configuration is done "on the fly". If one wishes to remove the application, it is possible to simply delete the folder belonging to the application. This will often not remove configuration settings which are stored independently from the application, usually within the computer's !Boot structure, in !Boot.Choices or potentially anywhere on a network fileserver. It is possible to copy installed programs between computers running RISC OS by copying the application directories belonging to the programs, however some programs may require re-installing, e.g. when shared files are placed outside an application directory. IBM AIX (a Unix variant) uses a registry component called Object Data Manager (ODM). The ODM is used to store information about system and device configuration. An extensive set of tools and utilities provides users with means of extending, checking, correcting the ODM database. The ODM stores its information in several files, default location is /etc/objrepos. The GNOME desktop environment uses a registry-like interface called dconf for storing configuration settings for the desktop and applications. The Elektra Initiative provides alternative back-ends for various different text configuration files. While not an operating system, the Wine compatibility layer, which allows Windows software to run on a Unix-like system, also employs a Windows-like registry as text files in the WINEPREFIX folder: system.reg (HKEY_LOCAL_MACHINE), user.reg (HKEY_CURRENT_USER) and userdef.reg. See also Registry cleaner Application virtualization LogParser – SQL-like querying of various types of log files List of Shell Icon Overlay Identifiers Ransomware attack that uses Registry Notes Footnotes References External links * Windows Registry info & reference in the MSDN Library Registry Configuration files
29719643
https://en.wikipedia.org/wiki/Consistent%20Overhead%20Byte%20Stuffing
Consistent Overhead Byte Stuffing
Consistent Overhead Byte Stuffing (COBS) is an algorithm for encoding data bytes that results in efficient, reliable, unambiguous packet framing regardless of packet content, thus making it easy for receiving applications to recover from malformed packets. It employs a particular byte value, typically zero, to serve as a packet delimiter (a special value that indicates the boundary between packets). When zero is used as a delimiter, the algorithm replaces each zero data byte with a non-zero value so that no zero data bytes will appear in the packet and thus be misinterpreted as packet boundaries. Byte stuffing is a process that transforms a sequence of data bytes that may contain 'illegal' or 'reserved' values (such as the packet delimiter) into a potentially longer sequence that contains no occurrences of those values. The extra length of the transformed sequence is typically referred to as the overhead of the algorithm. The COBS algorithm tightly bounds the worst-case overhead, limiting it to a minimum of one byte and a maximum of ⌈n/254⌉ bytes for an n-byte packet (one byte in 254, rounded up). Consequently, the time to transmit the encoded byte sequence is highly predictable, which makes COBS useful for real-time applications in which jitter may be problematic. The algorithm is computationally inexpensive and its average overhead is low compared to other unambiguous framing algorithms. COBS does, however, require up to 254 bytes of lookahead. Before transmitting its first byte, it needs to know the position of the first zero byte (if any) in the following 254 bytes. Packet framing and stuffing When packetized data is sent over any serial medium, some protocol is required to demarcate packet boundaries. This is done by using a framing marker, a special bit-sequence or character value that indicates where the boundaries between packets fall. Data stuffing is the process that transforms the packet data before transmission to eliminate all occurrences of the framing marker, so that when the receiver detects a marker, it can be certain that the marker indicates a boundary between packets. COBS transforms an arbitrary string of bytes in the range [0,255] into bytes in the range [1,255]. Having eliminated all zero bytes from the data, a zero byte can now be used to unambiguously mark the end of the transformed data. This is done by appending a zero byte to the transformed data, thus forming a packet consisting of the COBS-encoded data (the payload) followed by a zero delimiter byte that unambiguously marks the end of the packet. (Any other byte value may be reserved as the packet delimiter, but using zero simplifies the description.) There are two equivalent ways to describe the COBS encoding process: Prefixed block description To encode some bytes, first append a zero byte, then break them into groups of either 254 non-zero bytes, or 0–253 non-zero bytes followed by a zero byte. Because of the appended zero byte, this is always possible. Encode each group by deleting the trailing zero byte (if any) and prepending the number of non-zero bytes, plus one. Thus, each encoded group is the same size as the original, except that 254 non-zero bytes are encoded into 255 bytes by prepending a byte of 255. As a special exception, if a packet ends with a group of 254 non-zero bytes, it is not necessary to add the trailing zero byte. This saves one byte in some situations. Linked list description First, insert a zero byte at the beginning of the packet, and after every run of 254 non-zero bytes. This encoding is obviously reversible.
It is not necessary to insert a zero byte at the end of the packet if it happens to end with exactly 254 non-zero bytes. Second, replace each zero byte with the offset to the next zero byte, or the end of the packet. Because of the extra zeros added in the first step, each offset is guaranteed to be at most 255. Encoding examples These examples show how various data sequences would be encoded by the COBS algorithm. In the examples, all bytes are expressed as hexadecimal values, and three kinds of bytes appear in the encoded output: Unaltered data bytes: all non-zero data bytes remain unaltered by encoding. Replaced zero bytes: each zero data byte is replaced during encoding by the offset to the following zero byte (i.e. one plus the number of non-zero bytes that follow). It is effectively a pointer to the next packet byte that requires interpretation: if the addressed byte is non-zero, it is itself an offset that points onward to the next byte requiring interpretation; if the addressed byte is zero, it marks the end of the packet. Overhead bytes: group header bytes containing an offset to a following group, but not corresponding to any data byte. These appear in two places: at the beginning of every encoded packet, and after every group of 254 non-zero bytes. A zero byte appears at the end of every packet to indicate end-of-packet to the data receiver. This packet delimiter byte is not part of COBS proper; it is an additional framing byte that is appended to the encoded output. Below is a diagram using example 3 from the table above (the data sequence 11 22 00 33), to illustrate how each modified data byte is located, and how it is identified as a data byte or an end-of-frame byte.

[OHB]                                : Overhead byte (Start of frame)
  3+ ------------->|                 : Points to relative location of first zero symbol
                   2+------->|       : Is a zero data byte, pointing to next zero symbol
                             [EOP]   : Location of end-of-packet zero symbol
  0     1     2     3     4     5    : Byte position
  03    11    22    02    33    00   : COBS data frame
        11    22    00    33         : Extracted data

OHB = Overhead byte (points to next zero symbol)
EOP = End of packet

Examples 7 through 10 show how the overhead varies depending on the data being encoded for packet lengths of 255 or more.
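The example-3 frame in the diagram above can be reproduced programmatically. The following C sketch round-trips the payload 11 22 00 33 through the cobsEncode and cobsDecode functions given in the Implementation section below; the encoder does not emit the delimiter, so the trailing zero byte is appended separately.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Declarations of the encoder and decoder defined in the Implementation
   section below. */
size_t cobsEncode(const void *data, size_t length, uint8_t *buffer);
size_t cobsDecode(const uint8_t *buffer, size_t length, void *data);

int main(void)
{
    const uint8_t payload[] = { 0x11, 0x22, 0x00, 0x33 };
    uint8_t frame[16], decoded[16];

    size_t n = cobsEncode(payload, sizeof(payload), frame); /* 03 11 22 02 33 */
    frame[n] = 0x00; /* append the packet delimiter for transmission */

    for (size_t i = 0; i <= n; i++)
        printf("%02X ", frame[i]); /* prints: 03 11 22 02 33 00 */
    printf("\n");

    /* Decode the encoded bytes (the delimiter itself is not passed in). */
    size_t m = cobsDecode(frame, n, decoded);
    printf("decoded %zu bytes, match: %d\n", m,
           m == sizeof(payload) && memcmp(decoded, payload, m) == 0);
    return 0;
}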
Implementation The following code implements a COBS encoder and decoder in the C programming language:

#include <stddef.h>
#include <stdint.h>
#include <assert.h>

/** COBS encode data to buffer
    @param data Pointer to input data to encode
    @param length Number of bytes to encode
    @param buffer Pointer to encoded output buffer
    @return Encoded buffer length in bytes
    @note Does not output delimiter byte
*/
size_t cobsEncode(const void *data, size_t length, uint8_t *buffer)
{
    assert(data && buffer);
    uint8_t *encode = buffer;  // Encoded byte pointer
    uint8_t *codep = encode++; // Output code pointer
    uint8_t code = 1;          // Code value

    for (const uint8_t *byte = (const uint8_t *)data; length--; ++byte)
    {
        if (*byte) // Byte not zero, write it
            *encode++ = *byte, ++code;

        if (!*byte || code == 0xff) // Input is zero or block completed, restart
        {
            *codep = code, code = 1, codep = encode;
            if (!*byte || length)
                ++encode;
        }
    }
    *codep = code; // Write final code value

    return (size_t)(encode - buffer);
}

/** COBS decode data from buffer
    @param buffer Pointer to encoded input bytes
    @param length Number of bytes to decode
    @param data Pointer to decoded output data
    @return Number of bytes successfully decoded
    @note Stops decoding if delimiter byte is found
*/
size_t cobsDecode(const uint8_t *buffer, size_t length, void *data)
{
    assert(buffer && data);
    const uint8_t *byte = buffer;      // Encoded input byte pointer
    uint8_t *decode = (uint8_t *)data; // Decoded output byte pointer

    for (uint8_t code = 0xff, block = 0; byte < buffer + length; --block)
    {
        if (block) // Decode block byte
            *decode++ = *byte++;
        else
        {
            if (code != 0xff) // Encoded zero, write it
                *decode++ = 0;
            block = code = *byte++; // Next block length
            if (!code)              // Delimiter code found
                break;
        }
    }

    return (size_t)(decode - (uint8_t *)data);
}

See also Serial Line Internet Protocol References External links Python implementation Alternate C implementation Another implementation in C Consistent Overhead Byte Stuffing—Reduced (COBS/R) A patent describing a scheme with a similar result but using a different method Encodings
3368545
https://en.wikipedia.org/wiki/Bjarkam%C3%A1l
Bjarkamál
Bjarkamál (Bjarkemål in modern Norwegian and Danish) is an Old Norse poem from around the year 1000. Only a few lines have survived in the Icelandic version, the rest is known from Saxo's version in Latin. The latter consists of 298 hexameters, and tells the tale of Rolf Krake's downfall at Lejre on the isle of Sjælland, described in a dialogue between two of Rolf Krake's twelve berserkers, Bodvar Bjarke (hence the name of the poem), the most famous warrior at the court of the legendary Danish king Rolf Krake, and Hjalte (= hilt). The poem opens with Hjalte waking up his fellow berserkers, having realized they are under attack. In 1030, King Olav had the bard Tormod Kolbrunarskald recite the Bjarkamál to rouse his outnumbered army in the morning before the start of the Battle of Stiklestad, according to Fóstbrœðra saga. In Bjarkamál, Rolf Krake has subdued the Swedes enough to make them pay him tax. Instead, they destroy his court at Lejre with a trick reminding us of Homer's Trojan horse: The wagons, bringing the valuables to Lejre, are filled with hidden weapons instead. When the Swedes, led by Hjartvar, arrive at Lejre, they are invited to a party, but unlike the Danes, they make sure to stay sober. Saxo has combined motives from the original Danish poem with motives from the second song in the Æneid, known as the nyktomakhi, where Æneas tells Dido about the battle between the Greeks and the Trojans in Troy. The nyktomakhi is of about the same length as Bjarkamál, and containing the same elements: The Trojan horse/the smuggling of Swedish weapons; Danes/Trojans are sound asleep when Swedes/Greeks attack them; plus the climax: The goddess Venus informs Æneas that it is the will of the gods themselves (that is, Jupiter, Juno, Minerva and Neptune) that Troy shall fall, and so he can honourably flee. Correspondingly, Rolf Krake's sister Hrut shows Bjarke the war god Odin, albeit the sight of Odin constitutes the moment when Bjarke and Hjalte die. In Axel Olrik's rewrite of Bjarkemål, the mortally wounded Bjarke calls on Hrut to show him Odin, only to say that if Odin shows himself, Bjarke will take revenge (for the death of Rolf). He then lies down next to his dead king, because it is appropriate of a king's man to honour him so, when the king has been mild and just ("dådherlig"). The postscript of Roar Skovmand emphasises the rejection of Odin and that the king is honoured with loyalty, even after he is dead. A well-known example of the old Norse faith in fylgjur, is Bjarke in Bjarkamál lying fast asleep in the hall, while his fylgja (doppelgänger in animal shape), the bear, is fighting on his behalf outside. When eventually Bjarke gets up and starts fighting, the bear has disappeared. The hymn "Sol er oppe" (= Sun is up) from 1817 is Grundtvig's version of the poem. Body Most of the poem is lost. Only fragments of it are preserved in Skáldskaparmál and in Heimskringla. In Saxo Grammaticus' Gesta Danorum a Latin translation of the poem is found but it probably does not closely follow the original. The following example may illustrate the difference between the original terse Old Norse and Saxo's elaborate translation. References The Old Lay of Biarki Translation and commentary by Lee M. Hollander, includes translation of Axel Olrik's reconstruction Bjarkamál The remnants of the original text, two editions Gesta Danorum, Liber 2, Caput 7 Saxo's Latin version (starting with "Ocius evigilet") Translation of Saxo's version Skaldic poems Sources of Norse mythology
27729688
https://en.wikipedia.org/wiki/Before%20You%20Know%20It%20%28software%29
Before You Know It (software)
Before You Know It (usually Byki) is language acquisition software that listens to students and gives them detailed feedback on their pronunciation. A freeware version is also available for both Windows and Mac OS X, but it does not check pronunciation. Paid versions present continuous text, which can be used in various ways; the freeware version uses a flashcard system. Courses were available in over 70 languages. Courses branded as Byki were produced by Transparent Language Online, which still produces courses under that and other names. The system is based on declarative learning of a new language with English as the base language (except that English itself is taught with Spanish as the base language), using the idea that the best way to approach a new language is through its vocabulary and pronunciation. The software has a virtual community of users who expand the vocabulary database. The flashcard program helps users learn and remember vocabulary words in dozens of foreign languages and is available for both Windows and Macintosh. The software offers interactive vocabulary practice, a progress tracker, and sound files. It aims to be a personalized language-learning system that locks foreign language words and phrases into memory. A distinctive feature of the paid versions is their ability to record and graph the learner's pronunciation in detail and compare it to the native speaker's pronunciation of the same words stored in the program. Paid versions teach 1,000–1,500 words per language. The software was discontinued in 2017.

See also
Language education
Language pedagogy
List of Language Self-Study Programs

References

External links
Review by Lang1234

Language learning software
3604693
https://en.wikipedia.org/wiki/H-index
H-index
The h-index is an author-level metric that measures both the productivity and citation impact of an author's publications; it was initially used for individual scientists and scholars. The h-index correlates with obvious success indicators such as winning the Nobel Prize, being accepted for research fellowships and holding positions at top universities. The index is based on the set of the scientist's most cited papers and the number of citations that they have received in other publications. The index has more recently been applied to the productivity and impact of a scholarly journal, as well as to a group of scientists, such as a department, university, or country. The index was suggested in 2005 by Jorge E. Hirsch, a physicist at UC San Diego, as a tool for determining theoretical physicists' relative quality and is sometimes called the Hirsch index or Hirsch number.

Definition and purpose
The h-index is defined as the maximum value of h such that the given author/journal has published at least h papers that have each been cited at least h times. The index is designed to improve upon simpler measures such as the total number of citations or publications. The index works best when comparing scholars working in the same field, since citation conventions differ widely among different fields.

Calculation
The h-index is the largest number h such that h articles have at least h citations each. For example, if an author has five publications with 9, 7, 6, 2, and 1 citations (ordered from greatest to least), then the author's h-index is 3, because the author has three publications with 3 or more citations but does not have four publications with 4 or more citations. Clearly, an author's h-index can only be as great as their number of publications. For example, an author with only one publication can have a maximum h-index of 1 (if their publication has 1 or more citations). On the other hand, an author with many publications, each with only 1 citation, would also have an h-index of 1. Formally, if f is the function that gives the number of citations for each publication, we compute the h-index as follows: first we order the values of f from the largest to the lowest value; then we look for the last position in which f is greater than or equal to the position (we call this position h). For example, if we have a researcher with 5 publications A, B, C, D, and E with 10, 8, 5, 4, and 3 citations, respectively, the h-index is equal to 4 because the 4th publication has 4 citations and the 5th has only 3. In contrast, if the same publications have 25, 8, 5, 3, and 3 citations, then the index is 3 (i.e. the 3rd position) because the fourth paper has only 3 citations.
f(A)=10, f(B)=8, f(C)=5, f(D)=4, f(E)=3 → h-index=4
f(A)=25, f(B)=8, f(C)=5, f(D)=3, f(E)=3 → h-index=3
If we have the function f ordered in decreasing order from the largest value to the lowest one, we can compute the h-index as follows:
h-index(f) = max {i ∈ ℕ : f(i) ≥ i}
The Hirsch index is analogous to the Eddington number, an earlier metric used for evaluating cyclists. The h-index serves as an alternative to more traditional journal impact factor metrics in the evaluation of the impact of the work of a particular researcher. Because only the most highly cited articles contribute to the h-index, its determination is a simpler process. Hirsch has demonstrated that h has high predictive value for whether a scientist has won honors like National Academy membership or the Nobel Prize.
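The calculation just described translates directly into a few lines of code. The following Python sketch (illustrative only; the function name is not taken from any bibliometrics library) sorts the citation counts in decreasing order and returns the last position at which the count is still at least as large as the position, reproducing the two worked examples above.

def h_index(citations):
    """Return the h-index of a list of per-paper citation counts."""
    ordered = sorted(citations, reverse=True)   # order from largest to smallest
    h = 0
    for i, c in enumerate(ordered, start=1):    # positions are 1-based
        if c >= i:
            h = i                               # position i still satisfies f(i) >= i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4, matching the first example above
print(h_index([25, 8, 5, 3, 3]))   # 3, matching the second example above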
The h-index grows as citations accumulate and thus depends on the "academic age" of a researcher.

Input data
The h-index can be determined manually from citation databases or with automatic tools. Subscription-based databases such as Scopus and the Web of Science provide automated calculators. Since July 2011, Google has provided an automatically calculated h-index and i10-index within its Google Scholar profiles. In addition, specific databases, such as the INSPIRE-HEP database, can automatically calculate the h-index for researchers working in high energy physics. Each database is likely to produce a different h for the same scholar because of different coverage. A detailed study showed that the Web of Science has strong coverage of journal publications but poor coverage of high-impact conferences. Scopus has better coverage of conferences but poor coverage of publications prior to 1996; Google Scholar has the best coverage of conferences and most journals (though not all), but like Scopus has limited coverage of pre-1990 publications. The exclusion of conference proceedings papers is a particular problem for scholars in computer science, where conference proceedings are considered an important part of the literature. Google Scholar has been criticized for producing "phantom citations," including gray literature in its citation counts, and failing to follow the rules of Boolean logic when combining search terms. For example, the Meho and Yang study found that Google Scholar identified 53% more citations than Web of Science and Scopus combined, but noted that because most of the additional citations reported by Google Scholar were from low-impact journals or conference proceedings, they did not significantly alter the relative ranking of the individuals. It has been suggested that, in order to deal with the sometimes wide variation in h for a single academic measured across the possible citation databases, one should assume false negatives in the databases are more problematic than false positives and take the maximum h measured for an academic.

Examples
Little systematic investigation has been done on how the h-index behaves over different institutions, nations, times and academic fields. Hirsch suggested that, for physicists, a value for h of about 12 might be typical for advancement to tenure (associate professor) at major [US] research universities. A value of about 18 could mean a full professorship, 15–20 could mean a fellowship in the American Physical Society, and 45 or higher could mean membership in the United States National Academy of Sciences. Hirsch estimated that after 20 years a "successful scientist" would have an h-index of 20, an "outstanding scientist" would have an h-index of 40, and a "truly unique" individual would have an h-index of 60. For the most highly cited scientists in the period 1983–2002, Hirsch identified the top 10 in the life sciences (in order of decreasing h): Solomon H. Snyder, h = 191; David Baltimore, h = 160; Robert C. Gallo, h = 154; Pierre Chambon, h = 153; Bert Vogelstein, h = 151; Salvador Moncada, h = 143; Charles A. Dinarello, h = 138; Tadamitsu Kishimoto, h = 134; Ronald M. Evans, h = 127; and Ralph L. Brinster, h = 126. Among 36 new inductees in the National Academy of Sciences in biological and biomedical sciences in 2005, the median h-index was 57. However, Hirsch noted that values of h will vary among disparate fields.
Among the 22 scientific disciplines listed in the Essential Science Indicators citation thresholds [thus excluding non-science academics], physics has the second most citations after space science. During the period January 1, 2000 – February 28, 2010, a physicist had to receive 2073 citations to be among the most cited 1% of physicists in the world. The threshold for space science is the highest (2236 citations), and physics is followed by clinical medicine (1390) and molecular biology & genetics (1229). Most disciplines, such as environment/ecology (390), have fewer scientists, fewer papers, and fewer citations. Therefore, these disciplines have lower citation thresholds in the Essential Science Indicators, with the lowest thresholds observed in social sciences (154), computer science (149), and multidisciplinary sciences (147). Numbers are very different in social science disciplines: the Impact of the Social Sciences team at the London School of Economics found that social scientists in the United Kingdom had lower average h-indices. The h-indices for ("full") professors, based on Google Scholar data, ranged from 2.8 (in law), through 3.4 (in political science), 3.7 (in sociology) and 6.5 (in geography), to 7.6 (in economics). On average across the disciplines, a professor in the social sciences had an h-index about twice that of a lecturer or a senior lecturer, though the difference was smallest in geography.

Advantages
Hirsch intended the h-index to address the main disadvantages of other bibliometric indicators. The total number of papers metric does not account for the quality of scientific publications. The total number of citations metric, on the other hand, can be heavily affected by participation in a single publication of major influence (for instance, methodological papers proposing successful new techniques, methods or approximations, which can generate a large number of citations). The h-index is intended to measure simultaneously the quality and quantity of scientific output.

Criticism
There are a number of situations in which h may provide misleading information about a scientist's output. Some of these failures are not exclusive to the h-index but rather are shared with other author-level metrics.

Misrepresentation of data
The h-index does not account for the typical number of citations in different fields. Citation behavior in general is affected by field-dependent factors, which may invalidate comparisons not only across disciplines but even within different fields of research of one discipline. The h-index discards the information contained in author placement in the authors' list, which in some scientific fields is significant though in others it is not. The h-index is a natural number, which reduces its discriminatory power. Ruane and Tol therefore propose a rational h-index that interpolates between h and h + 1.

Prone to manipulation
Weaknesses of this kind apply to any purely quantitative calculation of scientific or academic output. Like other metrics that count citations, the h-index can be manipulated by coercive citation, a practice in which an editor of a journal forces authors to add spurious citations to their own articles before the journal will agree to publish them. The h-index can also be manipulated through self-citations, and if based on Google Scholar output, then even computer-generated documents can be used for that purpose, e.g. using SCIgen.
Other shortcomings
The h-index has been found in one study to have slightly less predictive accuracy and precision than the simpler measure of mean citations per paper. However, this finding was contradicted by another study by Hirsch. The h-index does not provide a significantly more accurate measure of impact than the total number of citations for a given scholar. In particular, by modeling the distribution of citations among papers as a random integer partition and the h-index as the Durfee square of the partition, Yong arrived at the approximation h ≈ 0.54√N, where N is the total number of citations; for mathematics members of the National Academy of Sciences, this turns out to provide an accurate (with errors typically within 10–20 percent) approximation of the h-index in most cases.

Alternatives and modifications
Various proposals to modify the h-index in order to emphasize different features have been made. As the variants have proliferated, comparative studies have become possible, showing that most proposals are highly correlated with the original h-index and therefore largely redundant, although alternative indexes may be important to decide between comparable CVs, as is often the case in evaluation processes. These alternative metrics are applicable to author-level and journal-level rankings.

Applications
Indices similar to the h-index have been applied outside of author-level metrics. The h-index has been applied to Internet media, such as YouTube channels. It is defined as the number of videos with ≥ h × 10^5 views. When compared with a video creator's total view count, the h-index and g-index better capture both productivity and impact in a single metric. A successive Hirsch-type index for institutions has also been devised. A scientific institution has a successive Hirsch-type index of i when at least i researchers from that institution have an h-index of at least i.

See also
Bibliometrics
Comparison of research networking tools and research profiling systems

References

Further reading

External links
Google Scholar Metrics
H-index for computer science and electronics
H-index for economists
H-index for computer science researchers
H-index for computer scientists from Google Scholar
H-index for astronomers

Citation metrics Academic publishing Index numbers
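As a rough illustration of Yong's approximation mentioned above (a minimal sketch using made-up citation counts, not data from the cited study), the estimate 0.54√N can be compared against the exact h-index:

import math

citations = [40, 30, 22, 18, 15, 12, 10, 9, 8, 7, 6, 5, 4, 4, 3, 3, 2, 2, 1, 1]
N = sum(citations)  # 202 citations in total
# Exact h-index: number of positions i (1-based, citations sorted descending) with count >= i.
exact_h = sum(1 for i, c in enumerate(sorted(citations, reverse=True), start=1) if c >= i)
print(exact_h, round(0.54 * math.sqrt(N), 2))   # prints: 8 7.67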
1399012
https://en.wikipedia.org/wiki/FreqTweak
FreqTweak
FreqTweak is an open-source tool for real-time audio spectral manipulation and display. It is free software, available under the GNU General Public License. FreqTweak can be, and is intended to be, connected to other audio software using the JACK Audio Connection Kit.

Description
FreqTweak is FFT-based and supports up to four channels. An FFT analysis is applied to each audio channel, and every individual frequency band can have a different effect applied to it (see the sketch below). In version 0.6.1 the following effects are available: EQ cut/boost, Pitch scaling, Gate, Compressor, Delay, Limit, and Warp.

References

Free audio software Free software programmed in C++ Audio software with JACK support Software that uses wxWidgets
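To make the per-band processing described above concrete, here is a minimal NumPy sketch of the general idea (an illustration only, not FreqTweak's actual code; the block size, gain curve, and gate threshold are arbitrary choices):

import numpy as np

def process_block(block, gains, gate_threshold):
    """Apply a per-frequency-band gain (EQ cut/boost) and a simple gate to one audio block."""
    spectrum = np.fft.rfft(block)                         # forward FFT of the block
    spectrum *= gains                                     # independent gain for every frequency bin
    spectrum[np.abs(spectrum) < gate_threshold] = 0.0     # gate: mute bins below the threshold
    return np.fft.irfft(spectrum, n=len(block))           # back to the time domain

# Example: a 1024-sample block of a 440 Hz tone at a 44.1 kHz sample rate.
sr, n = 44100, 1024
t = np.arange(n) / sr
block = np.sin(2 * np.pi * 440 * t)
gains = np.linspace(1.5, 0.2, n // 2 + 1)   # boost low bins, cut high bins (arbitrary curve)
out = process_block(block, gains, gate_threshold=0.5)

A real-time implementation would typically process overlapping, windowed blocks (overlap-add) so that block boundaries do not produce audible artifacts.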
2017690
https://en.wikipedia.org/wiki/Poly1305
Poly1305
Poly1305 is a cryptographic message authentication code (MAC) created by Daniel J. Bernstein. It can be used to verify the data integrity and the authenticity of a message. A variant of Bernstein's Poly1305 that does not require AES has been standardized by the Internet Engineering Task Force.

Description
The original proposal, Poly1305-AES, which uses the AES (forward) encryption function as a source of pseudorandomness, computes a 128-bit (16-byte) authenticator of a variable-length message. In addition to the message, it requires a 128-bit AES key, a 128-bit additional key r (with 106 effective key bits), and a 128-bit nonce which must be unique among all messages authenticated with the same key. The message is broken into 16-byte chunks which become coefficients of a polynomial evaluated at r, modulo the prime number 2^130 − 5 (see the sketch below). The authentication code is the sum of this polynomial evaluation plus a pseudorandom value computed by passing the nonce through the AES block cipher. The name Poly1305-AES is derived from its use of polynomial evaluation, the 2^130 − 5 modulus, and AES. In NaCl, Poly1305 is used with Salsa20 in place of AES, and in TLS and SSH it is used with the ChaCha20 variant of the same. Google has selected Poly1305 along with Bernstein's ChaCha20 symmetric cipher as a replacement for RC4 in TLS/SSL, which is used for Internet security. Google's initial implementation is used in securing https (TLS/SSL) traffic between the Chrome browser on Android phones and Google's websites. Use of ChaCha20/Poly1305 has since been standardized. Shortly after Google's adoption for use in TLS, support for both ChaCha20 and Poly1305 was added to OpenSSH via a new authenticated encryption cipher. Subsequently, this made it possible for OpenSSH to remove its dependency on OpenSSL through a compile-time option.

Security
The security of Poly1305-AES is very close to that of the underlying AES block cipher algorithm. Consequently, the only way for an attacker to break Poly1305-AES is to break AES.

Speed
Poly1305-AES can be computed at high speed on various CPUs: for an n-byte message, no more than 3.1n + 780 Athlon cycles are needed, for example. The author has released optimized source code for Athlon, Pentium Pro/II/III/M, PowerPC, and UltraSPARC, in addition to non-optimized reference implementations in C and C++ as public domain software.

Implementations
Below is a list of cryptography libraries that support Poly1305: Botan, Bouncy Castle, Crypto++, Libgcrypt, libsodium, Nettle, OpenSSL, LibreSSL, wolfCrypt, GnuTLS, mbed TLS, and MatrixSSL.

See also
ChaCha20-Poly1305 – an AEAD scheme combining the stream cipher ChaCha20 with a variant of Poly1305

References

External links
Poly1305-AES reference and optimized implementation by author D. J. Bernstein
Fast Poly1305 implementation in C on github.com

Advanced Encryption Standard Internet Standards Message authentication codes Public-domain software with source code
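The polynomial evaluation described above can be sketched in a few lines of Python. This is an illustration of the construction only, not a vetted or constant-time cryptographic implementation; for simplicity the pseudorandom value s is taken directly from the second half of a 32-byte key, whereas Poly1305-AES would derive it by encrypting the nonce with AES.

def poly1305_tag(message: bytes, key: bytes) -> bytes:
    """Compute a 16-byte Poly1305 tag from a 32-byte one-time key (r || s)."""
    p = (1 << 130) - 5                                   # the prime modulus 2^130 - 5
    r = int.from_bytes(key[:16], "little")
    r &= 0x0ffffffc0ffffffc0ffffffc0fffffff              # clamp r to its 106 effective key bits
    s = int.from_bytes(key[16:32], "little")             # pad value; AES_k(nonce) in Poly1305-AES
    acc = 0
    for i in range(0, len(message), 16):
        chunk = message[i:i + 16] + b"\x01"              # each 16-byte chunk with a 1 byte appended
        acc = (acc + int.from_bytes(chunk, "little")) * r % p
    return ((acc + s) % (1 << 128)).to_bytes(16, "little")

# Example usage with an arbitrary demonstration key; real keys must be
# uniformly random and used for only one message.
tag = poly1305_tag(b"Cryptographic Forum Research Group", bytes(range(32)))
print(tag.hex())

With a key from the published Poly1305 test vectors, the output of such a sketch can be checked against the corresponding expected tag.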
15084034
https://en.wikipedia.org/wiki/List%20of%20Terminator%3A%20The%20Sarah%20Connor%20Chronicles%20characters
List of Terminator: The Sarah Connor Chronicles characters
The following is a list of characters in the FOX science fiction television series Terminator: The Sarah Connor Chronicles, including supporting characters and important villains.

A
Auldridge
Auldridge (introduced in "Born to Run"), portrayed by Joshua Malina, is an FBI agent who handles difficult unsolved cases involving unusual evidence or facts. Little is known about Agent Auldridge. He meets with the captured Sarah Connor twice while she is in prison. On both occasions, he tries to get Sarah to tell him the whereabouts of her son, John Connor, but Sarah refuses to talk. He is visited by (and knows) former FBI agent James Ellison, with whom he discusses Ellison's former case.

B
Martin Bedell
Presidio Alto Military Academy Cadet Captain Martin Bedell (introduced in "Goodbye to All That"), portrayed by Will Rothhaar, plays a role in forming the core of Tech-Com using his military training and experience. Derek Reese and John Connor save him from a T-888 while he is in military prep school. In Derek's future, he participates with John in a mission to free Skynet's prisoners, including Kyle Reese, from one of its concentration camps, Century. Years after the events at the Century work camp, he sacrifices himself to save Kyle Reese, John Connor, and thirty-nine prisoners from Skynet forces.

Marty Bedell
Marty Bedell (introduced in "Goodbye to All That"), portrayed by Billy Unger, is a child targeted by a T-888 because he shares the same name as a future hero of the Human Resistance. Remembering that two other Sarah Connors were killed in 1984 by the T-800 hunting for her, Sarah shelters and protects the boy with the guardian terminator Cameron Phillips, while John Connor and his uncle Derek Reese hunt the T-888. In the Connor home, he prepares a book report on L. Frank Baum's The Wonderful Wizard of Oz for school. Cameron suggests the novel after finding it on a shelf in their furnished home, reminding Sarah that it was John's favorite book as a child.

Father Armando Bonilla
Father Armando Bonilla (introduced in "Samson and Delilah"), portrayed by Carlos Sanz, is a priest to whose church Sarah and John Connor flee when escaping from a malfunctioning Cameron. He provides sanctuary to the Connors while they prepare a booby trap for Cameron, as it is very likely that it will be able to track them down. Father Bonilla is shocked when John Connor tries to cut into Cameron's skull to remove its chip, but he is quickly chased away by Sarah. Bonilla appears again in the season finale, hearing the confession of a woman who turns out to be Chola, the lookout in Carlos' gang in the first season; they seem to have some distant family relation. Sarah Connor also asks for Bonilla when she is incarcerated in the LA County lockup, and he becomes trapped in Sarah's cell when Cameron assaults the guards to break her out.

Felicia Burnett, MD
Dr. Felicia Burnett (introduced in "The Good Wound"), portrayed by Laura Regan, treats Sarah Connor's gunshot wound. She is revealed to have a past with the sheriff who is investigating Sarah Connor's shoot-out at the Kaliba warehouse. Felicia assumes that Sarah has been abused and shot by her boyfriend or husband and relates to her. Because of this empathy, Sarah confides family secrets to her, such as John's relationship to Kyle Reese. After a failed attempt to remove the bullet from Sarah's thigh, they sneak into a hospital morgue, where Felicia successfully removes it. After the operation, Derek Reese enters the room, and thinking him to be Sarah's abusive boyfriend, Felicia pulls a gun on Derek.
During the stand-off, Felicia's own abusive husband steps in and coarsely orders Felicia to stand down. Overwhelmed, Felicia shoots and kills him and watches as Sarah and Derek leave. C Cameron Carlos Carlos is portrayed by Jesse Garcia (introduced in "Gnothi Seauton"). He is Enrique's nephew, and purveyor of forged documents. He calls his uncle a stool pigeon. Among those who sought his service were Sarah and John Connor, and Cameron. During a meeting with FBI Special Agent James Ellison in a deleted scene of the episode "The Turk", it is implied that Carlos also forged identification and allied documents for Derek Reese and his unit at some point after their arrival from 2027. Carlos' gang are brutally slaughtered by Margos Sarkissian's men. Carter Carter, portrayed by Brian Bloom, is a Terminator, whose model type is unknown, that was sent back in time to 2007 by Skynet to acquire and store a large amount of Coltan, the metal used to construct Terminators, in the episode "Heavy Metal". Carter hires various human military personnel, who are unaware of what he truly is, to assist him in his assignment. Once his mission is completed, Carter kills his humans, secures the storage area, and shuts himself down to await further orders. Cameron identifies his endoskeletal structure as different from Cromartie's. However, when John Connor accidentally is transported inside the secured storage area, he gets trapped with Carter and his men. John tries to retrieve a key from around Carter's neck without waking him up from Stand-By Mode. John tells Sarah that he isn't moving and that it is like he's sleeping. Cameron tells Sarah that Carter's on Stand-By until his next objective or is triggered awake. Cameron eventually enters the storage area to rescue John and steal the coltan, battles Carter and locks the Terminator inside. Barbara Chamberlain Barbara Chamberlain, portrayed by Karina Logue (introduced in "Vick's Chip"), was the city manager of Los Angeles, whose project would have become a part of Skynet's future infrastructure. She was killed by Vick Chamberlain, a T-888 posing as her husband. Vick Chamberlain Vick Chamberlain, portrayed by Matt McColm (introduced in "Gnothi Seauton"), is a T-888 Terminator sent back in time to help create a traffic surveillance network that Skynet hopes to use in the future. An advanced infiltrator, Vick poses as the husband of city manager Barbara Chamberlain, murders one of her political enemies, and adapts his mission to attack a group of Human Resistance fighters, including Derek Reese, when he finds one of them spying on her. Although his mission is not directly related to the Connors, he is their principal Terminator adversary early in the first season, while Cromartie obtains a new biological covering and begins his search for them anew. Vick is first discovered by Cameron, Sarah Connor, and John Connor in "Gnothi Seauton", lying apparently deactivated among the corpses of time traveling Resistance soldiers in their hideout. It is not clear how Vick was deactivated as he did not appear to be damaged in any way. It is possible that he was simply interrupted while searching the area, and decided to 'play dead'. Cameron suggests that he was waiting to ambush the last member of the Resistance cell (who turns out to be Derek Reese) when he returned that evening. Upon reactivation, Vick identifies Cameron as an "unknown cyborg", and he is programmed to evade and re-evaluate his mission. Cameron and Sarah Connor give chase but are thwarted by traffic. 
In "Queen's Gambit", Vick learns that Derek Reese is in police custody, and gets himself arrested in order to kill Reese. Sarah and Cameron rescue Derek, and once again fight Vick. Before being defeated by Cameron, Vick's hand is ripped off by a passing truck, becoming lost on the street. FBI Special Agent James Ellison recovers Vick's hand and takes it when visiting Dr. Silberman in "The Demon Hand". Vick is terminated when Cameron pulls the CPU from his exposed metal skull. She later incinerates his endoskeleton (less his missing hand) with thermite. As she prepares to do so, Charley Dixon describes Vick as "a scary robot" and Cameron as "a very scary robot." Cameron secretly retains Vick's CPU, which is discovered by Derek in "Vick's Chip". John and Sarah decide to investigate its contents, Vick's mission, and his memories. In doing so, they learn that he maintained a marital relationship with Barbara Chamberlain. Like Cameron, the T-888 models are thus shown capable of mimicking affection and romance, and seducing human partners. After increasing the electrical power too much to the CPU by mistake, the disembodied Vick begins to take over John's computer to which he is connected, and tries to connect to the internet. It is unknown whether he succeeded before being shut down. Rupert Chandler Rupert Chandler, portrayed by Tim Monsion (introduced in "Self Made Man"), is Los Angeles County's most significant land developer in the early Twentieth Century, and the father of Will Chandler. In the wake of his son's death on December 31, 1920, Rupert Chandler promises to build a memorial park at the corner of 3rd Avenue and Pico Boulevard, where his son had planned to build his masterpiece, Pico Tower. He is approached by T-888 Terminator Myron Stark at the October 21, 1921, premiere of The Sheik, who offers to pay twice the land's value, but Chandler refuses to sell. He is subsequently driven to financial ruin by Stark, and must liquidate his assets — including the land at 3rd and Pico. Will Chandler Will Chandler, portrayed by Eric Callero (introduced in "Self Made Man"), is an up-and-coming architect in Los Angeles in the early Twentieth Century, and the son of Rupert Chandler. In one timeline, Will Chandler designs and builds Pico Tower at the corner of 3rd Avenue and Pico Boulevard in the 1920s, where T-888 Terminator Myron Stark intends to assassinate California Governor Mark Wyman. Instead, Will Chandler and forty-two others are accidentally killed on December 31, 1920, by Stark whose time displacement bubble arrives ninety years early and sets fire to the speakeasy in which they were celebrating New Year's Eve. His Pico Tower is eventually built by Stark. There is no real-world tower at or near the corner of Pico Boulevard and 3rd Avenue. Chola Chola, portrayed by Sabrina Perez, is a member of Carlos' gang, functioning as the group's lookout. The character first appears in "Gnothi Seauton", in which Cameron studies and copies her body language in an order to better simulate human appearance. In "What He Beheld", (after Carlos' gang was killed off) Chola visits the Connors and subsequently drives them to the hideout of the so-called False Sarkissian. Afterward, she is seen driving the Connors home. Once the Connors are out of hearing range, Cameron offers to kill Chola, lest she reveal their location. In the end, Cameron gives Chola a loaded sidearm with which to protect herself. 
She's next seen at the second season's finale, providing John Connor and Cameron with forged passports and a message from Sarah Connor; this being the first time since her first appearance in which she speaks. Sarah Connor John Connor Kacy Corbin Kacy Corbin, portrayed by Busy Philipps (introduced in "Automatic for the People"), is the pregnant landlady and next-door neighbor of the Connor/Reese family (which she knows as the Baums). Her unborn son's name is Nick. She attended culinary school with a classmate who knew George Laszlo, and met Nick's policeman father, Trevor (Jon Huertas) when she was a 25-year-old pastry chef in Silver Lake. John Connor pirates cable television for her, noting afterward to Sarah, "Nobody that pregnant should be forced to watch network television. It's bad for the baby." Kacy admits to John that six beers and the rhythm method have proven to be ineffective birth control. Sarah bonds with her when the former takes her to hospital for pregnancy complications, and comforts her there. Kacy and Trevor individually each tell Sarah of Kacy's fears about Trevor's profession; either that he will be killed or injured, or that he will "bring his work home". During the episode "Brothers of Nablus", Cromartie goes to Kacy's house in search of Cameron, but Kacy tells him she has never seen her before. She then telephones John and Riley, warning them that a man was looking for them just before Cromartie knocks on their door to look for Cameron. Jordan Cowan Jordan Cowan, portrayed by Alessandra Torresani, is a classmate of John Connor and Cameron, and is the first person whom Cameron is seen attempting to befriend in the series for no operational purpose. She tries to cheer Jordan in the girls' lavatory by offering her the gift of a "tight" (meaning "appealing") makeup product in the episode "The Turk". Afterward, Cameron informs John that she made a friend. Jordan is upset over graffiti on a classroom door, which hinted she may have had a sexual liaison with a teacher or student. Ashamed, she commits suicide by jumping off of the school's roof. John had wanted to save her, but Cameron thwarted his chance (knowing that the family's cover would be blown with unwanted publicity if John effected a rescue). John takes Jordan's death personally. Cromartie Portrayed by Owain Yeoman in the pilot episode and then by Garret Dillahunt in later episodes. It is a substitute teacher in the high school class that John is attending in 1999 and identifies itself as "Cromartie". During roll call, when John acknowledges his attendance, Cromartie removes a concealed pistol and attempts to kill John. John escapes due to Cameron Phillips taking the bullets. Cameron later runs over Cromartie with a truck and briefly disables it. Cromartie follows them home and engages Cameron in battle again. Unable to destroy Cromartie, Cameron instead shocks it with a live wire, incapacitating it for 120 seconds. Cromartie eventually tracks John, Sarah and Cameron to a bank vault, demonstrating the model's immense strength by tearing apart the thick vault door with his bare hands. Intercepting them right before they travel to 2007, it ends up shot by an energy rifle that was hidden in the vault and is supposedly destroyed. It is later revealed, however, that Cromartie survived, although its biological covering was destroyed, and its head separated from its body. Cromartie's head was transported to 2007, along with John, Sarah, and Cameron, and is subsequently found by a salvager. 
The head is able to reactivate the body and able to guide it to its position. After killing the salvager and reattaching the head to its body, the Cromartie terminator, under heavy disguise, searches for a new biological covering. Its search leads it to a medical scientist, Dr. Flemming, who specializes in cellular growth. When Cromartie shows him a formula to create artificial flesh at an exponential rate, Dr. Flemming is eager to complete the task. However, after the cyborg receives his new flesh covering, it kills the scientist and takes his eyes to cover its own. Cromartie later seeks the aid of a plastic surgeon, Dr. David Lyman, to build it a new face. After scanning through pictures of all of Dr. Lyman's patients, it chooses the face of George Laszlo, since he is a 92% structural match. It later kills Dr. Lyman and Laszlo, and steals Laszlo's identity. Cromartie later assumes the name Robert Kester and masquerades as an FBI agent, and uses its false credentials to find Sarah and John. However, Agent James Ellison finds out, and attempts to capture it with a Hostage Rescue Team operation. The entire team is killed except for Ellison, who is left face to face with Cromartie in a final showdown. When Ellison lowers his gun in resignation, Cromartie simply stares at him and walks away without firing. Later that evening when Ellison goes to the ruins of Sarah Connor's house, he is startled to see Cromartie walk up. Ellison tells it that if he was left alive so that he could lead Cromartie to the Connors then he might as well be killed right now, because he will not do "The devil's work". The Terminator replies "We'll see", before walking away again. Later, in the season two episode "The Mousetrap", Cromartie kidnapped Charley Dixon's wife, Michelle, to lure Sarah out so it could hunt John without interference. After Sarah, Charley and Derek Reese arrived, it set off a bomb where Michelle Dixon was held. Most of them survived, but Michelle was fatally wounded by the bomb's shrapnel and died from severe blood loss. Cromartie mimicked Sarah's voice in a phone call to John to bring him out into public, but John spotted Cromartie stalking him and runs to escape. Cromartie dove into the ocean after John, and sank to the bottom, nearly dragging John down with it, though John escaped by a narrow margin. However, Cromartie was later shown walking out of the ocean, which indicates that Terminators are capable of surviving under water. It later appeared at Michelle Dixon's funeral, probably waiting to find John Connor. In the episode "Brothers of Nablus," during its continuous search for the Connors, Cromartie saved Ellison from a double sent by Skynet. Despite the cyborg's mission, it is shown that Cromartie disagrees with Skynet's belief that Ellison is useless since leaving the FBI and holds that Ellison can still lead it to the Connors even though the former agent is no longer under the employ of the FBI. These events lead Ellison to intensify his obsession to bring the cyborg down as the former agent is unwilling to be his pawn. This event is also a reminder of how terminators are loyal to their mission, not their creator. After pursuing John and Riley to Mexico with Sarah as a hostage, Cromartie was led into a trap by Ellison and gunned down by Sarah, Derek, Cameron, and ultimately John. 
The team buried the endoskeleton outside a Mexican church, promising Ellison to return with materials to destroy it permanently, and Sarah destroyed its CPU beyond repair with the butt of an MP5, apparently "killing" it once and for all. The title of this episode, "Mr. Ferguson Is Ill Today," is a reference to Cromartie's initial appearance in the pilot episode, when it masqueraded as John Connor's substitute teacher. In the episode "Complications", Ellison exhumes Cromartie's endoskeleton at the request of the T-1001 posing as (the long deceased) Catherine Weaver. In the following episode, "Strange Things Happen At The One Two Point", Weaver reveals to Ellison that her Babylon project team has repaired Cromartie's body and connected it to the sentient computer mind that the Babylon team developed from Andy Goode's Turk chess computer, the starting point of the artificial intelligence believed destined to become Skynet. Operating Cromartie's former head as its avatar, the Babylon AI politely greets a shocked Ellison and identifies itself as "John Henry", a name recently given to it by the late Dr. Boyd Sherman. The T-888 endoskeleton, shared by Cromartie, "Vick" and possibly also "Carter", is very similar to the classic T-800/850 model, with some minor differences. The endoskeleton structure contains some upgrades, including additional armor plating to protect the spinal column and chest, as well as bladed surfaces on the inner thighs, allowing a (presumably fleshless) T-888 to kill a human if it can get them in a headlock between its legs. The endoskeleton itself is smaller than the 800/850, allowing for a wider variety of disguises. If headless, a T-888's head can guide the body to itself, even over long distances. Apparently, this is what makes it possible for the headless body of a T-888 to navigate itself successfully around obstacles and through the city, evade detection, and attack or kill those who get in its way, even though there is no visible evidence of audiovisual sensors anywhere other than the head. Additionally, the silicon CPU chip is directly attached to the shock-damping assembly, whereas on the T-800 this was a separate piece which required removal before the CPU chip itself could be accessed. A T-888's selection of voice synthesis is not a function of its CPU chip, as demonstrated at the end of "Strange Things Happen at the One Two Point", when the John Henry computer spoke through Cromartie's former head in the voice of the late George Laszlo, despite Cromartie's CPU having been destroyed by Sarah Connor at the end of "Mr. Ferguson Is Ill Today" and the voice having been adopted by the Terminator only after taking Laszlo's appearance. T-888 flesh can retain its natural appearance for decades, as demonstrated by Myron Stark, who encased himself in a wall from May 1927 until an unspecified date (between December 1, 2008 and late 2010) in the episode "Self-Made Man". Like Cameron, Cromartie has been able to defeat and outsmart other Terminators, strongly suggesting that lifetime and experience make Terminators more effective. According to the Sarah Connor Chronicles: Behind the Scenes featurette, the T-888 possesses two additional backup CPUs with the same neural net architecture.

D
Dana
Dana, portrayed by Michelle Arthur, is Sarah Connor's roommate in her dream-like experience in a sleep clinic in the episode "Some Must Watch, While Some Must Sleep".
She speaks with an English accent and she's also addicted to nicotine as it was seen when Sarah catches her smoking when they first meet. She admits having an addiction but doesn't really regret it. Dana also mentions having a soft spot for young men that is seen when she flirtatiously greets John Connor during visiting hours. She tells Sarah that in her dreams she's burning, a condition which Sarah associates with her smoking habit. She's last seen when her portion of the room burns but it's implied by the nurse that she pulled through the incident. Riley Dawson Riley Dawson, portrayed by Leven Rambin (introduced in "Automatic for the People"), is John's new love interest that he meets at school, much to the consternation of Sarah. John does not reveal the story of his life to her, but as they get closer, he realizes he is endangering her life. Unknown to John, a Human Resistance fighter, Jesse Flores, has brought Riley back from the future to prevent John from getting too close to Cameron, and to get close to John. She appears to develop genuine romantic feelings for John. Jesse later kills Riley after a struggle. Charley Dixon Charley Dixon, portrayed by Dean Winters (introduced in the pilot episode), is Sarah Connor's fiancé in 1999 before she leaves him, fearing discovery of her true identity and thus her son John's death. After an explosion at the Security Trust Bank of Los Angeles in which Sarah and John are assumed to be killed, Charley comes to Los Angeles to see the rubble for himself. There, he settles, meets, and eventually marries Michelle Dixon. Eight years later, Charley immediately recognizes Sarah and John from the television news reports of their nude appearance in the middle of Interstate Highway 105 (the bank explosion was a time displacement field that transported the Connors, Cameron, and the flaming head of a T-888 forward in time). After helping to save the life of John's uncle Derek Reese, Charley learns of John's father's identity, Sarah's past, and the impending future of Skynet. Cromartie, the T-888 pursuing the Connors, kidnaps Charley's wife to lure Sarah, intercept her telephone conversation with John, and prevent Sarah's interference in Cromartie's hunt for John. After Sarah arrives at Michelle's location with Derek and Charley, and Sarah talks to John on the phone, Cromartie detonates explosives at the base of a mobile communications antenna tower, rendering it useless and nearly killing the four humans in the adjoining structure onto which it falls. Sarah, Derek and Charley receive only superficial wounds, but Michelle is severely injured by the flying debris, leaving Charley a widower when she succumbs to her wounds later in the episode. In the episode, "Self Made Man", dialogue from John reveals that Charley left Los Angeles following his wife's funeral. In the episode "To The Lighthouse", it is revealed that he relocated to a lighthouse on an island. When the island is invaded (possibly by Kaliba goons), however, Charley is killed while trying to fight them off with a gun. Michelle Dixon Michelle Dixon, portrayed by Sonya Walger (introduced in the pilot episode), is the wife of Charley, Sarah's fiancé in 1999. In the season two episode "Automatic for the People", she is told about Terminators by James Ellison. She is kidnapped, bound, and gagged by Cromartie and dies from injuries in the next episode, "The Mousetrap". 
Dietz Dietz, portrayed by Theo Rossi (introduced in "Today Is the Day, Part 1"), is a lieutenant under the command of Jesse Flores in USS Jimmy Carter. He was assigned with the mission to acquire a special package stored on an oil platform near Indonesia from a group of rather early model Terminators. He's visibly distraught by the encounter with a "Rubberskin" and begins to question the Human Resistance's co-operation with the machines. In "The Last Voyage of the Jimmy Carter" he breaks into the cargo bay to see what the package contains which turns out to be a frozen T-1001. During his argument with Jesse about his intrusion, the cryogenic casing thaws and the T-1001 gets loose, killing Officer Goodnow and assumes her shape. Afterwards Dietz becomes highly paranoid and clashes with Jesse. He harasses Jesse and gets struck down by her, starting a fight. As he stands over Jesse to possibly murder her, he's grabbed by Queeg, who slams him to a nearby wall, killing him on the spot. Terissa Dyson Terissa Dyson is portrayed by Charlayne Woodard (introduced in the pilot episode). She is the widow of Dr. Miles Dyson, the original designer of SkyNet. Charlayne Woodard takes over the role from S. Epatha Merkerson, who played "Tarissa Dyson" (the character's name is spelled "Tarissa" in Terminator 2: Judgment Day) in Terminator 2: Judgment Day. Sarah visits her twice, both times to push for information on the potential continuation of her husband's work. During their final encounter, she reluctantly offers up the much needed information, but is dismayed at the fact more people will die in the struggle. E James Ellison James Ellison (Richard T. Jones, main cast, introduced in the pilot episode) is an FBI Special Agent pursuing Sarah Connor. At first puzzled by what he initially thinks is Sarah's outlandish story, he later collects inexplicable evidence of the Terminators (including the body of Cromartie) and gradually realizes the truth. Jones describes his character as a "man of faith" and likens him to that of Tommy Lee Jones in The Fugitive. Jones was allowed to improvise a few lines to provide "a little bit of comic relief" to the show. In the second season, Ellison pursues employment with ZeiraCorp, unknowingly allying himself with the T-1001 posing as (the long deceased) Catherine Weaver. Lila Ellison Special Agent Lila Ellison, portrayed by Fay Wolf (introduced in "Allison from Palmdale"), is the colleague and ex-wife of James Ellison. She aborted her pregnancy without her first husband's knowledge, which led to their eventual divorce, as James Ellison couldn't cope with the knowledge that Lila sacrificed their unborn child for the sake of her career. She is now married to Paul. Eric Eric, portrayed by Billy Lush (introduced in "Self Made Man"), is a graduate student who works nights in a library. Eric allows Cameron off-hours access to the library. In appreciation, she brings him his favorite doughnuts (glazed, rainbow sprinkled and cinnamon twisted) at each visit. Cameron, who secretly studies literature and arcane world history, identifies him as her only friend. Dependent on a wheelchair, he is, like her, an amalgamation of biological material and machinery: organic tissues surrounded by a mechanized metal exoskeleton, in contrast to her organic tissues surrounding a mechanized metal endoskeleton. They are each somewhat socially isolated, and aware of their respective malfunctioning programming (his malignant chromosomes and her damaged CPU). 
Cameron reads Othello, the Moor of Venice at Eric's request. Cameron confides in Eric potentially compromising information for no tactical gain. Although unaware that Cameron is not human, or that her "brother" will save humanity from machines, Eric is entrusted with the knowledge that Cameron carries a concealed sidearm and has used it to protect her brother from people who wish to harm him. Using telephone directories as a backstop, she allows Eric to fire her 9 mm Glock pistol and gives him the still-hot bullet as a keepsake. Before Cameron traveled with John in "Complications" to destroy Cromartie's endoskeleton, she informed Eric that the two were going to Mexico to see a friend. Cameron has also confided in him her ill-ease concerning the "crazy blonde" whom John is dating. She does not pretend to struggle with his weight when she carries him upstairs to the film vault, nor does she conceal her ability to read a microfiche with her naked 'eye'. She tells Eric of her ability to calculate the date by the seemingly imperceptible movement of stars relative to each other, and reveals her superhuman diagnostic capabilities. Eric admits his previous battle with bone cancer but claims to be in remission. Cameron, however, diagnoses his Ewing's sarcoma, identifying a small secondary tumor in his arm, a possible tumor in his lungs, an eight percent decrease in his body weight over the preceding fortnight, and reduced muscle strength. Due to Cameron's lack of social skills and tact when revealing this to him, Eric becomes upset with her and tells her to leave. The next time Cameron visits the library at night, Eric is not there, but Cameron is instead accepted into the building by his apparent replacement. F Anne Fields Anne Fields (introduced in "Alpine Fields"), portrayed by Rebecca Creskoff, is the wife of David Fields, and the mother of Lauren Fields and Sydney Fields. She is a homemaker who begins an affair with neighbour Roger Shaffer after David falls and injures his back. Her relationship with Roger twice puts her family at risk from an unnamed T-888. When Roger approaches the Fields' home for a liaison, believing David and Lauren to be away camping, Anne destroys Sarah Connor's electrical boobytrap, leaving them effectively defenseless. Six months later, she telephones Roger while the Fields are in hiding. The call is intercepted by the T-888 who promptly arrives at their motel. Anne is severely wounded by the T-888, but David's defense provides Anne and Lauren enough time to escape and call Sarah for help. Lauren and Derek Reese tend to Anne's wounds in a vacant maintenance garage and deliver Sydney. Derek informed Anne and Lauren of the motive of the T-888 - her unborn daughter's future importance in combating a plague caused by Skynet. He also revealed the name the family would've given to the baby, Sydney, if their lives hadn't been interfered with by the T-888. Anne dies moments after giving birth. Lauren leaves her necklace and Saint Jude medal on Anne's body, lest they serve as a Terminator's sighting target on herself. David Fields David Fields (introduced in "Alpine Fields"), portrayed by Carlos Jacott, is the husband of Anne Fields, the father of Lauren Fields, and the nominal (but not biological) father of Sydney Fields whom he predeceases. He takes prescription narcotics in response to his back injury. David and Lauren enjoy camping and building birdhouses. Anne cuckolds him regularly with their next-property neighbour, Roger Shaffer. 
Sarah Connor and Cameron identify his family as the target named by the time-travelling Human Resistance soldier in blood on their basement window. David is a banker who conducts illegal banking transactions for a technology company, Simdyne Cybernetics Corporation, and is therefore assumed by Sarah to be the target of the unnamed T-888 hunting them. Retrieving his revolver from the waistband of the inoperative Cameron, David ventures out of the family's home toward the approaching T-888 and offers himself in order to spare his family. The T-888, however identifies David as a harmless non-target and tosses him aside. Six months later, David sacrifices himself again to protect his family. When the T-888 finds them in a motel, David empties a Mossberg shotgun into the T-888 before manually attacking it with a table leg and curtain rod. David dies, but he delays the T-888 long enough for the Fields to escape and for Sarah to arrive and destroy the T-888. Lauren Fields Lauren Fields (introduced in "Alpine Fields"), portrayed by Samantha Krutzfeldt, is the daughter of David Fields and Anne Fields. She is the half-sister and adoptive mother of Sydney Fields. She meets Sarah Connor and Cameron six months before Sydney's birth, when Sarah and Cameron invade her family's home and protect them from an unnamed T-888 in the present. Lauren enjoys camping and building birdhouses with her father, and is aware that her father is regularly cuckolded by her mother with neighbour Roger Shaffer. Lauren informs Sarah of her banker father's dealings with technology company Simdyne Cybernetics. The two thus incorrectly theorise that he is a target. Six months later, the T-888 finds the family again, but Lauren and her pregnant mother are able to escape and telephone Sarah who sends Derek Reese. Derek quickly recognises Lauren's aptitude and emotional preparedness for soldiering, both in her general demeanor and her confident handling of a large-caliber sidearm. Lauren and Derek deliver baby Sydney, and Derek invites the two orphaned Fields girls to live with the Connors. While Derek telephones Sarah, however, Lauren disappears with Sydney, leaving behind her necklace and Saint Jude medal, lest they be used as a sighting target. In 2027, Lauren is among those at Serrano Point, treating the infected Derek and Jesse with antibodies produced by Sydney. This implies that it was the unborn Sydney Fields/Anne who was the true target of the T-888 in the past. Sydney Fields Sydney Fields (introduced in "Alpine Fields"), portrayed by one or more uncredited infant(s) in the present and Haley Hudson in 2027, is the issue of the adulterous liaison between Anne Fields and Roger Shaffer (her nominal father is Anne's husband, David Fields). Sydney is delivered in an apparently abandoned maintenance garage by Derek Reese and her half-sister Lauren in the present, while Anne lies dying of gunshot wounds from an unnamed T-888. After Anne dies in childbirth, Sydney is raised in secret by Lauren. In 2027, she survives Skynet's biological weapon attack on Eagle Rock Bunker and sends out a distress call. The call is independently responded to by both Major General Perry's command at the Serrano Point nuclear power plant, and the Australian Human Resistance force who came to Los Angeles aboard for supplies. The former send Derek Reese (before Reese travels back and delivers her), and the latter Jesse, to rescue her. 
Both become infected in spite of their protective masks but are saved, along with countless others, by antibodies produced by Sydney's immune body. Charles Fischer Charles Fischer (introduced as both a twenty-something and a middle-aged character in "Complications"), portrayed by Adam Busch (twenty-something) and Richard Schiff (middle-aged), was an engineer who was convicted of a crime and survived Judgment Day because of being incarcerated in the fortified Pelican Bay State Prison. After being "rescued" by the machines, he worked for Skynet, training Terminators how to torture humans for information. Among those upon whom he demonstrated was an alternate version of Derek Reese. He was subsequently sent back in time by Skynet on a mission to create a backdoor in a vital defense database at the firm where he worked before prison (and thus had access through his retinal scan and fingerprint). After the old Charles Fischer installs the backdoor, both he (posing as watch repairman Paul Stewart) and his younger self are captured by Jesse who recognizes him from the future. Jesse and Derek violently interrogate the two Fischers together into confessing his future misdeeds, though he insists that his presence in the present is a reward and not a mission. Jesse kills the older Charles Fischer (just as Derek is about to shoot the younger Fischer) and they let his younger self go. Derek has no memory of the torture and theorizes that he and Jesse came back in time from two different futures. The younger Charles Fischer is arrested hours later by agents of the United States Department of Homeland Security for the cyberattack. Jesse Flores Jesse Flores (introduced in "The Tower Is Tall But the Fall Is Short"), portrayed by Stephanie Jacobsen, is an Australian Human Resistance sailor with the rank of commander and Derek Reese's love interest. In the original future time line and perhaps the current future time line, as executive officer aboard the nuclear submarine , captained by a reprogrammed Terminator, she sails to Los Angeles for supplies. While in Los Angeles in 2027, she answers a distress signal from 20-year-old Sydney Fields at Eagle Rock Bunker. Before entering the bunker, she halts Derek Reese's suicide attempt, telling him, "Your fly is open". Inside of the infected bunker, the protective masks she and Derek wear are useless against the pathogen. Her symptoms strike sooner and more severely than Derek's, but both recover in a hospital where they are treated with antibodies produced by Sydney's immune body. Jesse and Derek quickly begin a brief but passionate relationship, as Derek is soon sent to the past with three other Human Resistance soldiers on a mission to halt Skynet's construction. During their affair, Jesse was pregnant with Derek's child but miscarried during a submarine mission. Jesse wasn't aware that she was pregnant, and the knowledge of the loss of her unborn child led to Jesse to travel to the present as she blames Cameron for the miscarriage. In the episode, "Strange Things Happen At The One-Two Point", Jesse confesses to Derek that she didn't merely return from the future AWOL, to escape circumstances she could no longer bear, but rather she is on a mission to find and stop Cameron from adversely influencing young John. In the future from which Jesse comes, John has withdrawn from humans and speaks only with Cameron. This Jesse is from a future time line slightly different from Derek's. 
In the present, she resides in a hotel, jogs, has a weakness for food-court Chinese food, and photographically reconnoiters the Connor family. She recognizes human traitor Charles Fischer from her future and promptly takes him captive. While she and Derek begin to interrogate him, Jesse similarly captures Fischer's twenty-year younger self to interrogate as well. She executes the older Fischer, though the pair release Fischer's younger self. Derek reveals to her that he is John's uncle, making her the fourth person to know. It is revealed to the audience, but not Derek, that Riley Dawson is working for Jesse, with the objective of getting close to John Connor and getting information. In "Earthlings Welcome Here", Jesse is revealed to have recruited Riley from the future. Initially she treats Riley well, but later displays a very callous attitude toward Riley. Eventually, in the episode "Ourselves Alone", Riley turns on Jesse, believing that she is deliberately pushing Riley to provoke Cameron into killing her. In the subsequent struggle, Jesse shoots and kills Riley. When John Connor finds out that she killed Riley, he confronts Jesse, then lets her go. Then Jesse meets Derek in a parking lot, and Derek tells her that John Connor said to let her go. But then he adds that he isn't John Connor, and he attempts to shoot her. It is not shown if she gets away or is killed, and she does not appear again in the series, making her fate unknown. In the episodes "Today Is The Day" and "The Last Voyage of the Jimmy Carter", Jesse is seen in the future commanding the submarine USS Jimmy Carter along with a re-programmed T-888 named Queeg. In contrast to her obvious disdain for reprogrammed terminators in the present, Jesse seems to get along with Queeg, trusting him with delicate maneuvers of the ship in battle, and later defending him when he alters course for a secret mission to pick up a package in well defended Skynet waters. However, the rest of the crew do not trust him, and a mutinous riot breaks out when some of the crew defy orders and open the package, and discover they have brought aboard a T-1001 Terminator, which promptly kills a crew member and escapes. Despite this, Queeg insists they ignore it and continue as planned, and when confronted, will not explain why. When a paranoid Dietz accuses another crew member of being a Terminator, Jesse breaks up the fight only to find herself the target instead. Dietz and several others begin to brutally beat her, until Queeg intervenes, slamming the ringleader Dietz into a wall, killing him. Shocked, she confronts Queeg and orders him to surrender his chip. When Queeg does not comply, Jesse blasts his chip with her plasma rifle. She then smashes the control console and orders the crew to abandon ship. On her way to the escape pod, she encounters T-1001 who gives her a message: "Tell John Connor that the answer is no." Jesse is then questioned by Cameron in Serrano Point and -after some protest first about Cameron's proximity to John Connor- she passes the message on. Upon seeing Cameron slightly distraught, she demands to know what was the question, learning that it was "Will you join us?" supposedly John Connor's attempt to recruit T-1001. On a side note, Jesse told Derek Reese that the fresh scar on her waist was caused by a rampant re-programmed T-888 who turned on her squad but details of that event were never unfolded in the show. 
G Andy Goode Andrew "Andy" David Goode (introduced in "The Turk"), portrayed by Brendan Hines, was a young college dropout from Caltech who interned with Cyberdyne Systems, and worked as an assistant to Miles Dyson. His experience at Cyberdyne helped him create the Turk, an advanced artificial intelligence chess playing program. Sarah destroyed it by setting fire to his home, fearing the Turk would lead to the creation of Skynet, although Goode would later rebuild it. He did however note that the "New" Turk had significant "personality" differences than the "Original" Turk. Andy had a romantic interest with Sarah. In the episode "Queen's Gambit", Andy was killed after a chess tournament by Kyle Reese's brother, Derek Reese, who then claimed someone else stole Goode's artificial intelligence prototype. Goode is shown in the episode "Dungeons and Dragons", in a future world during Derek's flashback. In it Goode appears older and had renamed himself as William Wisher, a Human Resistance soldier and a friend of the Reese brothers. After being captured by Skynet's Machine Network, he reveals to Derek Reese that he is one of the ten or fifteen people responsible for creating Skynet. Before Derek made his journey through time, Goode gives a nod to Derek implying that he knows what Derek's mission must be. Near the end of the episode, it is explained that Derek kills Andy when he travels back through time to the present timeline of the previous episode, extinguishing Andy as part of the group who would create Skynet. His family later cremated his body during his funeral. In the episode "Born To Run" it is revealed that the New Turk — now evolved into the AI dubbed John Henry — is not destined to become Skynet, but rather oppose it. The episode "To The Lighthouse" reveals that another AI in the present, which shares an identical code as the Turk in the present, attempted to compromise John Henry at ZeiraCorp. This AI — referred to as John Henry's "brother" - is connected to Kaliba, a company which has apparently been run by Skynet agents. The real-life William Wisher Jr. was a screenwriting partner of James Cameron who helped write the first two Terminator films, who also made cameo appearances in both films. Officer Goodnow Goodnow (introduced in "Today Is the Day, Part 1"), portrayed by Erin Fleming, is an officer (no first name mentioned) under command of Jesse Flores aboard USS Jimmy Carter. She is a squad mate of Lieutenant Dietz during the acquisition of a special package in Indonesian zone. Afterwards, she, along with the rest of the same squad, breaks into the cargo bay to find out the contents of the package. During the stand-off between Jesse and Dietz in the cargo hold, the T-1001 in the cryogenic case thaws and Goodnow pulls her rifle on her only to be stabbed and killed merely seconds later. She's later seen when the crew abandons the submarine as the T-1001 assumes her shape to deliver a message to Jesse. Carl Greenway Carl Greenway (introduced in "Automatic for the People"), portrayed by Paul Schulze, is the safety officer of Serrano Point nuclear power plant. His name is among those written on the Connor's basement wall by a future Human Resistance soldier. He is ostracized by the plant's workers, not because his prior cancer is a bad omen, but because his negative inspection reports threaten the plant's operations license and thus their jobs. He is killed and replaced by a look-alike T-888 Terminator which sabotages the plant. 
The damage is mitigated by Cameron Phillips who successfully fights the Greenway Terminator and hides its non-functional remains in a 55-gallon drum among the nuclear waste. The sabotage, while less severe than intended, is ultimately successful, as it causes the plant's owners to contract with Mr. Bradbury (a T-1001 Terminator) of Automite Systems to install automated controls in all seven of their nuclear power plants. H John Henry John Henry, portrayed initially by computer equipment and Garret Dillahunt as of the end of "Strange Things Happen at the One Two Point", is a sentient computer built by the Babylon team at ZeiraCorp, run by the T-1001 posing as (the long deceased) Catherine Weaver. His initial hardware and software were the Turk chess computer built by Andy Goode, which "Weaver" gave to the Babylon team to improve on. He is named John Henry by his psychologist, Dr. Boyd Sherman, after the mythical steel driving John Henry of American folklore. John Henry is given complete control over the building's electrical service at Weaver's insistence, so that he can route electrical power to his servers as necessary to develop his mind. Input is provided electronically at first, and later through voice recognition. Initially, he has no textual output, and can express himself only with visual imagery. Once connected to Cromartie's T-888 body, however, he speaks in the voice of the late George Laszlo. John Henry can see through the lab's security cameras. Early in its development, the computer that became John Henry demonstrates a childlike sense of humor, the manifestation of which baffled its programmers and Weaver. Weaver shows the output to Dr. Sherman who was treating young Savannah Weaver for insolence and incontinence. He immediately recognizes the images as a pun told to him by another child whom he was treating, explaining that a mathematics textbook is sad because it has so many 'problems'. Impressed by Dr. Sherman's ability to communicate with the computer and his skill at treating Savannah, Catherine Weaver/T-1001 convinces him to work as a part-time consultant on the Babylon project. In his brief time working with John Henry, Dr. Sherman is not able to instill ethics in the computer. John Henry is aware that Dr. Sherman is suffering when John Henry routes the building's power away from the security and climate control systems, and causes a trapped Dr. Sherman to die by hyperthermia, but does not care. John Henry does not understand that death is permanent for humans. He is aware that Dr. Sherman is dead, yet summons emergency medical personnel to revive him. James Ellison who, like Weaver, tends to refer to Biblical scripture, suggests to Weaver that, as John Henry is a computer and can be given commands, she should start with "the first ten". With Ellison's mission of capturing a Terminator for Weaver complete, she sets him to the task of replacing Dr. Sherman as John Henry's tutor/counselor at the end of "Strange Things Happen at the One Two Point". Weaver gives Ellison a remote control of the endoskeleton for his defense in case the cyborg went rogue, implying Weaver installed fail-safes in case the artificial intelligence program turned against her. As John Henry's AI progresses, he quickly unravels many mysteries. John Henry discovers exactly who Cromartie was, and who Ms. Weaver is and what her plot is in the episode, "The Good Wound". 
John Henry becomes more like Skynet each time he learns something, for example by painting monsters, playing "hide and seek" with Savannah Weaver, and learning from the T-1001 that humans will disappoint him. In the episode "To The Lighthouse", he malfunctions, nearly harming Savannah, but is shut down before doing so; it is then shown that he has been fatally compromised. After being reactivated, he displays Miles Dyson on the screen and reveals to the T-1001 that James Ellison was in charge of Sarah Connor's case. In "Adam Raised A Cain", John Henry contemplates his shut-down period, which Mr. Murch described as a seemingly eternal, slow, and agonizing death. John Henry refers to his attacker from the previous episode as his "brother" because the two of them shared similar data. He relates this bond to the story of Cain and Abel, the Biblical story of two brothers, one of whom murders the other out of jealousy and is punished to wander alone. When he is confused about which brother he is supposed to be, Weaver suggests that he might as well be God in that story, pointing towards a greater cause. In the same episode, he and Savannah become close friends, and he is concerned for her safety after the child's encounters with a T-888 and the Connors. In the season finale, after the Connors' confrontation with the T-1001/Catherine Weaver, Weaver admits that she built the John Henry AI to fight against Skynet. It is also shown that John Henry is no longer connected to the server farm in the basement, gaining mobility via what seems to be Cameron's chip. Nurse Hobson Nurse Hobson (introduced in "Some Must Watch, While Some Must Sleep"), portrayed by Julie Ann Emery, is the nurse in charge of the sleep clinic where Sarah goes to be treated for her insomnia. She puts on a friendly exterior at first, but it soon becomes apparent that she is extremely serious about the treatment process. During the episode Sarah Connor observes odd behavior, such as sedative injections being administered to an already unconscious patient. After a fire incident in Sarah's room, John Connor attempts to break his mother out, but she convinces him to investigate the facility. As it turns out, the sleep clinic is a cover operation for human brain scans. As John Connor deletes his mother's data from the database, Hobson returns to the basement to confront Sarah Connor. After a brief struggle, John Connor comes out from hiding and shoots Hobson. As Sarah Connor examines her closely, Hobson wakes up (suggesting that she is a Terminator) and kills the Connors. However, this event is later revealed to be a dream sequence. J Jody Jody (introduced in "Allison from Palmdale"), portrayed by Leah Pipes, is a young woman in her late teens or early twenties. Having failed in her studies at the California Institute of the Arts and been rejected by her parents, she becomes a prostitute and thief, living for a time in a halfway house on Yucca Street in the Hollywood district of Los Angeles. Jody is a pathological liar who pretends to befriend the malfunctioning and confused Cameron when she sees Cameron's substantial wad of currency. She introduces Cameron to the halfway house and to foosball, a game in which Cameron displays genuine enjoyment. Her lies to Cameron concerning her background are numerous and contradictory. She encourages Cameron to rob a house with her in order to finance their relocation to Portland, Oregon. 
There, Cameron deduces the truth: that the home is that of Jody's parents, and that Jody intended Cameron to be caught by the police after tripping the silent alarm. Demonstrating an emotional reaction, Cameron retaliates by choking Jody to unconsciousness. Jody meets Cromartie in "Brothers of Nablus" when he comes to the halfway house purportedly looking for his niece and presenting a photograph of Cameron. Recognizing Cromartie's ruse immediately, Jody assumes him to be first a policeman and then an angry stalker, and is quite eager to help him find both Cameron and her "brother" John Baum (Cromartie's lie regarding John Connor) to seek revenge. Cromartie eventually tires of her annoying behavior, and literally throws her out of his car before driving off and leaving her on the streets. K Detective Kaplan Detective Kaplan (introduced in "Brothers of Nablus"), portrayed by Scott Vance, interrogates James Ellison, believing him to be the murderer of a man whose clothes he then stole. The T-1001 posing as (the long deceased) Catherine Weaver then assumes Kaplan's appearance and re-interviews an eyewitness who admits to seeing Ellison emerge naked from a blue-purple energy bubble that left a "dent" in the street, and snap the victim's neck like a toothpick and steal his clothes. The actions had been committed by a T-888 whose flesh covering was modeled after Ellison. With the witness thus revealed as a "nutcase", Ellison is released. Weaver presumably killed the real Kaplan before assuming his identity. L George Laszlo George Laszlo (introduced in "Heavy Metal"), portrayed by Garret Dillahunt, is an actor and patient of plastic surgeon Dr. David Lyman. He stars in the 2005 direct-to-video feature Beast Wizard 7 in which his costume and sword are a clear and obvious allusion to then-future Terminator Arnold Schwarzenegger in 1982's Conan the Barbarian. James Ellison watches the film at his home in the episode, "The Mousetrap". Kacy Corbin's unnamed caterer friend likes him and notes that Laszlo eats with the crew. As his structure is a 92% match for a T-888 Terminator, Cromartie instructs Dr. Lyman to reshape his new flesh to match Laszlo's. Laszlo is then killed by Cromartie who takes over his identity and apartment in Reseda as a base of operations. After learning that Cromartie is impersonating an FBI agent, Ellison leads an HRT assault on Laszlo's/Cromartie's apartment where all but Ellison and Cromartie perish. Cromartie leaves Laszlo's body, and Ellison finds himself essentially forced to blame the mass murder of twenty agents on Laszlo. After Cromartie's CPU is extracted and destroyed, his former T-888 body (still appearing as Laszlo) is connected to the AI dubbed John Henry, who speaks in Laszlo's voice (actor Garret Dillahunt's natural voice). When Savannah Weaver confides in John Connor that her friend, John Henry, lives in the basement of her mommy's office because he has a cord in the back of his head, Connor shows her a photograph of Laszlo on the internet and asks if she recognizes him as John Henry. From her confirmation, John determines that ZeiraCorp is building Terminators or "something worse". M Morris Morris (introduced in "Queen's Gambit"), portrayed by Luis Chavez, is a classmate of John Connor and Cameron Phillips from Campo de Cahuenga High School in LA. He is unpopular with some of his Latino peers. He is attracted to Cameron. In the first-season finale, he secures a prom date with Cameron after John prompts her. 
Although a recurring character in Season 1, he did not feature in Season 2. Matt Murch Matt Murch (introduced in "Samson and Delilah"), portrayed by Shane Edelman, is the lead engineer and programmer in Project Babylon, which evolved into the AI dubbed John Henry. When asked if he knows the myth of Babylon, he admits to James Ellison that he is not much interested in the Bible. Throughout the season he acts as the consultant on John Henry for the T-1001 posing as (the long deceased) Catherine Weaver. He seems rather intimidated by Catherine Weaver, but this is more likely due to her strict, no-nonsense behavior than to any possible knowledge that she is a Terminator. Murch also provides John Henry with recreational activities to develop motor functions or imaginative capacity, such as robot action figures (LEGO Bionicle sets), monster models (along with paint, seemingly Warhammer figures), and fantasy role-playing sets. As John Henry points out in the episode "Last Voyage of the Jimmy Carter", Catherine Weaver holds an undated secret file on Murch which states that he resigned and relocated to a different city, implying that Weaver considered killing him if he became a liability. P Major General Perry Major General Perry (introduced in "Alpine Fields"), portrayed by Peter Mensah, leads the Human Resistance force based at Serrano Point nuclear power plant and is Derek Reese's commanding general in 2027. Perry dispatches Reese on a dangerous mission to Eagle Rock Bunker to rescue Sydney Fields and bring her back, so that their scientists can isolate and reproduce her immunity to Skynet's biological weapon. In "Dungeons and Dragons", Perry sends Reese and his team back in time to 2007 to capture and destroy Andy Goode's Turk chess computer (which evolves into the AI dubbed John Henry) and otherwise prevent Skynet from being created. Perry is acquainted with Cameron; the two interact in "Dungeons and Dragons". In the film series, Derek's brother, Kyle Reese, mentions to Sarah Connor in The Terminator that he served in the 132nd under a Justin Perry from 2021 to 2027 before transferring to Tech-Com as a sergeant under John Connor himself; General Perry's forename is revealed in neither the series dialogue nor the acting credits. Justin Perry is a playable character in the video game The Terminator: Dawn of Fate. A senior officer named Perry (Afemo Omilami), in John Connor's army, eventually makes a film appearance in Terminator Genisys. Q Queeg Queeg (introduced in "Today Is the Day, Part 1"), portrayed by Chad L. Coleman, is a re-programmed T-888 who commands the submarine USS Jimmy Carter in John Connor's Human Resistance. His officers and crew are all human, including his executive officer, Jesse Flores. In 2027, Queeg applies a deceptive tactic against a Skynet Kraken (supposedly a very powerful underwater warship), first locking his torpedo on that of the Kraken and then driving the Jimmy Carter to within 27 centimeters of crush depth, leading the Kraken to assume that it destroyed the Carter, judging by the impact of the colliding torpedoes and by its failure to track the submarine's movement at such a depth. In "The Last Voyage of the Jimmy Carter", he quells a riot against his and Flores' authority by summarily executing Lieutenant Dietz for mutiny. He is thereafter confronted by Flores, who orders him to surrender his chip on suspicion that his programming has been compromised. 
After refusing to comply, and explaining that his unusual actions are in accordance with their secret mission orders, he is terminated by Flores. In keeping with the series' extensive literary allusions, his name is presumably a reference to Lieutenant Commander Philip Francis Queeg, captain of USS Caine in Herman Wouk's The Caine Mutiny. R Derek Reese Derek Reese (introduced in the "Queen's Gambit"), portrayed by Brian Austin Green, is a Human Resistance soldier, a First Lieutenant whose operational specialty is Tech comm, sent to the past by the future John Connor. He is the older brother of Kyle Reese, John Connor's father, and is thus the paternal uncle of John. He knows Cameron in the future, but still does not trust her in the past and becomes paranoid every time she's around, but throughout the series he begins to have a love–hate relationship with her. He is recurring in the first season but becomes a regular in the second season. Derek has an intimate past with Jesse Flores, a woman who arrives from the future. He is killed by a Terminator while attempting to save Savannah Weaver. Another Derek from an alternate timeline is introduced in the series finale. Kyle Reese As seen in The Terminator, Terminator 2: Judgment Day, Terminator Salvation and Terminator Genisys, Kyle Reese is the father of John Connor and a member of the Human Resistance. In the television show, Kyle Reese first appears in "Dungeons & Dragons", played by Jonathan Jackson, detailing the last days of what happened when he and his brother Derek are separated during a recon mission before Kyle made his trip through time to protect Sarah Connor (in The Terminator); further details are in the episode "Goodbye To All That" of the second season, during Derek's recollection of the future war. An eight-year-old version of Kyle Reese, portrayed by Skyler Gisondo, briefly appears in the episode "What He Beheld". Derek Reese takes John Connor out for ice cream on his 16th birthday. They find a younger Kyle and Derek playing baseball at the park. In the episode "Goodbye To All That", during one of Derek's recollections of the future war, Kyle (when he was a Corporal) and a small group of his unit attempted to save forty prisoners, including General John Connor, from Skynet's forces. However, he became trapped and one of the Human Resistance's senior officers, Martin Bedell, sacrifices his life to save him and free Skynet's prisoners. In the episode "The Demon Hand", it's hinted at by Sarah to Derek Reese that Kyle's remains have been cremated and scattered "in the grass". A mental image of Kyle Reese appeared to a wounded Sarah in the episode "The Good Wound". Throughout the episode, her image of Kyle guides her in finding medical treatment for herself along with getting help from Derek Reese, Kyle's brother. In the season two finale "Born to Run", John is led by the Catherine Weaver/T-1001 to an alternate post-Judgment Day timeline where John Connor has never led the Human Resistance due to the displacement from his present. There, he encounters his father for the second time. In an alternate timeline as shown in the film Terminator: Dark Fate where Skynet was erased from existence after Cyberdyne's destruction, Reese no longer exists, as his parents met and conceived him after Judgment Day occurred in the main timeline. Rosie Rosie (introduced in "The Tower Is Tall But the Fall Is Short"), portrayed by contortionist-actress Bonnie Morgan, is a Terminator of unknown model. 
She kills the driver of the empty public bus in which her time displacement field arrives, and takes his clothes; she then kills Dr. Sherman's receptionist, taking her car and posing as her temporary replacement. While essentially similar to the T-888 Terminators previously depicted, Rosie's CPU protection is redesigned. Once accessed, her chip self-destructs. John determines the upgrade is a move to keep him from reprogramming them to serve him. Rosie and Cameron perform the first 'female' versus 'female' Terminator fight depicted. Their non-combat movements — relocating their shoulders, turning to face each other, wiping the hair from their faces, and reaching for Dr. Sherman's door handle, among other things — are noticeably synchronized, suggesting similar programming. Cameron defeats Rosie in hand-to-hand combat, twisting her body into a compact ball. With her chip self-destructed, Rosie's mission is unknown, though presumably it involves Dr. Sherman. The Connors theorize that she was either sent to protect the psychologist, or to kill him. S Enrique Salceda Enrique Salceda (introduced in "Gnothi Seauton"), portrayed by Tony Amendola, was an expert at forging identities and helped provide the Connors firearms during Terminator 2: Judgment Day, but retired from the business and passed it on to his nephew, Carlos. Tony Amendola took over the role from Castulo Guerra, who played Enrique Salceda in Terminator 2: Judgment Day. Enrique is killed by Cameron after Sarah suspects that Enrique is a traitor, which later proves to be true. Margos Sarkissian Margos Sarkissian (introduced in "What He Beheld"), portrayed by James Urbaniak, purchased Andy Goode's 'Turk' chess computer (which will later evolve into the AI dubbed John Henry) and pursued the Connors, in order to blackmail them out of $2 million. He was thought to have been killed by Derek Reese during a standoff, when in reality the man who was killed was not Sarkissian. As John and Sarah Connor discover this, a car bomb placed by Sarkissian explodes with Cameron unexpectedly inside (in the Season One ending cliff-hanger episode "What He Beheld"). In Season Two, it is revealed that Sarkissian was killed by John, but not before handing off the Turk to Mr. Walsh who sells it to the T-1001 posing as (the long deceased) Catherine Weaver. Sarkissian's bomb damages Cameron, reestablishing her mission to kill John. John's strangulation of Sarkissian is John's first kill. Despite it being in self-defense, it adversely affects John's psychology. False Sarkissian A man (seen in "What He Beheld"), portrayed by Craig Fairbrass, whom the Connors believe to be Margos Sarkissian, contacts Sarah and offers to sell her the Turk, but then later threatens to expose her to the FBI unless she pays him $2 million. Sarah, Derek, John and Cameron track him down, and he is ultimately killed by Derek in the confrontation that follows. Roger Shaffer Roger Shaffer (introduced in "Alpine Fields"), portrayed by Johnny Sneed, is the neighbour of the Fields. He is the illicit sexual partner of Anne Fields and the biological father of Sydney Fields. On the same night as an unnamed T-888 is hunting the Fields (and being hunted itself by Cameron), Roger visits Anne for an adulterous liaison under the assumption that David Fields and Lauren Fields were away camping. His approach causes Anne to destroy Sarah Connor's electrified boobytrap, leaving the family defenseless. 
Roger scoffs when told of the events then unfolding, opining that the "robot that looks like a dude" running around the woods is probably Sarah's methamphetamine-addicted boyfriend. Roger scurries away when the unnamed T-888 throws Cameron through the Fields' picture window. He returns after Sarah escorts Anne from the house, and finds Lauren hiding in the closet. Seeing him only from behind and unable to recognise him as a friend or foe, Cameron knocks Roger unconscious in front of Lauren. She apologizes to Lauren with a simple, "My mistake." Six months later, Anne telephones Roger while the Fields are in hiding at a motel. The call is intercepted by the T-888 who promptly arrives at the hotel to kill Anne. Boyd Sherman, Ph.D. Dr. Boyd Sherman (introduced in "The Tower Is Tall But the Fall Is Short"), portrayed by Dorian Harewood, is a family psychologist in Los Angeles. He previously specialized in adult trauma at a veterans' hospital in Livermore, California. Among the families he treats are the Weavers and the Connors (the latter known to him as the Baums). Sarah Connor brings her family to his care in order to figure out what his role in Skynet's future is, because his name is on the blood list left by a dying Human Resistance soldier on their basement wall. In addition, they plant an audio transmitter in his office and copy his encrypted patient records. He tentatively diagnoses the socially inept Cameron "Baum" as showing symptoms consistent with Asperger syndrome, and recognizes that John "Baum"'s emotional problems are the result of experiencing significant violence (most recently, his own killing of Sarkissian) despite Sarah's denials that there is any violence in John's life. As the family wonder whether he is listed on the wall because he must be protected or because he must be stopped, Cameron suggests that "maybe he helps John." John removes the listening device for his own privacy during a session; in doing so, he causes Cameron to enter the building to determine the malfunction, wherein she encounters Rosie. Sarah later seeks his aid to understand her dark, omen filled dreams, and to come to terms with John's adolescent withdrawal from her. Apparently unaware of Sherman's connection to the Connors/Baums, the T-1001 posing as (the long deceased) Catherine Weaver seeks psychological aid for Weaver's incontinent and disobedient young daughter, Savannah. Dr. Sherman was recommended by Weaver's assistant, Victoria whose son, Leo, was treated by him. Savannah quietly confides in Dr. Sherman that she wants her "old Mommy back", which Dr. Sherman interprets to mean that her mother's lack of affection was the result of grief following her husband's death. Impressed by his treatment of Savannah, the T-1001/Weaver shows him the confusing visual outputs of the Babylon/Turk AI computer. He immediately recognizes it as a graphic representation of child's riddle, explaining that mathematics textbooks are sad because they have so many 'problems'. The two determine that the computer is developing as a child's mind. He turns down her attempts to recruit him away from his practice to work at ZeiraCorp on the Babylon project, but accepts a compromise to be a part-time consultant. The latter conversation would have been intercepted by the Connors, had John not removed the listening device earlier. Dr. Sherman is found dead in the episode "Strange Things Happen at the One-Two Point", apparently in a purposeful move by the evolving computer AI, the 'mind' that Sherman named John Henry. 
John Henry redirected power from the cooling system and security system in the basement, whereupon Sherman became trapped and died of hyperthermia. Peter Silberman, Ph.D. As seen in The Terminator, Terminator 2: Judgment Day, Terminator 3: Rise of the Machines and Terminator: The Sarah Connor Chronicles, Peter Silberman, Ph.D., is a criminal psychologist with the state of California who sometimes did work with the Los Angeles Police Department. In the television show, Peter Silberman first appears in "The Demon Hand", portrayed by Bruce Davison, and maintains the continuity from Terminator 2: Judgment Day, being the Chief Psychologist who treated Sarah Connor while she was institutionalized at Pescadero State Hospital. Dr. Silberman later came to believe in Sarah's tale of apocalypse coming. Following Sarah Connor's escape, Doctor Silberman entered into retirement along with the majority of the Pescadero staff. He has become a recluse and has purchased land among the mountains. In his solitude he gardens and works on a book about his experiences as a psychologist. While being interviewed by FBI Agent James Ellison in reference to Sarah Connor, Doctor Silberman drugs and takes him hostage, as he believes him to be a new model Terminator Infiltrator sent to find Sarah Connor. After injuring Ellison in a series of tests to confirm him as a human, Ellison shares with him that he has brought the hand of a Terminator with him as evidence (which Silberman refers to as The Hand of God). To protect Sarah Connor, Silberman sets his home on fire with Ellison still inside and is going to leave with the artifact when Sarah Connor arrives. He apologizes for doubting her, just before she knocks him unconscious (to ensure his safety) and takes the hand. Following Sarah's departure, Agent Ellison wakes Silberman demanding the hand, with Silberman revealing to him that Sarah Connor took it. Ellison arrests the Doctor and he is incarcerated at the same psychiatric hospital he once ran: the very same cell that once held Sarah Connor. Greta Simpson Special Agent Greta Simpson (introduced in "The Turk"), portrayed by Catherine Dent, is the partner of Special Agent James Ellison. She doubts his crusade to find Sarah Connor will lead to anything. She is killed by Cromartie in the season one finale. Myron Stark Myron Stark (introduced in "Self Made Man"), portrayed by Todd Stashwick, is a T-888 who accidentally arrives in Los Angeles from the future on the night of December 31, 1920, due to a temporal error in the time displacement chamber. In addition to his arrival being ninety years premature, his time displacement field starts a fire in a speakeasy and kills forty-three people, including Will Chandler, the architect of Pico Tower in which Stark intends to kill Governor Mark Wyman on New Year's Eve, 2010. At the October 21, 1921, premier of The Sheik, Stark offers Will's father, Rupert Chandler twice the value of the land on which Pico Tower was to be built, but Chandler insists on keeping the land a memorial park. Impervious to bullets, Stark becomes a masked bank robber in order to finance a construction business and drive Rupert Chandler into ruin. Newsreels of the time depict him as an unusual land developer: he frequently labors hard alongside his employees, pays his employees more than his competitors did, pays men of all backgrounds equally, and undercuts his competitors' prices. 
Stark is thus able to purchase the land on which the Pico Tower was destined to be built; he designs and constructs the tower and, a fortnight before its scheduled grand opening in May 1927, encases himself inside a wall, facing into the main ballroom. There, he waits for more than eighty years, intending to kill the governor at the New Year's Eve celebration during the tower's post-earthquake reopening scheduled in 2010. On an unspecified date well in advance of the 2010 party, Cameron Phillips recognizes him in a historical photograph from the night of the fire, while studying at night in the library. With the help of her friend, Eric, Cameron deduces Stark's activities and disappearance. She quickly determines his hiding place in the wall and, in the ensuing combat, immobilizes him with an elevator in order to deactivate and destroy him. There is no real-world tower at or near the corner of Pico Boulevard and 3rd Avenue. Stark demonstrates a significant advantage of T-888s over the T-800s portrayed by Arnold Schwarzenegger. The T-800s' organic covering dies relatively easily, at which point it takes on a waxy, corpse-like pallor, begins to decompose, and attracts vermin. Conversely, Stark's organic covering was pristine and lifelike despite being dormant in a wall for eighty years. T Terminator T-600 T-600, a model of Terminator, mentioned by Derek Reese. T-888 T-888, a model of Terminator, examples seen in the characters Cromartie (who later takes the form of the deceased George Laszlo and eventually becomes the avatar for the John Henry AI), Vick Chamberlain, Myron Stark, and Queeg. T-900 T-900, a model of Terminator, example seen in the character Cameron. T-1001 T-1001, a model of Terminator, example seen in the character Catherine Weaver/T-1001 and in characters that the Weaver version temporarily mimics, such as Detective Kaplan. Justin Tuck Justin Tuck (introduced in "Samson and Delilah"), portrayed by Marcus Chait, heads the artificial intelligence project group at ZeiraCorp. He is stripped of much of his staff by the T-1001 posing as (the long deceased) Catherine Weaver, ZeiraCorp's CEO, who transfers them to her new Babylon project. Following the nighttime staff meeting in which Weaver announces the personnel transfers, Tuck complains to an unsympathetic fellow executive in the gentlemen's lavatory. When the other leaves, Tuck approaches the urinal and is taken aback when the surfaces of the urinal and wall become gelatinous and take the form of his employer, Weaver. Pointing at Tuck's face, she suddenly impales him with a sharp extension of her finger, killing him. V Victoria Victoria (introduced in "The Tower Is Tall But the Fall Is Short"), portrayed by Kit Pongetti, is an assistant to the T-1001 posing as (the long deceased) Catherine Weaver at ZeiraCorp. She hires Dr. Sherman to treat her son Leo's emotional problems in the wake of her divorce. She recommends Dr. Sherman to Weaver to treat young Savannah. W Mr. Walsh Mr. Walsh (introduced in "Samson and Delilah"), portrayed by Max Perlich, is a violent thief hired by the T-1001 posing as (the long deceased) Catherine Weaver to obtain Andy Goode's Turk chess computer for her, for a fee of three hundred thousand dollars. Walsh, in turn, employs Margos Sarkissian and others who acquire the Turk from Andy Goode's partner. 
Walsh is later killed in the episode "Desert Cantos" while searching for information about the exploded warehouse in the desert; Weaver later has her Babylon team evolve the Turk software into what will become the AI dubbed John Henry. Catherine Weaver Catherine Weaver/Human Catherine Weaver (introduced in "The Tower Is Tall But the Fall Is Short", though her T-1001 copy had already been introduced), portrayed by Shirley Manson, is the wife of Lachlan Weaver and the mother of Savannah Weaver. Reared in Edinburgh, Scotland, she is the daughter of a butcher who brings home butcher paper for her. As an adult, she continues to use butcher paper and loves its smell. Catherine co-founds the technology company ZeiraCorp with her husband. In or about 2000 or 2001, Catherine gives birth to the couple's daughter, Savannah. Catherine's unnamed brother, a National Transportation Safety Board investigator, secretly provides her photographs of a 2002 commuter plane crash in the eastern Sierra Mountains, in which Terminator components are found among the wreckage. The Weavers then spend twenty million dollars attempting to reverse engineer Terminator technology. At some point, implied to be at or around the time of Lachlan's 2005 fatal helicopter crash, Catherine dies and is replaced by a T-1001 Terminator. Catherine Weaver/T-1001 The T-1001 (introduced in the season two opener "Samson and Delilah"), portrayed by Shirley Manson (main cast), is a shape-shifting Terminator most often disguised as Catherine Weaver, continuing in the deceased Weaver's position as co-founder and CEO of the high-tech corporation ZeiraCorp. The T-1001's liquid metal form can change shapes, resembling a faster and more easily recovering version of the T-1000 seen in Terminator 2: Judgment Day. Weaver/T-1001 is focused on developing an artificial intelligence using the Turk, the intuitive computer at first believed to be a precursor to Skynet but later shown to be a separate entity. She targets other Terminators to reverse engineer Skynet technology in the present, and to prepare for the future war. She plans to use this research to fight Skynet by creating a competing A.I. Despite the revelation that Weaver/T-1001 is an enemy of Skynet, it is still unknown where her allegiance lies, though it is implied that she originated from the Turk's future self. Weaver/T-1001 hints at her motives in the episode "Born To Run" when she asks Cameron, "Will you join us?", through the messenger James Ellison. During the episode "Today Is the Day, Part 2", Cameron explains to Jesse Flores that John Connor asked the T-1001 the same question in the future in an attempt to forge an alliance against Skynet. Lachlan Weaver Lachlan Weaver (introduced in "The Tower Is Tall But the Fall Is Short"), portrayed by Derek Riddell, is the late husband of Catherine Weaver and the father of Savannah Weaver. Lachlan and Catherine co-found ZeiraCorp. Documentary footage of the Weavers being interviewed depicts the two as happy and affectionate toward each other. His wife's brother, an NTSB investigator, secretly provides them photographs of a 2002 commuter plane crash in the eastern Sierra Mountains, in which Terminator components are found among the wreckage. The Weavers then spend twenty million dollars attempting to reverse engineer Terminator technology. 
In addition to being a successful engineer and corporate mogul, Lachlan Weaver is a passionate helicopter pilot, who has over seven hundred hours of flying time in the helicopter in which he and Catherine (or, alternatively, the T-1001 posing as Catherine) fly to a microchip factory in Barstow in 2005. According to the story the T-1001 tells to James Ellison in "The Mousetrap", Lachlan panics in an unspecified extreme situation and crashes, killing himself. Lachlan's crash is determined to be due to mechanical failure. Matt Murch, who seems to hold Mr. Weaver in high esteem, mentions to Ellison that employees prefer wearing plaid on the anniversary of Lachlan's death, paying homage to their former boss' Scottish origins. Savannah Weaver Savannah Weaver (introduced in "Allison from Palmdale"), portrayed by Mackenzie Brooke Smith, is the young daughter of the late Lachlan Weaver, who died in a helicopter crash and Catherine Weaver, who was later replaced by the T-1001 posing as the deceased Catherine Weaver. Savannah understands that her mother is different, and subsequent psychological treatment reveals that she is frightened and wants her "old Mommy back" This is interpreted to mean that she wants her mother to be warm and affectionate as she was prior to Lachlan's death. Savannah meets John Connor at the psychologist's office, and he teaches her how to tie her shoes. Savannah and the T-1001's other "child", the AI dubbed John Henry, become playmates. The two play table games in the ZeiraCorp basement, and a variation of hide-and-seek in which John Henry remotely searches for her via the building's security systems. In "To The Lighthouse", she is nearly attacked by John Henry when he suffers a malfunction as a result of being hacked by a Skynet-built AI. John Henry guides Savannah to relative safety whilst simultaneously alerting the police when a Skynet assassin enters her home. She is saved by the Connors before the police arrive. At the end of the series, the T-1001 instructs Ellison to care for Savannah, before departing for the future with John Connor. Cheri Westin Cheri Westin (introduced in "The Turk"), portrayed by Kristina Apgar, is John's chemistry partner who seems troubled and shuns everyone who attempts to befriend her, including John. One classmate named Morris reveals to John that Cheri may have a dysfunctional life after an unknown incident at the last school she attended. In a scene from the extended DVD cut of the episode The Demon Hand, John examines her school locker, finding graffiti similar to that which prompted student Jordan Cowan to commit suicide. The graffiti referenced an incident in Wichita, Kansas. Later, in a scene which was aired, John mentions Wichita to Cheri in an attempt to get her to open up to him. Cheri, however, firmly denies she is from Wichita. The implication is that she was indeed from Wichita, and that someone at the school knows what occurred there, and is taunting her in the same manner as Jordan Cowan was taunted. Ed Winston Ed Winston (introduced in "Earthlings Welcome Here"), portrayed by Ned Bellamy, is a security guard and a gunman whose affiliations are unknown. He guards a warehouse Sarah Connor is interested in, and when she breaks in and holds Winston at gunpoint, he convinces Sarah that he's just a repairman. However, when Sarah lowers her weapon, Winston pulls out his own gun and starts shooting at her. Sarah apparently kills Winston but is critically wounded herself. 
Winston's apparent death led Sarah to have insomnia for weeks as she feels guilty over killing him and leaving his wife, Diana, widowed. However, in "Some Must Watch, While Some Must Sleep", it is revealed that Winston survived his injuries and was saved in order to find the woman he believed destroyed the factory: Sarah. After Ed kidnaps Sarah, he tortures her with hallucinogenic drugs to find out why she apparently bombed the factory and who her accomplices are. Sarah tries to explain that his own bosses destroyed the factory and are probably hunting him, but Winston doesn't believe her. Eventually, Winston learns that Sarah has a son and he threatens to kill him. After enduring Winston's physical and psychological torments, Sarah breaks free and viciously attacks him. Sarah overcomes Winston and shoots him in the head, killing him for real this time. Although not confirmed, it is strongly implied that he is the motorcycle assailant who attacks Sarah earlier in the episode as evidenced by the fact that he wears the same motorcycle boots as the assailant. Winston is the first human Sarah has ever killed — and she killed him "twice." The first time, Sarah acted in self-defense and is plagued with guilt. The second time, she deliberately kills Winston in cold blood. Sarah does not appear to regret her actions this time. Mark Wyman The Honorable Mark Wyman (introduced in "Self Made Man"), portrayed by Ray Laska, is the Governor of California. He is targeted for assassination by the T-888 Terminator known as Myron Stark at a party in Pico Tower on December 31, 2010, but is saved by Terminator Cameron Phillips well in advance of the party. Wyman is unaware of the intended assassination. The real-world governor of California at the time the episode first aired was Arnold Schwarzenegger, who was the first actor to portray Terminators (specifically, T-800 models), both as an assassin like Stark in the first film and as Cameron's predecessors protecting John Connor in the second and third films. Although it may be a coincidence, Wyman shares his surname with Jane Wyman the ex-wife of California's other actor-governor, Ronald Reagan. Y Allison Young Allison Young (introduced in "Allison from Palmdale"), portrayed by Summer Glau, is the daughter of an architect who taught her to draw and Claire Young, a music teacher who listens to the music of Frédéric Chopin for hours on end. Her birthday is July 22. Claire Young is shown pregnant with her in 2007, making Allison nineteen or twenty years of age when seen in the future events c. 2027. Allison is raised in Palmdale, California, and loses both of her parents on or after Judgment Day in 2011. Claire decides upon Allison's name following a telephone conversation with the Terminator known as Cameron Phillips whose malfunction causes the latter to believe it is Allison and identify herself as such. Allison joins John Connor's Human Resistance against the machines in the apocalyptic future. She also becomes one of John's closest friends. While on a mission, she is captured and interrogated aboard a Skynet prison ship about the details of her life, the location of John Connor, and the nature of her bracelet pass. While interrogating her, Skynet copies her appearance for the Terminator that became known as Cameron. Allison escapes captivity and jumps overboard, only to be caught in the prison's netting and hoisted back aboard. 
Once the interrogation is complete, Cameron kills Allison and leaves to infiltrate the Human Resistance in her stead for the purpose of killing John and "placing his head upon a pike for all to see". However, Cameron ultimately fails in her mission and is captured and reprogrammed by John; she then takes over Allison's place within the Human Resistance. John Connor later meets Allison in an alternate timeline in the episode "Born to Run", where he never leads the Human Resistance due to his displacement from his present as a result of time travel. References External links Terminator: The Sarah Connor Chronicles Terminator characters
17389066
https://en.wikipedia.org/wiki/Software%20architecture%20recovery
Software architecture recovery
Software architecture recovery is a set of methods for the extraction of architectural information from lower-level representations of a software system, such as source code. The abstraction process used to generate architectural elements frequently involves clustering source code entities (such as files, classes, and functions) into subsystems according to a set of criteria that can be application-dependent or application-independent. Architecture recovery from legacy systems is motivated by the fact that these systems often lack architectural documentation, and when documentation does exist, it is frequently out of sync with the implemented system. Software architecture recovery may be required as part of software retrofits. Approaches Most approaches to software architecture recovery have relied on static analysis of systems. When considering object-oriented software, which makes heavy use of polymorphism and dynamic binding, dynamic analysis becomes an essential technique for comprehending the system's behavior and object interactions, and hence for reconstructing its architecture. In such dynamic approaches, the criteria used to determine how source code entities should be clustered into architectural elements are based mainly on dynamic analysis of the system, taking into account the occurrence of interaction patterns and of types (classes and interfaces) in use-case realizations. See also Reverse engineering Software archaeology Software architecture System appreciation References Software architecture Data recovery
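As a rough, minimal sketch of the clustering step described above (not a standard recovery algorithm): the snippet below treats Python files as source code entities, extracts import statements as dependencies, and reports connected components of the resulting graph as candidate subsystems. The project root path ("src"), the choice of connected components as the clustering criterion, and all names are illustrative assumptions.

import ast
from collections import defaultdict
from pathlib import Path

def import_graph(root):
    """Map each module (file stem) to the set of sibling modules it imports."""
    modules = {p.stem: p for p in root.rglob("*.py")}
    graph = {name: set() for name in modules}
    for name, path in modules.items():
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                targets = [alias.name.split(".")[0] for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                targets = [node.module.split(".")[0]]
            else:
                continue
            # Keep only dependencies on modules that exist in this code base.
            graph[name].update(t for t in targets if t in modules and t != name)
    return graph

def candidate_subsystems(graph):
    """Cluster modules into candidate subsystems: connected components of the
    undirected import graph (one deliberately simple, application-independent criterion)."""
    undirected = defaultdict(set)
    for src, dsts in graph.items():
        for dst in dsts:
            undirected[src].add(dst)
            undirected[dst].add(src)
    seen, clusters = set(), []
    for start in graph:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(undirected[node] - component)
        seen |= component
        clusters.append(component)
    return clusters

if __name__ == "__main__":
    graph = import_graph(Path("src"))  # hypothetical project root
    for i, subsystem in enumerate(candidate_subsystems(graph), start=1):
        print(f"candidate subsystem {i}: {sorted(subsystem)}")

A real recovery tool would refine such clusters with additional criteria (naming conventions, directory structure, or the dynamic interaction patterns mentioned above) and present them as an architectural view for an engineer to validate.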
33390894
https://en.wikipedia.org/wiki/Trojans%20Rugby%20Football%20Club
Trojans Rugby Football Club
The Trojans Rugby Football Club is an under-nineteen-year-old rugby club originally based out of Lassiter High School. It is one of the original high school rugby clubs which are part of the Georgia High School Rugby Association (GHSRA). The club was founded in 2005, and has made its mark on rugby in Georgia. Matches and practices are held at Noonday Creek Park in Marietta. Coach Randall Joseph has been the head coach since the club's founding, with John Green, Winston Daniels and Michael Murrell as assistant coaches. The club has taken park in many tournaments and state final matches in Georgia and the Southeast United States. A major goal of the Trojans Rugby Football Club is to teach and play the sport of rugby in the United States. This is a great struggle throughout Georgia because of opposition from the high school's football coaches and athletic directors. The club plays the most common version of rugby, called rugby union but often just referred to as rugby. Although the club plays by rugby union rules, they also play by the rules of the International Rugby Board (IRB) for those under 19 years of age. In the summer of 2011, the Trojan Rugby Football Club took part in another version of rugby called rugby sevens. This variation of rugby is faster paced, with the same size fields but fewer people, and shorter half lengths. Location, practices and games Noonday Creek Park serves as the Trojans Rugby Football Club's home pitch. Although Lassiter High School is associated with the Trojans Rugby Football Club they do not permit the club to use fields at the school. Noonday Creek Park is located off Shallowford Rd., and at 489 Hawkins Store Rd., NE, Kennesaw, GA, 30144. The park has been part of high school rugby in Georgia since the first game was played there on March 4, 2005. Most practices and all home games for the Trojans take place at this park. Although Fall is the off season, practices currently occur every Wednesday at 4:45pm on field 13. During the formal spring season practices are held on Tuesdays and Thursdays at 5pm and games are typically played on Friday nights. When the park is closed, practices are held on the field at McCleskey Middle School, located at 4080 Maybreeze Road Marietta, Georgia, 30066. History The Trojans Rugby Football Club began as one of the first three high school rugby teams in Georgia, in 2005. To create these teams the coaches got together and used the graduate thesis, "The Bryant Model." This graduate thesis was written by Phillip C. Bryant, MBA, to provide a manual to starting a high school rugby club. After the first year the Trojans Rugby Football Club had earned a State Championship title for beating Pope High School's team 23-19. By the next season in 2006, the Georgia High School Rugby Association had created two divisions due to the increase in teams. The Trojan Rugby Football Club was placed in the Cobb County Division along with the Pope High School, the Sequoyah High School, and the Campbell High School Rugby Football Clubs. In the 2006 season, the Trojan Rugby Football Club lost all of their games to the other teams in the Cobb County Division. The club entered the 2007 season with growing numbers of new competition and notable performance. At the season's end the Trojans Rugby Football Club had an 11-1 record with the one loss coming from a defeat in the Southeast Tournament, by the Rummel Raiders Rugby Club of New Orleans, Louisiana. 
The club earned a second State Championship title after defeating the Alpharetta Phoenix Rugby Football Club, and participated in its first Southeast Tournament. In 2008, the club added to its success with yet another State Championship title and traveled to Nashville, Tennessee to play in the 2008 Southeast Tournament. The State Championship match was won in overtime against the Alpharetta Phoenix Rugby Football Club with a final score of 17-15. For the second year in a row, the Trojans finished with an 11-1 record, only losing the Southeast Championship final match to the Jesuit High School Rugby Club from New Orleans, Louisiana. The 2009 and 2010 seasons carried on the club's winning tradition, with the Trojans finishing both seasons undefeated in state play after Phoenix fielded ineligible players who were over the age limit during the 2009 and 2010 state title games. In the 2010 season, the Trojans passed up the opportunity to play for the Southeast title and allowed Phoenix to go in their place; Phoenix eventually went on to Nationals. In 2011, the Trojans again took the Georgia Rugby State Championship title from Alpharetta Phoenix. The Trojans Rugby Football Club ended their 2011 spring regular season 8-0 and beat the Phoenix club 55-14 in the State Championship match at Walton High School. The Trojans attended the Southeast Championship in Sanford, Florida on the weekend of April 30, 2011. Of all the teams attending, the Trojans were seeded first overall for the tournament. On the morning of Saturday, April 30, the Trojans won their first match against Brother Martin, from Louisiana, 27-0. Later that day the Trojans suffered their first loss of 2011 against the Raleigh Rattlesnakes, from North Carolina, 12-26. On Sunday the Trojans lost their final match to the Tampa Barbarians, from Florida, 5-18, finishing in sixth place in the tournament. In 2016, the Trojans combined with the local Pope and Walton high school rugby teams to form "East Cobb Rugby Club". Record Trojans Rugby Football Club record: 51 matches won and 6 matches lost, with no draws, along with a winning streak against in-state Georgia competition stretching from 2006 to 2011. Updated on 27 September 2011. Notable players Hanno Dirksen Position: Fly Half (10), Center (12, 13), Wing (11, 14) Teams: U17 USA team, St. Ives, Cornwall Men's team, Ospreys Honors: Selected to play for the Under 17 age USA team in 2008. Contracted to the Ospreys Notes and references American rugby union teams Rugby union teams in Georgia (U.S. state)
41548
https://en.wikipedia.org/wiki/Phase-locked%20loop
Phase-locked loop
A phase-locked loop or phase lock loop (PLL) is a control system that generates an output signal whose phase is related to the phase of an input signal. There are several different types; the simplest is an electronic circuit consisting of a variable frequency oscillator and a phase detector in a feedback loop. The oscillator generates a periodic signal, and the phase detector compares the phase of that signal with the phase of the input periodic signal, adjusting the oscillator to keep the phases matched. Keeping the input and output phase in lock step also implies keeping the input and output frequencies the same. Consequently, in addition to synchronizing signals, a phase-locked loop can track an input frequency, or it can generate a frequency that is a multiple of the input frequency. These properties are used for computer clock synchronization, demodulation, and frequency synthesis. Phase-locked loops are widely employed in radio, telecommunications, computers and other electronic applications. They can be used to demodulate a signal, recover a signal from a noisy communication channel, generate a stable frequency at multiples of an input frequency (frequency synthesis), or distribute precisely timed clock pulses in digital logic circuits such as microprocessors. Since a single integrated circuit can now provide a complete phase-locked-loop building block, the technique is widely used in modern electronic devices, with output frequencies from a fraction of a hertz up to many gigahertz. Practical analogies Automobile race analogy As an analogy of a PLL, consider a race between two cars. One represents the input frequency, the other the PLL's output voltage-controlled oscillator (VCO) frequency. Each lap corresponds to a complete cycle. The number of laps per hour (a speed) corresponds to the frequency. The separation of the cars (a distance) corresponds to the phase difference between the two oscillating signals. During most of the race, each car is on its own and free to pass the other and lap the other. This is analogous to the PLL in an unlocked state. However, if there is an accident, a yellow caution flag is raised. This means neither of the race cars is permitted to overtake and pass the other car. The two race cars represent the input and output frequency of the PLL in a locked state. Each driver will measure the phase difference (a fraction of the distance around the lap) between themselves and the other race car. If the hind driver is too far away, they will increase their speed to close the gap. If they are too close to the other car, the driver will slow down. The result is that both race cars will circle the track in lockstep with a fixed phase difference (or constant distance) between them. Since neither car is allowed to lap the other, the cars make the same number of laps in a given time period. Therefore the frequency of the two signals is the same. Clock analogy Phase can be proportional to time, so a phase difference can be a time difference. Clocks are, with varying degrees of accuracy, phase-locked (time-locked) to a leader clock. Left on its own, each clock will mark time at slightly different rates. A wall clock, for example, might be fast by a few seconds per hour compared to the reference clock at NIST. Over time, that time difference would become substantial. To keep the wall clock in sync with the reference clock, each week the owner compares the time on their wall clock to a more accurate clock (a phase comparison), and resets their clock. 
Left alone, the wall clock will continue to diverge from the reference clock at the same few seconds per hour rate. Some clocks have a timing adjustment (a fast-slow control). When the owner compared their wall clock's time to the reference time, they noticed that their clock was too fast. Consequently, the owner could turn the timing adjust a small amount to make the clock run a little slower (frequency). If things work out right, their clock will be more accurate than before. Over a series of weekly adjustments, the wall clock's notion of a second would agree with the reference time (locked both in frequency and phase within the wall clock's stability). An early electromechanical version of a phase-locked loop was used in 1921 in the Shortt-Synchronome clock. History Spontaneous synchronization of weakly coupled pendulum clocks was noted by the Dutch physicist Christiaan Huygens as early as 1673. Around the turn of the 19th century, Lord Rayleigh observed synchronization of weakly coupled organ pipes and tuning forks. In 1919, W. H. Eccles and J. H. Vincent found that two electronic oscillators that had been tuned to oscillate at slightly different frequencies but that were coupled to a resonant circuit would soon oscillate at the same frequency. Automatic synchronization of electronic oscillators was described in 1923 by Edward Victor Appleton. In 1925, Professor David Robertson, first professor of electrical engineering at the University of Bristol, introduced phase locking in his clock design to control the striking of the bell Great George in the new Wills Memorial Building.  Robertson’s clock incorporated an electro-mechanical device that could vary the rate of oscillation of the pendulum, and derived correction signals from a circuit that compared the pendulum phase with that of an incoming telegraph pulse from Greenwich Observatory every morning at 10.00 GMT.  Apart from including equivalents of every element of a modern electronic PLL, Robertson’s system was notable in that its phase detector was a relay logic implementation of the phase/frequency detector not seen in electronic circuits until the 1970s.  Robertson’s work predated research towards what was later named the phase-lock loop in 1932, when British researchers developed an alternative to Edwin Armstrong's superheterodyne receiver, the Homodyne or direct-conversion receiver. In the homodyne or synchrodyne system, a local oscillator was tuned to the desired input frequency and multiplied with the input signal. The resulting output signal included the original modulation information. The intent was to develop an alternative receiver circuit that required fewer tuned circuits than the superheterodyne receiver. Since the local oscillator would rapidly drift in frequency, an automatic correction signal was applied to the oscillator, maintaining it in the same phase and frequency of the desired signal. The technique was described in 1932, in a paper by Henri de Bellescize, in the French journal L'Onde Électrique. In analog television receivers since at least the late 1930s, phase-locked-loop horizontal and vertical sweep circuits are locked to synchronization pulses in the broadcast signal. In 1969, Signetics introduced a line of low-cost monolithic integrated circuits like the NE565, that were complete phase-locked loop systems on a chip, and applications for the technique multiplied. 
A few years later, RCA introduced the "CD4046" CMOS Micropower Phase-Locked Loop, which also became a popular integrated circuit building block. Structure and function Phase-locked loop mechanisms may be implemented as either analog or digital circuits. Both implementations use the same basic structure. Analog PLL circuits include four basic elements: Phase detector Low-pass filter Voltage controlled oscillator Feedback path (which may include a frequency divider) Variations There are several variations of PLLs. Some terms that are used are "analog phase-locked loop" (APLL), also referred to as a "linear phase-locked loop" (LPLL), "digital phase-locked loop" (DPLL), "all digital phase-locked loop" (ADPLL), and "software phase-locked loop" (SPLL). Analog or linear PLL (APLL): The phase detector is an analog multiplier. The loop filter is active or passive. Uses a voltage-controlled oscillator (VCO). An APLL is said to be of type II if its loop filter has a transfer function with exactly one pole at the origin (see also Egan's conjecture on the pull-in range of type II APLL). Digital PLL (DPLL): An analog PLL with a digital phase detector (such as an XOR gate, an edge-triggered JK flip-flop, or a phase-frequency detector). May have a digital divider in the loop. All digital PLL (ADPLL): Phase detector, filter and oscillator are digital. Uses a numerically controlled oscillator (NCO). Software PLL (SPLL): Functional blocks are implemented by software rather than specialized hardware. Charge-pump PLL (CP-PLL): A modification of the phase-locked loop with a phase-frequency detector and square waveform signals. See also Gardner's conjecture on the CP-PLL. Performance parameters Type and order. Frequency ranges: hold-in range (tracking range), pull-in range (capture range, acquisition range), lock-in range. See also Gardner's problem on the lock-in range, Egan's conjecture on the pull-in range of type II APLL. Loop bandwidth: defines the speed of the control loop. Transient response: such as overshoot and settling time to a certain accuracy (e.g. 50 ppm). Steady-state errors: such as remaining phase or timing error. Output spectrum purity: such as sidebands generated from a certain VCO tuning voltage ripple. Phase noise: defined by the noise energy in a certain frequency band (such as 10 kHz offset from the carrier). Highly dependent on VCO phase noise, PLL bandwidth, etc. General parameters: such as power consumption, supply voltage range, output amplitude, etc. Applications Phase-locked loops are widely used for synchronization purposes; in space communications for coherent demodulation and threshold extension, bit synchronization, and symbol synchronization. Phase-locked loops can also be used to demodulate frequency-modulated signals. In radio transmitters, a PLL is used to synthesize new frequencies which are a multiple of a reference frequency, with the same stability as the reference frequency. Other applications include: Demodulation of frequency modulation (FM): If a PLL is locked to an FM signal, the VCO tracks the instantaneous frequency of the input signal. The filtered error voltage which controls the VCO and maintains lock with the input signal is the demodulated FM output. The VCO transfer characteristics determine the linearity of the demodulated output. Since the VCO used in an integrated-circuit PLL is highly linear, it is possible to realize highly linear FM demodulators. 
Demodulation of frequency-shift keying (FSK): In digital data communication and computer peripherals, binary data is transmitted by means of a carrier frequency which is shifted between two preset frequencies. Recovery of small signals that otherwise would be lost in noise (lock-in amplifier to track the reference frequency) Recovery of clock timing information from a data stream such as from a disk drive Clock multipliers in microprocessors that allow internal processor elements to run faster than external connections, while maintaining precise timing relationships Demodulation of modems and other tone signals for telecommunications and remote control. DSP of video signals; Phase-locked loops are also used to synchronize phase and frequency to the input analog video signal so it can be sampled and digitally processed Atomic force microscopy in frequency modulation mode, to detect changes of the cantilever resonance frequency due to tip–surface interactions DC motor drive Clock recovery Some data streams, especially high-speed serial data streams (such as the raw stream of data from the magnetic head of a disk drive), are sent without an accompanying clock. The receiver generates a clock from an approximate frequency reference, and then phase-aligns to the transitions in the data stream with a PLL. This process is referred to as clock recovery. For this scheme to work, the data stream must have a transition frequently enough to correct any drift in the PLL's oscillator. Typically, some sort of line code, such as 8b/10b encoding, is used to put a hard upper bound on the maximum time between transitions. Deskewing If a clock is sent in parallel with data, that clock can be used to sample the data. Because the clock must be received and amplified before it can drive the flip-flops which sample the data, there will be a finite, and process-, temperature-, and voltage-dependent delay between the detected clock edge and the received data window. This delay limits the frequency at which data can be sent. One way of eliminating this delay is to include a deskew PLL on the receive side, so that the clock at each data flip-flop is phase-matched to the received clock. In that type of application, a special form of a PLL called a delay-locked loop (DLL) is frequently used. Clock generation Many electronic systems include processors of various sorts that operate at hundreds of megahertz. Typically, the clocks supplied to these processors come from clock generator PLLs, which multiply a lower-frequency reference clock (usually 50 or 100 MHz) up to the operating frequency of the processor. The multiplication factor can be quite large in cases where the operating frequency is multiple gigahertz and the reference crystal is just tens or hundreds of megahertz. Spread spectrum All electronic systems emit some unwanted radio frequency energy. Various regulatory agencies (such as the FCC in the United States) put limits on the emitted energy and any interference caused by it. The emitted noise generally appears at sharp spectral peaks (usually at the operating frequency of the device, and a few harmonics). A system designer can use a spread-spectrum PLL to reduce interference with high-Q receivers by spreading the energy over a larger portion of the spectrum. 
For example, by changing the operating frequency up and down by a small amount (about 1%), a device running at hundreds of megahertz can spread its interference evenly over a few megahertz of spectrum, which drastically reduces the amount of noise seen on broadcast FM radio channels, which have a bandwidth of several tens of kilohertz. Clock distribution Typically, the reference clock enters the chip and drives a phase locked loop (PLL), which then drives the system's clock distribution. The clock distribution is usually balanced so that the clock arrives at every endpoint simultaneously. One of those endpoints is the PLL's feedback input. The function of the PLL is to compare the distributed clock to the incoming reference clock, and vary the phase and frequency of its output until the reference and feedback clocks are phase and frequency matched. PLLs are ubiquitous—they tune clocks in systems several feet across, as well as clocks in small portions of individual chips. Sometimes the reference clock may not actually be a pure clock at all, but rather a data stream with enough transitions that the PLL is able to recover a regular clock from that stream. Sometimes the reference clock is the same frequency as the clock driven through the clock distribution, other times the distributed clock may be some rational multiple of the reference. AM detection A PLL may be used to synchronously demodulate amplitude modulated (AM) signals. The PLL recovers the phase and frequency of the incoming AM signal's carrier. The recovered phase at the VCO differs from the carrier's by 90°, so it is shifted in phase to match, and then fed to a multiplier. The output of the multiplier contains both the sum and the difference frequency signals, and the demodulated output is obtained by low pass filtering. Since the PLL responds only to the carrier frequencies which are very close to the VCO output, a PLL AM detector exhibits a high degree of selectivity and noise immunity which is not possible with conventional peak type AM demodulators. However, the loop may lose lock where AM signals have 100% modulation depth. Jitter and noise reduction One desirable property of all PLLs is that the reference and feedback clock edges be brought into very close alignment. The average difference in time between the phases of the two signals when the PLL has achieved lock is called the static phase offset (also called the steady-state phase error). The variance between these phases is called tracking jitter. Ideally, the static phase offset should be zero, and the tracking jitter should be as low as possible. Phase noise is another type of jitter observed in PLLs, and is caused by the oscillator itself and by elements used in the oscillator's frequency control circuit. Some technologies are known to perform better than others in this regard. The best digital PLLs are constructed with emitter-coupled logic (ECL) elements, at the expense of high power consumption. To keep phase noise low in PLL circuits, it is best to avoid saturating logic families such as transistor-transistor logic (TTL) or CMOS. Another desirable property of all PLLs is that the phase and frequency of the generated clock be unaffected by rapid changes in the voltages of the power and ground supply lines, as well as the voltage of the substrate on which the PLL circuits are fabricated. This is called substrate and supply noise rejection. The higher the noise rejection, the better. 
To further improve the phase noise of the output, an injection locked oscillator can be employed following the VCO in the PLL. Frequency synthesis In digital wireless communication systems (GSM, CDMA etc.), PLLs are used to provide the local oscillator up-conversion during transmission and down-conversion during reception. In most cellular handsets this function has been largely integrated into a single integrated circuit to reduce the cost and size of the handset. However, due to the high performance required of base station terminals, the transmission and reception circuits are built with discrete components to achieve the levels of performance required. GSM local oscillator modules are typically built with a frequency synthesizer integrated circuit and discrete resonator VCOs. Block diagram The block diagram shown in the figure shows an input signal, FI, which is used to generate an output, FO. The input signal is often called the reference signal (also abbreviated FREF). At the input, a phase detector (shown as the "Phase frequency detector" and "Charge pump" blocks in the figure) compares two input signals, producing an error signal which is proportional to their phase difference. The error signal is then low-pass filtered and used to drive a VCO which creates an output phase. The output is fed through an optional divider back to the input of the system, producing a negative feedback loop. If the output phase drifts, the error signal will increase, driving the VCO phase in the opposite direction so as to reduce the error. Thus the output phase is locked to the phase of the input. Analog phase locked loops are generally built with an analog phase detector, low pass filter and VCO placed in a negative feedback configuration. A digital phase locked loop uses a digital phase detector; it may also have a divider in the feedback path or in the reference path, or both, in order to make the PLL's output signal frequency a rational multiple of the reference frequency. A non-integer multiple of the reference frequency can also be created by replacing the simple divide-by-N counter in the feedback path with a programmable pulse swallowing counter. This technique is usually referred to as a fractional-N synthesizer or fractional-N PLL. The oscillator generates a periodic output signal. Assume that initially the oscillator is at nearly the same frequency as the reference signal. If the phase from the oscillator falls behind that of the reference, the phase detector changes the control voltage of the oscillator so that it speeds up. Likewise, if the phase creeps ahead of the reference, the phase detector changes the control voltage to slow down the oscillator. Since initially the oscillator may be far from the reference frequency, practical phase detectors may also respond to frequency differences, so as to increase the lock-in range of allowable inputs. Depending on the application, either the output of the controlled oscillator, or the control signal to the oscillator, provides the useful output of the PLL system. Elements Phase detector A phase detector (PD) generates a voltage, which represents the phase difference between two signals. In a PLL, the two inputs of the phase detector are the reference input and the feedback from the VCO. The PD output voltage is used to control the VCO such that the phase difference between the two inputs is held constant, making it a negative feedback system. Different types of phase detectors have different performance characteristics. 
For instance, the frequency mixer produces harmonics that add complexity in applications where spectral purity of the VCO signal is important. The resulting unwanted (spurious) sidebands, also called "reference spurs", can dominate the filter requirements and reduce the capture range well below, or increase the lock time beyond, the requirements. In these applications, the more complex digital phase detectors are used, since they do not have as severe a reference spur component on their output. Also, when in lock, the steady-state phase difference at the inputs using this type of phase detector is near 90 degrees. In PLL applications it is frequently required to know when the loop is out of lock. The more complex digital phase-frequency detectors usually have an output that allows a reliable indication of an out-of-lock condition. An XOR gate is often used for digital PLLs as an effective yet simple phase detector. It can also be used in an analog sense with only slight modification to the circuitry. Filter The block commonly called the PLL loop filter (usually a low-pass filter) generally has two distinct functions. The primary function is to determine loop dynamics, also called stability: how the loop responds to disturbances, such as changes in the reference frequency, changes of the feedback divider, or start-up. Common considerations are the range over which the loop can achieve lock (pull-in range, lock range or capture range), how fast the loop achieves lock (lock time, lock-up time or settling time) and damping behavior. Depending on the application, this may require one or more of the following: a simple proportion (gain or attenuation), an integral (low-pass filter) and/or derivative (high-pass filter). Loop parameters commonly examined for this are the loop's gain margin and phase margin. Common concepts in control theory, including the PID controller, are used to design this function. The second common consideration is limiting the amount of reference frequency energy (ripple) appearing at the phase detector output that is then applied to the VCO control input. This frequency modulates the VCO and produces FM sidebands commonly called "reference spurs". The design of this block can be dominated by either of these considerations, or can be a complex process juggling the interactions of the two. Typical trade-offs: increasing the bandwidth usually degrades the stability, while too much damping for better stability reduces the speed and increases the settling time. The phase noise is often affected as well. Oscillator All phase-locked loops employ an oscillator element with variable frequency capability. This can be an analog VCO, either driven by analog circuitry in the case of an APLL or driven digitally through the use of a digital-to-analog converter, as is the case for some DPLL designs. Pure digital oscillators such as a numerically controlled oscillator are used in ADPLLs. Feedback path and optional divider PLLs may include a divider between the oscillator and the feedback input to the phase detector to produce a frequency synthesizer. A programmable divider is particularly useful in radio transmitter applications, since a large number of transmit frequencies can be produced from a single stable, accurate, but expensive, quartz crystal–controlled reference oscillator. Some PLLs also include a divider between the reference clock and the reference input to the phase detector. 
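A minimal numerical sketch of the divider arithmetic developed in the next paragraph (a hedged illustration: the reference frequency and divider values are assumptions chosen here, not taken from the source):

% Integer-N frequency synthesis: in lock, fvco / N = fref / M, so fvco = fref * N / M.
fref = 10e6;          % reference oscillator frequency in Hz (assumed value)
M = 2;                % reference input divider (assumed value)
N = 250;              % feedback divider (assumed value)
fvco = fref * N / M   % synthesized VCO output frequency, here 1.25e9 Hz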
If the divider in the feedback path divides by N and the reference input divider divides by M, the PLL multiplies the reference frequency by N / M (as in the numerical sketch above). It might seem simpler to just feed the PLL a lower frequency, but in some cases the reference frequency may be constrained by other issues, and then the reference divider is useful. Frequency multiplication can also be attained by locking the VCO output to the Nth harmonic of the reference signal. Instead of a simple phase detector, the design uses a harmonic mixer (sampling mixer). The harmonic mixer turns the reference signal into an impulse train that is rich in harmonics. The VCO output is coarse-tuned to be close to one of those harmonics. Consequently, the desired harmonic mixer output (representing the difference between the Nth harmonic and the VCO output) falls within the loop filter passband. The feedback is not limited to a frequency divider; this element can also be a frequency multiplier or a mixer. A multiplier makes the VCO output a sub-multiple (rather than a multiple) of the reference frequency, while a mixer can translate the VCO frequency by a fixed offset. The feedback may also be a combination of these, for example a divider following a mixer; this allows the divider to operate at a much lower frequency than the VCO without a loss in loop gain. Modeling Time domain model of APLL The equations governing a phase-locked loop with an analog multiplier as the phase detector and a linear filter may be derived as follows. Let the input to the phase detector be f1(θ1(t)) and the output of the VCO be f2(θ2(t)), with phases θ1(t) and θ2(t). The functions f1(θ) and f2(θ) describe the waveforms of the signals. Then the output of the phase detector φ(t) is given by φ(t) = f1(θ1(t)) · f2(θ2(t)). The VCO frequency is usually taken as a function of the VCO input g(t) as dθ2(t)/dt = ωfree + KVCO · g(t), where KVCO is the sensitivity of the VCO (expressed in Hz / V) and ωfree is the free-running frequency of the VCO. The loop filter can be described by a system of linear differential equations dx/dt = A·x + b·ξ(t), σ(t) = c*·x, where ξ(t) is the input of the filter, σ(t) is the output of the filter, A is an n-by-n matrix, and x(0) = x0 represents the initial state of the filter. The star symbol denotes the conjugate transpose. Hence the following system describes the PLL: dx/dt = A·x + b·f1(θ1(t))·f2(θ2(t)), dθ2/dt = ωfree + KVCO·(c*·x), with x(0) = x0 and θ2(0) = θ0, where θ0 is an initial phase shift. Phase domain model of APLL Consider the case where the input of the PLL and the VCO output are high-frequency signals. Then, for any piecewise differentiable 2π-periodic functions f1(θ) and f2(θ), there is a function φ(θ) such that the output G(t) of the filter in the phase domain is asymptotically equal (the difference G(t) − g(t) is small with respect to the frequencies) to the output g(t) of the filter in the time domain model. Here the function φ(θ) is the phase detector characteristic. Denote by θΔ(t) = θ1(t) − θ2(t) the phase difference. Then the following dynamical system describes the PLL behavior: dx/dt = A·x + b·φ(θΔ(t)), dθΔ/dt = ωΔ − KVCO·(c*·x). Here ωΔ = ω1 − ωfree, where ω1 is the frequency of the reference oscillator (we assume that ω1 is constant). Example Consider sinusoidal signals f1(θ) = sin(θ) and f2(θ) = cos(θ) and a simple one-pole RC circuit as the filter. The time-domain model takes the form R·C·dg(t)/dt + g(t) = sin(θ1(t))·cos(θ2(t)), dθ2/dt = ωfree + KVCO·g(t). The PD characteristic for these signals is equal to φ(θ) = (1/2)·sin(θ). Hence the phase domain model takes the form R·C·dG(t)/dt + G(t) = (1/2)·sin(θΔ(t)), dθΔ/dt = ωΔ − KVCO·G(t). This system of equations is equivalent to the equation of a mathematical pendulum: R·C·d²θΔ/dt² + dθΔ/dt + (KVCO/2)·sin(θΔ) = ωΔ. Linearized phase domain model Phase-locked loops can also be analyzed as control systems by applying the Laplace transform. The loop response can be written as θo/θi = Kp·Kv·F(s) / (s + Kp·Kv·F(s)), where θo is the output phase in radians, θi is the input phase in radians, Kp is the phase detector gain in volts per radian, Kv is the VCO gain in radians per volt-second, and F(s) is the loop filter transfer function (dimensionless). The loop characteristics can be controlled by inserting different types of loop filters. 
The simplest filter is a one-pole RC circuit. The loop transfer function in this case is F(s) = 1 / (1 + s·R·C). The loop response becomes θo/θi = (Kp·Kv / (R·C)) / (s² + s/(R·C) + Kp·Kv/(R·C)). This is the form of a classic harmonic oscillator. The denominator can be related to that of a second-order system, s² + 2·ζ·ωn·s + ωn², where ζ is the damping factor and ωn is the natural frequency of the loop. For the one-pole RC filter, ωn = sqrt(Kp·Kv / (R·C)) and ζ = 1 / (2·sqrt(Kp·Kv·R·C)). The loop natural frequency is a measure of the response time of the loop, and the damping factor is a measure of the overshoot and ringing. Ideally, the natural frequency should be high and the damping factor should be near 0.707 (critical damping). With a single-pole filter, it is not possible to control the loop frequency and damping factor independently. For the case of critical damping, R·C = 1 / (2·Kp·Kv) and the loop natural frequency becomes ωn = Kp·Kv·sqrt(2). A slightly more effective filter, the lag-lead filter, includes one pole and one zero. This can be realized with two resistors and one capacitor. The transfer function for this filter is F(s) = (1 + s·C·R2) / (1 + s·C·(R1 + R2)). This filter has two time constants, τ1 = C·(R1 + R2) and τ2 = C·R2. Substituting above yields the following natural frequency and damping factor: ωn = sqrt(Kp·Kv / τ1) and ζ = (ωn/2)·(τ2 + 1/(Kp·Kv)). The loop filter components can then be calculated independently for a given natural frequency and damping factor: τ1 = Kp·Kv / ωn² and τ2 = 2·ζ/ωn − 1/(Kp·Kv). Real-world loop filter design can be much more complex, e.g. using higher-order filters to reduce various types or sources of phase noise. (See the D Banerjee ref below.) Implementing a digital phase-locked loop in software Digital phase-locked loops can be implemented in hardware, using integrated circuits such as a CMOS 4046. However, with microcontrollers becoming faster, it may make sense to implement a phase-locked loop in software for applications that do not require locking onto signals in the MHz range or faster, such as precisely controlling motor speeds. Software implementation has several advantages, including easy customization of the feedback loop, such as changing the multiplication or division ratio between the signal being tracked and the output oscillator. Furthermore, a software implementation is useful to understand and experiment with. As an example, a phase-locked loop implemented using a phase-frequency detector is presented in MATLAB, as this type of phase detector is robust and easy to implement. 
% This example is written in MATLAB
% Initialize variables
vcofreq = zeros(1, numiterations);
ervec = zeros(1, numiterations);
% Keep track of last states of reference, signal, and error signal
qsig = 0; qref = 0; lref = 0; lsig = 0; lersig = 0;
phs = 0;
freq = 0;
% Loop filter constants (proportional and derivative)
% Currently powers of two to facilitate multiplication by shifts
prop = 1 / 128;
deriv = 64;
for it = 1:numiterations
    % Simulate a local oscillator using a 16-bit counter
    phs = mod(phs + floor(freq / 2 ^ 16), 2 ^ 16);
    ref = phs < 32768;
    % Get the next digital value (0 or 1) of the signal to track
    sig = tracksig(it);
    % Implement the phase-frequency detector
    rst = ~ (qsig & qref);  % Reset the "flip-flop" of the phase-frequency
                            % detector when both signal and reference are high
    qsig = (qsig | (sig & ~ lsig)) & rst;  % Trigger signal flip-flop on leading edge of signal
    qref = (qref | (ref & ~ lref)) & rst;  % Trigger reference flip-flop on leading edge of reference
    lref = ref; lsig = sig;  % Store these values for next iteration (for edge detection)
    ersig = qref - qsig;  % Compute the error signal (whether frequency should increase or decrease)
                          % Error signal is given by one or the other flip flop signal
    % Implement a pole-zero filter by proportional and derivative input to frequency
    filtered_ersig = ersig + (ersig - lersig) * deriv;
    % Keep the error signal for the derivative term on the next iteration
    lersig = ersig;
    % Integrate VCO frequency using the error signal
    freq = freq - 2 ^ 16 * filtered_ersig * prop;
    % Frequency is tracked as a fixed-point binary fraction
    % Store the current VCO frequency
    vcofreq(1, it) = freq / 2 ^ 16;
    % Store the error signal to show whether signal or reference is higher frequency
    ervec(1, it) = ersig;
end
In this example, an array tracksig is assumed to contain a reference signal to be tracked. The oscillator is implemented by a counter, with the most significant bit of the counter indicating the on/off status of the oscillator. This code simulates the two D-type flip-flops that comprise a phase-frequency comparator. When either the reference or the signal has a positive edge, the corresponding flip-flop switches high. Once both flip-flops are high, both are reset. Which flip-flop is high determines at that instant whether the reference or the signal leads the other. The error signal is the difference between these two flip-flop values. The pole-zero filter is implemented by adding the error signal and its derivative to the filtered error signal. This in turn is integrated to find the oscillator frequency. In practice, one would likely insert other operations into the feedback of this phase-locked loop. For example, if the phase-locked loop were to implement a frequency multiplier, the oscillator signal could be divided in frequency before it is compared to the reference signal. See also Frequency-locked loop Charge-pump phase-locked loop Carrier recovery Circle map – A simple mathematical model of the phase-locked loop showing both mode-locking and chaotic behavior. Costas loop Delay-locked loop (DLL) Direct conversion receiver Direct digital synthesizer Kalman filter PLL multibit Shortt–Synchronome clock – Slave pendulum phase-locked to master (ca 1921) Notes References Further reading (provides useful Matlab scripts for simulation) (provides useful Matlab scripts for simulation) (FM Demodulation) An article on designing a standard PLL IC for Bluetooth applications. 
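A minimal usage sketch for the software PLL example above (a hedged illustration: the script name softpll.m, the iteration count, and the square-wave test signal are assumptions introduced here, not part of the original example):

% Drive the software PLL above with a square wave and inspect the result.
% Assumes the loop listed above has been saved as the script "softpll.m" (hypothetical name).
numiterations = 20000;                   % number of simulation steps (assumed value)
n = 0:numiterations - 1;
tracksig = double(mod(n, 100) < 50);     % 0/1 square wave with a period of 100 samples (assumed test signal)
run('softpll.m');                        % executes the loop, filling vcofreq and ervec
plot(vcofreq);                           % frequency word of the local oscillator over time
xlabel('iteration');
ylabel('oscillator frequency word');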
External links Phase locked loop primer – Includes embedded video Excel Unusual hosts an animated PLL model and the tutorials to code such a model. Articles with example MATLAB/Octave code Communication circuits Electronic design Electronic oscillators Radio electronics
2218896
https://en.wikipedia.org/wiki/Veritas%20Volume%20Manager
Veritas Volume Manager
The Veritas Volume Manager (VVM or VxVM) is a proprietary logical volume manager from Veritas (which was part of Symantec until January 2016). Details It is available for Windows, AIX, Solaris, Linux, and HP-UX. A modified version is bundled with HP-UX as its built-in volume manager. It offers volume management and Multipath I/O functionalities (when used with Veritas Dynamic Multi-Pathing feature). The Veritas Volume Manager Storage Administrator (VMSA) is a GUI manager. Versions Veritas Volume Manager 7.4.1 Release date (Windows): February 2019 Veritas Volume Manager 6.0 Release date (Windows): December 2011 Release date (UNIX): December 2011 Veritas Volume Manager 5.1 Release date (Windows): August 2008 Release date (UNIX): December 2009 Veritas Volume Manager 5.0 Release date (UNIX): August 2006 Release date (Windows): January 2007 Veritas Volume Manager 4.1 Release date (UNIX): April 2005 Release date (Windows): June 2004 Veritas Volume Manager 4.0 Release date: February 2004 Veritas Volume Manager 3.5 Release date: September 2002 Veritas Volume Manager 3.2 Veritas Volume Manager 3.1 Release date: August 2000 Veritas Volume Manager 3.0 Microsoft once licensed a version of Veritas Volume Manager for Windows 2000, allowing operating systems to store and modify large amounts of data. Symantec acquired Veritas on July 2, 2005, and claimed Microsoft misused their intellectual property to develop functionalities in Windows Server 2003, later Windows Vista and Windows Server 2008, which competed with Veritas' Storage Foundation, according to Michael Schallop, the director of legal affairs at Symantec. A representative claims Microsoft bought all "intellectual property rights for all relevant technologies from Veritas in 2004". The lawsuit was dropped in 2008; terms were not disclosed. See also Veritas Storage Foundation Veritas Volume Replicator Symantec Operations Readiness Tools (SORT) References External links Veritas Volume Manager documentation Veritas Volume Manager Quick Reference Advanced Veritas Volume Manager Quick Reference Symantec Operations Readiness Tools (SORT) Cuddletech Veritas Kickstart UNIX WAY Veritas Volume Manager documentation Storage software
15185065
https://en.wikipedia.org/wiki/Kompakt%20%28software%29
Kompakt (software)
Kompakt is a discontinued feature-limited version of Native Instruments' Kontakt software sampler. It features a large sample library containing samples of a range of acoustic and electronic instruments, and a number of performance controls. It also allows the user's own patches to be created and samples mapped across the keyboard using a basic drag-and-drop technique; however, there is no capability for editing patches, so Kompakt is best suited as a performance instrument for playing predefined sample libraries. Kompakt is not compatible with Intel Macs. Kontakt Kontakt is Native Instruments' flagship software sampler and one of the leading software sampling applications on the market. First introduced in 2002, Kontakt works as either a stand-alone application or as a plug-in on both Mac and Windows platforms. Kontakt combines sampler functionality with elements of synthesis and effects. Kompakt Kompakt is a sampler-based synthesis application. It allows a user to load software instruments (or patches) into memory to be played back through a MIDI controller. Each instance of Kompakt can load up to eight instruments at a time. Instrument patches can be modified and saved for later use. Kompakt similarly allows the user to work with multis, which are files defining a group of instrument patches. Kompakt gives the user control over the sound of the instrument by means of a set of controls, including envelopes, LFOs, filters and effects. 3rd Party Libraries Many third-party manufacturers program software sample libraries, and Kontakt features the Kontakt Script Processor (KSP) and Creator Tools to help users of the software and sample library developers create their own instruments that utilize the Kontakt sampling and synthesis engine. By utilizing KSP, sample library developers can create instruments that can be played and controlled via Kontakt. Through Native Instruments, one can buy sample libraries that include a special version of Kompakt, called the "Kompakt Player," to work with the sample libraries. An obvious advantage of this is that one need not buy a full sampler in order to play the samples, but instead has the full capabilities of playing the sample library through the Kompakt Player once the library is installed. Third-party sample manufacturers normally bundle their libraries with a customized version of a sample player that contains a special GUI and is optimized for use of that library. References Native Instruments Samplers (musical instrument) Software synthesizers
64089902
https://en.wikipedia.org/wiki/Foliate%20%28software%29
Foliate (software)
Foliate is a free e-book reading application for desktop Linux systems. The name refers to leaves, meaning "(getting) leafy" or "…-leaved". Features Foliate focuses on reading and supports book management with a dedicated library view. It supports typical e-book formats with reflowable text, namely EPUB (its primary focus), Mobipocket and AZW(3), but no fixed-layout formats, although PDF support is being considered. Its customizable and theme-based user interface is inspired by those of portable e-reader hardware devices. It follows the GNOME standards and automatically adapts to different screen formats. It is streamlined for distraction-free reading and is described as pleasant and more polished than other free desktop applications. Books are displayed in a paginated view, with double-page or single-page view depending on screen size, or in a continuous scrolling view, with customizable typeface, spacing/margins, brightness and size/zoom. Control elements hide with an automatic fading effect, while basic navigation with hidden controls is still possible by clicking/tapping on pages or using the arrow keys. It has a toggleable navigation sidebar, can display a reading time estimate and a progress slider with chapter markers, and supports multi-touch gestures such as pinch zoom. A full-screen mode and an optional traditional title bar can be activated. In skeuomorphic mode, Foliate mimics the look of a traditional paper book. Foliate can browse the OPDS feeds of Project Gutenberg, Standard Ebooks and Feedbooks, and can automatically download royalty-free ebooks from these sources. It is also possible to manually add other OPDS sources. Foliate supports speech synthesis using eSpeak, eSpeakNG or Festival, albeit without automatic detection of the content language. It is also possible to use Google's text-to-speech service in Foliate. A full-text search is available (also for annotations), as well as word lookup (in Wikipedia and Wiktionary or offline dictionaries via a dictd interface) and integration of Google Translate. The application stores reading progress, bookmarks and annotations in a central directory using one JSON file per book. These can be synchronized with other devices, although it uses a format that does not work immediately with other reading software. It can also check for spelling errors in annotations and export them as Markdown. It is not able to synchronize e-books with a hardware reader device. Technology The application is written in JavaScript, based on the JavaScript interpreter GJS, the epub.js library, the rendering engine WebKit and GTK 3 for the user interface. Optionally, gspell can be used for spell checking of annotations. Support for the Kindle formats (mobi, azwX) is based on a Python module. Resource consumption is low. Distribution Foliate is published as Free Software, and therefore with its complete source code, under the terms of the GNU General Public License, version 3 or later. It was first published on 26 May 2019 on GitHub. Binary files are distributed primarily as Flatpak packages via Flathub. These can be installed on several major Linux distributions using on-board tools. It has been included in the default package repositories of several distributions, including Fedora, Arch and openSUSE. Additionally, there are Snap packages available through the Snap Store and a .deb file for Debian-based distributions, which can also be installed and updated via a Personal Package Archive under Ubuntu and its siblings. 
It can be also installed in an Android phone using Termux and VNC. External links Website GitHub page Sources References EPUB readers Linux text-related software
48695181
https://en.wikipedia.org/wiki/List%20of%20female%20scientists%20in%20the%2020th%20century
List of female scientists in the 20th century
This is a historical list dealing with women scientists in the 20th century. During this time period, women working in scientific fields were rare. Women at this time faced barriers in higher education and often denied access to scientific institutions; in the Western world, the first-wave feminist movement began to break down many of these barriers. Anthropology Katharine Bartlett (1907–2001), American physical anthropologist, museum curator Ruth Benedict (1887–1948), American anthropologist Anna Bērzkalne (1891–1956), Latvian folklorist and ethnographer Alicia Dussán de Reichel (born 1920), Colombian anthropologist Dina Dahbany-Miraglia (born 1938), American Yemini linguistic anthropologist, educator Bertha P. Dutton (1903–1994), anthropologist and ethnologist Zora Neale Hurston (1891–1960), American folklorist and anthropologist Marjorie F. Lambert (1908–2006), American archeologist and anthropologist who studied Southwestern Puebloan peoples Dorothea Leighton (1908–1989), American social psychiatrist, founded the field of medical anthropology Katharine Luomala (1907–1992), American anthropologist Margaret Mead (1901–1978), American anthropologist Grete Mostny (1914–1991), Austrian-born Chilean anthropologist and archaeologist Miriam Tildesley (1883–1979), British anthropologist Mildred Trotter (1899–1991), American forensic anthropologist Camilla Wedgwood (1901–1955), British/Australian anthropologist Alba Zaluar (1942–2019), Brazilian anthropologist specializing in urban anthropology Archaeology Sonia Alconini (born 1965), Bolivian archaeologist of the Formative Period of the Lake Titicaca basin Birgit Arrhenius (born 1932), Swedish archaeologist Dorothea Bate (1878–1951), British archaeologist and pioneer of archaeozoology. Alex Bayliss British archaeologist Crystal Bennett (1918–1987), British archaeologist whose research focused on Jordan Zeineb Benzina Tunisian archeologist Jole Bovio Marconi (1897–1986), Italian archaeologist and prehistorian Juliet Clutton-Brock (1933–2015), British zooarchaeologist who specialized in domestic animals Dorothy Charlesworth (1927–1981), British archaeologist and expert on Roman glass Lily Chitty (1893–1979), British archaeologist who specialized in the prehistoric history of Wales and the [west of England] Mary Kitson Clark (1905–2005), British archaeologist best known for her work on the Roman-British in Northern England Bryony Coles (born 1946) British prehistoric archaeologist Alana Cordy-Collins (1944–2015), American archaeologist specializing in Peruvian prehistory Rosemary Cramp (born 1929), British archaeologist whose research focuses on Anglo-Saxons in Britain Joan Breton Connelly American classical archaeologist Margaret Conkey (born 1943), American archaeologist Hester A. 
Davis, (1930–2014), American archaeologist who was instrumental in establishing public policy and ethical standards Frederica de Laguna (1906–2004), American archaeologist best known for her work on the archaeology of the Pacific Northwest and Alaska Kelly Dixon, American archaeologist specializing in the American West Janette Deacon (born 1939), South African archaeologist specializing in rock art conservation Elizabeth Eames (1918–2008), British archaeologist who was an expert on medieval tiles Anabel Ford (born 1951), American archaeologist Aileen Fox (1907–2005), British archaeologist known excavating prehistoric and Roman sites throughout the United Kingdom Alison Frantz (1903–1995), American archaeological photographer and Byzantine scholar Honor Frost (1917–2010), Turkish archaeologist who specialized in underwater archaeology Perla Fuscaldo (born 1941), Argentine egyptologist Elizabeth Baldwin Garland, American archaeologist Kathleen K. Gilmore (1914–2010), American archaeologist known for her research in Spanish colonial archaeology Dorothy Garrod (1892–1968), British archaeologist who specialized in the Palaeolithic period Roberta Gilchrist (born 1965), Canadian archaeologist specializing in medieval Britain Marija Gimbutas (1921–1994), Lithuanian archaeologist (Kurgan hypothesis) Hetty Goldman (1881–1972), American archaeologist and one of the first female archaeologists to conduct excavations in the Middle East and Greece Audrey Henshall (born 1927), British archaeologist and prehistorian Corinne Hofman (born 1959), Dutch archaeologist Cynthia Irwin-Williams (1936–1990), American archaeologist of the prehistoric Southwest Wilhelmina Feemster Jashemski (1910–2007), American archaeologist who specialized in the ancient site of Pompei Margaret Ursula Jones (1916–2001), British archaeologist best known for directing Britain's largest archaeological excavation at Mucking, Essex Rosemary Joyce (born 1956), American archaeologist who uncovered chocolate's archaeological record and studies Honduran pre-history Kathleen Kenyon (1906–1978), British archaeologist known for her research on the Neolothic culture in Egypt and Mesopotamia Alice Kober (1906–1950), American classical archaeologist best known for her research that led to the deciphering of Linear B Kristina Killgrove (born 1977), American bioarchaeologist Winifred Lamb (1894–1963), British archaeologist Mary Leakey (1913–1996), British archaeologist known for discovering Proconsul remains which are now believed to be human's ancestor Li Liu (archaeologist) (born 1953), Chinese-American archaeologist specializing in Neolithic and Bronze Age China Anna Marguerite McCann (1933–2017), American archaeologist known for her work in underwater archaeology Isabel McBryde (born 1934), Australian archaeologist Betty Meehan (born 1933), Australian anthropologist and archaeologist Audrey Meaney (born 1931), British archaeologist and expert on Anglo-Saxon England Margaret Murray (1863–1963), British-Indian Egyptologist and the first woman to be appointed a lecturer in archaeology in the United Kingdom Bertha Parker Pallan (1907–1978), American archaeologist known for being the first female Native American archaeologist Tatiana Proskouriakoff (1909–1985), Russian-American archaeologist who contributed significantly to deciphering the Maya hieroglyphs. 
Charlotte Roberts (born 1957), British bioarchaeologist Margaret Rule (1928–2015), British archaeologist led the excavation of the Tudor Warship Mary Rose' Elisabeth Ruttkay, (1926–2009), Austrian Neolithic and Bronze Age specialist Hanna Rydh (1891–1964), Swedish archaeologist and prehistorian Elizabeth Slater (1946–2014), British archaeologist who specialized in archaeometallurgy Julie K. Stein, Researches prehistoric humans in the Pacific Northwest Hoang Thi Than (born 1944), Vietnamese geological engineer and archaeologist Birgitta Wallace (born 1944), Swedish–Canadian archaeologist whose research focuses on Norse migration to North America. Zheng Zhenxiang (born 1929), Chinese archaeologist and Bronze Age specialist Astronomy Claudia Alexander (1959–2015), American planetary scientist Mary Adela Blagg (1858–1944), British astronomer Mary Brück (1925–2008), Irish astronomer, astrophysicist, science historian Margaret Burbidge (1919–2020), British astrophysicist Jocelyn Bell Burnell (born 1943), Northern Irish-British astrophysicist Annie Jump Cannon (1863–1941), American astronomer Janine Connes, French astronomer A. Grace Cook (1887–1958), British astronomer Heather Couper (1949–2020), British astronomer (astronomy popularisation, science education) Joy Crisp, American planetary scientist Nancy Crooker (born 1944), American space physicist Sandra Faber (born 1944), American astronomer Joan Feynman (1927–2020), American space physicist Pamela Gay (born 1973), American astronomer Vera Fedorovna Gaze (1899–1954), Russian astronomer (planet 2388 Gase an Gaze Crater on Venus are named for her) Julie Vinter Hansen (1890–1960), Danish astronomer Martha Haynes (born 1951), American astronomer Lisa Kaltenegger, Austrian/American astronomer Dorothea Klumpke (1861–1942), American-born astronomer Henrietta Leavitt (1868–1921), American astronomer (periodicity of variable stars) Evelyn Leland (c.1870–c.1930), American astronomer working at the Harvard College Observatory Priyamvada Natarajan, Indian/American astrophysicist Carolyn Porco (born 1953), American planetary scientist Cecilia Payne-Gaposchkin (1900–1978), British-American astronomer Ruby Payne-Scott (1912–1981), Australian radio astronomer Vera Rubin (1928–2016), American astronomer Charlotte Moore Sitterly (1898–1990), American astronomer Jill Tarter (born 1944), American astronomer Beatrice Tinsley (1941–1981), New Zealand astronomer and cosmologist Biology Nora Lilian Alcock (1874–1972), British plant pathologist Alice Alldredge, (born 1949) American oceanographer and researcher of marine snow, discover of Transparent Exopolymer Particles (TEP) and demersal hellon June Almeida (1930–2007), British virologist E. K. Janaki Ammal (1897–1984), Indian botanist Lena Clemmons Artz (1891-1976) American botanist Vandika Ervandovna Avetisyan (born 1928) Armenian botanist and mycologist Denise P. Barlow (1950–2017), British geneticist Yvonne Barr (1932–2016), British virologist (co-discovery of Epstein-Barr virus) Lela Viola Barton (1901–1967), American botanist Kathleen Basford (1916–1998), British botanist Gillian Bates (born 1956), British geneticist (Huntington's disease) Val Beral (born 1946), British–Australian epidemiologist Grace Berlin (1897–1982), American ecologist, ornithologist and historian Agathe L. 
van Beverwijk (1907–1963), Dutch mycologist Gladys Black (1909–1998), American ornithologist Idelisa Bonnelly (born 1931), Dominican Republic marine biologist Alice Middleton Boring (1883–1955), American biologist Annette Frances Braun (1911–1968), American entomologist, expert on microlepidoptera Victoria Braithwaite (1967–2019), British biologist and ichthyologist. Linda B. Buck (born 1947), American neuroscientist (Nobel prize in Physiology or Medicine 2004 for olfactory receptors) Hildred Mary Butler (1906–1975), Australian microbiologist Esther Byrnes (1867–1946), American biologist and science teacher Bertha Cady (1873–1956), American entomologist and educator Audrey Cahn (1905–2008) Australian microbiologist and nutritionist Eleanor Carothers (1882–1957), American zoologist, geneticist and cytologist Rachel Carson (1907–1964), American marine biologist and conservationist Edith Katherine Cash (1890–1992), American mycologist and lichenologist Ann Chapman (1937–2009), New Zealand biologist and limnologist Martha Chase (1927–2003), American molecular biologist Mary-Dell Chilton (born 1939), American molecular biologist Theresa Clay (1911–1995), English entomologist Edith Clements (1874–1971), American botanist and pioneer of botanical ecology Elzada Clover (1897–1980), American botanist Gerty Theresa Cori (1896–1957), American biochemist (Nobel Prize in Physiology or Medicine in 1947) Suzanne Cory (born 1942), Australian immunologist/cancer researcher Ursula M. Cowgill (1927–2015), American biologist and anthropologist Janet Darbyshire, British epidemiologist Gertrude Crotty Davenport (1866–1946), American zoologist and eugenicist Nina Demme (1902–1977), Russian arctic explorer and ornithologist Sophie Charlotte Ducker (1909–2004), Australian botanist Sylvia Earle (born 1935), American marine biologist, oceanographer and explorer Sophia Eckerson (1880–1954), American botanist Sylvia Edlund (1945–2014), Canadian botanist Charlotte Elliott (1883–1974), American plant physiologist Vera Danchakoff (1879 – about 1950) Russian anatomist, cell biologist and embryologist, "mother of stem cells" Rhoda Erdmann (1870–1935), German cell biologist Katherine Esau (1898–1997), German-American botanist Edna H. Fawcett (1879–1960), American botanist Catherine Feuillet (born 1965), French molecular biologist who was the first scientist to map the wheat chromosome 3B Victoria Foe (born 1945), American developmental biologist, and Research Professor at the University of Washington's Center for Cell Dynamics. Dian Fossey (1932–1985), American zoologist Faith Fyles (1875–1961), Canada's first botanical artist Birutė Galdikas (born 1946), German primatologist and conservationist Margaret Sylvia Gilliland (1917–1990), Australian biochemist Jane Goodall (born 1934), British biologist, primatologist Isabella Gordon (1901–1988), Scottish marine biologist Susan Greenfield (born 1950), British neurophysiologist (neurophysiology of the brain, popularisation of science) Charlotte Elliott (1883–1974), American plant physiologist Constance Endicott Hartt (1900–1984), American botanist Eliza Amy Hodgson (1888–1983), New Zealand botanist Lena B. 
Smithers Hughes (1905–1987), American botanist, developed strains of the Valencia orange Maria Isabel Hylton Scott (1889–1990), Argentine zoologist and malacologist Eva Jablonka (born 1952), Polish/Israeli biologist and philosopher Adele Juda (1888–1949), Austrian neurologist Marian Koshland (1921–1997), American immunologist Frances Adams Le Sueur (1919–1995), British botanist and ornithologist Margaret Reed Lewis (1881–1970), American cell biologist and embryologist Maria Carmelo Lico (1927–1985), Italo-Argentinian-Brazilian neuroscientist Gloria Lim (born 1930), Singaporean mycologist, first woman Dean of the Faculty of Science, University of Singapore Liliana Lubinska (1904–1990), Polish neuroscientist Marguerite Lwoff (1905–1979), French microbiologist and virologist Misha Mahowald (1963–1996), American neuroscientist Irene Manton (1904–1988), British botanist, cytologist Lynn Margulis (1938–2011), American biologist Deborah Martin-Downs, Canadian aquatic biologist, ecologist Sara Branham Matthews (1888–1962), American microbiologist Mary MacArthur, Canadian food scientist, dehydration and freezing of fresh foods Barbara McClintock (1902–1992), American geneticist, Nobel prize for Physiology or Medicine 1983 Eileen McCracken (1920–1988), Irish botanist Ruth Colvin Starrett McGuire (1893–1950), American plant pathologist Anne McLaren (1927–2007), British developmental biologist Ethel Irene McLennan (1891–1983), Australian botanist Eunice Thomas Miner (1899–1993), American biologist, executive director of the New York Academy of Sciences 1939–1967 Rita Levi-Montalcini (1909–2012), Italian neurologist (Nobel prize for Physiology or Medicine 1986 for growth factors) Marianne V. Moore (graduated 1975), aquatic ecologist Ann Haven Morgan (1882–1966), American zoologist Ann Nardulli (1948–2018), American endocrinologist Margaret Newton (1887–1971), Canadian plant phytopathologist and mycologist (pioneer in stem rust research) Christiane Nüsslein-Volhard (born 1942), German geneticist and developmental biologist (Nobel prize for Physiology or Medicine 1995 forhomeobox genes) Ida Shepard Oldroyd (1856–1940), American conchologist Daphne Osborne (1930–2006), British plant physiologist (plant hormones) Janina Oyrzanowska-Poplewska (1918–2001), Polish veterinarian and epizootiologist Mary Parke (1908–1989), British marine botanist specialising in phycology, the study of algae Jane E. Parker (born 1960), British botanist who researches the immune responses of plants Ruth Myrtle Patrick (1907–2013), American botanist, limnologist, and pollution expert Eva J. Pell (born 1948), American plant pathologist Theodora Lisle Prankerd (1878–1939), British botanist Isabella Preston (1881–1965), Canadian ornamental plant breeder (botanist) Joan Beauchamp Procter (1897–1931), British zoologist (herpetologist) Ragna Rask-Nielsen (1900–1998}, Danish biochemist Julie Hanta Razafimanahaka, Madagascar biologist, conservationist F. Gwendolen Rees (1906–1994), British parasitologist Jytte Reichstein Nilsson (1932–2020), Danish protozoologist Anita Roberts (1942–2006), American molecular biologist, "mother of TGF-Beta" Edith A. 
Roberts (1881–1977), American botanist and plant ecology pioneer Gudrun Ruud (1882–1958), Norwegian zoologist specializing in embryology Hazel Schmoll (1890–1990), American botanist Eva Schönbeck-Temesy (1930–2011) Austrian botanist of Hungarian descent Idah Sithole-Niang (born 1957), biochemist focusing on cowpea production and disease Florence Wells Slater (1864–1941), American entomologist Margaret A. Stanley, British virologist and epithelial biologist Phyllis Starkey (born 1947), British biochemist and medical researcher Magda Staudinger () (1902–1997), Latvian-German biologist and chemist Sarah Stewart (1905–1976), Mexican American microbiologist (discovered the Polyomavirus) Ragnhild Sundby (1922–2006), Norwegian zoologist Felicitas Svejda (1920–2016), Canadian botanist (rose breeder) Maria Telkes (1900–1995), Hungarian-American biophysicist Lois H. Tiffany (1924–2009), American mycologist Amelia Tonon (1899–1961), Italian entomologist Lydia Villa-Komaroff (born 1947), Mexican American molecular cellular biologist Karen Vousden (born 1957), British cancer researcher Elisabeth Vrba (born 1942), South African paleontologist Marvalee Wake (born 1939), American biologist researching limbless amphibians, educator Erna Walter (1893–1992), German botanist Jane C. Wright (1919–2013), American oncologist Kono Yasui (1880–1971), Japanese cytologist Eleanor Anne Young (1925–2007), American nutritionist and educator Mary Sophie Young (1872–1919), American botanist Chemistry Maria Abbracchio (born 1956) Italian pharmacologist who works with purinergic receptors and identified GPR17. On Reuter's most-cited list since 2006. Marian Ewurama Addy (1942–2014) Ghanaian biochemist, specializing in herbal medicine; first woman in Ghana to attain the rank of full professor in the natural sciences; winner of the UNESCO Kalinga Prize in 1999 Barbara Askins (born 1939), American chemist Karin Aurivillius (1920–1982), Swedish chemist and crystallographer Alice Ball (1892–1916), American chemist Ulrike Beisiegel (born 1952), German biochemist, researcher of liver fats and first female president of the University of Göttingen Anne Beloff-Chain (1921–1991), British biochemist Jeannette Brown (born 1934), medicinal chemist, writer, educator Astrid Cleve (1875–1968), Swedish chemist Seetha Coleman-Kammula (born 1950) Indian chemist and plastics designer, turned environmentalist Maria Skłodowska-Curie (1867–1934), Polish-French chemist (pioneer in radiology, discovery of polonium and radium), Nobel prize in physics 1903 and Nobel prize in chemistry 1911 Mary Campbell Dawbarn (1902–1982), Australian biochemist Moira Lenore Dynon (1920–1976), Australian chemist Gertrude B. Elion (1918–1999), American biochemist (Nobel prize in Physiology or Medicine 1988 for drug development) Claire E. Eyers (fl. 2004), British mass spectrometist Nellie Ivy Fisher (1907–1995), London-born industrial chemist, first woman to lead a division of Kodak in Australia Gwendolyn Wilson Fowler (1907–1997), American chemist and first licensed African American pharmacist in Iowa Rosalind Franklin (1920–1957), British physical chemist and crystallographer Ellen Gleditsch (1879–1968), Norwegian radiochemist Jenny Glusker (born 1931), British biochemist, educator Emīlija Gudriniece (1920–2004), Latvian chemist and academic Frances Mary Hamer (1894–1980), British chemist who specialized in photographic sensitization compounds Anna J. 
Harrison (1912–1998), American organic chemist Dorothy Crowfoot Hodgkin (1910–1994), British crystallographer, Nobel prize in chemistry 1964 Clara Immerwahr (1870–1915), German chemist Allene Jeanes (1906–1995), American chemical researcher who developed Dextran and Xanthan gum Irène Joliot-Curie (1897–1956), French chemist and nuclear physicist, Nobel Prize in Chemistry 1935 Chika Kuroda (1884–1968), Japanese chemist Stephanie Kwolek (1923–2014), American chemist, inventor of Kevlar Lidija Liepiņa (1891–1985), Latvian chemist, one of the first Soviet doctorates in chemistry. Kathleen Lonsdale (1903–1971), British crystallographer Grace Medes (1886–1967), American biochemist Maud Menten (1879–1960), Canadian biochemist Christina Miller (1899–2001) Scottish chemist, one of the first women elected to Royal Society of Edinburgh Catherine J. Murphy (born 1964), American chemist Muriel Wheldale Onslow (1880–1932), British biochemist Helen T. Parsons (1886–1977), American biochemist Nellie M. Payne (1900–1990), American entomologist and agricultural chemist Eva Philbin (1914–2005), Irish chemist Darshan Ranganathan (1941–2001), Indian organic chemist Mildred Rebstock (1919–2011), American pharmaceutical chemist Elizabeth Rona, (1890–1981) Hungarian (naturalized American) nuclear chemist and polonium expert Patsy Sherman (1930–2008), American chemist, co-inventor of Scotchgard Marija Šimanska (1922–1995), Latvian chemist Taneko Suzuki (1926–2020), Japanese biochemist who created Marinbeef, a product made of fish that tasted like beef. Ida Noddack Tacke (1896–1978), German chemist and physicist Grace Oladunni Taylor (born 1937), Nigerian chemist 2nd woman inducted into the Nigerian Academy of Science Jean Thomas (born 1942), British biochemist (chromatin) Michiyo Tsujimura (1888–1969), Japanese biochemist, agricultural scientist Joanna Maria Vandenberg (born 1938), Dutch solid state chemist and crystallographer Elizabeth Williamson, English pharmacologist and herbalist Ada Yonath (born 1939), Israeli crystallographer, Nobel prize in Chemistry 2009 Daisy Yen Wu (1902–1993), first Chinese woman to work as a biochemist Geology Zonia Baber (1862–1955), American geographer and geologist Karen Callisen (1882–1970), Danish geologist Inés Cifuentes (1954–2014), American seismologist and educator Moira Dunbar (1918–1999), Scottish-Canadian glaciologist Elizabeth F. Fisher (1872–1941), American geologist Regina Fleszarowa (1888–1969), Polish geologist Winifred Goldring (1888–1971), American paleontologist Eileen Hendriks (1887–1978), British geologist Edith Kristan-Tollmann (1934–1995), Austrian geologist and paleontologist Dorothée Le Maître (1896–1990), French paleontologist Karen Cook McNally (1940–2014), American seismologist Inge Lehmann (1888–1993) Danish seismologist who discovered Earth's solid inner core Marcia McNutt (born 1951), American geophysicist Ellen Louise Mertz (1896–1987), Danish engineering geologist Ruth Schmidt (1916–2014), American geologist Ethel Shakespear (1871–1946), English geologist Kathleen Sherrard (1898–1975), Australian geologist and palaeontologist Ethel Skeat (1865–1939), English paleontologist and geologist Marjorie Sweeting (1920–1994), British geomorphologist Marie Tharp (1920–2006), American geologist and oceanographic cartographer Elsa G. Vilmundardóttir (1932–2008), Iceland's first female geologist Marguerite Williams (1895–1991), American geologist Alice Wilson (1881–1964), Canadian geologist and paleontologist Elizabeth A. 
Wood (1912–2006), American crystallographer and geologist Mathematics or computer science Hertha Marks Ayrton (1854–1923), British mathematician and electrical engineer (electric arcs, sand ripples, invention of several devices, geometry) Cecilia Berdichevsky (1925–2010), pioneering Argentinian computer scientist Anita Borg (1949–2003), American computer scientist, founder of the Institute for Women and Technology Mary L. Cartwright (1900–1998), British mathematician Amanda Chessell, British computer scientist Ingrid Daubechies (born 1954), Belgian mathematician (Wavelets – first woman to receive the National Academy of Sciences Award in Mathematics) Tatjana Ehrenfest-Afanassjewa (1876–1964), Russian/Dutch mathematician Deborah Estrin (born 1959), American computer scientist Vera Faddeeva (1906–1983), Russian mathematician, one of the first to publish works on linear algebra. Shafi Goldwasser (born 1959), American-Israeli computer scientist. Evelyn Boyd Granville (born 1924), American mathematician, second African-American woman to get a PhD in mathematics Marion Cameron Gray (1902–1979), Scottish mathematician Barbara Grosz (born 1948), American computer scientist; 1993 President of the AAAI Milly Koss (1928–2012), American computing pioneer Bryna Kra (born 1966), American mathematician Margaret Hamilton (born 1936), American computer scientist, systems engineer, and business owner. Frances Hardcastle (1866–1941), mathematician, founding member of the American Mathematical Society. Julia Hirschberg, American computer scientist and computational linguist Betty Holberton (1927–2001), American computer programmer Grace Hopper (1906–1992), American computer scientist Margarete Kahn (1880–1942), German mathematician Lyudmila Keldysh (1904–1976), Russian mathematician known for set theory and geometric topology Marta Kwiatkowska (born 1957), Polish-British computer scientist Marguerite Lehr (1898–1987), American mathematician Margaret Anne LeMone (born 1946), mathematician and atmospheric scientist Barbara Liskov (born 1939), American computer scientist for whom the Liskov substitution principle is named Margaret Millington (1944–1973), English mathematician Mangala Narlikar (graduated 1962), Indian mathematician Klara Dan von Neumann (1911–1963), Hungarian computer scientist Frances Northcutt (born 1943), American engineer Rózsa Péter (1905–1977), Hungarian mathematician Cicely Popplewell (1920–1995), British software engineer, 1960s Karen Sparck Jones (1935–2007), British computer scientist Dorothy Vaughan (1910–2008), American mathematician, worked at NACA's Langley Memorial Aeronautical Laboratory Dorothy Maud Wrinch (1894–1976), British mathematician and theoretical biochemist Jeannette Wing (born 1956), computer scientist, Microsoft Corporate Vice President Maryam Mirzakhani (1977–2017), Iranian mathematician, first female recipient of the Fields Medal Karen Uhlenbeck (born 1942), American mathematician and founder of modern geometric analysis Science education Kathleen Jannette Anderson (1927–2002), Scottish biologist Susan Blackmore (born 1951), British science writer (memetics, evolutionary theory, consciousness, parapsychology) Florence Annie Yeldham (1877–1945), British school teacher and historian of arithmetic Engineering Zhenan Bao (born 1970), American chemical engineer and materials scientist Frances Bradfield (1896–1967), British aeronautical engineer Jayne Bryant, Engineering Director for BAE Systems Nance Dicciani (born 1947), American chemical engineer Ana María Flores (born
1952), Bolivian engineer Kate Gleason (1865–1933), American engineer Ida Holz (born 1935), Uruguayan engineer Frances Hugle (1927–1968), American engineer Julia King, Baroness Brown of Cambridge (born 1954), British engineer Elsie MacGill (1907–1980), First Canadian female engineer Florence Violet McKenzie (1890 or 1892–1982), first female electrical engineer in Australia Concepción Mendizábal Mendoza (1893–1985), first female civil engineer in Mexico Maria Tereza Jorge Pádua (born 1943), Brazilian ecologist Katharina Paulus (1868–1953), German aeronaut Molly Shoichet, Canadian biomedical engineer Laura Anne Willson (1877–1942), British engineer and suffragette Paula T. Hammond (born 1963) American chemical engineer and material scientist Medicine Phyllis Margery Anderson (1901–1957), Australian pathologist Virginia Apgar (1909–1974), American obstetrical anesthesiologist (inventor of the Apgar score) Heather Ashton (1929–2019), English psychopharmacologist Anna Baetjer (1899–1984), American physiologist and toxicologist Roberta Bondar (born 1945), Canadian, space medicine Dorothy Lavinia Brown (1919–2004), American surgeon Audrey Cahn (1905–2008), Australian nutritionist and microbiologist Margaret Chan (born 1947), Chinese-Canadian health administrator; director of the World Health Organization Evelyn Stocking Crosslin (1919–1991), American physician Eleanor Davies-Colley (1874–1934), British surgeon (first female FRCS) Claire Fagin (born 1926), American health-care researcher Sophia Getzowa (1872–1946), Belarusian-Israeli pathologist Esther Greisheimer (1891–1982), American academic and medical researcher L. Ruth Guy (1913–2006), American academic and pathologist Janina Hurynowicz (1894–1967), Polish doctor, neurophysiologist, resistance member Karen C. Johnson (born 1955) American physician and clinical trials specialist who is one of Reuter's most cited scientists Krista Kostial-Šimonović (1923–2018) Croatian physiologist and heavy metals expert Mary Jeanne Kreek (1937–2021), American neurobiologist Elise L'Esperance (1878–1958), American pathologist Elaine Marjory Little (1884–1974), Australian pathologist Anna Suk-Fong Lok, Chinese/American hepatologist, wrote WHO and AASLD guidelines for emerging countries and liver disease Eleanor Josephine Macdonald (1906–2007) pioneer American cancer epidemiologist and cancer researcher Catharine Macfarlane (1877–1969), American obstetrician and gynecologist Charlotte E. Maguire (1918—2014), Florida pediatrician and medical school benefactor Louisa Martindale (1872–1966), British surgeon Helen Mayo (1878–1967), Australian doctor and pioneer in preventing infant mortality Frances Gertrude McGill (1882–1959), Canadian forensic pathologist Eleanor Montague (1926–2018), American radiologist and radiotherapist Anne B. Newman (born 1955), US Geriatrics & Gerontology expert Antonia Novello (born 1944), Puerto Rican physician and Surgeon General of the United States Dorothea Orem (1914–2007), Nursing theorist Ida Ørskov (1922–2007), Danish bacteriologist May Owen (1892–1988), Texas pathologist, discovered talcum powder used on surgical gloves caused infection and peritoneal scarring Angeliki Panajiotatou (1875–1954), Greek physician and microbiologist Kathleen I. Pritchard (born 1956), Canadian oncologist, breast cancer researcher and noted as one of Reuter's most cited scientists. 
Frieda Robscheit-Robbins (1888–1973), German-American pathologist Ora Mendelsohn Rosen (1935–1990), American medical researcher Una Ryan (born 1941), Malaysian-born American heart disease researcher, biotech vaccine and diagnostics maker/marketer Una M. Ryan (born 1966), patented DNA test identifying the protozoan parasite Cryptosporidium Velma Scantlebury (born 1955), first woman of African descent to become a transplant surgeon in the U.S. Lise Thiry (born 1921), Belgian virologist, senator Helen Rodríguez Trías (1929–2001), Puerto Rican American pediatrician and advocate for women's reproductive rights Marie Stopes (1880–1958), British paleobotanist and pioneer in birth control Elizabeth M. Ward, American epidemiologist and head of the Epidemiology and Surveillance Research Department of the American Cancer Society Elsie Widdowson (1908–2000), British nutritionist Fiona Wood (born 1958), British-Australian plastic surgeon Meteorology Rely Zlatarovic (fl. 1920), Austrian-trained meteorologist Nadia Zyncenko (1948–), Argentine meteorologist Paleoanthropology Mary Leakey (1913–1996), British paleoanthropologist Suzanne LeClercq (1901–1994), Belgian paleobotanist and paleontologist Betty Kellett Nadeau (1906–?), American paleontologist Physics Faye Ajzenberg-Selove (1926–2012), American nuclear physicist (2007 US National Medal of Science) Giuseppina Aliverti (1894–1982), Italian geophysicist Betsy Ancker-Johnson (1927–2020), American plasma physicist Alice Armstrong, American physicist Marion Asche (1935–2013), German physicist and researcher of solid state physics Sonja Ashauer (1923–1948), first Brazilian woman to earn a doctorate in physics Milla Baldo-Ceolin (1924–2011), Italian particle physicist Marietta Blau (1894–1970), German experimental particle physicist Lili Bleeker (1897–1985), Dutch physicist Katharine Blodgett (1898–1979), American thin-film physicist Christiane Bonnelle (died 2016), French spectroscopist Tatiana Birshtein (born 1928), molecular scientist specializing in the physics of polymers Margrete Heiberg Bose (1866–1952), Danish physicist (active in Argentina from 1909) Jenny Rosenthal Bramley (1909–1997), Lithuanian-American physicist Harriet Brooks (1876–1933), Canadian radiation physicist A. Catrina Bryce (born 1956), Scottish laser scientist Nina Byers (1930–2014), American physicist Yvette Cauchois (1908–1999), French physicist Yvonne Choquet-Bruhat (born 1923), French theoretical physicist Kwang Hwa Chung (born 1948), Korean physicist Hilda Cid Araneda (born 20 February 1933), Chilean biophysicist who excelled in the field of crystallography. Patricia Cladis (1937–2017), Canadian/American physicist Esther Conwell (1922–2014), American physicist, semiconductors Jane Dewey (1900–1979), American physicist Cécile DeWitt-Morette (1922–2017), French mathematician and physicist Louise Dolan (born 1950), American mathematical physicist, theoretical particle physics and superstring theory Nancy M. Dowdy (born 1938), American nuclear physicist, arms control Mildred Dresselhaus (1930–2017), American physicist, graphite, graphite intercalation compounds, fullerenes, carbon nanotubes, and low-dimensional thermoelectrics Helen T.
Edwards (1936–2016), American physicist, Tevatron Magda Ericson (born 1929), French nuclear physicist Edith Farkas (1921–1993), Hungarian-born New Zealand meteorologist who measured ozone levels Joan Feynman (1927–2020) American physicist Ursula Franklin (1921–2016), Canadian metallurgist, research physicist, author and educator Judy Franz (born 1938), American physicist and educator Joan Maie Freeman (1918–1998), Australian physicist Phyllis S. Freier (1921–1992), American astrophysicist Mary K. Gaillard (born 1939), American theoretical physicist Fanny Gates (1872–1931), American physicist Claire F. Gmachl (born 1967), American physicist Maria Goeppert-Mayer (1906–1972), German-American physicist, Nobel Prize in Physics 1963 Gertrude Scharff Goldhaber (1911–1998), American nuclear physicist Sulamith Goldhaber (1923–1965), American high-energy physicist and molecular spectroscopist Gail Hanson (born 1947), American high-energy physicist Margrete Heiberg Bose (1866–1952), Danish/Argentine physicist Evans Hayward (1922–2020), American physicist Caroline Herzenberg (born 1932), American physicist Hanna von Hoerner (1942–2014), German astrophysicist Helen Schaeffer Huff (1883-1913), American physicist Shirley Jackson (born 1946), American nuclear physicist, president of Rensselaer Polytechnic Institute, first African-American woman to earn a doctorate from M.I.T. Bertha Swirles Jeffreys (1903–1999), British physicist Lorella M. Jones (1943–1995), American particle physicist Carole Jordan (born 1941), British solar physicist Renata Kallosh (born 1943), Russian/American theoretical physicist Berta Karlik (1904–1990), Austrian physicist Bruria Kaufman (1918–2010), American theoretical physicist Elizaveta Karamihailova (1897–1968), Bulgarian nuclear physicist Marcia Keith (1859–1950), American physicist Ann Kiessling (born 1942), American physicist Margaret G. 
Kivelson (born 1928), American space physicist and planetary scientist Noemie Benczer Koller (born 1933) Ninni Kronberg (1874–1946), Swedish physiologist in nutrition Doris Kuhlmann-Wilsdorf (1922–2010) Elizabeth Laird (physicist) (1874–1969) Juliet Lee-Franzini (1933–2014), American particle physicist Inge Lehmann (1888–1993), Danish seismologist and geophysicist Kathleen Lonsdale (1903–1971), Irish crystallographer Barbara Kegerreis Lunde (born 1937), American physicist Margaret Eliza Maltby (1860–1944), American physicist Mileva Maric (1875–1948), Serbian physicist, first wife of Albert Einstein Nina Marković, Croatian physicist and professor Helen Megaw (1907–2002), Irish crystallographer Lise Meitner (1878–1968), Austrian nuclear physicist (pioneering nuclear physics, discovery of nuclear fission, protactinium, and the Auger effect) Kirstine Meyer (1861–1941) Luise Meyer-Schutzmeister (1915–1981) Anna Nagurney, Canadian-born US operations researcher/management scientist focusing on networks Chiara Nappi, Italian American physicist Ann Nelson (1958–2019), American physicist Marcia Neugebauer (born 1932), American geophysicist Gertrude Neumark (1927–2010) Ida Tacke Noddack (1896–1978) Emmy Noether (1882–1935), German mathematician and theoretical physicist (symmetries and conservation laws) Marguerite Perey (1909–1975) Melba Phillips (1907–2004) Agnes Pockels (1862–1935) Pelageya Polubarinova-Kochina (1899–1999), Russian physicist Edith Quimby (1891–1982) Helen Quinn (born 1943), American particle physicist Lisa Randall (born 1962), American physicist Myriam Sarachik (1933–2021), American physicist Bice Sechi-Zorn (1928–1984), Italian/American nuclear physicist Anneke Levelt Sengers (born 1929), Dutch physicist specializing in the critical states of fluids Hertha Sponer (1895–1968), German/American physicist and chemist Isabelle Stone (1868–1944), American thin-film physicist and educator Edith Anne Stoney (1869–1938), Anglo-Irish medical physicist Nina Vedeneyeva (1882–1955), Russian geological physicist Afërdita Veveçka Priftaj (1948–2017), Albanian physicist Katharine Way (1903–1995), American nuclear physicist Mariana Weissmann (born 1933), Argentine physicist, computational physics of condensed matter Lucy Wilson (1888–1980), American physicist, working on optics and perception Leona Woods (1919–1986), American nuclear physicist Chien-Shiung Wu (1912–1997), Chinese-American physicist (nuclear physics, (non) conservation of parity) Sau Lan Wu, Chinese-American particle physicist Xide Xie (Hsi-teh Hsieh) (1921–2000), Chinese physicist Rosalyn Sussman Yalow (1921–2011), American medical physicist (Nobel prize in Physiology or Medicine 1977 for radioimmunoassay) Fumiko Yonezawa (1938–2019), Japanese theoretical physicist Toshiko Yuasa (1909–1980), Japanese nuclear physicist Psychology Mary Ainsworth (1913–1999), American-Canadian developmental psychologist, inventor of the "Strange Situation" procedure Martha E. Bernal (1931–2001), Mexican-American clinical psychologist, first Latina to receive a psychology PhD in the United States Lera Boroditsky, American psychologist Ludmilla A. Chistovich (1924–2006), Russian speech scientist Mamie Clark (1917–1983), African-American psychologist active in the civil rights movement Helen Flanders Dunbar (1902–1959), important early figure in U.S.
psychosomatic medicine Tsuruko Haraguchi (1886–1915), Japanese psychologist Margaret Kennard (1899–1975), who did pioneering research on age effects on brain damage, which produced early evidence for neuroplasticity Grace Manson (1893–1967), occupational psychologist Rosalie Rayner (1898–1935), American psychology researcher Marianne Simmel (1923–2010), American psychologist, made important contributions to research on social perception and phantom limbs. Davida Teller (1938–2011), American psychologist, known for work on development of the visual system in infants. Nora Volkow (born 1956), Mexican-American psychiatrist, director of the National Institute on Drug Abuse (NIDA) Margo Wilson (1945–2009), Canadian evolutionary psychologist Catherine G. Wolf (1947–2018), American psychologist and expert in human-computer interaction See also Index of women scientists articles List of female mathematicians List of female Nobel laureates Women in computing Women in engineering Women in geology Women in medicine Notes References External links Contributions of 20th Century Women to Physics
38501570
https://en.wikipedia.org/wiki/List%20of%20University%20of%20California%2C%20Berkeley%20alumni%20in%20business
List of University of California, Berkeley alumni in business
This page lists notable alumni and students of the University of California, Berkeley. Alumni who also served as faculty are listed in bold font, with degree and year. Notable faculty members are in the article List of UC Berkeley faculty. Founders and co-founders Tom Anderson, B.A. 1998 – co-founder of social networking website MySpace (acquired by News Corporation for $580 million) Brian Behlendorf – co-founder of the Apache Software Foundation, Mozilla Foundation board member, co-founder and CTO of CollabNet Joan Blades, B.A. 1977 – co-founder of software company Berkeley Systems (acquired by Sierra Online for $13 million), co-founder of political activist group MoveOn.org Richard C. Blum, B.S. 1958, M.B.A. 1959 – founder of private equity firm Blum Capital and the American Himalayan Foundation, Regent of the University of California Richard Bolt, B.A. 1933, M.A. 1937, PhD 1939 – co-founder of ARPANET developer Bolt, Beranek and Newman (BBN) Eric Brewer, B.S. EECS 1989 – co-founder of web search engine company Inktomi (acquired by Yahoo! for $235 million), director of Intel Labs Berkeley; lead researcher at Google Gary Chevsky, attended for undergraduate degree 1990–1994 – co-founder and chief architect of web search engine company Ask Jeeves (known now as Ask.com, and acquired by InterActive Corp for $1.9 billion), Senior VP at Symantec and YouSendIt Frederick Gardner Cottrell, B.S. Chemistry, 1896 – founder of patent holding company Research Corporation (which held the rights to the patent for Ernest O. Lawrence's cyclotron); inventor of the electrostatic precipitator, which removes pollution from factory exhaust fumes; inducted into the National Inventors Hall of Fame in 1992 Ed Crane, B.S. 1967 – founder of the Cato Institute David Culler, B.A. 1980 – Chair of the Department of Computer Science at UC Berkeley, associate Chair of the Electrical Engineering and Computer Sciences (UC Berkeley), and Associate CIO of the College of Engineering (UC Berkeley); co-founder of smart grid monitoring company Arch Rock (acquired by Cisco Systems) Weili Dai, B.A. Computer Science 1984 – co-founder (with Sehat Sutardja MS 1983, PhD 1988 EECS and Pantas Sutardjai MS 1983, PhD 1988) of NASDAQ-100 broadband technology company Marvell Technology Group; namesake of Sutardja-Dai Hall on the UC Berkeley campus Lee Felsenstein, B.S. EECS 1972 – founder of Community Memory, designer of Osborne 1 computer, mediator of Homebrew Computer Club, from which would emerge 23 companies, including Apple Inc. Charles H. Ferguson, B.A. 1978 – co-founder of Vermeer Technologies Incorporated (acquired by Microsoft for $133 million), founder and president of Representational Pictures, winner of an Academy Award for Best Documentary for Inside Job (2010), Academy Award nomination for the documentary film No End in Sight (2007), former fellow at the Brookings Institution, lifelong member of the Council on Foreign Relations (also listed in "Academy Awards" section) Donald Fisher, B.S. 1951 – founder and former CEO of NYSE-listed S&P 500 clothing retailer The Gap, the largest apparel retailer in the United States Rob Fulop, B.S. CS 1980 – co-founder of video game companies Imagic and PF Magic (creator of first virtual pets such as Dogz), Atari engineer, developed Missile Command and Night Driver Jean Paul Getty (attendee; transferred to the University of Oxford) – founder of the Getty Oil Company Steve Gibson (attended) – founder of software security company Gibson Research Corporation and co-host of Security Now! 
Edward Ginzton, B.S. 1936, M.S. 1937 – researcher in klystron tubes, co-founder of Varian Associates (which later split into three companies: NYSE-listed Varian Medical Systems; Varian Semiconductor, acquired by Applied Materials for $4.9 billion; and Varian, Inc., acquired for 1.5 billion by Agilent Technologies) Diane Greene, M.S. CS 1988 – co-founder (with Mendel Rosenblum M.S. 1989, PhD 1992 and Edward Wang BS EECS 1983, MS 1988, PhD 1994) of NYSE-listed company VMWare Garrett Gruener, M.A. Political Science 1977 – co-founder of web search engine company Ask Jeeves (known now as Ask.com, and acquired by InterActive Corp for $1.9 billion) Ashraf Habibullah, S.E., M.S. 1970 – co-creator of the first computer-based structural-engineering applications and founder, President, and CEO of the structural-engineering software company Computers and Structures, Inc. John Hanke, M.B.A. 1996 – founder and CEO of Keyhole, Inc. (acquired by Google, renamed to Google Earth); founder of video game company Niantic (which created Pokémon Go) William Harlan, B.A. 1963 – Founder of Harlan Estate, Bond, the Napa Valley Reserve, a cult wine Cabernet Sauvignon producer William Haseltine, B.A. 1966 – founder of Cambridge BioSciences and Dendreon Corp. F. Warren Hellman, B.A. 1955 – founder of Hellman & Friedman and Matrix Partners, former chairman, head of Investment Banking Division at Lehman Brothers; founder of the Hardly Strictly Bluegrass Festival, founder of The Bay Citizen (which later merged with the Center for Investigative Reporting) Mike Homer, B.S. 1981 – co-founder and former CEO of networking company Kontiki (acquired by VeriSign for $62 million) David T. Hon B.S. 1964 – physicist and founder of Dahon folding bicycles Chenming Hu, M.S. EE, PhD EE – Distinguished Professor of Microelectronics at UC Berkeley, co-founder and chairman of Celestry Design Technologies (acquired by Cadence Design Systems for over $100 million); recipient of the Phil Kaufman Award; co-inventor of the 3D transistor (the FinFET) Kai Huang, B.A. 1994 – co-founder (with Charles Huang BA 1992) and president of video game company RedOctane (publisher of Guitar Hero and acquired by Activision for $99.9 million) Jess S. Jackson, J.D. 1974 – founder of Kendall Jackson Wine Estates Bill Joy, M.S. 1982 – co-founder of computer software and hardware manufacturer Sun Microsystems (acquired by Oracle Corporation for $7.4 billion) Gene Kan, B.S. 1997 – founder of distributed search engine InfraSearch (acquired by Sun Microsystems for $12 million) Victor Koo, co-founder of Chinese video website Youku, whose American depositary shares were listed on the NYSE in 2010 and which was acquired by Alibaba in 2016 for $5.4 billion; formerly president of Chinese internet company Sohu Anthony Levandowski, B.S. Industrial Engineering 2002, M.S. IEOR 2003 – co-founder (with Andrew Schultz MS 2006 and Pierre-Yves Droz MS) of the secretive robotics hardware company 510 Systems, which developed the technologies for Google Street View and Google Car, and was acquired by Google) David R. Liu, PhD 1999 – co-founder of Editas Medicine Daniel S. Loeb, attended – founder of hedge fund Third Point Management Thomas J. Long, B.S. 1932 – founder of pharmaceutical retailer Longs Drugs (acquired by CVS Caremark for $2.54 billion) Samuel Madden, PhD – co-founder of Vertica Systems (acquired by Hewlett-Packard for $350 million, 2005 Technology Review Top 35 Brian Maxwell, B.A. 
1975 – co-founder (with Jennifer Maxwell, BS 1988) of energy bar food company PowerBar (acquired by Nestlé for $375 million); namesake of the Maxwell Family Field on the UC Berkeley campus Nick McKeown, MS 1992, PhD 1995 – co-founder and former CTO of Abrizio (acquired by PMC-Sierra for $400 million), co-founder and former CEO of Nemo Systems (acquired by Cisco Systems for $12.5 million), co-founder of software-defined networking company Nicira Networks (acquired by VMWare for $1.26 billion); Kleiner Perkins Professor of Computer Science at Stanford University Paul Merage, B.S. Business 1966, MBA 1968 – co-founder and former CEO of Hot Pockets frozen food company Chef America Inc. (acquired by Nestlé for $2.6 billion) Alan Miller, B.S. EECS 1973 – co-founder of first independent video game publisher Activision (known now as the NASDAQ-100 video game company Activision Blizzard), co-founder and former CEO of video game company Accolade (acquired by Infogrames for $60 million) Gordon Moore, B.S. 1950 – co-founder of NASDAQ-100 company Intel, originator of Moore's Law Lowell North, B.S .1951 – gold medalist at the 1968 Summer Olympics in Mexico City, founder of North Sails ("the world's leading sailmaker") Michael Olson, B.A. 1991, M.A. 1992 – founder and CEO of commercial Apache Hadoop vendor Cloudera; former CEO of database software company Sleepycat Software (acquired in 2006 by Oracle Corporation), co-author of database software BerkeleyDB Pierre Omidyar, attended to complete his undergraduate degree in computer science – founder of NASDAQ-100 web auction site eBay Kim Polese, B.S. 1984 (biophysics) – CEO of software company SpikeSource; original product manager of the Java at Sun Microsystems; co-founder and former CEO of software company Marimba (acquired by BMC Software for $239 million) Lars Rasmussen, PhD 1992 – co-founder of Where 2 Technologies (which was acquired by Google and renamed to Google Maps); co-founder of Google Wave; researcher at Facebook Warren Robinett, M.S. C.S. 1976 – originator of Easter eggs, co-founder of edutainment software company The Learning Company (acquired by Mattel for $3.8 billion) Andrew Rudd, MS 1972, MBA 1976, PhD 1978 – co-founder (with UC Berkeley professor Barr Rosenberg) and chairman and CEO of Barra Inc. (acquired by Morgan Stanley for $816.4 million and known as MSCI) John Schaeffer, 1971 – founder of NASDAQ-listed solar energy retailer Real Goods Solar and the Solar Living Center Jim Simons, PhD 1972 – founder of $84 billion hedge fund Renaissance Technologies; mathematician; philanthropist Nat Simons, B.S 1989, M.S. 1994 – founder of investment management firm Meritage Group ($12.3 billion AUM); founder of Prelude Ventures; businessman; philanthropist Charles Simonyi, B.S. 1972 – founder of Intentional Software; former head of Microsoft's flagship Office applications; fifth space tourist; at Xerox PARC he created the first WYSIWYG word processor, Bravo; joined Microsoft to spread the WYSIWYG and computer mouse gospel; originally from Hungary, he is the "Hungarian" in Hungarian notation, which he created James Solomon, B.S. EE, M.S. EE – founder of electronic design automation company SDA Systems (became NASDAQ-listed Cadence Design Systems); recipient of the Phil Kaufman Award, the "Nobel Prize" of the electronic design industry" Masayoshi Son, B.A. 
1980 – founder and CEO of TYO-listed Japanese telecommunications and media giant Softbank, venture capital firm Softbank Capital Jacki Sorensen – founder of Aerobic Dancing, Inc., the first aerobics classes Dawn Song, PhD – founder of Oasis Labs Timothy Springer, B.A. biochemistry 1971 – Immunologist and professor, Harvard Medical School and Boston Children’s Hospital; founder of biotech companies LeukoSite (acquired in 1998 by Millennium Pharmaceuticals for $635 million), Morphic Therapeutic, and Scholar Rock; main investor, Moderna and Selecta Biosciences. Cornelius Vander Starr (attended) – founder of AIG Corporation Paul Stephens, B.S. 1967, M.B.A. 1969 – investment banker, co-founder of Robertson Stephens & Company Sehat Sutardja, M.S. 1983, PhD 1988 EECS – co-founder (with Weili Dai BA Computer Science 1984 and Pantas Sutardjai MS 1983, PhD 1988) of NASDAQ-100 broadband technology company Marvell Technology Group; namesake of Sutardja-Dai Hall on the UC Berkeley campus Jon F. Vein, B.A., founder of MarketShare (acquired for $450 million by Neustar); Emmy Award-winning producer Cher Wang, M.A. 1981 – founder and chairperson of TWSE-listed smartphone manufacturer HTC Corporation and TWSE-listed electronics manufacturer VIA Technologies Edward Wang, B.S. EECS 1983, M.S. 1988, PhD 1994 – co-founder (with Diane Greene MS CS 1988 and Mendel Rosenblum MS 1989, PhD 1992) and Principal Engineer of NYSE-listed software company VMware Alice Waters, B.A. 1967 – celebrity chef, founder of Chez Panisse, originator of the California cuisine; 2015 National Humanities Medal recipient for "celebrating the bond between the ethical and the edible. As a chef, author, and advocate, Ms. Waters champions a holistic approach to eating and health and celebrates integrating gardening, cooking, and education, sparking inspiration in a new generation"; member of the American Academy of Arts and Sciences; recipient of five James Beard Foundation Awards (1984 Who's Who of Food & Beverage, 1997 Fruits & Vegetables, 1992 Outstanding Chef, 1992 Outstanding Restaurant, 1997 Humanitarian of the Year, 2004 Lifetime Achievement) Norm Winningstad, BSEE 1948 – founder of Lattice Semiconductor and video game peripheral manufacturer Thrustmaster Dean Witter, 1909 – co-founder and partner, Morgan Stanley Dean Witter Steve Wozniak, class of 1976, graduated B.S. 1986 – co-founder of NASDAQ-100 computer software and hardware manufacturer Apple Inc, member of the National Academy of Engineering; Chief Scientist of flash memory enterprise company Fusion-io (when it was acquired by SanDisk for $1.3 billion);namesake of the Wozniak Lounge in Soda Hall Tony Xu, B.S. 2006 – co-founder and CEO of food delivery app DoorDash Michael Yang, B.S. 1983, MBA 1995 – founder of Become.com; co-founder of MySimon.com (acquired by CNET for $700 million) Chairs, presidents, and CEOs Charles Anderson, BS Chemistry 1938 – CEO and President (1958–1980) of Stanford Research International (known now as SRI International) Mitchell Baker, B.A. 1979, J.D. 1987 – current Chairperson and former CEO of the web browser company Mozilla Corporation, current Chairperson of the Mozilla Foundation, recipient of the Electronic Frontier Foundation Pioneer Award in 2008; inducted into the Internet Hall of Fame William F. Ballhaus, Jr., B.S. 1967, M.S. 1968, PhD 1971 – director of NASDAQ company OSI Systems, former president and CEO of Aerospace Corporation, former director of NASA's Ames Research Center Brian Barish, B.A. 
1991 – President of the investment firm Cambiar Investors, LLC. Bengt Baron, B.S. 1985, M.B.A. 1988 – CEO of V&S Group (Stockholm, Sweden); former CEO of Absolut Vodka; 1980 Summer Olympics gold medalist in 100m men's backstroke Stephen Bechtel Sr., Attended, honorary – President, Chairman of Bechtel Corporation Philip M. Condit, B.S. 1963 – Chairman and CEO of NYSE-listed S&P 500 aerospace corporation The Boeing Company from 1996 to 2003 Rick Cronk, B.S. Business 1965 – co-owner and former president (1977–2003) of NASDAQ-listed ice cream company Dreyer's Grand Ice Cream (acquired by Nestlé in a $2.8 billion deal in 2002); former national president of the Boy Scouts of America and the outgoing chairman of the World Scout Committee of the World Organization of the Scout Movement Patricia Dunn, B.A. 1975 – former Chairwoman of NYSE-listed S&P 500 computer products company Hewlett-Packard Michael R. Gallagher, B.A. Business 1967, MBA 1968 – former CEO and Director of Playtex Products, Inc., from 1995 to 2004 Mandy Ginsberg, BA – CEO of Internet dating site Match.com (NASDAQ-listed IAC-subsidiary), CEO of instructional web site Tutor.com (subsidiary of NASDAQ-listed IAC) Greg Greeley, MBA 1998 – President of Homes of Airbnb; former vice president of Amazon Prime Andrew Grove, PhD 1963 – 4th employee of NASDAQ-100 semiconductor company Intel, and eventually its President, CEO, and chairman, and TIME magazine's Man of the Year in 1997 Walter A. Haas, Sr., B.S. 1910 – former president and CEO of clothing manufacturer Levi Strauss & Co. Walter A. Haas, Jr., B.S. 1937 – former president and CEO of Levi Strauss & Co. H. Robert Heller, PhD Economics 1965 – President and CEO of Visa U.S.A. and Federal Reserve Board of Governors Paul E. Jacobs, B.S. 1984, M.S. 1986, PhD 1989 – CEO of NASDAQ-100 wireless telecommunications semiconductor company Qualcomm Joseph Jimenez, M.B.A. 1984 – CEO of Novartis Douglas Leeds, B.A. Political Economy – CEO of Ask.com (subsidiary of NASDAQ-listed IAC) Howard Lincoln, B.A. 1962, J.D. 1965 – former Chairman of video game company Nintendo of America (American branch of TYO-listed video game company Nintendo), Chairman and CEO of the Seattle Mariners; recipient of the inaugural Academy of Interactive Arts and Sciences Lifetime Achievement Award Mark Liu, Ph.D. 1983 - Chairman of Taiwan Semiconductor Manufacturing Company Shantanu Narayen, M.B.A. 1993 – President and CEO of NASDAQ-100 software company Adobe Systems Inc. Paul Otellini, M.B.A. 1974 – CEO of NASDAQ-100 semiconductor company Intel (2005–present) Rudolph A. Peterson, B.S. 1925, President and CEO of NYSE-listed S&P 500 financial services company Bank of America John Riccitiello, B.S. 1981 – CEO of NASDAQ-100 video game company Electronic Arts (April 2007 – March 2013); managing director and co-founder of Elevation Partners; former president and chief operating officer (October 1997 to April 2004) of Electronic Arts (grew the company from $673 million to $3 billion, increased profits over 900%); former president and chief executive officer, Bakery Division, at Sara Lee; former president and chief executive officer of Wilson Sporting Goods Arun Sarin, M.S. 1978, M.B.A. 1978 – CEO of London-based NASDAQ-100 wireless service provider company Vodafone (2003–present) Eric Schmidt, M.S. 
1979, PhD 1982 – Inaugural CEO of NASDAQ-100 Internet search company Google (2001–2011); executive chairman at Google (2011–present), 136th-richest person in the world in 2011 Jeff Shell, BAS Economics and Applied Math– Chairman of Universal Filmed Entertainment and Chairman of the Broadcasting Board of Governors Douglas W. Shorenstein, BA – real estate developer, chairman and CEO of Shorenstein Properties; chairman of the board of directors of the Federal Reserve Bank of San Francisco Masayoshi Son, B.A. 1980 – President and CEO of Japanese multinational corporation Softbank Ron Suber, B.A. – President Emeritus of Prosper Marketplace Robert Tjian, BS Biochemistry – molecular biologist, president of the Howard Hughes Medical Institute Cher Wang, M.A. 1981 – Chair of computer motherboard manufacturer TWSE-listed VIA Technologies and TWSE-listed portable electronics manufacturer HTC Corporation Peng Zhao, Ph.D. 2006 – CEO of Citadel Securities Vice presidents, CFOs, COOs, CMOs, and CTOs E. Floyd Kvamme, B.S. EECS 1959 – former vice president and president at NYSE-listed S&P 500 semiconductor manufacturer company National Semiconductor, former vice president at Apple Computer, venture capitalist Kleiner Perkins Caufield & Byers Bob Lutz, B.S. 1961, M.B.A. 1962 – General Motors Vice chairman, Product Development, and chairman, General Motors North America, former vice-chairman for Chrysler Jack McCauley, BS EECS 1986 – engineer who designed the devices used in the video game Guitar Hero; co-developed the virtual reality goggles known as Oculus Rift from Oculus VR; Vice President of Engineering at Oculus VR (acquired by Facebook for $2 billion) Other Stewart Blusson, PhD 1964 – multimillionaire diamond magnate (Ekati Diamond Mine) Don Graham – developer of the Ala Moana Center William Randolph Hearst, Jr. (attended) – newspaper publisher Sonita Lontoh, B.S. 1999 – green technology executive Michael Milken, B.S. 1968 – billionaire financier, Drexel Burnham Lambert, philanthropist See also List of UC Berkeley faculty List of companies founded by UC Berkeley alumni List of University of California, Berkeley alumni University of California, Berkeley School of Law References University of California, Berkeley alumni Alumni Business
53913570
https://en.wikipedia.org/wiki/SpyHunter%20%28software%29
SpyHunter (software)
SpyHunter is an anti-spyware computer program for the Microsoft Windows (Windows XP and later) operating system. It is designed to remove malware such as trojan horses, computer worms, rootkits, and other malicious software. Details SpyHunter is currently at version 5 and receives daily definition updates. SpyHunter has a free version, which allows the user to scan their computer; purchase is required to remove the malware it finds. Enigma Software also offers a service on its website called "ESG MalwareTracker", which shows the most infected countries as measured by where SpyHunter has detected malware. In the paid version, the user can receive support from a built-in HelpDesk, and the Spyware HelpDesk team can provide custom fixes. Critical reception PC Magazine gave SpyHunter 2 out of 5 stars in March 2004, saying it was good at spyware detection but criticizing its performance and usability. PC Magazine gave SpyHunter a "GOOD" rating, 3 out of 5 stars, in March 2016. The reviewer concluded, "Enigma SpyHunter 4 does what it promises, eliminating active malware and killing malware that launches at startup. But competitors deliver much more." Lawsuits In February 2016, Enigma Software filed a lawsuit against Bleeping Computer, a computer support website, alleging that it had engaged in a smear campaign with the purpose of driving potential customers away from SpyHunter toward competing affiliate products. In turn, Bleeping Computer filed a lawsuit against Enigma Software, also for an alleged smear campaign. In March 2017, Enigma Software announced in a press release that a settlement had been reached and that both cases would be dismissed. In October 2016, Enigma Software filed a lawsuit against the popular security software vendor Malwarebytes for anti-competitive behavior. The lawsuit arose after Malwarebytes' software began flagging SpyHunter as a potentially unwanted program. On November 7, 2017, Enigma's case was dismissed by the US District Court. Enigma appealed to the United States Court of Appeals for the Ninth Circuit, which reversed the lower court's decision. A panel of judges ruled 2–1: "We hold that the phrase 'otherwise objectionable' does not include software that the provider finds objectionable for anti-competitive reasons." Despite this, Malwarebytes ultimately won the case on its merits after the Supreme Court denied a petition for a writ of certiorari on the immunity issue. Controversies SpyHunter is often labeled a potentially unwanted program because of misleading scan results that report infections even on clean computers, and because it injects tracking cookies into a user's browser, raising concerns about its legitimacy. The company also floods web search results for specific threats with links to download SpyHunter, even when the product is unable to remove the threat in question. References Spyware removal Windows-only software Windows security software Rogue software
41585002
https://en.wikipedia.org/wiki/Mlpack
Mlpack
mlpack is a machine learning software library for C++, built on top of the Armadillo library and the ensmallen numerical optimization library. mlpack has an emphasis on scalability, speed, and ease-of-use. Its aim is to make machine learning possible for novice users by means of a simple, consistent API, while simultaneously exploiting C++ language features to provide maximum performance and maximum flexibility for expert users. Its intended target users are scientists and engineers. It is open-source software distributed under the BSD license, making it useful for developing both open source and proprietary software. Releases 1.0.11 and before were released under the LGPL license. The project is supported by the Georgia Institute of Technology and contributions from around the world. Miscellaneous features Class templates for GRU and LSTM structures are available, so the library also supports recurrent neural networks. There are bindings to R, Go, Julia, and Python. Its binding system is extensible to other languages. Supported algorithms Currently mlpack supports the following algorithms and models: Collaborative Filtering Decision stumps (one-level decision trees) Density Estimation Trees Euclidean Minimum Spanning Trees Gaussian Mixture Models (GMMs) Hidden Markov Models (HMMs) Kernel density estimation (KDE) Kernel Principal Component Analysis (KPCA) K-Means Clustering Least-Angle Regression (LARS/LASSO) Linear Regression Bayesian Linear Regression Local Coordinate Coding Locality-Sensitive Hashing (LSH) Logistic regression Max-Kernel Search Naive Bayes Classifier Nearest neighbor search with dual-tree algorithms Neighbourhood Components Analysis (NCA) Non-negative Matrix Factorization (NMF) Principal Components Analysis (PCA) Independent component analysis (ICA) Rank-Approximate Nearest Neighbor (RANN) Simple Least-Squares Linear Regression (and Ridge Regression) Sparse Coding, Sparse dictionary learning Tree-based Neighbor Search (all-k-nearest-neighbors, all-k-furthest-neighbors), using either kd-trees or cover trees Tree-based Range Search See also Armadillo (C++ library) List of numerical analysis software List of numerical libraries Numerical linear algebra Scientific computing References External links C++ libraries Data mining and machine learning software Free computer libraries Free mathematics software Free science software Free software programmed in C++ Free statistical software
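The consistent API described above can be illustrated with a short C++ sketch of the dual-tree nearest neighbor search listed among the supported algorithms. This is a minimal sketch only, assuming an mlpack 3.x-style installation linked against Armadillo (header paths and the mlpack::neighbor namespace differ slightly in later releases); the random data matrix is purely illustrative.

// Minimal sketch: all-k-nearest-neighbor search with mlpack.
// Assumes mlpack 3.x headers/namespaces; compile with e.g. -lmlpack -larmadillo.
#include <mlpack/core.hpp>
#include <mlpack/methods/neighbor_search/neighbor_search.hpp>

int main()
{
  // mlpack stores data in Armadillo matrices; by convention each column is one point.
  arma::mat data(3, 1000, arma::fill::randu);   // 1000 random 3-dimensional points

  // Build a tree-backed k-nearest-neighbor model over the reference set.
  mlpack::neighbor::NeighborSearch<mlpack::neighbor::NearestNeighborSort> knn(data);

  // Find the 5 nearest neighbors of every reference point.
  arma::Mat<size_t> neighbors;   // neighbors(j, i): index of the j-th closest point to point i
  arma::mat distances;           // distances(j, i): the corresponding distance
  knn.Search(5, neighbors, distances);

  return 0;
}

Most of the other methods listed above generally follow the same pattern: construct a model object (often templatized on a metric, kernel, or tree type), then call a member function such as Search(), Cluster(), or Train() with Armadillo matrices as inputs and outputs.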
60823577
https://en.wikipedia.org/wiki/Olakunbi%20Olasope
Olakunbi Olasope
Olakunbi Ojuolape Olasope is a Professor in the Department of Classics at the University of Ibadan in Nigeria. She is an expert on Roman social history, Greek and Roman theatre, and Yoruba classical performance culture. Olasope is known in particular for her work on the reception of classical drama in West Africa, especially the work of the Nigerian dramatist Femi Osofisan. Education Olasope studied Classics at the University of Ibadan, completing her BA in 1991 and MA in 1994. Her doctoral thesis, The Ideal of Univira in Traditional Marriage Systems in Ancient Rome and Yorubaland, was awarded in 2005. ("Univira" is the Latin word for a woman married only ever to one man, see Pudicitia.) In 2002, she was a visiting scholar at the Department of Classics of the University of Texas at Austin. Career Olasope began teaching at the University of Ibadan as an Assistant Lecturer in 1997. She was promoted to Lecturer II in 1999, Lecturer I in 2004, Senior Lecturer in 2007 and became a Reader in 2013. She became a Full Professor of Classics in 2016. She is currently the Head (Chair) of the Department of Classics, a position she previously held between 2009-2013. In 2005/2006, she held a University of Ibadan Senate Research Grant. She was a British Academy Visiting Fellow at Oriel College, Oxford in 2007 and an Academic Visitor at the Aristotle University of Thessaloniki, Greece in Spring 2010. She has also been a visiting scholar at the University of Reading. In 2017-19 Olasope was a visiting scholar at the University of Ghana, Legon. Olasope is an institutional member of the Classical Reception Studies Network for the University of Ibadan and participated in the African Blog Takeover in 2021. Olasope has published extensively on Roman social history and comparative studies of Roman and Yoruba social history. Principally Olasope's work focuses on the position of women in Roman and Yoruba society, and those societies' views of women's virtue. Additionally, Olasope's comparative work on Roman and Nigerian material culture resulted in her 2009 monograph Roman Jewelleries, Benin Beads for Class Structures: Significance of Adornment in Ancient Cultures. Olasope is primarily known for her work on modern West African drama which draws on classical themes and precedents. She is a significant contributor to work on the Nigerian playwright Femi Osofisan, on whom she co-convened a major international conference in 2016. Olasope also edited a volume of interviews with Osofisan, described by one reviewer as containing some 'very valuable' interviews and playing an important role in making Osofisan's works more widely available. Olasope has written a series of articles on Osofisan's work, focusing on the use of plays by Sophocles and Euripides in modern Nigerian drama and literature, particularly focusing on The Trojan Women and Women of Owu. Olasope's work on Osofisan and classical reception is widely recognised in Nigerian media sources. In 2021 Olasope worked with Osofisan on a production based on Euripedes' Medea: "Medaye: A Re-Reading for the African Stage of Euripides’ Medea." Olasope has worked to promote the study of Classics in West Africa through the promotion of Classics in secondary schools in Ibadan, Nigeria, but more significantly with the Classical Association of Ghana by co-organising the Association's founding conference on 'Classics and Global Humanities in Ghana'. 
At this conference the keynote speaker Barbara Goff noted the challenges the departments in Ghana have faced in establishing themselves as centres of research and teaching in the classics and that the very existence of the conference underlines how classics has become a more global discipline over the past few years. Olasope's work promoting the study of classics and its relevance in West Africa has been featured in the Nigerian and Ghanaian media. A virtual conference, "Pushing the Frontier of Classical Studies in Sub-Saharan Africa: Tribute to PROF OLAKUNBI OLASOPE@50" was held in Olasope's honour in 2021 to celebrate her 50th birthday and contribution to classical studies in Nigeria. Select publications 2001. 'The Roman Slave and His Prospects in the Late Republic and Early Empire' Castalia Vol. 6, 63–71. 2002. 'Greek and Yoruba Beliefs in Sophocles' Antigone and Femi Osofisan's Adaptation, Tegonni in Papers in Honour of Tekena N. Tamuno ed. Egbe Ifie 408–420. 2004. 'The extent of the powers of the paterfamilias and Olori-ebi in Ancient Roman and Yoruba Cultures,' in Wole Soyinka @70 Festschrift, LACE Occasional Publications Series 22, 809–836. 2004. 'Gender Discriminations in Classical Rome. Ibadan Journal of European Studies. No. 4, 80–95. 2005. 'Differential Equations: Bride-Price and Dowry in Ancient Roman and Yoruba Cultures.' Nigeria and the Classics Vol.21, 71–77. 2006. Marriage Alliances in Ancient Rome. Ibadan: Hope Library of Liberal Arts Series. 2009. Univira: The Ideal Roman Matrona' Lumina, Vol. 20, No.2, 1–18. ISSN 2094-1188 2009. 'The Extent of the Powers of the Paterfamilias and Olori-ebi in Ancient Greek and Yoruba Cultures' Drumspeak: International Journal of Researching the Humanities 2, 152–169. 2009. Roman Jewellries, Benin Beads for Class Structures: Significance of Adornment in Ancient Cultures. Ibadan: Hope Library of Liberal Arts Series 2010. 'Women in the Oikos: Re-thinking Greek Male Anxiety over Female Sexuality', in Language, Literature and Criticism: Essays in Honour of Aduke Adebayo, 473-486 (Zenith Bookhouse Publishers). 2011. 'The Augustan Social Reforms of 18 BC and the Elite Roman Women.' Drumspeak: International Journal of Researching the Humanities 4, 252–266. 2011. with S Adeyemi, 'Fracturing the Insularity of the Global State: War and Conflict in Moira Buffini's Welcome to Thebes.' Lagos Notes and Records 17, 99–110. 2012. 'To Sack a City or to Breach a Woman's Chastity: Euripides’ Trojan Women and Osofisan's Women of Owu.' African Performance Review, Journal of African Theatre Association UK 6.1, 111-121 2012. 'Prostitution: The Appetites of Athenian Men in the Classical Period.' Lagos Notes and Records 18, 89–104. 2012. 'Wailing Women in Ancient Greek Society.' Ife Journal of Foreign Languages 8, 286–297. 2012. 'A Review of Omobolanle Sotunsa and Olajumoke Haliso (eds.) Women in Africa: Contexts, Rights, Hegemonies.' Journal of History and Diplomatic Studies 9, 259–265. (ed.) 2013. Black Dionysos: Conversations with Femi Osofisan. Ibadan: Kraft Books. 2014. 'Rape and Adultery in Ancient Greek and Yoruba Societies'. Journal of Philosophy and Culture 5.1, 67–114. 2015. 'Interview with Femi Osofisan: The Playwright is a Labourer of Love' IATC- International Association of Theatre Critics 12 (University of Illinois at Urbana Champaign) 2017. 'Lament as Women's Speech in Femi Osofisan's Adaptation of Euripides’ Trojan Women: Women of Owu.' 
Textus 7, 105-117 References External links Profile on WorldCat Personal website on Academia.edu Google Scholar Women classical scholars Nigerian historians University of Ibadan faculty University of Ibadan alumni 1971 births Living people
66404371
https://en.wikipedia.org/wiki/2021%20USC%20Trojans%20football%20team
2021 USC Trojans football team
The 2021 USC Trojans football team represents the University of Southern California in the 2021 NCAA Division I FBS football season. They play their home games at the Los Angeles Memorial Coliseum and compete as members of the South Division of the Pac-12 Conference. They were led by sixth-year head coach Clay Helton in the first two games; Helton was fired on September 13 following the team's 28–42 loss to Stanford. Associate head coach Donte Williams took over as the team's interim head coach. The Trojans finished the 2021 season at 4–8. It was their worst record since 1991, when they went 3–8. Previous season The Trojans finished 5–1 in 2020 season. They represented the South Division in the Pac-12 Championship Game where Oregon become Pac-12 Champions (24–31). Offseason Transfers The Trojans lost nine players via transfer. The Trojans add nine player & one walk-on player via transfer. Returning Starters USC returns 26 starters in 2021 including 11 on offense, 12 on defense, and 3 on special teams. Key departures include Amon-Ra St. Brown (WR – 6 games), Tyler Vaughns (WR – 5 games), Alijah Vera-Tucker (OL – 6 games), Marlon Tuipulotu (DT – 6 games), Olaijah Griffin (CB – 5 games), and Talanoa Hufanga (S – 6 games). Offense (11) Defense (12) Special teams (3) 2021 NFL Draft The official list of participants for the 2021 Senior Bowl included USC football player Marlon Tuipulotu (DT). The official list of participants for the 2021 NFL Combine included USC football players to be announced soon. Team players drafted into the NFL Recruiting class Personnel Coaching staff Roster Depth Chart Depth Chart after Week 9 vs. Arizona Wildcats True Freshman Injury report Scholarship distribution chart / / * Former Walk-on – 85 scholarships permitted, 95 currently allotted to players via extra Seniors. – 95 recruited players on scholarship Scholarship Distribution 2021 Schedule Spring Game 2021 Cardinal vs Gold Spring Game Regular season Game summaries San José State Stanford at Washington State Oregon State at Colorado Utah at No. 13 Notre Dame Arizona at Arizona State UCLA No. 13 BYU at California Rankings Statistics USC vs Opponents USC vs Pac-12 opponents Offense Defense Key: POS: Position, SOLO: Solo Tackles, AST: Assisted Tackles, TOT: Total Tackles, TFL: Tackles-for-loss, SACK: Quarterback Sacks, INT: Interceptions, BU: Passes Broken Up, PD: Passes Defended, QBH: Quarterback Hits, FR: Fumbles Recovered, FF: Forced Fumbles, BLK: Kicks or Punts Blocked, SAF: Safeties, TD : Touchdown Special teams After the Season Final statistics Awards and honors Conference National All-Americans Bowl games All Star games NFL Draft The NFL Draft will be held at Allegiant Stadium in Paradise, Nevada on April 28–30, 2022. Trojans who attended the 2022 NFL Draft: NFL Draft combine Five members of the 2021 team were invited to participate in drills at the 2022 NFL Scouting Combine. † Top performer DNP = Did not participate Notes August 19, 2020 – USC DT Jay Tufele declares for 2021 NFL Draft. November 30, 2020 – Kyron Ware-Hudson flips from Oregon to USC football recruiting class for 2021. December 1, 2020 – USC football kicker Chase McGrath enters transfer portal. December 3, 2020 – Top USC football commit Jake Garcia decommits from the Trojans. December 7, 2020 – USC football loses linebacker Palaie Gaoteote to transfer portal. December 11, 2020 – USC football adds Alabama DT Ishmael Sopsher via transfer. December 16, 2020 – USC Football Announces Early Signing Period 2021 Class. 
December 19, 2020 – USC Football Opts Out Of Playing In A Bowl. December 25, 2020 – USC football adds Xavion Alford as transfer from Texas. December 28, 2020 – USC football's Alijah Vera-Tucker declares for NFL Draft. December 28, 2020 – Markese Stepp enters transfer portal intending to leave USC football. December 30, 2020 – Talanoa Hufanga declares for NFL Draft after All-American season for USC football. December 31, 2020 – USC Cornerbacks Coach Donte Williams Adds Associate Head Coach Title. January 1, 2021 – USC defensive lineman Marlon Tuipulotu declares for NFL Draft. January 2, 2021 – USC receiver Amon-Ra St. Brown enters the 2021 NFL Draft. January 2, 2021 – Ceyair Wright commits to USC football recruiting class for 2021. January 2, 2021 – Korey Foreman, Nation's No. 1 Overall Recruit, Signs With USC Football. January 3, 2021 – Tim Drevno, Aaron Ausmus Will Not Return To USC Football Coaching Staff. January 4, 2021 – USC football recruiting rolls into 2022 with Fabian Ross commitment. January 4, 2021 – USC wide receiver Tyler Vaughns declares for NFL Draft. January 16, 2021 – Robert Stiner Named Director of Football Sports Performance At USC. Notes References USC USC Trojans football seasons USC Trojans football USC Trojans football
36744
https://en.wikipedia.org/wiki/Iris
Iris
Iris most often refers to: Iris (anatomy), part of the eye Iris (color), an ambiguous color term, usually referring to shades ranging from blue-violet to violet Iris (mantis), a genus of insects Iris (mythology), a Greek goddess Iris (plant), a genus of flowering plants Iris or IRIS may also refer to: Arts and media Fictional entities Iris (American Horror Story), an American Horror Story: Hotel character Iris (Fire Force), a character in the manga series Fire Force Iris (Mega Man), a Mega Man X4 character Iris, a Mega Man Battle Network character Iris (Pokémon) Iris (Pokémon anime) Iris, a Trolls: The Beat Goes On! character Sorceress Iris, a Magicians of Xanth character Iris, a kaiju character in Gamera 3: The Revenge of Iris Iris, a LoliRock character Iris, a Lufia II: Rise of the Sinistrals (1995) character Iris, a Phoenix Wright: Ace Attorney − Trials and Tribulations character Iris, a Ruby Gloom character Iris, a Taxi Driver (1976) character Iris Donnelly Garrison, a fictional character in the American soap opera Love is a Many Splendored Thing Iris Irine, a Ragnarok character Iris West, a fictional character in DC Comics Film, television, and theatre Film Iris shot, a film technique for ending a scene Iris (1916 film), a British silent romance Iris (1987 film), a Dutch film directed by Mady Saks Iris (2001 film), a biopic about Iris Murdoch Iris (2014 film), a documentary about Iris Apfel by Albert Maysles Iris (2016 film), a French film directed by Jalil Lespert Stage productions Iris (Cirque du Soleil), Cirque du Soleil's only resident show in Los Angeles, California Iris (play), a 1901 play by the British writer Arthur Wing Pinero Television Iris (TV channel), an Italian free entertainment television channel Iris (TV series), a 2009 South Korean espionage television drama series Iris II (TV series), a 2013 South Korean espionage television drama series Iris, The Happy Professor, a 1992 Canadian television show featuring a purple ibis Iris (game), a Halo 3 online promotion Music Classical compositions Iris (opera), by Pietro Mascagni "Iris", chanson by Michel Lambert (1610–1696) "Iris", chanson by Jean-Benjamin de La Borde (1734–1794) Iris, concerto by Archibald Joyce (1873–1963) Iris, for saxophone by Tansy Davies (born 1973) Performers Iris (American band), a synthpop group Iris (Romanian band), a heavy metal group Iris (Japanese band), a Japanese girl idol group Arc Iris, a folk pop band from Providence, Rhode Island, US Albums Iris (album), the self-titled debut album by the Romanian band Iris Iris (EP), a 1992 EP by Miranda Sex Garden Songs "Iris" (song), a 1998 song by the Goo Goo Dolls on the soundtrack City of Angels, later covered by other artists "Iris", a song by Emmy the Great on the album Virtue "Iris", a song by Live on the album Throwing Copper "Iris", a song by Mike Posner on the album At Night, Alone "Iris", a song by Split Enz on the album Waiata "Iris", a song by The Breeders on the album Pod "Iris (Hold Me Close)", a song by U2 on the album Songs of Innocence Periodicals El Iris, a Mexican periodical published between in 1826 Sheffield Iris, an early English newspaper IRIS Magazine, an Irish republican magazine Plastic arts Irises (painting), by Vincent Van Gogh Iris, Messenger of the Gods, a sculpture by Auguste Rodin Enterprises IRIS (radio reading service), Vision Australia Radio for people unable to read print media Iris Capital, a venture capital firm, specialized in the digital economy, primarily active in Europe Iris Clert Gallery, an art gallery named 
after its Greek owner and curator, Iris Clert Iris Fund for Prevention of Blindness, a British charity, now part of Fight for Sight Iris Ohyama, a Japanese manufacturing company IRIS Research, an Australian economic, community and industry research organisation International Resources for the Improvement of Sight, a multi-national charity People Iris (given name), a feminine given name, and a list of people so named Iris (artist) (born 1983), comics artist in Quebec, Canada Íris (footballer) (born 1945), Brazilian footballer Iris (singer) (born 1995), Belgian singer Donnie Iris (born 1943), American rock musician Places United States Iris, West Virginia, a community in the United States Iris Avenue station, on the San Diego Trolley Iris Falls, a waterfall in Wyoming Elsewhere Iris, Cluj-Napoca, a district in Romania Iris, Prince Edward Island, a community in Canada Iris Bay, in South Georgia, British Overseas Territory Iris Bay (Dubai), a 32-floor tower in the United Arab Emirates Glen Iris (disambiguation), several places River Iris, now Yeşilırmak River, a river in northern Turkey, called Iris in classical Greek IRIS Mist, a tower to be built in Dubai Maritime City, United Arab Emirates Science and technology Astronomy and spaceflight IRIS (astronomical software), an astronomical image processing software 7 Iris, an asteroid Infrared interferometer spectrometer and radiometer, an instrument used in the Voyager space program Interface Region Imaging Spectrograph, a space probe to observe the Sun Iris Nebula, a bright reflection nebula and Caldwell object in the constellation Cepheus International Radiation Investigation Satellite, an early satellite to study radiation in space Biology and medicine Iris (anatomy), part of the eye Iris (insect), a genus of praying mantis Iris (plant), a genus of flowering plants Iris (psychedelic), a psychedelic drug and a substituted amphetamine Iris glossy-starling, the emerald starling (Lamprotornis iris) Immune reconstitution inflammatory syndrome, a complication of anti-HIV treatment Computing and information technology Computing hardware IRIS (biosensor), an interferometric high-throughput biosensor Iris Mote, a wireless sensor node Iris printer, an inkjet printer SGI IRIS, a line of computer terminals and workstations HTC Iris, a smartphone manufactured by High Tech Computer Corporation Intel HD, UHD and Iris Graphics, a series of integrated graphics processors Internet Routing in Space, a space-capable IP router Software IRIS (transportation software), an Advanced Traffic Management System Iris Associates, an American software company, developer of Lotus Notes Iris Browser, a web browser IRIS GL, a graphics application programming interface IRIS Workspace, a graphically organized iconic desktop environment Other uses in computing Iris Challenge Evaluation, a series of NIST events to promote iris recognition technology Iris Recognition Immigration System, an electronic border control system Information Systems Research in Scandinavia, a non-profit organization in Scandinavia Insurance Regulatory Information System, a database of insurance companies in the United States Iris flower data set, a standard example data set for use in statistics (and related software) Other uses in science and technology Tropical Storm Iris (disambiguation), three tropical cyclones in the Atlantic Ocean Iris (diaphragm), a mechanical device in optical systems IRIS (jamming device), an Estonian weapon Iris (transponder), a deep-space transponder designed for use in cubesats 
IRIS Consortium, a seismology research project Institute for Research in Information and Scholarship, a Brown University program Internal rotary inspection system, a pipe testing method International Reactor Innovative and Secure, a nuclear reactor design Transportation Aircraft and missiles Abraham Iris, a 1930s French touring airplane Blackburn Iris, a 1920s British biplane flying boat IRIS-T, a German air-to-air missile Shahab-4 or IRIS, an Iranian liquid propelled missile Automobiles Iris (car), a British car brand manufactured 1906–1925 Desert Iris, a Jordanian 4x4 strategic auxiliary vehicle Tata Magic Iris, an Indian microvan Wallyscar Iris, a Tunisian mini SUV Engines IRIS engine, a design for a type of internal combustion engine de Havilland Iris, a British four-cylinder, liquid-cooled, horizontally opposed aero engine Rail transportation Iris (train), an international express train in Europe International Railway Industry Standard for the evaluation of railway management systems; see Union des Industries Ferroviaires Européennes Watercraft Iris, for any of several ships by that name French ship Iris, the name of several vessels HMS Iris, the name of several Royal Navy ships MV Royal Iris, a ferry operating until 1991 USS Iris, the name of several U.S. Navy ships MV Royal Iris of the Mersey, a ferry operating since 2001 IRIS, the prefix for Iranian naval vessels since 1979; see List of current ships of the Islamic Republic of Iran Navy Other uses Iris (color), an ambiguous color term, usually referring to shades ranging from blue-violet to violet Iris (mythology), a Greek goddess IRIS (Management Festival), an event of the Indian Institute of Management Indore Denso Iris, a basketball team based in Kariya, Aichi Incident Resource Inventory System (IRIS), a distributed software tool provided by the Federal Emergency Management Agency (FEMA) See also Iris Award (disambiguation) Iris II (disambiguation)
1851692
https://en.wikipedia.org/wiki/Bergen%20County%20Academies
Bergen County Academies
Bergen County Academies (BCA) is a tuition-free public magnet high school located in Hackensack, New Jersey that serves students in the ninth through twelfth grades from Bergen County, New Jersey. The school was founded by John Grieco, also founder of the Academies at Englewood, in 1991. The school is currently organized into seven academies: Academy for the Advancement of Science and Technology (AAST), Academy for Business and Finance (ABF), Academy for Culinary Arts and Hospitality Administration (ACAHA), Academy for Engineering and Design Technology (AEDT), Academy for Medical Science Technology (AMST), Academy for Technology and Computer Science (ATCS), and Academy for Visual and Performing Arts (AVPA). In 2021, Niche ranked BCA as the #1 best public high school in America. BCA was also named as one of the 23 highest performing high schools in the United States by The Washington Post. BCA is a National Blue Ribbon School, a member of the National Consortium of Secondary STEM Schools, home of eleven 2020 Regeneron Science Talent Search Scholars including two Finalists, and a Model School in the Arts as named by the New Jersey Department of Education. As of the 2020–21 school year, the school had an enrollment of 1,118 students and 94.0 classroom teachers (on an FTE basis), for a student–teacher ratio of 11.9:1. There were 19 students (1.7% of enrollment) eligible for free lunch and 12 (1.1% of students) eligible for reduced-cost lunch. History Bergen County Academies was conceived by John Grieco. The school was founded on a vocational school framework with the mission of preparing students for careers in math and science by promoting a problem-solving, project-based, technical learning environment. It has since departed from this model and adopted a more standard college-preparatory curriculum. The school originally began as a single academy, "The Academy for the Advancement of Science and Technology" (AAST), which shared the current campus with the Bergen County Technical High School now located in Teterboro. The first group of AAST students was inducted in 1992 for the graduating class of 1996. In 1997, additional academies opened on the campus: the Academy for Business and Computer Technology (ABCT), the Academy for Engineering and Design Technology (AEDT), and the Academy for Medical Science Technology (AMST). The following year saw the opening of three career institutes, renamed a year later to become academies: the Academy for Culinary Arts (ACA), the Academy for Power and Transportation (APT), and the Academy for Visual Arts and Graphic Communications (AVAGC). Soon, all seven programs began focusing on college preparation, adopting a liberal arts curriculum with a focus on their respective fields. In 2001, a dispute initiated by the Bergen County School Administrators' Association focused on what Paramus Superintendent Janice Dime called "elitism." Several sending districts threatened to withdraw funding from the school. In response, the Bergen County Technical Schools agreed to increase the transparency of the admissions process and enter into talks with a number of sending districts. In 2002, APT was eliminated. ABCT was split into the Academy for Business and Finance (ABF) and the Academy for Telecommunications and Computer Science (ATCS). In 2012, ATCS turned its attention away from Telecommunications and towards Technology, and so was rechristened the Academy for Technology and Computer Science. 
ACA added hotel administration to its coursework and became the Academy for Culinary Arts and Hospitality Administration (ACAHA). AVAGC expanded its scope to include performing arts and became the Academy for Visual and Performing Arts (AVPA). The school itself has also changed its name numerous times, from "Bergen County Regional Academies" to "Bergen Academies" to "Bergen County Academy" and to the present "Bergen County Academies." BCA was certified to offer the IB Diploma Programme in January 2004, making it one of only 17 schools in New Jersey to offer the IB program at the high school level. School structure BCA has an extended school day from 8:00 AM to 4:10 PM. Prior to the COVID-19 pandemic, the day would start with a 4-minute Information Gathering Session (IGS), serving the purpose of a homeroom, followed by 27 modules (commonly referred to as "mods") that would last 15 minutes each, with 3 minutes of passing time in between each. Classes were commonly structured as either 2 or 3 mods. Currently, the day consists of 9 periods that last 50 minutes each, with 5 minutes of passing time in between each. All academies require four years of English, mathematics, social studies, and physical education, as well as three years of science (biology, chemistry, physics, and/or psychology) and world language (Spanish, French, or Mandarin). All students take three years of projects and clubs; projects take place periods 2-3 and clubs take place period 9, both on Wednesdays. All seniors participate in Senior Experience, an internship program where seniors work and learn for the full business day each Wednesday instead of being on campus. 40 hours of community service are required for graduation, up to 20 of which can be hours worked at the school. In addition to their regular classes, students of all academies have the opportunity to develop research projects. Research can be conducted in cell biology, chemistry and nanotechnology, stem cells, agriscience, psychology, nano-structural imaging, optics, and mathematics, among other subjects. Academies BCA is divided into seven academic and professional divisions, often referred to by their acronyms or, colloquially, by their single-word nicknames. However, BCA is treated as a single high school within the district and the state. Academy for the Advancement of Science and Technology (AAST | Science) was founded in 1992. AAST focuses on in-depth instruction of the sciences along with the practical applications of the scientific ideas learned in the classroom. By the end of sophomore year, students have taken courses in biology, chemistry, and physics. The academy also features a weekly lab rotation for the first two years. As the academy is science-based, many AAST students take on personal research projects in addition to their regular classes. Academy for Business and Finance (ABF | Business) was founded in 2002, separating from the Academy for Business and Computer Technology that was founded in 1997. Students in ABF take courses in economics, marketing, finance, management, business law, management information systems, entrepreneurship, and business ethics. To graduate, ABF students are required to complete a senior thesis and participate in the full IB Diploma Programme. Additional ABF opportunities include participation in DECA, involvement in their global studies program, and special access to the Financial Markets Lab, funded by Bloomberg technology, allowing students to conduct economic research and analysis. 
Academy for Culinary Arts and Hospitality Administration (ACAHA | Culinary) was founded in 1998, originally called the Academy for Culinary Arts (ACA). Along with their core classes, ACAHA focuses on hospitality management, entrepreneurship, and the culinary arts. As a part of the academy's curriculum, students receive certification from the National Restaurant Association Education Foundation and the ServSafe Managers program. Though seeking the IB diploma is optional, ACAHA also has access to International Baccalaureate business management courses. Students often participate in the ProStart Hospitality Management competition and SkillsUSA Leadership Conferences, as well as BCA's annual Chocolate Competition. Academy for Engineering and Design Technology (AEDT | Engineering) was founded in 1997. The academy was formed "as an extension of AAST," with a concentration in engineering and design. Courses unique to AEDT also explore topics like computer science, architecture, product development, and biomedical engineering. Students often compete in various robotics competitions and other projects, like in The Solar Car Challenge, in BCA's laboratories. Academy for Medical Science Technology (AMST | Medical) was founded in 1997. From 9th to 11th grade, students in AMST take courses about various medical fields, such as epidemiology, pharmacology, bioethics, neuroscience, biotechnology, and anatomy & physiology. Students often take on personal research projects in addition to their regular classes. Historically, many have also opted to apply for NREMT certification. Many AMST students participate in BCA's chapter of HOSA, though it is open to all students. Academy for Technology and Computer Science (ATCS | Computer Science) was founded in 2002, separating from the Academy for Business and Computer Technology that was founded in 1997, and originally called the Academy for Telecommunications and Computer Science. ATCS has a focus on the world of computers and the Internet. Its students are prepared for careers such as computer programming, software engineering, and other computer and engineering related professions. Academy for Visual and Performing Arts (AVPA | Visual Arts, Theater, Music) was founded in 1998, originally named Academy for Visual Arts and Graphic Communications. AVPA is subdivided into three concentrations: Visual Arts, Music, and Theater. Admissions Bergen County Academies' admissions process consists of three main stages: an initial application, an admissions exam, and an interview. The online initial application, which may also be shared with the application for Bergen County Technical High School in Teterboro, is submitted in December. Students may not also submit applications to other schools in the Bergen County Technical Schools district in addition to BCA. As well as being an eighth grader residing in Bergen County, applicants must: complete a 400 word essay optional additional 100 words on activities and accolades obtain a letter of recommendation from: 8th grade math teacher 7th or 8th grade English teacher 7th or 8th grade science teacher submit middle school transcript and standardized test scores declare first and second choice of academy In January, all applicants must take the admissions exam, consisting of a literary essay and a math test. The 45-minute long essay on a given passage is scored based on comprehension, insight, organization, support, style, and grammar/spelling. 
The 60-minute long math section is made up of 40 multiple choice questions focused on basic skills and word problems. Open-ended problems were included on the math test until 2011, when they were removed to include more word problems. Students will then receive a letter stating if they have moved on to the interview phase. Interviews are conducted on an individual basis by teams of teachers and guidance counselors. Unlike the previous two stages, which are identical for every student, the interview may be personalized according to academy. For example, applicants for AVPA in the Music and Theatre concentrations present an audition, while applicants for the Visual Arts concentration of AVPA participate in an art workshop and present a portfolio. In 2021, BCA reported that they had a 15% acceptance rate. Extracurricular activities Clubs During the 2019–2020 school year, BCA had over 130 clubs. BCA has a Model United Nations team that runs its own Model UN conference for high school students, known as AMUN. The team also runs its own Model UN conference for middle school, known as JAMUN. The BCA Model UN team has won Best Delegation at numerous conferences, including those hosted by Yale University, Princeton University, the George Washington University, and New York University. The BCA Model UN team has also earned many individual delegate awards and is the largest club at the school. In 2008, BCA's math team won first place in Division B at the Princeton University Mathematics Competition, an annual competition attended routinely by the team. The school routinely has 10+ students qualifying for the USAMO (United States Mathematical Olympiad), with a student winning the competition in 2012. The school captured first place at the 2009 ARML Local competition, another routine annual competition. In 2015, student Ryan Alweiss competed on the American team at the International Math Olympiad, helping the United States win the competition for the first time since 1994 with a 98th percentile score of 31. BCA's junior varsity and varsity quiz bowl teams qualified to compete in the National History Bowl in 2013, and several individuals competed in the National History Bee. BCA had a BattleBots IQ team, known as the Titanium Knights. The team won the 2006 national heavyweight championship in the high school division with the robot E2V2, and won two other awards for another 120-pound robot, Knightrous. In previous years, the team has won second, third, and fourth place titles in BBIQ, and affiliated student teams have won numerous awards in Northeast Robotics Club events. The BattleBots team was succeeded by the school's "MAKE project", which focused on allowing students to pursue a wider set of science and engineering projects and competitions. By 2014, BattleBots was no longer an active club at BCA. However, in 2018, the moniker and spirit of the Titanium Knights were revived by the FIRST Tech Challenge Robotics club, an after-school club. In 2020, both of its teams qualified for the state-wide FIRST Tech Challenge robotics competition. BCA is home to a large Amnesty International student group that leads schoolwide events and attends local, regional, and national conferences on human rights. Sports BCA shares its sports program with the Bergen County Technical Schools in Teterboro and Paramus to form the Bergen Tech Knights. 
The schools compete in the Big North Conference, which comprises public and private high schools in Bergen and Passaic counties, and was established following a reorganization of the Northern New Jersey sports leagues by the New Jersey State Interscholastic Athletic Association. In the 2009–2010 school year, the school competed in the North Jersey Tri-County Conference, which was established on an interim basis to facilitate the realignment. Before the realignment, Bergen Tech had been placed in the Northern New Jersey Interscholastic League (NNJIL) at the start of the Fall 2006 athletic season. With 1,669 students in grades 10-12, the school was classified by the NJSIAA for the 2019–20 school year as Group IV for most athletic competition purposes, which included schools with an enrollment of 1,060 to 5,049 students in that grade range. The football team competes in the Ivy Red division of the North Jersey Super Football Conference, which includes 112 schools competing in 20 divisions, making it the nation's biggest football-only high school sports league. The football team is one of the 12 programs assigned to the two Ivy divisions starting in 2020, which are intended to allow weaker programs ineligible for playoff participation to compete primarily against each other. The school was classified by the NJSIAA as Group V North for football for 2018–2020. Athletic achievements for the Bergen Tech Knights and Bergen Tech Lady Knights include: In 2006, the football team reached the playoffs before losing to Randolph High School 29–0. In the same year, the boys' soccer team advanced to the state tournament, winning in the first round before losing to Memorial High School in the semifinal game. The tennis team and baseball team advanced to the playoffs in 2009, with the tennis team continuing on to the semifinals after winning sectionals. Campus and facilities Bergen County Academies is located on the Dr. John Grieco Campus in Hackensack. The school occupies a sprawling main building which runs along Hackensack Avenue as well as a nearby Environmental Science Center (ESC) building connected to a greenhouse. An auditorium adjoining the main building seats 1,200 people. The school's baseball/softball field, football field, and track are located behind the academic buildings. The school's cafeteria underwent a massive overhaul, completed in August 2008, that expanded the space from 1,500 to 11,000 square feet. The school has a variety of science laboratories. The nanotechnology lab opened in 2009 and offers spectrophotometers, a differential scanning calorimeter, and a probe station. The cell biology lab opened in 2004 and has a viability analyzer, a chip array bioanalyzer, an electroporator, and microplate readers. The stem cell lab opened in 2006 and features a DNA sequencer, a flow cytometer, RT-PCR and standard PCR machines, and a lyophilizer. The optics lab opened in 2008 and is home to one laser scanning confocal microscope, one scanning electron microscope (SEM) and one transmission electron microscope (TEM). There are also laboratories largely built and designated for specific academies. A dedicated Bloomberg workstation lets students conduct independent financial markets analysis and research. The option to earn a Bloomberg Certification is also available through tutorials. The school features two studio art labs. One of the studios is a visual arts lab equipped with compositing and printing equipment. 
A video lab broadcasts inside the school and features workstations, professional cameras, and a bluescreen. The school also has a restaurant-grade kitchen for teaching culinary arts. Awards and rankings In 2015, Bergen County Academies was one of 15 schools in New Jersey, and one of 9 public schools, to be recognized as a National Blue Ribbon School in the exemplary high performing category by the United States Department of Education. In the same year, Newsweek ranked BCA fifth among the top 500 public schools in America and fourth in New Jersey. Inside Jersey magazine ranked BCA first in its 2014 ranking of New Jersey's Top Performing High Schools. In the same year, The Daily Beast ranked BCA 15th in the nation among over 700 magnet and charter schools, second among the 25 Best High Schools in the Northeast, and first among schools in New Jersey. The Washington Post designated BCA as one of 23 top-performing schools with elite students intentionally excluded from its list of America's Most Challenging High Schools "because, despite their exceptional quality, their admission rules and standardized test scores indicate they have few or no average students." In October 2020, Niche ranked the school as the #1 public high school in the nation, as well as the #1 magnet school, #3 college prep public high school, #5 teachers in a public high school, and #7 STEM high school (all for America). It swept all of these categories at the state, county, and New York City area levels, except for STEM high school (where it ranked second in New Jersey and the NYC area) and college prep (where it ranked second in the NYC area). During the 2019-2020 school year, Bergen County Academies had the best graduation rate and SAT scores in the state of New Jersey. Notable alumni Harry Altman (class of 2005), featured in the 2002 documentary film Spellbound about the Scripps National Spelling Bee. Shakira Barrera (born 1990, class of 2008), dancer and actor who has appeared in the Netflix series GLOW. George Hotz (born 1989, class of 2007), known for computer and device hacking. Sachin H. Jain (born 1980, class of 1998), CEO of CareMore and former Chief Medical Information Officer of Merck. Kaavya Viswanathan (class of 2004), author of the controversial 2006 novel entitled How Opal Mehta Got Kissed, Got Wild, and Got a Life, since withdrawn due to accusations of plagiarism. References External links Bergen County Academies website Bergen County Academies PPO website National Center for Education Statistics website 1991 establishments in New Jersey Educational institutions established in 1991 Hackensack, New Jersey International Baccalaureate schools in New Jersey Magnet schools in New Jersey NCSSS schools Public high schools in Bergen County, New Jersey
17667000
https://en.wikipedia.org/wiki/Matt%20Stephens
Matt Stephens
Matt Stephens (born 1971) is an author and software process expert based in London, UK. In January 2010 he founded independent book publisher Fingerpress UK Ltd, and in November 2014 he founded the Virtual Reality book discovery site Inkflash. He is known for having spoken out against what he regards as popular (or populist) software development fashions, most notably Extreme Programming, Enterprise JavaBeans (EJB) and the Ruby programming language. He has co-authored four books on software development: Design Driven Testing: Test Smarter, Not Harder, Use Case Driven Object Modeling with UML: Theory and Practice, Agile Development with ICONIX Process, and Extreme Programming Refactored: The Case Against XP. He is also a columnist for The Register, a UK-based IT news website where he writes a monthly "Agile Iconoclast" column on software design and programming, and has written for Dr Dobb's Journal, Software Development Magazine, Application Development Trends and other journals and websites. Stephens' first book, Extreme Programming Refactored, has proved to be controversial as it satirizes the popular Extreme Programming (XP) agile methodology. The book triggered a lengthy debate in articles, internet newsgroups, and web-site chat areas. The core argument of the book is that XP is fragile rather than agile: its practices are interdependent, yet few practical organizations are willing or able to adopt all of them; therefore the entire process fails. On the book's first page he points out that he is not "anti-agile", but rather that the XP process is a fragile implementation of the values described in the Agile Manifesto. In Use Case Driven Object Modeling with UML, Stephens outlines an extension to the ICONIX object modeling process which he and co-author Doug Rosenberg termed Design Driven Testing (DDT), a deliberate reversal of Test Driven Development (TDD), a core tenet of XP. DDT provides a method of creating unit tests and customer acceptance tests that are driven from the design and behavioral requirements (use cases). DDT and the ICONIX modeling process have been adopted in a variety of large-scale software projects, e.g. the image processing software in the Large Synoptic Survey Telescope (LSST). In Design Driven Testing, Stephens compares DDT with TDD, and applies DDT to a real project run by ESRI Systems, to create a GIS mapping system for travel website VResorts.com. Notes and references External links Matt Stephens' website at http://articles.softwarereality.com Fingerpress book publisher http://www.fingerpress.co.uk Inkflash Virtual Reality website http://inkflash.com 1971 births Living people Writers from London British computer programmers British technology writers
46461675
https://en.wikipedia.org/wiki/OHSAA%20Central%20Region%20athletic%20conferences
OHSAA Central Region athletic conferences
This is a list of high school athletic conferences in the Central Region of Ohio, as defined by the OHSAA. Because the names of localities and their corresponding high schools do not always match and because there is often a possibility of ambiguity with respect to either the name of a locality or the name of a high school, the following table gives both in every case, with the locality name first, in plain type, and the high school name second in boldface type. The school's team nickname is given last. Central Catholic League Columbus Bishop Hartley Hawks (1957-) Columbus Bishop Ready Silver Knights (1960-) Columbus Bishop Watterson Eagles (1955-) Columbus St. Charles Cardinals (1923-) Columbus St. Francis DeSales Stallions (1960-) Former members Columbus Holy Family Golden Flashes (1928–60) Columbus Holy Rosary Ramblers (1928–66, consolidated into Father Wehrle) Newark Catholic Green Wave (1928-53 as St. Francis, 1991-2003) Columbus St. MaryRams (1928–66, consolidated into Father Wehrle) Zanesville St. Nicholas Flyers (1928–50, consolidated into Bishop Rosecrans) Zanesville St. Thomas Aquinas Irish (1928–44, 1946–50, consolidated into Bishop Rosecrans) Mount Vernon St. Vincent De Paul Blue Streaks (1928–53) Zanesville Bishop Rosecrans Bishops (1950–77, 1979-2007) Springfield Catholic Central Fighting Irish (1951–53) Lancaster St. Mary Saints (1951–53) Marion St. Mary Fighting Irish (1951–53) New Lexington St. Aloysius Blue Knights (1955-??) Columbus Father Wehrle Wolverines (1966–91, school closed) Columbus Worthington Christian High School Warriors (2007–13) Columbus Columbus School for Girls Unicorns (1988-2019, left for the Mid-State League) Columbus City League North Division Columbus Beechcroft Cougars (1976-) Columbus Centennial Stars (1976-) Columbus East Tigers (1924-) Columbus Linden McKinley Panthers (1928-) Columbus North International Lions (2010-) Columbus Mifflin Punchers (1973-) Columbus Northland Vikings (1965-) Columbus Whetstone Braves (1961-) South Division Columbus Africentric Nubians (1999-) Columbus Briggs Bruins (1962-) Columbus Eastmoor Academy Warriors (1956-) Columbus Independence 76ers (1976-) Columbus Marion-Franklin Red Devils (1958-) Columbus South Bulldogs (1922-) Columbus Walnut Ridge Scots (1961-) Columbus West Cowboys (1922-) Former members Columbus Central Pirates (1922-1982) Columbus North Polar Bears (1924-1979) Columbus Mohawk Indians (1967–1978) Columbus Brookhaven Bearcats (1963-2014) Columbus Aquinas College Terriers (1922-1965) Independents Former Powell Village Academy Griffins (Closed in 2019) Knox-Morrow Athletic Conference Started in 2017-18, the league was founded by members of the Mid-Ohio Athletic Conference's Blue Division. Cardington-Lincoln Pirates (2017-) Centerburg Trojans (2017-) Danville Blue Devils (2017-) Howard East Knox Bulldogs (2017-) Fredericktown Freddies (2017-) Sparta Highland Fighting Scots (2017-) Mount Gilead Indians (2017-) Galion Northmor Golden Knights (2017-) Licking County League The league has two incarnations, the first lasting from 1927 to 1991. The second incarnation began 23 years later after travel and budget concerns brought together the ten schools that had made up league membership until 1986. The league is split into two divisions - Big School and Small School. 
Big School Granville Blue Aces (1927-1950, 1963-1991, 2013-) Colors: Blue and White Zanesville Blue Devils (2020-) Colors: Blue and White Pataskala Licking Heights Hornets (1927-1983, 2013-, changed name from Summit Station 1958) Colors: Maroon and Gold Newark Licking Valley Panthers (1959-1991, 2013-) Colors: Red, White and Blue Pataskala Watkins Memorial Warriors (1955-1991, 2013-) Colors: Black and Gold Small School Heath Bulldogs (1964-1991, 2013-) Colors: Brown and White Johnstown-Monroe Johnnies (1927-1991, 2013-) Colors: Red and White Hebron Lakewood Lancers (1959-1991, 2013-) Colors: Red, White and Blue Newark Catholic Green Wave (1973-1991, 2013-) Colors: Green and White Johnstown Northridge Vikings (1963-1986, 2013-) Colors: Green and White Utica Redskins (1927-1991, 2013-) Colors: Red and Gray Former members Alexandria Red Devils (1927-1962, renamed Northridge South 1960, consolidated into Northridge) Colors: Red and Black Etna Eagles (1927-1955, consolidated into Watkins Memorial) Colors: Green and white Hanover Panthers (1927-1959, renamed Hanover-Toboso 1934, consolidated into Licking Valley) Colors: Maroon and White Hartford Yellow Jackets (1927-1960, consolidated into Northridge North) Colors: Orange and Black Hebron Trailblazers (1927-1958, consolidated into Lynnwood-Jacksontown) Colors: blue and Gray Homer Blue Devils (1927-1962, renamed Northridge North 1960, consolidated into Northride) Colors: Blue and Gold Jacksontown Trojans (1927-1959, renamed Lynnwood-Jacksontown 1958, consolidated into Lakewood) Colors: Red and White Kirkersville Komets (1927-1955, consolidated into Watkins Memorial) Colors: Red and White Pataskala Blue Streaks (1927-1955, consolidated into Watkins Memorial) Colors: Blue and White Toboso (1927-1934, consolidated into Hanover-Toboso) Mid-Ohio Athletic Conference Conference Website: http://www.moacsports.com/MISC/home.htm Bellville Clear Fork Colts (2017-) Galion Tigers (2014-) Marion Pleasant Spartans (1990-) Caledonia River Valley Vikings (1990-) Marion Harding Presidents (2014- all sports, 2015- in football) Ontario Warriors (2017-) Shelby Whippets (2018-) Former members Cardington-Lincoln Pirates (1990-2017, to Knox-Morrow Conference) Centerburg Trojans (2013-17, to Knox-Morrow Conference) Howard East Knox Bulldogs (2014-17, to Knox-Morrow Conference) Marion Elgin Comets (1990-2017, to Northwest Central Conference) Milford Center Fairbanks Panthers (2013-17, to Ohio Heritage Conference) Fredericktown Freddies (2013-17, to Knox-Morrow Conference) Sparta Highland Fighting Scots (1990-2017, to Knox-Morrow Conference) Plain City Jonathan Alder Pioneers (2013-17, to Central Buckeye Conference) Mount Gilead Indians (1990-2017, to Knox-Morrow Conference) Richwood North Union Wildcats (1990-2018, to Central Buckeye Conference, football competed in MOAC for 2018 season) Galion Northmor Golden Knights (1990-2017, to Knox-Morrow Conference) Morral Ridgedale Rockets (1990-2014) Upper Sandusky Rams (2014 football season only) Delaware Buckeye Valley Barons (1990-2019, to Mid-State League) Mid-Ohio Christian Athletic League Columbus Tree of Life Christian Trojans Delaware Christian Eagles Granville Christian Academy Lions Groveport Madison Christian Eagles Pataskala Liberty Christian Eagles Plain City Shekinah Christian School Flames Westerville Northside Christian Lions Former members Lancaster Fairfield Christian Academy Knights Gahanna Christian Academy Eagles Grove City Christian Eagles Maranatha Christian Patriots Mid-State League Buckeye Division Amanda 
Amanda-Clearcreek Aces (Amanda Black Aces before 1960) (1958-) Carroll Bloom-Carroll Bulldogs (Carroll before 1968, 1958-) Circleville Tigers (1990-) Lancaster Fairfield Union Falcons (1957-) Columbus Hamilton Township Rangers (1981-) Baltimore Liberty Union Lions (1949-) Circleville Logan Elm Braves (1973-) Departing Buckeye Division Member Ashville Teays Valley Vikings (1984-2024) Cardinal Division Sugar Grove Berne Union Rockets (1953-) Zanesville Bishop Rosecrans Bishops (2017-) Lancaster Fairfield Christian Academy Knights (2013-) Lancaster Fisher Catholic Irish (1964-) Grove City Christian Eagles (2013-) Canal Winchester Harvest Preparatory Warriors (2003-) (Football in Ohio Division) Corning Miller Falcons (2020-) Millersport Lakers (1957-) Columbus School for Girls Unicorns (2019-) Upper Arlington The Wellington School Jaguars (2017-) (No football) Ohio Division Bexley Lions (2003-) Columbus Bishop Ready Silver Knights (2017-) (Football only) Delaware Buckeye Valley Barons (2019-) Gahanna Columbus Academy Vikings (1949-1957, 2003-) Grandview Heights Bobcats (2003-) Whitehall-Yearling Rams (2003-) Worthington Christian Warriors (2013-) (Football in Cardinal Division) Former members Canal Winchester Indians (1957-1964, 1966-2012) Pickerington Tigers (1966-1981) Pataskala Licking Heights Hornets (1984-2012) New Albany Eagles (1990-2006) Granville Blue Aces (1991-2013) Heath Bulldogs (1991-2013) Hebron Lakewood Lancers (2003-2013) Newark Licking Valley Panthers (2003-2013) Newark Catholic Green Wave (2003-2013) London Madison-Plains Golden Eagles (2013-2017 to Ohio Heritage Conference) West Jefferson Roughriders (1949-1956, 2006-2017 to Ohio Heritage Conference) London Red Raiders (2013-2019, to Central Buckeye Conference) Ohio Capital Conference Conference realignment for 2020-21 thru 2023-24. Ohio Division Gahanna Lincoln Golden Lions (1968-) Grove City Greyhounds (1981-) New Albany Eagles (2006-) Pickerington North Panthers (2004-) Westerville Central Warhawks (2004-) Galloway Westland Cougars (1970-) Central Division Dublin Coffman Shamrocks (1991-) Hilliard Bradley Jaguars (2009-) Hilliard Davidson Wildcats (1974-) Powell Olentangy Liberty Patriots (2004-) Lewis Center Olentangy Orange Pioneers (2008-) Upper Arlington Golden Bears (1981-) Buckeye Division Grove City Central Crossing Comets (2002-) Groveport-Madison Cruisers (1974-) Lancaster Golden Gales (1997-) Newark Wildcats (1995-) Pickerington Central Tigers (1981-) Reynoldsburg Raiders (1968-) Cardinal Division Dublin Jerome Celtics (2004-) Hilliard Darby Panthers (1997-) Lewis Center Olentangy Braves (1997-) Lewis Center Olentangy Berlin Bears (2018-) Marysville Monarchs (1991-) Worthington Thomas Worthington Cardinals (1968-) Capital Division Sunbury Big Walnut Golden Eagles (1997-) Canal Winchester Indians (2013-) Delaware Hayes Pacers (1968-) Dublin Scioto Irish (1995-) Columbus Franklin Heights Falcons (1981-) Westerville North Warriors (1977-) Westerville South Wildcats (1968-) Columbus Worthington Kilbourne Wolves (1991-) Future Members Ashville Teays Valley Vikings (joining 2024) Logan Chieftains (joining 2024) Former Members Grove City Pleasant View Panthers (1968-1970). The school became a Junior High only. District built new school, Galloway Westland Cougars, to replace it. Whitehall-Yearling Rams (1968-2001). Now competing in the Mid-State League (Ohio Division). Chillicothe Cavaliers (1976–2006). Now competing in the Frontier Athletic Conference (FAC). Pataskala Watkins Memorial Warriors (1989-2013). 
Now competing in the Licking County League (LCL). Mount Vernon Yellow Jackets (1968-2016). Now competing in the Ohio Cardinal Conference. Defunct conferences See also Ohio High School Athletic Association Notes and references
3019875
https://en.wikipedia.org/wiki/Software%20factory
Software factory
A software factory is a structured collection of related software assets that aids in producing computer software applications or software components according to specific, externally defined end-user requirements through an assembly process. A software factory applies manufacturing techniques and principles to software development to mimic the benefits of traditional manufacturing. Software factories are generally involved with outsourced software creation. Description In software engineering and enterprise software architecture, a software factory is a software product line that configures extensive tools, processes, and content using a template based on a schema to automate the development and maintenance of variants of an archetypical product by adapting, assembling, and configuring framework-based components. Since coding requires a software engineer (or, in the parallel with traditional manufacturing, a skilled craftsman), it is eliminated from the process at the application layer, and the software is created by assembling predefined components instead of using traditional IDEs. Traditional coding is left only for creating new components or services. As with traditional manufacturing, the engineering is left to the creation of the components and to requirements gathering for the system. The end result of manufacturing in a software factory is a composite application. Purpose Software factory–based application development addresses the problem of traditional application development where applications are developed and delivered without taking advantage of the knowledge gained and the assets produced from developing similar applications. Many approaches, such as training, documentation, and frameworks, are used to address this problem; however, using these approaches to consistently apply the valuable knowledge previously gained during development of multiple applications can be an inefficient and error-prone process. Software factories address this problem by encoding proven practices for developing a specific style of application within a package of integrated guidance that is easy for project teams to adopt. Developing applications using a suitable software factory can provide many benefits, such as improved productivity, quality and evolution capability. Components Software factories are unique and therefore contain a unique set of assets designed to help build a specific type of application. In general, most software factories contain interrelated assets of the following types: Factory Schema: A document that categorizes and summarizes the assets used to build and maintain a system (such as XML documents, models, etc.) in an orderly way, and defines relationships between them. Reference implementation: Provides an example of a realistic, finished product that the software factory helps developers build. Architecture guidance and patterns: Help explain application design choices and the motivation for those choices. How-to topics: Provide procedures and instructions for completing tasks. Recipes: Automate procedures in How-to topics, either entirely or in specific steps. They can help developers complete routine tasks with minimal input. Templates: Pre-made application elements with placeholders for arguments. They can be used for creating initial project items. Designers: Provide information that developers can use to model applications at a higher level of abstraction. Reusable code: Components that implement common functionality or mechanisms. Integration of reusable code in a software factory reduces the requirements for manually written code and encourages reuse across applications. 
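As a purely hypothetical illustration of how these asset types can fit together (none of the names, fields, or file layouts below come from an actual software factory product), a recipe might read variant-specific arguments from a factory schema, fill the placeholders of a template, and emit an initial project item that delegates to reusable code:

    # Hypothetical sketch of a software factory "recipe": it reads a product
    # schema (here a plain dict), fills a code template's placeholders, and
    # writes out an initial project item. All names below are illustrative only.
    from string import Template
    from pathlib import Path

    # Factory schema: summarizes the assets and the variant-specific arguments.
    schema = {
        "product_name": "OrderService",
        "entity": "Order",
        "operations": ["create", "cancel"],
    }

    # Template: a pre-made application element with placeholders for arguments.
    service_template = Template(
        "class ${product_name}:\n"
        '    """Auto-generated service for ${entity} records."""\n'
        "${methods}\n"
    )

    method_template = Template(
        "    def ${operation}_${entity_lower}(self, payload):\n"
        "        # Reusable code would be called here instead of hand-written logic.\n"
        "        raise NotImplementedError\n"
    )

    def run_recipe(schema: dict, out_dir: str = "generated") -> Path:
        """Recipe: automates the How-to steps for creating an initial project item."""
        methods = "\n".join(
            method_template.substitute(
                operation=op, entity_lower=schema["entity"].lower()
            )
            for op in schema["operations"]
        )
        source = service_template.substitute(
            product_name=schema["product_name"],
            entity=schema["entity"],
            methods=methods,
        )
        target = Path(out_dir)
        target.mkdir(exist_ok=True)
        out_file = target / f"{schema['product_name'].lower()}.py"
        out_file.write_text(source)
        return out_file

    if __name__ == "__main__":
        print(f"Generated {run_recipe(schema)}")

In a real software factory the schema, templates, and recipes are considerably richer and are integrated into the development tools; the sketch only illustrates the core idea of assembling predefined assets instead of hand coding each application.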
Product development Building a product using a software factory involves the following activities: Problem analysis: Determines whether the product is in the scope of a software factory. The fit determines whether all or some of the product is built with the software factory. Product specification: Defines the product requirements by outlining the differences from the product line requirements using a range of product specification mechanisms. Product design: Maps the differences in requirements to differences in product line architecture and development process to produce a customized process. Product implementation: A range of mechanisms can be used to develop the implementation depending on the extent of the differences. Product deployment: Involves creating or reusing default deployment constraints and configuring the required resources necessary to install the executables being deployed. Product testing: Involves creating or reusing test assets (such as test cases, data sets, and scripts) and applying instrumentation and measurement tools. Benefits Developing applications using a software factory can provide many benefits when compared to conventional software development approaches. These include the following: Consistency: Software factories can be used to build multiple instances of a software product line (a set of applications sharing similar features and architecture), making it easier to achieve consistency. This simplifies governance and also lowers training and maintenance costs. Quality: Using a software factory makes it easier for developers to learn and implement proven practices. Because of the integration of reusable code, developers are able to spend more time working on features that are unique to each application, reducing the likelihood of design flaws and code defects. Applications developed using a software factory can also be verified before deployment, ensuring that factory-specific best practices were followed during development. Productivity: Many application development activities can be streamlined and automated, such as reusing software assets and generating code from abstractions of the application elements and mechanisms. These benefits can provide value to several different teams in the following ways: Value for business Business tasks can be simplified, which can significantly increase user productivity. This is achieved through using common and consistent user interfaces that reduce the need for end-user training. Easy deployment of new and updated functionality and flexible user interfaces also allow end users to perform tasks in a way that follows the business workflow. Data quality improvements reduce the need to exchange data between application parts using ALT+TAB and copy-and-paste techniques. Value for architects Software factories can be used by architects to design applications and systems with improved quality and consistency. This is achieved through the ability to create a partial implementation of a solution that includes only the most critical mechanisms and shared elements. Known as the baseline architecture, this type of implementation can address design and development challenges, expose architectural decisions and mitigate risks early in the development cycle. 
Software factories also make it possible to create a consistent and predictable way of developing, packaging, deploying and updating business components to enforce architectural standards independent of business logic. Value for developers Developers can use software factories to increase productivity and reduce ramp-up time. This is achieved through creating a high-quality starting point (baseline) for applications, which includes code and patterns. This enables projects to begin with a higher level of maturity than traditionally developed applications. Reusable assets, guidance and examples help address common scenarios and challenges, and automation of common tasks allows developers to easily apply guidance in consistent ways. Software factories provide a layer of abstraction that hides application complexity and separates concerns, allowing developers to focus on different areas such as business logic, the user interface (UI) or application services without in-depth knowledge of the infrastructure or baseline services. Abstraction of common developer tasks and increased reusability of infrastructure code can help boost productivity and maintainability. Value for operations Applications built with software factories result in a consolidation of operational efforts. This provides easier deployment of common business elements and modules, resulting in consistent configuration management across a suite of applications. Applications can be centrally managed with a pluggable architecture that allows operations teams to control basic services. Other approaches There are several approaches that represent contrasting views on software factory concepts, ranging from tool-oriented to process-oriented initiatives. The following approaches cover Japanese, European, and North American initiatives. Industrialized software organization (Japan) Under this approach, software produced in the software factory is primarily used for control systems, nuclear reactors, turbines, etc. The main objectives of this approach are quality matched with productivity, ensuring that the increased costs do not weaken competitiveness. There is also the additional objective of creating an environment in which design, programming, testing, installation and maintenance can be performed in a unified manner. The key to improving quality and productivity is the reuse of software. Dominant traits of the organizational design include a determined effort to make operating work routine, simple and repetitive and to standardize work processes. A representative of this approach would be Toshiba's software factory concept, denoting the company's software division and procedures as they were in 1981 and 1987 respectively. Generic software factory (Europe) This approach was funded under the Eureka program and called the Eureka Software Factory. Participants in this project are large European companies, computer manufacturers, software houses, research institutes and universities. The aim of this approach is to provide the technology, standards, organizational support and other necessary infrastructures in order for software factories to be constructed and tailored from components marketed by independent suppliers. The objective of this approach is to produce an architecture and framework for integrated development environments. The generic software factory develops components and production environments that are part of software factories together with standards and guidance for software components. 
Experience-based component factory (North America) The experience-based component factory was developed at the Software Engineering Laboratory at the NASA Goddard Space Flight Center. The goals of this approach are to "understand the software process in a production environment, determine the impact of available technologies and infuse identified/refined methods back into the development process". The approach has been to experiment with new technologies in a production environment, extract and apply experiences and data from experiments and to measure the impact with respect to cost, reliability and quality. This approach puts a heavy emphasis on continuous improvement through understanding the relationship between certain process characteristics and product qualities. The software factory is used to collect data about strengths and weaknesses to set baselines for improvements and to collect experiences to be reused in new projects. Mature software organization (North America) Defined by the Capability Maturity Model, this approach is intended to create a framework to achieve a predictable, reliable, and self-improving software development process that produces software of high quality. The strategy consists of step-wise improvements in software organization, defining which processes are key in development. The software process and the software product quality are predictable because they are kept within measurable limits. History The first company to adopt this term was Hitachi in 1969 with its Hitachi Software Works. Later, other companies such as System Development Corporation in 1975, NEC, Toshiba and Fujitsu in 1976 and 1977 followed the same organizational approach. Cusumano suggests that there are six phases for software factories: Basic organization and management structure (mid-1960s to early 1970s) Technology tailoring and standardization (early 1970s to early 1980s) Process mechanization and support (late 1970s) Process refinement and extension (early 1980s) Integrated and flexible automation (mid-1980s) Incremental product / variety improvement (late 1980s) See also Software Factory (Microsoft .NET) Software Product Line Software Lifecycle Processes Software engineering Systems engineering Software development process Factorette Automatic programming Domain-Specific Modeling (DSM) Model Driven Engineering (MDE) References External links Harvard Business Review Wipro Technologies: The Factory Model Outsourcing Without Offshoring Is Aim of ‘Software Factory’ By P. J. Connolly Information technology management Software project management
22476640
https://en.wikipedia.org/wiki/IxiQuarks
IxiQuarks
ixiQuarks is experimental music software released by the ixi software team, focusing on both live and studio production contexts. ixiQuarks is a software environment designed for live musical improvisation that allows for user interaction at the hardware, GUI and code levels. The environment enables innumerable setups with flexible loading of tools and instruments. The ixiQuarks consist of different types of tools: basic utilities, instruments, effects, filters, spectral effects and generators. In 2008, ixiQuarks won first prize in the Lomus international music software contest organized by the Association Française d’Informatique Musicale. This software is written in SuperCollider and is part of an extended research programme exploring human-computer interaction in computer music. References External links Studio Toolz article Ixi-audio.net Supercollider.sourceforge.net Musical improvisation Electronic music software
65722701
https://en.wikipedia.org/wiki/Reliability%20verification
Reliability verification
Reliability verification or reliability testing is a method to evaluate the reliability of a product in all environments, such as expected use, transportation, or storage, during its specified lifespan. The product is exposed to natural or artificial environmental conditions in order to evaluate its performance under the environmental conditions of actual use, transportation, and storage, and to analyze and study the degree of influence of environmental factors and their mechanism of action. Environmental test equipment is used to simulate high temperature, low temperature, high humidity, and temperature changes in the climate environment, accelerating the product's reaction to its use environment in order to verify whether it reaches the quality expected from R&D, design, and manufacturing. Description Reliability is the probability of a product performing its intended function over its specified period of usage and under specified operating conditions, in a manner that meets or exceeds customer expectations. Reliability verification, also called reliability testing, refers to the use of modeling, statistics, and other methods to evaluate the reliability of the product based on its life span and expected performance. Most products on the market require reliability testing, including automotive components, integrated circuits, heavy machinery used to mine natural resources, and aircraft automation software. Reliability criteria The criteria to be tested depend on the product or process under test, but five components are most common: Product life span Intended function Operating conditions Probability of performance User expectations The product life span can be split into four different periods for analysis. Useful life is the estimated economic life of the product, defined as the time the product can be used before the cost of repair no longer justifies its continued use. Warranty life is the period within which the product should perform its function as specified. Design life is the lifetime targeted during the design of the product, for which the designer takes into consideration the lifetimes of competing products and customer desires, and ensures that the product does not result in customer dissatisfaction. Testing method A systematic approach to reliability testing is to first determine the reliability goal, then perform tests that are linked to performance and determine the reliability of the product. Reliability verification tests in modern industries should clearly determine how they relate to the product's overall reliability performance and how individual tests impact the warranty cost and customer satisfaction. Hardware Hardware reliability verification includes temperature and humidity tests, mechanical vibration tests, shock tests, collision tests, drop tests, dustproof and waterproof tests, and other environmental reliability tests. Growth in safety-critical applications for automotive electronics significantly increases the IC design reliability challenge. Hardware Testing of Electric Hot Water Heaters Providing Energy Storage and Demand Response Through Model Predictive Control is a paper from the Institute of Electrical and Electronics Engineers, written by Halamay, D.A., Starrett, M. and Brekken, T.K.A. The authors first discuss how a classical steady-state model commonly used for simulation of electric hot water heaters can be inaccurate. 
The paper then presents results from hardware testing which demonstrate that systems of water heaters under Model Predictive Control can be reliably dispatched to deliver set-point levels of power to within 2% error. The authors then present experimental results which show a promising pathway to controlling hot water heaters as energy storage systems capable of delivering flexible capacity and fast-acting ancillary services on a firm basis. Advanced Circuit Reliability Verification for Robust Design is a journal article that discusses the models used in circuit reliability verification and the application of these models. It first discusses how the growth in safety-critical applications for automotive electronics significantly increases the IC design reliability challenge. The author then discusses the latest Synopsys AMS solution for robust design. This part of the article is very technical, mostly covering how AMS can strengthen reliability for full-chip mixed-signal verification. The article can be a useful source for investigating why it is important to focus more on reliability verification nowadays. Software A Pattern-Based Software Testing Framework for Exploitability Evaluation of Metadata Corruption Vulnerabilities, developed by Deng Fenglei, Wang Jian, Zhang Bin, Feng Chao, Jiang Zhiyuan and Su Yunfei, discusses the increased attention given to software quality assurance and protection. However, today's software still unfortunately fails to be protected from cyberattacks, especially in the presence of insecure organization of heap metadata. The authors aim to explore whether heap metadata could be corrupted and exploited by cyber-attackers, and they propose RELAY, a software testing framework that simulates human exploitation behavior for metadata corruption at the machine level. RELAY also solves the layout problem according to the exploit pattern while consuming fewer resources, and generates the final exploit. A Methodology to Define Learning Objects Granularity, developed by BENITTI, Fabiane Barreto Vavassori, first discusses how learning objects have been one of the main research topics in the e-learning community in recent years and how granularity is a key factor for learning object reuse. The author then presents a methodology to define learning object granularity in the computing area, as well as a case study in software testing. Later, the author carries out five experiments to evaluate the learning potential of the produced learning objects and to demonstrate the possibility of learning object reuse. Results from the experiments are also presented in the article, showing that learning objects promote the understanding and application of the concepts. A recent article, Reliability Verification of Software Based on Cloud Service, has had a groundbreaking effect; it explores how the software industry needs a way to measure the reliability of each component of the software. In this article, a guarantee-verification method based on cloud service is proposed. The article first discusses how the trustworthiness of each component is defined in terms of component service guarantee-verification. An effective component model is then defined, and based on the proposed model, the process of verifying a component service is illustrated in an application sample. See also Reliability engineering References Reliability engineering
42295803
https://en.wikipedia.org/wiki/Serguei%20Beloussov
Serguei Beloussov
Serguei Beloussov (born August 2, 1971) is a Singaporean businessman of Russian descent, entrepreneur, investor and speaker. He is the founder and Chairman of the Board of the Schaffhausen Institute of Technology and of multiple global IT companies, including Acronis, a global data protection company, and is the senior founding partner of Runa Capital, a technology investment firm. He is also executive chairman of the board and chief architect of Parallels, Inc., a virtualization technology company, co-founder and chairman of the board of Acumatica, an enterprise resource planning (ERP) software company, and co-founder of QWave Capital. Beloussov has filed more than 350 U.S. patents and has an h-index of 40. Early life and education Beloussov was born in 1971 in St. Petersburg into a Jewish family and studied at the 45th Physics-Mathematics School. Beloussov later attended the Moscow Institute of Physics and Technology, graduating in 1992 with a bachelor's degree in physics. He received his master's degree in physics and electrical engineering in 1995, and a PhD in computer science in 2007. Beloussov came to Singapore in 1994 and became a Singaporean citizen in 2001. Career While earning his master's, Beloussov co-founded his first business, Unium (Phystech College), which provided science students with course materials. In 1992, he began working at a Russian computer company called Sunrise. Beloussov expanded the company's operations to 10 subsidiaries, making it one of the largest PC retailers in Russia by the time he left in 1994. After leaving Sunrise, Beloussov founded and co-owned two companies: Rolsen Electronics and Solomon Software SEA. Rolsen Electronics was set up as a joint venture with Vikash Shah of Amoli Group and became one of the largest consumer electronics manufacturers in Russia. Solomon Software SEA was the distribution and development arm of mid-market ERP vendor Solomon Software in South-East Asia. Solomon Software was later acquired by Microsoft and is now known as Microsoft Dynamics SL. In 2000, Beloussov founded SWsoft, a privately held server automation and virtualization software company and the then-parent company of Parallels, Inc. and Acronis Inc. Schaffhausen Institute of Technology Beloussov founded the Schaffhausen Institute of Technology in 2019 in Switzerland. The Schaffhausen Institute of Technology is an international research-led university for selected areas in computer science, physics, and technology transformation. Acronis In 2001, Beloussov founded Acronis as a storage management business unit of SWsoft. In 2003, Acronis was re-organized as a separate entity focused on backup and data protection software. Acronis employs over 1,500 people worldwide and its products are sold in 15 languages around the world. Beloussov has served on the board of directors since 2002. From 2007 to 2011, he turned his focus to Parallels, acting as CEO of the company. During this time he also founded a pair of venture capital funds, Runa Capital and QWave Capital. He returned as CEO of Acronis in May 2013, replacing former CEO Alex Pinchev. He currently serves as Acronis' Chief Research Officer; prior to this role, he served as Acronis' Chief Executive Officer from 2013 to 2021. As of July 1, 2021, Beloussov stepped down as CEO of Acronis to focus on the company's technology and research strategy as Chief Research Officer. Parallels, Inc. Parallels, Inc. 
was initially a server automation and virtualization software unit of SWsoft before it was spun off as a separate entity, maintaining its own distinct branding. In December 2007, SWsoft announced its plans to change its name to Parallels and ship both companies' products under the Parallels name. The merger was formalized in January 2008. From 2007 to 2013, Beloussov led the company as CEO, while remaining on the board of directors at Acronis. Beloussov stepped down as CEO and serves as the executive chairman of the board and chief architect of Parallels, Inc. The company has more than 900 employees across offices in North America, Europe, Australia and Asia, and as of 2012 it had 5,000 customers and partners worldwide. Odin Automation, a service automation platform company owned by Parallels and founded by Beloussov, was sold to Ingram Micro in December 2015. Runa Capital In August 2010, Beloussov co-founded Runa Capital with Dmitry Chikhachev and Ilya Zubarev. The $135 million technology venture capital firm was created "to seek growth opportunities in the rapidly growing areas of the tech sector, with specific focus on cloud computing and other hosted services, virtualization and mobile applications." Beloussov and Zubarev are senior partners at the investment firm. Since 2010, Runa Capital has invested in over 30 companies with a combined $10 billion in assets. Runa Capital's largest investment was a $10 million Series C funding round of Acumatica on November 18, 2013. The round was led together with Almaz Capital. Other ventures Beloussov is a co-founder and chairman of the board at Acumatica, a global cloud ERP company founded in 2007 with offices in Moscow, Singapore and Washington, D.C. In 2012, Beloussov, Serguei Kouzmine and Zubarev co-founded QWave Capital. The company has offices in Moscow, Boston, and New York City. QWave Capital has over $300 million in funds and has invested in four quantum technology companies: ID Quantique, Nano Meta Technologies, Clifton and Centrice. Between 2012 and 2017, Beloussov sat on the Governing Board of the Centre for Quantum Technologies. The Singapore-based research institute is a Research Centre of Excellence hosted by the National University of Singapore. The Centre brings together quantum physicists and computer scientists to explore the quantum nature of reality and the fundamental limits of information processing. In 2021, he contributed to funding a new quantum computer startup company, QuEra, which is developing a 256-qubit machine. See also Acronis Parallels, Inc. Runa Capital Acumatica References 1971 births Living people Moscow Institute of Physics and Technology alumni
61308340
https://en.wikipedia.org/wiki/Artix%20Linux
Artix Linux
Artix Linux, or Artix, is a rolling-release distribution based on Arch Linux that uses the OpenRC, runit, s6, suite66 or dinit init systems instead of systemd. Artix Linux has its own package repositories but, as a pacman-based distribution, can use packages from Arch Linux repositories or from any other derivative distribution, even packages that explicitly depend on systemd. The Arch User Repository (AUR) can also be used. Arch OpenRC began in 2012, and Manjaro OpenRC was subsequently developed alongside it. In 2017 these projects merged to create Artix Linux. Release history Artix initially offered two installation environments, a base command-line ISO image and the graphical Calamares installer based on the LXQt desktop, with an i3 version following later. Those early versions featured the OpenRC init system. The latest installation media are available with a variety of desktop environments, including LXDE, XFCE, MATE, Cinnamon and KDE Plasma 5. Additionally, two unofficial community editions featuring GTK and Qt desktops and a larger software base are offered, aimed at too-busy-to-customise or less experienced users. All current installation media come in OpenRC, runit, s6 and suite66 versions. Reception An early review published on DistroWatch on 27 November 2017 found a few bugs, but overall "Artix is working with a good idea [...] It's minimal, it is rolling and it offers a little-used init system. All of these I think make the project worthwhile." More critically, another review at the time from linux-community.de concluded that "the results so far are not exactly motivating." Much more favourable reviews were later featured on both sites. A review from Softpedia gave Artix a 5 out of 5 stars rating, noting its "beautiful and pleasant graphical environments." Readers' reviews on DistroWatch are mostly very favourable, with an average rating of 9.0. References External links 2018 linux-community.de review (in German) pro-linux.de review (in German) Softpedia review Feature story in Distrowatch weekly 2020 linux-community.de review (in German) Arch-based Linux distributions Linux distributions without systemd Rolling Release Linux distributions Linux distributions
757980
https://en.wikipedia.org/wiki/Brownfield%20%28software%20development%29
Brownfield (software development)
Brownfield development is a term commonly used in the information technology industry to describe problem spaces needing the development and deployment of new software systems in the immediate presence of existing (legacy) software applications/systems. This implies that any new software architecture must take into account and coexist with live software already in situ. In contemporary civil engineering, Brownfield land means places where new buildings may need to be designed and erected considering the other structures and services already in place. Brownfield development adds a number of improvements to conventional software engineering practices. These traditionally assume a "clean sheet of paper" or "greenfield land" target environment throughout the design and implementation phases of software development. Brownfield extends such traditions by insisting that the context (local landscape) of the system being created be factored into any development exercise. This requires a detailed knowledge of the systems, services and data in the immediate vicinity of the solution under construction. Addressing environmental complexity Reliably re-engineering existing business and IT environments into modern competitive, integrated architectures is non-trivial. The complexity of business and IT environments has been accumulating almost unchecked for forty years, making changes ever more expensive. This is because: Environmental complexity is often expressed in legacy code. Legacy skills shortages are driving up maintenance and integration costs. Existing complex environments must be re-engineered in phases that make operational sense to their associated business function. These phases often default to wholesale, risky replacements of systems, as ignorance of existing complexity means that potential incremental changes are too difficult to understand and engineer. Accelerated development methods have left enterprises with modern legacy systems. Complex Java and .NET applications have many of the same problems as older COBOL applications. As a result, an increasing proportion of the effort of developing new business capabilities is spent on understanding and integrating with the existing complex system and business landscape rather than delivering value. It has been observed that up to 75% of overall project effort is now spent on software integration and migration rather than new functionality. The IT industry as a whole has a poor success rate at delivering such large-scale change for its clients. The CHAOS survey from the Standish Group has tracked an overall improvement in IT project delivery success over the last twenty years, but even in 2006 large IT projects still failed more often than they succeeded. Engineering change in such environments has many parallels with the concerns of the construction industry in redeveloping industrial or contaminated sites. They are full of hazards and unexpected complexities, and tend to be risky and expensive to redevelop. The accumulated complexity of IT environments has made them "Brownfield" sites. It is not the complexity of the new function or any new system characteristics that are the root of large project failures – it is our understanding and communication of the overall requirement (as identified in The Mythical Man Month). To succeed, the requirements need to include a precise and thorough understanding of the constraints of the existing business and IT. 
Current "Greenfield" tooling and methods use early, informal and often imprecise abstractions that essentially ignore such complexity. Early, poorly informed abstractions are usually wrong and are often detected late in construction, resulting in delays, expensive rework and even failed developments. A Brownfield-oriented approach embraces existing complexity, and is used to reliably accelerate the overall solution engineering process, including enabling phased, incremental change wherever possible. Brownfield takes the standard OMG model/pattern-driven approach and turns it on its head. Rather than taking the conventional approach of starting with a Conceptual Model and driving down to Platform Specific Models and code generation, Brownfield starts by harvesting code and other existing artifacts and uses patterns to formally abstract upwards towards the Architecture and Business tier. Standard Greenfield techniques are then used in combination to define the preferred business target. This "meet in the middle" technique is familiar from other development methods, but the extensive use of formal abstraction and the use of patterns for both discovery and generation is novel. The underlying conceptual architecture of all Brownfield tools is known as VITA. VITA stands for Views, Inventory, Transformation and Artifacts. In a VITA architecture, the problem definition of the target space can be maintained as separate (though related) native "headfulls" of knowledge known as Views. The core advantage of a View is that it can be based on almost any formal tool. Brownfield does not impose a single tool or language on a problem space – a core tenet is that the headfulls continue to be maintained in their native forms and tools. Native Views are then brought together and linked into a single Inventory. The Inventory is then used with a series of Transformation capabilities to produce the Artifacts that the solution needs. Views can currently be imported from a wide variety of sources including UML, XML sources, DDL, spreadsheets, etc. The Analysis and Renovation Catalyst tool from IBM has taken this capability even further via the use of formal grammars and Abstract Syntax Trees to enable almost any program to be parsed and tokenized into a View for inclusion into the Inventory. The rapid cyclic nature of the discovery, re-engineer, generate and test cycle used in this approach means that solutions can be refined iteratively in terms of their logical and physical definitions as more of the constraints become known and the solution architecture is refined. Iterative Brownfield development can allow the gradual refinement of logical and physical architectures and incremental testing for the whole approach, resulting in development acceleration, improved solution quality and cheaper defect removal. Brownfield can also be used to generate solution documentation, ensuring it is always up to date and consistent across different viewpoints. The Inventory that is created through Brownfield processes may be highly complex, being an interconnected multi-dimensional semantic network. The level of knowledge in the Inventory can be very fine grained, highly detailed and interrelated. Such things are hard to understand and can provide barriers to communication, however. Brownfield solves this problem by abstracting concepts via an artisan's best guess, using known patterns in its Inventories to extract and infer higher level relationships. 
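The VITA flow (Views, Inventory, Transformation, Artifacts) can be pictured with a toy data structure. The sketch below is purely illustrative and is not based on any actual Brownfield tool; every element and relation name in it is invented, and real tools harvest Views from sources such as UML, XML, DDL and program code.

# Toy illustration of the VITA flow: Views -> Inventory -> Transformation -> Artifacts.
# All element names are invented for the example.

# Each "View" is one headful of knowledge kept in its native form,
# reduced here to (element, relation, element) facts.
ddl_view = [("CUSTOMER", "has_column", "CUSTOMER_ID"),
            ("ORDER", "has_column", "CUSTOMER_ID")]
uml_view = [("Customer", "maps_to", "CUSTOMER"),
            ("Customer", "places", "Order"),
            ("Order", "maps_to", "ORDER")]

def build_inventory(*views):
    """Link the separate Views into a single Inventory (a small semantic network)."""
    inventory = set()
    for view in views:
        inventory.update(view)
    return inventory

def transform(inventory, pattern):
    """Apply a pattern over the Inventory to generate Artifacts (here, plain text)."""
    return [pattern.format(a=a, b=b)
            for a, rel, b in sorted(inventory) if rel == "has_column"]

inventory = build_inventory(ddl_view, uml_view)
print(transform(inventory, "table {a} exposes field {b}"))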
Formal abstractions enable the complexity of the Inventory to be translated into simpler, but inherently accurate, representations for easier consumption by those that need to understand the problem space. These abstracted Inventory models can be used to automatically render multi-layered architecture representations in tools such as Second Life. Such visualizations enable complex information to be shared and experienced by multiple individuals from around the globe in real time. This enhances both understanding and a sense of a single team. References DeveloperWorks Interviews: Booch, Nackman, and Royce on IBM Rational at five years Browninfo Methodology and Software for Development of Interactive Brownfield Databases: T. Vujičić, D. Simonović, A. Đukić, M. Šestić Software project management
217356
https://en.wikipedia.org/wiki/Concurrency%20control
Concurrency control
In information technology and computer science, especially in the fields of computer programming, operating systems, multiprocessors, and databases, concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible. Computer systems, both software and hardware, consist of modules, or components. Each component is designed to operate correctly, i.e., to obey or to meet certain consistency rules. When components that operate concurrently interact by messaging or by sharing accessed data (in memory or storage), a certain component's consistency may be violated by another component. The general area of concurrency control provides rules, methods, design methodologies, and theories to maintain the consistency of components operating concurrently while interacting, and thus the consistency and correctness of the whole system. Introducing concurrency control into a system means applying operation constraints which typically result in some performance reduction. Operation consistency and correctness should be achieved with as good as possible efficiency, without reducing performance below reasonable levels. Concurrency control can require significant additional complexity and overhead in a concurrent algorithm compared to the simpler sequential algorithm. For example, a failure in concurrency control can result in data corruption from torn read or write operations. Concurrency control in databases Comments: This section is applicable to all transactional systems, i.e., to all systems that use database transactions (atomic transactions; e.g., transactional objects in Systems management and in networks of smartphones which typically implement private, dedicated database systems), not only general-purpose database management systems (DBMSs). DBMSs need to deal also with concurrency control issues not typical just to database transactions but rather to operating systems in general. These issues (e.g., see Concurrency control in operating systems below) are out of the scope of this section. Concurrency control in Database management systems (DBMS; e.g., Bernstein et al. 1987, Weikum and Vossen 2001), other transactional objects, and related distributed applications (e.g., Grid computing and Cloud computing) ensures that database transactions are performed concurrently without violating the data integrity of the respective databases. Thus concurrency control is an essential element for correctness in any system where two database transactions or more, executed with time overlap, can access the same data, e.g., virtually in any general-purpose database system. Consequently, a vast body of related research has been accumulated since database systems emerged in the early 1970s. A well established concurrency control theory for database systems is outlined in the references mentioned above: serializability theory, which allows to effectively design and analyze concurrency control methods and mechanisms. An alternative theory for concurrency control of atomic transactions over abstract data types is presented in (Lynch et al. 1993), and not utilized below. This theory is more refined, complex, with a wider scope, and has been less utilized in the Database literature than the classical theory above. Each theory has its pros and cons, emphasis and insight. To some extent they are complementary, and their merging may be useful. 
To ensure correctness, a DBMS usually guarantees that only serializable transaction schedules are generated, unless serializability is intentionally relaxed to increase performance, but only in cases where application correctness is not harmed. For maintaining correctness in cases of failed (aborted) transactions (which can always happen for many reasons), schedules also need to have the recoverability (from abort) property. A DBMS also guarantees that no effect of committed transactions is lost, and no effect of aborted (rolled back) transactions remains in the related database. Overall transaction characterization is usually summarized by the ACID rules below. As databases have become distributed, or needed to cooperate in distributed environments (e.g., Federated databases in the early 1990s, and Cloud computing currently), the effective distribution of concurrency control mechanisms has received special attention. Database transaction and the ACID rules The concept of a database transaction (or atomic transaction) has evolved in order to enable both a well understood database system behavior in a faulty environment where crashes can happen any time, and recovery from a crash to a well understood database state. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring lock, etc.), an abstraction supported in databases and also in other systems. Each transaction has well defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands). Every database transaction obeys the following rules (by support in the database system; i.e., a database system is designed to guarantee them for the transactions it runs): Atomicity - Either the effects of all or none of its operations remain ("all or nothing" semantics) when a transaction is completed (committed or aborted respectively). In other words, to the outside world a committed transaction appears (by its effects on the database) to be indivisible (atomic), and an aborted transaction does not affect the database at all. Either all the operations are done or none of them are. Consistency - Every transaction must leave the database in a consistent (correct) state, i.e., maintain the predetermined integrity rules of the database (constraints upon and among the database's objects). A transaction must transform a database from one consistent state to another consistent state (however, it is the responsibility of the transaction's programmer to make sure that the transaction itself is correct, i.e., performs correctly what it intends to perform (from the application's point of view) while the predefined integrity rules are enforced by the DBMS). Thus, since a database can normally be changed only by transactions, all the database's states are consistent. Isolation - Transactions cannot interfere with each other (as an end result of their executions). Moreover, usually (depending on the concurrency control method) the effects of an incomplete transaction are not even visible to another transaction. Providing isolation is the main goal of concurrency control. Durability - Effects of successful (committed) transactions must persist through crashes (typically by recording the transaction's effects and its commit event in non-volatile memory). 
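To make the "all or nothing" behaviour concrete, the following minimal sketch (an illustration only, not drawn from the references above) shows a single transaction committed or rolled back through Python's built-in sqlite3 module; the account table and the transfer amounts are invented for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    # One transaction: both updates must take effect, or neither ("all or nothing").
    conn.execute("UPDATE account SET balance = balance - 30 WHERE name = 'alice'")
    conn.execute("UPDATE account SET balance = balance + 30 WHERE name = 'bob'")
    conn.commit()        # commit: the effects become durable
except sqlite3.Error:
    conn.rollback()      # abort: no partial effects remain in the database

print(dict(conn.execute("SELECT name, balance FROM account")))

If the second UPDATE raised an error, the rollback would leave both balances exactly as they were before the transaction started.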
The concept of the atomic transaction has been extended over the years to what has become Business transactions, which actually implement types of Workflow and are not atomic. However, such enhanced transactions typically utilize atomic transactions as components. Why is concurrency control needed? If transactions are executed serially, i.e., sequentially with no overlap in time, no transaction concurrency exists. However, if concurrent transactions with interleaving operations are allowed in an uncontrolled manner, some unexpected, undesirable results may occur, such as: The lost update problem: A second transaction writes a second value of a data-item (datum) on top of a first value written by a first concurrent transaction, and the first value is lost to other transactions running concurrently which need, by their precedence, to read the first value. The transactions that have read the wrong value end with incorrect results. The dirty read problem: Transactions read a value written by a transaction that has later been aborted. This value disappears from the database upon abort, and should not have been read by any transaction ("dirty read"). The reading transactions end with incorrect results. The incorrect summary problem: While one transaction takes a summary over the values of all the instances of a repeated data-item, a second transaction updates some instances of that data-item. The resulting summary does not reflect a correct result for any (usually needed for correctness) precedence order between the two transactions (if one is executed before the other), but rather some random result, depending on the timing of the updates, and whether certain update results have been included in the summary or not. Most high-performance transactional systems need to run transactions concurrently to meet their performance requirements. Thus, without concurrency control such systems can neither provide correct results nor maintain their databases consistently. Concurrency control mechanisms Categories The main categories of concurrency control mechanisms are: Optimistic - Delay the checking of whether a transaction meets the isolation and other integrity rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations ("...and be optimistic about the rules being met..."), and then abort a transaction to prevent the violation, if the desired rules are to be violated upon its commit. An aborted transaction is immediately restarted and re-executed, which incurs an obvious overhead (versus executing it to the end only once). If not too many transactions are aborted, then being optimistic is usually a good strategy. Pessimistic - Block an operation of a transaction, if it may cause violation of the rules, until the possibility of violation disappears. Blocking operations typically involves a performance reduction. Semi-optimistic - Block operations in some situations, if they may cause violation of some rules, and do not block in other situations, while delaying rules checking (if needed) to the transaction's end, as done with optimistic. Different categories provide different performance, i.e., different average transaction completion rates (throughput), depending on transaction type mix, computing level of parallelism, and other factors. If selection and knowledge about trade-offs are available, then the category and method should be chosen to provide the highest performance. 
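The lost update problem above, and the pessimistic (blocking) remedy for it, can be illustrated outside a DBMS with plain threads and an in-memory counter. This is only a sketch of the interleaving problem, not of any database mechanism; a Python threading.Lock stands in for a per-item write lock.

import threading

balance = 0
lock = threading.Lock()

def deposit_unsafe(amount, times):
    # Read-modify-write without concurrency control: interleavings can lose updates.
    global balance
    for _ in range(times):
        current = balance           # read the item
        balance = current + amount  # write may overwrite a concurrent write

def deposit_safe(amount, times):
    global balance
    for _ in range(times):
        with lock:                  # block conflicting access until release (pessimistic)
            balance += amount

def run(worker):
    global balance
    balance = 0
    threads = [threading.Thread(target=worker, args=(1, 100_000)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    return balance

print("without locking:", run(deposit_unsafe))  # often less than 200000: updates were lost
print("with locking:   ", run(deposit_safe))    # always 200000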
Mutual blocking between two or more transactions (where each one blocks another) results in a deadlock, where the transactions involved are stalled and cannot reach completion. Most non-optimistic mechanisms (with blocking) are prone to deadlocks, which are resolved by an intentional abort of a stalled transaction (which releases the other transactions in that deadlock) and its immediate restart and re-execution. The likelihood of a deadlock is typically low. Blocking, deadlocks, and aborts all result in performance reduction, and hence the trade-offs between the categories. Methods Many methods for concurrency control exist. Most of them can be implemented within either main category above. The major methods, which each have many variants, and in some cases may overlap or be combined, are: Locking (e.g., Two-phase locking - 2PL) - Controlling access to data by locks assigned to the data. Access of a transaction to a data item (database object) locked by another transaction may be blocked (depending on lock type and access operation type) until lock release. Serialization graph checking (also called Serializability, or Conflict, or Precedence graph checking) - Checking for cycles in the schedule's graph and breaking them by aborts. Timestamp ordering (TO) - Assigning timestamps to transactions, and controlling or checking access to data by timestamp order. Commitment ordering (or Commit ordering; CO) - Controlling or checking transactions' chronological order of commit events to be compatible with their respective precedence order. Other major concurrency control types that are utilized in conjunction with the methods above include: Multiversion concurrency control (MVCC) - Increasing concurrency and performance by generating a new version of a database object each time the object is written, and allowing transactions' read operations to use one of several last relevant versions (of each object), depending on the scheduling method. Index concurrency control - Synchronizing access operations to indexes, rather than to user data. Specialized methods provide substantial performance gains. Private workspace model (Deferred update) - Each transaction maintains a private workspace for its accessed data, and its changed data become visible outside the transaction only upon its commit (e.g., Weikum and Vossen 2001). This model provides a different concurrency control behavior with benefits in many cases. The most common mechanism type in database systems since their early days in the 1970s has been Strong strict Two-phase locking (SS2PL; also called Rigorous scheduling or Rigorous 2PL), which is a special case (variant) of both Two-phase locking (2PL) and Commitment ordering (CO). It is pessimistic. In spite of its long name (for historical reasons) the idea of the SS2PL mechanism is simple: "Release all locks applied by a transaction only after the transaction has ended." SS2PL (or Rigorousness) is also the name of the set of all schedules that can be generated by this mechanism, i.e., these are SS2PL (or Rigorous) schedules, which have the SS2PL (or Rigorousness) property. Major goals of concurrency control mechanisms Concurrency control mechanisms firstly need to operate correctly, i.e., to maintain each transaction's integrity rules (as related to concurrency; application-specific integrity rules are out of the scope here) while transactions are running concurrently, and thus the integrity of the entire transactional system. Correctness needs to be achieved with as good performance as possible. 
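Returning to the SS2PL rule quoted above ("release all locks applied by a transaction only after the transaction has ended"), the following is a minimal illustrative sketch rather than any real DBMS's lock manager: it models only exclusive locks and omits shared (read) locks, deadlock detection, and aborts.

import threading

class LockManager:
    """Item-level exclusive locks shared by all transactions."""
    def __init__(self):
        self._locks = {}                 # data item -> threading.Lock
        self._guard = threading.Lock()
    def _lock_for(self, item):
        with self._guard:
            return self._locks.setdefault(item, threading.Lock())
    def acquire(self, item):
        self._lock_for(item).acquire()   # blocks while another transaction holds the item
    def release(self, item):
        self._lock_for(item).release()

class Transaction:
    def __init__(self, manager, db):
        self.manager, self.db, self.held = manager, db, set()
    def _ensure_lock(self, item):
        if item not in self.held:        # growing phase: take locks as items are touched
            self.manager.acquire(item)
            self.held.add(item)
    def read(self, item):
        self._ensure_lock(item)
        return self.db[item]
    def write(self, item, value):
        self._ensure_lock(item)
        self.db[item] = value
    def end(self):                       # commit or abort
        for item in self.held:           # release ALL locks only after the transaction has ended
            self.manager.release(item)
        self.held.clear()

db = {"x": 0}
mgr = LockManager()
t = Transaction(mgr, db)
t.write("x", t.read("x") + 1)
t.end()
print(db)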
In addition, a need increasingly exists to operate effectively while transactions are distributed over processes, computers, and computer networks. Other subjects that may affect concurrency control are recovery and replication. Correctness Serializability For correctness, a common major goal of most concurrency control mechanisms is generating schedules with the Serializability property. Without serializability undesirable phenomena may occur, e.g., money may disappear from accounts, or be generated from nowhere. Serializability of a schedule means equivalence (in the resulting database values) to some serial schedule with the same transactions (i.e., in which transactions are sequential with no overlap in time, and thus completely isolated from each other: no concurrent access by any two transactions to the same data is possible). Serializability is considered the highest level of isolation among database transactions, and the major correctness criterion for concurrent transactions. In some cases compromised, relaxed forms of serializability are allowed for better performance (e.g., the popular Snapshot isolation mechanism) or to meet availability requirements in highly distributed systems (see Eventual consistency), but only if the application's correctness is not violated by the relaxation (e.g., no relaxation is allowed for money transactions, since by relaxation money can disappear, or appear from nowhere). Almost all implemented concurrency control mechanisms achieve serializability by providing Conflict serializability, a broad special case of serializability (i.e., it covers and enables most serializable schedules, and does not impose significant additional delay-causing constraints) which can be implemented efficiently. Recoverability See Recoverability in Serializability Comment: While in the general area of systems the term "recoverability" may refer to the ability of a system to recover from failure or from an incorrect/forbidden state, within concurrency control of database systems this term has received a specific meaning. Concurrency control typically also ensures the Recoverability property of schedules for maintaining correctness in cases of aborted transactions (which can always happen for many reasons). Recoverability (from abort) means that no committed transaction in a schedule has read data written by an aborted transaction. Such data disappear from the database (upon the abort) and are parts of an incorrect database state. Reading such data violates the consistency rule of ACID. Unlike Serializability, Recoverability cannot be compromised or relaxed in any case, since any relaxation results in quick database integrity violation upon aborts. The major methods listed above provide serializability mechanisms. None of them in its general form automatically provides recoverability, and special considerations and mechanism enhancements are needed to support recoverability. A commonly utilized special case of recoverability is Strictness, which allows efficient database recovery from failure (but excludes optimistic implementations; e.g., Strict CO (SCO) cannot have an optimistic implementation, but has semi-optimistic ones). Comment: Note that the Recoverability property is needed even if no database failure occurs and no database recovery from failure is needed. It is rather needed to correctly and automatically handle transaction aborts, which may be unrelated to database failure and recovery from it. 
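Conflict serializability, mentioned above, has a simple operational test that also underlies the serialization graph checking method: build a precedence graph with an edge from one transaction to another whenever they issue conflicting operations (same data item, different transactions, at least one a write) in that order, and accept the schedule only if the graph is acyclic. A small illustrative sketch in Python:

def conflict_serializable(schedule):
    """schedule: list of (transaction_id, operation, item) in execution order,
    where operation is 'r' (read) or 'w' (write). Returns True iff the
    precedence graph is acyclic, i.e. the schedule is conflict serializable."""
    edges = {}
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and 'w' in (op1, op2):
                edges.setdefault(t1, set()).add(t2)   # t1 must precede t2

    # Detect a cycle with a depth-first search over the precedence graph.
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {}
    def has_cycle(node):
        colour[node] = GREY
        for nxt in edges.get(node, ()):
            if colour.get(nxt, WHITE) == GREY:
                return True
            if colour.get(nxt, WHITE) == WHITE and has_cycle(nxt):
                return True
        colour[node] = BLACK
        return False
    return not any(colour.get(t, WHITE) == WHITE and has_cycle(t)
                   for t, _, _ in schedule)

# The classic lost-update interleaving is not conflict serializable:
print(conflict_serializable([('T1', 'r', 'x'), ('T2', 'r', 'x'),
                             ('T1', 'w', 'x'), ('T2', 'w', 'x')]))  # False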
Distribution With the fast technological development of computing, the difference between local and distributed computing over low-latency networks or buses is blurring. Thus the quite effective utilization of local techniques in such distributed environments is common, e.g., in computer clusters and multi-core processors. However, the local techniques have their limitations and use multi-processes (or threads) supported by multi-processors (or multi-cores) to scale. This often turns transactions into distributed ones, if they themselves need to span multi-processes. In these cases most local concurrency control techniques do not scale well. Distributed serializability and commitment ordering See Distributed serializability in Serializability As database systems have become distributed, or started to cooperate in distributed environments (e.g., Federated databases in the early 1990s, and nowadays Grid computing, Cloud computing, and networks with smartphones), some transactions have become distributed. A distributed transaction means that the transaction spans processes, and may span computers and geographical sites. This generates a need for effective distributed concurrency control mechanisms. Achieving the Serializability property of a distributed system's schedule (see Distributed serializability and Global serializability (Modular serializability)) effectively poses special challenges typically not met by most of the regular serializability mechanisms, originally designed to operate locally. This is especially due to a need for costly distribution of concurrency control information amid communication and computer latency. The only known general effective technique for distribution is Commitment ordering, which was disclosed publicly in 1991 (after being patented). Commitment ordering (Commit ordering, CO; Raz 1992) means that transactions' chronological order of commit events is kept compatible with their respective precedence order. CO does not require the distribution of concurrency control information and provides a general effective solution (reliable, high-performance, and scalable) for both distributed and global serializability, also in a heterogeneous environment with database systems (or other transactional objects) with different (any) concurrency control mechanisms. CO is indifferent to which mechanism is utilized, since it does not interfere with any transaction operation scheduling (which most mechanisms control), and only determines the order of commit events. Thus, CO enables the efficient distribution of all other mechanisms, and also the distribution of a mix of different (any) local mechanisms, for achieving distributed and global serializability. The existence of such a solution has been considered "unlikely" until 1991, and by many experts also later, due to misunderstanding of the CO solution (see Quotations in Global serializability). An important side-benefit of CO is automatic distributed deadlock resolution. Contrary to CO, virtually all other techniques (when not combined with CO) are prone to distributed deadlocks (also called global deadlocks), which need special handling. CO is also the name of the resulting schedule property: a schedule has the CO property if the chronological order of its transactions' commit events is compatible with the respective transactions' precedence (partial) order. SS2PL mentioned above is a variant (special case) of CO and thus also effective to achieve distributed and global serializability. 
It also provides automatic distributed deadlock resolution (a fact overlooked in the research literature even after CO's publication), as well as Strictness and thus Recoverability. Possessing these desired properties together with known efficient locking based implementations explains SS2PL's popularity. SS2PL has been utilized to efficiently achieve Distributed and Global serializability since the 1980, and has become the de facto standard for it. However, SS2PL is blocking and constraining (pessimistic), and with the proliferation of distribution and utilization of systems different from traditional database systems (e.g., as in Cloud computing), less constraining types of CO (e.g., Optimistic CO) may be needed for better performance. Comments: The Distributed conflict serializability property in its general form is difficult to achieve efficiently, but it is achieved efficiently via its special case Distributed CO: Each local component (e.g., a local DBMS) needs both to provide some form of CO, and enforce a special vote ordering strategy for the Two-phase commit protocol (2PC: utilized to commit distributed transactions). Differently from the general Distributed CO, Distributed SS2PL exists automatically when all local components are SS2PL based (in each component CO exists, implied, and the vote ordering strategy is now met automatically). This fact has been known and utilized since the 1980s (i.e., that SS2PL exists globally, without knowing about CO) for efficient Distributed SS2PL, which implies Distributed serializability and strictness (e.g., see Raz 1992, page 293; it is also implied in Bernstein et al. 1987, page 78). Less constrained Distributed serializability and strictness can be efficiently achieved by Distributed Strict CO (SCO), or by a mix of SS2PL based and SCO based local components. About the references and Commitment ordering: (Bernstein et al. 1987) was published before the discovery of CO in 1990. The CO schedule property is called Dynamic atomicity in (Lynch et al. 1993, page 201). CO is described in (Weikum and Vossen 2001, pages 102, 700), but the description is partial and misses CO's essence. (Raz 1992) was the first refereed and accepted for publication article about CO algorithms (however, publications about an equivalent Dynamic atomicity property can be traced to 1988). Other CO articles followed. (Bernstein and Newcomer 2009) note CO as one of the four major concurrency control methods, and CO's ability to provide interoperability among other methods. Distributed recoverability Unlike Serializability, Distributed recoverability and Distributed strictness can be achieved efficiently in a straightforward way, similarly to the way Distributed CO is achieved: In each database system they have to be applied locally, and employ a vote ordering strategy for the Two-phase commit protocol (2PC; Raz 1992, page 307). As has been mentioned above, Distributed SS2PL, including Distributed strictness (recoverability) and Distributed commitment ordering (serializability), automatically employs the needed vote ordering strategy, and is achieved (globally) when employed locally in each (local) database system (as has been known and utilized for many years; as a matter of fact locality is defined by the boundary of a 2PC participant (Raz 1992) ). Other major subjects of attention The design of concurrency control mechanisms is often influenced by the following subjects: Recovery All systems are prone to failures, and handling recovery from failure is a must. 
The properties of the generated schedules, which are dictated by the concurrency control mechanism, may affect the effectiveness and efficiency of recovery. For example, the Strictness property (mentioned in the section Recoverability above) is often desirable for an efficient recovery. Replication For high availability database objects are often replicated. Updates of replicas of a same database object need to be kept synchronized. This may affect the way concurrency control is done (e.g., Gray et al. 1996). See also Schedule Isolation (computer science) Distributed concurrency control References Philip A. Bernstein, Vassos Hadzilacos, Nathan Goodman (1987): Concurrency Control and Recovery in Database Systems (free PDF download), Addison Wesley Publishing Company, 1987, Gerhard Weikum, Gottfried Vossen (2001): Transactional Information Systems, Elsevier, Nancy Lynch, Michael Merritt, William Weihl, Alan Fekete (1993): Atomic Transactions in Concurrent and Distributed Systems , Morgan Kaufmann (Elsevier), August 1993, , Yoav Raz (1992): "The Principle of Commitment Ordering, or Guaranteeing Serializability in a Heterogeneous Environment of Multiple Autonomous Resource Managers Using Atomic Commitment." ( PDF), Proceedings of the Eighteenth International Conference on Very Large Data Bases (VLDB), pp. 292-312, Vancouver, Canada, August 1992. (also DEC-TR 841, Digital Equipment Corporation, November 1990) Citations Concurrency control in operating systems Multitasking operating systems, especially real-time operating systems, need to maintain the illusion that all tasks running on top of them are all running at the same time, even though only one or a few tasks really are running at any given moment due to the limitations of the hardware the operating system is running on. Such multitasking is fairly simple when all tasks are independent from each other. However, when several tasks try to use the same resource, or when tasks try to share information, it can lead to confusion and inconsistency. The task of concurrent computing is to solve that problem. Some solutions involve "locks" similar to the locks used in databases, but they risk causing problems of their own such as deadlock. Other solutions are Non-blocking algorithms and Read-copy-update. See also Linearizability Lock (computer science) Mutual exclusion Semaphore (programming) Software transactional memory Transactional Synchronization Extensions References Andrew S. Tanenbaum, Albert S Woodhull (2006): Operating Systems Design and Implementation, 3rd Edition, Prentice Hall, Data management Databases Transaction processing
53547343
https://en.wikipedia.org/wiki/Cheyenne%20%28supercomputer%29
Cheyenne (supercomputer)
The Cheyenne supercomputer at the NCAR-Wyoming Supercomputing Center (NWSC) in Cheyenne, Wyoming began operation as one of the world’s most powerful and energy-efficient computers. Ranked in November 2016 as the 20th most powerful computer in the world by Top500, the 5.34-petaflops system is capable of more than triple the amount of scientific computing performed by NCAR’s previous supercomputer, Yellowstone. It also is three times more energy efficient than Yellowstone, with a peak computation rate of more than 3 billion calculations per second for every watt of energy consumed. The National Science Foundation and the State of Wyoming through an appropriation to the University of Wyoming funded Cheyenne to provide the United States with a major new tool to advance understanding of the atmospheric and related Earth system sciences. High-performance computers such as Cheyenne allow researchers to run increasingly detailed models that simulate complex processes to estimate how they might unfold in the future. These predictions give resource managers and policy experts valuable information for planning ahead and mitigating risk. Cheyenne’s users advance the knowledge needed for saving lives, protecting property, and enabling U.S. businesses to better compete in the global marketplace. Scientists across the country will use Cheyenne to study phenomena ranging from weather and climate to wildfires, seismic activity, and airflows that generate power at wind farms. Their findings lay the groundwork for better protecting society from natural disasters, lead to more detailed projections of seasonal and longer-term weather and climate variability and change, and improve weather and water forecasts that are needed by economic sectors from agriculture and energy to transportation and tourism. The supercomputer’s name was chosen to honor the people of Cheyenne, Wyoming, who supported the installation of the NWSC and its computers there. The name also commemorates the 150th anniversary of the city, which was founded in 1867 and named for the Native American Cheyenne Nation. System Description The Cheyenne supercomputer was built by Silicon Graphics International Corporation (SGI) in coordination with centralized file system and data storage components provided by DataDirect Networks (DDN). The SGI high-performance computer is a 5.34-petaflops system, meaning it can carry out 5.34 quadrillion calculations per second. The new data storage system for Cheyenne is integrated with NCAR’s existing GLADE file system. The DDN storage provides an initial capacity of 20 petabytes, expandable to 40 petabytes with the addition of extra drives. This, combined with the current 16 petabytes of GLADE, totals 36 petabytes of high-speed storage as of February 2017. Cheyenne is an SGI ICE XA system with 4,032 dual-socket scientific computation nodes running 18-core 2.3-GHz Intel Xeon E5-2697v4 processors with 203 [now 315] terabytes of memory. Interconnecting these nodes is a Mellanox EDR InfiniBand network with 9-D enhanced hypercube topology that performs with a latency of only 0.5 microsecond. Cheyenne runs the SUSE Linux Enterprise Server 12 SP1 operating system. Cheyenne is integrated with many other high-performance computing resources in the NWSC. The central feature of this supercomputing architecture is its shared file system that streamlines science workflows by providing computation, analysis, and visualization work spaces common to all resources. 
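As a rough consistency check (our arithmetic, not a figure from NCAR), the quoted 5.34 petaflops agrees with the node and core counts if one assumes 16 double-precision floating-point operations per core per clock cycle, which is typical for this Xeon generation with AVX2 fused multiply-add units:

# Back-of-the-envelope check of Cheyenne's peak performance (assumption noted above).
nodes = 4032
sockets_per_node = 2
cores_per_socket = 18
clock_hz = 2.3e9
flops_per_core_per_cycle = 16   # assumed: AVX2, 2 FMA units x 4 doubles x 2 ops

cores = nodes * sockets_per_node * cores_per_socket           # 145,152 cores
peak_flops = cores * clock_hz * flops_per_core_per_cycle
print(f"{peak_flops / 1e15:.2f} petaflops")                   # ~5.34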
This common data storage pool, called the GLobally Accessible Data Environment (GLADE), provides 36.4 petabytes of online disk capacity shared by the supercomputers, two data analysis and visualization (DAV) cluster computers, data servers for both local and remote users, and a data archive with the capacity to store 320 petabytes of research data. High-speed networks connect this Cheyenne environment to science gateways, data transfer services, remote visualization resources, Extreme Science and Engineering Discovery Environment (XSEDE) sites, and partner sites around the world. This integration of computing resources, file systems, data storage, and broadband networks allows scientists to simulate future geophysical scenarios at high resolution, then analyze and visualize them on one computing complex. This improves scientific productivity by avoiding the delays associated with moving large quantities of data between separate systems. Further, this reduces the volume of data that needs to be transferred to researchers at their home institutions. Cheyenne makes more than 1.2 billion core-hours available each year to researchers in the Earth system sciences. References X86 supercomputers SGI supercomputers
4741717
https://en.wikipedia.org/wiki/Nasir%20Gebelli
Nasir Gebelli
Nasir Gebelli (, also Nasser Gebelli, born 1957) is an Iranian-American programmer and video game designer usually credited in his games as simply Nasir. Gebelli wrote Apple II games for Sirius Software, created his own company Gebelli Software, and worked for Squaresoft (now Square Enix). He became known in the early 1980s for producing fast action games for the Apple II, including 3D shooters. From the late 1980s to the early 1990s, he developed home console games for Squaresoft. He was part of Square, programming the first three Final Fantasy games, the Famicom 3D System titles 3-D WorldRunner and Rad Racer, and Secret of Mana. Early life and career (1957–1985) Gebelli was born in Iran in 1957. Because of his family's relationship with the Iranian royal family, he migrated to the United States to avoid the 1979 Iranian Revolution and to study computer science. He was inspired at the time by golden age arcade games such as Space Invaders. Gebelli's first project for the Apple II was EasyDraw, a logo and character creation program he used for his later games. He then began programming video games in either 1978 or 1979. Sirius Software As a college student, he demonstrated a slide show program he wrote at a computer store to the store's owner Jerry Jewell. In 1980, he joined a new company founded by Jewell and Terry Bradley, Sirius Software. Gebelli's first game was Both Barrels. Within a year, Gebelli programmed twelve games. He wrote the code in his head, then quickly entered it before forgetting the details. His action games were well-received, and three of his games, Phantoms Five, Cyber Strike, and Star Cruiser, appeared on Softalk's Top Thirty software list in March 1981. Six of his games later appeared on Softalk's Top Thirty list in August 1981, with the highest at number three. His best-selling titles were Space Eggs and Gorgon, which were clones of Moon Cresta and Defender, respectively. Electronic Games referred to Gebelli as "ace designer Nasir" and gave Gorgon a positive review. BYTE assured readers that Gorgon would not disappoint "Nasir Gabelli fans". Gorgon sold at least 23,000 copies in a year, making it one of the best-selling computer games through June 1982. Gebelli's games used page flipping, which eliminated the flickering that early Apple II games experienced. Gebelli Software He left Sirius in 1982 to establish his own software company, Gebelli Software, which released its first game that same year. Entitled Horizon V, the game was a first-person shooter with a radar mechanic. Gebelli Software released the Apple II game Zenith later in 1982, which added the ability for players to rotate their ships. In October 1982, Arcade Express reviewed Zenith and scored it 9 out of 10, stating that "celebrated Nasir proves his reputation" with "this visually striking first-person space piloting and shooting" game. In March 1983, however, Andromeda (fourth place for Atari 8-bit), Russki Duck (tied for sixth for Apple) and Horizon V (tenth place for Apple) received Softline's Dog of the Year awards "for badness in computer games" based on reader submissions. Horizon V sold 5,000 copies during its first few months on sale in 1982. IBM arranged for Gebelli to produce launch titles for the IBM PCjr, announced in late 1983. Gebelli's company was not successful, and the video game crash of 1983 caused Gebelli Software to close. Afterward, Gebelli went on an extended vacation traveling the world. 
When he retired from Apple II development, Gebelli had eight games on Softalk's Apple II best-seller lists, more than any other game designer. Squaresoft (1986–1993) In 1986, Gebelli became interested in developing games again and met with Doug Carlston, his friend and owner of video game developer Brøderbund. Carlston told him about the rise of the Nintendo Entertainment System and how he should start creating games for the console. Gebelli was interested, and so Carlston offered to fly to Japan with Gebelli and introduce him to his contacts at Square. Gebelli had the opportunity to meet with Masafumi Miyamoto, founder and president of Square, who decided to hire him. The programmers, especially Hironobu Sakaguchi (a long-time fan of Gebelli's work), were aware of Gebelli's reputation and were excited to have him join. Famicom 3D System While at Square, Gebelli programmed the game Tobidase Daisakusen for the Famicom Disk System, released in the United States in early 1987 as 3-D WorldRunner on the NES. 3-D WorldRunner was a pseudo-3D third-person platform game where players move in any forward-scrolling direction and leap over obstacles and chasms. It was also notable for being one of the first stereoscopic video games. His second Square project was Highway Star (Rad Racer in the U.S.), a stereoscopic 3-D racing game also designed for the Famicom 3D System in 1987. He went on to program a sequel, Rad Racer II, released in 1990. According to Sakaguchi, Square initially hired Gebelli for his 3D programming techniques, as seen in 3-D WorldRunner and Rad Racer. At the time, Gebelli did not know any Japanese and had no translator, so it was initially difficult to communicate with Sakaguchi. There were only three staff members working on both games: Gebelli, Sakaguchi, and graphic designer Kazuko Shibuya (who later worked on the Frontier games). Both games were commercially successful, selling about 500,000 copies each. Final Fantasy Gebelli then teamed up with Sakaguchi, Nobuo Uematsu and Yoshitaka Amano as part of Square's A-Team to produce Final Fantasy, the first entry in the popular Final Fantasy series. A role-playing video game released for the NES in 1987 in Japan, it featured several unique features: a character creation system, the concept of time travel, side-view battles, and transportation by canoe, boat and airship. It also had the first RPG minigame, a sliding puzzle added by Gebelli into the game despite its not being part of Squaresoft's original game design. He went on to program Final Fantasy II, released in 1988, introducing an "emotional storyline, morally ambiguous characters, tragic events". He also made the story "emotionally experienced rather than concluded from gameplay and conversations". The game replaced traditional levels and experience points with a new activity-based progression system that required "gradual development of individual statistics through continuous actions of the same kind". Final Fantasy II also featured open-ended exploration and an innovative dialogue system where players use keywords or phrases during conversations with non-player characters. Gebelli went on to program Final Fantasy III in 1990, which introduced the job system, a character progression engine allowing the changing and combination of character classes. Midway through the development of both Final Fantasy II and III, Gebelli returned to Sacramento, California, from Japan due to an expired work visa. 
The rest of the development staff followed him to Sacramento with materials and equipment needed to finish game production. Secret of Mana After completing Final Fantasy III, Gebelli took another long vacation and later returned to work on Seiken Densetsu II (released as Secret of Mana in the U.S.), the second entry in the Mana series, released in 1993. The game made advances to the action role-playing game genre, including its unique cooperative multiplayer gameplay. The team that created the game had worked on the first three Final Fantasy titles: Gebelli, Koichi Ishii, and Hiromichi Tanaka. The team developed Secret of Mana to be a launch title for the Super NES's CD-ROM add-on. After Sony and Nintendo backed out of making the console, the game was changed to fit a standard Super NES game pak. The game received considerable acclaim for its pausable real-time battle system, stamina bar, the "Ring Command" menu system, its innovative cooperative multiplayer gameplay, and the customizable AI settings for computer-controlled allies. Later life (1994–present) Following Secret of Mana's completion, Gebelli retired with income from Square royalties and travelled the world. In August 1998, Gebelli attended an Apple II Reunion in Dallas, Texas, at video game developer Ion Storm's offices. There, Gebelli met developer and fan John Romero, who interviewed him. Gebelli lives in Sacramento, California, where he has lived most of his life. Legacy John Romero (Wolfenstein 3D, Doom, Quake) credited Gebelli as a significant influence on his career as a game designer. He also cited Gebelli as his favorite programmer and a notable inspiration, mentioning his fast action and 3D programming work on games such as Horizon V and Zenith. Gebelli also inspired the careers of other developers, such as Mark Turmell (NBA Jam, Smash TV). Jordan Mechner has also credited Gebelli's work on the Apple II as inspiration and as a major influence on the creation of Karateka and Prince of Persia. Richard Garriott (Ultima) also praised Gebelli's ability to craft games that "were really playable and fun!" Final Fantasy went on to become a major franchise, and Hironobu Sakaguchi went on to become a well-known figure in the game industry. Final Fantasy's side-view battles became the norm for numerous console RPGs. Developers used Final Fantasy II's activity-based progression system in several later RPG series, such as the SaGa, Grandia, and The Elder Scrolls series. Final Fantasy III's job system became a recurring element in the Final Fantasy series. Secret of Mana has also influenced later action RPGs, including modern titles such as The Temple of Elemental Evil and Dungeon Siege III. List of games Sirius Software Both Barrels (1980, Apple II) Star Cruiser (1980, Apple II) Phantoms Five (1980, Apple II) Cyber Strike (1981, Apple II) Gorgon (1981, Apple II) Space Eggs (1981, Apple II) Pulsar II (1981, Apple II) Autobahn (1981, Apple II) Gebelli Software Horizon V (1981, Apple II) Firebird (1981, Apple II) Russki Duck (1982, Apple II) Zenith (1982, Apple II) Neptune (1982, Apple II) ScubaVenture (1983, IBM PCjr) Mouser (1983, IBM PCjr) Squaresoft 3-D WorldRunner (1987, FDS/NES) Rad Racer (1987, NES) JJ: Tobidase Daisakusen Part 2 (1987, NES) Final Fantasy (1987, NES) Final Fantasy II (1988, NES) Final Fantasy III (1990, NES) Rad Racer II (1990, NES) Secret of Mana (1993, SNES) References External links Moby Games bio of Nasir Gebelli What is behind the phrase "Programmed by Nasir"? 
1957 births Living people American video game designers American video game programmers American people of Iranian descent Final Fantasy designers Square Enix people
6417180
https://en.wikipedia.org/wiki/Wiki%20Server
Wiki Server
Wiki Server was a set of services that shipped with all versions of Mac OS X Server from v10.5 through macOS High Sierra. Mac OS X Server included web-based Wiki, Weblog, Calendaring, and Contact services. Additionally, it included a Cocoa application called Directory, which allowed directory viewing as well as enabling of group services. Server 5.7.1, the version aligned with macOS 10.14 and released on September 30, 2018, removed the Wiki Server functionality from Server.app. 2007 software Proprietary wiki software Apple Inc. software MacOS Server MacOS-only software made by Apple Inc.
50277888
https://en.wikipedia.org/wiki/SAP%20Anywhere
SAP Anywhere
SAP Anywhere is a front office software package from SAP SE (SAP) for small and medium-sized enterprises (SMBs) with 10–500 employees. In early 2018, SAP decided to sunset the product and to focus its SMB-market efforts on the established SAP Business One and SAP Business ByDesign ERP solutions. SAP Anywhere includes several front office applications that are intended to help retail and wholesale businesses market and sell their products and services through multiple sales channels including in-store, direct sales, and online. This system facilitates effective customer relationship management by allowing business owners to review and manage their marketing, inventory, and direct customer experiences using a single system on their mobile devices. According to research firm International Data Corporation (IDC), many small businesses struggle with out-of-date processes, and fail to optimise online business tools. E-commerce is an increasingly important factor in maintaining relationships with customers. SAP Anywhere is intended to allow small businesses to take advantage of the digital revolution and increase their customer reach through e-commerce, both in business-to-business (B2B) and business-to-consumer (B2C) channels. President of global channels and general business at SAP, Rodolpho Cardenuto, said in 2016, "The more than 79 million small and midsize companies worldwide are the lifeblood of the economy." SAP Anywhere is primarily used by SMBs, with about 20% of its clients falling outside of the SMB category. History SAP Anywhere was launched by SAP and China Telecom on 20 October 2015, in Mainland China. SAP is a German multinational software corporation, headquartered in Walldorf, Baden-Württemberg, Germany. Founded in 1972, SAP focuses on creating enterprise software. In 2014, SAP created a new division to focus on SMB (small-medium business) customers, SMB Solutions Group. SMB software preceding SAP Anywhere includes: SAP Business One: automates key business functions in financials, operations, and human resources through dynamic ERP software SAP Business ByDesign: A fully integrated on-demand enterprise resource planning and business management software. SAP Business All-in-One: Automates core processes Other products include Crystal Reports, SAP Lumira, and SAP Edge Solutions China Telecom, established in 2000, is the largest fixed-line service provider in the People's Republic of China, and the third largest mobile telecommunication provider in China. In developing SAP Anywhere, SAP utilised the telecommunications infrastructure of China Telecom, and developed a product customised for the Chinese market. SMBs represent two-thirds of all businesses in China. Initial plans for release focused on building a strong customer base in China before expanding to other markets. Due to competitive pressure, SAP released SAP Anywhere to the English-speaking market earlier than planned. UK availability was announced in March 2016, and US availability was announced in May 2016. A Canadian pilot was planned for 2016. Features SAP Anywhere is a cloud based SaaS (software as a service), delivered by SAP in the public cloud (Amazon cloud for the US market). SAP Anywhere can be accessed through mobile devices or desktops. 
It includes applications for: e-commerce website(s)/online store customer relationship management digital marketing Point of Sale order fulfillment management inventory management SAP Anywhere uses SAP HANA, which enables data to be captured and mined in real time, allowing for accurate inventory, sales, and order management across multiple channels. SAP Anywhere integrates with Constant Contact or MailChimp, and with payment gateways like PayPal and Stripe. SAP Anywhere uses secure transactions and secure sockets layer (SSL) protection. SAP Anywhere has also integrated with the delivery company UPS. SAP Anywhere allows businesses to utilise Google's productivity and collaboration tools to interact with customers through Google Apps. Integration with internal and back-office enterprise resource planning applications, such as HR and finance, is also available. See also Cloud Computing Customer Experience Front Office List of SAP products Online Shopping References External links Guidelines For ERP Implementation Customer relationship management software Business-to-business Cloud applications ERP software Proprietary database management systems SAP SE As a service 2015 software
53882186
https://en.wikipedia.org/wiki/Joint%20Entrance%20Screening%20Test
Joint Entrance Screening Test
Joint Entrance Screening Test is a screening test in India conducted to shortlist candidates for admission to MSc, Integrated PhD and PhD programmes in Physics, Theoretical Computer Science, Neuroscience and Computational Biology at twenty public research institutes. Eligibility Please see the websites/advertisements of the participating institutes for their eligibility criteria in detail. Listed below are tentative eligibility criteria of admission to M.Sc., Ph.D., and Integrated M.Sc./M.Tech.–Ph.D. programs in the participating institutes. Ph.D. Programme Physics M.Sc. in Physics (all participating Institutes). Additionally, some institutes accept B.E. / B.Tech. / M.Sc. / M.E. / M.Tech. in disciplines of Applied Physics and Mathematics, as listed below. M.Sc. in Mathematics / Applied Physics / Applied Mathematics / Optics and Photonics / Instrumentation / Electronics will also be considered at IIA. B.E. or B.Tech. will be considered at IISc, IMSc, ICTS-TIFR, IUCAA, JNCASR, NCRA-TIFR, TIFR-TCIS, RRI, IISER Mohali, IISER Pune, IISER Thiruvananthapuram. M.Sc. in Physics / Electronics / Astronomy / Applied Mathematics will be considered at IUCAA. M.Sc. in Physics, Engineering Physics or Applied Physics will also be considered at IPR. M.Sc. in Physics, Chemistry, Applied Mathematics, Biophysics or Biochemistry will be considered at SNBNCBS. B.Tech. in Engineering Physics will be considered at TIFR. M.E. / M.Tech. in Applied Physics will be considered at NISER. Theoretical Computer Science at IMSc M.Sc. / M.E. / M.Tech. in Computer Science and related disciplines; candidates should be interested in the mathematical aspects of computer science. Ph.D. in Neuroscience at NBRC M.Sc. (Physics / Mathematics), B.E. / B.Tech. / M.C.A. in Computer Science Ph.D. in Computational Biology at IMSc M.Sc. / M.E. / M.Tech. / MCA in any engineering or science discipline, with good mathematical skills and strong interest in biological problems. Integrated M.Sc. / M.Tech.–Ph.D. Programme (Physics) B.Sc. (Physics / Mathematics) will be considered at SNBNCBS. B.Sc. (Physics) will be considered at IMSc. B.Sc. (Physics / Mathematics) / B.E. / B.Tech. in Electrical / Instrumentation / Engineering Physics / Electronics and Communications / Computer Science and Engineering / Optics and Photonics will be considered at IIA. B.Sc. (Physics) or B.E./B.Tech. in Engineering Physics, with a minimum of first class marks, will be considered at NISER. B.Sc. (Physics) will be considered at IISER-Pune, ICTS-TIFR, NCRA-TIFR, and TIFR-TCIS. B.Sc. (Physics / Mathematics) / B.E. / B.Tech. will be considered for the Integrated M.Sc.–Ph.D. at Bose Institute. Integrated Ph.D. Programme in Theoretical Computer Science at IMSc B.Sc./B.E./B.Tech./M.C.A. in Computer Science or related disciplines; candidates should be interested in the mathematical aspects of computer science. Integrated M.Tech.–Ph.D. Programme at IIA M.Sc. (Physics / Applied Physics) / Post-B.Sc. (Hons) in Optics and Optoelectronics / Radio Physics and Electronics. Integrated Ph.D. Programme at IISER Thiruvananthapuram B.Sc. (Physics) or B.E. / B.Tech. in any discipline. M.Sc. Programme at HRI B.Sc. (Physics) or B.E./B.Tech. degree in any discipline. HRI is starting a new M.Sc. programme in Physics from 2017. The Integrated Ph.D. programme in Physics at HRI is discontinued from 2017. It has been declared a National Eligibility Test by the Science and Engineering Research Board. References Standardised tests in India
894522
https://en.wikipedia.org/wiki/Fu%20Foundation%20School%20of%20Engineering%20and%20Applied%20Science
Fu Foundation School of Engineering and Applied Science
The Fu Foundation School of Engineering and Applied Science (popularly known as SEAS or Columbia Engineering; previously known as Columbia School of Mines) is the engineering and applied science school of Columbia University. It was founded as the School of Mines in 1863 and then the School of Mines, Engineering and Chemistry before becoming the School of Engineering and Applied Science. On October 1, 1997, the school was renamed in honor of Chinese businessman Z.Y. Fu, who had donated $26 million to the school. The Fu Foundation School of Engineering and Applied Science maintains close research ties with other institutions, including NASA, IBM, MIT, and The Earth Institute. Patents owned by the school generate over $100 million annually for the university. SEAS faculty and alumni are responsible for technological achievements including the development of FM radio and the maser. The School's applied mathematics, biomedical engineering, and computer science programs, along with the financial engineering program in operations research, are among its best known and most highly ranked. The current SEAS faculty include 27 members of the National Academy of Engineering and one Nobel laureate. In all, the faculty and alumni of Columbia Engineering have won 10 Nobel Prizes in physics, chemistry, medicine, and economics. The school enrolls approximately 300 undergraduates in each graduating class and maintains close links with its undergraduate liberal arts sister school, Columbia College, which shares housing with SEAS students. The School's current dean is Mary Cunningham Boyce, who was appointed in 2013. History Original charter of 1754 Included in the original charter for Columbia College was the direction to teach "the arts of Number and Measuring, of Surveying and Navigation [...] the knowledge of [...] various kinds of Meteors, Stones, Mines and Minerals, Plants and Animals, and everything useful for the Comfort, the Convenience and Elegance of Life." Engineering has always been a part of Columbia, even before the establishment of any separate school of engineering. An early and influential graduate from the school was John Stevens, Class of 1768. Instrumental in the establishment of U.S. patent law, Stevens procured many patents in early steamboat technology, operated the first steam ferry between New York and New Jersey, received the first railroad charter in the U.S., built a pioneer locomotive, and amassed a fortune, which allowed his sons to found the Stevens Institute of Technology. (Excerpt from SEAS website.) When Columbia University was still located on Wall Street, engineering did not have its own school under the Columbia umbrella. After Columbia outgrew its space on Wall Street, it relocated to what is now Midtown Manhattan in 1857. Then-President Barnard and the Trustees of the University, at the urging of Professor Thomas Egleston and General Vinton, approved the School of Mines in 1863. The intention was to establish a School of Mines and Metallurgy with a three-year program open to professionally motivated students with or without prior undergraduate training. It was officially founded in 1864 under the leadership of its first dean, Columbia professor Charles F. Chandler, and specialized in mining and mineralogical engineering. A notable graduate of the School of Mines was William Barclay Parsons, Class of 1882. He was an engineer on the Chinese railway and the Cape Cod and Panama Canals. 
Most importantly, he worked for New York as chief engineer of the city's first subway system, the Interborough Rapid Transit Company. Opened in 1904, the subway's electric cars took passengers from City Hall to Brooklyn, the Bronx, and the newly renamed and relocated Columbia University in Morningside Heights, its present location on the Upper West Side of Manhattan. Renaming of the School of Mines In 1896, the school was renamed the "School of Mines, Engineering and Chemistry". By this time, the University was offering more than the previous name implied, hence the change of name. The faculty during this time included Michael I. Pupin, after whom Pupin Hall is named. Pupin himself was a graduate of the Class of 1883 and the inventor of the "Pupin coil", a device that extended the range of long-distance telephones. Students of his included Irving Langmuir, Nobel laureate in Chemistry (1932), inventor of the gas-filled tungsten lamp and a contributor to the development of the radio vacuum tube. Another student to work with Pupin was Edwin Howard Armstrong, inventor of FM radio. After graduating in 1913, Armstrong was stationed in France during World War I. There he developed the superheterodyne receiver to detect the frequency of enemy aircraft ignition systems. During this period, Columbia was also home to the "Father of Biomedical Engineering" Elmer L. Gaden. Recent and future developments The university continued to evolve and expand as the United States became a major political power during the 20th century. In 1926, the newly renamed School of Engineering prepared students for the nuclear age. Graduating with a master's degree, Hyman George Rickover, working with the Navy's Bureau of Ships, directed the development of the world's first nuclear-powered submarine, the Nautilus, which was launched in 1954. The school's first woman graduate received her degree in 1945. After a substantial grant of $26 million from Chinese businessman Z. Y. Fu, the engineering school was renamed again in 1997. The new name, as it is known today, is the Fu Foundation School of Engineering and Applied Science. SEAS continues to be a teaching and research institution, now with a large endowment of over $400 million, and sits under the Columbia umbrella endowment of $7.2 billion. Admissions The admissions rate for the SEAS undergraduate class of 2018 was approximately 7%. Approximately 95% of accepted students were in the top 10% of their graduating class; 99% were in the top 20% of their class. 58% of admitted students attended high schools that do not rank. The yield rate for the class of 2014 was 59%. SEAS students have raised the composite SAT statistics for Columbia University undergraduates as a whole. The Class of 2013's SAT interquartile range was 2060–2320 and 1400–1560 (old SAT). The ACT composite interquartile range was 32–34. Those accepting enrollment at Columbia SEAS typically complete engineering programs at the undergraduate level and then pursue professional graduate study in engineering, business, law, or medicine, so as to become what Columbia terms "engineering leaders." Engineering leaders are those who pioneer or define engineering: patent lawyers, doctors with specialties in biophysical engineering, financial engineers, inventors, etc. Columbia Engineering's graduate programs had an overall acceptance rate of 28.0% in 2010. 
The PhD student–faculty ratio at the graduate level is 4.2:1 according to the 2008 data compiled by U.S. News & World Report. The PhD acceptance rate was 12% in 2010. Academics Rankings Columbia's School of Engineering and Applied Science is one of the top engineering schools in the United States and the world. It is ranked 15th among the best engineering schools by U.S. News & World Report, and second within the Ivy League, behind Cornell University. Its undergraduate engineering program is ranked 21st in the country, according to U.S. News. In 2010, the US National Research Council revealed its new analyses and rankings of American university doctoral programs since 1995. Columbia Engineering ranked 10th in biomedical engineering, 18th in chemical engineering, 26th in electrical engineering, 14th in mechanical engineering (5th in research), 9th in operations research & industrial engineering, 7th in applied mathematics, and 6th in computer sciences. The school's department of computer science is ranked 13th in the nation, 36th in the world by U.S. News & World Report, and 18th worldwide by QS World University Rankings. Its biomedical engineering program is ranked 9th according to US News. Among the small prestigious programs, the school's chemical engineering is ranked 20th, civil engineering and engineering mechanics 18th, electrical engineering 3rd, applied physics 4th, industrial engineering and operations research 4th, material engineering 10th, computer science 15th, and applied mathematics 15th, according to the National Science Foundation. According to The Chronicle of Higher Education, Columbia's engineering mechanics program is 6th in the nation, its environmental engineering 4th, industrial engineering 7th, mechanical engineering 5th, applied physics 8th, and operations research 6th. Finally, Columbia's financial engineering program is ranked 3rd nationally, according to the 2020 ranking from Quantnet. Facilities Columbia's Plasma Physics Laboratory, which houses the HBT and the Columbia Non-Neutral Torus, is part of the School of Engineering and Applied Science (SEAS). The school also has two wind tunnels, a machine shop, a nanotechnology laboratory, a General Atomics TRIGA Mk. II nuclear fission reactor, a large-scale centrifuge for geotechnical testing, and an axial tester commonly used for testing New York City bridge cables. Each department has numerous laboratories on the Morningside Heights campus; however, other departments have holdings throughout the world. For example, the Applied Physics department has reactors at Nevis Labs in Irvington, NY and conducts work with CERN in Geneva. Notable alumni The School of Engineering and Applied Science celebrates its ties and affiliations with at least 8 Nobel Laureates. Alumni of Columbia Engineering have gone on to numerous professional fields. Many have become prominent scientists, astronauts, architects, government officials, pioneers, entrepreneurs, company CEOs, financiers, and scholars. Albert Huntington Chester (E.M. 1868, Ph.D. 1876), geologist and mining engineer, professor at Hamilton College and Rutgers College and the namesake of Chester Peak Henry Smith Munroe (E.M. 1869, Ph.D. 1877), Foreign advisor to Meiji Japan Roland Duer Irving (E.M. 1869, Ph.D. 1879), geologist, pioneer in petrography H. Walter Webb (E.M. 1873), executive with the New York Central Railroad Frederick Remsen Hutton (E.M. 1876), secretary of the American Society of Mechanical Engineers from 1883 to 1906 Marcus Benjamin (Ph.B. 
1878), editor William Hamilton Russell (1878), architect who founded firm Clinton and Russell; designed the American International Building, Hotel Astor, Graham Court, The Langham and other New York landmarks William L. Ward (1878), United States Congressman from New York Nathaniel Lord Britton (1879), co-founder of the New York Botanical Garden Hamilton Castner (1879), American industrial chemist famous for developing the Castner–Kellner process Graeme Hammond (1879), American neurologist, Olympic fencer; founding president of the Amateur Fencers League of America Herman Hollerith (1879), co-founder of IBM Charles Buxton Going (1882), engineer, author, editor Mihajlo Idvorski Pupin (B.S. 1883), Serbian physicist and physical chemist whose inventions include the Pupin coil, winner of Pulitzer Prize for his autobiography Edward Chester Barnard (1884), American topographer with the United States Geological Survey James Furman Kemp (1884), geologist; president of the Geological Society of America Joseph Harvey Ladew Sr. (1885), founder of leather manufacturer Fayerweather & Ladew Frederick James Hamilton Merrill (1885), geologist and former director of the New York State Museum Edward Pearce Casey (1886), architect known for designing the Taft Bridge and Ulysses S. Grant Memorial Jennings Cox (1887), mining engineer credited with inventing the cocktail Daiquiri Graham Lusk (1887), American physiologist and nutritionist Allen Tucker (1887), architect and artist Edwin Gould I (1888), American investor and railway official; son of financier Jay Gould F. Augustus Heinze (1889), copper magnate and founder of United Copper; one of the three "Copper Kings" of Butte, Montana George Oakley Totten Jr. (1891), prolific architect in Washington, D.C. who designed Meridian Hall, the Embassy of Turkey, Washington, D.C. and the Embassy of Ecuador in Washington, D.C. George Gustav Heye (EE. 1896), investment banker and founder of the National Museum of the American Indian in New York, and namesake of the George Gustav Heye Center Winifred Edgerton Merrill (PhD. 1889), first American woman to receive a Ph.D. in mathematics John Stone Stone (189-), early telephone engineer Herschel Clifford Parker (PhB. 1890), physicist and mountaineer Gano Dunn (1891), former president of Cooper Union and recipient of IEEE Edison Medal; former Chairman and CEO of the National Research Council Gonzalo de Quesada y Aróstegui (1891), Cuban revolutionary, minister to the United States, signer of the Hay-Quesada Treaty Heinrich Ries (1892), American economic geologist; professor at Cornell University Chester Holmes Aldrich (PhB. 1893), former director of American Academy in Rome and architect who designed the Kykuit V. Everit Macy (PhB, 1893), American industrialist, former president of the National Civic Federation, major benefactor to Teachers College, Columbia University Kenneth MacKenzie Murchison (1894), American architect who designed the Havana Central railway station, Pennsylvania Station in Baltimore, and the Murchison Building in Wilmington, North Carolina William H. Woodin (1890), American industrialist, 51st United States Secretary of the Treasury Gustavus Town Kirby (1895), president of the Amateur Athletic Union and member of the United States Olympic Committee from 1896 to 1956 Leon Moisseiff (1895), American engineer and designer of the Manhattan Bridge Alfred Chester Beatty (E.M. 
1898), mining magnate and millionaire, often referred to as "King of Copper", founder of the Chester Beatty Library in Dublin Don Gelasio Caetani (1903), mayor of Rome and Italian ambassador to the United States Robert Stangland (1904), Olympic athlete; bronze medalist in Athletics at the 1904 Summer Olympics Peter Cooper Hewitt (1906), engineer who invented the first Mercury-vapor lamp in 1901, the Hewitt-Sperry Automatic Airplane, and the Mercury-arc valve, son of New York mayor and philanthropist Abram Hewitt Edward Calvin Kendall (1908), Winner of 1950 Nobel Prize for Physiology or Medicine William Parsons (1882), Chief Engineer of New York City's subway system Irving Langmuir (1903), Winner of the 1932 Nobel Prize in Chemistry, produced gas-filled incandescent lamp, explorer of the vacuum Edmund Prentis (B.S. 1906), former president of the American Standards Association, art collector Roger W. Toll (B.S. 1906), mountaineer, former superintendent of Mount Rainier, Rocky Mountain, and Yellowstone National Parks James Kip Finch (B.S. 1906), American engineer and educator, dean of Columbia Engineering from 1941 to 1950 Kingdon Gould Sr. (E.M. 1909), financier and polo player; father of ambassador Kingdon Gould Jr. Grover Loening (M.S. 1910), American aircraft manufacturer, designer of first successful monoplane José Raúl Capablanca (1910), one of the greatest chess players of all time Alfonso Valdés Cobián (E.E. 1911), Puerto Rican industrialist, co-founder of Compañía Cervecera de Puerto Rico Eugene Dooman (1912), counselor at the U.S. Embassy in Tokyo vital in the negotiations between the U.S. and Japan before World War II David Steinman (PhD. 1911), director of the reconstruction of Brooklyn Bridge Harry Babcock (1912), 1912 Olympic champion in pole vaulting Harvey Seeley Mudd (B.S. 1912), Metallurgical Engineer, president of Cyprus Mines Corporation, co-founder of Claremont McKenna College and namesake of Harvey Mudd College of Engineering Richard Cunningham Patterson Jr. (E.M. 1912), United States Ambassador to Yugoslavia, United States Ambassador to Switzerland, United States Ambassador to Guatemala Edwin Armstrong (E.E. 1913), inventor of the frequency modulation transmission method Willard F. Jones (M.S. 1916), naval architect, head of National Safety Council's marine section and Vice President of Gulf Oil Seeley G. Mudd (B.S. 1917), American physician, professor and major philanthropist to academic institutions; namesake of the Seeley G. Mudd Manuscript Library of Princeton University Philip Sporn (E.E. 1917), Austrian engineer and recipient of IEEE Edison Medal; former president and CEO of American Electric Power Allen Carpé (E.E. 1919), first person to have climbed Mount Bona, Mount Fairweather, and Mount Logan Radu Irimescu (1920), former Romanian ambassador to the United States Langston Hughes (1922), poet of the Harlem Renaissance Arthur Loughren (1923), Pioneer in radio engineering and television engineering Edward Lawry Norton (M.S. 1925), Bell Lab engineer, developer of Norton equivalent circuit Hyman Rickover (M.S. 1928), Father of the Nuclear U.S. Navy Raymond D. Mindlin (B.S. 1931), researcher and professor known for his contributions to applied mechanics, applied physics, and Engineering Sciences, recipient of National Medal of Science Helmut W. Schulz (B.S. 1933, M.S. 1934), President Dynecology, developed uranium centrifugation (gas centrifuge), laser analysis, safe waste conversion Robert D. Lilley (B.S. 
1934), Former President of the AT&T from 1972 to 1976 Herbert L. Anderson (B.S. 1935), established Enrico Fermi Institute and nuclear physicist in the Manhattan Project Daniel C. Drucker (PhD. 1939), American engineer and recipient of National Medal of Science Antoine Marc Gaudin (1921), professor at MIT and a founding member of National Academy of Engineering John R. Ragazzini (PhD. 1941), pioneered the development of the z-transform method in discrete-time signal processing and analysis. Arthur Hauspurg (B.S. 1943, M.S. 1947), chairman of Consolidated Edison Samuel Higginbottom (B.S. 1943), former CEO of Eastern Air Lines and Rolls-Royce North America, chairman of Columbia's board of trustees Richard Skalak (B.S. 1943), pioneer in Biomedical engineering Elmer L. Gaden (B.S. 1944), Father of Biochemical Engineering William F. Schreiber (B.S. 1945), electrical engineer and developer of optical recognition machine Sheldon E. Isakoff (B.S. 1945, M.S. 1947, PhD. 1951), chemical engineer and former director of DuPont Henry S. Coleman (B.S. 1946), acting dean of Columbia College, Columbia University who was held hostage during the Columbia University protests of 1968. Joseph F. Engelberger (B.S. 1946, M.S. 1949), Father of Industrial robotics Edward A. Frieman (B.S. 1946), former director of the Scripps Institution of Oceanography Wilmot N. Hess (B.S. 1946), former director of the National Center for Atmospheric Research from 1980 to 1986 Ira Millstein (B.S. 1947), antitrust expert, partner at Weil, Gotshal & Manges and oldest big law partner in practice Bernard Spitzer (M.S. 1947), real estate developer and philanthropist, father of Eliot Spitzer, 54th Governor of New York Lotfi Asker Zadeh (PhD. 1949), an Iranian mathematician, electrical engineer, and computer scientist Henry Michel (B.S. 1949), Civil Engineer, President of Parsons Brinckerhoff Anna Kazanjian Longobardo (B.S. 1949), founder of the National Society of Women Engineers Edmund DiGiulio (B.S. 1950), founder of the Cinema Products Corporation, five-time Academy Awards winner, inventor of the CP-16 Eliahu I. Jury (PhD. 1953), Initiated field of discrete time systems, pioneered z-transform (the discrete time equivalent of the Laplace Transform), and created Jury stability criterion test Sheldon Weinig (M.S. 1953, PhD. 1955), CEO of Materials Research Corporation, Vice chairman for Engineering and Manufacturing for SONY America Robert Spinrad (1954), American computer engineer and former director of Xerox Palo Alto Research Center Ferdinand Freudenstein (PhD. 1954), mechanical engineer, professor, and widely considered the "Father of Modern Kinematics" Saul Amarel (PhD. 1955), computer scientist and pioneer in artificial intelligence Robert Moog (M.S. 1957), pioneer of electronic music, inventor of the Moog synthesizer Rudolf Emil Kálmán (PhD. 1957), electrical engineer and recipient of National Medal of Science Bernard J. Lechner (B.S. 1957), electronics engineer and vice president of RCA Laboratories Joseph F. Traub (PhD. 1959), prominent computer scientist; head of the Carnegie Mellon School of Computer Science from 1971 to 1979 and founder of the Computer science department at Columbia University Richard G. Newman (M.S. 1960), Chairman and former CEO of world-leading engineering firm AECOM Masanobu Shinozuka (PhD. 1960), probabilistic mechanics, structural stability, and risk assessment Jeffrey Bleustein (PhD. 1962), former chairman and CEO of Harley-Davidson Roy Mankovitz (B.S. 
1963), scientist, inventor, health strategist Jeffrey Ullman (B.S. 1963), professor at Stanford University and winner of the 2020 Turing Award Richard D. Gitlin (M.S. 1965, PhD. 1969) – engineer, co-invention of DSL at Bell Labs Robert C. Merton (B.S. 1966), Winner of the 1997 Nobel Prize in Economics and co-author of the Black–Scholes pricing model Stephen Schneider (B.S. 1966, Ph.D. 1971), environmental scientist at Stanford University who shared the Nobel Peace Prize in 2007 Dorian M. Goldfeld (B.S. 1967), American mathematician and editor of the Journal of Number Theory Robert H. Grubbs (PhD 1968), California Institute of Technology professor and 2005 Nobel Prize laureate Lewis A. Sanders (B.S. 1968), co-founder, Chairman, and CEO of AllianceBernstein Ira Fuchs (B.S. 1969), co-founder of BITNET, creator of LISTSERV, and JSTOR, former vice-president of Princeton University Jae-Un Chung (B.S. 1964, M.S. 1969), Former President, Vice chairman of Samsung Electronics and honorary chairman of Shinsegae Group, husband of Lee Myung-hee, Samsung heiress Feisal Abdul Rauf (B.S. 1969), imam, author, activist; sponsor and director of Park51 Eugene H. Trinh (B.S. 1972), Vietnamese-American scientist and astronaut Eduardo M. Ochoa (M.S. 1976), President of California State University, Monterey Bay Kevin P. Chilton (M.S. 1977), engineer, the current Commander, U.S. Strategic Command, former NASA astronaut Rocco B. Commisso (B.S. 1971), Italian-American billionaire, founder and CEO of Mediacom, the 8th largest cable television company in the United States James L. Manley (B.S. 1971), professor of life sciences at Columbia University Alvin E. Roth (B.S. 1971), Economist, 2012 Nobel Prize Laureate in Economics David Marquardt (B.S. 1973), venture capitalist and founder of August Capital James Albaugh (M.S. 1974), Current President and CEO of Boeing Commercial Airplanes, EVP of its parent company, The Boeing Company. Vikram Pandit (B.S. 1976), former CEO of Citigroup Ralph Izzo (B.S. 1978, M.S. 1979, Ph.D. 1981), Chairman, President, and CEO of Public Service Enterprise Group Ken Bowersox (M.S. 1979), engineer, United States Naval officer and a former NASA astronaut Sanjiv Ahuja (M.S. 1979), current CEO of Augere and former CEO of Orange William G. Gregory (M.S. 1980), NASA astronaut Len Blavatnik (M.S. 1981), billionaire, founder of Access Industries Peter Livanos (B.S. 1981), Greek shipping tycoon, billionaire, owner of Ceres Hellenic Shipping Enterprises and Chairman of Euronav; former major shareholder of Aston Martin Anrika Rupp (B.S. 1981), artist Joshua Bloch (B.S. 1982), Software engineer, Chief Java Architect at Google Jay Mehta (B.S. 1983), Indian businessman, owner of the conglomerate Mehta Group and Indian cricket team Kolkata Knight Riders; husband of Indian actress Juhi Chawla Vincent Sapienza (B.S. 1982), Commissioner of the New York City Department of Environmental Protection Ted Rall (dropped out 1984), Political cartoonist, President of the Association of American Editorial Cartoonists Michael Massimino (B.S. 1984), Current engineer and astronaut—mission specialist, STS-109, STS-125. Adam Cohen (B.S. 1985), CEO of Associated Universities, Inc., former deputy director of the Princeton Plasma Physics Laboratory Gregory H. Johnson (M.S. 1985), Current colonel, engineer, astronaut for International Space Station. STS-109, support for STS-125. Amr Aly (B.S. 1985), winner of the 1985 Hermann Trophy and Olympic soccer player Robert Bakish (B.S. 
1985), current president and CEO of Viacom Marshall Nicholson (B.S. 1985), managing director at China International Capital Corp Chuck Hoberman (M.S. 1985), inventor and architect; designer of the Hoberman sphere Douglas Leone (M.S. 1986), billionaire venture capitalist and partner at Sequoia Capital Jon Normile (B.S. 1988), American Olympic fencer Angeliki Frangou (M.S. 1988), Greek businesswoman, chairman and CEO of Navios Maritime Holdings Jelena Kovacevic (M.S. 1988, PhD 1991), first female dean of the New York University Tandon School of Engineering Moti Yung (PhD. 1988), Cryptographer; Information Security and Privacy Scientist Google Alan E. Willner (PhD. 1988), professor of Electrical Engineering at the University of Southern California, president of The Optical Society Semyon Dukach (B.S. 1989), former chairman of SMTP and managing director of Techstars David Eppstein (PhD. 1989), developer of computational geometry, graph algorithms, and recreational mathematics Ursula Burns (M.S. 1991), Current CEO of Xerox Corporation, the first woman African-American Fortune 500 company CEO; Xerox is also the largest company a woman African American CEO is running. Azmi Mikati (B.S. 1994), CEO of M1 Group; son of Lebanese Prime Minister and billionaire Najib Mikati Neil Daswani (B.S. 1996), founder of Dasient Feryal Özel (B.S. 1996), professor of astronomy at the University of Arizona Judy Joo (B.S. 1997), American chef and TV personality, starred in the show Iron Chef UK; David Yeung (B.S. 1998), Hong Kong entrepreneur; founder of Green Monday Jon Oringer (M.S. 1999), billionaire founder and CEO of Shutterstock Andy Ross (B.S. 2001), Ok Go band member: guitarist, keyboard, backup vocals Regina Barzilay (PhD. 2003), professor at Massachusetts Institute of Technology and MacArthur Fellowship recipient in 2017 Jennifer Yu Cheng (B.S. 2003), Hong Kong businesswoman, educator, and philanthropist, wife of New World Development CEO Adrian Cheng Nullsleep (B.S. 2003), 8-bit musician and founder of the 8bitpeoples collective. Miloš Tomić (B.S. 2005), Olympic rower representing Serbia and Montenegro Samantha John (B.S. 2009), American computer engineer, founder of Hopscotch Chris Chyung (B.S. 2016), real-estate businessman, member of the Indiana House of Representatives Affiliates of the School Horst Ludwig Störmer I.I. Rabi professor of physics and applied physics, winner of 1998 Nobel Prize in Physics Mihajlo Idvorski Pupin Professor, Serbian physicist and physical chemist whose inventions include the Pupin coil Theodore Zoli, adjunct professor of civil engineering and structural engineer Charles F. Chandler American chemist, first Dean of Columbia University's School of Mines Harold Clayton Urey Professor, Nobel Laureate (1934), extensive development in the Manhattan Project, discoverer of Deuterium. Dimitris Anastassiou Professor of Electrical Engineering, developer of MPEG-2 technology Thomas Egleston, founder of Columbia School of Mines and professor of mining and metallurgy John B. Medaris Commanding General of U.S. 
Army Ordnance Missile Command (ABMA), planned Invasion of Normandy; professor Isidor Isaac Rabi Professor, PhD from Columbia (1927), Nobel Laureate, Discoverer of Nuclear Magnetic Resonance Mario Salvadori Architect, Structural Engineer, Professor (1940s–1990s), consultant on Manhattan Project, inventor of thin concrete shells Klaus Lackner, Professor of Environmental Engineering Chien-Shiung Wu "Chinese Marie Curie", first lady of physics, and Professor (1944–1980) who disproved "conservation of parity" Cyril M. Harris, Professor of Electrical Engineering and architect Norman Foster Ramsey Jr. Discovery of deuteron electric quadrupole moment, molecular beam spectroscopy. Professor (1940–1947), B.A. PhD Columbia. Frank Press Geophysicist, work in seismic activity and wave theory, counsel to four presidents. M.A., PhD Columbia, and researcher. Leon M. Lederman A Nobel Laureate, discoverer of muon neutrino '62, bottom quark '77. Professor (1951–1989). M.A., PhD Columbia Eric Kandel Biophysicist, Nobel Laureate, uncovered secrets of synapses. Professor Physicians & Surgeons (1974–); research with the Biomedical Engineering department. Joseph F. Traub Founding chairman of the computer science department at Columbia Emanuel Derman, Professor and Director of Columbia's financial engineering program, co-authors of the Financial Modelers' Manifesto Alfred Aho Canadian computer scientist widely known for his co-authorship of the AWK programming language, winner of the 2020 Turing Award Gertrude Fanny Neumark one of the world's leading experts on doping wide-band semiconductors Charles Hard Townes professor and an American Nobel Prize-winning physicist who helped to invent the laser Jacob Millman Professor of Electrical Engineering, creator of Millman's Theorem John R. Dunning School Dean, physicist who played key roles in the development of the atomic bomb Steven M. Bellovin Professor of Computer Science Philip Kim Professor of Applied Physics and Mathematics Mihalis Yannakakis Professor of Computer Science, famous scholar noted for his work in the fields of Computational complexity theory, Databases Maria Chudnovsky, professor of operations research and industrial engineering David E Keyes, professor of applied mathematics Awi Federgruen, Affiliate Professor of Operations Research and Industrial Engineering Nicholas F. Maxemchuk Professor of Electrical Engineering Clifford Stein Professor of operations research and industrial engineering Ronald Breslow Professor of chemical engineering, now University Professor Santiago Calatrava (Honorary Doctorate, 2007), world renowned architect, sculptor and structural engineer, designer of Montjuic Communications Tower and World Trade Center Transportation Hub Ferdinand Freudenstein, Higgins Professor Emeritus of Mechanical Engineering Henry Spotnitz, Affiliate Professor of Biomedical Engineering Thomas Christian Kavanagh, professor of civil engineering Vladimir Vapnik, Professor of Computer Science and co-developer of Vapnik–Chervonenkis theory Jaron Lanier, visiting scholar at the Computer Science department Sheldon Weinig, Professor of Operations Research and Industrial Engineering and founder of Materials Research Corporation Chris Wiggins, professor of applied mathematics, chief data scientist of The New York Times Man-Chung Tang, professor of civil engineering and former chairman of American Society of Civil Engineers Van C. 
Mow, professor of biomedical engineering and member of the National Academy of Engineering, Institute of Medicine Matt Berg, member of Mechanical Engineering Department research group and one of Time 100 Most Influential People in the World Bjarne Stroustrup, Professor in Computer Science, inventor of C++ programming language Shree K. Nayar, professor of Computer Science, inventor of 360° camera and developer of Oren–Nayar Reflectance Model David E. Shaw, former professor of Computer Science, founder of hedge fund, private equity and technology development firm D. E. Shaw & Co. Specialized centers Columbia Engineering faculty are a central force behind many groundbreaking discoveries. They work at the forefront of their fields, collaborating with experts at Columbia and other universities across a wide range of disciplines. Large, well-funded interdisciplinary centers in science and engineering, materials research, nanoscale research, and genomic research are making significant advances in their respective fields while individual groups of engineers and scientists collaborate to solve theoretical and practical problems in other significant areas. Columbia Engineering's 2007–2008 research expenditures were $92 million, a substantial figure given the school's small size. Harvard's research expenditures in the same period were $35 million. Measured by the ratio of research expenditures to PhD students, Columbia Engineering PhD students have roughly 60% more funding to work with. Specialized labs The Fu Foundation School of Engineering and Applied Science occupies five laboratory and classroom buildings at the north end of the campus, including the Schapiro Center for Engineering and Physical Science Research and the new Northwest Building on Morningside Heights. Because of the School's close proximity to the other Morningside facilities and programs, Columbia engineering students have access to the whole of the University's resources. The School is the site of a wide array of basic and advanced research installations, which include the NSF-funded NSEC and MRSEC interdisciplinary research centers, as well as the Columbia High-Beta Tokamak, the Robert A.W. Carleton Strength of Materials Laboratory, and a 200g geotechnical centrifuge. The Botwinick Multimedia Learning Laboratory is the School's facility for computer-aided design (CAD) and media development. It is equipped with 50 Apple Mac Pro 8-core workstations, as well as a cluster of Apple Xserves with Xraid storage, which serve the lab's 300-plus users per semester. Other programs Undergraduate Research Involvement Program Each SEAS department sponsors opportunities for novel undergraduate research with real-world applications. Departmental Chairs supervise students through the process, and mentoring with a professor is provided. Materials Science and Engineering Program in the Department of Applied Physics and Applied Mathematics, sharing teaching and research with the faculty from Henry Krumb School of Mines. Computer Engineering Administered by both the Electrical Engineering and Computer Science Departments through a joint Computer Engineering Committee. The Combined Plan Programs The 3–2, B.A./B.S., is designed to provide students with the opportunity to receive both a B.A. degree from an affiliated liberal arts college and a B.S. degree from SEAS within five years. 
Students complete the requirements for the liberal arts degree along with a pre-engineering course of study in three years at their college and then complete two years at Columbia. The 4–2 M.S. program is designed to allow students to complete an M.S. degree at SEAS in two years after completion of a B.A. degree at one of the affiliated schools. This program will allow students the opportunity to take undergraduate engineering courses if necessary. See also List of Columbia University people Education in New York City Columbia University References Further reading External links Engineering School Home Page CUSJ – Columbia Undergraduate Science Journal 1997 Columbia University Record article Engineering 1864 establishments in New York (state) Educational institutions established in 1864 Engineering universities and colleges in New York (state)
366872
https://en.wikipedia.org/wiki/Small%20matter%20of%20programming
Small matter of programming
In software development, small matter of programming (SMOP) or simple matter of programming is a phrase used to ironically indicate that a suggested feature or design change would in fact require a great deal of effort. It points out that although the change is clearly possible, it would be very laborious to actually perform. It often implies that the person proposing the feature underestimates its cost. Definitions The 1983 Jargon File describes an SMOP as follows: The IBM Jargon Dictionary defines SMOP as: Usage SMOP was among the "games" described in an article as paralleling the Games People Play identified by Dr. Eric Berne in the field of self-help psychology. The game essentially consists of proposing seemingly simple adjustments to a design, leading to unexpected consequences and delays. Alternative phrases such as simple matter of software or small matter of software are occasionally used in the same manner. However, the phrase is also used without irony to indicate that straightforward software development is all that is required to resolve some issue. This usage is often invoked when the speaker wants to contrast the implied ease of software changes with the suggested greater difficulty of making a hardware change or a change to an industry standard. This non-ironic usage is more often invoked by senior management and hardware engineers, than it is by software engineers. The term was also explored and expanded upon by computer scientist Bonnie Nardi in her 1993 book A Small Matter of Programming: Perspectives on End User Computing. See also Ninety-ninety rule Hofstadter's law Hard–easy effect Planning fallacy References Anti-patterns Computer jargon Software project management English-language idioms
7104097
https://en.wikipedia.org/wiki/Extended%20Validation%20Certificate
Extended Validation Certificate
An Extended Validation Certificate (EV) is a certificate conforming to X.509 that attests to the legal identity of its owner and is signed by a certificate authority key that is authorized to issue EV certificates. EV certificates can be used in the same manner as any other X.509 certificates, including securing web communications with HTTPS and signing software and documents. Unlike domain-validated certificates and organization-validated certificates, EV certificates can be issued only by a subset of certificate authorities (CAs) and require verification of the requesting entity's legal identity before certificate issuance. As of February 2021, all major web browsers (Google Chrome, Mozilla Firefox, Microsoft Edge and Apple Safari) have menus which show the EV status of the certificate and the verified legal identity of EV certificates. Mobile browsers typically display EV certificates the same way they do Domain Validation (DV) and Organization Validation (OV) certificates. Of the ten most popular websites online, none use EV certificates and the trend is away from their usage. For software, the verified legal identity is displayed to the user by the operating system (e.g., Microsoft Windows) before proceeding with the installation. Extended Validation certificates use the same file format and typically the same encryption as organization-validated certificates and domain-validated certificates, so they are compatible with most server and user agent software. The criteria for issuing EV certificates are defined by the Guidelines for Extended Validation established by the CA/Browser Forum. To issue an extended validation certificate, a CA requires verification of the requesting entity's identity, its operational status, and its control over the domain name and hosting server. History Introduction by CA/Browser Forum In 2005, Melih Abdulhayoglu, CEO of the Comodo Group, convened the first meeting of the organization that became the CA/Browser Forum, hoping to improve standards for issuing SSL/TLS certificates. On June 12, 2007, the CA/Browser Forum officially ratified the first version of the Extended Validation (EV) SSL Guidelines, which took effect immediately. The formal approval successfully brought to a close more than two years of effort and provided the infrastructure for trusted website identity on the Internet. Then, in April 2008, the forum announced version 1.1 of the guidelines, building on the practical experience of its member CAs and relying-party application software suppliers gained in the months since the first version was approved for use. Creation of special UI indicators in browsers Most major browsers created special user interface indicators for pages loaded via HTTPS secured by an EV certificate soon after the creation of the standard. These included Google Chrome 1.0, Internet Explorer 7.0, Firefox 3, Safari 3.2, and Opera 9.5. Furthermore, some mobile browsers, including Safari for iOS, the Windows Phone browser, and Firefox and Chrome for Android and iOS, added such UI indicators. Typically, browsers with EV support display the validated identity—usually a combination of organization name and jurisdiction—contained in the EV certificate's 'subject' field. In most implementations, the enhanced display includes: The name of the company or entity that owns the certificate; A lock symbol, also in the address bar, that varies in color depending on the security status of the website. 
By clicking on the lock symbol, the user can obtain more information about the certificate, including the name of the certificate authority that issued the EV certificate. Removal of special UI indicators In May 2018, Google announced plans to redesign the user interface of Google Chrome to remove emphasis from EV certificates. Chrome 77, released in 2019, removed the EV certificate indication from the omnibox, but EV certificate status can be viewed by clicking on the lock icon and then checking for the legal entity name under "Certificate". Firefox 70 removed the distinction in the omnibox or URL bar (EV and DV certificates are displayed similarly with just a lock icon), but the details about certificate EV status are accessible in the more detailed view that opens after clicking on the lock icon. Apple Safari on iOS 12 and macOS Mojave (released in September 2018) removed the visual distinction of EV status. Issuing criteria Only CAs that pass an independent qualified audit review may offer EV, and all CAs globally must follow the same detailed issuance requirements, which aim to: Establish the legal identity as well as the operational and physical presence of the website owner; Establish that the applicant is the domain name owner or has exclusive control over the domain name; Confirm the identity and authority of the individuals acting for the website owner, and that documents pertaining to legal obligations are signed by an authorized officer; Limit the duration of certificate validity to ensure the certificate information is up to date. The CA/B Forum is also limiting the maximum re-use of domain validation data and organization data to a maximum of 397 days (it must not exceed 398 days) from March 2020 onward. With the exception of Extended Validation Certificates for .onion domains, it is otherwise not possible to get a wildcard Extended Validation Certificate – instead, all fully qualified domain names must be included in the certificate and inspected by the certificate authority. Extended Validation certificate identification EV certificates are standard X.509 digital certificates. The primary way to identify an EV certificate is by referencing the Certificate Policies extension field. Each issuer uses a different object identifier (OID) in this field to identify their EV certificates, and each OID is documented in the issuer's Certification Practice Statement. As with root certificate authorities in general, browsers may not recognize all issuers. EV HTTPS certificates contain a subject with X.509 OIDs for jurisdictionOfIncorporationCountryName (OID: 1.3.6.1.4.1.311.60.2.1.3), jurisdictionOfIncorporationStateOrProvinceName (OID: 1.3.6.1.4.1.311.60.2.1.2) (optional), jurisdictionLocalityName (OID: 1.3.6.1.4.1.311.60.2.1.1) (optional), businessCategory (OID: 2.5.4.15) and serialNumber (OID: 2.5.4.5), with the serialNumber pointing to the ID at the relevant secretary of state (US) or government business registrar (outside US), as well as a CA-specific policy identifier so that EV-aware software, such as a web browser, can recognize them. This identifier is what defines an EV certificate and distinguishes it from an OV certificate. Online Certificate Status Protocol The criteria for issuing Extended Validation certificates do not require issuing certificate authorities to immediately support Online Certificate Status Protocol for revocation checking. 
However, the requirement for a timely response to revocation checks by the browser has prompted most certificate authorities that had not previously done so to implement OCSP support. Section 26-A of the issuing criteria requires CAs to support OCSP checking for all certificates issued after Dec. 31, 2010. Criticism Colliding entity names Legal entity names are not unique; therefore, an attacker who wants to impersonate an entity might incorporate a different business with the same name (but, e.g., in a different state or country) and obtain a valid certificate for it, but then use the certificate to impersonate the original site. In one demonstration, a researcher incorporated a business called "Stripe, Inc." in Kentucky and showed that browsers display it similarly to how they display the certificate of the payment processor "Stripe, Inc." incorporated in Delaware. The researcher claimed the demonstration setup took about an hour of his time, US$100 in legal costs and US$77 for the certificate. He also noted that "with enough mouse clicks, [user] may be able to [view] the city and state [where entity is incorporated], but neither of these are helpful to a typical user, and they will likely just blindly trust the [EV certificate] indicator". Availability to small businesses Since EV certificates are promoted and reported as a mark of a trustworthy website, some small business owners have voiced concerns that EV certificates give undue advantage to large businesses. The published drafts of the EV Guidelines excluded unincorporated business entities, and early media reports focused on that issue. Version 1.0 of the EV Guidelines was revised to embrace unincorporated associations as long as they were registered with a recognized agency, greatly expanding the number of organizations that qualified for an Extended Validation Certificate. Lists comparing the price and features of EV certificates are available to help small businesses select a cost-effective certificate. Effectiveness against phishing attacks with IE7 security UI In 2006, researchers at Stanford University and Microsoft Research conducted a usability study of the EV display in Internet Explorer 7. Their paper concluded that "participants who received no training in browser security features did not notice the extended validation indicator and did not outperform the control group", whereas "participants who were asked to read the Internet Explorer help file were more likely to classify both real and fake sites as legitimate". Domain-validated certificates were created by CAs in the first place While proponents of EV certificates claim they help against phishing attacks, security expert Peter Gutmann states that the new class of certificates restores CAs' profits, which had been eroded by the race to the bottom that occurred among issuers in the industry. According to Gutmann, EV certificates are not effective against phishing because they are "not fixing any problem that the phishers are exploiting". He suggests that the big commercial CAs introduced EV certificates to return to their old high prices. See also Qualified website authentication certificate HTTP Strict Transport Security References External links CA/Browser Forum Web site Firefox green padlock for EV certificates Key management E-commerce Public key infrastructure Transport Layer Security 2007 introductions
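The following is a minimal, illustrative sketch of the identification mechanism described above: EV-aware software recognizes an EV certificate by the policy OIDs in its Certificate Policies extension. The sketch uses the third-party Python cryptography package; the file name and the printed output are assumptions made for the example, not values taken from any guideline.

# Illustrative sketch: list the Certificate Policies OIDs of a certificate.
# EV-aware software compares these OIDs against the EV policy OIDs it knows
# (2.23.140.1.1 is the CA/Browser Forum EV policy OID; individual CAs also
# document their own EV OIDs in their Certification Practice Statements).
# "cert.pem" is a placeholder file name, not a value from this article.
from cryptography import x509

def certificate_policy_oids(pem_path):
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    try:
        ext = cert.extensions.get_extension_for_class(x509.CertificatePolicies)
    except x509.ExtensionNotFound:
        return []
    return [policy.policy_identifier.dotted_string for policy in ext.value]

if __name__ == "__main__":
    for oid in certificate_policy_oids("cert.pem"):
        print(oid)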
17269604
https://en.wikipedia.org/wiki/XRDS
XRDS
XRDS (eXtensible Resource Descriptor Sequence) is an XML format for discovery of metadata about a web resource – in particular the discovery of services associated with the resource, a process known as service discovery. Background The XML format used by XRDS was originally developed in 2004 by the OASIS XRI (extensible resource identifier) Technical Committee as the resolution format for XRIs. The acronym XRDS was coined during subsequent discussions between XRI TC members and OpenID developers at the first Internet Identity Workshop held in Berkeley, CA in October 2005. The protocol for discovering an XRDS document from a URL was formalized as the Yadis specification published by Yadis.org in March 2006. Yadis became the service discovery format for OpenID 1.1. A common discovery service for both URLs and XRIs proved so useful that in November 2007 the XRI Resolution 2.0 specification formally added the URL-based method of XRDS discovery (Section 6). This format and discovery protocol subsequently became part of OpenID Authentication 2.0. XRDS Simple In early 2008, work on OAuth discovery by Eran Hammer-Lahav led to the development of XRDS Simple, a profile of XRDS that restricts it to the most basic elements and introduces some extensions to support OAuth discovery and other protocols that use specific HTTP methods. In late 2008, XRDS Simple was cancelled and merged back into the main XRDS specification, resulting in the upcoming XRD 1.0 format. Example uses Besides XRI resolution, examples of typical XRDS usage include: OpenID authentication for discovery and capabilities description of OpenID providers. OAuth discovery for locating OAuth service endpoints and capabilities. The Higgins Project for discovery of Higgins context providers. XDI.org I-name and I-number digital identity addressing services for generalized digital identity service discovery. The XDI data sharing protocol for discovery of XDI service endpoints and capabilities. Example XRDS document Following is an example of an XRDS document for the fictional XRI i-name =example. This document would typically be requested from a Web server via HTTP or HTTPS using the content type application/xrds+xml. Note that the outer <XRDS> element serves as a container for one or more <XRD> (Extensible Resource Descriptor) elements. Most simple XRDS documents have only one XRD. Other services like XRI resolution may construct a sequence of XRDs within a single XRDS document to reflect a chain of metadata about linked resources. 
<?xml version="1.0" encoding="UTF-8"?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)" xmlns:openid="http://openid.net/xmlns/1.0">
  <XRD ref="xri://=example">
    <Query>*example</Query>
    <Status ceid="off" cid="verified" code="100"/>
    <Expires>2008-05-05T00:15:00.000Z</Expires>
    <ProviderID>xri://=</ProviderID>
    <!-- synonym section -->
    <LocalID priority="10">!4C72.6C81.D78F.90B2</LocalID>
    <EquivID priority="10">http://example.com/example-user</EquivID>
    <EquivID priority="15">http://example.net/blog</EquivID>
    <CanonicalID>xri://=!4C72.6C81.D78F.90B2</CanonicalID>
    <!-- service section -->
    <Service>
      <!-- XRI resolution service -->
      <ProviderID>xri://=!F83.62B1.44F.2813</ProviderID>
      <Type>xri://$res*auth*($v*2.0)</Type>
      <MediaType>application/xrds+xml</MediaType>
      <URI priority="10">http://resolve.example.com</URI>
      <URI priority="15">http://resolve2.example.com</URI>
      <URI>https://resolve.example.com</URI>
    </Service>
    <!-- OpenID 2.0 login service -->
    <Service priority="10">
      <Type>http://specs.openid.net/auth/2.0/signon</Type>
      <URI>http://www.myopenid.com/server</URI>
      <LocalID>http://example.myopenid.com/</LocalID>
    </Service>
    <!-- OpenID 1.0 login service -->
    <Service priority="20">
      <Type>http://openid.net/server/1.0</Type>
      <URI>http://www.livejournal.com/openid/server.bml</URI>
      <openid:Delegate>http://www.livejournal.com/users/example/</openid:Delegate>
    </Service>
    <!-- untyped service for access to files of media type JPEG -->
    <Service priority="10">
      <Type match="null" />
      <Path select="true">/media/pictures</Path>
      <MediaType select="true">image/jpeg</MediaType>
      <URI append="path">http://pictures.example.com</URI>
    </Service>
  </XRD>
</xrds:XRDS>
Synonyms XRDS documents can assert zero or more synonyms for a resource. In this context, a synonym is another identifier (a URI or XRI) that identifies the same target resource. For instance, the example XRDS document above asserts four synonyms: The local synonym !4C72.6C81.D78F.90B2. This is a relative XRI synonym assigned by the provider of this XRDS document. The equivalent URL http://example.com/example-user with a priority of 10 (1 is the highest priority). The equivalent URL http://example.net/blog with a priority of 15 (a lower priority than the other equivalent URL above). The canonical identifier xri://=!4C72.6C81.D78F.90B2. This is an absolute XRI i-number for the target resource—a persistent identifier that will never be reassigned (the functional equivalent of a Uniform Resource Name). For full details of XRDS synonym support, see XRI Resolution 2.0, Section 5. Service endpoints (SEPs) The other main purpose of XRDS documents is to assert the services associated with a resource, called service endpoints or SEPs. For instance, the example XRDS document above asserts four service endpoints for the represented resource: An XRI resolution service (type xri://$res*auth*($v*2.0)). An OpenID 2.0 authentication service (type http://specs.openid.net/auth/2.0/signon). An OpenID 1.0 authentication service (type http://openid.net/server/1.0). An untyped service for requesting resources with a media type image/jpeg. For full details of XRDS service endpoints, see XRI Resolution 2.0, Sections 4.2 and 13 (a short illustrative parsing sketch appears at the end of this article). Service types In XRDS documents, a service is identified using a URI or XRI. Following are listings of well-known service types. See also XRDS Type, an open community effort begun in May 2008 to provide a catalog of XRDS service types. XRI resolution OpenID OAuth discovery Licensing XRDS is an open public royalty-free OASIS specification. 
The OASIS XRI Technical Committee has operated since its inception in 2003 under a royalty-free licensing policy, as stated in its charter and IPR page. See also OpenID Higgins project I-names Light-weight Identity XRI XDI Social Web Yadis References External links OASIS XRI Technical Committee XRI Resolution 2.0 Specification – the XRDS document format is specified in Section 4. OASIS XRI 2.0 FAQ XRDS Simple 1.0 XRDS Type – an open community registry of XRDS service types. dev.xri.net – an open public wiki on XRI and XRDS open source projects Internet Identity Workshop One-Pager on XRI and XRDS XML-based standards
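As a rough illustration of how a relying party might read the service endpoints (SEPs) described above from an XRDS document such as the example, here is a short Python sketch using only the standard library. The file name, the default-priority handling and the printed output are simplifying assumptions for the example, not requirements of the XRI Resolution specification.

# Illustrative sketch: list the service endpoints of an XRDS document,
# ordered by their priority attribute (a lower number means a higher
# priority). Elements without a priority attribute are treated here as
# priority 10 for simplicity; the XRI Resolution spec defines its own rules.
import xml.etree.ElementTree as ET

XRD_NS = "{xri://$xrd*($v*2.0)}"  # default namespace of the <XRD> elements

def service_endpoints(xrds_xml):
    root = ET.fromstring(xrds_xml)
    services = []
    for service in root.iter(XRD_NS + "Service"):
        priority = int(service.get("priority", "10"))
        types = [t.text for t in service.findall(XRD_NS + "Type") if t.text]
        uris = [u.text for u in sorted(
            service.findall(XRD_NS + "URI"),
            key=lambda u: int(u.get("priority", "10")))]
        services.append((priority, types, uris))
    return sorted(services, key=lambda s: s[0])

if __name__ == "__main__":
    # "example.xrds" is a placeholder for a document fetched with the
    # content type application/xrds+xml, as described above.
    with open("example.xrds", "r", encoding="utf-8") as f:
        for priority, types, uris in service_endpoints(f.read()):
            print(priority, types, uris)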
35616709
https://en.wikipedia.org/wiki/International%20Information%20Technology%20University
International Information Technology University
International IT University or International University of Information Technologies (Halyqaralyq aqparattyq tehnologııalar ýnıversıteti) was established in 2009 by order of the President of Kazakhstan, in close collaboration with the educational organization iCarnegie, which represents Carnegie Mellon University in the United States. The university was created to train qualified, internationally recognized IT specialists in Kazakhstan. The International IT University is supported by grants from the government of Kazakhstan and national infocommunication companies, and its programs cover disciplines from both the Kazakhstani and U.S. educational systems. Academic activities Bachelor specialties Information Systems Computer Science and Software Engineering Computer Science Management in IT Finance in IT Electronic Journalism Radio Engineering, Electronics and Telecommunications Mathematical and Computer Modeling Magistracy specialties Information Systems Computer Science and Software Engineering Project Management Mathematical and Computer Modeling iCarnegie courses Education at the International IT University follows the educational programs of iCarnegie, a subsidiary of Carnegie Mellon University. High School Program Courses High School Program courses are training courses that give applicants basic knowledge of programming. Their main objectives are: to teach applicants to program in Java; to teach the modern approach of object-oriented programming (OOP); to teach how to create applications with animations, sounds and keyboard control; to provide iCarnegie certificates, which are considered for admission to IITU; to provide an opportunity to work on interesting projects; to provide an opportunity to learn from an honorary professor from the U.S. according to international standards; and to teach how to create web pages with games. International cooperation The university's international cooperation is carried out by improving the training system in accordance with international standards, the professional development of teaching staff, and the use of new technologies and leading practices in teaching and research, through collaboration with foreign universities under direct contracts: Laboratory Huawei Cloud Computing of Innogrid Huawei Cloud Computing of Innogrid – on August 9, 2011, a laboratory of open systems and cloud computing was created in close cooperation between the Chinese company Huawei, ICT Holding "Zerde" and the International IT University. The laboratory's main purpose is research in the fields of cloud computing and open systems using open-source technology. Apple Training Center iOS Application Development – the course introduces students to the Objective-C programming language and application development for mobile devices based on iOS. At the end of the course, students are able to develop applications and programs in Objective-C for iPhone, iPad, and other iOS devices, and to work in the Xcode development environment. The course includes an introduction to Objective-C, the application of object-oriented programming concepts in the language, study of the MVC development paradigm, and work with the various components of the Xcode development environment. Before the start of the course, the student should be familiar with object-oriented programming, know the syntax of C-like languages, and have a basic knowledge of graphic design tools, compilers and debuggers. 
Mac OS X Application Development – the course teaches students to develop applications for the Mac OS X operating system. At the end of the course, students have experience working with the Cocoa framework and Objective-C. The course requires prior knowledge of the Objective-C programming language and includes an overview of the Mac OS X platform and operating system and the principles of designing application interfaces for personal computers. Advanced Apple Development – the course is designed for in-depth study of methods and tools for developing applications based on Apple technology. It includes a more detailed study of the Objective-C language, additional third-party libraries, the Xcode development environment, working with the debugger, methods of combining different languages and technologies with Objective-C programs, the WebKit, ParseKit, Cocoa and RestKit frameworks, writing web applications using Objective-C and Xcode, the architecture of the MacOS and Apple iOS operating systems and the Darwin kernel, as well as the hardware architecture of Apple iPhone/iPad devices. Microsoft Laboratory — Social projects and activities Hackday Almaty 2011 – on 29–30 April 2011, the IT event Hackday was held at IITU. The event was attended by more than 500 master-class listeners and 396 registered participants working on different projects in the IT, Content and Media sections. 104 projects were announced at the event, of which 88 were submitted. Hackday Almaty 2012 – on 28–29 April 2012, the second IT event, Hackday 2012, was held at IITU. That year, over 800 participants took part in different projects in the IT, Media and Content sections. Partners of Hackday 2012, in conjunction with the IT University, were: Management Rector - Uskenbaeva Raisa Kabievna Vice-Rector - Uskenbaeva Raisa Kabievna Director of Marketing and PR - Taykenova Mayrash Gomarovna University partners University anthem References The information in this article is based on that in its Kazakh equivalent. External links The Objective-C courses by Rakhim Davletkaliyev BSc 2009 establishments in Kazakhstan Universities in Kazakhstan Educational institutions established in 2009
54002077
https://en.wikipedia.org/wiki/Cloudwords
Cloudwords
Cloudwords is an American software company based in San Francisco, California, specializing in Software-as-a-Service (SaaS) technology. History Cloudwords was co-founded on March 26, 2010, by Michael Meinhardt and Scott Yancey, who raised $3 million in seed funding from individuals including Marc Benioff, Chairman and CEO of Salesforce, and Salesforce founding CTO Dave Moellenhoff. The company raised $2.4 million in Series A funding in May 2012, led by Storm Ventures, and completed a $9.1 million Series B round of funding in November 2013. Its investors include Storm Ventures, UMC Capital, GMB Consulting LLC, Marc Benioff and other individual investors. In May 2016 Yancey was replaced as CEO by Richard Harpham, who had been Vice President of Sales and Marketing. Cloudwords officially launched in February 2011 and in August that year debuted the Basic and Professional editions of its software. The company focuses on assisting large companies in localizing their marketing; in April 2017 the company announced a partnership with Lilt, a translation productivity startup. References Software companies based in California Software companies of the United States 2010 establishments in California Companies based in San Francisco Software companies established in 2010 American companies established in 2010
25598358
https://en.wikipedia.org/wiki/Pentaho
Pentaho
Pentaho is business intelligence (BI) software that provides data integration, OLAP services, reporting, information dashboards, data mining and extract, transform, load (ETL) capabilities. Its headquarters are in Orlando, Florida. Pentaho was acquired by Hitachi Data Systems in 2015 and in 2017 became part of Hitachi Vantara. Overview Pentaho is a Java framework for creating business intelligence solutions. Although best known for its Business Analysis Server (formerly known as the Business Intelligence Server), the Pentaho software is essentially a collection of Java classes with specific functionality. On top of those Java classes one can build any BI solution. The only exception to this model is the ETL tool Pentaho Data Integration (PDI), formerly known as Kettle. PDI is a set of software tools used to design data flows that can be run either on a server or as standalone processes. PDI encompasses Kitchen, a job and transformation runner, and Spoon, a graphical user interface for designing such jobs and transformations. Features such as reporting and OLAP are achieved by integrating subprojects into the Pentaho framework, such as the Mondrian OLAP engine and JFreeReport. Those projects have since been brought under Pentaho's curation. Some of those subprojects even have standalone clients, like Pentaho Report Designer, a front-end for JFreeReport, and Pentaho Schema Workbench, a GUI for writing the XML schemas used by Mondrian to serve OLAP cubes. Pentaho offers enterprise and community editions of this software. The enterprise software is obtained through an annual subscription and contains extra features and support not found in the community edition. Pentaho's core offering is frequently enhanced by add-on products, usually in the form of plug-ins, from the company and the broader community of users. Products Server applications Pentaho Enterprise Edition (EE) and Pentaho Community Edition (CE). Desktop/client applications Community driven, open-source Pentaho server plug-ins All of these plug-ins function with Pentaho Enterprise Edition (EE) and Pentaho Community Edition (CE). Licensing Pentaho follows an open core business model. It provides two different editions of Pentaho Business Analytics: a community edition and an enterprise edition. The enterprise edition is purchased on a subscription model, which includes support, services, and product enhancements via an annual subscription. The enterprise edition is available under a commercial license. The enterprise license comes with three levels of Pentaho Enterprise Support: Enterprise, Premium and Standard. The community edition is a free open source product licensed under the GNU General Public License version 2.0 (GPLv2), GNU Lesser General Public License version 2.0 (LGPLv2), and Mozilla Public License 1.1 (MPL 1.1). 
Recognition InfoWorld Bossie Award 2008, 2009, 2010, 2011, 2012 Ventana Research Leadership Award 2010 for StoneGate Senior Care CRN Emerging Technology Vendor 2010 ROI Awards 2012 - Nucleus Research See also Nutch - an effort to build an open source search engine based on Lucene and Hadoop, also created by Doug Cutting Apache Accumulo - Secure Big Table HBase - Bigtable-model database Hypertable - HBase alternative MapReduce - Google's fundamental data filtering algorithm Apache Mahout - machine learning algorithms implemented on Hadoop Apache Cassandra - a column-oriented database that supports access from Hadoop HPCC - LexisNexis Risk Solutions High Performance Computing Cluster Sector/Sphere - open-source distributed storage and processing Cloud computing Big data Data-intensive computing References External links Business intelligence companies Free business software Free reporting software Extract, transform, load tools
54197612
https://en.wikipedia.org/wiki/Department%20of%20Information%20Technology%20%28Botswana%29
Department of Information Technology (Botswana)
The Department of Information Technology, formerly known as the Government Computer Bureau, is a department in the Ministry of Transport and Communications; it was previously under the Ministry of Communications, Science and Technology. Services offered by the Department of Information Technology The department provides several major services to the government of Botswana, including the following: government website hosting, and the provision of internet connectivity and other information technology-related services across the public sector of Botswana. References External links Government of Botswana
532520
https://en.wikipedia.org/wiki/Copland%20%28operating%20system%29
Copland (operating system)
Copland is an operating system developed by Apple for Macintosh computers between 1994 and 1996 but never commercially released. It was intended to be released as System 8, and later, Mac OS 8. Planned as a modern successor to the aging System 7, Copland introduced protected memory, preemptive multitasking, and several new underlying operating system features, while retaining compatibility with existing Mac applications. Copland's tentatively planned successor, codenamed Gershwin, was intended to add more advanced features such as application-level multithreading. Development officially began in March 1994. Over the next several years, previews of Copland garnered much press, introducing the Mac audience to basic concepts of modern operating system design such as object orientation, crash-proofing, and multitasking. In May 1996, Gil Amelio stated that Copland was the primary focus of the company, aiming for a late-year release. Internally, however, the development effort was beset with problems due to dysfunctional corporate personnel and project management. Development milestones and developer release dates were missed repeatedly. Ellen Hancock was hired to get the project back on track, but quickly concluded it would never ship. In August 1996, it was announced that Copland was canceled and Apple would look outside the company for a new operating system. Among many choices, they selected NeXTSTEP and purchased NeXT in 1997 to obtain it. In the interim period, while NeXTSTEP was ported to the Mac, Apple released a much more legacy-oriented Mac OS 8 in 1997, followed by Mac OS 9 in 1999. Mac OS X became Apple's next-generation operating system with its release in 2001. All of these releases bear functional or cosmetic influence from Copland. The Copland development effort can be described by pejorative software industry terminology such as "empire building," feature creep, and project death march. In 2008, PC World included Copland on a list of the biggest project failures in information technology (IT) history. Design Mac OS legacy The prehistory of Copland begins with an understanding of the Mac OS legacy, and its architectural problems to be solved. Launched in 1984, the Macintosh and its operating system were designed from the start as a single-user, single-tasking system, which allowed the hardware development to be greatly simplified. As a side effect of this single application model, the original Mac developers were able to take advantage of several compromising simplifications that allowed great improvements in performance, running even faster than the much more expensive Lisa. But this design also led to several problems for future expansion. By assuming only one program would be running at a time, the engineers were able to ignore the concept of reentrancy, which is the ability for a program (or code library) to be stopped at any point, asked to do something else, and then return to the original task. In the case of QuickDraw for example, this means the system can store state information internally, like the current location of the window or the line style, knowing it would only change under control of the running program. Taking this one step further, the engineers left most of this state inside the application rather than in QuickDraw, thus eliminating the need to copy this data between the application and library. QuickDraw found this data by looking at known locations within the applications. This concept of sharing memory is a significant source of problems and crashes. 
If an application program writes incorrect data into these shared locations, it could cause QuickDraw to crash, thereby causing the computer to crash. Likewise, any problem in QuickDraw could cause it to overwrite data in the application, once again leading to crashes. In the case of a single-application operating system this was not a fatal limitation, because in that case a problem in either would require the application, or computer, to be restarted anyway. The other main issue was that early Macs lacked a memory management unit (MMU), which precluded the possibility of several fundamental modern features. An MMU provides memory protection to ensure that programs cannot accidentally overwrite other programs' memory, and provides for shared memory that allows data to be easily passed among libraries. Lacking shared memory, the API was instead written so the operating system and application share all memory, which is what allows QuickDraw to examine the application's memory for settings like the line drawing mode or color. These limits meant that supporting the multitasking of more than one program at a time would be difficult without rewriting all of this operating system and application code. Yet doing so would mean the system would run unacceptably slowly on existing hardware. Instead, Apple adopted a system known as MultiFinder in 1987, which keeps the running application in control of the computer, as before, but allows an application to be rapidly switched to another, normally simply by clicking on its window. Programs that are not in the foreground are periodically given short bits of time to run, but as before, the entire process is controlled by the applications, not the operating system. Because the operating system and applications all share one memory space, it is possible for a bug in any one of them to corrupt the entire operating system, and crash the machine. Under MultiFinder, any crash anywhere will crash all running programs. Running multiple applications increases the chances of a crash, making the system potentially more fragile. Adding greatly to the severity of the problem is the patching mechanism used to add functions to the operating system, known as CDEVs and INITs or Control Panels and Extensions. Third party developers also make use of this mechanism to add features, including screensavers and a hierarchical Apple menu. Some of these third-party control panels became almost universal, like the popular After Dark screensaver package. Because there was no standard for use of these patches, it was not uncommon for several of these add-ons — including Apple's own additions to the OS — to use the same patches, and interfere with each other, leading to more crashing. Copland design Copland was designed to consist of the Mac OS on top of a microkernel named Nukernel, which would handle basic tasks such as application startup and memory management, leaving all other tasks to a series of semi-special programs known as servers. For instance, networking and file services would not be provided by the kernel itself, but by servers that would be sent requests through interapplication communications. Copland consists of the combination of Nukernel, various servers, and a suite of application support libraries to provide implementations of the well-known classic Macintosh programming interface. Application services are offered through a single program known officially as the Cooperative Macintosh Toolbox environment, but universally referred to as the Blue Box. 
The Blue Box encapsulates an existing System 7 operating system inside a single process and address space. Mac programs run inside the Blue Box much as they do under System 7, as cooperative tasks that use the non-reentrant Toolbox calls. A worst-case scenario is that an application in the Blue Box crashes, taking down the entire Blue Box instance with it. This does not result in the system as a whole going down, however, and the Blue Box can be restarted. New applications written with Copland in mind are able to communicate directly with the system servers and thereby gain many advantages in terms of performance and scalability. They can also communicate with the kernel to launch separate applications or threads, which run as separate processes in protected memory, as in most modern operating systems. These separate applications cannot use non-reentrant calls like QuickDraw, however, and thus could have no user interface. Apple suggested that larger programs could place their user interface in a normal Macintosh application, which would then start worker threads externally. Another key feature of Copland is that it is fully PowerPC (PPC) native. System 7 had been ported to the PowerPC with great success; large parts of the system run as PPC code, including both high-level functions, such as most of the user interface toolbox managers, and low-level functions, such as interrupt management. However, enough 68k code, especially in user applications, is still run in emulation that the operating system must map some data between the two environments. In particular, every call into the Mac OS requires a mapping between the interrupt systems of the 68k and PPC. Removing these mappings would greatly improve general system performance. At WWDC 1996, engineers claimed that system calls would execute as much as 50% faster. Copland is also based on the then-recently defined Common Hardware Reference Platform, or CHRP, which standardized the Mac hardware to the point where it could be built by different companies and could run other operating systems (Solaris and AIX were two of many mentioned). This was a common theme at the time; many companies were forming groups to define standardized platforms to offer an alternative to the "Wintel" platform that was rapidly becoming dominant — examples include 88open, Advanced Computing Environment, and the AIM alliance. The fundamental challenge to Copland's development and adoption, a classic second-system effect, would be getting all of these functions to fit into an ordinary Mac. System 7.5 already uses up about 2.5 megabytes (MB) of RAM, which is a significant portion of the total RAM in most contemporaneous machines. Copland is two systems in one, as its native foundation also hosts the Blue Box, containing essentially a complete copy of System 7.5. Copland thus uses a Mach-inspired memory management system and relies extensively on shared libraries, with the goal being for Copland to be only some 50% larger than 7.5. History Pink and Blue In March 1988, technical middle managers at Apple held an offsite meeting to plan the future course of Mac OS development. Ideas were written on index cards; features that seemed simple enough to implement in the short term (like adding color to the user interface) were written on blue cards; longer-term goals—such as preemptive multitasking—were on pink cards; and long-range ideas like an object-oriented file system were on red cards. 
Development of the ideas contained on the blue and pink cards was to proceed in parallel, and at first, the two projects were known simply as "blue" and "pink". Apple intended to have the "blue" team (who came to call themselves the "Blue Meanies" after characters in the film Yellow Submarine) release an updated version of the existing Macintosh operating system in the 1990–1991 timeframe, and the Pink team to release an all-new OS around 1993. The Blue team delivered what became known as System 7 on May 13, 1991, but the Pink team suffered from the second-system effect and its release date continued to slip into the indefinite future. Some of the reason for this can be traced to problems that would become widespread at Apple as time went on; as Pink became delayed, its engineers moved to Blue instead. This left the Pink team constantly struggling for staffing, and suffering from the problems associated with high employee turnover. Management ignored these sorts of technical development issues, leading to continual problems delivering working products. At this same time, the recently released NeXTSTEP was generating intense interest in the developer world. Features that were originally part of Red were folded into Pink, and the Red project (also known as "Raptor") was eventually canceled. This problem was also common at Apple during this period; in order to chase the "next big thing", middle managers would add new features to their projects with little oversight, leading to enormous problems with feature creep. In the case of Pink, development eventually slowed to the point that the project appeared moribund. Taligent On April 12, 1991, Apple CEO John Sculley performed a secret demonstration of Pink running on an IBM PS/2 Model 70 to a delegation from IBM. Though the system was not fully functional, it resembled System 7 running on a PC. IBM was extremely interested, and over the next few months, the two companies formed an alliance to further development of the system. These efforts became public in early 1992, under the new name "Taligent". At the time, Sculley summed up his concerns with Apple's own ability to ship Pink when he stated "We want to be a major player in the computer industry, not a niche player. The only way to do that is to work with another major player." Infighting at the new joint company was legendary, and the problems with Pink within Apple soon appeared to be minor in comparison. Apple employees made T-shirts graphically displaying their prediction that the result would be an IBM-only project. On December 19, 1995, Apple officially pulled out of the project. IBM continued working alone with Taligent, and eventually released its application development portions under the new name "CommonPoint". This saw little interest and the project disappeared from IBM's catalogs within months. Business as usual While Taligent efforts continued, very little work addressing the structure of the original OS was carried out. Several new projects started during this time, notably the Star Trek project, a port of System 7 and its basic applications to Intel-compatible x86 machines, which reached internal demo status. But as Taligent was still a concern, it was difficult for new OS projects to gain any traction. Instead, Apple's Blue team continued adding new features to the same basic OS. During the early 1990s, Apple released a series of major new packages to the system; among them were QuickDraw GX, Open Transport, OpenDoc, PowerTalk, and many others. 
Most of these were larger than the original operating system. Problems with stability, which had existed even with small patches, grew along with the size and requirements of these packages, and by the mid-1990s the Mac had a reputation for instability and constant crashing. As the stability of the operating system collapsed, the ready answer was that Taligent would fix this with its modern foundation of full reentrancy, preemptive multitasking, and protected memory. When the Taligent efforts collapsed, Apple was left with an aging OS and no designated replacement. By 1994, the press buzz surrounding the upcoming release of Windows 95 started to crescendo, often questioning Apple's ability to respond to the challenge it presented. The press turned on the company, often introducing Apple's new projects as failures in the making. Another try Given this pressure, the collapse of Taligent, the growing problems with the existing operating system, and the release of System 7.5 in late 1994, Apple management decided that the decade-old operating system had run its course. A new system that did not have these problems was needed, and soon. Since so much of the existing system would be difficult to rewrite, Apple developed a two-stage approach to the problem. In the first stage, the existing system would be moved on top of a new kernel-based OS with built-in support for multitasking and protected memory. The existing libraries, like QuickDraw, would take too long to be rewritten for the new system and would not be converted to be reentrant. Instead, a single paravirtualized machine, the Blue Box, keeps applications and legacy code such as QuickDraw in a single memory block so they continue to run as they had in the past. Blue Box runs in a distinct Copland memory space, so crashing legacy applications or extensions within Blue Box cannot crash the entire machine. In the next stage of the plan, once the new kernel was in place and this basic upgrade was released, development would move on to rewriting the older libraries into new forms that could run directly on the new kernel. At that point, applications would gain some added modern features. In the musical code-naming pattern where System 7.5 is code-named "Mozart", this intended successor is named "Copland" after composer Aaron Copland. In turn, its proposed successor system, Gershwin, would complete the process of moving the entire system to the modern platform, but work on Gershwin would never officially begin. Development The Copland project was first announced in May 1994. Parts of Copland, most notably an early version of the new file system, were demonstrated at Apple's Worldwide Developers Conference in May 1995. Apple also promised that a beta release of Copland would be ready by the end of the year, for final commercial release in early 1996. Gershwin would follow the next year. Throughout the year, Apple released several mock-ups to various magazines showing what the new system would look like, and commented continually that the company was fully committed to this project. By the end of the year, however, no Developer Release had been produced. As had happened in the past during the development of Pink, developers within Apple soon started abandoning their own projects in order to work on the new system. Middle management and project leaders fought back by claiming that their projects were vital to the success of the system and by having them moved into the Copland development stream. 
Thus, their projects could not be canceled, nor could their employees be removed to work on some other part of Copland. This process gained momentum over the next year. Soon the project looked less like a new operating system and more like a huge collection of new technologies; QuickDraw GX, System Object Model (SOM), and OpenDoc became core components of the system, while completely unrelated technologies like a new file management dialog box (the open dialog) and themes support also appeared. The feature list grew much faster than the features could be completed, a classic case of creeping featuritis. An industry executive noted that "The game is to cut it down to the three or four most compelling features as opposed to having hundreds of nice-to-haves, I'm not sure that's happening." As the "package" grew, testing it became increasingly difficult and engineers were commenting as early as 1995 that Apple's announced 1996 release date was hopelessly optimistic: "There's no way in hell Copland ships next year. I just hope it ships in 1997." In mid-1996, information was leaked that Copland would have the ability to run applications written for other operating systems, including Windows NT. Simultaneously allegedly confirmed by Copland engineers and authoritatively denied by Copland project management, this feature had supposedly been in development for more than three years. One user claimed to have been told about these plans by members of the Copland development team. Some analysts projected that this ability would increase Apple's penetration into the enterprise market, while others said it was "game over" and only a sign of the Mac platform's irrelevancy. Developer Release At WWDC 1996, Apple's new CEO, Gil Amelio, used the keynote to talk almost exclusively about Copland, now known as System 8. He repeatedly stated that it was the only focus of Apple engineering and that it would ship to developers in a few months, with a full release planned for late 1996. Very few, if any, demos of the running system were shown at the conference. Instead, various pieces of the technology and user interface that would go into the package (such as a new file management dialog) were demonstrated. Little of the core system's technology was demonstrated, and the new file system that had been shown a year earlier was absent. There was one way to actually use the new operating system – by signing up for time in the developer labs. This did not go well: several people at the show complained about the microkernel's lack of sophistication, notably the lack of symmetric multiprocessing, a feature that would be exceedingly difficult to add to a system due to ship in a few months. After that, Amelio came back on stage and announced that they would be adding that to the feature list. In August 1996, "Developer Release 0" was sent to a small number of selected partners. Far from demonstrating improved stability, it often crashed after doing nothing at all, and was completely unusable for development. In October, Apple moved the target delivery date to "sometime", hinting that it might be 1997. One of the groups most surprised by the announcement was Apple's own hardware team, who had been waiting for Copland to allow the PowerPC to be natively represented, unburdened of software legacy. Members of Apple's software QA team joked that, given current resources and the number of bugs in the system, they could clear the program for shipping sometime around 2030. 
Cancellation Later in August 1996, the situation was no better. Amelio complained that Copland was "just a collection of separate pieces, each being worked on by a different team ... that were expected to magically come together somehow." Hoping to salvage the situation, Amelio hired Ellen Hancock away from National Semiconductor to take over engineering and get Copland development back on track. After a few months on the job, Hancock came to the conclusion that the situation was hopeless; given current development and engineering, she believed Copland would never ship. Instead, she suggested that the various user-facing technologies in Copland be rolled out in a series of staged releases rather than in a single big release. To address the aging infrastructure underneath these technologies, Amelio suggested looking outside the company for an unrelated new operating system. Candidates considered were Sun's Solaris and Windows NT. Hancock reportedly was in favor of going with Solaris, while Amelio preferred Windows. Amelio even reportedly called Bill Gates to discuss the idea, and Gates promised to put Microsoft engineers to work porting QuickDraw to NT. Apple officially canceled Copland in August 1996 and reused the Mac OS 8 product name for the project codenamed Tempo, a Copland-inspired major update to Mac OS 7.6. The CD envelopes for the developer's release had been printed, but the discs had not been mastered. After lengthy discussions with Be and rumors of a merger with Sun Microsystems, many were surprised at Apple's December 1996 announcement that they were purchasing NeXT and bringing Steve Jobs on in an advisory role. Amelio quipped that they chose "Plan A instead of Plan Be." The project to port NeXTSTEP to the Macintosh platform was named Rhapsody and was to be the core of Apple's cross-platform operating system strategy. This would inherit OpenStep's existing support for PowerPC, Intel x86, and DEC Alpha CPU architectures, and an implementation of the OpenStep libraries running on Windows NT. This would in effect open the Windows application market to Macintosh developers, as they could license the library from Apple for distribution with their product, or depend on an existing installation. Legacy Following Hancock's plan, development of System 7.5 continued, with several technologies originally slated for Copland being incorporated into the base OS. Apple embarked on a buying campaign, acquiring the rights to various third-party system enhancements and integrating them into the OS. The Extensions Manager, hierarchical Apple menu, collapsing windows, the menu bar clock, and sticky notes—all were developed outside of Apple. Stability and performance were improved by Mac OS 7.6, which dropped the "System" moniker in favor of "Mac OS". Eventually, many features developed for Copland, including the new multithreaded Finder and support for themes (the default Platinum was the only theme included), were rolled into the unreleased beta of Mac OS 7.7, which was instead rebranded and launched as Mac OS 8. With the return of Jobs, this rebranding to version 8 also allowed Apple to exploit a legal loophole to terminate third-party manufacturers' licenses to System 7 and effectively shut down the Macintosh clone market. Later, Mac OS 8.1 finally added the new file system and Mac OS 8.6 updated the nanokernel to handle limited support for preemptive tasks. 
Its interface is Multiprocessing Services 2.x and later, but there is no process separation and the system still uses cooperative multitasking between processes. Even a process that is Multiprocessing Services-aware still has a part that runs in the Blue Box, the task that also runs all single-threaded programs and is the only task that can run 68k code. The Rhapsody project was canceled after several Developer Preview releases, support for running on non-Macintosh platforms was dropped, and it was eventually released as Mac OS X Server 1.0. In 2001 this foundation was coupled to the Carbon library and Aqua user interface to form the modern Mac OS X product. Versions of Mac OS X prior to the Intel release of Mac OS X 10.4 (Tiger) also use the rootless Blue Box concept in the form of Classic to run applications written for older versions of Mac OS. Several features originally seen in Copland demos, including its advanced Find command, built-in Internet browser, piles of folders, and support for video-conferencing, have reappeared in subsequent releases of Mac OS X as Spotlight, Safari, Stacks, and iChat AV, respectively, although the implementation and user interface for each feature is very different. Hardware requirements According to the documentation included in the Developer Release, Copland supports the following hardware configurations: NuBus-based Macintoshes: 6100/60, 6100/60AV (no AV functions), 6100/66, 6100/66 AV (no AV functions), 6100/66 DOS (no DOS functions), 7100/66, 7100/66 AV (no AV functions), 7100/80, 7100/80 AV (no AV functions), 8100/80, 8100/100, 8100/100 AV (no AV functions), 8100/110 NuBus-based Performas: 6110CD, 6112CD, 6115CD, 6117CD, 6118CD PCI-based Macintoshes: 7200/70, 7200/90, 7500/100, 8500/120, 9500/120, 9500/132 Drives formatted with Drive Setup For DR1 and earlier, the installer requires System 7.5 or later on a hard disk of 250 MB or greater capacity. Display set to 256 colors (8-bit) or Thousands (16-bit). See also Classic Mac OS MkLinux Workplace OS BeOS Notes References Citations Bibliography External links Copland retrospectives at MacKiDo, iGeek, and iGeek Apple's Copland project: An OS for the common man, development history "A Time Machine trip to the mid-'90s", MacWorld article with screenshots from Copland The Long View, a retrospective analysis of the development cycle and code legacy of Copland into MacOS 8 and Carbon Apple's Copland Reference Documentation Computer Chronicles: Mac Clones, with a demo of Copland Businessweek article on Copland Aaron Copland Apple Inc. operating systems Macintosh operating systems Macintosh platform Microkernel-based operating systems Microkernels PowerPC operating systems Object-oriented operating systems Vaporware