https://en.wikipedia.org/wiki/3D%20user%20interaction
3D user interaction
In computing, 3D interaction is a form of human-machine interaction where users are able to move and perform interaction in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant. The 3D space used for interaction can be the real physical space, a virtual space representation simulated in the computer, or a combination of both. When the real physical space is used for data input, the human interacts with the machine performing actions using an input device that detects the 3D position of the human interaction, among other things. When it is used for data output, the simulated 3D virtual scene is projected onto the real environment through one output device. The principles of 3D interaction are applied in a variety of domains such as tourism, art, gaming, simulation, education, information visualization, or scientific visualization. History Research in 3D interaction and 3D display began in the 1960s, pioneered by researchers like Ivan Sutherland, Fred Brooks, Bob Sproull, Andrew Ortony and Richard Feldman. But it was not until 1962 that Morton Heilig invented the Sensorama simulator, which provided 3D video feedback as well as motion, audio, and other sensory feedback to produce a virtual environment. The next stage of development was Dr. Ivan Sutherland's completion of his pioneering work in 1968, the Sword of Damocles. He created a head-mounted display that produced a 3D virtual environment by presenting a left and right still image of that environment. Limited availability of technology, as well as impractical costs, held back the development and application of virtual environments until the 1980s. Applications were limited to military ventures in the United States. Since then, further research and technological advancements have allowed new doors to be opened to application in various other areas such as education, entertainment, and manufacturing. Background In 3D interaction, users carry out their tasks and perform functions by exchanging information with computer systems in 3D space. It is an intuitive type of interaction because humans interact in three dimensions in the real world. The tasks that users perform have been classified as selection and manipulation of objects in virtual space, navigation, and system control. Tasks can be performed in virtual space through interaction techniques and by utilizing interaction devices. 3D interaction techniques are classified according to the task group they support. Techniques that support navigation tasks are classified as navigation techniques. Techniques that support object selection and manipulation are labeled selection and manipulation techniques. Lastly, system control techniques support tasks that have to do with controlling the application itself. A consistent and efficient mapping between techniques and interaction devices must be made in order for the system to be usable and effective. Interfaces associated with 3D interaction are called 3D interfaces. Like other types of user interfaces, they involve two-way communication between users and the system, but allow users to perform actions in 3D space. Input devices permit the users to give directions and commands to the system, while output devices allow the machine to present information back to them. 3D interfaces have been used in applications that feature virtual environments, and augmented and mixed realities. In virtual environments, users may interact directly with the environment or use tools with specific functionalities to do so.
3D interaction occurs when physical tools are controlled in 3D spatial context to control a corresponding virtual tool. Users experience a sense of presence when engaged in an immersive virtual world. Enabling the users to interact with this world in 3D allows them to make use of natural and intrinsic knowledge of how information exchange takes place with physical objects in the real world. Texture, sound, and speech can all be used to augment 3D interaction. Currently, users still have difficulty in interpreting 3D space visuals and understanding how interaction occurs. Although it is a natural way for humans to move around in a three-dimensional world, the difficulty exists because many of the cues present in real environments are missing from virtual environments. Perspective and occlusion are the primary depth cues used by humans. Also, even though scenes in virtual space appear three-dimensional, they are still displayed on a 2D surface, so some inconsistencies in depth perception will still exist. 3D user interfaces User interfaces are the means for communication between users and systems. 3D interfaces include media for 3D representation of system state, and media for 3D user input or manipulation. Using 3D representations is not enough to create 3D interaction. The users must have a way of performing actions in 3D as well. To that effect, special input and output devices have been developed to support this type of interaction. Some, such as the 3D mouse, were developed based on existing devices for 2D interaction. 3D user interfaces are user interfaces where 3D interaction takes place; this means that the user's tasks occur directly within a three-dimensional space. The user must communicate commands, requests, questions, intent, and goals to the system, and in turn the system has to provide feedback, requests for input, information about its status, and so on. The user and the system do not share the same language; therefore, to make the communication process possible, the interfaces must serve as intermediaries or translators between them. The way the user transforms perceptions into actions is called the human transfer function, and the way the system transforms signals into display information is called the system transfer function. In practice, 3D user interfaces rely on physical devices that mediate communication between the user and the system with minimal delay; there are two types: 3D user interface output hardware and 3D user interface input hardware. 3D user interface output hardware Output devices, also called display devices, allow the machine to provide information or feedback to one or more users through the human perceptual system. Most of them are focused on stimulating the visual, auditory, or haptic senses. However, in some unusual cases they also can stimulate the user's olfactory system. 3D visual displays These are the most popular type of device, and their goal is to present the information produced by the system to the human visual system in a three-dimensional way. The main features that distinguish these devices are: field of regard and field of view, spatial resolution, screen geometry, light transfer mechanism, refresh rate and ergonomics. Another way to characterize these devices is according to the different categories of depth perception cues used to help the user understand the three-dimensional information.
The main types of displays used in 3D UIs are: monitors, surround-screen displays, workbenches, hemispherical displays, head-mounted displays, arm-mounted displays and autostereoscopic displays. Virtual reality headsets and CAVEs (Cave Automatic Virtual Environment) are examples of fully immersive visual displays, where the user can see only the virtual world and not the real world. Semi-immersive displays allow users to see both. Monitors and workbenches are examples of semi-immersive displays. 3D audio displays 3D audio displays are devices that present information (in this case sound) through the human auditory system, which is especially useful when supplying location and spatial information to the users. Their objective is to generate and display spatialized 3D sound so the user can apply their psychoacoustic skills to determine the location and direction of the sound. There are different localization cues: binaural cues, spectral and dynamic cues, head-related transfer functions, reverberation, sound intensity, and vision and environment familiarity. Adding a background audio component to a display also adds to the sense of realism. 3D haptic displays These devices use the sense of touch to simulate the physical interaction between the user and a virtual object. There are three different types of 3D haptic displays: those that provide the user with a sense of force, those that simulate the sense of touch, and those that do both. The main features that distinguish these devices are: haptic presentation capability, resolution and ergonomics. The human haptic system has two fundamental kinds of cues: tactile and kinesthetic. Tactile cues come from a wide variety of skin receptors located below the surface of the skin that provide information about texture, temperature, pressure and damage. Kinesthetic cues come from the many receptors in the muscles, joints and tendons that provide information about joint angles and the tension and length of muscles. 3D user interface input hardware These hardware devices are called input devices and their aim is to capture and interpret the actions performed by the user. The degrees of freedom (DOF) are one of the main features of these systems. Classical interface components (such as mice, keyboards and arguably touchscreens) are often inappropriate for non-2D interaction needs. These systems are also differentiated according to how much physical interaction is needed to use the device: purely active devices need to be manipulated to produce information, while purely passive devices do not. The main categories of these devices are standard (desktop) input devices, tracking devices, control devices, navigation equipment, gesture interfaces, 3D mice, and brain-computer interfaces. Desktop input devices These devices are designed for 3D interaction on a desktop; many of them were originally designed for traditional two-dimensional interaction, but with an appropriate mapping between the system and the device they can work well in three dimensions. There are different types of them: keyboards, 2D mice and trackballs, pen-based tablets and styluses, and joysticks. Nonetheless, many studies have questioned the appropriateness of desktop interface components for 3D interaction, though this is still debated.
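As an illustration of such a 2D-to-3D mapping, the sketch below (a minimal Python example, not taken from any particular system described here) shows one common way of turning 2D mouse deltas into the orientation of an orbiting 3D camera; the sensitivity constant and the yaw/pitch convention are illustrative assumptions.

import math

def orbit_camera(yaw, pitch, dx, dy, sensitivity=0.005):
    # Map 2D mouse deltas (dx, dy, in pixels) to a new camera orientation.
    yaw += dx * sensitivity                               # horizontal drag turns the camera
    pitch += dy * sensitivity                             # vertical drag tilts it
    pitch = max(-math.pi / 2, min(math.pi / 2, pitch))    # clamp to avoid flipping over the poles
    return yaw, pitch

def camera_forward(yaw, pitch):
    # Forward (view) vector for the given yaw/pitch, in radians.
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

# Example: dragging the mouse 100 px right and 40 px up.
yaw, pitch = orbit_camera(0.0, 0.0, dx=100, dy=40)
print(camera_forward(yaw, pitch))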
Tracking devices 3D user interaction systems are based primarily on motion tracking technologies, which obtain all the necessary information from the user by analyzing their movements or gestures. Trackers detect or monitor head, hand or body movements and send that information to the computer. The computer then translates it and ensures that position and orientation are reflected accurately in the virtual world. Tracking is important in presenting the correct viewpoint and in coordinating the spatial and sound information presented to users, as well as the tasks or functions that they can perform. 3D trackers have been identified as mechanical, magnetic, ultrasonic, optical, and hybrid inertial. Examples of trackers include motion trackers, eye trackers, and data gloves. A simple 2D mouse may be considered a navigation device if it allows the user to move to a different location in a virtual 3D space. Navigation devices such as the treadmill and bicycle make use of the natural ways that humans travel in the real world. Treadmills simulate walking or running and bicycles or similar equipment simulate vehicular travel. In the case of navigation devices, the information passed on to the machine is the user's location and movements in virtual space. Wired gloves and bodysuits allow gestural interaction to occur. These send hand or body position and movement information to the computer using sensors. For the full development of a 3D user interaction system, access to a few basic parameters is required; the system should know, at least partially, the relative position of the user, the absolute position, angular velocity, rotation data, orientation and height. These data are collected through spatial tracking systems and sensors of multiple kinds, using a variety of techniques. The ideal system for this type of interaction is one based on position tracking with six degrees of freedom (6-DOF); such systems are characterized by the ability to obtain the absolute 3D position of the user, and in this way obtain information about all possible three-dimensional angles. These systems can be implemented using various technologies, such as electromagnetic, optical, or ultrasonic tracking, but all share the same main limitation: they need a fixed external reference, either a base station, an array of cameras, or a set of visible markers, so tracking can only be carried out in prepared areas. Inertial tracking systems, in contrast, do not require such an external reference; they collect data using accelerometers, gyroscopes, or video cameras, with no fixed reference mandatory in the majority of cases. The main problem with these systems is that they do not obtain an absolute position: because they do not start from any pre-set external reference point, they only ever obtain the relative position of the user, which causes cumulative errors in the data-sampling process.
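A minimal sketch (illustrative only) of why purely inertial tracking accumulates error: integrating a noisy, slightly biased accelerometer reading twice yields a position estimate that drifts steadily away from the true, stationary position. The sampling rate, bias and noise values below are made-up numbers chosen for illustration.

import random

dt, bias, noise = 0.01, 0.02, 0.05      # 100 Hz sampling; bias and noise in m/s^2 (illustrative)
velocity = position = 0.0
for step in range(10000):               # 100 seconds of samples; the device is actually stationary
    measured = 0.0 + bias + random.gauss(0.0, noise)    # true acceleration is zero
    velocity += measured * dt           # first integration: velocity estimate
    position += velocity * dt           # second integration: position estimate
    if step % 2500 == 2499:
        print(f"t = {(step + 1) * dt:5.1f} s   estimated position = {position:7.2f} m")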
The goal for a 3D tracking system would be a 6-DOF system able to obtain absolute positioning and precise movement and orientation over a very large working space. A rough analogy is a mobile phone, since it has all the motion capture sensors and also GPS position tracking, but current phone systems are not accurate enough to capture data with centimeter precision and are therefore unsuitable. However, there are several systems that come close to these objectives; the determining factor for them is that the systems are self-contained, i.e., all-in-one, and do not require a fixed prior reference. These systems are as follows: Nintendo Wii Remote ("Wiimote") The Wii Remote does not offer 6-DOF technology since, again, it cannot provide absolute position; instead, it is equipped with a multitude of sensors that turn a 2D device into a powerful tool for interaction in 3D environments. This device has gyroscopes to detect rotation of the user, ADXL3000 accelerometers for obtaining the speed and movement of the hands, optical sensors for determining orientation, and electronic compasses and infra-red devices to capture position. This type of device can be affected by external infra-red references such as light bulbs or candles, causing errors in the accuracy of the position. Google Tango Devices The Tango Platform is an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP) group, a skunkworks division of Google. It uses computer vision and internal sensors (like gyroscopes) to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. It can therefore be used to provide 6-DOF input, which can also be combined with its multi-touch screen. The Google Tango devices can be seen as more integrated solutions than the early prototypes combining spatially tracked devices with touch-enabled screens for 3D environments. Microsoft Kinect The Microsoft Kinect device offers a different motion capture technology for tracking. Instead of basing its operation on sensors carried by the user, it is based on a structured light scanner, located in a bar, which allows tracking of the entire body through the detection of about 20 spatial points, for each of which 3 different degrees of freedom are measured to obtain position, velocity and rotation. Its main advantages are its ease of use and the fact that no external device needs to be attached to the user; its main disadvantage lies in the inability to detect the orientation of the user, thus limiting certain spatial and guidance functions. Leap Motion The Leap Motion is a hand-tracking system designed for small spaces, allowing new forms of interaction with 3D environments in desktop applications, and it offers great fluidity when browsing through three-dimensional environments in a realistic way. It is a small device that connects to a computer via USB and uses two cameras with infra-red LEDs, allowing the analysis of a hemispherical area of about 1 meter above its surface and recording about 300 frames per second; the information is sent to the computer to be processed by the company's own software. 3D Interaction Techniques 3D interaction techniques are the different ways that the user can interact with the 3D virtual environment to execute different kinds of tasks.
The quality of these techniques has a profound effect on the quality of the entire 3D user interface. They can be classified into three different groups: navigation, selection and manipulation, and system control. Navigation The computer needs to provide the user with information regarding location and movement. Navigation is the task most used by the user in large 3D environments, and it presents different challenges such as supporting spatial awareness, allowing efficient movement between distant places, and making navigation bearable so the user can focus on more important tasks. Navigation tasks can be divided into two components: travel and wayfinding. Travel involves moving from the current location to the desired point. Wayfinding refers to finding and setting routes to get to a travel goal within the virtual environment. Travel Travel is a conceptual technique that consists of the movement of the viewpoint from one location to another. Viewpoint orientation is usually handled in immersive virtual environments by head tracking. There are five types of travel interaction techniques: Physical movement: uses the user's body motion to move through the virtual environment. It is an appropriate technique when an augmented perception of the feeling of being present is required, or when physical effort from the user is required. Manual viewpoint manipulation: the user's hand movements determine the displacement in the virtual environment. One example is when the user moves their hands as if grabbing a virtual rope and pulling themselves up. This technique can be easy to learn and efficient, but it can cause fatigue. Steering: the user has to constantly indicate where to move. It is a common and efficient technique. One example is gaze-directed steering, where the head orientation determines the direction of travel (a minimal sketch of this follows the list). Target-based travel: the user specifies a destination point and the system performs the displacement. The travel can be executed by teleport, where the user is instantly moved to the destination point, or the system can execute some transition movement to the destination. These techniques are very simple from the user's point of view because the user only has to indicate the destination. Route planning: the user specifies the path that should be taken through the environment and the system executes the movement. The user may draw a path on a map of the virtual environment to plan a route. This technique allows users to control travel while retaining the ability to perform other tasks during motion.
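The following is a minimal sketch (in Python, with illustrative speed, frame rate and head-orientation values) of the gaze-directed steering idea mentioned above: on each frame the viewpoint is advanced along the direction the head is currently facing.

import math

def steer(position, yaw, pitch, speed, dt):
    # Advance the viewpoint along the head's forward vector (yaw/pitch in radians).
    forward = (math.cos(pitch) * math.sin(yaw),
               math.sin(pitch),
               math.cos(pitch) * math.cos(yaw))
    return tuple(p + speed * dt * f for p, f in zip(position, forward))

pos = (0.0, 1.7, 0.0)                      # start 1.7 m above the ground (illustrative)
for _ in range(90):                        # 1.5 seconds at 60 frames per second
    pos = steer(pos, yaw=math.radians(30), pitch=0.0, speed=2.0, dt=1 / 60)
print(pos)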
Wayfinding Wayfinding is the cognitive process of defining a route through the surrounding environment, using and acquiring spatial knowledge to construct a cognitive map of the environment. In virtual space it is different and more difficult to do than in the real world because synthetic environments are often missing perceptual cues and movement constraints. It can be supported using user-centered techniques, such as a larger field of view and motion cues, or environment-centered techniques, like structural organization and wayfinding principles. For good wayfinding, users should receive wayfinding support during travel through the virtual environment to compensate for the constraints of the virtual world. This support can be user-centered, such as a large field of view or even non-visual support such as audio, or environment-centered, such as artificial cues and structural organization that clearly define different parts of the environment. Some of the most used artificial cues are maps, compasses and grids, or even architectural cues like lighting, color and texture. Selection and Manipulation Selection and manipulation techniques for 3D environments must accomplish at least one of three basic tasks: object selection, object positioning and object rotation. Users need to be able to manipulate virtual objects. Manipulation tasks involve selecting and moving an object. Sometimes, the rotation of the object is involved as well. Direct-hand manipulation is the most natural technique because manipulating physical objects with the hand is intuitive for humans. However, this is not always possible. A virtual hand that can select and re-locate virtual objects will work as well. 3D widgets can be used to put controls on objects: these are usually called 3D gizmos or manipulators (a good example being the ones in Blender). Users can employ these to re-locate, re-scale or re-orient an object (Translate, Scale, Rotate). Other techniques include the Go-Go technique and ray casting, where a virtual ray is used to point to and select an object. Selection The task of selecting objects or 3D volumes in a 3D environment requires first being able to find the desired target and then being able to select it. Most 3D datasets/environments are affected by occlusion problems, so the first step of finding the target relies on manipulation of the viewpoint or of the 3D data itself in order to properly identify the object or volume of interest. This initial step is then of course tightly coupled with manipulations in 3D. Once the target is visually identified, users have access to a variety of techniques to select it. Usually, the system provides the user with a 3D cursor represented as a human hand whose movements correspond to the motion of the hand tracker. This virtual hand technique is rather intuitive because it simulates real-world interaction with objects, but it is limited to objects within the user's reach. To avoid this limit, many techniques have been suggested, such as the Go-Go technique. This technique allows the user to extend the reachable area using a non-linear mapping of the hand: when the user extends the hand beyond a fixed threshold distance, the mapping becomes non-linear and the hand grows. Another technique to select and manipulate objects in 3D virtual spaces consists of pointing at objects using a virtual ray emanating from the virtual hand. When the ray intersects with an object, that object can be manipulated. Several variations of this technique have been developed, such as the aperture technique, which uses a conic pointer directed from the user's eyes, estimated from the head location, to select distant objects. This technique also uses a hand sensor to adjust the conic pointer size. Many other techniques, relying on different input strategies, have also been developed.
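The Go-Go mapping just described is commonly presented as a linear mapping up to a threshold distance and a quadratic extension beyond it. The sketch below uses that commonly cited form; the threshold and gain values are illustrative, not taken from any particular implementation.

def gogo_virtual_distance(real_dist, threshold=0.3, k=10.0):
    # Within the threshold the virtual hand follows the real hand one-to-one;
    # beyond it the mapping grows quadratically, letting the user reach distant objects.
    # threshold (metres) and k (gain) are illustrative values.
    if real_dist <= threshold:
        return real_dist
    return real_dist + k * (real_dist - threshold) ** 2

for d in (0.2, 0.3, 0.5, 0.7):
    print(f"real {d:.1f} m -> virtual {gogo_virtual_distance(d):.2f} m")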
Manipulation 3D manipulation occurs both before a selection task (in order to visually identify a 3D selection target) and after a selection has occurred, to manipulate the selected object. 3D manipulation requires 3 DOF for rotations (1 DOF per axis, namely x, y, z) and 3 DOF for translations (1 DOF per axis), and at least 1 additional DOF for uniform zoom (or alternatively 3 additional DOF for non-uniform zoom operations). 3D manipulation, like navigation, is one of the essential tasks with 3D data, objects or environments. It is the basis of many widely used 3D software packages (such as Blender, Autodesk products, VTK). These packages, available mostly on desktop computers, are thus almost always operated with a mouse and keyboard. To provide enough DOFs (the mouse offers only 2), they rely on mode switching with a modifier key in order to separately control all the DOFs involved in 3D manipulation. With the recent advent of multi-touch-enabled smartphones and tablets, the interaction mappings of these packages have been adapted to multi-touch (which offers more simultaneous DOF manipulation than a mouse and keyboard). A survey conducted in 2017 of 36 commercial and academic mobile applications on Android and iOS, however, suggested that most applications did not provide a way to control the minimum 6 DOFs required, but that among those which did, most made use of a 3D version of the RST (Rotation Scale Translation) mapping: one finger is used for rotation around x and y, while two-finger interaction controls rotation around z and translation along x, y, and z. System Control System control techniques allow the user to send commands to an application, activate some functionality, change the interaction (or system) mode, or modify a parameter. Sending a command always includes the selection of an element from a set. System control techniques, as techniques that support system control tasks in three dimensions, can be categorized into four groups: Graphical menus: visual representations of commands. Voice commands: menus accessed via voice. Gestural interaction: commands accessed via body gestures. Tools: virtual objects with an implicit function or mode. There are also hybrid techniques that combine some of these types. Symbolic input This task allows the user to enter and/or edit, for example, text, making it possible to annotate 3D scenes or 3D objects. See also Finger tracking Interaction technique Interaction design Human–computer interaction Cave Automatic Virtual Environment (CAVE) Virtual reality References Reading List 3D Interaction With and From Handheld Computers. Visited March 28, 2008 Bowman, D., Kruijff, E., LaViola, J., Poupyrev, I. (2001, February). An Introduction to 3-D User Interface Design. Presence, 10(1), 96–108. Bowman, D., Kruijff, E., LaViola, J., Poupyrev, I. (2005). 3D User Interfaces: Theory and Practice. Boston: Addison–Wesley. Bowman, Doug. 3D User Interfaces. Interaction Design Foundation. Retrieved October 15, 2015 Burdea, G. C., Coiffet, P. (2003). Virtual Reality Technology (2nd ed.). New Jersey: John Wiley & Sons Inc. Carroll, J. M. (2002). Human–Computer Interaction in the New Millennium. New York: ACM Press Csisinko, M., Kaufmann, H. (2007, March). Towards a Universal Implementation of 3D User Interaction Techniques [Proceedings of Specification, Authoring, Adaptation of Mixed Reality User Interfaces Workshop, IEEE VR]. Charlotte, NC, USA. Interaction Techniques. DLR - Simulations- und Softwaretechnik. Retrieved October 18, 2015 Larijani, L. C. (1993). The Virtual Reality Primer. United States of America: R. R. Donnelley and Sons Company. Rhijn, A. van (2006). Configurable Input Devices for 3D Interaction using Optical Tracking. Eindhoven: Technische Universiteit Eindhoven. Stuerzlinger, W., Dadgari, D., Oh, J-Y. (2006, April). Reality-Based Object Movement Techniques for 3D. CHI 2006 Workshop: "What is the Next Generation of Human–Computer Interaction?". Workshop presentation. The CAVE (CAVE Automatic Virtual Environment). Visited March 28, 2007 The Java 3-D Enabled CAVE at the Sun Centre of Excellence for Visual Genomics. Visited March 28, 2007 Vince, J. (1998).
Essential Virtual Reality Fast. Great Britain: Springer-Verlag London Limited Virtual Reality. Visited March 28, 2007 Yuan, C. (2005, December). Seamless 3D Interaction in AR – A Vision-Based Approach. In Proceedings of the First International Symposium, ISVC (pp. 321–328). Lake Tahoe, NV, USA: Springer Berlin/Heidelberg. External links Bibliography on 3D Interaction and Spatial Input The Inventor of the 3D Window Interface 1998 3DI Group 3D Interaction in Virtual Environments Human–computer interaction Virtual reality User interface techniques
https://en.wikipedia.org/wiki/SQL%20Server%20Reporting%20Services
SQL Server Reporting Services
SQL Server Reporting Services (SSRS) is a server-based report-generating software system from Microsoft. It is part of a suite of Microsoft SQL Server services, including SSAS (SQL Server Analysis Services) and SSIS (SQL Server Integration Services). Administered via a Web interface, it can be used to prepare and deliver a variety of interactive and printed reports. The SSRS service provides an interface into Microsoft Visual Studio so that developers as well as SQL administrators can connect to SQL databases and use SSRS tools to format SQL reports in many complex ways. It also provides a 'Report Builder' tool for less technical users to format SQL reports of lesser complexity. SSRS competes with Crystal Reports and other business intelligence tools. History Reporting Services was first released in 2004 as an add-on to SQL Server 2000. Subsequent versions have been: Second version with SQL Server 2005 in November 2005 Third as part of SQL Server 2008 R2 in April 2010 Fourth version as part of SQL Server 2012 in March 2012 Fifth version as part of SQL Server 2014 in March 2014 Sixth version as part of SQL Server 2016 in March 2016 Seventh version as part of SQL Server 2017 in October 2017 Packaging Microsoft SQL Server Developer, Standard, and Enterprise editions all include SSRS as an install option. The free SQL Server Express includes a limited version. Use SQL Server Data Tools for Business Intelligence (SSDT BI) reduces the RDL (Report Definition Language) component to graphic icons in a GUI (graphical user interface). In this way, instead of writing code, the user can drag-and-drop graphic icons into an SSRS report format for most aspects of the SSRS report. Reports defined by RDL can be downloaded in a variety of formats including Excel, PDF, CSV, XML, TIFF (and other image formats), and HTML Web Archive. SQL Server 2008 and 2012 SSRS can also prepare reports in Microsoft Word (DOC) format, while third-party report generators offer additional output formats. Users can interact with the Report Server web service directly, or instead use Report Manager, a Web-based application that interfaces with the Report Server web service. With Report Manager, users can view, subscribe to, and manage reports as well as manage and maintain data sources and security settings. Report Manager can also deliver SQL reports by e-mail, or place them on a file share. Security is role-based and can be assigned on an individual item, such as a report or data source, on a folder of items, or site-wide. Security roles and rights are inherited and can be overridden. Typically, reports are only revealed to users permitted to run them, and the SQL connections defined in the data sources allow any such user to run them with sufficient privileges. This is because configuring Windows Authentication all the way through report execution is laborious and time-consuming: a Service Principal Name (SPN) record (requiring domain administrator access) must be created in Active Directory, associating the SQL Server Reporting service with the account the service runs under on the server (a network user, to facilitate querying Active Directory); the service account must have the delegation option enabled, and the server must be trusted for delegation too; the Windows users wishing to run reports must be set to allow delegation, so that Kerberos authentication protocols will be used; and the reporting service itself has to have its configuration edited to enable Kerberos protocols. Once this is done, however, the reports are secure and only display the data that individual users are permitted to see (based on the SQL security configuration).
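As an illustration of interacting with the report server directly (here through SSRS's URL access interface rather than the SOAP web service), the following Python sketch downloads a report rendered as PDF. The server name, report path, report parameter and credentials are placeholders, and the example assumes the third-party requests and requests-ntlm packages for Windows-integrated authentication; it is a sketch, not a definitive recipe.

import requests
from requests_ntlm import HttpNtlmAuth   # third-party package for NTLM/Windows authentication

# Hypothetical server and report path -- substitute your own.
base_url = "http://reportserver.example.com/ReportServer"
report_path = "/Sales/MonthlySummary"

params = {
    "rs:Command": "Render",   # ask the server to render the report
    "rs:Format": "PDF",       # other formats include EXCELOPENXML, CSV, XML and IMAGE
    "Year": "2016",           # report parameters are passed as ordinary query arguments
}
resp = requests.get(f"{base_url}?{report_path}", params=params,
                    auth=HttpNtlmAuth("DOMAIN\\user", "password"))
resp.raise_for_status()
with open("monthly_summary.pdf", "wb") as f:
    f.write(resp.content)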
RDL reports can be viewed by using the standalone Report Server that comes with Microsoft SQL Server, or by using the ASP.NET ReportViewer web control, or by using the ReportViewer Windows Forms control. The latter methods allow reports to be embedded directly into web pages or .NET Windows applications. The ReportViewer control will process reports by: (a) server processing, where the report is rendered by the Report Server; or (b) local processing, where the control renders the RDL file itself. SQL Server Reporting Services also supports ad hoc reports: the designer develops a report schema and deploys it on the reporting server, where the user can choose relevant fields/data and generate reports. Users can then download the reports locally. Microsoft SQL Server 2012 SP1 expands Microsoft support for viewing reports to mobile platforms, including Microsoft Surface, Apple iOS 6 and Windows Phone 8. References External links Microsoft SQL Server: Reporting Services home page Microsoft SQL Server: Reporting Services resources page SSRS with Visual Basic and Visual C# SSRS in your ASP.Net application PHP library for connecting to SSRS over SOAP Custom SSRS solution white paper by MindHARBOR Microsoft SQL Azure Enterprise Application Development, Jayaram Krishnaswamy, 2010 Learn SQL Server Reporting Services 2008, Jayaram Krishnaswamy, 2008 Learning SQL Server Reporting Services 2012, Jayaram Krishnaswamy, 2013 Windows Authentication on SQL Server Reporting Services Microsoft database software Reporting software
https://en.wikipedia.org/wiki/Pasco%20County%20Library%20Cooperative
Pasco County Library Cooperative
The Pasco County Library Cooperative (PCLC) is the public library system that serves all residents of Pasco County, Florida, and is a member of the Tampa Bay Library Consortium. The Pasco County Library System, as it was originally known, was established by county ordinance in 1980. In 1999, the Pasco County Public Library Cooperative was established as a result of an Interlocal Agreement between the Pasco County Board of County Commissioners and the Zephyrhills City Council. It consists of seven branch libraries and one cooperative partner library, Zephyrhills Public Library. The Pasco County Libraries operate on a budget of $6,344,041 for fiscal year 2016. Pasco Libraries circulated 1,195,649 items for fiscal year 2016; up-to-date statistical information can be found on their website at: http://www.pascolibraries.org/stats/ The head of library services reports to the Assistant County Administrator for Public Services. History The Pasco County Library Cooperative was once known as the Pasco County Library System. It was established on July 22, 1980. There were municipally owned and operated libraries in three different cities: Dade City, New Port Richey, and Zephyrhills. There were also three volunteer-run libraries in the communities of Hudson, Holiday, and Land O’ Lakes that were incorporated into the system. In 1986, a $10 million bond referendum was passed by voters to improve the public libraries and build new parks countywide. After this referendum was passed, two facilities were replaced (Hudson and Land O’ Lakes) and two facilities were built in unserved areas (Regency Park and South Holiday). There was also a renovation and addition done to the Hugh Embry Library in Dade City. With additional funding secured through Federal programs, the Centennial Park Library and New River Library were also built. The system changed into a cooperative in 2000 when the City of Zephyrhills library joined it. Branches The Pasco County Public Library Cooperative consists of seven branches and one cooperative partner (Zephyrhills), with the administrative offices for the system located at the Hudson Regional Library. Centennial Park Branch Library The Centennial Park Branch Library is closed for remodeling until fall 2020. It offers many standard library services including books, audio books, DVDs, and a large meeting space. It provides computers, study rooms, and programs for all ages such as book clubs, reading clubs, gaming, story time, family movies. Patrons also have access to e-content such as databases, eBooks and audio books through Overdrive, Hoopla, Flipster Magazines, and New York Times. It also provides eGovernment resources as well as computers for job searching. Hudson Regional Library/Administration and Support Services The Hudson Regional Branch Library offers many standard library services including books, audio books, DVDs, and a large meeting space. It also features a music recording studio. It provides computers, study rooms, and programs for all ages such as book clubs, reading clubs, gaming, story time, family movies. Patrons also have access to e-content such as databases, eBooks and audio books through Overdrive, Hoopla, Flipster Magazines, and New York Times. It also provides eGovernment resources as well as computers for job searching. Hugh Embry Branch Library Hugh Embry Library was established in 1904 in Dade City, Florida, when its namesake Hugh Embry (1879–1907), then 25 and recovering from an illness, had exhausted all the books he could borrow from his friends.
He raised $50 to start a library and called it the Shakespeare Club. The library was run out of the Embry home until his death a few years later, when the library became the property of the Pasco Library Association. The books were moved all around Dade City, as there was no place to house them, until the Women's Club took them in. At the time, membership was free for members or relatives of the Women's Club; otherwise it was 10 cents. The Women's Club began lending the books to the grammar school and throughout the county for children to read. In 1927, the Women's Club began letting the books out to the general population for free and the library was moved to a free room in the Herbert Massey building. The Women's Club had no money to furnish the library or provide shelving. The Club began to raise money for the library by selling food to the workers of Dade City. A month after the library opened there were 440 registered borrowers, and the unit was open three afternoons a week. By 1930 the project had grown too large for the Women's Club to handle. The library was incorporated on August 24, 1930, and remained so for twenty-four years. In the 1930s the library began receiving funds from the city. The mayor, Fred Touchton, gave the library $10 a month to purchase classical books for children. However, the funds donated by the city amounted to only $37.50, enough to pay a librarian monthly. There was no money for books or supplies, so the Women's Club jumped in and saved the library again through donations. In the 1940s the Works Progress Administration completed a new City Hall and the library found a room there. The books were thrown out the window of the second-floor Massey Building into trucks and transported to the new library. The library continued to grow, and in 1952 the city gave property across the street from City Hall for the library. In 1953, the city assumed the assets of the library, and on July 12, 1953, Dade City became financially responsible for Hugh Embry Library. In 1962, the Friends of the Library group raised $12,000 to build a building for the library. The city donated $25,000 and the “Friends” raised another $12,000, and in November 1963 the library moved into its permanent home. In 1981, Pasco County chartered its own library commission and began operating and maintaining the library under a no-cost lease. In 1988, the City Commission of Dade City sold the library to the county for $150,000. In 1991, the library was expanded with funds raised through a tax approved in 1986. The Hugh Embry Branch Library offers many standard library services including books, audio books, DVDs, and a large meeting space. It provides computers, study rooms, and programs for all ages such as book clubs, reading clubs, gaming, story time, family movies. Patrons also have access to e-content such as databases, eBooks and audio books through Overdrive, Hoopla, Flipster Magazines, and New York Times. It also provides eGovernment resources as well as computers for job searching. Land o' Lakes Branch Library The Land o' Lakes Branch Library began as a small section of a county building located on U.S. Highway 41 but was later moved to a larger location as a result of its increasing growth. In 1980, it became an official part of the countywide library system, moving into a Land o' Lakes plaza storefront in 1988. A new Land o' Lakes Branch Library was built at the current location with bond money and opened to the public on December 12, 1991.
The library underwent an expansion project beginning in 2005, and it was reopened to the public on April 22, 2007. The Land o' Lakes Branch Library property covers a total area of 18,000 square feet. Following its renovation, the library housed several study rooms, a separate children's room, a teen room, and a computer lab. In 2015, the computer lab was moved next to the collections, and the space that had formerly housed the computer lab became a woodworking-based makerspace, The Foundry, which was opened to the public on December 17, 2015. The focus of The Foundry was decided by a committee that included both patrons of all ages who intended to use the space and library staff. New River Branch Library The New River Branch Library is closed for remodeling until fall 2020. It offers many standard library services including books, audio books, DVDs, and a large meeting space. It also features a community garden. It provides computers, study rooms, and programs for all ages such as book clubs, reading clubs, gaming, story time, family movies. Patrons also have access to e-content such as databases, eBooks and audio books through Overdrive, Hoopla, Flipster Magazines, and New York Times. It also provides eGovernment resources as well as computers for job searching. Regency Park Branch Library The Regency Park Branch Library opened its doors to the public on October 26, 1990 and was expanded in 2007. It offers many standard library services including books, audio books, DVDs, and a large meeting space. It also features a test kitchen makerspace, Regency Fresh, which opened to the public on October 17, 2019. The Regency Park Library provides computers, study rooms, and programs for all ages such as book clubs, reading clubs, gaming, story time, family movies. Patrons also have access to e-content such as databases, eBooks and audio books through Overdrive, Hoopla, Flipster Magazines, and New York Times. It also provides eGovernment resources as well as computers for job searching. South Holiday Branch Library The South Holiday Branch Library offers many standard library services including books, audio books, DVDs, a large meeting space. It also features a sewing makerspace. It provides computers, study rooms, and programs for all ages such as book clubs, reading clubs, gaming, story time, family movies. Patrons also have access to e-content such as databases, eBooks and audio books through Overdrive, Hoopla, Flipster Magazines, and New York Times. It also provides eGovernment resources as well as computers for job searching. Zephyrhills Public Library (Cooperative Partner) The Zephyrhills Public Library is a City of Zephyrhills funded facility, founded in 1912. The library seeks to encourage reading and the use of technology for life-long learning and the enhancement of the community’s quality of life. The library provides open and equal access to the resources and services of the library. In 1999, the library, together with the Pasco County Library System, formed the Pasco County Library Cooperative in order to offer residents of Pasco County a broader base of services. The same library card is used at the Zephyrhills Library and the county libraries. Makerspaces The Foundry The Foundry at the Land O' Lakes Branch Library is the first dedicated makerspace in Pasco Libraries. It was officially opened on December 17, 2015. The Foundry is equipped with two 3-D printers, as well as computer-aided-design (CAD) equipment, an Oculus Rift virtual reality system, and an audio recording studio. 
Other makerspace materials include various hand tools, power tools, and crafting equipment and supplies, like yarn and thread. The room serves as the primary meeting space for the Edgar Allan Ohms, the Land O' Lakes High School robotics team sponsored by the library. Library patrons must consent to the terms of the Pasco County Liability Waiver and Permission Form and the Maker Safety Playbook before they can use The Foundry, but it is otherwise open to all. Studio H Studio H is a makerspace located at the Hudson Regional Library that was opened in 2019. It is a multimedia recording studio that provides users with access to equipment, software, and musical instruments. A wide variety of musical instruments are available, including electric and acoustic guitars, basses, drum kits, a keyboard, a virtual synthesizer, a banjo, and a mandolin. This makerspace can be used to create music, videos, podcasts, and photos. Users are required to attend an orientation before reserving studio time. Regency Fresh Regency Fresh is a makerspace located at the Regency Park Library that had its grand opening on October 17, 2019. It is a fully equipped kitchen with an induction cooktop, double convection oven, an Instant Pot, a sous-vide cooker, microwave, blender, large mixer, and a demonstration cart with an overhead mirror. This kitchen allows users to experience culinary demonstrations and presentations and, in some cases, to participate in these demonstrations. Users can attend a program or submit ideas for future demonstrations and presentations. The Creation Station The Creation Station is a kids' mini makerspace located at the Hugh Embry Library. It is a space that was designed with children in grades 2 through 5 in mind. It allows them to explore, tinker, play, and create projects with many different materials. They are able to plan a project, complete it, put the supplies back, and clean up the area so that it is ready for the next child. There is an Idea Book available to help inspire them, and the goal is for them to finish their project so that they have something to take home with them. There is also a dedicated space called Creation Station, Jr. for younger children. Community Garden The Community Garden is a makerspace at the New River Library, which is currently closed for remodeling. The Community Garden works closely with the County Cooperative Extension and the Master Gardeners to be a complete gardening resource. The Seed Library Through generous donors, the Pasco County Library Cooperative has created a seed library to share with the community. They have many different types of seeds available, including vegetables, fruits, herbs, and flowers. The seeds are packaged in small bundles and there are growing instructions placed on the outside of the bundles so that patrons can have some information about the plants they have checked out. With these seeds, patrons have the opportunity to grow some of the healthiest food possible. The library asks that patrons only take what they need so that there are plenty of seeds to go around. In order to check out seeds, patrons fill out a request form on the Pasco County Library Cooperative website and then can pick up their bundles at their home branch. Awards 2014 Resolution by the Pasco County Board of Commissioners for the success of the 2014 LAMECon (Library Anime and Manga Enthusiast Convention).
Done and resolved on August 19, 2014 2013 NACo (Florida Association of Counties) Achievement Award for Rockus Maximus Battle of the Bands 2010 John Cotton Dana Award for excellence in public relations The Florida Library Association's Librarian of the Year award was given to Libraries Director Linda Allen. This honor is presented “in recognition of outstanding and ongoing contributions to Florida librarianship.” Florida Library Association's award for best public library website. This prize promotes “awareness of the importance of good design and usability in web page development and to recognize outstanding examples of effective library pages.” 2008 Florida Library of the Year from the Florida Library Association "I Love my Librarian!" award given to director Linda Allen by The New York Times Company Future of the Region Certificate of Excellence and Commemorative program for Public Education in Catastrophe Readiness and Response: Proactive Roles for Public Libraries awarded by Tampa Bay Regional Planning Council Pasco Libraries website The Pasco Libraries website is an integrated website that allows patrons to search the library catalog, check out e-books, search fee-based databases, and access language-learning resources. It also allows patrons to post book reviews, access RSS feeds, create lists, and tag materials. The website has links to the library’s social media content, library videos, and e-government sources. It was recognized by the Florida Library Association as the best library website in 2010. Visits to the library's website increased by 28 percent within its first year of being redesigned. Friends of the Pasco County Library System Friends of the Pasco County Library System, Inc., is a not-for-profit organization that works closely with the libraries within the cooperative to enrich the library experience of patrons through fundraising, volunteering, and advocacy efforts. The organization acts as a link between the libraries and the citizens of Pasco County. Each branch has its own Friends group that operates independently, however, they also come together to contribute to the success and financial stability of the Countywide Friends Organization. The individual groups work together to ensure that the libraries within this cooperative have the resources they need to better serve the people of Pasco County. The Friends organization provides funding for many of the cultural, art, music, storytime, summer reading, and technology programs that are offered at the libraries. References External links Pasco County Library System Public libraries in Florida Pasco County, Florida
https://en.wikipedia.org/wiki/Distributive%20writing
Distributive writing
Distributive writing is the collective authorship (or distributed authorship) of texts. This further requires definitions of both collective and texts, where collective means a connected group of individuals and texts are inscribed symbols chained together to achieve a larger meaning than isolated symbols. This places emphasis on texts being represented as writings. These could be written words, iconic symbology (e.g., graffiti), computer programming languages (C/C++, Java, Perl, etc.), meta-level mark-up (HTML, XML, SVG, PostScript), and their derivative works. Also not to be excluded are all the above in various languages. Further, to define texts, we must also have an interpreter for the texts. For computer programming languages we have a compiler; for writings we have written words interpreted by our mental faculties; and for meta-level mark-up there are web browsers, printers that interpret PostScript, and various software applications which turn textual representations into another format. (Patrick Deegan and Jon Phillips, 2004) In ancient and oral literatures The concept of distributed authorship has been applied to oral traditions in which one person's telling of a traditional story reflects the oral recitations of many previous tellers. It has likewise been applied to oral-derived written traditions, where a manuscript text is shaped by its transmission through multiple scribes, each of whom may alter the text. Software to support distributive writing Social Software Social software enables people to connect, communicate, and collaborate. It is explicitly the social which is of importance and is what is operated on; it is the commodity in the system. This is different from distributive writing because social software is based upon software, whereas distributive writing is not, and is not just about collaborative writing; it is also about other forms of socializing. Collaborative software, aka Groupware Groupware is about software and, more importantly, is in common use to describe the combining of many pieces of software into a group for so-called easy access by an individual. The original definition had to do with a group of people operating on something collaboratively through software, but this has changed meaning due to corporate appropriation to describe software suites like Microsoft Office and LibreOffice. Computer-supported collaboration (CSCT) Distributive writing is not just bound to computing like CSCT. Types Synchronous – a system of authorship where authors make changes in real time (at the same time). Asynchronous – a system of authorship where authors make changes in non-real time (not at the same time). References External links When No One’s Home: Being a Remote Writer in Distributed Teams Essential Skills for Every Writer - Infographic Writing
https://en.wikipedia.org/wiki/MCST
MCST
MCST (acronym for Moscow Center of SPARC Technologies) is a Russian microprocessor company that was set up in 1992. Different types of processors made by MCST were used in personal computers, servers and computing systems. MCST develops microprocessors based on two different instruction set architectures (ISAs): Elbrus and SPARC. MCST is a direct descendant of the Lebedev Institute of Precision Mechanics and Computer Engineering. MCST is the base organization of the Department of Informatics and Computer Engineering of the Moscow Institute of Physics and Technology. MCST develops the Elbrus processor architecture and the eponymous family of universal VLIW microprocessors based on it. The name "Elbrus" is an acronym for "ExpLicit Basic Resources Utilization Scheduling". Products Elbrus 1 (1973) was a fourth-generation Soviet computer, developed by Vsevolod Burtsev. It implemented a tag-based architecture and ALGOL as the system language, like the Burroughs large systems. A side development was an update of the 1965 BESM-6 as the Elbrus-1K2. Elbrus 2 (1977) was a 10-processor computer, considered the first Soviet supercomputer, with superscalar RISC processors. It was a re-implementation of the Elbrus 1 architecture with faster ECL chips. Elbrus 3 (1986) was a 16-processor computer developed by Boris Babayan. Differing completely from the architecture of both Elbrus 1 and Elbrus 2, it employed a VLIW architecture. Elbrus-90micro (1998–2010) is a computer line based on SPARC instruction set architecture (ISA) microprocessors: MCST R80, R150, R500, R500S, MCST-4R (MCST-R1000) and MCST-R2000, working at 80, 150, 500, 1000 and 2000 MHz. Elbrus-3M1 (2005) is a two-processor computer based on the Elbrus 2000 microprocessor employing a VLIW architecture working at 300 MHz. It is a further development of the Elbrus 3 (1986). Elbrus МВ3S1/C (2009) is a ccNUMA 4-processor computer based on the Elbrus-S microprocessor working at 500 MHz. Elbrus-2S+ (2011) is a dual-core Elbrus 2000-based microprocessor working at 500 MHz, with a capacity of 16 GFlops. Elbrus-2SM (2014) is a dual-core Elbrus 2000-based microprocessor working at 300 MHz, with a capacity of 9.6 GFlops. Elbrus-4S (2014) is a quad-core Elbrus 2000-based microprocessor working at 800 MHz, with a capacity of 50 GFlops. Elbrus-8S (2014–2015) is an octa-core Elbrus 2000-based microprocessor working at 1300 MHz, with a capacity of 250 GFlops. Elbrus-8SV (2018) is an octa-core Elbrus 2000-based microprocessor working at 1500 MHz, with a capacity of 576 GFlops. Elbrus-16S (2021) is a 16-core Elbrus 2000-based microprocessor working at 2000 MHz, with a capacity of 750 GFlops at double precision and 1.5 TFlops at single precision. Elbrus-2S3 (2021) is a dual-core Elbrus 2000-based microprocessor working at 2000 MHz. See also History of computing in the Soviet Union Information technology in Russia References External links Elbrus website in Russian Companies established in 1992 1992 establishments in Russia Computer companies of Russia Russian brands Companies based in Moscow
https://en.wikipedia.org/wiki/CMS%20Pipelines
CMS Pipelines
CMS Pipelines implements the pipeline concept under the VM/CMS operating system. The programs in a pipeline operate on a sequential stream of records. A program writes records that are read by the next program in the pipeline. Any program can be combined with any other because reading and writing is done through a device-independent interface. Overview CMS Pipelines provides a CMS command, PIPE. The argument string to the PIPE command is the pipeline specification. PIPE selects programs to run and chains them together in a pipeline to pump data through. Because CMS programs and utilities don't provide a device-independent stdin and stdout interface, CMS Pipelines has a built-in library of programs that can be called in a pipeline specification. These built-in programs interface to the operating system, and perform many utility functions. Data on CMS is structured in logical records rather than a stream of bytes. For textual data a line of text corresponds to a logical record. In CMS Pipelines the data is passed between the stages as logical records. CMS Pipelines users issue pipeline commands from the terminal or in EXEC procedures. Users can write programs in REXX that can be used in addition to the built-in programs. Example A simple example reads a disk file and separates records containing the string "Hello" from those that do not. The selected records are modified by appending the string "World!" to each of them; the other records are translated to upper case. The two streams are then combined and the records are written to a new output file.

PIPE (end ?)
   < input txt
 | a: locate /Hello/
 | insert / World!/ after
 | i: faninany
 | > newfile txt a
?
   a:
 | xlate upper
 | i:

In this example, the < stage reads the input disk file and passes the records to the next stage in the pipeline. The locate stage separates the input stream into two output streams. The primary output of locate (records containing Hello) passes the records to the insert stage. The insert stage modifies the input records as specified in its arguments and passes them to its output. The output is connected to faninany, which combines records from all input streams to form a single output stream. The output is written to the new disk file. The secondary output of locate (marked by the second occurrence of the a: label) contains the records that did not meet the selection criterion. These records are translated to upper case (by the xlate stage) and passed to the secondary input stream of faninany (marked by the second occurrence of the i: label). The pipeline topology in this example consists of two connected pipelines. The end character (the ? in this example) separates the individual pipelines in the pipeline set. Records read from the input file pass through either of the two routes of the pipeline topology. Because neither of the routes contains stages that need to buffer records, CMS Pipelines ensures that records arrive at faninany in the order in which they passed through locate. The example pipeline is presented here in 'portrait form' with the individual stages on separate lines. When a pipeline is typed as a CMS command, all stages are written on a single line. Features The concept of a simple pipeline is extended in these ways: A program can define a subroutine pipeline to perform a function on all or part of its input data. A network of intersecting pipelines can be defined. Programs can be in several pipelines concurrently, which gives the program access to multiple data streams.
Features

The concept of a simple pipeline is extended in these ways: A program can define a subroutine pipeline to perform a function on all or part of its input data. A network of intersecting pipelines can be defined. Programs can be in several pipelines concurrently, which gives the program access to multiple data streams. Data passed from one stage to the next is structured as records. This allows stages to operate on a single record without the need for arbitrary buffering of data to scan for special characters that separate the individual lines. Stages normally access the input record in locate mode and produce their output records before consuming the input record. This lock-step approach not only avoids copying the data from one buffer to the next; it also makes it possible to predict the flow of records in multi-stream pipelines. A program can dynamically redefine the pipeline topology: it can replace itself with another pipeline, it can insert a pipeline segment before or after itself, or both. A program can use data in the pipeline to build pipeline specifications.

CMS Pipelines offers several features to improve the robustness of programs: A syntax error in the overall pipeline structure or in any one program causes the entire pipeline to be suppressed. Startup of the programs in the pipeline and allocation of resources are coordinated by the CMS Pipelines dispatcher. Individual programs can participate in that coordination to ensure that irreversible actions are postponed until all programs in the pipeline have had a chance to verify their arguments and are ready to process data. When the pipeline is terminated, the dispatcher ensures that resources are released again. Errors that occur while data flows in the pipeline can be detected by all participating programs; for example, a disk file might not be replaced in such circumstances.

History

John Hartmann, of IBM Denmark, started development of CMS Pipelines in 1980. The product was marketed by IBM as a separate product during the 1980s and was integrated into VM/ESA in late 1991. With each release of VM, the CMS Pipelines code was upgraded as well, until it was functionally frozen at the 1.1.10 level in VM/ESA 2.3 in 1997. Since then, the latest level of CMS Pipelines has been available for download from the CMS Pipelines homepage for users who wish to explore new functions. The current level of CMS Pipelines has again been included in z/VM releases since z/VM 6.4, available since November 11, 2016.

An implementation of CMS Pipelines for TSO was released in 1995 as BatchPipeWorks in the BatchPipes/MVS product. The up-to-date TSO implementation was available as a Service Offering from IBM Denmark until 2010. Both versions are maintained from a single source code base and are commonly referred to as CMS/TSO Pipelines. The specification is available in the Author's Edition. See also BatchPipes Shell (computing) Flow-Based Programming References External links CMS/TSO Pipelines Runtime Library Distribution IBM software Concurrent programming languages Concatenative programming languages Dynamic programming languages Function-level languages Functional languages Inter-process communication Multi-paradigm programming languages Rexx Specification languages VM (operating system)
428953
https://en.wikipedia.org/wiki/Les%20Troyens
Les Troyens
Les Troyens (in English: The Trojans) is a French grand opera in five acts by Hector Berlioz. The libretto was written by Berlioz himself from Virgil's epic poem the Aeneid; the score was composed between 1856 and 1858. Les Troyens is Berlioz's most ambitious work, the summation of his entire artistic career, but he did not live to see it performed in its entirety. Under the title Les Troyens à Carthage, the last three acts were premièred with many cuts by Léon Carvalho's company, the Théâtre Lyrique, at their theatre (now the Théâtre de la Ville) on the Place du Châtelet in Paris on 4 November 1863, with 21 repeat performances. After decades of neglect, today the opera is considered by some music critics as one of the finest ever written. Composition history Berlioz began the libretto on 5 May 1856 and completed it toward the end of June 1856. He finished the full score on 12 April 1858. Berlioz had a keen affection for literature, and he had admired Virgil since his childhood. The Princess Carolyne zu Sayn-Wittgenstein was a prime motivator to Berlioz to compose this opera. On 3 May 1861, Berlioz wrote in a letter: "I am sure that I have written a great work, greater and nobler than anything done hitherto." Elsewhere he wrote: "The principal merit of the work is, in my view, the truthfulness of the expression." For Berlioz, truthful representation of passion was the highest goal of a dramatic composer, and in this respect he felt he had equalled the achievements of Gluck and Mozart. Early performance history Premiere of the second part In his memoirs, Berlioz described in excruciating detail the intense frustrations he experienced in seeing the work performed. For five years (from 1858 to 1863), the Paris Opéra – the only suitable stage in Paris – vacillated. Finally, tired of waiting, he agreed to let Léon Carvalho, director of the smaller Théâtre Lyrique, mount a production of the second half of the opera with the title Les Troyens à Carthage. It consisted of Acts 3 to 5, redivided by Berlioz into five acts, to which he added an orchestral introduction (Lamento) and a prologue. As Berlioz noted bitterly, he agreed to let Carvalho do it "despite the manifest impossibility of his doing it properly. He had just obtained an annual subsidy of a hundred thousand francs from the government. Nonetheless the enterprise was beyond him. His theater was not large enough, his singers were not good enough, his chorus and orchestra were small and weak." Even with this truncated version of the opera, many compromises and cuts were made, some during rehearsals, and some during the run. The new second act was the Chasse Royale et Orage ("Royal Hunt and Storm") [no. 29], an elaborate pantomime ballet with nymphs, sylvans and fauns, along with a chorus. Since the set change for this scene took nearly an hour, it was cut, despite the fact its staging had been greatly simplified with a painted waterfall backdrop rather than one with real water. Carvalho had originally planned to divert water from the nearby Seine, but during the rehearsals, a faulty switch nearly caused a disaster. The entries of the builders, sailors, and farm-workers , were omitted because Carvalho found them dull; likewise, the scene for Anna and Narbal and the second ballet [no. 33b]. The sentries' duet [no. 40] was omitted, because Carvalho had found its "homely style... out of place in an epic work". Iopas's stanzas [no. 
25] disappeared with Berlioz's approval, the singer De Quercy "charged with the part being incapable of singing them well." The duet between Didon and Énée [no. 44] was cut because, as Berlioz himself realized, "Madame Charton's voice was unequal to the vehemence of this scene, which took so much out of her that she would not have had the strength left to deliver the tremendous recitative 'Dieux immortels! il part!' [no. 46], the final aria ['Adieu, fière cité', no. 48], and the scene on the pyre ." The "Song of Hylas" [no. 38], which was "greatly liked at the early performances and was well sung", was cut while Berlioz was at home sick with bronchitis. The singer of the part, Edmond Cabel, was also performing in a revival of Félicien David's La perle du Brésil, and since his contract only required him to sing fifteen times per month, he would have to be paid an extra two hundred francs for each additional performance. Berlioz lamented: "If I am able to put on an adequate performance of a work of this scale and character I must be in absolute control of the theatre, as I am of the orchestra when I rehearse a symphony." Even in its less than ideal form, the work made a profound impression. For example, Giacomo Meyerbeer attended 12 performances. Berlioz's son Louis attended every performance. A friend tried to console Berlioz for having endured so much in the mutilation of his magnum opus and pointed out that after the first night audiences were increasing. "See," he said encouragingly to Berlioz, "they are coming." "Yes," replied Berlioz, feeling old and worn out, "they are coming, but I am going." Berlioz never saw the first two acts, later given the name La prise de Troie ("The Capture of Troy"). Early concert performances of portions of the opera After the premiere of the second part at the Théâtre Lyrique, portions of the opera were next presented in concert form. Two performances of La prise de Troie were given in Paris on the same day, 7 December 1879: one by the Concerts Pasdeloup at the Cirque d'Hiver with Anne Charton-Demeur as Cassandre, Stéphani as Énée, conducted by Ernest Reyer; and another by the Concerts Colonne at the Théâtre du Châtelet with Leslino as Cassandre, Piroia as Énée, conducted by Edouard Colonne. These were followed by two concerts in New York: the first, Act 2 of La prise de Troie, was performed in English on 6 May 1882 by Thomas's May Festival at the 7th Regiment Armory with Amalie Materna as Cassandre, Italo Campanini as Énée, conducted by Theodore Thomas; the second, Les Troyens à Carthage (with cuts), was given in English on 26 February 1887 at Chickering Hall with Marie Gramm as Didon, Max Alvary as Énée, and possibly conducted by Frank Van der Stucken. First performance of both parts The first staged performance of the whole opera only took place in 1890, 21 years after Berlioz's death. The first and second parts, in Berlioz's revised versions of three and five acts, were sung on two successive evenings, 6 and 7 December, in German at Großherzoglichen Hoftheater in Karlsruhe (see Roles). This production was frequently revived over the succeeding eleven years and was sometimes given on a single day. The conductor, Felix Mottl, took his production to Mannheim in 1899 and conducted another production in Munich in 1908, which was revived in 1909. He rearranged some of the music for the Munich production, placing the "Royal Hunt and Storm" after the love duet, a change that "was to prove sadly influential." 
A production of both parts, with cuts, was mounted in Nice in 1891. In subsequent years, according to Berlioz biographer David Cairns, the work was thought of as "a noble white elephant – something with beautiful things in it, but too long and supposedly full of dead wood. The kind of maltreatment it received in Paris as recently as last winter in a new production will, I'm sure, be a thing of the past." Publication of the score Berlioz arranged for the entire score to be published by the Parisian music editors Choudens et Cie. In this published score, he introduced a number of optional cuts which have often been adopted in subsequent productions. Berlioz complained bitterly of the cuts that he was more or less forced to allow at the 1863 Théâtre Lyrique premiere production, and his letters and memoirs are filled with the indignation that it caused him to "mutilate" his score. In the early 20th century, the lack of accurate parts led musicologists W. J. Turner and Cecil Gray to plan a raid on the publisher's Paris office, even approaching the Parisian underworld for help. In 1969, Bärenreiter Verlag of Kassel, Germany, published a critical edition of Les Troyens, containing all the compositional material left by Berlioz. The preparation of this critical edition was the work of Hugh Macdonald, whose Cambridge University doctoral dissertation this was. The tendency since then has been to perform the opera in its complete form. In early 2016 the Bibliothèque nationale de France bought the 1859 autograph vocal score, which included scenes cut for the orchestral autograph score; the manuscript also includes annotations by Pauline Viardot. Later performance history On 9 June 1892 the Paris Opéra-Comique staged Les Troyens à Carthage (in the same theatre as its premiere) and witnessed a triumphant debut for the 17-year-old Marie Delna as Didon, with Stéphane Lafarge as Énée, conducted by Jules Danbé; these staged performances of Part 2 continued into the next year. In December 1906 the Théâtre de la Monnaie in Brussels commenced a run of performances with the two halves on successive nights. The Opéra in Paris presented a production of La prise de Troie in 1899, and in 1919 mounted a production of Les Troyens à Carthage in Nîmes. Both parts were staged at the Opéra in one evening on 10 June 1921, with mise-en-scène by Merle-Forest, sets by René Piot and costumes by Dethomas. The cast included Marguerite Gonzategui (Didon), Lucy Isnardon (Cassandre), Jeanne Laval (Anna), Paul Franz (Énée), Édouard Rouard (Chorèbe), and Armand Narçon (Narbal), with Philippe Gaubert conducting. Marisa Ferrer, who later sang the part under Sir Thomas Beecham in London, sang Didon in the 1929 revival, with Germaine Lubin as Cassandre and Franz again as Énée. Georges Thill sang Énée in 1930. Lucienne Anduran was Didon in 1939, with Ferrer as Cassandre this time, José de Trévi as Énée, and Martial Singher as Chorèbe. Gaubert conducted all performances in Paris before the Second World War. In the UK, concert performances of Les Troyens à Carthage took place in 1897 and 1928, then in 1935 a complete Les Troyens was performed by Glasgow Grand Opera Society, directed by Scottish composer Erik Chisholm. Les Troyens was performed for the first time in London in a concert performance conducted by Sir Thomas Beecham and broadcast at the BBC in 1947. His cast included Ferrer as both Didon and Cassandre, Jean Giraudeau as Énée and Charles Cambon as both Chorèbe (a role he had sung in Paris in 1929) and Narbal. 
An aircheck of this performance has been issued on CD. However, the 1957 production at the Royal Opera House, Covent Garden, conducted by Rafael Kubelík and directed by John Gielgud, has been described as "the first full staging in a single evening that even approximated the composer's original intentions". It was sung in English. 1960s The Paris Opéra gave a new production of a condensed version of Les Troyens on March 17, 1961, directed by Margherita Wallmann, with sets and costumes by Piero Zuffi. Pierre Dervaux was the conductor, with Régine Crespin as Didon, Geneviève Serrès as Cassandre, Jacqueline Broudeur as Anna, Guy Chauvet as Énée, Robert Massard as Chorèbe and Georges Vaillant as Narbal; performances by this cast were broadcast on French radio. Several of these artists, in particular Crespin and Chauvet, participated in a set of extended highlights commercially recorded by EMI in 1965, Georges Prêtre conducting. The performing material of Les Troyens used for various productions at the Paris Opéra, and by Beecham and by Kubelík in London, consisted of the orchestral and choral parts from Choudens et Cie of Paris, the only edition then available. The Critical Edition score from Bärenreiter, first published in 1969, was used by Colin Davis in the Covent Garden production that year and in the parallel Philips recording. The first American stage performance of Les Troyens (an abbreviated version, sung in English) was given by Boris Goldovsky with the New England Opera Theater on 27 March 1955, in Boston. The San Francisco Opera staged a heavily cut version of the opera (reducing it to about three hours), billed as the "American professional stage premiere", in 1966, with Crespin as both Cassandre and Didon and the Canadian tenor Jon Vickers as Énée, and again in 1968 with Crespin and Chauvet; Jean Périsson conducted all performances. During the 1964 season at the Teatro Colón in Buenos Aires, Crespin (as Cassandre and Didon) and Chauvet were the leads for the South American premiere, conducted by Georges Sébastian.
In 1993, Charles Dutoit conducted the Canadian premiere of "Les Troyens" in a full concert version with the Montreal Symphony and Deborah Voigt, Françoise Pollet and Gary Lakes which was subsequently recorded by Decca. To mark the 200th anniversary of Berlioz's birth in 2003, Les Troyens was revived in productions at the Théâtre du Châtelet in Paris (conducted by John Eliot Gardiner), De Nederlandse Opera in Amsterdam (conducted by Edo de Waart), and at the Metropolitan in New York (with Lorraine Hunt Lieberson as Didon, conducted by Levine). The Met's production, by Francesca Zambello, was revived in the 2012–13 season with Susan Graham as Didon, Deborah Voigt as Cassandre, and Marcello Giordani and Bryan Hymel as Énée, conducted by Fabio Luisi. During June and July 2015 the San Francisco Opera presented the opera in a new production directed by Sir David McVicar that originated at the Royal Opera House in London. It featured Susan Graham as Didon, Anna Caterina Antonacci as Cassandre, and Bryan Hymel as Énée, conducted by Donald Runnicles. Critical evaluation Only knowing the work from a piano reduction, the British critic W. J. Turner declared that Les Troyens was "the greatest opera ever written" in his 1934 book on Berlioz, much preferring it to the vastly more popular works of Richard Wagner. American critic B. H. Haggin heard in the work Berlioz's "arrestingly individual musical mind operating in, and commanding attention with, the use of the [Berlioz] idiom with assured mastery and complete adequacy to the text's every demand". David Cairns described the work as "an opera of visionary beauty and splendor, compelling in its epic sweep, fascinating in the variety of its musical invention... it recaptures the tragic spirit and climate of the ancient world." 
Hugh Macdonald said of it: Roles {| class="wikitable" !Role !Voice type !Premiere cast,(Acts 3–5 only)4 November 1863(Conductor: Adolphe Deloffre) !Premiere cast,(complete opera)6–7 December 1890(Conductor:Felix Mottl) |- |Énée (Aeneas), Trojan hero, son of Venus and Anchises |tenor |Jules-Sébastien Monjauze |Alfred Oberländer |- |Chorèbe (Coroebus), a young prince from Asia, betrothed to Cassandre |baritone | – |Marcel Cordes |- |Panthée (Pantheus), Trojan priest, friend of Énée ||bass |Péront |Carl Nebe |- |Narbal, minister to Dido |bass |Jules-Émile Petit |Fritz Plank |- |Iopas, Tyrian poet to Didon's court |tenor |De Quercy |Hermann Rosenberg |- |Ascagne (Ascanius), Énée's young son (15 years) |soprano |Mme Estagel |Auguste Elise Harlacher-Rupp |- |Cassandre (Cassandra), Trojan prophetess, daughter of Priam |mezzo-soprano | – |Luise Reuss-Belce |- |Didon (Dido), Queen of Carthage, widow of Sychaeus, prince of Tyre |mezzo-soprano |Anne-Arsène Charton-Demeur |Pauline Mailhac |- |Anna, Didon's sister |contralto |Marie Dubois |Christine Friedlein |- |colspan="4"|Supporting roles: |- |Hylas, a young Phrygian sailor |tenor or contralto |Edmond Cabel |Wilhelm Guggenbühler |- |Priam, King of Troy |bass | – | |- |A Greek chieftain |bass | – |Fritz Plank |- |Ghost of Hector, Trojan hero, son of Priam |bass | | |- |Helenus, Trojan priest, son of Priam |tenor | – |Hermann Rosenberg |- |Two Trojan soldiers |basses |Guyot, Teste | |- |Mercure (Mercury), a God |baritone or bass | | |- |A Priest of Pluto |bass | | |- |Polyxène (Polyxena), sister of Cassandre |soprano | – |Annetta Heller |- |Hécube (Hecuba), Queen of Troy |soprano | – |Pauline Mailhac |- |Andromaque (Andromache), Hector's widow |silent | – | |- |Astyanax, her son (8 years) |silent | – | |- |Le Rapsode, narrator of the Prologue |spoken |Jouanny | – |- |colspan="4"|Chorus: Trojans, Greeks, Tyrians and Carthaginians; Nymphs, Satyrs, Fauns, and Sylvans; Invisible spirits |} Instrumentation Berlioz specified the following instruments: In the orchestra: Woodwinds: piccolo, 2 flutes (2nd doubling piccolo), 2 oboes (2nd doubling English horn), 2 clarinets (2nd doubling bass clarinet), 4 bassoons Brass: 4 horns, 2 trumpets, 2 valve cornets, 3 trombones, ophicleide or tuba Percussion: timpani, triangles, bass drum, cymbals, tenor drum (caisse roulante), drum without snares (tambour sans timbre), tenor drum (tambourin), tam-tam, 2 pairs of small antique cymbals in E and F, 6 or 8 harps Strings Offstage: 3 oboes 3 trombones Saxhorns: sopranino in B (petit saxhorn suraigu en si), sopranos in E (or valve trumpets in E), altos in B (or valve trumpets in B), tenors in E (or horns in E), contrabasses in E (or tubas in E) Percussion: pairs of timpani, several pairs of cymbals, thunder machine (roulement de tonnerre), antique sistrums, tarbuka, tam-tam Synopsis Act 1At the abandoned Greek camp outside the walls of TroyThe Trojans are celebrating apparent deliverance from ten years of siege by the Greeks (also named the Achaeans in the opera). They see the large wooden horse left by the Greeks, which they presume to be an offering to Pallas Athene. Unlike all the other Trojans, however, Cassandre is mistrustful of the situation. She foresees that she will not live to marry her fiancé, Chorèbe. Chorèbe appears and urges Cassandre to forget her misgivings. But her prophetic vision clarifies, and she foresees the utter destruction of Troy. When Andromaque silently walks in holding her son Astyanax by the hand, the celebration halts. 
A captive, Sinon, is brought in. He lies to King Priam and the crowd that he has deserted the Greeks, and that the giant wooden horse they have left behind was intended as a gift to the gods to ensure their safe voyage home. He says the horse was made so big that the Trojans would not be able to move it into their city, because if they did they would be invincible. This only makes the Trojans want the horse inside their city all the more. Énée then rushes on to tell of the devouring of the priest Laocoön by a sea serpent, after Laocoön had warned the Trojans to burn the horse. Énée interprets this as a sign of the goddess Athene's anger at the sacrilege. Against Cassandre's futile protests, Priam orders the horse to be brought within the city of Troy and placed next to the temple of Pallas Athene. There is suddenly a sound of what seems to be the clashing of arms from within the horse, and for a brief moment the procession and celebrations stop, but then the Trojans, in their delusion, interpret it as a happy omen and continue pulling the horse into the city. Cassandre has watched the procession in despair, and as the act ends, resigns herself to death beneath the walls of Troy. Act 2 Before the act proper has started, the Greek soldiers hidden in the wooden horse have come out and begun to destroy Troy and its citizens.Scene 1: Palace of ÉnéeWith fighting going on in the background, the ghost of Hector visits Énée and warns him to flee Troy for Italy, where he will build a new Troy. After Hector fades, the priest Panthée conveys the news about the Greeks hidden in the horse. Ascagne appears with news of further destruction. At the head of a band of soldiers, Chorèbe urges Énée to take up arms for battle. All resolve to defend Troy to the death.Scene 2: Palace of PriamSeveral of the Trojan women are praying at the altar of Vesta/Cybele for their soldiers to receive divine aid. Cassandre reports that Énée and other Trojan warriors have rescued Priam's palace treasure and relieved people at the citadel. She prophesies that Énée and the survivors will found a new Troy in Italy. But she also says that Chorèbe is dead, and resolves to die herself. The other women acknowledge the accuracy of Cassandre's prophecies and their own error in dismissing her. Cassandre then calls upon the Trojan women to join her in death, to prevent being defiled by the invading Greeks. One group of women admits to fear of death, and Cassandre dismisses them from her sight. The remaining women unite with Cassandre in their determination to die. A Greek captain observes the women during this scene with admiration for their courage. Greek soldiers then come on the scene, demanding the Trojan treasure from the women. Cassandre defiantly mocks the soldiers, then suddenly stabs herself. Polyxène takes the same dagger and does likewise. The remaining women scorn the Greeks as being too late to find the treasure, and commit mass suicide, to the soldiers' horror. Cassandre summons one last cry of "Italie!" before collapsing, dead. Act 3Didon's throne-room at CarthageThe Carthaginians and their queen, Didon, are celebrating the prosperity that they have achieved in the past seven years since fleeing from Tyre to found a new city. Didon, however, is concerned about Iarbas, the Numidian king, not least because he has proposed a political marriage with her. The Carthaginians swear their defence of Didon, and the builders, sailors and farmers offer tribute to Didon. 
In private after these ceremonies, Didon and her sister Anna then discuss love. Anna urges Didon to remarry, but Didon insists on honoring the memory of her late husband Sichée. The bard Iopas then enters to tell of an unknown fleet that has arrived in port. Recalling her own wanderings on the seas, Didon bids that these strangers be made welcome. Ascagne enters, presents the saved treasure of Troy, and relates the Trojans' story. Didon acknowledges that she knows of this situation. Panthée then tells of the ultimate destiny of the Trojans to found a new city in Italy. During this scene, Énée is disguised as an ordinary sailor. Didon's minister Narbal then comes to tell her that Iarbas and his Numidian army are attacking the fields surrounding Carthage and are marching on the city. But Carthage does not have enough weapons to defend itself. Énée then reveals his true identity and offers the services of his people to help Carthage. Didon accepts the offer, and Énée entrusts his son Ascagne to Didon's care, but he suddenly dries his tears and joins the Carthaginians and Trojans in preparing for battle against the Numidians. Act 4Scene 1: Royal Hunt and Storm (mainly instrumental) This scene is a pantomime with primarily instrumental accompaniment, set in a forest with a cave in the background. A small stream flows from a crag and merges with a natural basin bordered with rushes and reeds. Two naiads appear and disappear, but return to bathe in the basin. Hunting horns are heard in the distance, and huntsmen with dogs pass by as the naiads hide in the reeds. Ascagne gallops across the stage on horseback. Didon and Énée have been separated from the rest of the hunting party. As a storm breaks, the two take shelter in the cave. At the climax of the storm, nymphs with dishevelled hair run to-and-fro over the rocks, gesticulating wildly. They break out in wild cries of "a-o" (sopranos and contraltos) and are joined by fauns, sylvans, and satyrs. The stream becomes a torrent, and waterfalls pour forth from the boulders, as the chorus intones "Italie! Italie! Italie!". A tree is hit by lightning, explodes and catches fire, as it falls to the ground. The satyrs, fauns, and sylvans pick up the flaming branches and dance with them in their hands, then disappear with the nymphs into the depths of the forest. The scene is slowly obscured by thick clouds, but as the storm subsides, the clouds lift and dissipate.Scene 2: The gardens of Didon by the shoreThe Numidians have been beaten back, and both Narbal and Anna are relieved at this. However, Narbal worries that Didon has been neglecting the management of the state, distracted by her love for Énée. Anna dismisses such concerns and says that this indicates that Énée would be an excellent king for Carthage. Narbal reminds Anna, however, that the gods have called Énée's final destiny to be in Italy. Anna replies that there is no stronger god than love. After Didon's entry, and dances from the Egyptian dancing girls, the slaves, and the Nubian slave girls, Iopas sings his song of the fields, at the queen's request. She then asks Énée for more tales of Troy. Énée reveals that after some persuading, Andromaque eventually married Pyrrhus, the son of Achille, who killed Hector, Andromaque's earlier husband. Hearing about Andromaque remarrying, Didon then feels resolved regarding her lingering feelings of faithfulness to her late husband. Alone, Didon and Énée then sing a love duet. 
At the end of the act, as Didon and Énée slowly walk together towards the back of the stage in an embrace, the god Mercury appears and strikes Énée's shield, which the hero has cast away, calling out three times, "Italie!" Act 5Scene 1: The harbor of CarthageA young Phrygian sailor, Hylas, sings his song of longing for home, alone. Two sentries mockingly comment that he will never see his homeland again. Panthée and the Trojan chieftains discuss the gods' angry signs at their delay in sailing for Italy. Ghostly voices are heard calling "Italie! Italie! Italie!". The sentries, however, remark that they have good lives in Carthage and do not want to leave. Énée then comes on stage, singing of his despair at the gods' portents and warnings to set sail for Italy, and also of unhappiness at his betrayal of Didon with this news. The ghosts of Priam, Chorèbe, Hector and Cassandre appear and relentlessly urge Énée to proceed on to Italy. Énée gives in and realizes that he must obey the gods' commands, but also realizes his cruelty and ingratitude to Didon as a result. He then orders his comrades to prepare to sail that very morning, before sunrise. Didon then appears, appalled at Énée's attempt to leave in secret, but still in love with him. Énée pleads the messages from the gods to move on, but Didon will have none of this. She pronounces a curse on him as she leaves. The Trojans shout "Italie!".Scene 2: Didon's apartment at dawnDidon asks Anna to plead with Énée one last time to stay. Anna acknowledges blame for encouraging the love between her sister and Énée. Didon angrily counters that if Énée truly loved her, he would defy the gods, but then asks her to plead with him for a few days' additional stay. The crowd has seen the Trojans set sail. Iopas conveys the news to Didon. In a rage, she demands that the Carthaginians give chase and destroy the Trojans' fleet, and wishes that she had destroyed the Trojans upon their arrival. She then decides to offer sacrifice, including destroying the Trojans' gifts to her and hers to them. Narbal is worried about Didon and tells Anna to stay with her sister, but the queen orders Anna to leave. Alone, she resolves to die, and after expressing her love for Énée one final time, prepares to bid her city and her people farewell.Scene 3: The palace gardensA sacrificial pyre with Énée's relics has been built. Priests enter in a procession. Narbal and Anna expound curses on Énée to suffer a humiliating death in battle. Didon says it is time to finish the sacrifice and that she feels peace enter her heart (this happens in a ghostly descending chromatic line recalling the appearance of Hector's ghost in Act II). She then ascends the pyre. She removes her veil and throws it on Énée's toga. She has a vision of a future African warrior, Hannibal, who will rise and attack Rome to avenge her. Didon then stabs herself with Énée's sword, to the horror of her people. But at the moment of her death, she has one last vision: Carthage will be destroyed, and Rome will be "immortal". The Carthaginians then utter one final curse on Énée and his people to the music of the Trojan march, vowing vengeance for his abandonment of Didon, as the opera ends. Musical numbers The list of musical numbers is from the Urtext vocal score. Act 1 Act 2 First Tableau: Second Tableau: Act 3 Act 4 First Tableau: Second Tableau: Act 5 First Tableau: Second Tableau: Third Tableau: Supplement La scène de Sinon The original finale of Act 5 Recordings References Notes Sources Berlioz, Hector (1864). 
Les Troyens à Carthage, libretto in French. Paris: Michel Lévy Frères. Copy at Gallica. Berlioz, Hector; Cairns, David, translator and editor (2002). The Memoirs of Hector Berlioz. New York: Alfred A. Knopf. . Berlioz, Hector (2003). Les Troyens. Grand Opéra en cinq actes, vocal score based on the Urtext of the New Berlioz Edition by Eike Wernhard. Kassel: Bärenreiter. Listings at WorldCat. Cairns, David (1999). Berlioz. Volume Two. Servitude and Greatness 1832–1869. London: Allen Lane. The Penguin Press. . Goldberg, Louise (1988a). "Performance history and critical opinion" in Kemp 1988, pp. 181–195. Goldberg, Louise (1988b). "Select list of performances (Staged and concert)" in Kemp 1988, pp. 216–227. Holoman, D. Kern (1989). Berlioz. Cambridge, Massachusetts: Harvard University Press. . Kemp, Ian, editor (1988). Hector Berlioz: Les Troyens. Cambridge: Cambridge University Press. . Kutsch, K. J. and Riemens, Leo (2003). Großes Sängerlexikon (fourth edition, in German). Munich: K. G. Saur. . Macdonald, Hugh (1982). Berlioz, The Master Musicians Series. London: J. M. Dent. . Walsh, T. J. (1981). Second Empire Opera: The Théâtre Lyrique Paris 1851–1870. New York: Riverrun Press. . Wolff, Stéphane (1962). L'Opéra au Palais Garnier, 1875-1962. Les oeuvres. Les Interprètes.'' Paris: L'Entracte. (1983 reprint: Geneva: Slatkine. .) External links Les Troyens in Extracts from the Memoirs of Hector Berlioz For the New Berlioz Complete Edition of Bärenreiter, which has been the musical basis for subsequent productions Description of Les Troyens at Naxos.com Guy Dumazert, French-language commentary on Les Troyens, 12 August 2001 Operas by Hector Berlioz French-language operas Grand operas Operas based on classical mythology Operas based on the Aeneid Music based on poems Operas Operas set in Africa Opera world premieres at the Théâtre Lyrique Cultural depictions of the Trojan War Cultural depictions of Dido Cultural depictions of Hannibal 1863 operas
18548764
https://en.wikipedia.org/wiki/Desert%20Pines%20High%20School
Desert Pines High School
Desert Pines High School is a public high school in Las Vegas, Nevada, United States, and is a part of the Clark County School District. The school, which opened in 1999, also houses the Academy of Information Technology and the Academy of Communications. Academy of Information Technology The Academy of Information Technology at Desert Pines is a magnet program that educates students on the products, services, and implementation of information technology. Students enrolled in the academy learn about hardware (CyberCore, tech support, and computer forensics), software (programming, web development, and ORACLE), as well as networks (Cisco, Novell, etc.). After graduating from the program, students receive industry-recognized certifications in A+, Cisco, Java, and ORACLE. Academy of Communications The Academy of Communications at Desert Pines is one of the top media education schools in the Clark County School District. The academy houses state-of-the-art television and radio stations, as well as offering student internships, job shadowing, and summer enrichment programs. Desert Pines has one of the few school-sponsored radio stations in Clark County. DP-TV Extracurricular activities JROTC Desert Pines is home to one of the two Marine Junior Reserve Officers' Training Corps units in the Clark County School District (the other being at Basic High School). Athletics The athletic programs at Desert Pines are known as the Jaguars and participate in the Northeast Division of the Sunrise 4A Region. The Jaguars football team has made the playoffs and regularly fielded strong players, but did not have much success until the 2016 season. On Saturday, November 19, 2016, the Desert Pines varsity football team beat Spring Creek High School in the 3A high school football state championship game by a score of 39-6. Notable alumni Pierre Jackson, basketball player Julian Jacobs, basketball player Tony Fields II, football player References External links Desert Pines High School homepage Clark County School District homepage Clark County School District Educational institutions established in 1999 High schools in Las Vegas Public high schools in Nevada Magnet schools in Nevada 1999 establishments in Nevada
3084102
https://en.wikipedia.org/wiki/College%20of%20Engineering%20Chengannur
College of Engineering Chengannur
The College of Engineering Chengannur, commonly known as CEC, is an engineering institute in the state of Kerala, India, that was established by the Government of Kerala under the aegis of the Institute of Human Resources Development (IHRD) in 1993. The college is located in Chengannur, Alappuzha. The college is affiliated to the APJ Abdul Kalam Technological University and the courses are recognised by AICTE and accredited by NBA, the National Board of Accreditation, India. Location CEC is situated in the heart of Chengannur, with the M.C. Road linking it to Thiruvananthapuram International Airport (100 km south) and Kochi Airport (100 km north). The Chengannur Railway Station and Bus Station are located within walking distance of the college. Overview The college has been approved by the All India Council for Technical Education, New Delhi. Being located in Chengannur town, Alleppey district, the college has access to transport, communication, and lodging facilities. It was one of the five engineering colleges in Kerala selected by the Government of India for the Technical Education Quality Improvement Programme (TEQIP). The college is listed as 'CHN' in the allocation list of engineering seats maintained by the Commissioner for Entrance Examinations, Government of Kerala. Courses Two-year postgraduate courses in: M.Tech Electronics Engineering (VLSI and Embedded Systems) - 24 seats (from academic year 2010-11) M.Tech Computer Science (Digital Image Processing) - 24 seats Four-year B.Tech. degrees in: Computer Science and Engineering - 120 seats (formerly, Computer Engineering) Electronics and Communication Engineering - 120 seats (formerly, Electronics Engineering) Electrical and Electronics Engineering - 120 seats (from academic year 2009-10) Electronics and Instrumentation Engineering - 60 seats (from academic year 2012-13) About Admission to the courses is on an annual basis and is based on the All Kerala Common Entrance Examination conducted by the Controller of Entrance Examinations, Government of Kerala. Admissions to the free/merit seats and management seats are through the Central Allotment Process conducted by the Controller of Entrance Examinations, Government of Kerala. The proportion of seats is as follows: free/merit seats 50%, management seats 35% (as noted, allotment to both of these categories is through the Central Allotment Process), and the remaining 15% of seats come under the NRI quota. Annual intake CEC has an annual intake of 420 students (+10% lateral entry students) through government allotment, divided among the four branches as follows: Computer Science and Engineering: 120 (+10% lateral entry students) Electronics and Communication Engineering: 120 (+10% lateral entry students) Electrical and Electronics Engineering: 120 (+10% lateral entry students) Electronics and Instrumentation Engineering: 60 (+10% lateral entry students) CEC technical and non-technical events SUMMIT Summit is a biennial technical festival conducted by the college, providing a platform for improving the technical and non-technical skills of the students. The Summit offers events including seminars and workshops. Summit '12 was the sixth event of its kind, a three-day festival. Summit '12 differed from its predecessor as it was a techno-cultural event. The last Summit was held in February 2019. Technical organizations IEEE Student Branch An IEEE student branch was formed at CEC in mid-1997, with the goal of keeping the students in touch with technological advances.
An IEEE library was inaugurated in December 1999. The library houses journals and magazines of IEEE. The IEEE student branch comes under the IEEE Kerala Section (http://www.ewh.ieee.org/r10/kerala/) (http://www.cecieee.org) CEC's IEEE student members have presented projects at state and national level conferences and competitions. An IEEE Robotics Initiative Program has begun, with groups designing robots. IEEE-CEC is known for its consistent achievements since its inception in 1996. IEEE-CEC is also known for many firsts including being the first IEEE Student Branch in the world to win the IEEE WIE SB Affinity Group of the year award in 2005. PRODDEC (Product Design and Development Center) PRODDEC CEC is a pioneer in technical organizations of CEC since its inception in 1995, with a prime vision to integrate technical ideas to develop products of an engineering outlook. The organization is always in the quest for new avenues to help and encourage the students to put their theoretical knowledge to practical use. In its endeavor the organization is always open to suggestions, constructive criticism, and moral support. In the course of time, PRODDEC has contributed greatly to the overall development of the students as competent engineers. Proud of the achievements since its formation, PRODDEC members have proved their mark in every area possible right from national competitions to public comfort systems, the automated traffic light system in Chengannur, automated admission procedure implementation in the college being few of them. The forum owns a PRODDEC lab in the college which shelters the creation of budding engineering products and discussions. Moving on with the torch lit and handed over by the eminent alumni, the team aims and works consistently in providing a platform for students of both computer science and electronic branches to exchange ideas and sharpen their technical skills. PRODDEC has conducted webinars, workshops ranging from introductory to advanced level with hands-on training for students and strives to keep students updated with the latest trends and advancements in technology emerging around the globe. FOCES (Forum of Computer Engineering Students) FOCES is an organization of CSE students of the college. The forum aims at improving the technical and industrial knowledge of the students and strives to keep them abreast with software and technologies evolving in the information technology field. It welcomes the freshers into the world of computer engineering by conducting orientation classes. After renovating FOCES in January 2010, its code of conduct has changed slightly, and now all students can join the forum, but CSE students are default members. Newsletters and CDs are brought out to keep the students abreast of trends. It organizes talks by personalities in the industry on evolving technologies in computing, workshops on developing platforms, languages and software packages in the IT industry.. Website: - http://www.foces.org ExESS (Electronics Engineering Students Society) The activities of ExESS have helped students in attending state and national level contests. Apart from conducting technical learning sessions for Component familiarisations and Electronic Design Automation (EDA) tools, ExESS also does hands-on training sessions for soldering and PCB fabrication. FLOSS (Free/Libre/Open-Source Software) Cell The FLOSS cell has been organized to explain Open Source and Linux. 
The cell conducts Linux familiarisation classes, installation sessions and other activities. FLOSS Cell helps install and maintain systems and LAN in the nearby government school. Innovation and Entrepreneurship Development Cell [IEDC] The Innovation and Entrepreneurship Development Cell [IEDC] Bootcamp College of Engineering Chengannur came into existence in June 2015.They aim at promoting entrepreneurial spirit among students.IEDC seeks to inculcate the entrepreneurial culture among the students which would, in turn, inspire them to go a step further and take up the challenge of entrepreneurship. Non-technical organizations National Service Scheme NSS CEC now consists of 3 units with a maximum volunteer strength of 300, in which each student embraces the idea of everyone belonging to the same family. Over the years the college has made a benchmark for units across the state due to its commendable coordination and activities conducted. Such activities include the successful hosting of stem cell donation camps, blood donation camps, medical camps, etc., where students were made aware of the topics and have volunteered their assistance. The link between our student volunteers was well demonstrated in the programme 'Sahapadikk Oru Veedu' in which they formed an entente to rebuild home for their fellow colleague who lost his home in 2018 flood. Another great example of our unity and willingness for service was showcased by the involvement during relief operations of 2018-19 floods. Students with no prior training to handle such situations coordinated despite their drawbacks and successfully helped to transform the college into a relief camp, thus saving hundreds of lives. The activities of the units extend to much more than activities held during a crisis. Each year volunteers take part in 'Punarjani', the event where they aid community service centers such as hospitals in repairing damaged machinery and extend their assistance in whatever ways possible. The social commitments of our volunteers are evident from the Friday food serving programme at District hospital Chengannur to provide food for those in need. As directed by the Cell the three units conduct campus programmes exploring the talents in youth, awareness sessions brushing up their knowledge and community programmes that improvise their social skills. The units also extend their aid in helping with the internal duties of the college such as active participation during admission procedures and campus cleaning programmes. Our vivid spirit is not only seen in such commitments. The college hosts a plethora of talents whose skills are displayed in a variety of cultural events such as flash mobs, street plays on relevant topics, etc. We successfully hosted the second edition of UYILO, a techno-cultural fest with astounding feedback and plans to conduct further editions of the same in the years to come. The units have shown their active participation in almost every aspect pertaining to social and cultural welfare, and as the years progress it sets on reaching more milestones and prove to be an exemplary model for both its own volunteers and those outside. Santhwanam Integrated with the NSS unit, it undertakes campus cleaning drives, sending volunteers to the Pulse-Polio Immunization programs of the Government of India, and organizes orphanage visits. Nature Club CEC has joined hands with the "Save the Earth" global movement through the `Aranyam' forestry club. 
Group discussions, awareness camps and visits to nearby sanctuaries are some of the activities. Arts Club ACE (Arts Club for Engineers) provides a platform for students to develop their creative skills. It runs the Music club, Quiz and Debate club, and Audio Visual club. The Arts Club organizes "Utsav", the college arts festival. The Students' Executive Senate (CEC Senate) The Students' Executive Senate, simply known as the 'CEC Senate', is the supreme student body. The members of the senate are elected by and from the students, with two representatives from each class. The objectives of the senate are: to train the students of the college in their duties, responsibilities, and rights; to organize debates, seminars, group discussions, work squads, and tours; and to encourage sports, arts and other cultural, social or recreational activities. Training and Placement Cell The Training and Placement Cell (TPC) is a blend of faculty and students. With a group of students led by a placement officer (Mr Gopakumar), the TPC helps CECians choose their goals, prepare for them, and find placements. The TPC of Chengannur Engineering College was the first of its kind among the college placement units in the state to introduce an 'industry-like' screening system for the selection of TPC student members. Entrepreneurship Present as well as former students of the College of Engineering Chengannur have shown excellent entrepreneurship capabilities. More than 30 registered companies have originated from the institute. Some of the noteworthy companies from the College of Engineering Chengannur are: ExTravelMoney Technosol - Online Forex Services Product Profoundis Labs - IT Solutions (Acquired by FullContact) Red Panthers - Software Consultancy firm specializing in Ruby on Rails Veeble Softtech - Web Hosting, Web Development Company Poornam Infovision - IT Solutions Perleybrook Labs LLC - IT Solutions Entri.me - Educational Tech Product Startup Praudyogiki Technolabs LLP - IT Solutions Alumni Anagha (actress) See also List of Engineering Colleges in Kerala KEAM References External links Official website CEC Alumni website FoCES CECians unofficial blog SUMMIT '14 National Level Technical Fest website (new) Cochin University of Science And Technology official website The Institute of Human Resource Development Kerala official website FLOSS cell of CEC The IEEE Student Branch CEC on Twitter CECians Google Group (new) Engineering colleges in Kerala All India Council for Technical Education Institute of Human Resources Development Universities and colleges in Alappuzha district Educational institutions established in 1993 1993 establishments in Kerala
27367927
https://en.wikipedia.org/wiki/University%20of%20Texas%20at%20Dallas%20academic%20programs
University of Texas at Dallas academic programs
The University of Texas at Dallas (also referred to as UT Dallas or UTD) is a public research university in the University of Texas System. The University of Texas at Dallas main campus is located in Richardson, Texas. The University of Texas at Dallas offers over 145 academic programs across its eight schools, including 53 baccalaureate programs, 62 master's programs and 30 doctoral programs, and hosts more than 50 research centers and institutes. The school also offers 30 undergraduate and graduate certificates. With a number of interdisciplinary degree programs, its curriculum is designed to allow study that crosses traditional disciplinary lines and to enable students to participate in collaborative research labs. The Erik Jonsson School of Engineering and Computer Science launched the first accredited telecommunications engineering degree in the U.S. and is one of only a handful of institutions offering a degree in software engineering. The Bioengineering department offers MS and PhD degrees in biomedical engineering in conjunction with programs at the University of Texas Southwestern Medical Center at Dallas and the University of Texas at Arlington. Dual degrees offered at UTD include an M.S. Electrical Engineering (M.S.E.E.) degree in combination with an MBA in management, Molecular Biology and Business Administration (Double Major) B.S., and Molecular Biology and Criminology (Double Major) B.S. Geospatial Information Sciences is jointly offered with the School of Natural Sciences and Mathematics and with the School of Economic, Political and Policy Sciences, which administers the degree. UT Dallas is the fourth university in the nation to have received accreditation for a Geospatial Intelligence certificate. The Geospatial Intelligence Certificate is backed by the US Geospatial Intelligence Foundation (USGIF). The university is designated a National Center of Academic Excellence and a National Center of Academic Excellence in Information Assurance Research for the academic years 2008–2013 by the National Security Agency and Department of Homeland Security. School of Arts and Humanities The School of Arts and Humanities was established in 1975. Courses are offered in literature, foreign languages, history, philosophy, music, dance, drama, film, and visual arts. With the integration of the arts and humanities and interdisciplinary education, the school has no conventional departments. Its curriculum allows study that crosses traditional disciplinary lines. Centers and institutes Center for Holocaust Studies Center for the Interdisciplinary Study of Museums Center for Translation Studies Center for Values in Medicine, Science and Technology Confucius Institute (Closed until further notice) CentralTrak, Artist Residency Program School of Arts, Technology, and Emerging Communication The newest of UT Dallas’ schools, the School of Arts, Technology, and Emerging Communication (ATEC), was authorized by the UT System Board of Regents in February 2015. The school originated as a joint venture between the School of Arts and Humanities and the Computer Science program in the Erik Jonsson School of Engineering. The program was embraced by students and faculty alike, and the dynamic growth of ATEC enrollment led to the creation of the new school, making it the first comprehensive academic program designed to merge computer science and engineering with creative arts and humanities. Housed in the Edith O’Donnell Arts and Technology building, ATEC fosters connections that encourage students to try, learn, and grow.
The 155,000-square-foot building was designed by STUDIOS Architecture – the same firm that designed Google’s headquarters in Mountain View, Calif. The footprint of the building is designed around cluster concepts allowing thinkers to gather and work. ATEC classrooms, research labs, and makerspace studios support new ways of collaboration for a community immersed in innovative practices. ATEC appointed Dr. Anne Balsamo the inaugural dean in May 2016. Balsamo is a scholar, educator, entrepreneur, and designer of new media whose research and interactive projects explore the cultural possibilities of emergent technologies. Under Balsamo’s leadership, ATEC is a multidisciplinary academic research school that inspires students, faculty, staff, colleagues, and the public to think critically and creatively about the relationship between technology and culture. ATEC’s leading-edge STEAM programs draw inspiration from the creative disciplines of science, technology, art, engineering, and management. ATEC faculty and students create the future in their imagination, research, and creative practice. Academics At the undergraduate level, ATEC students choose an academic area of study in one of the following areas: Animation, Design and Creative Production, Game Design, and Critical Media Studies. At the graduate level ATEC offers a Master’s of Arts program, a Master’s of Fine Arts program, and a Doctoral Studies program. Rankings ATEC continues to rank among the nation’s top programs for Game Design both at the graduate and undergraduate level. ATEC at UT Dallas ranked 18th in the 2020 Animation Career Review Top 50 Game Design Schools and Colleges in the US. ATEC’s graduate program in Game Design was ranked 13th and the undergraduate program in Game Design ranked 19th in The Princeton Review 2020 list of Top Game Design Schools in the US. The ATEC Animation program ranked No. 2 in Texas, No. 3 in the Southwest, and No. 12 among US public schools for animation by the Animation Career Review in 2020. Research Labs and Creative Practice Studios ATEC’s research labs and creative practice studios foster collaboration across disciplines. Faculty and students are encouraged to identify new horizons of research and creativity. 3D Studio Animation Research Lab AntÈ Institute ArtSciLab Creative Automata Lab Cultural Science Lab (CultSciLab) Emerging Gizmology Lab Fashioning Circuits Games Research Lab LabSynthE Narrative Systems Research Lab Public Interactives Research Lab (PIRL) SP&CE Media The Studio for Mediating Play School of Behavioral and Brain Sciences The School of Behavioral and Brain Sciences (BBS) opened in 1963 and is housed in Green Hall on the main campus of the University of Texas at Dallas and in the Callier Center for Communication Disorders. The 2012 US News & World Report ranked the university's graduate audiology program 3rd in the nation and its graduate speech-pathology program 11th in the nation. Centers and institutes Callier Center for Communication Disorders Center for BrainHealth Center for Children and Families The Center for Vital Longevity School of Economic, Political and Policy Sciences The School of Economic, Political and Policy Sciences (EPPS) offers courses and programs in criminology, economics, geography and geospatial sciences, political science, public affairs, public policy and political economy, and sociology. 
UTD became the first university in Texas to implement a PhD program in criminology on October 26, 2006, when its program was approved by the Texas Higher Education Coordinating Board. The EPPS program was the first from Texas admitted to the University Consortium for Geographic Information Science and offered the first master of science in geospatial information sciences in Texas. UTD is one of four universities offering the Geospatial Information Sciences certificate. The Geospatial Intelligence Certificate is backed by the United States Geospatial Intelligence Foundation (USGIF), a collection of many organizations including Raytheon, Lockheed Martin and GeoEye. UT Dallas’ Geography and Geospatial Sciences program was ranked 16th nationally and first in Texas by Academic Analytics of Stony Brook, N.Y. In a 2012 study assessing the academic impact of publications, the UT Dallas criminology program was ranked fifth best in the world. The findings were published in the Journal of Criminal Justice Education. Centers and institutes Center for Crime and Justice Studies Center for Global Collective Action Center for the Study of Texas Politics Institute for Public Affairs Institute for Urban Policy Research The Negotiations Center Erik Jonsson School of Engineering and Computer Science The Erik Jonsson School of Engineering and Computer Science opened in 1986 and houses the Computer Science and Electrical Engineering departments as well as UTD's Computer Engineering, Materials Science & Engineering, Software Engineering, and Telecommunications Engineering programs. In 2002, the UTD Erik Jonsson School of Engineering and Computer Science became the first in the United States to offer an ABET-accredited B.S. degree in telecommunications engineering, and it remains one of only a handful of institutions offering a degree in software engineering. The Bioengineering department offers MS and PhD degrees in biomedical engineering in conjunction with programs at the University of Texas Southwestern Medical Center at Dallas and the University of Texas at Arlington. UT Dallas undergraduate programs in engineering have emerged in U.S. News & World Report's annual rankings, placing 60th among the nation’s public schools of engineering. U.S. News ranked the school’s graduate program 46th among public graduate schools of engineering and third among publicly funded schools in Texas. The school’s electrical engineering graduate program ranked 38th among comparable programs at other public universities, and the graduate program in computer science is among the top 50 such programs at public universities. The school is developing new programs in bioengineering, chemical engineering, and systems engineering. The school is designated a National Center of Academic Excellence and a National Center of Academic Excellence in Information Assurance Research for the academic years 2008–2013 by the National Security Agency and Department of Homeland Security. 
Centers and institutes Center for Advanced Telecommunications Systems and Services (CATSS) Center for Integrated Circuits and Systems (CICS) Center for Systems, Communications and Signal Processing (CSCSP) Cybersecurity Research Center (CSRC) Embedded Software Center Emergency Preparedness Center Global Information Assurance Center Photonic Technology and Engineering Center (PhoTEC) Texas Analog Center of Excellence CyberSecurity and Emergency Preparedness Institute Human Language Technology Research Institute Center for Basic Research in Natural Language Processing Center for Emerging Natural Language Applications Center for Machine Learning and Language Processing Center for Robust Speech Systems (CRSS) Center for Search Engines and Web Technologies Center for Text Mining CyberSecurity Research Center Embedded Software Center Emergency Preparedness Center Global Information Assurance Center InterVoice Center for Conversational Technologies School of Interdisciplinary Studies The School of Interdisciplinary Studies, formerly The School of General Studies, provides interdisciplinary programs encouraging students to understand and integrate the liberal arts and sciences. The school offers a Bachelor of Arts in Interdisciplinary Studies, Bachelor of Science in Interdisciplinary Studies, Bachelor of Science in Healthcare Studies and Master of Arts in Interdisciplinary Studies. Naveen Jindal School of Management The School of Management opened in 1975 and was renamed the Naveen Jindal School of Management on October 7, 2011, after alumnus Naveen Jindal donated $15 million to the business school. The school is accredited by the Association to Advance Collegiate Schools of Business. UTD's undergraduate business programs ranked 81st overall and 39th among public university business schools in the U.S. according to BusinessWeek's 2010 rankings, and 30th in overall student satisfaction. The Bloomberg BusinessWeek public universities rankings of undergraduate programs by specialty placed the UTD school of management 10th in both accounting and business law, 1st in teaching of quantitative methods, 3rd in teaching of calculus and sustainability concepts, 6th in financial management, 7th in ethics and 9th in corporate strategy course work. The 2010 U.S. News and World Report ranked the Full-Time MBA program among the top 50 in the nation, 24th among the nation’s public universities and 3rd for public school programs in the state of Texas. In 2009, Bloomberg BusinessWeek ranked the UTD Executive MBA program 22nd globally and the Professional Part-Time MBA program in the top 25 nationally. The Wall Street Journal ranked UTD's Executive MBA program 6th in the nation by ROI, and the 2009 Financial Times rankings placed UTD's Executive MBA program 1st for public universities in Texas and 51st globally. In 2015, the Full-Time MBA and Professional MBA programs at the UT Dallas Naveen Jindal School of Management were ranked 42nd by Bloomberg BusinessWeek. In Bloomberg's 2018 Best B-Schools ranking, the Jindal School placed 43rd. 
Centers and institutes Center and Laboratory for Behavioral Operations and Economics Center for Finance Strategy Innovation Center for Information Technology and Management Center for Intelligent Supply Networks Center for the Analysis of Property Rights and Innovation Institute for Excellence in Corporate Governance Institute for Innovation and Entrepreneurship Center for Internal Auditing Excellence International Accounting Development: Oil and Gas International Center for Decision and Risk Analysis Leadership Center at UT Dallas Morris Hite Center for Marketing School of Natural Sciences and Mathematics The School of Natural Sciences and Mathematics offers both graduate and undergraduate programs in Biology and Molecular Biology, Chemistry and Biochemistry, Geosciences, Mathematical Sciences, and Physics, and a graduate program in Science Education. Undergraduate and post-baccalaureate programs in teacher certification are administratively housed in the School of Natural Sciences and Mathematics but serve other schools as well. Centers and institutes Center for Space Sciences (CSS) Center for STEM Education and Research UTeach Dallas Modeled after UT Austin's teacher preparatory program, UTeach Dallas, in the School of Natural Sciences and Mathematics, addresses the current national deficit of qualified math, science, and computer science teachers, as well as K-12 students' lack of interest in the STEM fields. GEMS Center Gateways to Excellence in Math and Science Center (GEMS) is part of the Office of Student Success and Assessment and portal to educational enhancement and educational success. Honors Collegium V is the selective honors and enrichment program of the University of Texas at Dallas. CentralTrak: The UT Dallas Artist Residency (Closed) CentralTrak closed in June 2017 due to a lease cancellation. A new permanent home for the Artist Residency has not been announced. The CentralTrak residency program for artists in the city of Dallas was founded in 2002, by former McDermott Director of the Dallas Museum of Art and current University of Texas at Dallas faculty member Dr. Richard Brettell. Originally the South Side Artist Residency, co-founded by developer Jack Matthews, CentralTrak is now a program connected to and supported by the University of Texas at Dallas Arts and Humanities department. CentralTrak is located in Exposition Park in the old Fair Park post office building in Dallas, Texas near the historical Dallas arts and music neighborhood, Deep Ellum. It offers living work spaces for eight artists and contains a gallery which regularly hosts solo and group exhibitions, lectures, and performances. CentralTrak is known for showcasing contemporary visual art, but also is used as a space for experimental music and other art forms. It was recently listed as one of the best contemporary art galleries in Dallas by Glasstire. It will be host for the Texas Biennial in the fall of 2013. Past directors have been Karen Weiner, Charissa Terranova, and Kate Sheerin. Heyd Fontenot is the current director, taking over for Kate Sheerin in 2011. Before becoming the current Director of CentralTrak, Heyd Fontenot co-curated the Gun and Knife Show in 2011. This exhibition focused on violence in America and the fetishization of weapons. On January 18, 2013, CentralTrak opened the two-person exhibition "Painting of All Excuses", featuring Cuban artists Raul Cordero and Michael "El Pollo" Pérez. Both artists were represented in the 2012 Havana Bienniale. 
History As the precursor to CentralTrak, the SouthSide Artist Residency began as an experiment located in the middle of a burgeoning arts scene; with ten loft-studios reserved for artists in a former Sears distribution center, Jack Matthews transformed the building into a center for creative living and working. His collaboration with the University of Texas at Dallas' School of Arts and Humanities provided national and international artists with a residency fellowship (during a period of six months to a year) funded by small grants, as well as the participation of the artists themselves, most of whom transported themselves to Texas from Argentina, France, and Austria, among others. With inspiration from critically acclaimed programs such as the Chinati Foundation in Marfa, Texas and the CORE Program at the Museum of Fine Arts, Houston, the residency lasted for a little over two years, ending over a disagreement between Matthews and the university. However, the residency reimagined itself as CentralTrak in early 2008 with support from the university and Dr. Terranova as its inaugural director. Notable exhibits February 13 – June 13, 2015; Who's Afraid of Chuck and George? August 3 – 24, 2013: Take by David Witherspoon July 6–27, 2013: Between Here and Cool – The photographs of Diane Durant May 11 – June 29, 2013: That Mortal Coil: Rebuking the Ideal in Contemporary Figurative Art March 9 – April 27, 2013: Failing Flat: Sculptural Tendencies in Painting curated by Nathan Green. Jan 19 – March 2, 2013: Painting of all Excuses Raul Cordero and Michel Pérez. Nov 17, 2012 – Jan 5, 2013: Co- Re-Creating Spaces: a group exhibition curated by Carolyn Sortor & Michael A. Morris. Nov. 10, 2012: Tiny Thumbs curated by Bobby Frye and Kyle Kondas. This exhibit was a pop-up arcade with five experimental games created for one night only. August 25 – September 22, 2012: The Skin I Live In Ari Richter July 14 – August 18, 2012: SHEET/ROCK Cassandra Emswiler and Sally Glass. May 26 – Jun 30, 2012: Go Cowboys Larissa Aharoni. April 21 – May 19, 2012: HARAKIRI: To Die For Performances Notable alumni Aziz Sancar Naveen Jindal Gary Farrelly Christeene Vale PJ Raval Gabriel Dawe Phyllida Barlow Kelli Connell Brian Fridge Stephen Lapthisophon Dadara References External links Academics University of Texas at Dallas
46124
https://en.wikipedia.org/wiki/ACIS
ACIS
The 3D ACIS Modeler (ACIS) is a geometric modeling kernel developed by Spatial Corporation (formerly Spatial Technology), part of Dassault Systemes. ACIS is used by many software developers in industries such as computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided engineering (CAE), architecture, engineering and construction (AEC), coordinate-measuring machine (CMM), 3D animation, and shipbuilding. ACIS provides software developers and manufacturers with the underlying 3D modeling functionality. ACIS features an open, object-oriented C++ architecture that enables robust 3D modelling capabilities. ACIS is used to construct applications with hybrid modeling features, since it integrates wireframe, surface, and solid modeling functionality with both manifold and non-manifold topology, and a rich set of geometric operations. History As a geometric kernel, ACIS is a second-generation system, coming after the first-generation Romulus. There are several accounts of what the word ACIS actually stands for, or whether it is an acronym at all. The most popular is that ACIS stands for Alan, Charles, Ian's System (Alan Grayer, Charles Lang and Ian Braid as part of Three-Space Ltd.), or Alan, Charles, Ian and Spatial (as the system was later sold to Spatial Technology, now Spatial Corp). According to a source close to the project, the name actually stands for Alan, Charles, Ian, Sowar, with Sowar referring to Dick Sowar, founder of Spatial Technology. However, when asked, the creators of ACIS would simply suggest that its name was derived from Greek mythology (see also Acis). In 1985, Alan Grayer, Charles Lang and Ian Braid (creators of Romulus and Romulus-D) formed Three-Space Ltd. (Cambridge, England), which was later retained by Dick Sowar's Spatial Technology (founded by Sowar in 1986) to develop the ACIS solid modeling kernel for Spatial Technology's Strata CAM software. The first version of ACIS was released in 1989 and was quickly licensed by HP for integration into its ME CAD software. In late 2000, around the time when Spatial was acquired by Dassault Systemes, the ACIS file format changed slightly and was no longer openly published. Architecture A software component is a functionally specialized unit of software—a collection of software items (functions, classes, etc.) grouped together to serve some distinct purpose. It serves as a constituent part of a whole software system or product. A product is one or more software components that are assembled together and sold as a package. Components can be arranged in different combinations to form different products. The ACIS product line is designed using software component technology, which allows an application to use only the components it requires. In some cases, more than one component is available (either from Spatial or third party vendors) for a given purpose, so application developers can use the component that best meets their needs. For example, several rendering components are available from Spatial, and developers use the one that works best for their platform or application. Supported Platforms and Operating Systems Functionality ACIS Modeler ACIS core functionality can be subclassified into three categories, namely: 3D Modelling Extrude/revolve/sweep sets of 2D curves into complex surfaces or solids. Fillet and chamfer between faces and along edges in surface and solid models. Fit surfaces to a closed network of curves. Generate patterns of repetitive shapes. Hollow solids and thicken surfaces. 
Interactively bend, twist, stretch, and warp combinations of curves, surfaces, and solids. Intersect/subtract/unite any combination of curves, surfaces, and solids. Loft surfaces to fit a set of profile curves. Taper/offset/move surfaces in a model. 3D Model Management Attach user-defined data to any level of a model. Track geometry and topology changes. Calculate mass and volume. Model sub-regions of a solid using cellular topology. Unlimited undo/redo with independent history streams. 3D Model Visualization Tessellate surface geometry into polygonal mesh representation. Create advanced surfacing capabilities with the optional Deformable Modeling component. Generate precise 2D projections with hidden line removal using optional PHL V5 component. Develop graphical applications ACIS Modeler Extensions CGM Polyhedra CGM Polyhedra is an add-on to the 3D ACIS Modeler combining polyhedral and B-rep modeling. Utilizing the same interfaces that 3D ACIS Modeler users are already familiar with, existing and new customers can integrate approximated polyhedral data to their 3D printing, subtractive manufacturing, analysis, and other workflows. 3D Deformable Modeling 3D Deformable Modeling is an interactive sculpting tool for shaping 3D models. Included as part of Spatial's suite of 3D modeling development technologies, 3D Deformable Modeling uses local and global editing features that allow for the easy creation and manipulation of free-form B-spline and NURBS curves and surfaces. Advanced Covering Advanced Covering is a feature of Deformable Modeling that is now available as a standalone add-on for the 3D ACIS Modeler. This single API uses sophisticated algorithms to create high-quality n-sided surfaces that meet user-specified tolerances for position and continuity on boundaries and on optional internal guiding geometry. Advanced Covering allows a surface to be fit onto circuits (collections of edges that form closed loops) in solid or wire bodies, which is useful in consumer product design. Among other uses, Advanced Covering can be used for end-capping, post-translation corrections, and surface definition from curve data. Defeaturing Defeaturing automatically identifies and removes small features that CAE analysts typically want to eliminate from the 3D model prior to meshing. Analysts frequently work from the same models that are used for design and manufacture, but these models often carry much more detail than is necessary for simulation or analysis purposes. By removing unnecessary detail, Defeaturing simplifies the model, a process that typically is done manually at significant cost. CGM HLR CGM HLR is a hidden line removal (HLR) solution from Spatial based on CATIA V6 technology. CGM HLR is an ACIS-dependent development technology - an ACIS license is required. Though 3D is now the de facto CAD standard in most engineering disciplines, 2D still has a place in industries such as technical illustration, manufacturing, and architecture. Since 3D models are the typical primary output for CAD design, users in these industries require an efficient and accurate method of generating 2D computational drawings directly from the 3D models. Hidden line removal (HLR) is an important aspect of creating an accurate 2D representation from a 3D model. Using HLR, the converted model only displays those parts visible from a given perspective; hidden (or occluded) edges normally included in a 3D model representation are removed, or drawn in a line style that indicates their obscured position. 
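The operations listed above — sweeping 2D profiles into solids, Boolean intersect/subtract/unite, filleting, and history-based rollback — are what a modeling kernel exposes to the applications built on top of it. The following Python sketch is purely illustrative: every class and function name in it is hypothetical and it is not the ACIS C++ API; it only suggests the general shape of the calls an application issues against a B-rep kernel.

```python
# Illustrative only: hypothetical names, NOT the ACIS C++ API. The point is
# the general shape of the calls an application makes against a B-rep kernel.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Body:
    """Stand-in for a kernel body; records how it was constructed."""
    steps: List[str] = field(default_factory=list)

def extrude(profile: str, height: float) -> Body:
    """Sweep a closed 2D profile along a straight path into a solid."""
    return Body([f"extrude({profile}, height={height})"])

def unite(a: Body, b: Body) -> Body:
    """Boolean union, one of the intersect/subtract/unite operations."""
    return Body(a.steps + b.steps + ["unite"])

def fillet_edges(body: Body, radius: float) -> Body:
    """Round selected edges with a constant-radius blend."""
    body.steps.append(f"fillet(radius={radius})")
    return body

# A typical call sequence an application might issue against a kernel:
base = extrude("rectangle 40x20", height=10.0)
boss = extrude("circle r=6", height=25.0)
part = fillet_edges(unite(base, boss), radius=2.0)
print(part.steps)
```

A real kernel returns topology (faces, edges, vertices) and geometry rather than strings, and each operation is recorded so that it can be rolled back — the role of the undo/redo history streams mentioned above.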
File format Save File Types ACIS supports two kinds of save files: Standard ACIS Text (SAT) and Standard ACIS Binary (SAB). The two formats store identical information, so the term SAT file is generally used to refer to either when no distinction is needed. SAT files are ASCII text files that may be viewed with a simple text editor. A SAT file contains carriage returns, white space and other formatting that makes it readable to the human eye. A SAT file has a .sat file extension. SAB files cannot be viewed with a simple text editor and are meant for compactness and not for human readability. A SAB file has a .sab file extension. A SAB file uses delimiters between elements and binary tags, without additional formatting. Structure of the Save File The specification of the SAT format for version 7.0 (circa 2001) has been made publicly available. This allowed external applications, even those not based on ACIS, to access the data stored in such files. The basic information needed to understand the SAT file format, such as the structure of the save file format, how the data is encapsulated, the types of data written, subtypes and references, is available from this document. However, newer versions of ACIS use a modified SAT format whose specification is not publicly available. Reading modern SAT files therefore requires either the native ACIS library or reverse engineering of the format. A save file contains: a three-line header; entity records, representing the bulk of the data; optionally, a begin history data marker; optionally, old entity records needed for history and rollback; optionally, an end history data marker; and an end marker. Beginning with ACIS Release 6.3, the product ID and units must be populated in the file header before a SAT file can be saved. Version Numbers and ACIS Releases ACIS is currently developed by Spatial, which maintains the concept of a current version (release) number in ACIS, as well as a save version number. The save version allows one to create a SAT save file that can be read by a previous version of ACIS. Beginning with ACIS Release 4.0, the SAT save file format did not change with minor releases, only with major releases. This allowed applications that are based upon the same major version of ACIS to exchange data without being concerned about the save version. To provide this interoperability in a simple implementation, ACIS save files have contained a symbol that accurately identified the major version number, but not the minor version. This meant that applications created using the same major version of ACIS would produce compatible save files, regardless of their minor versions. This was accomplished by simply not incrementing the internal minor version number between major versions. Beginning with Release 7.0, ACIS started again providing accurate major, minor, and point version numbers. Beginning with Release 2016 1.0 in September 2015, Spatial updated to Semantic Versioning, and now describes versions by the model year and major, minor and point releases within that model year. To summarize how release numbers and SAT changes are related: Major release: SAT file changes may be made; significant functionality changes likely; may require significant changes to existing applications Minor release: No SAT file changes are made; may provide new functionality; may require some minimal changes to existing applications Point release: Minor changes only (bug fixes). (Also known as service packs). 
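Because the text format's coarse layout is described above (a three-line header, entity records, optional history markers, and an end marker) and the 7.0-era specification is public, a structural reader can be sketched. The header-field meaning and the end-marker token used below are assumptions drawn from that older published format, not a parser for current releases, whose format is no longer openly documented.

```python
# Minimal sketch of splitting a text-format (SAT) save file into its coarse
# regions, following the structure described above: a three-line header,
# entity records, optional history markers, and an end marker. The header
# field meaning and the "End-of-ACIS-data" token are assumptions taken from
# the openly published 7.0-era specification; newer releases may differ.

def read_sat_structure(path):
    with open(path, "r", errors="replace") as fh:
        lines = [ln.rstrip("\n") for ln in fh]

    header, rest = lines[:3], lines[3:]

    # In the documented 7.0-era layout the first header token is the save
    # version, e.g. 700 (assumption; check the specification you target).
    try:
        save_version = int(header[0].split()[0])
    except (ValueError, IndexError):
        save_version = None

    records = []
    for ln in rest:
        if ln.strip().startswith("End-of-ACIS-data"):  # assumed end marker
            break
        records.append(ln)  # entity records, plus any history markers/records

    return {"save_version": save_version, "header": header, "records": records}

# Hypothetical usage:
# info = read_sat_structure("part.sat")
# print(info["save_version"], len(info["records"]), "records")
```

Anything beyond splitting the file into these regions — interpreting entity records, subtypes, and references — requires the published specification or the native library, as noted above.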
Adoption As of 2013, the following software used ACIS as its geometric kernel/engine: BricsCAD, SpaceClaim, TurboCAD and Cimatron. See also Comparison of CAD editors for Architecture, Engineering and Construction ShapeManager References External links Spatial Corporation Homepage 3D ACIS Modeler Official Website Faces and Facets Spatial's Online Community Forum ACIS File Format ACIS Alliance 3D graphics software CAD file formats
32590529
https://en.wikipedia.org/wiki/Beep%20%28video%20game%29
Beep (video game)
Beep is a 2D-platforming action and adventure game originally released on 3 March 2011 by Canadian studio Big Fat Alien for Windows, Mac OS X and Linux. The Linux version arrived on Gameolith on 12 July 2011 and in the Ubuntu Software Center on 22 July 2011. Gameplay In Beep, the player controls a robot sent from Earth that must find anti-matter in order to power its ship and continue exploring. The player begins in a spaceship and is able to navigate to different planets within the solar system. When arriving at a level on a planet, a robot is deployed and the player must control the robot through the level. Collecting anti-matter and anti-matter nuggets unlocks new levels and planets; there are 24 levels total. In each level, the player controls a four-wheeled robot, attacking enemy robots with its gun and using an anti-gravity beam to move objects and platforms in order to access different parts of the level. Some objects can even be used to destroy hostile robots. In some instances, players must stack dead enemies' bodies in order to advance. Environments include swamp, ice, desert, and caves. The game is heavily physics-based and allows the robot to jump, glide, swim, and cling to surfaces. Reception Beep received a mix of praise and criticism from reviewers. Eurogamer's Matteo Lorenzetti praised the game's physics, comparing the robot's anti-gravity beam to Half-Life 2's gravity gun. He also lauded the game's overall enjoyment factor but criticized its lack of control customization, saying that players are forced to use a mouse and the WASD keys. PC Gamer's Rachel Weber noted that the controls were very sensitive. She also bemoaned the game's bland graphics, noting the "distinct lack of eye candy" and "that final slick of lipstick and blusher". Gamezebo's Alicia Ashby criticized the game's physics as "mushy and imprecise", adding that players would be fighting against the controls instead of enjoying the game. References 2011 video games Action-adventure games Indie video games Linux games MacOS games Video games about robots Video games developed in Canada Windows games
13617506
https://en.wikipedia.org/wiki/OAuth
OAuth
OAuth (Open Authorization) is an open standard for access delegation, commonly used as a way for Internet users to grant websites or applications access to their information on other websites without giving them the passwords. This mechanism is used by companies such as Amazon, Google, Facebook, Microsoft, and Twitter to permit users to share information about their accounts with third-party applications or websites. Overview Generally, OAuth provides clients "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without providing credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server. In particular, OAuth 2.0 provides specific authorization flows for web applications, desktop applications, mobile phones, and smart devices. History OAuth began in November 2006 when Blaine Cook was developing the Twitter OpenID implementation. Meanwhile, Ma.gnolia needed a solution to allow its members with OpenIDs to authorize Dashboard Widgets to access their service. Cook, Chris Messina and Larry Halff from Magnolia met with David Recordon to discuss using OpenID with the Twitter and Magnolia APIs to delegate authentication. They concluded that there were no open standards for API access delegation. The OAuth discussion group was created in April 2007 for the small group of implementers to write the draft proposal for an open protocol. DeWitt Clinton from Google learned of the OAuth project and expressed his interest in supporting the effort. In July 2007, the team drafted an initial specification. Eran Hammer joined and coordinated the many OAuth contributions, creating a more formal specification. On 4 December 2007, the OAuth Core 1.0 final draft was released. At the 73rd Internet Engineering Task Force (IETF) meeting in Minneapolis in November 2008, an OAuth BoF was held to discuss bringing the protocol into the IETF for further standardization work. The event was well attended and there was wide support for formally chartering an OAuth working group within the IETF. The OAuth 1.0 protocol was published as RFC 5849, an informational Request for Comments, in April 2010. Since 31 August 2010, all third party Twitter applications have been required to use OAuth. The OAuth 2.0 framework was published taking into account additional use cases and extensibility requirements gathered from the wider IETF community. Although built on the OAuth 1.0 deployment experience, OAuth 2.0 is not backwards compatible with OAuth 1.0. OAuth 2.0 was published as RFC 6749 and the Bearer Token Usage as RFC 6750, both standards track Requests for Comments, in October 2012. The OAuth 2.1 Authorization Framework is in draft stage and consolidates the functionality in the RFCs OAuth 2.0, OAuth 2.0 for Native Apps, Proof Key for Code Exchange, OAuth 2.0 for Browser-Based Apps, OAuth 2.0 Security Best Current Practice, and Bearer Token Usage. Security issues OAuth 1.0 On 23 April 2009, a session fixation security flaw in the 1.0 protocol was announced. It affects the OAuth authorization flow (also known as "3-legged OAuth") in OAuth Core 1.0 Section 6. Version 1.0a of the OAuth Core protocol was issued to address this issue. 
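To make the roles in the overview above concrete — client, authorization server, resource owner, access token — here is a minimal Python sketch of the RFC 6749 authorization code grant, including the PKCE verifier/challenge pair from RFC 7636 that later guidance recommends for all client types. The endpoints, client identifier, and redirect URI are hypothetical placeholders rather than any particular provider's API.

```python
# Minimal sketch of the OAuth 2.0 authorization code grant with PKCE.
# All URLs, the client identifier, and the redirect URI are hypothetical
# placeholders; the parameter names follow RFC 6749 and RFC 7636.
import base64, hashlib, os, secrets, urllib.parse
import requests  # any HTTP client would do

AUTHZ_ENDPOINT = "https://auth.example.com/authorize"  # hypothetical
TOKEN_ENDPOINT = "https://auth.example.com/token"      # hypothetical
CLIENT_ID = "my-client-id"                             # hypothetical
REDIRECT_URI = "https://app.example.com/callback"      # hypothetical

# PKCE (RFC 7636): a random verifier and its S256 challenge.
code_verifier = base64.urlsafe_b64encode(os.urandom(32)).rstrip(b"=").decode()
code_challenge = base64.urlsafe_b64encode(
    hashlib.sha256(code_verifier.encode()).digest()).rstrip(b"=").decode()
state = secrets.token_urlsafe(16)  # ties the callback to this request

# Step 1: send the resource owner's browser to the authorization server.
auth_url = AUTHZ_ENDPOINT + "?" + urllib.parse.urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "profile",               # the coarse permission being requested
    "state": state,
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
})
print("Send the user to:", auth_url)

# Step 2: after approval, the server redirects back with ?code=...&state=...
def exchange_code(authorization_code: str) -> dict:
    """Step 3: trade the one-time code (plus the PKCE verifier) for tokens."""
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "authorization_code",
        "code": authorization_code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "code_verifier": code_verifier,
    })
    return resp.json()

# Step 4: the returned access token is presented to the resource server as a
# bearer credential (RFC 6750), e.g.
#   headers={"Authorization": f"Bearer {token['access_token']}"}
```

The state value links the callback to the original request, and the resource server never sees the user's password — only the token issued by the authorization server with the resource owner's approval.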
OAuth 2.0 In January 2013, the Internet Engineering Task Force published a threat model for OAuth 2.0. Among the threats outlined is one called "Open Redirector"; in early 2014, a variant of this was described under the name "Covert Redirect" by Wang Jing. OAuth 2.0 has been analyzed using formal web protocol analysis. This analysis revealed that in setups with multiple authorization servers, one of which is behaving maliciously, clients can become confused about which authorization server to use and may forward secrets to the malicious authorization server (AS Mix-Up Attack). This prompted the creation of a new best current practice internet draft that sets out to define a new security standard for OAuth 2.0. Assuming a fix for the AS Mix-Up Attack is in place, the security of OAuth 2.0 has been proven under strong attacker models using formal analysis. One implementation of OAuth 2.0 with numerous security flaws has been exposed. In April and May 2017, about one million users of Gmail (less than 0.1% of users as of May 2017) were targeted by an OAuth-based phishing attack, receiving an email purporting to be from a colleague, employer or friend wanting to share a document on Google Docs. Those who clicked on the link within the email were directed to sign in and allow a potentially malicious third-party program called "Google Apps" to access their "email account, contacts and online documents". Within "approximately one hour", the phishing attack was stopped by Google, who advised those who had given "Google Apps" access to their email to revoke such access and change their passwords. In the draft of OAuth 2.1, the use of the PKCE extension (originally specified for native apps) is recommended for all kinds of OAuth clients, including web applications and other confidential clients, in order to prevent malicious browser extensions from performing OAuth 2.0 code injection attacks. An email software developer has criticised OAuth 2.0 as "an absolute dog's breakfast" because it requires developers to write custom modules specific to each service (Gmail, Microsoft Mail services, etc.) and to register specifically with each of them. Uses Facebook's Graph API only supports OAuth 2.0. Google supports OAuth 2.0 as the recommended authorization mechanism for all of its APIs. Microsoft also supports OAuth 2.0 for various APIs and its Azure Active Directory service, which is used to secure many Microsoft and third party APIs. OAuth can be used as an authorization mechanism to access secured RSS/ATOM feeds. Access to RSS/ATOM feeds that require authentication has always been an issue. For example, an RSS feed from a secured Google Site could not have been accessed using Google Reader. Instead, three-legged OAuth would have been used to authorize that RSS client to access the feed from the Google Site. OAuth and other standards OAuth is a service that is complementary to and distinct from OpenID. OAuth is unrelated to OATH, which is a reference architecture for authentication, not a standard for authorization. However, OAuth is directly related to OpenID Connect (OIDC), since OIDC is an authentication layer built on top of OAuth 2.0. OAuth is also unrelated to XACML, which is an authorization policy standard. OAuth can be used in conjunction with XACML, where OAuth is used for ownership consent and access delegation whereas XACML is used to define the authorization policies (e.g., managers can view documents in their region). OpenID vs. pseudo-authentication using OAuth OAuth is an authorization protocol, rather than an authentication protocol. 
Using OAuth on its own as an authentication method may be referred to as pseudo-authentication. The following steps highlight the differences between using OpenID (specifically designed as an authentication protocol) and OAuth for authorization. The communication flow in both processes is similar: The user requests a resource or site login from the application. The site sees that the user is not authenticated. It formulates a request for the identity provider, encodes it, and sends it to the user as part of a redirect URL. The user's browser makes a request to the redirect URL for the identity provider, including the application's request. If necessary, the identity provider authenticates the user (perhaps by asking them for their username and password). Once the identity provider is satisfied that the user is sufficiently authenticated, it processes the application's request, formulates a response, and sends that back to the user along with a redirect URL back to the application. The user's browser requests the redirect URL that goes back to the application, including the identity provider's response. The application decodes the identity provider's response, and carries on accordingly. (OAuth only) The response includes an access token which the application can use to gain direct access to the identity provider's services on the user's behalf. The crucial difference is that in the OpenID authentication use case, the response from the identity provider is an assertion of identity; while in the OAuth authorization use case, the identity provider is also an API provider, and the response from the identity provider is an access token that may grant the application ongoing access to some of the identity provider's APIs, on the user's behalf. The access token acts as a kind of "valet key" that the application can include with its requests to the identity provider, proving that it has permission from the user to access those APIs. Because the identity provider typically (but not always) authenticates the user as part of the process of granting an OAuth access token, it is tempting to view a successful OAuth access token request as an authentication method itself. However, because OAuth was not designed with this use case in mind, making this assumption can lead to major security flaws. OAuth and XACML XACML is a policy-based, attribute-based access control authorization framework. It provides: An access control architecture. A policy language with which to express a wide range of access control policies including policies that can use consents handled / defined via OAuth. A request / response scheme to send and receive authorization requests. XACML and OAuth can be combined to deliver a more comprehensive approach to authorization. OAuth does not provide a policy language with which to define access control policies. XACML can be used for its policy language. Where OAuth focuses on delegated access (I, the user, grant Twitter access to my Facebook wall) and identity-centric authorization, XACML takes an attribute-based approach that can consider attributes of the user, the action, the resource, and the context (who, what, where, when, how). With XACML it is possible to define policies such as "managers can view documents in their department" and "managers can edit documents they own in draft mode". XACML provides more fine-grained access control than OAuth does. OAuth is limited in granularity to the coarse functionality (the scopes) exposed by the target service. 
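That granularity difference is easiest to see side by side. The sketch below is schematic only: it is neither real XACML (an XML policy language) nor any particular OAuth library, and all names in it are made up for illustration. It contrasts a coarse scope check with an attribute-based rule of the kind quoted above.

```python
# Schematic contrast between an OAuth scope check and an attribute-based
# (XACML-style) rule. Everything here is made up for illustration; it is not
# a real XACML engine or an OAuth library.

def scope_allows(granted_scopes, required_scope):
    """OAuth-style check: coarse, limited to the scopes the API exposes."""
    return required_scope in granted_scopes

def abac_allows(subject, action, resource):
    """Attribute rule: 'managers can view documents in their department'."""
    return (subject.get("role") == "manager"
            and action == "view"
            and resource.get("type") == "document"
            and resource.get("department") == subject.get("department"))

token_scopes = {"documents.read"}                    # what the user consented to
alice = {"role": "manager", "department": "sales"}
doc = {"type": "document", "department": "sales"}

print(scope_allows(token_scopes, "documents.read"))  # True for any readable document
print(abac_allows(alice, "view", doc))               # True only within her department
```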
As a result, it often makes sense to combine OAuth and XACML together where OAuth will provide the delegated access use case and consent management and XACML will provide the authorization policies that work on the applications, processes, and data. Lastly, XACML can work transparently across multiple stacks (APIs, web SSO, ESBs, home-grown apps, databases...). OAuth focuses exclusively on HTTP-based apps. Controversy Eran Hammer resigned from his role of lead author for the OAuth 2.0 project, withdrew from the IETF working group, and removed his name from the specification in July 2012. Hammer cited a conflict between web and enterprise cultures as his reason for leaving, noting that IETF is a community that is "all about enterprise use cases" and "not capable of simple". "What is now offered is a blueprint for an authorization protocol", he noted, "that is the enterprise way", providing a "whole new frontier to sell consulting services and integration solutions". In comparing OAuth 2.0 with OAuth 1.0, Hammer points out that it has become "more complex, less interoperable, less useful, more incomplete, and most importantly, less secure". He explains how architectural changes for 2.0 unbound tokens from clients, removed all signatures and cryptography at a protocol level and added expiring tokens (because tokens could not be revoked) while complicating the processing of authorization. Numerous items were left unspecified or unlimited in the specification because "as has been the nature of this working group, no issue is too small to get stuck on or leave open for each implementation to decide." David Recordon later also removed his name from the specifications for unspecified reasons. Dick Hardt took over the editor role, and the framework was published in October 2012. See also List of OAuth providers Data portability IndieAuth Mozilla Persona OpenID SAML XACML User-Managed Access References External links The Complete Guide to OAuth 2.0 and OpenID Connect Protocols OAuth Working Group's Mailing List The OAuth 1.0 Protocol (RFC 5849) The OAuth 2.0 Authorization Framework (RFC 6749) The OAuth 2.0 Authorization Framework: Bearer Token Usage (RFC 6750) The OAuth 2.1 Authorization Framework draft-ietf-oauth-v2-1-00 OAuth 2.0 Authorization Flows diagrams OAuth end points cheat sheet OAuth with Spring framework integration Illustration of OAuth 2.0 code injection attack Cloud standards Internet protocols Computer-related introductions in 2007 Computer access control Computer access control protocols
17001425
https://en.wikipedia.org/wiki/Minor%20planet
Minor planet
A minor planet is an astronomical object in direct orbit around the Sun (or more broadly, any star with a planetary system) that is neither a planet nor exclusively classified as a comet. Before 2006, the International Astronomical Union (IAU) officially used the term minor planet, but during that year's meeting it reclassified minor planets and comets into dwarf planets and small Solar System bodies (SSSBs). Minor planets include asteroids (near-Earth objects, Mars-crossers, main-belt asteroids and Jupiter trojans), as well as distant minor planets (centaurs and trans-Neptunian objects), most of which reside in the Kuiper belt and the scattered disc. , there are known objects, divided into 567,132 numbered (secured discoveries) and 519,523 unnumbered minor planets, with only five of those officially recognized as a dwarf planet. The first minor planet to be discovered was Ceres in 1801. The term minor planet has been used since the 19th century to describe these objects. The term planetoid has also been used, especially for larger, planetary objects such as those the IAU has called dwarf planets since 2006. Historically, the terms asteroid, minor planet, and planetoid have been more or less synonymous. This terminology has become more complicated by the discovery of numerous minor planets beyond the orbit of Jupiter, especially trans-Neptunian objects that are generally not considered asteroids. A minor planet seen releasing gas may be dually classified as a comet. Objects are called dwarf planets if their own gravity is sufficient to achieve hydrostatic equilibrium and form an ellipsoidal shape. All other minor planets and comets are called small Solar System bodies. The IAU stated that the term minor planet may still be used, but the term small Solar System body will be preferred. However, for purposes of numbering and naming, the traditional distinction between minor planet and comet is still used. Populations Hundreds of thousands of minor planets have been discovered within the Solar System and thousands more are discovered each month. The Minor Planet Center has documented over 213 million observations and 794,832 minor planets, of which 541,128 have orbits known well enough to be assigned permanent official numbers. Of these, 21,922 have official names. , the lowest-numbered unnamed minor planet is , and the highest-numbered named minor planet is 594913 ꞌAylóꞌchaxnim. There are various broad minor-planet populations: Asteroids; traditionally, most have been bodies in the inner Solar System. Near-Earth asteroids, those whose orbits take them inside the orbit of Mars. Further subclassification of these, based on orbital distance, is used: Apohele asteroids orbit inside of Earth's perihelion distance and thus are contained entirely within the orbit of Earth. Aten asteroids, those that have semi-major axes of less than Earth's and aphelion (furthest distance from the Sun) greater than 0.983 AU. Apollo asteroids are those asteroids with a semimajor axis greater than Earth's while having a perihelion distance of 1.017 AU or less. Like Aten asteroids, Apollo asteroids are Earth-crossers. Amor asteroids are those near-Earth asteroids that approach the orbit of Earth from beyond but do not cross it. Amor asteroids are further subdivided into four subgroups, depending on where their semimajor axis falls between Earth's orbit and the asteroid belt; Earth trojans, asteroids sharing Earth's orbit and gravitationally locked to it. As of 2011, the only one known is 2010 TK7. 
Mars trojans, asteroids sharing Mars's orbit and gravitationally locked to it. As of 2007, eight such asteroids are known. Asteroid belt, whose members follow roughly circular orbits between Mars and Jupiter. These are the original and best-known group of asteroids. Jupiter trojans, asteroids sharing Jupiter's orbit and gravitationally locked to it. Numerically they are estimated to equal the main-belt asteroids. Distant minor planets; an umbrella term for minor planets in the outer Solar System. Centaurs, bodies in the outer Solar System between Jupiter and Neptune. They have unstable orbits due to the gravitational influence of the giant planets, and therefore must have come from elsewhere, probably outside Neptune. Neptune trojans, bodies sharing Neptune's orbit and gravitationally locked to it. Although only a handful are known, there is evidence that Neptune trojans are more numerous than either the asteroids in the asteroid belt or the Jupiter trojans. Trans-Neptunian objects, bodies at or beyond the orbit of Neptune, the outermost planet. The Kuiper belt, objects inside an apparent population drop-off approximately 55 AU from the Sun. Classical Kuiper belt objects like Makemake, also known as cubewanos, are in primordial, relatively circular orbits that are not in resonance with Neptune. Resonant Kuiper belt objects Plutinos, bodies like that are in a 2:3 resonance with Neptune. Scattered disc objects like Eris, with aphelia outside the Kuiper belt. These are thought to have been scattered by Neptune. Resonant scattered disc objects. Detached objects such as Sedna, with both aphelia and perihelia outside the Kuiper belt. Sednoids, detached objects with perihelia greater than 75 AU (Sedna, , and Leleākūhonua). The Oort cloud, a hypothetical population thought to be the source of long-period comets that may extend to 50,000 AU from the Sun. Naming conventions All astronomical bodies in the Solar System need a distinct designation. The naming of minor planets runs through a three-step process. First, a provisional designation is given upon discovery—because the object still may turn out to be a false positive or become lost later on—called a provisionally designated minor planet. After the observation arc is accurate enough to predict its future location, a minor planet is formally designated and receives a number. It is then a numbered minor planet. Finally, in the third step, it may be named by its discoverers. However, only a small fraction of all minor planets have been named. The vast majority are either numbered or have still only a provisional designation. Example of the naming process: – provisional designation upon discovery on 24 April 1932 – formal designation, receives an official number 1862 Apollo – named minor planet, receives a name, the alphanumeric code is dropped Provisional designation A newly discovered minor planet is given a provisional designation. For example, the provisional designation consists of the year of discovery (2002) and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name (e.g. 433 Eros). The formal naming convention uses parentheses around the number, but dropping the parentheses is quite common. Informally, it is common to drop the number altogether or to drop it after the first mention when a name is repeated in running text. 
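The half-month and sequence code in a provisional designation can be unpacked mechanically. The sketch below implements the standard Minor Planet Center scheme as commonly described: the first letter gives the half-month (A = first half of January, skipping I and never using Z), the second letter gives the order within that half-month (skipping I), and a trailing number counts completed 25-letter cycles. As a worked example, 1932 HA — the designation under which 1862 Apollo, cited above, was announced — decodes to the first object of the second half of April 1932, matching the 24 April 1932 discovery date.

```python
# Decode a minor-planet provisional designation such as "1932 HA" or "2002 AT4".
# Implements the standard MPC scheme as understood here: first letter = half-month
# (A = Jan 1-15, B = Jan 16-31, ..., skipping I, Z unused), second letter = order
# within that half-month (skipping I), trailing digits = completed 25-letter cycles.
import re

HALF_MONTH = "ABCDEFGHJKLMNOPQRSTUVWXY"      # 24 half-months; I and Z unused
ORDER = "ABCDEFGHJKLMNOPQRSTUVWXYZ"          # 25 letters; I unused
MONTHS = ["January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"]

def decode_provisional(designation: str) -> str:
    m = re.fullmatch(r"(\d{4}) ([A-HJ-Y])([A-HJ-Z])(\d*)", designation)
    if not m:
        raise ValueError(f"not a provisional designation: {designation!r}")
    year, half, seq, cycles = m.groups()
    idx = HALF_MONTH.index(half)             # 0 = first half of January
    month, first_half = MONTHS[idx // 2], idx % 2 == 0
    order = ORDER.index(seq) + 1 + 25 * int(cycles or 0)
    when = f"{'first' if first_half else 'second'} half of {month} {year}"
    return f"object no. {order} announced for the {when}"

print(decode_provisional("1932 HA"))   # second half of April 1932, object no. 1
print(decode_provisional("2002 AT4"))  # first half of January 2002, no. 119 (arbitrary example)
```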
Minor planets that have been given a number but not a name keep their provisional designation, e.g. (29075) 1950 DA. Because modern discovery techniques are finding vast numbers of new asteroids, they are increasingly being left unnamed. The earliest discovered to be left unnamed was for a long time (3360) 1981 VA, now 3360 Syrinx. In November 2006 its position as the lowest-numbered unnamed asteroid passed to (now 3708 Socus), and in May 2021 to . On rare occasions, a small object's provisional designation may become used as a name in itself: the then-unnamed gave its "name" to a group of objects that became known as classical Kuiper belt objects ("cubewanos") before it was finally named 15760 Albion in January 2018. A few objects are cross-listed as both comets and asteroids, such as 4015 Wilson–Harrington, which is also listed as 107P/Wilson–Harrington. Numbering Minor planets are awarded an official number once their orbits are confirmed. With the increasing rapidity of discovery, these are now six-figure numbers. The switch from five figures to six figures arrived with the publication of the Minor Planet Circular (MPC) of October 19, 2005, which saw the highest-numbered minor planet jump from 99947 to 118161. Naming The first few asteroids were named after figures from Greek and Roman mythology, but as such names started to dwindle the names of famous people, literary characters, discoverers' spouses, children, colleagues, and even television characters were used. Gender The first asteroid to be given a non-mythological name was 20 Massalia, named after the Greek name for the city of Marseille. The first to be given an entirely non-Classical name was 45 Eugenia, named after Empress Eugénie de Montijo, the wife of Napoleon III. For some time only female (or feminized) names were used; Alexander von Humboldt was the first man to have an asteroid named after him, but his name was feminized to 54 Alexandra. This unspoken tradition lasted until 334 Chicago was named; even then, female names showed up in the list for years after. Eccentric As the number of asteroids began to run into the hundreds, and eventually, in the thousands, discoverers began to give them increasingly frivolous names. The first hints of this were 482 Petrina and 483 Seppina, named after the discoverer's pet dogs. However, there was little controversy about this until 1971, upon the naming of 2309 Mr. Spock (the name of the discoverer's cat). Although the IAU subsequently banned pet names as sources, eccentric asteroid names are still being proposed and accepted, such as 4321 Zero, 6042 Cheshirecat, 9007 James Bond, 13579 Allodd and 24680 Alleven, and 26858 Misterrogers. Discoverer's name A well-established rule is that, unlike comets, minor planets may not be named after their discoverer(s). One way to circumvent this rule has been for astronomers to exchange the courtesy of naming their discoveries after each other. An exception to this rule is 96747 Crespodasilva, which was named after its discoverer, Lucy d'Escoffier Crespo da Silva, because she died shortly after the discovery, at age 22. Languages Names were adapted to various languages from the beginning. 1 Ceres, Ceres being its Anglo-Latin name, was actually named Cerere, the Italian form of the name. German, French, Arabic, and Hindi use forms similar to the English, whereas Russian uses a form, Tserera, similar to the Italian. In Greek, the name was translated to Δήμητρα (Demeter), the Greek equivalent of the Roman goddess Ceres. 
In the early years, before it started causing conflicts, asteroids named after Roman figures were generally translated in Greek; other examples are Ἥρα (Hera) for 3 Juno, Ἑστία (Hestia) for 4 Vesta, Χλωρίς (Chloris) for 8 Flora, and Πίστη (Pistis) for 37 Fides. In Chinese, the names are not given the Chinese forms of the deities they are named after, but rather typically have a syllable or two for the character of the deity or person, followed by 神 'god(dess)' or 女 'woman' if just one syllable, plus 星 'star/planet', so that most asteroid names are written with three Chinese characters. Thus Ceres is 穀神星 'grain goddess planet', Pallas is 智神星 'wisdom goddess planet', etc. Physical properties of comets and minor planets Commission 15 of the International Astronomical Union is dedicated to the Physical Study of Comets & Minor Planets. Archival data on the physical properties of comets and minor planets are found in the PDS Asteroid/Dust Archive. This includes standard asteroid physical characteristics such as the properties of binary systems, occultation timings and diameters, masses, densities, rotation periods, surface temperatures, albedoes, spin vectors, taxonomy, and absolute magnitudes and slopes. In addition, European Asteroid Research Node (E.A.R.N.), an association of asteroid research groups, maintains a Data Base of Physical and Dynamical Properties of Near Earth Asteroids. Most detailed information is available from :Category: Minor planets visited by spacecraft and :Category: Comets visited by spacecraft. See also Groups of minor planets List of minor planets Dwarf planet Quasi-satellite Small Solar System body Solar System Notes References External links Minor Planet Center Logarithmic graph of asteroid discoveries from 1801-2015
417018
https://en.wikipedia.org/wiki/Java%20Desktop%20System
Java Desktop System
Java Desktop System, briefly known as OpenSolaris Desktop, is a legacy desktop environment developed first by Sun Microsystems and then by Oracle Corporation after the 2010 Oracle acquisition of Sun. Java Desktop System is available for Solaris and was once available for Linux. The Linux version was discontinued after Solaris was released as open source software in 2005. Java Desktop System aims to provide a system familiar to the average computer user with a full suite of office productivity software such as an office suite, a web browser, email, calendaring, and instant messaging. Despite being known as the Java Desktop System, it is not actually written in Java. Rather, it is built around a tweaked version of GNOME along with other common free software projects, which are written mostly in C and C++. The name reflected Sun's promotion of the product as an outlet for corporate users to deploy software written for the Java platform. Versions Sun first bundled a preview release of GNOME 1.4 on a separate CD for Solaris 8. JDS version 2 included: Java GNOME (using the Blueprint theme) StarOffice Mozilla Evolution MP3 and CD player Java Media Framework's Java Media Player Gaim multi-service instant messaging RealPlayer JDS Release 2 was available for Solaris and for the SuSE-based Linux distribution. JDS Release 3 was released in 2005. It was included with Solaris 10 — upon installation of Solaris, one has the choice of using either the CDE or JDS. It was based on GNOME 2.6 and available only for the Solaris 10 platform. OpenSolaris Desktop OpenSolaris received its own version of the Java Desktop System. OpenSolaris Desktop was tied to the OpenSolaris operating system, and did not have its own release schedule. OpenSolaris Desktop 01 (released October 28, 2005) was based on GNOME 2.10 and OpenSolaris Desktop 02 (released December 23, 2005) was based on GNOME 2.12. The last version was released with the release of OpenSolaris 2009.6, and was based on Gnome 2.24. It also included Firefox 3.1, OpenOffice 3 and Sun VirtualBox. The OpenSolaris Desktop line of the Java Desktop System became defunct with the end of the OpenSolaris project. Availability With the end of the OpenSolaris project, JDS Release 3 is now the last release of the project on a currently supported operating system Solaris 10. Newer Solaris based operating systems have abandoned the Java Desktop System. Solaris 11 and projects based upon the OpenSolaris codebase such as OpenIndiana use a stock version of GNOME. See also Project Looking Glass References External links Oracle Documentation OS News review December 2003 eWeek Review December 2003 Desktop environments Java platform software Oracle software Sun Microsystems software Desktop environments based on GTK Free desktop environments GNOME Software forks
38453252
https://en.wikipedia.org/wiki/Global%20Information%20Governance%20Day
Global Information Governance Day
Global Information Governance Day (GIGD) is a day that occurs on the third Thursday in February. The purpose of Global Information Governance Day is to raise the awareness of information governance. The annual observance was started by Garth Landers, Tamir Sigal, and Barclay T. Blair in 2012. Information governance is the enforcement of desirable behavior in the creation, use, archiving, and deletion of information held by an organization. Gartner Inc., an information technology research and advisory firm, defines information governance as the specification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archival and deletion of information. It includes the processes, roles, standards and metrics that ensure the effective and efficient use of information in enabling an organization to achieve its goals. February is Information Governance Month, coordinated by the American Health Information Management Association. The celebration is coordinated and promoted by information governance experts. History Records management deals with the retention and disposition of records. A record can either be a physical, tangible object, or digital information such as a database, application data, and e-mail. The lifecycle was historically viewed as the point of creation to the eventual disposal of a record. As content generation exploded in recent decades, and regulations and compliance issues increased, traditional records management failed to keep pace. A more comprehensive platform for managing records and information became necessary to address all phases of the lifecycle, which led to the advent of information governance. Information governance (IG) goes beyond retention and disposition to include privacy, access controls, and other compliance issues. In electronic discovery, or e-discovery, electronically stored information is searched for relevant data by attorneys and placed on legal hold. IG includes consideration of how this content is held and controlled for e-discovery, and also provides a platform for defensible disposition and compliance. Additionally, metadata often accompanies electronically stored data and can be of great value to the enterprise if stored and managed correctly. With all of these additional considerations that go beyond traditional records management, IG emerged as a platform for organizations to define policies at the enterprise level, across multiple jurisdictions. IG then also provides for the enforcement of these policies into the various repositories of information, data, and records. Information governance was given national recognition in November 2011 with a directive from President Obama to overhaul current records management processes within the government to encompass current needs more comprehensively. Information governance has been a growing trend, even becoming the theme of the annual ARMA International Conference in 2012. While Records Managers are becoming aware of IG, there is still little awareness among many organizations. Global Information Governance Day was established in 2013 to raise this awareness. 
See also National Archives Records Management Enterprise content management Information technology governance Information security governance References External links Association of Records Managers and Administrators Gartner Information Governance Forrester Research information governance blog Barclay Blair Information Governance Definition EPA 10 Reasons for RM American Health Information Management Association Information technology management Content management systems Public records International observances February observances Holidays and observances by scheduling (nth weekday of the month)
18744049
https://en.wikipedia.org/wiki/The%20Viral%20Factory
The Viral Factory
The Viral Factory is a full-service advertising agency based in Shoreditch, United Kingdom. History The Viral Factory was founded in 2001 by Ed Robinson and Matt Smith. Their first venture was a collaboration with director Adam Stewart and creatives Richard Peretti and Gary Lathwell to create Headrush, an in-house promotional viral which gained the fledgling company its first web audience. Another collaboration with Adam Stewart, Moontruth, helped establish the viral as a tool for public and media notoriety. One of The Viral Factory's first major corporate campaigns was a series of virals for the UK/European launch of the United States brand Trojan Condoms. This in turn led to an increase in global blue-chip clients and further viral campaigns with brands such as Microsoft, Ford and Coca-Cola. In 2010, The Viral Factory closed its USA branch. Style and content The Viral Factory works by feeding basic human emotions with anarchic versions of reality to get its clients' message across, often using a mockumentary film technique or computer-generated animation to convince the viewer that the footage is real. The Viral Factory has, on occasion, used ‘covert seeding’ to amplify the supposed authenticity of its footage, particularly in the Levi ‘Freedom to Move’ campaign of 2006. Notable clients Trojan Condoms — The Trojan campaign parodied Olympic events, substituting them with sexual "sports", such as Pelvic Power Lifting and Masters of Precision Vaulting. BBC Health and Education — GI Jonny is a parody of a retro children's action toy television commercial. Sponsored by the BBC to raise AIDS awareness amongst 16- to 25-year-old British males, starring 'GI Jonny' and 'Captain Bareback'. Samsung — An animation created using the product to illustrate its functions. Controversy ‘Moontruth’ Playing into the hands of conspiracy theorists, a film was leaked to the public that supposedly proved that man had never in fact landed on the moon and that the moon landing had been staged in a television studio. This viral hoax led to 3,000 people, taken in by the footage, calling NASA to complain about its dishonesty in claiming to have conquered the moon. Ford Ka campaign. A viral entitled ‘Cat’ for the Ford Ka found notoriety in 2006, depicting a cat getting its head chopped off in the ‘Ka’ roof. The viral caused outrage, which led both the advertising agency that allegedly commissioned the viral and Ford, the client, to claim that they had never approved the viral's release. Awards Cannes Cyber Gold Lion: Trojan Games (2004), Revenge Philips amBX (2006) EPICA Gold: Axe/Lynx ‘Ravenstoke’ (2005) London International Awards Gold, Viral: ‘Fingerskilz’ Hewlett Packard (2007) Webby Awards Not for profit People's Voice Winner: ‘GI Jonny’ BBC Learning (2008) D&AD Yellow Pencil, Viral Animation & Motion Graphics & YouTube 2007 awards nomination, Most creative for ‘How we met’ Samsung Electronics (2008) See also Viral marketing Seeding agency Viral video List of Internet phenomena Webby Awards References External links The Viral Factory The Guardian Online - New Media & Advertising Business Week - Viral Video Grows Up BBC News - Top Ten Viral Videos DigitalArts - The Viral Factory BrandChannel - Toni Smith Marketing companies of the United Kingdom Marketing companies established in 2001 Shoreditch
54360025
https://en.wikipedia.org/wiki/Casync
Casync
casync (content-addressable synchronisation) is a Linux software utility designed to distribute frequently updated file system images over the Internet. Utility According to its creator Lennart Poettering, casync is inspired by rsync and Git, as well as tar. casync is intended for use with Internet of Things (IoT), container, virtual machine (VM), portable service, and operating system (OS) images, as well as for backups and home directory synchronization. casync splits images into variable-size segments, uses SHA-256 checksums, and aims to work with content delivery networks (CDNs). Available for Linux only, packages are available for Ubuntu, Fedora and Arch Linux. Similar software Similar software that delivers file system images includes Docker, with its layered tarballs, and OSTree. See also BitTorrent Data deduplication Flatpak InterPlanetary File System SquashFS zsync References 2017 software Backup software for Linux Data synchronization Free backup software Free file transfer software Free network-related software Linux software Network file transfer protocols Software that uses Meson Unix network-related software
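The core idea behind casync, indexing data by the SHA-256 hash of its contents so that unchanged chunks never need to be transferred twice, can be illustrated with a short self-contained sketch. This is a conceptual illustration rather than casync's actual algorithm: real casync uses a rolling-hash chunker producing variable-size chunks, a compressed on-disk chunk store and index files, while the sketch below uses fixed-size chunks and an in-memory store, and all names in it are invented for the example.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed size for simplicity; casync itself uses variable-size chunks


def store_image(data: bytes, chunk_store: dict) -> list:
    """Split data into chunks, store each under its SHA-256 digest,
    and return the index (list of digests) needed to rebuild it."""
    index = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store[digest] = chunk          # identical chunks collapse to one entry
        index.append(digest)
    return index


def fetch_image(index: list, chunk_store: dict) -> bytes:
    """Reassemble an image from its index, pulling only the listed chunks."""
    return b"".join(chunk_store[digest] for digest in index)


if __name__ == "__main__":
    store = {}
    v1 = b"A" * 10000 + b"B" * 10000
    v2 = b"A" * 10000 + b"C" * 10000        # second version shares its first half with v1

    index_v1 = store_image(v1, store)
    index_v2 = store_image(v2, store)

    shared = len(set(index_v1) & set(index_v2))
    print(f"chunks in store: {len(store)}, shared between versions: {shared}")
    assert fetch_image(v2 := index_v2, store) == b"A" * 10000 + b"C" * 10000
```

Tools like casync go further by choosing chunk boundaries from the content itself, so that an insertion near the start of an image does not shift, and thereby invalidate, every later chunk.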
26805464
https://en.wikipedia.org/wiki/IAPM%20%28mode%29
IAPM (mode)
Integrity-aware parallelizable mode (IAPM) is a mode of operation for cryptographic block ciphers. As its name implies, it allows for a parallel mode of operation for higher throughput. Encryption and authentication At the time of its creation, IAPM was one of the first cipher modes to provide both authentication and privacy in a single pass. (In earlier authenticated encryption designs, two passes would be required: one to encrypt, and a second to compute a MAC.) IAPM was proposed for use in IPsec. Other authenticated encryption with associated data (AEAD) schemes also provide privacy and authentication in a single pass. IAPM has mostly been supplanted by Galois/counter mode (GCM). See also OCB mode References Block cipher modes of operation Authenticated-encryption schemes
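Since the article notes that IAPM has largely been supplanted by Galois/counter mode, the sketch below shows what single-pass authenticated encryption looks like from an application's point of view, using AES-GCM rather than IAPM (IAPM itself is not exposed by common Python cryptography libraries). It assumes the third-party cryptography package is installed; one call produces both the ciphertext and the integrity tag, instead of a separate encryption pass followed by a MAC pass.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                  # 96-bit nonce; must never repeat for a given key
plaintext = b"attack at dawn"
associated_data = b"message header"     # authenticated but not encrypted

# Encryption and authentication happen together; the 16-byte tag is appended to the ciphertext.
ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)

# Decryption verifies the tag and raises cryptography.exceptions.InvalidTag on any tampering.
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
assert recovered == plaintext
```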
1850300
https://en.wikipedia.org/wiki/Myron%20W.%20Krueger
Myron W. Krueger
Myron Krueger (born 1942 in Gary, Indiana) is an American computer artist who developed early interactive works. He is also considered to be one of the first generation of virtual reality and augmented reality researchers. While earning a Ph.D. in Computer Science at the University of Wisconsin–Madison, Krueger worked on a number of early interactive computer artworks. In 1969, he collaborated with Dan Sandin, Jerry Erdman and Richard Venezky on "Glowflow," a computer-controlled light-and-sound environment that responded to the people within it. Krueger went on to develop Metaplay, an integration of visuals, sounds, and responsive techniques into a single framework. In this, the computer was used to create a unique real-time relationship between the participants in the gallery and the artist in another building. In 1971, his "Psychic Space" used a sensory floor to perceive the participants' movements around the environment. A later project, "Videoplace," was funded by the National Endowment for the Arts, and a two-way exhibit was shown at the Milwaukee Art Museum in 1975. From 1974 to 1978, Krueger performed computer graphics research at the Space Science and Engineering Center of the University of Wisconsin–Madison in exchange for institutional support for his "Videoplace" work. In 1978, he joined the computer science faculty at the University of Connecticut, where he taught courses in hardware, software, computer graphics and artificial intelligence. "Videoplace" has been exhibited widely in both art and science contexts in the United States and Canada, and it was also shown in Japan. It was included in the SIGGRAPH Art Show in 1985 and 1990. "Videoplace" was also the featured exhibit at SIGCHI (Computer-Human Interaction Conference) in 1985 and 1989, and at the 1990 Ars Electronica Festival. Instead of taking the virtual reality track of head-mounted display and data glove (which would come later in the 1980s), he investigated projections onto walls. Krueger later used the hardware from Videoplace for another piece, Small Planet. In this work, participants are able to fly over a small, computer-generated, 3D planet. Flying is done by holding one's arms out, like a child pretending to fly, and leaning left or right and moving up or down. Small Planet was shown at SIGGRAPH '93, Interaction '97 (Ogaki, Japan) and Mediartech '98 (Florence, Italy). He envisioned the art of interactivity, as opposed to art that happens to be interactive: the idea that the space of interactions between humans and computers was itself worth exploring. The focus was on the possibilities of interaction itself, rather than on an art project that happens to have some response to the user. Though his work went somewhat unheralded for many years as mainstream VR thinking moved down a path that culminated in the "goggles 'n gloves" archetype, his legacy has experienced greater interest as more recent technological approaches (such as CAVE and Powerwall implementations) move toward the unencumbered interaction approaches championed by Krueger. Bibliography Myron Krueger. Artificial Reality, Addison-Wesley, 1983. Myron Krueger. Artificial Reality 2, Addison-Wesley Professional, 1991.
References External links http://webarchive.loc.gov/all/20020914215836/http://bubblegum.parsons.edu/~praveen/thesis/html/wk05_1.html https://web.archive.org/web/20070929094921/http://www.artmuseum.net/w2vr/timeline/Krueger.html https://web.archive.org/web/20051122121405/http://www.siggraph.org/artdesign/gallery/S98/pione/pione3/krueger.html http://www.ctheory.net/articles.aspx?id=328 http://www.medienkunstnetz.de/works/videoplace/ Virtual reality pioneers American digital artists People in information technology People from Gary, Indiana University of Wisconsin–Madison alumni 1942 births Living people
700674
https://en.wikipedia.org/wiki/Tr%20%28Unix%29
Tr (Unix)
tr is a command in Unix, Plan 9, Inferno, and Unix-like operating systems. It is an abbreviation of translate or transliterate, indicating its operation of replacing or removing specific characters in its input data set. Overview The utility reads a byte stream from its standard input and writes the result to the standard output. As arguments, it takes two sets of characters (generally of the same length), and replaces occurrences of the characters in the first set with the corresponding elements from the second set. For example, tr 'abcd' 'jkmn' maps all characters a to j, b to k, c to m, and d to n. The character set may be abbreviated by using character ranges. The previous example could be written: tr 'a-d' 'jkmn' In POSIX-compliant versions of tr, the set represented by a character range depends on the locale's collating order, so it is safer to avoid character ranges in scripts that might be executed in a locale different from that in which they were written. Ranges can often be replaced with POSIX character sets such as [:alpha:]. The s flag causes tr to compress sequences of identical adjacent characters in its output to a single character. For example, tr -s '\n' replaces sequences of one or more newline characters with a single newline. The d flag causes tr to delete all occurrences of the specified set of characters from its input. In this case, only a single character set argument is used. The following command removes carriage return characters. tr -d '\r' The c flag indicates the complement of the first set of characters. The invocation tr -cd '[:alnum:]' therefore removes all non-alphanumeric characters. Implementations The original version of tr was written by Douglas McIlroy and was introduced in Version 4 Unix. The version of tr bundled in GNU coreutils was written by Jim Meyering. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. It is also available in the OS-9 shell. A tr command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2. The command has also been ported to the IBM i operating system. Most versions of tr, including GNU tr and classic Unix tr, operate on single-byte characters and are not Unicode compliant. An exception is the Heirloom Toolchest implementation, which provides basic Unicode support. Ruby and Perl also have an internal tr operator, which operates analogously. Tcl's string map command is more general in that it maps strings to strings while tr maps characters to characters. See also sed List of Unix commands GNU Core Utilities References External links tr(1) – Unix 8th Edition manual page. Usage examples at examplenow.com Unix text processing utilities Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands
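The character-mapping behaviour described above can be approximated in Python with str.maketrans and str.translate, which is a convenient way to prototype a tr-style transformation. This is an illustrative analogue rather than tr itself: the squeeze (-s) step is written with itertools.groupby, and Python's isalnum is Unicode-aware whereas [:alnum:] in the POSIX locale matches only ASCII.

```python
from itertools import groupby

text = "abcd\r\nabcd\n\n\nxyz!\n"

# tr 'abcd' 'jkmn'   -- map a->j, b->k, c->m, d->n
mapped = text.translate(str.maketrans("abcd", "jkmn"))

# tr -d '\r'         -- delete carriage returns (third argument of maketrans is the delete set)
no_cr = text.translate(str.maketrans("", "", "\r"))

# tr -cd '[:alnum:]' -- keep only alphanumeric characters (complement + delete)
alnum_only = "".join(ch for ch in text if ch.isalnum())

# tr -s '\n'         -- squeeze runs of newlines down to a single newline
squeezed = "".join(key if key == "\n" else "".join(group)
                   for key, group in groupby(text))

print(mapped, no_cr, alnum_only, squeezed, sep="\n---\n")
```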
162190
https://en.wikipedia.org/wiki/Computer%20telephony%20integration
Computer telephony integration
Computer telephony integration, also called computer–telephone integration or CTI, is a common name for any technology that allows interactions on a telephone and a computer to be integrated or coordinated. The term is predominantly used to describe desktop-based interaction for helping users be more efficient, though it can also refer to server-based functionality such as automatic call routing. Common functions By application type CTI applications tend to run on either a user's desktop, or an unattended server. Common desktop functions provided by CTI applications Screen popping - Call information display (caller's number (ANI), number dialed (DNIS)) and screen pop on answer, with or without using calling line data. Generally this is used to search a business application for the caller's details. Dialing - Automatic dialing and computer-controlled dialing (power dial, preview dial, and predictive dial). Phone control - Includes call control (answer, hang up, hold, conference, etc.) and feature control (DND, call forwarding, etc.). Transfers - Coordinated phone and data transfers between two parties (i.e., passing on the screen pop with the call). Call center - Allows users to log in as a call center agent and control their agent state (Ready, Busy, Not ready, Break, etc.). Common server functions provided by CTI applications Call routing - The automatic routing of calls to a new destination based on criteria normally involving a database lookup of the caller's number (ANI) or number dialed (DNIS). Advanced call reporting functions - Using the detailed data that comes from CTI to provide better-than-normal call reporting. Voice recording integration - Using data from CTI to enrich the data stored against recorded calls. By connection type Computer-phone connections can be split into two categories: First-party call control Operates as if there is a direct connection between the user's computer and the phone set. Examples are a modem or a phone plugged directly into the computer. Typically, only the computer associated with the phone can control it by sending commands directly to the phone, and thus this type of connection is suitable for desktop applications only. The computer can generally control all the functions of the phone at the computer user's discretion. Third-party call control Interactions between arbitrary numbers of computers and telephones are made through and coordinated by a dedicated telephony server. Consequently, the server governs which information and functions are available to a user. The user's computer generally connects to the telephony server over the local network. History and main CTI technologies The origins of CTI can be found in simple screen population (or "screen pop") technology. This allows data collected from the telephone system to be used to query databases with customer information and populate that data instantaneously on the customer service representative's screen. The net effect is that the agent already has the required screen on his/her terminal before speaking with the customer. This technology started gaining widespread adoption in markets like North America and Western Europe. Several standards had a major impact on the 'normalization' of the industry, which had previously been fully closed and proprietary to each PBX/ACD vendor. On the software level, the interface most widely adopted by vendors is the CSTA standard, which is approved by the standards body ITU.
Other well-known CTI standards in the industry are JTAPI, TSAPI and TAPI: JTAPI, the Java Telephony API, is promoted by Sun; TSAPI was originally promoted by AT&T (later Lucent, then Avaya) and Novell; Microsoft pushed its own initiative as well, and thus TAPI was born, with support mostly from Windows applications. All of these standards required the PBX vendor to write a specific driver, and initially support for this was slow. Among the key players in this area, Lucent played a big role and IBM acquired ROLM Inc, a US pioneer in ACDs, in an attempt to normalize all major PBX vendor interfaces with its CallPath middleware. This attempt failed when it sold this company to Siemens AG and gradually divested in the area. A pioneer startup that combined the technologies of voice digitization, Token Ring networking, and time-division multiplexing was ZTEL of Wilmington, Massachusetts. ZTEL's computer-based voice and data network combined user-programmable voice call processing features, protocol conversion for automated "data call processing," database-driven directory and telset definitions, and custom LSI chipset technology. ZTEL ceased operation in 1986. Two other important players were Digital Equipment Corporation and Tiger Software (now Mondago). Digital Equipment Corporation developed CT Connect, which includes vendor-abstraction middleware. CT Connect was then sold to Dialogic, which in turn was purchased by Intel. This CTI software, known as CT Connect, was most recently sold in 2005 to Envox Worldwide. Tiger Software produced the SmartServer suite, which was primarily aimed at allowing CRM application vendors to add CTI functionality to their existing applications with minimal effort. Later, after changing its name to Mondago, Tiger Software went on to produce the Go Connect server application, which is aimed at helping other CTI vendors integrate with a wider range of telephone systems. By 2008, most PBX vendors had aligned themselves behind one or two of the TAPI, CSTA or TSAPI standards. The TSAPI advocates were: Avaya, Telrad. The CSTA advocates were: Siemens (now Unify), Aastra, DeTeWe, Toshiba, Panasonic. The majority (see main TAPI article for detail) preferred TAPI. A few vendors promoted proprietary standards: Mitel, Broadsoft, Digium and most hosted platforms. CT Connect and Go Connect thus provided an important translation middle-layer, allowing the PBX to communicate in its preferred protocol while an application communicates using its own preferred protocol. Many of the early CTI vendors and developers have changed hands over the years. An example is Nabnasset, an Acton, Massachusetts firm that developed a CORBA-based CTI solution for a client and then decided to make it into a general product. It merged with Quintus, a customer relationship management company, which went bankrupt and was purchased by Avaya Telecommunications. Smaller organisations have also survived from the early days and have leveraged their heritage to thrive. However, many of the 1980s startups that were inspired by the "Bell Breakup" and the coming competitive telephony marketplace did not survive the decade. On the hardware level, there was a paradigm shift beginning in 1993, with emerging standards from the IETF, which led to several new players like Dialogic, Brooktrout (now part of Dialogic), Natural MicroSystems (also now part of Dialogic) and Aculab offering telephony interfacing boards for various networks and elements.
Until 2011, it was the makers of telephone systems that implemented CTI technologies such as TAPI and CSTA. But after this time, a wave of independently made handsets became popular. These handsets would connect to the telephone systems using standards such as SIP, and consumers could easily buy their telephone system from one vendor and their handsets from another. However, this situation led to poor-quality CTI since the protocols (i.e., SIP) were not really suitable for third-party control. So, handset vendors started to add support for CTI directly. Initially this would be over proprietary HTTP methods, but in time uaCSTA (aka TR/87) became popular, and by 2016 most SIP handsets supported uaCSTA control. These include: Snom (the first to pioneer it), Yealink, Akuvox, Panasonic and Aastra. See also Automatic number identification (ANI) Automatic call distributor Dialed Number Identification Service (DNIS) Predictive dialer Screen pop Telephony Application Programming Interface (TAPI) Telephony Server Application Programming Interface (TSAPI) Computer-supported telecommunications applications (CSTA) Multi-Vendor Integration Protocol External links User Agent CSTA (uaCSTA) - TR/87 - ECMA International Telephone service enhanced features
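The screen-pop and call-routing functions described in this article are ultimately event handling: a telephony server reports that a call is ringing, and the application looks the caller up and presents or routes to the matching record. The sketch below is purely hypothetical and does not use any real TAPI, TSAPI or CSTA binding; every class, field and function name in it (CallEvent, find_customer, show_screen_pop, route_call) is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CallEvent:
    """Minimal stand-in for a 'call ringing' event a CTI server might deliver."""
    caller_number: str      # ANI - the caller's number
    dialed_number: str      # DNIS - the number the caller dialed

# Toy customer database keyed by phone number (hypothetical data).
CUSTOMERS = {"+15551230000": {"name": "Ada Lovelace", "account": "A-1001"}}

def find_customer(number: str):
    return CUSTOMERS.get(number)

def show_screen_pop(customer, event: CallEvent) -> None:
    # In a real desktop CTI client this would open the caller's record in the CRM.
    name = customer["name"] if customer else "Unknown caller"
    print(f"POP: {name} calling {event.dialed_number} from {event.caller_number}")

def route_call(event: CallEvent) -> str:
    # Server-side CTI: pick a destination based on a lookup of the caller's number.
    return "vip_queue" if find_customer(event.caller_number) else "general_queue"

if __name__ == "__main__":
    event = CallEvent(caller_number="+15551230000", dialed_number="+15559990000")
    show_screen_pop(find_customer(event.caller_number), event)
    print("routed to:", route_call(event))
```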
43989914
https://en.wikipedia.org/wiki/Windows%2010
Windows 10
Windows 10 is a major release of Microsoft's Windows NT operating system. It is the direct successor to Windows 8.1, which was released nearly two years earlier. It was released to manufacturing on July 15, 2015, and later to retail on July 29, 2015. Windows 10 was made available for download via MSDN and TechNet, as a free upgrade for retail copies of Windows 8 and Windows 8.1 users via the Windows Store, and to Windows 7 users via Windows Update. Windows 10 receives new builds on an ongoing basis, which are available at no additional cost to users, along with additional test builds of Windows 10, which are available to Windows Insiders. Devices in enterprise environments can receive these updates at a slower pace, or use long-term support milestones that only receive critical updates, such as security patches, over their ten-year lifespan of extended support. Windows 10 received generally positive reviews upon its original release. Critics praised Microsoft's decision to provide the desktop-oriented interface in line with previous versions of Windows, contrasting the tablet-oriented approach of Windows 8, although Windows 10's touch-oriented user interface mode was criticized for containing regressions upon the touch-oriented interface of its predecessor. Critics also praised the improvements to Windows 10's bundled software over Windows 8.1, Xbox Live integration, as well as the functionality and capabilities of the Cortana personal assistant and the replacement of Internet Explorer with Microsoft Edge. However, media outlets have been critical of the changes to operating system behaviors, including mandatory update installation, privacy concerns over data collection performed by the OS for Microsoft and its partners, and adware-like tactics used to promote the operating system on its release. Microsoft initially aimed to have Windows 10 installed on over one billion devices within three years of its release; that goal was ultimately reached almost five years after release on March 16, 2020. By January 2018, Windows 10 surpassed Windows 7 as the most popular version of Windows worldwide. It is estimated that 82% of Windows PCs, 61% of all PCs (the rest being older Windows versions and other operating systems such as macOS and Linux), and 27% of all devices (including mobile, tablet and console) are running Windows 10. On June 24, 2021, Microsoft announced Windows 10's successor, Windows 11, which was released on October 5, 2021. Windows 10 is the final version of Windows which supports 32-bit processors (IA-32 and ARMv7-based) and devices with BIOS firmware. Its successor, Windows 11, requires a device that uses UEFI firmware and a 64-bit processor in any supported architecture (x86-64 and ARMv8), though certain workarounds exist that can be used to install it on devices with legacy BIOS firmware. Development At the Microsoft Worldwide Partner Conference in 2011, Andrew Lees, the chief of Microsoft's mobile technologies, said that the company intended to have a single software ecosystem for PCs, phones, tablets, and other devices: "We won't have an ecosystem for PCs, and one for phones, and one for tablets; they'll all come together." In December 2013, technology writer Mary Jo Foley reported that Microsoft was working on an update to Windows 8 codenamed "Threshold", after a planet in its Halo franchise.
Similarly to "Blue" (which became Windows 8.1), Foley described Threshold, not as a single operating system, but as a "wave of operating systems" across multiple Microsoft platforms and services, quoting Microsoft sources, scheduled for the second quarter of 2015. She also stated that one of the goals for Threshold was to create a unified application platform and development toolkit for Windows, Windows Phone and Xbox One (which all use a similar kernel based on Windows NT). At the Build Conference in April 2014, Microsoft's Terry Myerson unveiled an updated version of Windows 8.1 (build 9697) that added the ability to run Windows Store apps inside desktop windows and a more traditional Start menu in place of the Start screen seen in Windows 8. The new Start menu takes after Windows 7's design by using only a portion of the screen and including a Windows 7-style application listing in the first column. The second column displays Windows 8-style app tiles. Myerson said that these changes would occur in a future update, but did not elaborate. Microsoft also unveiled the concept of a "universal Windows app", allowing Windows Store apps created for Windows 8.1 to be ported to Windows Phone 8.1 and Xbox One while sharing a common codebase, with an interface designed for different device form factors, and allowing user data and licenses for an app to be shared between multiple platforms. Windows Phone 8.1 would share nearly 90% of the common Windows Runtime APIs with Windows 8.1 on PCs. Screenshots of a Windows build purported to be Threshold were leaked in July 2014, showing the previously presented Start menu and windowed Windows Store apps, followed by a further screenshot of a build identifying itself as "Windows Technical Preview", numbered 9834, in September 2014, showing a new virtual desktop system, a notification center, and a new File Explorer icon. Announcement On September 30, 2014, Microsoft officially announced that Threshold would be unveiled during a media event as Windows 10. Myerson said that Windows 10 would be Microsoft's "most comprehensive platform ever", providing a single, unified platform for desktop and laptop computers, tablets, smartphones, and all-in-one devices. He emphasized that Windows 10 would take steps towards restoring user interface mechanics from Windows 7 to improve the experience for users on non-touch devices, noting criticism of Windows 8's touch-oriented interface by keyboard and mouse users. Despite these concessions, Myerson noted that the touch-optimized interface would evolve as well on 10. In regards to Microsoft naming the new operating system Windows 10 instead of Windows 9, Terry Myerson said that "based on the product that's coming, and just how different our approach will be overall, it wouldn't be right to call it Windows 9." He also joked that they could not call it "Windows One" (alluding to several recent Microsoft products with a similar brand, such as OneDrive, OneNote, and the Xbox One) because Windows 1.0 already existed. At a San Francisco conference in October 2014, Tony Prophet, Microsoft's Vice President of Windows Marketing, said that Windows 9 "came and went", and that Windows 10 would not be "an incremental step from Windows 8.1," but "a material step. We're trying to create one platform, one eco-system that unites as many of the devices from the small embedded Internet of Things, through tablets, through phones, through PCs and, ultimately, into the Xbox." 
Further details surrounding Windows 10's consumer-oriented features were presented during another media event held on January 21, 2015, entitled "Windows 10: The Next Chapter". The keynote featured the unveiling of Cortana integration within the operating system, new Xbox-oriented features, Windows 10 Mobile, an updated Office Mobile suite, Surface Hub (a large-screened Windows 10 device for enterprise collaboration based upon Perceptive Pixel technology), along with HoloLens, augmented reality eyewear, and an associated platform for building apps that can render holograms through HoloLens. Additional developer-oriented details surrounding the "Universal Windows Platform" concept were revealed and discussed during Microsoft's Build developers' conference. Among them was the unveiling of "Islandwood", which provides a middleware toolchain for compiling Objective-C-based software (particularly iOS) to run as universal apps on Windows 10 and Windows 10 Mobile. A port of Candy Crush Saga made using the toolkit, which shared much of its code with the iOS version, was demonstrated, alongside the announcement that the King-developed game would be bundled with Windows 10 at launch. At the 2015 Ignite conference, Microsoft employee Jerry Nixon stated that Windows 10 would be the "last version of Windows", a statement that Microsoft confirmed was "reflective" of its view of the operating system being a "service" with new versions and updates to be released over time. In 2021, however, Microsoft announced that Windows 10 would be succeeded on compatible hardware by Windows 11, and that Windows 10 support will end on October 14, 2025. Release and marketing On June 1, 2015, Microsoft announced that Windows 10 would be released on July 29. On July 20, 2015, Microsoft began "Upgrade Your World", an advertising campaign centering on Windows 10, with the premiere of television commercials in Australia, Canada, France, Germany, Japan, the United Kingdom, and the United States. The commercials focused on the tagline "A more human way to do", emphasizing new features and technologies supported by Windows 10 that sought to provide a more "personal" experience to users. The campaign culminated with launch events in thirteen cities on July 29, 2015, which celebrated "the unprecedented role our biggest fans played in the development of Windows 10". Features Windows 10 makes its user experience and functionality more consistent between different classes of device and addresses most of the shortcomings in the user interface that were introduced in Windows 8. Windows 10 Mobile, the successor to Windows Phone 8.1, shared some user interface elements and apps with its PC counterpart. Windows 10 supports universal apps, an expansion of the Metro-style apps first introduced in Windows 8. Universal apps can be designed to run across multiple Microsoft product families with nearly identical code, including PCs, tablets, smartphones, embedded systems, Xbox One, Surface Hub and Mixed Reality. The Windows user interface was revised to handle transitions between a mouse-oriented interface and a touchscreen-optimized interface based on available input devices; particularly on 2-in-1 PCs, both interfaces include an updated Start menu which incorporates elements of Windows 7's traditional Start menu with the tiles of Windows 8.
Windows 10 also introduced the Microsoft Edge web browser, a virtual desktop system, a window and desktop management feature called Task View, support for fingerprint and face recognition login, new security features for enterprise environments, and DirectX 12. The Windows Runtime app ecosystem was revised into the Universal Windows Platform (UWP). These universal apps are made to run across multiple platforms and device classes, including smartphones, tablets, Xbox One consoles, and other devices compatible with Windows 10. Windows apps share code across platforms, have responsive designs that adapt to the needs of the device and available inputs, can synchronize data between Windows 10 devices (including notifications, credentials, and allowing cross-platform multiplayer for games), and are distributed through the Microsoft Store (rebranded from Windows Store since September 2017). Developers can allow "cross-buys", where purchased licenses for an app apply to all of the user's compatible devices, rather than only the one they purchased on (e.g., a user purchasing an app on PC is also entitled to use the smartphone version at no extra cost). The ARM version of Windows 10 allows running applications for x86 processors through 32-bit software emulation. On Windows 10, Microsoft Store serves as a unified storefront for apps, video content, and eBooks. Windows 10 also allows web apps and desktop software (using either Win32 or .NET Framework) to be packaged for distribution on Microsoft Store. Desktop software distributed through Windows Store is packaged using the App-V system to allow sandboxing. User interface and desktop A new iteration of the Start menu is used on the Windows 10 desktop, with a list of places and other options on the left side, and tiles representing applications on the right. The menu can be resized, and expanded into a full-screen display, which is the default option in Tablet mode. A new virtual desktop system was added by a feature known as Task View, which displays all open windows and allows users to switch between them, or switch between multiple workspaces. Universal apps, which previously could be used only in full screen mode, can now be used in self-contained windows similarly to other programs. Program windows can now be snapped to quadrants of the screen by dragging them to the corner. When a window is snapped to one side of the screen, Task View appears and the user is prompted to choose a second window to fill the unused side of the screen (called "Snap Assist"). The Windows system icons were also changed. Charms have been removed; their functionality in universal apps is accessed from an App commands menu on their title bar. In its place is Action Center, which displays notifications and settings toggles. It is accessed by clicking an icon in the notification area, or dragging from the right of the screen. Notifications can be synced between multiple devices. The Settings app (formerly PC Settings) was refreshed and now includes more options that were previously exclusive to the desktop Control Panel. Windows 10 is designed to adapt its user interface based on the type of device being used and available input methods. It offers two separate user interface modes: a user interface optimized for mouse and keyboard, and a "Tablet mode" designed for touchscreens.
Users can toggle between these two modes at any time, and Windows can prompt or automatically switch when certain events occur, such as disabling Tablet mode on a tablet if a keyboard or mouse is plugged in, or when a 2-in-1 PC is switched to its laptop state. In Tablet mode, programs default to a maximized view, and the taskbar contains a back button and hides buttons for opened or pinned programs by default; Task View is used instead to switch between programs. The full screen Start menu is used in this mode, similarly to Windows 8, but scrolls vertically instead of horizontally. System security Windows 10 incorporates multi-factor authentication technology based upon standards developed by the FIDO Alliance. The operating system includes improved support for biometric authentication through the Windows Hello platform. Devices with supported cameras (requiring infrared illumination, such as Intel RealSense) allow users to log in with iris or face recognition, similarly to Kinect. Devices with supported readers allow users to log in through fingerprint recognition. Support was also added for palm-vein scanning through a partnership with Fujitsu in February 2018. Credentials are stored locally and protected using asymmetric encryption. In 2017, researchers demonstrated that Windows Hello could be bypassed on fully updated Windows 10 1703 with a color printout of a person's picture taken with an IR camera. In 2021, researchers were again able to bypass the Windows Hello functionalities by using custom hardware disguised as a camera, which presented an IR photo of the owner's face. In addition to biometric authentication, Windows Hello supports authentication with a PIN. By default, Windows requires a PIN to consist of four digits, but can be configured to permit more complex PINs. However, a PIN is not merely a simpler password. While passwords are transmitted to domain controllers, PINs are not. They are tied to one device, and if compromised, only one device is affected. Backed by a Trusted Platform Module (TPM) chip, Windows uses PINs to create strong asymmetric key pairs. As such, the authentication token transmitted to the server is harder to crack. In addition, whereas weak passwords may be broken via rainbow tables, TPM causes the much-simpler Windows PINs to be resilient to brute-force attacks. When Windows 10 was first introduced, multi-factor authentication was provided by two components: Windows Hello and Passport (not to be confused with the Passport platform of 1998). Later, Passport was merged into Windows Hello. The enterprise edition of Windows 10 offers additional security features; administrators can set up policies for the automatic encryption of sensitive data, selectively block applications from accessing encrypted data, and enable Device Guard, a system which allows administrators to enforce a high-security environment by blocking the execution of software that is not digitally signed by a trusted vendor or Microsoft. Device Guard is designed to protect against zero-day exploits, and runs inside a hypervisor so that its operation remains separated from the operating system itself. Command line The console windows based on Windows Console (for any console app, not just PowerShell and Windows Command Prompt) can now be resized without any restrictions, can be made to cover the full screen, and can use standard keyboard shortcuts, such as those for cut, copy, and paste. Other features such as word wrap and transparency were also added.
These functions can be disabled to revert to the legacy console if needed. The Anniversary Update added Windows Subsystem for Linux (WSL), which allows the installation of a user space environment from a supported Linux distribution that runs natively on Windows. The subsystem translates Linux system calls to those of the Windows NT kernel (full system call compatibility is only claimed as of WSL 2, included in a later Windows update). The environment can execute the Bash shell and 64-bit command-line programs (WSL 2 also supports 32-bit Linux programs and graphics, assuming supporting software is installed, as well as GPU support for other uses). Windows applications cannot be executed from the Linux environment, and vice versa. Linux distributions for Windows Subsystem for Linux are obtained through Microsoft Store. The feature initially supported an Ubuntu-based environment; Microsoft announced in May 2017 that it would add Fedora and OpenSUSE environment options as well. Storage requirements To reduce the storage footprint of the operating system, Windows 10 automatically compresses system files. The system can reduce the storage footprint of Windows by approximately 1.5 GB for 32-bit systems and 2.6 GB for 64-bit systems. The level of compression used is dependent on a performance assessment performed during installations or by OEMs, which tests how much compression can be used without harming operating system performance. Furthermore, the Refresh and Reset functions use runtime system files instead, making a separate recovery partition redundant, allowing patches and updates to remain installed following the operation, and further reducing the amount of space required for Windows 10 by up to 12 GB. These functions replace the WIMBoot mode introduced on Windows 8.1 Update, which allowed OEMs to configure low-capacity devices with flash-based storage to use Windows system files out of the compressed WIM image typically used for installation and recovery. Windows 10 also includes a function in its Settings app that allows users to view a breakdown of how their device's storage capacity is being used by different types of files, and determine whether certain types of files are saved to internal storage or an SD card by default. Online services and functionality Windows 10 introduces Microsoft Edge, a new default web browser. It initially featured a new standards-compliant rendering engine derived from Trident, and also includes annotation tools and integration with other Microsoft platforms present within Windows 10. Internet Explorer 11 is maintained on Windows 10 for compatibility purposes, but is deprecated in favor of Edge and will no longer be actively developed. In January 2020, the initial version of Edge was succeeded by a new iteration derived from the Chromium project and the Blink layout engine; the old Edge based on EdgeHTML is now called 'Microsoft Edge Legacy'. The legacy version of Edge is currently being replaced by the new Chromium-based Edge via Windows Update, though this version can also be downloaded manually. Every Windows 10 version from 20H2, which was released on October 20, 2020, will come with the new version of the browser preinstalled. The Windows 10 October 2020 update added a price comparison tool to the Edge browser. Windows 10 incorporates a universal search box located alongside the Start and Task View buttons, which can be hidden or condensed into a single button.
Previous versions featured Microsoft's intelligent personal assistant Cortana, which was first introduced with Windows Phone 8.1 in 2014, and supports both text and voice input. Many of its features are a direct carryover from Windows Phone, including integration with Bing, a Notebook feature for managing personal information, as well as searching for files, playing music, launching applications and setting reminders or sending emails. Since the November 2019 update, Microsoft has begun to downplay Cortana as part of a repositioning of the product towards enterprise use, with the May 2020 update removing its Windows shell integration and consumer-oriented features. Microsoft Family Safety is replaced by Microsoft Family, a parental controls system that applies across Windows platforms and Microsoft online services. Users can create a designated family, and monitor and restrict the actions of users designated as children, such as access to websites, enforcing age ratings on Microsoft Store purchases, and other restrictions. The service can also send weekly e-mail reports to parents detailing a child's computer usage. Unlike previous versions of Windows, child accounts in a family must be associated with a Microsoft account, which allows these settings to apply across all Windows 10 devices that a particular child is using. Windows 10 also offers the Wi-Fi Sense feature originating from Windows Phone 8.1; users can optionally have their device automatically connect to suggested open hotspots, and share their home network's password with contacts (either via Skype, People, or Facebook) so they may automatically connect to the network on a Windows 10 device without needing to manually enter its password. Credentials are stored in an encrypted form on Microsoft servers and sent to the devices of the selected contacts. Passwords are not viewable by the guest user, and the guest user is not allowed to access other computers or devices on the network. Wi-Fi Sense is not usable on 802.1X-encrypted networks. Adding "_optout" at the end of the SSID will also block the corresponding network from being used for this feature. Universal calling and messaging apps for Windows 10 are built in as of the November 2015 update: Messaging, Skype Video, and Phone. These offer built-in alternatives to the Skype download and sync with Windows 10 Mobile. Multimedia and gaming Windows 10 provides greater integration with the Xbox ecosystem. Xbox SmartGlass is succeeded by the Xbox Console Companion (formerly the Xbox app), which allows users to browse their game library (including both PC and Xbox console games), and Game DVR is also available using a keyboard shortcut, allowing users to save the last 30 seconds of gameplay as a video that can be shared to Xbox Live, OneDrive, or elsewhere. Windows 10 also allows users to control and play games from an Xbox One console over a local network. The Xbox Live SDK allows application developers to incorporate Xbox Live functionality into their apps, and future wireless Xbox One accessories, such as controllers, are supported on Windows with an adapter. Microsoft also intends to allow cross-purchases and save synchronization between Xbox One and Windows 10 versions of games; Microsoft Studios games such as ReCore and Quantum Break are intended to be exclusive to Windows 10 and Xbox One. Candy Crush Saga and Microsoft Solitaire Collection are also automatically installed upon installation of Windows 10.
Windows 10 adds native game recording and screenshot capture ability using the newly introduced Game Bar. Users can also have the OS continuously record gameplay in the background, which then allows the user to save the last few moments of gameplay to the storage device. Windows 10 adds FLAC and HEVC codecs and support for the Matroska media container, allowing these formats to be opened in Windows Media Player and other applications. DirectX 12 Windows 10 includes DirectX 12, alongside WDDM 2.0. Unveiled in March 2014 at GDC, DirectX 12 aims to provide "console-level efficiency" with "closer to the metal" access to hardware resources, and reduced CPU and graphics driver overhead. Most of the performance improvements are achieved through low-level programming, which allows developers to use resources more efficiently and reduce single-threaded CPU bottlenecking caused by abstraction through higher level APIs. DirectX 12 also features support for vendor-agnostic multi-GPU setups. WDDM 2.0 introduces a new virtual memory management and allocation system to reduce workload on the kernel-mode driver. Fonts Windows 10 adds three new default typefaces compared to Windows 8, but removes dozens of others. The removed typefaces are available in supplemental packs and may be added manually over a non-metered internet connection. Editions and pricing Windows 10 is available in five main editions for personal computing devices; the Home and Pro editions are sold at retail in most countries, and as pre-loaded software on new computers. Home is aimed at home users, while Pro is aimed at power users and small businesses. Each edition of Windows 10 includes all of the capabilities and features of the edition below it, and adds additional features oriented towards its market segment; for example, Pro adds additional networking and security features such as BitLocker, Device Guard, Windows Update for Business, and the ability to join a domain. Enterprise and Education, the other editions, contain additional features aimed towards business environments, and are only available through volume licensing. As part of Microsoft's unification strategies, Windows products that are based on Windows 10's common platform but meant for specialized platforms are marketed as editions of the operating system, rather than as separate product lines. An updated version of Microsoft's Windows Phone operating system for smartphones and tablets was branded as Windows 10 Mobile. Editions of Enterprise and Mobile will also be produced for embedded systems, along with Windows 10 IoT Core, which is designed specifically for use in small footprint, low-cost devices and Internet of Things (IoT) scenarios and is similar to Windows Embedded. On May 2, 2017, Microsoft unveiled Windows 10 S (referred to in leaks as Windows 10 Cloud), a feature-limited edition of Windows 10 which was designed primarily for devices in the education market (competing, in particular, with Chrome OS netbooks), such as the Surface Laptop that Microsoft also unveiled at this time. The OS restricts software installation to applications obtained from Microsoft Store; the device may be upgraded to Windows 10 Pro for a fee to enable unrestricted software installation. As a time-limited promotion, Microsoft stated that this upgrade would be free on the Surface Laptop until March 31, 2018.
Windows 10 S also contains a faster initial setup and login process, and allows devices to be provisioned using a USB drive with the Windows Intune for Education platform. In March 2018, Microsoft announced that Windows 10 S would be deprecated because of market confusion and would be replaced by "S Mode", an OEM option wherein Windows defaults to only allowing applications to be installed from Microsoft Store, but does not require payment in order to disable these restrictions. Preview releases A public beta program for Windows 10 known as the Windows Insider Program began with the first publicly available preview release on October 1, 2014. Insider preview builds are aimed towards enthusiasts and enterprise users for the testing and evaluation of updates and new features. Users of the Windows Insider program receive occasional updates to newer preview builds of the operating system and will continue to be able to evaluate preview releases after general availability (GA) in July 2015; this is in contrast to previous Windows beta programs, where public preview builds were released less frequently and only during the months preceding GA. Windows Insider builds continued being released after the release to manufacturing (RTM) of Windows 10. Public release On July 29, 2015, Microsoft officially announced that Windows 10 would be released for retail purchase as a free upgrade from earlier versions of Windows. In comparison to previous Windows releases, which had a longer turnover between the release to manufacturing (RTM) and general release to allow for testing by vendors (and in some cases, the development of "upgrade kits" to prepare systems for installation of the new version), an HP executive explained that because it knew Microsoft targeted the operating system for a 2015 release, the company was able to optimize its then-current and upcoming products for Windows 10 in advance of its release, negating the need for such a milestone. The general availability build of Windows 10, numbered 10240, was first released to Windows Insider channels for pre-launch testing on July 15, 2015, prior to its formal release. Although a Microsoft official said there would be no specific RTM build of Windows 10, 10240 was described as an RTM build by media outlets because it was released to all Windows Insider members at once (rather than to users on the "Fast ring" first), it no longer carried pre-release branding and desktop watermark text, and its build number had mathematical connections to the number 10 in reference to the operating system's naming. The Enterprise edition was released to volume licensing on August 1, 2015. Windows 10 is distributed digitally through the "Media Creation Tool", which is functionally identical to the Windows 8 online installer, and can also be used to generate an ISO image or USB install media. In-place upgrades are supported from most editions of Windows 7 with Service Pack 1 and Windows 8.1 with Update 1, while users with Windows 8 must first upgrade to Windows 8.1. Changing between architectures (e.g., upgrading from a 32-bit edition to a 64-bit edition) via in-place upgrades is not supported; a clean install is required. In-place upgrades may be rolled back to the device's previous version of Windows, provided that 30 days have not passed since installation, and backup files were not removed using Disk Cleanup.
Windows 10 was available in 190 countries and 111 languages upon its launch, and as part of efforts to "re-engage" with users in China, Microsoft also announced that it would partner with Qihoo and Tencent to help promote and distribute Windows 10 in China, and that Chinese PC maker Lenovo would provide assistance at its service centers and retail outlets for helping users upgrade to Windows 10. At retail, Windows 10 is priced similarly to editions of Windows 8.1, with U.S. prices set at $119 and $199 for Windows 10 Home and Pro respectively. A Windows 10 Pro Pack license allows upgrades from Windows 10 Home to Windows 10 Pro. Retail copies only ship on USB flash drive media; however, system builder copies still ship as DVD-ROM media. New devices shipping with Windows 10 were also released during the operating system's launch window. Windows RT devices cannot be upgraded to Windows 10. Free upgrade offer During its first year of availability, upgrade licenses for Windows 10 could be obtained at no charge for devices with a genuine license for an eligible edition of Windows 7 or 8.1. This offer did not apply to Enterprise editions, as customers under an active Software Assurance (SA) contract with upgrade rights are entitled to obtain Windows 10 Enterprise under their existing terms. All users running non-genuine copies of Windows, and those without an existing Windows 7 or 8 license, were ineligible for this promotion; although upgrades from a non-genuine version were possible, they result in a non-genuine copy of Windows 10. On the general availability build of Windows 10 (version 1507), to activate and generate the "digital entitlement" for Windows 10, the operating system must have first been installed as an in-place upgrade. During the free upgrade, a genuineticket.xml file is created in the background and the system's motherboard details are registered with a Microsoft Product Activation server. Once installed, the operating system can be reinstalled on that particular system via normal means without a product key, and the system's license will automatically be detected via online activation - in essence, the Microsoft Product Activation Server will remember the system's motherboard and give it the green light for product re-activation. Because of installation issues with Upgrade Only installs, the November Update (version 1511) included additional activation mechanisms. This build treated Windows 7 and Windows 8/8.1 product keys as Windows 10 product keys, meaning they could be entered during installation to activate the free license, without the need to upgrade first to "activate" the hardware with Microsoft's activation servers. For major Original Equipment Manufacturers (OEMs), Windows 8/8.1 and Windows 10 OEM product keys are embedded in the firmware of the motherboard, and if the correct edition of Windows 10 is present on the installation media, they are automatically entered during installation. Since the release of the Fall Creators Update (version 1709), Microsoft decided to release multi-edition installation media, to alleviate installation and product activation issues users experienced because of accidentally installing the wrong edition of Windows 10. The Windows Insider Preview version of Windows 10 automatically updated itself to the generally released version as part of the version progression and continues to be updated to new beta builds, as it had throughout the testing process.
Microsoft explicitly stated that Windows Insider was not a valid upgrade path for those running a version of Windows that is ineligible for the upgrade offer; although, if it was not installed with a license carried over from an in-place upgrade to 10 Insider Preview from Windows 7 or 8, the Insider Preview does remain activated as long as the user does not exit the Windows Insider program. The offer was promoted and delivered via the "Get Windows 10" application (also known as GWX), which was automatically installed via Windows Update ahead of Windows 10's release, and activated on systems deemed eligible for the upgrade offer. Via a notification area icon, users could access an application that advertised Windows 10 and the free upgrade offer, check device compatibility, and "reserve" an automatic download of the operating system upon its release. On July 28, a pre-download process began in which Windows 10 installation files were downloaded to some computers that had reserved it. Microsoft said that those who reserved Windows 10 would be able to install it through GWX in a phased rollout process. The operating system could alternatively be downloaded at any time using a separate "Media Creation Tool" setup program that allows for the creation of DVD or USB installation media. In May 2016, Microsoft announced that the free upgrade offer would be extended to users of assistive technologies; however, Microsoft did not implement any means of certifying eligibility for this offer, which some outlets thereby promoted as being a loophole to fraudulently obtain a free Windows 10 upgrade. Microsoft said that the loophole is not intended to be used in this manner. In November 2017, Microsoft announced that this program would end on December 31, 2017. However, another loophole was found that allowed Windows 7 and 8.1 users to upgrade to Windows 10 using existing licenses, even though the free upgrade offers officially ended in 2017. No word was given from Microsoft on whether it will be closed, and some outlets have continued to promote it as a free method of upgrading from the now-unsupported Windows 7. Licensing During upgrades, Windows 10 licenses are not tied directly to a product key. Instead, the license status of the system's current installation of Windows is migrated, and a "Digital license" (known as "Digital entitlement" in version 1511 or earlier) is generated during the activation process, which is bound to the hardware information collected during the process. If Windows 10 is reinstalled cleanly and there have not been any significant hardware changes since installation (such as a motherboard change), the online activation process will automatically recognize the system's digital entitlement if no product key is entered during installation. However, unique product keys are still distributed within retail copies of Windows 10. As with previous non-volume-licensed variants of Windows, significant hardware changes will invalidate the digital entitlement, and require Windows to be re-activated. Updates and support Unlike previous versions of Windows, Windows Update does not allow the selective installation of updates, and all updates (including patches, feature updates, and driver software) are downloaded and installed automatically. Users can only choose whether their system will reboot automatically to install updates when the system is inactive, or be notified to schedule a reboot.
If a wireless network is designated as "Metered"—a function which automatically reduces the operating system's background network activity to conserve limits on Internet usage—most updates are not downloaded until the device is connected to a non-metered network. Version 1703 allows wired (Ethernet) networks to be designated as metered, but Windows may still download certain updates while connected to a metered network. Beginning with version 2004 and the August 2020 security update, optional driver and non-security updates pushed via Windows Update are no longer downloaded and installed automatically. Users can access them via Settings > Update & Security > Windows Update > View optional updates. Updates can cause compatibility or other problems; a Microsoft troubleshooter program allows bad updates to be uninstalled. Under the Windows end-user license agreement, users consent to the automatic installation of all updates, features and drivers provided by the service, and implicitly consent "without any additional notice" to the possibility of features being modified or removed. The agreement also states, specifically for users of Windows 10 in Canada, that they may pause updates by disconnecting their device from the Internet. Windows Update can also use a peer-to-peer system for distributing updates; by default, users' bandwidth is used to distribute previously downloaded updates to other users, in combination with Microsoft servers. Users can instead choose to only use peer-to-peer updates within their local area network. Support lifecycle The original release of Windows 10 received mainstream support for five years, followed by five years of extended support, but this is subject to conditions. Microsoft's support lifecycle policy for the operating system notes that "Updates are cumulative, with each update built upon all of the updates that preceded it", that "a device needs to install the latest update to remain supported", and that a device's ability to receive future updates will depend on hardware compatibility, driver availability, and whether the device is within the OEM's "support period", a new aspect not accounted for in lifecycle policies for previous versions. This policy was first invoked in 2017 to block Intel Clover Trail devices from receiving the Creators Update, as Microsoft asserted that future updates "require additional hardware support to provide the best possible experience", and that Intel no longer provided support or drivers for the platform. Microsoft stated that these devices would no longer receive feature updates, but would still receive security updates through January 2023. Microsoft will continue to support at least one standard Windows 10 release until October 14, 2025. Feature updates Windows 10 is often described by Microsoft as being a "service", as it receives regular "feature updates" that contain new features and other updates and fixes. In April 2017, Microsoft stated that these updates would be released twice a year, every March and September, in the future. Mainstream builds of Windows 10, until and including 2004, were labeled "YYMM", with "YY" representing the two-digit year and "MM" representing the month of release. For example, version 1809 was released in September (the ninth month) of 2018. 
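To make the naming scheme concrete, the following is a minimal, purely illustrative Python sketch (it is not part of Windows or any Microsoft tooling) that interprets both the "YYMM" labels described above and the half-year "YYH1"/"YYH2" labels adopted from the 20H2 release onward, which are covered next.

```python
# Illustrative only: decode Windows 10 feature-update version labels.
# "YYMM" (used up to and including 2004) encodes year and month of release;
# "YYH1"/"YYH2" (used from 20H2 onward) encodes year and half of the year.

def describe_version(label: str) -> str:
    year = 2000 + int(label[:2])
    suffix = label[2:].upper()
    if suffix in ("H1", "H2"):
        half = "first" if suffix == "H1" else "second"
        return f"version {label}: released in the {half} half of {year}"
    month = int(suffix)  # e.g. "09" -> September
    return f"version {label}: released in month {month} of {year}"

if __name__ == "__main__":
    print(describe_version("1809"))  # month 9 of 2018, as in the example above
    print(describe_version("2004"))  # month 4 of 2020
    print(describe_version("20H2"))  # second half of 2020
```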
This was changed with the 20H2 release, where "MM" instead represents the half of the year in which the update was released, for example "H1" for the first half and "H2" for the second half. The pace at which feature updates are received by devices is dependent on which release channel is used. The default branch for all users of Windows 10 Home and Pro is "Semi-Annual Channel (Targeted)" (formerly "Current Branch", or "CB"), which receives stable builds after they are publicly released by Microsoft. Each build of Windows 10 is supported for 18 months after its original release. In enterprise environments, Microsoft officially intends that this branch is used for "targeted" deployments of newly released stable versions so that they can be evaluated and tested on a limited number of devices before a wider deployment. Once a stable build is certified by Microsoft and its partners as being suitable for broad deployment, the build is then released on the "Semi-Annual Channel" (formerly "Current Branch for Business", or "CBB"), which is supported by the Pro and Enterprise editions of Windows 10. The Semi-Annual Channel receives stable builds on a four-month delay from their release on the Targeted channel. Administrators can also use the "Windows Update for Business" system, as well as existing tools such as WSUS and System Center Configuration Manager, to organize structured deployments of feature updates across their networks. The Windows Insider branches receive unstable builds as they are released; the program is divided into two channels, "Dev" (which receives new builds immediately after their release) and "Beta" (whose releases are slightly delayed from their "Dev" release). Enterprise licensees may use the Windows 10 Enterprise LTSC (formerly LTSB) edition, where "LTSC" stands for "long-term servicing channel", which only receives quality-of-life updates (i.e., security patches) and has a full five- or ten-year support lifecycle for each build. This edition is designed for "special-purpose devices" that perform a fixed function (such as automated teller machines and medical equipment). For this reason, it excludes Cortana, Microsoft Store, and all bundled Universal Windows Platform apps (including but not limited to Microsoft Edge; hence these builds ship only with Internet Explorer as a browser). Microsoft director Stella Chernyak explained that "we have businesses [that] may have mission-critical environments where we respect the fact they want to test and stabilize the environment for a long time." Four LTSC builds have been released, correlating with the 1507, 1607, 1809, and 21H2 versions of Windows 10, respectively. In July 2017, Microsoft announced changes in the terminology for Windows branches as part of its effort to unify the update cadence with that of Office 365 ProPlus and Windows Server 2016. The branch system now defines two paces of upgrade deployment in enterprise environments: "targeted" initial deployment of a new version on selected systems immediately after its stable release for final testing, and "broad" deployment afterwards. Hence, "Current Branch" is now known as "Semi-Annual Channel (Targeted)", and "Current Branch for Business" for broad deployment is now referred to as "Semi-Annual Channel". In February 2019, Microsoft announced further changes to update delivery beginning with the release of version 1903: a single SAC would be released, SAC-T would be retired, and users would no longer be able to switch between channels. 
Instead, these updates can be deferred for 30 to 90 days, depending on how the device is configured to defer them. In April 2019, it was additionally announced that feature updates would no longer be automatically pushed to users. However, since the release of version 2004, feature updates are only pushed automatically to devices running a version that is nearing the end of service, and they can be paused for up to 35 days. Feature updates prior to version 1909 are distributed solely as an in-place upgrade installation, requiring the download of a complete operating system package (approximately 3.5 GB in size for 64-bit systems). Unlike previous builds, version 1909 is designed primarily as an update rollup version of 1903, focusing primarily on minor feature additions and enhancements. For upgrades to 1909 from 1903, a new delivery method was used in which the changes were delivered as part of the monthly cumulative update but left in a dormant state until the 1909 "enablement" patch is installed. The full upgrade process is still used for those using builds prior to 1903. Features in development In May 2017, Microsoft unveiled Fluent Design System (previously codenamed "Project Neon"), a revamp of Microsoft Design Language 2 that will include guidelines for the designs and interactions used within software designed for all Windows 10 devices and platforms. The new design language will include the more prominent use of motion, depth, and translucency effects. Microsoft stated that the implementation of this design language would be performed over time, and it had already started to implement elements of it in the Creators Update and Fall Creators Update. On December 7, 2016, Microsoft announced that, as part of a partnership with Qualcomm, it planned to introduce support for running Win32 software on the ARM architecture with a 32-bit x86 processor emulator in 2017. Terry Myerson stated that this move would enable the production of Qualcomm Snapdragon-based Windows devices with cellular connectivity and improved power efficiency over Intel-compatible devices, while still being capable of running the majority of existing Windows software (unlike the previous Windows RT, which was restricted to Windows Store apps). Microsoft is initially targeting this project towards laptops. Microsoft launched the branding Always Connected PCs in December 2017 to market Windows 10 devices with cellular connectivity, which included two ARM-based 2-in-1 laptops from Asus and HP featuring the Snapdragon 835 system-on-chip, and the announcement of a partnership between AMD and Qualcomm to integrate Qualcomm's Snapdragon X16 gigabit LTE modem with AMD's Ryzen Mobile platform. In August 2019, Microsoft began testing changes to its handling of the user interface on convertible devices—downplaying the existing "Tablet Mode" option in favor of presenting the normal desktop with optimizations for touch when a keyboard is not present, such as increasing the space between taskbar buttons and displaying the virtual keyboard when text fields are selected. In April 2021, the ability to run Linux applications using a graphical user interface, such as Audacity, directly in Windows was introduced as a preview. This feature would later be included as part of the updated Windows Subsystem for Linux 2 for Windows 11 only. System requirements The basic hardware requirements to install Windows 10 were initially the same as those for Windows 8.1 and Windows 8, and only slightly higher than for Windows 7 and Windows Vista. 
As of the May 2019 update, the minimum disk space requirement has been increased to 32 GB. In addition, on new installations, Windows permanently reserves up to 7 GB of disk space in order to ensure proper installation of future feature updates. The 64-bit variants require a CPU that supports certain instructions. Devices with low storage capacity must provide a USB flash drive or SD card with sufficient storage for temporary files during upgrades. Some pre-built devices may be described as "certified" by Microsoft. Certified tablets must include Power, Volume up, and Volume down keys; Windows and rotation lock keys are no longer required. As with Windows 8, all certified devices must ship with UEFI Secure Boot enabled by default. Unlike Windows 8, OEMs are no longer required to make Secure Boot settings user-configurable, meaning that devices may optionally be locked to run only Microsoft-signed operating systems. A supported infrared-illuminated camera is required for Windows Hello face authentication, and a supported fingerprint reader is required for Windows Hello fingerprint authentication. Device Guard requires a UEFI system with no third-party certificates loaded, and CPU virtualization extensions (including SLAT and IOMMU) enabled in firmware. Beginning with Intel Kaby Lake and AMD Bristol Ridge, Windows 10 is the only version of Windows that Microsoft will officially support on newer CPU microarchitectures. Terry Myerson stated that Microsoft did not want to make further investments in optimizing older versions of Windows and associated software for newer generations of processors. These policies were criticized by the media, who especially noted that Microsoft was refusing to support newer hardware (particularly Intel's Skylake CPUs, which were also originally targeted by the new policy with a premature end of support that was ultimately retracted) on Windows 8.1, a version of Windows that was still in mainstream support until January 2018. In addition, an enthusiast-created modification was released that disabled the check and allowed Windows 8.1 and earlier to continue to work on the platform. Windows 10 version 1703 and later do not support Intel Clover Trail system-on-chips, per Microsoft's stated policy of only providing updates for devices during their OEM support period. Starting with Windows 10 version 2004, Microsoft will require new OEM devices to use 64-bit processors, and will therefore cease the distribution of x86 (32-bit) variants of Windows 10 via OEM channels. The 32-bit variants of Windows 10 will remain available via non-OEM channels, and Microsoft will continue to "[provide] feature and security updates on these devices". This would later be followed by Windows 11 dropping 32-bit hardware support altogether, thus making Windows 10 the final version of Windows with a 32-bit release. Reception Critics characterized the initial release of Windows 10 as being rushed, citing the incomplete state of some of the operating system's bundled software, such as the Edge web browser, as well as the stability of the operating system itself on launch. However, TechRadar felt that it could be "the new Windows 7", citing the operating system's more familiar user interface, improvements to bundled apps, performance improvements, a "rock solid" search system, and the Settings app being more full-featured than its equivalents on 8 and 8.1. The Edge browser was praised for its performance, although it was not in a feature-complete state at launch. 
While considering them a "great idea in principle", the review expressed concerns about Microsoft's focus on the universal app ecosystem: It's by no means certain that developers are going to flock to Windows 10 from iOS and Android simply because they can convert their apps easily. It may well become a no-brainer for them, but at the moment a conscious decision is still required. Engadget was similarly positive, noting that the upgrade process was painless and that Windows 10's user interface had balanced aspects of Windows 8 with those of previous versions, with a more mature aesthetic. Cortana's always-on voice detection was considered to be its "true strength", with the review also citing its query capabilities and personalization features, but noting that it was not as pre-emptive as Google Now. Windows 10's stock applications were praised for being improved over their Windows 8 counterparts, and for supporting windowed modes. The Xbox app was also praised for its Xbox One streaming functionality, although its use over a wired network was recommended because of inconsistent quality over Wi-Fi. In conclusion, it was argued that "Windows 10 delivers the most refined desktop experience ever from Microsoft, and yet it's so much more than that. It's also a decent tablet OS, and it's ready for a world filled with hybrid devices. And, barring another baffling screwup, it looks like a significant step forward for mobile. Heck, it makes the Xbox One a more useful machine." Ars Technica panned the new Tablet mode interface for removing the charms and app switching, making the Start button harder to use by requiring users to reach for the button on the bottom-left rather than at the center of the screen when swiping with a thumb, and for making application switching less instantaneous through the use of Task View. Microsoft Edge was praised for being "tremendously promising" and "a much better browser than Internet Explorer ever was", but was criticized for its lack of functionality at launch. In conclusion, contrasting Windows 8 as being a "reliable" platform, albeit one consisting of unfinished concepts, Windows 10 was considered "the best Windows yet" and was praised for having a better overall concept in its ability to be "comfortable and effective" across a wide array of form factors, although it was judged buggier than previous versions of Windows were at launch. ExtremeTech felt that Windows 10 restricted the choices of users, citing its more opaque settings menus, its forcing users to give up bandwidth for the peer-to-peer distribution of updates, and its taking away user control of specific functions, such as updates, explaining that "it feels, once again, as if Microsoft has taken the seed of a good idea, like providing users with security updates automatically, and shoved the throttle to maximum." Windows 10 has also received criticism for deleting files without user permission after automatic updates. Critics have noted that Windows 10 heavily emphasizes freemium services and contains various advertising facilities. Some outlets have considered these to be a hidden "cost" of the free upgrade offer. 
Examples of these have included microtransactions in bundled games such as Microsoft Solitaire Collection, default settings that display promotions of "suggested" apps in the Start menu, "tips" on the lock screen that may contain advertising, ads displayed in File Explorer for Office 365 subscriptions in the Creators Update, and various advertising notifications displayed by default which promote Microsoft Edge when it is not set as the default web browser (including, in a September 2018 build, nag pop-ups displayed to interrupt the installation process of competitors). Market share and sales Windows 10 usage increased until August 2016, then plateaued; it eventually became more popular than Windows 7 in 2018 (though Windows 7 was still more used in some countries in Asia and Africa in 2019). As of March 2020, the operating system was running on over a billion devices, reaching the goal set by Microsoft two years after the initial deadline. Twenty-four hours after it was released, Microsoft announced that over 14 million devices were running Windows 10. On August 26, Microsoft said over 75 million devices were running Windows 10, in 192 countries, and on over 90,000 unique PC or tablet models. According to Terry Myerson, there were over 110 million devices running Windows 10 as of October 6, 2015. On January 4, 2016, Microsoft reported that Windows 10 had been activated on over 200 million devices since the operating system's launch in July 2015. According to StatCounter, Windows 10 overtook Windows 8.1 in December 2015. Iceland was the first country where Windows 10 was ranked first (not only on the desktop, but across all platforms), with several larger European countries following. For one week in late November 2016, Windows 10 took first rank from Windows 7 in the United States, before losing it again. By February 2017, Windows 10 was losing market share to Windows 7. In mid-January 2018, Windows 10 had a slightly higher global market share than Windows 7; it was noticeably more popular on weekends, while popularity varied widely by region, e.g. Windows 10 was then still behind Windows 7 in Africa and far ahead in some other regions such as Oceania. Update system changes Windows 10 Home is permanently set to download all updates automatically, including cumulative updates, security patches, and drivers, and users cannot individually select updates to install or not. Microsoft offers a diagnostic tool that can be used to hide updates and prevent them from being reinstalled, but only after they have already been installed and then uninstalled, without rebooting the system. Tom Warren of The Verge felt that, given web browsers such as Google Chrome had already adopted such an automatic update system, such a requirement would help to keep all Windows 10 devices secure, and felt that "if you're used to family members calling you for technical support because they've failed to upgrade to the latest Windows service pack or some malware disabled Windows Update then those days will hopefully be over." Concerns were raised that because of these changes, users would be unable to skip the automatic installation of updates that are faulty or cause issues with certain system configurations—although build upgrades will also be subject to public beta testing via the Windows Insider program. There were also concerns that the forced installation of driver updates through Windows Update, where they were previously designated as "optional", could cause conflicts with drivers that were installed independently of Windows Update. 
An example of such a situation occurred prior to the general release of the operating system, when an Nvidia graphics card driver that was automatically pushed to Windows 10 users via Windows Update caused issues that prevented the use of certain functions, or prevented their systems from booting at all. Criticism was also directed towards Microsoft's decision to no longer provide specific details on the contents of cumulative updates for Windows 10. On February 9, 2016, Microsoft retracted this decision and began to provide release notes for cumulative updates on the Windows website. Some users reported that during the installation of the November upgrade, some applications (particularly utility programs such as CPU-Z and Speccy) were automatically uninstalled during the upgrade process, and some default programs were reset to Microsoft-specified defaults (such as the Photos app, and Microsoft Edge for PDF viewing), both without warning. Further issues were discovered upon the launch of the Anniversary Update ("Redstone"), including a bug that caused some devices to freeze (which was addressed by cumulative update KB3176938, released on August 31, 2016), and that fundamental changes to how Windows handles webcams had caused many to stop working. In June 2017, a Redstone 3 Insider build (RS_EDGE_CASE on PC and rs_IoT on Mobile) was accidentally released to both Insider and non-Insider users on all Windows 10 devices, but the update was retracted, with Microsoft apologizing and releasing a note on their Windows Insider Program blog describing how to prevent the build from being installed on users' devices. According to Dona Sarkar, this was due to "an inadvertent deployment to the engineering system that controls which builds/which rings to push out to insiders." A Gartner analyst felt that Windows 10 Pro was becoming increasingly inappropriate for use in enterprise environments because of support policy changes by Microsoft, including the consumer-oriented upgrade lifecycle length, and only offering extended support for individual builds to Enterprise and Education editions of Windows 10. Critics have argued that Microsoft's update and testing practices had been affecting the overall quality of Windows 10. In particular, it was pointed out that Microsoft's internal testing departments had been prominently affected by a major round of layoffs undertaken by the company in 2014. Microsoft relies primarily on user testing and bug reports via the Windows Insider program (which may not always be of sufficient quality to identify a bug), as well as correspondence with OEMs and other stakeholders. In the wake of the known folder redirection data loss bug in version 1809, it was pointed out that bug reports describing the issue had been present on the Feedback Hub app for several months prior to the public release. Following the incident, Microsoft updated Feedback Hub so that users may specify the severity of a particular bug report. When announcing the resumption of 1809's rollout, Microsoft stated that it planned to be more transparent in its handling of update quality in the future, through a series of blog posts that will detail its testing process and the planned development of a "dashboard" that will indicate the rollout progress of future updates. 
Distribution practices Microsoft was criticized for the tactics that it used to promote its free upgrade campaign for Windows 10, including adware-like behaviors, using deceptive user interfaces to coax users into installing the operating system, downloading installation files without user consent, and making it difficult for users to suppress the advertising and notifications if they did not wish to upgrade to Windows 10. The upgrade offer was marketed and initiated using the "Get Windows 10" (GWX) application, which was first downloaded and installed via Windows Update in March 2015. Registry keys and group policies could be used to partially disable the GWX mechanism, but the installation of patches to the GWX software via Windows Update could reset these keys back to defaults, and thus reactivate the software. Third-party programs were also created to assist users in applying measures to disable GWX. In September 2015, it was reported that Microsoft was triggering automatic downloads of Windows 10 installation files on all compatible Windows 7 or 8.1 systems configured to automatically download and install updates, regardless of whether or not they had specifically requested the upgrade. Microsoft officially confirmed the change, claiming it was "an industry practice that reduces the time for installation and ensures device readiness." This move was criticized by users with data caps or devices with low storage capacity, as resources were consumed by the automatic downloads of up to 6 GB of data. Other critics argued that Microsoft should not have triggered any downloading of Windows 10 installation files without user consent. In October 2015, Windows 10 began to appear as an "Optional" update on the Windows Update interface, but pre-selected for installation on some systems. A Microsoft spokesperson said that this was a mistake, and that the download would no longer be pre-selected by default. However, on October 29, 2015, Microsoft announced that it planned to classify Windows 10 as a "recommended" update in the Windows Update interface sometime in 2016, which would cause an automatic download of installation files and a one-time prompt with a choice to install to appear. In December 2015, it was reported that a new advertising dialog had begun to appear, only containing "Upgrade now" and "Upgrade tonight" buttons, and no obvious method to decline installation besides the close button. In March 2016, some users also alleged that their Windows 7 and 8.1 devices had automatically begun upgrading to Windows 10 without their consent. In June 2016, the GWX dialog's behavior changed to make closing the window imply consent to a scheduled upgrade. Despite this, an InfoWorld editor disputed the claims that upgrades had begun without any consent at all; testing showed that the upgrade to Windows 10 would only begin once the user accepts the end-user license agreement (EULA) presented by its installer, and that not doing so would eventually cause Windows Update to time out with an error, thus halting the installation attempt. It was concluded that these users may have unknowingly clicked the "Accept" prompt without full knowledge that this would begin the upgrade. 
In December 2016, Microsoft's chief marketing officer Chris Capossela admitted that the company had "gone too far" by using this tactic, stating, "we know we want people to be running Windows 10 from a security perspective, but finding the right balance where you're not stepping over the line of being too aggressive is something we tried and for a lot of the year I think we got it right." On January 21, 2016, Microsoft was sued in small claims court by a user whose computer had attempted to upgrade to Windows 10 without her consent shortly after the release of the operating system. The upgrade failed, and her computer was left in a broken state thereafter, which disrupted the ability to run her travel agency. The court ruled in favor of the user and awarded her $10,000 in damages, but Microsoft appealed. However, in May 2016, Microsoft dropped the appeal and chose to pay the damages. Shortly after the suit was reported on by the Seattle Times, Microsoft confirmed it was updating the GWX software once again to add more explicit options for opting out of a free Windows 10 upgrade; the final notification was a full-screen pop-up window notifying users of the impending end of the free upgrade offer, and contained "Remind me later", "Do not notify me again" and "Notify me three more times" as options. In March 2019, Microsoft announced that it would display notifications informing users on Windows 7 devices of the upcoming end of extended support for the platform, and direct users to a website urging them to upgrade to Windows 10 or purchase new hardware. This dialog would be similar to the previous Windows 10 upgrade prompts, but would not explicitly mention Windows 10. Privacy and data collection Privacy advocates and other critics have expressed concern regarding Windows 10's privacy policies and its collection and use of customer data. Under the default "Express" settings, Windows 10 is configured to send various information to Microsoft and other parties, including the collection of user contacts, calendar data, and "associated input data" to personalize "speech, typing, and inking input", typing and inking data to improve recognition, allowing apps to use a unique "advertising ID" for analytics and advertising personalization (functionality introduced by Windows 8.1), and allowing apps to request the user's location data and send this data to Microsoft and "trusted partners" to improve location detection (Windows 8 had similar settings, except that location data collection did not include "trusted partners"). Users can opt out from most of this data collection, but telemetry data for error reporting and usage is also sent to Microsoft, and this cannot be disabled on non-Enterprise editions of Windows 10. Microsoft's privacy policy states, however, that "Basic"-level telemetry data is anonymized and cannot be used to identify an individual user or device. The use of Cortana also requires the collection of data "such as Your PC location, data from your calendar, the apps you use, data from your emails and text messages, who you call, your contacts and how often you interact with them on Your PC" to personalize its functionality. Rock Paper Shotgun writer Alec Meer argued that Microsoft's intent for this data collection lacked transparency, stating that "there is no world in which 45 pages of policy documents and opt-out settings split across 13 different settings screens and an external website constitutes 'real transparency'." 
Joel Hruska of ExtremeTech wrote that "the company that brought us the 'Scroogled' campaign now hoovers up your data in ways that would make Google jealous." However, it was also pointed out that the requirement for such vast usage of customer data had become a norm, citing the increased reliance on cloud computing and other forms of external processing, as well as similar data collection requirements for services on mobile devices such as Google Now and Siri. In August 2015, Russian politician Nikolai Levichev called for Windows 10 to be banned from use within the Russian government, as it sends user data to servers in the United States. The Russian government had passed a federal law requiring all online services to store the data of Russian users on servers within the country by September 2016 or be blocked. Writing for ZDNet, Ed Bott said that the lack of complaints by businesses about privacy in Windows 10 indicated "how utterly normal those privacy terms are in 2015." In a Computerworld editorial, Preston Gralla said that "the kind of information Windows 10 gathers is no different from what other operating systems gather. But Microsoft is held to a different standard than other companies". The Microsoft Services agreement reads that the company's online services may automatically "download software updates or configuration changes, including those that prevent you from accessing the Services, playing counterfeit games, or using unauthorized hardware peripheral devices." Critics interpreted this statement as implying that Microsoft would scan for and delete unlicensed software installed on devices running Windows 10. However, others pointed out that this agreement was specifically for Microsoft online services such as Microsoft account, Office 365, Skype, as well as Xbox Live, and that the offending passage most likely referred to digital rights management on Xbox consoles and first-party games, and not plans to police pirated video games installed on Windows 10 PCs. Despite this, some torrent trackers announced plans to block Windows 10 users, also arguing that the operating system could send information to anti-piracy groups that are affiliated with Microsoft. Writing about these allegations, Ed Bott of ZDNet compared Microsoft's privacy policy to Apple's and Google's and concluded that he "[didn't] see anything that looks remotely like Big Brother." Columnist Kim Komando argued that "Microsoft might in the future run scans and disable software or hardware it sees as a security threat," consistent with the Windows 10 update policy. In September 2019, Microsoft hid the option to create a local account during a fresh installation if a PC is connected to the internet. This move was criticized by users who did not want to use an online Microsoft account. Additionally, in Windows 10 Home, the first Microsoft account linked to the primary user's account can no longer be unlinked, but other users can unlink their own Microsoft accounts from their user accounts. In late July 2020, Windows Defender began to classify modifications of the hosts file that block Microsoft telemetry servers as being a severe security risk. See also Comparison of operating systems History of operating systems List of operating systems Microsoft Windows version history References External links Windows 10 release information from Microsoft 2015 software ARM operating systems IA-32 operating systems Proprietary operating systems Tablet operating systems 10 X86-64 operating systems
645139
https://en.wikipedia.org/wiki/Data%20dictionary
Data dictionary
A data dictionary, or metadata repository, as defined in the IBM Dictionary of Computing, is a "centralized repository of information about data such as meaning, relationships to other data, origin, usage, and format". Oracle defines it as a collection of tables with metadata. The term can have one of several closely related meanings pertaining to databases and database management systems (DBMS): A document describing a database or collection of databases An integral component of a DBMS that is required to determine its structure A piece of middleware that extends or supplants the native data dictionary of a DBMS Documentation The terms data dictionary and data repository indicate a more general software utility than a catalogue. A catalogue is closely coupled with the DBMS software. It provides the information stored in it to the user and the DBA, but it is mainly accessed by the various software modules of the DBMS itself, such as DDL and DML compilers, the query optimiser, the transaction processor, report generators, and the constraint enforcer. On the other hand, a data dictionary is a data structure that stores metadata, i.e., (structured) data about information. The software package for a stand-alone data dictionary or data repository may interact with the software modules of the DBMS, but it is mainly used by the designers, users and administrators of a computer system for information resource management. These systems maintain information on system hardware and software configuration, documentation, application and users as well as other information relevant to system administration. If a data dictionary system is used only by the designers, users, and administrators and not by the DBMS software, it is called a passive data dictionary. Otherwise, it is called an active data dictionary. When a passive data dictionary is updated, it is done so manually and independently from any changes to a DBMS (database) structure. With an active data dictionary, the dictionary is updated first and changes occur in the DBMS automatically as a result. Database users and application developers can benefit from an authoritative data dictionary document that catalogs the organization, contents, and conventions of one or more databases. This typically includes the names and descriptions of various tables (records or Entities) and their contents (fields) plus additional details, like the type and length of each data element. Another important piece of information that a data dictionary can provide is the relationship between Tables. This is sometimes referred to in Entity-Relationship diagrams, or if using Set descriptors, identifying which Sets database Tables participate in. In an active data dictionary constraints may be placed upon the underlying data. For instance, a Range may be imposed on the value of numeric data in a data element (field), or a Record in a Table may be FORCED to participate in a set relationship with another Record-Type. Additionally, a distributed DBMS may have certain location specifics described within its active data dictionary (e.g. where Tables are physically located). The data dictionary consists of record types (tables) created in the database by systems generated command files, tailored for each supported back-end DBMS. Oracle has a list of specific views for the "sys" user. This allows users to look up the exact information that is needed. 
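As an illustration of how such catalog views can be read programmatically, the sketch below queries Oracle's standard ALL_TAB_COLUMNS dictionary view to print a simple column-level listing. It is a minimal example only, assuming the python-oracledb driver, a reachable database, and placeholder credentials; it is not tied to any particular data dictionary product.

```python
# Minimal sketch: read column metadata from Oracle's data dictionary views.
# Assumes the python-oracledb driver and placeholder connection details.
import oracledb

def dump_columns(conn, owner: str) -> None:
    """Print table, column, type, length and nullability for one schema owner."""
    sql = """
        SELECT table_name, column_name, data_type, data_length, nullable
          FROM all_tab_columns
         WHERE owner = :owner
         ORDER BY table_name, column_id
    """
    with conn.cursor() as cur:
        for table, column, dtype, length, nullable in cur.execute(sql, owner=owner):
            print(f"{table}.{column}: {dtype}({length}) nullable={nullable}")

if __name__ == "__main__":
    # Placeholder credentials and DSN -- replace with real values.
    with oracledb.connect(user="scott", password="tiger", dsn="localhost/orclpdb1") as conn:
        dump_columns(conn, owner="SCOTT")
```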
Command files contain SQL statements for CREATE TABLE, CREATE UNIQUE INDEX, ALTER TABLE (for referential integrity), etc., using the specific statement required by that type of database. There is no universal standard as to the level of detail in such a document. Middleware In the construction of database applications, it can be useful to introduce an additional layer of data dictionary software, i.e. middleware, which communicates with the underlying DBMS data dictionary. Such a "high-level" data dictionary may offer additional features and a degree of flexibility that goes beyond the limitations of the native "low-level" data dictionary, whose primary purpose is to support the basic functions of the DBMS, not the requirements of a typical application. For example, a high-level data dictionary can provide alternative entity-relationship models tailored to suit different applications that share a common database. Extensions to the data dictionary also can assist in query optimization against distributed databases. Additionally, DBA functions are often automated using restructuring tools that are tightly coupled to an active data dictionary. Software frameworks aimed at rapid application development sometimes include high-level data dictionary facilities, which can substantially reduce the amount of programming required to build menus, forms, reports, and other components of a database application, including the database itself. For example, PHPLens includes a PHP class library to automate the creation of tables, indexes, and foreign key constraints portably for multiple databases. Another PHP-based data dictionary, part of the RADICORE toolkit, automatically generates program objects, scripts, and SQL code for menus and forms with data validation and complex joins. For the ASP.NET environment, Base One's data dictionary provides cross-DBMS facilities for automated database creation, data validation, performance enhancement (caching and index utilization), application security, and extended data types. Visual DataFlex provides the ability to use DataDictionaries as class files to form a middle layer between the user interface and the underlying database. The intent is to create standardized rules to maintain data integrity and enforce business rules throughout one or more related applications. Some industries use generalized data dictionaries as technical standards to ensure interoperability between systems. The real estate industry, for example, abides by RESO's Data Dictionary, with which the National Association of REALTORS mandates its MLSs comply through its policy handbook. This intermediate mapping layer for MLSs' native databases is supported by software companies which provide API services to MLS organizations. Platform-specific examples In the context of IBM i, developers use a data description specification (DDS) to describe data attributes in file descriptions that are external to the application program that processes the data. The sys.ts$ table in Oracle stores information about every table in the database. It is part of the data dictionary that is created when the Oracle Database is created. Typical attributes Here is a non-exhaustive list of typical items found in a data dictionary for columns or fields: Entity or form name or their ID (EntityID or FormID). The group this field belongs to. Field name, such as RDBMS field name Displayed field title. May default to field name if blank. Field type (string, integer, date, etc.) 
Dimension(s) such as min and max values, display width, or number of decimal places. Different field types may interpret this differently. An alternative is to have different attributes depending on field type. Field display order or tab order Coordinates on screen (if a positional or grid-based UI) Default value Prompt type, such as drop-down list, combo-box, check-boxes, range, etc. Is-required (Boolean) - If 'true', the value can't be blank, null, or only white-spaces Is-read-only (Boolean) Reference table name, if a foreign key. Can be used for validation or selection lists. Various event handlers or references to. Example: "on-click", "on-validate", etc. See event-driven programming. Format code, such as a regular expression or COBOL-style "PIC" statements Description or synopsis Database index characteristics or specification See also Data hierarchy Data modeling Database schema ISO/IEC 11179 Metadata registry Semantic spectrum Vocabulary OneSource Metadata repository References External links Yourdon, Structured Analysis Wiki, Data Dictionaries (Web archive) Octopai, Data Dictionary vs. Business Glossary Data management Data modeling Knowledge representation Metadata
31688426
https://en.wikipedia.org/wiki/Zywave
Zywave
Zywave, Inc. is a software company headquartered in Milwaukee, Wisconsin that provides software as a service (SaaS) products for insurance brokers and financial planners. Insurance product categories include agency management, data analysis, compliance, risk management and marketing tools, while its financial division produces planning tools primarily for wealth management. Following acquisitions in 2010 and 2011, Zywave now commands the largest market share for personal financial planning software. Its customer base contains more than 350,000 professional users across North America. Zywave also has offices in Carlsbad, California and Winnipeg, Canada. History Zywave was founded in 1995 by individuals at a regional insurance brokerage, Frank F. Haack & Associates, (later part of Willis) in Milwaukee, Wisconsin, who developed the products as customer service tools. They shared the software with other brokers in industry networking groups like Assurex Global and Intersure. The company then completely separated from Frank F. Haack in 2004. In 2008, Zywave entered into a strategic investment partnership with Vista Equity Partners, a San Francisco private equity firm. In 2010, Zywave announced the acquisition of Specific Software Solutions, a provider of workers' compensation software, and in 2011, Zywave acquired financial planning software developer Emerging Information Systems, Inc. The name "Zywave" was inspired by the axes and shape of a data graph. Awards Inc. 5000 honoree, 2007 and 2008 BenefitsPro Readers' Choice Awards 2006 Top Workplaces in Southeastern Wisconsin 2013 References Companies based in Milwaukee Software companies based in Wisconsin Software companies of the United States Software companies established in 1995 1995 establishments in Wisconsin
18640610
https://en.wikipedia.org/wiki/Perverse%20%28album%29
Perverse (album)
Perverse is the third studio album by British rock band Jesus Jones, released in 1993 on Food Records. After their international success following the release of Doubt (1991), Jesus Jones, especially band leader Mike Edwards, conceived Perverse as a darker, more contemporary album. Fusing rave and techno music into more traditional rock and pop song structures, the album is heavier than its predecessors with a much greater inclusion of industrial music and features lyrics that concern the future. Edwards wrote the lyrics of the album during the band's 1991 tour, using a Roland W-30 sampler to conceive songs in their earliest stages. According to Trouser Press, Perverse "enjoys the historical distinction of being the first album recorded entirely (except for Edwards' vocals) on computer." The band recorded the entire album onto floppy disks in Edwards' house, which were then used on his computer to turn the music into "zeroes and ones". Edwards described it as "the second rock album of the nineties," after The Young Gods' T.V. Sky, due to both albums embracing full-on computer technology. Although the band were ridiculed at the time for the recording process, it later became an influential technique. Upon its release, Perverse peaked at number 6 on the UK Albums Chart and was the start of the band's declining fortunes, although it still yielded three top 40 singles, "The Devil You Know" (which also reached number 1 on the US Modern Rock Tracks chart), "The Right Decision" and "Zeroes and Ones", making the album quite successful. The album received both mixed and positive reviews at the time, with some critics finding the album's production cluttered, but later reviews have been more favourable, and some have regarded the album as the band's best work. An extensive deluxe edition of the album was released in November 2014. Background With their second album Doubt, Jesus Jones became internationally successful, thanks to hit singles such as "Right Here, Right Now", which reached number 2 on the US Billboard Hot 100, "International Bright Young Thing", which reached number 7 on the UK Singles Chart, and "Real, Real, Real," which reached number 4 on the US Billboard Hot 100. According to music critic Stephen Thomas Erlewine, the album's best moments "showed that sample-driven dance club music could comfortably fit into pop music." For the album's follow-up, Perverse, Jesus Jones, especially band leader, lyricist and vocalist Mike Edwards and keyboardist Iain Baker, decided to venture further into the technological side of recording music. Erlewine described Edwards' aim for Perverse as "his mission to make techno palatable for the pop masses." Edwards and keyboardist Iain Baker felt that, despite living in a "very technological society", rock music had "stayed still" since the 1970s by not embracing current technology: "Technology gives musicians so much opportunity to vary what they do. It's unbelievable and appalling that they don't use it. [...] And it's not that by using technology you lose the warmth and humanity of acoustic instruments, because you don't. You can still create a warm and human sound. The Young Gods did it on the L'eau rouge album and the Aphex Twin is doing it now." Baker cited a song on Aphex Twin's then-new album Surfing on Sine Waves (1993), released under the Polygon Window moniker, as "doing just that" and called him "a true pioneer in music." Production Writing Band leader and vocalist Mike Edwards is the sole lyricist on Perverse. 
On previous albums, Edwards wrote complete versions of songs, and sent virtually complete demos to the rest of the band, who would then interpret the material in "their own unique way." However, on Perverse, he deliberately attempted to write what he felt was necessary for the music, and disregarded how it would sound live. He commented, "I remember writing "Who? Where? Why?" on the last album, thinking, damn, there's two guitarists, I've got to put a second guitar part in there. That becomes limiting, so on this record, I took a linear, very fascistic approach: 'this is the song, nothing else matters'. There were songs on the album where members of the band didn't play." Using a Roland W-30 sampler, which Edwards referred to as his "sketch pad," most of the songs on Perverse began as 15 second sketches that Edwards had "written in hotel bedrooms on Jesus Jones' last tour." Despite having written all the band's previous songs on guitar, Edwards wrote most of the songs on Perverse using a keyboard. In the words of Jon Lewin in Making Music, "he says because he knows the guitar so well, it keeps leading him back to the same riffs and chords. But as a two-fingered 'keyboard idiot', Mike found himself continually discovering new harmonies and intervals." Edwards used two different songwriting methods for the album, the self-described "collade effect": "Get a sample you really like, a musical phrase or just one sound you can make into a musical phrase. Make it do something you enjoy, then add an extra instrument on top, something that sets off the adrenalin, and then just keep collaging until at some point you think 'structure'," and the "cire perdu" method which "he likens to wax-loss casting, where a sculptor's original wax model melts away in the casting process to leave a perfect mould. Simply, he works up a version of a piece of music he likes, then builds his own song on top of that." Recording Edwards' aim for recording Perverse was to take the techniques and 'attitudes' of techno, a genre he viewed as the "most progressive and creative form of popular music at the moment," but use them to create a rock album. He chose this method partially as a concept and partially as he believed creating a rock album with this method would be a success. He told one journalist, "ultimately it made things more interesting for me as a writer, mostly because of the degree of editability at every stage of what you do - notes, timing, sounds - as long as the data exists, it's all changeable." One of the first ever albums to be wholly recorded through a computer, the music on Perverse was recorded onto floppy disks over two and a half months from 1991–92 at Edwards' house in north London, and was produced at London's Think Studios. The album cost almost £100,000, almost five times as much as Doubt, and was produced by Warne Livesey. Edwards summed up the recording sessions: "We did it in a classic techno way. I have an Atari and Cubase. I'd write all the songs on it, and the rest of the band would come in and play all their parts into the computer, using MIDI guitars and drum pads." While recording the album, Edwards "turned every song into binary codes, then fiddled with them until he had achieved a suitably hi-tech noise." With the entire album being recorded into the computer, "the whole album only existed in frequencies." Jerry de Borg's guitar is presented at 300 Hz to 8 kHz. The Roland GR-50 guitar synth was used to load the album's guitar parts into the computer. 
Al Jaworski's bass, meanwhile, became 20 Hz to 4 kHz, and as such, "there was no such thing as bass on the record," writes journalist Mark Reed. The rest of the band are also credited unusually on Perverse: the band's percussionist Gen for "drum type sounds," Iain Baker as "omnipresent," and Mike Edwards himself for "first generation (unsampled) vocals, and sole writer." The album is shaped by the prominent usage of the Roland JD-800 synthesiser, continuing a pattern in which each previous Jesus Jones album was also shaped by a new piece of technology. When Edwards first started writing the songs on Perverse, he envisioned the album as containing "these big buzzy synths, basically the equivalent of heavy metal guitars, but more exciting because less clichéd", and felt the JD-800 "was perfect for manufacturing hard, unusual sounds. The sounds I made on the JD are always the ones that our sound guy has to turn down." With Edwards having sequenced the album's music on his Cubase, Livesey continued to work on "structures, editing sounds, putting in subtle wooshes and swishes," which Edwards described as "enormous attention to detail." Edwards' vocals, which were recorded after the music, are the only live element on Perverse. In a 1992 interview prior to release, Edwards christened the album "the second rock album of the nineties," after T.V. Sky (1992) by The Young Gods, recorded in a similar fashion. Explaining this to Graham Reid of Elsewhere, he said "That's not me blowing my own trumpet. What I'm saying is, for us to make an album that reflects the society we're living in we use computer technology. That's a very natural thing to do - why isn't everyone? Why is it the retro stuff is getting the front covers when new music is what it's supposed to be about? I'm just trying to draw attention to the fact nothing is going on - and everybody knows it." Composition Music Compared to Doubt, Perverse is a much darker, more brooding album, and has been described as the band's heaviest album. Stephen Thomas Erlewine of AllMusic, describing Perverse as an artistic expansion of the band's sound, called the album "a synthesis of techno/rave dance music with traditional pop/rock songs and structures." Meanwhile, Douglas D. Keller of The Tech felt that, although many of the songs contain distinctly techno elements, the album itself is not overall a techno recording: "Songs such as 'Idiot Stare' make extensive use of repeating drum tracks, but there are fast and slow movements within the song which add a richness to this and other tracks that is missing in most techno singles." The band continued to be influenced by house music on the album, and according to Q, "hardcore runs through much of the LP, most notably on 'Zeroes and Ones' and 'Spiral'." According to Graham Reid, samples are either used as the starting point for the music or for further enhancement. Although the album shares Doubt's idea of extensive sampling, synthesisers and drum-style sounds, Perverse uses them to curate a driving sound. Edwards performs both "textured lead and backing vocals" on the album, which are interwoven with each other; he felt backing vocals are integral as they heavily contribute towards "the feel of a song," which is why the band treated them as "another instrument to use in the mix," and felt that "lack of confidence" is key to the album's vocal sound. Compared to the Young Gods, a primary influence on the album, Baker felt the music on Perverse was not as leftfield in style. 
"Zeroes and Ones" is dominated by "driving synth pattern, heavy rhythm section and big guitar riffs." "The Devil You Know" features tingly elements resembling Eastern music, starting with a notably high-pitched Iranian instrument. The synthesized basslines on "From Love To War" was influenced by LFO, whose 1991 album Frequencies was Edwards' favourite of that year. "Yellow Brown" and "Spiral" also bear a progressive rock influence. The former song features, as Simon Reynolds described them, "baleful, treated noises, and looped samples of voice low in the mix." "The Right Decision" is buoyed by "a firm, unfettered groove." While Edwards was listening to a lot of proto-ambient techno, he conceived "Spiral" as an attempt to create a piece of music "in a very techno way but without the beat, still have all the really powerful elements without the dictatorial thing of the beat. It had to have a sinister quality so we recorded it fast, varispeeded up, so my voice had that low demonic quality." Lyrics Edwards wrote each of the songs on the album "with a hint of his vision of the future included." Perverse also occasionally reveals contradictory lyrics, such as how "Magazine" appears to celebrate "the distortion of current evens in the media" while "Don't Believe It" is "a biting criticism of the media." In the album's liner notes, "Don't Believe It" is described with the caption: ""May '92, with a little ignorance and media manipulation, there is a whipping boy for every occasion. On our side, truth, decency, and the right way to talk, on the other side, our perfect enemy out for revenge." According to The Tech, "Zeroes and Ones", whose name is a reference to the binary code and the album's recording method, concerns "the increasing prominence of computers in daily life, from pocket calculators and shopping at home to missile guiding and virtual sex. The song is both a celebration of the power of computers and a warning about the control that they exerts over our lives." "The Devil You Know", an example of Edwards' "collade effect" songwriting, was built from his "collage method." As for "The Right Decision", Edwards explained to Making Music, "I worked hard on the chords for this one. What I attempted to do was write a great single without using the classic format. After a short while, I realise it has become verse/chorus/verse/chorus, but it's hidden because the sections are different and the two bridges have different chords." "Magazine" refers to magazines as "the fast food of the information medium." "Spiral" is anchored by a "thematic link" shared between its music and lyrics. ReleasePerverse was released on 25 January 1993 by Food Records in the United Kingdom and SBK Records in the United States. Upon release, it reached number 6 on the UK Albums Chart, and stayed on the chart for four weeks, a departure from the number 1 peak achieved by Doubt. It also charted at number 59 on the US Billboard 200 in February 1993. As the band's commercial momentum never recovered, Pitchfork later included Perverse in their list of "ten career-killing albums" from the 1990s. Two copies of the mask on the album cover are known to exist, one owned by Edwards. During interviews given to promote the album, keyboardist Baker accompanied Edwards, unlike with earlier band promotion, because Baker was the band's "techno expert." "The Devil You Know" was released as the album's first single in December 1992, reaching number 10 on the UK Singles Chart. 
It was followed by the second single "The Right Decision", which reached number 36 on the UK Singles Chart and stayed on the chart for three weeks. "Zeroes and Ones" was the final single, reaching number 30 upon its release in July. The two CD singles for "Zeroes and Ones" featured radical remixes of the song by artists who influenced Perverse such as the Prodigy and Aphex Twin, the latter of whose "Aphex Twin Reconstruction Mix #2" later featured on his remix compilation 26 Mixes for Cash (2003). On the US Modern Rock Tracks chart, "The Devil You Know" reached number 1 and "The Right Decision" reached number 12. Reception and legacy Perverse was originally released to mixed reviews. Simon Reynolds of Melody Maker found the band's "rock must engage with techno" idea engaging, but ultimately said the album "quickly reverts to type." Trouser Press were similarly mixed, feeling the album "[pumps] up the electronic fuzz tones and industrial guitar riffs at the expense of songs' character" on most of the songs. Douglas D. Keller of The Tech was more favourable, saying Perverse is "an engaging album with very contemporary ideas and the potential to shape the course of alternative and techno in the near future." Later reviews have been more favourable; in a retrospective review, Stephen Thomas Erlewine of AllMusic rated the album four stars out of five, calling it "an ambitious album that works sporadically"; although "too often, the hooks are submerged beneath layers of computerized noise and aren't strong enough to pull themselves out," he wrote that "when Perverse clicks, Jesus Jones gives the listener an idea of how enjoyable a successful marriage of techno and rock could be." Mark Reed of Drowned in Sound, writing in 2002, called the album the band's "major leap forward," and cited its recording approach as innovative: "At the time (and now) the band were lambasted for it. Now everyone does it." Meanwhile, writing in 2014, David Wilson of Get Ready to Rock noted that Perverse is often seen as the band's creative peak. Shane Pinnegar of 100Percent Rock Magazine, in a 2015 interview with Edwards, described the album as good, but noted "it isn’t remembered as being as quintessentially ‘of its time’ as Doubt is," believing this to be due to its topical themes. Edwards replied, "yeah, obviously that approach stuck with me, which is why Perverse not only sounds the way it does, but has the lyrical themes that it does – including the now quaintly amusing idea that the internet might actually be quite an interesting thing! 'Zeroes and Ones', the song, is very hard to sing with a straight face these days." Deluxe edition Perverse was remastered and re-released as a deluxe edition on 17 November 2014 by Edsel Records alongside the band's other albums. The release, which is packaged to resemble a hardback book, contains two CDs and a DVD, comprising "radio sessions, rare tracks, demos and sought-after mixes plus the Bonus DVD featuring videos and live footage." The deluxe edition was favourably assessed by David Wilson of Get Ready to Rock, who, in a review of all the band's deluxe editions, rated the release four stars out of five. Track listing All tracks written by Mike Edwards. 
"Zeroes and Ones" – 3:24 "The Devil You Know" (Edwards, Henry Kriger) – 4:31 "Get a Good Thing" – 3:23 "From Love to War" – 3:49 "Yellow Brown" – 3:23 "Magazine" – 2:46 "The Right Decision" – 3:36 "Your Crusade" – 3:30 "Don't Believe It" – 3:45 "Tongue Tied" – 3:16 "Spiral" – 4:30 "Idiot Stare" – 5:10 Notes Jesus Jones albums 1993 albums Food Records albums Grebo (music) albums Techno albums by English artists
28913203
https://en.wikipedia.org/wiki/GIS%20Live%20DVD
GIS Live DVD
A GIS Live DVD is a type of thematic Live CD containing GIS/RS applications, related tutorials, and sample data sets. The general purpose of a GIS Live DVD is to demonstrate the capabilities of FLOSS GIS and encourage users to adopt FLOSS GIS, although a disc can also be used for GIS data processing and training. A disc usually includes a selection of Linux-based applications, or Windows applications run through Wine, for GIS and remote sensing use. Using such a disc, end users can run GIS functions to gain experience with free and open-source software solutions or to carry out simple business operations. The set-up and the operating behaviour of the applications can also be studied prior to building real FLOSS GIS-based systems. More recently, Live DVD images are often stored on and booted from USB drives (Live USB). Use cases There can be multiple use cases: Demonstration of one or more GIS applications or data The disc contains preconfigured applications with some GIS data and introductory tutorials. It is used at workshops or meetings to demonstrate the capabilities of one or more GIS applications or data sets, or the results of projects that applied FLOSS GIS. It allows stand-alone operation using the pre-installed tutorials and sample data. Performance, especially time behaviour, is critical. A demonstration disc usually promotes a project, so user friendliness and correctness are critical too. Evaluation of GIS applications The objective is to provide a simple and compact environment for evaluating and/or benchmarking applications or data sets. It is built similarly to a demonstration disc, but it needs more flexibility in using external data and in network access. Time behaviour and performance are less critical, since the disc is likely to be run in a virtual machine. User friendliness is less important, since the evaluation is usually done by professionals. Education The primary aim of this disc is to support a particular GIS training course based on an established curriculum. It can be university education or a special training course. The applications selected for the disc support acquiring the targeted skills, and the disc includes the agenda of the course too. Networking and the use of peripherals such as printers and external disks are important. Installation The goal is to support the installation of web or desktop applications by users who lack the required skills, or where the configuration is complex or difficult. Development kit for implementing complex customised applications The goal is to provide a package of tools for GIS/LIS application development that includes standard software components and business-related sample models and code to support and speed up implementation. The emphasis is on demonstrating system integration, with sample databases, software code and instructions. GIS data processing In principle, a thematic Live DVD can be used to support real business procedures. A USB version of a Live DVD can be adapted for data collection where it includes a specific set-up, e.g. an IT system for a spatially related census. Creation The creation is usually based on an existing Live CD. Some of the GIS and remote sensing applications are already included in the basic Linux distribution. Other packages have "ready to install" versions, while some need to be built from the source code. After all applications and related content (e.g. 
documentation, sample data) have been installed, the Live DVD image is created with the tools provided by the basic distribution. Risks and assumptions The CASCADOSS research project has identified the following risks in disc creation: Shortcomings of Incomplete Applications The development of several FLOSS products has not been completed, and some products were never finished. These incomplete products can nevertheless offer extremely useful functionality for environmental GIS data processing, but adding them to a disc does not fix their weaknesses. Moreover, in the case of some old products, incompatibility, e.g. with the Linux kernel, is a real risk. Coexistence Problems In order to create a disc that includes several applications, various sub-packages need to be installed. These packages are not necessarily compatible; they differ in technical approach and quality. Therefore, some products cannot coexist smoothly (or at all) with the others on the same Linux distribution. Difficulties with Closed Package Dependent Applications When the code of a FLOSS product is open, errors and their causes can be identified, the code can be corrected and the binary version can be recompiled. However, some closed packages, e.g. Sun's Java Runtime Environment, cannot be diagnosed or modified. Therefore, some Java-based applications that depend on closed packages cannot be installed. Difficulties with Proprietary Hardware Drivers Several hardware drivers, e.g. for video, wireless and printer hardware, are proprietary software. The standard Linux drivers cannot necessarily manage such hardware correctly. Therefore, it is impossible to create a disc that is compatible with all PCs; notebooks in particular can have problems running a Live DVD. Difficulties with Obsolete Hardware The Linux kernel is continuously evolving, so some older hardware is no longer supported in a standard way and cannot be supported on a disc. Legal Risks (a plethora of different FLOSS licences) The complex dependencies between packages can result in conflicts between different FLOSS licences. Several packages may need to be changed and recompiled, and some dependent packages may need to be replaced by functional equivalents that can have incompatible licences. Therefore, some FLOSS projects could demand that their package be removed from the Live CD. Licence or Patent Violations Some FLOSS products depend on proprietary packages, and others contain code that can violate patents or related rights of some firms. Given the broad national differences in such rights, whether a violation occurs depends on where the Live CD is run. Therefore, some rights owners could demand that certain packages be removed from the Live CD. Exceeding Available Room FLOSS GIS, remote sensing and environmental applications come with a large amount of sample, demonstration and training data sets. Installing all the data could easily result in more data than can be burnt to a CD or DVD. Therefore, some data sets might be excluded from the media. Poor I18n Support in the GIS/Remote Sensing Applications I18n stands for internationalization in IT applications. Unfortunately, several applications do not cater for international use, which makes it hard or impossible to use national character sets or to integrate an application into a national version of an operating system. Unfamiliarity with Linux Most PC users are reluctant to switch to a different operating system. Therefore, a Linux-based application can feel strange to them, since it behaves differently from well-known products. 
Moreover, terminal-based operation is unacceptable to most end users. Quality Problems of the Basic Linux Distribution Are Hardly Manageable Like all software products, Linux distributions have bugs and failures. These problems have a direct impact on any Live CD and cannot be handled as part of Live CD creation, which is focused on the applications and not the functions of the operating system. Functions The following outlines a general function–means tree of GIS Live DVDs: Installation The ability to install usually comes from the basic Linux distribution. To keep this feature of the basic distribution, the thematic content needs to be stored according to the standards of that distribution. As part of the installation, the system performs an update that can break the set-up of the added applications. To fully preserve this feature, repositories must be created and regularly maintained for all the installed applications. Standard functions of the operating system These are usually provided by the basic Linux distribution. The use of resources and peripherals must be adjusted to the needs of the installed applications. Tools are available for experienced users to correct the set-up or customise the Live CD while it is in use. Office applications This means an office suite that makes it possible to present slides and edit texts and images. GIS application-specific functions This covers information content, tutorials, user guides, etc., and the functions provided by the GIS applications. It makes sense to divide the applications into desktop, server and terminal-based tools. Existing GIS Live DVDs ArcheOS Primary use is professional archaeological work. Distributor: Arc-Team, Italy. Basic operating system: Debian, 2010. Software included: AutoQ3D, OpenJUMP GIS, XvidCap, QCAD, SAGA GIS, ParaView, PostgreSQL, PostGIS, JSVR, WhiteDune, Pmapper, GRASS Cascadoss LiveDVD ver 3.0 Primary uses are evaluation and training. Distributor: Compet-Terra Bt., Hungary, as a member of the CASCADOSS project. Basic distribution: Fedora 11. GIS software included: GRASS GIS, gvSIG, ivics, ISIS Modelling Software, JUMP GIS, Kosmo, QGIS, SAGA, Ilwis, Spatialite, GeoNetwork opensource, MapServer, GeoServer, Openlayers, R, Postgresql, postgis. Data included: sample data from Belgium, Poland, Bohemia and sample data of the applications. Debian GIS Live image Primary use is demonstration. Distributor: Software in the Public Interest, Inc., United States of America. Basic distribution: Debian, 2007. GIS software included: QGIS 0.8.0-5, earth3d 1.0.5-1, mapnik 0.4.0-1, Grass 6.2.1-3, gpsbabel 1.3.3-2, mapserver 4.10.2-1, gpsdrive 2.09-2.2, kflog 2.1.1-3.1, postgis 1.2.1-2 Esri Geoportal Server Live DVD Primary uses are evaluation and demonstration. Distributor: Esri, Inc., United States of America. Basic distribution: Debian Squeeze. GIS software included: Esri Geoportal Server 1.2.5 FOSS4G 2009 Live DVD Primary uses are demonstration, education and installation. Distributor: The Open Source Geospatial Foundation. Basic distribution: Ubuntu Hardy, 2009. Software included: GDAL/OGR, mapserver, GRASS, postgis, pgadmin, GDAL-GRASS plugin, PROJ.4, geoserver, uDig, gvSIG, R-spatial, GeoNetwork opensource FOSS4G 2008 "Practical Introduction to GRASS" workshop Live DVD Primary use is education. FOSS4G 2008 "Practical Introduction to GRASS and related software for beginners" Workshop Live DVD. Creator: M. Ciolli, C. Tattoni, P. 
Zatelli, Dipartimento di Ingegneria Civile e Ambientale, Facoltà di Ingegneria, Università di Trento, Italy. Basic operating system: Kubuntu 8.04. Software: GRASS GIS 6.2.3 and GRASS GIS 6.4 (snapshot_2008_7_12), R 2.7.0 with GRASS support, QGIS 0.10 with GRASS plugin, PostgreSQL 8.3.8, PostGIS 1.3.3, pgAdminIII 1.8.2, pgagent 1.8.2, GRASS tutorial and workshop slides. GISAK Primary use is education. Creator: Jan Růžička, František Klímek, Institute of Geoinformatics, Faculty of Mining and Geology, Czech Republic. Basic distribution: Knoppix, 2006. GIS software included: GRASS, Thuban, GpsDrive, JUMP GIS, MapLab, MapServer, QGIS, PostGIS GIS Knoppix Primary use is demonstration. Distributor: Sourcepole AG, Switzerland. Basic distribution: Knoppix. GIS software included: GRASS 6.2, UMN MapServer 4.4.1, MapLab 2.0.1, PostgreSQL 8.1.4.2, TerraView 2.0, MapDesk, MySQL 5.0.21, JUMP GIS 1.1.2, Interlis, QGIS 0.8, Thuban 1.0.0, GSdrive, GPSMan GIS-LiveCD Release for FOSS4G 2006 Primary use is demonstration. Distributor: GDF Hannover bR, Germany. Basic distribution: 2006, KNOPPIX. GIS software included: GRASS 6.2, QGIS 0.8 CVS, MySQL, PostgreSQL Ominiverdi Primary use is demonstration. Distributor: Omniverdi, Italy (a group of programmers). Basic distribution: 2008, Gentoo. Software included: GRASS 6.3.0, QGIS 0.11.0, PostgreSQL 8.3.1, PostGIS 1.3.1, GDAL 1.5.0, PROJ.4 4.5.0, PgAdmin 1.6.3. Data included: QGIS Dataset Alaska, Spearfish, North Carolina GRASS dataset OSGeo-Live Primary uses are demonstration and installation. Distributor: The live DVD project of the Open Source Geospatial Foundation. Basic distribution: 2012, Xubuntu. Software included: OpenLayers, Geomajas, Mapbender, MapFish, GeoMoose, Sahana Eden, Ushahidi, PostGIS, SpatiaLite, Rasdaman, pgRouting, QGIS desktop and server, GRASS GIS, gvSIG, uDig, Kosmo, OpenJUMP GIS, SAGA GIS, OSSIM, Geopublisher, AtlasStyler, osgEarth, GpsDrive, Marble, OpenCPN, OpenStreetMap, Prune, Viking, zyGrib, GeoKettle, GDAL/OGR, GMT, Mapnik, MapTiler, OTB, R Spatial Task View, GeoServer, MapServer, deegree, GeoNetwork, MapProxy, 52°North WSS, 52°North WPS, 52°North SOS, TinyOWS, ZOO Project, GeoTools, MetaCRS, libLAS. Data included: Natural Earth, OSGeo North Carolina, OpenStreetMap. Windows and Apple installers for some of the above applications. A video about OSGeo-Live is also available. Italian GRASS DVD Primary use is education. Creator: M. Ciolli, C. Tattoni, A. Vitti, P. Zatelli, Dipartimento di Ingegneria Civile e Ambientale, Facoltà di Ingegneria, Università di Trento, Italy. Basic operating system: Kubuntu 9.04, 2009. Software: GRASS GIS 6.4RC4, PostgreSQL 8.3.7, pgAdminIII 1.8.4, QGIS 1.1.0, PostGIS 1.3.3, R 2.9.0 with gstat and spgrass6 packages, North Carolina GRASS Dataset, QGIS & GRASS Tutorials. See also GIS List of geographic information systems software List of live CDs List of remastering software Live USB Software appliance References External links Presentation about the creation of a GIS LiveDVD in Hungarian Using Live DVD to train WMS/WFS services regarding QGIS and MapServer Using LiveDVD to evaluate MapServer Software appliances GIS software
55052303
https://en.wikipedia.org/wiki/Maslow%20CNC
Maslow CNC
Maslow CNC is an open-source CNC router project. It is the only commercially available vertical CNC router and is notable for its low cost of US$500. Although the kit is advertised at $500, like many tools it requires additional initial material and hardware costs. The kits are now sold by three resellers and range in price from $400 to $500. Lumber and plywood are required to make the machine's frame, along with an appropriate and compatible router. Lastly, a personal computer or tablet is needed with Windows, Mac OS X or Linux as its operating system. Overall, initial material costs are approximately $800. The unique vertical design mimics a hanging plotter, allowing it to have a 4' x 8' cutting area with a footprint 10' wide x 19" deep. Maslow CNC uses geared motors with encoders (8148 counts/rev) and a closed-loop feedback system to achieve a resolution of ±0.4 mm. To reduce cost, Maslow CNC comes in kit form, uses a commercial off-the-shelf handheld router provided by the user for the router spindle, uses an Arduino Mega microcontroller board, and uses a large number of common hardware items rather than custom parts. The Maslow CNC project was created in 2016 by Bar Smith, Hannah Teagle and Tom Beckett. The project was funded with preorders on Kickstarter, raising $314,000. It was featured on Tested and was shown at Maker Faire Bay Area 2017. The original company is no longer selling the kits. References External links Maslow CNC official website Maslow CNC Github repository CNC Open hardware electronic devices
418206
https://en.wikipedia.org/wiki/Execution%20%28computing%29
Execution (computing)
Execution in computer and software engineering is the process by which a computer or virtual machine reads and acts on the instructions of a computer program. Each instruction of a program is a description of a particular action which must be carried out, in order for a specific problem to be solved. Execution involves repeatedly following a 'fetch–decode–execute' cycle for each instruction. As the executing machine follows the instructions, specific effects are produced in accordance with the semantics of those instructions. Programs for a computer may be executed in a batch process without human interaction or a user may type commands in an interactive session of an interpreter. In this case, the "commands" are simply program instructions, whose execution is chained together. The term run is used almost synonymously. A related meaning of both "to run" and "to execute" refers to the specific action of a user starting (or launching or invoking) a program, as in "Please run the application." Process Prior to execution, a program must first be written. This is generally done in source code, which is then compiled at compile time (and statically linked at link time) to produce an executable. This executable is then invoked, most often by an operating system, which loads the program into memory (load time), possibly performs dynamic linking, and then begins execution by moving control to the entry point of the program; all these steps depend on the Application Binary Interface of the operating system. At this point execution begins and the program enters run time. The program then runs until it ends, either normal termination or a crash. Executable Executable code, an executable file, or an executable program, sometimes simply referred to as an executable or binary, is a list of instructions and data to cause a computer "to perform indicated tasks according to encoded instructions", as opposed to a data file that must be interpreted (parsed) by a program to be meaningful. The exact interpretation depends upon the use. "Instructions" is traditionally taken to mean machine code instructions for a physical CPU. In some contexts, a file containing scripting instructions (such as bytecode) may also be considered executable. Context of execution The context in which execution takes place is crucial. Very few programs execute on a bare machine. Programs usually contain implicit and explicit assumptions about resources available at the time of execution. Most programs execute with the support of an operating system and run-time libraries specific to the source language that provide crucial services not supplied directly by the computer itself. This supportive environment, for instance, usually decouples a program from direct manipulation of the computer peripherals, providing more general, abstract services instead. Runtime system A runtime system, also called runtime environment, primarily implements portions of an execution model. This is not to be confused with the runtime lifecycle phase of a program, during which the runtime system is in operation. 
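To make the distinction between translation and the run-time phase concrete, the following Python sketch (a minimal illustration, not drawn from the article; the source string and variable names are hypothetical) first compiles a small program to a code object and only then executes it.

```python
# Minimal sketch of "translate first, execute later" (hypothetical example).
# compile() performs the translation step; exec() performs execution (run time).

source = """
greeting = "hello"
print(greeting + ", world")
"""

# Translation: the source text becomes a code object; no effects are produced yet.
code_object = compile(source, "<example>", "exec")

# Execution: the code object's instructions are carried out, producing effects.
exec(code_object)  # prints: hello, world
```

In a compiled language the translation step would instead produce a standalone executable, but the same separation between producing the instructions and acting on them applies.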
When treating the runtime system as distinct from the runtime environment (RTE), the first may be defined as a specific part of the application software (IDE) used for programming, i.e. a piece of software that provides the programmer a more convenient environment for running programs during their production (testing and similar), while the second (RTE) would be the very instance of an execution model being applied to the developed program, which is itself then run in the aforementioned runtime system. Most programming languages have some form of runtime system that provides an environment in which programs run. This environment may address a number of issues including the management of application memory, how the program accesses variables, mechanisms for passing parameters between procedures, interfacing with the operating system, and so on. The compiler makes assumptions depending on the specific runtime system to generate correct code. Typically the runtime system will have some responsibility for setting up and managing the stack and heap, and may include features such as garbage collection, threads or other dynamic features built into the language. Instruction cycle The instruction cycle (also known as the fetch–decode–execute cycle, or simply the fetch-execute cycle) is the cycle that the central processing unit (CPU) follows from boot-up until the computer has shut down in order to process instructions. It is composed of three main stages: the fetch stage, the decode stage, and the execute stage. In simpler CPUs, the instruction cycle is executed sequentially, each instruction being processed before the next one is started. In most modern CPUs, the instruction cycles are instead executed concurrently, and often in parallel, through an instruction pipeline: the next instruction starts being processed before the previous instruction has finished, which is possible because the cycle is broken up into separate steps. Interpreter A system that executes a program is called an interpreter of the program. Loosely speaking, an interpreter directly executes a program. This contrasts with a language translator that converts a program from one language to another before it is executed. Virtual machine A virtual machine (VM) is the virtualization/emulation of a computer system. Virtual machines are based on computer architectures and provide the functionality of a physical computer. Their implementations may involve specialized hardware, software, or a combination. Virtual machines differ and are organized by their function, shown here: System virtual machines (also termed full virtualization VMs) provide a substitute for a real machine. They provide functionality needed to execute entire operating systems. A hypervisor uses native execution to share and manage hardware, allowing for multiple environments which are isolated from one another, yet exist on the same physical machine. Modern hypervisors use hardware-assisted virtualization, virtualization-specific hardware, primarily from the host CPUs. Process virtual machines are designed to execute computer programs in a platform-independent environment. Some virtual machine emulators, such as QEMU and video game console emulators, are designed to also emulate (or "virtually imitate") different system architectures, thus allowing execution of software applications and operating systems written for another CPU or architecture. Operating-system-level virtualization allows the resources of a computer to be partitioned via the kernel. 
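As a rough sketch of the fetch–decode–execute cycle and of an interpreter acting as a simple process virtual machine, the toy stack machine below repeatedly fetches an instruction, decodes its opcode, and executes it until a HALT instruction is reached. The instruction set and program are invented for illustration and do not correspond to any real architecture.

```python
# Toy stack machine illustrating a fetch-decode-execute loop (hypothetical ISA).

def run(program):
    stack = []
    pc = 0  # program counter
    while True:
        opcode, operand = program[pc]   # fetch the next instruction
        pc += 1
        if opcode == "PUSH":            # decode + execute
            stack.append(operand)
        elif opcode == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif opcode == "PRINT":
            print(stack[-1])
        elif opcode == "HALT":
            return stack

# Usage: computes and prints 2 + 3.
run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None), ("HALT", None)])
```

A real CPU or virtual machine follows the same loop in hardware or in optimized software, typically overlapping the stages through pipelining as noted above.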
The terms described above are not universally interchangeable. References See also Executable Run-time system Runtime program phase Program counter Computing terminology
4926819
https://en.wikipedia.org/wiki/Paperback%20Software%20International
Paperback Software International
Paperback Software International Ltd. was a software company founded in the 1980s by Adam Osborne to manufacture discount software such as spreadsheet (VP-Planner), database (VP-Info) and information management (VP-Expert) software. History The company was found by a United States court to have infringed on copyright for reproducing the appearance and menu system of Lotus 1-2-3 in its competing spreadsheet program, even though it used different computer code. The loss of this lawsuit was the main cause of the company's foundering and paved the way for future copyright law on computer software. Overview Not only was VP-Planner cheaper, it was regarded by some as better. Adam Osborne's US Paperback Software business folded following lengthy litigation with Lotus Software. The litigation began in 1987, and Lotus initially won a copyright claim against Paperback Software in 1990. Lotus sued Borland over the latter's Quattro Pro spreadsheet but, after six years of litigation, lost the lawsuit. The court ruled that it is not copyright infringement to use the Lotus interface as a subset, but, by then, Paperback Software had folded, and Lotus 1-2-3 had faced intense competition from Microsoft Excel. Legacy VP-Info remains in use and continues to be available for download from public software archives, and through the Wayback Machine. VP-Info was revised, updated and re-published by SubRosa Corporation as the Shark database management application. References Companies established in the 1980s
8236017
https://en.wikipedia.org/wiki/HP%20Integrity%20Virtual%20Machines
HP Integrity Virtual Machines
Integrity Virtual Machines is software from Hewlett-Packard that allows multiple virtual machines to run concurrently on any Itanium server running HP-UX, notably the HPE Integrity Servers line. It is part of HP's Virtual Server Environment suite. The product is optimized for server use. History Christophe de Dinechin initiated a skunkworks project to virtualize Itanium, with the help of Jean-Marc Chevrot and a "virtual team" of experienced HP engineers. A prototype of Integrity Virtual Machines was then developed between 2000 and 2003 by Christophe de Dinechin, Todd Kjos and Jonathan Ross. It was then turned into a full-fledged product by a larger team of experienced OpenVMS, Tru64 Unix and HP-UX kernel engineers. Versions 1.0 and 1.2, released in 2005, ran HP-UX in virtual machines. Version 2.0, released in November 2006, additionally supports Windows Server 2003, CD and DVD burners, tape drives and VLAN. Version 3.0, released in June 2007, supports Red Hat Enterprise Linux Version 3.5, released in late 2007, supports SUSE Linux Enterprise Server, HP-UX 11i v3 guests, new service packs for Windows and Linux guests, and accelerated virtual I/O for HP-UX guests, enabling better I/O performance and a larger number of devices. Version 4.0, released in September 2008, runs on HP-UX 11.31 (also known as 11i v3), supports 8 virtual CPUs, capped CPU allocation (in addition to CPU entitlement as in previous releases), additional support for accelerated virtual I/O (AVIO), and a new VM performance analysis tool. Version 4.0 also includes beta functionality such as on-line migration and support for OpenVMS guests. Version 4.1, released in April 2009, supports Online VM Migration which allows customers to migrate active guests from one VM Host to another VM Host without service interruption. It also provides support for SSH third-party alternatives for secure communications, accelerated virtual I/O (AVIO) for networking on Windows and Linux guests, and support for Ignite and VxVM backing stores. Version 4.2, released March 2010, supports encryption during a VM migration, brings support for newer Itanium hardware and VM Guest OS versions, contains software allowing for VMs as Serviceguard packages and Serviceguard Nodes, and support for automatic memory reallocation. It also added support for OpenVMS 8.4 guests. Version 4.2.5, released September 2010, brings support for the HP Integrity Superdome 2, as well as suspend and resume support for a VM. Version 4.3, released March 2011, brings support for the Intel® Itanium® Tukwila processor, an NVRAM Edit utility, a Virtual iLO Remote Console, 16 virtual CPUs for guests, 128GB for guest memory, 256 AVIO storage devices, support for Fibre Channel over Ethernet, and support for NFS backing stores. Version 6.1, released in March 2012, brings support for management of HP vPars and a Direct Input Output (DIO) feature for improved I/O functionality, manageability and performance. Version 6.3, released in March 2014, can emulate NVRAM, supports 32 CPUs and 256GB of RAM for VM guests, supports dynamic addition of I/O devices, and supports migration between i2 and i4 processors. Version 6.3.5, released in March 2015, enables dynamic deletion of I/O devices, improvements to which physical ports NPIV guests use after they are migrated, reduced memory overhead for large VMs, and more. Capabilities Exact specifications depend on the precise version and system configuration. 
The host configurations are the same as those supported by HP-UX, and can include 128 physical cores and 1TB of main memory. More than 250 guests can run concurrently, although the optimal number is generally lower, depending on host memory and processor configuration. Guests can have multiple virtual CPUs, the maximum number in supported configurations being 4 with releases before 4.0, then 8 with release 4.0, 16 with release 4.3, and 32 with 6.3. Guests can be configured with up to 256GB of memory in version 6.3. In recent releases, memory can be adjusted dynamically for HP-UX guests. Virtual devices can be added or removed dynamically. The number of virtual devices allowed in supported configurations depends on the release. Versions after 4.3 support up to 256 when accelerated virtual I/Os are used. The CPU allocation for virtual machines can be adjusted dynamically with a granularity of 1% or 1 MHz. CPU time is allocated by a fair-share scheduler, which delivers better CPU utilization for SMP guests than a more simplistic gang scheduler. User interface Integrity Virtual Machines can be created and managed using a command-line interface or a graphical user interface accessed using a web browser. Essential commands include: hpvmcreate to create virtual machines hpvmstatus to display status information hpvmstart and hpvmstop to start and stop virtual machines hpvmmodify to modify existing virtual machines hpvmconsole to simulate a hardware console hpvmmigrate to perform on-line or off-line guest migration hpvmsar to show performance information about the running guests. hpvmsuspend and hpvmresume to suspend and resume virtual machines The user interface is integrated in the HP Integrity Virtual Machines Manager. See also Comparison of platform virtualization software References External links Product description at Hewlett-Packard Architecture overview White papers on HPVM Release note for version 4.3 Virtualization software Integrity Virtual Machines Unix software
44959855
https://en.wikipedia.org/wiki/Nightshade%20%28astronomy%20software%29
Nightshade (astronomy software)
Nightshade is simulation and visualization software for teaching and exploring astronomy, Earth science, and related topics. Its focus is on use in digital planetarium systems or as an educational tool, with additional features to allow it to also be used on desktop or laptop computers. It operates on Linux, macOS and Windows. Nightshade Legacy was open source and is no longer developed. A completely new codebase, Nightshade NG (Next Generation), is licensed under the "Nightshade Public License", a non-free license with field of use restrictions but with source available. History Nightshade Legacy began as a fork of Stellarium by Digitalis Education Solutions after controversy between the Stellarium project and its commercial contributor about a focus on desktop versus planetarium use. Development of Nightshade Legacy ceased around 2011. A rewrite, Nightshade NG, was begun in 2011, with a change to a non-free license with field of use restrictions to prevent its use by commercial competitors of Digitalis. Several Nightshade NG "preview releases" have been made. Features of Nightshade NG Sky features The Hipparcos Catalogue with proper 3D positioning, proper motion and radial velocity Extra catalogues with more than 12 million stars Asterisms and illustrations of the constellations from many cultures Images of nebulae (built-in and user added) Realistic Milky Way Physically-based atmosphere allowing for realistic sunrise and sunset from anywhere on or above the Earth or Mars Planets of the Solar System and their major moons Ability to display stars and other celestial objects as seen from any reference point in the Milky Way galaxy Ability to fast-forward or rewind time +/- 1,000,000 years Terrain features Ability to add multiple image layers of a planet using Web Map Service or GDAL Ability to display imagery over topography Ultra-high resolution imagery for Earth, Mars, Mercury, Europa, and the Moon Interface Zoom Time control Multilingual interface Scripting to record and playback shows Fisheye projection for planetarium domes Graphical interface and extensive keyboard control Text user interface for planetarium domes Web-based graphical console for control from any Firefox or Mobile Safari device Visualization Equatorial, azimuthal, J2000, and galactic grids Star twinkling Shooting stars Eclipse simulation for any body Skinnable landscapes Spherical panorama projection Customisability Deep sky objects, landscapes, constellation images, scripts etc. can be added. Stratoscript command syntax to allow for easy macro-style programming Ability to record and playback scripts References External links Planetarium software for Linux Educational software for Windows Educational software for MacOS Science software for MacOS Science software for Windows
1303939
https://en.wikipedia.org/wiki/Walt%20Disney%20Animation%20Studios
Walt Disney Animation Studios
Walt Disney Animation Studios (WDAS), sometimes shortened to Disney Animation, is an American animation studio that creates animated features and short films for The Walt Disney Company. The company's production logo features a scene from its first synchronized sound cartoon, Steamboat Willie (1928). Founded on October 16, 1923, by brothers Walt Disney and Roy O. Disney, it is the oldest-running animation studio in the world. It is currently organized as a division of Walt Disney Studios and is headquartered at the Roy E. Disney Animation Building at the Walt Disney Studios lot in Burbank, California. Since its foundation, the studio has produced 60 feature films, from Snow White and the Seven Dwarfs (1937) to Encanto (2021), and hundreds of short films. Founded as Disney Brothers Cartoon Studio in 1923, renamed Walt Disney Studio in 1926 and incorporated as Walt Disney Productions in 1929, the studio was dedicated to producing short films until it entered feature production in 1934, resulting in 1937's Snow White and the Seven Dwarfs, one of the first full-length animated feature films and the first U.S.-based one. In 1986, during a large corporate restructuring, Walt Disney Productions, which had grown from a single animation studio into an international multimedia company, was renamed The Walt Disney Company and the animation studio Walt Disney Feature Animation in order to differentiate it from the other divisions. Its current name was adopted in 2007 after Pixar Animation Studios was acquired by Disney in the previous year. For much of its existence, the studio was recognized as the premier American animation studio; it developed many of the techniques, concepts and principles that became standard practices of traditional animation. The studio also pioneered the art of storyboarding, which is now a standard technique used in both animated and live-action filmmaking. The studio's catalog of animated features is among Disney's most notable assets, with the stars of its animated shorts – Mickey Mouse, Minnie Mouse, Donald Duck, Daisy Duck, Goofy, and Pluto – becoming recognizable figures in popular culture and mascots for The Walt Disney Company as a whole. Walt Disney Animation Studios is currently managed by Jennifer Lee (Chief Creative Officer) and Clark Spencer (President), and continues to produce films using both traditional animation and computer-generated imagery (CGI). In recent years, the studio has not been developing hand-drawn animated features and has laid off most of its hand-drawn animation division, although it still makes hand-drawn animated shorts. However, a 2019 interview with Lee indicated that the company would be open to proposals from filmmakers for future hand-drawn feature projects. History 1923–1929: Disney Brothers Cartoon Studio Kansas City, Missouri natives Walt Disney and Roy O. Disney founded Disney Brothers Cartoon Studio in Los Angeles in 1923 and got their start producing a series of silent Alice Comedies short films featuring a live-action child actress in an animated world. The Alice Comedies were distributed by Margaret J. Winkler's Winkler Pictures, which later also distributed a second Disney short subject series, the all-animated Oswald the Lucky Rabbit, through Universal Pictures starting in 1927. 
Upon relocating to California, the Disney brothers initially started working in their uncle Robert Disney's garage at 4406 Kingswell Avenue in the Los Feliz neighborhood of Los Angeles, then, in October 1923, formally launched their studio in a small office on the rear side of a real estate agency's office at 4651 Kingswell Avenue. In February 1924, the studio moved next door to office space of its own at 4649 Kingswell Avenue. In 1925, Disney put down a deposit on a new location at 2719 Hyperion Avenue in the nearby Silver Lake neighborhood, which came to be known as the Hyperion Studio to distinguish it from the studio's other locations, and, in January 1926, the studio moved there and took on the name Walt Disney Studio. Meanwhile, after the first year's worth of Oswalds, Walt Disney attempted to renew his contract with Winkler Pictures, but Charles Mintz, who had taken over Margaret Winkler's business after marrying her, wanted to force Disney to accept a lower advance payment for each Oswald short. Disney refused and, as Universal owned the rights to Oswald rather than Disney, Mintz set up his own animation studio to produce Oswald cartoons. Most of Disney's staff was hired away by Mintz to move over once Disney's Oswald contract expired in mid-1928. Working in secret while the rest of the staff finished the remaining Oswalds on contract, Disney and his head animator Ub Iwerks led a small handful of loyal staffers in producing cartoons starring a new character named Mickey Mouse. The first two Mickey Mouse cartoons, Plane Crazy and The Galloping Gaucho, were previewed in limited engagements during the summer of 1928. For the third Mickey cartoon, however, Disney produced a soundtrack, collaborating with musician Carl Stalling and businessman Pat Powers, who provided Disney with his bootlegged "Cinephone" sound-on-film process. Subsequently, the third Mickey Mouse cartoon, Steamboat Willie, became Disney's first cartoon with synchronized sound and was a major success upon its November 1928 debut at the West 57th Theatre in New York City. The Mickey Mouse series of sound cartoons, distributed by Powers through Celebrity Productions, quickly became the most popular cartoon series in the United States. A second Disney series of sound cartoons, Silly Symphonies, debuted in 1929 with The Skeleton Dance. 1929–1940: Reincorporation, Silly Symphonies, and Snow White and the Seven Dwarfs In 1929, disputes over finances between Disney and Powers led to Disney's studio, reincorporated on December 16, 1929, as Walt Disney Productions, signing a new distribution contract with Columbia Pictures. Powers, in return, signed away Ub Iwerks, who began producing cartoons at his own studio, although he would return to Disney in 1940. Columbia distributed Disney's shorts for two years before the Disney studio entered a new distribution deal with United Artists in 1932. The same year, Disney signed a two-year exclusive deal with Technicolor to utilize its new 3-strip color film process, which allowed for fuller-color reproduction where previous color film processors could not. The result was the Silly Symphony Flowers and Trees, the first film commercially released in full Technicolor. Flowers and Trees was a major success and all Silly Symphonies were subsequently produced in Technicolor. 
By the early 1930s, Walt Disney had realized that the success of animated films depended upon telling emotionally gripping stories that would grab the audience and not let go, and this realization led him to create a separate "story department" with storyboard artists dedicated to story development. With well-developed characters and an interesting story, the 1933 Technicolor Silly Symphony cartoon Three Little Pigs became a major box office and pop culture success, with its theme song "Who's Afraid of the Big Bad Wolf?" becoming a popular chart hit. In 1934, Walt Disney gathered several key staff members and announced his plans to make his first animated feature film. Despite derision from most of the film industry, who dubbed the production "Disney's Folly", Disney proceeded undaunted into the production of Snow White and the Seven Dwarfs, which would become the first animated feature in English and Technicolor. Considerable training and development went into the production of Snow White and the Seven Dwarfs and the studio greatly expanded, with established animators, artists from other fields and recent college graduates joining the studio to work on the film. The training classes, supervised by head animators such as Les Clark, Norm Ferguson and Art Babbit and taught by Donald W. Graham, an art teacher from the nearby Chouinard Art Institute, had begun at the studio in 1932 and were greatly expanded into orientation training and continuing education classes. In the course of teaching the classes, Graham and the animators created or formalized many of the techniques and processes that became the key tenets and principles of traditional animation. Silly Symphonies such as The Goddess of Spring (1934) and The Old Mill (1937) served as experimentation grounds for new techniques such as the animation of realistic human figures, special effects animation and the use of the multiplane camera, an invention that split animation artwork layers into several planes, allowing the camera to appear to move dimensionally through an animated scene. Snow White and the Seven Dwarfs cost Disney a then-expensive sum of $1.4 million to complete (including $100,000 on story development alone) and was an unprecedented success when released in February 1938 by RKO Radio Pictures, which had assumed distribution of Disney product from United Artists in 1937. It was briefly the highest-grossing film of all time before the unprecedented success of Gone with the Wind two years later, grossing over $8 million on its initial release. During the production of Snow White, work had continued on the Mickey Mouse and Silly Symphonies series of shorts. Mickey Mouse switched to Technicolor in 1935, by which time the series had added several major supporting characters, among them Mickey's dog Pluto and his friends Donald Duck and Goofy. Donald, Goofy and Pluto would all be appearing in series of their own by 1940, and the Donald Duck cartoons eclipsed the Mickey Mouse series in popularity. Silly Symphonies, which garnered seven Academy Awards, ceased production in 1939, although the shorts later returned to theatres in re-issues and re-releases. 1940–1948: New features, strike, and World War II The success of Snow White allowed Disney to build a new, larger studio on Buena Vista Street in Burbank, where The Walt Disney Company remains headquartered to this day. Walt Disney Productions had its initial public offering on April 2, 1940, with Walt Disney as president and chairman and Roy Disney as CEO. 
The studio launched into the production of new animated features, the first of which was Pinocchio, released in February 1940. Pinocchio was not initially a box office success. The box office returns from the film's initial release were below both Snow White's unprecedented success and the studio's expectations. Of the film's $2.289 million cost – twice that of Snow White – Disney only recouped $1 million by late 1940, with studio reports of the film's final original box office take varying between $1.4 million and $1.9 million. However, Pinocchio was a critical success, winning the Academy Award for Best Original Song and Best Original Score, making it the first of the studio's films to win either Oscar, and the first to win both at once. Fantasia, an experimental film produced to an accompanying orchestral arrangement conducted by Leopold Stokowski, was released in November 1940 by Disney itself in a series of limited-seating roadshow engagements. The film cost $2 million to produce and, although the film earned $1.4 million in its roadshow engagements, the high cost ($85,000 per theater) of installing Fantasound placed Fantasia at an even greater loss than Pinocchio. RKO assumed distribution of Fantasia in 1941, later reissuing it in severely edited versions over the years. Despite its financial failure, Fantasia was the subject of two Academy Honorary Awards on February 26, 1942 – one for the development of the innovative Fantasound system used to create the film's stereophonic soundtrack, and the other for Stokowski and his contributions to the film. Much of the character animation on these productions and all subsequent features until the late 1970s was supervised by a brain-trust of animators Walt Disney dubbed the "Nine Old Men", many of whom also served as directors and later producers on the Disney features: Frank Thomas, Ollie Johnston, Woolie Reitherman, Les Clark, Ward Kimball, Eric Larson, John Lounsbery, Milt Kahl, and Marc Davis. Other head animators at Disney during this period included Norm Ferguson, Bill Tytla and Fred Moore. The development of the feature animation department created a caste system at the Disney studio: lesser animators (and feature animators in-between assignments) were assigned to work on the short subjects, while animators higher in status such as the Nine Old Men worked on the features. Concern over Walt Disney accepting credit for the artists' work as well as debates over compensation led to many of the newer and lower-ranked animators seeking to unionize the Disney studio. A bitter union strike began in May 1941 and was resolved, without the involvement of the angered Walt Disney, in July and August of that year. As Walt Disney Productions was being set up as a union shop, Walt Disney and several studio employees were sent by the U.S. government on a Good Neighbor policy trip to Central and South America. The Disney strike and its aftermath led to an exodus of several animation professionals from the studio, from top-level animators such as Art Babbitt and Bill Tytla to artists better known for their work outside the Disney studio such as Frank Tashlin, Maurice Noble, Walt Kelly, Bill Melendez, and John Hubley. Hubley, along with several other Disney strikers, went on to found the United Productions of America studio, Disney's key animation rival in the 1950s. Dumbo, in production during the midst of the animators' strike, premiered in October 1941 and proved to be a financial success. 
The simple film only cost $950,000 to produce, half the cost of Snow White and the Seven Dwarfs, less than a third of the cost of Pinocchio, and two-fifths of the cost of Fantasia. Dumbo eventually grossed $1.6 million during its original release. In August 1942, Bambi was released and, as with Pinocchio and Fantasia, did not perform well at the box office. Out of its $1.7 million budget, it only grossed $1.64 million. Production of full-length animated features was temporarily suspended after the release of Bambi. Given the financial failures of some of the recent features and World War II cutting off much of the overseas cinema market, the studio's financiers at the Bank of America would only loan the studio working capital if it temporarily restricted itself to shorts production. Features then in production such as Peter Pan, Alice in Wonderland and Lady and the Tramp were therefore put on hold until after the war. Following the United States' entry into World War II after the attack on Pearl Harbor, the studio housed over 500 U.S. Army soldiers who were responsible for protecting nearby aircraft factories from enemy bombers. In addition, several Disney animators were drafted to fight in the war and the studio was contracted to produce wartime content for every branch of the U.S. military, particularly military training and civilian propaganda films. From 1942 to 1943, 95 percent of the studio's animation output was for the military. During the war, Disney produced the live-action/animated military propaganda feature Victory Through Air Power (1943), and a series of Latin culture-themed shorts resulting from the 1941 Good Neighbor trip was compiled into two features, Saludos Amigos (1942) and The Three Caballeros (1944). Saludos and Caballeros set the template for several other 1940s Disney releases of "package films": low-budgeted films composed of animated short subjects with animated or live-action bridging material. These films were Make Mine Music (1946), Fun and Fancy Free (1947), Melody Time (1948) and The Adventures of Ichabod and Mr. Toad (1949). The studio also produced two features, Song of the South (1946) and So Dear to My Heart (1948), which used more expansive live-action stories that still included animated sequences and sequences combining live-action and animated characters. Shorts production continued during this period as well, with Donald Duck, Goofy, and Pluto cartoons being the main output accompanied by cartoons starring Mickey Mouse, Figaro and, in the 1950s, Chip 'n' Dale and Humphrey the Bear. In addition, Disney began reissuing the previous features, beginning with re-releases of Snow White in 1944, Pinocchio in 1945, and Fantasia in 1946. This led to a tradition of reissuing the Disney films every seven years, which lasted into the 1990s before being translated into the studio's handling of home video releases. 1948–1966: Return of features, Buena Vista, end of shorts, layoffs, and Walt's final years In 1948, Disney returned to the production of full-length features with Cinderella, a feature film based on the fairy tale by Charles Perrault. The film cost nearly $3 million to produce, and the future of the studio depended upon its success. Upon its release in 1950, Cinderella proved to be a box-office success, with the profits from the film's release allowing Disney to carry on producing animated features throughout the 1950s. Following its success, production on the in-limbo features Alice in Wonderland, Peter Pan, and Lady and the Tramp was resumed. 
In addition, an ambitious new project, an adaptation of the Brothers Grimm fairy tale "Sleeping Beauty" set to Tchaikovsky's classic score, was begun but took much of the rest of the decade to complete. Alice in Wonderland, released in 1951, met with a lukewarm response at the box office and was a sharp critical disappointment in its initial release. Peter Pan, released in 1953, on the other hand, was a commercial success and the sixth highest-grossing film of the year. In 1955, Lady and the Tramp was released to higher box office success than any other Disney animated feature since Snow White and the Seven Dwarfs, earning an estimated $6.5 million in rentals at the North American box office. Lady and the Tramp is significant as Disney's first widescreen animated feature, produced in the CinemaScope process, and was the first Disney animated feature to be released by Disney's own distribution company, Buena Vista Distribution. By the mid-1950s, with Walt Disney's attention primarily set on new endeavours such as live-action films, television and the Disneyland theme park, production of the animated films was left primarily in the hands of the "Nine Old Men" trust of head animators and directors. This led to several delays in approvals during the production of Sleeping Beauty, which was finally released in 1959. At $6 million, it was Disney's most expensive film to date, produced in a heavily stylised art style devised by artist Eyvind Earle and presented in large-format Super Technirama 70 with six-track stereophonic sound. However, despite the film being the studio's highest-grossing animated feature since Snow White and the Seven Dwarfs, its large production costs and the box office underperformance of Disney's other 1959 output resulted in the studio posting its first annual loss in a decade for fiscal year 1960, leading to massive layoffs throughout the studio. By the end of the decade, the Disney short subjects were no longer being produced on a regular basis, with many of the shorts divisions' personnel either leaving the company or being reassigned to work on Disney television programmes such as The Mickey Mouse Club and Disneyland. While the Silly Symphonies shorts had dominated the Academy Award for Best Short Subject (Cartoons) during the 1930s, their dominance had since been ended by MGM's Tom and Jerry cartoons, Warner Bros' Looney Tunes and Merrie Melodies, and the works of United Productions of America (UPA), whose flat art style and stylized animation techniques were lauded as more modern alternatives to the older Disney style. During the 1950s, only one Disney short, the stylized Toot, Whistle, Plunk and Boom, won the Best Short Subject (Cartoons) Oscar. The Mickey Mouse, Pluto and Goofy shorts had all ceased regular production by 1953, with Donald Duck and Humphrey continuing and converting to widescreen CinemaScope before the shorts division was shut down in 1956. After that, all future shorts were produced by the feature films division until 1969. The last Disney short of the golden age of animation was It's Tough to Be a Bird. Disney shorts would only be produced on a sporadic basis from this point on, with notable later shorts including Runaway Brain (1995, starring Mickey Mouse) and Paperman (2012). Despite the 1959 layoffs and competition for Walt Disney's attention from the company's expanded live-action film, TV and theme park departments, work on animated features continued at a reduced level. 
In 1961, the studio released One Hundred and One Dalmatians, an animated feature that popularized the use of xerography during the process of inking and painting traditional animation cels. Using xerography, animation drawings could be photochemically transferred rather than traced from paper drawings to the clear acetate sheets ("cels") used in final animation production. The resulting art style – a scratchier line which revealed the construction lines in the animators' drawings – typified Disney films into the 1980s. The film was a success, being the tenth highest-grossing film of 1961 with rentals of $6.4 million. The Disney animation training program, started at the studio in 1932 before the development of Snow White, eventually led to Walt Disney helping to found the California Institute of the Arts (CalArts). This university was formed via the merger of the Chouinard Art Institute and the Los Angeles Conservatory of Music. It included a Disney-developed animation program of study among its degree offerings. CalArts became the alma mater of many of the animators who would work at Disney and other animation studios from the 1970s to the present. The Sword in the Stone was released in 1963 and was the sixth highest-grossing film of the year in North America with estimated rentals of $4.75 million. A featurette adaptation of one of A. A. Milne's Winnie-the-Pooh stories, Winnie the Pooh and the Honey Tree, was released in 1966, to be followed by several other Pooh featurettes over the years and a full-length compilation feature, The Many Adventures of Winnie the Pooh, which was released in 1977. Walt Disney died in December 1966, ten months before the studio's next film, The Jungle Book, was completed and released. The film was a success, finishing 1967 as the fourth highest-grossing film of the year. 1966–1984: Decline in popularity, Don Bluth's entrance and departure, "rock bottom" Following Walt Disney's death, Wolfgang Reitherman continued as both producer and director of the features. The studio began the 1970s with the release of The Aristocats, the last film project to be approved by Walt Disney. In 1971, Roy O. Disney, the studio co-founder, died and Walt Disney Productions was left in the hands of Donn Tatum and Card Walker, who alternated as chairman and CEO in overlapping terms until 1978. The next feature, Robin Hood (1973), was produced with a significantly reduced budget and animation repurposed from previous features. Both The Aristocats and Robin Hood were minor box office and critical successes. The Rescuers, released in 1977, was a success exceeding the achievements of the previous two Disney features. Receiving positive reviews, high commercial returns, and an Academy Award nomination, it ended up being the third highest-grossing film of the year and the most successful and best reviewed Disney animated film since The Jungle Book. The film was reissued in 1983, accompanied by a new Disney featurette, Mickey's Christmas Carol. The production of The Rescuers signaled the beginning of a changing of the guard process in the personnel at the Disney animation studio: as veterans such as Milt Kahl and Les Clark retired, they were gradually replaced by new talents such as Don Bluth, Ron Clements, John Musker and Glen Keane. 
The new animators, culled from the animation program at CalArts and trained by Eric Larson, Frank Thomas, Ollie Johnston and Woolie Reitherman, got their first chance to prove themselves as a group with the animated sequences in Disney's live-action/animated hybrid feature Pete's Dragon (1977), the animation for which was directed by Bluth. In September 1979, dissatisfied with what they felt was a stagnation in the development of the art of animation at Disney, Bluth and several of the other new guard animators quit to start their own studio, Don Bluth Productions, which became Disney's chief competitor in the animation field during the 1980s. Delayed half a year by the defection of the Bluth group, The Fox and the Hound was released in 1981 after four years in production. The film was considered a financial success by the studio, and development continued on The Black Cauldron, a long-gestating adaptation of the Chronicles of Prydain series of novels by Lloyd Alexander produced in Super Technirama 70. The Black Cauldron was intended to expand the appeal of Disney animated films to older audiences and to showcase the talents of the new generation of Disney animators from CalArts. Besides Keane, Musker and Clements, this new group of artists included other promising animators such as Andreas Deja, Mike Gabriel, John Lasseter, Brad Bird and Tim Burton. Lasseter was fired from Disney in 1983 for pushing the studio to explore computer animation production, but went on to become the creative head of Pixar, a pioneering computer animation studio that would begin a close association with Disney in the late 1980s. Similarly, Burton was fired in 1984 after producing a live-action short shelved by the studio, Frankenweenie, then went on to become a high-profile producer and director of live-action and stop-motion features for Disney and other studios. Some of Burton's high-profile projects for Disney would include the stop-motion The Nightmare Before Christmas (1993), a live-action adaptation of Alice in Wonderland (2010), and a stop-motion feature remake of Frankenweenie (2012). Bird was also fired after a few years working at the company for criticizing Disney's upper management for playing it safe and not taking risks on animation. He subsequently became an animation director at other studios, including Warner Bros. Animation and Pixar. Ron Miller, Walt Disney's son-in-law, became president of Walt Disney Productions in 1980 and CEO in 1983. That year, he expanded the company's film and television production divisions, creating the Walt Disney Pictures banner under which future films from the feature animation department would be released. 1984–1989: Michael Eisner takeover, restructuring, and return to prominence After a series of corporate takeover attempts in 1984, Roy E. Disney, son of Roy O. and nephew of Walt, resigned from the company's board of directors and launched a campaign called "SaveDisney", successfully convincing the board to fire Miller. Roy E. Disney brought in Michael Eisner as Disney's new CEO and Frank Wells as president. Eisner in turn named Jeffrey Katzenberg chairman of the film division, The Walt Disney Studios. Near completion when the Eisner regime took over Disney, The Black Cauldron (1985) came to represent what would later be referred to as the "rock bottom" point for Disney animation. The studio's most expensive feature to that point at $44 million, The Black Cauldron was a critical and commercial failure. 
The film's $21 million box office gross led to a loss for the studio, putting the future of the animation division in jeopardy. Between the 1950s and 1980s, the significance of animation to Disney's bottom line was significantly reduced as the company expanded into further live-action production, television and theme parks. As new CEO, Michael Eisner strongly considered shuttering the feature animation studio and outsourcing future animation. Roy E. Disney intervened, offering to head the feature animation division and turn its fortunes around, while Eisner established the Walt Disney Pictures Television Animation Group to produce lower-cost animation for television. Named Chairman of feature animation by Eisner, Roy E. Disney appointed Peter Schneider president of animation to run the day-to-day operations in 1985. On February 6, 1986, Disney executives moved the animation division from the Disney studio lot in Burbank to a variety of warehouses, hangars and trailers located about two miles east (3.2 kilometers) at 1420 Flower Street in nearby Glendale, California. About a year later, the growing computer graphics (CG) group would move there too. The animation division's first feature animation at its new location was The Great Mouse Detective (1986), begun by John Musker and Ron Clements as Basil of Baker Street after both left production of The Black Cauldron. The film was enough of a critical and commercial success to instill executive confidence in the animation studio. Later the same year, however, Universal Pictures and Steven Spielberg's Amblin Entertainment released Don Bluth's An American Tail, which outgrossed The Great Mouse Detective at the box office and became the highest-grossing first-issue animated film to that point. Katzenberg, Schneider, and Roy Disney set about changing the culture of the studio, increasing staffing and production so that a new animated feature would be released every year instead of every two to four. The first of the releases on the accelerated production schedule was Oliver & Company (1988), which featured an all-star cast including Billy Joel and Bette Midler and an emphasis on a modern pop soundtrack. Oliver & Company opened in the theaters on the same day as another Bluth/Amblin/Universal animated film, The Land Before Time; however, Oliver outgrossed Time in the US and went on to become the most successful animated feature in the US to that date, though the latter's worldwide box office gross was higher than the former. At the same time in 1988, Disney started entering into Australia's long-standing animation industry by purchasing Hanna-Barbera's Australian studio to start Disney Animation Australia. While Oliver & Company and the next feature The Little Mermaid were in production, Disney collaborated with Steven Spielberg's Amblin Entertainment and master animator Richard Williams to produce Who Framed Roger Rabbit, a groundbreaking live-action/animation hybrid directed by Robert Zemeckis which featured licensed animated characters from other animation studios. Disney set up a new animation studio under Williams' supervision in London to create the cartoon characters for Roger Rabbit, with many of the artists from the California studio traveling to England to work on the film. A significant critical and commercial success, Roger Rabbit won three Academy Awards for technical achievements. and was key in renewing mainstream interest in American animation. 
Other than the film itself, the studio also produced three Roger Rabbit shorts during the late 1980s and early 1990s. 1989–1994: Beginning of the Disney Renaissance, successful releases, and impact on the animation industry A second satellite studio, Walt Disney Feature Animation Florida, opened in 1989 with 40 employees. Its offices were located within the Disney-MGM Studios theme park at Walt Disney World in Bay Lake, Florida, and visitors were allowed to tour the studio and observe animators at work. That same year, the studio released The Little Mermaid, which became a keystone achievement in Disney's history as its largest critical and commercial success in decades. Directed by John Musker and Ron Clements, who'd been co-directors on The Great Mouse Detective, The Little Mermaid earned $84 million at the North American box office, a record for the studio. The film was built around a score from Broadway songwriters Alan Menken and Howard Ashman, who was also a co-producer and story consultant on the film. The Little Mermaid won two Academy Awards, for Best Original Song and Best Original Score. The Little Mermaid vigorously relaunched a profound new interest in the animation and musical film genres. The film was also the first to feature the use of Disney's Computer Animation Production System (CAPS). Developed for Disney by Pixar, which had grown into a commercial computer animation and technology development company, CAPS/ink-and-paint would become significant in allowing future Disney films to more seamlessly integrate computer-generated imagery and achieve higher production values with digital ink and paint and compositing techniques. The Little Mermaid was the first of a series of blockbusters that would be released over the next decade by Walt Disney Feature Animation, a period later designated by the term Disney Renaissance. Accompanied in theaters by the Mickey Mouse featurette The Prince and the Pauper, The Rescuers Down Under (1990) was Disney's first animated feature sequel and the studio's first film to be fully colored and composited via computer using the CAPS/ink-and-paint system. However, the film did not duplicate the success of The Little Mermaid. The next Disney animated feature, Beauty and the Beast, had begun production in London but was moved back to Burbank after Disney decided to shutter the London satellite office and retool the film into a musical-comedy format similar to The Little Mermaid. Alan Menken and Howard Ashman were retained to write the songs and score, though Ashman died before production was completed. Debuting first in a work-in-progress version at the 1991 New York Film Festival before its November 1991 wide release, Beauty and the Beast, directed by Kirk Wise and Gary Trousdale, was an unprecedented critical and commercial success and would later be regarded as one of the studio's best films. The film earned six Academy Award nominations, including one for Best Picture, a first for an animated work, winning for Best Song and Best Original Score. Its $145 million box office gross set new records, and merchandising for the film, including toys, cross-promotions, and soundtrack sales, was also lucrative. The successes of The Little Mermaid and Beauty and the Beast established the template for future Disney releases during the 1990s: a musical-comedy format with Broadway-styled songs and tentpole action sequences, buoyed by cross-promotional marketing and merchandising, all carefully designed to pull audiences of all ages and types into theatres. 
In addition to John Musker, Ron Clements, Kirk Wise and Gary Trousdale, the new guard of Disney artists creating these films included story artists/directors Roger Allers, Rob Minkoff, Chris Sanders and Brenda Chapman, and lead animators Glen Keane, Andreas Deja, Eric Goldberg, Nik Ranieri, Will Finn and many others. Aladdin, released in November 1992, continued the upward trend in Disney's animation success, earning $504 million worldwide at the box office, and two more Oscars for Best Song and Best Score. Featuring songs by Menken, Ashman and Tim Rice (who replaced Ashman after his death) and starring the voice of Robin Williams, Aladdin also established the trend of hiring celebrity actors and actresses to provide the voices of Disney characters, which had been explored to some degree with The Jungle Book and Oliver & Company but now became standard practice. In June 1994, Disney released The Lion King, directed by Roger Allers and Rob Minkoff. An all-animal story set in Africa, The Lion King featured an all-star voice cast which included James Earl Jones, Matthew Broderick and Jeremy Irons, with songs written by Tim Rice and pop star Elton John. The Lion King earned $768 million at the worldwide box office, to this date a record for a traditionally-animated film, earning millions more in merchandising, promotions and record sales for its soundtrack. Aladdin and The Lion King had been the highest-grossing films worldwide in each of their respective release years. Between these in-house productions, Disney diversified in animation methods and produced The Nightmare Before Christmas with former Disney animator Tim Burton. With animation becoming again an increasingly important and lucrative part of Disney's business, the company began to expand its operations. The flagship California studio was split into two units and expanded, and ground was broken on a new Disney Feature Animation building adjacent to the main Disney lot in Burbank, which was dedicated in 1995. The Florida satellite, officially incorporated in 1992, was expanded as well, and one of Disney's television animation studios in the Paris, France suburb of Montreuil – the former Brizzi Brothers studio – became Walt Disney Feature Animation Paris, where A Goofy Movie (1995) and significant parts of later Disney films were produced. Disney also began producing lower cost direct-to-video sequels for its successful animated films using the services of its television animation studios under the name Disney MovieToons. The Return of Jafar (1994), a sequel to Aladdin and a pilot for the Aladdin television show spin-off, was the first of these productions. Walt Disney Feature Animation was also heavily involved in the adaptations of both Beauty and the Beast in 1994 and The Lion King in 1997 into Broadway musicals. Jeffrey Katzenberg and the Disney story team were heavily involved in the development and production of Toy Story, the first fully computer-animated feature ever produced. Toy Story was produced for Disney by Pixar and directed by former Disney animator John Lasseter, whom Peter Schneider had unsuccessfully tried to hire back after his success with Pixar shorts such as Tin Toy (1988). Released in 1995, Toy Story opened to critical acclaim and commercial success, leading to Pixar signing a five-film deal with Disney, which bore critically and financially successful computer animated films such as A Bug's Life (1998), Toy Story 2 (1999), Monsters, Inc. (2001), and Finding Nemo (2003). 
In addition, the successes of Aladdin and The Lion King spurred a significant increase in the number of American-produced animated features throughout the rest of the decade, with the major film studios establishing new animation divisions such as Fox Animation Studios, Sullivan Bluth Studios, Amblimation, Rich Animation Studios, Turner Feature Animation, and Warner Bros. Animation being formed to produce films in a Disney-esque musical-comedy format such as We're Back! A Dinosaur's Story (1993), Thumbelina (1994), The Swan Princess (1994), A Troll in Central Park (1994), The Pebble and the Penguin (1995), Cats Don't Dance (1997), Anastasia (1997), Quest for Camelot (1998) and The King and I (1999). Out of these non-Disney animated features, only Anastasia was a box-office success. 1994–1999: End of the Disney Renaissance, declining returns Concerns arose internally at the Disney studio, particularly from Roy E. Disney, about studio chief Jeffrey Katzenberg taking too much credit for the success of Disney's early 1990s releases. Disney Company president Frank Wells was killed in a helicopter accident in 1994, and Katzenberg lobbied CEO Michael Eisner for the vacant president position. Instead, tensions between Katzenberg, Eisner and Disney resulted in Katzenberg being forced to resign from the company on August 24 of that year, with Joe Roth taking his place. On October 12, 1994, Katzenberg went on to become one of the founders of DreamWorks SKG, whose animation division became Disney's key rival in feature animation, with both computer animated films such as Antz (1998) and traditionally-animated films such as The Prince of Egypt (1998). In December 1994, the Animation Building in Burbank was completed for the animation division. In contrast to the early 1990s productions, not all the films in the second half of the renaissance were successful. Pocahontas, released in summer of 1995, was the first film of the renaissance to receive mixed reviews from critics but was still popular with audiences and commercially successful, earning $346 million worldwide, and won two Academy Awards for its music by Alan Menken and Stephen Schwartz. The next film, The Hunchback of Notre Dame (1996) was partially produced at the Paris studio and, although it is considered Disney's darkest film, The Hunchback of Notre Dame performed better critically than Pocahontas and grossed $325 million worldwide. The following summer, Hercules (1997) did well at the box office, grossing $252 million worldwide, but underperformed in comparison to Disney's previous films. It received positive reviews for its acting but the animation and music were criticized. Hercules was responsible for beginning the decline of traditionally-animated films. The declining box office success became doubly concerning inside the studio as wage competition from DreamWorks had significantly increased the studio's overhead, with production costs increasing from $79 million in total costs (production, marketing, and overhead) for The Lion King in 1994 to $179 million for Hercules three years later. Moreover, Disney depended upon the popularity of its new features in order to develop merchandising, theme park attractions, direct-to-video sequels and television programming in its other divisions. The production schedule was scaled back and a larger number of creative executives were hired to more closely supervise production, a move that was not popular among the animation staff. 
Mulan (1998), the first film produced primarily at the Florida studio, opened to positive reviews from audiences and critics and earned a successful $305 million at the worldwide box office, restoring both the critical and commercial success of the studio. The next film, Tarzan (1999), directed by Kevin Lima and Chris Buck, had a high production cost of $130 million, again received positive reviews and earned $448 million at the box office. The Tarzan soundtrack by pop star Phil Collins resulted in significant record sales and an Academy Award for Best Song. 1999–2005: Slump, downsizing, and conversion to computer animation, corporate issues Fantasia 2000, a sequel to the 1940 film that had been a pet project of Roy E. Disney's since 1990, premiered on December 17, 1999, at Carnegie Hall in New York City as part of a concert tour that also visited London, Paris, Tokyo and Pasadena, California. The film was then released in 75 IMAX theaters worldwide from January 1 to April 30, 2000, making it the first animated feature-length film to be released in the format; a standard theatrical release followed on June 15, 2000. Produced in pieces when artists were available between productions, Fantasia 2000 was the first animated feature produced for and released in IMAX format. The film's $90 million worldwide box office total against its $90 million production cost resulted in it losing $100 million for the studio. Peter Schneider left his post as president of Walt Disney Feature Animation in 1999 to become president of The Walt Disney Studios under Joe Roth. Thomas Schumacher, who had been Schneider's vice president of animation for several years, became the new president of Walt Disney Feature Animation. By this time, competition from other studios had driven animators' incomes to all-time highs, making traditionally-animated features even more costly to produce. Schumacher was tasked with cutting costs, and massive layoffs began to cut salaries and bring the studio's staff – which peaked at 2,200 people in 1999 – down to approximately 1,200 employees. In October 1999, Dream Quest Images, a special effects studio previously purchased by The Walt Disney Company in April 1996 to replace Buena Vista Visual Effects, was merged with the computer-graphics operation of Walt Disney Feature Animation to form a division called The Secret Lab. The Secret Lab produced one feature film, Dinosaur, which was released in May 2000 and featured CGI prehistoric creatures against filmed live-action backgrounds. The $128 million production earned $349 million worldwide, below studio expectations, and The Secret Lab was closed in 2001. In December 2000, The Emperor's New Groove was released. It had been a musical epic called Kingdom of the Sun before being revised mid-production into a smaller comedy. The film earned $169 million worldwide on release, though it was well-reviewed and performed better on video; Atlantis: The Lost Empire (2001), an attempt to break the Disney formula by moving into action-adventure, received mixed reviews and earned $186 million worldwide against production costs of $120 million. By 2001, the notable successes of computer-animated films from Pixar and DreamWorks such as Monsters, Inc. and Shrek, respectively, against Disney's lesser returns for The Emperor's New Groove and Atlantis: The Lost Empire led to a growing perception that hand-drawn animation was becoming outdated and falling out of fashion. 
In March 2002, just after the successful release of Blue Sky Studios' computer-animated feature Ice Age, Disney laid off most of the employees at the Feature Animation studio in Burbank, downsizing it to one unit and beginning plans to move into fully computer animated films. A handful of employees were offered positions doing computer animation. Morale plunged to a low not seen since the start of the studio's ten-year exile to Glendale in 1985. The Paris studio was also closed in 2003. The Burbank studio's remaining hand-drawn productions, Treasure Planet and Home on the Range, continued production. Treasure Planet, an outer space retelling of Robert Louis Stevenson's Treasure Island, was a pet project of writer-directors Ron Clements and John Musker. It received an IMAX release and generally positive reviews but was financially unsuccessful upon its November 2002 release, resulting in a $74 million write-down for The Walt Disney Company in fiscal year 2003. The Burbank studio's 2D departments closed at the end of 2002 following completion of Home on the Range, a long-in-production feature that had previously been known as Sweating Bullets. Meanwhile, hand-drawn feature animation production continued at the Feature Animation Florida studio, where the films could be produced at lower costs. Lilo & Stitch, an offbeat comedy-drama written and directed by Chris Sanders and Dean DeBlois, became the studio's first bona fide hit since Tarzan upon its summer 2002 release, earning $273 million worldwide against an $80 million production budget. By this time, most of the Disney features from the 1990s had been spun off into direct-to-video sequels, television series or both, produced by the Disney Television Animation unit. Beginning with the February 2002 release of Return to Never Land, a sequel to Peter Pan (1953), Disney began releasing lower-budgeted sequels to earlier films, which had been intended for video premieres, in theaters, a practice derided by some of the Disney animation staff and fans of the Disney films. In 2003, Tom Schumacher was appointed president of Buena Vista Theatrical Group, Disney's stageplay and musical theater arm, and David Stainton, then president of Walt Disney Television Animation, was appointed as his replacement. Stainton continued to oversee Disney's direct-to-video division, Disneytoon Studios, which had been part of the television animation department but was transferred at this time to Walt Disney Feature Animation management. Under Stainton, the Florida studio completed Brother Bear, which did not perform as well as Lilo & Stitch critically or financially. Disney announced the closure of the Florida studio on January 12, 2004; the then in-progress feature My Peoples was left unfinished when the studio closed two months later. Upon the unsuccessful April 2004 release of Home on the Range, Disney, led by executive Bob Lambert, officially announced the conversion of Walt Disney Feature Animation into a fully CGI studio – a process begun two years prior – with a staff now of 600 people, and began selling off all of its traditional animation equipment. Just after Brother Bear's November 2003 release, Feature Animation chairman Roy E. Disney had resigned from The Walt Disney Company, launching with business partner Stanley Gold a second external "SaveDisney" campaign similar to the one that had forced Ron Miller out in 1984, this time to force out Michael Eisner. 
Two of their arguing points against Eisner included his handling of Feature Animation and the souring of the studio's relationship with Pixar. The same year, the studio collaborated with Walt Disney Imagineering on the 4D film attraction Mickey's Philharmagic, which centers on Donald Duck as he is transported through several Disney musical sequences. One of the studio's first attempts at CG animation, the studio brought back several animators from the Renaissance period to work on the film; each animator worked on a sequence that they had previously worked on, such as Glen Keane animating Ariel for the "Part of Your World" sequence. Talks between Eisner and Pixar CEO Steve Jobs over renewal terms for the highly lucrative Pixar-Disney distribution deal broke down in January 2004. Jobs, in particular, disagreed with Eisner's insistence that sequels such as the then in-development Toy Story 3 (2010) would not count against the number of films required in the studio's new deal. To that end, Disney announced the launching of Circle 7 Animation, a division of Feature Animation which would have produced sequels to the Pixar films, while Pixar began shopping for a new distribution deal. In 2005, Disney released its first fully computer-animated feature, Chicken Little. The film was a moderate success at the box office, earning $315 million worldwide, but was not well-received critically. Later that year, after two years of Roy E. Disney's "SaveDisney" campaign, Eisner announced that he would resign and named Bob Iger, then president of The Walt Disney Company, his successor as chairman and CEO. 2005–2010: Rebound, Disney's acquisition of Pixar and renaming Iger later said, "I didn’t yet have a complete sense of just how broken Disney Animation was." He described its history since the early 1990s as "dotted by a slew of expensive failures" like Hercules and Chicken Little; the "modest successes" like Mulan and Lilo & Stitch were still critically and commercially unsuccessful compared to the earlier films of the Disney Renaissance. After Iger became CEO, Jobs resumed negotiations for Pixar with Disney. On January 24, 2006, Disney announced that it would acquire Pixar for $7.4 billion in an all-stock deal, with the deal closing that May, and the Circle 7 studio launched to produce Toy Story 3 was shut down, with most of its employees returning to Feature Animation and Toy Story 3 returning to Pixar's control. Iger later said that it was "a deal I wanted badly, and [Disney] needed badly". He believed that Disney Animation needed new leadership and, as part of the acquisition, Edwin Catmull and John Lasseter were named president and Chief Creative Officer, respectively, of Feature Animation as well as Pixar. While Disney executives had discussed closing Feature Animation as redundant, Catmull and Lasseter refused and instead resolved to try to turn things around at the studio. Lasseter said, "We weren't going to let that [closure] happen on our watch. We were determined to save the legacy of Walt Disney's amazing studio and bring it back up to the creative level it had to be. Saving this heritage was squarely on our shoulders." Lasseter and Catmull set about rebuilding the morale of the Feature Animation staff, and rehired a number of its 1980s "new guard" generation of star animators who had left the studio, including Ron Clements, John Musker, Eric Goldberg, Mark Henn, Andreas Deja, Bruce W. Smith and Chris Buck. 
To maintain the separation of Walt Disney Feature Animation and Pixar despite their now common ownership and management, Catmull and Lasseter "drew a hard line" that each studio was solely responsible for its own projects and would not be allowed to borrow personnel from or lend tasks out to the other. Catmull said that he and Lasseter would "make sure the studios are quite distinct from each other. We don’t want them to merge; that would definitely be the wrong approach. Each should have its own personality". Catmull and Lasseter also brought to Disney Feature Animation the Pixar model of a "filmmaker-driven studio" as opposed to an "executive-driven studio"; they abolished Disney's prior system of requiring directors to respond to "mandatory" notes from development executives ranking above the producers in favor of a system roughly analogous to peer review, in which non-mandatory notes come primarily from fellow producers, directors and writers. Most of the layers of "gatekeepers" (midlevel executives) were stripped away, and Lasseter established a routine of personally meeting weekly with filmmakers on all projects in the last year of production and delivering feedback on the spot. The studio's team of top creatives who work together closely on the development of its films is known as the Disney Story Trust; it is somewhat similar to the Pixar Braintrust, but its meetings are reportedly "more polite" than those of its Pixar counterpart. In 2007, Lasseter renamed Walt Disney Feature Animation Walt Disney Animation Studios, and re-positioned the studio as an animation house that produced both traditional and computer-animated projects. In order to keep costs down on hand-drawn productions, animation, design and layout were done in-house at Disney while clean-up animation and digital ink-and-paint were farmed out to vendors and freelancers. The studio released Meet the Robinsons in 2007, its second all-CGI film, earning $169.3 million worldwide. That same year, Disneytoon Studios was also restructured and began to operate as a separate unit under Lasseter and Catmull's control. Lasseter's direct intervention with the studio's next film, American Dog, resulted in the departure of director Chris Sanders, who went on to become a director at DreamWorks Animation. The film was retooled by new directors Byron Howard and Chris Williams as Bolt, which was released in 2008 and had the best critical reception of any Disney animated feature since Lilo & Stitch and became a moderate financial success, receiving an Academy Award nomination for Best Animated Film. The Princess and the Frog, loosely based on the fairy tale the Frog Prince and directed by Ron Clements and John Musker, was the studio's first hand-drawn animated film in five years. A return to the musical-comedy format of the 1990s with songs by Randy Newman, the film was released in 2009 to a positive critical reception and was also nominated for three Academy Awards, including two for Best Song. The box office performance of The Princess and the Frog – a total of $267 million earned worldwide against a $105 million production budget – was seen as an underperformance due to competition with Avatar. In addition, the "Princess" aspect of the title was blamed, resulting in future Disney films then in production about princesses being given gender neutral, symbolic titles: Rapunzel became Tangled and The Snow Queen became Frozen. 
In 2014, Disney animator Tom Sito compared the film's box office performance to that of The Great Mouse Detective (1986), which was a step-up from the theatrical run of the 1985 film The Black Cauldron. In 2009, the studio also produced the computer-animated Prep & Landing holiday special for the ABC television network. 2010–2019: Continued resurgence, John Lasseter and Ed Catmull's departure After The Princess and the Frog, the studio released Tangled, a musical CGI adaptation of the Brothers Grimm fairy tale "Rapunzel" with songs by Alan Menken and Glenn Slater. In active development since 2002 under Glen Keane, Tangled, directed by Byron Howard and Nathan Greno, was released in 2010 and became a significant critical and commercial success and was nominated for several accolades. The film earned $592 million in worldwide box office revenue, becoming the studio's third most successful release to date. The hand-drawn feature Winnie the Pooh, a new feature film based on the eponymous stories by A.A. Milne, followed in 2011 to positive reviews but underperformed at the box office; it remains the studio's most recent hand-drawn feature. The film was released in theaters alongside the hand-drawn short The Ballad of Nessie. Wreck-It Ralph, directed by Rich Moore, was released in 2012 to critical acclaim and commercial success. A comedy-adventure about a video-game villain who redeems himself as a hero, it won numerous awards, including the Annie, Critics' Choice and Kids' Choice Awards for Best Animated Feature Film, and received Golden Globe and Academy Award nominations. The film earned $471 million in worldwide box office revenue. In addition, the studio won its first Academy Award for a short film in forty-four years with Paperman, which was released in theaters with Wreck-It Ralph. Directed by John Kahrs, Paperman utilized new software developed in-house at the studio called Meander, which merges hand-drawn and computer animation techniques within the same character to create a unique "hybrid". According to Producer Kristina Reed, the studio is continuing to develop the technique for future projects, including an animated feature. In 2013, the studio laid off nine of its hand-drawn animators, including Nik Ranieri and Ruben A. Aquino, leading to speculation on animation blogs that the studio was abandoning traditional animation, an idea that the studio dismissed. That same year, Frozen, a CGI musical film inspired by Hans Christian Andersen's fairy tale "The Snow Queen", was released to widespread acclaim and became a blockbuster hit. Directed by Chris Buck and Jennifer Lee with songs by the Broadway team of Robert Lopez and Kristen Anderson-Lopez, it was the first Disney animated film to earn over $1 billion in worldwide box office revenue. Frozen also became the first film from Walt Disney Animation Studios to win the Academy Award for Best Animated Feature (a category started in 2001), as well as the first feature-length motion picture from the studio to win an Academy Award since Tarzan and the first to win multiple Academy Awards since Pocahontas. It was released in theaters with Get a Horse!, a new Mickey Mouse cartoon combining black-and-white hand-drawn animation and full-color CGI animation. The studio's next feature, Big Hero 6, a CGI comedy-adventure film inspired by the Marvel Comics series of the same name, was released in 2014. For the film, the studio developed new light rendering software called Hyperion, which the studio continued to use on all subsequent films. 
Big Hero 6 received critical acclaim and was the highest-grossing animated film of 2014, also winning the Academy Award for Best Animated Feature. The film was accompanied in theaters by the animated short Feast, which won the Academy Award for Best Animated Short Film. That same month, it was announced that general manager Andrew Millstein had been promoted to president of Walt Disney Animation Studios. In March 2016, the studio released Zootopia, a CGI buddy-comedy film set in a modern world inhabited by anthropomorphic animals. The film was a critical and commercial success, grossing over $1 billion worldwide, and won the Academy Award for Best Animated Feature. Moana, a CGI fantasy-adventure film, was released in November 2016. The film was shown in theaters with the animated short Inner Workings. Moana was another commercial and critical success for the studio, grossing over $600 million worldwide and receiving two Academy Award nominations. In November 2017, John Lasseter announced that he was taking a six-month leave of absence after acknowledging what he called "missteps" in his behavior with employees in a memo to staff. According to various news outlets, Lasseter had a history of alleged sexual misconduct towards employees. On June 8, 2018, it was announced that Lasseter would leave Disney and Pixar at the end of the year after the company decided not to renew his contract, but he would take on a consulting role until it expired. Jennifer Lee was announced as Lasseter's replacement as chief creative officer of Disney Animation on June 19, 2018. On June 28, 2018, the studio's division Disneytoon Studios was shut down, resulting in the layoffs of 75 animators and staff. On October 23, 2018, it was announced that Ed Catmull would be retiring at the end of the year and would stay in an adviser role until July 2019. In November 2018, the studio released a sequel to Wreck-It Ralph, titled Ralph Breaks the Internet. The film grossed over $500 million worldwide and received nominations for a Golden Globe and an Academy Award, both for Best Animated Feature. 
2019–present: Continued success and expansion to television 
In August 2019, it was announced that Andrew Millstein would be stepping down from his role as president before moving on to become co-president of Blue Sky Studios alongside Robert Baird, while Clark Spencer was named president of Disney Animation, reporting to Walt Disney Studios chairman Alan Bergman and working alongside chief creative officer Jennifer Lee. The studio's next feature film was the sequel Frozen II, released in November 2019. The film grossed over $1 billion worldwide and received an Academy Award nomination for Best Original Song. According to Disney (which does not consider the 2019 The Lion King remake to be an animated film), Frozen II is the highest-grossing animated film of all time. In December 2020, the studio announced that it was expanding into producing television series, a business usually handled by the Disney Television Animation division. Most of the projects in development are for the Disney+ streaming service. Among the CG series being produced are Baymax! (a spinoff of Big Hero 6), a TV anthology called Zootopia+ (set in the Zootopia universe), and a TV adaptation of Moana. A hand-drawn series called Tiana, featuring the lead character from The Princess and the Frog, is also in development. 
They also announced they would be teaming up with British-based Pan-African entertainment company Kugali Media on a science fiction anthology named Iwájú. In addition, employees from Disney Animation are involved on the Disney Television Animation series Monsters at Work, based on Pixar's Monsters, Inc.. Raya and the Last Dragon, a CGI fantasy-adventure film, was released in March 2021. Due to the COVID-19 pandemic, it was released simultaneously in theaters and on Disney+ with Premier Access. The film was accompanied in theaters with the animated short Us Again. Starting in 2020, Disney Animation created a series of experimental shorts called Short Circuit for Disney+. The first pack of shorts was released in 2020, and a second pack was released in August 2021. During that period, Disney Animation returned to work on hand-drawn animation, having released the hand-drawn "At Home with Olaf" web short "Ice", as well as three hand-drawn animated Goofy shorts for Disney+, and a hand-drawn animated "Short Circuit" titled "Dinosaur Barbarian". In August 2021, it was reported that Disney Animation was opening a new animation studio in Vancouver. Operations at the Vancouver studio will start in 2022, with former Disney Animation finance lead Amir Nasrabadi serving as head for the studio. The Vancouver studio will work on the animation for the Disney+-exclusive long-form series and future Disney+ specials, while the short-form series will be animated at the Burbank studio. Pre-production and storyboarding for the long-form series and specials will also take place at the Burbank studio. In November 2021, the studio released Encanto, a CGI musical-fantasy film. It was released in theaters with the 2D/CG hybrid short Far from the Tree. Studio Management Walt Disney Animation Studios is currently managed by Jennifer Lee (Chief Creative Officer, 2018–present) and Clark Spencer (President, 2019–present). Former presidents of the studio include Andrew Millstein (November 2014–July 2019), Edwin Catmull (June 2007–July 2019), David Stainton (January 2003–January 2006), Thomas Schumacher (January 1999–December 2002) and Peter Schneider (1985–January 1999). Other Disney executives who also exercised much influence within the studio were John Lasseter (2006–2018, Chief Creative Officer, Walt Disney Animation Studios), Roy E. Disney (1972–2009, CEO and Chairman, Walt Disney Animation Studios), Jeffrey Katzenberg (1984–94, Chairman, The Walt Disney Studios), Michael Eisner (1984–2005, CEO, The Walt Disney Company), and Frank Wells (1984–94, President and COO, The Walt Disney Company). Following Roy Disney's death in 2009, the WDAS headquarters in Burbank was re-dedicated as The Roy E. Disney Animation Building in May 2010. Locations Since 1995, Walt Disney Animation Studios has been headquartered in the Roy E. Disney Animation Building in Burbank, California, across Riverside Drive from The Walt Disney Studios, where the original Animation building (now housing corporate offices) is located. The Disney Animation Building's lobby is capped by a large version of the famous hat from the Sorcerer's Apprentice segment of Fantasia (1940), and the building is informally called the "hat building" for that reason. Disney Animation shares its site with ABC Studios, whose building is located immediately to the west. 
Until the mid-1990s, Disney Animation operated out of the Air Way complex, a cluster of old hangars, office buildings, and trailers in the Grand Central Business Centre, an industrial park on the site of the former Grand Central Airport about two miles (3.2 km) east in the city of Glendale. The Disneytoon Studios unit was based in Glendale. Disney Animation's archive, formerly known as "the morgue" (based on an analogy to a morgue file) and today known as the Animation Research Library, is also located in Glendale. Unlike the Burbank buildings, the ARL is located in a nondescript office building near Disney's Grand Central Creative Campus. The 12,000-square-foot ARL is home to over 64 million items of animation artwork dating back to 1924; because of its importance to the company, visitors are required to agree not to disclose its exact location within Glendale. Previously, feature animation satellite studios were located around the world in Montreuil, Seine-Saint-Denis, France (a suburb of Paris), and in Bay Lake, Florida (near Orlando, at Disney's Hollywood Studios theme park). The Paris studio was shut down in 2002, while the Florida studio was shut down in 2004. The Florida animation building survives as an office building, while the former Magic of Disney Animation section of the building is home to Star Wars Launch Bay. In November 2014, Disney Animation commenced a 16-month upgrade of the Roy E. Disney Animation Building in order to fix what then-studio president Edwin Catmull had called its "dungeon-like" interior. For example, the interior was so cramped that it could not easily accommodate "town hall" meetings with all employees in attendance. Due to the renovation, the studio's employees were temporarily moved from Burbank into the closest available Disney-controlled studio space – the Disneytoon Studios building in the industrial park in Glendale and the old Imagineering warehouse in North Hollywood under the western approach to Bob Hope Airport (the Tujunga Building). The renovation was completed in October 2016. 
Productions 
Feature films 
Walt Disney Animation Studios has produced animated features using a variety of animation techniques, including traditional animation, computer animation, a combination of both, and animation combined with live-action scenes. The studio's first film, Snow White and the Seven Dwarfs, was released on December 21, 1937, and its most recent film, Encanto, was released on November 24, 2021. 
Short films 
Beginning with the Alice Comedies in the 1920s, Walt Disney Animation Studios produced a series of prominent short films, including the Mickey Mouse cartoons and the Silly Symphonies series, until the cartoon studio division was closed in 1956. Many of these shorts provided a medium for the studio to experiment with new technologies that it would use in its filmmaking process, such as the synchronization of sound in Steamboat Willie (1928), the integration of the three-strip Technicolor process in Flowers and Trees (1932), the multiplane camera in The Old Mill (1937), the xerography process in Goliath II (1960), and the hand-drawn/CGI hybrid animation in Off His Rockers (1992), Paperman (2012), and Get a Horse! (2013). From 2001 to 2008, Disney released the Walt Disney Treasures, a limited collector DVD series, celebrating what would have been Walt Disney's 100th birthday. 
On August 18, 2015, Disney released a collection of twelve animated short films entitled Walt Disney Animation Studios Short Films Collection, which includes, among others, Tick Tock Tale (2010), directed by Dean Wellins, and Prep & Landing – Operation: Secret Santa (2010), written and directed by Kevin Deters and Stevie Wermers-Skelton. On March 22, 2017, the shorts included in the collection were released on Netflix. 
Television programming 
Walt Disney Animation Studios announced its expansion into television programming in 2020 and is currently producing five original shows for Disney+. The shows include Baymax! and Zootopia+ (for 2022), Iwájú and Tiana (for 2023), and Moana: The Series (for 2024). 
Franchises 
This list does not include Disney's direct-to-video or television follow-up films produced by either Disney Television Animation or DisneyToon Studios. 
See also 
The Walt Disney Company 
Disney's Nine Old Men 
12 basic principles of animation 
Walt Disney Treasures 
Disney Animation: The Illusion of Life 
Modern animation in the United States: Disney 
Animation studios owned by The Walt Disney Company 
DisneyToon Studios 
Pixar Animation Studios 
Blue Sky Studios 
20th Century Animation 
List of Disney theatrical animated feature films 
Documentary films about Disney animation 
A Trip Through the Walt Disney Studios (1937, short) 
The Reluctant Dragon (1941, a staged "mockumentary") 
Frank and Ollie (1995) 
Dream On Silly Dreamer (2005) 
Waking Sleeping Beauty (2009) 
References 
Sources 
Further reading 
External links 
Walt Disney Animation Studios on YouTube
2840574
https://en.wikipedia.org/wiki/CBC-MAC
CBC-MAC
In cryptography, a cipher block chaining message authentication code (CBC-MAC) is a technique for constructing a message authentication code from a block cipher. The message is encrypted with some block cipher algorithm in CBC mode to create a chain of blocks such that each block depends on the proper encryption of the previous block. This interdependence ensures that a change to any of the plaintext bits will cause the final encrypted block to change in a way that cannot be predicted or counteracted without knowing the key to the block cipher. To calculate the CBC-MAC of a message m, one encrypts m in CBC mode with a zero initialization vector and keeps the last block. For a message comprising blocks m_1 ∥ m_2 ∥ … ∥ m_x, a secret key k and a block cipher E, the tag is computed as t = E_k(m_x ⊕ E_k(m_{x−1} ⊕ ⋯ ⊕ E_k(m_1 ⊕ 0) ⋯ )). 
Security with fixed and variable-length messages 
If the block cipher used is secure (meaning that it is a pseudorandom permutation), then CBC-MAC is secure for fixed-length messages. However, by itself, it is not secure for variable-length messages. Thus, any single key must only be used for messages of a fixed and known length. This is because an attacker who knows the correct message-tag (i.e. CBC-MAC) pairs (m, t) and (m′, t′) for two messages m and m′ can generate a third message m″ whose CBC-MAC will also be t′. This is simply done by XORing the first block of m′ with t and then concatenating m with this modified m′; i.e., by making m″ = m ∥ [(m′_1 ⊕ t) ∥ m′_2 ∥ … ∥ m′_y]. When computing the MAC for the message m″, the MAC for the prefix m is computed in the usual manner as t, but when this value is chained forward into the stage that processes the block (m′_1 ⊕ t), an exclusive OR with t is performed. The presence of that tag in the new message means it cancels out, leaving no contribution to the MAC from the blocks of plain text in the first message m: E_k((m′_1 ⊕ t) ⊕ t) = E_k(m′_1), and thus the tag for m″ is t′. This problem cannot be solved by adding a message-size block to the end. There are three main ways of modifying CBC-MAC so that it is secure for variable-length messages: 1) input-length key separation; 2) length-prepending; 3) encrypt-last-block. In such cases, it may also be recommended to use a different mode of operation, for example CMAC or HMAC, to protect the integrity of variable-length messages. 
Length prepending 
One solution is to include the length of the message in the first block; in fact, CBC-MAC has been proven secure as long as no two messages that are prefixes of each other are ever used, and prepending the length is a special case of this. This can be problematic if the message length may not be known when processing begins. 
Encrypt-last-block 
Encrypt-last-block CBC-MAC (ECBC-MAC) is defined as ECBC-MAC(k_1, k_2, m) = E_{k_2}(CBC-MAC_{k_1}(m)); that is, the ordinary CBC-MAC tag computed under one key is encrypted once more under a second key. Compared to the other discussed methods of extending CBC-MAC to variable-length messages, encrypt-last-block has the advantage of not needing to know the length of the message until the end of the computation. 
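To make the basic construction and the prefix forgery above concrete, the following is a minimal sketch in Python. It assumes AES-128 as the block cipher E, the third-party pyca/cryptography package, and messages that are already an exact multiple of the 16-byte block size; the helper names (cbc_mac, ecbc_mac, xor) are illustrative rather than part of any standard API.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    # CBC-encrypt the message with a zero IV and keep only the last block.
    assert len(msg) % BLOCK == 0, "pad or length-prepend in a real scheme"
    enc = Cipher(algorithms.AES(key), modes.CBC(b"\x00" * BLOCK)).encryptor()
    return (enc.update(msg) + enc.finalize())[-BLOCK:]

def ecbc_mac(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    # Encrypt-last-block variant: encrypt the plain CBC-MAC tag under a second key.
    enc = Cipher(algorithms.AES(k2), modes.ECB()).encryptor()
    return enc.update(cbc_mac(k1, msg)) + enc.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(16)
m = os.urandom(3 * BLOCK)            # message m with tag t
m2 = os.urandom(2 * BLOCK)           # message m' with tag t'
t, t2 = cbc_mac(key, m), cbc_mac(key, m2)

# Prefix forgery on variable-length messages:
# m'' = m || (m'_1 XOR t) || m'_2 || ... carries the already-known tag t'.
forged = m + xor(m2[:BLOCK], t) + m2[BLOCK:]
assert cbc_mac(key, forged) == t2

The same forgery fails against ecbc_mac: the attacker never sees the raw chaining value after the prefix m, only its encryption under the second key, and so cannot XOR it into the first block of m′. 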
Attack methods 
As with many cryptographic schemes, naïve use of ciphers and other protocols may lead to attacks being possible, reducing the effectiveness of the cryptographic protection (or even rendering it useless). We present attacks which are possible due to using CBC-MAC incorrectly. 
Using the same key for encryption and authentication 
One common mistake is to reuse the same key k for CBC encryption and CBC-MAC. Although reusing a key for different purposes is bad practice in general, in this particular case the mistake leads to a spectacular attack. Suppose Alice has sent to Bob the cipher-text blocks c_1, c_2, …, c_n together with the authentication tag t, computed as the CBC-MAC of her plain text under the same key k; because the same key and a zero initialization vector are used for both operations, this tag equals the final cipher-text block c_n. During the transmission process, Eve can tamper with any of the cipher-text blocks c_1, …, c_{n−1} and adjust any of the bits therein as she chooses, provided that the final block, c_n, remains the same. We assume, for the purposes of this example and without loss of generality, that the initialization vector used for the encryption process is a vector of zeroes. The tampered message, delivered to Bob in replacement of Alice's original, is c′_1, c′_2, …, c′_{n−1}, c_n. When Bob receives the message, he first decrypts it by reversing the encryption process which Alice applied, using the shared secret key k, to obtain the corresponding plain-text blocks p′_1, p′_2, …, p′_n. Note that the plain text produced will be different from that which Alice originally sent, because Eve has modified all but the last cipher-text block. In particular, the final plain-text block, p′_n, differs from the original, p_n, which Alice sent; although c_n is the same, c′_{n−1} generally differs from c_{n−1}, so a different plain text is produced when chaining the previous cipher-text block into the exclusive-OR after decryption of c_n: p′_n = D_k(c_n) ⊕ c′_{n−1}. It follows that Bob will now compute the authentication tag using CBC-MAC over all the values of plain text which he decoded. The tag for the new message, t′, is given by t′ = E_k(p′_n ⊕ E_k(p′_{n−1} ⊕ ⋯ ⊕ E_k(p′_1 ⊕ 0) ⋯ )). Because re-encrypting the decrypted blocks under the same key simply regenerates the cipher-text blocks, the chained value entering the final stage is c′_{n−1}, so this expression is equal to E_k(p′_n ⊕ c′_{n−1}) = E_k(D_k(c_n) ⊕ c′_{n−1} ⊕ c′_{n−1}) = E_k(D_k(c_n)), which is exactly c_n, and it follows that t′ = t. Therefore, Eve was able to modify the cipher text in transit (without necessarily knowing what plain text it corresponds to) such that an entirely different message, p′_1, …, p′_n, was produced, but the tag for this message matched the tag of the original, and Bob was unaware that the contents had been modified in transit. By definition, a message authentication code is broken if we can find a different message (a sequence of plain-text blocks p′_1, …, p′_n) which produces the same tag as the previous message, p_1, …, p_n, with p ≠ p′. It follows that the message authentication protocol, in this usage scenario, has been broken, and Bob has been deceived into believing Alice sent him a message which she did not produce. If, instead, we use different keys for the encryption and authentication stages, say k_1 and k_2 respectively, this attack is foiled. The decryption of the modified cipher-text blocks obtains some plain-text string p′_1, …, p′_n. However, because the MAC uses a different key k_2, we cannot "undo" the decryption process in the forward step of the computation of the message authentication code so as to produce the same tag; each modified p′_i will now be encrypted under k_2 in the CBC-MAC process to some value other than the corresponding cipher-text block, so the cancellation above no longer occurs. This example also shows that a CBC-MAC cannot be used as a collision-resistant one-way function: given the key, it is trivial to create a different message which "hashes" to the same tag. 
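The pitfall described in this section can be reproduced in a few lines. The sketch below, again assuming AES-128, the pyca/cryptography package and block-aligned messages (the helper names are illustrative), shows that when one key serves both for CBC encryption with a zero IV and for CBC-MAC, a cipher text whose every block but the last has been replaced still decrypts to a plain text that passes MAC verification.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16
ZERO_IV = b"\x00" * BLOCK

def cbc_encrypt(key: bytes, pt: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CBC(ZERO_IV)).encryptor()
    return enc.update(pt) + enc.finalize()

def cbc_decrypt(key: bytes, ct: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CBC(ZERO_IV)).decryptor()
    return dec.update(ct) + dec.finalize()

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    return cbc_encrypt(key, msg)[-BLOCK:]

k = os.urandom(16)                       # one key reused for both purposes
plaintext = os.urandom(4 * BLOCK)
ciphertext = cbc_encrypt(k, plaintext)
tag = cbc_mac(k, plaintext)              # with a shared key, the tag is simply c_n

# Eve replaces every block except the last one with arbitrary data.
tampered = os.urandom(3 * BLOCK) + ciphertext[-BLOCK:]

# Bob decrypts and recomputes the MAC over whatever plain text comes out.
bogus_plaintext = cbc_decrypt(k, tampered)
assert bogus_plaintext != plaintext
assert cbc_mac(k, bogus_plaintext) == tag    # verification still succeeds

With distinct keys k_1 for encryption and k_2 for the MAC, the final assertion fails except with negligible probability, matching the discussion above. 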
Allowing the initialization vector to vary in value 
When encrypting data using a block cipher in cipher block chaining (or another) mode, it is common to introduce an initialization vector to the first stage of the encryption process. It is typically required that this vector be chosen randomly (a nonce) and that it is not repeated for any given secret key under which the block cipher operates. This provides semantic security by ensuring that the same plain text is not encrypted to the same cipher text, which would otherwise allow an attacker to infer that a relationship exists between the messages. When computing a message authentication code, such as by CBC-MAC, the use of an initialization vector is a possible attack vector. In the operation of a cipher block chaining cipher, the first block of plain text is mixed with the initialization vector using an exclusive OR (⊕). The result of this operation is the input to the block cipher for encryption. However, when performing encryption and decryption, we are required to send the initialization vector in plain text (typically as the block immediately preceding the first block of cipher text) such that the first block of plain text can be decrypted and recovered successfully. If computing a MAC, we will also need to transmit the initialization vector to the other party in plain text so that they can verify that the tag on the message matches the value they have computed. If we allow the initialization vector to be selected arbitrarily, it follows that the first block of plain text can potentially be modified (transmitting a different message) while producing the same message tag. Consider a message m = m_1 ∥ m_2 ∥ … ∥ m_x. In particular, when computing the message tag for CBC-MAC, suppose we choose an initialization vector IV_1 such that computation of the MAC begins with E_k(m_1 ⊕ IV_1). This produces a (message, tag) pair (m, t). Now produce the message m′ = m′_1 ∥ m_2 ∥ … ∥ m_x, in which only the first block has been changed. For each bit modified in m′_1, flip the corresponding bit in the initialization vector to produce the initialization vector IV_2. It follows that to compute the MAC for this message, we begin the computation with E_k(m′_1 ⊕ IV_2). As bits in both the plain text and the initialization vector have been flipped in the same places, the modification is cancelled in this first stage, meaning the input to the block cipher is identical to that for m. If no further changes are made to the plain text, the same tag will be derived despite a different message being transmitted. If the freedom to select an initialization vector is removed and all implementations of CBC-MAC fix themselves on a particular initialization vector (often the vector of zeroes, but in theory it could be anything, provided all implementations agree), this attack cannot proceed. To sum up, if the attacker is able to set the IV that will be used for MAC verification, he can perform arbitrary modification of the first data block without invalidating the MAC. 
Using a predictable initialization vector 
Sometimes the IV is used as a counter to prevent message replay attacks. However, if the attacker can predict what IV will be used for MAC verification, he or she can replay a previously observed message by modifying the first data block to compensate for the change in the IV that will be used for the verification. For example, if the attacker has observed the message m_1 ∥ m_2 with IV_1 and knows that IV_2 will be used for verification, he can produce m′_1 = m_1 ⊕ IV_1 ⊕ IV_2, so that m′_1 ∥ m_2 will pass MAC verification with IV_2. The simplest countermeasure is to encrypt the IV before using it (i.e., prepending the IV to the data). Alternatively, a MAC in CFB mode can be used, because in CFB mode the IV is encrypted before it is XORed with the data. Another solution (in case protection against message replay attacks is not required) is to always use a zero-vector IV. Note that with a zero IV the above formula for m′_1 becomes m′_1 = m_1 ⊕ 0 ⊕ 0 = m_1. So, since m and m′ are then the same message, by definition they will have the same tag. This is not a forgery; rather, it is the intended use of CBC-MAC. 
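Both IV-related issues above come down to the verifier accepting an initialization vector that the attacker can choose or predict. A small illustration follows, under the same assumptions as the earlier sketches (AES-128, the pyca/cryptography package, illustrative helper names).

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac_with_iv(key: bytes, iv: bytes, msg: bytes) -> bytes:
    # Insecure variant: the first block is mixed with a caller-supplied IV.
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return (enc.update(msg) + enc.finalize())[-BLOCK:]

k = os.urandom(16)
iv1 = os.urandom(BLOCK)
m = os.urandom(2 * BLOCK)
tag = cbc_mac_with_iv(k, iv1, m)

# Flip arbitrary bits of the first block and the same bits of the IV;
# equivalently, set m'_1 = m_1 XOR IV_1 XOR IV_2 for a predicted IV_2.
delta = os.urandom(BLOCK)
m_forged = xor(m[:BLOCK], delta) + m[BLOCK:]
iv2 = xor(iv1, delta)
assert m_forged != m
assert cbc_mac_with_iv(k, iv2, m_forged) == tag   # same tag, different message

Fixing the IV to the zero vector, as CBC-MAC specifies, removes this degree of freedom. 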
Note, however, that there are significant differences in function and in security uses between MACs (such as CBC-MAC) and hashes. References Message authentication codes Block cipher modes of operation
29039034
https://en.wikipedia.org/wiki/Voiceroid
Voiceroid
Voiceroid is a speech synthesizer application developed by AH-Software and is designed for speech rather than singing. It is only available in the Japanese language. Its name comes from the singing software Vocaloid, for which AH-Software also develops voicebanks. Both AH-Software's first Vocaloids and Voiceroids went on sale on December 4, 2009. It differs from regular text-to-speech programs in that it gives users more control over settings like tempo, pitch, and intonation. Overview Voiceroid uses an engine called AITalk developed by AI Inc. The user is able to adjust the tempo, pitch, and intonation to make the program sound more natural. The original two products, Tsukuyomi Shouta and Tsukuyomi Ai, were packaged with the animating software, Crazy Talk SE. On October 22, 2010, a new version of the engine was introduced, known as Voiceroid+. The first voicebank released for this new engine featured a character from the children's anime Eagle Talon, known as Yoshida-kun. Much like Shouta and Ai, he is aimed at young audiences. The first three Voiceroids were subject to censorship, and inappropriate words were filtered out. However, Tsurumaki Maki was designed specifically for a more mature audience and is the first of the series to have no form of censorship. Yuzuki Yukari is also the first Vocaloid to have a Voiceroid voicebank. For Tohoku Zunko's release, the software was vastly improved compared to previous Voiceroid+ voices. The first two Voiceroids to come in one package are Kotonoha Akane and Aoi. In 2015, the software was upgraded to Voiceroid+ EX. In 2017, the newest version of the software, "Voiceroid 2", was announced. This version has a number of new features and differences, though users can still import the past Voiceroid and its variants into it. However, older engine versions will not be able to use any new features. A number of releases for the software have been produced after successful crowdfunding campaigns; since 2016, this has become the main source of funding for new voicebanks. List of products Voiceroid Voiceroid Tsukuyomi Shouta: Designed to be based on a seven-year-old male. Release date: December 4, 2009. Tsukuyomi Ai: Shouta's "sister" product, designed to be based on a five-year-old female. Release date: December 4, 2009. Voiceroid+ Yoshida-kun: Based on the character of the same name from the anime Eagle Talon. Release date: October 22, 2010. Tsurumaki Maki: A teenage female, voiced by Tomoe Tamiyasu. She was the first Voiceroid to not contain censorship of vulgar words. Release date: November 12, 2010. Yuzuki Yukari: An 18-year-old female, voiced by Chihiro Ishiguro. She is the first Vocaloid to also have a VOICEROID library, as well as several emotion parameters (joy, sadness and anger). Release date: December 22, 2011. Tohoku Zunko: A 17-year-old female, voiced by Satomi Satō. She was originally created to help promote the recovery of the Tōhoku region after the Great East Japan Earthquake of 2011. Her character design is based on zunda mochi, a Tōhoku specialty. Release date: September 28, 2012. She later received a VOCALOID3 singing voicebank in June 2014 and a n3utrino singing voicebank in July 2021. Kotonoha Akane and Aoi: A pair of twin sisters, voiced by Sakakibara Yui. Akane's voicebank has Kansai-ben intonation whereas Aoi's has standard intonation. They also have a singing voicebank for Synthesizer V, released on July 30, 2020. Release date: April 25, 2014. Voiceroid+ EX All 6 past Voiceroids except Kotonoha Akane and Aoi were upgraded.
Minase Kou: The first new male vocal released for this version of the software. Release date: October 29, 2015. Kyomachi Seika: A 23-year-old female, voiced by Rika Tachibana. Her character serves as a mascot for the town of Seika, Kyoto. Release date: June 10, 2016. Tohoku Kiritan: An 11-year-old female, voiced by Himika Akaneya. She is the younger sister of Tohoku Zunko. The goal for her to become a VOICEROID was pitched at ¥4,500,000 to be raised by 18 May 2016. At the time of closing she had raised ¥6,983,080, about 55% more than was required. Release date: October 27, 2016. She also has singing voicebanks available for UTAU, n3utrino and CeVIO AI. Voiceroid 2 Yuzuki Yukari and the Kotonoha sisters were announced for Voiceroid 2. Tohoku Itako: A 19-year-old female, voiced by Ibuki Kido. She is the eldest sister of Tohoku Zunko and Tohoku Kiritan. Confirmation of her production began after a crowdfunding campaign which raised ¥10,300,000 upon closing. Release date: November 8, 2018. Kizuna Akari: A 15-year-old female voiced by Madoka Yonezawa, simultaneously released as a VOCALOID5 singing voicebank. Release date: December 22, 2017. Haruno Sora: A 17-year-old female voiced by Kikuko Inoue, simultaneously released as a VOCALOID5 singing voicebank. Release date: July 26, 2018. Tsuina-chan: A 14-year-old girl and a demon hunter. She was confirmed to become a Voiceroid after yet another successful crowdfunding campaign; her goal was ¥4,800,000 and she reached ¥14,052,884, almost three times the amount needed. Similarly to the Kotonoha package, she has a standard Japanese voicebank and a Kansai-ben voicebank, both voiced by Mai Kadowaki. Release date: November 1, 2019. Iori Yuzuru: A male voicebank voiced by Yoshiyuki Matsuura, developed in collaboration with AI Inc. Release date: February 27, 2020. Otomachi Una: An 11-year-old female, voiced by Aimi Tanaka. Release date: December 11, 2020. She was originally released for VOCALOID4 on July 30, 2016. See also Speech synthesis Vocaloid CeVIO References External links Speech synthesis
1581616
https://en.wikipedia.org/wiki/File%20synchronization
File synchronization
File synchronization (or syncing) in computing is the process of ensuring that computer files in two or more locations are updated according to certain rules. In one-way file synchronization, also called mirroring, updated files are copied from a source location to one or more target locations, but no files are copied back to the source location. In two-way file synchronization, updated files are copied in both directions, usually with the purpose of keeping the two locations identical to each other. In this article, the term synchronization refers exclusively to two-way file synchronization. File synchronization is commonly used for home backups on external hard drives or updating for transport on USB flash drives. BitTorrent Sync, Dropbox and SKYSITE are prominent products. Some backup software also supports real-time file sync. The automatic process avoids copying files that are already identical, so it can be considerably faster than a manual copy and is less error-prone. However, this suffers from the limitation that the synchronized files must physically fit on the portable storage device. Synchronization software that keeps only a list of files and the changed files eliminates this problem (e.g. the "snapshot" feature in Beyond Compare or the "package" feature in Synchronize It!). It is especially useful for mobile workers, or others who work on multiple computers. It is possible to synchronize multiple locations by synchronizing them one pair at a time. The Unison Manual describes how to do this: If you need to do this, the most reliable way to set things up is to organize the machines into a "star topology," with one machine designated as the "hub" and the rest as "spokes," and with each spoke machine synchronizing only with the hub. The big advantage of the star topology is that it eliminates the possibility of confusing "spurious conflicts" arising from the fact that a separate archive is maintained by Unison for every pair of hosts that it synchronizes. Common features Common features of file synchronization systems include: Encryption for security, especially when synchronizing across the Internet. Compressing any data sent across a network. Conflict detection where a file has been modified on both sources, as opposed to where it has only been modified on one. Undetected conflicts can lead to overwriting copies of the file with the most recent version, causing data loss. For conflict detection, the synchronization software needs to keep a database of the synchronized files. Distributed conflict detection can be achieved by version vectors. Open Files Support ensures data integrity when copying data or application files that are in use, or database files that are exclusively locked. Specific support for using an intermediate storage device, such as a removable flash disc, to synchronize two machines. Most synchronizing programs can be used in this way, but providing specific support for this can reduce the amount of data stored on a device. The ability to preview any changes before they are made. The ability to view differences in individual files. Backup between operating systems and transfer between network computers. The ability to edit or use files on multiple computers or operating systems. Possible security concerns Consumer-grade file synchronization solutions are popular; for business use, however, they create a concern of allowing corporate information to sprawl to unmanaged devices and cloud services which are uncontrolled by the organization.
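As a rough sketch of the one-way (mirroring) case described above, the following Python fragment copies a file from the source tree to the target only when it is missing or has a newer modification time. The function name and the timestamp heuristic are illustrative assumptions, not the behaviour of any particular product; real tools also handle deletions, conflicts, and partial transfers.

```python
import shutil
from pathlib import Path

def mirror(source: str, target: str) -> None:
    """One-way sync: copy files from source to target when missing or newer."""
    src_root, dst_root = Path(source), Path(target)
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_root / src.relative_to(src_root)
        # Copy only when the target copy is missing or older than the source.
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps

# Example with hypothetical paths:
# mirror("/home/alice/documents", "/media/usb/documents")
```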
See also Comparison of file synchronization software Comparison of online backup services Data synchronization Data comparison Mirror (computing) Backup software List of backup software Remote backup service Shared file access References Storage software Utility software types
24338173
https://en.wikipedia.org/wiki/Bromcom
Bromcom
Bromcom Computers plc is a British technology company. It provides schools and colleges with a Management Information System and handheld data capture devices to record and track pupil performance. History Bromcom Computers plc (Bromcom) was founded in 1986 by computer scientist Ali Guryel as a private company serving business-to-business alongside sister company Frontline Technology Ltd. These companies were formed following the establishment of a sole proprietary company selling microcomputers. In the early 1990s, EARS (Electronic Attendance Registration System) was created by Bromcom. This was initially an A4 computer folder for teachers to take pupil attendance electronically, replacing the traditional paper register. It appeared on BBC's Tomorrow's World in January 1994. In 1996, Bromcom enhanced wNET/Ears to include a number of new features, primarily the Electronic GradeBook and the Two Way Link to SIMS Software. In 1998 Bromcom launched a new range of computer folders with a larger LCD screen and a PC-style QWERTY keyboard. In later models, such as the m-PDA, the size was reduced to A5. In June 2000, Bromcom launched a 'Parent Portal', MyChildAtSchool.com, to enable pupils' parents to access information about their child's academic performance via the internet. MyChildAtSchool.com was revised in 2009 with a new interface created by graphic designers. In 2000, Bromcom launched the Bromcom e-School, adapting e-commerce technology to schools. e-School brings automated online Internet-based data management of pupils across the school, to LEA/EAZs, to parents and to teachers working from home on their PC over the Internet. By the end of 2000, Bromcom introduced the PCMCIA/PC-CARD based on the PC-1340 PC Card to enable Windows-based laptops and handhelds to perform registration via the wNET network. The associated 'WinFolder' registration software joined the well-established Java-based jNET/jFolder as part of the company's multi-network software line. The following year saw the arrival of Bromcom's next-generation registration computer folder, the SmartPDA. In 2003, e-Markbook was created, an automatic and electronic tool integrating with the school's existing MIS. This was closely followed by e-Behaviour, a software module to monitor and track pupil behaviour, both positive and negative, via a points system. Development of the Bromcom MIS (Management Information System) started in 2004 and was completed in 2008, based on a browser platform built on Microsoft .NET technology since 2001. Bromcom has been described by Becta as one of the seven largest suppliers of school management information systems. Bromcom was the first to launch a fully cloud-based MIS, in 2011, although its cloud offerings date back to the launch of MyChildAtSchool in 2000, a secure way of letting parents see pupil information; Capita, which holds the majority of the market share, has yet to fully launch a cloud-based MIS. Bromcom is named as one of the approved Government Procurement Service (GPS) suppliers for both its Management Information System and Virtual Learning Platform on the Information Management and Learning Services framework (IMLS). In 2013 Bromcom won the Telford & Wrekin Council framework contract for school MIS. Bromcom was also awarded a contract on the Government's G-Cloud 5. Early in 2016, Bromcom won framework contracts for school MIS with ARK academies, the Harris Federation and the Minerva Learning Trust.
Corporate affairs Bromcom systems for accessing student's data require substantial interoperability directly with the schools Management Information Systems (MIS). Since the 1990s the most widely used MIS in the UK has been Capita's SIMS. In 1999 Capita did not cooperate with Bromcom's request for improved interoperability in order to write back to Capita's database. In light of this, Bromcom took a complaint to the OFT, and sought OFT support to secure this cooperation. The OFT agreed with Bromcom that Capita was obliged to cooperate with Bromcom to provide the necessary interoperability. Capita agreed to cooperate, and provided documentation that allowed the means of writing back to Capita's database which was then a file-sharing database called Clipper. Capita moved over to a Microsoft SQL database in 2003/4, and hence a new arrangement was required to provide interoperability. Bromcom charged Capita with abuse of its dominant position by its offer of a new interface "at an unreasonable price and on uncompetitive terms", again referring the matter to OFT. The issue was settled through the OFT in May 2003 by Capita providing the required interoperability via a 'voluntary assurance'. In 2005, recognizing the serious issues posed by the overwhelming market dominance of SIMS and the lack of competition, Becta commissioned a report called "Management Information Systems and Value for Money". Becta established a Schools Interoperability Framework (based on the model used in the United States) which education products could easily comply with and interoperate. The director of SIMS, however, claimed that the implementation of these standard interfaces would incur a significant cost to their software. In 2009, Bromcom Computers plc brought a case against Capita to the Office of Fair Trading, alleging that Capita has been abusing its dominant position. Bromcom stated that Capita's charges for contracts and dominance in the UK schools software market has led to schools over paying by £75.4 million over a ten-year period. The complaint to the OFT follows recommendations made in 2005 by Becta's School Management Information Systems and Value for Money report, a number of which remain outstanding. In September 2010, Becta published a report entitled "School management information systems and value for money 2010" which was carried out by Atkins Ltd stating the impact on the MIS marketplace of statutory returns, interoperability approaches and the arrangements for the provision of local authority support. Their findings showed that the school MIS marketplace is "still uncompetitive..., still dominated by a single supplier, and still distorted due to the impact of the statutory returns process which increases costs to schools thus increasing the burdens on local authorities and mitigating in particular against smaller providers." IMLS Framework In March 2012, a new framework agreement was created in a £575m deal with the Department for Education. 18 suppliers have been appointed under the new framework, including Bromcom, Capita, Serco, RM and ScholarPack (Histon House Ltd). The agreement was set up by the Government Procurement Service on behalf of the DfE. Bromcom Computers plc was selected to be part of Lot 1 (officially recognised supplier of Information Management Systems) to schools as well as Lot 2 (officially recognised supplier of Learning (Platform) Services). 
Notes External links Truancy cut by computer revolution — BBC News Software companies of the United Kingdom Companies established in 1986 Companies based in the London Borough of Bromley 1986 establishments in the United Kingdom
2536505
https://en.wikipedia.org/wiki/Operation%3A%20Inner%20Space
Operation: Inner Space
Operation: Inner Space is an action game developed in 1992 and published in 1994 by Software Dynamics for Windows. The player's mission is to enter the computer (represented by "Inner Space") in a spaceship and recover the icons and resources that have been set loose by an invasion, and ultimately to destroy the "Inner Demon". The player interacts with other spacecraft along the way, and can compete in races for icons. Software Dynamics' goals in creating the game were to have players interact with opponents rather than kill them and for players to have personalized worlds. The intention was to be funny, and the game features ships based on animals and military aircraft. The development focus was on gameplay, and the team tried to make the game run like a DOS game. The game includes a "Ship Factory", which allows the player to make and customize ships. Three add-ons were released that added ships and increased the Ship Factory's functionality. The game received highly positive reviews, with reviewers praising its artificial intelligence, replayability, and originality. The game is available for purchase on Software Dynamics's website, where the shareware version is available for download. A sequel, named Operation: Inner Space 2: Lightning was planned, but cancelled. Plot The Inner Demon and its viruses have invaded the player's computer and infected its icons. Then the Inner Demon set up a lair in a black hole which changes position. Guarding this lair are four dragons each holding one of the Inner Demon's powers. Using a spaceship, the player goes into the computer's "Inner Space" to collect the uninfected icons and stop the Inner Demon. Along the way, the player will improve their ship and maintain relationships with the population of Inner Space. Gameplay Each ship belongs to one of eight teams: the Avengers, Pirates, Predators, Enforcers, Renegades, Fuzzy Ones, Knights, and the Speed Demons. Each team has personality and behaviours; for example, the Knights tend to be helpful to their allies, whereas the Pirates enjoy looting other ships. The player can play as any ship from any team, except the Enforcers, who only police the levels as bots. Each team has pre-set relations with the other teams, which are not fixed and can easily change. Certain ships have special attributes, such as a cloak or shield. Once the player has chosen a disk drive to "disinfect" and a ship (and therefore a team) to pilot, a directory to enter must be chosen. Once a directory has been chosen, the player is debriefed on the "wave", which includes which icons are present, which other ships are entering, and what hazards are present. Hazards include viruses, which infect icons, and turrets, which fire lasers or a specific type of weapon when a ship comes into close proximity. Upon entering a wave, the player's primary task is to collect each icon present. Icons are the game's currency used to purchase weapons and upgrades. Icons can be damaged (reducing their value), infected (which eliminates their value and causes the player's ship to temporarily lose control), or destroyed completely. Icons can spawn small ships called defenders to protect themselves. These ships circle around the icon and attack anything which threatens it. Upon capturing an icon, its defenders switch allegiance to the ship which captured it. Fuel tanks are present throughout, which the player must collect regularly to avoid running out of fuel. The player will encounter other ships and interact with them, which can include giving commands. 
Other ships may not be friendly and the player may enter combat. When a ship is destroyed, a resource pack is released containing its icons and a weapon. If the player's ship is destroyed, the game ends. Helping other ships may strengthen relations between their team and the player's team, and attacking will worsen relations. Friendly ships may come to assist if trouble is encountered. During a wave, any ship can call the "ambulance", which offers repair and refuelling services, weapons and upgrades. Weapons available range from missiles utilizing various targeting mechanisms to fireballs. Upgrades include more powerful lasers and stronger armour. The player can purchase a new ship, although their identity and team do not change. The player leaves a wave by flying through an exit gate. Occasionally, a black hole will appear and suck the player into the Inner Demon's realm. Here, the player must attempt to rob the Inner Demon of its powers. If the player manages this, a special weapon (called a "noble weapon") is released. Damage inflicted on the player here is not permanent. The player returns to normal upon exiting, which happens when a noble weapon is collected or the ship is destroyed. Once all four noble weapons are obtained, the Inner Demon can be challenged. If the player is victorious, the game is won. Inner Space has laws which all ships must obey. Crimes include destroying uninfected icons and attacking an Enforcer. A patrol Enforcer will occasionally enter and look for crime. If a crime occurs, an Intercept Enforcer is called in to arrest the offender. The offender will be handcuffed, taken to the Hall of Justice, charged and punished for the crime(s). Punishments include fines and confiscation of weapons. If the offender resists arrest, the Enforcer will respond accordingly before continuing with the arrest. In serious cases, a Terminator Enforcer may be summoned to destroy the offender. Directories will occasionally, at random, become races (the directory where the game is installed is always a race directory). This means the player must race on one of three courses to win the icons. The icons are awarded to the player for a first-place win. Stealing icons is a crime and the thief will be arrested immediately upon doing so. There are duel directories, where the player is locked into a duel with other ships, and must either win or be destroyed. The game can be played in either Action mode, where the player handles all the controls manually, or in Strategy mode, where the player gives commands to the ship and the finer details are handled automatically. For example, the player can command the ship to attack a target, but cannot choose which weapons to use, or when or how they are used. Development Operation: Inner Space was developed in 1992 for Microsoft Windows 3.1. Software Dynamics aimed to have the game running at 36 frames per second on 16 MHz 386 PCs, and to develop a native Windows game that rivalled the performance of DOS games. The idea to develop the game came about because Software Dynamics wanted to develop something new after developing screensavers. The development team did not want the game to be like others where the objective is to kill everyone, but rather wanted players to play with the opponents instead. The team aimed to create an artificial intelligence that felt like real opponents. Another aspect they wanted to achieve was personalized worlds for each user. This was believed to be important because they thought consumers liked things customized just for them.
Furthering this goal was the creation of the Ship Factory to enable players to create ships, which Bill Stewart, the game's director, stated was not easy. It was felt that the game should be funny, and to that end, the team "couldn't resist going wild" and created ships such as ducks, tigers, and Mig 29s, and made the fuel tanks change into items such as tea cups at certain times of day. Another goal the team had was to make players think of the game objects as real by giving them as many characteristics as they could, such as rocks heating up when shot and doing more damage if a collision occurs when heated. The team also wanted the sound and the graphics to be interesting to the player, and spent "endless hours" perfecting the voice overs to present useful information in such a way. According to Stewart, the team spent "thousands of hours designing, programming, and debugging", and they did not know if anybody was going to like it because it "ventured into uncharted territory". Three add-ons were released; a Ship Builder's Kit, which enhanced the Ship Factory and enables the player to import ships. This is required for the other two add-ons; the Military and Nations of The World ship sets, which add military ships such as tanks and World War I and World War II planes, and ships based on various nation's flags respectively. These add-ons were combined and released as "The Works!", which includes sound and voice packages that are available separately. Operation: Inner Space 2: Lightning had been a planned sequel, but the publisher cancelled development. Software Dynamics, now known as Dynamic Karma, has stated it has no intentions of further game development. Reception Operation: Inner Space received critical acclaim. Gamer's Ledge lauded the game, and described it as "An excellent example for other developers to emulate". Maximize Magazine said the game rivalled the best Macintosh games. PC Multimedia & Entertainment Magazine praised the artificial intelligence, and reviewer Michael Bendner was impressed that the game could fit onto one floppy disk. The game made it into the finalists for Ziff Davis's 1995 Shareware Awards, where it was called "The best of a new breed of native Windows games". Happy Puppy Games praised the game's originality, describing the use of the hard disk contents as game elements as "simply brilliant". Windows Magazine called the gameplay "unique". The Shareware Shop complimented its "Excellent sound and graphics". Midnight Publications complimented the addictiveness. Dan Nguyen of Games Domain commended the game, describing it as "awesome", and was also impressed with the game's small size on disk. Computer life commented that "Inner Space encourages and empowers your creativity!". Windows 95 uncut called it "Simply the best". Computer Life UK considered the game "an animated, audio-filled blast". PC Direct called the game "the arcade shoot 'em-up with attitude". Chuck Miller of Computer Gaming World commented that the game "offers an entertaining twist to a classic game idea", and praised the humour, although he criticized the design of the interface, saying it lacked the sophistication of the rest of the game. The game was not without its critics. Jason Bednarik of World Village called the game "repetitious", further stating that the icon collecting and challenging the Inner Demon made it more tedious. He concluded by calling the game a remake of Asteroids, commenting that it is not worth the price. 
References Sources External links Official website 1994 video games Action video games Video games developed in Canada Windows games Windows-only games
26246008
https://en.wikipedia.org/wiki/VAW-112
VAW-112
Carrier Airborne Early Warning Squadron 112 (VAW-112) is an inactive United States Navy squadron. It was nicknamed the "Golden Hawks". VAW-112 flew the E-2C Hawkeye out of NAS Point Mugu and last deployed as part of Carrier Air Wing 9 (CVW-9) on board . Squadron History 1960s – 1980s The squadron was established 20 April 1967 and assigned to CVW-9. The squadron made three combat deployments operating the E-2A Hawkeye in the Western Pacific in support of the Vietnam War aboard . In May 1970, the squadron was temporarily disestablished and placed in "stand down" status until reactivated on 3 July 1973. The squadron, now flying E-2Bs, were assigned to Carrier Air Wing 2 (CVW-2) and made three Western Pacific/Indian Ocean deployments on board , before assignment to Carrier Air Wing 8 (CVW-8) aboard , for a Mediterranean and Indian Ocean deployment. In May 1979, the squadron transitioned to the E-2C and again became part of CVW-9 in February 1981. As part of CVW-9, VAW-112 made three Western Pacific/Indian Ocean deployments on board , USS Ranger and . During this period, VAW-112 was awarded the Battle Efficiency (Battle "E") award for 1979 and 1985. During 1989, VAW-112 deployed aboard USS Nimitz for NORPAC '89, and in August 1989, they became the first West Coast squadron to transition to the E-2C Plus aircraft. 1990s In February and March 1990, VAW-112 deployed aboard USS Constellation for an "Around the Horn" of South America to Norfolk, Virginia cruise. Then in September 1990, the squadron deployed to Howard Air Force Base, Panama for a Joint Task Force 4 counter-narcotics operation. The squadron finished the year and entered 1991 with the CVW-9 workup schedule on board USS Nimitz. In March 1991, the squadron departed for the Western Pacific, Indian Ocean, Northern Persian Gulf cruise in support of Operation Desert Storm aboard USS Nimitz. In December 1991, VAW-112 deployed again to Howard Air Force Base for another Joint Task Force Four counter-narcotics operation. The squadron participated in joint and combined exercises in 1992 including Roving Sands in May 1992. In February 1993, VAW-112 deployed aboard USS Nimitz to the Persian Gulf in support of Operation Southern Watch, flying more than 1,000 hours. Upon returning, VAW-112 transitioned to the E-2C Plus Group II. In November 1993, VAW-112 deployed to NS Guantanamo Bay. In 1994 VAW-112 made numerous detachments while the Nimitz was in dry dock. These included Red Air and Red Flag exercises during February; JADO/JEZ trials in March; Roving Sands and Maple Flag at CFB Cold Lake, Canada in June; and another Joint Task Force Four counter-narcotics operation detachment in August. Following a work-up cycle in 1995, the squadron departed San Diego for the Persian Gulf aboard USS Nimitz in December. After remaining on station for three months, VAW-112 departed the Persian Gulf to deploy off the coast of Taiwan during the Third Taiwan Strait Crisis. After returning home in May, the squadron then headed for Puerto Rico in mid-July for counter-narcotics operations at NS Roosevelt Roads. During a 1997 work-up cycle for an "Around the World" deployment in late July, the squadron participated in a Pacific Fleet Surge Exercise. The squadron provided battlespace command and control to the battle group for more than 96 continuous hours. During this time VAW-112 surpassed a safety milestone – 27 years and more than 57,000 mishaps-free flight hours. They departed San Diego in September 1997 on another "Around the World" deployment. 
In 1997, the squadron were presented the Battle "E", the CNO Safety "S" Award, and the coveted Airborne Early Warning Excellence Award. The squadron deployed in July 1998 for a short detachment to Hawaii aboard USS Kitty Hawk and later transferred the newest E-2C Plus Group II Navigation Upgrade aircraft to VAW-115 home based at Naval Air Facility Atsugi, Japan. The squadron moved from NAS Miramar to NAS Point Mugu in July 1998. 2000s Following a work-up period in 1999, the Golden Hawks deployed aboard in January 2000 for a Western Pacific/Indian Ocean cruise that included flight operations in support of Operation Southern Watch over Iraq. The squadron wrapped up 2000 with counter-narcotics operations in Puerto Rico in September and a carrier qualification detachment to Mazatlán, Mexico in December. In 2001, the squadron executed several aircraft control detachments including detachments to NAS Key West; NS Norfolk and NAS Fallon. While still continuing their workup cycle leading to a 2002 deployment, the squadron also participated in Fleet Battle Experiment India, providing air control services to the battle group participating in the highest profile Navy exercise in many years. After the September 11 attacks the squadron stood alerts and flew combat missions for the air defense of the entire western coast of the U.S. in support of Operation Noble Eagle. Immediately following their actions in Operation Noble Eagle, the squadron left for Air Wing Fallon in Fallon, Nevada. The squadron finished an accelerated training schedule and deployed two months early in mid-November 2001 along with the rest of CVW-9 aboard USS John C. Stennis. After an expedited transit across the Pacific the squadron commenced combat operations over Afghanistan in mid-December. The squadron accumulated over 2,095 hours, 500 sorties, and logging 666 arrested landings in support of Operation Enduring Freedom. The squadron returned home to NAS Point Mugu at the end of May, 2002.Upon returning home, VAW-112 completed training to transition to the Mission Computer Upgrade and Advanced Controller-Indicator Set (MCU/ACIS) Navigation Upgrade version of the E-2C Plus. This new version of the Hawkeye featured new display scopes and interfaces for aircraft controllers and mission commanders, along with a new, more powerful mission computer. In addition, the aircraft's navigation system is significantly more reliable. In October 2002, the squadron commenced an unannounced, compressed inter-deployment turnaround cycle and left for NAS Fallon, Nevada to complete both Strike Fighter Advance Readiness Program (SFARP) and Air Wing 9 Fallon Det in a record span of three weeks. The squadron returned home for three weeks and readied themselves for COMPTUEX PLUS on board . In January 2003 the squadron deployed to the Western Pacific on board USS Carl Vinson, seven months ahead of schedule. During the WestPac 2003 cruise USS Carl Vinson visited Hawaii, Guam, Pusan, South Korea, Japan, Singapore, Perth, and Hong Kong. When CVW-9 returned home in November 2003, it had been deployed, embarked, or detached for twenty-one of the previous twenty-seven months making the air wing the most deployed Naval Aviation unit since the September 11 attacks. In January 2004, the squadron departed once more on USS Carl Vinson for a three-week Tailored Ships Training Availability (TSTA) exercise. This was repeated again in June 2004 and served as the beginning of the next workup cycle in preparation for deploying in support of Operation Iraqi Freedom (OIF). 
Following TSTA the squadron again detached to NAS Fallon, Nevada on two different occasions – first for three weeks to complete SFARP and again two months later for Air Wing Fallon for four weeks. The workup cycle also included a three-week return to USS Carl Vinson for the carrier's Composite Training Unit Exercise (COMPTUEX) which brought the entire strike group together into one cohesive fighting unit in preparation for actual combat operations. In January 2005, VAW-112 prepared for an "Around the World" deployment on board USS Carl Vinson and after a three-week Joint Task Force Exercise (JTFEX), headed west in support of OIF. After port calls in Guam and Singapore, USS Carl Vinson and CVW-9 arrived in the Persian Gulf where VAW-112 immediately began flying missions over Iraq. The squadron served as an airborne battlefield communications relay for the troops and convoys in country. VAW-112 carried out over 480 sorties, accumulating nearly 1,500 hours with a 98 percent sortie completion rate. The squadron returned from their deployment in August 2005 and VAW-112 and CVW-9 transferred to USS John C. Stennis as USS Carl Vinson entered a complex overhaul cycle at Newport News, Virginia. In November 2005, the squadron became the first squadron on the West Coast to incorporate the NP2000 eight blade modification for its propellers. In April 2006 the squadron began work-ups for their scheduled 2007 deployment. Beginning with the Hawkeye Advance Readiness Program (HARP) in preparation for that May's Strike Fighter Advance Readiness Program (SFARP), at NAS Fallon. The squadron returned to sea that June on board USS John C. Stennis for Tailored Ships Training Availability (TSTA). The squadron returned to NAS Fallon in August to complete Air Wing Fallon. In September 2006, the squadron modified their aircraft to incorporate both the Automatic Identification System (AIS) and Intra Battle Group Wireless Network (IBGWN), which they first employed during COMPTUEX. Following CSG-3's Joint Task Force Exercise (JTFEX), the squadron returned to NAS Point Mugu for the holidays and prepared for their upcoming January deployment. In January, 2007, VAW-112 again deployed in support of Operation Iraqi Freedom and Operation Enduring Freedom. The squadron served as an airborne battlefield communications platform coordinating close air support and tanking missions. VAW-112 carried out over 950 sorties, accumulating over 1,800 hours with a 98 percent sortie completion rate. After five months in the Northern Arabian Sea and Persian Gulf, the squadron returned to NAS Point Mugu ending a seven-month deployment. For their efforts in the air and on the ground, VAW-112 was awarded the Battle Efficiency Award or ‘Battle E’ from Commander, Naval Air Forces, U.S. Pacific Fleet. Upon return from deployment, VAW-112 received four new Hawkeye 2000 aircraft. This platform incorporates new electronic and flight systems increasing the Golden Hawk's ability to provide accurate and timely airborne command and control. 2010s On 18 December 2011, the final command and control mission for U.S. forces over Iraq was flown by a squadron E-2C Hawkeye, operating from USS John C. Stennis effectively ending U.S. naval support for Operation New Dawn. After returning from cruise, the squadron was informed that as part of the surge carrier, they would begin a condensed work-up cycle for an eight-month deployment, again in support of Operation Enduring Freedom. 
After two weeks of Air Wing Fallon and two weeks of Sustainment Exercise (SUSTEX), VAW-112 set sail in September 2012. After supporting OEF and Operation Spartan Shield, the squadron returned in May 2013. Since May 2013 all VAW-112 aircraft have undergone a transition to the Communications, Navigation, Surveillance/Air Traffic Management (CNS/ATM) system, in addition to a weapons system software upgrade. In February 2016, it was reported that it was planned to deactivate VAW-112 in FY 2017. The squadron was deactivated on 31 May 2017 See also History of the United States Navy List of United States Navy aircraft squadrons References External links VAW-112's official website Early warning squadrons of the United States Navy
25717
https://en.wikipedia.org/wiki/Regular%20expression
Regular expression
A regular expression (shortened as regex or regexp; also referred to as rational expression) is a sequence of characters that specifies a search pattern in text. Usually such patterns are used by string-searching algorithms for "find" or "find and replace" operations on strings, or for input validation. It is a technique developed in theoretical computer science and formal language theory. The concept of regular expressions began in the 1950s, when the American mathematician Stephen Cole Kleene formalized the description of a regular language. They came into common use with Unix text-processing utilities. Different syntaxes for writing regular expressions have existed since the 1980s, one being the POSIX standard and another, widely used, being the Perl syntax. Regular expressions are used in search engines, search and replace dialogs of word processors and text editors, in text processing utilities such as sed and AWK and in lexical analysis. Many programming languages provide regex capabilities either built-in or via libraries, as it has uses in many situations. History Regular expressions originated in 1951, when mathematician Stephen Cole Kleene described regular languages using his mathematical notation called regular events. These arose in theoretical computer science, in the subfields of automata theory (models of computation) and the description and classification of formal languages. Other early implementations of pattern matching include the SNOBOL language, which did not use regular expressions, but instead its own pattern matching constructs. Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor and lexical analysis in a compiler. Among the first appearances of regular expressions in program form was when Ken Thompson built Kleene's notation into the editor QED as a means to match patterns in text files. For speed, Thompson implemented regular expression matching by just-in-time compilation (JIT) to IBM 7094 code on the Compatible Time-Sharing System, an important early example of JIT compilation. He later added this capability to the Unix editor ed, which eventually led to the popular search tool grep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor: g/re/p meaning "Global search for Regular Expression and Print matching lines"). Around the same time when Thompson developed QED, a group of researchers including Douglas T. Ross implemented a tool based on regular expressions that is used for lexical analysis in compiler design. Many variations of these original forms of regular expressions were used in Unix programs at Bell Labs in the 1970s, including vi, lex, sed, AWK, and expr, and in other programs such as Emacs. Regexes were subsequently adopted by a wide range of programs, with these early forms standardized in the POSIX.2 standard in 1992. In the 1980s the more complicated regexes arose in Perl, which originally derived from a regex library written by Henry Spencer (1986), who later wrote an implementation of Advanced Regular Expressions for Tcl. The Tcl library is a hybrid NFA/DFA implementation with improved performance characteristics. Software projects that have adopted Spencer's Tcl regular expression implementation include PostgreSQL. Perl later expanded on Spencer's original library to add many new features. 
Part of the effort in the design of Raku (formerly named Perl 6) is to improve Perl's regex integration, and to increase their scope and capabilities to allow the definition of parsing expression grammars. The result is a mini-language called Raku rules, which are used to define Raku grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regexes, but also allow BNF-style definition of a recursive descent parser via sub-rules. The use of regexes in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards like ISO SGML (precursored by ANSI "GCA 101-1983") consolidated. The kernel of the structure specification language standards consists of regexes. Its use is evident in the DTD element group syntax. Prior to the use of regular expressions, many search languages allowed simple wildcards, for example "*" to match any sequence of characters, and "?" to match a single character. Relics of this can be found today in the glob syntax for filenames, and in the SQL LIKE operator. Starting in 1997, Philip Hazel developed PCRE (Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regex functionality and is used by many modern tools including PHP and Apache HTTP Server. Today, regexes are widely supported in programming languages, text processing programs (particularly lexers), advanced text editors, and some other programs. Regex support is part of the standard library of many programming languages, including Java and Python, and is built into the syntax of others, including Perl and ECMAScript. Implementations of regex functionality is often called a regex engine, and a number of libraries are available for reuse. In the late 2010s, several companies started to offer hardware, FPGA, GPU implementations of PCRE compatible regex engines that are faster compared to CPU implementations. Patterns The phrase regular expressions, or regexes, is often used to mean the specific, standard textual syntax for representing patterns for matching text, as distinct from the mathematical notation described below. Each character in a regular expression (that is, each character in the string describing its pattern) is either a metacharacter, having a special meaning, or a regular character that has a literal meaning. For example, in the regex b., 'b' is a literal character that matches just 'b', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'b%', or 'bx', or 'b5'. Together, metacharacters and literal characters can be used to identify text of a given pattern or process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example, . is a very general pattern, [a-z] (match all lower case letters from 'a' to 'z') is less general and b is a precise pattern (matches just 'b'). The metacharacter syntax is designed specifically to represent prescribed targets in a concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standard ASCII keyboard. A very simple case of a regular expression in this syntax is to locate a word spelled two different ways in a text editor, the regular expression seriali[sz]e matches both "serialise" and "serialize". 
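In a language with built-in regex support these examples can be tried directly; for instance, a brief sketch using Python's re module (the word list is made up for illustration):

```python
import re

pattern = re.compile(r"seriali[sz]e")  # [sz] matches either 's' or 'z'

for word in ["serialise", "serialize", "serialist"]:
    if pattern.fullmatch(word):
        print(word, "matches")
    else:
        print(word, "does not match")

# The dot metacharacter matches any character except a newline:
print(bool(re.fullmatch(r"b.", "b%")))   # True
print(bool(re.fullmatch(r"b.", "b\n")))  # False: the dot does not match a newline
```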
Wildcard characters also achieve this, but are more limited in what they can pattern, as they have fewer metacharacters and a simple language-base. The usual context of wildcard characters is in globbing similar names in a list of files, whereas regexes are usually employed in applications that pattern-match text strings in general. For example, the regex ^[ \t]+|[ \t]+$ matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is [+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?. A regex processor translates a regular expression in the above syntax into an internal representation that can be executed and matched against a string representing the text being searched in. One possible approach is Thompson's construction algorithm to construct a nondeterministic finite automaton (NFA), which is then made deterministic and the resulting deterministic finite automaton (DFA) is run on the target text string to recognize substrings that match the regular expression. The picture shows the NFA scheme N(s*) obtained from the regular expression s*, where s denotes a simpler regular expression in turn, which has already been recursively translated to the NFA N(s). Basic concepts A regular expression, often called a pattern, specifies a set of strings required for a particular purpose. A simple way to specify a finite set of strings is to list its elements or members. However, there are often more concise ways: for example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the pattern H(ä|ae?)ndel; we say that this pattern matches each of the three strings. However, there can be many ways to write a regular expression for the same set of strings: for example, (Hän|Han|Haen)del also specifies the same set of three strings in this example. Most formalisms provide the following operations to construct regular expressions. Boolean "or" A vertical bar separates alternatives. For example, gray|grey can match "gray" or "grey". Grouping Parentheses are used to define the scope and precedence of the operators (among other uses). For example, gray|grey and gr(a|e)y are equivalent patterns which both describe the set of "gray" or "grey". Quantification A quantifier after an element (such as a token, character, or group) specifies how many times the preceding element is allowed to repeat. The most common quantifiers are the question mark ?, the asterisk * (derived from the Kleene star), and the plus sign + (Kleene plus). ? The question mark indicates zero or one occurrences of the preceding element. For example, colou?r matches both "color" and "colour". * The asterisk indicates zero or more occurrences of the preceding element. For example, ab*c matches "ac", "abc", "abbc", "abbbc", and so on. + The plus sign indicates one or more occurrences of the preceding element. For example, ab+c matches "abc", "abbc", "abbbc", and so on, but not "ac". {n} The preceding item is matched exactly n times. {min,} The preceding item is matched min or more times. {,max} The preceding item is matched up to max times. {min,max} The preceding item is matched at least min times, but not more than max times. Wildcard The wildcard . matches any character.
For example, a.b matches any string that contains an "a", and then any character and then "b"; and a.*b matches any string that contains an "a", and then the character "b" at some later point. These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷. For example, H(ae?|ä)ndel and are both valid patterns which match the same strings as the earlier example, H(ä|ae?)ndel. The precise syntax for regular expressions varies among tools and with context; more detail is given in . Formal language theory Regular expressions describe regular languages in formal language theory. They have the same expressive power as regular grammars. Formal definition Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory. Given a finite alphabet Σ, the following constants are defined as regular expressions: (empty set) ∅ denoting the set ∅. (empty string) ε denoting the set containing only the "empty" string, which has no characters at all. (literal character) a in Σ denoting the set containing only the character a. Given regular expressions R and S, the following operations over them are defined to produce regular expressions: (concatenation) (RS) denotes the set of strings that can be obtained by concatenating a string accepted by R and a string accepted by S (in that order). For example, let R denote {"ab", "c"} and S denote {"d", "ef"}. Then, (RS) denotes {"abd", "abef", "cd", "cef"}. (alternation) (R|S) denotes the set union of sets described by R and S. For example, if R describes {"ab", "c"} and S describes {"ab", "d", "ef"}, expression (R|S) describes {"ab", "c", "d", "ef"}. (Kleene star) (R*) denotes the smallest superset of the set described by R that contains ε and is closed under string concatenation. This is the set of all strings that can be made by concatenating any finite number (including zero) of strings from the set described by R. For example, if R denotes {"0", "1"}, (R*) denotes the set of all finite binary strings (including the empty string). If R denotes {"ab", "c"}, (R*) denotes {ε, "ab", "c", "abab", "abc", "cab", "cc", "ababab", "abcab", ... }. To avoid parentheses it is assumed that the Kleene star has the highest priority, then concatenation and then alternation. If there is no ambiguity then parentheses may be omitted. For example, (ab)c can be written as abc, and a|(b(c*)) can be written as a|bc*. Many textbooks use the symbols ∪, +, or ∨ for alternation instead of the vertical bar. Examples: a|b* denotes {ε, "a", "b", "bb", "bbb", ...} (a|b)* denotes the set of all strings with no symbols other than "a" and "b", including the empty string: {ε, "a", "b", "aa", "ab", "ba", "bb", "aaa", ...} ab*(c|ε) denotes the set of strings starting with "a", then zero or more "b"s and finally optionally a "c": {"a", "ac", "ab", "abc", "abb", "abbc", ...} (0|(1(01*0)*1))* denotes the set of binary numbers that are multiples of 3: { ε, "0", "00", "11", "000", "011", "110", "0000", "0011", "0110", "1001", "1100", "1111", "00000", ... } Expressive power and compactness The formal definition of regular expressions is minimal on purpose, and avoids defining ? and +—these can be expressed as follows: a+ = aa*, and a? = (a|ε). 
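These equivalences, and the multiples-of-three example above, can be checked mechanically. A short sketch using Python's re module follows; since the practical syntax has no explicit ε, the empty string is written as an empty alternative, and the checks run over a small sample rather than constituting a proof:

```python
import re

# a+ is equivalent to aa*, and a? to (a|ε); ε is written here as an empty branch.
samples = ["", "a", "aa", "aaa", "b"]
for s in samples:
    assert bool(re.fullmatch(r"a+", s)) == bool(re.fullmatch(r"aa*", s))
    assert bool(re.fullmatch(r"a?", s)) == bool(re.fullmatch(r"(?:a|)", s))

# The expression (0|(1(01*0)*1))* from the examples matches exactly the
# binary representations of multiples of 3.
multiples_of_three = re.compile(r"(0|(1(01*0)*1))*")
for n in range(64):
    assert bool(multiples_of_three.fullmatch(format(n, "b"))) == (n % 3 == 0)
print("all checks passed")
```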
Sometimes the complement operator is added, to give a generalized regular expression; here Rc matches all strings over Σ* that do not match R. In principle, the complement operator is redundant, because it does not grant any more expressive power. However, it can make a regular expression much more concise—eliminating a single complement operator can cause a double exponential blow-up of its length. Regular expressions in this sense can express the regular languages, exactly the class of languages accepted by deterministic finite automata. There is, however, a significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here is the languages Lk consisting of all strings over the alphabet {a,b} whose kth-from-last letter equals a. On the one hand, a regular expression describing L4 is given by (a|b)*a(a|b)(a|b)(a|b). Generalizing this pattern to Lk gives the expression (a|b)*a(a|b)(a|b)…(a|b), with k-1 trailing copies of (a|b). On the other hand, it is known that every deterministic finite automaton accepting the language Lk must have at least 2^k states. Luckily, there is a simple mapping from regular expressions to the more general nondeterministic finite automata (NFAs) that does not lead to such a blowup in size; for this reason NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3 grammars of the Chomsky hierarchy. In the opposite direction, there are many languages easily described by a DFA that are not easily described by a regular expression. For instance, determining the validity of a given ISBN requires computing the modulus of the integer base 11, and can be easily implemented with an 11-state DFA. However, a regular expression to answer the same problem of divisibility by 11 is at least multiple megabytes in length. Given a regular expression, Thompson's construction algorithm computes an equivalent nondeterministic finite automaton. A conversion in the opposite direction is achieved by Kleene's algorithm. Finally, it is worth noting that many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; rather, they implement regexes. See below for more on this. Deciding equivalence of regular expressions As seen in many of the examples above, there is more than one way to construct a regular expression to achieve the same results. It is possible to write an algorithm that, for two given regular expressions, decides whether the described languages are equal; the algorithm reduces each expression to a minimal deterministic finite state machine, and determines whether they are isomorphic (equivalent). Algebraic laws for regular expressions can be obtained using a method by Gischer which is best explained along an example: In order to check whether (X+Y)* and (X* Y*)* denote the same regular language, for all regular expressions X, Y, it is necessary and sufficient to check whether the particular regular expressions (a+b)* and (a* b*)* denote the same language over the alphabet Σ={a,b}. More generally, an equation E=F between regular-expression terms with variables holds if, and only if, its instantiation with different variables replaced by different symbol constants holds. Every regular expression can be written solely in terms of the Kleene star and set unions. This is a surprisingly difficult problem.
As simple as the regular expressions are, there is no method to systematically rewrite them to some normal form. The lack of axiom in the past led to the star height problem. In 1991, Dexter Kozen axiomatized regular expressions as a Kleene algebra, using equational and Horn clause axioms. Already in 1964, Redko had proved that no finite set of purely equational axioms can characterize the algebra of regular languages. Syntax A regex pattern matches a target string. The pattern is composed of a sequence of atoms. An atom is a single point within the regex pattern which it tries to match to the target string. The simplest atom is a literal, but grouping parts of the pattern to match an atom will require using ( ) as metacharacters. Metacharacters help form: atoms; quantifiers telling how many atoms (and whether it is a greedy quantifier or not); a logical OR character, which offers a set of alternatives, and a logical NOT character, which negates an atom's existence; and backreferences to refer to previous atoms of a completing pattern of atoms. A match is made, not when all the atoms of the string are matched, but rather when all the pattern atoms in the regex have matched. The idea is to make a small pattern of characters stand for a large number of possible strings, rather than compiling a large list of all the literal possibilities. Depending on the regex processor there are about fourteen metacharacters, characters that may or may not have their literal character meaning, depending on context, or whether they are "escaped", i.e. preceded by an escape sequence, in this case, the backslash \. Modern and POSIX extended regexes use metacharacters more often than their literal meaning, so to avoid "backslash-osis" or leaning toothpick syndrome it makes sense to have a metacharacter escape to a literal mode; but starting out, it makes more sense to have the four bracketing metacharacters ( ) and { } be primarily literal, and "escape" this usual meaning to become metacharacters. Common standards implement both. The usual metacharacters are {}[]()^$.|*+? and \. The usual characters that become metacharacters when escaped are dswDSW and N. Delimiters When entering a regex in a programming language, they may be represented as a usual string literal, hence usually quoted; this is common in C, Java, and Python for instance, where the regex re is entered as "re". However, they are often written with slashes as delimiters, as in /re/ for the regex re. This originates in ed, where / is the editor command for searching, and an expression /re/ can be used to specify a range of lines (matching the pattern), which can be combined with other commands on either side, most famously g/re/p as in grep ("global regex print"), which is included in most Unix-based operating systems, such as Linux distributions. A similar convention is used in sed, where search and replace is given by s/re/replacement/ and patterns can be joined with a comma to specify a range of lines as in /re1/,/re2/. This notation is particularly well known due to its use in Perl, where it forms part of the syntax distinct from normal string literals. In some cases, such as sed and Perl, alternative delimiters can be used to avoid collision with contents, and to avoid having to escape occurrences of the delimiter character in the contents. For example, in sed the command s,/,X, will replace a / with an X, using commas as delimiters. 
Standards The IEEE POSIX standard has three sets of compliance: BRE (Basic Regular Expressions), ERE (Extended Regular Expressions), and SRE (Simple Regular Expressions). SRE is deprecated, in favor of BRE, as both provide backward compatibility. The subsection below covering the character classes applies to both BRE and ERE. BRE and ERE work together. ERE adds ?, +, and |, and it removes the need to escape the metacharacters ( ) and { }, which are required in BRE. Furthermore, as long as the POSIX standard syntax for regexes is adhered to, there can be, and often is, additional syntax to serve specific (yet POSIX compliant) applications. Although POSIX.2 leaves some implementation specifics undefined, BRE and ERE provide a "standard" which has since been adopted as the default syntax of many tools, where the choice of BRE or ERE modes is usually a supported option. For example, GNU grep has the following options: "grep -E" for ERE, and "grep -G" for BRE (the default), and "grep -P" for Perl regexes. Perl regexes have become a de facto standard, having a rich and powerful set of atomic expressions. Perl has no "basic" or "extended" levels. As in POSIX EREs, ( ) and { } are treated as metacharacters unless escaped; other metacharacters are known to be literal or symbolic based on context alone. Additional functionality includes lazy matching, backreferences, named capture groups, and recursive patterns. POSIX basic and extended In the POSIX standard, Basic Regular Syntax (BRE) requires that the metacharacters ( ) and { } be designated \(\) and \{\}, whereas Extended Regular Syntax (ERE) does not. Examples: .at matches any three-character string ending with "at", including "hat", "cat", "bat", "4at", "#at" and " at" (starting with a space). [hc]at matches "hat" and "cat". [^b]at matches all strings matched by .at except "bat". [^hc]at matches all strings matched by .at other than "hat" and "cat". ^[hc]at matches "hat" and "cat", but only at the beginning of the string or line. [hc]at$ matches "hat" and "cat", but only at the end of the string or line. \[.\] matches any single character surrounded by "[" and "]" since the brackets are escaped, for example: "[a]", "[b]", "[7]", "[@]", "[]]", and "[ ]" (bracket space bracket). s.* matches s followed by zero or more characters, for example: "s", "saw", "seed", "s3w96.7", and "s6#h%(>>>m n mQ". POSIX extended The meaning of metacharacters escaped with a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example, \( \) is now ( ) and \{ \} is now { }. Additionally, support is removed for \n backreferences and the following metacharacters are added: Examples: [hc]?at matches "at", "hat", and "cat". [hc]*at matches "at", "hat", "cat", "hhat", "chat", "hcat", "cchchat", and so on. [hc]+at matches "hat", "cat", "hhat", "chat", "hcat", "cchchat", and so on, but not "at". cat|dog matches "cat" or "dog". POSIX Extended Regular Expressions can often be used with modern Unix utilities by including the command line flag -E. Character classes The character class is the most basic regex concept after a literal match. It makes one small sequence of characters match a larger set of characters. For example, [A-Z] could stand for any uppercase letter in the English alphabet, and \d could mean any digit. Character classes apply to both POSIX levels. When specifying a range of characters, such as [a-Z] (i.e. 
lowercase a to uppercase Z), the computer's locale settings determine the contents by the numeric ordering of the character encoding. They could store digits in that sequence, or the ordering could be abc…zABC…Z, or aAbBcC…zZ. So the POSIX standard defines a character class, which will be known by the regex processor installed. Those definitions are in the following table: POSIX character classes can only be used within bracket expressions. For example, [[:upper:]ab] matches the uppercase letters and lowercase "a" and "b". An additional non-POSIX class understood by some tools is [:word:], which is usually defined as [:alnum:] plus underscore. This reflects the fact that in many programming languages these are the characters that may be used in identifiers. The editor Vim further distinguishes word and word-head classes (using the notation \w and \h) since in many programming languages the characters that can begin an identifier are not the same as those that can occur in other positions: numbers are generally excluded, so an identifier would look like \h\w* or [[:alpha:]_][[:alnum:]_]* in POSIX notation. Note that what the POSIX regex standards call character classes are commonly referred to as POSIX character classes in other regex flavors which support them. With most other regex flavors, the term character class is used to describe what POSIX calls bracket expressions. Perl and PCRE Because of its expressive power and (relative) ease of reading, many other utilities and programming languages have adopted syntax similar to Perl's — for example, Java, JavaScript, Julia, Python, Ruby, Qt, Microsoft's .NET Framework, and XML Schema. Some languages and tools such as Boost and PHP support multiple regex flavors. Perl-derivative regex implementations are not identical and usually implement a subset of features found in Perl 5.0, released in 1994. Perl sometimes does incorporate features initially found in other languages. For example, Perl 5.10 implements syntactic extensions originally developed in PCRE and Python. Lazy matching In Python and some other implementations (e.g. Java), the three common quantifiers (*, + and ?) are greedy by default because they match as many characters as possible. The regex ".+" (including the double-quotes) applied to the string "Ganymede," he continued, "is the largest moon in the Solar System." matches the entire line (because the entire line begins and ends with a double-quote) instead of matching only the first part, "Ganymede,". The aforementioned quantifiers may, however, be made lazy or minimal or reluctant, matching as few characters as possible, by appending a question mark: ".+?" matches only "Ganymede,". However, the whole sentence can still be matched in some circumstances. The question-mark operator does not change the meaning of the dot operator, so this still can match the double-quotes in the input. A pattern like ".*?" EOF will still match the whole input if this is the string: "Ganymede," he continued, "is the largest moon in the Solar System." EOF To ensure that the double-quotes cannot be part of the match, the dot has to be replaced (e.g. "[^"]*"). This will match a quoted text part without additional double-quotes in it. (By removing the possibility of matching the fixed suffix, i.e. ", this has also transformed the lazy-match to a greedy-match, so the ? is no longer needed.) 
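The greedy/lazy behaviour described above can be reproduced directly; here is a minimal sketch using Python's re module, whose quantifiers follow the same Perl-style rules.

```python
import re

line = '"Ganymede," he continued, "is the largest moon in the Solar System."'

# Greedy: ".+" runs from the first double-quote to the last one on the line.
print(re.search(r'".+"', line).group())
# -> "Ganymede," he continued, "is the largest moon in the Solar System."

# Lazy: ".+?" stops at the first closing double-quote.
print(re.search(r'".+?"', line).group())
# -> "Ganymede,"

# Negated character class: excludes inner quotes, so laziness is not needed.
print(re.search(r'"[^"]*"', line).group())
# -> "Ganymede,"
```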
Possessive matching In Java, quantifiers may be made possessive by appending a plus sign, which disables backing off (in a backtracking engine), even if doing so would allow the overall match to succeed: While the regex ".*" applied to the string "Ganymede," he continued, "is the largest moon in the Solar System." matches the entire line, the regex ".*+" does not, because .*+ consumes the entire input, including the final ". Thus, possessive quantifiers are most useful with negated character classes, e.g. "[^"]*+", which matches "Ganymede," when applied to the same string. Another common extension serving the same function is atomic grouping, which disables backtracking for a parenthesized group. The typical syntax is (?>group). For example, while matches both and , only matches because the engine is forbidden from backtracking and so cannot try setting the group to "w". Possessive quantifiers are easier to implement than greedy and lazy quantifiers, and are typically more efficient at runtime. Patterns for non-regular languages Many features found in virtually all modern regular expression libraries provide an expressive power that exceeds the regular languages. For example, many implementations allow grouping subexpressions with parentheses and recalling the value they match in the same expression (backreferences). This means that, among other things, a pattern can match strings of repeated words like "papa" or "WikiWiki", called squares in formal language theory. The pattern for these strings is (.+)\1. The language of squares is not regular, nor is it context-free, due to the pumping lemma. However, pattern matching with an unbounded number of backreferences, as supported by numerous modern tools, is still context sensitive. The general problem of matching any number of backreferences is NP-complete, the runtime growing exponentially with the number of backreference groups used. However, many tools, libraries, and engines that provide such constructions still use the term regular expression for their patterns. This has led to a nomenclature where the term regular expression has different meanings in formal language theory and pattern matching. For this reason, some people have taken to using the term regex, regexp, or simply pattern to describe the latter. Larry Wall, author of the Perl programming language, addresses this distinction in an essay about the design of Raku. Other features not found in descriptions of regular languages include assertions. These include the ubiquitous ^ and $, as well as some more sophisticated extensions like lookaround. They define the surrounding of a match and don't spill into the match itself, a feature only relevant for the use case of string searching. Some of them can be simulated in a regular language by treating the surroundings as a part of the language as well. Implementations and running times There are at least three different algorithms that decide whether and how a given regex matches a string. The oldest and fastest relies on a result in formal language theory that allows every nondeterministic finite automaton (NFA) to be transformed into a deterministic finite automaton (DFA). The DFA can be constructed explicitly and then run on the resulting input string one symbol at a time. Constructing the DFA for a regular expression of size m has the time and memory cost of O(2^m), but it can be run on a string of size n in time O(n). Note that the size of the expression is the size after abbreviations, such as numeric quantifiers, have been expanded.
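Both the Lk state blow-up mentioned earlier and the subset (powerset) construction underlying the explicit-DFA approach just described can be illustrated with a short sketch. The NFA below is hand-coded rather than produced from a regular expression by Thompson's construction, and Python is assumed only for consistency with the other examples here.

```python
def dfa_state_count_for_Lk(k):
    """Subset-construct a DFA from the obvious (k+1)-state NFA for L_k
    (strings over {a,b} whose k-th-from-last letter is 'a') and count
    the reachable DFA states."""
    def step(states, symbol):
        nxt = set()
        for q in states:
            if q == 0:                 # initial state: loop on a/b, guess on 'a'
                nxt.add(0)
                if symbol == 'a':
                    nxt.add(1)
            elif q < k:                # pass over k-1 further letters of any kind
                nxt.add(q + 1)
        return frozenset(nxt)          # state k (accepting) has no outgoing edges

    start = frozenset({0})
    seen, frontier = {start}, [start]
    while frontier:
        subset = frontier.pop()
        for symbol in 'ab':
            nxt = step(subset, symbol)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

for k in range(1, 6):
    print(k, dfa_state_count_for_Lk(k))   # 2, 4, 8, 16, 32 -- i.e. 2**k states
```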
An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step. This keeps the DFA implicit and avoids the exponential construction cost, but running cost rises to O(mn). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. Adding caching to the NFA algorithm is often called the "lazy DFA" algorithm, or just the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky. Modern implementations include the re1-re2-sregex family based on Cox's code. The third algorithm is to match the pattern against the input string by backtracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions like that contain both alternation and unbounded quantification and force the algorithm to consider an exponentially increasing number of sub-cases. This behavior can cause a security problem called Regular expression Denial of Service (ReDoS). Although backtracking implementations only give an exponential guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation which allows the use of backreferences, or implements the various extensions introduced by Perl, must include some kind of backtracking. Some implementations try to provide the best of both algorithms by first running a fast DFA algorithm, and revert to a potentially slower backtracking algorithm only when a backreference is encountered during the match. GNU grep (and the underlying gnulib DFA) uses such a strategy. Sublinear runtime algorithms have been achieved using Boyer-Moore (BM) based algorithms and related DFA optimization techniques such as the reverse scan. GNU grep, which supports a wide variety of POSIX syntaxes and extensions, uses BM for a first-pass prefiltering, and then uses an implicit DFA. Wu agrep, which implements approximate matching, combines the prefiltering into the DFA in BDM (backward DAWG matching). NR-grep's BNDM extends the BDM technique with Shift-Or bit-level parallelism. A few theoretical alternatives to backtracking for backreferences exist, and their "exponents" are tamer in that they are only related to the number of backreferences, a fixed property of some regexp languages such as POSIX. One naive method that duplicates a non-backtracking NFA for each backreference note has a complexity of time and space for a haystack of length n and k backreferences in the RegExp. A very recent theoretical work based on memory automata gives a tighter bound based on "active" variable nodes used, and a polynomial possibility for some backreferenced regexps. Unicode In theoretical terms, any token set can be matched by regular expressions as long as it is pre-defined. In terms of historical implementations, regexes were originally written to use ASCII characters as their token set though regex libraries have supported numerous other character sets. Many modern regex engines offer at least some support for Unicode. In most respects it makes no difference what the character set is, but some issues do arise when extending regexes to support Unicode. Supported encoding. Some regex libraries expect to work on some particular encoding instead of on abstract Unicode characters. 
Many of these require the UTF-8 encoding, while others might expect UTF-16, or UTF-32. In contrast, Perl and Java are agnostic on encodings, instead operating on decoded characters internally. Supported Unicode range. Many regex engines support only the Basic Multilingual Plane, that is, the characters which can be encoded with only 16 bits. Currently (as of 2016) only a few regex engines (e.g., Perl's and Java's) can handle the full 21-bit Unicode range. Extending ASCII-oriented constructs to Unicode. For example, in ASCII-based implementations, character ranges of the form [x-y] are valid wherever x and y have code points in the range [0x00,0x7F] and codepoint(x) ≤ codepoint(y). The natural extension of such character ranges to Unicode would simply change the requirement that the endpoints lie in [0x00,0x7F] to the requirement that they lie in [0x0000,0x10FFFF]. However, in practice this is often not the case. Some implementations, such as that of gawk, do not allow character ranges to cross Unicode blocks. A range like [0x61,0x7F] is valid since both endpoints fall within the Basic Latin block, as is [0x0530,0x0560] since both endpoints fall within the Armenian block, but a range like [0x0061,0x0532] is invalid since it includes multiple Unicode blocks. Other engines, such as that of the Vim editor, allow block-crossing but the character values must not be more than 256 apart. Case insensitivity. Some case-insensitivity flags affect only the ASCII characters. Other flags affect all characters. Some engines have two different flags, one for ASCII, the other for Unicode. Exactly which characters belong to the POSIX classes also varies. Cousins of case insensitivity. As ASCII has case distinction, case insensitivity became a logical feature in text searching. Unicode introduced alphabetic scripts without case like Devanagari. For these, case sensitivity is not applicable. For scripts like Chinese, another distinction seems logical: between traditional and simplified. In Arabic scripts, insensitivity to initial, medial, final, and isolated position may be desired. In Japanese, insensitivity between hiragana and katakana is sometimes useful. Normalization. Unicode has combining characters. Like old typewriters, plain base characters (white spaces, punctuations, symbols, digits, or letters) can be followed by one or more non-spacing symbols (usually diacritics, like accent marks modifying letters) to form a single printable character; but Unicode also provides a limited set of precomposed characters, i.e. characters that already include one or more combining characters. A sequence of a base character + combining characters should be matched with the identical single precomposed character (only some of these combining sequences can be precomposed into a single Unicode character, but infinitely many other combining sequences are possible in Unicode, and needed for various languages, using one or more combining characters after an initial base character; these combining sequences may include a base character or combining characters partially precomposed, but not necessarily in canonical order and not necessarily using the canonical precompositions). The process of standardizing sequences of a base character + combining characters by decomposing these canonically equivalent sequences, before reordering them into canonical order (and optionally recomposing some combining characters into the leading base character) is called normalization. New control codes. 
Unicode introduced amongst others, byte order marks and text direction markers. These codes might have to be dealt with in a special way. Introduction of character classes for Unicode blocks, scripts, and numerous other character properties. Block properties are much less useful than script properties, because a block can have code points from several different scripts, and a script can have code points from several different blocks. In Perl and the library, properties of the form \p{InX} or \p{Block=X} match characters in block X and \P{InX} or \P{Block=X} matches code points not in that block. Similarly, \p{Armenian}, \p{IsArmenian}, or \p{Script=Armenian} matches any character in the Armenian script. In general, \p{X} matches any character with either the binary property X or the general category X. For example, \p{Lu}, \p{Uppercase_Letter}, or \p{GC=Lu} matches any uppercase letter. Binary properties that are not general categories include \p{White_Space}, \p{Alphabetic}, \p{Math}, and \p{Dash}. Examples of non-binary properties are \p{Bidi_Class=Right_to_Left}, \p{Word_Break=A_Letter}, and \p{Numeric_Value=10}. Uses Regexes are useful in a wide variety of text processing tasks, and more generally string processing, where the data need not be textual. Common applications include data validation, data scraping (especially web scraping), data wrangling, simple parsing, the production of syntax highlighting systems, and many other tasks. While regexes would be useful on Internet search engines, processing them across the entire database could consume excessive computer resources depending on the complexity and design of the regex. Although in many cases system administrators can run regex-based queries internally, most search engines do not offer regex support to the public. Notable exceptions include Google Code Search and Exalead. However, Google Code Search was shut down in January 2012. Examples The specific syntax rules vary depending on the specific implementation, programming language, or library in use. Additionally, the functionality of regex implementations can vary between versions. Because regexes can be difficult to both explain and understand without examples, interactive websites for testing regexes are a useful resource for learning regexes by experimentation. This section provides a basic description of some of the properties of regexes by way of illustration. The following conventions are used in the examples. metacharacter(s) ;; the metacharacters column specifies the regex syntax being demonstrated =~ m// ;; indicates a regex match operation in Perl =~ s/// ;; indicates a regex substitution operation in Perl Also worth noting is that these regexes are all Perl-like syntax. Standard POSIX regular expressions are different. Unless otherwise indicated, the following examples conform to the Perl programming language, release 5.8.8, January 31, 2006. This means that other implementations may lack support for some parts of the syntax shown here (e.g. basic vs. extended regex, \( \) vs. (), or lack of \d instead of POSIX [:digit:]). The syntax and conventions used in these examples coincide with that of other programming environments as well. Induction Regular expressions can often be created ("induced" or "learned") based on a set of example strings. This is known as the induction of regular languages, and is part of the general problem of grammar induction in computational learning theory. 
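As a toy illustration of induction from examples — using the positive and negative example sets that appear in the formal statement below — the following sketch simply tests a small, hand-picked pool of candidate patterns for consistency. Real grammar-induction algorithms search such hypothesis spaces systematically; the candidate list here is an assumption made only for demonstration.

```python
import re

positives = ["1", "10", "100"]
negatives = ["11", "1001", "101", "0"]

# Hypothetical, hand-picked hypothesis space for this toy problem.
candidates = [r"0*", r"1*", r"(10)*", r"1?0*", r"10*", r"[01]*"]

def consistent(pattern):
    accepts_all_pos = all(re.fullmatch(pattern, s) for s in positives)
    rejects_all_neg = not any(re.fullmatch(pattern, s) for s in negatives)
    return accepts_all_pos and rejects_all_neg

print([p for p in candidates if consistent(p)])   # expected: ['10*']
```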
Formally, given examples of strings in a regular language, and perhaps also given examples of strings not in that regular language, it is possible to induce a grammar for the language, i.e., a regular expression that generates that language. Not all regular languages can be induced in this way (see language identification in the limit), but many can. For example, the set of examples {1, 10, 100}, and negative set (of counterexamples) {11, 1001, 101, 0} can be used to induce the regular expression 1⋅0* (1 followed by zero or more 0s). See also Comparison of regular-expression engines Extended Backus–Naur form Matching wildcards Regular tree grammar Thompson's construction – converts a regular expression into an equivalent nondeterministic finite automaton (NFA) Notes References External links ISO/IEC 9945-2:1993 Information technology – Portable Operating System Interface (POSIX) – Part 2: Shell and Utilities ISO/IEC 9945-2:2002 Information technology – Portable Operating System Interface (POSIX) – Part 2: System Interfaces ISO/IEC 9945-2:2003 Information technology – Portable Operating System Interface (POSIX) – Part 2: System Interfaces ISO/IEC/IEEE 9945:2009 Information technology – Portable Operating System Interface (POSIX®) Base Specifications, Issue 7 Regular Expression, IEEE Std 1003.1-2017, Open Group Automata (computation) Formal languages Pattern matching Programming constructs Articles with example code 1951 introductions
14088819
https://en.wikipedia.org/wiki/Tux%20Typing
Tux Typing
Tux Typing is a free and open source typing tutor created especially for children. It features several different types of game play, at a variety of difficulty levels. It is designed to be fun and to improve typists' words-per-minute speed. It is written in the C programming language and is available in the repositories of some Linux distributions such as Fedora. Interface There is a practice mode for learning the basics of typing. There are also two games. In the first, fish are falling from the sky. Each fish has a letter or a word written on it. When the player presses the corresponding key, or types the appropriate word, Tux will position himself to eat the fish. The second game is similar, but the goal is to prevent comets from falling on a city. When a comet hits a city, that city's shield is removed; if the city is hit again while unshielded, it is destroyed. If a comet hits a city that has already been destroyed, points are deducted. In both games, different languages can be selected as a source for the words. See also Tux, of Math Command Tux Paint References Notes Tina Gasperson (September 8, 2008) "Three typing tutors and a boy", linux.com External links Official website Tux Typing at GitHub Software for children Typing software Linux games Open-source video games Free educational software GNOME Kids Touch typing tutors for Linux Free software that uses SDL Software that uses Cairo (graphics) Typing video games
40029539
https://en.wikipedia.org/wiki/Whonix
Whonix
Whonix (formerly TorBOX) is a Debian–based security-focused Linux distribution. It aims to provide privacy, security and anonymity on the internet. The operating system consists of two virtual machines, a "Workstation" and a Tor "Gateway", running Debian Linux. All communications are forced through the Tor network. Design Whonix is based on Kicksecure, a hardened Debian derivative with anonymity packages installed on top. It is distributed as two virtual machine images: a "Gateway" and a "Workstation". These images are installed on a user-provided host operating system. Each VM image contains a customized Linux instance based on Debian. Updates are distributed via Tor using Debian's apt-get package manager. The supported virtualization engines are VirtualBox, Qubes OS, and Linux KVM. An "advanced" configuration uses two physically separate computers, with the Gateway running on the hardware of one of the computers, and the Workstation running in a VM hosted on the second. This protects against attacks on hypervisors at the cost of flexibility. Supported physical hardware platforms include the Raspberry Pi 3 and unofficial community efforts on the PowerPC workstation hardware, Talos, from Raptor Computing. On first startup, each VM runs a check to ensure that the software is up to date. On every boot, the date and time are set using the sdwdate secure time daemon that works over Tor's TCP protocol. The Gateway VM is responsible for running Tor, and has two virtual network interfaces. One of these is connected to the outside Internet via NAT on the VM host, and is used to communicate with Tor relays. The other is connected to a virtual LAN that runs entirely inside the host. The Workstation VM runs user applications. It is connected only to the internal virtual LAN, and can directly communicate only with the Gateway, which forces all traffic coming from the Workstation to pass through the Tor network. The Workstation VM can "see" only IP addresses on the Internal LAN, which are the same in every Whonix installation. User applications therefore have no knowledge of the user's "real" IP address, nor do they have access to any information about the physical hardware. In order to obtain such information, an application would have to find a way to "break out" of the VM, or to subvert the Gateway (perhaps through a bug in Tor or the Gateway's Linux kernel). The Web browser pre-installed in the Workstation VM is the modified version of Mozilla Firefox provided by the Tor Project as part of its Tor Browser package. This browser has been changed to reduce the amount of system-specific information leaked to Web servers. Since version 15, like Tails, Whonix supports an optional "amnesiac" live-mode. This combines the best of both worlds by allowing Tor's entry guard system to choose long-lived entry points for the Tor network on the Gateway, reducing the adversaries' ability to trap users by running malicious relays, while rolling back to a trusted state. Some precautions on the host may be needed to avoid data being written to the disk accidentally. Grub-live, an additional separate project, aims to allow bare-metal Debian hosts to boot into a live session, avoiding forensic remnants on disc. Additional testing to confirm the efficacy of the package is needed as of yet. For the best defense against malicious guards, it is recommended to boot up the gateway from a pristine state and have a unique guard paired to each user activity. 
Users would take a snapshot to be able to switch to, and use that guard consistently. This setup guarantees that most activities of the user remain protected from malicious entry guards while not increasing the risk of running into one as a completely amnesiac system would. Scope Anonymity is a complex problem with many issues beyond IP address masking that are necessary to protect user privacy. Whonix focuses on these areas to provide a comprehensive solution. Some features: Kloak - A keystroke anonymization tool that randomizes the timing between key presses. Keystroke biometric algorithms have advanced to the point where it is viable to fingerprint users based on soft biometric traits with extremely high accuracy. This is a privacy risk because masking spatial information—such as the IP address via Tor—is insufficient to anonymize users. Tirdad - A Linux kernel module for overwriting TCP ISNs. TCP Initial Sequence Numbers use fine-grained kernel timer data, leaking correlatable patterns of CPU activity in non-anonymous system traffic. They may otherwise act as a side-channel for long running crypto operations. Disabled TCP Timestamps - TCP timestamps leak system clock info down to the millisecond which aids network adversaries in tracking systems behind NAT. sdwdate - A secure time daemon alternative to NTP that uses trustworthy sources and benefits from Tor's end-to-end encryption. NTP suffers from being easy to manipulate and surveil. RCE flaws were also discovered in NTP clients. MAT 2 - Software and filesystems add a lot of extraneous information about who, what, how, when and where documents and media files were created. MAT 2 strips out this information to make file sharing safer without divulging identifying information about the source. LKRG - Linux Kernel Runtime Guard (LKRG) is a Linux security module that thwarts classes of kernel exploitation techniques. Hardening the guest OS makes it more difficult for adversaries to break out of the hypervisor and deanonymize the user. Documentation The Whonix wiki includes a collection of operational security guides for tips on preserving anonymity while online. Additionally, a number of original content guides on which security tools to use, and how to use such tools, have been added over time. This includes how to access the I2P and Freenet networks over Tor. See also Tails (operating system) References External links 2012 software Operating system security X86-64 Linux distributions Linux distributions
5543542
https://en.wikipedia.org/wiki/ISC%20license
ISC license
The ISC license is a permissive free software license published by the Internet Software Consortium, now called Internet Systems Consortium (ISC). It is functionally equivalent to the simplified BSD and MIT licenses, but without language deemed unnecessary following the Berne Convention. Originally used for ISC software such as BIND and dig, it has become the preferred license for contributions to OpenBSD and the default license for npm packages. The ISC license is also used for Linux wireless drivers contributed by Qualcomm Atheros. License terms When initially released, the license did not include the term "and/or", which was changed from "and" by ISC in 2007. Paul Vixie stated on the BIND mailing list that the ISC license started using the term "and/or" to avoid controversy similar to the events surrounding the University of Washington's refusal to allow distribution of the Pine email software. OpenBSD license The OpenBSD project began using the ISC license in 2003, before ISC added the term "and/or". However, the Free Software Foundation claims that OpenBSD "updated" the license to remove the unnecessary term. Theo de Raadt of OpenBSD chose to retain the wording originally used by the University of California, Berkeley, which allowed free redistribution in either non-free or open-source software. Both licenses are considered acceptable by the Free Software Foundation, and compatible with the GNU GPL. Reception In 2015, ISC announced they would release their Kea DHCP Software under the Mozilla Public License 2.0, stating, "There is no longer a good reason for ISC to have its own license, separate from everything else". They also preferred a copyleft license, stating, "If a company uses our software but improves it, we really want those improvements to go back into the master source". Throughout the following years, they re-licensed all ISC-hosted software, including BIND in 2016 and ISC DHCP Server in 2017. The Publications Office of the European Union advises using the MIT license instead of the ISC License in order to reduce license proliferation. The GNU project states the inclusion of "and/or" still allows the license to be interpreted as prohibiting distribution of modified versions. Although they state there is no reason to avoid software released under this license, they advise against using the license to keep the problematic language from causing trouble in the future. See also Comparison of free and open-source software licenses Software using the ISC license Footnotes References External links Internet Systems Consortium's License Text License template at the Open Source Initiative Free and open-source software licenses Permissive software licenses
59765682
https://en.wikipedia.org/wiki/Moran%20Cerf
Moran Cerf
Moran Cerf (; born ) is an Israeli neuroscientist, assistant professor of business (at the Kellogg School of Management at Northwestern University), investor and a former white hat hacker. He is the founder of Think-Alike and B-Cube and the host and curator of PopTech, one of the top 5 leading conferences in the world. Cerf is also the president and co-founder of the Human Single Neuron society. As of 2013, he is a member of the institute on complex systems. Cerf has received numerous awards including the Templeton Foundation "Extraordinary Minds" award and the Chicagoan award. Recently, he was named one of the "40 leading professors below 40". He has won several national storytelling competitions, notably the Moth Grandslam, multiple times. Cerf is the Alfred P. Sloan screenwriting professor at the American Film Institute (AFI), where he teaches an annual workshop on science in films. He is also a science consultant to Hollywood films and TV series (Limitless, Bull, Falling Water, etc.). He has spoken publicly on topics of neuroscience, business, decision making and hacking (TED, PopTech, Google, TEDx, TED-Ed), and his views on the risks of hacking into humans' brains often appear in the media. Early life and background Cerf was born in Paris, France, to a Jewish family, and raised in Israel. As a young child, he was an art prodigy, attending the Israeli School of Arts. He was a part of many Israeli TV shows for children, and had a number of public appearances as a child that garnered him attention in Israel. In 2002, he had one of the first podcasts in the world, where he hosted a weekly radio show. As a young child, Cerf became an avid programmer and was part of a small community of hackers that paved the road to online white hat hacking. Education Cerf holds a Bachelor of Science in Physics (1998–2000) and a Master's Degree in Philosophy (2000) from Tel Aviv University. In 2002, he received the prestigious Presidential Scholarship award in Israel while originally pursuing a PhD in philosophy. In 2005, he shifted to neuroscience and ultimately completed a PhD in neuroscience at Caltech (2009). While pursuing his studies, Cerf worked as a white hat hacker in Israel's emerging cybersecurity industry, performing penetration tests for banks and government institutes. He attributes much of his understanding of the brain, and of the way he researches it, to the time spent breaking codes and performing penetration tests. A single meeting with the late Francis Crick, where the two discussed the importance of "using hacking skills to study the most interesting vault in the world – our brain", made Cerf leave his senior business post and pursue a full-time PhD at Caltech under Prof. Christof Koch. He completed his PhD at Caltech between the years 2005–2009 and worked on one of the flagship projects in neuroscience: "Single Neuron Recording in Humans". The project involves studying humans using direct recordings from their brains while they are awake, with electrodes implanted inside their heads. This research setup has garnered him worldwide recognition and resulted in publications that positioned Cerf at the forefront of neuroscience research. Following his time at Caltech, Cerf moved to NYU, where he spent three years studying what makes content engaging and looking for ways to translate his neuroscience research to a broader audience.
It is the time there that he attributes to his interest in finding ways to understand how to translate brain research to applications and the need to work together with the business world to communicate science. Following his time at NYU studying engagement, and with the growth of public attention to his work (partially due to his repeated nationwide wins of the Moth Grandslam story-telling competition, where he discussed the behind the scenes of neuroscience), Prof. Cerf became a public figure in the science communication sphere and his talks garnered a large following. In 2014, Cerf was offered a job as a professor at the Kellogg School of Management, where he holds positions as both a business and neuroscience professor. In 2016, he joined the MIT Media Lab as a visiting professor where he started working on dream recording and manipulation. Notable research Cerf is well known for his research on consciousness and projecting people's thoughts and dreams directly from their brain, for his research on free will, for his comments on the future of humans and the ability to hack our brains, and for his recent work in business and neuroscience in marketing, where he found a way to predict people's interest and engagement with content by observing their neural responses. For this work, he was named by Prof. Phil Kotler (father of modern marketing) "the next leader in marketing". He is a frequent contributor to "Business Insider", "Forbes" and various other popular media journals where he writes about topics of neuroscience and business and ways to implement science in decision-making. In his 2016 TED talk, Cerf discussed a method for extraction of dreams which he is rumored to turn into a company, Dream-Alike. Similar mention of his work about projecting thoughts was later discussed in a blog post at ‘Wait But Why’. Career Prior to his academic career, Cerf held positions in pharmaceutical, telecommunications, fashion, software development, and innovative research fields. Hacking Cerf spent nearly a decade working as a computer system hacker, breaking into financial and government institutes to test and improve their security. As a hacker, Cerf was first working for Check Point, and later started his own for-hire ‘white hat’ hackers team that helped business secure their systems from malevolent hackers. His team later integrated into the ADC (Advanced Defense Center) at a then-startup company called Imperva (now a NASDAQ traded company) where Cerf ran the ADC and spent a number of years before leaving for academia, holding various managerial positions. Some of Cerf's stories about hacking and his experiences as a hacker were included in various films and became the basis for his worldwide acclaim. Neuroscience In an interview with Forbes, Professor Cerf attributed his hacking background to what made him a successful neuroscientist, suggesting that the ways he uses non-traditional tools to investigate the brain and the creative ways of thinking about black boxes borrow from his time breaking codes. Cerf is known for his pioneering work studying patients undergoing brain-surgery, which allows him to investigate behavior, emotion, decision making, and dreams by directly recording the activity of individual neurons using electrodes implanted in the patients' brain. As a business professor, Cerf's works look at the drivers of decision-making in consumer neuroscience and using neuro-marketing techniques to gain insights about consumer behavior. 
He is an associate editor for several scholarly journals in both business and neuroscience and a consultant for companies that span a wide range of verticals, including automotive (Ferrari), Performance (Red Bull), Finances (TransUnion), and even relationships (Tinder). Hollywood Since 2007, Cerf permanently held a position at the American Film Institute (AFI) as the Alfred P. Sloan professor, where he teaches an annual workshop on science communication in film and TV. Through this workshop he worked on various films and TV shows and organizes the annual Sloan seminar where he holds a public discussion on science communication with Hollywood figures such as Ann Druyan (Cosmos), Michelle Ashford (Masters of Sex), Whitney Cummings (Whitney, 2 Broke Girls), Giancarlo Esposito (Breaking Bad), Jake Gyllenhaal (Donnie Darko), Len Mlodinow (ghost writer of Stephen Hawking's books and writer on MacGyver), Andy Serkis (King Kong, Gollum in Lord of the Rings), Michael Begler and Jack Amiel (The Knick), Clifford Johnson (theoretical physicist) and more. Books Competition and attention in the human brain: Single neuron recordings and eye-tracking in healthy controls and subjects with neurological and psychiatric disorders, Lap Lambert Academic Publishing, 2012, Single Neuron Studies of the Human Brain: Probing Cognition, co-authored with Itzhak Fried and Ueli Rutishauser and Gabriel Kreiman, The MIT Press, 2014, Consumer Neuroscience, MIT Press, 2017, Foresight, Northwestern University, 2017, Awards and recognition 2016: 40 leading professors below 40 2010, 2016: 2 times US champion of the Moth Grand-Slam Filmography Philanthropy In 2018, Cerf founded B-Cube, a non-profit that aims at helping organizations use advances in neuroscience to change behavior. While B-Cube's endeavors are not public, it is estimated that they invested shy of $12M in 2018 in their projects. From their website, it seems that B-Cube has worked with Ferrari (on improving risk assessment in driving), with SS&C (on ways to improve learning in classrooms), with Viacom (on applications of neuroscience to entertainment) and with Founders Pledge (on using neuroscience for social good). Given that Cerf is the host and curator of PopTech, responsible for their fellow's program, it was suggested that B-Cube may be involved in the PopTech mission for social good promotion through technology. Cerf has spoken at the UN and various organizations on ways to use neuroscience and research to promote young individuals’ rise out of poverty, on the ethics of using neuroscience to prevent threats to democracy and on the dangers of the attention economy. Personal life Cerf's political affiliation is unknown although he has worked with the United States Digital Service (USDS) under President Obama, and continued working with 18F during the Trump administration. His work focused on a-political aspects of cybersecurity (a project known as ‘login.gov’). In 2017, Cerf dated actress Ashley Judd. They were last seen together at the Sundance Film Festival. In 2011–2012 Cerf dated actress Kristina Anapau (True Blood). In 2016, Cerf was selected as one of Elle Magazine’s most eligible bachelors. In his free time, Cerf is a pilot (Private Jets and Helicopters). 
Board and other posts Board member, Chicago Ideas Host and curator, PopTech Co-founder, Human Intracranial Research Foundation board member, VR Americas Co-Founder, ThinkAlike Founder (rumored), DreamAlike Presidential Innovation Fellow, White House USDS/18F Founder, B-Cube See also List of neuroscientists List of hackers David Eagleman Dan Ariely Bryan Johnson (entrepreneur) Elon Musk References External links Moran Cerf's official website Moran Cerf, Le "Hacker" Du Cerveau Moran Cerf at TED (conference) B-Cube California Institute of Technology alumni Tel Aviv University alumni Living people 21st-century American short story writers Scientists from New York City Writers from New York City Year of birth missing (living people) French emigrants to Israel Israeli Jews Israeli neuroscientists Israeli science writers Israeli scientists
6600
https://en.wikipedia.org/wiki/Currying
Currying
In mathematics and computer science, currying is the technique of converting a function that takes multiple arguments into a sequence of functions that each takes a single argument. For example, currying a function that takes three arguments creates three functions: Or more abstractly, a function that takes two arguments, one from and one from , and produces outputs in by currying is translated into a function that takes a single argument from and produces as outputs functions from to Currying is related to, but not the same as, partial application. Currying is useful in both practical and theoretical settings. In functional programming languages, and many others, it provides a way of automatically managing how arguments are passed to functions and exceptions. In theoretical computer science, it provides a way to study functions with multiple arguments in simpler theoretical models which provide only one argument. The most general setting for the strict notion of currying and uncurrying is in the closed monoidal categories, which underpins a vast generalization of the Curry–Howard correspondence of proofs and programs to a correspondence with many other structures, including quantum mechanics, cobordisms and string theory. It was introduced by Gottlob Frege, developed by Moses Schönfinkel, and further developed by Haskell Curry. Uncurrying is the dual transformation to currying, and can be seen as a form of defunctionalization. It takes a function whose return value is another function , and yields a new function that takes as parameters the arguments for both and , and returns, as a result, the application of and subsequently, , to those arguments. The process can be iterated. Motivation Currying provides a way for working with functions that take multiple arguments, and using them in frameworks where functions might take only one argument. For example, some analytical techniques can only be applied to functions with a single argument. Practical functions frequently take more arguments than this. Frege showed that it was sufficient to provide solutions for the single argument case, as it was possible to transform a function with multiple arguments into a chain of single-argument functions instead. This transformation is the process now known as currying. All "ordinary" functions that might typically be encountered in mathematical analysis or in computer programming can be curried. However, there are categories in which currying is not possible; the most general categories which allow currying are the closed monoidal categories. Some programming languages almost always use curried functions to achieve multiple arguments; notable examples are ML and Haskell, where in both cases all functions have exactly one argument. This property is inherited from lambda calculus, where multi-argument functions are usually represented in curried form. Currying is related to, but not the same as partial application. In practice, the programming technique of closures can be used to perform partial application and a kind of currying, by hiding arguments in an environment that travels with the curried function. Illustration Suppose we have a function which takes two real number () arguments and outputs real numbers, and it is defined by . Currying translates this into a function which takes a single real argument and outputs functions from to . In symbols, , where denotes the set of all functions that take a single real argument and produce real outputs. 
For every real number , define the function by , and then define the function by . So for instance, is the function that sends its real argument to the output , or . We see that in general so that the original function and its currying convey exactly the same information. In this situation, we also write This also works for functions with more than two arguments. If were a function of three arguments , its currying would have the property History The name "currying", coined by Christopher Strachey in 1967, is a reference to logician Haskell Curry. The alternative name "Schönfinkelisation" has been proposed as a reference to Moses Schönfinkel. In the mathematical context, the principle can be traced back to work in 1893 by Frege. Definition Currying is most easily understood by starting with an informal definition, which can then be molded to fit many different domains. First, there is some notation to be established. The notation denotes all functions from to . If is such a function, we write . Let denote the ordered pairs of the elements of and respectively, that is, the Cartesian product of and . Here, and may be sets, or they may be types, or they may be other kinds of objects, as explored below. Given a function , currying constructs a new function . That is, takes an argument from and returns a function that maps to . It is defined by for from and from . We then also write Uncurrying is the reverse transformation, and is most easily understood in terms of its right adjoint, the function Set theory In set theory, the notation is used to denote the set of functions from the set to the set . Currying is the natural bijection between the set of functions from to , and the set of functions from to the set of functions from to . In symbols: Indeed, it is this natural bijection that justifies the exponential notation for the set of functions. As is the case in all instances of currying, the formula above describes an adjoint pair of functors: for every fixed set , the functor is left adjoint to the functor . In the category of sets, the object is called the exponential object. Function spaces In the theory of function spaces, such as in functional analysis or homotopy theory, one is commonly interested in continuous functions between topological spaces. One writes (the Hom functor) for the set of all functions from to , and uses the notation to denote the subset of continuous functions. Here, is the bijection while uncurrying is the inverse map. If the set of continuous functions from to is given the compact-open topology, and if the space is locally compact Hausdorff, then is a homeomorphism. This is also the case when , and are compactly generated, although there are more cases. One useful corollary is that a function is continuous if and only if its curried form is continuous. Another important result is that the application map, usually called "evaluation" in this context, is continuous (note that eval is a strictly different concept in computer science.) That is, is continuous when is compact-open and locally compact Hausdorff. These two results are central for establishing the continuity of homotopy, i.e. when is the unit interval , so that can the thought of as either a homotopy of two functions from to , or, equivalently, a single (continuous) path in . Algebraic topology In algebraic topology, currying serves as an example of Eckmann–Hilton duality, and, as such, plays an important role in a variety of different settings. 
For example, loop space is adjoint to reduced suspensions; this is commonly written as where is the set of homotopy classes of maps , and is the suspension of A, and is the loop space of A. In essence, the suspension can be seen as the cartesian product of with the unit interval, modulo an equivalence relation to turn the interval into a loop. The curried form then maps the space to the space of functions from loops into , that is, from into . Then is the adjoint functor that maps suspensions to loop spaces, and uncurrying is the dual. The duality between the mapping cone and the mapping fiber (cofibration and fibration) can be understood as a form of currying, which in turn leads to the duality of the long exact and coexact Puppe sequences. In homological algebra, the relationship between currying and uncurrying is known as tensor-hom adjunction. Here, an interesting twist arises: the Hom functor and the tensor product functor might not lift to an exact sequence; this leads to the definition of the Ext functor and the Tor functor. Domain theory In order theory, that is, the theory of lattices of partially ordered sets, is a continuous function when the lattice is given the Scott topology. Scott-continuous functions were first investigated in the attempt to provide a semantics for lambda calculus (as ordinary set theory is inadequate to do this). More generally, Scott-continuous functions are now studied in domain theory, which encompasses the study of denotational semantics of computer algorithms. Note that the Scott topology is quite different than many common topologies one might encounter in the category of topological spaces; the Scott topology is typically finer, and is not sober. The notion of continuity makes its appearance in homotopy type theory, where, roughly speaking, two computer programs can be considered to be homotopic, i.e. compute the same results, if they can be "continuously" refactored from one to the other. Lambda calculi In theoretical computer science, currying provides a way to study functions with multiple arguments in very simple theoretical models, such as the lambda calculus, in which functions only take a single argument. Consider a function taking two arguments, and having the type , which should be understood to mean that x must have the type , y must have the type , and the function itself returns the type . The curried form of f is defined as where is the abstractor of lambda calculus. Since curry takes, as input, functions with the type , one concludes that the type of curry itself is The → operator is often considered right-associative, so the curried function type is often written as . Conversely, function application is considered to be left-associative, so that is equivalent to . That is, the parenthesis are not required to disambiguate the order of the application. Curried functions may be used in any programming language that supports closures; however, uncurried functions are generally preferred for efficiency reasons, since the overhead of partial application and closure creation can then be avoided for most function calls. Type theory In type theory, the general idea of a type system in computer science is formalized into a specific algebra of types. For example, when writing , the intent is that and are types, while the arrow is a type constructor, specifically, the function type or arrow type. Similarly, the Cartesian product of types is constructed by the product type constructor . 
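In the notation of the lambda-calculus and type passages above, currying sends f : (X × Y) → Z to curry(f) : X → (Y → Z), and uncurrying goes the other way. A hedged sketch of how those arrow types can be written down with Python's typing module follows (Python is used only for consistency with the other examples in this document; ML or Haskell state the same types more directly):

```python
from typing import Callable, TypeVar

X = TypeVar("X")
Y = TypeVar("Y")
Z = TypeVar("Z")

# curry : ((X x Y) -> Z) -> (X -> (Y -> Z))
def curry(f: Callable[[X, Y], Z]) -> Callable[[X], Callable[[Y], Z]]:
    def outer(x: X) -> Callable[[Y], Z]:
        def inner(y: Y) -> Z:
            return f(x, y)
        return inner
    return outer

# uncurry : (X -> (Y -> Z)) -> ((X x Y) -> Z)
def uncurry(g: Callable[[X], Callable[[Y], Z]]) -> Callable[[X, Y], Z]:
    def h(x: X, y: Y) -> Z:
        return g(x)(y)
    return h

add = lambda x, y: x + y
print(curry(add)(2)(3))            # 5
print(uncurry(curry(add))(2, 3))   # 5
```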
The type-theoretical approach is expressed in programming languages such as ML and the languages derived from and inspired by it: CaML, Haskell and F#. The type-theoretical approach provides a natural complement to the language of category theory, as discussed below. This is because categories, and specifically, monoidal categories, have an internal language, with simply-typed lambda calculus being the most prominent example of such a language. It is important in this context, because it can be built from a single type constructor, the arrow type. Currying then endows the language with a natural product type. The correspondence between objects in categories and types then allows programming languages to be re-interpreted as logics (via Curry–Howard correspondence), and as other types of mathematical systems, as explored further, below. Logic Under the Curry–Howard correspondence, the existence of currying and uncurrying is equivalent to the logical theorem , as tuples (product type) corresponds to conjunction in logic, and function type corresponds to implication. The exponential object in the category of Heyting algebras is normally written as material implication . Distributive Heyting algebras are Boolean algebras, and the exponential object has the explicit form , thus making it clear that the exponential object really is material implication. Category theory The above notions of currying and uncurrying find their most general, abstract statement in category theory. Currying is a universal property of an exponential object, and gives rise to an adjunction in cartesian closed categories. That is, there is a natural isomorphism between the morphisms from a binary product and the morphisms to an exponential object . This generalizes to a broader result in closed monoidal categories: Currying is the statement that the tensor product and the internal Hom are adjoint functors; that is, for every object there is a natural isomorphism: Here, Hom denotes the (external) Hom-functor of all morphisms in the category, while denotes the internal hom functor in the closed monoidal category. For the category of sets, the two are the same. When the product is the cartesian product, then the internal hom becomes the exponential object . Currying can break down in one of two ways. One is if a category is not closed, and thus lacks an internal hom functor (possibly because there is more than one choice for such a functor). Another way is if it is not monoidal, and thus lacks a product (that is, lacks a way of writing down pairs of objects). Categories that do have both products and internal homs are exactly the closed monoidal categories. The setting of cartesian closed categories is sufficient for the discussion of classical logic; the more general setting of closed monoidal categories is suitable for quantum computation. The difference between these two is that the product for cartesian categories (such as the category of sets, complete partial orders or Heyting algebras) is just the Cartesian product; it is interpreted as an ordered pair of items (or a list). Simply typed lambda calculus is the internal language of cartesian closed categories; and it is for this reason that pairs and lists are the primary types in the type theory of LISP, Scheme and many functional programming languages. By contrast, the product for monoidal categories (such as Hilbert space and the vector spaces of functional analysis) is the tensor product. 
The internal language of such categories is linear logic, a form of quantum logic; the corresponding type system is the linear type system. Such categories are suitable for describing entangled quantum states, and, more generally, allow a vast generalization of the Curry–Howard correspondence to quantum mechanics, to cobordisms in algebraic topology, and to string theory. The linear type system and linear logic are useful for describing synchronization primitives, such as mutual exclusion locks, and the operation of vending machines. Contrast with partial function application Currying and partial function application are often conflated. One of the significant differences between the two is that a call to a partially applied function returns the result right away, not another function down the currying chain; this distinction can be illustrated clearly for functions whose arity is greater than two. Given a function of type f : (X × Y × Z) → N, currying produces curry(f) : X → (Y → (Z → N)). That is, while an evaluation of the first function might be represented as f(1, 2, 3), evaluation of the curried function would be represented as f_curried(1)(2)(3), applying each argument in turn to a single-argument function returned by the previous invocation. Note that after calling f_curried(1), we are left with a function that takes a single argument and returns another function, not a function that takes two arguments. In contrast, partial function application refers to the process of fixing a number of arguments to a function, producing another function of smaller arity. Given the definition of f above, we might fix (or 'bind') the first argument, producing a function of type partial(f) : (Y × Z) → N. Evaluation of this function might be represented as f_partial(2, 3). Note that the result of partial function application in this case is a function that takes two arguments. Intuitively, partial function application says "if you fix the first argument of the function, you get a function of the remaining arguments". For example, if function div stands for the division operation x/y, then div with the parameter x fixed at 1 (i.e., div 1) is another function: the same as the function inv that returns the multiplicative inverse of its argument, defined by inv(y) = 1/y. The practical motivation for partial application is that very often the functions obtained by supplying some but not all of the arguments to a function are useful; for example, many languages have a function or operator similar to plus_one. Partial application makes it easy to define these functions, for example by creating a function that represents the addition operator with 1 bound as its first argument. Partial application can be seen as evaluating a curried function at a fixed point, e.g. given f : (X × Y × Z) → N and a ∈ X, then curry(partial(f)_a)(y)(z) = curry(f)(a)(y)(z), or simply curry(partial(f)_a) = curry(f)(a), where curry(f)(a) curries f's first parameter. Thus, partial application is reduced to a curried function at a fixed point. Further, a curried function at a fixed point is (trivially) a partial application. For further evidence, note that, given any function f(x, y), a function g(y, x) may be defined such that g(y, x) = f(x, y). Thus, any partial application may be reduced to a single curry operation. As such, curry is more suitably defined as an operation which, in many theoretical cases, is often applied recursively, but which is theoretically indistinguishable (when considered as an operation) from a partial application. So, a partial application can be defined as the objective result of a single application of the curry operator on some ordering of the inputs of some function.
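To make the contrast concrete, the following Python sketch uses functools.partial (the standard-library helper for partial application) next to a hand-rolled curried call chain; f, div, inv and plus_one are the illustrative names used in the text above, not part of any library.

from functools import partial

def f(x, y, z):
    return (x, y, z)

# Currying: each call consumes one argument and returns another function.
f_curried = lambda x: lambda y: lambda z: f(x, y, z)
assert f_curried(1)(2)(3) == f(1, 2, 3)

# Partial application: fix the first argument, obtaining a function
# of the two remaining arguments, of type (Y × Z) → N.
f_partial = partial(f, 1)
assert f_partial(2, 3) == f(1, 2, 3)

# The div/inv and plus_one examples from the text above.
div = lambda x, y: x / y
inv = partial(div, 1)                      # inv(y) == 1/y
plus_one = partial(lambda x, y: x + y, 1)  # addition with 1 bound as first argument
assert inv(4) == 0.25 and plus_one(41) == 42

Note that f_curried(1) returns a function still expecting one argument at a time, whereas f_partial is a function of two arguments, which is exactly the distinction drawn above.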
See also Tensor-hom adjunction Lazy evaluation Closure (computer science) smn theorem Closed monoidal category Notes References External links Currying Schonfinkelling at the Portland Pattern Repository Currying != Generalized Partial Application! - post at Lambda-the-Ultimate.org Higher-order functions Functional programming Lambda calculus Articles with example Java code
3041948
https://en.wikipedia.org/wiki/Agile%20modeling
Agile modeling
Agile modeling (AM) is a methodology for modeling and documenting software systems based on best practices. It is a collection of values and principles, that can be applied on an (agile) software development project. This methodology is more flexible than traditional modeling methods, making it a better fit in a fast changing environment. It is part of the agile software development tool kit. Agile modeling is a supplement to other agile development methodologies such as Scrum, extreme programming (XP), and Rational Unified Process (RUP). It is explicitly included as part of the disciplined agile delivery (DAD) framework. As per 2011 stats, agile modeling accounted for 1% of all agile software development. Core practices There are several core practices: Documentation Document continuously. Documentation is made throughout the life-cycle, in parallel to the creation of the rest of the solution. Document late. Documentation is made as late as possible, avoiding speculative ideas that are likely to change in favor of stable information. Executable specifications. Requirements are specified in the form of executable "customer tests", instead of non-executable "static" documentation. Single-source information. Information (models, documentation, software), is stored in one place and one place only, to prevent questions about what the "correct" version / information is. Modeling Active stakeholder participation. Stakeholders of the solution/software being modeled should be actively involved with doing so. This is an extension of the on-site customer practice from Extreme Programming. Architecture envisioning. The team performs light-weight, high-level modeling that is just barely good enough (JBGE) at the beginning of a software project so as to explore the architecture strategy that the team believes will work. Inclusive tools. Prefer modelling tools, such as whiteboards and paper, that are easy to work with (they're inclusive). Iteration modeling. When a requirement/work item has not been sufficiently explored in detail via look-ahead modeling the team may choose to do that exploration during their iteration/sprint planning session. The need to do this is generally seen as a symptom that the team is not doing sufficient look-ahead modeling. Just barely good enough (JBGE). All artifact, including models and documents, should be just sufficient for the task at hand. JBGE is contextual in nature, in the case of the model it is determined by a combination of the complexity of whatever the model describes and the skills of the audience for that model. Look-ahead modeling. An agile team will look down their backlog one or more iterations/sprints ahead to ensure that a requirement/work item is ready to be worked on. Also called "backlog grooming" or "backlog refinement" in Scrum. Model storming. A short, often impromptu, agile modeling session. Model storming sessions are held to explore the details of a requirement or aspect of your design. Multiple models. Agile modelers should know how to create a range of model types (such as user stories, story maps, data models, Unified Modeling Language (UML) diagrams, and more) so as to apply the best model for the situation at hand. Prioritized requirements. Requirements should be worked on in priority order. Requirements envisioning. The team performs light-weight, high-level modeling that is JBGE at the beginning of a software project to explore the stakeholder requirements. 
Limitations There is significant dependence on personal communication and customer collaboration. Agile modeling disciplines can be difficult to apply : On large teams (say 30 or more) without adequate tooling support Where team members are unable to share and collaborate on models (which would make agile software development in general difficult) When modeling skills are weak or lacking. See also Story-driven modelling Agile software development Robustness diagram References External links The Agile Modeling Home Page Agile Model Driven Development (AMDD) Agile software development
48044846
https://en.wikipedia.org/wiki/Snapforce%20CRM
Snapforce CRM
Snapforce CRM is a customer relationship management (CRM) SaaS application developed by Snapforce.com. Its primary use case is customer management and sales automation, although it can also be configured with telephony support. Additional software components include customer databases, customer interaction tracking, reporting, and workflow automation. Deployment options Snapforce CRM can be configured to handle inbound and outbound calling; calls are logged to the prospect or customer record automatically in real time. Recognition In May 2014 the company became the first CRM software provider to offer telephony services as a native feature. Snapforce CRM was recognized as one of ten "Top Players" in the Customer Relationship Management Market Report 2015. References External links Company Website Cloud applications CRM software companies Customer relationship management software Software companies based in New Jersey VoIP companies Software companies of the United States Software companies established in 2013 Companies established in 2013 2013 establishments in the United States
9402006
https://en.wikipedia.org/wiki/Gauloises
Gauloises
Gauloises (, "Gaulish" [feminine plural] in French; cigarette is a feminine noun in French), is a brand of cigarette of French origin, now manufactured in Poland. It is produced by the company Imperial Tobacco following its acquisition of Altadis in January 2008 in most countries, but produced and sold by Reemtsma in Germany. Until 2017 the cigarette was manufactured at a plant in Riom, Puy-de-Dôme, in France. History Gauloises was launched by SEITA in 1910. Traditional Gauloises were short, wide, unfiltered and made with dark tobaccos from Syria and Turkey which produced a strong and distinctive aroma. The brand is most famous for its cigarettes' strength, especially in its original unfiltered version. Forty years later, filtered Gauloises cigarettes debuted. In 1984, the Gauloises brand was expanded to include a light, American-type tobacco with a filter. The original non-filter, Gauloises Caporal, have been discontinued and replaced with Gauloises Brunes, which are also filterless but less strong. Gauloises Brunes have low tar and nicotine levels, because of European tobacco laws, but the tobacco is still dark and strong-tasting. As of 2018, the Gauloises cigarettes are produced in Poland after the last manufacturing plant in Riom, Puy-de-Dôme closed its doors in the end of 2017. Between the World Wars the smoking of Gauloises in France was considered patriotic and an affiliation with French "heartland" values. The brand was associated with the cigarette-smoking poilu (a slang term for the French infantryman in the trenches) and the resistance fighters during the Vichy Regime. Their slogan was "Liberté toujours ("Freedom forever"). In 1939–1940 some packets of cigarettes were given a distinctive "troop brand". In March 1954 SEITA launched the "Gauloise Disque Bleu" brand, with CEO Pierre Grimanelli proud of the new packaging that would, he argued, increase sales. The brand was also linked to high-status and inspirational figures representing the worlds of art (e.g. Pablo Picasso) and the intellectual elite (e.g. Jean-Paul Sartre, Albert Camus and Jean Baudrillard). In popular music, for example French pianist and composer Maurice Ravel, American singer Jim Morrison and British music icon John Lennon. American artist Robert Motherwell used Gauloises packets and cartons in many collages, including an extensive series with the packets surrounded by bright red acrylic paint, often with incised lines in the painted areas. In the introduction to his 2015 book Robert Motherwell: The Making of an American Giant, gallery owner Bernard Jacobson says, "Motherwell smoked Lucky Strikes, but in his collage life he smokes Gauloises, around whose blue packets he now organises one composition after another, 'exotic to me precisely because in the normal course of things I don't smoke French cigarettes'." And by incorporating Gauloises packets he makes deft and condensed allusion to "French blue": to the Mediterranean and the palette of Matisse ... to the smoke coiling up in a Cubist assemblage." Henri Charrière, French author and convict, repeatedly references the smoking of Gauloises in his autobiography Papillon. This, together with the romantic associations of France, made Gauloises a popular brand among some writers and artists: in practically every story and novel written by Julio Cortázar set in Paris, the protagonists smoke Gauloises. John Lennon was a noted smoker of Gauloises Bleues. 
Frank O'Hara in his poem "The Day Lady Died" writes of going to "the tobacconist in the Ziegfeld Theatre" in New York and casually asking "for a carton of Gauloises." In John le Carré's 1979 novel Smiley's People a key plot point involves the concealment of a microfilm in a packet of Gauloises, which are Vladimir's favourite. Smoking Gauloises is also mentioned in the teen television series Gossip Girl. D/Sgt Mort Cooperman smokes Gauloises in several mystery novels by Richard "Kinky" Friedman. Smoking Gauloises was also promoted as a contribution to the national good: a portion of the profits from their sale was paid to the Régie Française des Tabacs, a semi-governmental corporation charged with controlling the use of tobacco, especially by minors, and directing its profits towards socially beneficial causes. The designers of the traditional Gauloise packet reinforced national identity by selecting a peculiarly French shade of blue (like the blues later used in the work of French artist Yves Klein). John Frusciante, former guitarist of the Red Hot Chili Peppers, smoked Gauloises, as noted in the book Scar Tissue by friend and bandmate Anthony Kiedis. During his time at Marlborough College in the early 1960s English singer-songwriter Nick Drake would enjoy smoking Disque Blue cigarettes with his friend Jeremy Mason, in the High Street of the town. The last factory producing Gauloises, in Lille, closed in 2005. In July 2016, the French government considered a ban on both the Gauloises and Gitanes cigarette brands because they were deemed "too stylish and cool". The ban would also apply to brands including Marlboro Gold, Vogue, Lucky Strike and Fortuna. It is the result of a new public health law based on a European directive that says tobacco products "must not include any element that contributes to the promotion of tobacco or give an erroneous impression of certain characteristics". Four major tobacco companies have written to the government seeking clarification on the potential law, calling for an urgent meeting to discuss the details of the plan. In the letter they accuse French health minister Marisol Touraine of an "arbitrary and disproportionate" application of EU directives. Legal problems The cigarette was manufactured by SEITA but 1999 proved to be a landmark year. The legal difficulties crystallised when a French health insurance fund filed a 51.33 million franc lawsuit against four cigarette companies, including SEITA, to cover the estimated and continuing costs of treating the illnesses linked to cigarette smoking. This was followed by an action filed by the family of a deceased heavy smoker and the French state health insurer, Caisse Primaire d'Assurance Maladie, claiming compensation for the cost of the deceased's medical treatment and for producing a dangerous and addictive product. Consequently, brand management was assigned to Altadis, with joint French and Spanish ownership, and this company continued manufacture and international distribution until its acquisition by Imperial Tobacco. On 30 October 2007 the Criminal Chamber of the French Supreme Court ruled against SEITA, accusing it of having signed a partnership agreement with the organisers of the 2000–2002 Francofolies Festivals for the use of visual brand elements of Gauloises Blondes. Sport sponsorship Auto sponsorship Gauloises was the primary sponsor of the Equipe Ligier Formula One team in 1996, replacing sister brand Gitanes, as well as its successor Prost Grand Prix from 1997 until 2000. 
Gauloises also sponsored the Kronos-run Citroën cars in the World Rally Championship during the 2006 season. Moto sponsorship Gauloises was the primary sponsor of the factory Yamaha team in 2004 and 2005, as well as the satellite Tech 3 team from 2001 until 2004, when that team's backing passed to Fortuna in the MotoGP class. Gauloises was a major sponsor of teams in the Dakar Rally. Gauloises also sponsored various grand prix races on the MotoGP calendar, such as the Dutch TT and the races at the Circuit de Barcelona-Catalunya and the Brno Circuit. Markets Gauloises is mainly sold in France, but also was or still is sold in Australia, Canada, Luxembourg, Belgium, Netherlands, Norway, Sweden, Denmark, Germany, Austria, Switzerland, Spain, Italy, Greece, Poland, Hungary, Croatia, Serbia, Czech Republic, Slovenia, Ukraine, Belarus, Russia, Morocco, Tunisia, Algeria, Madagascar, Syria, Israel, United Arab Emirates, Thailand, Mexico, South Africa and Argentina. Gauloises is no longer available in the United Kingdom. See also Cigarette Gitanes Tabac de Troupe at French Wikipedia References External links Imperial Brands brands French brands Cigarette brands Products introduced in 1910
7622727
https://en.wikipedia.org/wiki/Anti-computer%20forensics
Anti-computer forensics
Anti-computer forensics or counter-forensics are techniques used to obstruct forensic analysis. Definition Anti-forensics has only recently been recognized as a legitimate field of study. Within this field of study, numerous definitions of anti-forensics abound. One of the more widely known and accepted definitions comes from Marc Rogers of Purdue University. Rogers uses a more traditional "crime scene" approach when defining anti-forensics. "Attempts to negatively affect the existence, amount and/or quality of evidence from a crime scene, or make the analysis and examination of evidence difficult or impossible to conduct." One of the earliest detailed presentations of anti-forensics, in Phrack Magazine in 2002, defines anti-forensics as "the removal, or hiding, of evidence in an attempt to mitigate the effectiveness of a forensics investigation". A more abbreviated definition is given by Scott Berinato in his article entitled, The Rise of Anti-Forensics. "Anti-forensics is more than technology. It is an approach to criminal hacking that can be summed up like this: Make it hard for them to find you and impossible for them to prove they found you." Neither author takes into account using anti-forensics methods to ensure the privacy of one's personal data. Sub-categories Anti-forensics methods are often broken down into several sub-categories to make classification of the various tools and techniques simpler. One of the more widely accepted subcategory breakdowns was developed by Dr. Marcus Rogers. He has proposed the following sub-categories: data hiding, artifact wiping, trail obfuscation and attacks against the CF (computer forensics) processes and tools. Attacks against forensics tools directly has also been called counter-forensics. Purpose and goals Within the field of digital forensics there is much debate over the purpose and goals of anti-forensic methods. The common conception is that anti-forensic tools are purely malicious in intent and design. Others believe that these tools should be used to illustrate deficiencies in digital forensic procedures, digital forensic tools, and forensic examiner education. This sentiment was echoed at the 2005 Blackhat Conference by anti-forensic tool authors, James Foster and Vinnie Liu. They stated that by exposing these issues, forensic investigators will have to work harder to prove that collected evidence is both accurate and dependable. They believe that this will result in better tools and education for the forensic examiner. Also, counter-forensics has significance for defence against espionage, as recovering information by forensic tools serves the goals of spies equally as well as investigators. Data hiding Data hiding is the process of making data difficult to find while also keeping it accessible for future use. "Obfuscation and encryption of data give an adversary the ability to limit identification and collection of evidence by investigators while allowing access and use to themselves." Some of the more common forms of data hiding include encryption, steganography and other various forms of hardware/software based data concealment. Each of the different data hiding methods makes digital forensic examinations difficult. When the different data hiding methods are combined, they can make a successful forensic investigation nearly impossible. Encryption One of the more commonly used techniques to defeat computer forensics is data encryption. 
In a presentation he gave on encryption and anti-forensic methodologies the Vice President of Secure Computing, Paul Henry, referred to encryption as a "forensic expert's nightmare". The majority of publicly available encryption programs allow the user to create virtual encrypted disks which can only be opened with a designated key. Through the use of modern encryption algorithms and various encryption techniques these programs make the data virtually impossible to read without the designated key. File level encryption encrypts only the file contents. This leaves important information such as file name, size and timestamps unencrypted. Parts of the content of the file can be reconstructed from other locations, such as temporary files, swap file and deleted, unencrypted copies. Most encryption programs have the ability to perform a number of additional functions that make digital forensic efforts increasingly difficult. Some of these functions include the use of a keyfile, full-volume encryption, and plausible deniability. The widespread availability of software containing these functions has put the field of digital forensics at a great disadvantage. Steganography Steganography is a technique where information or files are hidden within another file in an attempt to hide data by leaving it in plain sight. "Steganography produces dark data that is typically buried within light data (e.g., a non-perceptible digital watermark buried within a digital photograph)." Some experts have argued that the use of steganography techniques are not very widespread and therefore shouldn't be given a lot of thought. Most experts will agree that steganography has the capability of disrupting the forensic process when used correctly. According to Jeffrey Carr, a 2007 edition of Technical Mujahid (a bi-monthly terrorist publication) outlined the importance of using a steganography program called Secrets of the Mujahideen. According to Carr, the program was touted as giving the user the capability to avoid detection by current steganalysis programs. It did this through the use of steganography in conjunction with file compression. Other forms of data hiding Other forms of data hiding involve the use of tools and techniques to hide data throughout various locations in a computer system. Some of these places can include "memory, slack space, hidden directories, bad blocks, alternate data streams, (and) hidden partitions." One of the more well known tools that is often used for data hiding is called Slacker (part of the Metasploit framework). Slacker breaks up a file and places each piece of that file into the slack space of other files, thereby hiding it from the forensic examination software. Another data hiding technique involves the use of bad sectors. To perform this technique, the user changes a particular sector from good to bad and then data is placed onto that particular cluster. The belief is that forensic examination tools will see these clusters as bad and continue on without any examination of their contents. Artifact wiping The methods used in artifact wiping are tasked with permanently eliminating particular files or entire file systems. This can be accomplished through the use of a variety of methods that include disk cleaning utilities, file wiping utilities and disk degaussing/destruction techniques. Disk cleaning utilities Disk cleaning utilities use a variety of methods to overwrite the existing data on disks (see data remanence). 
The effectiveness of disk cleaning utilities as anti-forensic tools is often challenged, as some believe they are not completely effective. Experts who don't believe that disk cleaning utilities are acceptable for disk sanitization base their opinions on current DoD policy, which states that the only acceptable form of sanitization is degaussing. (See National Industrial Security Program.) Disk cleaning utilities are also criticized because they leave signatures that the file system was wiped, which in some cases is unacceptable. Some of the widely used disk cleaning utilities include DBAN, srm, BCWipe Total WipeOut, KillDisk, PC Inspector and CyberScrubs cyberCide. Another option, approved by the NIST and the NSA, is CMRR Secure Erase, which uses the Secure Erase command built into the ATA specification. File wiping utilities File wiping utilities are used to delete individual files from an operating system. The advantage of file wiping utilities is that they can accomplish their task in a relatively short amount of time, as opposed to disk cleaning utilities, which take much longer. Another advantage of file wiping utilities is that they generally leave a much smaller signature than disk cleaning utilities. There are two primary disadvantages of file wiping utilities: first, they require user involvement in the process, and second, some experts believe that file wiping programs don't always correctly and completely wipe file information. Some of the widely used file wiping utilities include BCWipe, R-Wipe & Clean, Eraser, Aevita Wipe & Delete and CyberScrubs PrivacySuite. On Linux, tools like shred and srm can also be used to wipe single files. SSDs are by design more difficult to wipe, since the firmware can redirect writes to other cells, leaving the original data recoverable. In these instances ATA Secure Erase should be used on the whole drive, with tools like hdparm that support it. Disk degaussing / destruction techniques Disk degaussing is a process by which a magnetic field is applied to a digital media device. The result is a device that is entirely clean of any previously stored data. Degaussing is rarely used as an anti-forensic method despite the fact that it is an effective means to ensure data has been wiped. This is attributed to the high cost of degaussing machines, which are difficult for the average consumer to afford. A more commonly used technique to ensure data wiping is the physical destruction of the device. The NIST recommends that "physical destruction can be accomplished using a variety of methods, including disintegration, incineration, pulverizing, shredding and melting." Trail obfuscation The purpose of trail obfuscation is to confuse, disorient, and divert the forensic examination process. Trail obfuscation covers a variety of techniques and tools that include "log cleaners, spoofing, misinformation, backbone hopping, zombied accounts, trojan commands." One of the more widely known trail obfuscation tools is Timestomp (part of the Metasploit Framework). Timestomp gives the user the ability to modify file metadata pertaining to access, creation and modification times/dates. By using programs such as Timestomp, a user can render any number of files useless in a legal setting by directly calling into question the files' credibility. Another well-known trail obfuscation program is Transmogrify (also part of the Metasploit Framework). In most file types the header of the file contains identifying information.
A (.jpg) would have header information that identifies it as a (.jpg), a (.doc) would have information that identifies it as a (.doc), and so on. Transmogrify allows the user to change the header information of a file, so a (.jpg) header could be changed to a (.doc) header. If a forensic examination program or operating system were to conduct a search for images on a machine, it would simply see a (.doc) file and skip over it. Attacks against computer forensics In the past anti-forensic tools have focused on attacking the forensic process by destroying data, hiding data, or altering data usage information. Anti-forensics has recently moved into a new realm where tools and techniques are focused on attacking forensic tools that perform the examinations. These new anti-forensic methods have benefited from a number of factors, including well-documented forensic examination procedures, widely known forensic tool vulnerabilities, and digital forensic examiners' heavy reliance on their tools. During a typical forensic examination, the examiner would create an image of the computer's disks. This keeps the original computer (evidence) from being tainted by forensic tools. Hashes are created by the forensic examination software to verify the integrity of the image. One of the recent anti-tool techniques targets the integrity of the hash that is created to verify the image. By affecting the integrity of the hash, any evidence that is collected during the subsequent investigation can be challenged. Physical To prevent physical access to data while the computer is powered on (from a grab-and-go theft for instance, as well as seizure by law enforcement), there are different solutions that could be implemented: Software frameworks like USBGuard or USBKill implement USB authorization policies and method-of-use policies. If the software is triggered by insertion or removal of USB devices, a specific action can be performed. After the arrest of Silk Road's administrator Ross Ulbricht, a number of proof-of-concept anti-forensic tools were created to detect seizure of the computer from its owner and shut it down, making the data inaccessible if full disk encryption is used. A Kensington Security Slot is present on many devices and can prevent stealing by opportunistic thieves. Use of the chassis intrusion detection feature in a computer case, or of a sensor (such as a photodetector) rigged with explosives for self-destruction. In some jurisdictions this method could be illegal, since it could seriously maim or kill an unauthorized user and could constitute destruction of evidence. The battery could be removed from a laptop so that it works only while attached to the power supply unit; if the cable is removed, the computer shuts down immediately, causing data loss. The same will occur in the event of a power surge, however. Some of these methods rely on shutting the computer down, while the data might be retained in the RAM from a couple of seconds up to a couple of minutes, theoretically allowing for a cold boot attack. Cryogenically freezing the RAM might extend this time even further, and some attacks in the wild have been spotted. Methods to counteract this attack exist and can overwrite the memory before shutting down. Some anti-forensic tools even detect the temperature of the RAM to perform a shutdown when below a certain threshold. Attempts to create a tamper-resistant desktop computer have been made (as of 2020, the ORWL model is one of the best examples).
However, security of this particular model is debated by security researcher and Qubes OS founder Joanna Rutkowska. Effectiveness of anti-forensics Anti-forensic methods rely on several weaknesses in the forensic process including: the human element, dependency on tools, and the physical/logical limitations of computers. By reducing the forensic process's susceptibility to these weaknesses, an examiner can reduce the likelihood of anti-forensic methods successfully impacting an investigation. This may be accomplished by providing increased training for investigators, and corroborating results using multiple tools. See also Cryptographic hash function Data remanence DECAF Degauss Encryption Forensic disk controller Information privacy Keyfile Metadata removal tool Plausible deniability Notes and references External links Counter-Forensic Tools: Analysis and Data Recovery Refereed Proceedings of the 5th Annual Digital Forensic Research Workshop, DFRWS 2005 at DBLP Anti-Forensics Class Little over 3hr of video on the subject of anti-forensic techniques Computer forensics Counter-forensics Cryptography law Encryption debate
48947752
https://en.wikipedia.org/wiki/Sigurdur%20Thordarson
Sigurdur Thordarson
Sigurdur Thordarson (born 1992), commonly known as Siggi hakkari ("Siggi the Hacker"), is an Icelandic convicted criminal and FBI informant against WikiLeaks. He is known for information leaks, multiple cases of fraud and embezzlement, sexual solicitation of minors and adults. In 2010, at the age of 17, he was arrested for stealing and leaking classified information about Icelandic financial companies. After his arrest, he was introduced to Julian Assange, the editor and founder of WikiLeaks, and worked as a volunteer for the organization for several months between 2010 and 2011. In 2011, as several charges were brought against him, in an attempt to escape prosecution for his crimes, Thordarson contacted the FBI and offered to become an informant in return for immunity from all prosecution, turning over numerous internal WikiLeaks documents in the process. WikiLeaks accused him of having embezzled $50,000 from the WikiLeaks online store to which he pleaded guilty along with other economic crimes against other entities. He was also accused of impersonating Julian Assange. He has multiple convictions for sexual offences. In June 2021, in an interview with Icelandic newspaper Stundin, Thordarson admitted to making numerous false accusations against Julian Assange which were used in the American indictment against Assange. Information leaks Thordarson began leaking information about the Icelandic banking system to the media in late 2009. This included information about individuals in the Icelandic banking system, information that showed that individuals were committing illegal acts in relation to banking. One of the leaks by Thordarson concerned a case called "Vafningsmálið." It involved Bjarni Benediktsson during his time as an MP. Bjarni reported that the case was only a political smear campaign. The information published by Icelandic news media obtained from Thordarson also showed that one of the country's biggest football stars, Eiður Guðjohnsen, was deeply indebted and almost bankrupt. After the information was published, Eiður sued the local newspaper DV for publishing this information. DV lost the case in a lower court, but won an appeal to the Supreme Court of Iceland, stating that the information was a matter for the public. Amongst other information that Thordarson admitted to have leaked in an interview with the Rolling Stone magazine was information about local businessman Karl Wernersson. He was the owner of the Milestone ehf that was the investment company from which Thordarson stole most of the information. Other names in the documents leaked by Thordarson included information about Birkir Kristinsson, who had recently been convicted of economic crimes while working for Glitnir bank. Some speculate that information from Thordarson was used as evidence in that case, Thordarson also leaked a classified report about one of the bigger aluminum plants in Iceland. The report stated that the plant was paying 1/4 of what other aluminum plants in the world are paying for electricity. Other information leaked by Thordarson contained information about other local business men such as Gunnar Gunnarsson, who also has been reported to assist football star Cristiano Ronaldo in tax affairs. In 2013, Thordarson argued with Birgitta Jónsdóttir on Twitter over the release of the loanbooks of the Glitnir Bank. Thordarson said she had no involvement, but he claimed that he had given her the files years ago. 
In 2009, Thordarson arrived at the offices of the Special Prosecutor, who investigated the bank collapse in Iceland in 2008. Thordarson reportedly gave them all the information he had on Milestone and other local business men, however instead of using some of the information obtained from Thordarson in investigation the investigators decided to sell the information. The case against the two police officers was later dismissed, and it has been reported that the investigators made roughly 30 Million ISK ($250.000) from the documents. In January 2010, Thordarson was arrested on suspicion of stealing classified information, that case never reached the court system and Thordarson denied his involvement until the Rolling Stone interview. Thordarson was only seventeen years old when he was arrested for leaking the information. WikiLeaks connection Kristinn Hrafnsson, spokesman for WikiLeaks, claims Thordarson was a volunteer for the WikiLeaks organization for a few months between 2010 and 2011 where Thordarson took part in moderating a chat room. Former WikiLeaks employee James Ball has claimed Thordarson reported directly to Assange and served as WikiLeaks' contact with hacker groups. Kristinn says Thordarson's claim of having been a board member or chief of staff at WikiLeaks are false and that he is a pathological liar. Ex black hat hacker, Kevin Poulsen, has stated that Thordarson began working for WikiLeaks as early as February 2010 and was fired in November 2011; he acknowledges that Thordarson is "prone to lying". Thordarson sold T-shirts on an unauthorised WikiLeaks online store, from which he embezzled $50,000, the money being paid into his own bank account and the reason why he was sacked. David Kushner, who interviewed Thordarson for the Rolling Stone, claimed that Thordarson provided Rolling Stone with over 1 terabyte of data (1,000 gigabytes) about WikiLeaks, and said that either Thordarson was the real deal or this was the biggest and most elaborate lie in the digital age. In June 2021, in an interview with Icelandic newspaper Stundin, Thordarson admitted that he had made false accusations against Julian Assange, now claiming, for example, that Assange never instructed him to "hack or access" phone recordings and computers of Icelandic politicians. He acknowledged that he dishonestly claimed to be an "official representative of WikiLeaks" and admitted that he stole documentation from Wikileaks staff by copying their hard drives. He is referred to as 'Teenager' in the indictment against Assange. Over ten days later, The Washington Post reported that the interview with Thordarson had led such individuals as Edward Snowden, who supports WikiLeaks, to say the case against Assange was weakened, but the Post said the indictment was not based on testimony from Thordarson which was being used as information on Assange's contact with Chelsea Manning. Sources such as The Hill, Der Spiegel and Deutsche Welle described Thordarson as being a chief or key witness. FBI connection In August 2011, Thordarson contacted the United States Embassy in Reykjavik and claimed he had information about an ongoing criminal investigation in the United States, and requested a meeting. Thordarson was then summoned to the embassy, where he gave diplomatic staff official documents showing that he was who he claimed to be. The day after the meeting with the embassy official the FBI sent a private jet with eight federal agents and a prosecutor to question Thordarson. 
The FBI gave Icelandic authorities notice that they were questioning Thordarson in relation to an co-investigation that Anonymous and LulzSec were about to infiltrate Icelandic government systems. After the authorities found out Thordarson was being questioned about WikiLeaks, the FBI was asked to leave Iceland. The FBI left the country a few days later but took Thordarson with them to Denmark where questioning continued. Thordarson was subsequently allowed to return to Iceland. In 2012, he met with the federal agents on multiple occasions, and was flown to Copenhagen where Thordarson was provided a room in a luxury hotel. Thordarson was allowed to return to Iceland after every meeting. Thordarson met with the FBI again in Washington D.C. and spent a couple of days with them there. The final meeting that Thordarson said took place with the FBI was during a course Thordarson was enrolled in at Aarhus in Denmark, teaching IT Security. Thordarson met with the agents there and handed over several hard drives. Wired reported Thordarson had received $5,000 for his assistance and said he was on the FBI's payroll. In 2013, Thordarson was also summoned to the General Committee of the Icelandic Parliament after days of being discussed in the Parliament. They questioned Thordarson about his involvement in the FBI case. The then-Minister of the Interior Ögmundur Jónasson said in Parliament that Thordarson was young and the FBI meant him to be a "spy" within the WikiLeaks organization. At the parliament hearing, Thordarson arrived with two bodyguards. "Spy Computer" In January 2011, it was reported in the Icelandic media that a computer had been found within closed sections of the Parliament. It was alleged that WikiLeaks was suspected of placing the computer inside the Parliament. Bjarni Benediktsson the MP Thordarson leaked information about comments found on the computer. On an interview with Icelandic media Stundin, in June 2021, Thordarson said Assange had no involvement whatsoever in the phone recordings of the Parliament. Thordarson was questioned about his involvement in this case. Morgunblaðið, Iceland's largest newspaper published on the front page on 31 January 2011 that a local reporter for the paper DV was suspected of obtaining the information from Thordarson. The reporter was said to be under investigation for receiving the information from Thordarson and manipulating Thordarson into leaking the information and placing the computer inside Parliament. The reporter sued the newspaper for libel and won the case. Morgunblaðið withdrew the report and issued an apology to the reporter on 7 December. There was a report in the Icelandic media that stated that specialists were now checking whether parliament phones were spied on by WikiLeaks. Wired published chat logs that indicated such. This is believed to support the claim that Thordarson is involved with the spy computer somehow. Birgitta Jónsdóttir issued a statement stating that she had never heard of any recordings. The case is still under investigation with no official suspects. Anonymous and LulzSec During his period at WikiLeaks, it has also been reported that Thordarson ordered attacks on Icelandic governmental infrastructures such as the servers hosting the Ministry's websites www.stjornarradid.is and www.landsnet.is. Those DDoS attacks were successful for a few hours. This was all done after an Icelandic business man that owns an Icelandic data center asked Thordarson to do so. 
It has also been reported that Thordarson ordered Hector Monsegur (Sabu) and his team to attack Icelandic State Police servers. This all happened during Sabu's time as an FBI informant. It is reported that Thordarson obtained the unpublished version of a report about the surveillance unit at the U.S Embassy in Reykjavik. Media sources have indicated that persons part of Anonymous and LulzSec reported to Thordarson. This issue was covered partially in the book We are Anonymous. Reports state that Thordarson obtained many leaks through this method that WikiLeaks later published, such as The Kissinger cables and The Syria Files. It is unknown how WikiLeaks or Thordarson obtained the information, though chat logs between Thordarson and Hector Monsegur a.k.a. Sabu have surfaced. Some people also speculate whether the attack on the website of the Central Intelligence Agency was ordered by Thordarson as a test to see whether "Sabu" had really as good skills and people as he claimed, it is believed that communications between "Sabu" and Thordarson escalated after the CIA attack. Convictions and arrests In 2012, Thordarson was questioned about sexual misconduct, accused of deceiving a seventeen-year-old teenage boy. At the time, Thordarson was 18 years old. Thordarson denied the charges but was found guilty in late 2013 and received 8 months in prison. In 2012, WikiLeaks filed criminal charges against Thordarson for embezzlement. Thordarson denied the charges and the case was later dismissed. He was later arrested in the summer of 2013 on charges of financial fraud. At that time, the WikiLeaks case was brought back up, and Thordarson was indicted on charges of embezzlement and financial fraud. In 2014, Thordarson was ordered to pay WikiLeaks 7 million ISK (roughly $55,000) as well as being sentenced to prison for 2 years for embezzlement and financial fraud. Thordarson pled guilty to all counts. In those cases Thordarson was ordered to pay the victims 15 million ISK (roughly $115.000), Thordarson received a two-year prison sentence in those cases. In 2012, Thordarson was arrested for allegedly having tried to blackmail a large Icelandic candy factory, but the case was later dismissed. In January 2014, Thordarson was again arrested for sex crimes. He was believed to be a potential flight risk as well as being likely to sabotage the investigation against him and therefore placed in solitary confinement. Thordarson had said he would offer flight tickets, Land Rovers, and up to a million dollars in exchange for sexual favours. The victims ranged from the age of 15–20, all male, during which Thordarson was 18–21. A psychiatric evaluation ruled that Thordarson was of sound mind, but that he had an antisocial personality disorder and was incapable of feeling remorse for his actions. Thordarson pled guilty to all counts and received 3 years for that. Thordarson was ordered to pay 8.6 million ISK (roughly $66,000) in damages to his victims. In 2014, he was sentenced to pay roughly $236,000 in damages for various economic crimes and frauds, including having swindled fast-food companies, car rentals, electronics shops, and having tricked someone into giving him all his shares in a book publishing company. In September 2015 he was sentenced to three years' imprisonment for having sex with nine underage boys, after confessing to the crime the previous month. The victims were offered payment or some other form of inducement. 
A court ordered criminal forensic psychiatric evaluation diagnosed him with antisocial personality disorder. In September 2021, on the day he returned to Iceland from a trip to Spain, Thordarson was arrested and imprisoned. He is being held indefinitely under a Icelandic law enabling the detention of individuals believed to be active in ongoing crimes. According to Stundin, the cases leading to his arrest involved financial fraud. Media portrayals The film The Fifth Estate (2013), with Benedict Cumberbatch as Assange, features a character based on Thordarson's played by Jamie Blackley. Thordarson is mentioned in Domscheit-Berg's book, during his time with WikiLeaks he reportedly used the handles PenguinX, Singi201 and "Q". Notes References 1992 births Living people People from Reykjavík WikiLeaks Icelandic prisoners and detainees Prisoners and detainees of Iceland People convicted of fraud People convicted of sex crimes
10672963
https://en.wikipedia.org/wiki/University%20of%20Wisconsin%E2%80%93Milwaukee%20College%20of%20Engineering%20and%20Applied%20Science
University of Wisconsin–Milwaukee College of Engineering and Applied Science
The College of Engineering and Applied Science is a college within the University of Wisconsin–Milwaukee. It offers bachelor, master and doctoral degrees in civil engineering, electrical engineering, industrial engineering, materials engineering, mechanical engineering, and computer science. Based on the statistical analysis by H.J. Newton, Professor of Statistics at Texas A&M University in 1997 on the National Research Council report issued in 1995, the school was ranked 73rd nationally in the National Research Council (NRC) rankings, with its Civil Engineering program 69th, Electronic Engineering 96th, Industrial Engineering 34th, Materials science 60th, and Mechanical Engineering 87th. The school ranks 129th nationally by U.S. News & World Report, with its computer science program ranked 110th in 2011. Departments Civil Engineering & Mechanics Computer Science Electrical Engineering Industrial & Manufacturing Engineering Materials Science Mechanical Engineering Research centers Center for Alternative Fuels Research Programs Center for By-Products Utilization Center for Composite Materials Center for Cryptography, Computer, & Network Security Center for Ergonomics Center for Urban Transportation Studies Center for Energy Analysis & Diagnostics Southeastern Wisconsin Energy Technology Research Center Notable people Satya Nadella ('90 MS Computer Science), Microsoft CEO. Michael Dhuey, electrical and computer engineer, co-inventor of the Macintosh II and the iPod. Luther Graef ('61 MS Structural Engineering), Founder of Graef Anhalt Schloemer & Associates Inc., and former President of ASCE. Phil Katz ('84, BS Computer Science), a computer programmer best known as the author of PKZIP. Pradeep Rohatgi, Wisconsin Distinguished Professor of Engineering Cheng Xu, aerodynamic design engineer, American Society of Mechanical Engineers Fellow Y. Austin Chang, material engineering researcher and educator Scott Yanoff ('93 BS Computer Science), Internet pioneer. Alan Kulwicki ('77 BS Mechanical Engineering), 1992 NASCAR Cup Series champion driver and team owner See also Jantar-Mantar References External links College of Engineering and Applied Science University of Wisconsin–Milwaukee University of Wisconsin–Milwaukee Engineering schools and colleges in the United States Engineering universities and colleges in Wisconsin
1047803
https://en.wikipedia.org/wiki/PhreakNIC
PhreakNIC
PhreakNIC is an annual hacker and technology convention held in Nashville, Tennessee. It is organized by the Nashville 2600 Organization and draws upon resources from SouthEastern 2600 (se2600). The Nashville Linux User Group was closely tied with PhreakNIC for the first 10 years, but is no longer an active participant in the planning. First held in 1997, PhreakNIC continues to be a long-time favorite among hackers, security experts and technology enthusiasts. PhreakNIC currently holds claim as the oldest regional hacker con and is one of the few hacker cons run by a 501(c)(3) tax-free charity. The conference attracts about 350 attendees. PhreakNIC consists of presentations on a variety of technical subjects, sometimes related to a conference theme. There is also a film room showing anime and technology-related videos from popular culture. The Nashville Linux Users Group held a Linux Installfest from PhreakNIC 3 through PhreakNIC X. PhreakNIC is attended by hackers and other technology enthusiasts from across the United States, although, as a regional conference, most of its attendees come from a two-state radius around Tennessee, including groups from Missouri, Ohio, Washington, DC, Georgia, Kentucky, and Alabama. As PhreakNIC has been held in downtown Nashville, near the Tennessee Titans stadium, the exact date for PhreakNIC was previously not announced each year until the NFL released its schedule in April. PhreakNIC is intentionally scheduled not to coincide with a Titans home game, to give conference organizers full reign over the venue. However, PhreakNIC is typically held around the time of Halloween. Past locations Drury Inn at I-24 and Harding Place PhreakNIC 2 PhreakNIC 3: October 29–31, 1999 Days Inn on Briley Pkwy at the airport PhreakNIC 4: November 3–5, 2000 PhreakNIC 5: November 2–4, 2001 Ramada Inn downtown PhreakNIC 6: November 1–3, 2002 Days Inn at the Stadium PhreakNIC 7: October 24–26, 2003 PhreakNIC 8: October 22–24, 2004 PhreakNIC 9: October 21–23, 2005 PhreakNIC 10: October 20–22, 2006 PhreakNIC 11: October 19–21, 2007 PhreakNIC 12: October 24–26, 2008 PhreakNIC 13: October 30–31, 2009 PhreakNIC 14: October 15–17, 2010 PhreakNIC 15: November 4–6, 2011 Clarion Inn and Suites, Murfreesboro, TN PhreakNIC 16: November 9–11, 2012 PhreakNIC 17: September 20–22, 2013 Millennium Maxwell House Hotel, Nashville, TN PhreakNIC 18: October 30 - November 2, 2014 Clarion Inn and Suites, Murfreesboro, TN PhreakNIC 19: November 6–7, 2015 PhreakNIC 20: November 4–6, 2016 PhreakNIC 21: November 3-5, 2017 See also DEF CON Hackers on Planet Earth Notacon Shmoocon CarolinaCon References External links PhreakNIC website Nashville 2600 Nashville Linux Users Group SouthEastern 2600 PhreakNIC Video Archives Official PhreakNIC YouTube Channel Hacker conventions
7678199
https://en.wikipedia.org/wiki/Georges%20Painvin
Georges Painvin
Georges Jean Painvin (; 28 January 1886 – 21 January 1980) was a French geologist and industrialist, best known as the cryptanalyst who broke the ADFGX/ADFGVX cipher used by the Germans during the First World War. Early life Painvin was born into a family of graduates from the École polytechnique and mathematicians from Nantes. In addition to his remarkable scientific education, the young Painvin was also a keen cello player, where in 1902 he was awarded First prize for cello at the Nantes Conservatory of Music. In 1905, Painvin passed his matriculation exam into the École polytechnique. In his second year, he opted for admission to the Corps des mines where he would make his profession. However, French military service would briefly take him away from this fulfilment. On 7 September 1907, Painvin was appointed reserve second lieutenant and assigned to the 33rd Artillery Regiment to attend his third year on obligatory military service. In 1909 and again in 1911, he attended only short periods of military service lasting a few days. It was not until 1908 that Painvin entered the École Nationale Supérieure des Mines for three-years study, where he would be ranked 4th of the 6 students in his class. On completion Painvin graduated to engineer. In 1911, Painvin became professor of palaeontology at the Ecole des Mines de Saint-Étienne and from 1913 at the École des mines de Paris. On 1 September 1911, Painvin was promoted further in his military service to lieutenant and reassigned to the 53rd Artillery Regiment the following year. In October 1913, Painvin also completed a probationary period at the École supérieure de guerre (French Army War College), which resulted in Painvin being assigned to the staff service on 6 April 1914. Painvin's teaching career would unfortunately be interrupted by the onset of the First World War. When the conflict broke out, Painvin was naturally recalled into the French army. Initial cryptanalysis Painvin was assigned to the staff of General Maunoury's 6th Army, with whom he served as an orderly officer. Under General Maunoury, Painvin participated in the Battle of Ourcq in particular. However, Painvin's position gave him relative freedom to allow him to be interested in cryptology and ciphers. On befriending a Captain Paulier of the French army, who introduced Painvin to telegram and communication systems, Painvin would later perform cryptanalysis for the French war effort. Painvin had no training in cryptology but showed considerable passion for these "ciphers". Painvin asked that he be given intercepted cryptograms transmitted by the invading Imperial Germany. It did not take long before Painvin made himself known in the field of cryptanalysis. He was assigned to the "Cabinet noir", the French black room which he would occupy until the end of the War. The encrypted telegram messages would consist of both military and diplomatic communications, some transmitted as far as between Berlin and Constantinople. There, he concentrated on the ciphers of the Imperial German Navy, then of the Austro-Hungarian Navy, which until his joining had remained entirely incomprehensible. He managed to break the ciphers, allowing a more efficient hunt for German submarines (U-boats). On 21 January 1915, Painvin proposed a method, the ARC system, which made it possible to discover the cryptographic key used for the encryption and this with a single text. The German troops used several cipher systems, but this did not discourage Painvin, on the contrary. 
Accompanied by a Colonel Olivari, Painvin set upon attacking the triliteral ABC cipher. After two weeks of work, the two cryptanalysts managed to reconstruct the encrypted messages despite having false messages voluntarily sent by the Germans. One path of encrypted diplomatic communications in particular, led to the unravelling of the spy Mata-Hari; during the first months of the war, Painvin's work made it possible to quickly follow the evolution of this enemy figure. In 1917, the Germans introduced the KRU field cipher. More complex with one cryptographic key per army unit, it would nevertheless be the subject of a meticulous analysis on the part of Painvin and a Captain Guitard. The "Radiogram of Victory" During the spring of 1918, Paris was constantly being bombarded by German Gotha G.IV bomber aircraft and heavy artillery. The French were unable to crack the newly introduced ADFGX cipher (designated by the German Imperial Army as "Geheimschrift der Funker 1918", in short: GedeFu 18) being used by the Germans from 1 March 1918 and thus could not predict their attacks. On 5 April 1918, shortly after the Germans launched their Spring Offensive, Painvin discovered two cryptographic keys used for the new ADFGX cipher and was able to decipher the new German cipher system. He relied on it for messages dated from 1 April. In June 1918, the German Imperial Army was preparing for a final push on the Western Front to cover the 100 kilometres that separated it from Paris. The Allies needed to know where the German attack would come. But, at this worst stage of the War, the German cipher system had become more complex from 30 May, by adding the letter "V" (ADFGVX cipher) to the earlier ADFGX cipher method. On 1 June 1918, the French listening station on top of the Eiffel Tower intercepted a German radio message for the first time, which not only contained the letters A, D, F, G and X, but also the letter V. The radio message came from the German army outposts in the region of Remaugies, north of Compiègne, and read: FGAXA XAXFF FAFVA AVDFA GAXFX FAFAG DXGGX AGXFD XGAGX GAXGX AGXVF VXXAG XDDAX GGAAF DGGAF FXGGX XDFAX GXAXV AGXGG DFAGG GXVAX VFXGV FFGGA XDGAX FDVGG A Painvin recognized this and correctly concluded that the Germans had expanded the Polybius square from 5×5 to 6×6 and were now able to encode a total of 36 characters instead of the previous 25 letters. He also correctly suspected that the 26 letters of the alphabet plus the 10 digits (0 to 9) were used and based his cryptanalysis on this assumption. After some 26 hours of intensive work, until he was physically exhausted, he succeeded in reconstructing the grid and permutation used for the encryption and was able to decipher the intercepted message on 2 June 1918. The authentic plaintext message read in German: "Munitionierung beschleunigen Punkt Soweit nicht eingesehen auch bei Tag" Translated into English: "Speed up supply of ammunition. If not seen also during the day". The message was immediately forwarded to Marshal Ferdinand Foch's French headquarters and convinced him that the Germans were planning a massive attack in the section of the front at Compiègne. Foch concentrated his last reserve troops around this city, which meant that the German attack that took place here shortly afterwards could be repulsed. Breaking the German ADFGVX cipher took its toll on Painvin's physical and mental health and shortly after the message was delivered, he collapsed, exhausted by all his efforts. 
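The mechanics of the cipher Painvin attacked can be sketched briefly. ADFGVX combines a substitution step, in which a 6×6 Polybius square maps each of the 36 characters (26 letters and 10 digits) to a pair of the coordinate letters A, D, F, G, V, X, with a columnar transposition of the resulting letter stream under a keyword. The Python sketch below is purely illustrative: the mixed square and the keyword MARS are invented for the example (the historical keys changed frequently and are not reproduced here), and the output is read off column by column rather than regrouped into the five-letter groups used for transmission.

COORDS = "ADFGVX"
# Illustrative 6x6 square: the 26 letters and 10 digits in a scrambled order.
SQUARE = "ph0qg64mea1yl2nofdxkr3cvs5zw7bj9uti8"

def substitute(plaintext):
    # Replace each character by its (row, column) coordinate pair.
    out = []
    for ch in plaintext.lower():
        if ch in SQUARE:                  # spaces and punctuation are dropped
            idx = SQUARE.index(ch)
            out.append(COORDS[idx // 6] + COORDS[idx % 6])
    return "".join(out)

def transpose(stream, key):
    # Write the stream row-wise under the key, then read the columns
    # in the alphabetical order of the key letters.
    columns = [[] for _ in key]
    for i, ch in enumerate(stream):
        columns[i % len(key)].append(ch)
    order = sorted(range(len(key)), key=lambda i: key[i])
    return " ".join("".join(columns[i]) for i in order)

def adfgvx_encrypt(plaintext, key):
    return transpose(substitute(plaintext), key)

print(adfgvx_encrypt("munitionierung beschleunigen", "MARS"))

Recovering the plaintext without the key requires undoing the transposition and then the fractionated substitution, which is what made Painvin's reconstruction of the grid and permutation so laborious.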
In the aftermath of the Armistice, exhausted by these years of physical and mental effort, Painvin was forced into a long convalescence. On the French side, the German radio message has since been referred to as "Le Radiogramme de la Victoire". For his painstaking efforts and determination, Painvin was made a Knight of the Legion of Honour in a military capacity on 10 July 1918. He was, however, unable to disclose or discuss his wartime accomplishments for much of his later life, because the activities of a number of French government services were kept under military secrecy, hidden from the general public until 1962. In December 1962, Painvin's contribution to the war effort in the field of code decryption was described by French General Desfemmes. On 19 December 1973, Painvin was elevated to the rank of Grand Officer of the Legion of Honour. The inventor of the ADFGX/ADFGVX cipher, the German signal corps officer Lieutenant Fritz Nebel, did not learn of Painvin's achievement until long after the war. In 1966, nearly fifty years later, Nebel learned that his system had been broken during World War I, and said that he had originally proposed a double columnar transposition as the second stage of his method. However, his proposal was rejected in discussions by his superiors, who for practical reasons decided in favor of a (cryptographically significantly weaker) simple columnar transposition. Two years later, in 1968, Nebel and Painvin met in person, and Nebel expressed his feelings by saying that the enemies of yesterday meet as the friends of today. Painvin emphasized that if the cipher had been implemented as Nebel suggested, he certainly would not have been able to break it. The American cryptologist Herbert Yardley also wrote of Painvin in The American Black Chamber. After 1918 After the War, Painvin resumed and continued his teaching activities part-time during the interwar period. He was also chairman of several companies, and participated in the strong growth of the Electrochemistry, Electrometallurgy and Electric Arc Furnace Steelworks of Ugine during the 1920s, of which he was appointed director general in 1922. The company mobilised new methods of electrochemistry to produce the first stainless steels at affordable prices on a large scale, helped by the French inventor and industrialist René Marie Victor Perrin (1893-1966), who developed the Ugine-Perrin process. The company would still be at the cutting edge of technology 40 years later with the inauguration of the giant Fos-sur-Mer steel plant near the Rhône. In addition to the Ugine steelworks, Painvin chaired Crédit Commercial de France from 1941 to 1944. From 1934, he also contributed to the reorganization of the Paris stock exchange, which he presided over from 1940. He was also chairman of the chemical industries organizing committee, as well as of the Paris Chamber of Commerce (from January 1944). Several articles have studied Painvin's activity during the German military occupation of France (1940-1944). Painvin was considered "a large-scale industrialist, who works very sincerely and very honestly with the German services"; and, "in the minds of many people, Mr. Painvin was regarded as pro-regime".
Facing two resignation injunctions before the court of justice of the Seine and the Comité national interprofessionnel d'épuration (CNIE, the National Interprofessional Purification Committee), bodies that pursued acts of collaboration by French civilians during the German occupation of France, Painvin resigned as president and administrator of the Ugine steelworks on 12 December 1945. In the aftermath of World War II, Painvin decided to step back and give up most of his positions. In 1948, he moved to Casablanca, where in 1950 he was entrusted with the presidency of the industrial, financial and services conglomerate Omnium Nord-Africain; he was also delegate president of the Société Chérifienne d'Exploitation d'Ouvrages Maritimes and of the Société Chérifienne du plâtre, and a member of the Casablanca Chamber of Commerce and Industry. Painvin retired in 1962 and returned to France at the age of 76; he died in 1980 at the age of 93. Literature "The Codebreakers", The Annals of Mines: Georges Jean PAINVIN (1886-1980) (in French). References and notes École Polytechnique alumni Mines ParisTech alumni Corps des mines Pre-computer cryptographers French cryptographers 1886 births 1980 deaths Military history of France
29204532
https://en.wikipedia.org/wiki/TouchWiz
TouchWiz
TouchWiz is a discontinued user interface developed by Samsung Electronics Co., Ltd with partners, featuring a full touch user interface. It is sometimes incorrectly referred to as an operating system. TouchWiz was used internally by Samsung for smartphones, feature phones and tablet computers, and was not available for licensing by external parties. The Android version of TouchWiz also comes with the Samsung-made app store Galaxy Apps (Galaxy Store). It was replaced by Samsung Experience in 2017 with the release of Android 7.1.1 "Nougat". History Overview The first, original edition of TouchWiz was released in 2008. This 1.0 version was officially launched with the original Samsung Solstice in 2009. The latest versions of TouchWiz, namely TouchWiz 6.0 on the Samsung Galaxy J1 mini Prime and TouchWiz Nature UX 5.0 on the Samsung Galaxy J3 (2016), feature a more refined user interface compared to the previous versions found on Samsung's older phones released prior to the Galaxy S5. The status bar became transparent on the home screen in TouchWiz Nature UX 2.0 and TouchWiz Nature UX 2.5. In TouchWiz 4.0 on the Galaxy S II and the Galaxy Note (both later updated to Nature UX), the added features include panning and tilt, which make use of the accelerometer and gyroscope in the phone to detect motion. TouchWiz was used by Samsung's own proprietary operating systems, Bada and REX, as well as by phones based on the Android operating system prior to Android Nougat. It is also present in phones running the Tizen operating system. TouchWiz was abandoned by Samsung in late 2016 in favor of Samsung Experience. TouchWiz was a central issue in the legal battle between Apple and Samsung. TouchWiz 1.0 This was the original edition of TouchWiz, released in 2008, with a pre-introduction (trial) on the SGH-F480. This version was officially launched with the original Samsung Solstice in 2009. Various versions of TouchWiz 1.0, with different features, were used on the Solstice's siblings such as the Samsung Eternity, Impression, Impact and Highlight. TouchWiz 2.0 This was the second edition of TouchWiz, released in 2009. This version premiered with the Samsung Solstice 2 in 2010. TouchWiz 3.0 Released in 2010 to support Android Eclair (2.1) and Android Froyo (2.2), this version premiered with the Samsung Galaxy S. A lite version of TouchWiz 3.0, with reduced features, was used on the Samsung Galaxy Proclaim. TouchWiz 4.0 TouchWiz 4.0 was released in 2011 to support Android Gingerbread and Android Honeycomb (2.3 - 3.2.6). The Galaxy S II was the first device preloaded with TouchWiz 4.0. This version includes better hardware acceleration than 3.0, as well as multiple touchscreen options involving multi-touch gestures and the phone's accelerometer. One such feature allows users to place two fingers on the screen and tilt the device towards and away from themselves to zoom in and out, respectively. "Panning" on TouchWiz 4.0 allows users to scroll through home screens by moving the device from side to side. TouchWiz Nature UX The next version of TouchWiz was renamed TouchWiz Nature UX. It was released in 2012 and supported Android Ice Cream Sandwich (4.0). The Galaxy S III, Galaxy Star and Galaxy Note 10.1 were the first devices preloaded with this version, although a "lite" version was used beforehand on the Samsung Galaxy Tab 2 7.0. The 2013 Galaxy S II "Plus" variant featured this user interface as well.
TouchWiz Nature UX contains more interactive elements than previous versions, such as a water ripple effect on the lock screen, and "smart stay", a feature which uses eye-tracking technology to determine whether the user is still watching the screen. Users are able to set custom vibration patterns for phone calls and notifications. The keyboard software is equipped with a clipboard manager. A pinch-to-zoom magnification feature and a picture-in-picture mode ("pop-up play") have been added to the preloaded video player, as well as panning and zooming motion gestures in the gallery software. To complement the TouchWiz interface, and as a response to Apple's Siri, this version introduced S Voice, Samsung's intelligent personal assistant. The colour palette of the user interface was adapted to the colours of nature, prominently green (plants, forest, grass) and blue (ocean, sky), to visually represent the slogan "Inspired by Nature". This version of TouchWiz also made extensive use of colour gradients. Criticism has been aimed at the inability of the home screen and lock screen to work in horizontal display orientation. Premium suite upgrade With the Android 4.1 update (pre-installed on the Galaxy Note II), Samsung delivered a "premium suite upgrade", whose improvements include a split-screen mode, making this the first mobile user interface to run more than one application on screen simultaneously. Other additions and accessibility improvements are "easy mode", a simplified home screen option with larger icons; "smart rotation", where the screen rotates according to the orientation of the user's face as detected through the front camera; a low-light shot mode; the ability to adjust the volume of each side of the headphones separately; a "reader mode" for Samsung Internet (formerly known as "S Browser") with adjustable font size; "Page Buddy", which can detect the user's intended action, such as opening the music player when earphones are plugged in; and the ability to read the Facebook news feed directly from the lock screen, well before general lock screen notifications were introduced with Android 5.0 "Lollipop". TouchWiz Nature UX 2.0 This version supports Android Jelly Bean (4.2.2) and was released in 2013; the Samsung Galaxy S4 was the first device to use TouchWiz Nature UX 2.0. Even more eye-tracking abilities were introduced with this version, such as "smart scroll", which allows users to scroll down and up on webpages by tilting their head downwards and upwards, respectively. The on-screen photo, video and mode buttons now have a metallic texture, and the photo and video recording modes are combined into one viewfinder page rather than separated by switching modes. The audio setting (mute / vibrate / sound) is also accessible from the device options menu (power off / restart / data network mode / flight mode), which is opened by holding the power button. With Android 4.4 KitKat, which rolled out to Samsung Galaxy devices in February 2014, the colours of the green battery icon, as well as the green (upstream) and orange (downstream) indicator arrows in the top status bar, were changed to grey. TouchWiz Nature UX 2.5 TouchWiz Nature UX 2.5 was released in 2013 to support the last updates to Android Jelly Bean (4.1 - 4.3), and was first used on the Galaxy Note 3 and the Galaxy Note 10.1 2014 Edition. This version completely supports the Samsung Knox security solution, as well as multi-user capabilities.
The camera was also improved in this update: shutter lag was reduced, and features like a 360° panorama mode were added. The settings menu is equipped with a new search feature. It is the first mobile user interface to feature windowing. To the existing split-screen feature, the ability to drag and drop items between the two windows and to open select apps twice was added. A vertical one-handed operation mode was added for the Galaxy Note 3. It can be accessed with a swiping gesture on either side of the screen and allows the simulated size of the screen to be shrunk variably. It is also equipped with on-screen navigation (options, home, back) and volume (up, down) keys. TouchWiz Nature UX 3.0 This update was released in 2014 to support Android KitKat (4.4). It was first seen on the Galaxy S5, Galaxy K Zoom and the Galaxy Note Pro 12.2, and later appeared on the Galaxy Alpha. The home screen and settings menu were made more user-friendly with larger icons and less clutter. Icons in the context menus were also removed for minimalism, and disabled options, which previously would have been visible but unusable (grayed out), now do not show up at all. The one-handed operation mode available on the Galaxy S5 allows shortcuts for apps and contacts to be set. A floating menu with user-specified app shortcuts has also been added. The new colour palette uses oceanic colours (e.g. #006578, #00151C, #004754, #009ba1, #005060, #0034da and #001d27) to reference the Galaxy S5's water resistance. Some budget devices, such as the Samsung Galaxy Trend 2 Lite, Galaxy J1 Ace, Galaxy V Plus, Galaxy Grand Neo Plus and Galaxy Tab E, feature a reduced version of TouchWiz Nature UX 3.0 called "TouchWiz Essence UX", which is adapted for devices with less than 1 GB of RAM. This version has an ultra-power-saving mode, which drastically extends battery life by making the screen grayscale, restricting the apps that can be used, and turning off features like Wi-Fi and Bluetooth. The redesigned settings menu has flat icons instead of the previously used coloured clip art, and is equipped with three distinct viewing modes: Grid view, List view and Tab view. Another minor distinction is that the boot screen shows the Android trademark, but no longer the model number of the device (e.g. SM-G900F, SM-G901F). TouchWiz Nature UX 3.5 This is a slightly modified version of TouchWiz Nature UX 3.0, released in 2014 for the Galaxy Note 4, Galaxy Note Edge and the Galaxy A series (2015). Most of the changes made were minor, aesthetic ones, including an overhaul of the cluttered settings menu, the inclusion of quick-setting shortcuts and the centring of the lock screen clock. However, the camera application was stripped down to its most basic features, removing functions such as the Wi-Fi Direct-powered remote viewfinder, while nonetheless gaining an AF/AE lock feature accessible by tapping and holding in the camera viewfinder. The "grid view" settings menu viewing mode introduced in TouchWiz Nature UX 3.0 has been removed. Menus and system applications use white instead of dark backgrounds. The recent-tasks switcher has changed into a vertically scrollable list with overlapping thumbnails, whereas previously a flat list with non-overlapping thumbnails and orientation-based scrolling direction had been used. The ability to use shortcuts for apps and contacts in the one-handed operation mode has been removed. TouchWiz Nature UX 4.0 This version supports Android Lollipop and was released in 2015, arriving with the Galaxy S5's Lollipop update.
Update 4.0 eventually became available to the Galaxy S4 and Galaxy Note 3 (2013), the Galaxy S5 and Note 4 (2014), and other Lollipop-compatible devices, but with fewer features. This version of TouchWiz continued the design initially seen on the Galaxy S5, with slightly more rounded icons, but also incorporated Lollipop's additions and changes, such as making the notification drop-down menu merely an overlay instead of a full-screen drawer and colouring it neon blue. TouchWiz Nature UX 4.0 also included a visual overhaul for the whole system, changing the black background in system apps to a white theme, similar to TouchWiz Nature UX 3.5 seen on the Note 4 and A series. The black theme had been in place since the original Galaxy S because it reduced battery consumption, as Samsung mainly uses AMOLED display technology. It was changed because of a patent licensing deal with Google, which required that the TouchWiz interface follow the design of "stock" Android more closely. The size of the current video file and the remaining storage space are no longer indicated in the camera software viewfinder during video recording, and the audio controls (silent, vibration only, on) have been removed from the power menu. TouchWiz 5.0 With TouchWiz 5.0, Samsung reverted to the earlier, simpler naming system, without the "Nature UX" infix, to reflect aesthetic changes. This version was released in 2015 primarily for the Samsung Galaxy S6, and supports later updates to Android Lollipop (5.0.2 - 5.1.1). This update cleaned up the user interface, reduced the number of duplicate functions, and used brighter and simpler colours with icon shadows. Many icons in top bars have been replaced with uppercase text labels. A new version of TouchWiz 5.0 was released in September 2015 for the Galaxy Note 5 and S6 Edge+. The new version features updated iconography, with stock apps now featuring "squircle" icons instead of freeform ones, and a re-added single-handed operation mode that had been removed on the Galaxy S6 after being available on the S5. However, it is opened with a triple press of the home button instead of a swiping gesture, lacks on-screen keys (navigation, volume controls, app/contact shortcuts) and only has one fixed size. The camera user interface has undergone several changes as well. Shortcuts on the left pane are no longer customizable, and the settings are on a separate page rather than on top of an active viewfinder. Mid-range and entry-level devices feature a version named "TouchWiz Essence 2.0". It is similar to TouchWiz 5.0, but icon shadows are not included, with or without theme support. It is comparatively lighter and faster, and is featured on devices such as the 2016 A series, Galaxy A8, Galaxy S5 Neo, tablets and the J series devices which run Android Lollipop. It also does not feature the "Sparkling Bubbles" lock screen effect exclusive to the Galaxy S6 devices and the Note 5. Galaxy S6 user reports suggest that the step-by-step frame navigation and still-frame image extraction features have been removed from Samsung's preloaded video player software. TouchWiz 6.0 This version of TouchWiz began during initial beta testing of Android Marshmallow on the Galaxy S6 in December 2015, for users who had signed up for the beta program, and became formally available in February 2016. It features a redesigned notification drop-down and a colour overhaul, replacing the original blue and green hues with white.
This version also removed the weather display from the lock screen while centring and enlarging the clock, and brought back the ability to customize the shortcuts on the lock screen. Icons are slightly modified with a flatter look, removing the shadows featured previously. The Smart Manager was removed as a standalone app and was moved into a settings option instead. In this version, Samsung also added accessibility options such as "Show button shapes", where buttons are outlined with a visible border and a shaded background to increase contrast from the background, and the ability to change the display density setting, although the latter was initially only accessible through a third-party app because the setting was hidden by the system. An update to the Galaxy S7 and S7 Edge made it official, allowing users to change it under the display settings. TouchWiz 6.0 also includes Google's additions to Android: Doze and App Standby to improve battery performance (although Samsung's own app optimization feature remains available, meaning there are two separate "app optimization" settings: one within the Smart Manager app, and the other within the battery usage screen), Now on Tap to quickly access the intelligent personal assistant Google Now, and Permission Control to limit the permissions granted to a particular application. TouchWiz Grace UX First released with the Samsung Galaxy Note 7 for Android Marshmallow, the Grace UX was named after the device's codename, and eventually made its way to older devices, including the Galaxy Note 5 through an update, and the Galaxy S7 and S7 Edge through the official Android Nougat update. The Grace UX features a cleaner, flatter look to its iconography and extensive use of white space. TouchWiz Grace UX devices also benefit from the Secure Folder functionality, which enables users to keep certain data, and even apps, behind a secure password. In addition, for most countries, all the languages that were absent from previous versions (Android Marshmallow or earlier) became available in this release, starting with the Galaxy Tab S3.
Devices running Samsung TouchWiz Proprietary Samsung Champ Samsung Impact/Highlight Samsung Jet Samsung Preston Samsung Solstice Samsung Corby Samsung Star Samsung Tocco Samsung Ultra Touch Samsung Blue Earth Samsung Monte Bada Samsung Wave 575 (TouchWiz 3.0) Samsung Wave S8500 Samsung Wave II S8530 Samsung Wave 723 (TouchWiz 3.0) Samsung Wave 525 (TouchWiz 3.0) Samsung Wave Y Samsung Wave 3 (TouchWiz 4.0) Windows Mobile Samsung Omnia II Symbian Samsung i8910 Tizen Samsung Z1 Samsung Z3 Android Cameras Samsung Galaxy Camera (TouchWiz Nature UX) Samsung Galaxy NX (TouchWiz Nature UX 2.0) Samsung Galaxy Camera 2 (TouchWiz Nature UX 2.5) Smartphones Samsung Behold II Samsung Illusion SCH-I110 (TouchWiz 3.0) Samsung Infuse 4G (TouchWiz 3.0) Samsung Rugby Smart (TouchWiz 3.0) Samsung Droid Charge (TouchWiz 4.0, Upgradable to TouchWiz Nature UX) Samsung Galaxy Chat (TouchWiz Nature UX) Samsung Galaxy Gio (TouchWiz 3.0) Samsung Galaxy Fit (TouchWiz 3.0) Samsung Galaxy Mini (TouchWiz 3.0) Samsung Galaxy Mini 2 (TouchWiz 4.0, Upgradable to TouchWiz Nature UX) Samsung Galaxy 3 GT-i5800 (TouchWiz 3.0) Samsung Galaxy 5 GT-i5500 (TouchWiz 3.0) Samsung Captivate Glide (TouchWiz 4.0) Samsung Gravity Smart (TouchWiz 3.0) Samsung Exhibit II 4G (TouchWiz 4.0) Samsung Galaxy Y (TouchWiz 3.0, Upgradable to TouchWiz 4.0) Samsung Galaxy W (TouchWiz 4.0) Samsung Galaxy Light (TouchWiz 4.0) Samsung Galaxy J1 (2016) (TouchWiz Nature UX 5.0) Samsung Galaxy Ace (TouchWiz 3.0) Samsung Galaxy Ace Plus (TouchWiz 4.0) Samsung Galaxy Ace 2 (TouchWiz 4.0, Upgradable to TouchWiz Nature UX) Samsung Galaxy Ace 3 (TouchWiz Nature UX 2.0, Upgradable to TouchWiz Nature UX 2.5) Samsung Galaxy Ace Style/Ace 4/Ace 4 Neo (TouchWiz Essence UX) Samsung Galaxy Express (TouchWiz Nature UX) Samsung Galaxy Grand (TouchWiz Nature UX, Upgradable to Nature UX 2.0) Samsung Galaxy Grand Prime (TouchWiz Essence UX) Samsung Galaxy K Zoom (TouchWiz Nature UX 3.0) Samsung Galaxy Premier (TouchWiz Nature UX) Samsung Galaxy Prevail 2 (TouchWiz Nature UX) Samsung Galaxy Pro (TouchWiz 3.0) Samsung Galaxy Proclaim (TouchWiz 3.0 Lite) Samsung Galaxy Pocket 2 (TouchWiz Essence UX) Samsung Galaxy Pocket Neo (TouchWiz Nature UX 2.0) Samsung Galaxy Young (TouchWiz Nature UX) Samsung Galaxy Young 2 (TouchWiz Essence UX) Samsung Galaxy Fame (TouchWiz Nature UX) Samsung Galaxy S (TouchWiz 3.0, Upgradable to TouchWiz 4.0) Samsung Galaxy S Blaze 4G (TouchWiz 4.0) Samsung Galaxy S Duos (TouchWiz Nature UX) Samsung Galaxy S Duos 2 (TouchWiz Nature UX 2.0) Samsung Galaxy SL (TouchWiz 3.0, Upgradable to TouchWiz 4.0) Samsung Galaxy S Plus (TouchWiz 3.0, Upgradable to TouchWiz 4.0) Samsung Galaxy S Advance (TouchWiz 4.0, Upgradable to TouchWiz Nature UX 2.0) Samsung Galaxy S II (TouchWiz 4.0, Upgradable to TouchWiz Nature UX (4.1.2)) S II Skyrocket (TouchWiz 4.0, Upgradable to TouchWiz Nature UX) Samsung Galaxy S II Plus (TouchWiz Nature UX, Upgradable to TouchWiz Nature UX 2.0) Samsung Galaxy S III/S III Mini/Samsung Galaxy S III Neo (TouchWiz Nature UX, Upgradable to TouchWiz Nature UX 2.5 (4.4.2 - S III/Neo), TouchWiz Nature UX 2.0 (S III Mini VE)) Samsung Galaxy S4/Samsung Galaxy S4 Value Edition/S4 Active i9295/S4 Mini i919X/S4 Zoom/S4 LTE-A (GT-i9506) (TouchWiz Nature UX 2.0, Upgradable to TouchWiz Nature UX 4.0) Samsung Galaxy S5/S5 Mini (TouchWiz Nature UX 3.0, Upgradable to TouchWiz Nature UX 5.0 (S5 Mini), TouchWiz 6.0 (Galaxy S5/S5 Mini)) Samsung Galaxy S6/S6 Edge (TouchWiz Nature UX 5.0 (5.0.2), upgradeable to Grace UX (7.0)) Samsung Galaxy Star 
S5282/S5280 (TouchWiz Nature UX) Samsung Galaxy J1 Ace Neo (TouchWiz Nature UX 5.0) Samsung Galaxy Star 2/Star 2 Plus (TouchWiz Essence UX) Samsung Galaxy Win (TouchWiz Nature UX 2.0) Samsung Galaxy Core (TouchWiz Nature UX 2.0, Upgradable to TouchWiz Nature UX 2.5) Samsung Galaxy Core 2 (TouchWiz Essence UX) Samsung Galaxy J1 (TouchWiz Essence UX) Samsung Galaxy J7 (TouchWiz Nature UX 5.0 (5.1.1), upgradeable to TouchWiz 6.0 (6.0.1)) Samsung Galaxy J1 Ace (TouchWiz Essence UX (4.4.4)) Samsung Galaxy Trend Plus (TouchWiz Nature UX 2.0) Samsung Galaxy S Duos 3/VE (TouchWiz Essence UX (4.4.4)) Samsung Exhibit 4G (TouchWiz 3.0) Samsung Galaxy Alpha (Touchwiz Nature UX 3.0 (4.4.4), Upgradable to TouchWiz Nature UX 3.5 (4.4.4) and 5.0 (5.0.2)) Samsung Galaxy A3/Samsung Galaxy A5/Samsung Galaxy A7 (TouchWiz Nature UX 3.5 (4.4.4), Upgradable to TouchWiz Nature UX 5.0 (5.0.2) & TouchWiz 6.0 (6.0.1)) Samsung Galaxy A3 (2016)/Samsung Galaxy A5 (2016)/Samsung Galaxy A7 (2016)/Samsung Galaxy A9 (2016) (TouchWiz Nature UX 5.0, upgradeable to Touchwiz 6.0 & Grace UX (7.0)) Samsung Galaxy J1 Ace VE (TouchWiz Nature UX 5.0) Samsung Galaxy A9 Pro (2016) (TouchWiz 6.0 (6.0.1)) Samsung Galaxy J1 Mini (TouchWiz Nature UX 5.0 (5.1.1)) Samsung Galaxy J2 (2016) (TouchWiz 6.0 (6.0.1)) Samsung Galaxy A8 (TouchWiz Nature UX 5.0 (5.1.1), Upgradeable to TouchWiz 6.0 (6.0.1)) Samsung Galaxy E5/Samsung Galaxy E7 (TouchWiz Nature UX 3.5 (4.4.4), Upgradable to TouchWiz Nature UX 5.0 (5.1.1)) Samsung Galaxy J2 Pro (TouchWiz 6.0 (6.0.1)) Samsung Galaxy V Plus (TouchWiz Essence UX (4.4.4)) Samsung Galaxy J2 (TouchWiz Nature UX 5.0 (5.1.1)) Samsung Galaxy Active Neo (TouchWiz Nature UX 5.0 (5.1.1)) Samsung Galaxy J5 (TouchWiz Nature UX 5.0 (5.1.1), Upgradable to TouchWiz 6.0 (6.0.1)) Samsung Galaxy Trend 2 Lite (TouchWiz Essence UX (4.4.4)) Samsung Galaxy On5/On7 (Touchwiz Nature UX 5.0 (5.1.1)) Samsung Galaxy S7/S7 Edge (TouchWiz 6.0 (6.0.1), upgradable to TouchWiz Grace UX (7.0)) Samsung Galaxy A8 (2016) (TouchWiz Grace UX (6.0.1)) Samsung Galaxy A3 (2017)/Samsung Galaxy A5 (2017)/Samsung Galaxy A7 (2017) (TouchWiz Grace UX (6.0.1)) Samsung Galaxy J1 mini Prime (TouchWiz 6.0 (6.0.1)) Samsung Galaxy J2 Prime (TouchWiz Grace UX (6.0.1)) Phablets Samsung Galaxy Grand 2 (TouchWiz Nature UX 2.5 (4.3/4.4.2)) Samsung Galaxy Mega 5.8 i9150 (TouchWiz Nature UX 2.0 (4.2.2), Upgradable to TouchWiz Nature UX 2.5 (4.4.2)) Samsung Galaxy Mega 6.3 GT-i9200/i9205 (TouchWiz Nature UX 2.0 (4.2.2), Upgradable to TouchWiz Nature UX 2.5 (4.4.2)) Samsung Galaxy Mega 2 (TouchWiz Nature UX 3.0 (4.4.4)) Samsung Galaxy Note (TouchWiz 4.0 (2.3.3), Upgradable to TouchWiz Nature UX (4.1.2)) Samsung Galaxy Note II SM-N7100 and SM-N7105/N7106 (TouchWiz Nature UX (4.1.1), Upgradable to TouchWiz Nature UX 2.5 (4.4.2)) Samsung Galaxy Note 3 (TouchWiz Nature UX 2.5 (4.3/4.4.2), Upgradable to TouchWiz Nature UX 3.5 (5.0)) Samsung Galaxy Note 4 (TouchWiz Nature UX 3.0 (4.4.4), Upgradable to TouchWiz Nature UX 3.5 (5.0.1) and TouchWiz Nature UX 5.0 (5.1) TouchWiz 6.0 (6.0.1)) Samsung Galaxy Note 5 (TouchWiz Nature UX 5.0 (5.1.1), Upgradable to TouchWiz Grace UX (6.0.1)) Samsung Galaxy Note 7 (TouchWiz Grace UX (6.0.1)) Samsung Galaxy Round (TouchWiz Nature UX 2.5 (4.3)) Samsung Galaxy S6 Edge+ (TouchWiz Nature UX 5.0 (5.1.1), upgradeable to TouchWiz Nature UX 6.0 (6.0.1)) Tablets Samsung Galaxy Tab 7.0 (TouchWiz 3.0, Upgradable to TouchWiz 4.0) Samsung Galaxy Tab 7.0 Plus (TouchWiz 4.0, Upgradable to TouchWiz Nature UX) Samsung Galaxy Tab 2 7.0 (TouchWiz 
Nature UX Lite, Upgradable to TouchWiz Nature UX 2.0) Samsung Galaxy Tab 3 7.0 (TouchWiz Nature UX, Upgradable to TouchWiz Nature UX 2.5) Samsung Galaxy Tab 4 7.0 (Touchwiz Nature UX 3.0) Samsung Galaxy Tab 7.7 (TouchWiz 4.0, Upgradable to TouchWiz Nature UX) Samsung Galaxy Tab 3 8.0 (TouchWiz Nature UX 2.0, Upgradable to TouchWiz Nature UX 2.5) Samsung Galaxy Tab 4 8.0 (Touchwiz Nature UX 3.0) Samsung Galaxy Tab 8.9 (TouchWiz 4.0, Upgradable to TouchWiz Nature UX) Samsung Galaxy Tab 10.1 (TouchWiz 4.0, Upgrading TouchWiz Nature UX) Samsung Galaxy Tab 2 10.1 (TouchWiz Nature UX Lite, Upgradable to TouchWiz Nature UX) Samsung Galaxy Tab 3 10.1 (TouchWiz Nature UX, Upgradable to TouchWiz Nature UX 2.5) Samsung Galaxy Tab 4 10.1 (Touchwiz Nature UX 3.0) Samsung Galaxy Note 8.0 (TouchWiz Nature UX, upgradable to TouchWiz Nature UX 2.0 (4.2.2) and TouchWiz Nature UX 3.0 (4.4.2)) Samsung Galaxy Note 10.1 (TouchWiz Nature UX Lite, upgradable to TouchWiz Nature UX, TouchWiz Nature UX 2.5) Samsung Galaxy Note 10.1 2014 Edition (TouchWiz Nature UX 2.5, upgradable to TouchWiz Nature UX 3.0, TouchWiz Nature UX 5.0) Samsung Galaxy Note Pro 12.2 (TouchWiz Nature UX 3.0, Upgradeable To TouchWiz Nature UX 3.5) Samsung Galaxy Tab S 8.4 (TouchWiz Nature UX 3.0 Upgradeable to TouchWiz Nature UX 3.5, TouchWiz 6.0) Samsung Galaxy Tab S 10.5 (TouchWiz Nature UX 3.0 Upgradeable to TouchWiz Nature UX 3.5, TouchWiz 6.0) Samsung Galaxy Tab S2 (TouchWiz Essence UX 2.0 upgradeable to TouchWiz UX 6.0) Samsung Galaxy Tab S3 (TouchWiz Grace UX) Samsung Galaxy Tab A 8.0 (TouchWiz Essence UX 2.0 upgradeable to Samsumg Experience 8) Samsung Galaxy Tab A 9.7 (TouchWiz 5.0 (Essence) upgradeable to Samsung Experience 8) Samsung Galaxy Tab E (TouchWiz Nature 5.0 (Essence) upgradeable to Samsung Experience 8) Recent TouchWiz Versions References Android (operating system) software Custom Android firmware Mobile operating systems Samsung software User interface techniques
34623328
https://en.wikipedia.org/wiki/Kemp%20Technologies
Kemp Technologies
Kemp, Inc. was founded in 2000 in Bethpage, New York and operates in the application delivery controller industry. The company builds load balancing products which balance user traffic between multiple application servers in a physical, virtual or cloud environment. In 2010, Kemp opened a European headquarters in Limerick, Ireland. Edison Ventures, Kennet Partners and ORIX Venture Finance invested $16 million into the company for research and development, sales and marketing in early 2012. In April 2014, Kemp announced a further investment in its Limerick operations to expand from 30 positions to 80. Kemp was recognized as a Visionary in the 2015 Gartner Magic Quadrant for Application Delivery Controllers and again in 2016. In 2019, Kemp was acquired by private equity firm Mill Point Capital. In November 2021, Kemp was acquired by Progress Software for $258 million. Business Kemp is a software company that develops load balancing and application delivery software built on a bespoke Linux operating system, which is sold under the LoadMaster brand. As of 2019, there were over 100,000 deployments of LoadMaster globally for customers that need high availability, scalability, security and visibility for their applications. This enables customers to scale their operations by delivering applications in a highly available manner with layer 4 to 7 load balancing, enhanced performance, and greater security. LoadMaster is available as a hardware appliance as well as a software-based load balancer that is available as a virtualized appliance and in the cloud, including Microsoft Azure, Amazon AWS, and other private clouds. Products Kemp LoadMaster Kemp's main product, the LoadMaster, is a load balancer built on its own proprietary software platform called LMOS, which enables it to run on almost any platform: as a Kemp LoadMaster appliance, or as a Virtual LoadMaster (VLM) deployed on Hyper-V, VMware, on bare metal or in the public cloud. Kemp is available in Azure, where it is among the top 15 deployed applications, as well as in AWS and VMware vCloud Air. The latest version of LMOS is 7.2.54.0, released in April 2021. In 2013, Kemp announced that it was adding Pre-Authorization, Single Sign-On (SSO) and Persistent Logging to its product range as a TMG alternative. Geo Multi-Site DNS Load Balancer Kemp's DNS-based Global Site Load Balancer (GSLB) enables customers to provide availability, scaling and resilience for applications that are geographically distributed, including data center environments, private clouds, multi-public-cloud environments such as Azure and AWS, as well as hybrid environments where applications are deployed across both public and private clouds. The capabilities provided by GEO LoadMaster are similar to those of hosted services such as Dyn DNS. VLM For Azure and AWS In March 2014, Kemp announced availability on the Microsoft Azure cloud platform (the first third-party load balancer available there) of the VLM for Azure LoadMaster, a virtual load balancer. Kemp 360 Central™ In May 2016, Kemp launched its centralized application monitoring and reporting product, called Kemp 360 Central™, which allows network and application administrators to view the state of different load balancers or application delivery controllers. Views include throughput, users and transactions per second. The product allows users to connect to third-party devices such as F5, AWS ELB, NGINX, and HAProxy.
Kemp 360 Vision™ At the same time as launching Kemp 360 Central, Kemp also announced the general release of 360 Vision, which monitors the health of applications. 360 Vision monitors patterns of application data, not just statistics, and is able to provide pre-emptive health alerts designed to prevent application outages. Free LoadMaster In March 2015, Kemp launched a free version of LoadMaster software called Free LoadMaster, which is a fully featured load balancer that shares most of the commercial product's features, including full layer 4 to layer 7 load balancing, reverse proxy, web content caching and compression, a non-commercial WAF (Web Application Firewall) and up to 20Mbps throughput. SDN Kemp announced and launched the world's first software defined network (SDN) ready adaptive load balancer and in September 2014, Kemp announced it was joining the OpenDaylight Open Source SDN Project. Service Mesh Kemp introduced service mesh as part of its offering in 2018 and was listed as a company to watch by TechTarget. Notes External links Networking hardware Server hardware Networking hardware companies 2000 software Software companies of the United States DDoS mitigation companies Networking companies of the United States Networking software companies Private equity portfolio companies
67257794
https://en.wikipedia.org/wiki/Angie%20Jones
Angie Jones
Angie Jones is a software developer, Java Champion, international keynote speaker, and IBM Master Inventor. She is also the CEO of Diva Chix. Early life and education Angie Jones was born in New Orleans, Louisiana. She attended Marion Abramson Senior High School in New Orleans, LA. After graduating from high school, she enrolled as a Business major at Tennessee State University. However, after taking an introductory C++ course which sparked her interest in tech, she changed her major to Computer Science. She has a Bachelor of Science in Computer Science from Tennessee State University. In 2010, she obtained a Master's Degree in Computer Science from North Carolina State University Career Angie Jones began her career in 2003 at IBM in Research Triangle Park, North Carolina where she worked as a software engineer for 9 years. In 2007, Jones became the CEO of Diva Chix, an online fashion game "where teenage girls and women learn to excel in technological areas as well as learn other key life lessons such as running a business and working as a team, while competing in fashion battles". The game is still active and Jones remains the CEO and sole developer. In 2014, she became an adjunct professor at Durham Technical Community College where she taught Java programming courses until 2017. In 2015, she joined LexisNexis as a Consulting Automation Engineer. In 2016, she began speaking at software conferences around the world and became an international keynote speaker in 2017. In 2017, she moved to California to work for Twitter as a Senior Automation Engineer. In 2018, Jones joined Applitools, an AI-powered visual testing and monitoring platform, to start their Developer Relations initiative. As part of her role at Applitools, she created Test Automation University, an online platform that provides free courses on programming and test automation subjects taught by herself and other notable industry experts. In 2021, she joined Block as the Head of Developer Relations for their open source decentralized exchange platform, TBD54566975. Jones holds 26 patented inventions in the United States of America and Japan. Jones has authored chapters in multiple software engineering books including The Digital Quality Handbook: Guide for Achieving Continuous Quality in a DevOps Reality, DevOps: Implementing Cultural Change, and 97 Things Every Java Programmer Should Know. She has also written the foreword for Formulation. She also writes tutorials and technical articles on her website, angiejones.tech. Volunteering Jones volunteers with Black Girls Code, where she led the Raleigh-Durham Chapter from 2015 - 2017. Jones also volunteered with TechGirlz where she planned and taught technology workshops for middle-school girls from 2015 - 2017. Jones is also an active member of Alpha Kappa Alpha sorority where she has served as vice president and Technology Chairman of her local chapter, Alpha Zeta Omega in Durham, North Carolina. Awards and Honors In 2009, Angie was awarded the title of IBM Master Inventor. In 2017, she won the Leah K. Frazier award for "Service with a Global Impact" from Alpha Kappa Alpha in which she is an active sorority member. On July 3, 2020, Angie became the first Black female Java Champion, an Oracle-sponsored exclusive group of experts in the Java programming language who make significant contributions to the Java ecosystem. Also in 2020, she was awarded the CSC Alumni Award by the North Carolina State University Department of Computer Science and was also named a GitHub Star. 
References Tennessee State University alumni North Carolina State University alumni IBM employees
28536157
https://en.wikipedia.org/wiki/Review%20Board
Review Board
Review Board is a web-based collaborative code review tool, available as free software under the MIT License. An alternative to Rietveld and Gerrit, Review Board integrates with Bazaar, ClearCase, CVS, Git, Mercurial, Perforce, and Subversion. Review Board can be installed on any server running Apache or lighttpd and is free for both personal and commercial use. There is also an official commercial Review Board hosting service, RBCommons. Review requests can be posted manually or automatically using either a REST Web API, or a Python script. Users Some of the notable users of Review Board are: Cisco Citrix Novell NetApp Twitter VMware Yelp Yahoo LinkedIn Apache Software Foundation HBase Calligra Konsole Amazon Cloudera Hewlett Packard Enterprise Tableau Software See also List of tools for code review References External links Review Board all-in-one installer RBCommons Review Board hosting Software review Free software programmed in Python Software using the MIT license
1179449
https://en.wikipedia.org/wiki/The%20Weather%20Network
The Weather Network
The Weather Network (TWN) is a Canadian English-language weather information specialty channel available in Canada, the United States and the United Kingdom. It delivers weather information on television, digital platforms (responsive websites, mobile and tablet applications) and TV apps. The network also operates counterpart brands, including the Canadian French-language MétéoMédia, Eltiempo in Spain, Wetter Plus in Germany, and Clima in Latin America. The company is owned by Pelmorex Media, which is headquartered in a 100,000-square-foot media centre located in Oakville, Ontario, Canada. The company continues to grow on a global scale, while maintaining its status in the Canadian market. Its specialty television networks are among the most widely distributed and frequently consulted television networks in Canada; theweathernetwork.com is among Canada's leading web services; and its mobile web property is ranked first in the weather category and is the second-largest mobile website in Canada. The channel offers regional feeds for Alberta, Toronto, Atlantic Canada and British Columbia. History The Weather Network was licensed by the Canadian Radio-television and Telecommunications Commission on December 1, 1987 and began broadcasting on September 1, 1988 (six years after the U.S. Weather Channel) as WeatherNow, under the ownership of engineering firm Lavalin Inc. (now known as SNC-Lavalin) and Landmark Communications. The channel gained its present name on May 1, 1989. In the early years, TWN and its sister channel, MétéoMédia, shared a single television feed via an analogue transponder on one of the Anik satellites, with computer-generated local forecasts airing on one while the video feed of a live forecaster or commercials aired on the other. At first the video section was only available during drive times on weekdays and for half of the day on weekends; at other times the local forecast was looped. The two services began to run separately starting in 1994, while both were still based in Montreal. Local forecasts were generated using the same systems used by The Weather Channel in the U.S., called WeatherStar. TWN began using its own system, called PMX, in 1996, which is still in use today. Pelmorex purchased The Weather Network from SNC-Lavalin in 1993, two years after the merger of SNC and Lavalin. The channel launched its website in 1996. Throughout the 1990s and early 2000s, The Weather Network's broadcasts were divided into different programming blocks. One of the most notable was "EarthWatch", which originally began as a five-minute news segment discussing environmental and weather-related issues. The show expanded into a nighttime programming block in the mid-1990s, and the news segment later spun off as the current "WeatherWatch" segment. Other programming blocks included the "Morning Report", focusing on Eastern Canada in the mornings; "Sea to Sea", focusing on Western Canada in the "workday" hours; an unnamed afternoon block which would later be known as "Across Canada" (spun off from a segment seen on "EarthWatch"); and the "Weekend Report", later known as "This Weekend". ("Morning Report" was, coincidentally, also the title of the GTA broadcast dating back to February 7, 1994, which had a runtime of 4 hours. When it was upgraded to a national broadcast, "Good Morning Toronto" was provided as a replacement, but the runtime was reduced by half an hour.) Programming blocks were discontinued in 2002 for weekdays and in 2004 on weekends, although "This Weekend" continued to air until 2007.
On May 2, 1998, The Weather Network started broadcasting nationally from a new studio facility in Mississauga, Ontario after relocating from Montreal. This led to the departure of several presenters, notably those who were on air during weekends. Several new presenters arrived at the time; while many of the Montreal presenters initially relocated, most departed from the channel over time, with many moving back to Montreal. To date, Chris St. Clair is the only presenter from Montreal remaining. Late 2000 marked the beginning of a period of gradual but significant changes to The Weather Network's programming, starting with the launch of a seven-day and a short-term precipitation forecast during the Local Forecast, along with the introduction of the new weather icons that are still used currently. In 2002, The Weather Network introduced "Metacast Ultra", a weather presentation system that consisted of weather maps featuring more than 1,200 local communities, commuter routes and regional highways, animated weather icons, and higher-resolution weather graphics. On March 29, 2004, The Weather Network introduced a new 14-day trend outlook as part of the local cable weather package. It provided a two-week look at how the weather would trend compared to normal temperature values and weather conditions for that time of year. In June 2004, The Weather Network took legal action against Star Choice (now Shaw Direct) after the provider moved TWN to a new bundle without giving any notice to its subscribers. The channel's management tried to prevent Star Choice from moving the channel, as subscribers would have had to pay an additional $7 to watch The Weather Network. In late 2004, TWN improved its local forecast coverage, providing more localized forecasts in up to 1,200 communities across Canada. The Weather Network relocated its headquarters to Oakville, Ontario in November 2005. The channel's GTA morning show made its debut at the brand-new broadcast facility on November 29, 2005, while the network's national programming started broadcasting from the new facility on December 2, 2005. The Weather Network has gradually introduced new local weather products, including an hourly forecast for the next 12 hours in 2006, long-term precipitation forecasts in 2008 and improved satellite and radar maps in 2009. In 2009, The Weather Network was granted 9(1)(h) must-carry status by the CRTC, under the condition that Pelmorex develop a "national aggregator and distributor" of localized emergency alert messages. In early 2013, The Weather Network launched regional feeds, currently for Alberta and the Maritime provinces. Each feed features its own regional forecasts and weather stories and, where available, traffic information provided by Beat the Traffic. On December 8, 2014, The Weather Network and CBC News began a content-sharing partnership, in which TWN produced national weather forecasts that would appear on CBC News Network and during The National, and The Weather Network would be able to syndicate CBC News content on its television and digital outlets. In 2015, Pelmorex bought out The Weather Channel's stake in the service.
The two- to three-minute segment occurs on the 10's (analogous to The Weather Channel's "Local on the 8's"). Since the start of the 2015 winter programming cycle, a one-minute update is also scheduled on the 6's. The segment is well known by frequent viewers for its background music. In January 2010, an online poll was held that allowed viewers to vote for their favorite Local Forecast music, which would play during the morning hours. In the 2014 spring programming cycle, as part of the channel's 25th anniversary, a variety of songs that were originally played in the '90s and early '2000s eras were re-used at random. Some digital television providers in Canada (primarily IPTV services, such as Bell Fibe TV and Telus Optik) may also offer The Weather Network iTV, an app which allows users to view expanded local forecasts. Studio/Live Programming The Weather Network broadcasts in a news-wheel format, featuring various forecast or weather-related segments throughout the hour. For some regions including the Greater Toronto Area, Alberta and the Maritime provinces, "Regional forecasts" are shown every half-hour, featuring forecasts and weather stories specifically for its respective region. For some areas, traffic reports are also presented during the morning and afternoon commute. For other regions "WeatherWATCH" provide a detailed analysis of the current weather across Canada, including the weather expected nationwide over the next three days. WeatherWATCH airs for three minutes just before the local/regional forecasts. The remaining half-hour cycle features various weather stories from across the country and around the world. In addition, TWN airs a variety of smaller segments including: Force of Nature - (Featured every 10 minutes on the 3's, a show-reel of significant weather making headlines around the world). Flu, Cold and Covid Reports Weather Watch Storm Centre Science Behind the Weather Regional Climate Beyond The forecast Local Forecast - Radar update and major cities' forecasts for 7 days ahead.. Storm-Hunters - weekends at 7 and 10pm. Captured Share Your Weather Must See Weather and Your Health The Travellers Report - Today's outlook for major cities in Canada, the United States and the Caribbean. The Weather Network's news department won the first annual Adrienne Clarkson Diversity Award for network television. This award is given by the Radio and Television News Director's Association (R-T-N-D-A) for the best news reports on a subject of cultural diversity. The Weather Network then won for its 2006 two-part news series on weather and black history. The Weather Network also won a World Medal from the NY Festivals International TV Broadcasting Awards for a 2007 story on a blind woman learning to sail who uses her other senses to determine changes in wind patterns and potential storms. It won the same award again in 2008 for a story on a man and his seeing-eye dog trying to adapt to a harsh New Brunswick winter. The Weather Network HD The Weather Network HD is a 1080i high definition simulcast of The Weather Network that launched on May 30, 2011. It is currently available on Cogeco, EastLink, Bell MTS, Rogers Cable, Telus Optik TV, and Shaw Cable. The HD simulcast for cable and IPTV providers currently do not offer local forecasts unlike the standard definition feed. On August 22, 2017, the HD feed debuted on Shaw Direct. 
At first, the channel's design featured a carousel consisting of current temperatures and 18-hour and 3-day forecasts (including expected temperatures, conditions and precipitation probability) for key cities within the viewer's region. Throughout the local forecast segments, an additional L-shaped banner was introduced, with the top two-thirds of the ticker displaying similar information (for two cities at a time instead of one), while the bottom of the ticker promotes upcoming segments in the programming cycle. At the start of the 2014 spring programming cycle, "14 day trends" were introduced to the latter, whilst the former showcased information from 50 major Canadian cities. The L-shaped banner was expanded to be used at all times. Special weather statements are shown on a crawl that appears above the bottom of the ticker, when active. In 2017, the HD feed underwent another change in on-screen design, now featuring DIN Next as the principal typeface. This typeface has been dominant in the SD feed since roughly the mid-2010s. The top two-thirds now features three "boxes": the first shows the city as a header and contains the current local time, date and weather conditions; the second cycles through extra information on ceiling, pressure, humidity, apparent temperature, wind, gusts and visibility; and the final "box" contains 18-hour forecasts for the city. The bottom of the ticker now alternates between national weather headlines and information on upcoming programming, which can be overridden at any time by special weather statements in a bottom-up scrolling text format. Satellite services In 2006, Bell Satellite TV and The Weather Network started an interactive version of The Weather Network, enabling viewers to set their city and view its specific forecasts each time. Web and mobile services In addition to its website, The Weather Network runs an e-mail and text messaging service called WeatherDirect that sends weather forecasts via e-mail. There is also an e-mail service for pollen conditions and road conditions. The Weather Network also operates Twitter and Facebook accounts, which include severe weather alerts and weather news. Mobile App The Weather Network is currently available as an app for iPhone and Android smartphones. Smart TV App In November 2015, an app-only version of the TV channel was launched on Android TV (using Live Channels) and Apple TV. It has a similar news-wheel format, albeit with some changes, and dynamically overlays the local forecast at the bottom using GPS. It also includes on-demand video and local maps. Criticism The channel has been criticized for its excessive use of advertising through commercials, sponsored forecasts and some weather segments (e.g., hot spots, picnic/barbecue report, etc.), which has led to less time for detailed forecasts and more time spent on advertising. The same problem also occurs with the U.S.-based The Weather Channel. In the past, there was little to no advertising; currently, local forecasts are sponsored using static logos during and after forecasts. The channel has also been criticized for devoting more coverage to the weather in Southern Ontario than to the rest of Canada during its national segments. The 2008 launch of local programming for the Greater Toronto Area further limited updated forecasts for the rest of Canada. Notable on-air presenters Current presenters Chris St. Clair – co-host (1995–2021), now retired. Former presenters See also The Weather Channel, the American equivalent of The Weather Network.
MétéoMédia, French version of The Weather Network. References External links All Channel Alert Android (operating system) software Wear OS software BlackBerry software Analog cable television networks in Canada Companies based in Oakville, Ontario English-language television stations in Canada IOS software MacOS software Pelmorex Satellite radio stations in Canada Sirius Satellite Radio channels Television channels and stations established in 1988 1988 establishments in Canada TvOS software Universal Windows Platform apps WatchOS software Weather television networks Windows software
58644759
https://en.wikipedia.org/wiki/Information%20engineering%20%28field%29
Information engineering (field)
Information engineering is the engineering discipline that deals with the generation, distribution, analysis, and use of information, data, and knowledge in systems. The field first became identifiable in the early 21st century. The components of information engineering include more theoretical fields such as machine learning, artificial intelligence, control theory, signal processing, and information theory, and more applied fields such as computer vision, natural language processing, bioinformatics, medical image computing, cheminformatics, autonomous robotics, mobile robotics, and telecommunications. Many of these originate from computer science, as well as other branches of engineering such as computer engineering, electrical engineering, and bioengineering. The field of information engineering is based heavily on mathematics, particularly probability, statistics, calculus, linear algebra, optimization, differential equations, variational calculus, and complex analysis. Information engineers often hold a degree in information engineering or a related area, and are often part of a professional body such as the Institution of Engineering and Technology or Institute of Measurement and Control. They are employed in almost all industries due to the widespread use of information engineering. History The term information engineering used to refer to a software engineering methodology that is now more commonly known as information technology engineering or information engineering methodology. It began to gain its current meaning early on in the 21st century. Elements Machine learning and statistics Machine learning is the field that involves the use of statistical and probabilistic methods to let computers "learn" from data without being explicitly programmed. Data science involves the application of machine learning to extract knowledge from data. Subfields of machine learning include deep learning, supervised learning, unsupervised learning, reinforcement learning, semi-supervised learning, and active learning. Causal inference is another related component of information engineering. Control theory Control theory refers to the control of (continuous) dynamical systems, with the aim being to avoid delays, overshoots, or instability. Information engineers tend to focus more on control theory rather than the physical design of control systems and circuits (which tends to fall under electrical engineering). Subfields of control theory include classical control, optimal control, and nonlinear control. Signal processing Signal processing refers to the generation, analysis and use of signals, which could take many forms such as image, sound, electrical, or biological. Information theory Information theory studies the analysis, transmission, and storage of information. Major subfields of information theory include coding and data compression. Computer vision Computer vision is the field that deals with getting computers to understand image and video data at a high level. Natural language processing Natural language processing deals with getting computers to understand human (natural) languages at a high level. This usually means text, but also often includes speech processing and recognition. Bioinformatics Bioinformatics is the field that deals with the analysis, processing, and use of biological data. This usually means topics such as genomics and proteomics, and sometimes also includes medical image computing. 
Cheminformatics Cheminformatics is the field that deals with the analysis, processing, and use of chemical data. Robotics Robotics in information engineering focuses mainly on the algorithms and computer programs used to control robots. As such, information engineering tends to focus more on autonomous, mobile, or probabilistic robots. Major subfields studied by information engineers include control, perception, SLAM, and motion planning. Tools In the past some areas in information engineering such as signal processing used analog electronics, but nowadays most information engineering is done with digital computers. Many tasks in information engineering can be parallelized, and so nowadays information engineering is carried out using CPUs, GPUs, and AI accelerators. There has also been interest in using quantum computers for some subfields of information engineering such as machine learning and robotics. See also Aerospace engineering Chemical engineering Civil engineering Internet of things List of engineering branches Mechanical engineering Statistics References Engineering disciplines Information
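As a hedged, self-contained illustration of the motion-planning subfield mentioned above (not taken from the article), the Python sketch below finds a shortest collision-free path on a small occupancy grid using breadth-first search; the grid, start, and goal are hypothetical, and real planners would work on maps produced by the robot's perception and SLAM components.

# Simple motion-planning sketch: breadth-first search for the shortest
# collision-free path on a 2-D occupancy grid (1 = obstacle, 0 = free).
from collections import deque

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Reconstruct the path by walking the predecessor links backwards.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

occupancy = [[0, 0, 0, 1],
             [1, 1, 0, 1],
             [0, 0, 0, 0]]
print(plan_path(occupancy, (0, 0), (2, 3)))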
14000469
https://en.wikipedia.org/wiki/Virtual%20Switch%20Redundancy%20Protocol
Virtual Switch Redundancy Protocol
The Virtual Switch Redundancy Protocol (VSRP) is a proprietary network resilience protocol developed by Foundry Networks and currently sold in products manufactured by both Brocade Communications Systems (formerly Foundry Networks) and Hewlett Packard. The protocol differs from many others in use as it combines Layer 2 and Layer 3 resilience – effectively doing the jobs of both the Spanning Tree Protocol (STP) and the Virtual Router Redundancy Protocol (VRRP) at the same time. Whilst the restrictions on the physical topologies able to make use of VSRP mean that it is less flexible than STP and VRRP, it significantly improves on the failover times provided by either of those protocols. See also Common Address Redundancy Protocol Virtual Router Redundancy Protocol Hot Standby Router Protocol Spanning Tree Protocol External links Configuration Guide - Brocade Communications Systems Internet protocols Routing protocols
22853548
https://en.wikipedia.org/wiki/Dr.Web
Dr.Web
Dr.Web is a software suite developed by Russian anti-malware company Doctor Web. First released in 1992, it became the first anti-virus service in Russia. The company also offers anti-spam solutions, and its technology is used by Yandex to scan e-mail attachments. Dr.Web also features an add-on for all major browsers which checks links against the online version of Dr.Web. Around 2008 Dr.Web withdrew from anti-virus tests such as Virus Bulletin's VB100, stating that it considers scans of virus collections a different subject from protection against real-world malware attacks. Critics, reviews and reliability Its staunch anti-adware policy has led to complaints from software developers that Dr.Web treated their virus-free applications as viruses and that they received no response from Dr.Web when they tried to contact the company to resolve the issue. Notable discoveries Flashback Trojan Dr.Web discovered the Trojan BackDoor.Flashback variant that affected more than 600,000 Macs. Trojan.Skimer.18 Dr.Web discovered Trojan.Skimer.18, a Trojan that works like an ATM software skimmer. The Trojan can intercept and transmit bank card information processed by ATMs as well as data stored on the card and its PIN code. Linux.Encoder.1 Dr.Web discovered the ransomware Linux.Encoder.1, which affected more than 2,000 Linux users. Linux.Encoder.2, which was discovered later, turned out to be an earlier version of this ransomware. Trojan.Skimer discovery and attacks on Doctor Web offices Doctor Web received a threat, supposedly from the Trojan writers or the criminal organization sponsoring this malware's development and promotion. On March 31, 2014, after two arson attacks were carried out on Igor Daniloff's anti-virus laboratory in St. Petersburg, the company received a second threat. Doctor Web released a statement that the company considers it its duty to provide users with the ultimate protection against the encroachments of cybercriminals, and that it would consequently continue its efforts aimed at identifying and studying ATM threats with its ATM Shield. See also Antivirus software Comparison of antivirus software Comparison of computer viruses References External links Antivirus software Computer security software companies Lua (programming language)-scriptable software Russian brands Software companies of Russia
1088665
https://en.wikipedia.org/wiki/NForce4
NForce4
The nForce4 is a motherboard chipset released by Nvidia in October 2004. The chipset supports AMD 64-bit processors (Socket 939, Socket AM2 and Socket 754) and Intel Pentium 4 LGA 775 processors. Models nForce4/nForce4-4x nForce4 is the second evolution of the Media Communications Processor (MCP) and incorporates both northbridge and southbridge on a single die (the first was nForce3). The Socket 754 version of the board has the HyperTransport link clocked to 800 MHz (6.4 GB/s transfer rate). Motherboards based on early revisions are mostly referred to as "nForce4-4x" (referring to their ability to handle HT speeds of 4x). Support for up to 20 PCI Express (PCIe) lanes (up to 38-40 lanes for the nForce4 SLI x16). Reference boards are set up with one x16 slot and three x1 slots, leaving 1 lane unused. Support for up to 10 USB 2.0 ports. Support for 4 SATA and 4 PATA drives, which can be linked together in any combination of SATA and PATA to form a RAID 0, 1, or 0+1. Nvidia RAID Morphing, which allows conversion from one RAID type to another on the fly. Nvidia nTune, a tool for easy overclocking and timing configurations. Full 1000 MHz speed on HyperTransport (8 GB/s transfer rate). Eight-channel AC'97 audio. Onboard Gigabit Ethernet. Nvidia ActiveArmor, an onboard firewall solution. (Not available on regular nForce 4) Does not support Windows 98 or Windows Me. nForce4 Ultra The Ultra version contains all of the features of the nForce4-4x version with the addition of: Hardware processing for the ActiveArmor to reduce CPU load. Serial ATA 3 Gbit/s interface with 300 MB/s transfer speeds for SATA 3 Gbit/s drives. Enthusiasts discovered soon after the release of nForce4 Ultra that the chipset was identical to nForce4 SLI other than a single resistor on the chip package itself. By modifying this resistor to match the SLI configuration, an Ultra can be turned into an SLI. nForce4 SLI The SLI version has all the features of the Ultra version, in addition to SLI (Scalable Link Interface). This interface allows two video cards to be connected to produce a single output. This can theoretically double framerates by splitting work between the two GPUs. On a standard (non x16) nForce4 SLI motherboard, the system can be configured to provide an x16 slot for one graphics board or twin x8 slots for the SLI configuration. A jumper bank must be altered to set these options. nForce4 SLI Intel Edition Unlike its AMD Athlon 64 sibling, the Intel Edition uses an older-style two-chip design, with both a northbridge and a southbridge. As with the older nForce2 chipsets, Nvidia calls the northbridge the "System Platform Processor" (SPP) and the southbridge the "Media and Communications Processor" (MCP). This change in design was necessitated because, unlike the Athlon 64/Opteron, the Pentium 4 does not have an on-board memory controller, requiring Nvidia to include one in the chipset as in the older nForce2. In addition to supporting Pentium 4 processors (with up to a 1066 MHz FSB), the chipset includes support for DDR2 SDRAM. Also like Nvidia's older chipsets, the MCP and SPP communicate through a HyperTransport link, in this case at only a 1.6 GB/s transfer rate. Apart from these differences, the nForce4 SLI Intel Edition shares the same features as the regular nForce4 SLI.
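The 6.4 GB/s and 8 GB/s HyperTransport figures quoted above can be reproduced with a little arithmetic, assuming a 16-bit-wide link that transfers on both clock edges (double data rate) in each of two directions; that link width is an assumption made for illustration, not something stated in the article. A minimal Python check:

# Rough check of the HyperTransport transfer rates quoted above, under the
# assumption of a 16-bit DDR link counted across both directions.
def ht_bandwidth_gb_s(clock_mhz, width_bits=16):
    bytes_per_transfer = width_bits / 8
    per_direction = clock_mhz * 1e6 * 2 * bytes_per_transfer  # DDR: two transfers per cycle
    return 2 * per_direction / 1e9                            # sum of both directions

print(ht_bandwidth_gb_s(800))   # ~6.4 GB/s, matching the Socket 754 figure
print(ht_bandwidth_gb_s(1000))  # ~8.0 GB/s, matching the full-speed figure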
An oddity of the Intel Edition is the fact that while it works with the Pentium D 830 (3.0 GHz) and 840 (3.2 GHz), as well as the Extreme Edition of the 840, it does not work with the Pentium D 820 (2.8 GHz) because the 820 has a much lower current draw than the 830 and 840. Attempting to boot an Intel Edition board with an 820 will cause it to shut down so as to avoid damaging the processor. Nvidia has stated that it does not consider the 820 to be an enthusiast processor, and as such will not be enabling support for it. However, the nForce4 SLI x16 supports it. nForce4 SLI x16 The nForce4 SLI x16 has similar features to the nForce4 SLI, except it now provides 16 PCI-Express lanes to both graphics cards in an SLI configuration (as opposed to only 8 lanes per graphics card with the original SLI chipset). This is the only version of the nForce4 for AMD processors that has a separate northbridge and southbridge. It comprises the existing nForce4 MCP for the southbridge and a new AMD nForce4 System Platform Processor (SPP). The two chips are connected via the HyperTransport link. This solution provides 38 PCI-Express lanes in total, which can be divided over 7 slots. It is also available for Intel processors, whereby it provides 40 PCI-Express lanes, which can be divided over 9 slots. Southbridges nForce400/405/410/430 The nForce400/405/410/430 refer to nForce4-based southbridges which are used together with GeForce 6100/6150 series northbridges to form a chipset with integrated graphics. The combination is a follow-up to the popular nForce2 IGP chipset. Driver Availability Nvidia offers nForce4 chipset driver downloads for NT-based Windows versions from 2000 up to and including Vista in the "Legacy" product type category on their download page. There is no official support for Windows 7 or newer, but Windows 7 has a built-in driver for the nForce 6 chipset, which is very similar. Flaws Nvidia's nForce4 chipset suffers from several unresolved issues. The ActiveArmor hardware firewall is nearly non-functional, with many unsolved bugs and potentially serious instability issues. Installing ActiveArmor can cause BSODs for users of certain software, especially peer-to-peer file sharing applications. Some programs, such as μTorrent, go so far as to have warning messages about using Nvidia's firewall in combination with their software. ActiveArmor also has a high probability of causing corruption of file downloads. Nvidia has been unable to solve these issues and points to hardware bugs within the chipset itself, problems which they are unable to work around. There have also been data corruption issues associated with certain SATA 3 Gbit/s hard drives. The issues can often be resolved with a firmware update for the hard drive from the manufacturer. The nForce4 chipset has also been blamed for issues with PCI cards, relating to Nvidia's implementation of the PCI bus. RME Audio, a maker of professional audio equipment, has stated that the latency of the PCI bus is unreliable and that the chipset's PCI Express interface can "hog" system data transfer resources when intense video card usage is occurring. This has the effect of causing audible pops and clicks with PCI sound cards. Gamers have noticed this effect, especially with Creative's Sound Blaster X-Fi and Sound Blaster Audigy 2 sound cards. Compatibility issues between these sound cards and nForce4 motherboards have been ongoing, including reports of serious and irreversible damage to crucial motherboard components.
Driver updates have also failed to resolve these problems. Latency issues are more readily apparent with sound cards than with other add-on cards because audio problems provide direct, audible feedback to the user. See also Comparison of Nvidia chipsets NForce 500 References External links Nvidia: nForce4 Anandtech: nForce4: PCI Express and SLI for Athlon 64 Techreport: Nvidia's nForce4 SLI Intel Edition chipset Nvidia chipsets
1784745
https://en.wikipedia.org/wiki/Gtkpod
Gtkpod
gtkpod provides a graphical user interface that enables users of Linux and other Unix operating systems to transfer audio files onto their iPod Classic, iPod Nano, iPod Shuffle, iPod Photo, or iPod Mini music players. Although it does not support some of the more advanced features of iTunes, gtkpod still performs the role of an iPod manager for Linux. Album art and videos are now supported, and preliminary support for jailbroken iPhones and iPod Touches is available. Most digital audio players permit the user to browse and access their content via an interface closely related to the underlying file system. iPods, on the other hand, employ a proprietary database file for managing all the metadata associated with their content. Because of this, an iPod cannot recognize files that have been copied directly into the low-level file system unless its music database has been appropriately modified. This task is usually performed by iTunes, but since Apple has only released versions for Mac OS X and Windows, gtkpod provides the needed support for other operating systems. Starting with version 0.93, the code that handles the iPod access has been separated as libgpod, a shared library that allows other projects to provide iPod support as well. It is currently used by popular players such as Rhythmbox and Amarok. See also Comparison of iPod managers Comparison of media players List of free software for audio List of Linux audio software References External links Supported iPods Free audio software Free software programmed in C IPod software Audio software that uses GTK Linux audio video-related software IOS software
13720123
https://en.wikipedia.org/wiki/Basic%20sequential%20access%20method
Basic sequential access method
In IBM mainframe operating systems, Basic sequential access method (BSAM) is an access method to read and write datasets sequentially. BSAM is available on OS/360, OS/VS2, MVS, z/OS, and related operating systems. BSAM is used for devices that are naturally sequential, such as punched card readers, punches, line printers, and magnetic tape. It is also used for data on devices that could be addressed directly, such as magnetic disks. BSAM offers device independence: to the extent possible, the same API calls are used for different devices. BSAM allows programs to read and write physical blocks of data, as opposed to the more powerful but less flexible Queued Sequential Access Method (QSAM), which allows programs to access logical records within physical blocks of data. The BSAM user must be aware of the possibility of encountering short (truncated) blocks (blocks within a dataset which are shorter than the BLKSIZE of the dataset), particularly at the end of a dataset, but also in many cases within a dataset. QSAM has none of these limitations. Application program interface The programmer specifies DSORG=PS in the Data Control Block (DCB) to indicate use of BSAM. As a basic access method, BSAM reads and writes member data in blocks; the I/O operation proceeds asynchronously and must be tested for completion using the CHECK macro. BSAM uses the standard system macros OPEN, CLOSE, READ, WRITE, and CHECK. The NOTE macro instruction returns the position of the last block read or written, and the POINT macro will reposition to the location identified by a previous NOTE. If the dataset is unblocked, that is, the logical record length (LRECL) is equal to the physical block size (BLKSIZE), BSAM may be utilized to simulate a directly accessed dataset using NOTE and POINT on any supported direct access device type (DEVD=DA), and some primitive applications were designed in this way. Similar facilities The BSAM application program interface can be compared with the interface offered by open, read, write and close calls (using file handles) in other operating systems such as Unix and Windows. POINT provides an analog of seek or lseek, and ftell is the equivalent of NOTE. See also Queued Sequential Access Method (QSAM) Hierarchical Sequential Access Method (HSAM) Basic Indexed Sequential Access Method (BISAM) Queued Indexed Sequential Access Method (QISAM) Hierarchical Indexed Sequential Access Method (HISAM) References IBM mainframe operating systems
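To make the Unix comparison above concrete, here is a minimal Python sketch of the analogous pattern: reading a dataset as fixed-size physical blocks, with tell standing in roughly for NOTE and seek for POINT. This is only an analogy, not BSAM itself; the file name and block size are hypothetical.

# Create a small hypothetical dataset so the sketch is self-contained.
with open("dataset.bin", "wb") as f:
    f.write(bytes(range(256)) * 64)            # 16 KiB of made-up data

BLKSIZE = 4096                                 # hypothetical physical block size

with open("dataset.bin", "rb") as f:           # rough analogue of OPEN
    first_block = f.read(BLKSIZE)              # READ one physical block
    noted_position = f.tell()                  # NOTE: remember the current position
    second_block = f.read(BLKSIZE)             # READ the next block
    f.seek(noted_position)                     # POINT: return to the noted position
    assert f.read(BLKSIZE) == second_block     # re-reading yields the same block
# the file is closed when the with-block ends (analogue of CLOSE)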
4393438
https://en.wikipedia.org/wiki/Baldur%27s%20Gate%20%28video%20game%29
Baldur's Gate (video game)
Baldur's Gate is a fantasy role-playing video game developed by BioWare and published in 1998 by Interplay Entertainment. It is the first game in the Baldur's Gate series and takes place in the Forgotten Realms, a high fantasy campaign setting, using a modified version of the Advanced Dungeons & Dragons (AD&D) 2nd edition rules. It was the first game to use the Infinity Engine for its graphics, with Interplay using the engine for other Forgotten Realms-licensed games, including the Icewind Dale series and Planescape: Torment. The game's story focuses on a player-made character who finds themselves travelling across the Sword Coast alongside a party of companions. The game received critical acclaim following its release and was credited with revitalizing computer role-playing games. An expansion pack entitled Tales of the Sword Coast was released, as was a sequel entitled Baldur's Gate II: Shadows of Amn, which later received its own expansion called Throne of Bhaal. An enhanced version of the Infinity Engine was later created as part of Beamdog's remake, entitled Baldur's Gate: Enhanced Edition, the first new release in the franchise in nearly nine years. Gameplay Players conduct the game from a top-down isometric third-person perspective, creating a character who then travels across pre-rendered locations, taking on quests, recruiting companions to aid them, and combating enemies, while working towards completing the game's main story. Control is handled through a user interface that allows a player to move characters, assign them actions, review information on ongoing quests and the statistics of characters in their party, manage their inventories, and organize the formation of the party. The screen does not need to remain centered on the characters being controlled and can be moved around with the mouse and keyboard; the keyboard can also access various player options through shortcuts. All of the gameplay mechanics were coded to conform to the Advanced Dungeons & Dragons 2nd edition role-playing rules, with the game automatically computing rule intricacies, including tracking statistics and dice rolling. Although the game is conducted in real-time, some elements of the rule set were modified to allow it to feature a pausable real-time mode. This allows players to pause the game at any time to plan the actions a character will take, and to set the game to pause automatically at preset points in combat. Each new playthrough requires the player either to create a new character or to import one they exported from a previous playthrough. Every new character requires the player to determine what their name, gender, race, class, and alignment are, and what ability scores and weapon proficiencies they have. New characters can also be multi-class, but must adhere to the restrictions that come from this, in accordance with the 2nd edition rules; for example, a character who is both a cleric and a fighter may only use weapons of the former class. The game's main story is divided into eight parts, featuring a prologue and seven chapters. Each section requires the player to complete a specific task in order to continue. Some areas of the map are not accessible until the player has advanced to a specific chapter. A player may have up to five companions travelling with them in their party, with the player free to decide whom to recruit or dismiss from the party. The main UI consists of three action bars surrounding the main screen.
The first bar consists of a map, journal, character records, their inventories, spellbooks and a clock. The second bar consists of a portrait of each character in the party, their HP, order, and any effects they are experiencing. The third bar provides specific actions per the number of characters being controlled: if a single character is selected, the player has the ability to switch between the weapons the character is wielding, use spells or items, or utilize a character's or piece of equipment's special abilities. If more than one character is selected, the bar displays options for conversing with or attacking NPCs, stop what is being done, or change their formation. The inventory system allows each character to equip items categorized as: weapons, ammunition, armor, helmets, necklaces, rings, belts, cloaks, feet, or usable. The number of items a character can both equip and carry is affected by their weight limit, which is determined by their Strength ability score; going over this limit will encumber the character causing them to move slowly or prevent them moving altogether until they remove items from their inventory. The system also indicates what equipment a character may not use as defined by their class. This mechanic also determines how many weapon slots they have available; by default, all character have two weapon slots, with an off-hand slot for shields. Some classes allow characters additional weapon slots. Characters may equip three stacks of ammo for ranged weapons (bows, crossbows and slings), and use three different types of usable items (potions, scrolls and wands). Conversation can be initiated by players selecting a member of the party and clicking on a friendly or neutral NPC. Some conversations are initiated automatically when characters come close to them. Certain NPCs offer services the player can utilize, including buying and selling items and identifying enchanted items. Other useful places include inns where the party can rest in safety to recover lost hit points and memorize spells, as well as temples where characters can pay for healing services, such as resurrecting a dead party member. Other features that affect gameplay include: The ability to customize their character after creation, albeit with some restrictions. The ability to change the primary and minor colors used by each character. The ability to switch the game's AI on or off, and change what script a character uses. Most locations are hidden when first visited but are revealed as the character moves around them. A fog of war effect hides explored areas when the player's characters move away from them. A reputation system that tracks the moral actions of the PC and affects how they are perceived, changing if they resolve a problem or commit a crime in the view of witnesses. Higher reputations cause shops to decrease prices, while lower reputations cause shops to increase prices. Lower reputations may also lead to the character being attacked when in town. Companions are also affected by reputation, with evil companions leaving the party, even attacking it, if it is high, and good and neutral companions leaving when it is low. Some side quests also require a minimum reputation to begin. Certain NPCs may also react negatively or positively depending on their alignment and the player's reputation. The ability to keep track of in-game time through the changes in lighting and the activity that is occurring. 
Characters become fatigued after spending a full in-game day, especially after travelling long distances between world map locations, and must rest to recover, either in an inn or camping out in the countryside/within a dungeon. Characters can be ambushed when camping out or travelling long distances between world map locations. Players can play either in single-player mode, or in multiplayer mode. The latter allows up to six players to work together online with their own created characters. Plot Setting Baldur's Gate takes place in the fictional world of Ed Greenwood's Forgotten Realms setting, during the year of 1368DR in the midst of an apparent iron shortage, where items made with iron inexplicably rot and break. Focusing upon the western shoreline of Faerûn, the game is set within a stretch of the region known as the Sword Coast, which contains a multitude of ecologies and terrains, including mountains, forests, plains, cities, and ruins, with the story encompassing both the city of Baldur's Gate, the largest and most affluent city in the region, and the lands south of it, including the Cloud Peaks, the Wood of Sharp Teeth, the Cloakwood forest, the town of Beregost and the village of Nashkel, and the fortress citadel of Candlekeep. In addition to the region, a variety of organisations from the Forgotten Realms setting also feature as part of the game's main story, including the Zhentarim, the Red Wizards of Thay, The Iron Throne, the Flaming Fist, The Chill, The Black Talons, and the Harpers. Characters Baldur's Gate includes around 25 player companions that can join with the PC. A number of the characters who appear include several who are canon to the official Forgotten Realms campaign setting, including Drizzt Do'Urden and Elminster. Story The player character is the young and orphaned ward of the mage Gorion. The two live in the ancient library fortress of Candlekeep. Abruptly, the Ward is instructed by Gorion to prepare to leave the citadel during the night with no explanation. That night, a mysterious armoured figure and his cohorts ambush the pair and order Gorion to hand over the Ward. Gorion refuses, and dies in the ensuing battle, while urging his Ward to escape. The next morning, the Ward encounters Imoen, a childhood friend and fellow orphan from Candlekeep, who had followed them in secret. With Candlekeep no longer accessible to them without Gorion's influence to circumvent its admission fee, and the city of Baldur's Gate currently closed off to outsiders due to bandit raids, the Ward resolves to investigate the cause of the region's Iron Crisis. Travelling to the mines of Nashkel, the main source of the region's iron, the Ward's party discovers that the mine's ore is being contaminated by a group of kobolds led by a half-orc, and that they and the bandits plaguing the region are being controlled by an organization known as the Iron Throne, a merchant outfit operating out of Baldur's Gate. After sabotaging a mine operated by the Iron Throne in the Cloakwood that would presumably give them total control over the region's iron, the Ward's party travels to the newly reopened Baldur's Gate. Invading the Throne's headquarters, the group learns that proof of the organization's involvement with the Iron Crisis was taken by one of the regional leaders when they and the rest of the leadership headed to Candlekeep for an important meeting. 
Revealing their findings to Duke Eltan, the leader of the Flaming Fist, the group receive a rare and valuable book, which would allow them access into Candlekeep, in order to spy on the meeting. During their investigations in the citadel's library, the Ward discovers a prophecy written by the ancient seer Alaundo, foretelling how the offspring created during the Time of Troubles by the dead god Bhaal, the Lord of Murder, will sow chaos until only one remains to become the new Lord of Murder. The Ward then finds a letter from Gorion revealing that the Ward is among the offspring of Bhaal, known as Bhaalspawn. During their stay at Candlekeep, the Ward's party is imprisoned for the murders of the Iron Throne leaders, regardless of whether or not they did so, until they can be transported to Baldur's Gate to be executed. Tethoril, a prominent keeper in Candlekeep, visits the party and reveals that a suspicious character the party met earlier, Koveras, is really the foster son of one of the now dead Iron Throne leaders. His name is Sarevok, the one responsible for Gorion's murder, and who also wishes to kill the Ward. Believing the Ward to be innocent, Tethoril transports the party into the catacombs beneath the fortress, where the party battle their way through doppelgängers taking on the forms of people the Ward knew in Candlekeep. Returning to Baldur's Gate, the Ward's party find themselves accused of causing the Iron Crisis on the orders of the Kingdom of Amn, assassinating one of the city's Grand Dukes, and poisoning Duke Eltan. Forced to stay hidden from the Flaming Fist, the party discovers that the Iron Throne orchestrated the Iron Crisis to gain control of iron through their mine in the Cloakwood, while using doppelgängers to weaken other merchant outfits, ensuring that they would have a monopoly on iron. With tensions rising between Baldur's Gate and Amn, the organization hoped to sell the stockpiled iron to the city at exorbitant prices. Afterward, they aimed to de-escalate tensions between Baldur's Gate and Amn. The party also discovers that Sarevok, having discovered that he was a Bhaalspawn, hoped to fuel distrust between Baldur's Gate and Amn by making each think the other was responsible for creating the crisis, and cause them to go to war. Sarevok believed that the resulting carnage would be enough to allow him to become the new Lord of Murder. Due to the Ward's similar background, he hired assassins to kill them. Sarevok remained loyal to his father until the Iron Throne's meeting in Candlekeep threatened his plans, which led Sarevok to eliminate him and the other regional leaders of the Iron Throne before taking over the outfit and transferring their stores of iron to the city in order to be seen as a savior. He was also responsible for the poisoning of Duke Eltan and the assassination of one of the four Grand Dukes. The Ward's party gain entry to the Ducal Palace, where the coronation of Sarevok as a Grand Duke of Baldur's Gate would be held, and present evidence of his schemes. Exposed, Sarevok flees into an ancient underground ruin beneath Baldur's Gate, with the Ward and the party following after. The Ward confronts Sarevok within an ancient temple to Bhaal, and defeats him, saving the Sword Coast and ending their brother's schemes. 
In the final ending cinematic, Sarevok's tainted soul departs his body and travels deep underground to a large circular chamber of alcoves, and destroys a statue of himself contained in one of the alcoves, whereupon it is revealed that the other alcoves each contain a statue of a Bhaalspawn that exists in Faerûn. Development and release Baldur's Gate began development in 1995 by Canadian game developer BioWare, a company founded by practicing physicians Ray Muzyka and Greg Zeschuk. The game was initially titled Forgotten Realms. According to Muzyka, "our head programmer has actually read every one of the [Forgotten Realms] books - everything, every single one of the short stories and the paperbacks. He made a point of it. He really wanted to immerse himself". The game required 90 man-years of development, which was spent simultaneously creating the game's content and the BioWare Infinity Engine. The primary script engine for the game (used mainly as a debugging tool) was Lua. DirectDraw was used for the graphics. Wasteland was a major influence on Baldur's Gate, particularly its design philosophy of having more than one possible method to achieve each goal. Unusually for the time, the graphics were not built from tiles; each background was individually rendered, which greatly extended the amount of time needed to create the game. At the time that the game was shipped, none of the sixty-member team had previously participated in the release of a video game. The time pressure to complete the game led to the use of simple areas and game design. Ray Muzyka said that the team held a "passion and a love of the art" and they developed a "collaborative design spirit". He believes that the game was successful because of the collaboration with Interplay. According to writer Luke Kristjanson, the character of Imoen was a late addition to fill a "non-psychotic-thief gap in the early levels". Kristjanson assembled Imoen's lines by editing voice-over for a guard character named Pique from an unused demo, and explained that her lack of voiced dialogue or standalone interactions with other party members throughout the game was due to budgetary constraints. The wife of Dean Anderson, a team member who later served as the art director of the 2009 role-playing game Dragon Age: Origins, was the basis for Imoen's facial features as depicted in her character portrait. Baldur's Gate was released on December 21, 1998, and was published by Black Isle Studios, an internal division of Interplay. Reception Sales According to Feargus Urquhart, Interplay's commercial forecasts for Baldur's Gate were "very low". He noted that the publisher's headquarters in Britain predicted zero sales in that region. Lifetime projections for the German market were "no more than 50,000" copies, reported Udo Hoffman of PC Player. Internally, BioWare's worldwide sales goal was 200,000 units, a number that PC Zones Dave Woods said would "justify work on the sequel". However, the game became an unexpected commercial hit. Ray Muzyka attributed this success in part to the Dungeons & Dragons license, and to the team's decision to use fan feedback during development, which he felt had increased the game's mass-market appeal. Following its shipment to retailers on December 21, Baldur's Gate began to sell at a "phenomenal rate", according to Mark Asher of CNET Gamecenter. He wrote at the time that its "first run of 50,000 copies sold out immediately and Interplay's elves are working hard to get more games to the stores". 
The title debuted in the United States at #3 on PC Data's computer game sales rankings for the week ending January 2, 1999. It sold through 55,071 copies in the country by the end of 1998, for revenues of $2.56 million. In its second week, Baldur's Gate rose to #2 in the United States. Internationally, it debuted at #1 on Media Control's computer game charts for the German market in the first half of January 1999, and reached first place on Chart-Track's equivalent for the United Kingdom by its second week. According to Interplay, Baldur's Gate also took #1 on the charts in Canada and France. Its global sales reached 175,000 units by mid-January 1999, a sales rate that the Los Angeles Times reported as Interplay's fastest ever. This performance led to a stock-price increase for the company. In the United States, Baldur's Gate remained in PC Data's weekly top 3 from January 10 through February 6. It claimed #1 for January, with sales of 80,500 copies and revenues of $3.6 million in the country that month. Supply shortages continued throughout much of January. The game likewise secured Media Control's #1 position for the entirety of that month, and held at #1 for the United Kingdom in its third week. Baldur's Gate subsequently took #4 for February in the German market and #3 in the United States, after holding in the latter country's weekly top 10 from February 7 – March 6. By mid-February, Gamecenter reported sales of 450,000 units for Baldur's Gate, which Asher called "the biggest hit Interplay has had since Descent" and a rebuttal to the common belief that role-playing games were commercially moribund. Worldwide sales totaled more than 500,000 copies by the end of that month. Despite these figures, Interplay posted a loss of $16 million for the fourth quarter of 1998, and of $28 million for the year. Brian Fargo attributed the losses in part to Baldur's Gate: he wrote that it "did not ship until the last days of 1998, which reduced shipments in the quarter to about half the projected volume". Baldur's Gate maintained its unbroken streak in PC Data's weekly top 10 through the week ending March 27, after which it was absent. It then took seventh for March, and was absent from the United States' monthly top 20 by April. However, it held in Chart-Track's British top 20 during April, after 13 weeks; and in Media Control's top 10 and top 20 during the second halves of March, April and May. Baldur's Gate's sales in the German market during its initial months reached 90,000 units, a success for the region. By the end of May 1999, it received a "Gold" award from the Verband der Unterhaltungssoftware Deutschland (VUD), for sales of at least 100,000 units across Germany, Austria and Switzerland. Coinciding with the release of the Tales of the Sword Coast expansion pack in the United States, Baldur's Gate returned to PC Data's top 10 for a week in May. Thereafter, it became a staple of the firm's monthly top 20 from May through August. Interplay reported worldwide sales of nearly 700,000 copies for Baldur's Gate by June, and it was the United States' second-best-selling computer game during the first half of 1999, behind SimCity 3000. As of September, it had sold above 300,000 units and earned roughly $15 million in revenues in the country during 1999 alone. A writer for PC Accelerator remarked that this success "created an almost audible sigh of relief from publisher Interplay". By November 1999, Baldur's Gate had sold roughly 1 million units worldwide. It claimed ninth place for 1999 in the United States, with a total of 356,448 sales that year.
At $15.7 million in revenue, it was the country's seventh-highest-grossing computer game of 1999. Sales of Baldur's Gate continued in 2000: by March, it had surpassed 500,000 copies sold in the United States, which led Desslock of GameSpot to describe the title as an "undisputed commercial blockbuster". U.S. sales had risen to 600,000 units by April 2001, while global sales totaled 1.5 million copies that May. The game proceeded to sell 83,208 units in the United States from February 2001 through the first week of November alone. Worldwide, Baldur's Gate ultimately surpassed 2.2 million sales by early 2003. By 2015 the game sold about 2.8 million copies. Reviews and awards Baldur's Gate received positive reviews from virtually every major computer gaming publication that reviewed it. At the time of the game's release, PC Gamer US said that Baldur's Gate "reigns supreme over every RPG currently available, and sets new standards for those to come". Computer Shopper called it "clearly the best Advanced Dungeons & Dragons (AD&D) game ever to grace a PC screen". Maximum PC magazine compared the gameplay to Diablo, but noted its more extensive selection of features and options. The pixel-based characters were panned, but the reviewer stated that "the gloriously rendered backgrounds make up for that shortcoming". The main criticism was of the problems with the path finding algorithm for non-player characters. Despite this, the game was deemed an "instant classic" because of the amount of customization allowed, the "fluid story lines", and the replayability. The reviewer from Pyramid felt that the "basic buzz was positive" surrounding the development of the game. The "actual results are a mixed bag, but there's real promise for the future" thanks to the inclusion of the Infinity Engine. Baldur's Gate was named the best computer role-playing game of 1998 by the Academy of Interactive Arts & Sciences, the Academy of Adventure Gaming Arts & Design, Computer Gaming World, the Game Developers Conference, Computer Games Strategy Plus, IGN, CNET Gamecenter, The Electric Playground, RPG Vault, PC Gamer US and GameSpot. IGN, Computer Games and RPG Vault also presented it with their overall "Game of the Year" awards. The editors of Computer Games wrote that Baldur's Gate "delivers everything you could ask for in a computer game." Baldur's Gate was #3 on CBR's 2020 "10 Of The Best DnD Stories To Start Off With" list — the article states that "beyond giving some insight on the Sword Coast, the game also provides a diverse cast of characters that your character can recruit into their party. This cast of character serves as a great source of inspiration to make interesting player characters". Legacy According to IGN, Baldur's Gate did much to revive the role-playing video game genre. John Harris of Gamasutra wrote that it "rescued computer D&D from the wastebasket". According to GameSpy, "Baldur's Gate was a triumph [that] single-handedly revived the [computer role-playing game] and almost made gamers forgive Interplay for Descent to Undermountain". IGN ranked Baldur's Gate No. 5 on their list of "The Top 11 Dungeons & Dragons Games of All Time" in 2014. In his article "Our favourite cities in PC gaming", Phil Savage of PC Gamer praised Bioware's level design for the titular city in Baldur's Gate, stating that "Baldur's Gate feels vast, exciting and dangerous—just like a proper city". Baldur's Gate was the first game in the Baldur's Gate series. 
It was followed by the expansion pack Tales of the Sword Coast (1999), then the sequel Baldur's Gate II: Shadows of Amn (2000) and its expansion pack Throne of Bhaal (2001). As of 2006, total sales for all releases in the series was almost five million copies. The series set the standard for other games using AD&D rules, especially those developed by BioWare and Black Isle Studios: Planescape: Torment (1999), Icewind Dale (2000), and Icewind Dale II (2002). The novel Baldur's Gate (1999) by Philip Athans was based on the game. Baldur's Gate was re-released along with its expansion in 2000 as Baldur's Gate Double Pack, and again in 2002 as a three CD collection entitled Baldur's Gate: The Original Saga. In 2002, the game and its expansion were released along with Icewind Dale, Icewind Dale: Heart of Winter and Planescape: Torment as the Black Isle Compilation. In 2004, it was re-released, this time along with Icewind Dale II, in Part Two of the compilation. Atari published the Baldur's Gate 4 in 1 Boxset including all four games on a combination of DVDs and CDs. Baldur's Gate and its expansion were released digitally on Good Old Games (later GOG.com) on September 23, 2010. It has also been made available via GameStop App as part of the D&D Anthology: The Master Collection, which also includes the expansion Baldur's Gate: Tales of the Sword Coast, Baldur's Gate II: Shadows of Amn, Baldur's Gate II: Throne of Bhaal, Icewind Dale, Icewind Dale: Heart of Winter, Icewind Dale: Trials of the Luremaster, Icewind Dale II, Planescape: Torment, and The Temple of Elemental Evil. On March 15, 2012 a remake was announced entitled Baldur's Gate: Enhanced Edition, originally slated for release in Summer 2012. Five days later, Overhaul Games announced that the Enhanced Edition would also be released for the Apple iPad. On September 14, Trent Oster, president of Overhaul Games, announced that the game's release would be delayed until November, citing an overwhelming response and a desire to "make the best Baldur's Gate possible". The game was launched for Microsoft Windows on November 28, 2012, for iPad running iOS 6 or greater on December 7, 2012, for Mac OS X on February 22, 2013, and for Android on April 17, 2014. Baldur's Gate III is currently in development by Larian Studios. Skybound Games, a division of Skybound Entertainment, brought Baldur's Gate: Enhanced Edition to the PlayStation 4, Xbox One, and Nintendo Switch on October 15, 2019. Notes References External links 1998 video games Baldur's Gate video games BioWare games Cancelled Dreamcast games Cancelled PlayStation (console) games Classic Mac OS games Cooperative video games Infinity Engine games Interactive Achievement Award winners Lua (programming language)-scripted video games Mobile games Multiplayer and single-player video games Origins Award winners Role-playing video games Video games adapted into novels Video games developed in Canada Video games featuring protagonists of selectable gender Video games scored by Michael Hoenig Video games with expansion packs Windows games
2473255
https://en.wikipedia.org/wiki/Rhesus%20of%20Thrace
Rhesus of Thrace
Rhesus (Ancient Greek: Ῥῆσος Rhêsos) is a mythical Thracian king in the Iliad, Book X, who fought on the side of the Trojans. Diomedes and Odysseus stole his team of fine horses during a night raid on the Trojan camp. Etymology His name (a Thracian anthroponym) probably derives from PIE *reg-, 'to rule', showing a satem-sound change. Family According to Homer, his father was Eioneus, who may be connected to the historic Eion in western Thrace, at the mouth of the Strymon, and the port of the later Amphipolis. Later writers provide Rhesus with a more exotic parentage, claiming that his mother was one of the Muses (Euterpe, Calliope or Terpsichore) and his father, the river god Strymon. Stephanus of Byzantium mentions the name of Rhesus' sister Sete, who had a son Bithys with Ares. Mythology Rhesus was raised by fountain nymphs and died without engaging in battle. He arrived late to Troy because his country had been attacked by Scythia right after he received word that the Greeks had attacked Troy. Dolon, who had gone out to spy on Agamemnon's army for Hector, was caught by Diomedes and Odysseus and proceeded to tell the two Argives about the newest arrivals, Thracians under the leadership of Rhesus. Dolon explained that Rhesus had the finest horses, as well as huge, golden armor that was suitable for gods rather than mortals. Because of Dolon's cowardice, Rhesus met his demise without ever getting the chance to defend himself or Troy. While the Thracians were sleeping, Diomedes and Odysseus attacked the camp in the dead of night, killing Rhesus in his tent and stealing his famous steeds. The event portrayed in the Iliad also provides the action of the play Rhesus, transmitted among the plays of Euripides. The mother of Rhesus, one of the nine Muses, then arrived and laid blame on all those responsible: Odysseus, Diomedes, and Athena. She also announced the imminent resurrection of Rhesus, who would become immortal but would be sent to stay in a cave. Scholia to the Iliad episode and the Rhesus agree in giving Rhesus a more heroic stature, incompatible with Homer's version. Rhesus is also named as one of the eight rivers flowing from Mount Ida to the sea that Poseidon turned against the wall the Achaeans had built, in order to knock it down. There was also a river in Bithynia named Rhesus, with Greek myth providing an attendant river god of the same name. Rhesus the Thracian king was himself associated with Bithynia through his love for the Bithynian huntress Arganthone, in the Erotika Pathemata ["Sufferings for Love"] by Parthenius of Nicaea, chapter 36. Namesake Rhesus Glacier on Anvers Island in Antarctica is named after Rhesus of Thrace, as is the Jovian asteroid 9142 Rhesus. Cultural depictions In the motion picture Hercules, Tobias Santelmann plays a character named Rhesus, who lives in the vicinity of Thrace but has little else in common with the traditional character, instead being a rebel against King Cotys. In the videogame Total War Saga: Troy, Rhesus becomes a playable character along with Memnon. His campaign involves unifying the Thracian tribes. Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website.
Euripides, The Rhesus of Euripides translated into English rhyming verse with explanatory notes by Gilbert Murray, LL.D., D.Litt, F.B.A., Regius Professor of Greek in the University of Oxford. Euripides. Gilbert Murray. New York. Oxford University Press. 1913. Online version at the Perseus Digital Library. Euripides, Euripidis Fabulae. vol. 3. Gilbert Murray. Oxford. Clarendon Press, Oxford. 1913. Greek text available at the Perseus Digital Library. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library. Maurus Servius Honoratus, In Vergilii carmina comentarii. Servii Grammatici qui feruntur in Vergilii carmina commentarii; recensuerunt Georgius Thilo et Hermannus Hagen. Georgius Thilo. Leipzig. B. G. Teubner. 1881. Online version at the Perseus Digital Library. Stephanus of Byzantium, Stephani Byzantii Ethnicorum quae supersunt, edited by August Meineike (1790-1870), published 1849. A few entries from this important ancient handbook of place names have been translated by Brady Kiesling. Online version at the Topos Text Project. External links Children of Potamoi People of the Trojan War Mythological kings of Thrace Kings in Greek mythology Characters in Greek mythology Greek mythology of Thrace
23390653
https://en.wikipedia.org/wiki/History%20of%20health%20care%20reform%20in%20the%20United%20States
History of health care reform in the United States
The history of health care reform in the United States has spanned many decades with health care reform having been the subject of political debate since the early part of the 20th century. Recent reforms remain an active political issue. Alternative reform proposals were offered by both of the major candidates in the 2008 and 2016 presidential elections. Federal health care proposals Late 18th century On July 16, 1798, President John Adams signed the first Federal public health law, "An act for the relief of sick and disabled Seamen." This assessed every seaman at American ports 20 cents a month. This was the first prepaid medical care plan in the United States. The money was used for the care of sick seamen and the building of seamen's hospitals. This act created the Marine Hospital Service under the Department of the Treasury. In 1802 Marine Hospitals were operating in Boston; Newport; Norfolk; and Charleston, S.C. and medical services were contracted in other ports. 19th century Another of the earliest health care proposals at the federal level was the 1854 Bill for the Benefit of the Indigent Insane, which would have established asylums for the indigent insane, as well as the blind and deaf, via federal land grants to the states. This bill was proposed by activist Dorothea Dix and passed both houses of Congress, but was vetoed by President Franklin Pierce. Pierce argued that the federal government should not commit itself to social welfare, which he stated was the responsibility of the states. After the American Civil War, the federal government established the first system of medical care in the South, known as the Freedmen's Bureau. The government constructed 40 hospitals, employed over 120 physicians, and treated well over one million sick and dying former slaves. The hospitals were short-lived, lasting from 1865 to 1870. Freedmen's Hospital in Washington, D.C. remained in operation until the late nineteenth century when it became part of Howard University. The next major initiative came in the New Deal legislation of the 1930s, in the context of the Great Depression. 1900s–1920s In the first 10–15 years of the 20th century Progressivism was influencing both Europe and the United States. Many European countries were passing the first social welfare acts and forming the basis for compulsory government-run or voluntary subsidized health care programs. The United Kingdom passed the National Insurance Act of 1911 that provided medical care and replacement of some lost wages if a worker became ill. It did not, however, cover spouses or dependents. As early as the 1912 presidential election, former president Theodore Roosevelt vaguely called for the creation of a national health service in the 15th plank of his Progressive Party platform. However, neither Roosevelt nor his opponents discussed health care plans in detail, and Roosevelt lost the election to Woodrow Wilson. A unique American history of decentralization in government, limited government, and a tradition of classical liberalism are all possible explanations for the suspicion around the idea of compulsory government-run insurance. The American Medical Association (AMA) was also deeply and vocally opposed to the idea, which it labeled "socialized medicine". In addition, many urban US workers already had access to sickness insurance through employer-based sickness funds. Early industrial sickness insurance purchased through employers was one influential economic origin of the current American health care system. 
These late-19th-century and early-20th-century sickness insurance schemes were generally inexpensive for workers: their small scale and local administration kept overhead low, and because the people who purchased insurance were all employees of the same company, that prevented people who were already ill from buying in. The presence of employer-based sickness funds may have contributed to why the idea of government-based insurance did not take hold in the United States at the same time that the United Kingdom and the rest of Europe was moving toward socialized schemes like the UK National Insurance Act of 1911. Thus, at the beginning of the 20th century, Americans were used to associating insurance with employers, which paved the way for the beginning of third-party health insurance in the 1930s. 1930s–1950s With the Great Depression, more and more people could not afford medical services. In 1933, Franklin D. Roosevelt asked Isidore Falk and Edgar Sydenstricter to help draft provisions to Roosevelt's pending Social Security legislation to include publicly funded health care programs. These reforms were attacked by the American Medical Association as well as state and local affiliates of the AMA as "compulsory health insurance." Roosevelt ended up removing the health care provisions from the bill in 1935. Fear of organized medicine's opposition to universal health care became standard for decades after the 1930s. During this time, individual hospitals began offering their own insurance programs, the first of which became Blue Cross. Groups of hospitals as well as physician groups (i.e. Blue Shield) soon began selling group health insurance policies to employers, who then offered them to their employees and collected premiums. In the 1940s Congress passed legislation that supported the new third-party insurers. During World War II, industrialist Henry J. Kaiser used an arrangement in which doctors bypassed traditional fee-for-care and were contracted to meet all the medical needs for his employees on construction projects up and down the West coast. After the war ended, he opened the plan up to the public as a non-profit organization under the name Kaiser Permanente. During World War II, the federal government introduced wages and price controls. In an effort to continue attracting and retaining employees without violating those controls, employers offered and sponsored health insurance to employees in lieu of gross pay. This was a beginning of the third-party paying system that began to replace direct out-of-pocket payments. Following the world war, President Harry Truman called for universal health care as a part of his Fair Deal in 1949 but strong opposition stopped that part of the Fair Deal. However, in 1946 the National Mental Health Act was passed, as was the Hospital Survey and Construction Act, or Hill-Burton Act. In 1951 the IRS declared group premiums paid by employers as a tax-deductible business expense, which solidified the third-party insurance companies' place as primary providers of access to health care in the United States. 1960s–1980s 1960s In the Civil Rights era of the 1960s and early 1970s, public opinion shifted towards the problem of the uninsured, especially the elderly. Since care for the elderly would someday affect everyone, supporters of health care reform were able to avoid the worst fears of "socialized medicine," which was considered a dirty word for its association with communism. After Lyndon B. 
Johnson was elected president in 1964, the stage was set for the passage of Medicare and Medicaid in 1965. Johnson's plan was not without opposition, however. "Opponents, especially the AMA and insurance companies, opposed the Johnson administration's proposal on the grounds that it was compulsory, it represented socialized medicine, it would reduce the quality of care, and it was 'un-American.'" These views notwithstanding, the Medicare program was established when the Social Security Amendments of 1965 were signed into law on July 30, 1965, by President Lyndon B. Johnson. Medicare is a social insurance program administered by the United States government, providing health insurance coverage to people who are either age 65 and over, or who meet other special criteria. 1970s In 1970, three proposals for single-payer universal national health insurance financed by payroll taxes and general federal revenues were introduced in the U.S. Congress. In February 1970, Representative Martha Griffiths (D-MI) introduced a national health insurance bill—without any cost sharing—developed with the AFL–CIO. In April 1970, Senator Jacob Javits (R-NY) introduced a bill to extend Medicare to all—retaining existing Medicare cost sharing and coverage limits—developed after consultation with Governor Nelson Rockefeller (R-NY) and former Johnson administration HEW Secretary Wilbur Cohen. In August 1970, Senator Ted Kennedy (D-MA) introduced a bipartisan national health insurance bill—without any cost sharing—developed with the Committee for National Health Insurance founded by United Auto Workers (UAW) president Walter Reuther, with a corresponding bill introduced in the House the following month by Representative James Corman (D-CA). In September 1970, the Senate Labor and Public Welfare Committee held the first congressional hearings in twenty years on national health insurance. In January 1971, Kennedy began a decade as chairman of the Health subcommittee of the Senate Labor and Public Welfare Committee, and introduced a reconciled bipartisan Kennedy–Griffiths bill proposing universal national health insurance. In February 1971, President Richard Nixon proposed more limited health insurance reform—an employer mandate to offer private health insurance if employees volunteered to pay 25 percent of premiums, federalization of Medicaid for the poor with dependent minor children, merger of Medicare Parts A and B with elimination of the Medicare Part B $5.30 monthly premium, and support for health maintenance organizations (HMOs). Hearings on national health insurance were held by the House Ways and Means Committee and the Senate Finance Committee in 1971, but no bill had the support of committee chairmen Representative Wilbur Mills (D-AR) or Senator Russell Long (D-LA). In October 1972, Nixon signed the Social Security Amendments of 1972 extending Medicare to those under 65 who have been severely disabled for over two years or have end stage renal disease (ESRD), and gradually raising the Medicare Part A payroll tax from 1.1% to 1.45% in 1986. In the 1972 presidential election, Nixon won re-election in a landslide over the only Democratic presidential nominee not endorsed by the AFL–CIO in its history, Senator George McGovern (D-SD), who was a cosponsor of the Kennedy-Griffiths bill, but did not make national health insurance a major issue in his campaign. 
In October 1973, Long and Senator Abraham Ribicoff (D-CT) introduced a bipartisan bill for catastrophic health insurance coverage for workers financed by payroll taxes and for Medicare beneficiaries, and federalization of Medicaid with extension to the poor without dependent minor children. In February 1974, Nixon proposed more comprehensive health insurance reform—an employer mandate to offer private health insurance if employees volunteered to pay 25 percent of premiums, replacement of Medicaid by state-run health insurance plans available to all with income-based premiums and cost sharing, and replacement of Medicare with a new federal program that eliminated the limit on hospital days, added income-based out-of-pocket limits, and added outpatient prescription drug coverage. In April 1974, Kennedy and Mills introduced a bill for near-universal national health insurance with benefits identical to the expanded Nixon plan—but with mandatory participation by employers and employees through payroll taxes and with lower cost sharing—both plans were criticized by labor, consumer, and senior citizens organizations because of their substantial cost sharing. In August 1974, after Nixon's resignation and President Gerald Ford's call for health insurance reform, Mills tried to advance a compromise based on Nixon's plan—but with mandatory participation by employers and employees through premiums to private health insurance companies and catastrophic health insurance coverage financed by payroll taxes—but gave up when unable to get more than a 13–12 majority of his committee to support his compromise plan. In December 1974, Mills resigned as chairman of the Ways and Means Committee and was succeeded by Representative Al Ullman (D-OR), who opposed payroll tax and general federal revenue financing of national health insurance. In January 1975, in the midst of the worst recession in the four decades since the Great Depression, Ford said he would veto any health insurance reform, and Kennedy returned to sponsoring his original universal national health insurance bill. In April 1975, with one third of its sponsors gone after the November 1974 election, the AMA replaced its "Medicredit" plan with an employer mandate proposal similar to Nixon's 1974 plan. In January 1976, Ford proposed adding catastrophic coverage to Medicare, offset by increased cost sharing. In April 1976, Democratic presidential candidate Jimmy Carter proposed health care reform that included key features of Kennedy's universal national health insurance bill. In December 1977, President Carter told Kennedy his bill must be changed to preserve a large role for private insurance companies, minimize federal spending (precluding payroll tax financing), and be phased-in so not to interfere with balancing the federal budget. Kennedy and organized labor compromised and made the requested changes, but broke with Carter in July 1978 when he would not commit to pursuing a single bill with a fixed schedule for phasing-in comprehensive coverage. In May 1979, Kennedy proposed a new bipartisan universal national health insurance bill—choice of competing federally-regulated private health insurance plans with no cost sharing financed by income-based premiums via an employer mandate and individual mandate, replacement of Medicaid by government payment of premiums to private insurers, and enhancement of Medicare by adding prescription drug coverage and eliminating premiums and cost sharing. 
In June 1979, Carter proposed more limited health insurance reform—an employer mandate to provide catastrophic private health insurance plus coverage without cost sharing for pregnant women and infants, federalization of Medicaid with extension to the very poor without dependent minor children, and enhancement of Medicare by adding catastrophic coverage. In November 1979, Long led a bipartisan conservative majority of his Senate Finance Committee to support an employer mandate to provide catastrophic-only private health insurance and enhancement of Medicare by adding catastrophic coverage, but abandoned efforts in May 1980 due to budget constraints in the face of a deteriorating economy. 1980s The Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA) amended the Employee Retirement Income Security Act of 1974 (ERISA) to give some employees the ability to continue health insurance coverage after leaving employment. Clinton initiative Health care reform was a major concern of the Bill Clinton administration headed up by First Lady Hillary Clinton. The 1993 Clinton health care plan included mandatory enrollment in a health insurance plan, subsidies to guarantee affordability across all income ranges, and the establishment of health alliances in each state. Every citizen or permanent resident would thus be guaranteed medical care. The bill faced withering criticism by Republicans, led by William Kristol, who communicated his concern that a Democratic health care bill would "revive the reputation of... Democrats as the generous protector of middle-class interests. And it will at the same time strike a punishing blow against Republican claims to defend the middle class by restraining government." The bill was not enacted into law. The "Health Security Express," a cross-country tour by multiple buses carrying supporters of President Clinton's national health care reform, started at the end of July 1994. During each stop, the bus riders would talk about their personal experiences, health care disasters and why they felt it was important for all Americans to have health insurance. 2000-2008: Bush era debates In 2000 the Health Insurance Association of America (HIAA) partnered with Families USA and the American Hospital Association (AHA) on a "strange bedfellows" proposal intended to seek common ground in expanding coverage for the uninsured. In 2001, a Patients' Bill of Rights was debated in Congress, which would have provided patients with an explicit list of rights concerning their health care. This initiative was essentially taking some of ideas found in the Consumers' Bill of Rights and applying it to the field of health care. It was undertaken in an effort to ensure the quality of care of all patients by preserving the integrity of the processes that occur in the health care industry. Standardizing the nature of health care institutions in this manner proved rather provocative. In fact, many interest groups, including the American Medical Association (AMA) and the pharmaceutical industry came out vehemently against the congressional bill. Basically, providing emergency medical care to anyone, regardless of health insurance status, as well as the right of a patient to hold their health plan accountable for any and all harm done proved to be the biggest stumbling blocks for this bill. As a result of this intense opposition, the Patients' Bill of Rights initiative eventually failed to pass Congress in 2002. 
As president, Bush signed into law the Medicare Prescription Drug, Improvement, and Modernization Act, which included a prescription drug plan for elderly and disabled Americans. During the 2004 presidential election, both the George Bush and John Kerry campaigns offered health care proposals. Bush's proposals for expanding health care coverage were more modest than those advanced by Senator Kerry. Several estimates were made comparing the cost and impact of the Bush and Kerry proposals. While the estimates varied, they all indicated that the increase in coverage and the funding requirements of the Bush plan would both be lower than those of the more comprehensive Kerry plan. In 2006 the HIAA's successor organization, America's Health Insurance Plans (AHIP), issued another set of reform proposals. In January 2007, Rep. John Conyers, Jr. (D-MI) introduced the United States National Health Care Act (HR 676) in the House of Representatives. As of October 2008, HR 676 had 93 co-sponsors. Also in January 2007, Senator Ron Wyden introduced the Healthy Americans Act (S. 334) in the Senate. As of October 2008, S. 334 had 17 cosponsors. Also in 2007, AHIP issued a proposal for guaranteeing access to coverage in the individual health insurance market and a proposal for improving the quality and safety of the U.S. health care system. "Economic Survey of the United States 2008: Health Care Reform" by the Organisation for Economic Co-operation and Development, published in December 2008, said that: Tax benefits of employer-based insurance should be abolished. The resulting tax revenues should be used to subsidize the purchase of insurance by individuals. These subsidies, "which could take many forms, such as direct subsidies or refundable tax credits, would improve the current situation in at least two ways: they would reach those who do not now receive the benefit of the tax exclusion; and they would encourage more cost-conscious purchase of health insurance plans and health care services as, in contrast to the uncapped tax exclusion, such subsidies would reduce the incentive to purchase health plans with little cost sharing." In December 2008, the Institute for America's Future, together with the chairman of the Ways and Means Health Subcommittee, Pete Stark, launched a proposal from Jacob Hacker, co-director of the U.C. Berkeley School of Law Center on Health, that in essence said that the government should offer a public health insurance plan to compete on a level playing field with private insurance plans. This was said to be the basis of the Obama/Biden plan. The argument is based on three basic points. Firstly, public plans' success at managing cost control (Medicare medical spending rose 4.6% p.a. compared with 7.3% for private health insurance on a like-for-like basis in the 10 years from 1997 to 2006). Secondly, public insurance has better payment and quality-improvement methods because of its large databases, new payment approaches, and care-coordination strategies. Thirdly, it can set a standard against which private plans must compete, which would help unite the public around the principle of broadly shared risk while building greater confidence in government in the long term. Also in December 2008, America's Health Insurance Plans (AHIP) announced a set of proposals which included setting a national goal to reduce the projected growth in health care spending by 30%. AHIP said that if this goal were achieved, it would result in cumulative five-year savings of $500 billion. 
Among the proposals was the establishment of an independent comparative effectiveness entity that compares and evaluates the benefits, risks, and incremental costs of new drugs, devices, and biologics. An earlier "Technical Memo" published by AHIP in June 2008 had estimated that a package of reforms involving comparative effectiveness research, health information technology (HIT), medical liability reform, "pay-for-performance" and disease management and prevention could reduce U.S. national health expenditures "by as much as 9 percent by the year 2025, compared with current baseline trends." Debate in the 2008 presidential election Although both candidates had a health care system that revolved around private insurance markets with help from public insurance programs, both had different opinions on how this system should operate when put in place. Senator John McCain proposed a plan that focused on making health care more affordable. The senator proposed to replace special tax breaks for persons with employer-based health care coverage with a universal system of tax credits. These credits, $2,500 for an individual and $5,000 for a family would be available to Americans regardless of income, employment or tax liability. In his plan, Senator McCain proposed the Guaranteed Access Plan which would provide federal assistance to the states to secure health insurance coverage through high-risk areas. Senator McCain also proposed the idea of an open-market competition system. This would give families the opportunity to go across state lines and buy health plans, expanding personal options for affordable coverage and force the health insurance companies to compete over the consumers’ money on an unprecedented scale. Barack Obama called for universal health care. His health care plan called for the creation of a National Health Insurance Exchange that would include both private insurance plans and a Medicare-like government run option. Coverage would be guaranteed regardless of health status, and premiums would not vary based on health status either. It would have required parents to cover their children, but did not require adults to buy insurance. The Philadelphia Inquirer reported that the two plans had different philosophical focuses. They described the purpose of the McCain plan as to "make insurance more affordable," while the purpose of the Obama plan was for "more people to have health insurance." The Des Moines Register characterized the plans similarly. A poll released in early November, 2008, found that voters supporting Obama listed health care as their second priority; voters supporting McCain listed it as fourth, tied with the war in Iraq. Affordability was the primary health care priority among both sets of voters. Obama voters were more likely than McCain voters to believe government can do much about health care costs. 2009 reform debate In March 2009 AHIP proposed a set of reforms intended to address waste and unsustainable growth in the current health care market. These reforms included: An individual insurance mandate with a financial penalty as a quid pro quo for guaranteed issue Updates to the Medicare physician fee schedule; Setting standards and expectations for safety and quality of diagnostics; Promoting care coordination and patient-centered care by designating a "medical home" that would replace fragmented care with a coordinated approach to care. 
Physicians would receive a periodic payment for a set of defined services, such as care coordination that integrates all treatment received by a patient throughout an illness or an acute event. This would promote ongoing comprehensive care management, optimize patients' health status and assist patients in navigating the health care system. Linking payment to quality, adherence to guidelines, achieving better clinical outcomes, giving better patient experience and lowering the total cost of care. Bundled payments (instead of individual billing) for the management of chronic conditions in which providers would have shared accountability and responsibility for the management of chronic conditions such as coronary artery disease, diabetes, chronic obstructive pulmonary disease and asthma, and similarly a fixed-rate, all-inclusive average payment for acute care episodes, which tend to follow a pattern (even though some acute care episodes may cost more or less than this). On May 5, 2009, the US Senate Finance Committee held hearings on health care reform. No supporter of a single-payer health care system was included on the panel of invited stakeholders. The panel featured Republican senators and industry panelists who argued against any kind of expanded health care coverage. The exclusion of the single-payer option from the discussion caused significant protest by doctors in the audience. There is one bill currently before Congress but others are expected to be presented soon. A merged single bill is the likely outcome. The Affordable Health Choices Act is currently before the House of Representatives and the main sticking points at the markup stage of the bill have been in two areas: whether the government should provide a public insurance plan option to compete head to head with the private insurance sector, and whether comparative effectiveness research should be used to contain costs met by the public providers of health care. Some Republicans have expressed opposition to the public insurance option, believing that the government will not compete fairly with the private insurers. Republicans have also expressed opposition to the use of comparative effectiveness research to limit coverage in any public sector plan (including any public insurance scheme or any existing government scheme such as Medicare), which they regard as rationing by the back door. Democrats have claimed that the bill will not do this but are reluctant to introduce a clause that would prevent it, arguing that such a clause would limit the right of the DHHS to prevent payments for services that clearly do not work. America's Health Insurance Plans, the umbrella organization of private health insurance providers in the United States, has recently urged the use of CER to cut costs by restricting access to treatments that are ineffective or not cost-effective. Republican amendments to the bill would not prevent the private insurance sector from citing CER to restrict coverage and apply rationing of their funds, a situation which would create a competition imbalance between the public and private sector insurers. A proposed but not yet enacted short bill with the same effect is the Republican-sponsored Patients Act 2009. On June 15, 2009, the U.S. Congressional Budget Office (CBO) issued a preliminary analysis of the major provisions of the Affordable Health Choices Act. The CBO estimated the ten-year cost to the federal government of the major insurance-related provisions of the bill at approximately $1.0 trillion. 
Over the same ten-year period from 2010 to 2019, the CBO estimated that the bill would reduce the number of uninsured Americans by approximately 16 million. At about the same time, the Associated Press reported that the CBO had given Congressional officials an estimate of $1.6 trillion for the cost of a companion measure being developed by the Senate Finance Committee. In response to these estimates, the Senate Finance Committee delayed action on its bill and began work on reducing the cost of the proposal to $1.0 trillion, and the debate over the Affordable Health Choices Act became more acrimonious. Congressional Democrats were surprised by the magnitude of the estimates, and the uncertainty created by the estimates increased the confidence of Republicans critical of the Obama administration's approach to health care. However, in a June New York Times editorial, economist Paul Krugman argued that despite these estimates universal health coverage was still affordable. "The fundamental fact is that we can afford universal health insurance—even those high estimates were less than the $1.8 trillion cost of the Bush tax cuts." In contrast to earlier advocacy of a publicly funded health care program, in August 2009 Obama administration officials announced they would support a health insurance cooperative in response to deep political unrest amongst Congressional Republicans and amongst citizens in town hall meetings held across America. However, in a June 2009 NBC News/Wall Street Journal survey, 76% said it was either "extremely" or "quite" important to "give people a choice of both a public plan administered by the federal government and a private plan for their health insurance." During the summer of 2009, members of the "Tea Party" protested against proposed health care reforms. Former insurance PR executive Wendell Potter of the Center for Media and Democracy (whose funding comes from groups such as the Tides Foundation) argues that the hyperbole generated by this phenomenon is a form of corporate astroturfing, which he says he used to write for CIGNA. Opponents of more government involvement, such as Phil Kerpen of Americans for Prosperity (whose funding comes mainly from the Koch Industries corporation), counter-argue that those corporations oppose a public plan, but that some try to push for government actions that will unfairly benefit them, like employer mandates forcing private companies to buy health insurance. Journalist Ben Smith has referred to mid-2009 as "The Summer of Astroturf" given the organizing and coordinating efforts made by various groups on both pro- and anti-reform sides. Healthcare debate, 2008–2010 Healthcare reform was a major topic of discussion during the 2008 Democratic presidential primaries. As the race narrowed, attention focused on the plans presented by the two leading candidates, New York Senator Hillary Clinton and the eventual nominee, Illinois Senator Barack Obama. Each candidate proposed a plan to cover the approximately 45 million Americans estimated not to have health insurance at some point each year. Clinton's plan would have required all Americans to obtain coverage (in effect, an individual health insurance mandate), while Obama's provided a subsidy but did not include a mandate. During the general election, Obama said that fixing healthcare would be one of his top four priorities if he won the presidency. 
After his inauguration, Obama announced to a joint session of Congress in February 2009 his intent to work with Congress to construct a plan for healthcare reform. By July, a series of bills were approved by committees within the House of Representatives. On the Senate side, from June through September, the Senate Finance Committee held a series of 31 meetings to develop a healthcare reform bill. This group – in particular, Senators Max Baucus (D-MT), Chuck Grassley (R-IA), Kent Conrad (D-ND), Olympia Snowe (R-ME), Jeff Bingaman (D-NM), and Mike Enzi (R-WY) – met for more than 60 hours, and the principles that they discussed, in conjunction with the other Committees, became the foundation of the Senate's healthcare reform bill. With universal healthcare as one of the stated goals of the Obama Administration, Congressional Democrats and health policy experts like Jonathan Gruber and David Cutler argued that guaranteed issue would require both community rating and an individual mandate to prevent adverse selection or free riding from creating an insurance death spiral; they convinced Obama that this was necessary, persuading him to accept Congressional proposals that included a mandate. This approach was preferred because the President and Congressional leaders concluded that more liberal plans, such as Medicare-for-all, could not win filibuster-proof support in the Senate. By deliberately drawing on bipartisan ideas – the same basic outline was supported by former Senate Majority Leaders Howard Baker (R-TN), Bob Dole (R-KS), Tom Daschle (D-SD) and George Mitchell (D-ME) – the bill's drafters hoped to increase the chances of getting the necessary votes for passage. However, following the adoption of an individual mandate as a central component of the proposed reforms by Democrats, Republicans began to oppose the mandate and threaten to filibuster any bills that contained it. Senate Minority Leader Mitch McConnell (R-KY), who led the Republican Congressional strategy in responding to the bill, calculated that Republicans should not support the bill, and worked to keep party discipline and prevent defections: Republican Senators, including those who had supported previous bills with a similar mandate, began to describe the mandate as "unconstitutional". Writing in The New Yorker, Ezra Klein stated that "the end result was... a policy that once enjoyed broad support within the Republican Party suddenly faced unified opposition." The New York Times subsequently noted: "It can be difficult to remember now, given the ferocity with which many Republicans assail it as an attack on freedom, but the provision in President Obama's healthcare law requiring all Americans to buy health insurance has its roots in conservative thinking." The reform negotiations also attracted a great deal of attention from lobbyists, including deals between certain lobbies and the advocates of the law to win the support of groups who had opposed past reform efforts, such as in 1993. The Sunlight Foundation documented many of the reported ties between "the healthcare lobbyist complex" and politicians in both major parties. During the August 2009 summer congressional recess, many members went back to their districts and held town hall meetings to solicit public opinion on the proposals. Over the recess, the Tea Party movement organized protests and many conservative groups and individuals targeted congressional town hall meetings to voice their opposition to the proposed reform bills. 
There were also many threats made against members of Congress over the course of the Congressional debate, and many were assigned extra protection. To maintain the progress of the legislative process, when Congress returned from recess, in September 2009 President Obama delivered a speech to a joint session of Congress supporting the ongoing Congressional negotiations, to re-emphasize his commitment to reform and again outline his proposals. In it he acknowledged the polarization of the debate, and quoted a letter from the late-Senator Ted Kennedy urging on reform: "what we face is above all a moral issue; that at stake are not just the details of policy, but fundamental principles of social justice and the character of our country." On November 7, the House of Representatives passed the Affordable Health Care for America Act on a 220–215 vote and forwarded it to the Senate for passage. Senate The Senate began work on its own proposals while the House was still working on the Affordable Health Care for America Act. Instead, the Senate took up H.R. 3590, a bill regarding housing tax breaks for service members. As the United States Constitution requires all revenue-related bills to originate in the House, the Senate took up this bill since it was first passed by the House as a revenue-related modification to the Internal Revenue Code. The bill was then used as the Senate's vehicle for their healthcare reform proposal, completely revising the content of the bill. The bill as amended would ultimately incorporate elements of proposals that were reported favorably by the Senate Health and Finance committees. With the Republican minority in the Senate vowing to filibuster any bill that they did not support, requiring a cloture vote to end debate, 60 votes would be necessary to get passage in the Senate. At the start of the 111th Congress, Democrats had only 58 votes; the Senate seat in Minnesota that would be won by Al Franken was still undergoing a recount, and Arlen Specter was still a Republican. To reach 60 votes, negotiations were undertaken to satisfy the demands of moderate Democrats, and to try to bring aboard several Republican Senators; particular attention was given to Bob Bennett (R-UT), Chuck Grassley (R-IA), Mike Enzi (R-WY), and Olympia Snowe (R-ME). Negotiations continued even after July 7—when Al Franken was sworn into office, and by which time Arlen Specter had switched parties—because of disagreements over the substance of the bill, which was still being drafted in committee, and because moderate Democrats hoped to win bipartisan support. However, on August 25, before the bill could come up for a vote, Ted Kennedy—a long-time advocate for healthcare reform—died, depriving Democrats of their 60th vote. Before the seat was filled, attention was drawn to Senator Snowe because of her vote in favor of the draft bill in the Finance Committee on October 15, however she explicitly stated that this did not mean she would support the final bill. Paul Kirk was appointed as Senator Kennedy's temporary replacement on September 24. Following the Finance Committee vote, negotiations turned to the demands of moderate Democrats to finalize their support, whose votes would be necessary to break the Republican filibuster. Majority Leader Harry Reid focused on satisfying the centrist members of the Democratic caucus until the hold-outs narrowed down to Connecticut's Joe Lieberman, an independent who caucused with Democrats, and Nebraska's Ben Nelson. 
Lieberman, despite intense negotiations by Reid in search of a compromise, refused to support a public option; the provision was dropped only after Lieberman committed to voting for the bill if it was not included, even though it had majority support in Congress. There was debate among supporters of the bill about the importance of the public option, although the vast majority of supporters concluded that it was a minor part of the reform overall, and that Congressional Democrats' fight for it won various concessions; this included conditional waivers allowing states to set up state-based public options, for example Vermont's Green Mountain Care. With every other Democrat now in favor and every other Republican now overtly opposed, the White House and Reid moved on to addressing Senator Nelson's concerns in order to win filibuster-proof support for the bill; they had by this point concluded that "it was a waste of time dealing with [Snowe]" because, after her vote for the draft bill in the Finance Committee, Snowe had come under intense pressure from the Republican Senate Leadership, which opposed reform. (Snowe retired at the end of her term, citing partisanship and polarization). After a final 13-hour negotiation, Nelson's support for the bill was won after two concessions: a compromise on abortion, modifying the language of the bill "to give states the right to prohibit coverage of abortion within their own insurance exchanges," which would require consumers to pay for the procedure out-of-pocket if the state so decided; and an amendment to offer a higher rate of Medicaid reimbursement for Nebraska. The latter half of the compromise was derisively referred to as the "Cornhusker Kickback" and was later repealed by the subsequent reconciliation amendment bill. On December 23, the Senate voted 60–39 to end debate on the bill: a cloture vote to end the filibuster by opponents. The bill then passed by a vote of 60–39 on December 24, 2009, with all Democrats and two independents voting for, and all Republicans except one voting against (Jim Bunning (R-KY), who did not vote). The bill was endorsed by the AMA and AARP. Several weeks after the vote, on January 19, 2010, Massachusetts Republican Scott Brown was elected to the Senate in a special election to replace the late Ted Kennedy, having campaigned on giving the Republican minority the 41st vote needed to sustain filibusters, even signing autographs as "Scott 41." The special election had become significant to the reform debate because of its effects on the legislative process. The first was a psychological one: the symbolic importance of losing the traditionally Democratic (‘blue’) Massachusetts seat formerly held by Ted Kennedy, a staunch supporter of reform, made many Congressional Democrats concerned about the political cost of passing a bill. The second effect was more practical: the loss of the Democrats' supermajority complicated the legislative strategy of reform proponents. House The election of Scott Brown meant Democrats could no longer break a filibuster in the Senate. In response, White House Chief of Staff Rahm Emanuel argued that the Democrats should scale back to a less ambitious bill; House Speaker Nancy Pelosi pushed back, dismissing Emanuel's scaled-down approach as "Kiddie Care." Obama also remained insistent on comprehensive reform, and the news that Anthem in California intended to raise premium rates for its patients by as much as 39% gave him a new line of argument to reassure nervous Democrats after Scott Brown's win. 
On February 22, Obama laid out a "Senate-leaning" proposal to consolidate the bills. He also held a meeting, on February 25, with leaders of both parties, urging passage of a reform bill. The summit proved successful in shifting the political narrative away from the Massachusetts loss back to healthcare policy. With Democrats having lost a filibuster-proof supermajority in the Senate, but having already passed the Senate bill with 60 votes on December 24, the most viable option for the proponents of comprehensive reform was for the House to abandon its own health reform bill, the Affordable Health Care for America Act, and pass the Senate's bill, The Patient Protection and Affordable Care Act, instead. Various health policy experts encouraged the House to pass the Senate version of the bill. However, House Democrats were not happy with the content of the Senate bill, and had expected to be able to negotiate changes in a House–Senate Conference before passing a final bill. With that option off the table, since any bill that emerged from Conference and differed from the Senate bill would have to be passed in the Senate over another Republican filibuster, most House Democrats agreed to pass the Senate bill on condition that it be amended by a subsequent bill. They drafted the Health Care and Education Reconciliation Act, which could be passed via the reconciliation process. Unlike under regular order, under the Congressional Budget Act of 1974 reconciliation cannot be subject to a filibuster, which requires 60 votes to break, but the process is limited to budget changes; this is why the procedure could not have been used to pass a comprehensive reform bill such as the ACA in the first place, as such a bill contains inherently non-budgetary regulations. While the already-passed Senate bill could not have been put through reconciliation, most of the House Democrats' demands were budgetary: "these changes – higher subsidy levels, different kinds of taxes to pay for them, nixing the Nebraska Medicaid deal – mainly involve taxes and spending. In other words, they're exactly the kinds of policies that are well-suited for reconciliation." The remaining obstacle was a pivotal group of pro-life Democrats, initially reluctant to support the bill, led by Congressman Bart Stupak. The group found the possibility of federal funding for abortion substantive enough to warrant opposition. The Senate bill had not included language that satisfied their abortion concerns, but they could not include additional such language in the reconciliation bill, as it would be outside the scope of the process with its budgetary limits. Instead, President Obama issued Executive Order 13535, reaffirming the principles in the Hyde Amendment. This concession won the support of Stupak and members of his group and assured passage of the bill. The House passed the Senate bill with a vote of 219 to 212 on March 21, 2010, with 34 Democrats and all 178 Republicans voting against it. The following day, Republicans introduced legislation to repeal the bill. Obama signed the ACA into law on March 23, 2010. The amendment bill, The Health Care and Education Reconciliation Act, was also passed by the House on March 21, then by the Senate via reconciliation on March 25, and finally signed by President Obama on March 30. State and city reform efforts A few states have taken steps toward universal health care coverage, most notably Minnesota, Massachusetts and Connecticut. 
Examples include the Massachusetts 2006 Health Reform Statute and Connecticut's SustiNet plan to provide health care to state residents. The influx of more than a quarter of a million newly insured residents has led to overcrowded waiting rooms and overworked primary-care physicians who were already in short supply in Massachusetts. Other states, while not attempting to insure all of their residents, cover large numbers of people by reimbursing hospitals and other health care providers using what is generally characterized as a charity care scheme; New Jersey is an example of a state that employs the latter strategy. Several single payer referendums have been proposed at the state level, but so far all have failed to pass: California in 1994, Massachusetts in 2000, and Oregon in 2002. The state legislature of California twice passed SB 840, The Health Care for All Californians Act, a single-payer health care system. Both times, Governor Arnold Schwarzenegger (R) vetoed the bill, once in 2006 and again in 2008. The percentage of residents that are uninsured varies from state to state. In 2008 Texas had the highest percentage of residents without health insurance, 24%. New Mexico had the second highest percentage of uninsured that year at 22%. States play a variety of roles in the health care system including purchasers of health care and regulators of providers and health plans, which give them multiple opportunities to try to improve how it functions. While states are actively working to improve the system in a variety of ways, there remains room for them to do more. One municipality, San Francisco, California, has established a program to provide health care to all uninsured residents (Healthy San Francisco). In July 2009, Connecticut passed into law a plan called SustiNet, with the goal of achieving health care coverage of 98% of its residents by 2014. The SustiNet law establishes a nine-member board to recommend to the legislature, by January 1, 2011, the details of and implementation process for a self-insured health care plan called SustiNet. The recommendations must address (1) the phased-in offering of the SustiNet plan to state employees and retirees, HUSKY A and B beneficiaries, people without employer-sponsored insurance (ESI) or with unaffordable ESI, small and large employers, and others; (2) establishing an entity that can contract with insurers and health care providers, set reimbursement rates, develop medical homes for patients, and encourage the use of health information technology; (3) a model benefits package; and (4) public outreach and ways to identify uninsured citizens. The board must establish committees to make recommendations to it about health information technology, medical homes, clinical care and safety guidelines, and preventive care and improved health outcomes. The act also establishes an independent information clearinghouse to inform employers, consumers, and the public about SustiNet and private health care plans and creates task forces to address obesity, tobacco usage, and health care workforce issues. The effective date of the SustiNet law was July 1, 2009, for most provisions. In May 2011, the state of Vermont became the first state to pass legislation establishing a single-payer health care system. The legislation, known as Act 48, establishes health care in the state as a "human right" and lays the responsibility on the state to provide a health care system which best meets the needs of the citizens of Vermont. 
In December 2014, the governor of the state of Vermont suspended plans to implement this single-payer system because of its cost. See also Health care reform in the United States Health economics Health insurance exchange Health insurance in the United States Health policy analysis Health care politics List of healthcare reform advocacy groups in the United States National health insurance United States National Health Care Act References History Political history of the United States History of the United States by topic
22676326
https://en.wikipedia.org/wiki/TCS%20BaNCS
TCS BaNCS
TCS BaNCS is a core banking software suite developed by Tata Consultancy Services for use by retail banks. It includes functions for universal banking, core banking, payments, wealth management, forex and money markets, compliance, insurance, securities processing, custody, financial inclusion, Islamic banking and treasury operations. There are also modules that deal with capital markets and the insurance business. The suite of products is periodically evaluated by independent research firms such as Forrester. History Prior to being taken over by TCS in 2005, the BANCS core banking software was developed at the headquarters of Financial Network Services Pty (FNS) in Sydney, Australia. First implemented at local Australian and New Zealand banks and credit unions throughout the late 1970s and into the 1980s, the software then saw substantially growing demand from overseas markets looking for automation and consolidation of disparate systems. Project sites across Asia, the Middle East and the Nordic countries led to multiple versions being developed in different geographies. In the mid 1980s the software came under the control of the Systems Development Office at 70 Rosehill Street, Redfern, in Sydney. Base versions from 1.0 through to 7.6 were developed by FNS. Various region-specific business rules from customer sites were designed and integrated, as parameterised functionality, into the base software in sub-versions via the Redfern Development Office. Two major branches existed from versions 2 through 4 of the core. Initially based on the AT&T/NCR 9800 series mainframe architecture, platform versions were created for UNIX and later (from version 4 onwards) for NT/Windows Server. The core application's COBOL source code was interchangeable, but interfacing software, such as API gateways, transaction queues and data storage, differed to cater for the platform and database choices demanded by customers. The core application is written in C and linked against the Micro Focus COBOL runtime objects. Scripting varied across branches to handle technical operations and non-functional requirements. Major versions featured multi-currency (with spot positions), multi-language and some multi-entity functionality. To cater for this, various front-ends were created, such as the BTM (Branch Terminal Manager) application, a web-based teller system (BANCSLink), and BEAM, which handled transactions from tellers. ATM and POS switch and card management (Telepac) modules were also built out extensively in the 1980s, in line with the increasing reliance on debit and credit cards globally at the time. Major version 6 included three back-end database technologies: Oracle, Informix and DB2. The UNIX variants included HP-UX, IBM AIX and AT&T/NCR System V. ksh (Korn shell) was the primary scripting language. In the 1990s and 2000s, internet banking (BANCS Connect), treasury, FX/MM and trade finance modules were also developed, with multiple client sites supported and upgraded onto later versions as the software matured and customers reaped the benefits of the added functionality. Version 7.6 and onward was developed in COBOL by the Tata Consultancy Services business unit TCS Financial Solutions. 
TCS furthermore rewrote the base application in Java to cater for the limited supply of COBOL software engineers into the 2000s. Notable client implementations included Australian and New Zealand tier 2 banks and credit unions, tier 1 banks in South Korea, Taiwan and China, the State Bank of India and other large nationwide banks in India, the National Bank of Kuwait, and multiple banks in Saudi Arabia and in North and South Africa. A partnership with Telenor Novit that powered multiple Scandinavian banks also existed throughout the 1980s and 1990s. In August 2013, Panzhihua (PZH) Commercial Bank started using the software, making it the first Chinese city commercial bank to use the TCS BaNCS solution. References Tata Consultancy Services Banking software companies Financial software companies
38838
https://en.wikipedia.org/wiki/Cyclic%20redundancy%20check
Cyclic redundancy check
A cyclic redundancy check (CRC) is an error-detecting code commonly used in digital networks and storage devices to detect accidental changes to raw data. Blocks of data entering these systems get a short check value attached, based on the remainder of a polynomial division of their contents. On retrieval, the calculation is repeated and, in the event the check values do not match, corrective action can be taken against data corruption. CRCs can be used for error correction (see bitfilters). CRCs are so called because the check (data verification) value is a redundancy (it expands the message without adding information) and the algorithm is based on cyclic codes. CRCs are popular because they are simple to implement in binary hardware, easy to analyze mathematically, and particularly good at detecting common errors caused by noise in transmission channels. Because the check value has a fixed length, the function that generates it is occasionally used as a hash function. Introduction CRCs are based on the theory of cyclic error-correcting codes. The use of systematic cyclic codes, which encode messages by adding a fixed-length check value, for the purpose of error detection in communication networks, was first proposed by W. Wesley Peterson in 1961. Cyclic codes are not only simple to implement but have the benefit of being particularly well suited for the detection of burst errors: contiguous sequences of erroneous data symbols in messages. This is important because burst errors are common transmission errors in many communication channels, including magnetic and optical storage devices. Typically an n-bit CRC applied to a data block of arbitrary length will detect any single error burst not longer than n bits, and the fraction of all longer error bursts that it will detect is 1 − 2⁻ⁿ. Specification of a CRC code requires definition of a so-called generator polynomial. This polynomial becomes the divisor in a polynomial long division, which takes the message as the dividend and in which the quotient is discarded and the remainder becomes the result. The important caveat is that the polynomial coefficients are calculated according to the arithmetic of a finite field, so the addition operation can always be performed bitwise-parallel (there is no carry between digits). In practice, all commonly used CRCs employ the Galois field, or more simply a finite field, of two elements, GF(2). The two elements are usually called 0 and 1, comfortably matching computer architecture. A CRC is called an n-bit CRC when its check value is n bits long. For a given n, multiple CRCs are possible, each with a different polynomial. Such a polynomial has highest degree n, which means it has n + 1 terms. In other words, the polynomial has a length of n + 1; its encoding requires n + 1 bits. Note that most polynomial specifications either drop the MSB or LSB, since they are always 1. The CRC and associated polynomial typically have a name of the form CRC-n-XXX as in the table below. The simplest error-detection system, the parity bit, is in fact a 1-bit CRC: it uses the generator polynomial x + 1 (two terms), and has the name CRC-1. Application A CRC-enabled device calculates a short, fixed-length binary sequence, known as the check value or CRC, for each block of data to be sent or stored and appends it to the data, forming a codeword. 
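A minimal sketch of this compute-append-verify pattern in Python (using the standard library's zlib.crc32, a 32-bit CRC; the helper names here are only illustrative, not part of any standard API) might look like the following:

import zlib

def make_codeword(block: bytes) -> bytes:
    """Compute a 32-bit check value over the block and append it, forming the codeword."""
    return block + zlib.crc32(block).to_bytes(4, "big")

def codeword_is_valid(codeword: bytes) -> bool:
    """Recompute the CRC over the data part and compare it with the appended check value."""
    block, received = codeword[:-4], int.from_bytes(codeword[-4:], "big")
    return zlib.crc32(block) == received

cw = make_codeword(b"hello, world")
assert codeword_is_valid(cw)                     # intact codeword passes
assert not codeword_is_valid(b"jello" + cw[5:])  # a corrupted data block is detected
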
When a codeword is received or read, the device either compares its check value with one freshly calculated from the data block, or equivalently, performs a CRC on the whole codeword and compares the resulting check value with an expected residue constant. If the CRC values do not match, then the block contains a data error. The device may take corrective action, such as rereading the block or requesting that it be sent again. Otherwise, the data is assumed to be error-free (though, with some small probability, it may contain undetected errors; this is inherent in the nature of error-checking). Data integrity CRCs are specifically designed to protect against common types of errors on communication channels, where they can provide quick and reasonable assurance of the integrity of messages delivered. However, they are not suitable for protecting against intentional alteration of data. Firstly, as there is no authentication, an attacker can edit a message and recompute the CRC without the substitution being detected. When stored alongside the data, CRCs and cryptographic hash functions by themselves do not protect against intentional modification of data. Any application that requires protection against such attacks must use cryptographic authentication mechanisms, such as message authentication codes or digital signatures (which are commonly based on cryptographic hash functions). Secondly, unlike cryptographic hash functions, CRC is an easily reversible function, which makes it unsuitable for use in digital signatures. Thirdly, CRC satisfies a relation similar to that of a linear function (or more accurately, an affine function): CRC(x ⊕ y) = CRC(x) ⊕ CRC(y) ⊕ c, where c depends on the length of x and y. This can be also stated as follows, where x, y and z have the same length: CRC(x ⊕ y ⊕ z) = CRC(x) ⊕ CRC(y) ⊕ CRC(z). As a result, even if the CRC is encrypted with a stream cipher that uses XOR as its combining operation (or mode of block cipher which effectively turns it into a stream cipher, such as OFB or CFB), both the message and the associated CRC can be manipulated without knowledge of the encryption key; this was one of the well-known design flaws of the Wired Equivalent Privacy (WEP) protocol. Computation To compute an n-bit binary CRC, line the bits representing the input in a row, and position the (n + 1)-bit pattern representing the CRC's divisor (called a "polynomial") underneath the left end of the row. In this example, we shall encode 14 bits of message with a 3-bit CRC, with the polynomial x³ + x + 1. The polynomial is written in binary as the coefficients; a 3rd-degree polynomial has 4 coefficients (1x³ + 0x² + 1x + 1). In this case, the coefficients are 1, 0, 1 and 1. The result of the calculation is 3 bits long, which is why it is called a 3-bit CRC. However, you need 4 bits to explicitly state the polynomial. Start with the message to be encoded: 11010011101100 This is first padded with zeros corresponding to the bit length n of the CRC. This is done so that the resulting code word is in systematic form. Here is the first calculation for computing a 3-bit CRC:

11010011101100 000 <--- input right padded by 3 bits
1011               <--- divisor (4 bits) = x³ + x + 1
------------------
01100011101100 000 <--- result

The algorithm acts on the bits directly above the divisor in each step. The result for that iteration is the bitwise XOR of the polynomial divisor with the bits above it. The bits not above the divisor are simply copied directly below for that step. 
The divisor is then shifted right to align with the highest remaining 1 bit in the input, and the process is repeated until the divisor reaches the right-hand end of the input row. Here is the entire calculation:

11010011101100 000 <--- input right padded by 3 bits
1011               <--- divisor
01100011101100 000 <--- result (note the first four bits are the XOR with the divisor beneath, the rest of the bits are unchanged)
 1011              <--- divisor ...
00111011101100 000
  1011
00010111101100 000
   1011
00000001101100 000 <--- note that the divisor moves over to align with the next 1 in the dividend (since quotient for that step was zero)
       1011             (in other words, it doesn't necessarily move one bit per iteration)
00000000110100 000
        1011
00000000011000 000
         1011
00000000001110 000
          1011
00000000000101 000
           101 1
-----------------
00000000000000 100 <--- remainder (3 bits). Division algorithm stops here as dividend is equal to zero.

Since the leftmost divisor bit zeroed every input bit it touched, when this process ends the only bits in the input row that can be nonzero are the n bits at the right-hand end of the row. These n bits are the remainder of the division step, and will also be the value of the CRC function (unless the chosen CRC specification calls for some postprocessing). The validity of a received message can easily be verified by performing the above calculation again, this time with the check value added instead of zeroes. The remainder should equal zero if there are no detectable errors.

11010011101100 100 <--- input with check value
1011               <--- divisor
01100011101100 100 <--- result
 1011              <--- divisor ...
00111011101100 100

......

00000000001110 100
          1011
00000000000101 100
           101 1
------------------
00000000000000 000 <--- remainder

The following Python code outlines a function which will return the initial CRC remainder for a chosen input and polynomial, with either 1 or 0 as the initial padding. Note that this code works with string inputs rather than raw numbers:

def crc_remainder(input_bitstring, polynomial_bitstring, initial_filler):
    """Calculate the CRC remainder of a string of bits using a chosen polynomial.
    initial_filler should be '1' or '0'.
    """
    polynomial_bitstring = polynomial_bitstring.lstrip('0')
    len_input = len(input_bitstring)
    initial_padding = (len(polynomial_bitstring) - 1) * initial_filler
    input_padded_array = list(input_bitstring + initial_padding)
    while '1' in input_padded_array[:len_input]:
        cur_shift = input_padded_array.index('1')
        for i in range(len(polynomial_bitstring)):
            input_padded_array[cur_shift + i] \
            = str(int(polynomial_bitstring[i] != input_padded_array[cur_shift + i]))
    return ''.join(input_padded_array)[len_input:]

def crc_check(input_bitstring, polynomial_bitstring, check_value):
    """Calculate the CRC check of a string of bits using a chosen polynomial."""
    polynomial_bitstring = polynomial_bitstring.lstrip('0')
    len_input = len(input_bitstring)
    initial_padding = check_value
    input_padded_array = list(input_bitstring + initial_padding)
    while '1' in input_padded_array[:len_input]:
        cur_shift = input_padded_array.index('1')
        for i in range(len(polynomial_bitstring)):
            input_padded_array[cur_shift + i] \
            = str(int(polynomial_bitstring[i] != input_padded_array[cur_shift + i]))
    return ('1' not in ''.join(input_padded_array)[len_input:])

>>> crc_remainder('11010011101100', '1011', '0')
'100'
>>> crc_check('11010011101100', '1011', '100')
True

CRC-32 algorithm This is a practical algorithm for the CRC-32 variant of CRC. 
The CRCTable is a memoization of a calculation that would have to be repeated for each byte of the message.

Function CRC32
   Input:
      data:  Bytes     // Array of bytes
   Output:
      crc32: UInt32    // 32-bit unsigned CRC-32 value

   // Initialize CRC-32 to starting value
   crc32 ← 0xFFFFFFFF

   for each byte in data do
      nLookupIndex ← (crc32 xor byte) and 0xFF
      crc32 ← (crc32 shr 8) xor CRCTable[nLookupIndex] // CRCTable is an array of 256 32-bit constants

   // Finalize the CRC-32 value by inverting all the bits
   crc32 ← crc32 xor 0xFFFFFFFF

   return crc32

In C, the algorithm looks as such:

#include <stddef.h>   // size_t
#include <inttypes.h> // uint32_t, uint8_t

uint32_t CRC32(const uint8_t data[], size_t data_length) {
    uint32_t crc32 = 0xFFFFFFFFu;

    for (size_t i = 0; i < data_length; i++) {
        const uint32_t lookupIndex = (crc32 ^ data[i]) & 0xff;
        crc32 = (crc32 >> 8) ^ CRCTable[lookupIndex]; // CRCTable is an array of 256 32-bit constants
    }

    // Finalize the CRC-32 value by inverting all the bits
    crc32 ^= 0xFFFFFFFFu;
    return crc32;
}

Mathematics Mathematical analysis of this division-like process reveals how to select a divisor that guarantees good error-detection properties. In this analysis, the digits of the bit strings are taken as the coefficients of a polynomial in some variable x—coefficients that are elements of the finite field GF(2) (the integers modulo 2, i.e. either a zero or a one), instead of more familiar numbers. The set of binary polynomials is a mathematical ring. Designing polynomials The selection of the generator polynomial is the most important part of implementing the CRC algorithm. The polynomial must be chosen to maximize the error-detecting capabilities while minimizing overall collision probabilities. The most important attribute of the polynomial is its length (largest degree (exponent) + 1 of any one term in the polynomial), because of its direct influence on the length of the computed check value. The most commonly used polynomial lengths are 9 bits (CRC-8), 17 bits (CRC-16), 33 bits (CRC-32), and 65 bits (CRC-64). A CRC is called an n-bit CRC when its check value is n bits. For a given n, multiple CRCs are possible, each with a different polynomial. Such a polynomial has highest degree n, and hence n + 1 terms (the polynomial has a length of n + 1). The remainder has length n. The CRC has a name of the form CRC-n-XXX. The design of the CRC polynomial depends on the maximum total length of the block to be protected (data + CRC bits), the desired error protection features, and the type of resources for implementing the CRC, as well as the desired performance. A common misconception is that the "best" CRC polynomials are derived from either irreducible polynomials or irreducible polynomials times the factor 1 + x, which adds to the code the ability to detect all errors affecting an odd number of bits. In reality, all the factors described above should enter into the selection of the polynomial and may lead to a reducible polynomial. However, choosing a reducible polynomial will result in a certain proportion of missed errors, due to the quotient ring having zero divisors. The advantage of choosing a primitive polynomial as the generator for a CRC code is that the resulting code has maximal total block length in the sense that all 1-bit errors within that block length have different remainders (also called syndromes) and therefore, since the remainder is a linear function of the block, the code can detect all 2-bit errors within that block length. 
If r is the degree of the primitive generator polynomial, then the maximal total block length is 2^r − 1, and the associated code is able to detect any single-bit or double-bit errors. We can improve this situation. If we use the generator polynomial g(x) = p(x)(1 + x), where p(x) is a primitive polynomial of degree r − 1, then the maximal total block length is 2^(r−1) − 1, and the code is able to detect single, double, triple and any odd number of errors. A polynomial that admits other factorizations may then be chosen so as to balance the maximal total block length with a desired error detection power. The BCH codes are a powerful class of such polynomials. They subsume the two examples above. Regardless of the reducibility properties of a generator polynomial of degree r, if it includes the "+1" term, the code will be able to detect error patterns that are confined to a window of r contiguous bits. These patterns are called "error bursts". Specification The concept of the CRC as an error-detecting code gets complicated when an implementer or standards committee uses it to design a practical system. Here are some of the complications: Sometimes an implementation prefixes a fixed bit pattern to the bitstream to be checked. This is useful when clocking errors might insert 0-bits in front of a message, an alteration that would otherwise leave the check value unchanged. Usually, but not always, an implementation appends n 0-bits (n being the size of the CRC) to the bitstream to be checked before the polynomial division occurs. Such appending is explicitly demonstrated in the Computation of CRC article. This has the convenience that the remainder of the original bitstream with the check value appended is exactly zero, so the CRC can be checked simply by performing the polynomial division on the received bitstream and comparing the remainder with zero. Due to the associative and commutative properties of the exclusive-or operation, practical table driven implementations can obtain a result numerically equivalent to zero-appending without explicitly appending any zeroes, by using an equivalent, faster algorithm that combines the message bitstream with the stream being shifted out of the CRC register. Sometimes an implementation exclusive-ORs a fixed bit pattern into the remainder of the polynomial division. Bit order: Some schemes view the low-order bit of each byte as "first", which then during polynomial division means "leftmost", which is contrary to our customary understanding of "low-order". This convention makes sense when serial-port transmissions are CRC-checked in hardware, because some widespread serial-port transmission conventions transmit bytes least-significant bit first. Byte order: With multi-byte CRCs, there can be confusion over whether the byte transmitted first (or stored in the lowest-addressed byte of memory) is the least-significant byte (LSB) or the most-significant byte (MSB). For example, some 16-bit CRC schemes swap the bytes of the check value. Omission of the high-order bit of the divisor polynomial: Since the high-order bit is always 1, and since an n-bit CRC must be defined by an (n + 1)-bit divisor which overflows an n-bit register, some writers assume that it is unnecessary to mention the divisor's high-order bit. Omission of the low-order bit of the divisor polynomial: Since the low-order bit is always 1, authors such as Philip Koopman represent polynomials with their high-order bit intact, but without the low-order bit (the x⁰ or 1 term). 
This convention encodes the polynomial complete with its degree in one integer. These complications mean that there are three common ways to express a polynomial as an integer: the first two, which are mirror images in binary, are the constants found in code; the third is the number found in Koopman's papers. In each case, one term is omitted. So the polynomial x^4 + x + 1 may be transcribed as:
0x3 = 0b0011, representing the coefficients of x^3 down to x^0 (0, 0, 1, 1), with the high-order x^4 term left implicit (MSB-first code)
0xC = 0b1100, representing the same coefficients in reversed bit order, x^0 first, again with the x^4 term left implicit (LSB-first code)
0x9 = 0b1001, representing the coefficients of x^4 down to x^1 (1, 0, 0, 1), with the low-order x^0 term left implicit (Koopman notation)

Obfuscation
CRCs in proprietary protocols might be obfuscated by using a non-trivial initial value and a final XOR, but these techniques do not add cryptographic strength to the algorithm and can be reverse engineered using straightforward methods.

Standards and common use
Numerous varieties of cyclic redundancy checks have been incorporated into technical standards. By no means does one algorithm, or one of each degree, suit every purpose; Koopman and Chakravarty recommend selecting a polynomial according to the application requirements and the expected distribution of message lengths. The number of distinct CRCs in use has confused developers, a situation which authors have sought to address. There are three polynomials reported for CRC-12, twenty-two conflicting definitions of CRC-16, and seven of CRC-32. The polynomials commonly applied are not the most efficient ones possible. Since 1993, Koopman, Castagnoli and others have surveyed the space of polynomials between 3 and 64 bits in size, finding examples that have much better performance (in terms of Hamming distance for a given message size) than the polynomials of earlier protocols, and publishing the best of these with the aim of improving the error detection capacity of future standards. In particular, iSCSI and SCTP have adopted one of the findings of this research, the CRC-32C (Castagnoli) polynomial. The design of the 32-bit polynomial most commonly used by standards bodies, CRC-32-IEEE, was the result of a joint effort for the Rome Laboratory and the Air Force Electronic Systems Division by Joseph Hammond, James Brown and Shyan-Shiang Liu of the Georgia Institute of Technology and Kenneth Brayer of the Mitre Corporation. The earliest known appearances of the 32-bit polynomial were in their 1975 publications: Technical Report 2956 by Brayer for Mitre, published in January and released for public dissemination through DTIC in August, and Hammond, Brown and Liu's report for the Rome Laboratory, published in May. Both reports contained contributions from the other team. During December 1975, Brayer and Hammond presented their work in a paper at the IEEE National Telecommunications Conference: the IEEE CRC-32 polynomial is the generating polynomial of a Hamming code and was selected for its error detection performance. Even so, the Castagnoli CRC-32C polynomial used in iSCSI or SCTP matches its performance on messages from 58 bits to 131 kbits, and outperforms it in several size ranges including the two most common sizes of Internet packet. The ITU-T G.hn standard also uses CRC-32C to detect errors in the payload (although it uses CRC-16-CCITT for PHY headers). CRC-32C computation is implemented in hardware as an operation (CRC32) of the SSE4.2 instruction set, first introduced in Intel processors' Nehalem microarchitecture. The ARM AArch64 architecture also provides hardware acceleration for both CRC-32 and CRC-32C operations.
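The hardware support mentioned above can be used from C through compiler intrinsics. The following is a hedged sketch, not part of the original text: it assumes an x86 CPU with SSE4.2 and a compiler invoked with SSE4.2 enabled (for example -msse4.2 on GCC or Clang); the _mm_crc32_u8 intrinsic from <nmmintrin.h> applies the CRC-32C (Castagnoli) polynomial, the function name is illustrative, and the initial value and final inversion follow the common iSCSI-style convention, which individual protocols may vary.

#include <stdint.h>
#include <stddef.h>
#include <nmmintrin.h>  // SSE4.2 intrinsics

// Byte-at-a-time CRC-32C using the hardware CRC32 instruction.
// Wider intrinsics (_mm_crc32_u32, _mm_crc32_u64) process more data per step.
uint32_t crc32c_hw(const uint8_t data[], size_t length) {
    uint32_t crc = 0xFFFFFFFFu;            // common initial value
    for (size_t i = 0; i < length; i++) {
        crc = _mm_crc32_u8(crc, data[i]);  // one hardware CRC-32C step
    }
    return crc ^ 0xFFFFFFFFu;              // common final inversion
}

A real implementation would normally fall back to a table-driven routine when the instruction is not available, detected for instance via CPUID.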
Polynomial representations of cyclic redundancy checks The table below lists only the polynomials of the various algorithms in use. Variations of a particular protocol can impose pre-inversion, post-inversion and reversed bit ordering as described above. For example, the CRC32 used in Gzip and Bzip2 uses the same polynomial, but Gzip employs reversed bit ordering, while Bzip2 does not. Note that even parity polynomials in GF(2) with degree greater than 1 are never primitive. An even parity polynomial marked as primitive in this table represents a primitive polynomial multiplied by (x + 1). The most significant bit of a polynomial is always 1, and is not shown in the hex representations. Implementations Implementation of CRC32 in GNU Radio up to 3.6.1 (ca. 2012) C class code for CRC checksum calculation with many different CRCs to choose from CRC catalogues Catalogue of parametrised CRC algorithms CRC Polynomial Zoo See also Mathematics of cyclic redundancy checks Computation of cyclic redundancy checks List of hash functions List of checksum algorithms Information security Simple file verification LRC References Further reading External links Cyclic Redundancy Checks, MathPages, overview of error-detection of different polynomials Algorithm 4 was used in Linux and Bzip2. Slicing-by-4 and slicing-by-8 algorithms — Bitfilters — theory, practice, hardware, and software with emphasis on CRC-32. Reverse-Engineering a CRC Algorithm — includes links to PDFs giving 16 and 32-bit CRC Hamming distances ISO/IEC 13239:2002: Information technology -- Telecommunications and information exchange between systems -- High-level data link control (HDLC) procedures CRC32-Castagnoli Linux Library Binary arithmetic Finite fields Polynomials
1680891
https://en.wikipedia.org/wiki/IBM%207090/94%20IBSYS
IBM 7090/94 IBSYS
IBSYS is the discontinued tape-based operating system that IBM supplied with its IBM 709, IBM 7090 and IBM 7094 computers. A similar operating system (but with several significant differences), also called IBSYS, was provided with IBM 7040 and IBM 7044 computers. IBSYS was based on FORTRAN Monitor System (FMS) and (more likely) Bell Labs' "BESYS" rather than the SHARE Operating System. IBSYS directly supported several old language processors on the $EXECUTE card: 9PAC, FORTRAN and IBSFAP. Newer language processors ran under IBJOB. IBSYS System Supervisor IBSYS itself is a resident monitor program, that reads control card images placed between the decks of program and data cards of individual jobs. An IBSYS control card begins with a "$" in column 1, immediately followed by a Control Name that selects the various IBSYS utility programs needed to set up and run the job. These card deck images are usually read from magnetic tapes prepared offline, not directly from the card reader. IBJOB Processor The IBJOB Processor is a subsystem that runs under the IBSYS System Supervisor. It reads control cards that request, e.g., compilation, execution. The languages supported include COBOL. Commercial Translator (COMTRAN), Fortran IV (IBFTC) and Macro Assembly Program (IBMAP). See also University of Michigan Executive System Timeline of operating systems Further reading Noble, A. S., Jr., "Design of an integrated programming and operating system", IBM Systems Journal, June 1963. "The present paper considers the underlying design concepts of IBSYS/IBJOB, an integrated programming and operating system. The historical background and over-all structure of the system are discussed. Flow of jobs through the IBJOB processor, as controlled by the monitor, is also described." "IBM 7090/7094 IBSYS Operating System Version 13 System Monitor (IBSYS)", Form C28-6248-7 "IBM 7090/7094 IBSYS Operating System Version 13 IBJOB Processor", Form C28-6389-1 "IBM 7090/7094 IBSYS Operating System Version 13 IBJOB Processor Debugging Package", Form C28-6393-2 Notes External links IBM 7090/94 IBSYS Operating System, Jack Harper Dave Pitts' IBM 7090 support IBSYS source archived with Bitsavers History of FORTRAN and FORTRAN II – FORTRAN II and other software running on IBSYS, Software Preservation Group, Computer History Museum 7090 94 IBSYS OS IBSYS Discontinued operating systems 1960 software
3479566
https://en.wikipedia.org/wiki/Neoware
Neoware
Neoware was a company that manufactured and marketed thin clients. It also developed and marketed enterprise software, thin client appliances, and related services aimed at reducing the TCO of IT infrastructure. Neoware owned one of the three available "OS Streaming" technologies that make it possible to remote boot diskless computers under Microsoft Windows and Linux. On July 23, 2007, HP announced that it has signed a definitive merger agreement to purchase Neoware for $241 million. The acquisition was completed on October 1, 2007. Products Thin clients Neoware TeemTalk (Pericom was acquired July 2003) Neoware Image Manager (Qualystem Technology S.A.S. was acquired April 2005) LBT Linux-Based Terminal ezRemote Manager Thintune Linux/Manager (the brand was acquired from eSeSIX in March 2005) Neoware Device Manager NeoLinux 4 Windows XPe Windows CE References PC Magazine Editors' Choice, March 2000 Deloitte's Technology Fast 50 Program for the Delaware Valley Various press releases in Neoware's site "press releases" section Defunct computer companies of the United States Hewlett-Packard acquisitions Companies based in Montgomery County, Pennsylvania American companies established in 1992 1992 establishments in Pennsylvania
32996207
https://en.wikipedia.org/wiki/LibreOffice%20Calc
LibreOffice Calc
LibreOffice Calc is the spreadsheet component of the LibreOffice software package. After forking from OpenOffice.org in 2010, LibreOffice Calc underwent a massive re-work of external reference handling to fix many defects in formula calculations involving external references, and to boost data caching performance, especially when referencing large data ranges. Additionally, Calc now supports 1 million rows in a spreadsheet with macro references to each cell. Calc is capable of opening and saving most spreadsheets in Microsoft Excel file format. Calc is also capable of saving spreadsheets as PDF files. As with the entire LibreOffice suite, Calc is available for a variety of platforms, including Linux, macOS, Microsoft Windows, and FreeBSD. Available under the Mozilla Public License, Calc is free and open-source software. There is now a closed beta of LibreOffice on AmigaOS 4.1. Features Capabilities of Calc include: Ability to read/write OpenDocument (ODF), Excel (XLS), CSV, and several other formats. Support for many functions, including those for imaginary numbers, as well as financial and statistical functions. Supports 1 million rows in a spreadsheet, making LibreOffice spreadsheets more suitable for heavier scientific or financial spreadsheets. However, the number of columns is restricted to at most 1024, much lower than Excel's limit of 16384. Functions such as IFS, SWITCH, TEXTJOIN, MAXIFS and MINIFS, which were previously available only in Excel 2016 and later, can now also be used in LibreOffice Calc. In its internal data structure, Calc until version 4.1 relied on cells as the base class throughout, which has been blamed for "extreme memory use, slow computation, and difficult code". Version 4.2 (released in January 2014) addressed these issues by instead storing the data in arrays where possible. Pivot Table Originally called DataPilot, Pivot Table provides similar functionality to the Pivot table found in Microsoft Excel. It is used for interactive table layout and dynamic data analysis. Pivot Table has support for an unlimited number of fields; previously it supported only up to 8 column/row/data fields and up to 10 page fields. An advanced sort macro is included that allows data to be arranged or categorised based on either a user-generated macro or one of several default included macros. Supported file formats Release history Calc has continued to diverge since the fork from its parent OpenOffice, with new features being added and code cleanups taking place. Versions for LibreOffice Calc include the following: See also List of spreadsheet software Comparison of spreadsheet software References External links Cross-platform free software Free spreadsheet software Calc Spreadsheet software Spreadsheet software for macOS Spreadsheet software for Windows
4433231
https://en.wikipedia.org/wiki/Tripwire%20%28company%29
Tripwire (company)
Tripwire, Inc. is a software company based in Portland, Oregon that focuses on security and compliance automation. It is a subsidiary of technology company Belden. History Tripwire's intrusion detection software was created in the 1990s by Purdue University graduate student Gene Kim and his professor Gene Spafford. In 1997, Gene Kim co-founded Tripwire, Inc. with rights to the Tripwire name and technology, and produced a commercial version, Tripwire for Servers. In 2000, Tripwire released Open Source Tripwire. In 2005, the firm released Tripwire Enterprise, a product for configuration control by detecting, assessing, reporting and remediating file and configuration changes. In January 2010, it announced the release of Tripwire Log Center, a log and security information and event management (SIEM) software that stores, correlates and reports log and security event data. The two products can be integrated to enable correlation of change and event data. August 21, 2009, the firm acquired Activeworx technologies from CrossTec Corporation. Revenues grew to $74 million in 2009. In October 2009, the company had 261 employees; that number grew to 336 by June 2010. By May–June 2010, the company had over 5,500 customers and had announced that it had filed a registration statement with the Securities and Exchange Commission for a proposed initial public offering of its common stock. A year later, the company announced its sale to the private equity firm Thoma Bravo, ending its $86 million IPO plans. CEO Jim Johnson cited the firm's failure to reach the $100 million revenue milestone in 2010 as well as changing IPO market expectations as reasons for not going through with the IPO. The day following the acquisition, the company laid off about 50 of its 350 employees. Tripwire acquired nCircle, which focused on asset discovery and vulnerability management, in 2013. In December 2014, Belden announced plans to buy Tripwire for $710 million. The acquisition was completed on January 2, 2015. In February 2022, Belden announced to sell Tripwire to HelpSystems. References Companies based in Portland, Oregon Computer security software companies Software companies based in Oregon Software companies established in 1997 Privately held companies based in Oregon 1997 establishments in Oregon 2015 mergers and acquisitions Software companies of the United States
197440
https://en.wikipedia.org/wiki/Fraunhofer%20Society
Fraunhofer Society
The Fraunhofer Society (, "Fraunhofer Society for the Advancement of Applied Research") is a German research organization with 75institutes spread throughout Germany, each focusing on different fields of applied science (as opposed to the Max Planck Society, which works primarily on basic science). With some 29,000 employees, mainly scientists and engineers, and with an annual research budget of about €2.8billion, it is the biggest organization for applied research and development services in Europe. Some basic funding for the Fraunhofer Society is provided by the state (the German public, through the federal government together with the states or Länder, "owns" the Fraunhofer Society), but more than 70% of the funding is earned through contract work, either for government-sponsored projects or from industry. It is named after Joseph von Fraunhofer who, as a scientist, an engineer, and an entrepreneur, is said to have superbly exemplified the goals of the society. The organization has seven centers in the United States, under the name "Fraunhofer USA", and three in Asia. In October 2010, Fraunhofer announced that it would open its first research center in South America. Fraunhofer UK Research Ltd was established as a legally independent affiliate along with its Fraunhofer Centre for Applied Photonics, in Glasgow, Scotland, in March 2012. The Fraunhofer model The so-called "Fraunhofer model" has been in existence since 1973 and has led to the society's continuing growth. Under the model, the Fraunhofer Society earns about 70% of its income through contracts with industry or specific government projects. The other 30% of the budget is sourced in the proportion 9:1 from federal and state (Land) government grants and is used to support preparatory research. Thus the size of the society's budget depends largely on its success in maximizing revenue from commissions. This funding model applies not just to the central society itself but also to the individual institutes. This serves both to drive the realization of the Fraunhofer Society's strategic direction of becoming a leader in applied research and to encourage a flexible, autonomous, and entrepreneurial approach to the society's research priorities. The institutes are not legally independent units. The Fraunhofer model grants a very high degree of independence to the institutes in terms of project results, scientific impact and above all for their own funding. On the one hand, this results in a high degree of independence in terms of technical focus, distribution of resources, project acquisition, and in project management. On the other hand, this also generates a certain economic pressure and a compulsion to customer and market orientation. In this sense, the institutes and their employees act in an entrepreneurial manner and ideally combine research, innovation, and entrepreneurship. Numerous innovations are the result of research and development work at the Fraunhofer institutes. The institutes work on practically all application-relevant technology fields, i.e. microelectronics, information and communication technology, life sciences, materials research, energy technology or medical technology. One of the best known Fraunhofer developments is the MP3 audio data compression process. In 2018, the Fraunhofer-Gesellschaft reported 734 new inventions. This corresponds to about three inventions per working day. Of these, 612 developments were registered for patents. 
The number of active property rights and property right applications increased to 6881. Institutes The Fraunhofer Society currently operates 72 institutes and research units. These are Fraunhofer Institutes for: Algorithms and Scientific Computing – SCAI Applied Information Technology – FIT Applied and Integrated Security – AISEC Applied Optics and Precision Engineering – IOF Applied Polymer Research – IAP Applied Solid State Physics – IAF Biomedical Engineering – IBMT Building Physics – IBP Cell Therapy and Immunology - IZI Ceramic Technologies and Systems – IKTS Chemical Technology – ICT Communication, Information Processing and Ergonomics – FKIE Computer Graphics Research – IGD Digital Media Technology – IDMT Digital Medicine - MEVIS Electron Beam and Plasma Technology – FEP Electronic Nano Systems – ENAS Energy Economics and Energy System Technology - IEE Environmental, Safety and Energy Technology – UMSICHT Embedded Systems and Communication - ESK Experimental Software Engineering – IESE Factory Operation and Automation – IFF High Frequency Physics and Radar Techniques – FHR High-Speed Dynamics (Ernst-Mach-Institut) – EMI Industrial Engineering – IAO Industrial Mathematics – ITWM Information Center for Regional Planning and Building Construction – IRB Integrated Circuits – IIS Integrated Systems and Device Technology – IISB Intelligent Analysis and Information Systems – IAIS Interfacial Engineering and Biotechnology – IGB International Management and Knowledge Economy - IMW Laser Technology – ILT Machine Tools and Forming Technology – IWU Manufacturing Engineering and Applied Materials Research – IFAM Manufacturing Engineering and Automation – IPA Material and Beam Technology – IWS Material Flow and Logistics – IML Materials Recycling and Resource Strategies – IWKS Mechanics of Materials – IWM Microelectronic Circuits and Systems – IMS Microstructure of Materials and Systems – IMWS Microsystems and Solid State Technologies EMFT - EMFT Molecular Biology and Applied Ecology – IME Non-Destructive Testing – IZFP Optronics, System Technologies and Image Exploitation – IOSB Open Communication Systems – FOKUS Photonic Microsystems – IPMS Physical Measurement Techniques – IPM Process Engineering and Packaging – IVV Production Systems and Design Technology – IPK Production Technology – IPT Reliability and Microintegration – IZM Secure Information Technology – SIT Silicate Research – ISC Silicon Technology – ISIT Solar Energy Systems – ISE Structural Durability and System Reliability – LBF Surface Engineering and Thin Films – IST Systems and Innovation Research – ISI Technological Trend Analysis – INT Telecommunications, Heinrich-Hertz-Institut – HHI Toxicology and Experimental Medicine – ITEM Transportation and Infrastructure Systems – IVI Wind Energy Systems – IWES Wood Research, Wilhelm-Klauditz-Institut – WKI Fraunhofer USA In addition to its German institutes, the Fraunhofer Society operates five US-based Centers through its American subsidiary, Fraunhofer USA: Coatings and Diamond Technologies – CCD Experimental Software Engineering – CESE Laser Applications – CLA Manufacturing Innovation – CMI Digital Media Technologies – DMT Fraunhofer Singapore In 2017 Fraunhofer Society launched its first direct subsidiary in Asia: Fraunhofer Singapore – Visual and Medical Computing, Cognitive Human-Machine Interaction, Cyber- and Information Security, Visual Immersive Mathematics Fraunhofer UK Research Ltd At the invitation of the UK Government, Fraunhofer UK Research Ltd was established in 
partnership with the University of Strathclyde. The UK's first Fraunhofer Centre, Fraunhofer Centre for Applied Photonics, was established and quickly recognised as a world-leading centre in lasers and optical systems. The UK Government commented on the significance of Fraunhofer CAP in quantum technology innovation. Ongoing core funding is received from Scottish Government, Scottish Enterprise and the University of Strathclyde. Notable projects The MP3 compression algorithm was invented and patented by Fraunhofer IIS. Its license revenues generated about €100 million in revenue for the society in 2005. The Fraunhofer Heinrich Hertz Institute (HHI) was a significant contributor to the H.264/MPEG-4 AVC video compression standard, a technology recognized with two Emmy awards in 2008 and 2009. This includes the Fraunhofer FDK AAC library. As of May 2010, a metamorphic triple-junction solar cell developed by Fraunhofer's Institute for Solar Energy Systems holds the world record for solar energy conversion efficiency with 41.1%, nearly twice that of a standard silicon-based cell. Fraunhofer is developing a program for use at IKEA stores, which would allow people to take a picture of their home into a store to view a fully assembled, digital adaptation of their room. E-puzzler, a pattern-recognition machine, which can digitally put back together even the most finely shredded papers. The E-puzzler uses a computerized conveyor belt that runs shards of shredded and torn paper through a digital scanner, automatically reconstructing original documents. OpenIMS, an Open Source implementation of IMS Call Session Control Functions (CSCFs) and a lightweight Home Subscriber Server (HSS), which together form the core elements of all IMS/NGN architectures as specified today within 3GPP, 3GPP2, ETSI TISPAN and the PacketCable initiative. Powerpaste, a magnesium and hydrogen -based gel, that releases hydrogen fuel suitable for fuel cell consumption when it reacts with water has been developed by the Fraunhofer Institute for Manufacturing Technology and Advanced Materials (IFAM) Roborder, an autonomous border surveillance system that uses unmanned mobile robots including aerial, water surface, underwater and ground vehicles which incorporate multi-modal sensors as part of an interoperable network. History The Fraunhofer Society was founded in Munich on March 26, 1949, by representatives of industry and academia, the government of Bavaria, and the nascent Federal Republic. In 1952, the Federal Ministry for Economic Affairs declared the Fraunhofer Society to be the third part of the non-university German research landscape (alongside the German Research Foundation (DFG) and the Max Planck Institutes). Whether the Fraunhofer Society should support applied research through its own facilities was, however, the subject of a long-running dispute. From 1954, the Society's first institutes developed. By 1956, it was developing research facilities in cooperation with the Ministry of Defense. In 1959, the Fraunhofer Society comprised nine institutes with 135 coworkers and a budget of 3.6 million Deutsche Mark. In 1965, the Fraunhofer Society was identified as a sponsor organization for applied research. In 1968, the Fraunhofer Society became the target of public criticism for its role in military research. By 1969, Fraunhofer had more than 1,200 employees in 19 institutes. The budget stood at 33 million Deutsche Mark. 
At this time, a "commission for the promotion of the development of the Fraunhofer Society" planned the further development of the Fraunhofer Society (FhG). The commission developed a financing model that would make the Society dependent on its commercial success. This would later come to be known as the "Fraunhofer Model". The Model was agreed to by the Federal Cabinet and the Bund-Länder-Kommission in 1973. In the same year, the executive committee and central administration moved into joint accommodation at Leonrodstraße 54 in Munich. The Fraunhofer program for the promotion of consulting research for SMEs was established, and has gained ever more significance in subsequent years. In 1977, the political ownership of the society was shared by the Ministries of Defense and Research. By 1984, the Fraunhofer Society had 3,500 employees in 33 institutes and a research budget of 360 million Deutsche Mark. By 1988, defense research represented only about 10% of the entire expenditure of the Fraunhofer Society. By 1989, the Fraunhofer Society had nearly 6,400 employees in 37 institutes, with a total budget of 700 million Deutsche Mark. In 1991, the Fraunhofer Society faced the challenge of integrating numerous research establishments in former East Germany as branch offices of already-existing institutes in the Fraunhofer Society. In 1993, the Fraunhofer Society's total budget exceeded 1 billion Deutsche Mark. In 1994, the Society founded a US-based subsidiary, Fraunhofer USA, Inc., to extend the outreach of Fraunhofer's R&D network to American clients. Its mission statement of 2000 committed the Fraunhofer Society to being a market and customer-oriented, nationally and internationally active sponsor organization for institutes of the applied research. In 1999, Fraunhofer initiated Fraunhofer Venture, a technology transfer office, to advance the transfer of its scientific research findings and meet the growing entrepreneurial spirit in the Fraunhofer institutes. Between 2000 and 2001, the institutes and IT research centers of the GMD (Gesellschaft für Mathematik und Datenverarbeitung – Society for Mathematics and Information technology) were integrated into the Fraunhofer Society at the initiative of the Federal Ministry for Education and Research. The year 2000 marked a noteworthy success at Fraunhofer-Institut for Integrated Circuits (IIS): MP3, a lossy audio format which they developed. For many years afterward, MP3 was the most widely adopted method for compressing and decompressing digital audio. In 2002, ownership of the Heinrich-Hertz-Institut for Communications Technology Berlin GmbH (HHI), which belonged to the Gottfried William Leibniz Society e. V. (GWL), was transferred to the Fraunhofer Society. With this integration the Fraunhofer Society budget exceeded €1 billion for the first time. In 2003, the Fraunhofer Society headquarters moved to its own building in Munich. The Fraunhofer Society developed and formulated a firm specific mission statement summarizing fundamental targets and codifying the desired "values and guidelines" of the society's "culture". Amongst these, the society committed itself to improving the opportunities for female employees and coworkers to identify themselves with the enterprise and to develop their own creative potential. In 2004, the former "Fraunhofer Working Group for Electronic Media Technology" at the Fraunhofer-Institut for Integrated Circuits (IIS) gained the status of an independent institute. 
It becomes Fraunhofer-Institut for Digital Media Technology IDMT. New alliances and topic groups helped to strengthen the market operational readiness level of the institutes for Fraunhofer in certain jurisdictions. In 2005, two new institutes, the Leipzig Fraunhofer-Institut for Cell Therapy and Immunology (IZI), and the Fraunhofer Center for Nano-electronic technologies CNT in Dresden, were founded. In 2006, the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) was founded as a merger between the Institute for Autonomous Intelligent Systems (AIS), and the Institute for Media Communication (IMK). In 2009, the former FGAN Institutes were converted into Fraunhofer Institutes, amongst them the Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE and the Fraunhofer Institute for Radar and High Frequency Technology FHR. In 2012, the cooperation of Fraunhofer with selected research-oriented universities of applied sciences based on the "Application Center" model started. The first cooperation was started with the Technische Hochschule OWL in Lemgo and leads to the foundation of the Fraunhofer IOSB-INA in the late 2011. Image gallery Presidents Walther Gerlach (1949–1951) Wilhelm Roelen (1951–1955) Hermann von Siemens (1955–1964) Franz Kollmann (1964–1968) Christian Otto Mohr (1968–1974) Heinz Keller (1974–1982) Max Syrbe (1982–1993) Hans-Jürgen Warnecke (1993–2002) Hans-Jörg Bullinger (2002–2012) Reimund Neugebauer (2012–present) See also National Network for Manufacturing Innovation Open access in Germany Notes References External links (US) (UK) 1949 establishments in Germany Engineering research institutes Laboratories in Germany Members of the European Research Consortium for Informatics and Mathematics Organizations established in 1949 Robotics organizations Scientific organisations based in Germany 20th-century establishments in Bavaria
42125403
https://en.wikipedia.org/wiki/Roku
Roku
Roku ( ) is a brand of hardware digital media players manufactured by American company Roku, Inc. They offer access to streaming media content from various online services. The first Roku model, developed in collaboration with Netflix, was introduced in May 2008. Roku devices have been considered influential on the digital media player market, helping to popularize the concept of low-cost, small-form-factor set-top boxes for over-the-top media consumption. Roku has also licensed its platform as middleware for smart TVs. In August 2021, Roku reached more than 55 million active accounts, according to its quarterly earnings report. , Roku has 60.1 million active users. History Roku was founded by Anthony Wood in 2002, who had previously founded ReplayTV, a DVR company that competed with TiVo. After ReplayTV's failure, Wood worked for a while at Netflix. In 2007, Wood's company began working with Netflix on Project:Griffin, a set-top box to allow Netflix users to stream Netflix content to their TVs. Only a few weeks before the project's launch, Netflix's founder Reed Hastings decided it would hamper license arrangements with third parties, potentially keeping Netflix off other similar platforms, and killed the project. Fast Company magazine cited the decision to kill the project as "one of Netflix's riskiest moves". Netflix decided instead to spin off the company, and Roku released their first set-top box in 2008. In 2010 they began offering models with various capabilities, which eventually became their standard business model. In 2014, Roku partnered with smart TV manufacturers to produce TVs with built-in Roku functionality. In 2015, Roku won the inaugural Emmy for Television Enhancement Devices. In 2019, Roku acquired dataxu, an advertising technology company for $150 million. Roku streaming players First generation The first Roku model, the Roku DVP N1000, was unveiled on May 20, 2008. It was developed in partnership with Netflix to serve as a standalone set-top box for its recently introduced "Watch Instantly" service. The goal was to produce a device with a small footprint that could be sold at low cost compared to larger digital video recorders and video game consoles. It features an NXP PNX8935 video decoder supporting both standard and high definition formats up to 720p; HDMI output; and automatic software updates, including the addition of new channels for other video services. Roku launched two new models in October 2009: the Roku SD (a simplified version of the DVP, with only analog AV outputs); and the Roku HD-XR, an updated version with 802.11n Wi-Fi and a USB port for future functionality. The Roku DVP was retroactively renamed the Roku HD. By then, Roku had added support for other services. The next month, they introduced the Channel Store, where users could download third-party apps for other content services (including the possibility of private services for specific uses). Netflix support was initially dependent on a PC, requiring users to add content to their "Instant Queue" from the service's web interface before it could be accessed via the Roku. In May 2010, the channel was updated to allow users to search the Netflix library directly from the device. In August 2010, Roku announced plans to add 1080p video support to the HD-XR. The next month, they released an updated lineup with thinner form factors: a new HD; the XD, with 1080p support; and the XDS, with optical audio, dual-band Wi-Fi, and a USB port. The XD and XDS also included an updated remote. 
Support for the first-generation Roku models ended in September 2015. Second generation In July 2011, Roku unveiled its second generation of players, branded as Roku 2 HD, XD, and XS. All three models include 802.11n, and also add microSD slots and Bluetooth. The XD and XS support 1080p, and only the XS model includes an Ethernet connector and USB port. They also support the "Roku Game Remote"—a Bluetooth remote with motion controller support for games, which was bundled with the XS and sold separately for other models. The Roku LT was unveiled in October, as an entry-level model with no Bluetooth or microSD support. In January 2012, Roku unveiled the Streaming Stick - a new model condensed into a dongle form factor using Mobile High-Definition Link (MHL). Later in October, Roku introduced a new search feature to the second-generation models, aggregating content from services usable on the device. Third generation Roku unveiled its third-generation models in March 2013, the Roku 3 and Roku 2. The Roku 3 contains an upgraded CPU over the 2 XS, and a Wi-Fi Direct remote with an integrated headphone jack. The Roku 2 features only the faster CPU. Fourth generation In October 2015, Roku introduced the Roku 4; the device contains upgraded hardware with support for 4K resolution video, as well as 802.11ac wireless. Fifth generation Roku revamped their entire streaming player line-up with five new models in September 2016 (low end Roku Express, Roku Express+, high end Roku Premiere, Roku Premiere+, and top-of-the-line Roku Ultra), while the Streaming Stick (3600) was held over from the previous generation (having been released the previous April) as a sixth option. The Roku Premiere+ and Roku Ultra support HDR video using HDR10. Sixth generation In October 2017, Roku introduced its sixth generation of products. The Premiere and Premiere+ models were discontinued, the Streaming Stick+ (with an enhanced Wi-Fi antenna device) was introduced, as well as new processors for the Roku Streaming Stick, Roku Express, and Roku Express+. Seventh generation In September 2018, Roku introduced the seventh generation of products. Carrying over from the 2017 sixth-generation without any changes were the Express (3900), Express+ (3910), Streaming Stick (3800), and Streaming Stick+ (3810). The Ultra is the same hardware device from 2017, but it comes with JBL premium headphones and is repackaged with the new model number 4661. Roku has resurrected the Premiere and Premiere+ names, but these two new models bear little resemblance to the 2016 fifth-generation Premiere (4620) and Premiere+ (4630) models. The new Premiere (3920) and Premiere+ (3921) are essentially based on the Express (3900) model with 4K support added, it also includes Roku Streaming Stick+ Headphone Edition (3811) for improving Wifi signal strength and private listening. Eighth generation In September 2019, Roku introduced the eighth generation of products. The same year, Netflix decided not to support older generations of Roku, including the Roku HD, HD-XR, SD, XD, and XDS, as well as the NetGear-branded XD and XDS. Roku had warned in 2015 that it would stop updating players made in May 2011 or earlier, and these vintage boxes were among them. Ninth generation On September 28, 2020, Roku introduced the ninth generation of products. An updated Roku Ultra was released along with the addition of the Roku Streambar, a 2-in-1 Roku and Soundbar device. 
The microSD slot was removed from the new Ultra 4800, making it the first top-tier Roku device since the first generation to lack this feature. On April 14, 2021, Roku announced the Roku Express 4K+, replacing the 8th generation Roku Express devices, the Voice Remote Pro as an optional upgrade for existing Roku players, and Roku OS 10 for all modern Roku devices. Tenth generation On September 20, 2021, Roku introduced the tenth generation of products. The Roku Streaming Stick 4K was announced along with the Roku Streaming Stick 4K+ which includes an upgraded rechargeable Roku Voice Remote Pro with lost remote finder. Roku announced an updated Roku Ultra LT with a faster processor, stronger Wi-Fi and Dolby Vision as well as Bluetooth audio streaming and built-in ethernet support. Roku also announced Roku OS 10.5 with several new and improved features. On November 15, 2021, Roku announced a budget model Roku LE (3930S3) to be sold at Walmart, while supplies last. It lacks 4K and HDR10 support, making its features similar to those of the 2019 Roku Express (3930). It has the same form factor as the 2019 Roku Express, except the plastic shell is white rather than black. Feature comparison Roku TV Roku announced its first branded Smart TV and it was released in late 2014. These TVs are manufactured by companies like TCL, Westinghouse and Hisense, and use the Roku user interface as the "brain" of the TV. Roku TVs are updated just like the streaming devices. More recent models also integrate a set of features for use with over-the-air TV signals, including a program guide that provides information for shows and movies available on local antenna broadcast TV, as well as where that content is available to stream, and the ability to pause live TV (although the feature requires a USB hard drive with at least 16GB storage). On November 14, 2019, Walmart and Roku announced that they would be selling Roku TVs under the Onn brand exclusively at Walmart stores, starting November 29. In January 2020, Roku created a badge to certify devices as working with a Roku TV model. The first certified brands were TCL North America, Sound United, Polk Audio, Marantz, Definitive Technology, and Classé. In January 2021, a Roku executive said one out of three smart TVs sold in the United States and Canada came with Roku's operating system built-in. Software The Roku box runs a custom Linux distribution called Roku OS. Updates to the software include bug fixes, security updates, feature additions, and many new interface revisions. Roku pushes OS updates to supported devices in a staggered release. OS updates are rolled out to a percentage group of candidate devices to ensure the build is stable before being made available en masse. Content and programming Roku provides video services from a number of Internet-based video on demand providers. Roku channels Content on Roku devices is provided by Roku partners and are identified using the term channel. Users can add or remove different channels using the Roku Channel Store. Roku's website does not specify which channels are free to its users. Service creation for Roku Player The Roku is an open-platform device with a freely available software development kit that enables anyone to create new channels. The channels are written in a Roku-specific language called BrightScript, a scripting language the company describes as 'unique', but "similar to Visual Basic" and "similar to JavaScript". 
Developers who wish to test their channels before a general release, or who wish to limit viewership, can create "private" channels that require a code be entered by the user in the account page of the Roku website. These private channels, which are not part of the official Roku Channel Store, are neither reviewed or certified by Roku. There is an NDK (Native Developer Kit) available, though it has added restrictions. The Roku Channel Roku launched its own streaming channel on its devices in October 2017. It is ad-supported, but free. Its licensed content includes movies and TV shows from studios such as Lionsgate, MGM, Paramount, Sony Pictures Entertainment, Warner Bros., Disney, and Universal as well as Roku channel content publishers American Classics, FilmRise, Nosey, OVGuide, Popcornflix, Vidmark, and YuYu. It is implementing an ad revenue sharing model with content providers. On August 8, 2018, the Roku Channel became available on the web as well. Roku also added the "Featured Free" section as the top section of its main menu from which users can get access to direct streaming of shows and movies from its partners. In January 2019, premium subscription options from select content providers were added to the Roku Channel. Originally only available in the U.S., it launched in the UK on April 7, 2020, with a different selection of movies and TV shows, and without premium subscription add-ons. On January 8, 2021, Roku announced that it had acquired the original content library of the defunct mobile video service Quibi for an undisclosed amount, reported to be around $100 million. The content is being rebranded as Roku Originals. Controversies Non-certified channels The Daily Beast alleged that non-certified channels on Roku eased access to materials promoting conspiracy theories and terrorism content. In June 2017, a Mexico City court banned the sale of Roku products in Mexico, following claims by Televisa (via its Izzi cable subsidiary), that the devices were being used for subscription-based streaming services that illegally stream television content without permission from copyright holders. The devices used Roku's private channels feature to install the services, which were all against the terms of service Roku applies for official channels available in its store. Roku defended itself against the allegations as such, stating that these channels were not officially certified and that the company takes active measures to stop illegal streaming services. The 11th Collegiate Court in Mexico City overturned the decision in October 2018, with Roku returning to the Mexican market soon after; Televisa's streaming service Blim TV would also launch on the platform. In August 2017 Roku began to display a prominent disclaimer when non-certified channels are added, warning that channels enabling piracy may be removed "without prior notice". In mid-May 2018, a software glitch caused some users to see copyright takedown notices on legitimate services such as Netflix and YouTube. Roku acknowledged and patched the glitch. Carriage disputes Pay television-styled carriage disputes emerged on the Roku platform in 2020, as the company requires providers to agree to revenue sharing for subscription services that are billed through the platform, and to hold 30% of advertising inventory. 
On September 18 of that same year, Roku announced that NBCUniversal TV Everywhere services would be removed from its devices "as early as this weekend", due to its refusal to carry the company's streaming service Peacock under terms it deemed "unreasonable". It reached an agreement with NBCUniversal later that day. HBO Max, which launched in May 2020, was unavailable on Roku until December 2020 due to similar disputes over revenue sharing, particularly in regards to an upcoming ad-supported tier. On December 17, 2020, HBO Max began streaming on Roku. Another dispute, starting mid-December 2020, caused Spectrum customers to be unable to download the Spectrum TV streaming app to their Roku devices; existing customers could retain the app, but would lose it upon deletion, even to fix software bugs. This dispute was resolved on August 17, 2021. On April 30, 2021, Roku removed the over-the-top television service YouTube TV from its Channels Store, preventing it from being downloaded. The company accused operator Google LLC of making demands regarding its YouTube app that it considered "predatory, anti-competitive and discriminatory", including enhanced access to customer data, giving YouTube greater prominence in Roku's search interface, and requiring that Roku implement specific hardware standards that could increase the cost of its devices. Roku accused Google of "leveraging its YouTube monopoly to force an independent company into an agreement that is both bad for consumers and bad for fair competition." Google claimed that Roku had "terminated our deal in bad faith amidst our negotiation", stating that it wanted to renew the "existing reasonable terms" under which Roku offered YouTube TV. Google denied Roku's claims regarding customer data and prominence of the YouTube app, and stated that its carriage of a YouTube app was under a separate agreement, and unnecessarily brought into negotiations. As a partial workaround, YouTube began to deploy an update to its main app on Roku and other platforms, which integrates the YouTube TV service. See also Comparison of set-top boxes SoundBridge, another Roku product Smart TV Notes References External links Telecommunications-related introductions in 2008 Digital media players Internet radio Streaming television Linux-based devices Online advertising services and affiliate networks Technological comparisons
18467705
https://en.wikipedia.org/wiki/1997%20USC%20Trojans%20football%20team
1997 USC Trojans football team
The 1997 USC Trojans football team (variously "Trojans" or "USC") represented the University of Southern California during the 1997 NCAA Division I-A football season, finishing with a 6–5 record and tied for fifth place in the Pacific-10 Conference with a 4–4 conference record; despite a qualifying record, the Trojans were not invited to a bowl game. The team was coached by John Robinson, in his second stint as head coach of the Trojans, and played their home games at the Los Angeles Coliseum. Before the season USC entered the preseason with 75 players from its 1996 squad that went a disappointing 6–6; it returned starters at 14 positions, eight on offense and six on defense along with both the punter and placekicker. The Trojans had a top 10-ranked recruiting class with 18 high school players and two transfer players from junior college programs; 16 of the 20 incoming players were All-Americans at previous levels. The Trojan running game, led by Delon Washington, hoped to improve a ground game that was ranked 90th in the nation; considered very disappointing at a school that had been known as "Tailback U". USC brought in Hue Jackson as the new Offensive Coordinator; Jackson had previously served as offensive coordinator for California under Steve Mariucci, who had been hired to coach the NFL's San Francisco 49ers. Schedule Roster Coaching staff Game summaries Florida State USC opened its season hosting the #5-ranked Florida State University Seminoles of the Atlantic Coast Conference, under long-time coach Bobby Bowden, in the first ever game between the two programs. The Seminoles entered the game as a dominant force in college football, having finished in the top four of the final poll for 10 consecutive years behind teams famous for their speed and depth; the previous season the Seminoles went 11–1. The Trojans entered the season with a new quarterback, sophomore John Fox, after the graduation of veteran quarterback Brad Otten. The #23-ranked Trojans were a departure from Florida State's recent trend of season-opening opponents, which tended towards low-ranked teams in games played either at their home field or on a neutral field. The Trojans hoped to reestablish their running game, which had been anemic the previous season; however the Seminoles had allowed an average of only 59 yards rushing in 1996. The Seminoles defeated the Trojans, 14–7, in a competitive, defense-oriented game. Florida State scored first: After recovering a fumble by USC Delon Washington at the USC 36, the Seminoles scored on a two-yard quarterback sneak with 2:10 left in the first quarter. The Trojans were able to respond in the second quarter, tying the score, 7–7, with 13:42 left in the half. The pivotal moment in the game came on a 97-yard Seminoles drive in the fourth quarter that resulted in the go-ahead score with 10:40 left in the game. The drive was almost stopped when cornerback Brian Kelly dropped a near-interception about 11 yards from Florida State's end zone. USC had two more drives, with the second reaching the Florida State 26-yard line before turning the ball over on downs. USC was pleased with the performance of Fox, who in his first start completed 18 of 32 passes for 159 yards with one interception that came on a fourth-down play. However, the running game did not produce, totaling 25 yards with starting tailback Washington going for only 16 yards in 18 carries. The defense performed well, led by sophomore linebacker Chris Claiborne who had eight tackles, including two sacks. 
USC held the Seminoles to 89 yards rushing. Kelly broke up four passes, made five tackles and had an interception that was wiped out by an offside penalty against USC. Neither team did very well on third-down conversions; USC was 4 of 17, Florida State went 4 of 14. Despite losing the game, the Trojans' performance was seen as a positive sign that the team was approaching the season with a drive not present until the very end of the previous season. USC remained ranked #23. Florida State would go on to an 11–1 record, winning the ACC conference and ending the season ranked #3 in the AP Poll. Washington State Opening Pac-10 Conference play, the USC hosted the Washington State Cougars under coach Mike Price. The Cougars arrived after a bye week; they defeated UCLA in their opening game, led by junior quarterback Ryan Leaf who passed for 381 yards and three touchdowns in a 37–34 victory. USC entered the game favored by a touchdown, and with a 10-game winning streak against Washington State, since 1986, and a home winning streak stretching back to 1957. The game was framed as the Trojans opportunity to set the tone of their season: They didn't drop in the polls after a close loss to highly ranked Florida State; a win against Cougars would help them start rising in the polls; a loss would likely undo any good will left over in voters. The Cougars defeated the Trojans, 28–21; it was the first time the Cougars had ever defeated UCLA and USC in the same season, and the first time they had won in the Coliseum in 40 years. Washington State had a strong first half, leading 21–6 at the end of the second quarter; the Trojans only touchdown had the extra-point blocked. John Fox fumbled a snap that led to Washington State's third touchdown. R. Jay Soward returned the second half kickoff 95 yards for a touchdown, spurring a USC rally. Early in the fourth quarter, the Trojans used a halfback pass for a touchdown, then went on to make a two-point conversion to tie the score, 21–21, with 12:44 remaining in the game. The USC defense held the Cougars for most of the second half; but Washington State finally moved ahead late in the fourth quarter, when Leaf connected with a receiver for a 51-yard touchdown with 4:18 left in the game. Fox did not show the same level of poise as his start against Florida State; he completed 23 of 42 passes for 229 yards. Leaf completed 21 of 40 passes for 355 yards and three touchdowns, though he was intercepted twice by Brian Kelly. The Trojans running game continued to be a non-factor, totaling 31 yards at approximately one yard per carry. The concern started to focus on the offensive line, as Delon Washington rushed for 20 yards in eight carries; and given an opportunity, freshman Malaefou MacKenzie managed only 14 yards in 11 carries. USC had 10 penalties for 92 yards, often placing the offense in long second down situations. Although the Cougars entered the game unranked, they would go on to have an excellent season, finishing 10–2, sharing the Pac-10 title and playing in the Rose Bowl for only the third time in school history. Leaf would go on to have a strong year, throwing for a school-record 3,968 yards and 34 touchdowns, being named a first team All-American and finishing third in the voting for the Heisman Trophy. The Trojans fell out of the rankings; the loss marked the fifth time in 104 years of football that USC opened a season 0-2, prior to the season the only Trojan teams that began a season with two defeats were those of 1896, 1902, 1957 and 1960. 
California The Trojans played their first Pac-10 road game of the season, visiting the California Golden Bears, under first-year coach Tom Holmoe, at Memorial Stadium in Berkeley, California. The Golden Bears had defeated the Trojans the previous season, and hoped to make it a second year in a row for the first time since 1958. The Trojans defense made seven sacks and forced Cal to punt eight times in a 27-17 victory. The Trojans surged to a 24-point halftime lead, but were held scoreless in the second as California tried to close the gap. R. Jay Soward caught two touchdown passes of 33 and 65 yards from John Fox; LaVale Woods rushed for 129 yards and two touchdowns. California receiver Bobby Shaw, who entered the game leading the nation, finished with five catches for 98 yards and one touchdown. The Trojans continued to have problems on special teams; kicker Adam Abrams missed a 34-yard field-goal attempt and an extra point, and allowed a blocked punt against punter Jim Wren. Trojans offensive coordinator Hue Jackson, who had served in the same role for the Golden Bears the previous season, called plays from the field instead of the press box at head coach Robinson's request. USC entered the game ranked 112th among 112 Division I-A schools in net rushing yards. The offensive line reacted by inscribing the number "112" on their wrist bands before the game. The team made progress, gaining 123 yards in net rushing. The game allowed them to rise to 109th in rushing offense. On defense, USC held California to 31 rushing yards and ranked fifth nationally in rushing defense, giving up 54.3 yards a game. The Trojans' victory ensured they would not go 0-3 for the first time since 1960. UNLV Next hosted UNLV of the Western Athletic Conference, under head coach Jeff Horton. The Trojans entered the game as 25-point favorites against the Rebels. Running back LaVale Woods sprained his left ankle on the second possession, placing the running game back on freshman Malaefou MacKenzie and senior Delon Washington, who had been Woods had replaced for the previous game against California. MacKenzie ran for 104 yards in 19 carries and two touchdowns. The kicking game improved, with Adam Abrams making the Trojans' first field goals of the season from 27 and 37 yards before the end of the first half. Cornerback Daylon McCutcheon was used successfully on offensive plays, taking advantage of his overall athletic abilities. The victory was coach John Robinson's 100th college victory. Arizona State Notre Dame Oregon Washington Stanford Oregon State UCLA After the season 1997 Team Players in the NFL Chris Claiborne Travis Claridge Rashard Cook Ennis Davis David Gibson Brian Kelly Malaefou MacKenzie Daylon McCutcheon Billy Miller Zeke Moreno Chad Morton Ifeanyi Ohalete R. Jay Soward Antuan Simmons References USC USC Trojans football seasons USC Trojans football
23458933
https://en.wikipedia.org/wiki/Nftables
Nftables
nftables is a subsystem of the Linux kernel providing filtering and classification of network packets/datagrams/frames. It has been available since Linux kernel 3.13 released on 19 January 2014. nftables replaces the legacy iptables portions of Netfilter. Among the advantages of nftables over iptables is less code duplication and easier extension to new protocols. nftables is configured via the user-space utility nft, while legacy tools are configured via the utilities iptables, ip6tables, arptables and ebtables frameworks. nftables utilizes the building blocks of the Netfilter infrastructure, such as the existing hooks into the networking stack, connection tracking system, userspace queueing component, and logging subsystem. nft Command-line syntax A command to drop any packets with destination IP address 1.2.3.4: nft add rule ip filter output ip daddr 1.2.3.4 drop Note that the new syntax differs significantly from that of iptables, in which the same rule would be written: iptables -A OUTPUT -d 1.2.3.4 -j DROP The new syntax can appear more verbose, but it is also far more flexible. nftables incorporates advanced data structures such as dictionaries, maps and concatenations that do not exist with iptables. Making use of these can significantly reduce the number of chains and rules needed to express a given packet filtering design. The iptables-translate tool can be used to translate many existing iptables rules to equivalent nftables rules. Debian 10 (Buster), among other Linux distributions, uses nftables along with iptables-translate as the default packet filtering backend. History The project was first publicly presented at Netfilter Workshop 2008 by Patrick McHardy from the Netfilter Core Team. The first preview release of kernel and userspace implementation was given in March 2009. Although the tool has been called "the biggest change to Linux firewalling since the introduction of iptables in 2001", it has received little press attention. Notable hacker Fyodor Vaskovich (Gordon Lyon) said that he is "looking forward to its general release in the mainstream Linux kernel". The project stayed in alpha stage, and the official website was removed in 2009. In March 2010, emails from the author on the project mailing lists showed the project was still active and approaching a beta release, but the latter was never shipped officially. In October 2012, Pablo Neira Ayuso proposed a compatibility layer for iptables and announced a possible inclusion of the project into mainstream kernel. On 16 October 2013, Pablo Neira Ayuso submitted a nftables core pull request to the Linux kernel mainline tree. It was merged into the kernel mainline on 19 January 2014, with the release of Linux kernel version 3.13. Overview The nftables kernel engine adds a simple virtual machine into the Linux kernel, which is able to execute bytecode to inspect a network packet and make decisions on how that packet should be handled. The operations implemented by this virtual machine are intentionally made basic. It can get data from the packet itself, have a look at the associated metadata (inbound interface, for example), and manage connection-tracking data. Arithmetic, bitwise and comparison operators can be used for making decisions based on that data. The virtual machine is also capable of manipulating sets of data (typically, IP addresses), allowing multiple comparison operations to be replaced with a single set lookup. 
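To illustrate the set and map support described above, here is a brief sketch in nft's own syntax; it assumes a table and chain named "filter" and "input" already exist, and the addresses and ports are chosen purely for illustration:

nft add rule inet filter input ip saddr { 10.0.0.1, 10.0.0.2, 10.0.0.3 } drop
nft add rule inet filter input tcp dport vmap { 22 : accept, 80 : accept, 443 : accept }

A single rule with an anonymous set or a verdict map like these replaces what would otherwise be several separate iptables rules, which is the reduction in chains and rules mentioned above; the iptables-translate tool noted earlier can likewise emit comparable nft rules from an existing iptables ruleset.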
This organization contrasts with the iptables firewalling code, which has protocol awareness built so deeply into its logic that the code has had to be replicated four times—for IPv4, IPv6, ARP, and Ethernet bridging—as the firewall engines are too protocol-specific to be used in a generic manner. The main advantages of nftables over iptables are the simplification of the Linux kernel ABI, reduction of code duplication, improved error reporting, and more efficient execution, storage and incremental changes of filtering rules. The traditionally used iptables, ip6tables, arptables and ebtables (for IPv4, IPv6, ARP and Ethernet bridging, respectively) are intended to be replaced with nft as a single unified implementation, providing firewall configuration on top of the in-kernel virtual machine. nftables also offers an improved userspace API that allows atomic replacements of one or more firewall rules within a single Netlink transaction. This speeds up firewall configuration changes for setups having large rulesets; it can also help in avoiding race conditions while the rule changes are being executed. nftables also includes compatibility features to ease the transition from previous firewalls, command-line utilities to convert rules in the iptables format, and syntax-compatible versions of iptables commands that use the nftables backend. References External links nftables Git source code repository nftables HOWTO documentation First release of nftables (2009-03-18) Pablo Neira Ayuso: [RFC] back on nf_tables (plus compatibility layer) nftables quick HOWTO nftables sections in ArchWiki and Gentoo Wiki nft_compat extended to support ebtables extensions (merged in Linux kernel 4.0) Firewall software Linux security software Linux kernel features
37689105
https://en.wikipedia.org/wiki/Intetics
Intetics
Intetics is an IT outsourcing company headquartered in Naples, Florida, with software development centers in Europe in Belarus (Minsk), Ukraine (Kharkiv, Kyiv and Lviv) and Poland (Kraków), branch offices in the United States (Wilmette, Illinois) and Germany (Düsseldorf), and a representative office in the UK (London). History The company was founded in 1995 by Boris Kontsevoi in Minsk. Today Intetics specializes in the establishment and operation of offshore dedicated teams for application development, GIS, systems integration, engineering, data processing and back-office support. Initially, the company operated under the name Client-Server Programs; eight years later it was renamed Intetics. The name is an amalgam of the words "Internet, Technology, and Ethics." In 2010 the Belarusian production branch of Intetics became a resident of the Belarusian Hi-Tech Park. In 2016 Intetics introduced a new software development framework, Predictive Software Engineering. Community involvement Intetics is one of the largest IT companies in Belarus. Awards IAOP's Global Outsourcing 100 (2006-2008, 2010–2017, 2019–2020) Software Magazine's Software 500 Company (2010-2018) CV Magazine's Software & Technology Innovation Award 2016 References Companies based in Cook County, Illinois Companies established in 1995 Outsourcing companies Software companies based in Illinois Software companies of Belarus Software companies of Ukraine Software companies based in Florida Software companies of the United States 1995 establishments in the United States 1995 establishments in Illinois Software companies established in 1995 Belarusian companies established in 1995
8824217
https://en.wikipedia.org/wiki/Comx-35
Comx-35
The COMX-35 was a home computer that was one of the very few systems to use the RCA 1802 microprocessor, the same microprocessor also used in some space probes. The COMX-35 had a keyboard with an integrated joystick in place of cursor keys. It was relatively inexpensive and came with a large collection of software. The COMX-35 was manufactured in Hong Kong by COMX World Operations Ltd and was released in the Netherlands, the United Kingdom, Sweden, New Zealand, Australia, Finland, Norway, Italy, Singapore, Turkey and the People's Republic of China. Hardware Technical specifications CPU: CDP 1802 at or (NTSC) Random-access memory: ( max) ROM: with Basic interpreter VIS: (Video Interface System) CDP1869/CDP1870 Text modes: 40 columns x 24 lines. Alternative , and Character set: 128 programmable characters; the default character set displayed only uppercase characters Character size: 6x9 (PAL) or 6x8 (NTSC) pixels, alternative up to 6x16 Graphics modes: None, but the character set was reprogrammable to simulate a high-resolution display Colours: A total of 8 foreground colours are available (with a limited choice of 4 per character and 1 per line of that character) and 8 background colours (defined for the whole screen). Sound: 2 channels: one for tone generation with a span of 8 octaves, and one for special effects/white noise. Volume programmable in 16 steps. Memory map RAM The 'COMX 35' was called '35' due to the RAM in the machine; this included actual User RAM, of which roughly was available for BASIC while the rest was used for system parameters and reserved for use by the BASIC System ROM. An additional was included as video RAM; for details see the Video Interface System (VIS) section. Video interface system The COMX used the RCA CDP1869 and CDP1870 Video Interface System (VIS), consisting of the CDP1869 address and sound generator and the CDP1870 colour video generator. The COMX automatically selected operation in PAL or NTSC; this was done via the PAL/NTSC input on the VIS. Also, during start-up the system ROM detected PAL/NTSC by checking EF2. EF2 gave PAL/NTSC information before the first pulse on the Q line; after this, EF2 was used for keyboard handling. The VIS ran on for a PAL and for an NTSC machine. This frequency was divided by 2 and output via CPUCLK (pin 38) to the CDP 1802 for timing of the CPU (2.813 and ). The VIS was also responsible for the timing of the interrupt and the timing of the non-display period via PREDISPLAY (pin 1). Video memory could only be accessed during the non-display period, which allowed for execution of 2160 machine cycles on a PAL and 1574 on an NTSC machine. Provided that no more machine cycles than this maximum were used, video memory could be accessed during the interrupt routine. Alternatively, a program could wait for a non-display period by checking EF1. The video memory consisted of two parts: RAM page memory and RAM character memory. The page memory stored the ASCII code for each character position on the screen. The screen had 960 character positions, where position 0 (top left corner) could be accessed at memory location @F800 (before scrolling). The character memory stored the character definition of each ASCII character and could be accessed at memory locations @F400-@F7FF. Character memory could be accessed via different methods; see the VIS data sheet. Models The COMX 35 came in two colours, either a white or black keyboard. Later models also included a monitor connection.
The second COMX home computer was called the COMX PC1, which used basically the same hardware as the COMX-35 with a better keyboard and a joystick connection. There was also a clone of the COMX PC1, known as the Savla PC1, which was sold in India. Peripherals The COMX 35 had one 44-pin external connector for additional expansion options in the form of interface cards. Memory location @C000-DFFF was reserved for use by any interface card, either to connect ROM, additional RAM or for other purposes. The following hardware was available: Expansion box The expansion box allowed up to four interface cards to be connected to the COMX 35. The expansion box also included a firmware ROM connected to memory location @E000-@EFFF which extended BASIC with commands and logic to switch between different interface cards. Next to the standard firmware ROM there was an adaptation made by F&M (Frank and Marcel van Tongeren); this ROM added a screen editor feature to COMX BASIC. Floppy disk controller The COMX Floppy disk controller allowed connection of 5.25" disk drives. The controller used the WD1770 clocked at . The DOS ROM was selected between address C000-DFFF and was also mapped over address DD0-DDF of the BASIC ROM. COMX DOS supported 35 tracks for both single and double sided disks and 70 tracks on single sided disks. Every track consisted of 16 sectors and every sector of 128 bytes, resulting in disk files of max . Printer card The COMX Printer card allowed connection of parallel and serial printers. Depending on which type of printer was connected, the firmware ROM was selected either with the parallel firmware between memory locations @C000-@CFFF and the serial firmware between @D000-@DFFF, or the other way around. Thermal printer and card The COMX Thermal Printer came with a dedicated interface card; printing was done on thermal paper using a head that heated the paper to print both text and images. Care was needed when writing custom printer drivers, which were required for graphic printing, as it was very easy to 'burn' the printer head. 32K RAM card The COMX RAM card placed additional RAM from address @C000 to @DFFF, i.e. only one bank of the available at a time. To switch to a different bank, the OUT 1 instruction had to be used from 1802 assembler code. Bits 5 and 6 were used for RAM bank selection (bits 1 to 4 were used for the expansion box slot selection). 80-column card The COMX 80-column card added the possibility to use BASIC with a text mode of . The MC6845 was used as the video chip. F&M Joy Card This card was not developed by COMX but was a homemade extension by F&M: only a handful were ever made. The card had connections for two joysticks and came with a simple game and supporting software. Software The company importing the COMX in the Netherlands, West Electronics, provided almost all of its software for the COMX for free and without copyrights (or for a small fee for tape, disk and/or shipment). West Electronics also organized different competitions for homemade software. Probably the most popular game on the COMX was 'Worm', known in the Netherlands as 'Eet een wurm'. This was a very basic game in which the player directed a snake across the screen to eat all the worms. If played long enough, the game would eventually run out of places to put new 'food', slowing it almost to a stop. F&M made a correction for the game, including some additional improvements.
The COMX was probably most popular in the Netherlands, mainly due to the efforts of West Electronics to provide free software. As a result of the competitions, quite a few excellent games were written by enthusiastic users. Here is a small subset of a list too extensive to publish here: Get Your Gadget by JunioR (Jeroen Griffioen and Robbert Nix) Boulderdash by AHON (Arjan Houben and Oscar Nooy) Donkey The Kong by MP-Soft (Michel Peters) Happiehap and Trainspotting by F&M (Frank and Marcel van Tongeren) Emulator An emulator (Emma 02) running on Microsoft Windows, Linux and Mac OS X is available and can be downloaded from the Dutch COMX Club site or the Emma 02 site. The screenshots in this article were generated with this emulator. The emulator also supports the following other 1802 systems: Elf 2000, COSMAC VIP, COSMAC ELF, Netronics Elf II, Quest Super Elf, RCA Studio II, Victory MPT-02, Visicom COM-100, Cidelsa, Telmac TMC-600, Telmac TMC-2000, Telmac Nano, Pecom 64 and the ETI-660. Known bugs The most famous bug in the COMX BASIC ROM was triggered by typing in the line number 65535: this caused the COMX to hang and the screen to become garbled, which could be very frustrating if the user had spent hours typing in a BASIC program. Typing in 'READY' had a similar effect; after this the COMX was not 'READY' anymore. F&M discovered this one when they designed the F&M screen editor and pressed 'CR' (return) on the 'READY' prompt. As such they decided to change the prompt to 'OK' to avoid too many accidental hangs when using a screen editor. This fault was actually caused by the BASIC READ command: when a READ Y (or any other READ) instruction is executed and there is no DATA statement in the loaded BASIC program, the COMX hangs. Another bug in the standard character set was the '!', which displayed a red dot just above the black dot.
1769096
https://en.wikipedia.org/wiki/Cut%20%26%20Paste
Cut & Paste
Cut & Paste is a word processor published in 1984 for the Apple II, Atari 8-bit family, Commodore 64, IBM PC, and IBM PCjr. It was one of the few productivity releases from game developer and publisher Electronic Arts, alongside Financial Cookbook. In the UK it was distributed by Ariolasoft. Originally sold for , Cut & Paste was praised for its user-friendliness and criticized as overly simplistic. Magazine advertisements proclaimed, "If you can learn to use this word processor in 90 seconds, can it really be any good?" and the slogan on the box read "The Remarkably Simple Word Processor". Overview Cut & Paste is a simple word processor released by Electronic Arts in 1984 for . It was developed at a time when the ability to cut, copy, and paste text (using what is now known as a clipboard) was a significant feature for home computers. Its package is a hard plastic box which opens like a book, containing a program floppy disk, a data floppy disk with sample documents, and a 27-page manual covering the operation of all four supported computers. The application launches with onscreen instructions and then presents a fullscreen text editor with a one-row menu bar along the bottom. The menu bar is accessed using the esc key, and is navigated with the arrow keys and return key. The application operates in insert mode, so existing characters are edited only by deleting instead of typing directly over them. Backspace from the right is used instead of delete from the left. The application can help prevent the orphaning of a paragraph's line across a page break. There is a full-screen print menu. The application's namesake feature allows the user to begin a selection, use the arrow keys to mark the full selection, and end the selection, then cut it and paste it. The clipboard buffer persists between different documents. Features absent from Cut & Paste that were sooner or later considered staples of 1980s word processing include page headers, right-margin justification, centering, underlining, font selection, use of printer-specific features, text searching, spell checking, and paragraph lead indentation. It uses a proprietary file format which cannot be interchanged with any other software. Development Cut & Paste was developed by Steve Hayes, Tim Mott, Jerry Morrison, Dan Silva, Steve Shaw, David S. Maynard, and Norm Lane. David Maynard had previously created Worms?, a software toy that was one of the 1983 launch games from Electronic Arts. Dan Silva went on to write Deluxe Paint for the Amiga, published in 1985. Reception Marty Petersen, for InfoWorld, wrote "We think it is simply mediocre, performing just adequately, and not nearly as good a buy as some other low-cost word processing programs." He called out the limited text formatting capabilities as a significant drawback, and concluded that Cut & Paste is "little more than a glorified typewriter". Reviewing the Atari 8-bit version for UK magazine Page 6 in 1986, John Davison concluded, "I find it difficult to raise any enthusiasm for this program. Its few good points are far outweighed by its many bad ones. In action, it seems closer to an electronic typewriter than a computerised word processor." BYTE said that Cut & Paste "is simple to use but unfortunately it's much too limited to be of great value as a serious word processor", recommending it for children learning how to use a computer. Arthur Leyenberger in an August 1984 review for ANALOG Computing wrote, "The user interface is probably Cut & Paste's strongest feature. ...
I was able to start typing this review using the program as soon as I put the disk in the drive." But he found the software "just does not have enough features to make it a serious choice for anyone doing more than writing an occasional letter." References 1983 software Apple II software Commodore 64 software Atari 8-bit family software Electronic Arts Word processors
288311
https://en.wikipedia.org/wiki/Web%20application
Web application
A web application (or web app) is application software that runs on a web server, unlike computer-based software programs that are run locally on the operating system (OS) of the device. Web applications are accessed by the user through a web browser with an active network connection. These applications are programmed using a client–server model—the user ("client") is provided services through an off-site server that is hosted by a third party. Examples of commonly used web applications include webmail, online retail sales, online banking, and online auctions. Definition and similar terms The general distinction between a dynamic web page of any kind and a "web app" is unclear. Web sites most likely to be referred to as "web applications" are those which have similar functionality to a desktop software application, or to a mobile app. HTML5 introduced explicit language support for making applications that are loaded as web pages, but can store data locally and continue to function while offline. Single-page applications are more application-like because they reject the more typical web paradigm of moving between distinct pages with different URLs; individual components can be replaced or updated without having to refresh the whole web page. Single-page frameworks might be used to speed up development of such a web app for a mobile platform, because they can save bandwidth and avoid loading external files. Mobile web application There are several ways of targeting mobile devices when making web applications: Responsive web design can be used to make a web application, whether a conventional website or a single-page application, viewable on small screens and usable with touchscreens. Progressive web applications (PWAs) are web applications that load like regular web pages or websites but can offer the user functionality such as working offline and device hardware access traditionally available only to native mobile applications. Hybrid apps embed a web site inside a native app, possibly using a hybrid framework. This allows development using web technologies (and possibly directly copying code from an existing mobile web site) while also retaining certain advantages of native apps (e.g. direct access to device hardware, offline operation, app store visibility). Hybrid app frameworks include Apache Cordova, Electron, Flutter, Haxe, React Native and Xamarin. History In earlier computing models like client-server, the processing load for the application was shared between code on the server and code installed on each client locally. In other words, an application had its own pre-compiled client program which served as its user interface and had to be separately installed on each user's personal computer. An upgrade to the server-side code of the application would typically also require an upgrade to the client-side code installed on each user workstation, adding to the support cost and decreasing productivity. In addition, both the client and server components of the application were usually tightly bound to a particular computer architecture and operating system, and porting them to others was often prohibitively expensive for all but the largest applications (nowadays, native apps for mobile devices are also hobbled by some or all of the foregoing issues). In contrast, web applications use web documents written in a standard format such as HTML and JavaScript, which are supported by a variety of web browsers.
Web applications can be considered as a specific variant of client-server software where the client software is downloaded to the client machine when visiting the relevant web page, using standard procedures such as HTTP. Client web software updates may happen each time the web page is visited. During the session, the web browser interprets and displays the pages, and acts as the universal client for any web application. In the early days of the Web, each individual web page was delivered to the client as a static document, but the sequence of pages could still provide an interactive experience, as user input was returned through web form elements embedded in the page markup. However, every significant change to the web page required a round trip back to the server to refresh the entire page. In 1995, Netscape introduced a client-side scripting language called JavaScript allowing programmers to add some dynamic elements to the user interface that ran on the client side. So instead of sending data to the server in order to generate an entire web page, the embedded scripts of the downloaded page can perform various tasks such as input validation or showing/hiding parts of the page. In 1996, Macromedia introduced Flash, a vector animation player that could be added to browsers as a plug-in to embed animations on the web pages. It allowed the use of a scripting language to program interactions on the client side with no need to communicate with the server. In 1999, the "web application" concept was introduced in the Java language in the Servlet Specification version 2.2. At that time both JavaScript and XML had already been developed, but the term Ajax had not yet been coined and the XMLHttpRequest object had only been recently introduced on Internet Explorer 5 as an ActiveX object. In 2005, the term Ajax was coined, and applications like Gmail started to make their client sides more and more interactive. A web page script is able to contact the server for storing/retrieving data without downloading an entire web page. In 2007, Steve Jobs announced that web apps, developed in HTML5 using AJAX architecture, would be the standard format for iPhone apps. No software development kit (SDK) was required, and the apps would be fully integrated into the device through the Safari browser engine. This model was later switched for the App Store, as a means of preventing jailbreakers and of appeasing frustrated developers. In 2014, HTML5 was finalized, which provides graphic and multimedia capabilities without the need for client-side plug-ins. HTML5 also enriched the semantic content of documents. The APIs and document object model (DOM) are no longer afterthoughts, but are fundamental parts of the HTML5 specification. The WebGL API paved the way for advanced 3D graphics based on the HTML5 canvas and the JavaScript language. These have significant importance in creating truly platform- and browser-independent rich web applications. In 2016, during the annual Google IO conference, Eric Bidelman (Senior Staff Developer Programs Engineer) introduced Progressive Web Apps (PWAs) as a new standard in web development. Jeff Burtoft, Principal Program Manager at Microsoft, said "Google led the way with Progressive Web Apps, and after a long process, we decided that we needed to fully support it." As such, Microsoft and Google both supported the PWA standard.
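The Ajax pattern mentioned above, in which a page script contacts the server and updates only part of the page, can be illustrated with a minimal TypeScript sketch. Modern code typically uses the fetch API, the successor to XMLHttpRequest; the endpoint /api/notes and the element id "notes" below are illustrative assumptions only, not part of any particular application:

// Fetch data from the server and update part of the page without a full reload.
async function refreshNotes(): Promise<void> {
  const response = await fetch("/api/notes");        // contact the server
  const notes: string[] = await response.json();     // retrieve data as JSON
  const list = document.getElementById("notes");
  if (list !== null) {
    // rewrite only this list, leaving the rest of the page untouched
    list.innerHTML = notes.map(n => "<li>" + n + "</li>").join("");
  }
}

Calling refreshNotes() from a timer or a button click handler updates the list in place, avoiding the full-page round trip described for early static pages.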
Interface Through Java, JavaScript, CSS, Flash, Silverlight and other technologies, application-specific methods such as drawing on the screen, playing audio, and access to the keyboard and mouse are all possible. Many services have worked to combine all of these into a more familiar interface that adopts the appearance of an operating system. General-purpose techniques such as drag and drop are also supported by these technologies. Web developers often use client-side scripting to add functionality, especially to create an interactive experience that does not require page reloading. Recently, technologies have been developed to coordinate client-side scripting with server-side technologies such as ASP.NET, J2EE, Perl/Plack and PHP. Ajax, a web development technique using a combination of various technologies, is an example of technology that creates a more interactive experience. Structure Applications are usually broken into logical chunks called "tiers", where every tier is assigned a role. Traditional applications consist only of 1 tier, which resides on the client machine, but web applications lend themselves to an n-tiered approach by nature. Though many variations are possible, the most common structure is the three-tiered application. In its most common form, the three tiers are called presentation, application and storage, in this order. A web browser is the first tier (presentation), an engine using some dynamic Web content technology (such as ASP, CGI, ColdFusion, Dart, JSP/Java, Node.js, PHP, Python or Ruby on Rails) is the middle tier (application logic), and a database is the third tier (storage). The web browser sends requests to the middle tier, which services them by making queries and updates against the database and generates a user interface. For more complex applications, a 3-tier solution may fall short, and it may be beneficial to use an n-tiered approach, where the greatest benefit is breaking the business logic, which resides on the application tier, into a more fine-grained model. Another benefit may be adding an integration tier that separates the data tier from the rest of tiers by providing an easy-to-use interface to access the data. For example, the client data would be accessed by calling a "list_clients()" function instead of making an SQL query directly against the client table on the database. This allows the underlying database to be replaced without making any change to the other tiers. There are some who view a web application as a two-tier architecture. This can be a "smart" client that performs all the work and queries a "dumb" server, or a "dumb" client that relies on a "smart" server. The client would handle the presentation tier, the server would have the database (storage tier), and the business logic (application tier) would be on one of them or on both. While this increases the scalability of the applications and separates the display and the database, it still doesn't allow for true specialization of layers, so most applications will outgrow this model. Business use An emerging strategy for application software companies is to provide web access to software previously distributed as local applications. Depending on the type of application, it may require the development of an entirely different browser-based interface, or merely adapting an existing application to use different presentation technology. 
These programs allow the user to pay a monthly or yearly fee for use of a software application without having to install it on a local hard drive. A company which follows this strategy is known as an application service provider (ASP), and ASPs are currently receiving much attention in the software industry. Security breaches on these kinds of applications are a major concern because they can involve both enterprise information and private customer data. Protecting these assets is an important part of any web application, and there are some key operational areas that must be included in the development process. This includes processes for authentication, authorization, asset handling, input, and logging and auditing. Building security into the applications from the beginning can be more effective and less disruptive in the long run. In the cloud computing model, web applications are software as a service (SaaS). There are business applications provided as SaaS for enterprises for a fixed or usage-dependent fee. Other web applications are offered free of charge, often generating income from advertisements shown in the web application's interface. Development Writing web applications is often simplified by the use of a web application framework. These frameworks facilitate rapid application development by allowing a development team to focus on the parts of their application which are unique to their goals without having to resolve common development issues such as user management. Many of the frameworks in use are open-source software. The use of web application frameworks can often reduce the number of errors in a program, both by making the code simpler, and by allowing one team to concentrate on the framework while another focuses on a specified use case. In applications which are exposed to constant hacking attempts on the Internet, security-related problems can be caused by errors in the program. Frameworks can also promote the use of best practices such as GET after POST. In addition, there is potential for the development of applications on Internet operating systems, although currently there are not many viable platforms that fit this model. Applications Examples of browser applications include simple office software (word processors, online spreadsheets, and presentation tools) as well as more advanced applications such as project management, computer-aided design, video editing, and point-of-sale software. See also Software as a service (SaaS) Web 2.0 Web engineering Web services Web Sciences Web widget Single-page application Ajax (programming) Web development tools Browser game API economy Mobile app Progressive web application References External links HTML5 Draft recommendation, changes to HTML and related APIs to ease authoring of web-based applications. Web Applications Working Group at the World Wide Web Consortium (W3C) PWAs on Web.dev by Google Developers. Software architecture Web development User interface techniques
11441272
https://en.wikipedia.org/wiki/Paperless%20office
Paperless office
A paperless office (or paper-free office) is a work environment in which the use of paper is eliminated or greatly reduced. This is done by converting documents and other papers into digital form, a process known as digitization. Proponents claim that "going paperless" can save money, boost productivity, save space, make documentation and information sharing easier, keep personal information more secure, and help the environment. The concept can be extended to communications outside the office as well. Definition The paperless world was a publicist's slogan, intended to describe the office of the future. It was facilitated by the popularization of video display computer terminals like the 1964 IBM 2260. An early prediction of the paperless office was made in a 1975 Business Week article. The idea was that office automation would make paper redundant for routine tasks such as record-keeping and bookkeeping, and it came to prominence with the introduction of the personal computer. While the prediction of a PC on every desk was remarkably prophetic, the "paperless office" was not. Improvements in printers and photocopiers made it much easier to reproduce documents in bulk, causing the worldwide use of office paper to more than double from 1980 to 2000. This was attributed to the increased ease of document production and widespread use of electronic communication, which resulted in users receiving large numbers of documents that were often printed out. However, since about 2000, at least in the US, the use of office paper has leveled off and is now decreasing, which has been attributed to a generation shift; younger people are believed to be less inclined to print out documents, and more inclined to read them on a full-color interactive display screen. According to the United States Environmental Protection Agency, the average office worker generates approximately two pounds of paper and paperboard products each day. The term "The Paperless Office" was first used in commerce by Micronet, Inc., an automated office equipment company, in 1978. History Traditional offices have paper-based filing systems, which may include filing cabinets, folders, shelves, microfiche systems, and drawing cabinets, all of which require maintenance, equipment, considerable space, and are resource-intensive. In contrast, a paperless office could simply have a desk, chair, and computer (with a modest amount of local or network storage), and all of the information would be stored in digital form. Speech recognition and speech synthesis could also be used to facilitate the storage of information digitally. Once computer data is printed on paper, it becomes out-of-sync with computer database updates. Paper is difficult to search and arrange in multiple sort arrangements, and similar paper data stored in multiple locations is often difficult and costly to track and update. A paperless office would have a single-source collection point for distributed database updates, and a publish-subscribe system. Modern computer screens make reading less exhausting for the eyes; a laptop computer can be used on a couch or in bed. With tablet computers and smartphones, with many other low-cost value-added features like video animation, video clips, and full-length movies, many argue that paper is now obsolete to all but those who are resistant to technological change. eBooks are often free or low cost compared to hard-copy books. Others argue that paper will always have a place because it affords different uses than screens. 
Environmental impact of paper Some believe that paper product manufacturing contributes significantly to deforestation and man-made climate change, and produces greenhouse gases. Others argue that paper product manufacturing, especially in North America, supports the ecological and economic balance of sustainable forestry. According to the 2018 American Forest & Paper Association Sustainability Report, paper manufacturing decreased greenhouse gas emission by 20% in an eleven-year period. Measures such as recycling can help reduce the environmental impact of paper. Some paper production outside of North America may lead to air pollution with the release of nitrogen dioxide (NO2), sulfur dioxide (SO2), and carbon dioxide (CO2). Waste water discharged from pulp and paper mills outside of North America may contain solids, nutrients, and dissolved organic matter that are classified as pollutants. Nutrients such as nitrogen and phosphorus can cause or exacerbate eutrophication of fresh water bodies. Printing inks and toners are very expensive and use environment-damaging volatile organic compounds, heavy metals and non-renewable oils, although standards for the amount of heavy metals in ink have been set by some regulatory bodies. Deinking recycled paper pulp results in a waste slurry, sometimes weighing 22% of the weight of the recycled wastepaper, which may go to landfills. Environmental impact of electronics A paperless work environment requires an infrastructure of electronic components to enable the production, transmission, and storage of information. The industry that produces these components is one of the least sustainable and most environmentally damaging sectors in the world. The process of manufacturing electronic hardware involves the extraction of precious metals and the production of plastic on an industrial scale. The transmission and storage of digital data is facilitated by data centers, which consume significant amounts of the electricity supply of a host country. Eliminating paper via automation and electronic forms automation The need for paper is eliminated by using online systems, such as replacing index cards and rolodexes with databases, typed letters and faxes with email, and reference books with the internet. Another way to eliminate paper is to automate paper-based processes that rely on forms, applications and surveys to capture and share data. This method is referred to as "electronic forms" or e-forms and is typically accomplished by using existing print-perfect documents in electronic format to allow for prefilling of existing data, capturing data manually entered online by end-users, providing secure methods to submit form data to processing systems, and digitally signing the electronic documents without printing. 
The technologies that may be used with electronic forms automation include – Portable Document Format (PDF) – to create, display and interact with electronic documents and forms E-form (electronic form) management software – to create, integrate and route forms and form data with processing systems Databases – to capture data for prefilling and processing documents Workflow platforms – to route information and documents and direct process flow E-mail (electronic mail) communication – which allows sending and receiving information of all kinds and enables attachments Digital signature solutions – to digitally sign documents (used by end-users) Web servers – to host the process, receive submitted data, store documents and manage document rights One of the main issues that has kept companies from adopting paperwork automation is the difficulty of capturing digital signatures in a cost-effective and compliant manner. The E-Sign Act of 2000 in the United States provided that a document cannot be rejected on the basis of an electronic signature and required all companies to accept digital signatures on documents. Today there are sufficient cost-effective options available, including solutions that do not require end-users to purchase hardware or software. One of the great benefits of this type of software is that OCR (optical character recognition) can be used, which enables the user to search the full text of any file. Additionally, user-defined tags can be added to each file to make it easier to locate certain files throughout the entire system. Some paperless software is offered together with a scanner and works seamlessly in separating and organizing important documents. Paperless software might also allow people to enable online signatures for important documents that can be used in any small business or office. Document management and archiving systems do offer some methods of automating forms. Typically, the point at which document management systems start working with a document is when the document is scanned and/or sent into the system. Many document management systems include the ability to read documents via optical character recognition and use that data within the document management system's framework. While this technology is essential to achieving a paperless office, it does not address the processes that generate paper in the first place.
However, scanners and software continue to improve with the development of small, portable scanners that are able to scan doubled-sided A4 documents at around 30-35ppm to a raster format (typically TIFF fax 4 or PDF). An issue faced by those wishing to take the paperless philosophy to the limit has been copyright laws. These laws may restrict the transfer of documents protected by copyright from one medium to another, such as converting books to electronic format. Securing and tracing documents As awareness of identity theft and data breaches became more widespread, new laws and regulations were enacted, requiring companies that manage or store personally identifiable information to take proper care of those documents. Paperless office systems are easier to secure than traditional filing cabinets, and can track individual accesses to each document. Difficulties in adopting the paperless office A major difficulty in "going paperless" is that much of a business's communication is with other businesses and individuals, as opposed to just being internal. Electronic communication requires both the sender and the recipient to have easy access to appropriate software and hardware. Costs and temporary productivity losses when converting to a paperless office are also a factor, as are government regulations, industry standards, legal requirements, and business policies which may also slow down the change. Businesses may encounter technological difficulties such as file format compatibility, longevity of digital documents, system stability, and employees and clients not having appropriate technological skills. For these reasons, while there may be a reduction of paper, some uses of paper will likely remain indefinitely. However, a 2015 questionnaire suggested that nearly half of small/medium-sized businesses believed they were or could go paperless by the end of that year. See also U.S. Paperwork Reduction Act, (1980) Environmental impact of paper Document management system Backup References Further reading - discusses limitations of the paperless office, and the valuable role paper can play for knowledge workers. External links The Paper Free Office – dream or reality? AIIM Market Intelligence Office work History of human–computer interaction Waste minimisation Deforestation Paper
61934102
https://en.wikipedia.org/wiki/Digital%20Trust%20Center
Digital Trust Center
The Digital Trust Center is a Dutch organisation whose main goal is to help entrepreneurs with safe digital entrepreneurship. The organisation is an initiative of the Ministry of Economic Affairs and Climate Policy and is supported by the Confederation of Netherlands Industry and Employers (VNO-NCW), the Royal Association MKB-Nederland, the National Coordinator for Security and Counterterrorism, the National Cyber Security Centre / Platform for the Information Society, Platform Netherlands, NLdigital, and the Dutch Chamber of Commerce. Target group The Digital Trust Center's target group consists of 2.2 million companies, from freelancers to large companies. These are all companies in the Netherlands that do not belong to the so-called vital sectors, such as banks, telecom, energy and water companies. Companies in these vital sectors have the National Cyber Security Centre as a partner within the central government. Mission The Digital Trust Center's mission is to increase the resilience of businesses to cyber threats, with a focus on two key tasks. Its first task is to provide businesses with reliable and independent information on digital vulnerabilities and concrete advice on the action they should take, by means of a digital platform and other facilities. Its second task is to foster cyber security alliances between businesses. Both tasks aim to help businesses improve their cyber security arrangements and to increase their resilience to cyber threats. External links Digital Trust Center Cybercrime
12101316
https://en.wikipedia.org/wiki/Enterprise%20search
Enterprise search
Enterprise search is the practice of making content from multiple enterprise-type sources, such as databases and intranets, searchable to a defined audience. "Enterprise search" is also used to describe the software used to search for information within an enterprise (though the search function and its results may still be public). Enterprise search can be contrasted with web search, which applies search technology to documents on the open web, and desktop search, which applies search technology to the content on a single computer. Enterprise search systems index data and documents from a variety of sources such as file systems, intranets, document management systems, e-mail, and databases. Many enterprise search systems integrate structured and unstructured data in their collections. Enterprise search systems also use access controls to enforce a security policy on their users. Enterprise search can be seen as a type of vertical search of an enterprise. Components of an enterprise search system In an enterprise search system, content goes through various phases from source repository to search results: Content awareness Content awareness (or "content collection") is usually either a push or pull model. In the push model, a source system is integrated with the search engine in such a way that it connects to it and pushes new content directly to its APIs. This model is used when realtime indexing is important. In the pull model, the software gathers content from sources using a connector such as a web crawler or a database connector. The connector typically polls the source at certain intervals to look for new, updated or deleted content. Content processing and analysis Content from different sources may have many different formats or document types, such as XML, HTML, Office document formats or plain text. The content processing phase converts the incoming documents to plain text using document filters. It is also often necessary to normalize content in various ways to improve recall or precision. These may include stemming, lemmatization, synonym expansion, entity extraction, and part-of-speech tagging. As part of processing and analysis, tokenization is applied to split the content into tokens, which are the basic matching units. It is also common to normalize tokens to lower case to provide case-insensitive search, as well as to normalize accents to provide better recall. Indexing The resulting text is stored in an index, which is optimized for quick lookups without storing the full text of the document. The index may contain the dictionary of all unique words in the corpus as well as information about ranking and term frequency. Query processing Using a web page, the user issues a query to the system. The query consists of any terms the user enters as well as navigational actions such as faceting and paging information. Matching The processed query is then compared to the stored index, and the search system returns results (or "hits") referencing source documents that match. Some systems are able to present the document as it was indexed. Differences from web search Beyond the difference in the kinds of materials being indexed, enterprise search systems also typically include functionality that is not associated with mainstream web search engines. These include: Adapters to index content from a variety of repositories, such as databases and content management systems.
Federated search, which consists of transforming a query and broadcasting it to a group of disparate databases or external content sources with the appropriate syntax, merging the results collected from the databases, presenting them in a succinct and unified format with minimal duplication, and providing a means, performed either automatically or by the portal user, to sort the merged result set. Enterprise bookmarking, collaborative tagging systems for capturing knowledge about structured and semi-structured enterprise data. Entity extraction that seeks to locate and classify elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc. Faceted search, a technique for accessing a collection of information represented using a faceted classification, allowing users to explore by filtering available information. Access control, usually in the form of an Access control list (ACL), is often required to restrict access to documents based on individual user identities. There are many types of access control mechanisms for different content sources making this a complex task to address comprehensively in an enterprise search environment (see below). Text clustering, which groups the top several hundred search results into topics that are computed on the fly from the search-results descriptions, typically titles, excerpts (snippets), and meta-data. This technique lets users navigate the content by topic rather than by the meta-data that is used in faceting. Clustering compensates for the problem of incompatible meta-data across multiple enterprise repositories, which hinders the usefulness of faceting. User interfaces, which in web search are deliberately kept simple in order not to distract the user from clicking on ads, which generates the revenue. Although the business model for enterprise search could include showing ads, in practice this is not done. To enhance end user productivity, enterprise vendors continually experiment with rich UI functionality which occupies significant screen space, which would be problematic for web search. Relevance factors The factors that determine the relevance of search results within the context of an enterprise overlap with but are different from those that apply to web search. In general, enterprise search engines cannot take advantage of the rich link structure as is found on the web's hypertext content, however, a new breed of Enterprise search engines based on a bottom-up Web 2.0 technology are providing both a contributory approach and hyperlinking within the enterprise. Algorithms like PageRank exploit hyperlink structure to assign authority to documents, and then use that authority as a query-independent relevance factor. In contrast, enterprises typically have to use other query-independent factors, such as a document's recency or popularity, along with query-dependent factors traditionally associated with information retrieval algorithms. Also, the rich functionality of enterprise search UIs, such as clustering and faceting, diminish reliance on ranking as the means to direct the user's attention. Access control: early binding vs late binding Security and restricted access to documents is an important matter in enterprise search. There are two main approaches to apply restricted access: early binding vs late binding. Late binding Permissions are analyzed and assigned to documents at query stage. 
The query engine generates a document set, and before it is returned to a user the set is filtered based on the user's access rights. This is a costly process but an accurate one (it reflects the user's permissions at the moment of the query). Early binding Permissions are analyzed and assigned to documents at the indexing stage. This is much more efficient than late binding, but can be inaccurate (a user might be granted or revoked permissions in the period between indexing and querying). Search relevance testing options Search application relevance can be determined by relevance testing options such as Focus groups Reference evaluation protocol (based on relevance judgements of results from agreed-upon queries performed against common document corpora) Empirical testing A/B testing Log analysis on a beta production site Online ratings See also Collaborative search engine Comparison of enterprise search software Data defined storage Enterprise bookmarking Enterprise information access Faceted search Information extraction Knowledge management List of enterprise search vendors List of search engines Text mining Vertical search References Information retrieval genres Enterprise search
707741
https://en.wikipedia.org/wiki/Transport%20Layer%20Interface
Transport Layer Interface
In computer networking, the Transport Layer Interface (TLI) was the networking API provided by AT&T UNIX System V Release 3 (SVR3) in 1987 and continued into Release 4 (SVR4). TLI was the System V counterpart to the BSD sockets programming interface, which was also provided in UNIX System V Release 4 (SVR4). TLI was later standardized as XTI, the X/Open Transport Interface. TLI and Sockets It was originally expected that the OSI protocols would supersede TCP/IP; thus TLI was designed from an OSI model-oriented viewpoint, corresponding to the OSI transport layer. Otherwise, TLI looks similar, API-wise, to sockets. TLI and XTI were widely used and, up to UNIX 98, may have been preferred over the POSIX sockets API with respect to existing standards. However, it was clear at least since the early 1990s that the Berkeley socket interface would ultimately prevail. TLI and XTI are still supported in SVR4-derived operating systems and operating systems conforming to branded UNIX (UNIX 95, UNIX 98 and UNIX 03 Single UNIX Specifications) such as Solaris and AIX (as well as the classic Mac OS, in the form of Open Transport). Under UNIX 95 (XPG4) and UNIX 98 (XPG5.2), XTI was the preferred and recommended API for new transport protocols. As a result of deliberations by the Austin Group with the goal of bringing flavors of UNIX that do not provide STREAMS, such as BSD and Linux, under the Single UNIX Specification, the UNIX 03 Single UNIX Specification both declares STREAMS as optional and declares POSIX Sockets as the preferred API for new transport protocols. See also X/Open Transport Interface, formally standardized successor to TLI. X/Open Portability Guide, the predecessor to POSIX Computer networking, outlining the major networking protocols Notes References External links The Open Group's XTI standard Example client-server application working on Solaris and Linux Application programming interfaces
45062283
https://en.wikipedia.org/wiki/Kinetica%20%28software%29
Kinetica (software)
Kinetica is a distributed, memory-first OLAP database developed by Kinetica DB, Inc. Kinetica is designed to use GPUs and modern vector processors to improve performance on complex queries across large volumes of real-time data. Kinetica is well suited for analytics on streaming geospatial and temporal data. Background In 2009, Amit Vij and Nima Neghaban founded GIS Federal, a developer of software they called GPUdb. The GIS stood for Global Intelligence Solutions. GPUdb was initially marketed for US military and intelligence applications, at Fort Belvoir for INSCOM. In 2014 and 2016, the analyst firm International Data Corporation mentioned Kinetica for its production deployments at the US Army and United States Postal Service, respectively. As a result of its work with USPS, IDC announced that Kinetica was the recipient of the HPC Innovation Excellence Award. On March 3, 2016, the name of the company was changed to GPUdb to match the name of the software, and a $7 million investment was announced whose backers included Raymond J. Lane. In September 2016, it announced another $6 million investment, and an office in San Francisco, while keeping its office in Arlington, Virginia. After adding marketing and service people, the name of both the company and product was changed to Kinetica. In June 2017, the company announced US$50 million in Series A funding led by Canvas Ventures and Meritech Capital Partners, along with new investor Citi Ventures and existing backer Ray Lane of GreatPoint Ventures. The company has headquarters in Arlington, Virginia and San Francisco and regional offices in Europe and Asia Pacific. Software The software is designed to run on graphics processing units such as the Tesla from Nvidia. Partners include Cisco, Dell EMC, HPE, IBM, NVIDIA, Confluent, Amazon, Microsoft, Google, and Oracle. At Kinetica's core is a distributed, memory-first relational SQL database that combines the processing power of CPUs with the acceleration of multi-core GPU devices to analyze and visualize data with fast (often millisecond) response times. Kinetica is designed to handle streaming, batch and historic data. Other features include graph-solving algorithms, user-defined functions, geospatial functions, the Advanced Analytics Workbench for deploying machine learning and deep learning algorithms, natural language processing, automatic storage tiering, and Kinetica Reveal, a web-based dashboarding tool. Customers The United States Postal Service deployed Kinetica into production in 2014. Other public customers include Telkomsel, Softbank, GSK, Pubmatic, and OVO. References External links Official Kinetica website Proprietary database management systems Geographical databases Cross-platform software Software companies based in the San Francisco Bay Area Big data companies Software companies of the United States
12558884
https://en.wikipedia.org/wiki/Pennsylvania%20Route%20321
Pennsylvania Route 321
Pennsylvania Route 321 (PA 321) is a state highway located in Elk and McKean counties in the U.S. state of Pennsylvania, maintained by the Pennsylvania Department of Transportation (PennDOT). The southern terminus is at U.S. Route 219 (US 219) in the community of Wilcox. The northern terminus is at PA 346 within the Allegheny National Forest. PA 321 heads northwest from Wilcox through rural areas to the borough of Kane, where it forms a brief concurrency with US 6. North of here, the route heads through the national forest and runs along the shore of the Allegheny Reservoir. PA 321 runs east briefly with PA 59 before winding north through more forest to its northern terminus. A portion of the route along the Allegheny Reservoir is designated as the Longhouse National Scenic Byway, a Pennsylvania Scenic Byway and National Forest Scenic Byway. The road between Wilcox and Kane was designated as part of Legislative Route 97 (LR 97) in 1911 and as part of PA 6 in 1924. US 119 became concurrent with PA 6 in 1926 before US 219 replaced both designations on this stretch of road two years later. The road between Kane and Kinzua was built in the late 1920s and became an extension of PA 68 in 1935. In 1952, US 219 was relocated off the road between Wilcox and Kane. Plans were made to construct the Kinzua Dam in 1960, and several new roads would need to be built to accommodate the reservoir including a relocation of PA 68. In 1961, the PA 321 designation was approved for the unnumbered road between Wilcox and Kane and PA 68 between Kane and Kinzua in order to provide an access road to the planned recreation area; signs were posted the following year. In the mid-1960s, improvements were planned for PA 321. The PA 321 designation north of Kane was removed in 1966. In the late 1960s, the section between north of Kane and PA 346 was constructed, including a new alignment between Red Bridge and PA 59. Work on the segment north of Kane took place in the early 1970s. In Kane, PA 321 was relocated from following US 6 through the downtown area to use Hacker Street to the east, with reconstruction finished in 1973. The road between Wilcox and Kane, including a bypass of the former, was rebuilt in the mid-1970s. In 1974, PA 321 was extended from US 6 in Kane north to PA 346. Route description PA 321 begins at an intersection with US 219 in the community of Wilcox in Jones Township, Elk County, heading north on two-lane undivided Buena Vista Highway. The road bypasses the center of Wilcox to the west as it passes through an industrial area. The highway crosses the West Branch Clarion River and curves northwest into forests. The route runs through Dahoga and continues along the eastern border of Allegheny National Forest, with a Buffalo and Pittsburgh Railroad line a short distance to the west of the road. PA 321 enters Wetmore Township in McKean County and becomes Brickyard Road, departing the Allegheny National Forest. The road continues through rural land, passing through Sergeant and East Kane. The route heads into the borough of Kane and becomes Westerberg Way, running through industrial areas as it crosses an abandoned railroad line and comes to an intersection with US 6. Here, PA 321 turns west briefly for a concurrency with US 6 on Biddle Street, immediately turning north-northwest onto Hacker Street, where it is lined with homes. 
After passing Glenwood Park, the road curves northeast as Kinzua Avenue and leaves Kane for Wetmore Township, becoming Kane-Marshburg Road and heading north-northwest into wooded areas. The route re-enters the Allegheny National Forest and continues northwest, crossing into Hamilton Township and intersecting Longhouse Drive. PA 321 comes to a bridge over the Kinzua Creek arm of the Allegheny Reservoir near the Red Bridge Recreation Area, which contains a picnic area, a campground, and a bank fishing area. From here, the highway follows the northeastern shore of the reservoir, passing through Dunkle Corners. The road turns to the east and runs along the south shore of Chappel Bay, soon heading away from the reservoir and turning northeast. The route continues into Corydon Township and reaches an intersection with PA 59 near the Bradford Ranger Station. At this point, PA 321 turns east for a concurrency with PA 59 and the name remains Kane-Marshburg Road. The road leaves the national forest and heads through the community of Klondike before PA 321 splits to the northwest onto Sugar Run Road in Lafayette Township. The road crosses back into Corydon Township and runs through more of the Allegheny National Forest. The route passes to the north of a section of the Allegheny Reservoir again and turns to the northeast. PA 321 winds north and serves the Tracy Ridge Trailhead, which consists of a campground and hiking trails leading to the reservoir. The route comes to its northern terminus at PA 346 a short distance south of the New York border. The portion of PA 321 between Longhouse Drive and PA 59 is part of the Longhouse National Scenic Byway, a Pennsylvania Scenic Byway and National Forest Scenic Byway that encircles the Kinzua Creek arm of the Allegheny Reservoir and serves multiple recreational areas. In 2015, PA 321 had an annual average daily traffic count ranging from a high of 4,300 vehicles along the US 6 concurrency in Kane to a low of 100 vehicles between PA 59 and PA 346. The portion of PA 321 that is concurrent with US 6 is part of the National Highway System. History The PA 321 designation was approved by the Pennsylvania Department of Highways (PDH) in 1961, replacing what was known as "Old 219" between Wilcox and Kane and a portion of PA 68 between Kane and Kinzua; signs were installed the following year. Plans for improving PA 321 were announced in 1963, including a new alignment north of Red Bridge to PA 59 that would follow the shore of the Allegheny Reservoir. In 1965, construction began on a portion of the road between north of Kane and Red Bridge. Signage for PA 321 north of Kane was removed in 1966. The portion of PA 321 in the Red Bridge area was completed in 1966 while the section south of Red Bridge to north of Kane was finished the following year. In 1967, construction began on the new alignment between Red Bridge and PA 59, which was finished the next year. The road between PA 59 near Marshburg and the New York border was completed in 1968. Construction of the section of PA 321 north of Kane took place between 1970 and 1971. Construction of the section of PA 321 between East Kane and Kane and along Hacker Street in Kane occurred from 1972 to 1973. The final section of the route between Wilcox and East Kane was rebuilt between 1972 and 1975. Designation Following the passage of the Sproul Road Bill in 1911, the highway between Wilcox and Kane was incorporated as part of LR 97. The Wilcox-Kane road was built in the mid-1920s.
In 1925, this roadway became a part of PA 6 (the Buffalo-Pittsburgh Highway), which ran from the Maryland border near Meyersdale north to Bradford. With the creation of the U.S. Highway System on November 11, 1926, US 119 was cosigned with PA 6 on this stretch of road. US 219 replaced the US 119/PA 6 designation between Wilcox and Kane in 1928. The roadway between Kane and Red Bridge was improved by Works Progress Administration labor between 1929 and 1930 in order to "get the rural areas 'out of the mud'." The road was built with a soft stone base that would not handle the heavier traffic volumes that would later use the road. This section of highway was patched by the state several times in order to keep it usable for traffic. The road between Kane and Kinzua became a northward extension of PA 68 in 1935. Plans were announced in 1949 for a direct alignment of US 219 between Wilcox and Lantz Corners to bypass the westerly jog to Kane. In 1952, US 219 was relocated off the roadway between Wilcox and Kane and US 6 between Kane and Lantz Corners to use the new direct alignment. In 1960, plans were made to construct the Kinzua Dam in the Allegheny National Forest, with road relocations necessary for the new reservoir that would be formed. Among the roads that would need to moved was PA 68 between Kane and Kinzua. The route was to be relocated to follow the shore of the reservoir before coming to an intersection with PA 59. This new road would provide access to the Kinzua Dam from Kane. A meeting was held on February 24, 1960, between state and federal officials to discuss the road relocations for the dam construction, including PA 68. Plans were announced in March 1961 to repave "Old Route 219" between Wilcox and Kane, with completion planned for the summer. The next month, the Special Highways Committee of the Kane Chamber of Commerce planned to meet with the supervisor of the Allegheny National Forest to discuss access roads to the Kinzua Dam. Among the ideas offered was extending PA 255 from Johnsonburg north to Kane, running concurrent with US 219 between Johnsonburg and Wilcox and following the unnumbered road between Wilcox and Kane. By May of that year, no decision was made on how PA 68 would be routed to the north of Red Bridge with the construction of the Kinzua Dam. In 1961, the Special Highways Committee recommended a new designation for LR 97 between Wilcox and Kane, which was known as "Old 219", along with improving the Wilcox-Kane road and PA 68 between Kane and Red Bridge. A new road was favored north of Red Bridge to serve the proposed recreational area at the Allegheny Reservoir. The PA 321 designation was approved by the PDH on December 7 of that year. The route replaced LR 97 between Wilcox and Kane, where it joined US 6 and followed that route to the Easton Street intersection. Between Kane and Kinzua, PA 321 replaced a section of PA 68, with the northern terminus of PA 68 cut back to US 6/PA 321 at the intersection of Fraley and Greeves streets in Kane. The new designation was suggested by the president of the Kane Chamber of Commerce, Victor Westerberg, to provide a connection to the Kinzua Dam via Kane. The roadway north of Kane was also designated as LR 42003. PA 321 was signed between Wilcox and Kinzua in May 1962. Signage for PA 321 from Kane northward had been removed by March 1966 as that section of road was under construction, with the northern terminus of the route moved to US 6 at the south end of Kane. 
In April 1968, signage was installed in Kane pointing the way to the Kinzua Dam via US 6; PA 321 remained unsigned pending the completion of the road north of Kane. With the construction of PA 321, the route remained unsigned north of Kane. In addition, there were a lack of signs at the PA 59 intersection pointing the way to Kane and a lack of signage in Kane directing motorists to attractions in the Kinzua Dam area. Also, a 1972 map from the Pennsylvania Game Commission incorrectly labeled PA 66 north into Kane along with the road between Kane and the Kinzua Dam area as PA 68. Plans were announced to improve signage between the two points in September 1972. In June 1974, PA 321 was signed between US 6 in Kane and PA 346 near the New York border. On May 7, 1990, the portion of PA 321 between Longhouse Drive and PA 59 was designated as the Longhouse National Scenic Byway, a National Forest Scenic Byway. Planning When PA 321 was designated in 1961, improvement work along the section between Kane and Wilcox along with the portion between Kane and Red Bridge was planned for spring 1962. In February 1963, the U.S. Army Corps of Engineers (USACE) and the Allegheny National Forest agreed that PA 321 would be relocated to a new alignment following the shore of the reservoir north to PA 59. On June 20, 1963, officials from the borough of Kane met with district highway engineer Stanton Funk to determine the progress of improving PA 321. At that time, no progress had been made on improving the route. The state had no plans to build the new alignment of PA 321 north of Red Bridge; a tentative route was decided upon but no work had begun. As a result of this meeting, Kane borough officials voted on July 1 to protest both Governor William Scranton and the Pennsylvania General Assembly for the state bypassing Kane and not improving the roads in the area. The construction of PA 59 to provide access to the dam provided a shortcut to US 6 between Warren and Smethport, along with the US 219 bypass through Lantz Corners to Wilcox, kept traffic heading to the dam away from Kane. Therefore, the state determined that they did not need to improve the highways in the Kane area. However, Kane officials felt improving PA 321 would provide better access to the dam and would boost tourism to their town. On November 2, 1963, plans were revealed by Representative Albert W. Johnson to improve PA 321. With these plans, the portion of the route between Wilcox and Kane would have several curves eliminated. In Kane, the curbs on the portion of US 6/PA 321 running along Fraley Street would be replaced. The section of PA 321 between Kane and Red Bridge, known as the Kinzua Road, was resurveyed in summer 1963 and advanced to the design stage, with funding split between the PDH and the U.S. Forest Service. The route north of Red Bridge would be built along a new alignment as the last highway relocation in the creation of the Kinzua Dam reservoir. Work on these projects was scheduled to begin in 1964. The new alignment of PA 321 north of Red Bridge was planned to parallel the original route to Morrison, where it would split and follow the North Fork Run to an intersection with the relocated alignment of PA 59. A new bridge carrying the route over Kinzua Creek near Red Bridge would be constructed. The new bridge would be higher than the original bridge to accommodate the higher water level of the reservoir. 
Plans were made in 1964 to realign PA 321 to follow Hacker Street in Kane to access the Kinzua Road, avoiding the downtown area along US 6. On January 12, 1965, a meeting was held between Kane borough officials, local government and planning commission officials, and Funk and chief engineer Robert Keppner regarding the construction of PA 321, including improving the road between Wilcox and Kane, building a bypass to the east of Kane, and constructing the road north to PA 59. At this time, no plans existed to improve US 6/PA 321 on Fraley Street in Kane. In this meeting, it was announced that construction of the route north of Kane was planned to start before July 1, 1965, with work progressing from Red Bridge, where the highway was in the worst condition, south to Kane. The route would be built as a wide road with wide shoulders. Also, there was opposition to routing PA 321 along Clay Street in Kane, and instead the state agreed to improve the existing alignment between the borough and the Elk County border, with one-way streets discussed as a possibility for the route through Kane. The Clay Street alignment would involve building a new road and would lower the grade of the Baltimore and Ohio Railroad tracks; instead, a routing by way of Hacker Street was suggested which would not involve altering the railroad tracks, with a traffic light installed at the junction of US 6 and PA 321 to slow down traffic along US 6. There was also a proposal to have PA 321 bypass Kane to the east and intersect a relocated PA 68 south of Kane and a relocated US 6 east of the borough. On March 11, 1965, plans were unveiled by PDH secretary Henry Harral for constructing PA 321, with bidding expected to begin soon after. The new bridge over the Kinzua Creek at Red Bridge would be a continuous steel girder bridge with two concrete piers. A three-span bridge would be built over Chappel Forks. Work would take place on the route from north of Kane north to Red Bridge. Between the Elk County line and Kane, PA 321 would be improved along its existing alignment. At this time, the section of the road to the north of Kane was still under design, with bidding planned for later in the year. Between Kane and Red Bridge, PA 321 would become a Forest Service Highway and would be built as a heavy-duty road. On May 28, 1966, it was announced that the State Highways Commission would hold a meeting concerning road projects in the Clearfield District on June 1, asking for input from residents. Among the projects to be recommended by Representative Westerberg and other Kane officials were the improvement of US 6/PA 321 on Fraley Street in Kane along with completing the remaining segments of PA 321. At the meeting, Westerberg presented a need for improved highways in McKean County, including completing PA 321 in an expedited manner in order to provide access between Kane and Kinzua Lake. He called for work to begin on the section to the north of Kane, as the existing road could not handle traffic heading to the lake. On July 20, 1966, Westerberg announced that a construction notice would be posted for the section between the Elk County line and Kane, with work to begin pending public hearings. Westerberg would also inquire about the status of the section between Red Bridge and PA 59. On June 7, 1967, the state released its "5th and 6th Year Program", which called for Fraley Street in Kane to be widened and also listed the sections of PA 321 between Wilcox and Kane and to the north of Kane for reconstruction.
The route was planned to be rebuilt in three sections: between Wilcox and the county line, between the county line and East Kane, and north of Kane. The slow progress of constructing PA 321 led people in Kane to feel they were being neglected in not getting road access to the Kinzua Dam. In October 1967, it was announced by Secretary of Highways Robert G. Bartlett that a meeting would soon be held to speed up highway construction projects in the Kane area. Among the items called for was the replacement of the curbs on Fraley Street, which was projected to start in 1968. In addition, plans for constructing the route between Wilcox and Kane and along Hacker Street through Kane would be discussed. In December 1967, several recommendations were made to the Pennsylvania Highway Commission for the six-year program, including completing PA 321. At this time, final design was underway for PA 321 between Wilcox and north of Kane. Bids for improving Fraley Street in Kane were advertised in December. Also, a road known as Sugar Run Road was proposed to be built between Marshburg and the Kinzua Reservoir at a cost of $3,806,000, providing a connection for PA 321 north of PA 59. On October 30, 1968, a meeting was held between the PDH and local officials. At this meeting, changes were made by the state to the PA 321 project at the southern end of Kane, with construction pushed back to 1969 and 1970. In addition, a design for an interchange with a possible US 6 bypass of Kane was revealed. The alignment of PA 321 between Wilcox and Kane was to be a reconstruction of the existing alignment with relocations in a few places. In Kane, the route would follow Hacker Street, which would be widened. Kane to Red Bridge The section of PA 321 between Kane and Kinzua was closed to through traffic in early 1963, though it started to see its heaviest traffic volumes ever. In early 1964, the U.S. Forest Service committed funds to the construction of PA 321 between Kane and Red Bridge. On June 19, 1964, Representative Victor Westerberg asked that the PDH give priority to three road projects in McKean County, including PA 321, to help improve access to the Kinzua Dam. As of this time, the section of the route from Kane to Red Bridge was waiting for action from the state for construction to begin. On November 9, 1964, preliminary work on building PA 321 between Kane and Red Bridge was scheduled to get underway. This road would be reconstructed to eliminate or reduce several sharp curves. However, the notices for building the road did not include Easton and Chase streets in Kane, which carried two blocks of PA 321 to the north of US 6. The construction of PA 321 north of Kane was expected to block the town's access to the Kinzua Dam area, and an alternate route via Forest Highway 262 was not anticipated to be completed in time, with traffic detoured via US 6 or US 219 to PA 59. In late November, a $285,000 bid was submitted to clear the valley of the Kinzua Creek for the reservoir. At this time, the state had $500,000 allocated for constructing PA 321 from Kane to Red Bridge, with construction scheduled to begin in July 1965. The section of PA 321 between Kane and Red Bridge was scheduled to have construction contracts issued in June 1965 with construction beginning on July 1. Surveys of PA 321 north of Kane were completed by March 1965. By April 1965, work was underway in clearing the way for the reservoir that was to be formed by the Kinzua Dam.
In July 1965, a $688,540 bid was submitted by Istock Construction Company to construct PA 321 from north of Kane to Red Bridge. Construction on the first section of PA 321 between north of Kane and Red Bridge started on October 2, 1965, and was moving ahead of schedule. The section of PA 321 running north of Kane was unfunded, with the project scheduled for 1966. Work was scheduled to be completed on the section from north of Kane to Red Bridge in 1966, while the section to the north of Kane was planned to be finished in 1967. With the construction projects underway, PA 321 was closed north of Kane and traffic was detoured via US 6 to Warren or via US 6 and US 219 to reach PA 59 and the Kinzua Dam. In January 1966, work on the road south of Red Bridge was suspended for the winter. Construction of the road between north of Kane and Red Bridge was affected by mud. By May 1966, no action had been taken on the section of PA 321 north of Kane along with the road between Red Bridge and PA 59, with no plans for access between Kane and the Kinzua Dam to be finished that year. It was confirmed that PA 321 between Kane and PA 59 would be built as a road with a berm, a change from what had initially been planned. By June 1966, construction resumed on the section of PA 321 to the south of Red Bridge, though progress was slow. The section between north of Kane and Red Bridge was plagued by equipment and worker shortages. By October, final touches were being put in place on the section of the route south of Red Bridge, which was slated to be finished in spring 1967. The section of the road north of Kane remained without funds, and the section north to PA 59 had been approved by the USACE and was waiting for contract letting by the state, possibly not to be completed until 1968. The Kiasutha Recreational Area in the Allegheny National Forest was scheduled for a limited opening on June 1, 1967. In August 1968, it was announced that contract letting for the section of PA 321 to the north of Kane would begin in December. It was determined that most of this section of road would be relocated onto a new alignment mostly to the west. It was anticipated that the existing road would remain open to traffic while the new alignment was built. In April 1969, the PDH requested taking a portion of Glenwood Park for $2,020 to construct the route heading out of Kane. It was announced by state Senator Richard Frame and Representative Westerberg that the PA 321 project to the north of Kane was included in $6 million of capital fund projects introduced to the Pennsylvania State Senate by Frame on June 25; bids were scheduled for September 12. The reconstruction of PA 321 north of Kane was shelved on November 7, 1969, due to lack of funds, but remained on the state Senate calendar. It was announced by Senator Frame and Representative Westerberg in March 1970 that bidding for the section of PA 321 north of Kane would begin on April 24. The Clearfield District held an information meeting on April 23, 1970, at which Representative Westerberg was in attendance. It was determined that the alignment of PA 321 on Hacker Street in Kane would be modified to extend the street northwest to the realigned PA 321 and pass to the west of the existing alignment in order to reduce the number of curves and avoid taking a portion of Glenwood Park. The low bidder to construct the section of PA 321 to the north of Kane was Putnam and Greene, Inc. at $1,768,997.11, with construction work projected to start in mid-May.
During construction of the new alignment of the route, traffic traveling between Kane and the Kinzua Dam was expected to be detoured. In May 1970, the state began surveying the modified routing of PA 321 at the north end of Kane. By October 1970, work was underway on the relocation of PA 321 immediately to the north of Kane. A cut was made to extend Hacker Street north, with the removed fill used to extend and level the athletic fields at Glenwood Park. Construction on this section was halted for the winter but was anticipated to resume in April 1971. The section of PA 321 north of Kane was partially opened to traffic by December 1971, with guardrail installation underway. This section was fully completed by the end of 1971. In 1972, a fault from a deep spring developed north of Kane, leading to that section of highway being replaced. North of Red Bridge The USACE and the state were responsible for the section of the route between Red Bridge and PA 59, which included the bridge over Kinzua Creek. The state would build this section of road for the USACE. The bridge over the Kinzua Creek was advanced to the review stage by September 1964, and the new road between Red Bridge and PA 59 was scheduled to be built starting in 1965. By November of that year, the bridge over the reservoir and the highway north to PA 59 were under review by the USACE. By February 1965, the new bridge at Red Bridge had been engineered while plans for the section between Red Bridge and PA 59 were again under review by the USACE after being returned for some minor revisions. By October 2, 1965, the first section between Red Bridge and PA 59, which involved the new bridge over Kinzua Creek, was under contract to O'Block Construction Company at a bid of $924,285, with construction starting the following week. On December 31, 1965, work began on building the concrete piers for the new bridge at Red Bridge. Progress continued on the Kinzua Creek bridge piers through the winter. Work on the bridge and its approach continued as weather permitted during the winter months. By the following month, the rock fill for the bridge approach was halfway complete while the path of the road leading to the bridge was being cut in order to be graded. The rock fill for the Kinzua Creek bridge was leveled with the piers by March 1966. The same month, the section between Red Bridge and PA 59 was again placed under review for more revisions to be made, while the section to the north of Kane was surveyed but remained without funding. The bridge piers and rock fill were completed by May 1966. Construction of the Kinzua Creek bridge was affected by mud, though the project remained on schedule. In June 1966, right-of-way negotiations were underway between Red Bridge and PA 59. On June 3, 1966, pilings began to be driven for the bridge abutments at Kinzua Creek. By the following month, steel beams for the bridge over Kinzua Creek began to be put in place, with the bridge project ahead of schedule. In October 1966, the new bridge over the Kinzua Creek at Red Bridge, along with a stretch of road to the north of the bridge, was completed and opened to traffic, with work on the berms remaining. In December 1966, the former bridge at Red Bridge was listed for demolition as part of clearing the basin for the lake. On January 10, 1967, it was announced by Representative Westerberg that bidding for the relocation of PA 321 between Red Bridge and PA 59, which included the new Chappel Forks bridge, would open on February 10.
The O'Block Construction Company submitted a low bid of $1,579,713, with completion expected later in the year. Construction was underway on building the relocated PA 321 between Red Bridge and PA 59 by June 1967. On June 2, clearing began to build the bridge at Chappel Forks. Construction equipment was shifted from the Forest Route 262 project to PA 321 as the forest road project was slowed for special tests. By the later part of the month, a lot of progress was made on building the road north of Red Bridge, although construction was slowed by rainy weather; the relocation was on target for a 1968 completion date. In addition to building the road, campsites and boat ramps would also be built adjacent to the road serving the lake. At Dunkle Corners, three boat ramps would be constructed where the former Kinzua Road crossed the lake and was submerged. By October 1967, the section between Red Bridge and PA 59 was partially complete, but was impassable at several points and in need of a repave to handle heavy traffic. The steel needed to finish the Chappel Forks Bridge was delayed by a strike. The section between Red Bridge and PA 59 was ready to be paved starting in April. On June 5, the road was closed to traffic to allow paving to begin. By August of that year, the section between Red Bridge and PA 59 was paved while finishing touches on berms and guardrails were underway; the road remained closed to traffic and was planned to be opened in September. In addition, a connecting road between PA 59 in Marshburg through Willow Bay to the New York border was completed. By late September, the project between Red Bridge and PA 59 was almost complete and would be opened to traffic following a formal inspection. On October 3, 1968, an inspection was scheduled with the USACE, USFS, PDH, and O'Block Construction Company officials touring the highway. This inspection revealed no issues with the road, and a final inspection was scheduled on October 8 before it would open. The section of PA 321 between Red Bridge and PA 59 opened to traffic on the afternoon of October 8, 1968 following a final inspection. Kane In October 1968, signs were installed on PA 321 at each end of Kane advising that the road would be improved from 1968 to 1969. On April 4, 1968, the borough of Kane made an offer with the state to give Hacker Street to the state for the alignment of PA 321 in exchange for parts of Easton Street, Chase Street, and Kinzua Avenue. On February 3, 1969, it was revealed that several highway projects in the Kane area were stalled. The improvement of Fraley Street was halted due to the Kane Gas Company not having the money to replace its lines. In addition, the rebuilding of the stretch of PA 321 to the north of Kane was removed from the construction agenda for 1969. The construction of PA 321 south of Kane was still scheduled to begin later in the year, but the project was sent to the redesign stage for modifications to be made to the Baltimore and Ohio Railroad crossing to avoid grade changes. In November 1969, the borough of Kane and McKean County received plans from the Baltimore and Ohio Railroad for modifications to the railroad crossing along the route. On December 31, 1969, it was announced that plans for the PA 321 project in Kane and East Kane would be on display to the public at Kane's borough hall on January 5, 1970. The first public hearing on the project was held the following day at the Central Fire Building in Kane to discuss the section of PA 321 through Kane and East Kane. 
Nearly 100 people, including PDH representatives, attended this hearing, where the state explained the project and the residents offered their opinions. At this hearing, it was mentioned that it was decided to route PA 321 through Kane rather than along an eastern bypass as the borough was intended to be a destination for motorists using the highway. Construction on the Hacker Street section of PA 321 was scheduled to begin in summer 1970. The road was to have concrete travel lanes with asphalt shoulders. The proposed routing of PA 321 through Kane would cost $1,095,000 and would require acquiring one residential property. An alternate alignment was considered to provide a better intersection with US 6; however, it would have cost $1,123,000 and would have required taking two properties along with cutting off a railroad siding to a plant. On March 2, 1970, the Kane borough council voted to consider withdrawing its offer to let the state use Hacker Street for the alignment of PA 321 as the state would require borough taxpayers to pay around $60,000 of the approximate $100,000 cost to relocate sewer, water, and gas lines. Several residents on Hacker Street were against the construction project. The Kane Borough Planning Commission discussed on July 6, 1970, the routing of PA 321 along Hacker Street. The state felt the street needed to be lowered more for a better construction base while the borough felt it needed to be lowered less in order to avoid relocating underground utilities. In addition, the residential property at US 6 and Hacker Street received a notice to be razed for construction of PA 321. On September 14, 1970, the borough of Kane was assured that it would receive 95 percent state aid for the $80,000 cost to lower the sanitary sewer on Hacker Street for construction of the route. The remaining $4,000 cost would come from McKean County from the Liquid Fuels Refund. At this time, there was no start date for construction along Hacker Street. On February 25, 1971, it was announced that contract letting for the section of the route between Glenwood Park in Kane and East Kane, including Hacker Street, would take place on April 30. Residents in East Kane voiced opposition to reconstructing the route in their village in March 1971, and believed that the state found it more important to provide a route to Kane while taking a lot of land from people in East Kane to construct the highway. In late April, the opening of bids for the section of PA 321 through Kane was postponed due to approval needed from the Public Utilities Commission on changes made to the nearby Baltimore and Ohio Railroad siding. On April 10, 1972, work continued along the section of PA 321 in Kane and East Kane, with full scale construction starting May 1 when ground conditions were more suitable. Work would first take place between East Kane and Kane and at the US 6 intersection, while construction along Hacker Street would follow. Hacker Street was intended to be the last part of PA 321 in Kane to be constructed to have the least impact on local residents. The same day, it was announced that bids would open on May 5 for concrete paving between Wilcox and East Kane. As full scale work began between East Kane and Kane on May 1, plans were accelerated for construction on the Hacker Street section of the route to begin by mid-May. During construction of Hacker Street, the street would be closed while US 6 and one cross street would remain open. 
By July 1972, the road between East Kane and US 6 in Kane was being graded, with concrete pouring scheduled for August, while utility relocation was underway along Hacker Street. By September 1972, concrete was poured for the route between East Kane and Kane, and an overhead grade crossing signal was installed at the Baltimore and Ohio Railroad crossing to the south of Kane. The intersection with US 6 was nearly complete, with curb work to take place. Construction continued south from East Kane towards Wilcox, with several curves eliminated. On August 6, 1973, asphalt paving began along Hacker Street in Kane, which was expected to take ten days to complete. Following that, sidewalks along the street would be improved. The following month, the borough of Kane received $10,000 from the liquid fuels tax to restore the sidewalks along Hacker Street. The street was anticipated to be opened following the completion of the sidewalk project by the borough and PennDOT. In October 1973, the section of PA 321 between East Kane and Kane and along Hacker Street in Kane was completed. An inspection of this segment by PennDOT, local officials, and E. M. Brown, Inc. officials was held prior to being opened to traffic after road signs along Hacker Street were installed. On November 4, 1974, PennDOT gave Kinzua Avenue to the borough of Kane in exchange to acquiring Hacker Street with the completion of the project along that street. On April 5, 1976, the Kane borough council voted to replace the incandescent street lights along Hacker Street with mercury vapor lights. Wilcox to Kane In October 1965, the section of PA 321 between Wilcox and north of Kane was unfunded. In March 1966, it was announced by the Pennsylvania Department of Highways that construction would begin later in the year on the section of PA 321 between the Elk County border and the Baltimore and Ohio Railroad crossing between Kane and East Kane at a cost of $1.2 million. At this time, a decision was still needed for how the route would cross the railroad tracks into Kane, with surveys to be conducted. Also, there were no plans for PA 321 to bypass Kane. The section of the route between Wilcox and the county line was still under design. In addition, it was soon determined that the Baltimore and Ohio Railroad crossing in Kane would be lowered to build the highway. The project for PA 321 between Wilcox and Kane was taken "off the shelf" by the state in February 1967. By March 1968, the section of the route that would bypass Wilcox was listed at a cost of $550,000. Contracts were scheduled to be let for PA 321 south of Kane in 1968. A public hearing took place January 27, 1970, at Kane Area Junior High School to discuss the section between Wilcox and East Kane. On February 25, 1971, it was announced that contracts were anticipated to be issued for the section between Wilcox and East Kane on June 25 but could possibly be pushed into July. By this time, property acquisition south of Kane was underway. The section of PA 321 between Wilcox and Kane had severely deteriorated at this point and was plagued with potholes. Bidding on the section of route between Wilcox and East Kane was scheduled to begin in the fall. On June 25, 1971, E. M. Brown, Inc. was the low bidder at $1,652,027 to construct the section of PA 321 between Kane and East Kane; construction was scheduled to start 30 days later and the project was to be completed by spring 1973. Traffic would be detoured between Kane and Wilcox along US 6 and US 219 while construction took place. 
The state gave the "notice to proceed" for construction of the route in Kane and East Kane on September 30, 1971; construction was planned to begin on October 11. PennDOT dropped plans to construct a bypass carrying PA 321 to the west of Wilcox on October 13, 1971, as it would have cut through Wilcox Baseball Park and a parcel of land owned by the Wilcox Area Industrial Development Corporation that was planned as a future industrial site. As a result, the improvement project for the route would be ended to the north of Wilcox. The project would use state and federal funds. The bypass issue was resolved on December 2, 1971, following community petitions for the bypass and a meeting with the Wilcox Area Industrial Development Corporation; bids were scheduled to be let on December 17. The state had reached an agreement with the Wilcox Area Industrial Development Corporation to use the land for the bypass and made an exchange with the baseball field for it to be built in a new location. This section of PA 321 was to be constructed as a road, with the US 219 intersection located south of the original intersection. The bidding for this section of PA 321 was pulled on December 17 and postponed to February or April 1972 because revisions were needed to account for transition in funding for the Wilcox bypass from the PDH to PennDOT, in which the state would do bonding and the projects would go through the Legislature as bills and be signed by the Governor. By July 1972, construction was underway on the segment of PA 321 between Wilcox and East Kane, with clearing of the right-of-way starting in East Kane and moving south. Completion of this segment was expected in 1974. During the middle part of 1972, construction was slowed by wet weather. Concrete paving for the route between Wilcox and East Kane was scheduled to begin by September 1, 1974, with the road possibly open to traffic in the fall. The paving of this section of road progressed from East Kane south to Wilcox in the course of a month. The concrete paving was completed in October 1974. In June 1975, construction was slated to be completed between Wilcox and Kane. On June 12, 1975, it was announced a ribbon cutting ceremony for PA 321 between Wilcox and Kane would take place on July 2. The section of PA 321 between Wilcox and Kane was opened at a ceremony held at 11:30 am on July 2, 1975, at Kane Area Senior High School, with PennDOT secretary Jacob Kassab cutting the ribbon. A total of 200 people were in attendance for the ceremony, including Senator Frame, Congressman Johnson, Representative Westerberg, Kane Mayor Edgar James, and other local officials. The Kane Area Senior High School marching band performed at the ceremony. Following this, a luncheon was held at a restaurant near Kane with 70 persons in attendance. Major intersections See also References External links Pennsylvania Highways – PA 321 321 Transportation in McKean County, Pennsylvania Transportation in Elk County, Pennsylvania
32726780
https://en.wikipedia.org/wiki/Cray%20XK6
Cray XK6
The Cray XK6 is an enhanced version of the Cray XE6 supercomputer made by Cray, announced in May 2011. The XK6 uses the same "blade" architecture as the XE6, with each XK6 blade comprising four compute "nodes". Each node consists of a 16-core AMD Opteron 6200 processor with 16 or 32 GB of DDR3 RAM and an Nvidia Tesla X2090 GPGPU with 6 GB of GDDR5 RAM, the two connected via PCI Express 2.0. Two Gemini router ASICs are shared between the nodes on a blade, providing a 3-dimensional torus network topology between nodes. An XK6 cabinet accommodates 24 blades (96 nodes), giving a full cabinet 576 GB of graphics memory and over 1,500 CPU cores. Each of the Tesla processors is rated at 665 double-precision gigaflops, giving 63.8 teraflops per cabinet. The XK6 is capable of scaling to 500,000 Opteron cores, giving up to 50 petaflops total hybrid peak performance. The XK6 runs the Cray Linux Environment, which incorporates SUSE Linux Enterprise Server and Cray's Compute Node Linux. The first order for an XK6 system was an upgrade of an existing XE6m at the Swiss National Supercomputing Centre (CSCS). References External links Cray XK6 press release Petascale computers X86 supercomputers
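The per-cabinet figures quoted above follow directly from the per-node numbers. The short program below is only a back-of-the-envelope check using the article's own figures (24 blades per cabinet, 4 nodes per blade, 16 cores and 6 GB of GPU memory per node, 665 double-precision gigaflops per GPU); it is a sketch of the arithmetic, not vendor data.

/* Back-of-the-envelope check of the XK6 per-cabinet figures quoted above,
 * using only the numbers given in the article. */
#include <stdio.h>

int main(void)
{
    const int blades_per_cabinet = 24;
    const int nodes_per_blade    = 4;
    const int cores_per_node     = 16;    /* Opteron 6200 */
    const double gpu_mem_gb      = 6.0;   /* Tesla X2090 GDDR5 */
    const double gpu_gflops      = 665.0; /* double-precision peak per GPU */

    int nodes = blades_per_cabinet * nodes_per_blade;                /* 96   */
    printf("nodes per cabinet      : %d\n", nodes);
    printf("CPU cores per cabinet  : %d\n", nodes * cores_per_node); /* 1536 */
    printf("GPU memory per cabinet : %.0f GB\n", nodes * gpu_mem_gb);/* 576  */
    printf("GPU peak per cabinet   : %.1f teraflops\n",
           nodes * gpu_gflops / 1000.0);                             /* 63.8 */
    return 0;
}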
13051
https://en.wikipedia.org/wiki/Gnumeric
Gnumeric
Gnumeric is a spreadsheet program that is part of the GNOME Free Software Desktop Project. Gnumeric version 1.0 was released on 31 December 2001. Gnumeric is distributed as free software under the GNU General Public License; it is intended to replace proprietary spreadsheet programs like Microsoft Excel. Gnumeric was created and developed by Miguel de Icaza, but he has since moved on to other projects. The maintainer was Jody Goldberg. Features Gnumeric has the ability to import and export data in several file formats, including CSV, Microsoft Excel (write support for the more recent .xlsx format is incomplete), Microsoft Works spreadsheets (.wks), HTML, LaTeX, Lotus 1-2-3, OpenDocument and Quattro Pro; its native format is the Gnumeric file format (.gnm or .gnumeric), an XML file compressed with gzip. It includes all of the spreadsheet functions of the North American edition of Microsoft Excel and many functions unique to Gnumeric. Pivot tables and Visual Basic for Applications macros are not yet supported. Gnumeric's accuracy has helped it to establish a niche for statistical analysis and other scientific tasks. To improve Gnumeric's accuracy, the developers cooperate with the R Project. Gnumeric's interface for creating and editing graphs differs from that of other spreadsheet software. For editing a graph, Gnumeric displays a window in which all the elements of the graph are listed. Other spreadsheet programs typically require the user to select the individual elements of the graph in the graph itself in order to edit them. Gnumeric is available in version 1.12.50. Gnumeric under Microsoft Windows Gnumeric releases were ported to Microsoft Windows until August 2014 (the latest versions were 1.10.16 and 1.12.17). Using the current version of Gnumeric on Windows is possible with MSYS2, though it requires the know-how of an experienced Linux/Unix user. After GTK+ 2.24.10 and 3.6.4, development of the Windows version was discontinued by GNOME. Creation of the Windows version was complicated by bugs in old Windows versions of GTK+. Installation of MSYS2 on Windows is a good way to use current GTK software. GTK+ 2.24.10 and 3.6.4 are available on-line. Versions of GTK for 64-bit Windows are prepared by Tom Schoonjans – current examples are 2.24.32 and 3.24.23. This could also be a new start for a native 64-bit Windows version of Gnumeric. See also EditGrid – a discontinued on-line spreadsheet which used Gnumeric as its back-end List of spreadsheet software Comparison of spreadsheet software References External links Gnumeric XML File Format Open Mag interviews Jody Goldberg on Gnumeric. Nancy Cohen, 17 February 2004; archived 2012 Linux Productivity Magazine Volume 2 Issue 6, June 2003: full issue on Introduction to Gnumeric 2001 software Business software for Linux Cross-platform software Free software programmed in C Free spreadsheet software Office software that uses GTK Plotting software
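Because the native .gnumeric format is, as noted above, gzip-compressed XML, a few lines of C using zlib are enough to peek inside a workbook file without any Gnumeric-specific library. The sketch below is illustrative only: the file name is a placeholder, and the only assumption made is the gzip-plus-XML layout described in the article; compile with something like cc peek_gnumeric.c -lz.

/* Illustrative sketch: dump the first lines of the XML inside a
 * gzip-compressed .gnumeric file using plain zlib. */
#include <stdio.h>
#include <zlib.h>

int main(int argc, char **argv)
{
    /* example.gnumeric is a hypothetical file name */
    const char *path = (argc > 1) ? argv[1] : "example.gnumeric";

    gzFile in = gzopen(path, "rb");   /* transparently gunzips on read */
    if (in == NULL) {
        fprintf(stderr, "could not open %s\n", path);
        return 1;
    }

    char line[1024];
    /* Print the first few lines; expect an XML declaration followed by
     * the workbook's element tree. */
    for (int i = 0; i < 10 && gzgets(in, line, (int)sizeof line) != NULL; i++)
        fputs(line, stdout);

    gzclose(in);
    return 0;
}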
1752246
https://en.wikipedia.org/wiki/Live%20Free%20or%20Die%20Hard
Live Free or Die Hard
Live Free or Die Hard (released as Die Hard 4.0 outside North America) is a 2007 American action-thriller film and the fourth installment in the Die Hard film series. It was directed by Len Wiseman. It is based on a 1997 article "A Farewell to Arms" written for Wired magazine by John Carlin. The film's name is adapted from New Hampshire's state motto, "Live Free or Die". In the film, NYPD Detective John McClane (Bruce Willis) attempts to stop a cyber-terrorist (Timothy Olyphant) who hacks into government and commercial computers across the United States with the goal of starting a "fire sale" that would disable key elements of the nation's infrastructure. Justin Long, Cliff Curtis, and Maggie Q also star. Live Free or Die Hard was released in the United States on June 27, 2007. The film grossed $388 million worldwide, making it the highest-grossing installment in the Die Hard series. It received generally positive reviews from critics, who called the film a return to form for the series. It is the only Die Hard film to be theatrically released with a PG-13 rating from the MPAA, although an unrated edition would later be made available on home media. A fifth film, A Good Day to Die Hard, was released in 2013. Plot Responding to a brief computer outage at its Cyber-Security Division by tracking down top computer hackers, the FBI, led by Deputy Director Miguel Bowman, asks NYPD detective John McClane to go to New Jersey to bring in hacker Matthew Farrell. McClane arrives just in time to save him from assassins sent by Mai Linh, who is Thomas Gabriel's lover and is working with him to carry out his plans. On the way to D.C., Farrell tells McClane he had written an algorithm for Linh to crack a specific security system for white hat purposes. Meanwhile, Gabriel orders his crew of hackers to take over transportation grids and the stock market, while nationally broadcasting a threatening message to the U.S. government. Farrell realizes this is the start of a "fire sale", a cyber attack designed to disable the nation's infrastructure. As McClane and Farrell are driven to DHS headquarters, Linh, posing as a dispatcher, reroutes them into a helicopter ambush. McClane fends off the attackers, destroying the helicopter and killing all but one of the terrorists. Desperate, McClane asks Farrell what Gabriel's next move would be. Farrell deduces that Gabriel's next target is the power grid, so they drive to a utility superstation in West Virginia. They find the superstation under the control of a team led by Linh. McClane kills all but Linh. After an intense, violent struggle between the two, Linh falls to her death in an elevator shaft. While Farrell is working on the hub computer to slow down the damage done, he is able to trace Gabriel and upload his picture, which he then sends to Bowman at the FBI. McClane is shocked to learn that Bowman and Gabriel once worked together at the U.S. Department of Defense. Gabriel was the chief programmer for infrastructural security. He warned the department of weaknesses that made America's network infrastructure vulnerable to cyberwarfare, but his unorthodox methods got him fired. It is now clear he is out for revenge. Enraged over Linh's death, Gabriel redirects large amounts of natural gas to the superstation to kill McClane and Farrell, but they escape. McClane and Farrell then travel by helicopter to the home of superhacker Frederick "Warlock" Kaludis (Kevin Smith) in Baltimore.
Warlock identifies the piece of code Farrell wrote for Linh as a means to access data at a Social Security Administration building at Woodlawn, Maryland. He also confirms that Gabriel is a former government employee. Doing a traceroute, Warlock locates Gabriel. The Woodlawn building is actually an NSA facility intended to back up the nation's personal and financial records in the event of a cyber attack, and was designed by Gabriel himself. The attack on the FBI triggered a download of financial data to Woodlawn, data which Gabriel plans to steal. Meanwhile, on Gabriel's orders, one of his top men kills all but one of his computer hackers after they have outlived their usefulness. Gabriel then taps into the connection they made, which reveals the location of McClane's estranged daughter Lucy (Mary Elizabeth Winstead), whom he kidnaps. McClane and Gabriel then meet, virtually, with McClane telling him he will lose. McClane and Farrell race to the Woodlawn facility. Farrell finds the facility's main server and encrypts the data Gabriel's men downloaded before he is captured. Gabriel then takes Farrell and Lucy with him as he flees. McClane pursues them, hijacking the semi trailer that serves as their mobile base. Accessing the communication system of an F-35B Lightning II, Gabriel orders the pilot to attack the truck McClane is driving, but the jet is destroyed by falling debris. McClane survives and sees Gabriel's vehicle pull into a nearby hangar. There, Gabriel demands that Farrell decrypt the financial data. When he refuses, Gabriel shoots him in the knee and threatens to kill Lucy. McClane arrives, killing two of Gabriel's men, but he is shot in the shoulder by Gabriel's last man, Emerson. Gabriel positions himself behind McClane, putting the barrel of the gun in his shoulder wound. McClane then pulls the trigger. The bullet travels through McClane's shoulder and hits Gabriel in the chest, killing him instantly. Farrell then grabs a handgun and kills Emerson as the FBI arrives. Afterwards, McClane thanks Farrell for saving the life of Lucy, who takes a romantic interest in him. Cast Bruce Willis as Detective John McClane, who works for the New York City Police Department. Justin Long as Matthew "Matt" Farrell Timothy Olyphant as Thomas Gabriel, a former Defense Department analyst who leads a group of cyber-terrorists systematically shutting down the entire U.S. infrastructure. Olyphant filmed his role within three weeks. Mary Elizabeth Winstead as Lucy Gennero-McClane, McClane's estranged daughter. The inclusion of McClane's daughter was previously considered for the third film, and she was in the video game Die Hard: Vendetta. Maggie Q as Mai Linh, Gabriel's second-in-command and lover Kevin Smith as Frederick "Warlock" Kaludis. Smith made uncredited rewrites to the scenes in which he appears. Cliff Curtis as Miguel Bowman, Deputy Director of the FBI Cyber Security Division Jonathan Sadowski as Trey, Gabriel's main hacker Edoardo Costa as Emerson, Gabriel's henchman Cyril Raffaelli as Rand, Gabriel's acrobatic henchman Yorgo Constantine as Russo, Gabriel's henchman Željko Ivanek as Agent Molina Christina Chang as Taylor Len Wiseman as the F-35B pilot (cameo) Additional characters include Gabriel's henchmen: Chris Palmero as Del, Andrew Friedman as Casper and Bryon Weiss as Robinson. Chris Ellis appears as Jack Sclavino, McClane's superior officer. Sung Kang makes an appearance as Raj, a desk officer of the FBI's cyber division.
Matt O'Leary appears as Clay, a hacker who unwittingly gives Gabriel a code, allowing his house to be destroyed. Jake McDorman plays a small role as Jim, Lucy's boyfriend. Tim Russ appears as an NSA agent. Rosemary Knower has a cameo as Mrs. Kaludis, Frederick's mother. Production Script and title The film's plot is based on an earlier script entitled WW3.com by David Marconi, screenwriter of the 1998 film Enemy of the State. Using John Carlin's Wired magazine article entitled "A Farewell to Arms", Marconi crafted a screenplay about a cyber-terrorist attack on the United States. The attack procedure is known as a "fire sale", depicting a three-stage coordinated attack on a country's transportation, telecommunications, financial, and utilities infrastructure systems. After the September 11, 2001 attacks, the project was stalled, only to be resurrected several years later and rewritten into Live Free or Die Hard by Doug Richardson and eventually by Mark Bomback. Willis said in 2005 that the film would be called Die Hard 4.0, as it revolves around computers and cyber-terrorism. IGN later reported the film was to be called Die Hard: Reset instead. 20th Century Fox later announced the title as Live Free or Die Hard and set a release date of June 29, 2007 with filming to begin in September 2006. The title is based on New Hampshire's state motto, "Live Free or Die", which is attributed to a quote from General John Stark. International trailers use the Die Hard 4.0 title, as the film was released outside North America with that title. Early into the film's DVD commentary, both Wiseman and Willis note a preference for the title Die Hard 4.0. Visual effects For the visual effects used throughout the film, actor Bruce Willis and director Len Wiseman stated that they wanted to use a limited amount of computer-generated imagery (CGI). One VFX producer said that "Len was insisting on the fact that, because we’ve got Transformers and other big CG movies coming out, this one has to feel more real. It has to be embedded in some kind of practical reality in order to give it that edge of being a Die Hard." Companies such as Digital Dimension, The Orphanage, R!ot, Pixel Magic, and Amalgamated Pixels assisted in the film's visual effects. Digital Dimension worked on 200 visual effects shots in the film, including the sequence that shows characters John McClane and Matt Farrell crouching between two cars as another car lands on top of the other cars. To achieve this effect, a crane yanked the car and threw it in the air onto the two cars that were also being pulled by cables. The shot was completed when the two characters were integrated into the footage of the car stunt after the lighting was adjusted and CGI glass and debris were added. In the same sequence, John McClane destroys a helicopter that several of Gabriel's henchman are riding in by ramming it with a car. This was accomplished by first filming one take where one of Gabriel's henchman, Rand, jumps from the helicopter, and in the next take the car is propelled into the stationary helicopter as it is hoisted by wires. The final view of the shot overlays the two takes, with added CGI for the debris and moving rotor blades. The company also assisted in adding cars for traffic collisions and masses of people for evacuations from several government buildings. The Orphanage developed a multi-level freeway interchange for use in one of the film's final scenes by creating a digital environment and a long spiral ramp that was built in front of a bluescreen. 
When a F-35 jet is chasing McClane on the freeway, a miniature model and a full-size prop were both built to assist in digitally adding the jet into the scene. The nine-foot model was constructed from November 2006 through February 2007. When the jet is shown hovering near the freeway, editors used the software 3D graphics program Maya to blur the background and create a heat ripple effect. Filming and injuries Filming for Live Free or Die Hard started in downtown Baltimore, Maryland on September 23, 2006. Eight different sets were built on a large soundstage for filming many scenes throughout the film. When recording the sound for the semi trailer used in one of the film's final scenes, 18 microphones were used to record the engine, tires, and damage to the vehicle. Post-production for the film only took 16 weeks, when it was more common for similar films to use 26 weeks. In order to prevent possible injuries and be in peak condition for the film, Willis worked out almost daily for several months prior to filming. Willis was injured on January 24, 2007 during a fight scene, when he was kicked above his right eye by a stunt double for actress Maggie Q who was wearing stiletto heels. Willis described the event as "no big deal" but when Len Wiseman inspected his injury, he noticed that the situation was much more serious than previously thought—in the DVD commentary, Wiseman indicates in inspecting the wound that he could see bone. Willis was hospitalized and received seven stitches which ran through his right eyebrow and down into the corner of his eye. Due to the film's non-linear production schedule, these stitches can accidentally be seen in the scene where McClane first delivers Farrell to Bowman. Throughout filming, between 200 and 250 stunt people were used. Bruce Willis' stunt double, Larry Rippenkroeger, was knocked unconscious when he fell from a fire escape to the pavement. Rippenkroeger suffered broken bones in his face, several broken ribs, a punctured lung, and fractures in both wrists. Due to his injuries, production was temporarily shut down. Willis personally paid the hotel bills for Rippenkroeger's parents and visited him a number of times at the hospital. Kevin Smith recalls rewriting scenes on the set of Live Free or Die Hard in his spoken word film Sold Out: A Threevening with Kevin Smith. Music Soundtrack The score for Live Free or Die Hard, written by Marco Beltrami, was released on July 2, 2007 by Varèse Sarabande (which also released the soundtracks for the first two Die Hard films), several days after the United States release of the film. This was the first film not to be scored by Michael Kamen, due to his death in 2003; Beltrami incorporates Kamen's thematic material into his score, but Kamen is not credited on the film or the album. Other songs in the film include "Rock & Roll Queen" by The Subways, "Fortunate Son" by Creedence Clearwater Revival and "I'm So Sick" by Flyleaf. Eric Lichtenfeld, reviewing from Soundtrack.net, said of the score: "The action cues, in which the entire orchestra seems percussive, flow well together." "Out of Bullets" (1:08) "Shootout" (3:41) "Leaving the Apartment" (2:08) "Dead Hackers" (1:31) "Traffic Jam" (4:13) "It's a Fire Sale" (2:57) "The Break-In" (2:28) "Farrell to D.C." (4:36) "Copter Chase" (4:41) "Blackout" (2:03) "Illegal Broadcast" (3:48) "Hurry Up!" 
(1:23) "The Power Plant" (2:01) "Landing" (2:28) "Cold Cuts" (2:00) "Break a Neck" (2:47) "Farrell Is In" (4:22) "The F-35" (4:13) "Aftermath" (3:12) "Live Free or Die Hard" (2:56) Release Rating In the United States, the first three films in the Die Hard series were rated R by the Motion Picture Association of America. Live Free or Die Hard, however, was edited to obtain a PG-13 rating. In some cases, alternate profanity-free dialogue was shot and used or swearing was cut out in post-production to reduce profanity. Director Len Wiseman commented on the rating, saying "It was about three months into it [production], and I hadn't even heard that it was PG-13... But in the end, it was just trying to make the best Die Hard movie, not really thinking so much about what the rating would be." Bruce Willis was upset with the studio's decision, stating, "I really wanted this one to live up to the promise of the first one, which I always thought was the only really good one. That's a studio decision that is becoming more and more common, because they’re trying to reach a broader audience. It seems almost a courageous move to give a picture an R rating these days. But we still made a pretty hardcore, smashmouth film." Willis said he thought that viewers unaware that it was not an R-rated film would not suspect so due to the level and intensity of the action as well as the usage of some profanity, although he admitted these elements were less intense than in the previous films. He also said that this film was the best of the four: "It's unbelievable. I just saw it last week. I personally think, it's better than the first one." In the United Kingdom, the British Board of Film Classification awarded the film a 15 rating (including the unrated version, released later), the same rating as Die Hard with a Vengeance and Die Hard 2, albeit both were cut for both theatrical and video release, (The first film in the series originally received an 18 certificate). All films have been re-rated 15 uncut. Die Hard 4.0 was released with no cuts made and the cinema version (i.e.,the U.S. PG-13 version) consumer advice read that it "contains frequent action violence and one use of strong language". The unrated version was released on DVD as the "Ultimate Action Edition" with the consumer advice "contains strong language and violence". In Australia, Die Hard 4.0 was released with the PG-13 cut with an M rating, the same as the others in the series (The Australian Classification Board is less strict with regards to language and to a lesser extent, violence). The unrated version was later released on DVD and Blu-ray also with an M rating. The film, notably never released in home media with its theatrical cut, has only been released in Australia as the extended edition. Home media The Blu-ray and DVD were released on October 29, 2007, in the United Kingdom, on October 31 in Hungary, November 20 in the United States, and December 12 in Australia. The DVD topped rental and sales charts in its opening week of release in the U.S. and Canada. There is an unrated version, which retains much of the original 'R-rated' dialogue, and a theatrical version of the film. However, the unrated version has a branching error that resulted in one of the unrated changes being omitted. The film briefly switches to the PG-13 version in the airbag scene; McClane's strong language is missing from this sequence (although international DVD releases of the unrated version are unaffected). 
The Blu-ray release features the PG-13 theatrical cut which runs at 128 minutes, while the Collector's Edition DVD includes both the unrated and theatrical versions. Time magazine's Richard Corliss named it one of the Top 10 DVDs of 2007, ranking it at No. 10. In 2015, the movie was featured in the "Die Hard: Nakatomi Plaza" boxed set, which featured the unrated cut of the film on Blu-Ray for the first time in the US. In 2017, the movie was included in the "Die Hard Collection" Blu-ray set with all 5 films in it. Though unlike the DVD, the Blu-ray doesn't contain the branching error during the airbag scene. The DVD for the film was the first to include a Digital Copy of the film which could be played on a PC or Mac computer and could also be imported into several models of portable video players. Mike Dunn, a president for 20th Century Fox, stated "The industry has sold nearly 12 billion DVDs to date, and the release of Live Free or Die Hard is the first one that allows consumers to move their content to other devices." Reception Box office Live Free or Die Hard debuted at No. 2 behind Ratatouille, at the U.S. box office and made $9.1 million in its first day of release in 3,172 theaters, the best opening day take of any film in the Die Hard series (not taking inflation into account). On its opening weekend Live Free or Die Hard made $33.3 million ($48.3 million counting Wednesday and Thursday). The film made $134.5 million domestically, and $249.0 million overseas for a total of $383.5 million, making it the twelfth highest-grossing film of 2007. As of 2011, it is the most successful film in the series. Critical response On Rotten Tomatoes, the film has an approval rating of 82% based on 210 reviews, and an average rating of 6.8/10. The site's critical consensus reads, "Live Free or Die Hard may be preposterous, but it's an efficient, action-packed summer popcorn flick with thrilling stunts and a commanding performance by Bruce Willis. Fans of the previous Die Hard films will not be disappointed." On Metacritic, the film has a weighted average score of 69 out of 100, based on 34 critics, indicating "generally favorable reviews". Audiences polled by CinemaScore gave the film an average grade of "A–" on an A+ to F scale. IGN stated, "Like the recent Rocky Balboa, this new Die Hard works as both its own story about an over-the-hill but still vital hero and as a nostalgia trip for those who grew up with the original films." On the television show Ebert & Roeper, film critic Richard Roeper and guest critic Katherine Tulich gave the film "two thumbs up", with Roeper stating that the film is "not the best or most exciting Die Hard, but it is a lot of fun" and that it is his favorite among the Die Hard sequels. Roeper also remarked, "Willis is in top form in his career-defining role." Michael Medved gave the film three and a half out of four stars, opining, "a smart script and spectacular special effects make this the best Die Hard of 'em all." Conversely, Lawrence Toppman of The Charlotte Observer stated: "I can safely say I've never seen anything as ridiculous as Live Free or Die Hard." Toppman also wrote that the film had a lack of memorable villains and referred to John McClane as "just a bald Terminator with better one-liners". 
References External links Live Free or Die Hard soundtrack information at the Soundtrackinfo.com "A Farewell to Arms", Wired article on which the film's script was based 2007 films 2007 action thriller films 2000s buddy cop films 20th Century Fox films American action thriller films American buddy cop films American films American sequel films Die Hard Dune Entertainment films 2000s English-language films Independence Day (United States) films Films scored by Marco Beltrami Films about computing Films about terrorism Films based on multiple works Films based on newspaper and magazine articles Films directed by Len Wiseman Films set in Maryland Films set in Washington, D.C. Films set in New York City Films set in the United States Films shot in Los Angeles Films shot in Baltimore Films shot in New Jersey Films shot in Washington, D.C. Films with screenplays by Mark Bomback Malware in fiction
2028143
https://en.wikipedia.org/wiki/Computer-supported%20collaboration
Computer-supported collaboration
Computer-supported collaboration research focuses on technology that affects groups, organizations, communities and societies, e.g., voice mail and text chat. It grew from cooperative work study of supporting people's work activities and working relationships. As net technology increasingly supported a wide range of recreational and social activities, consumer markets expanded the user base, enabling more and more people to connect online to create what researchers have called a computer supported cooperative work, which includes "all contexts in which technology is used to mediate human activities such as communication, coordination, cooperation, competition, entertainment, games, art, and music" (from CSCW 2004). Scope of the field Focused on output The subfield computer-mediated communication deals specifically with how humans use "computers" (or digital media) to form, support and maintain relationships with others (social uses), regulate information flow (instructional uses), and make decisions (including major financial and political ones). It does not focus on common work products or other "collaboration" but rather on "meeting" itself, and on trust. By contrast, CSC is focused on the output from, rather than the character or emotional consequences of, meetings or relationships, reflecting the difference between "communication" and "collaboration". Focused on contracts and rendezvous Unlike communication research, which focuses on trust, or computer science, which focuses on truth and logic, CSC focuses on cooperation and collaboration and decision making theory, which are more concerned with rendezvous and contract. For instance, auctions and market systems, which rely on bid and ask relationships, are studied as part of CSC but not usually as part of communication. The term CSC emerged in the 1990s to replace the following terms: workgroup computing, which emphasizes technology over the work being supported and seems to restrict inquiry to small organizational units. groupware, which became a commercial buzzword and was used to describe popular commercial products such as Lotus Notes. Check here for a comprehensive literature review. computer supported cooperative work, which is the name of a conference and which seems only to address research into experimental systems and the nature of workplaces and organizations doing "work", as opposed, say, to play or war. Collaboration is not a software Two different types of software are sometimes differentiated: social software, which produces social ties as its primary output, e.g., a social network service collaborative software, which produces a collaborative deliverable, e.g., an online collaborative encyclopedia like Wikipedia. Base technologies such as netnews, email, chat and wikis could be described as "social", "collaborative" or both or neither. Those who say "social" seem to focus on so-called "virtual community" while those who say "collaborative" seem to be more concerned with content management and the actual output. While software may be designed to achieve closer social ties or specific deliverables, it is hard to support collaboration without also enabling relationships to form, and hard to support a social interaction without some kind of shared co-authored works. May include games Accordingly, the differentiation between social and collaborative software may also be stated as that between "play" and "work". 
Some theorists hold that a play ethic should apply, and that work must become more game-like or play-like in order to make using computers a more comfortable experience. The study of MUDs and MMORPGs in the 1980s and 1990s led many to this conclusion, which is no longer controversial. True multi-player computer games can be considered a simple form of collaboration, but only a few theorists include this as part of CSC. Not just about "computing" The relatively new areas of evolutionary computing, massively parallel algorithms, and even "artificial life" explore the solution of problems by the evolving interaction of large numbers of small actors, or agents, or decision-makers who interact in a largely unconstrained fashion. The "side effect" of the interaction may be a solution of interest, such as a new sorting algorithm; or there may be a permanent residual of the interaction, such as the setting of weights in a neural network that has now been "tuned" or "trained" to repeatedly solve a specific problem, such as making a decision about granting credit to a person, or distinguishing a diseased plant from a healthy one. Connectionism is the study of systems in which the learning is stored in the linkages, or connections, not in what is normally thought of as content. This expands the definition of "computing", such that it is not just the data, or the metadata, or the context of the data, but the computer itself which is being "processed". Requires protocols Communication essential to the collaboration, or disruptive of it, is studied in CSC proper. It is hard to draw a line between a well-defined process and general human communication. Reflecting desired organizational protocols, business processes and governance norms directly, so that regulated communication (the collaboration) can be told apart from free-form interactions, is important to collaboration research, if only to know where to stop the study of work and start the study of people. The subfield CMC, or computer-mediated communication, deals with human relationships. Basic tasks Tasks undertaken in this field resemble those of any social science, but with a special focus on systems integration and groups. Less ambitiously, specific CSC fields are often studied under their own names with no reference to the more general field of study, focusing instead on the technology with only minimal attention to the collaboration implied, e.g. video games and videoconferences. Since some specialized devices exist for games or conferences that do not include all of the usual boot image capabilities of a true "computer", studying these separately may be justified. There is also separate study of e-learning, e-government, e-democracy and telemedicine. The subfield telework also often stands alone. History Early research The development of this field reaches back to the late 1960s and the visionary assertions of Ted Nelson, Douglas Engelbart, Alan Kay, Glenn Gould, Nicholas Negroponte and others who saw the potential for digital media to ultimately redefine how people work. A very early thinker, Vannevar Bush, had already suggested such possibilities in his 1945 essay "As We May Think". Numbers The inventor of the computer "mouse", Douglas Engelbart, studied collaborative software (especially revision control in computer-aided software engineering and the way a graphical user interface could enable interpersonal communication) in the 1960s.
Alan Kay worked on Smalltalk, which embodied these principles, in the 1970s, and by the 1980s it was well regarded and considered to represent the future of user interfaces. However, at this time, collaboration capabilities were limited. As few computers had even local area networks, and processors were slow and expensive, the idea of using them simply to accelerate and "augment" human communication was eccentric in many situations. Computers processed numbers, not text, and the collaboration was in general devoted only to better and more accurate handling of numbers. Text This began to change in the 1980s with the rise of personal computers, modems and more general use of the Internet for non-academic purposes. People were clearly collaborating online with all sorts of motives, but using a small suite of tools (LISTSERV, netnews, IRC, MUD) to support all of those motives. Research at this time focused on textual communication, as there was little or no exchange of audio and video representations. Some researchers, such as Brenda Laurel, emphasized how similar online dialogue was to a play, and applied Aristotle's model of drama to their analysis of computers for collaboration. Another major focus was hypertext—in its pre-HTML, pre-WWW form, focused more on links and semantic web applications than on graphics. Such systems as Superbook, NoteCards, KMS and the much simpler HyperTies and HyperCard were early examples of collaborative software used for e-learning. Audio In the 1990s, the rise of broadband networks and the dotcom boom presented the internet as mass media to a whole generation. By the late 1990s, VoIP and net phones and chat had emerged. For the first time, people used computers primarily as communications, not "computing" devices. This, however, had long been anticipated, predicted, and studied by experts in the field. Video collaboration is not usually studied. Online videoconferencing and webcams have been studied in small scale use for decades but since people simply do not have built-in facilities to create video together directly, they are properly a communication, not collaboration, concern. Pioneers Other pioneers in the field included Ted Nelson, Austin Henderson, Kjeld Schmidt, Lucy Suchman, Sara Bly, Randy Farmer, and many "economists, social psychologists, anthropologists, organizational theorists, educators, and anyone else who can shed light on group activity." - Grudin. Politics and business In this century, the focus has shifted to sociology, political science, management science and other business disciplines. This reflects the use of the net in politics and business and even other high-stakes collaboration situations, such as war. War Though it is not studied at the ACM conferences, military use of collaborative software has been a very major impetus of work on maps and data fusion, used in military intelligence. A number of conferences and journals are concerned primarily with the military use of digital media and the security implications thereof. Current research Current research in computer-supported collaboration includes: Speech recognition Early researchers, such as Bill Buxton, had focused on non-voice gestures (like humming or whistling) as a way to communicate with the machine while not interfering too directly with speech directed at a person. Some researchers believed voice as command interfaces were bad for this reason, because they encouraged speaking as if to a "slave". Link semantics HTML supports simple link types with the REL tag and REV tag. 
Some standards for using these on the WWW were proposed, most notably in 1994, by people very familiar with earlier work in SGML. However, no such scheme has ever been adopted by a large number of web users, and the "semantic web" remains unrealized. Attempts such as crit.org have sometimes collapsed totally. Identity and privacy Who am I, online? Can an account be assumed to be the same as a person's real-life identity? Should I have rights to continue any relationship I start through a service, even if I'm not using it any longer? Who owns information about the user? What about others (not the user) who are affected by information revealed or learned by me? Online identity and privacy concerns, especially identity theft, have grown to dominate the CSCW agenda in more recent years. The separate Computers, Freedom and Privacy conferences deal with larger social questions, but basic concerns that apply to systems and work process design tend still to be discussed as part of CSC research. Online decision making Where decisions are made based exclusively or mostly on information received or exchanged online, how do people rendezvous to signal their trust in it, and willingness to make major decisions on it? Team consensus decision making in software engineering, and the role of revision control, revert, reputation and other functions, has always been a major focus of CSC: There is no software without someone writing it. Presumably, those who do write it must understand something about collaboration in their own team. This design and code, however, is only one form of collaborative content. Collaborative content What are the most efficient and effective ways to share information? Can creative networks form through online meeting/work systems? Can people have equal power relationships in building content? By the late 1990s, with the rise of wikis (a simple repository and data dictionary that was easy for the public to use), the way consensus applied to joint editing, meeting agendas and so on had become a major concern. Different wikis adopted different social and pseudopolitical structures to combat the problems caused by conflicting points of view and differing opinions on content. Workflow How can work be made simpler, less prone to error, easier to learn? What role do diagrams and notations play in improving work output? What words do workers come to work already understanding, what do they misunderstand, and how can they use the same words to mean the same thing? Study of content management, enterprise taxonomy and the other core instructional capital of the learning organization has become increasingly important due to ISO standards and the use of continuous improvement methods. Natural language and application commands tend to converge over time, becoming reflexive user interfaces. Telework and human capital management The role of social network analysis and outsourcing services like e-lance, especially when combined in services like LinkedIn, is of particular concern in human capital management—again, especially in the software industry, where it is becoming more and more normal to run 24x7 globally distributed shops. Computer-supported collaboration on Art The romanticized notion of a lone, genius artist has existed since the time of Giorgio Vasari’s Lives of the Artists, published in 1568. Vasari promulgated the idea that artistic skill was endowed upon chosen individuals by gods, which created an enduring and largely false popular misunderstanding of many artistic processes. 
Artists have used collaboration to complete large scale works for centuries, but the myth of the lone artist was not widely questioned until the 1960s and 1970s. With the appearance of computers, and especially with the invention of the internet, collaboration on art became easier than before. This crowd-sourced creativity online is putting a "new twist" on traditional ideas of artistic ownership, online communication and art production. In some cases, people don't even know they are making contributions to online art. Artists in the computer era are considered more "socially aware" in a way that supports social collaboration on social matters. Art duos, such as the Italian Hackatao duo, collaborate both physically and online while creating their art in order to "create a meeting place between the NFT and traditional art worlds." Crowdsourcing aids with innovation processes, successful implementation and maintenance of ideas generation, thereby providing support for the development of promising innovative ideas. Crowdsourcing has been used in various ways from rousing musical numbers, to choreography, set design, costumes and marketing materials and in some cases was crowdsourced using social media platforms. Related fields Related fields are collaborative product development, CAD/CAM, computer-aided software engineering (CASE), concurrent engineering, workflow management, distance learning, telemedicine, medical CSCW and the real-time network conferences called MUDs (after "multi-user dungeons," although they are now used for more than game-playing). See also Citizen science Collaborative information seeking Collaborative work systems Collaborative development environment Computer-supported collaborative learning Integrated collaboration environment List of collaborative software List of project management software Mass collaboration Toolkits for User Innovation Wicked problem References External links SPARC - Space Physics and Aeronomy Research Collaboratory. Science Of Collaboratories - Science of Collaboratories Project Home, with links to over 100 specific collaboratories Paul Resnick - Professor Paul Resnick's home page ( papers on SocioTechnical Capital, reputation systems, ride share coordination services, recommender systems, collaborative filtering, social filtering). Reticula - Weblogs, Wikis, and Public Health Today. News, professional activities, and academic research. US National Health Information Network News about and links into the US NHIN and efforts to build a nationwide virtual electronic health record to support and facilitate electronic collaboration between clinicians, hospitals, patients, social work, and public health. Political Blogosphere - The Political Blogosphere and the 2004 U.S. Election: Divided They Blog, Adamic L. and Glance N., HP Labs, 2005. ("In this paper, we study the linking patterns and discussion topics of political bloggers. Our aim is to measure the degree of interaction between liberal and conservative blogs, and to uncover any differences in the structure of the two communities.") - CSCW and Groupware Literature Guide: Randy's Reviews, Recommendations, and (Optional) Referrals. Collaboration Computing and society Computer-mediated communication
37080011
https://en.wikipedia.org/wiki/Mauro%20Pezz%C3%A8
Mauro Pezzè
Mauro Pezzè is a professor in the Faculty of Informatics at the Università della Svizzera italiana (USI), Switzerland, where he was Dean of the Faculty of Informatics from 2009 to 2012. He is also a professor of software engineering at the Università degli Studi di Milano-Bicocca. He has been co-chair of the International Conference on Software Engineering. Pezzè is the co-author of Software Testing and Analysis: Process, Principles, and Techniques, published by Wiley in 2007. Since 2019 he has been a professor of software engineering at the Schaffhausen Institute of Technology. His research interests are mainly software redundancy, self-healing and self-adaptive software systems. References External links http://www.inf.usi.ch/faculty/pezze/ Year of birth missing (living people) Living people University of Lugano faculty University of Milano-Bicocca faculty
1775746
https://en.wikipedia.org/wiki/Project%20Monterey
Project Monterey
Project Monterey was an attempt to build a single Unix operating system that ran across a variety of 32-bit and 64-bit platforms, as well as supporting multi-processing. Announced in October 1998, the project involved several Unix vendors: IBM provided POWER and PowerPC support from AIX, Santa Cruz Operation (SCO) provided IA-32 support, and Sequent added multi-processing (MP) support from their DYNIX/ptx system. Intel Corporation provided expertise and ISV development funding for porting to its upcoming IA-64 (Itanium architecture) CPU platform, which had not yet been released at that time. The focus of the project was to create an enterprise-class UNIX for IA-64, which at the time was expected to eventually dominate the UNIX server market. By March 2001, however, "the explosion in popularity of Linux ... prompted IBM to quietly ditch" this; all involved attempted to find a niche in the rapidly developing Linux market and moved their focus away from Monterey. Sequent was acquired by IBM in 1999. In 2000, SCO's UNIX business was purchased by Caldera Systems, a Linux distributor, which later renamed itself the SCO Group. In the same year, IBM eventually declared Monterey dead. Intel, IBM, Caldera Systems, and others had also been running a parallel effort to port Linux to IA-64, Project Trillian, which delivered workable code in February 2000. In late 2000, IBM announced a major effort to support Linux. In May 2001, the project announced the availability of a beta test version of AIX-5L for IA-64, basically meeting its original primary goal. However, Intel had missed its delivery date for its first Itanium processor by two years, and the Monterey software had no market. With the exception of the IA-64 port and DYNIX MP improvements, much of the Monterey effort was an attempt to standardize existing versions of Unix into a single compatible system. Such efforts had been undertaken in the past (e.g., 3DA) and had generally failed, as the companies involved were too reliant on vendor lock-in to fully support a standard that would allow their customers to leave for other products. With Monterey, two of the key partners already had a niche they expected to continue to serve in the future: POWER and IA-64 for IBM, IA-32 and IA-64 for SCO. The breakdown of Project Monterey was one of the factors leading to a lawsuit in 2003, in which the SCO Group sued IBM over its contributions to Linux. IBM sold only 32 Monterey licenses in 2001, and fewer in 2002. References External links Parallel computing Collaborative projects Unix variants Unix history Discontinued operating systems IBM operating systems Power ISA operating systems Computer-related introductions in 1998
34159624
https://en.wikipedia.org/wiki/Multiple%20Console%20Time%20Sharing%20System
Multiple Console Time Sharing System
The Multiple Console Time Sharing System (MCTS) was an operating system developed by General Motors Research Laboratories in the 1970s for the Control Data Corporation STAR-100 supercomputer. MCTS was built to support GM's computer-aided design (CAD) applications. MCTS was based on Multics. See also GM-NAA I/O SHARE Operating System Timeline of operating systems References Further reading F.N. Krull, "The origin of computer graphics within General Motors", Annals of the History of Computing, IEEE, Volume 16 Issue 3, (Fall 1994) pp.40 Discontinued operating systems Multics-like Proprietary operating systems Time-sharing operating systems Supercomputer operating systems
6227779
https://en.wikipedia.org/wiki/Hardware%20keylogger
Hardware keylogger
Hardware keyloggers are used for keystroke logging, a method of capturing and recording computer users' keystrokes, including sensitive passwords. They can be implemented via BIOS-level firmware, or alternatively, via a device plugged inline between a computer keyboard and a computer. They log all keyboard activity to their internal memory. Description Hardware keyloggers have an advantage over software keyloggers as they can begin logging from the moment a computer is turned on (and are therefore able to intercept passwords for the BIOS or disk encryption software). All hardware keylogger devices have to have the following: A microcontroller - this interprets the datastream between the keyboard and computer, processes it, and passes it to the non-volatile memory. A non-volatile memory device, such as flash memory - this stores the recorded data, retaining it even when power is lost. Generally, recorded data is retrieved by typing a special password into a computer text editor. The hardware keylogger plugged in between the keyboard and computer detects that the password has been typed and then presents the computer with "typed" data to produce a menu. Beyond the text menu, some keyloggers offer a high-speed download to speed up retrieval of stored data; this can be via USB mass-storage enumeration or with a USB or serial download adapter. Typically the memory capacity of a hardware keylogger may range from a few kilobytes to several gigabytes, with each recorded keystroke typically consuming a byte of memory (an illustrative capacity estimate appears at the end of this article). Types of hardware keyloggers A regular hardware keylogger is used for keystroke logging by means of a hardware circuit that is attached somewhere in between the computer keyboard and the computer. It logs all keyboard activity to its internal memory, which can be accessed by typing in a series of pre-defined characters. A hardware keylogger has an advantage over a software solution: because it is not dependent on the computer's operating system, it will not interfere with any program running on the target machine and hence cannot be detected by any software. They are typically designed to have an innocuous appearance that blends in with the rest of the cabling or hardware, such as appearing to be an EMC balun. They can also be installed inside a keyboard itself (as a circuit attachment or modification), or the keyboard could be manufactured with this "feature". They are designed to work with legacy PS/2 keyboards, or more recently, with USB keyboards. Some variants, known as wireless hardware keyloggers, have the ability to be controlled and monitored remotely by means of a wireless communication standard. Wireless keylogger sniffers - collect packets of data being transferred between a wireless keyboard and its receiver and then attempt to crack the encryption key being used to secure wireless communications between the two devices. Firmware - a computer's BIOS, which is typically responsible for handling keyboard events, can be reprogrammed so that it records keystrokes as it processes them. Keyboard overlays - a fake keypad is placed over the real one so that any keys pressed are registered by both the eavesdropping device and the legitimate one that the customer is using. Key commands - many legitimate programs are driven by key commands, and logging these keystrokes reveals when a specific command is being used. Countermeasures Denial or monitoring of physical access to sensitive computers, e.g.
by closed-circuit video surveillance and access control, is the most effective means of preventing hardware keylogger installation. Visual inspection is the easiest way of detecting hardware keyloggers. But there are also some techniques that can be used for most hardware keyloggers on the market, to detect them via software. In cases in which the computer case is hidden from view (e.g. at some public access kiosks where the case is in a locked box and only a monitor, keyboard, and mouse are exposed to view) and the user has no possibility to run software checks, a user might thwart a keylogger by typing part of a password, using the mouse to move to a text editor or other window, typing some garbage text, mousing back to the password window, typing the next part of the password, etc. so that the keylogger will record an unintelligible mix of garbage and password text. General keystroke logging countermeasures are also options against hardware keyloggers. The main risk associated with keylogger use is that physical access is needed twice: initially to install the keylogger, and secondly to retrieve it. Thus, if the victim discovers the keylogger, they can then set up a sting operation to catch the person in the act of retrieving it. This could include camera surveillance or the review of access card swipe records to determine who gained physical access to the area during the time period that the keylogger was removed. References External links Open Source PS/2 Hardware Keylogger Presentation about detection of hardware keyloggers Cryptographic attacks Surveillance
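The one-byte-per-keystroke figure quoted above makes capacity easy to reason about. The following is a minimal, illustrative C sketch, not taken from any product; the example capacities and the rule of thumb of roughly 2,000 keystrokes per typed page are assumptions chosen only for illustration.

#include <stdio.h>

int main(void) {
    /* Illustrative capacities only; real devices vary widely. */
    const char *labels[] = { "64 KB", "2 MB", "4 GB" };
    const unsigned long long bytes[] = {
        64ULL * 1024,                 /* a few kilobytes   */
        2ULL * 1024 * 1024,           /* a couple of MB    */
        4ULL * 1024 * 1024 * 1024     /* several gigabytes */
    };
    for (int i = 0; i < 3; i++) {
        /* Assumption from the article: about one byte per recorded keystroke. */
        unsigned long long keystrokes = bytes[i];
        /* Assumed rule of thumb: roughly 2,000 keystrokes per typed page. */
        unsigned long long pages = keystrokes / 2000ULL;
        printf("%s holds about %llu keystrokes (roughly %llu pages of text)\n",
               labels[i], keystrokes, pages);
    }
    return 0;
}

Even a device with only 64 KB of memory can therefore hold tens of thousands of keystrokes, consistent with the capacity range described above.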
18371247
https://en.wikipedia.org/wiki/WinUSB
WinUSB
WinUSB is a generic USB driver provided by Microsoft for its operating systems starting with Windows Vista; it is also available for Windows XP. It is aimed at simple devices that are accessed by only one application at a time (for example, instruments such as weather stations, or devices that only need a diagnostic connection or firmware upgrades). It enables the application to directly access the device through a simple software library. The library provides access to the pipes of the device. WinUSB exposes a client API that enables developers to work with USB devices from user mode (an illustrative sketch appears at the end of this article). Starting with Windows 7, USB MTP devices use WinUSB instead of the kernel-mode filter driver. Advantages and disadvantages Advantages Does not require knowledge of how to write a driver Speeds up development Disadvantages Only one application can access the device at a time Does not support isochronous transfers prior to Windows 8.1 Does not support USB reset (as requested by the DFU protocol, for example) On other operating systems, the device still needs a custom driver WCID A WCID device, where WCID stands for "Windows Compatible ID", is a USB device that provides extra information to a Windows system, in order to facilitate automated driver installation and, in most circumstances, allow immediate access. WCID allows a device to be used by a Windows application almost as soon as it is plugged in, as opposed to the usual scenario where a USB device that is neither HID nor Mass Storage requires end users to perform a manual driver installation. As such, WCID can bring the 'Plug-and-Play' functionality of HID and Mass Storage to any USB device (that has WCID-aware firmware). WCID is an extension of the WinUSB device functionality. Other solutions One solution is the use of a predefined USB device class. Operating systems provide built-in drivers for some of them. The most widely used device class for embedded devices is the USB communications device class (CDC). A CDC device can appear as a virtual serial port to simplify the use of a new device for older applications. Another solution is UsbDk. UsbDk supports all device types, including isochronous, and provides a simpler way to acquire device access that does not involve creating and installing INF files. UsbDk is open source, community supported and works on all Windows versions starting from Windows XP. If the previous solutions are inappropriate, one can write a custom driver. For newer versions of Microsoft Windows, this can be done using the Windows Driver Foundation. References The newest version can be found at USB.org Windows components
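As a concrete illustration of the client API mentioned above, the following minimal C sketch opens a WinUSB device and reads from a bulk IN pipe. It is a sketch only: the device path string and the 0x81 pipe ID are placeholder assumptions (a real application discovers both, typically via SetupAPI enumeration and WinUsb_QueryPipe), and error handling is reduced to early returns.

#include <windows.h>
#include <winusb.h>
#include <stdio.h>
/* Link against winusb.lib; device path discovery (omitted here) normally uses SetupAPI. */

int main(void) {
    /* Placeholder path: a real application enumerates the device interface GUID
       registered for the device instead of hard-coding a string. */
    const char *devicePath = "\\\\?\\usb#vid_xxxx&pid_xxxx#serial#{interface-guid}";

    HANDLE dev = CreateFileA(devicePath, GENERIC_READ | GENERIC_WRITE,
                             FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                             OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (dev == INVALID_HANDLE_VALUE) return 1;

    WINUSB_INTERFACE_HANDLE usb;
    if (!WinUsb_Initialize(dev, &usb)) { CloseHandle(dev); return 1; }

    /* 0x81 is an assumed bulk IN endpoint; the real pipe ID comes from the
       device's interface descriptor (WinUsb_QueryPipe). */
    UCHAR buffer[64];
    ULONG transferred = 0;
    if (WinUsb_ReadPipe(usb, 0x81, buffer, sizeof(buffer), &transferred, NULL))
        printf("Read %lu bytes from the device\n", transferred);

    WinUsb_Free(usb);
    CloseHandle(dev);
    return 0;
}

Because only a single handle is opened on the device, the sketch also reflects the limitation listed above: one application can access the device at a time.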
1024742
https://en.wikipedia.org/wiki/D-Link
D-Link
D-Link Corporation is a Taiwanese multinational networking equipment manufacturing corporation headquartered in Taipei, Taiwan. It was founded in March 1986 in Taipei as Datex Systems Inc. History D-Link Corporation changed its name from Datex Systems Inc. in 1994, when it went public and became the first networking company on the Taiwan Stock Exchange. It is now publicly traded on the TSEC and NSE stock exchanges. In 1988, Datex Systems Inc. released the industry's first peer-to-peer network operating system, LANSmart, which was able to run concurrently with early networking systems such as Novell's NetWare and TCP/IP, a capability most small network operating systems lacked at the time. In 2007, it was the leading networking company in the small to medium business (SMB) segment worldwide, with 21.9% market share. In March 2008, it became the market leader in Wi-Fi product shipments worldwide, with 33% of the total market. In 2007, the company was featured in the "Info Tech 100", a listing of the world's best IT companies. It was also ranked as the 9th best IT company in the world for shareholder returns by BusinessWeek. In the same year, D-Link released one of the first Wi-Fi Certified 802.11n draft 2.0 routers (the DIR-655), which subsequently became one of the most successful draft 802.11n routers. In 2013, D-Link released its flagship draft 802.11ac Wireless AC1750 Dual-Band Router (DIR-868L), which, as of May 2013, had attained the fastest wireless throughput ever tested by SmallNetBuilder. In April 2019, D-Link was named "Gartner Peer Insights Customers' Choice for Wired and Wireless LAN Access Infrastructure". In 2022, the company continued its practice of selling home cameras that could not connect to its own application unless buyers submitted their personal information (address, full name, phone number, and so on), under the pretense that all of that information was needed to process their request. Products and services/solutions D-Link's products are geared towards the networking and communications market. Its business products include switches, surveillance network cameras, firewalls, iSCSI SANs and business wireless, while consumer products cover consumer wireless devices, broadband devices, and Digital Home devices (which include media players, storage, and surveillance cameras/NVRs). Examples of D-Link products Awards and recognition 2020 CES Innovation Award - DWP-1020 5G NR Outdoor Unit - DWR-2101 5G Wi-Fi 6 Mobile Hotspot 2020 iF Design Award - DCS-8526LH Full HD Pan & Tilt Pro Wi-Fi Camera 2020 IoT Breakthrough Award - COVR-1902 AC1900 Whole Home Mesh Wi-Fi System 2020 Taiwan Excellence Award - DWR-2010 5G NR Enhanced Gateway 2019 CES Innovation Award - DWR-2010 5G NR Enhanced Gateway 2019 Taiwan Excellence Award - DCS-8650LH - DWR-976 - DIR-2680 - DIR-3060 - COVR-C1203 - COVR-2202 - DCS-1820LM - DCH-G601W Media reviews and awards D-Link DCS-9500T received the 2020 Editor's Choice Award from Technology Reseller D-Link Nuclias won Best Remote Connectivity Solutions at the 2020 Transformational Leadership Awards by Tahawultech DCS-8630LH: "What I really liked about this D-Link outdoor camera was the ease of use… If you value peace of mind, this camera is for you." (Canada Best Buy) COVR-C1203: "D-Link's Covr Dual Band Whole Home Wi-Fi System is stylish, simple to install, and delivers solid throughput speeds." (U.S.
PCMag) Certifications Quality Management Systems Certifications ISO 9001 ISO 14001 ISO 14064-1 Greenhouse Gas Verification Statement Product Security Certification IEC 62443-4-1 (2020 Q4) Information Security Management System ISO 27001 D-Link has completed an independent third-party assessment of its software development and software security processes in line with the Building Security in Maturity Model (BSIMM). Through BSIMM, D-Link continues to improve its software security management in: web-based applications, mobile device applications, and firmware communicating with hardware. Controversies In January 2010, it was reported that HNAP vulnerabilities had been found on some D-Link routers. D-Link was also criticized for its response, which was deemed confusing as to which models were affected and which downplayed the seriousness of the risk. However, the company issued fixes for these router vulnerabilities soon after. In January 2013, firmware version v1.13 for the DIR-100 revA was reported to include a backdoor. By passing a specific user agent in an HTTP request to the router, normal authentication is bypassed. It was reported that this backdoor had been present for some time. This backdoor, however, was closed soon after with a security patch issued by the company. Computerworld reported in January 2015 that ZynOS, a firmware used by some D-Link routers (as well as ZTE, TP-Link, and others), is vulnerable to DNS hijacking by an unauthenticated remote attacker, specifically when remote management is enabled. Affected models had already been phased out by the time the vulnerability was discovered, and the company also issued a firmware patch for those still using the older hardware. Later in 2015, it was reported that D-Link leaked the private keys used to sign firmware updates for the DCS-5020L security camera and a variety of other D-Link products. The key expired in September 2015, but had been published online for seven months. The initial investigation did not produce any evidence that the certificates were abused. Also in 2015, D-Link was criticized for more HNAP vulnerabilities and, worse, for introducing new vulnerabilities in its "fixed" firmware updates. On 5 January 2017, the Federal Trade Commission sued D-Link for failing to take reasonable steps to secure its routers and IP cameras, and for marketing that misled customers into believing its products were secure. The complaint also said security gaps could allow hackers to watch and record people on their D-Link cameras without their knowledge, target them for theft, or record private conversations. D-Link denied these accusations and enlisted the Cause of Action Institute to file a motion against the FTC over its "baseless" charges. On 2 July 2019, the case was settled, with D-Link not found to be liable for any of the alleged violations. D-Link agreed to continue to make security enhancements in its software security program and software development, with biennial, independent, third-party assessments approved by the FTC. On January 18, 2021, Sven Krewitt, a researcher at Risk Based Security, discovered multiple pre-authentication vulnerabilities in D-Link's DAP-2020 Wireless N Access Point product. D-Link confirmed these vulnerabilities in a support announcement and provided a patch to hot-fix the product's firmware.
Server misuse In 2006, D-Link was accused of NTP vandalism, when it was found that its routers were sending time requests to a small NTP server in Denmark, incurring thousands of dollars of costs to its operator. D-Link initially refused to accept responsibility. Later, D-Link products were also found to be abusing other time servers, including some operated by the US military and NASA. However, no malicious intent was discovered, and eventually D-Link and the site's owner, Poul-Henning Kamp, were able to agree to an amicable settlement regarding access to Kamp's GPS.Dix.dk NTP Time Server site, with existing products gaining authorized access to Kamp's server. GPL violation On 6 September 2006, the gpl-violations.org project prevailed in court litigation against D-Link Germany GmbH regarding D-Link's inappropriate and copyright-infringing use of parts of the Linux kernel. D-Link Germany GmbH was ordered to pay the plaintiff's costs. Following the judgement, D-Link agreed to a cease-and-desist request, ended distribution of the product, and paid legal costs. See also List of companies of Taiwan References External links Taiwanese companies established in 1986 Companies based in Taipei Companies listed on the Taiwan Stock Exchange Electronics companies of Taiwan Multinational companies headquartered in Taiwan Networking companies Networking hardware Networking hardware companies Routers (computing) Taiwanese brands Telecommunications equipment vendors
23525639
https://en.wikipedia.org/wiki/Scot%20Gresham-Lancaster
Scot Gresham-Lancaster
Scot Gresham-Lancaster (born 1954 in Redwood City, California) is an American composer, performer, instrument builder, educator and educational technology specialist. He uses computer networks to create new environments for musical and cross discipline expression. As a member of The Hub, he is one of the early pioneers of "computer network" music, which uses the behavior of interconnected music machines to create innovative ways for performers and computers to interact. He performed in a series of "co-located" performances, collaborating in real time with live and distant dancers, video artists and musicians in network-based performances. As a student, he studied with Philip Ianni, Roy Harris, Darius Milhaud, John Chowning, Robert Ashley, Terry Riley, Robert Sheff, David Cope, and Jack Jarret, among others. In the late 1970s, he worked closely with Serge Tcherepnin, helping with the construction and distribution of Serge's Serge Modular Music System. He went on to work at Oberheim Electronics. In the early 1980s, he was the technical director at the Mills College Center for Contemporary Music. He has taught at California State University, Hayward, Diablo Valley College, Ex'pression College for Digital Arts, Cogswell College, and San Jose State University. He taught at University of Texas at Dallas in the School of Arts Technology and Emerging Communication (ATEC) until 2017, and is currently a Visiting Researcher at CNMAT, UC Berkeley. He is also a Research Scientist at the ArtSci Lab at ATEC. He was a composer in residence at Mills College Center for Contemporary Music. At STEIM in Amsterdam, he has worked to develop new families of controllers to be used exclusively in the live performance of electroacoustic music. He is an alumnus of the Djerassi Artist Residency Program. He has toured and recorded as a member of The Hub, Room (with Chris Brown, Larry Ochs and William Winant), Alvin Curran, ROVA saxophone quartet, the Club Foot Orchestra, and the Dutch ambient group NYX. He has performed the music of Alvin Curran, Pauline Oliveros, John Zorn, and John Cage, under their direction, and worked as a technical assistant to Lou Harrison, Iannis Xenakis, David Tudor, Edmund Campion, Cindy Cox and among many others. Since 2006, he has collaborated with media artist Stephen Medaris Bull in a series of "karaoke cellphone operas" with initial funding provided by New York State Council for the Arts. He has worked in collaboration with Dallas theater director Thomas Riccio developing sonic interventions for many of his productions. Publications Experiences in Digital Terrain: Using Digital Elevation Models for Music and Interactive Multimedia The Aesthetics and History of the Hub: The Effects of Changing Technology on Network Computer Music Mixing in the Round Flying Blind: Network and feedback based systems in real time interactive music performances No There, There: A personal history of telematic performance Discography The HUB: Boundary Layer (3-CD retrospective) Tzadik TZ 8050-3 Orchestrate Clang Mass, solo work Live Interactive Electronics (1983-2001) (2003 OCM publishing) Fuzzybunny with Chris Brown and Tim Perkis Sonore 2001 The HUB: Wrecking Ball (Hub 2nd CD) Artifact 010 Yearbook Vol. 1 Track 5 Rastascan Nonstop Flight (HUB with Deep Listening Band) Music and Arts Metropolis (Clubfoot Orchestra) Voys vol.1 (Voys) Electric Rags New Albion NYX Axis Mundi (with Bert Barten) Lotus Records Gino Robair: Other Destinations Rastascan Records Vol. 
17 CDCM Computer Music Series (Chain reaction) CDCM Room (Hall of Mirrors) Music and Art The HUB: Computer Network Music, Artifact 002 Talking Drum: Chris Brown, Pogus 21034 Notes External links Golden, Barbara. "Conversation with Chris Brown and Scot Gresham-Lancaster." eContact! 12.2 — Interviews (2) (April 2010). Montréal: CEC. Golden, Barbara. "Conversation with Scot Gresham-Lancaster." eContact! 12.2 — Interviews (2) (April 2010). Montréal: CEC. Cellphonia. "Mission: Cellphonia explores the social, technological, and creative possibilities of cell phones with bias to encourage new applications for cultural growth." (2006-present). 1954 births Living people Musicians from the San Francisco Bay Area American male composers 21st-century American composers California State University, East Bay faculty People from Redwood City, California 21st-century American male musicians
3019526
https://en.wikipedia.org/wiki/ProCD%2C%20Inc.%20v.%20Zeidenberg
ProCD, Inc. v. Zeidenberg
ProCD, Inc. v. Zeidenberg, 86 F.3d 1447 (7th Cir. 1996), is a United States contract case involving a "shrink wrap license". One issue presented to the court was whether a shrink wrap license was valid and enforceable. Judge Easterbrook wrote the opinion for the court and found such a license was valid and enforceable. The Seventh Circuit's decision overturned a lower court decision. Facts The case involved a graduate student, Matthew Zeidenberg, who purchased a telephone directory database, SelectPhone, on CD-ROM produced by ProCD. ProCD had compiled the information from over 3,000 telephone directories, at a cost of more than $10 million. To recoup its costs, ProCD discriminated based on price by charging commercial users a higher price than it did to everyday, non-commercial users. Zeidenberg purchased a non-commercial copy of SelectPhone and after opening the packaging and installing the software on his personal computer, Zeidenberg created a website and offered the information originally on the CD to visitors for a fee that was less than what ProCD charged its commercial customers. At the time of purchase, Zeidenberg may not have been aware of any prohibited use; however, the package itself stated that there was a license enclosed. Moreover, because "the software license splashed across the screen and would not let him proceed without indicating acceptance," Zeidenberg had ample opportunity to read the license before using SelectPhone. Zeidenberg was presented with this license when he installed the software, which he accepted by clicking assent at a suitable dialog box—this type of license is commonly known as a "click-through license" or "clickwrap". The license was contained, in full, on the CD. Holding The court first held that copyright law did not preempt contract law. Under the 1991 Supreme Court case Feist Publications v. Rural Telephone Service, the court held that a telephone directory was not protectable through copyright. In this case, the court assumes that a database of a telephone directory was equally not protectable. However, the court held that a contract could confer among the parties similar rights since those rights are not "equivalent to any of the exclusive rights within the general scope of copyright". The court then held the license valid and enforceable as a contract. The court relied primarily on the Uniform Commercial Code (UCC) sections 2-204 (describing a valid contract) and 2-606 (describing acceptance of a contract). There was little doubt that ProCD, in fact, offered use of the software as described by UCC section 2-206. The court examined more closely the question of acceptance. The court held that Zeidenberg did accept the offer by clicking through. The court noted that he "had no choice, because the software splashed the license on the screen and would not let him proceed without indicating acceptance". The court stated that Zeidenberg could have rejected the terms of the contract and returned the software. The court, in addition, noted the ability and "the opportunity to return goods can be important" under the Uniform Commercial Code. See also Boilerplate contract References External links Legal Discussion of the legacy of ProCD v. Zeidenberg, published in February 2004 by Jones Day (see section "Software License Jurisprudence") Matthew Zeidenberg's Home Page United States computer case law Software licenses United States Court of Appeals for the Seventh Circuit cases United States contract case law 1996 in United States case law
38151849
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20S4
Samsung Galaxy S4
The Samsung Galaxy S4 is an Android smartphone produced by Samsung Electronics as the fourth smartphone of the Samsung Galaxy S series; it was first shown publicly on March 14, 2013, at Samsung Mobile Unpacked in New York City. It is the successor to the Galaxy S III, which was released the previous year. The S4 maintains a similar design, but with upgraded hardware, more sensors, and an increased focus on software features that take advantage of its hardware capabilities, such as the ability to detect when a finger is hovered over the screen and expanded eye-tracking functionality. A hardware variant of the S4 became the first smartphone to support the emerging LTE Advanced mobile network standard (model number GT-i9506). The T-Mobile version of the Galaxy S4 (model SGH-M919) was released the same month. The phone's successor, the Samsung Galaxy S5, was released the next year. The Galaxy S4 is among the earliest phones to feature a 1080p Full HD display and 1080p front camera video recording, and among the few to feature temperature and humidity sensors and a touch screen able to detect a floating finger. Sales and shipping The S4 was made available in late April 2013 on 327 carriers in 155 countries. It became Samsung's fastest-selling smartphone and eventually Samsung's best-selling smartphone, with 20 million sold worldwide in the first two months and 40 million in the first six months. In total, more than 80 million Galaxy S4 units have been sold, making it the best-selling Android-powered mobile phone of all time. The Galaxy S4's successor, the Samsung Galaxy S5, was announced by Samsung in February 2014 ahead of a release in April of that year. Specifications Hardware Comparisons The Samsung Galaxy S4 uses a refined version of the hardware design introduced by the Samsung Galaxy S III, with a rounded, polycarbonate chassis and a removable rear cover. It is slightly lighter and narrower than the Samsung Galaxy S III. At the bottom of the device are a microphone and a microUSB port for data connections and charging; it also supports USB-OTG and MHL 2.0. Near the top of the device are a front-facing camera, an infrared transmitter for use as a universal remote control, proximity and ambient light sensors, and a notification LED. In particular, the infrared sensor is used for the device's "Air View" features. A headphone jack, secondary microphone and infrared blaster are located at the top. The S4 is widely available in black and white color finishes; in selected regions, Samsung also introduced versions in red, purple, pink, brown with gold trim, and light pink with gold trim. In late January 2014, Samsung's Russian website briefly listed a new black model with a plastic leather backing, similar to the Galaxy Note 3. The S4's Super AMOLED display with 1080×1920 pixels (1080p Full HD) is larger than the 720×1280 display of its predecessor, and also features a PenTile RGBG matrix. The pixel density has increased from 306 to 441 ppi (see the calculation below), and the display is surfaced with Corning Gorilla Glass 3. An added glove mode option increases touch sensitivity to allow touch input to be detected through gloves. The Galaxy S4 is Samsung's first mobile phone, and one of the earliest overall, to feature a 1080p display. Unlike previous models, the S4 does not include FM radio support; Samsung cited the increased use of online media outlets for content consumption on mobile devices.
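The 441 ppi figure quoted above can be checked from the panel geometry. This is a worked calculation rather than a sourced specification; it assumes the commonly cited 4.99-inch display diagonal and square pixels:

\[
\text{ppi} = \frac{\sqrt{1920^2 + 1080^2}}{4.99\ \text{in}} = \frac{\sqrt{4{,}852{,}800}}{4.99} \approx \frac{2202.9}{4.99} \approx 441
\]

The same formula applied to the Galaxy S III's 720×1280 panel, assuming its commonly cited 4.8-inch diagonal, gives roughly 306 ppi, the predecessor figure mentioned above.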
Camera
The camera of the Galaxy S4 uses a 13-megapixel Sony Exmor RS IMX135 image sensor (4128×3096), later also used in the Galaxy Note 3. While the front cameras of the Galaxy S3 and Galaxy Note 2, both released in 2012, can only capture video at up to 720p HD resolution, the front camera of the Galaxy S4 allows 1080p Full HD video recording for the first time in a Samsung mobile phone. The image sensor of the front camera is a Samsung CMOS S5K6B2 (see also the Camera software section below).

Sensors
Like the Galaxy S3, the S4 is equipped with an accelerometer, a gyroscope, a front-facing proximity sensor, a digital compass, and a barometer. Unlike its predecessor, the S4 is also equipped with a Hall sensor for the S View cover, a self-capacitive touch-screen layer for Air View, and thermometer and hygrometer sensors; among all of Samsung's flagship devices, only the Galaxy S4 and Galaxy Note 3 carry the latter two.

International variants
Galaxy S4 models use one of two processors, depending on the region and network compatibility. The S4 version for North America, most of Europe, parts of Asia, and other countries contains Qualcomm's Snapdragon 600 system-on-chip, with a quad-core 1.9 GHz Krait 300 CPU and an Adreno 320 GPU. The chip also contains a modem which supports LTE.

Other models include Samsung's Exynos 5 Octa system-on-chip with a heterogeneous CPU. The octa-core CPU comprises a 1.6 GHz quad-core Cortex-A15 cluster and a 1.2 GHz quad-core Cortex-A7 cluster. The chip can dynamically switch between the two clusters based on CPU load: it switches to the A15 cores when more processing power is needed and stays on the A7 cores to conserve energy under lighter loads. Only one of the clusters is active at any particular moment, and software sees the processor as a single quad-core CPU. The SoC also contains an Imagination Technologies tri-core PowerVR SGX544 graphics processing unit (GPU).

Regional models of the S4 vary in LTE support: among Exynos 5-based models, the E300K/L/S versions support LTE (with the Cortex-A15 cluster likewise clocked at 1.6 GHz), while the GT-I9500 model does not. The S4 GT-I9505 includes a multiband LTE transceiver.

LTE-A (LTE+) variant GT-i9506
On 24 June 2013, a variant supporting LTE Advanced (model number GT-i9506), the first commercially available device to do so, was announced for South Korea. In December 2013, it also shipped in Germany as the Samsung Galaxy S4 LTE+, but only with Telekom and Vodafone branding. It was also equipped with increased processing power, using the same CPU (Snapdragon 800) and GPU (Adreno 330) hardware as the Galaxy Note 3 SM-N9005, although it offers no video recording capabilities beyond 1080p at 30 fps.

Storage options
The S4 comes with 16 GB, 32 GB, or 64 GB of internal storage, which can be supplemented with up to an additional 64 GB via a microSD card slot. Unofficially, the microSD card slot supports 128 GB cards as well.

The S4 contains a 2600 mAh, NFC-enabled battery.

Software

Operating system
The S4 was originally released with Android 4.2.2 and Samsung's TouchWiz Nature UX 2 interface.

Interaction
The Galaxy S4 is known for introducing a range of new extended interaction features to the Samsung Galaxy S series. Head tracking features have been extended on the S4 and are summarized as "Samsung SmartScreen".
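Samsung's SmartScreen implementation is proprietary, but the basic signal it builds on, namely whether a face is currently visible to the front camera, can be approximated with the stock Android face-detection API of the period. The sketch below is illustrative only (the class and method names are hypothetical, and this is not Samsung's code); the individual SmartScreen features themselves are described next.

    import android.graphics.SurfaceTexture;
    import android.hardware.Camera;

    // Illustrative sketch only: approximates the "is a face visible?" signal that a
    // Smart Stay-style feature needs, using the legacy android.hardware.Camera API
    // current on Android 4.2. Samsung's actual SmartScreen code is not public.
    public class FacePresenceSketch {
        public static Camera startWatching() throws java.io.IOException {
            // Find the front-facing camera.
            int frontId = 0;
            Camera.CameraInfo info = new Camera.CameraInfo();
            for (int i = 0; i < Camera.getNumberOfCameras(); i++) {
                Camera.getCameraInfo(i, info);
                if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
                    frontId = i;
                }
            }
            Camera camera = Camera.open(frontId);
            camera.setPreviewTexture(new SurfaceTexture(0));  // off-screen preview target
            camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
                @Override
                public void onFaceDetection(Camera.Face[] faces, Camera cam) {
                    boolean faceVisible = faces != null && faces.length > 0;  // presence only, not gaze
                    // A real feature would debounce this signal and, for example,
                    // keep resetting the screen time-out while faceVisible is true.
                }
            });
            camera.startPreview();
            if (camera.getParameters().getMaxNumDetectedFaces() > 0) {
                camera.startFaceDetection();  // hardware face detection must be supported
            }
            return camera;
        }
    }

Gaze- and head-orientation-dependent features such as Smart Scroll and Smart Pause require considerably more than this presence signal, and Samsung has not published how they are implemented.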
The new "Smart Scroll" feature can be used to scroll while looking at the screen by slightly tilting the head or the phone forward or backward, and "Smart Pause" allows the video player to pause videos when the user is not looking at the screen. "Smart Rotation" tracks the facial orientation using the front camera to match the screen rotation, and "Smart Stay" prevents the phone from entering stand-by mode by deactivating the display time-out while the user is looking at the screen.

"Air Gestures" implement gestures and other functionality by holding or swiping a hand or finger slightly above the screen, similarly to Samsung's Galaxy Note series; examples include previewing images in the Gallery, previewing messages in the pre-installed SMS/MMS app, previewing a contact's name, number and details in speed dial on the telephone app's number pad, and showing a preview tooltip when hovering above the seek bar in the video player. The S4 also adds a feature known as "Quick Glance", which uses the proximity sensor to wake the phone so it can display the clock, notifications, and status information such as battery charge. Air Gestures can also be used to accept calls ("Air call-accept"), to scroll through pages ("Air Jump"), to swipe through gallery pictures ("Air Browse"), and to move home screen icons ("Air Move"). There are additional motion gestures, including panning through and zooming into or out of an image in the pre-installed gallery app by tilting the phone while holding it with one or two fingers, respectively. As an alternative to pressing the home and power buttons simultaneously, a user can take a screenshot by swiping a hand horizontally from one end of the screen to the other. These interaction features were later inherited by the Galaxy Note 3 and the Galaxy S5, but were gradually removed from Samsung's subsequent flagship phones.

Camera software
The camera app implements numerous new features (some of which were first seen on the Galaxy Camera), including an updated interface and new modes such as "Drama" (which composes a moving element from multiple shots into a single photo), "Eraser" (which takes multiple shots and allows the user to remove unwanted elements from a picture), "Dual Shot" (which uses the front-facing camera for a picture-in-picture effect), "Sound and Shot" (which allows the user to record a voice clip alongside a photo), "Animated Photo", and "Story Album", among others. Burst shots at full resolution are supported, capped at twenty consecutive pictures at any selected resolution. The S4 also supports High Efficiency Video Coding (HEVC), a next-generation video codec. In addition, the Galaxy S4 is able to capture 9.6-megapixel (4128×2322) still photos during 1080p video recording, even with digital video stabilization enabled.

While the camera viewfinder interfaces of 2012's Samsung Galaxy S3 and Galaxy Note 2, as well as the competing iPhone 5 and iPhone 5s, require switching between photo and video modes, the viewfinder of the Galaxy S4's camera software shows the buttons for photo and video capture simultaneously. The virtual buttons for video recording, photo capture, and camera modes have a metallic texture. A "remote viewfinder" feature allows casting the camera viewfinder and controls to a different unit through Wi-Fi Direct.

Accessibility
The Galaxy S4 is equipped with a range of accessibility features such as TalkBack.

Miscellaneous
The "Group Play" feature allows ad hoc sharing of files between Galaxy phones, along with multiplayer games and music streaming between S4 phones.
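Both the "remote viewfinder" casting described under Camera software and Group Play's ad hoc sharing run over Wi-Fi Direct. On stock Android, the peer-discovery step of such a feature goes through the framework's WifiP2pManager; the sketch below shows only that step, uses a hypothetical class name, and is not taken from Samsung's Group Play implementation.

    import android.content.Context;
    import android.net.wifi.p2p.WifiP2pManager;
    import android.os.Looper;

    // Illustrative sketch only: start Wi-Fi Direct peer discovery, the first step a
    // Group Play-like feature performs. Requires the ACCESS_WIFI_STATE and
    // CHANGE_WIFI_STATE permissions in the app manifest.
    public class WifiDirectDiscoverySketch {
        public static void discoverPeers(Context context) {
            final WifiP2pManager manager =
                    (WifiP2pManager) context.getSystemService(Context.WIFI_P2P_SERVICE);
            final WifiP2pManager.Channel channel =
                    manager.initialize(context, Looper.getMainLooper(), null);
            manager.discoverPeers(channel, new WifiP2pManager.ActionListener() {
                @Override
                public void onSuccess() {
                    // Discovery has started; peers are announced via the
                    // WIFI_P2P_PEERS_CHANGED_ACTION broadcast, after which
                    // manager.requestPeers(channel, listener) returns the device list.
                }

                @Override
                public void onFailure(int reason) {
                    // reason is one of WifiP2pManager.P2P_UNSUPPORTED, ERROR or BUSY.
                }
            });
        }
    }

Once peers are found and a group is formed, the actual file or stream transfer runs over ordinary sockets on the resulting peer-to-peer network; the protocol Samsung layers on top of this for Group Play is not public.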
The Android 4.3 update also introduces Knox to the S4, a suite of features which implements a sandbox for enterprise environments that can co-exist with a user's "personal" data. Knox incorporates the ARM TrustZone extensions and security enhancements to the Android platform.

The TouchWiz keyboard application on the Galaxy S4 has a built-in clipboard feature that can hold up to twenty items such as text and screenshots.

The pre-installed optical reader feature performs optical character recognition on pictures taken with the camera or stored in the gallery. The optical reader is also accessible through a menu in the keyboard for pasting recognized text directly into a text area.

Other new pre-loaded apps include WatchOn (an electronic program guide that can use the S4's infrared transmitter as a remote control), S Translator, the workout tracker S Health, S Voice Drive, S Memo, TripAdvisor, and an optical character recognition app. The previous "Hub" apps from past Samsung devices were replaced by a single Samsung Hub app, with access to music, e-books, and games that can be purchased by users.

Updates
A minor update was released in June 2013, restoring the ability to move apps to the SD card and adding HDR video support, among other tweaks.

In November 2013, Samsung began rolling out an update to Android 4.3 for the S4, notably adding the Bluetooth Low Energy support needed for compatibility with the Galaxy Gear smartwatch. However, Samsung halted the rollout following reports from Galaxy S III users that Samsung's version of 4.3 had caused instability and increased battery usage. The rollout was resumed in December 2013.

In February 2014, Samsung began rolling out an update to Android 4.4 "KitKat" for the S4. The update adds user interface tweaks such as a camera shortcut on the corner of the lock screen, options for setting default launcher and text messaging applications, support for printing, and a new location settings menu for monitoring and controlling apps' use of location tracking. It also makes significant changes to the handling of secondary storage on the device for security reasons: applications' access to the SD card is restricted to designated, app-specific directories only, while full access to internal primary storage is still allowed. Although this behavior has existed since Android 3.0 "Honeycomb", OEMs such as Samsung had previously modified their distributions of Android to retain the earlier behavior, allowing applications unrestricted access to SD card contents.

In January 2015, Samsung began rolling out an update to Android 5.0.1 "Lollipop" in Russia and India, which brings the features of Lollipop such as improved performance, a revamped lock screen, and a refined interface with a flatter, more geometric look, as seen on the Galaxy S5. Samsung paused the rollout soon after, when users reported major bugs. The rollout continued in March 2015, starting with unlocked models in the UK and the Nordic and Baltic countries, and has since spread to several other countries. US (starting with AT&T and Sprint) and Canadian Samsung Galaxy S4 models received the Android 5.0.1 Lollipop update in April 2015.

Model variants
Several model variants of the S4 are sold, differing mainly in the regional network types and bands they support.
To prevent grey-market reselling, models of the S4 manufactured after July 2013 implement a regional lockout system in certain regions, requiring that the first SIM card used in a European or North American model be from a carrier in that region. Samsung stated that the lock would be removed once a local SIM card is used. All variants use the micro-SIM format; depending on the model, the phone has one or two SIM slots.

Google Play Edition
At the Google I/O 2013 keynote, Samsung and Google revealed that an edition of the U.S. S4 would be released on June 26, 2013 through Google Play, with the HTC One M7, Sony Xperia Z Ultra, Motorola Moto G, and HTC One M8 following later. Initially featuring stock Android 4.2.2, the phone was later updated to 4.4.4, with updates provided by Samsung. It has an unlockable bootloader (similar to Nexus devices) and supports LTE on AT&T's and T-Mobile's networks. The model number is GT-I9505G.

Accessories
At retail, the S4 is bundled with a USB cable, an AC adapter, and in-ear headphones. The "S-View Cover" accessory "closes" the phone; when the cover is detected (by a Hall effect sensor), the time and battery status are displayed in the cover's window area.

Reception

Critical reception
While some users considered all of the new Galaxy S4 features innovative and legitimately useful, others called them feature creep or just gimmicks. Examples of such features include Smart Pause, Smart Rotation, Smart Scroll, Air View, Air Gesture, Story Album, and the temperature and humidity sensors. These features and sensors are optional and available to users and software that need them. Additionally, the Galaxy S4 is equipped with an "easy mode" that hides many features and increases the size of on-screen elements for easier readability for novice mobile phone users.

The S4 received many positive reviews, though also some criticism. Gigaom's Tofel said he would recommend the S4 "without hesitation" and called it "Samsung's defining phone". ReadWrite's Rowinski described the phone as a "solid" and "first-rate smartphone", but criticised Samsung's use of "bloatware, pre-loaded apps and features that you will likely never use". TIME's McCracken said the S4 is a smartphone with everything: it has the biggest screen and the most built-in features. He wished the S4 would mark the end of Samsung's practice of adding too many new features to its flagship smartphones.

Technology journalist Walt Mossberg described the S4 as "a good phone, just not a great one". Mossberg wrote: "while I admire some of its features, overall, it isn't a game-changer." He criticized the software as "especially weak" and "often gimmicky, duplicative of standard Android apps, or, in some cases, only intermittently functional." He urged readers to "consider the more polished-looking, and quite capable, HTC One, rather than defaulting to the latest Samsung."

Consumer Reports named the S4 the top smartphone as of May 2013 due to its screen quality, multitasking support, and built-in IR blaster.

Critics noted that about half of the internal storage on the S4's 16 GB model was taken up by its system software, using 1 GB more than the S III and leaving only 8.5 to 9.15 GB for the storage of other data, including downloaded apps (some of which cannot be moved to the SD card).
Samsung initially stated that the space was required for the S4's new features, but following a report regarding the issue on the BBC series Watchdog, Samsung said it would review the possibility of optimising the S4's operating system to use less local storage space in a future update. Storage optimizations arrived in an update first released in June 2013, which frees 80 MB of internal storage and restores the ability to move apps to the device's microSD card.

Commercial reception
The S4 reached 10 million pre-orders from retailers in the first two weeks after its announcement. In the United States, this prompted Samsung to announce that, due to larger-than-expected demand, the rollout of devices on U.S. carriers Sprint and T-Mobile would be slower than planned. The S4 sold 4 million units in 4 days and 10 million in 27 days, making it the then-fastest-selling smartphone in Samsung's history (a record since eclipsed by the Galaxy S5). The Galaxy S III sold 4 million units in 21 days, the Galaxy S II took 55 days, and the Galaxy S took 85 days. Samsung shipped more than 20 million S4 smartphones by June 30, roughly 1.7 times faster than the Galaxy S III reached the same milestone. As of October 23, 2013, six months after release, Samsung had sold over 40 million S4 units.

Battery problems and safety issues
A house in Hong Kong is alleged to have been set on fire by an S4 in July 2013, followed by a minor incident involving a burnt S4 in Pakistan. A minor fire was also reported in Newbury, United Kingdom, in October 2013. Some users of the phone have also reported swelling batteries and overheating; Samsung has offered affected customers new batteries free of charge.

On December 2, 2013, Canadian Richard Wygand uploaded a YouTube video describing his phone combusting. The phone had been plugged into AC power overnight; he woke up to the smell of smoke and burning material. In the video, the power cord is shown severely burnt and the power plug visibly warped. Later in the video, Wygand describes how he attempted to get a replacement. In his second video, uploaded a few days after the first, Wygand states that in order to receive a replacement phone, Samsung allegedly asked him to sign a legal document requiring him to remove the video, remain silent about the agreement, and surrender any future claims against the company. It is unknown whether he signed the document, but he expressed his frustrations in the video. In an interview with Mashable, Wygand said that he had received no further response from Samsung since posting the second video. However, an official spokesman for Samsung told Mashable, "Samsung takes the safety and security of our customers very seriously. Our Samsung Canada team is in touch with the customer, and is investigating the issue."

In the UK, companies which sold the S4 have varied in their reactions to claims by worried customers for replacement batteries under the Sale of Goods Act. Amazon, for example, has simply refunded part of the purchase price to allow for the cost of a replacement battery. O2, however, insists that the complete phone, with the faulty battery, be returned to it so that it can in turn send the phone to Samsung to consider the claim.
See also
Comparison of Samsung Galaxy S smartphones
Comparison of smartphones

References

External links

Android (operating system) devices
Smartphones
Samsung mobile phones
Samsung Galaxy
Mobile phones introduced in 2013
Discontinued smartphones
Mobile phones with user-replaceable battery
Mobile phones with infrared transmitter
Mobile phones with self-capacitive touch screen layer