14354740
https://en.wikipedia.org/wiki/Hanlin%20eReader
Hanlin eReader
The Hanlin is an e-reader, an electronic book (e-book) reading device. The Hanlin v3 features a 6" (15 cm), 4-level grayscale electrophoretic display (E Ink material) with a resolution of 600×800 pixels (167 ppi), while the v3+ features a 16-level grayscale display. The Hanlin v5 Mini features a 5" (12.7 cm), 8-level grayscale electrophoretic display (E Ink material) with a resolution of 600×800 pixels (200 ppi). The device runs a Linux-based OS and is manufactured by the JinKe Electronic Company in China. It is rebranded by various OEMs and sold under the names Bebook, Walkbook, lBook, Iscriptum, Papyre, EZ Reader, Koobe and ECO Reader. The Hanlin eReader works best with EPUB, RTF, FB2, and Mobipocket documents, because of their simplicity, interoperability, and low CPU processing requirements. These files also offer more zoom levels and more options, such as search, landscape mode, and text-to-speech, than PDF, DOC, HTML, or TXT files. It also uses JinKe's proprietary WOLF format (file extension .wol). Specifications of Hanlin Models Hardware Size: 184 x 120.5 x 9.9 mm Weight: 210 g, battery included (160 g for the BeBook Mini) Screen: 90 x 120 mm (3.54 x 4.72 inches) 600x800 pixels / black and white, 4/16 gray-scale 166 dpi for Hanlin v3/v3+ and 8 gray-scale 200 dpi for Hanlin v5 Daylight readable / No backlight / Portrait and landscape mode SDRAM memory: 32 MB for the v3, 64 MB for the v3+/v6 SD card (supports SDIO); the v3 supports up to 4 GB, the v5 supports SDHC up to 16 GB (32 GB unofficially) Connectivity: USB 1.1 (client only) for Hanlin v3 and USB 2.0 for Hanlin v3+/v5 Software Operating system: Linux Document formats: PDF, TXT, RTF, DOC, HTML Help, FB2, HTML, WOL, DJVU, LIT, EPUB, PPT, Mobipocket. Archive support: ZIP, RAR. Supported image formats: TIFF, JPEG, GIF, BMP, PNG. Supported sound format: MP3. Product Versions V Series: V2, V3, V3+, V5, V60, V90 A Series: A6, A9, A90 See also List of e-book readers External links http://www.bebook.net.au - Australian reseller. BeBook version includes both Bebook1 and Mini models References Electronic paper technology Dedicated e-book devices Linux-based devices Chinese brands
23315184
https://en.wikipedia.org/wiki/OSDN
OSDN
OSDN (formerly SourceForge.JP) is a web-based collaborative development environment for open-source software projects. It provides source code repositories and web hosting services. With features similar to SourceForge, it acts as a centralized location for open-source software developers. OSDN hosts more than 5,000 projects and has more than 50,000 registered users. Registered software used to be mostly specialized for Japanese users, such as input method systems and fonts, but there are also popular applications such as Cabos, TeraTerm, and Shiira. Also, since the rebranding to OSDN, more and more projects that used to be developed on SourceForge have been moving to OSDN, such as MinGW, TortoiseSVN, Android-x86, and Clonezilla. History SourceForge.JP was started by VA Linux Systems (later SourceForge, Inc.) and its subsidiary VA Linux Systems Japan on April 18, 2002. OSDN K.K. was spun off from VA Linux Systems Japan in August 2007, and as of June 2009 it was operating SourceForge.JP. On May 11, 2015, the site was renamed from "SourceForge.JP" to "OSDN". In the same month that OSDN changed the site name, SourceForge caused two controversies: DevShare adware and project hijacking. In contrast, OSDN rejects adware bundling and project hijacking entirely. The renaming to OSDN is therefore perceived as a response to the criticism of and adverse reactions to SourceForge's monetization. Features OSDN provides revision control systems such as CVS, SVN, Git, and Mercurial, as well as every feature found in SourceForge. What distinguishes OSDN from SourceForge are its bug tracking and wiki systems, which are very Trac-like. See also Comparison of source code hosting facilities References External links Free software websites Geeknet Internet properties established in 2002 Open-source software hosting facilities
55466476
https://en.wikipedia.org/wiki/ElectroData%20Corporation
ElectroData Corporation
The ElectroData Corporation was a computer company located in Pasadena, California. ElectroData originated as a part of Consolidated Electrodynamics Corporation (CEC), which manufactured scientific equipment. Clifford Berry and Sibyl M. Rock developed an analog computer to process the output of CEC's mass spectrometer. Berry then urged CEC to develop a digital computer as a follow-on. In 1951 CEC enlisted Harry Huskey, who had managed the development of the SWAC computer, to work on the project. In May 1952, CEC pre-announced the "CEC 30-201" computer, a vacuum tube computer with a magnetic drum memory. That same year CEC reorganized computer development into a separate Computer Division. In 1954 the division was spun off into a separate public company named ElectroData. Also in 1954, the first model of the computer, now named the Datatron 203, shipped to the Jet Propulsion Laboratory in Pasadena. The purchase price was $125,000. The company shipped seven more 203 systems in 1954 and thirteen in 1955. By 1956 ElectroData was the third-largest computer manufacturer in the world, but was unable to generate enough revenue to meet the demands of growing the business. That year Burroughs Corporation, at that time a manufacturer of electro-mechanical office equipment, made a deal to acquire ElectroData in a stock swap, and renamed it the ElectroData Division of Burroughs Corporation. The Datatron was renamed the Burroughs 205. References 1954 establishments in California 1956 disestablishments in California American companies established in 1954 American companies disestablished in 1956 Computer companies established in 1954 Computer companies disestablished in 1956 Defunct computer companies of the United States
59127566
https://en.wikipedia.org/wiki/Bcachefs
Bcachefs
Bcachefs is a copy-on-write (COW) file system for Linux-based operating systems. Its primary developer, Kent Overstreet, first announced it in 2015, and efforts are ongoing to have it included in the mainline Linux kernel. It is intended to compete with the modern features of ZFS or Btrfs, and the speed and performance of ext4 or XFS. Features Bcachefs is a copy-on-write (COW) file system for Linux-based operating systems. Planned or existing features for Bcachefs include caching, full file-system encryption using the ChaCha20 and Poly1305 algorithms, native compression via zlib, LZ4, and Zstandard, snapshots, and CRC-32C and 64-bit checksumming. It can use multiple block devices, including in RAID configurations. Bcachefs provides all the functionality of Bcache, a block-layer cache system for Linux, with which Bcachefs shares about 80% of its code. History Primary development has been by Kent Overstreet, the developer of Bcache, which he describes as a "prototype" for the ideas that became Bcachefs. Overstreet intends Bcachefs to replace Bcache. Overstreet has stated that development of Bcachefs began as Bcache's developers realized that its codebase had "been evolving ... into a full blown, general-purpose POSIX filesystem", and that "there was a really clean and elegant design" within it if they took it in that direction. Some time after Bcache was merged into the mainline Linux kernel in 2013, Overstreet left his job at Google to work full-time on Bcachefs. After a few years' unfunded development, Overstreet announced Bcachefs in 2015, at which point he called the code "more or less feature complete", and called for testers and contributors. He intended it to be an advanced file system with modern features like those of ZFS or Btrfs, with the speed and performance of file systems such as ext4 and XFS. As of 2017 Overstreet was receiving financial support for the development of Bcachefs via Patreon. As of mid-2018, the on-disk format had settled. Patches had been submitted for review to have Bcachefs included in the mainline Linux kernel, but had not yet been accepted. By mid-2019, Bcachefs had reached its desired feature set, and the associated patches had been submitted to the Linux kernel mailing list (LKML) for peer review. As of January 2022, Bcachefs had still not been merged into the mainline Linux kernel. References Works cited External links 2015 software Compression file systems File systems supported by the Linux kernel Linux file system-related software
2480975
https://en.wikipedia.org/wiki/Interactive%20EasyFlow
Interactive EasyFlow
EasyFlow was one of the first diagramming and flow charting software packages available for personal computers. It was produced by HavenTree Software Limited of Kingston, Ontario, Canada. HavenTree's mark on history is the notable plain-English license of its product, which was subsequently renamed Interactive EasyFlow. History HavenTree was formed in 1981. EasyFlow, a DOS-based software package, was the initial name of the company's flagship offering, which was non-interactive and was introduced in 1983. "EasyFlow-Plus" was announced in 1984. Interactive EasyFlow - so named to distinguish it from the preceding products - was offered from 1985 until the early 1990s, when the company dropped the "Interactive" adjective in favour of simply "HavenTree EasyFlow". It offered the software for sale until it filed for protection under Canada's Bankruptcy and Insolvency Act in April 1996. The assets of the company were purchased by SPSS Inc. in 1998. Historical significance HavenTree and EasyFlow are mostly remembered today for their counter-cultural disclaimer and end-user license agreement. Both were written in plain English and not in legalese, enabling end users to better understand the terms of these legal agreements, and emphasizing the problems with modern software licensing. Excerpts from the license and disclaimer are included in the fortune databases of many Linux and BSD distributions. Text of software license Text of disclaimer Patenting Patent entries are dated in the 1990s. References External links http://www.geocities.com/Heartland/Plains/4188/sw_doc.html Diagramming software Computer humor
14069920
https://en.wikipedia.org/wiki/Advanced%20Numerical%20Research%20and%20Analysis%20Group
Advanced Numerical Research and Analysis Group
Advanced Numerical Research and Analysis Group (ANURAG) is a laboratory of the Defence Research and Development Organisation (DRDO). Located in Kanchanbagh, Hyderabad, it is involved in the development of computing solutions for numerical analysis and their use in other DRDO projects. History ANURAG was established on 2 May 1988 to develop an indigenous supercomputer and to support aeronautical design work, with the mandate of executing specific, time-bound projects leading to the development of custom-designed computing systems and software packages for numerical analysis and other applications. In 1991, ANURAG became a part of the Defence Research and Development Organisation. As of 2020, it no longer functions as an independent laboratory; all staff members have been transferred to other DRDO labs in Hyderabad, Bangalore and Delhi. Areas of work ANURAG helps design and develop advanced computing systems. Much of this research involves state-of-the-art concepts such as parallel architectures, in order to build up a technology base in these areas. Its areas of work are: Parallel processing technology. Scientific Data Visualisation System engineering, integration. General purpose microprocessors. 1 micrometre CMOS fabrication technology. Design and development of VLSI chips & SOC development. Processor related technology. System software development for custom made processors. Analog, RF and Mixed-signal ASIC design Products PACE PACE (Processor for Aerodynamic Computations and Evaluation), developed by ANURAG, is a loosely coupled, message-passing parallel processing supercomputer. PACE was originally designed to cater to the computational fluid dynamics (CFD) needs of aircraft design. It can also be used in other fields such as weather forecasting, automobile and civil engineering design, and molecular biology. These systems have been built using VME-bus based Pentium processor boards, ATM switches, and Reflective Memory communication hardware. In 1987, India decided to launch a national initiative in supercomputing to design, develop and deliver a supercomputer in the gigaflops range. Complementary projects were initiated in various labs, ANURAG being one of them. PACE was unveiled by then Prime Minister P.V. Narasimha Rao in April 1995. In late 1998, ANURAG developed the 15 times more powerful "Pace Plus 32", which can be used to support missile development, as well as other fields. A 128-node PACE++ system, built using Pentium processor-based VME boards, was unveiled by Dr. A.P.J. Abdul Kalam in January 2004. The performance of this system is 50 gigaflops (sustained). It has been installed at the Indian Institute of Science, Bangalore. At present, work is in progress on a parallel processing system based on Linux clusters, targeted to deliver 1 teraflop of performance. ANAMICA ANAMICA (ANURAG's Medical Imaging and Characterization Aid) is DICOM-compliant three-dimensional medical visualization software for data obtained from any medical imaging system, such as MRI, CT and ultrasound. The software provides two-dimensional and three-dimensional techniques for visualizing the images in various ways. The sequence of images obtained from an imaging system by scanning a single patient is packed to form a three-dimensional grid. The software has also been modified to accept data from industrial CT systems. General purpose microprocessors ANURAG has designed and developed general-purpose microprocessors: ANUPAMA and ABACUS.
ANUPAMA is a 32-bit RISC processor that works at a 33 MHz clock speed. The complete software development tool kit is available for application development. A single-board computer based on ANUPAMA is available for evaluation and software development. ANUPAMA is also available as an IP core. ABACUS is a 32-bit processor for multi-tasking applications with virtual memory support. It is designed around the ANUPAMA core with additions such as an MMU, two levels of cache, a double-precision FPU, and an SDRAM controller. The IP core of ABACUS is available in Verilog RTL code. This processor is suited for desktop applications. A complete software platform is available for the ABACUS processor, a single-board computer with ABACUS has been implemented, and the Linux kernel has been ported to it. Other technologies ANURAG has designed a 16-bit DSP processor, which is available as an IP core; the design is packaged in a 120-pin CPGA. It has also designed other processors and arithmetic cores. ANURAG has also been able to fabricate CMOS designs with feature sizes down to 1 micrometre and with up to 100,000 gates. Die sizes of 14 x 14 mm have been achieved. References External links ANURAG Home Page Defence Research Complex, Kanchanbagh, Hyderabad, GlobalSecurity.org report on DRDO labs in Kanchanbagh, including ANURAG. Defence Research and Development Organisation laboratories Research institutes in Hyderabad, India 1988 establishments in Andhra Pradesh Research institutes established in 1988
1268939
https://en.wikipedia.org/wiki/General-purpose%20computing%20on%20graphics%20processing%20units
General-purpose computing on graphics processing units
General-purpose computing on graphics processing units (GPGPU, or less often GPGP) is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). The use of multiple video cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing. Essentially, a GPGPU pipeline is a kind of parallel processing between one or more GPUs and CPUs that analyzes data as if it were in image or other graphic form. While GPUs operate at lower frequencies, they typically have many times the number of cores. Thus, GPUs can process far more pictures and graphical data per second than a traditional CPU. Migrating data into graphical form and then using the GPU to scan and analyze it can create a large speedup. GPGPU pipelines were developed at the beginning of the 21st century for graphics processing (e.g. for better shaders). These pipelines were found to fit scientific computing needs well, and have since been developed in this direction. History In principle, any arbitrary Boolean function, including addition, multiplication and other mathematical functions, can be built up from a functionally complete set of logic operators. In 1987, Conway's Game of Life became one of the first examples of general-purpose computing using an early stream processor called a blitter to invoke a special sequence of logical operations on bit vectors. General-purpose computing on GPUs became more practical and popular after about 2001, with the advent of both programmable shaders and floating point support on graphics processors. Notably, problems involving matrices and/or vectors (especially two-, three-, or four-dimensional vectors) were easy to translate to a GPU, which acts with native speed and support on those types. The scientific computing community's experiments with the new hardware began with a matrix multiplication routine (2001); one of the first common scientific programs to run faster on GPUs than CPUs was an implementation of LU factorization (2005). These early efforts to use GPUs as general-purpose processors required reformulating computational problems in terms of graphics primitives, as supported by the two major APIs for graphics processors, OpenGL and DirectX. This cumbersome translation was obviated by the advent of general-purpose programming languages and APIs such as Sh/RapidMind, Brook and Accelerator. These were followed by Nvidia's CUDA, which allowed programmers to ignore the underlying graphical concepts in favor of more common high-performance computing concepts. Newer, hardware-vendor-independent offerings include Microsoft's DirectCompute and Apple/Khronos Group's OpenCL. This means that modern GPGPU pipelines can leverage the speed of a GPU without requiring full and explicit conversion of the data to a graphical form. Mark Harris, the founder of GPGPU.org, coined the term GPGPU. Implementations Any language that allows the code running on the CPU to poll a GPU shader for return values can create a GPGPU framework. Programming standards for parallel computing include OpenCL (vendor-independent), OpenACC, OpenMP and OpenHMPP. OpenCL is the dominant open general-purpose GPU computing language, and is an open standard defined by the Khronos Group. OpenCL provides a cross-platform GPGPU platform that additionally supports data parallel compute on CPUs.
OpenCL is actively supported on Intel, AMD, Nvidia, and ARM platforms. The Khronos Group has also standardised and implemented SYCL, a higher-level programming model for OpenCL as a single-source domain-specific embedded language based on pure C++11. The dominant proprietary framework is Nvidia's CUDA. Nvidia launched CUDA in 2006, a software development kit (SDK) and application programming interface (API) that allows programmers to use the C programming language to code algorithms for execution on GeForce 8 series and later GPUs. ROCm, launched in 2016, is AMD's open-source response to CUDA. As of 2022, it is roughly on par with CUDA in terms of features, but still lacks consumer support. The SDK created by Xcelerit is designed to accelerate large existing C++ or C# code-bases on GPUs with minimal effort. It provides a simplified programming model, automates parallelisation, manages devices and memory, and compiles to CUDA binaries. Additionally, multi-core CPUs and other accelerators can be targeted from the same source code. OpenVIDIA was developed at the University of Toronto between 2003 and 2005, in collaboration with Nvidia. A compiler created by Altimesh compiles Common Intermediate Language to CUDA binaries. It supports generics and virtual functions. Debugging and profiling are integrated with Visual Studio and Nsight. It is available as a Visual Studio extension on Visual Studio Marketplace. Microsoft introduced the DirectCompute GPU computing API, released with the DirectX 11 API. Alea GPU, created by QuantAlea, introduces native GPU computing capabilities for the Microsoft .NET languages F# and C#. Alea GPU also provides a simplified GPU programming model based on GPU parallel-for and parallel aggregate using delegates and automatic memory management. MATLAB supports GPGPU acceleration using the Parallel Computing Toolbox and MATLAB Distributed Computing Server, and third-party packages like Jacket. GPGPU processing is also used to simulate Newtonian physics by physics engines, and commercial implementations include Havok Physics, FX and PhysX, both of which are typically used for computer and video games. C++ Accelerated Massive Parallelism (C++ AMP) is a library that accelerates execution of C++ code by exploiting the data-parallel hardware on GPUs. Mobile computers As mobile GPUs have increased in power, general-purpose programming has also become available on mobile devices running major mobile operating systems. Google Android 4.2 enabled running RenderScript code on the mobile device GPU. Apple introduced the proprietary Metal API for iOS applications, able to execute arbitrary code through Apple's GPU compute shaders. Hardware support Computer video cards are produced by various vendors, such as Nvidia and AMD. Cards from such vendors differ in their support for data formats, such as integer and floating-point formats (32-bit and 64-bit). Microsoft introduced a Shader Model standard to help rank the various features of graphics cards into a simple Shader Model version number (1.0, 2.0, 3.0, etc.). Integer numbers Pre-DirectX 9 video cards only supported paletted or integer color types. Various formats are available, each containing a red element, a green element, and a blue element. Sometimes an additional alpha value is added, to be used for transparency. Common formats are: 8 bits per pixel – Sometimes palette mode, where each value is an index in a table with the real color value specified in one of the other formats. Sometimes three bits for red, three bits for green, and two bits for blue.
16 bits per pixel – Usually the bits are allocated as five bits for red, six bits for green, and five bits for blue. 24 bits per pixel – There are eight bits for each of red, green, and blue. 32 bits per pixel – There are eight bits for each of red, green, blue, and alpha. Floating-point numbers For early fixed-function or limited programmability graphics (i.e., up to and including DirectX 8.1-compliant GPUs) this was sufficient because this is also the representation used in displays. It is important to note that this representation does have certain limitations. Given sufficient graphics processing power even graphics programmers would like to use better formats, such as floating point data formats, to obtain effects such as high-dynamic-range imaging. Many GPGPU applications require floating point accuracy, which came with video cards conforming to the DirectX 9 specification. DirectX 9 Shader Model 2.x suggested the support of two precision types: full and partial precision. Full precision support could either be FP32 or FP24 (floating point 32- or 24-bit per component) or greater, while partial precision was FP16. ATI's Radeon R300 series of GPUs supported FP24 precision only in the programmable fragment pipeline (although FP32 was supported in the vertex processors) while Nvidia's NV30 series supported both FP16 and FP32; other vendors such as S3 Graphics and XGI supported a mixture of formats up to FP24. The implementations of floating point on Nvidia GPUs are mostly IEEE compliant; however, this is not true across all vendors. This has implications for correctness which are considered important to some scientific applications. While 64-bit floating point values (double precision float) are commonly available on CPUs, these are not universally supported on GPUs. Some GPU architectures sacrifice IEEE compliance, while others lack double-precision. Efforts have occurred to emulate double-precision floating point values on GPUs; however, the speed tradeoff negates any benefit to offloading the computing onto the GPU in the first place. Vectorization Most operations on the GPU operate in a vectorized fashion: one operation can be performed on up to four values at once. For example, if one color is to be modulated by another color , the GPU can produce the resulting color in one operation. This functionality is useful in graphics because almost every basic data type is a vector (either 2-, 3-, or 4-dimensional). Examples include vertices, colors, normal vectors, and texture coordinates. Many other applications can put this to good use, and because of their higher performance, vector instructions, termed single instruction, multiple data (SIMD), have long been available on CPUs. GPU vs. CPU Originally, data was simply passed one-way from a central processing unit (CPU) to a graphics processing unit (GPU), then to a display device. As time progressed, however, it became valuable for GPUs to store at first simple, then complex structures of data to be passed back to the CPU that analyzed an image, or a set of scientific-data represented as a 2D or 3D format that a video card can understand. 
Because the GPU has access to every draw operation, it can analyze data in these forms quickly, whereas a CPU must poll every pixel or data element much more slowly, as the speed of access between a CPU and its larger pool of random-access memory (or in an even worse case, a hard drive) is lower than that of GPUs and video cards, which typically contain smaller amounts of more expensive memory that is much faster to access. Transferring the portion of the data set to be actively analyzed to that GPU memory in the form of textures or other easily readable GPU forms results in a speed increase. The distinguishing feature of a GPGPU design is the ability to transfer information bidirectionally back from the GPU to the CPU; generally the data throughput in both directions is ideally high, resulting in a multiplier effect on the speed of a specific high-use algorithm. GPGPU pipelines may improve efficiency on especially large data sets and/or data containing 2D or 3D imagery. It is used in complex graphics pipelines as well as scientific computing; more so in fields with large data sets like genome mapping, or where two- or three-dimensional analysis is useful, especially at present for biomolecule analysis, protein study, and other complex organic chemistry. Such pipelines can also vastly improve efficiency in image processing and computer vision, among other fields, as well as in parallel processing generally. Some very heavily optimized pipelines have yielded speed increases of several hundred times the original CPU-based pipeline on one high-use task. A simple example would be a GPU program that collects data about average lighting values as it renders some view from either a camera or a computer graphics program back to the main program on the CPU, so that the CPU can then make adjustments to the overall screen view. A more advanced example might use edge detection to return both numerical information and a processed image representing outlines to a computer vision program controlling, say, a mobile robot. Because the GPU has fast and local hardware access to every pixel or other picture element in an image, it can analyze and average it (for the first example) or apply a Sobel edge filter or other convolution filter (for the second) with much greater speed than a CPU, which typically must access slower random-access memory copies of the graphic in question. GPGPU is fundamentally a software concept, not a hardware concept; it is a type of algorithm, not a piece of equipment. Specialized equipment designs may, however, even further enhance the efficiency of GPGPU pipelines, which traditionally perform relatively few algorithms on very large amounts of data. Massively parallelized, gigantic-data-level tasks thus may be parallelized even further via specialized setups such as rack computing (many similar, highly tailored machines built into a rack), which adds a third layer: many computing units, each using many CPUs, to correspond to many GPUs. Some Bitcoin "miners" used such setups for high-quantity processing. Caches Historically, CPUs have used hardware-managed caches, but the earlier GPUs only provided software-managed local memories. However, as GPUs are being increasingly used for general-purpose applications, state-of-the-art GPUs are being designed with hardware-managed multi-level caches, which have helped the GPUs move towards mainstream computing.
For example, the GeForce 200 series GT200 architecture GPUs did not feature an L2 cache, while the Fermi GPU has 768 KiB of last-level cache, the Kepler GPU has 1.5 MiB, the Maxwell GPU has 2 MiB, and the Pascal GPU has 4 MiB of last-level cache. Register file GPUs have very large register files, which allow them to reduce context-switching latency. Register file size is also increasing over different GPU generations, e.g., the total register file sizes on Maxwell (GM200), Pascal and Volta GPUs are 6 MiB, 14 MiB and 20 MiB, respectively. By comparison, the size of a register file on CPUs is small, typically tens or hundreds of kilobytes. Energy efficiency The high performance of GPUs comes at the cost of high power consumption, which under full load can be as much as that of the rest of the PC system combined. The maximum power consumption of the Pascal series GPU (Tesla P100) was specified to be 250 W. Stream processing GPUs are designed specifically for graphics and thus are very restrictive in operations and programming. Due to their design, GPUs are only effective for problems that can be solved using stream processing and the hardware can only be used in certain ways. The following discussion referring to vertices, fragments and textures concerns mainly the legacy model of GPGPU programming, where graphics APIs (OpenGL or DirectX) were used to perform general-purpose computation. With the introduction of the CUDA (Nvidia, 2007) and OpenCL (vendor-independent, 2008) general-purpose computing APIs, in new GPGPU codes it is no longer necessary to map the computation to graphics primitives. The stream processing nature of GPUs remains valid regardless of the APIs used. GPUs can only process independent vertices and fragments, but can process many of them in parallel. This is especially effective when the programmer wants to process many vertices or fragments in the same way. In this sense, GPUs are stream processors: processors that can operate in parallel by running one kernel on many records in a stream at once. A stream is simply a set of records that require similar computation. Streams provide data parallelism. Kernels are the functions that are applied to each element in the stream. In the GPUs, vertices and fragments are the elements in streams and vertex and fragment shaders are the kernels to be run on them. For each element we can only read from the input, perform operations on it, and write to the output. It is permissible to have multiple inputs and multiple outputs, but never a piece of memory that is both readable and writable. Arithmetic intensity is defined as the number of operations performed per word of memory transferred. It is important for GPGPU applications to have high arithmetic intensity; otherwise, memory access latency will limit the computational speedup. Ideal GPGPU applications have large data sets, high parallelism, and minimal dependency between data elements. GPU programming concepts Computational resources There are a variety of computational resources available on the GPU: Programmable processors – vertex, primitive, fragment and mainly compute pipelines allow the programmer to perform kernels on streams of data Rasterizer – creates fragments and interpolates per-vertex constants such as texture coordinates and color Texture unit – read-only memory interface Framebuffer – write-only memory interface In fact, a program can substitute a write-only texture for output instead of the framebuffer.
This is done either through Render to Texture (RTT), Render-To-Backbuffer-Copy-To-Texture (RTBCTT), or the more recent stream-out. Textures as stream The most common form for a stream to take in GPGPU is a 2D grid because this fits naturally with the rendering model built into GPUs. Many computations naturally map into grids: matrix algebra, image processing, physically based simulation, and so on. Since textures are used as memory, texture lookups are then used as memory reads. Certain operations can be done automatically by the GPU because of this. Kernels Compute kernels can be thought of as the bodies of loops. For example, a programmer operating on a grid on the CPU might have code that looks like this:

// Input and output grids have 10000 x 10000 or 100 million elements.
void transform_10k_by_10k_grid(float in[10000][10000], float out[10000][10000])
{
    for (int x = 0; x < 10000; x++) {
        for (int y = 0; y < 10000; y++) {
            // The next line is executed 100 million times
            out[x][y] = do_some_hard_work(in[x][y]);
        }
    }
}

On the GPU, the programmer only specifies the body of the loop as the kernel and what data to loop over by invoking geometry processing. Flow control In sequential code it is possible to control the flow of the program using if-then-else statements and various forms of loops. Such flow control structures have only recently been added to GPUs. Conditional writes could be performed using a properly crafted series of arithmetic/bit operations, but looping and conditional branching were not possible. Recent GPUs allow branching, but usually with a performance penalty. Branching should generally be avoided in inner loops, whether in CPU or GPU code, and various methods, such as static branch resolution, pre-computation, predication, loop splitting, and Z-cull, can be used to achieve branching when hardware support does not exist. GPU methods Map The map operation simply applies the given function (the kernel) to every element in the stream. A simple example is multiplying each value in the stream by a constant (increasing the brightness of an image). The map operation is simple to implement on the GPU. The programmer generates a fragment for each pixel on screen and applies a fragment program to each one. The resulting stream of the same size is stored in the output buffer. Reduce Some computations require calculating a smaller stream (possibly a stream of only one element) from a larger stream. This is called a reduction of the stream. Generally, a reduction can be performed in multiple steps. The results from the prior step are used as the input for the current step and the range over which the operation is applied is reduced until only one stream element remains. Stream filtering Stream filtering is essentially a non-uniform reduction. Filtering involves removing items from the stream based on some criteria. Scan The scan operation, also termed parallel prefix sum, takes in a vector (stream) of data elements and an (arbitrary) associative binary function '+' with an identity element 'i'. If the input is [a0, a1, a2, a3, ...], an exclusive scan produces the output [i, a0, a0 + a1, a0 + a1 + a2, ...], while an inclusive scan produces the output [a0, a0 + a1, a0 + a1 + a2, a0 + a1 + a2 + a3, ...] and does not require an identity to exist. While at first glance the operation may seem inherently serial, efficient parallel scan algorithms are possible and have been implemented on graphics processing units.
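To make the scan described above concrete, the following is a minimal sketch (not from the original article) using Nvidia's Thrust library in CUDA C++; the input values are invented for illustration, and Thrust is chosen here only as one of several APIs that provide parallel scans.

#include <thrust/device_vector.h>
#include <thrust/scan.h>
#include <iostream>
#include <vector>

int main() {
    // Input stream [a0, a1, a2, a3] = [3, 1, 4, 1], with '+' as the operator and 0 as identity.
    std::vector<int> host = {3, 1, 4, 1};
    thrust::device_vector<int> in(host.begin(), host.end());
    thrust::device_vector<int> out(in.size());

    // Inclusive scan: [a0, a0+a1, a0+a1+a2, a0+a1+a2+a3] = [3, 4, 8, 9]
    thrust::inclusive_scan(in.begin(), in.end(), out.begin());
    for (size_t i = 0; i < out.size(); ++i) std::cout << out[i] << ' ';
    std::cout << '\n';

    // Exclusive scan with identity 0: [i, a0, a0+a1, a0+a1+a2] = [0, 3, 4, 8]
    thrust::exclusive_scan(in.begin(), in.end(), out.begin(), 0);
    for (size_t i = 0; i < out.size(); ++i) std::cout << out[i] << ' ';
    std::cout << '\n';
    return 0;
}

Compiled with nvcc, these calls dispatch to parallel GPU implementations of the prefix-sum algorithm described above.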
The scan operation has uses in e.g., quicksort and sparse matrix-vector multiplication. Scatter The scatter operation is most naturally defined on the vertex processor. The vertex processor is able to adjust the position of the vertex, which allows the programmer to control where information is deposited on the grid. Other extensions are also possible, such as controlling how large an area the vertex affects. The fragment processor cannot perform a direct scatter operation because the location of each fragment on the grid is fixed at the time of the fragment's creation and cannot be altered by the programmer. However, a logical scatter operation may sometimes be recast or implemented with another gather step. A scatter implementation would first emit both an output value and an output address. An immediately following gather operation uses address comparisons to see whether the output value maps to the current output slot. In dedicated compute kernels, scatter can be performed by indexed writes. Gather Gather is the reverse of scatter. After scatter reorders elements according to a map, gather can restore the order of the elements according to the map scatter used. In dedicated compute kernels, gather may be performed by indexed reads. In other shaders, it is performed with texture-lookups. Sort The sort operation transforms an unordered set of elements into an ordered set of elements. The most common implementation on GPUs is using radix sort for integer and floating point data and coarse-grained merge sort and fine-grained sorting networks for general comparable data. Search The search operation allows the programmer to find a given element within the stream, or possibly find neighbors of a specified element. The GPU is not used to speed up the search for an individual element, but instead is used to run multiple searches in parallel. Mostly the search method used is binary search on sorted elements. 
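As an illustrative sketch of how these GPU methods look outside the legacy graphics APIs, the CUDA C++ fragment below (assumptions, not the article's code) expresses the earlier grid example as a "map" kernel with one thread per element, then applies reduce, sort, and gather using the Thrust library. do_some_hard_work is the same hypothetical placeholder as in the CPU example, and the grid is smaller than the article's 10,000 x 10,000 only to keep memory use modest.

#include <thrust/device_vector.h>
#include <thrust/device_ptr.h>
#include <thrust/reduce.h>
#include <thrust/sort.h>
#include <thrust/gather.h>
#include <cuda_runtime.h>
#include <cstdio>

__device__ float do_some_hard_work(float v) { return v * v + 1.0f; }  // hypothetical placeholder

// Map: each GPU thread handles one grid element, replacing the nested CPU loops.
__global__ void transform_grid(const float* in, float* out, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        out[y * width + x] = do_some_hard_work(in[y * width + x]);
}

int main() {
    const int width = 2048, height = 2048;
    thrust::device_vector<float> in(width * height, 1.0f);
    thrust::device_vector<float> out(width * height);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    transform_grid<<<grid, block>>>(thrust::raw_pointer_cast(in.data()),
                                    thrust::raw_pointer_cast(out.data()), width, height);
    cudaDeviceSynchronize();

    // Reduce: collapse the output stream to a single value (a sum here).
    float sum = thrust::reduce(out.begin(), out.end(), 0.0f);

    // Sort: Thrust dispatches to radix sort for numeric keys, as noted above.
    thrust::sort(out.begin(), out.end());

    // Gather: dst[i] = out[map[i]], reordering elements according to an index map.
    thrust::device_vector<int> index_map(3);
    index_map[0] = 2; index_map[1] = 0; index_map[2] = 1;
    thrust::device_vector<float> dst(3);
    thrust::gather(index_map.begin(), index_map.end(), out.begin(), dst.begin());

    printf("sum = %f\n", sum);
    return 0;
}

The reverse mapping (scatter, writing out[map[i]] = src[i]) is available as thrust::scatter, and stream filtering (compaction) corresponds to primitives such as thrust::copy_if.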
Data structures A variety of data structures can be represented on the GPU: Dense arrays Sparse matrices (sparse array) static or dynamic Adaptive structures (union type) Applications The following are some of the areas where GPUs have been used for general purpose computing: Automatic parallelization Physical based simulation and physics engines (usually based on Newtonian physics models) Conway's Game of Life, cloth simulation, fluid incompressible flow by solution of Euler equations (fluid dynamics) or Navier–Stokes equations Statistical physics Ising model Lattice gauge theory Segmentation 2D and 3D Level set methods CT reconstruction Fast Fourier transform GPU learning machine learning and data mining computations, e.g., with software BIDMach k-nearest neighbor algorithm Fuzzy logic Tone mapping Audio signal processing Audio and sound effects processing, to use a GPU for digital signal processing (DSP) Analog signal processing Speech processing Digital image processing Video processing Hardware accelerated video decoding and post-processing Motion compensation (mo comp) Inverse discrete cosine transform (iDCT) Variable-length decoding (VLD), Huffman coding Inverse quantization (IQ (not to be confused by Intelligence Quotient)) In-loop deblocking Bitstream processing (CAVLC/CABAC) using special purpose hardware for this task because this is a serial task not suitable for regular GPGPU computation Deinterlacing Spatial-temporal deinterlacing Noise reduction Edge enhancement Color correction Hardware accelerated video encoding and pre-processing Global illumination ray tracing, photon mapping, radiosity among others, subsurface scattering Geometric computing constructive solid geometry, distance fields, collision detection, transparency computation, shadow generation Scientific computing Monte Carlo simulation of light propagation Weather forecasting Climate research Molecular modeling on GPU Quantum mechanical physics Astrophysics Bioinformatics Computational finance Medical imaging Clinical decision support system (CDSS) Computer vision Digital signal processing / signal processing Control engineering Operations research Implementations of: the GPU Tabu Search algorithm solving the Resource Constrained Project Scheduling problem is freely available on GitHub; the GPU algorithm solving the Nurse Rerostering problem is freely available on GitHub. Neural networks Database operations Computational Fluid Dynamics especially using Lattice Boltzmann methods Cryptography and cryptanalysis Performance modeling: computationally intensive tasks on GPU Implementations of: MD6, Advanced Encryption Standard (AES), Data Encryption Standard (DES), RSA, elliptic curve cryptography (ECC) Password cracking Cryptocurrency transactions processing ("mining") (Bitcoin mining) Electronic design automation Antivirus software Intrusion detection Increase computing power for distributed computing projects like SETI@home, Einstein@home Bioinformatics GPGPU usage in Bioinformatics: Molecular dynamics † Expected speedups are highly dependent on system configuration. GPU performance compared against multi-core x86 CPU socket. GPU performance benchmarked on GPU supported features and may be a kernel to kernel performance comparison. For details on configuration used, view application website. Speedups as per Nvidia in-house testing or ISV's documentation. ‡ Q=Quadro GPU, T=Tesla GPU. Nvidia recommended GPUs for this application. Check with developer or ISV to obtain certification information. 
See also Graphics processing unit OpenCL OpenMP OpenACC OpenHMPP Fastra II Stream processing Mark Harris (programmer) Physics engine Advanced Simulation Library Physics processing unit (PPU) Havok (software) Physics, FX PhysX Close to Metal C++ AMP DirectCompute RenderScript Audio processing unit Larrabee (microarchitecture) Compute kernel AI accelerator Deep learning processor (DLP) References External links openhmpp.org New Open Standard for Many-Core OCLTools Open Source OpenCL Compiler and Linker GPGPU.org – General-Purpose Computation Using Graphics Hardware GPGPU Wiki SIGGRAPH 2005 GPGPU Course Notes IEEE VIS 2005 GPGPU Course Notes Nvidia Developer Zone AMD GPU Tools CPU vs. GPGPU What is GPU Computing? Tech Report article: "ATI stakes claims on physics, GPGPU ground" by Scott Wasson GPGPU Computing @ Duke Statistical Science GPGPU Programming in F# using the Microsoft Research Accelerator system Emerging technologies Graphics hardware Graphics cards Instruction processing Parallel computing Video game development
40464365
https://en.wikipedia.org/wiki/Captain%20Earth
Captain Earth
Captain Earth is a Japanese anime television series produced by Bones, directed by Takuya Igarashi and written by Yōji Enokido. It was broadcast for twenty-five episodes in Japan on MBS from April to September 2014. The series follows high-school student Daichi Manatsu, who starts working for the Globe organization to pilot a giant robot called the Earth Engine Impacter to protect the Earth from the invading alien force known as the "Kill-T-Gang", which intends to drain all the life force of mankind to empower their immortal existences. Plot High-school student Daichi Manatsu works for the Globe organization to pilot a giant robot called the Earth Engine Impacter to protect the Earth from the invading alien force known as the Kill-T-Gang, which intends to drain all the life force of mankind to empower their immortal existences. In order to aid Daichi, Globe starts gathering allies including Teppei Arashi, a Kill-T-Gang whose memories have been erased and who is trapped inside a human body; Hana Mutou, a mysterious girl connected to the ship Blume; and Akari Yomatsuri, a 17-year-old genius hacker. Together they form the Midsummer's Knights and fight the Kill-T-Gangs, who are in search of more of their allies. Characters Globe Midsummer's Knights Daichi is a 17-year-old high school student who lost his father years prior in a space travel accident and left his family's home on Tanegashima afterwards. When he sees a ringed rainbow formation on a television broadcast from the Tanegashima Space Center, he returns. His skill at a particular arcade game makes him the perfect pilot for the Earth Engine Impacter and the smaller Earth Engine Ordinary component robot. Daichi is able to summon a mysterious laser handgun known as the Livlaster Tanegashima, a powerful weapon that utilizes pure Orgone energy, which is essential for piloting the Earth Engine. Later he is assigned the title of "Captain Earth", the leader of the Midsummer's Knights. Daichi is uncomfortable with his new title and the responsibilities that come from being a "Captain" like his father. The Planetary Gears refer to Daichi as a "Neoteny." Daichi wears a white and red flight suit on missions, matching the Earth Engine's paint job. He is the first human to use a Livlaster. Teppei is one of Daichi's childhood friends. Daichi is in fact his only friend, as the boy was the only person to not fear his otherworldly abilities, such as being able to create a rainbow ring in his hand. Teppei rarely smiles at anyone and seemed happy that Daichi wasn't afraid of his strange powers. He is, in truth, the human form of the Type-3 Kill-T-Gang known as Albion and is called "Alaya" by other Kill-T-Gang members. The genes for his "Designer's Child" human body were provided by a man named Eiji Arashi, who was in stasis on the Tenkaido station before escaping. Teppei's Ego Block is eventually destroyed, leaving him a normal human. Though he loses the ability to become Albion, he gains his own Livlaster, which he uses to power up his own mecha, the prototype Nebula Engine Impacter. His Machine Goodfellow unit, the silver-colored Aramusha, is converted into the Nebula Engine Ordinary component robot. Like Daichi, Teppei is now referred to as a "Neoteny" by the Planetary Gears. Teppei wears a blue flight suit when on missions, matching the paint scheme on the Nebula Engine. Hana is a strange girl who appears to be 17 years old, though her true age is unknown. She was discovered in the basement of the Tanegashima Space Center enclosed in a sphere.
Hana was found with a Livlaster weapon of her own, but she is unable to summon or use it and it is kept in storage at Globe HQ. She is connected to a ship known as Blume, hidden somewhere on Tanegashima Island, and possesses the ability to instill Orgone energy from Blume into a Machine Goodfellow unit by singing a certain song. She is also often accompanied by a strange squirrel-like creature named Pitz that can communicate with her and predict Orgone energy events. She is in love with Daichi. Hana was created by the Planetary Gears as a living weapon capable of using a Livlaster, but she escaped to Earth to hide in stasis until Daichi found her years later. Hana wears a pink flight suit when on missions and later becomes the pilot of Globe's third mecha, the Flare Engine Impacter. Pitz is Hana's blue squirrel-like creature that can communicate with her and predict Orgone energy events. Also theorized to be the mysterious blue hair girl showing out from time to time. Akari is a 17-year-old genius hacker who styles herself as a magical girl of sorts. She is Nishikubo and Yomatsuri's daughter, and claims she dedicated herself to hacking while her parents were not with her due to work. She has allegedly hacked into international satellites even from the US. She operates under the handle "Code Papillon" when hacking. Akari is so skilled at computers, she has the ability to access and control every weapon of mass destruction on the planet, but she claims she can only attempt this once, as the world's governments would recognize and counter future attempts. Akari wears a yellow flight suit on missions. Tanegashima Base Nishikubo is the head of the Globe organization's Tanegashima Base. He previously worked with Daichi's father Taiyou Manatsu. He's Akari's father whom she rarely sees due to work. Nishikubo has a habit of going over the heads of his superiors if it means completing a mission successfully. Rita is the deputy head of the Globe organization's Tanegashima Base. The technology development manager of the Globe organisation, he was a former employee of Macbeth Enterprises. One of the operators at the Tanegashima Base. He has black skin. One of the operators at the Tanegashima Base. He has blond hair. Tenkaido Tenkaido is Globe's space station, where Kill-T-Gang attacks are monitored. The station also houses several thousand people in stasis as part of an evacuation plan should the Impacters ever fail. The head of the Tenkaido, she is Akari's mother and Tsutomu's ex-wife. One of the operators of the Tenkaido. She has blonde hair. One of the operators of the Tenkaido. She has brown hair. Planetary Gears The Planetary Gears are a group of alien beings, known as Kill-T-Gang, who feed on orgone energy originating from human libido. Their essences are contained in Ego Blocks, which are digitized forms of consciousness, stored on a ship stranded in the orbit of Uranus. Nine years before the start of the series, the Planetary Gears wiped out a research team stationed on the dark side of the moon, draining their libido and creating a giant, glowing crystal that covers most of the moon's surface. Kill-T-Gang are able to inhabit genetically engineered, artificial bodies known as Designer's Children, and can inhabit and pilot their true forms, giant mecha-like energy beings, through cockpit-like devices known as "Machine Goodfellow," which can also be converted into small mecha for Earth-based combat. 
Kill-T-Gang have access to special abilities known as "singularities" that differ by individual but have the common trait of sharing memories and communicating telepathically through kissing. Because the Kill-T-Gang's true forms absorb libido through proximity, humanity would be wiped out should even one make it to the Earth, necessitating the use of Impacters. The humanoid form of the Type-1 Kill-T-Gang robot. He is the leader of the invasion force he calls the "Planetary Gears," and sees the eternal lives of the Kill-T-Gang as their strength. To Amara, humanity is nothing more than a pure energy source to extend the Kill-T-Gang's powers. Nine years before the series, he faced Taiyou Manatsu and caused his death, and was also responsible for the large crystal on the moon's surface. After his and Moco's attacks are thwarted by Daichi and Teppei, Amara uses Puck to search for his dormant comrades and reawaken them to their true selves. Amara has a singularity ability that allows him to awaken a Kill-T-Gang with a kiss. Amara's blue Machine Goodfellow unit is known as Tenrousei (literally "Celestial Wolf Star"). The humanoid form of the Type-2 Kill-T-Gang robot. She is the first Kill-T-Gang encountered by Daichi and co-leads the Planetary Gears alongside Amara. After her and Amara's attacks are thwarted by Daichi and Teppei, she and Amara search for their dormant comrades and awaken them to their true selves. Moco is a skilled hacker, but not on the level of Akari, and her singularity ability allows her to transfer memories and knowledge through kissing. Moco's pink Machine Goodfellow unit is known as Moukou Usagi (literally "Assault Rabbit"). The humanoid form of the Type-8 Kill-T-Gang robot. It takes the form of a childish-looking girl wearing a large hat. Setuna is accompanied by a pink squirrel-like being of the same species as Pitz named Lappa and has the power to siphon Orgone Energy from nearby beings by singing. She is the true leader of the Planetary Gears. The humanoid form of the Type-6 Kill-T-Gang robot. He takes the form of a dignified young man in a long cloak. He was a quiet but talented individual who worked as a dealer at a casino; having his talents exploited by others, he felt he had nothing of his own before being reawakened. His singularity ability gives him the power to disrupt and disable any electronics in a wide radius. Zin's red Machine Goodfellow unit is named Jingaikyou and its mecha configuration is equipped with a large fan for flight. The humanoid form of the Type-5 Kill-T-Gang robot. It takes the form of a young woman in a long flowing dress. Ai was a popular teen idol who became insecure about her career before her reawakening. Ai's yellow Machine Goodfellow unit is known as Hebihanabi (literally "Fire Flower") and is a heavy artillery unit equipped with large shoulder cannons. The humanoid form of the Type-4 Kill-T-Gang robot. It takes the form of a young woman in a dress that shows off her legs. She was a fiercely competitive and unmatched biker obsessed with speed, a trait that carries over even after her reawakening. Lin's turquoise Machine Goodfellow unit, Ningyohime (literally "Mermaid Princess"), is a speedy watercraft equipped with combat knives. The humanoid form of the Type-7 Kill-T-Gang robot. It takes the form of a young man who appears to be trained in martial arts.
He was first confined and later sold to the Asanoda yakuza family after Kumiko, the daughter and heir, chose him, and he serves as an illegal wrestler for the family, earning large funds through bets and making a name for himself. During a match against Amara, he has vague memories of his previous encounters with him, causing him confusion, and he escapes before Moco can awaken him. Eventually it is revealed that every member of the Asanoda family was killed in a bombing incident, leading him to revive all the victims with his singularity. However, since they are not fully revived, his power fades and they all die again, including Kumiko; he joins the Planetary Gears out of grief. Baku's green Machine Goodfellow unit, Bakuretsujyu, is a close-combat unit equipped with brass knuckle weapons and stretchable arms and legs. Macbeth Enterprises Macbeth Enterprises is a greatly successful conglomerate and one of the major stakeholders of the Planetary Gears; it designed the Machine Goodfellows and the Designer Child program, and assists the aliens in different ways. After the Kanda Incident, a major scandal that involved several government agencies discovering the illegal genetic modification of children, and the apparent suicide of its former CEO, the company is under the management of Masaki Kube, a member of the company's founding family. The current CEO of Macbeth Enterprises. He plans to exploit the use of Orgone Energy, the Designer Children, the Kill-T-Gangs and the Kivotos Plan to his own ends in order to rule over humanity, and holds his secretary Hitomi in high regard and as a possible love interest. Despite his knowledge of the Kill-T-Gangs, he seems to be completely oblivious to the fact that Amara and Moco are the aliens themselves, and believes them to be submissive Designer Children whom he holds in high esteem. After discovering that Puck has not been completely loyal to him, he confronts the computer and threatens to shut it down, but the emergency interrupter is ineffective; Kube is subsequently knocked unconscious with gas and has his body snatched by Puck through a machine that transfers its consciousness. Kube's personal secretary. She is the closest to Kube and he shares many of his secrets solely with her, as he holds her in high regard. She does her best to ensure the company and its employees are working at their best, and is very strict. Hitomi has feelings for Kube which are seemingly reciprocated. A mysterious computer hidden in a special room within Macbeth Enterprises' main office, with a highly advanced AI that allows him to engage in normal conversations, usually commenting on Kube's love affair. He is also known as P.A.C. Although apparently submissive towards Kube, he is secretly aligned with the Planetary Gears, taking orders from Amara and Moco and assisting them as best he can. He eventually lays a trap for Kube, using gas to make him lose consciousness and then using a special consciousness-transferring machine that previously belonged to Mao Marimura in order to take possession of Kube's body. He tends to use the catchphrase "Puck does not lie". Other characters Daichi's father, who died in a suicidal attack nine years before the events of the series during the Kill-T-Gang's first attempt to invade the Earth. Daichi was told that he had died in an accident. Taiyou was a "Captain," a title that his son eventually inherits. Brother of Taiyou Manatsu, Daichi's uncle and legal guardian. Release The anime series is directed by Takuya Igarashi and produced by Bones.
Igarashi made sure the title did not sound like a made-up word when revealing it. Through it, he wants the viewers to imagine what it would be like and create a different impression when watching the show. Unlike his previous work, Star Driver, Captain Earth focused less on high schools and more on the relationship between human characters who pilot mechas. Something the team was aiming for with Captain Earth was a good-looking launch sequence, as he believes "robots and rockets are deeply imbued with childhood dreams and that sort of giddy excitement in boys' hearts." The series was simulcast by Crunchyroll on its website. It premiered in Japan on April 5, 2014, on MBS, and at later dates on Tokyo MX, TVA, BS11 and MBC. It was broadcast for twenty-five episodes. For the first thirteen episodes, the opening theme is performed by flumpool, and the ending theme is performed by Ai Kayano. Kayano also performed the song as her character Hana Mutou, and this version was included in the first episode. From episode fourteen onwards, the opening theme is "TOKYO Dreamer" by Nico Touches the Walls and the ending theme is "The Glory Days" by Tia. Satoshi Ishino is adapting Fumi Minato's original character designs for animation, and he is also the chief animation director. A Star Driver veteran, Shigeto Koyama, designed the Earth Engine and other Engine Series mecha, while Takayuki Yanase handled the Machine Goodfellow designs and other mecha. Shinji Aramaki and Takeshi Takakura are the other mechanical designers. Masaki Asai and Takeshi Yoshioka designed the enemy Kill-T-Gang, and the artist okama is contributing concept designs. Tsuyoshi Kusano is the graphic designer, and Masatsugu Saitō is credited for design works. The series was released on DVD and Blu-ray, with the first volume published on July 18, 2014. The anime has been licensed by Sentai Filmworks for digital and home video release. Episode list Home video Manga A manga adaptation illustrated by Hiroshi Nakanishi was serialized in Shogakukan's Weekly Shōnen Sunday magazine from April 19 to October 8, 2014, and later on the Club Sunday web platform from October 24, 2014, to April 17, 2015. Shogakukan collected its chapters in four tankōbon volumes, released from August 18, 2014, to May 18, 2015. Volume list Video game A visual novel titled Captain Earth: Mind Labyrinth was released on February 26, 2015, for the PlayStation Vita. Reception Andy Hanley from UK Anime Network called it "Eureka Seven meets Star Driver", and noted that this might grab the attention of fans of the prior works. Critical reception to the first episode was mixed among the Anime News Network staff, with several comments focused on the pacing but mostly praise for the animation. References External links 2015 video games Anime with original screenplays Bones (studio) Mainichi Broadcasting System original programming Sentai Filmworks Shogakukan manga Shōnen manga Super robot anime and manga Tokyo MX original programming PlayStation Vita games PlayStation Vita-only games Japan-exclusive video games Video games developed in Japan
5116052
https://en.wikipedia.org/wiki/Open%20Source%20Geospatial%20Foundation
Open Source Geospatial Foundation
The Open Source Geospatial Foundation (OSGeo) is a non-profit non-governmental organization whose mission is to support and promote the collaborative development of open geospatial technologies and data. The foundation was formed in February 2006 to provide financial, organizational and legal support to the broader free and open-source geospatial community. It also serves as an independent legal entity to which community members can contribute code, funding and other resources. OSGeo draws governance inspiration from several aspects of the Apache Foundation, including a membership composed of individuals drawn from foundation projects who are selected for membership status based on their active contribution to foundation projects and governance. The foundation pursues goals beyond software development, such as promoting more open access to government-produced geospatial data and completely free geodata, such as that created and maintained by the OpenStreetMap project. Education and training are also addressed. Various committees within the foundation work on implementing strategies. Governance The OSGeo Foundation is community driven and has an organizational structure consisting of elected members and nine directors, including the president. Software projects are required to have their own governance structure (see the foundation's FAQ). The OSGeo community collaborates via a wiki, mailing lists and IRC. Projects OSGeo projects include: Geospatial Libraries FDO – API (C++, .Net) between GIS applications and sources; for manipulating, defining and analyzing geospatial data. GDAL/OGR – Library between GIS applications and sources; for reading and writing raster geospatial data formats (GDAL) and simple features vector data (OGR). GeoTools – Open source GIS toolkit (Java); to enable the creation of interactive geographic visualization clients. GEOS – A C++ port of the Java Topology Suite (JTS), a geometry model. MetaCRS – Projections and coordinate system technologies, including PROJ. Orfeo ToolBox (OTB) – Open source tools to process satellite images and extract information. OSSIM – Extensive geospatial image processing libraries with support for satellite and aerial sensors and common image formats. PostGIS – Spatial extensions for the PostgreSQL database, enabling geospatial queries. Desktop Applications QGIS – Desktop GIS for data viewing, editing and analysis — Windows, Mac and Linux. GRASS GIS – Extensible GIS for image processing and analysing raster, topological vector and graphic data. OSSIM – Libraries and applications used to process imagery, maps, terrain, and vector data. Marble – Virtual globe and world atlas. gvSIG – Desktop GIS for data capturing, storing, handling, analysing and deploying. Includes map editing. Web Mapping Server MapServer – Fast web mapping engine for publishing spatial data and services on the web; written in C. Geomajas – Development software for web-based and cloud-based GIS applications. GeoServer – Allows users to share and edit geospatial data. Written in Java using GeoTools. deegree – Java framework PyWPS – Implementation of the OGC Web Processing Service standard, using Python Client GeoMoose – JavaScript framework for displaying distributed GIS data. Mapbender – Framework to display, overlay, edit and manage distributed Web Map Services using PHP and JavaScript. MapGuide Open Source – Platform for developing and deploying web mapping applications and geospatial web services. Windows-based, native file format.
MapFish – Framework for building rich web-mapping applications based on the Pylons Python web framework. OpenLayers – AJAX library (API) for accessing geographic data layers of all kinds. Specification Tile Map Service (TMS) – a specification for tiled web maps. Metadata Catalog GeoNetwork opensource pycsw – Lightweight metadata publishing and discovery using Python. Content Management Systems GeoNode Outreach Projects Geo for All – Network of educators promoting open-source geospatial around the world. OSGeoLive – Bootable DVD, USB thumb drive or virtual machine containing all OSGeo software. OSGeo4W – A binary distribution of a broad set of open-source geospatial software for Windows. Retired Projects Community MapBuilder Events OSGeo runs an annual international conference called FOSS4G – Free and Open Source Software for Geospatial. Held since 2006, the event has drawn over 1,100 attendees (Boston, 2017), and attendance has tended to grow from year to year. It is the main meeting place and educational outreach opportunity for OSGeo members, supporters and newcomers to share and learn from one another in presentations, hands-on workshops and a conference exhibition. The FOSS4G ribbon, part of every FOSS4G event logo, symbolizes the flow of ideas, innovation, and sharing within the open-source geospatial community. The event's history dates back to a face-to-face meeting of its three original founders (Venkatesh Raghavan, Markus Neteler, and Jeff McKenna) in Bangkok, Thailand, in 2004, where they planned a new annual event, named "FOSS4G", for the whole open-source geospatial community; the event would go on to help shape the history of the geospatial industry. There are also many regional and local events following the FOSS4G philosophy. Community The OSGeo community is composed of participants from all over the world. At last count, there were 35,176 unique subscribers to the more than 384 OSGeo mailing lists, and OSGeo projects were built upon over 12.7 million lines of code contributed by 657 code submitters, including 301 who had contributed within the previous 12 months. Sol Katz Award The Sol Katz Award for Geospatial Free and Open Source Software (GFOSS) is awarded annually by OSGeo to individuals who have demonstrated leadership in the GFOSS community. Recipients of the award have contributed significantly through their activities to advance open source ideals in the geospatial realm. See also List of GIS software Comparison of GIS software Free Software GIS Live DVD Open Source Open Geospatial Consortium (OGC) – a standards organization OpenStreetMap References External links OSGeo Wiki OSGeo Linux VM Open Source GIS History Recordings of FOSS4G conferences in the AV-Portal of German National Library of Science and Technology Open data Free GIS software Free and open-source software organizations Geographic information systems organizations Organizations established in 2006
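As a small illustration of how one of the libraries listed above is typically used, GDAL's Python bindings can open a raster file and expose its pixels as a NumPy array. This is a generic sketch rather than OSGeo documentation, and the file name is hypothetical:

```python
# Minimal sketch of reading a raster with GDAL's Python bindings;
# "elevation.tif" is a hypothetical file in any GDAL-supported format.
from osgeo import gdal

gdal.UseExceptions()                   # raise Python exceptions on errors
dataset = gdal.Open("elevation.tif")   # open the raster dataset
band = dataset.GetRasterBand(1)        # bands are 1-indexed in GDAL
values = band.ReadAsArray()            # NumPy array of pixel values

print("Size:", dataset.RasterXSize, "x", dataset.RasterYSize)
print("Projection:", dataset.GetProjection())
print("Mean value:", float(values.mean()))
```

Equivalent bindings exist for the vector (OGR) side of the library, and many of the desktop and server projects above build on the same API.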
127511
https://en.wikipedia.org/wiki/DNA%20sequencer
DNA sequencer
A DNA sequencer is a scientific instrument used to automate the DNA sequencing process. Given a sample of DNA, a DNA sequencer is used to determine the order of the four bases: G (guanine), C (cytosine), A (adenine) and T (thymine). This is then reported as a text string, called a read. Some DNA sequencers can also be considered optical instruments, as they analyze light signals originating from fluorochromes attached to nucleotides. The first automated DNA sequencer, invented by Lloyd M. Smith, was introduced by Applied Biosystems in 1987. It used the Sanger sequencing method, a technology which formed the basis of the "first generation" of DNA sequencers and enabled the completion of the human genome project in 2001. These first-generation DNA sequencers are essentially automated electrophoresis systems that detect the migration of labelled DNA fragments. Therefore, these sequencers can also be used in the genotyping of genetic markers where only the length of a DNA fragment(s) needs to be determined (e.g. microsatellites, AFLPs). The Human Genome Project spurred the development of cheaper, high-throughput and more accurate platforms, known as Next Generation Sequencers (NGS), to sequence the human genome. These include the 454, SOLiD and Illumina DNA sequencing platforms. Next-generation sequencing machines have increased the rate of DNA sequencing substantially compared with the previous Sanger methods. DNA samples can be prepared automatically in as little as 90 minutes, while a human genome can be sequenced at 15-fold coverage in a matter of days. More recent, third-generation DNA sequencers such as PacBio SMRT and Oxford Nanopore measure the addition of nucleotides to a single DNA molecule in real time. Both technologies offer the possibility of sequencing long molecules, compared to short-read technologies such as Illumina SBS or MGI Tech DNBSEQ. Because of limitations in DNA sequencer technology, the reads of many of these technologies are short compared to the length of a genome, so the reads must be assembled into longer contigs. The data may also contain errors, caused by limitations in the DNA sequencing technique or by errors during PCR amplification. DNA sequencer manufacturers use a number of different methods to detect which DNA bases are present. The specific protocols applied in different sequencing platforms have an impact on the final data that is generated. Therefore, comparing data quality and cost across different technologies can be a daunting task. Each manufacturer provides its own way of reporting sequencing errors and scores; however, errors and scores from different platforms cannot always be compared directly. Since these systems rely on different DNA sequencing approaches, choosing the best DNA sequencer and method will typically depend on the experiment objectives and available budget. History The first DNA sequencing methods were developed by Gilbert (1973) and Sanger (1975). Gilbert introduced a sequencing method based on chemical modification of DNA followed by cleavage at specific bases, whereas Sanger's technique is based on dideoxynucleotide chain termination. The Sanger method became popular due to its increased efficiency and low radioactivity. The first automated DNA sequencer was the AB370A, introduced in 1986 by Applied Biosystems. The AB370A was able to sequence 96 samples simultaneously, at 500 kilobases per day, with read lengths of up to 600 bases.
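Whatever the platform or generation, reads and their per-base quality scores are normally exchanged as plain-text FASTQ records, in which each base's confidence is encoded as a Phred quality score Q = -10 * log10(P), where P is the estimated probability that the base call is wrong. The Python sketch below is a generic, platform-neutral illustration rather than any vendor's software; the file name is hypothetical, and the common Sanger/Illumina 1.8+ ASCII offset of 33 is assumed:

```python
def phred_error_probability(qual_char, offset=33):
    """Convert one FASTQ quality character into the probability that the
    corresponding base call is wrong: Q = ASCII code - offset, P = 10**(-Q/10)."""
    q = ord(qual_char) - offset
    return 10 ** (-q / 10.0)

def read_fastq(path):
    """Yield (identifier, sequence, per-base error probabilities) from a FASTQ file."""
    with open(path) as handle:
        while True:
            header = handle.readline().rstrip()
            if not header:
                break                     # end of file
            sequence = handle.readline().rstrip()
            handle.readline()             # '+' separator line, ignored
            qualities = handle.readline().rstrip()
            yield header.lstrip("@"), sequence, [phred_error_probability(c) for c in qualities]

# Hypothetical usage: print read length and mean error probability for each read.
# for name, seq, errs in read_fastq("run01.fastq"):
#     print(name, len(seq), sum(errs) / len(errs))
```

Under this encoding, a quality of Q30, for example, corresponds to a 1-in-1,000 chance that the base is wrong.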
The AB370A marked the beginning of the "first generation" of DNA sequencers, which implemented Sanger sequencing, fluorescent dideoxy nucleotides and polyacrylamide gel sandwiched between glass plates (slab gels). The next major advance was the release in 1995 of the AB310, which utilized a linear polymer in a capillary in place of the slab gel for DNA strand separation by electrophoresis. These techniques formed the basis for the completion of the human genome project in 2001. The human genome project spurred the development of cheaper, high-throughput and more accurate platforms known as Next Generation Sequencers (NGS). In 2005, 454 Life Sciences released the 454 sequencer, followed in 2006 by the Solexa Genome Analyzer and by SOLiD (Supported Oligo Ligation Detection) from Agencourt. Applied Biosystems acquired Agencourt in 2006, and in 2007, Roche bought 454 Life Sciences, while Illumina purchased Solexa. Ion Torrent entered the market in 2010 and was acquired by Life Technologies (now Thermo Fisher Scientific), and BGI started manufacturing sequencers in China under its MGI arm after acquiring Complete Genomics. These are still the most common NGS systems due to their competitive cost, accuracy, and performance. More recently, a third generation of DNA sequencers was introduced. The sequencing methods applied by these sequencers do not require DNA amplification (polymerase chain reaction, PCR), which speeds up sample preparation before sequencing and reduces errors. In addition, sequencing data is collected in real time from the reactions caused by the addition of nucleotides to the complementary strand. Two companies introduced different approaches in their third-generation sequencers. Pacific Biosciences sequencers utilize a method called single-molecule real-time (SMRT) sequencing, in which sequencing data is produced by light (captured by a camera) emitted when a fluorescently labelled nucleotide is added to the complementary strand by the polymerase. Oxford Nanopore Technologies is another company developing third-generation sequencers, using electronic systems based on nanopore sensing technologies. Manufacturers of DNA sequencers DNA sequencers have been developed, manufactured, and sold by the following companies, among others. Roche The 454 DNA sequencer was the first next-generation sequencer to become commercially successful. It was developed by 454 Life Sciences and purchased by Roche in 2007. 454 utilizes the detection of pyrophosphate released by the DNA polymerase reaction when adding a nucleotide to the template strand. Roche currently manufactures two systems based on its pyrosequencing technology: the GS FLX+ and the GS Junior System. The GS FLX+ System promises read lengths of approximately 1000 base pairs, while the GS Junior System promises 400 base pair reads. A predecessor to the GS FLX+, the 454 GS FLX Titanium system was released in 2008, achieving an output of 0.7G of data per run, with 99.9% accuracy after quality filtering and a read length of up to 700 bp. In 2009, Roche launched the GS Junior, a benchtop version of the 454 sequencer with read lengths up to 400 bp and simplified library preparation and data processing. One of the advantages of 454 systems is their running speed. Manpower can be reduced with automation of library preparation and semi-automation of emulsion PCR. A disadvantage of the 454 system is that it is prone to errors when estimating the number of bases in a long string of identical nucleotides.
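Such runs of identical bases are known as homopolymers and are easy to characterise computationally. The short Python sketch below is a generic illustration rather than part of Roche's analysis software; it scans a read for runs of identical bases and reports those at least six bases long:

```python
def homopolymer_runs(read, min_length=6):
    """Return (start, base, length) for every run of identical bases in `read`
    that is at least `min_length` bases long."""
    runs = []
    start = 0
    for i in range(1, len(read) + 1):
        # A run ends when the next base differs or the read ends.
        if i == len(read) or read[i] != read[start]:
            if i - start >= min_length:
                runs.append((start, read[start], i - start))
            start = i
    return runs

# Hypothetical read containing an 8-base poly-A stretch and a 6-base poly-G stretch.
print(homopolymer_runs("ACGTTTAAAAAAAACGGGGGGTC"))
# -> [(6, 'A', 8), (15, 'G', 6)]
```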
Such miscounts are referred to as homopolymer errors and occur when there are six or more identical bases in a row. Another disadvantage is that the price of reagents is relatively high compared with other next-generation sequencers. In 2013, Roche announced that it would shut down development of 454 technology and phase out 454 machines completely in 2016, once the technology had become uncompetitive. Roche produces a number of software tools which are optimised for the analysis of 454 sequencing data. For example, GS Run Processor converts raw images generated by a sequencing run into intensity values. The process consists of two main steps: image processing and signal processing. The software also applies normalization, signal correction, base-calling and quality scores for individual reads. The software outputs data in Standard Flowgram Format (SFF) files to be used in data analysis applications (GS De Novo Assembler, GS Reference Mapper or GS Amplicon Variant Analyzer). GS De Novo Assembler is a tool for de novo assembly of whole genomes up to 3 GB in size from shotgun reads alone or combined with paired-end data generated by 454 sequencers. It also supports de novo assembly of transcripts (including analysis) and isoform variant detection. GS Reference Mapper maps short reads to a reference genome, generating a consensus sequence. The software is able to generate output files for assessment, indicating insertions, deletions and SNPs, and it can handle large and complex genomes of any size. Finally, the GS Amplicon Variant Analyzer aligns reads from amplicon samples against a reference, identifying variants (linked or not) and their frequencies. It can also be used to detect unknown and low-frequency variants, and it includes graphical tools for the analysis of alignments. Illumina Illumina produces a number of next-generation sequencing machines using technology acquired from Manteia Predictive Medicine and developed by Solexa, including the HiSeq, Genome Analyzer IIx, MiSeq and the HiScanSQ, which can also process microarrays. The technology leading to these DNA sequencers was first released by Solexa in 2006 as the Genome Analyzer. Illumina purchased Solexa in 2007. The Genome Analyzer uses a sequencing-by-synthesis method. The first model produced 1G per run. During 2009 the output was increased from 20G per run in August to 50G per run in December. In 2010, Illumina released the HiSeq 2000, with an output of 200G and then 600G per run, which would take 8 days. At its release, the HiSeq 2000 provided one of the cheapest sequencing platforms, at $0.02 per million bases as estimated by the Beijing Genomics Institute. In 2011, Illumina released a benchtop sequencer called the MiSeq. At its release, the MiSeq could generate 1.5G per run with paired-end 150 bp reads. A sequencing run can be performed in 10 hours when using automated DNA sample preparation. The Illumina HiSeq uses two software tools to calculate the number and position of DNA clusters to assess sequencing quality: the HiSeq control system and the real-time analyzer. These methods help to assess whether nearby clusters are interfering with each other. Life Technologies Life Technologies (now Thermo Fisher Scientific) produces DNA sequencers under the Applied Biosystems and Ion Torrent brands. Applied Biosystems makes the SOLiD next-generation sequencing platform, and Sanger-based DNA sequencers such as the 3500 Genetic Analyzer.
Under the Ion Torrent brand, Applied Biosystems produces four next-generation sequencers: the Ion PGM System, Ion Proton System, Ion S5 and Ion S5xl systems. The company was also reported to be developing a new capillary DNA sequencer, called SeqStudio, for release in early 2018. The SOLiD system was acquired by Applied Biosystems in 2006. SOLiD applies sequencing by ligation and dual-base encoding. The first SOLiD system was launched in 2007, generating read lengths of 35 bp and 3G of data per run. After five upgrades, the 5500xl sequencing system was released in 2010, considerably increasing read length to 85 bp, improving accuracy up to 99.99% and producing 30G per 7-day run. The limited read length of the SOLiD has remained a significant shortcoming and has to some extent limited its use to experiments where read length is less vital, such as resequencing and transcriptome analysis and, more recently, ChIP-Seq and methylation experiments. The DNA sample preparation time for SOLiD systems has become much quicker with the automation of sequencing library preparation, such as the Tecan system. The colour-space data produced by the SOLiD platform can be decoded into DNA bases for further analysis; however, software that considers the original colour-space information can give more accurate results. Life Technologies has released BioScope, a data analysis package for resequencing, ChIP-Seq and transcriptome analysis. It uses the MaxMapper algorithm to map the colour-space reads. Beckman Coulter Beckman Coulter (now Danaher) previously manufactured chain-termination and capillary electrophoresis-based DNA sequencers under the model name CEQ, including the CEQ 8000. The company now produces the GeXP Genetic Analysis System, which uses dye terminator sequencing. This method uses a thermocycler in much the same way as PCR to denature, anneal, and extend DNA fragments, amplifying the sequenced fragments. Pacific Biosciences Pacific Biosciences produces the PacBio RS and Sequel sequencing systems using a single-molecule real-time sequencing, or SMRT, method. This system can produce read lengths of multiple thousands of base pairs. Higher raw read error rates are corrected using either circular consensus (where the same strand is read over and over again) or optimized assembly strategies. Scientists have reported 99.9999% accuracy with these strategies. The Sequel system was launched in 2015 with an increased capacity and a lower price. Oxford Nanopore Oxford Nanopore Technologies' MinION sequencer applies evolving nanopore sequencing technology to nucleic acid analysis. The device is four inches long and is powered from a USB port. The MinION decodes DNA directly as the molecule is drawn, at a rate of 450 bases per second, through a nanopore suspended in a membrane; changes in electric current indicate which base is present. Initially, the device was 60 to 85 percent accurate, compared with 99.9 percent in conventional machines. Even inaccurate results may prove useful, because the device produces long read lengths. In early 2021, researchers from the University of British Columbia used special molecular tags and were able to reduce the five-to-15 per cent error rate of the device to less than 0.005 per cent, even when sequencing many long stretches of DNA at a time. There are two more product iterations based on the MinION: the first is the GridION, a slightly larger sequencer that processes up to five MinION flow cells at once.
The second is the PromethION, which uses as many as 100,000 pores in parallel and is more suitable for high-volume sequencing. MGI MGI produces high-throughput sequencers for scientific research and clinical applications, such as the DNBSEQ-G50, DNBSEQ-G400, and DNBSEQ-T7, based on its proprietary DNBSEQ technology. The approach combines DNA nanoball sequencing and combinatorial probe anchor synthesis technologies, in which DNA nanoballs (DNBs) are loaded onto a patterned array chip via the fluidic system, after which a sequencing primer is added to the adaptor region of the DNBs for hybridization. The DNBSEQ-T7 can generate short reads at very large scale, up to 60 human genomes per day. The DNBSEQ-T7 was used to generate 150 bp paired-end reads at 30X coverage in SARS-CoV-2/COVID-19 research, to identify genetic variants associated with predisposition to severe COVID-19 illness. Using a novel technique, researchers from the China National GeneBank sequenced PCR-free libraries on MGI's PCR-free DNBSEQ arrays to obtain, for the first time, true PCR-free whole-genome sequencing. The MGISEQ-2000 was used in single-cell RNA sequencing to study the underlying pathogenesis of, and recovery from, COVID-19 in patients, as published in Nature Medicine. Comparison As of December 2019, Illumina was the dominant player in DNA sequencing technology, followed by PacBio, MGI and Oxford Nanopore. References DNA sequencing Genetics techniques Molecular biology laboratory equipment Scientific instruments
21210268
https://en.wikipedia.org/wiki/IBM%20Websphere%20Business%20Events
IBM Websphere Business Events
WebSphere Business Events is IBM's implementation of an event-processing engine. Event processing involves altering an organization's existing server software so that it emits events (small messages) whenever something notable occurs. Event-processing software such as WebSphere Business Events can monitor these events and watch for certain patterns of interest. This is useful for the prevention of credit card fraud, or for giving executives a high-level view of what is going on in their company (e.g. when the share price drops for an extended period of time). For a more detailed description see complex event processing. Components The software has components to do the following: Configure sources for events (for example via HTTP requests, from databases, or from other software) Allow non-IT personnel to define patterns of events (and the timings between these) to trigger actions; these rules use natural-language constructs. Allow non-IT personnel to define actions to occur when the defined patterns occur. A runtime which monitors events from the configured sources, triggering actions when the defined patterns of events are matched. How this all works together For more details: IBM's page on WebSphere Business Events History WebSphere Business Events was released in June 2008 and is based upon software from IBM's acquisition of Aptsoft. References IBM Documentation site Information on event processing mentioning WBE Information about the acquisition Information about the company before acquisition Business Events Middleware
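The pattern-matching idea at the heart of such engines can be illustrated independently of IBM's product (none of WebSphere Business Events' actual APIs are shown here). The Python sketch below implements one classic rule in the spirit of the fraud example above: raise an alert if the same card produces five or more authorization events within a 60-second window. The event fields and thresholds are assumptions chosen for illustration only:

```python
from collections import defaultdict, deque

class VelocityRule:
    """Flag an entity (e.g. a card number) that emits too many events inside
    a sliding time window. Thresholds are illustrative, not product defaults."""

    def __init__(self, max_events=5, window_seconds=60):
        self.max_events = max_events
        self.window = window_seconds
        self.history = defaultdict(deque)   # entity id -> recent event timestamps

    def observe(self, entity_id, timestamp):
        """Record one event; return True if the pattern of interest is matched."""
        events = self.history[entity_id]
        events.append(timestamp)
        # Drop events that have fallen out of the sliding window.
        while events and timestamp - events[0] > self.window:
            events.popleft()
        return len(events) >= self.max_events

# Hypothetical stream of (card, time-in-seconds) authorization events.
rule = VelocityRule()
stream = [("card-42", t) for t in (0, 5, 12, 20, 31, 44)] + [("card-7", 50)]
for card, t in stream:
    if rule.observe(card, t):
        print(f"ALERT: {card} exceeded {rule.max_events} authorizations in {rule.window}s at t={t}")
```

In WebSphere Business Events itself, equivalent rules would be authored through the product's tooling rather than written as code; the sketch only shows the underlying sliding-window logic.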
6172005
https://en.wikipedia.org/wiki/Computer%20bridge
Computer bridge
Computer bridge is the playing of the game contract bridge using computer software. After years of limited progress, since around the end of the 20th century the field of computer bridge has made major advances. In 1996 the American Contract Bridge League (ACBL) established an official World Computer-Bridge Championship, to be held annually along with a major bridge event. The first championship took place in 1997 at the North American Bridge Championships in Albuquerque. Since 1999 the event has been conducted as a joint activity of the American Contract Bridge League and the World Bridge Federation. Alvin Levy, ACBL Board member, initiated this championship and has coordinated the event annually since its inception. The event history, articles and publications, analysis, and playing records can be found at the official website. World Computer-Bridge Championship The World Computer-Bridge Championship is typically played as a round robin followed by a knock-out between the top four contestants. Winners of the annual event are: 1997 Bridge Baron 1998 GIB 1999 GIB 2000 Meadowlark Bridge 2001 Jack 2002 Jack 2003 Jack 2004 Jack 2005 Wbridge5 2006 Jack 2007 Wbridge5 2008 Wbridge5 2009 Jack 2010 Jack 2011 Shark Bridge 2012 Jack 2013 Jack 2014 Shark Bridge 2015 Jack 2016 Wbridge5 2017 Wbridge5 2018 Wbridge5 2019 Micro Bridge 2020 championship has been cancelled 2021 championship has been cancelled Computers versus humans In Zia Mahmood's book, Bridge, My Way (1992), Zia offered a £1 million bet that no four-person team of his choosing would be beaten by a computer. A few years later the bridge program GIB (which can stand for either "Ginsberg’s Intelligent Bridgeplayer" or "Goren In a Box"), brainchild of American computer scientist Matthew Ginsberg, proved capable of expert declarer plays like winkle squeezes in play tests. In 1996, Zia withdrew his bet. Two years later, GIB became the world champion in computer bridge, and also had a 12th place score (11210) in declarer play compared to 34 of the top humans in the 1998 Par Contest (including Zia Mahmood). However, such a par contest measures technical bridge analysis skills only, and in 1999 Zia beat various computer programs, including GIB, in an individual round robin match. Further progress in the field of computer bridge has resulted in stronger bridge playing programs, including Jack and Wbridge5. These programs have been ranked highly in national bridge rankings. A series of articles published in 2005 and 2006 in the Dutch bridge magazine IMP describes matches between five-time computer bridge world champion Jack and seven top Dutch pairs including a Bermuda Bowl winner and two reigning European champions. A total of 196 boards were played. Jack defeated three out of the seven pairs (including the European champions). Overall, the program lost by a small margin (359 versus 385 IMPs). In 2009, Phillip Martin, an expert player, began a four-year project in which he played against the champion bridge program, Jack. He played one hand at one table, with Jack playing the other three; at another table, Jack played the same cards at all four seats, producing a comparison result. He posted his results and analysis in a blog he titled The Gargoyle Chronicles. The program was no match for Martin, who won every contest by large margins. Cardplay algorithms Bridge poses challenges to its players that are different from board games such as chess and go. Most notably, bridge is a stochastic game of incomplete information. 
At the start of a deal, the information available to each player is limited to just his/her own cards. During the bidding and the subsequent play, more information becomes available via the bidding of the other three players at the table, the cards of the partner of the declarer (the dummy) being put open on the table, and the cards played at each trick. However, it is only at the end of the play that full information is obtained. Today's top-level bridge programs deal with this probabilistic nature by generating many samples representing the unknown hands. Each sample is generated at random, but constrained to be compatible with all information available so far from the bidding and the play. Next, the results of different lines of play are tested against optimal defense for each sample. This testing is done using a so-called "double-dummy solver" that uses extensive search algorithms to determine the optimum line of play for both parties. The line of play that generates the best score averaged over all samples is selected as the optimal play. Efficient double-dummy solvers are key to successful bridge-playing programs. Also, as the amount of computation increases with sample size, techniques such as importance sampling are used to generate sets of samples that are of minimum size but still representative. Comparison to other strategy games While bridge is a game of incomplete information, a double-dummy solver analyses a simplified version of the game where there is perfect information; the bidding is ignored, the contract (trump suit and declarer) is given, and all players are assumed to know all cards from the very start. The solver can therefore use many of the game tree search techniques typically used in solving two-player perfect-information win/lose/draw games such as chess, go and reversi. However, there are some significant differences. Although double-dummy bridge is in practice a competition between two generalised players, each "player" controls two hands and the cards must be played in a correct order that reflects four players. (It makes a difference which of the four hands wins a trick and must lead the next trick.) Double-dummy bridge is not simply win/lose/draw and not exactly zero-sum, but constant-sum, since two playing sides compete for 13 tricks. It is trivial to transform a constant-sum game into a zero-sum game. Moreover, the goal (and the risk management strategy) in general contract bridge depends not only on the contract but also on the form of tournament. However, since the double-dummy version is deterministic, the goal is simple: one can without loss of generality aim to maximize the number of tricks taken. Bridge is incrementally scored; each played trick contributes irreversibly to the final "score" in terms of tricks won or lost. This is in contrast to games where the final outcome is more or less open until the game ends. In bridge, the already determined tricks provide natural lower and upper bounds for alpha-beta pruning, and the interval shrinks naturally as the search goes deeper. Other games typically need an artificial evaluation function to enable alpha-beta pruning at limited depth, or must search to a leaf node before pruning is possible. It is relatively inexpensive to compute "sure winners" in various positions in a double-dummy solver. This information improves the pruning.
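As a concrete illustration of how the incremental score feeds alpha-beta search, the following Python sketch solves a deliberately tiny perfect-information trick game: two sides each hold a few distinct ranks in a single suit, the leader plays any card, the opponent replies, and the higher card wins the trick and leads to the next one. This is a toy, not the data structures or move ordering of any real double-dummy engine, but the bounding step (a position's value can never drop below the tricks already won, nor exceed that figure plus the tricks still undecided) is exactly the natural window-tightening described above:

```python
def solve(hand0, hand1, to_move, led, tricks0, alpha, beta):
    """Tricks side 0 takes with best play by both sides in a toy single-suit game.
    hand0/hand1: tuples of distinct ranks; to_move: 0 or 1; led: rank led to the
    current trick (or None); tricks0: tricks side 0 has already won."""
    if not hand0 and not hand1:
        return tricks0

    undecided = max(len(hand0), len(hand1))      # tricks not yet settled
    # Natural bounds from the incremental score: cut off or tighten the window.
    if tricks0 >= beta:
        return tricks0                           # already at least beta tricks
    if tricks0 + undecided <= alpha:
        return tricks0 + undecided               # cannot reach more than alpha
    alpha = max(alpha, tricks0)
    beta = min(beta, tricks0 + undecided)

    hand = hand0 if to_move == 0 else hand1
    best = None
    for card in hand:
        rest = tuple(c for c in hand if c != card)
        next0 = rest if to_move == 0 else hand0
        next1 = rest if to_move == 1 else hand1
        if led is None:                          # leading to a new trick
            value = solve(next0, next1, 1 - to_move, card, tricks0, alpha, beta)
        else:                                    # completing the trick
            winner = to_move if card > led else 1 - to_move
            value = solve(next0, next1, winner, None,
                          tricks0 + (1 if winner == 0 else 0), alpha, beta)
        if best is None or (to_move == 0 and value > best) or (to_move == 1 and value < best):
            best = value
        if to_move == 0:
            alpha = max(alpha, value)
        else:
            beta = min(beta, value)
        if alpha >= beta:                        # standard alpha-beta cutoff
            break
    return best

# Toy deal: side 0 holds ranks 2, 9, 11; side 1 holds 3, 8, 12; side 0 leads.
print(solve((2, 9, 11), (3, 8, 12), 0, None, 0, -1, 4))   # -> 1 trick for side 0
```

Real solvers layer transposition tables, the partition-search equivalence classes described below, and careful move ordering on top of this skeleton, and they are invoked many times per card decision inside the Monte Carlo sampling loop outlined earlier.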
Such sure-winner information can be regarded as a kind of evaluation function; however, while an evaluation function in other games is an approximation of the value of a position, sure winners provide a definitive lower bound on it. During the course of double-dummy game tree search, one can establish equivalence classes consisting of cards with apparently equal value in a particular position. Only one card from each equivalence class needs to be considered in the subtree search, and furthermore, when using a transposition table, equivalence classes can be exploited to improve the hit rate. This has been described as partition search by Matthew Ginsberg. Numerous strategy games have been proven hard for a complexity class, meaning that any problem in that complexity class can be reduced in polynomial time to that problem. For example, generalized chess has been proven EXPTIME-complete (both in EXPTIME and EXPTIME-hard), effectively meaning that it is among the hardest problems in EXPTIME. However, since there is no natural structure to exploit in double-dummy bridge towards a hardness proof or disproof, unlike in a board game, the question of hardness remains open. The future In comparison to computer chess, computer bridge has not reached world-class level, but the top robots have demonstrated a consistently high level of play. (See the analysis of the last few years of play at www.computerbridge.com; see, however, Philippe Pionchon's 1984 article.) Yet, whereas computer chess has taught programmers little about building machines that offer human-like intelligence, more intuitive and probabilistic games such as bridge might provide a better testing ground. The question of whether bridge-playing programs will reach world-class level in the foreseeable future is not easy to answer. Computer bridge has not attracted an amount of interest anywhere near that of computer chess. On the other hand, much progress has been made in the last decade by researchers working in the field. Regardless of bridge robots' level of play, computer bridge has already changed the analysis of the game. Commercially available double-dummy programs can solve bridge problems in which all four hands are known, typically within a fraction of a second. These days, few editors of books and magazines will rely solely on humans to analyse bridge problems before publication. Also, more and more bridge players and coaches utilize computer analysis in the post-mortem of a match. See also List of computer science awards Computer Olympiad Monte Carlo method Monte Carlo tree search Importance sampling Hanabi (card game) References External links World Computer-Bridge Championship – ACBL/WBF bridge-bot world championship (official website) Classic analysis of AI applied to bridge. Contract bridge Game artificial intelligence Computer science competitions
33637495
https://en.wikipedia.org/wiki/Chengdu%20University%20of%20Information%20Technology
Chengdu University of Information Technology
Chengdu University of Information Technology (CUIT) is a provincial key university co-governed and co-sponsored by the China Meteorological Administration and Sichuan Province in Chengdu, Sichuan, China. CUIT is a leading university in the scientific research and technological application of the interdisciplinary integration of atmospheric science and information technology, and a member of the CDIO Initiative world organization. Since 2004, CUIT has educated reserve army officers for the People's Liberation Army Rocket Force, the strategic and tactical missile forces of the People's Republic of China. In recent years, CUIT has been granted 123 state-level scientific research projects, including National Science and Technology Plan projects, National Natural Science Fund projects, and National Social Science Fund projects, obtaining science and technology funding of about 58.2 million RMB annually; it has received 46 provincial and ministerial science awards, two of which are National Science and Technology Progress Awards (second class); and 3,315 academic papers have been published, with 910 articles indexed by the SCI retrieval system and over 100 articles in influential journals in China and abroad. CUIT has eight key provincial and ministerial laboratories (including a Sichuan Engineering and Technological Research Center and Sichuan key research bases for philosophy and social sciences), seven key laboratories supervised by universities and research bases for humanities and social sciences, and one post-doctoral research station. CUIT has reached an advanced international standard in research on new types of weather radar system, China's new generation of Doppler weather radar, atmospheric radiation and satellite remote sensing, weather dynamics and the dry monsoon, environmental system analysis and environmental monitoring and evaluation, computing and software, information security, and e-commerce. History Chengdu University of Information Technology was established in 1951 as the Southwest Air Force Meteorological Training Brigade of the People's Liberation Army. It was renamed Chengdu School of Meteorology in 1956 and Chengdu Institute of Meteorology in 1978, under the direct administration of the China Meteorological Bureau. Since being transferred to the direct administration of the Sichuan People's Government and renamed Chengdu University of Information Technology in 2000, CUIT has developed into a multidisciplinary university with 53 different majors in 17 schools. Key laboratories Key Laboratory of High Speed Signal Processing and Implementation Provincial key laboratory of Sichuan province, established in 2001. Key Laboratory of Statistics Information Technology and Data Mining Ministerial key laboratory of the National Bureau of Statistics of China, established on June 9, 2004. Key Laboratory of Atmospheric Sounding Ministerial key laboratory of the China Meteorological Administration, established on November 18, 2005. Key Laboratory of Plateau Atmosphere and Environment Provincial key laboratory of Sichuan province, established in December 2006. Key Laboratory of Atmospheric Environment Simulation and Pollution Control Provincial key laboratory of Sichuan province, established in October 2010.
International Jointly-established Research Laboratories International Laboratory for Atmospheric Observations, jointly founded with Colorado State University in the United States The laboratory's research areas comprise meteorological radar, surface meteorology observation and application, multi-source meteorological data fusion, and lightning monitoring and early warning, concentrating on meteorological radar signal processing, meteorological radar data quality control and calibration techniques, weather radar network composite and synergistic observation, surface meteorological factor gathering technology and facility development, multi-source meteorological data processing, satellite remote sensing, and lightning detection and early-warning technology. International Research Center for Image and Vision, jointly founded with Vanderbilt University in the United States The research center's fields include medical image processing, machine vision and information visualization, and computational intelligence. It mainly engages in image theory and application, image information visualization modeling, image segmentation, the intelligent principles of visual cognition, cognitive neural model analysis for machine intelligence, multimodal brain and spinal cord magnetic resonance image analysis (structural, diffusion, magnetic resonance spectroscopy and functional magnetic resonance imaging), and other research. Since its establishment in 2016, the center has published more than 10 high-level scientific research papers (all SCI-indexed) and successfully applied for two natural science fund grants, six provincial and ministerial research projects, and more than 10 invention patents. International Research Institute for Robots and Smart Systems, jointly founded with the University of Siegen in Germany The institute integrates research with design, oriented towards the robot and intelligent equipment supply chain. Based on its development, research foundations, accumulated research findings, platform conditions and disciplinary orientation, the CUIT Robot and Intelligent System International Joint Research Institute will cooperate with the EZLS Lab of the University of Siegen on research into intelligent environment sensing technology, mobile robot self-positioning and navigation technology, medical robots, and intelligent computation (artificial intelligence). Academic Units School of Atmospheric Sciences School of Resources and Environment School of Applied Mathematics School of Electronic Engineering School of Control Engineering School of Communication Engineering School of Computer Science School of Software Engineering School of Cybersecurity School of Optoelectronic Technology School of Management School of Logistics School of Business School of Statistics School of Culture and Arts School of Foreign Languages School of Marxism Department of Physical Education Electric Experiment Center Information Center Computational Center University Library Continuing Education College Yinxing Hospitality Management College Networking Commerce College Alumni Chen Rui (陳瑞), CEO of Bilibili, founder of Cheetah Mobile and Beike Internet Security, and general manager of Kingsoft References External links Chengdu University of Information Technology Universities and colleges in Chengdu
1044137
https://en.wikipedia.org/wiki/Wizball
Wizball
Wizball is a shoot 'em up written by Jon Hare and Chris Yates (who together formed Sensible Software) and released in 1987, originally for the Commodore 64 and later in the year for the ZX Spectrum and Amstrad CPC. Versions for the Amiga and Atari ST were released in the following year. Wizball was also ported to IBM PC compatibles (CGA) and the French Thomson MO5 8-bit computer. Wizball's more comical sequel, Wizkid, was released in 1992 for the Amiga, Atari ST, and IBM PC. Gameplay Wizball is set in the once colourful realms of Wizworld, where the evil Zark has stolen all the colour, making it dull and grey. It is up to Wiz and his cat Nifta, as Wizball and Catellite, to restore it to its former brilliance. Wizball is a scrolling shooter inspired by Gradius, with an additional collection dynamic. It is a horizontally scrolling game taking place over eight levels, which involves navigating around a landscape and shooting at sprites. However, the aim of the game is to collect droplets of coloured paint to colour the level. Each level starts off as monochromatic, drawn in three shades of grey, and needs three colours (red, blue, and green) to be collected to complete it. The player, a wizard who has taken the form of a green ball, can navigate between the levels through portals. At first the wizard only has access to the first three levels, but completing levels gains access to further ones. Each level has bouncing spheres of different colours, and shooting them releases droplets, which may be collected. Each level needs a different colour to be added, which can be composed by collecting sufficient quantities of the correct colours. On later levels, the spheres of paint start shooting bullets, further adding to the challenge. The wizard himself is not capable of collecting paint droplets, and is initially capable of very limited movement, bouncing up and down at a fixed rate, with the player only controlling the speed of rotation, and thus how fast it will move horizontally after next touching the ground. Collecting pearls (which appear when certain types of enemies have been shot) gives the player tokens which can be used to "buy" enhancements, such as greater control over movement and improved firepower. It also allows the option to summon the companion known as Catellite. Catellite (ostensibly the wizard's cat), which is also spherical in form, normally follows the wizard, but it can also be moved independently by holding down the fire button whilst moving the joystick (which meanwhile renders the wizard uncontrollable). Only Catellite is capable of collecting paint droplets, and the player has to use it to do so. In the two-player co-op mode, Catellite is controlled by the second player. Development The music in the Commodore 64 version was composed by Martin Galway, with input from Jon Hare and Chris Yates. In an interview from 1987, the developers said that development of Wizball originally started before their previously released shooter Parallax, but that it was put on hold once they managed to code the parallax scrolling routine used in that game. They also said that they were trying to present new concepts in a familiar way, and that they wanted a company to release it that could give it "a bit of hype". In a later interview with Retro Gamer, Jon Hare said that the idea began as a Nemesis-inspired shooter and that it started with the ball and the control method; the ball came first, and the Wizard storyline was tagged on at the end.
Ports The Commodore 64 version is the original by Sensible Software. The Atari ST and Amiga versions were ported by Peter Johnson, and other versions were coded by different teams. In the Commodore 64 version, enemy waves spawn in groups, with four or five on the landscape at a time, at least one of which is always a wave of colour spheres; this made the game extremely difficult, but allowed the player to preferentially hunt the spheres if they needed only a small amount of colour to complete their current combination. The Amiga and Atari ST versions spawn only one wave at a time, which makes the game easier, but requires the player to "grind" until a wave of colour spheres is chosen to spawn. Reception The game has been heralded as one of the best original games ever to appear on the Commodore 64. It is noted for its originality and use of the C64 hardware via graphics, sound and general presentation. The control method has also been described as innovative, initially awkward, but adding to the playability when mastered. The readers of Retro Gamer in 2011 selected it as the second best game ever made for the platform. The game was awarded a Sizzler award in the July 1987 issue of Zzap!64 magazine with a rating of 96%, missing out on a "Gold Medal". In November the following year, Wizball was selected by the same magazine as the number one shoot 'em up for the Commodore 64, with a rating of 98%, and a month later went on to be crowned the best game ever by Zzap!64, which Jon Hare has stated is one of his proudest career moments, while also saying the team were disappointed by the sales of the title, attributing this to the marketing of Ocean Software. In a 2002 Zzap!64 tribute publication, Wizball was ranked, via a community vote, the second best C64 game ever, with the comment "How it missed a Gold Medal back in issue 27 is beyond us". In a second Zzap!64 tribute in 2005, Gary Penn, editor at the magazine at the time of the game's publication, also commented on the game. The Spectrum and 16-bit versions generally garnered favorable reviews, with Sinclair User giving it a perfect 10 and The Games Machine awarding the Amiga and Atari ST versions 87% and 84% respectively. Legacy In 1992, Sensible Software developed a sequel, Wizkid: The Story of Wizball II, published by Ocean Software. Although the story in Wizkid continues directly from Wizball, the actual games are only superficially related to each other. In 2007, Retro Gamer wrote about Wizball. A fan remake for Microsoft Windows and macOS, based on the Commodore 64 version, was released in 2007. Due to macOS going 64-bit exclusively from 10.15 Catalina onwards, the Mac version only works on machines running versions up to 10.14 Mojave. As of June 2021, the Windows version still runs on current iterations of Windows. References External links Wizball at Amiga Hall of Light Wizball Archived from the original at CPC Zone 1987 video games Amiga games Amstrad CPC games Atari ST games Commodore 64 games DOS games Horizontally scrolling shooters Ocean Software games Sensible Software Video games developed in the United Kingdom Video games scored by Martin Galway ZX Spectrum games
44072452
https://en.wikipedia.org/wiki/Movim
Movim
Movim (My Open Virtual Identity Manager) is a distributed social network built on top of XMPP, a popular open-standards communication protocol. Movim is free and open-source software licensed under the AGPL-3.0-or-later license. It can be accessed using existing XMPP clients and Jabber accounts. The project was founded by Timothée Jaussoin in 2010. It is maintained by Timothée Jaussoin and Christine Ho. Concept Movim is a distributed social networking platform. It builds an abstraction layer for communication and data management while leveraging the strength of the underlying XMPP protocol. XMPP is a widely used open-standards communication platform. Using XMPP allows the service to interface with existing XMPP clients like Conversations, Pidgin, Xabber and Jappix. Users can log in to Movim directly using their existing Jabber account. Movim addresses the privacy concerns related to centralized social networks by allowing users to set up their own server (or "pod") to host content; pods can then interact to share status updates, photographs, and other social data. Users can export their data to other pods or offline, allowing for greater flexibility. It allows its users to host their data with a traditional web host, a cloud-based host, an ISP, or a friend. The framework, which is built on PHP, is free software and can be experimented with by external developers. Technology Movim is developed using PHP, CSS and HTML5. The software initially used the Symfony framework. Due to the complexity of the application and of the XMPP connection management, the developers rewrote Movim as a standalone application. It now has its own libraries and APIs. Movim was earlier based on the JAXL library for implementing XMPP. JAXL has been replaced by Moxl (Movim XMPP Library), licensed under the AGPL-3.0-only license, to manage the connection to the server through the XMPP WebSocket protocol. This is claimed to have reduced code complexity and performance load while providing better error management. The platform used Modl (Movim Data Layer), a PHP database layer using DAO patterns for database interfacing, until version 0.13; the project was then migrated to the Laravel Eloquent ORM. Architecture The project consists of a set of libraries that provide an abstraction layer on top of XMPP for communication and data management. Requests are handled by instances of a derived interface controller class. This methodology is similar to the query processing in an MVC framework. Access to the interface is provided by a system of widgets which, through introspection capabilities, allows AJAX elements to be written without using JavaScript. The page display uses a system of nested templates. See also Diaspora Friendica GNU Social Comparison of social networking software Comparison of cross-platform instant messaging clients Comparison of microblogging and similar services Comparison of VoIP software References External links Movim Home Page Movim Github Social networking services Microblogging software Distributed computing Free software Software using the GNU AGPL license Free instant messaging clients Instant messaging clients Free groupware Free communication software Jabber Free XMPP clients XMPP clients Instant messaging Groupware Social software Communication software VoIP software Free VoIP software VoIP software
5288039
https://en.wikipedia.org/wiki/Console%20application
Console application
A console application is a computer program designed to be used via a text-only computer interface, such as a text terminal, the command-line interface of some operating systems (Unix, DOS, etc.) or the text-based interface included with most graphical user interface (GUI) operating systems, such as the Windows Console in Microsoft Windows, the Terminal in macOS, and xterm in Unix. Overview A user typically interacts with a console application using only a keyboard and display screen, as opposed to GUI applications, which normally require the use of a mouse or other pointing device. Many console applications, such as command line interpreters, are command line tools, but numerous text-based user interface (TUI) programs also exist. As the speed and ease of use of GUI applications have improved over time, the use of console applications has greatly diminished, but not disappeared. Some users simply prefer console-based applications, while some organizations still rely on existing console applications to handle key data processing tasks. The ability to create console applications is kept as a feature of modern programming environments such as Visual Studio and the .NET Framework on Microsoft Windows. It simplifies the learning process of a new programming language by removing the complexity of a graphical user interface (see an example in the C# article). For data processing tasks and computer administration, these programming environments represent the next level of operating system or data processing control after scripting. If an application is only going to be run by the original programmer and/or a few colleagues, there may be no need for a pretty graphical user interface, leaving the application leaner, faster and easier to maintain. Examples Console-based applications include Alpine (an e-mail client), cmus (an audio player), Irssi (an IRC client), Lynx (a web browser), Midnight Commander (a file manager), Music on Console (an audio player), Mutt (an e-mail client), nano (a text editor), ne (a text editor), newsbeuter (an RSS reader), and ranger (a file manager). See also Text-based (computing) Box-drawing character Shell (computing) References Further reading Terminal emulators User interfaces Windows administration MacOS administration Unix software
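To make the idea concrete, a minimal console application needs nothing more than the standard input, output and argument-passing facilities of its platform. The Python sketch below is a generic illustration (not taken from any of the programs listed above): it reads lines from standard input or from files named on the command line and writes a numbered copy to standard output, so it can be driven entirely from a keyboard or composed with other command-line tools:

```python
#!/usr/bin/env python3
"""A minimal console application: number the lines of its input.

Usage:  number_lines.py [FILE ...]     (reads standard input if no FILE is given)
"""
import fileinput
import sys

def main(argv):
    # fileinput iterates over the named files, or over stdin when no file
    # arguments are supplied - the classic console-application pattern.
    try:
        for number, line in enumerate(fileinput.input(argv[1:]), start=1):
            sys.stdout.write(f"{number:6d}  {line}")
    except FileNotFoundError as error:
        print(f"error: {error}", file=sys.stderr)
        return 1            # non-zero exit status signals failure to the shell
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv))
```

Because it reads and writes plain text streams, such a program works equally well in a terminal emulator, over a remote shell, or as one stage of a command pipeline.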
38527354
https://en.wikipedia.org/wiki/Cyberattack%20during%20the%20Paris%20G20%20Summit
Cyberattack during the Paris G20 Summit
The cyberattack during the Paris G20 Summit refers to an event that took place shortly before the beginning of the G20 Summit held in Paris, France, in February 2011. This summit was a Group of 20 conference held at the level of finance ministers and central bank governors (as opposed to the 6th G20 summit later that year, held in Cannes and involving the heads of government). Unlike other well-known cyberattacks, such as the 2009 attacks affecting South Korean and American government, news media and financial websites, or the 2007 cyberattacks on Estonia, the attack that took place during the Paris G20 Summit was not a DDoS-style attack. Instead, it involved the circulation of an email with a malware attachment, which permitted access to the infected computer. Cyberattacks in France have generally consisted of DDoS attacks on websites as well as malware, and have so far been directed at the civil and private sectors rather than the military. Like the UK, Germany and many other European nations, France has been proactive in cyber defence and cyber security in recent years. The White Paper on Defence and National Security identified cyberattacks as "one of the main threats to the national territory" and "made prevention and reaction to cyberattacks a major priority in the organisation of national security". This led to the creation of the French Agency for National Security of Information Systems (ANSSI) in 2009. ANSSI's workforce was to be increased to 350 by the end of 2013; in comparison, the equivalent British and German agencies employ between 500 and 700 people. Attacks in December 2010-January 2011 The attacks began in December with an email sent around the French Ministry of Finance. The email's attachment was a Trojan horse consisting of a PDF document with embedded malware. Once opened, the malware infected the computers of some of the government's senior officials and forwarded the offending email on to others. The attack infected approximately 150 of the finance ministry's 170,000 computers. While access to the computers at the highest levels of office in the infiltrated departments was successfully blocked, most of the owners of the infiltrated computers worked on the G20. The attack was noticed when "strange movements were detected in the e-mail system". Following this, ANSSI monitored the situation for several more weeks. Reportedly, the intrusion targeted only the exfiltration of G20 documents. Tax, financial and other sensitive information about individuals, which is also held on the Ministry of Finance's servers, was left alone, as it circulates only on an intranet accessible solely from within the ministry. The attack was reported in the news media only after the conclusion of the summit in February 2011, but had been discovered a month earlier, in January. Perpetrators While the nationalities of the hackers are unknown, the operation was "probably led by an Asian country". The head of ANSSI, Patrick Pailloux, said the perpetrators were "determined professionals and organised", although no further identification of the hackers was made. See also 2007 cyberattacks on Estonia Trojan horse (computing) References Cyberattacks 2011 in France G20
12107449
https://en.wikipedia.org/wiki/Resource-oriented%20architecture
Resource-oriented architecture
In software engineering, a resource-oriented architecture (ROA) is a style of software architecture and programming paradigm that supports designing and developing software in the form of internetworking of resources with "RESTful" interfaces. These resources are software components (discrete pieces of code and/or data structures) which can be reused for different purposes. ROA design principles and guidelines are used during the phases of software development and system integration. REST, or Representational State Transfer, describes a series of architectural constraints that exemplify how the web's design emerged. Various concrete implementations of these ideas have been created over time, but it has been difficult to discuss the REST architectural style without blurring the lines between actual software and the architectural principles behind it. In Chapter 5 of his thesis, Roy Fielding documents how the World Wide Web is designed to be constrained by the REST series of limitations. These are still fairly abstract and have been interpreted in various ways in designing new frameworks, systems, and websites. In the past, heated exchanges have taken place about whether RPC-style REST architectures are RESTful. Guidelines for clarification The Resource Oriented Architecture, as documented by Leonard Richardson and Sam Ruby in their 2007 book RESTful Web Services, gives concrete advice on specific technical details. Naming these collections of guidelines "Resource Oriented Architecture" may allow developers to discuss the benefits of an architecture in the context of ROA. Some guidelines are already common within the larger REST community, such as: that an application should expose many URIs, one for each resource; and that processing cookies representing IDs in a server-side session is not RESTful. Existing frameworks Richardson and Ruby also discuss many software frameworks that provide some or many of the features of ROA. These include /db, Django, TurboGears, Flask, EverRest, JBoss RESTEasy, JBoss Seam, Spring, Apache Wink, Jersey, NetKernel, Recess, Ruby on Rails, Symfony, Yii2, Play Framework, and API Platform. Web infrastructure While REST is a set of architectural guidelines applicable to various types of computing infrastructure, Resource Oriented Architecture (ROA) is coupled only with the web. This architecture is therefore useful mostly to businesses that consider the web the computing and publishing platform of choice. The power of the web seems to reside mostly in its ability to lower the barriers to entry for human users who may not be highly trained in using computing machinery. As such, the web widens the market reach for any business that decides to publish some of its content in electronic format. On the web, such published content is regarded as a web resource. References Bibliography Software architecture
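To make the one-URI-per-resource guideline concrete, the sketch below uses Flask (one of the frameworks listed above) to expose each book in a tiny in-memory collection as its own resource, addressed by its own URI and manipulated through standard HTTP verbs rather than through server-side session state. The resource names, data and routes are illustrative assumptions, not material from Richardson and Ruby's text:

```python
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# A toy "database": each book is an addressable resource in its own right.
BOOKS = {
    1: {"title": "RESTful Web Services", "authors": "Richardson & Ruby"},
    2: {"title": "Architectural Styles and the Design of Network-based Software Architectures",
        "authors": "Fielding"},
}

@app.route("/books", methods=["GET"])
def list_books():
    # The collection itself is also a resource, with its own URI.
    return jsonify({f"/books/{book_id}": book for book_id, book in BOOKS.items()})

@app.route("/books/<int:book_id>", methods=["GET"])
def get_book(book_id):
    # One URI per resource: /books/1, /books/2, ...
    if book_id not in BOOKS:
        abort(404)
    return jsonify(BOOKS[book_id])

@app.route("/books/<int:book_id>", methods=["PUT"])
def replace_book(book_id):
    # Application state travels in the request itself, not in a session cookie.
    BOOKS[book_id] = request.get_json(force=True)
    return jsonify(BOOKS[book_id]), 200

if __name__ == "__main__":
    app.run(debug=True)
```

Each book is then retrievable with a plain GET to its own URI (for example /books/1), which is exactly the addressability property the guidelines above emphasise.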
40903155
https://en.wikipedia.org/wiki/Impression%20%28software%29
Impression (software)
Impression is a desktop publishing application for RISC OS systems. It was developed by Computer Concepts and initially made available in pre-release form during 1989, having been demonstrated in February 1989 at the Which? Computer Show and subsequently announced as being available from June 1989. The "completed" version was eventually delivered on 18 January 1990. Originally, the application appears to have been developed for Acorn's then-current operating system, Arthur, as a ROM-based product, but due to dissatisfaction with the state of Arthur during early development of the application, it was then intended to run on Computer Concepts' own operating system, Impulse, instead. (Publicity images for the software depict a different operating environment to RISC OS.) Product range and history Impression II and Junior Impression II, an improved version of the product, was released in 1990 alongside Impression Junior, a cut-down version of Impression II priced at £103, compared to the £194 price of the "senior" product. Given a degree of instability in early versions of Impression, Computer Concepts had promised a free upgrade to Impression version 1.1 to existing users. Ultimately, the further development of the software led to a more significant release incorporating several enhancements, and this Impression II release was offered as a free upgrade instead. Enhancements included improved frame selection and manipulation controls, repeating frames for headers and footers, and "instant" in-place rotation of vector and bitmap graphics. Impression II was featured in the Acorn Publishing System: a bundle featuring an Archimedes 540 computer, Impression II software, SVGA-capable monitor, 120 MB hard drive, 256-greyscale flatbed scanner (Computer Concepts' ScanLight Professional), and direct-drive Canon laser printer (Computer Concepts' LaserDirect HiRes8) for a suggested retail price of £4995 plus VAT. Impression Junior was introduced to compete with word processors on the RISC OS platform such as Pipedream and First Word Plus, both considered "fundamentally character mode programs" with some graphical support. Certain layout features were preserved from the "senior" product such as the use of frames, but "master pages" (a form of templates) and the style mechanism were omitted (or "simplified"), emphasising traditional effect-based styling of text instead. As with Impression, documents could be printed in a form accurately portraying their on-screen appearance, making use of outline fonts and the font manager, but for rapid output the draft and near-letter quality modes of printers could be used instead. One noted omission was a convenient mailmerge function; conventional word processing features such as automatic contents and index generation were also omitted from the product. Impression Junior was the basis of the word processor component of Acorn Advance: an integrated office suite "similar in concept to Claris Works or Microsoft Works". Mailmerge functionality was featured in the Impression Business Supplement product, introduced in 1991 for Impression 2.10 at a price of £57.57, which also provided support for four-colour separations, the ExpressionPS tool for the processing of PostScript for typesetting purposes, and a selection of file format loaders. However, these mailmerge capabilities were regarded as somewhat inferior to those provided by the variant of Impression Junior featured in Acorn's Advance integrated office suite.
In 1997, Impression Junior was made available as a free download from Computer Concepts' Web site, having been replaced in the company's product range by Impression Style in December 1993. The download was perceived as a way of evaluating the Impression family of products for potential purchasers of Impression Style and Publisher. Impression Publisher and Style In 1993, Computer Concepts revamped the Impression product line, creating Impression Publisher as the successor to Impression II and Impression Style as the successor to Impression Junior. Impression Publisher added various features aimed more at professional use to the core product, such as CMYK colour separations and crop marks, alongside other enhancements. Impression Style built on feedback from existing users and on work done on Impression Junior to produce the word processor component of Acorn Advance. Both Impression Publisher and Style supported "object linking and embedding" (OLE) and 24-bit colour, and were offered as upgrades to existing users of Impression II and Junior respectively at the same price of £29 plus VAT. Impression Publisher was also bundled with the separate Equasor and TableMate tools to take advantage of the OLE support, as was Impression Style, with the inclusion of Equasor being regarded as welcome but not as comprehensive as the mathematical typography support of Icon Technology's TechWriter (its principal competitor in this regard), and with TableMate seeking to augment the elementary table editing functionality in the Impression series, being regarded as a "delightful little utility" that was somewhat less flexible than the table support of other word processors (such as Colton Software's Wordz) but "hugely preferable" to the established, "cumbersome" construction of tables using horizontal and vertical rules. A significant enhancement to Impression Publisher not present in Impression Style was support for irregular frames, albeit only for frames containing graphics, not text. This kind of frame was able to repel text in text frames, thus removing the need for various workarounds that would otherwise be employed to cause text to be wrapped around other areas of a page. Such frames could have non-rectangular outlines involving additional points or nodes, and they could be rotated and scaled, but the irregular frame edge could only consist of straight line sections, not curves, and controls were not provided for precise positioning of outline points. Support for irregular text frames was stated as planned for subsequent versions. Computer Concepts and the other vendors involved in providing the component applications of Acorn Advance - Clares and Iota - offered a Musketeer pack featuring Impression Style and the other "full-featured" versions of the Advance applications - Schema 2 and Datapower - as a form of upgrade for Advance users costing £249 plus VAT instead of the combined £385 plus VAT cost of the separate packages. (Curiously, Schema was originally developed by Acorn but transferred to Clares Micro Supplies as a consequence of a decision by Acorn to stop developing applications software itself.) 
An enhanced Impression Publisher Plus product became available in 1994, adding support for various professional publishing features such as Open Pre-Press Interface (OPI) for the substitution of high resolution images when typesetting, Encapsulated PostScript (EPS) and Desktop Colour Separation (DCS), along with other enhancements such as a new colour selection tool supporting spot, process and tint colour handling. The price of Publisher Plus was £299 plus VAT, with an upgrade from Publisher costing £130 plus VAT, with the additional features producing a product with "Quark-like facilities at a substantially lower price". The other enhancements were regarded as "nice extra touches", although a "proper" undo option was still absent, and other noted frustrations with Publisher's user interface remained. A 32-bit conversion and improvement project initiated in 2003 was named Impression-X. Impact Impression II was adopted as part of the solution employed by Acorn User magazine as "the first newstrade magazine to 'go live' with Archimedes-based DTP", having already produced editorial supplements using software running on the Archimedes platform. Computer Concepts LaserDirect and Integrex ColourCel printers were used alongside A440, and subsequently A540, computers. The experiences from this adoption exercise were apparently fed back to Acorn and Computer Concepts to inform further product refinements, with the editorial team stating a belief that a similar solution marketed as a product would be "a serious alternative to Apple Macintosh and PC-based systems" in the broader desktop publishing market. Acorn User reverted to using the Macintosh platform when the title was acquired by Europress, this mostly due to working practices at its new owner, with any "weaknesses" in Impression II said to be mostly resolved by the introduction of Impression Publisher and accompanying OPI functionality. However, a renewed effort saw complete issues of Acorn User once again produced using Acorn-based technology in 1995, starting with a redesign exercise in December 1994 and gradually moving to a Risc PC-based solution employing Impression Publisher Plus. Other professional users adopted Impression II and direct-drive printing hardware on the Acorn platform. For example, the Journal of Physiology, published through Cambridge University Press by the Physiological Society at Cambridge University, employed A540 and A5000 machines augmented with State Machine graphics cards in conjunction with Calligraph TQ1200 printers, with these peripherals being general competitors with Computer Concepts' own ColourCard and LaserDirect products, although the TQ1200 as an A3 printer had no direct competitor from Computer Concepts' own range. The software was one of two packages recommended for use in primary teaching in the 1996 book Opportunities for English in the Primary School. It has been considered one of the most important applications in the history of the platform. Development Impression was originally developed concurrently with Computer Concepts' own operating system, Impulse, during the period when Arthur was the operating system delivered with Archimedes series machines, with Computer Concepts looking to offer Impulse as a replacement. During this period, only three developers worked on Impression while "everyone else [in the company] was working on Impulse". 
This operating system was never delivered, but Acorn perceived it as enough of a threat that pre-releases of RISC OS - Acorn's successor to Arthur - were apparently withheld from the Impression developers for competitive reasons. Later, having been invited to collaborate on the Advance integrated suite with Acorn, various visual aspects of Computer Concepts' products, notably the "3D box style", were adopted as part of Acorn's own visual style for RISC OS 3. The software itself was written using the BBC Basic assembler, assembled in pieces and linked using a dedicated linker. A project to produce a fully 32-bit compliant version (compatible with the Iyonix PC and later ARM hardware) was announced by X-Ample Technology in 2003. This was named Impression-X. In 2004 it was explained that the process of 32-bitting was being complicated partly because of "the massive number of optimisation and 'tricks' Computer Concepts used". In 2005, Drobe editor Chris Williams suggested handing the project over to another party to complete. After only another 10 years, Impression-X was released in May 2015, and is now available from the PlingStore. Features A document loader for Impression files was included with the 2.60 release of desktop publishing application Ovation Pro in 2000. The software was copy protected via a dongle. However, the reliance on a dongle could be removed from Impression II (and ArtWorks) with a personalised version of the software issued on Computer Concepts' receipt of the dongle and an associated upgrade fee. The personalised version used various identifying characteristics of particular computer models and was only offered, at least initially, for A5000, A4, A3010, A3020 and A4000 owners. References RISC OS software Proprietary software
21686222
https://en.wikipedia.org/wiki/Ronald%20Stamper
Ronald Stamper
Ronald K. (Ron) Stamper (born 1934) is a British computer scientist, formerly a researcher at the LSE and emeritus professor at the University of Twente, known for his pioneering work in Organisational semiotics, and the creation of the MEASUR methodology and the SEDITA framework. Biography Born in West Bridgford, United Kingdom, Stamper obtained his MA in Mathematics and Statistics at Oxford University in 1959. Stamper started his career in industry, first in hospital administration and later in the steel industry. He started applying operational research methods with the use of computers, and moved into the management of information systems development. In need of more experts, he developed one of the first courses in systems analysis in the UK. In 1969 he moved into the academic world, starting at the London School of Economics as senior lecturer and principal investigator. From 1988 to 1999 he was Professor of Information Management at the University of Twente at its Faculty of Technology and Management. From 2001 to 2004 he was visiting professor at Staffordshire University. In the 1970s Stamper joined the work of the International Federation for Information Processing (IFIP), and participated in the IFIP TC8/WG8.1 Task Group Design and Evaluation of Information Systems. In the 1990s he made a significant contribution to its 1998 publication of A Framework of Information System Concepts. The FRISCO Report. Work Theoretical foundations of information systems The main thrust of Stamper's published work is to find a theoretical foundation for the design and use of computer-based information systems. He uses a framework provided by semiotics to discuss and prescribe practical and theoretical methods for the design and use of information systems, called the Semiotic Ladder. To the traditional division of semiotics into syntax, semantics and pragmatics, Stamper adds "empirics". "Empirics" for Stamper is concerned with the physical properties of sign or signal transmission and storage. He also adds a "social" level for shared understanding above the level of pragmatics. Stamper adopted the idea of the sign as the fundamental unit of informatics after his research into the meaning of the word "information", which he considered dangerously polysemous. He was concerned to establish an operationalism at the semantic level of information systems rather than the binary level. LEGally Oriented Language LEGOL His work at the LSE investigating LEGOL (for LEGally Oriented Language – a computerized representation of the law) led him to incorporate the idea of Norms pioneered by Von Wright and the Affordances of Gibson in a system called NORMA (for NORMs and Affordances). Stamper collaborated with Ronald Lee of the University of Texas on organizational deontics incorporating the Speech Acts of Austin and Searle. This led to the broader methodology he called MEASUR (for Methods for Eliciting, Analysing and Specifying Users' Requirements). MEASUR incorporated the methods of Problem Articulation, Semantic Analysis and Norm Analysis, and uses the ontology chart. IBM partly sponsored the research into LEGOL at the LSE, and LEGOL 2 was used as an application to test IBM's seminal Peterlee Relational Test Vehicle, the first relational database. Publications Books Stamper, R.K. (1973). Information in business and administrative systems. B. T. Batsford: London and New York: Wiley. Stamper, R.K. (1980). LEGOL-2 for the expression of legal and organisational norms (application aspects of the LEGOL project).
Stamper, R.K. & Lee, R.M. (1986). Doing Business with Words: Performative Aspects of Deontic Systems. Research Note 86-40, US Army Research Institute for the Behavioural and Social Sciences, Arlington, VA. Falkenberg, E.D. et al. (1998). A Framework of Information System Concepts. The FRISCO Report. WEB edition. [Report] IFIP 1998. Liu, K., Clarke, R. J., Andersen, P. B. & Stamper, R. K. (2001). Information, organisation and technology: Studies in organisational semiotics. Boston: Kluwer Academic Publishers. Liu, K., Clarke, R. J., Andersen, P. B. & Stamper, R. K. (2002). Coordination and communication using signs: Studies in organisational semiotics. Boston: Kluwer Academic Publishers. Liu, K., Clarke, R. J., Andersen, P. B. & Stamper, R. K. (2002). Organizational semiotics: Evolving a science of information systems. Boston: Kluwer Academic Publishers. Stamper, R.K. et al. (2004). Semiotic Methods for Enterprise Design and IT Applications. Proceedings of the 7th International Workshop on Organisational Semiotics. Articles, a selection Stamper, R.K. (1971). "Some Ways of Measuring Information." Computer Bulletin, 15, 423–436. Stamper, R.K. (1977). "The LEGOL 1 prototype system and language." The Computer Journal, 20(2), 102. Stamper, R. (1977). "Physical Objects, Human Discourse and Formal Systems." Architecture and Models in Data Base Management Systems VLDB2 1977, 293–311. Stamper, R.K. (1977). "Informatics without computers." Informatie 19, 272–279. Stamper, R.K. (1978). "Aspects of data semantics: names, species and complex physical objects." Information Systems Methodology, 291-306. Jones, S., Mason, P. & Stamper, R.K. (1979). "LEGOL 2.0: A relational specification language for complex rules." Information Systems 4(4) 1979, pp 293–305. Stamper, R.K. (1980). "LEGOL: Modelling legal rules by computer." Advanced Workshop on Computer Science and Law (1979 : University College of Swansea), 45–71. Stamper, R.K. (1985). "Towards a Theory of Information: Information: Mystical Fluid or a Subject for Scientific Enquiry?" The Computer Journal 28, 195. Stamper, R.K. (1986). "Legislation, information systems and public health." International Journal of Information Management, 6(2), 103–114. Stamper, R.K. (1988). "Analysing the cultural impact of a system." International Journal of Information Management 8, 107–122. Stamper, R.K., Althaus, K. & Backhouse, J. (1988). "MEASUR: Method for Eliciting, Analysing and specifying Users Requirements." In Computerised Assistance During the information Systems life cycle, (Eds, Olle, T.W., Verrijn-Stuart, A.A. & Bhabuts, L.) Elsevier Science, Amsterdam. Stamper, R.K., Liu, K., Kolkman, M., Klarenberg, P., van Slooten, F., Ades, Y. & van Slooten, C. (1991). "From Database to Normbase." International Journal of Information Management (1991) 11, 67–84. Stamper, R.K. (1991). "The Role of Semantics in Legal Expert Systems and Legal Reasoning*." Ratio Juris, 4(2), 219–244. Stamper, R.K. & Kolkman, M. (1991) "Problem Articulation: A sharp-edged soft systems approach." Journal of Applied Systems Analysis 18, 69–76. Stamper, R.K. (1992). "Review of Andersen A Theory of Computer Semiotics." The Computer Journal 35, 368. Stamper, R.K. (1993). "Social Norms in Requirements Analysis – an outline of MEASUR." In Requirements Engineering Technical and Social Aspects, (Eds, Jirotka, M., Goguen, J. & Bickerton, M.) Academic Press, New York. Stamper, R.K., Kecheng Liu, K. & Huang, K. (1994). "Organisational morphology in re-engineering." 
Proceedings of Second European Conference of Information Systems, Nijenrode University, 729–737. Stamper, R.K. (1998). "A Dissenting Position." In: A Framework of Information System Concepts. The FRISCO Report. (Ed. Eckhard D. Falkenberg, Wolfgang Hesse, Paul Lindgreen, Björn E. Nilsson, J.L. Han Oei, Colette Rolland, Ronald K. Stamper, Frans J.M. Van Assche, Alexander A. Verrijn-Stuart, Klaus Voss). Web edition. IFIP 1998. Stamper, R. K. (2001). "Organisational semiotics: Informatics without the computer?" In K. Liu, R. J. Clarke, P. Bøgh Andersen, & R. K. Stamper (Eds.), Information, organisation and technology: Studies in organisational semiotics (pp. 115–171). Boston, MA: Kluwer Academic Publishers. Stamper, R. & Liu, K. (2002). "Organisational dynamics, social norms and information systems." System Sciences, 1994. Vol. IV: Information Systems: Collaboration Technology Organizational Systems and Technology, Proceedings of the Twenty-Seventh Hawaii International Conference on, 4, 645–654. References External links 1934 births English computer scientists British semioticians Living people
941245
https://en.wikipedia.org/wiki/Micro-Star%20International
Micro-Star International
Micro-Star International Co., Ltd (MSI; ) is a Taiwanese multinational information technology corporation headquartered in New Taipei City, Taiwan. It designs, develops and provides computer hardware, related products and services, including laptops, desktops, motherboards, graphics cards, All-in-One PCs, servers, industrial computers, PC peripherals, car infotainment products, etc. The company has a primary listing on the Taiwan Stock Exchange and was established on August 4, 1986 by 5 founders – Hsu Xiang (a.k.a. Joseph Hsu), Huang Jinqing (a.k.a. Jeans Huang), Lin Wentong (a.k.a. Frank Lin), Yu Xian'neng (a.k.a. Kenny Yu), and Lu Qilong (a.k.a. Henry Lu). First starting its business in New Taipei City, Taiwan, MSI later expanded into China, setting up its Baoan Plant in Shenzhen in 2000 and establishing research and development facilities in Kunshan in 2001. It also provides global warranty service in North America, Central/South America, Asia, Australia and Europe. The company has been a sponsor for a number of esports teams and is also the host of the international gaming event MSI Masters Gaming Arena (formerly known as MSI Beat IT). The earliest Beat IT tournament can be traced back to 2010, featuring Evil Geniuses winning the championship. Operations MSI's offices in Zhonghe District, New Taipei City, Taiwan serve as the company's headquarters, and house a number of different divisions and services. Manufacturing initially took place at plants in Taiwan, but has been moved elsewhere. Many MSI graphics cards are manufactured at its plant in China. The company has branch offices in the Americas, Europe, Asia, Australia and South Africa. As of 2015, the company has a global presence in over 120 countries. Products The company first built its reputation on developing and manufacturing computer motherboards and graphics cards. It established its subsidiary FUNTORO in 2008 for the vehicle infotainment market. It provides products ranging from laptops, desktops, monitors, motherboards, graphics cards, power supply, computer cases and liquid cooler for gamers and content creators, to all-in-one PCs, mobile workstations, servers, IPCs, multimedia peripherals, vehicle infotainment, and an autonomous mobility robot. When established in 1986, MSI focused on the design and manufacturing of motherboards and add-on cards. Later that year, it introduced the first overclockable 286 motherboard. In 1989, MSI introduced its first 486 motherboard; in 1990 it introduced its first Socket 7 based motherboard, and in 1993, its first 586 motherboard; in 1995, its Dual Pentium Pro-based motherboard. In 1997 it introduced its Intel Pentium II-based motherboard with Intel MMX Technology, along with its first graphics card product, and its first barebone product; in 2002, its first PC2PC Bluetooth & WLAN motherboard. In 2000, MSI introduced its first set-top box product (MS-5205). In 2003, its first Pen Tablet PC product (PenNote3100), and in 2004, its first Notebook product (M510C). In 2009, MSI introduced its first Ultra Slim Notebook (X320), and first All-in-One PC (AP1900). MSI released their first digital audio player in 2003, in a line called MEGA. In 2008, MSI sponsored Fnatic and dived into the PC gaming market. Its GAMING series features laptops, desktops, motherboards, graphic cards, All-in-One PCs and gaming peripherals designed for gamers and power users. In 2015, MSI teamed up with eye-tracking tech firm Tobii for the creation of eye-tracking gaming laptops. 
In early 2016, MSI announced a collaboration with HTC and revealed Vive-ready systems to offer virtual reality experiences. MSI expanded its scope of business into content creation in 2018 and demonstrated creator-centric laptops at IFA 2018. The MSI Optix MPG27CQ gaming monitor was the recipient of the 27th Taiwan Excellence Gold & Silver Awards. History MSI's five founders Joseph Hsu, Jeans Huang, Frank Lin, Kenny Yu and Henry Lu all worked for the electronics company Sony before establishing MSI. Sony's corporate downsizing in 1985 brought them together. With the engineering background gained at Sony, they established Micro-Star International together in August 1986. In 1997, MSI inaugurated its Plant I in Zhonghe; in 2000, it inaugurated its Plant III in Zhonghe. In 1998, it became a public company with an IPO (initial public offering) on the Taipei Stock Exchange (TAIEX). In 2000, MSI Computer (Shenzhen) Co., Ltd. was founded, and in 2001, MSI Electronics (Kunshan) Co., Ltd. In 2002, MSI set up its European logistics center in the Netherlands. In 2003, MSI released the "Mega PC", a shelf-stereo and desktop-computer hybrid with a front panel resembling the former and desktop computer connectors on the rear. In 2008, MSI was ranked among the Top 20 Taiwan Global Brands. In 2011, the firm was named one of the Top 100 Taiwan Brands, distinguished among 500 brands. By 2013, MSI had received Taiwan Excellence awards for 15 consecutive years. In 2015, MSI was ranked the fourth-best laptop brand of 2015 by Laptop magazine. According to research, MSI was the largest gaming laptop supplier worldwide in 2016. MSI launched the "Join the Dragon" team sponsorship program in April 2017 to discover talented eSports teams. Also in April 2017, MSI named certified partners for the creation of an RGB ecosystem with MSI Mystic Light Sync, including Corsair, SteelSeries, G.Skill, Cooler Master, InWin, Phanteks, and others. Mustek and MSI signed a laptop distribution deal in 2017. ESL partnered with MSI for its ESL One events in 2018, and MSI was the official partner of ESL One Cologne 2018, one of the biggest events on the CS:GO calendar. In August 2018, MSI was rated the Best Gaming Laptop Brand of 2018 by Laptop Mag; new designs of its GS65 Stealth Thin and GE63 Raider RGB laptops earned the company a score of 84 out of 100 and put it in the top spot. The eSports organization Method joined forces with MSI in August 2018. MSI partnered with ESL to bring the MGA 2018 grand finals to New York, where Kazakhstan's AVANGAR won the championship. MSI and Ubisoft jointly presented Ambient Link synchronized game lighting for Assassin's Creed Odyssey and Tom Clancy's The Division 2 in 2019. On 7 July 2020, it was reported that Charles Chiang, CEO since January 2019, had fallen to his death from one of the company's buildings in Taiwan. In September 2020, MSI unveiled a new line of business-oriented laptops under the "Modern", "Prestige", and "Summit" lines, and a new logo specific to these models. On 7 October 2020, MSI released a public statement about its subsidiary Starlit scalping MSI-made Nvidia RTX 3080 and 3090 GPUs and selling them for more than MSRP on eBay. Sponsorship The company once partnered with eSports heavyweights Fnatic and Cloud 9. It has also been a sponsor for a number of eSports teams worldwide, including METHOD, PENTA Sports, Energy eSports, HWA Gaming, yoe Flash Wolves, NXA-Ladies, Saigon Fantastic Five, MSI-Evolution, Vox Eminor, DeToNator, Team Infused, Aperture Gaming, Phoenix GaminG, Karmine Corp, and others.
See also List of companies of Taiwan References External links Companies listed on the Taiwan Stock Exchange Computer hardware companies Electronics companies of Taiwan Graphics hardware companies Manufacturing companies based in New Taipei Motherboard companies Multinational companies headquartered in Taiwan Netbook manufacturers Portable audio player manufacturers Taiwanese brands Taiwanese companies established in 1986 Computer enclosure companies Computer power supply unit manufacturers Computer hardware cooling Micro-Star International
4470866
https://en.wikipedia.org/wiki/2006%20Rose%20Bowl
2006 Rose Bowl
The 2006 Rose Bowl Game, played on January 4, 2006 at the Rose Bowl in Pasadena, California, was an American college football bowl game that served as the BCS National Championship Game for the 2005 NCAA Division I-A football season. It featured the only two unbeaten teams of the season: the defending Rose Bowl champion and reigning Big 12 Conference champion Texas Longhorns played Pacific-10 Conference titleholders and two-time defending AP national champions, the USC Trojans. Texas would defeat USC 41-38 to capture its fourth football championship in program history. The game was a back-and-forth contest; Texas's victory was not secured until the game's final nineteen seconds. Vince Young, the Texas quarterback, and Michael Huff, a Texas safety, were named the offensive and defensive Rose Bowl Players Of The Game. ESPN named Young's fourth-down, game-winning touchdown run the fifth-highest rated play in college football history. The game is the highest-rated BCS game in TV history with 21.7% of households watching it, and is often considered the greatest Rose Bowl game of all time, as well as the greatest college football game ever played. Texas's Rose Bowl win was the 800th victory in school history and the Longhorns ended the season ranked third in Division I history in both wins and winning percentage (.7143). It was only the third time that the two top-ranked teams had faced each other in Rose Bowl history, with the 1963 Rose Bowl and 1969 Rose Bowl games being the others. The 92nd-annual Rose Bowl Game was played, as it is every year, at the Rose Bowl Stadium in Pasadena, California, in the United States. This was the final game ever called by longtime broadcaster Keith Jackson (as well as the final Rose Bowl to telecast under ABC Sports branding); the 2007 Rose Bowl would be an ESPN on ABC presentation. It was also the final time until the BCS National Championship Game for the 2009 Season that it was broadcast as an ESPN on ABC presentation. In addition, this was the last National Championship Game in the BCS era to be a nominal BCS bowl game (the National Championship and the four BCS bowls became separate events beginning with the 2006 season). This was the first college football game to feature two Heisman Trophy winners in the same starting lineup. USC's quarterback Matt Leinart and running back Reggie Bush won the award in 2004 and 2005, respectively, although Bush would later forfeit the award. Pre-game buildup USC entered the game on a 34-game winning streak. It was the longest active streak in Division I-A. (Many of those wins have since been vacated following NCAA sanctions surrounding allegedly illegal benefits given to USC's Reggie Bush.) Texas brought the second-longest active streak, having won nineteen straight games and entered as the defending Rose Bowl champion, after defeating Michigan in the 2005 Rose Bowl. The teams' combined 53-game win streak was an NCAA record for teams playing each other. The game was also the first to pit against each other the teams ranked first and second in every iteration of the BCS standings. This was Texas's second trip to the Rose Bowl in two years (and second trip in the history of UT football). A few weeks before the game, USC's Reggie Bush won the Heisman Trophy (since vacated in 2010) ahead of second-place finisher Vince Young. Bush had the second-highest number of first place votes in Heisman history (behind O. J. Simpson) and the highest percentage of first-place votes, while Young had a record number of second-place votes. 
Bush's 933-point margin of victory was the 17th highest in Heisman voting history. The other finalist was USC's Matt Leinart, who had won the Heisman trophy in 2004. This meant that the Rose Bowl would mark the first time that two Heisman-trophy winners had ever played in the same backfield. The 2006 Rose Bowl was, in the eyes of many, the most-anticipated matchup in college football history. Both teams were considered good enough to win the National Championship had they existed in different years instead of having to play each other. USC had been ranked No. 1 since the preseason and Texas had held the No. 2 spot that entire time. Before the game, some commentators postulated that the 2005 USC team was one of the greatest college football teams of all time. ESPN analysts were virtually unanimous in declaring the 2005 USC Trojans as having the best offense in college football history (though it did not lead the nation in points scored; Texas did). Mark May and Kirk Herbstreit declared that the 2005 USC Trojans were the second-best college football team of the past 50 years (May placed them behind only the 1995 Nebraska Cornhuskers; Herbstreit behind only the 2001 Miami Hurricanes). This led Texas fans to mockingly chant "Best...Team...Ever" during the post-game celebration. Stewart Mandel of Sports Illustrated later observed, "ESPN spent the better part of Christmas season comparing that Trojans squad to some of the most acclaimed teams of all time only to find out that they weren’t even the best team that season." Lee Corso was one of the few ESPN analysts to predict a Texas win. Game summary First quarter USC received the opening kickoff and managed just three yards against a Texas defense that was stout early in the game. Aaron Ross fumbled the ball on the ensuing punt return, committing the first of four Texas fumbles on the day (though it would only lose one), and the Trojans recovered. A 23-yard Leinart pass to senior fullback David Kirtman, who was hit hard by Cedric Griffin and forced to leave the game briefly (Kirtman finished the game with three catches for 61 yards on the day), set up a four-yard touchdown run by running back LenDale White, a bruiser who out-rushed his speedy counterpart, Bush, on the day, gaining 124 yards on 20 carries. Kicker Mario Danelo's extra point gave USC a seven-point lead. The teams twice exchanged possessions to end the first quarter, as each defense held the opposing offense in check. Second quarter On the second play of the second quarter, Reggie Bush exploded for 35 yards off a Leinart pass, reaching Texas's 18-yard line before attempting to lateral pass the ball to an uncovered teammate; Texas strong safety Michael Huff recovered the loose ball. The Pac-10 football-officiating coordinator later stated that Bush's pass was incorrectly officiated because it was an illegal forward pass, not a lateral, so the Trojans should have retained possession. Young drove his team 53 yards on the ensuing possession, twice hitting senior tight end David Thomas, who finished the day as Young's leading receiver, catching ten passes for 88 yards. The Trojans' defense tackled sophomore running back Ramonce Taylor five yards behind the line of scrimmage and forced a fumble that Young recovered for an additional five-yard loss. This forced a Texas field-goal attempt, which David Pino converted from 46 yards to cut Texas's deficit to four. 
On USC's next possession, Leinart drove his team into Texas territory, this time to the 25-yard line, before throwing an interception to Texas free safety Michael Griffin, who appeared to be out of the play but ran halfway across the field before making a leaping catch and barely staying in-bounds in the end zone. The turnover ended a second Trojans' drive with USC in scoring position. On the following Texas drive, Young connected with wide receiver Limas Sweed, who caught eight balls for 65 yards on the day, for a key first down. Young then led his team with his legs, capping the drive by running 10 yards before throwing a lateral pass to open running back Selvin Young, who ran for 12 more for the touchdown. The lateral, made after Young's knee had touched the ground, was not reviewed because of issues with the replay equipment. The game continued with a failed extra-point attempt by Texas, which, not knowing of the equipment issues, appeared to rush the kick to get the play off before the prior play could be reviewed. The NCAA football-officiating coordinator later asserted that Young's knee had been down, and expressed confusion about how the call had been handled. A defensive stop on USC's next possession and a 15-yard punt return gave Texas the ball near midfield, and the Longhorns capitalized when Young found Thomas for 14 yards on one play, and Taylor running 30 yards for a touchdown on another. Pino's extra point extended the Longhorns' lead to 16–7. On the next drive, Leinart threw a pass intended for Reggie Bush that was grabbed by Texas linebacker/safety Drew Kelson. But Kelson landed on his back after catching the pass and the ball popped out. The pass was ruled incomplete; equipment issues again prevented a review. USC's drive continued with a Leinart pass to wide receiver Dwayne Jarrett, the top Trojan receiver of the day with ten catches totaling 121 yards, a quarterback keeper of 14 yards, and a Bush 12-yard run took the Trojans to the Texas 13-yard line with 40 seconds to play in the half. But two sacks by defensive tackle Frank Okam pushed USC back 13 yards and forced the Trojans to use two timeouts. Consequently, Danelo's 43-yard field goal allowed USC three points, and the half ended with Texas still ahead, 16–10. Third quarter The Trojan defense came back strong from the halftime break and forced a punt on the Longhorns' opening drive of the third quarter. During the following USC drive, Leinart hit Jarrett for three passes totaling 35 yards, and White added the final 17 yards over two carries, capping the seven-play, 62-yard drive with a three-yard touchdown run. It was his second of the game, and it put the Trojans ahead, 17–16. Behind the running of Jamaal Charles, who finished the game with five carries for 34 yards, and Young, who ran 19 times for 200 yards, Texas quickly answered. Young scored the first of his three rushing touchdowns from 14 yards out, and Pino's successful extra-point attempt moved the Longhorns ahead, 23–17. The lead changed hands once more with 4:07 to play in the third quarter, as Leinart hit tight end Dominique Byrd for two of his four catches and 21 of his 32 yards in the next drive and set up the next score. Although USC had been stopped on a fourth-and-short attempt earlier in the game, it decided to gamble again on fourth-and-one from the 12, and this time White muscled it all the way to the end zone to record his third rushing touchdown of the game and the 57th of his career. The achievement set a USC record. 
The Longhorns reached Trojan territory on the ensuing drive, with Young's 45-yard run constituting most of the work, but ultimately the Trojans forced a field-goal attempt from USC's 14-yard line, and, on the first play of the fourth quarter, Pino missed a 31-yard kick that would have put his team ahead by two. Fourth quarter Behind Leinart's precise throwing (despite one interception, Leinart finished the day with otherwise stellar numbers, completing 29-of-40 passes for 365 yards and one touchdown), the Trojans drove 80 yards over nine plays in 3:36. Bush scored his only touchdown of the game on a 26-yard run to end the drive. (Bush finished the game with 95 yards on just six catches and gained 82 yards on 13 carries; he also averaged 20.2 yards on five punt returns.) The Longhorns’ next possession began with an apparent reception and fumble by Jamaal Charles. The error would have given USC the ball on the Texas 40, but replay officials ruled the catch incomplete. Two Vince Young completions to wide receiver Billy Pittman, who caught four passes for 53 yards on the day, helped the Longhorns drive to USC's 17-yard line on the next possession. When Young fumbled on third down, Texas settled for a 34-yard field goal that brought the Longhorns to within five, 31–26. On the ensuing possession, the Trojans gained 48 yards with a 33-yard Leinart pass to Kirtman and a 15-yard roughing-the-passer penalty. This set up a 22-yard toss from Leinart to Jarrett – a play that saw Texas cornerback Tarrel Brown get injured while trying to tackle Jarrett at the goal line. Brown and a teammate collided as Jarrett stretched the ball over the goal line, and the successful extra-point attempt gave USC its biggest lead of the game, 38–26. As Texas took the ball trailing by two scores with just 6:42 to play, Young accounted for all 69 yards of a Longhorns' scoring drive that took just 2:39 to complete, rushing for 25 (including a 17-yard touchdown run) and completing five passes for the rest of the necessary yardage. (For the game, Young completed 75 percent of his passes – 30-of-40 – for 267 yards, with no passing touchdowns and no interceptions.) Pino's extra point again brought Texas to within five with 3:58 to play. Though the Longhorns' defense yielded one first down on the subsequent USC drive, it held the Trojans, who turned to LenDale White on a third down at midfield only to see him lose the ball and have it recovered by wide receiver Steve Smith just two yards short of a first down. A Texas timeout stopped the clock with 2:13 to play. Then, in what proved the most pivotal coaching decision of the game, Trojans coach Pete Carroll elected to give his #2-ranked offense (behind only Texas), which had averaged 582.2 yards and 50.0 points per game on the year, an opportunity to convert fourth down and two at the Texas 45-yard line. But the Texas defense, which had failed to stop this same play three times, held White to a one-yard gain. The result was a turnover on downs at the Longhorns' 44-yard line with 2:09 to play. During its final drive, Texas faced third-and-12. Texas converted for a first down at USC's 46-yard line after a completed pass for seven yards and a Trojans face-mask penalty. From there, Young rushed once for seven yards between two passes for 26 yards to little-used wide receiver Brian Carter, moving the ball to the USC 14-yard line. Facing fourth-and-five from the nine-yard line, Young received the shotgun snap and found his receivers covered. 
Young bolted towards the right sideline and received a critical block from Justin Blalock and won a footrace to the end zone. That score, Young's third rushing touchdown of the game, gave the Longhorns a one-point lead with 19 seconds left to play. When Texas lined up for a two-point conversion, USC used its last time out. Young successfully reached the end zone on the ensuing play, giving his team a 41–38 lead. Leinart took the ball with only 16 seconds left and no timeouts. He drove the Trojans to the Texas 43-yard line when time expired. The loss was only the second of Leinart's college career, and the first Rose Bowl loss for USC since the 1989 game. Scoring summary Analysis and aftermath Vince Young was named the Rose Bowl's MVP for the second time in as many years (the first time being the 2005 Rose Bowl). He is only the fourth player in Rose Bowl history (and the only player from the Big 12 Conference) to accomplish this feat. USC head coach Pete Carroll regarded Young's performance as "the greatest he's ever seen by any one guy". Though USC converted on 57 percent of third downs (to only 27 percent for the Longhorns), it was unable to gain two yards on a 4th down try late in the 4th quarter when doing so might have ensured a Trojan victory. Curiously, the Trojans did not have Heisman winner Bush on the field for the 4th down play; LenDale White received the handoff and was stuffed by the Longhorn defense. The Trojans also hurt themselves with two turnovers in Texas territory early in the game. Mack Brown, previously maligned for his inability to win big games, thus ended the fourth-longest winning streak in Division I-A history – and the longest since a 35-game streak by Toledo ended in 1971 – and, behind Young, who accounted for 839 yards of total offense in his two Rose Bowl appearances, won the first national title for Texas since 1970. Young accounted for 467 yards in the championship game, which stands as the best performance ever in a BCS Championship game. By winning, Texas assured itself a first-place ranking in the USA Today coaches' poll, and its achievement was confirmed when AP polling sportswriters unanimously voted Texas number one on January 5, 2006; USC finished a unanimous second in each poll. On January 11, 2006, Young was awarded the Manning Award, given annually to the nation's top quarterback. Unlike any other major college football award, the Manning is based partly on bowl results. Four players from the game went on to become top-ten picks in the 2006 NFL Draft: Reggie Bush (2nd overall, New Orleans), Vince Young (3rd overall, Tennessee), Michael Huff (7th overall, Oakland), and Matt Leinart (10th overall, Arizona). Taitusi Lutui, Fred Matua, LenDale White, David Kirtman, Winston Justice, Cedric Griffin, David Thomas, Frostee Rucker, Dominique Byrd, Darnell Bing, Jonathan Scott, LaJuan Ramsey, and Rodrique Wright were drafted in the next six rounds. This was longtime ABC Sports announcer Keith Jackson's last game, and was also the last college football game aired on ABC under the ABC Sports name, as ABC's sports division began going by the name of corporate sibling ESPN on ABC in September 2006. The victory, Texas' 800th of all time, gave UT its fourth national championship in football. Since the game, the media, coaches, and other commentators have heaped praise upon the Texas team, Vince Young, and the Rose Bowl performance. For instance, Sports Illustrated called the game "perhaps the most stunning bowl performance ever". 
Both the Rose Bowl win and the Longhorns' overall season have been cited as standing among the greatest performances in college football history by publications such as College Football News, the Atlanta Journal-Constitution, Scout.com, and Sports Illustrated. The Longhorns and the Trojans were together awarded the 2006 ESPY Award by ESPN for the "Best Game" in any sport. In December 2006, both Sports Illustrated and Time Magazine picked the game as the Best Sports Moment in 2006. Voters on Yahoo Sports also voted it as the Sports Story of the Year for both college football and overall, edging out 12 other stories in the overall voting and receiving 13,931 votes out of 65,641. In the days that followed the Longhorns' victory, the Trinity River in Dallas mysteriously turned a "burnt orange" color. Authorities said that it may have been caused by someone dumping dye into the river. The game received the highest Nielsen ratings for the Rose Bowl since the 1986 Rose Bowl between UCLA and Iowa. In 2007, ESPN compiled a list of the top 100 plays in college football history; Vince Young's game-winning touchdown in the 2006 Rose Bowl ranked number 5. The 2006 Rose Bowl Game and its unreviewed, controversial officials' rulings have been cited as a key reason the NCAA Football Rules Committee added a coach's challenge the following season. Ironically, USC opted to go without instant replay for its game against Notre Dame that season, and won on the final play when Reggie Bush illegally shoved Matt Leinart over the goal line. USC and Texas did not meet again in a football game until 2017, which USC won in overtime, 27–24. The two teams met again in 2018 in Texas as the second game of a home-and-home series between the two schools. Texas won 37–14, with USC failing to score any points after the first quarter. Game records Notes References External links Rose Bowl Rose Bowl Game BCS National Championship Game Texas Longhorns football bowl games USC Trojans football bowl games January 2006 sports events in the United States Rose Nicknamed sporting events 21st century in Pasadena, California
3827992
https://en.wikipedia.org/wiki/Peter%20Biddle
Peter Biddle
Peter Nicholas Biddle (born December 22, 1966) is a software evangelist from the United States. His primary fields of interest include content distribution, secure computing, and encryption. Career Biddle joined Microsoft in 1990 as a Support Engineer. He was one of the first authors to describe the concept of the darknet, an early participant in the Secure Digital Music Initiative (SDMI), the Copy Protection Technical Working Group, and the Trusted Computing Platform Alliance, an early technical evangelist for DVD and digital video recorder technology, and the founding leader of Microsoft's Next-Generation Secure Computing Base (code-named Palladium) initiative; he was also responsible for starting Microsoft's hypervisor development efforts. Biddle built and led the engineering team that shipped BitLocker Drive Encryption, a Trusted Platform Module-rooted disk encryption feature for Windows Vista. BitLocker continues to be used by Microsoft today, having been shipped with certain versions of Windows 7 Ultimate, Windows 8, Windows 8.1 and Windows Server 2008 and later. In 1998, Biddle publicly demonstrated real-time consumer digital video recorder functionality using an inexpensive MPEG2 hardware encoder at the WinHEC conference during a speech by Bill Gates. Biddle was the author of the diagram on page 13 in the SDMI specification, which enabled the playback of unknown or unlicensed content on SDMI-compliant players, and was a vocal proponent within SDMI for the external validation of digital watermarking. On August 8, 2007, the London-based company Trampoline Systems, a company exploring what it called the Enterprise 2.0 space, announced that Biddle would be moving to London to join it as Vice President of Development after leaving Microsoft. While at Trampoline, Biddle ran all product development and engineering efforts. In 2008, Biddle joined Intel Corporation as a director of the Google program office. During his tenure at Intel, he also served in other positions, including evangelist and General Manager of Intel's AppUp digital storefront, which was shuttered in 2014 after four years' operation, Director of the Intel Atom Developer Program, described as "...a framework for developers to create and sell software applications for netbooks with support for handhelds and smart phones available in the future", and General Manager of Intel's Cloud Services Platform. In 2009 he became a surprise witness in the RealNetworks, Inc. v. DVD Copy Control Association, Inc. case where, as one of the drafters of the CSS license, he served as an expert on certain CSS licensing issues at the heart of the case. For more than three years during his tenure at Intel, Biddle hosted the podcast "MashUp Radio", an online publication sponsored by Intel. In 2014, Biddle founded TradLabs, a company using technology to make rock climbing safer and more accessible. Personal life Biddle is a member of the Biddle family of Philadelphia and is a descendant of Nicholas Biddle, whose name he bears as his middle name. Other notable Biddles include Charles Biddle, Vice President of Pennsylvania, and Mary Duke Biddle Trent Semans, an American heiress and philanthropist. External links Plucky rebels: Being agile in an un-agile place - Peter Biddle at TED@Intel Mobile Insights Radio with Peter Biddle Twit.tv - This week in Law 213 The Darknet & the Future of Everything* - Keynote Address Gov 2.0 L.A. Tradlabs References 1966 births Living people Microsoft employees Microsoft Windows people
641975
https://en.wikipedia.org/wiki/Mac%20OS%20X%20Tiger
Mac OS X Tiger
Mac OS X Tiger (version 10.4) is the fifth major release of macOS, Apple's desktop and server operating system for Mac computers. Tiger was released to the public on April 29, 2005 for US$129.95 as the successor to Mac OS X 10.3 Panther. Some of the new features included a fast searching system called Spotlight, a new version of the Safari web browser, Dashboard, a new 'Unified' theme, and improved support for 64-bit addressing on Power Mac G5s. Mac OS X 10.4 Tiger offered a number of features, such as fast file searching and improved graphics processing, that Microsoft had spent several years struggling to add to Windows with acceptable performance. Mac OS X 10.4 Tiger was included with all new Macs, and was also available as an upgrade for existing Mac OS X users, or users of supported pre-Mac OS X systems. The server edition, Mac OS X Server 10.4, was also available for some Macintosh product lines. Six weeks after its official release, Apple had delivered 2 million copies of Mac OS X 10.4 Tiger, representing 16% of all Mac OS X users. Apple claimed that Mac OS X 10.4 Tiger was the most successful Apple OS release in the company's history. At the WWDC on June 11, 2007, Apple's CEO, Steve Jobs, announced that out of the 22 million Mac OS X users, more than 67% were using Mac OS X 10.4 Tiger. Apple announced a transition to Intel x86 processors during Mac OS X 10.4 Tiger's lifetime, making it the first Apple operating system to work on Apple–Intel architecture machines. The original Apple TV, released in March 2007, shipped with a customized version of Mac OS X 10.4 Tiger branded "Apple TV OS" that replaced the usual GUI with an updated version of Front Row. Mac OS X 10.4 Tiger was succeeded by Mac OS X 10.5 Leopard on October 26, 2007, after 30 months, making Mac OS X 10.4 Tiger the longest running version of Mac OS X. The last security update released for Mac OS X 10.4 Tiger users was the 2009-005 update. The next security update, 2009-006 only included support for Mac OS X 10.5 Leopard and Mac OS X 10.6 Snow Leopard. The latest supported version of QuickTime is 7.6.4. The latest version of iTunes that can run on Mac OS X 10.4 Tiger is 9.2.1, because 10.0 only supports Mac OS X 10.5 Leopard and later. Safari 4.1.3 is the final version for Mac OS X 10.4 Tiger as of November 18, 2010. Despite not having received security updates since then, Mac OS X 10.4 Tiger remains popular with Power Mac users and retrocomputing enthusiasts due to its wide software and hardware compatibility, as it is the last Mac OS X version to support the Classic Environment, a Mac OS 9 compatibility layer, and PowerPC G3 processors. The Classic Environment isn't supported on Macs with Intel processors, as Mac OS 9 only supports PowerPC processors. System requirements Mac OS X 10.4 Tiger was initially available in a PowerPC edition, with an Intel edition released beginning at Mac OS X 10.4.4 Tiger. There is no universal version of the client operating system, although Mac OS X 10.4 Tiger Server was made available on a universal DVD from version Mac OS X 10.4.7 Tiger. While Apple shipped the PowerPC edition bundled with PowerPC-based Macs and also sold it as a separate retail box, the only way to obtain the Intel version was to buy an Intel-based Mac bundled with it. However, it was possible to buy the 'restore' DVDs containing the Intel version through unofficial channels such as eBay, and officially through Apple if one could provide proof of purchase of the appropriate Intel Mac. 
These grey-colored ‘restore’ DVDs, supplied with new Macs, are designed to restore only on the model of Mac for which they are intended. However, they can be modified to work on any Intel Mac. The retail PowerPC-only DVD can be used on any PowerPC-based Mac supported by Mac OS X 10.4 Tiger. The system requirements of the PowerPC edition are: a Macintosh computer with a PowerPC G3, G4 or G5 processor; built-in FireWire; a DVD drive for installation; 256 MB of RAM; and 3 GB of available hard disk space (4 GB if the user installs the developer tools). Mac OS X 10.4 Tiger removed support for older New World ROM Macs such as the original iMacs and iBooks that were supported in Mac OS X 10.3 Panther; however, it is possible to install Mac OS X 10.4 Tiger on these Macs using third-party software (such as XPostFacto) that overrides the checks made at the beginning of the installation process. Likewise, machines such as beige Power Mac G3s and ‘Wall Street’ PowerBook G3s that were dropped by Mac OS X 10.3 Panther can also be made to run both Mac OS X 10.3 Panther and Mac OS X 10.4 Tiger in this way. Also, Mac OS X 10.4 Tiger can be installed on unsupported New World ROM Macs by installing it on a supported Mac, then swapping hard drives. Old World ROM Macs require the use of XPostFacto to install Mac OS X 10.4 Tiger. Mac OS X 10.4 Tiger was the last version of Mac OS X to support the PowerPC G3 processor. History Apple CEO Steve Jobs first presented Mac OS X 10.4 Tiger in his keynote presentation at the WWDC on June 28, 2004, ten months before its commercial release in April 2005. Four months before that official release, several non-commercial developer releases of Mac OS X 10.4 Tiger leaked onto the internet via BitTorrent file sharers. It was first mentioned on Apple's website on May 4, 2004. Apple sued these file sharers. On April 12, 2005, Apple announced that Mac OS X 10.4 Tiger's official, worldwide release would be April 29. All Apple Stores around the world held Mac OS X 10.4 Tiger seminars, presentations and demos. On June 6, 2005, at the WWDC in San Francisco, Jobs reported that nearly two million copies had been sold in Mac OS X 10.4 Tiger's first six weeks of release, making Mac OS X 10.4 Tiger the most successful operating system release in Apple's history. Jobs then disclosed that Mac OS X had been engineered from its inception to work with Intel's x86 line of processors in addition to the PowerPC, the CPU for which the operating system had always been publicly marketed. Apple concurrently announced its intent to release the first x86-based computers in June 2006, and to move the rest of its computers to x86 microprocessors by June 2007. On January 10, 2006, Apple presented its new iMac and MacBook Pro computers running on Intel Core Duo processors, and announced that the entire Apple product line would run on Intel processors by the end of 2006. Apple then released the Mac Pro and announced the new Xserve on August 8, completing the Intel transition in 210 days, roughly ten months ahead of the original schedule. Mac OS X 10.4 Tiger is the first version of Mac OS X to be supplied on a DVD, although the DVD could originally be exchanged for CDs for $9.95. It is also the only version of Mac OS X/OS X/macOS to have had an update version number ending with a value greater than 9, as the last version of Mac OS X 10.4 Tiger was 10.4.11.
New and changed features End-user features Apple advertises that Mac OS X 10.4 Tiger has over 150 new and improved features, including: Spotlight — Spotlight is a full-text and metadata search engine, which can search everything on one's Mac including Microsoft Word documents, iCal calendars and Address Book contact cards. The feature is also used to build the concept of ‘smart folders’ into the Finder. Spotlight will index files as they are saved, so they can be quickly and easily found through a search-as-you-type box in the menu bar. As a side-effect, it adds hidden folders and indexing files to removable media like USB flash drives. iChat AV — The new iChat AV 3.0 in Mac OS X 10.4 Tiger supports up to four participants in a video conference and ten participants in an audio conference. It also now supports communication using the XMPP protocol. A XMPP server called iChat Server is included on Mac OS X 10.4 Tiger Server. Safari RSS — The new Safari 2.0 web browser in Mac OS X 10.4 Tiger features a built-in reader for RSS and Atom web syndication that can be accessed easily from an RSS button in the address bar of the web browser window. An updated version of Safari, included as part of the free Mac OS X (10.4.3 Tiger update, can also pass the Acid2 web standards test. Mail 2 — The new version of Mail.app email client included in Mac OS X 10.4 Tiger featured an updated interface, "Smart Mailboxes", which utilizes the Spotlight search system, parental controls, as well as several other features. Dashboard — The Dashboard is a new mini-applications layer based on HTML, CSS, and JavaScript, which returns the desk accessories concept to the Mac OS. These accessories are known as widgets. It comes with several widgets such as Weather, World Clock, Unit Converter, and Dictionary/Thesaurus. More are available for free online. Its similarity to the Konfabulator application caused some criticism. Automator — A scripting tool to link applications together to form complex automated workflows (written in AppleScript, Cocoa, or both). Automator comes with a complete library of actions for several applications that can be used together to make a Workflow. VoiceOver — screen reader interface similar to Jaws for Windows and other Windows screen readers that offers the blind and visually impaired user keyboard control and spoken English descriptions of what is happening on screen. VoiceOver enables users with visual impairment to use applications via keyboard commands. VoiceOver is capable of reading aloud the contents of files including web pages, mail messages and word processing files. Complete keyboard navigation lets the user control the computer with the keyboard rather than the mouse, a menu is displayed in a window showing all the available keyboard commands that can be used. A complete built-in Dictionary/Thesaurus based on the New Oxford American Dictionary, Second Edition, accessible through an application, Dictionary, a Dashboard widget, and as a system-wide command (see below). .Mac syncing — Though this is not a new feature, .Mac syncing in Tiger is much improved over Panther. Syncing tasks in Tiger are now accomplished through the .Mac system preferences pane rather than the iSync application. QuickTime 7 — A new version of Apple's multimedia software has support for the new H.264/AVC codec, which offers better quality and scalability than other video codecs. This new codec is used by iChat AV for clearer video conferencing. 
New classes within Cocoa provide full access to QuickTime for Cocoa application developers. The new QuickTime 7 player application bundled with Tiger now includes more advanced audio and video controls as well as a more detailed Information dialog, and the new player has been rebuilt using Apple's Cocoa API to take advantage of the new technologies more easily. New Unix features — New versions of cp, mv, and rsync that support files with resource forks. Command-line support for features like the above-mentioned Spotlight are also included. Xcode 2.0 — Xcode 2.0, Apple's Cocoa development tool now includes visual modelling, an integrated Apple Reference Library and graphical remote debugging. New applications in Tiger Automator — Automator uses workflows to process repetitive tasks automatically Grapher — Grapher is a new application capable of creating 2D and 3D graphs similar to those of Graphing Calculator. Dictionary — A dictionary and thesaurus program that uses the New Oxford American Dictionary. It has a fast GUI for displaying the Dictionary, and allows the user to search the dictionary with Spotlight, to print definitions, and to copy and paste text into documents. Dictionary also provides a Dictionary service in the Application menu, and Cocoa and WebKit provides a global keyboard shortcut (ctrl-⌘-D by default) for all applications that display text with them. Its use was furthered in the next version of OS X by providing definitions from Wikipedia. The Dictionary application is a more feature-filled version of the Dictionary widget. Quartz Composer — Quartz Composer is a development tool for processing and rendering graphical data. AU Lab — AU Lab is a developer application for testing and mixing Audio Units. Dashboard — Dashboard is a widget application. Tiger widgets include: a calculator, dictionary, a world clock, a calendar, and more (full list). Users can also download and install more widgets. Improvements An upgraded kernel with optimized kernel resource locking and access control lists, and with support for 64-bit userland address spaces on machines with 64-bit processors. An updated libSystem with both 32-bit and 64-bit versions; combined with the aforementioned kernel change, this allows individual applications to address more than 4 GB of memory when run on 64-bit processors, although an application using Apple libraries or frameworks other than libSystem would need to have two processes, one running the 64-bit code and one running the code that requires other libraries and frameworks. A new startup daemon called launchd that allows for faster booting. The printing dialog in Tiger now features a drop down menu for creating PDFs, sending PDFs to Mail, and other PDF related actions. However, the user interface was criticized for creating a hybrid widget that looks like a plain button but acts like a pop-up menu. This is one of only three places in the entire Mac OS X interface where such an element appears. Dock menus now have menu items to open an application at login, or to remove the icon from the dock. The Window menu in the Finder now features a "Cycle Through Windows" menu item. The Get Info window for items in the Finder now includes a "More Info" section that includes Spotlight information tags such as Image Height & Width, when the file was last opened, and where the file originated. Early development of resolution independence. Apple notes that this will be a user-level feature in a future version of Mac OS X. 
Among the changes, the maximum size of icons was increased to 256x256. However, the Finder does not yet support this size. Technologies A new graphics processing API, Core Image, leveraging the power of the available accelerated graphics cards. Core Image allows programmers to easily leverage programmable GPUs for fast image processing for special effects and image correction tools. Some of the included Image Units are Blur, Color Blending, Generator Filters, Distortion Filters, Geometry Filters, Halftone features and much more. A new data persistence API, Core Data, that makes it easier for developers to handle structured data in their applications. The Mac OS X Core Data API helps developers create data structures for their applications. Core Data provides undo, redo and save functions for developers without them having to write any code. A new video graphics API, Core Video, which leverages Core Image to provide real-time video processing. Apple's Motion real-time video effects program takes advantage of Core Video in Tiger. Core Video lets developers easily integrate real-time video effects and processing into their applications. Core Audio integrates a range of audio functionality directly into the operating system. Interface differences In Tiger, the menu bar displayed at the top of the screen now features a colored Spotlight button in the upper right corner; the menu itself has a smoother 'glassy' texture to replace the faint pinstripes in Panther. Also of note, Tiger introduces a new window theme, often described as 'Unified'. A variation on the standard, non-brushed metal theme used since the introduction of Mac OS X, this theme integrates the title bar and the toolbar of a window. A prominent example of an application that utilizes this theme is Mail. Accessibility Tiger was the first version of Mac OS X to include the "Zoom" screen magnifier functionality, which allowed the user to zoom on to the area around the mouse by holding CONTROL and scrolling the mouse wheel up or down (to zoom in and out respectively). Tiger trademark lawsuit Shortly before the release of Mac OS X Tiger, the computer retailer TigerDirect.com, Inc. filed a lawsuit against Apple, alleging that Apple infringed TigerDirect.com's trademark with the Mac OS X Tiger operating system. The following is a quotation from TigerDirect.com's court memorandum: Apple Computer's use of its infringing family of Tiger marks to expand sales of products besides its operating system software is already evident — for example, Apple Computer is offering free iPods and laptops as part of its Tiger World Premiere giveaway. In short, notwithstanding its representation to the PTO that it would only use Tiger in connection with their unique computer operating system software, Apple Computer has in recent weeks used a family of Tiger marks in connection with a substantially broader group of products and services, including the very products and services currently offered by Tiger Direct under its famous family of Tiger marks. In 2005 TigerDirect was denied a preliminary injunction that would have prevented Apple from using the mark while the case was decided. Apple and TigerDirect reached a settlement in 2006, after which TigerDirect withdrew its opposition. Support for Intel processors At Apple's 2005 Worldwide Developers Conference, CEO Steve Jobs announced that the company would begin selling Mac computers with Intel x86 processors in 2006. 
To allow developers to begin producing software for these Intel-based Macs, Apple made available a prototype Intel-based Mac ("Developer Transition Kit") that included a version of Mac OS X v10.4.1 designed to run on x86 processors. This build included Apple's Rosetta compatibility layer — a translation process that allows x86-based versions of the OS to run software designed for PowerPC with a moderate performance penalty. This is contrasted with the contemporary Mac OS 9 Classic mode, which used comparably larger amounts of system resources. Soon after the Developer Transition Kits began shipping, copies of Tiger x86 were leaked onto file sharing networks. Although Apple had implemented a Trusted Computing DRM scheme in the transition hardware and OS in an attempt to stop people installing Tiger x86 on non-Apple PCs, the OSx86 project soon managed to remove this restriction. As Apple released each update with newer safeguards to prevent its use on non-Apple hardware, unofficially modified versions were released that circumvented Apple's safeguards. However, with the release of 10.4.5, 10.4.6, and 10.4.7 the unofficially modified versions continued to use the kernel from 10.4.4 because later kernels have hardware locks and depend heavily on EFI. By late 2006, the 10.4.8 kernel had been cracked. At MacWorld San Francisco 2006, Jobs announced the immediate availability of Mac OS X v10.4.4, the first publicly available release of Tiger compiled for both PowerPC- and Intel x86-based machines. Release history References External links Ars Technica Mac OS X Tiger Review at Ars Technica Mac OS X Tiger at Wikibooks 4 IA-32 operating systems X86-64 operating systems PowerPC operating systems 2005 software Computer-related introductions in 2005
15098354
https://en.wikipedia.org/wiki/List%20of%20software%20bugs
List of software bugs
Many software bugs are merely annoying or inconvenient but some can have extremely serious consequences – either financially or as a threat to human well-being. The following is a list of software bugs with significant consequences. Space A booster went off course during launch, resulting in the destruction of NASA Mariner 1. This was the result of the failure of a transcriber to notice an overbar in a written specification for the guidance program, resulting in the coding of an incorrect formula in its FORTRAN software. (July 22, 1962). The initial reporting of the cause of this bug was incorrect. NASA's 1965 Gemini 5 mission landed short of its intended splashdown point when the pilot compensated manually for an incorrect constant for the Earth's rotation rate. A 360-degree rotation corresponding to the earth's rotation relative to the fixed stars was used instead of the 360.98-degree rotation in a 24-hour solar day. The shorter length of the first three missions and a computer failure on Gemini 4 prevented the bug from being detected earlier. The Russian Space Research Institute's Phobos 1 (Phobos program) deactivated its attitude thrusters and could no longer properly orient its solar arrays or communicate with Earth, eventually depleting its batteries. (September 10, 1988). The European Space Agency's Ariane 5 Flight 501 was destroyed 40 seconds after takeoff (June 4, 1996). The US$1 billion prototype rocket self-destructed due to a bug in the on-board guidance software. In 1997, the Mars Pathfinder mission was jeopardised by a bug in concurrent software shortly after the rover landed, which was found in preflight testing but given a low priority as it only occurred in certain unanticipated heavy-load conditions. The problem, which was identified and corrected from Earth, was due to computer resets caused by priority inversion. In 2000, a Zenit 3SL launch failed due to faulty ground software not closing a valve in the rocket's second stage pneumatic system. The European Space Agency's CryoSat-1 satellite was lost in a launch failure in 2005 due to a missing shutdown command in the flight control system of its Rokot carrier rocket. NASA Mars Polar Lander was destroyed because its flight software mistook vibrations caused by the deployment of the stowed legs for evidence that the vehicle had landed and shut off the engines 40 meters from the Martian surface (December 3, 1999). Its sister spacecraft Mars Climate Orbiter was also destroyed, due to software on the ground generating commands based on parameters in pound-force (lbf) rather than newtons (N). A mis-sent command from Earth caused the software of the NASA Mars Global Surveyor to incorrectly assume that a motor had failed, causing it to point one of its batteries at the sun. This caused the battery to overheat (November 2, 2006). NASA's Spirit rover became unresponsive on January 21, 2004, a few weeks after landing on Mars. Engineers found that too many files had accumulated in the rover's flash memory. It was restored to working condition after deleting unnecessary files. Japan's Hitomi astronomical satellite was destroyed on March 26, 2016, when a thruster fired in the wrong direction, causing the spacecraft to spin faster instead of stabilize. Israel's first attempt to land an unmanned spacecraft on the moon with the Beresheet was rendered unsuccessful on April 11, 2019, due to a software bug with its engine system, which prevented it from slowing down during its final descent on the moon's surface. 
Engineers attempted to correct this bug by remotely rebooting the engine, but by the time they regained control of it, Beresheet could not slow down in time to avert a hard crash landing that disintegrated it. Medical A bug in the code controlling the Therac-25 radiation therapy machine was directly responsible for at least five patient deaths in the 1980s when it administered excessive quantities of beta radiation. Radiation therapy planning software RTP/2 created by Multidata Systems International could incorrectly double the dosage of radiation depending on how the technician entered data into the machine. At least eight patients died, while another 20 received overdoses likely to cause significant health problems (November 2000). See also Instituto Oncológico Nacional#Accident A Medtronic heart device was found vulnerable to remote attacks (March 2008). The Becton Dickinson Alaris Gateway Workstation allows unauthorized arbitrary remote execution (2019). The CareFusion Alaris pump module (8100) will not properly delay an infusion when the "Delay Until" option or "Multidose" feature is used (2015). Tracking years The year 2000 problem spawned fears of worldwide economic collapse and an industry of consultants providing last-minute fixes. A similar problem will occur in 2038 (the year 2038 problem), as many Unix-like systems calculate the time in seconds since 1 January 1970 and store this number as a 32-bit signed integer, for which the maximum possible value is 2,147,483,647 seconds. An error in the payment terminal code for Bank of Queensland rendered many devices inoperable for up to a week. The problem was determined to be an incorrect hexadecimal number conversion routine. When the device's date ticked over to 2010, it skipped six years to 2016, causing terminals to decline customers' cards as expired. Electric power transmission The Northeast blackout of 2003 was triggered by a local outage that went undetected due to a race condition in General Electric Energy's XA/21 monitoring software. Administration The software of the A2LL system for handling unemployment and social services in Germany presented several errors with large-scale consequences, such as sending payments to invalid account numbers in 2004. Telecommunications AT&T long-distance network crash (January 15, 1990), in which the failure of one switching system would cause a message to be sent to nearby switching units to tell them that there was a problem. Unfortunately, the arrival of that message would cause those other systems to fail too – resulting in a cascading failure that rapidly spread across the entire AT&T long-distance network. In January 2009, Google's search engine erroneously notified users that every web site worldwide was potentially malicious, including its own. In May 2015, iPhone users discovered a bug where sending a certain sequence of characters and Unicode symbols as a text to another iPhone user would crash the receiving iPhone's SpringBoard interface, and could also crash the entire phone, induce a factory reset, or disrupt the device's connectivity to a significant degree, preventing it from functioning normally. The bug persisted for weeks, gained substantial notoriety and saw a number of individuals using the bug to play pranks on other iOS users, before Apple eventually patched it on June 30, 2015, with iOS 8.4.
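As an illustrative aside to the Mars Climate Orbiter entry in the Space section above, the damage done by mixing pound-force with newtons is easy to quantify: a value produced in pound-force seconds but read as newton-seconds understates the real impulse by a factor of roughly 4.45. The figures in the sketch below are invented for the example and are not taken from the mission.

# Hypothetical figures only: they illustrate the unit mismatch, not the
# actual Mars Climate Orbiter telemetry.
LBF_TO_N = 4.4482216152605            # one pound-force expressed in newtons

reported = 100.0                      # impulse computed by ground software, in lbf*s
actual_impulse = reported * LBF_TO_N  # what the thrusters really delivered, in N*s
assumed_impulse = reported            # what the navigation software believed, in N*s

print(f"actual impulse:  {actual_impulse:.1f} N*s")
print(f"assumed impulse: {assumed_impulse:.1f} N*s")
print(f"underestimated by a factor of {actual_impulse / assumed_impulse:.2f}")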
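The year 2038 limit mentioned under Tracking years above follows directly from the arithmetic of a 32-bit signed counter. The short sketch below is a minimal illustration, not code from any affected system; it shows the timestamp wrapping from January 2038 back to December 1901 once the counter passes 2,147,483,647.

import datetime

# A Unix timestamp held in a 32-bit signed integer wraps to a large
# negative value one second after reaching 2**31 - 1.
INT32_MAX = 2**31 - 1
EPOCH = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)

def as_int32(seconds):
    """Reduce a second count to the 32-bit signed two's-complement range."""
    return (seconds + 2**31) % 2**32 - 2**31

for t in (INT32_MAX, INT32_MAX + 1):
    wrapped = as_int32(t)
    print(t, "->", wrapped, "->", EPOCH + datetime.timedelta(seconds=wrapped))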
Military The software error of a MIM-104 Patriot caused its system clock to drift by one third of a second over a period of one hundred hours – resulting in failure to locate and intercept an incoming Iraqi Al Hussein missile, which then struck Dharan barracks, Saudi Arabia (February 25, 1991), killing 28 Americans. A Royal Air Force Chinook helicopter crashed into the Mull of Kintyre in June 1994, killing 29. Initially, the crash was dismissed as pilot error, but an investigation by Computer Weekly uncovered sufficient evidence to convince a House of Lords inquiry that it may have been caused by a software bug in the aircraft's engine control computer. Smart ship USS Yorktown was left dead in the water in 1997 for nearly 3 hours after a divide by zero error. In April 1992 the first F-22 Raptor crashed while landing at Edwards Air Force Base, California. The cause of the crash was found to be a flight control software error that failed to prevent a pilot-induced oscillation. While attempting its first overseas deployment to the Kadena Air Base in Okinawa, Japan, on 11 February 2007, a group of six F-22 Raptors flying from Hickam AFB, Hawaii, experienced multiple computer crashes coincident with their crossing of the 180th meridian of longitude (the International Date Line). The computer failures included at least navigation (completely lost) and communication. The fighters were able to return to Hawaii by following their tankers, something that might have been problematic had the weather not been good. The error was fixed within 48 hours, allowing a delayed deployment. Media In the Sony BMG copy protection rootkit scandal (October 2005), Sony BMG produced a Van Zant music CD that employed a copy protection scheme that covertly installed a rootkit on any Windows PC that was used to play it. Their intent was to hide the copy protection mechanism to make it harder to circumvent. Unfortunately, the rootkit inadvertently opened a security hole resulting in a wave of successful trojan horse attacks on the computers of those who had innocently played the CD. Sony's subsequent efforts to provide a utility to fix the problem actually exacerbated it. Video gaming Eve Onlines deployment of the Trinity patch erased the boot.ini file from several thousand users' computers, rendering them unable to boot. This was due to the usage of a legacy system within the game that was also named boot.ini. As such, the deletion had targeted the wrong directory instead of the /eve directory. The Corrupted Blood incident was a software bug in World of Warcraft that caused a deadly, debuff-inducing virtual disease that could only be contracted during a particular raid to be set free into the rest of the game world, leading to numerous, repeated deaths of many player characters. This caused players to avoid crowded places in-game, just like in a "real world" epidemic, and the bug became the center of some academic research on the spread of infectious diseases. On June 6, 2006, the online game RuneScape suffered from a bug that enabled certain player characters to kill and loot other characters, who were unable to fight back against the affected characters because the game still thought they were in player-versus-player mode even after they were kicked out of a combat ring from the house of a player who was suffering from lag while celebrating an in-game accomplishment. 
Players who were killed by the glitched characters lost many items, and the bug was so devastating that the players who were abusing it were soon tracked down, caught and banned permanently from the game, but not before they had laid waste to the region of Falador, thus christening the bug "Falador Massacre". In the 256th level of Pac-Man, a bug results in a kill screen. The maximum number of fruit available is seven and when that number rolls over, it causes the entire right side of the screen to become a jumbled mess of symbols while the left side remains normal. Upon initial release, the ZX Spectrum game Jet Set Willy was impossible to complete because of a severe bug that corrupted the game data, causing enemies and the player character to be killed in certain rooms of the large mansion where the entire game takes place. The bug, known as "The Attic Bug", would occur when the player entered the mansion's attic, which would then cause an arrow to travel offscreen, overwriting the contents of memory and altering crucial variables and behavior in an undesirable way. The game's developers initially excused this bug by claiming that the affected rooms were death traps, but ultimately owned up to it and issued instructions to players on how to fix the game itself. One of the free demo discs issued to PlayStation Underground subscribers in the United States contained a serious bug, particularly in the demo for Viewtiful Joe 2, that would not only crash the PlayStation 2, but would also unformat any memory cards that were plugged into that console, erasing any and all saved data onto them. The bug was so severe that Sony had to apologize for it and send out free copies of other PS2 games to affected players as consolation. Due to a severe programming error, much of the Nintendo DS game Bubble Bobble Revolution is unplayable because a mandatory boss fight failed to trigger in the 30th level. An update for the Xbox 360 version of Guitar Hero II, which was intended to fix some issues with the whammy bar on that game's guitar controllers, came with a bug that caused some consoles to freeze, or even stop working altogether, producing the infamous "red ring of death". Valve's Steam client for Linux could accidentally delete all the user's files in every directory on the computer. This happened to users that had moved Steam's installation directory. The bug is the result of unsafe shellscript programming: STEAMROOT="$(cd "${0%/*}" && echo $PWD)" # Scary! rm -rf "$STEAMROOT/"* The first line tries to find the script's containing directory. This could fail, for example if the directory was moved while the script was running, invalidating the "selfpath" variable $0. It would also fail if $0 contained no slash character, or contained a broken symlink, perhaps mistyped by the user. The way it would fail, as ensured by the && conditional, and not having set -e cause termination on failure, was to produce the empty string. This failure mode was not checked, only commented as "Scary!". Finally, in the deletion command, the slash character takes on a very different meaning from its role of path concatenation operator when the string before it is empty, as it then names the root directory. Minus World is an infamous glitch level from the 1985 game Super Mario Bros., accessed by using a bug to clip through walls in level 1–2 to reach its "warp zone", which leads to the said level. 
As this level is endless, triggering the bug that takes the player there will make the game impossible to continue until the player resets the game or runs out of lives. "MissingNo." is a glitch Pokémon species present in Pokémon Red and Blue, which can be encountered by performing a particular sequence of seemingly unrelated actions. Capturing this Pokémon may corrupt the game's data, according to Nintendo and some of the players who successfully attempted this glitch. This is one of the most famous bugs in video game history, and continues to be well-known. Encryption In order to fix a warning issued by Valgrind, a maintainer of Debian patched OpenSSL and broke the random number generator in the process. The patch was uploaded in September 2006 and made its way into the official release; it was not reported until April 2008. Every key generated with the broken version is compromised (as the "random" numbers were made easily predictable), as is all data encrypted with it, threatening many applications that rely on encryption, such as S/MIME, Tor, SSL- or TLS-protected connections, and SSH. Heartbleed, an OpenSSL vulnerability introduced in 2012 and disclosed in April 2014, removed confidentiality from affected services, causing, among other things, the shutdown of the Canada Revenue Agency's public access to the online filing portion of its website following the theft of social insurance numbers. The Apple "goto fail" bug was a duplicated line of code which caused a public key certificate check to pass incorrectly. The GnuTLS "goto fail" bug was similar to the Apple bug and was found about two weeks later. The GnuTLS bug also allowed attackers to bypass SSL/TLS security. The GnuTLS bug was worse than the Apple bug because it affected over 200 packages on a typical Linux system. Transportation By some accounts, Toyota's electronic throttle control system (ETCS) had bugs that could cause sudden unintended acceleration. The Boeing 787 Dreamliner experienced an integer overflow bug which could shut down all electrical generators if the aircraft was powered on for more than 248 days. A similar problem was found in the Airbus A350, which needs to be powered down before reaching 149 hours of continuous power-on time; otherwise, certain avionics systems or functions would partially or completely fail. In early 2019, the transportation-rental firm Lime discovered a firmware bug in its electric scooters that could cause them to brake unexpectedly and hard enough to throw and injure riders. The Boeing 737 NG had all cockpit displays go blank if a specific type of instrument approach to any one of seven specific airports was selected in the flight management computer. Bombardier CRJ-200 aircraft equipped with flight management systems by Collins Aerospace would make wrong turns during missed approach procedures executed by the autopilot in some specific cases when temperature compensation was activated in cold weather. Finance The Vancouver Stock Exchange index had large errors due to repeated truncation. In January 1982, the index was initialized at 1000 and subsequently updated and truncated to three decimal places on each trade. This was done about 3000 times a day. The accumulated truncations led to an erroneous loss of around 25 points per month. Over the weekend of November 25–28, 1983, the error was corrected, raising the value of the index from its Friday closing figure of 524.811 to 1098.892.
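A minimal simulation of the Vancouver Stock Exchange truncation described above illustrates the drift. Only the truncate-to-three-decimals rule comes from the incident; the per-trade price changes and the number of updates below are invented for the sketch.

import random

random.seed(1)                                  # reproducible illustration
true_index = truncated_index = 1000.0

for _ in range(3000 * 22):                      # roughly 3000 updates a day for about 22 trading days
    change = random.uniform(-0.05, 0.05)        # made-up per-trade movement
    true_index += change
    # The faulty update: truncate (not round) to three decimal places.
    truncated_index = int((truncated_index + change) * 1000) / 1000

print(f"correctly computed index: {true_index:.3f}")
print(f"truncated index:          {truncated_index:.3f}")
print(f"points lost to truncation: {true_index - truncated_index:.3f}")

Each truncation discards on average about half a thousandth of a point, and at thousands of updates a day that adds up to a drift of tens of points per month, the same order of magnitude as the roughly 25 points per month seen on the real index.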
Knight Capital Group lost $440 million in 45 minutes due to the improper deployment of software on servers and the re-use of a critical software flag that caused old unused software code to execute during trading. Blockchain Ethereum The DAO (organization) bug - 3.6M ETH See also London Ambulance Service § Innovation External links Forum on Risks to the Public in Computers and Related Systems References ! Software quality Software testing Quality assurance Safety engineering
14768031
https://en.wikipedia.org/wiki/DNS%20hijacking
DNS hijacking
DNS hijacking, DNS poisoning, or DNS redirection is the practice of subverting the resolution of Domain Name System (DNS) queries. This can be achieved by malware that overrides a computer's TCP/IP configuration to point at a rogue DNS server under the control of an attacker, or through modifying the behaviour of a trusted DNS server so that it does not comply with internet standards. These modifications may be made for malicious purposes such as phishing, for self-serving purposes by Internet service providers (ISPs), by the Great Firewall of China and public/router-based online DNS server providers to direct users' web traffic to the ISP's own web servers where advertisements can be served, statistics collected, or other purposes of the ISP; and by DNS service providers to block access to selected domains as a form of censorship. Technical background One of the functions of a DNS server is to translate a domain name into an IP address that applications need to connect to an Internet resource such as a website. This functionality is defined in various formal internet standards that define the protocol in considerable detail. DNS servers are implicitly trusted by internet-facing computers and users to correctly resolve names to the actual addresses that are registered by the owners of an internet domain. Rogue DNS server A rogue DNS server translates domain names of desirable websites (search engines, banks, brokers, etc.) into IP addresses of sites with unintended content, even malicious websites. Most users depend on DNS servers automatically assigned by their ISPs. A router's assigned DNS servers can also be altered through the remote exploitation of a vulnerability within the router's firmware. When users try to visit websites, they are instead sent to a bogus website. This attack is termed pharming. If the site they are redirected to is a malicious website, masquerading as a legitimate website, in order to fraudulently obtain sensitive information, it is called phishing. Manipulation by ISPs A number of consumer ISPs such as AT&T, Cablevision's Optimum Online, CenturyLink, Cox Communications, RCN, Rogers, Charter Communications (Spectrum), Plusnet, Verizon, Sprint, T-Mobile US, Virgin Media, Frontier Communications, Bell Sympatico, Deutsche Telekom AG, Optus, Mediacom, ONO, TalkTalk, Bigpond (Telstra), TTNET, Türksat, and Telkom Indonesia use or used DNS hijacking for their own purposes, such as displaying advertisements or collecting statistics. Dutch ISPs XS4ALL and Ziggo use DNS hijacking by court order: they were ordered to block access to The Pirate Bay and display a warning page instead. These practices violate the RFC standard for DNS (NXDOMAIN) responses, and can potentially open users to cross-site scripting attacks. The concern with DNS hijacking involves this hijacking of the NXDOMAIN response. Internet and intranet applications rely on the NXDOMAIN response to describe the condition where the DNS has no entry for the specified host. If one were to query the invalid domain name (for example www.example.invalid), one should get an NXDOMAIN response – informing the application that the name is invalid and taking the appropriate action (for example, displaying an error or not attempting to connect to the server). However, if the domain name is queried on one of these non-compliant ISPs, one would always receive a fake IP address belonging to the ISP. 
In a web browser, this behavior can be annoying or offensive as connections to this IP address display the ISP redirect page of the provider, sometimes with advertising, instead of a proper error message. However, other applications that rely on the NXDOMAIN error will instead attempt to initiate connections to this spoofed IP address, potentially exposing sensitive information. Examples of functionality that breaks when an ISP hijacks DNS: Roaming laptops that are members of a Windows Server domain will falsely be led to believe that they are back on a corporate network because resources such as domain controllers, email servers and other infrastructure will appear to be available. Applications will therefore attempt to initiate connections to these corporate servers, but fail, resulting in degraded performance, unnecessary traffic on the Internet connection and timeouts. Many small office and home networks do not have their own DNS server, relying instead on broadcast name resolution. Many versions of Microsoft Windows default to prioritizing DNS name resolution above NetBIOS name resolution broadcasts; therefore, when an ISP DNS server returns a (technically valid) IP address for the name of the desired computer on the LAN, the connecting computer uses this incorrect IP address and inevitably fails to connect to the desired computer on the LAN. Workarounds include using the correct IP address instead of the computer name, or changing the DhcpNodeType registry value to change name resolution service ordering. Browsers such as Firefox no longer have their 'Browse By Name' functionality (where keywords typed in the address bar take users to the closest matching site). The local DNS client built into modern operating systems will cache results of DNS searches for performance reasons. If a client switches between a home network and a VPN, false entries may remain cached, thereby creating a service outage on the VPN connection. DNSBL anti-spam solutions rely on DNS; false DNS results therefore interfere with their operation. Confidential user data might be leaked by applications that are tricked by the ISP into believing that the servers they wish to connect to are available. User choice over which search engine to consult in the event of a URL being mistyped in a browser is removed as the ISP determines what search results are displayed to the user. Computers configured to use a split tunnel with a VPN connection will stop working because intranet names that should not be resolved outside the tunnel over the public Internet will start resolving to fictitious addresses, instead of resolving correctly over the VPN tunnel on a private DNS server when an NXDOMAIN response is received from the Internet. For example, a mail client attempting to resolve the DNS A record for an internal mail server may receive a false DNS response that directed it to a paid-results web server, with messages queued for delivery for days while retransmission was attempted in vain. It breaks Web Proxy Autodiscovery Protocol (WPAD) by leading web browsers to believe incorrectly that the ISP has a proxy server configured. It breaks monitoring software. For example, if one periodically contacts a server to determine its health, a monitor will never see a failure unless the monitor tries to verify the server's cryptographic key. In some, but not most cases, the ISPs provide subscriber-configurable settings to disable hijacking of NXDOMAIN responses. Correctly implemented, such a setting reverts DNS to standard behavior. 
Other ISPs, however, instead use a web browser cookie to store the preference. In this case, the underlying behavior is not resolved: DNS queries continue to be redirected, while the ISP redirect page is replaced with a counterfeit DNS error page. Applications other than web browsers cannot be opted out of the scheme using cookies as the opt-out targets only the HTTP protocol, when the scheme is actually implemented in the protocol-neutral DNS system. Response In the UK, the Information Commissioner's Office have acknowledged that the practice of involuntary DNS hijacking contravenes PECR, and EC Directive 95/46 on Data Protection which require explicit consent for processing of communication traffic. However they have refused to intervene, claiming that it would not be sensible to enforce the law, because it would not cause significant (or indeed any) demonstrable detriment to individuals. In Germany, in 2019 it was revealed that the Deutsche Telekom AG not only manipulated their DNS servers, but also transmitted network traffic (such as non-secure cookies when users did not use HTTPS) to a third party company because the web portal T-Online, at which users were redirected due to the DNS manipulation, was not (any more) owned by the Deutsche Telekom. After a user filed a criminal complaint, the Deutsche Telekom stopped further DNS manipulations. ICANN, the international body responsible for administering top-level domain names, has published a memorandum highlighting its concerns, and affirming: Remedy End users, dissatisfied with poor "opt-out" options like cookies, have responded to the controversy by finding ways to avoid spoofed NXDOMAIN responses. DNS software such as BIND and Dnsmasq offer options to filter results, and can be run from a gateway or router to protect an entire network. Google, among others, run open DNS servers that currently do not return spoofed results. So a user could use Google Public DNS instead of their ISP's DNS servers if they are willing to accept that they use the service under Google's privacy policy and potentially be exposed to another method by which Google can track the user. One limitation of this approach is that some providers block or rewrite outside DNS requests. OpenDNS, owned by Cisco, is a similar popular service which does not alter NXDOMAIN responses. Google in April 2016 launched DNS-over-HTTPS service. This scheme can overcome the limitations of the legacy DNS protocol. It performs remote DNSSEC check and transfers the results in a secure HTTPS tunnel. There are also application-level work-arounds, such as the NoRedirect Firefox extension, that mitigate some of the behavior. An approach like that only fixes one application (in this example, Firefox) and will not address any other issues caused. Website owners may be able to fool some hijackers by using certain DNS settings. For example, setting a TXT record of "unused" on their wildcard address (e.g. *.example.com). Alternatively, they can try setting the CNAME of the wildcard to "example.invalid", making use of the fact that '.invalid' is guaranteed not to exist per the RFC. The limitation of that approach is that it only prevents hijacking on those particular domains, but it may address some VPN security issues caused by DNS hijacking. 
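One rough way to test for the NXDOMAIN rewriting described in this article is to resolve a name that cannot exist and see whether an address comes back anyway. The sketch below is a heuristic illustration rather than a definitive detector: it assumes the system resolver actually forwards the query to the configured DNS servers (some stub resolvers answer reserved names such as .invalid locally), and the probe name it builds is arbitrary.

import secrets
import socket

# Resolving a freshly invented name under the reserved .invalid TLD
# should fail with NXDOMAIN; getting an answer suggests the resolver
# rewrites negative responses.
probe = f"{secrets.token_hex(8)}.example.invalid"

try:
    answers = socket.getaddrinfo(probe, 80)
except socket.gaierror:
    print(f"{probe}: resolution failed as expected - no NXDOMAIN rewriting detected")
else:
    addresses = sorted({entry[4][0] for entry in answers})
    print(f"{probe}: unexpectedly resolved to {addresses} - resolver may be rewriting NXDOMAIN")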
See also Captive portal DNS cache poisoning DNS rebinding DNS spoofing Domain hijacking Dynamic Host Configuration Protocol Pharming Point-to-Point Protocol Spoofing attack TCP reset attack Trojan.Win32.DNSChanger References Domain Name System Internet fraud Hacking (computer security) Internet ethics Internet privacy Internet security
2852207
https://en.wikipedia.org/wiki/1964%20NCAA%20University%20Division%20football%20season
1964 NCAA University Division football season
The NCAA was without a playoff for the major college football teams in the University Division, later known as Division I-A, during the 20th century. The NCAA recognizes Division I-A national champions based on the final results of polls including the "wire service" (AP and UPI), FWAA and NFF. The 1964 AP poll continued to rank only ten teams, compiling the votes of 55 sportswriters, each of whom would give their opinion of the ten best. Under a point system of 10 points for first place, 9 for second, etc., the "overall" ranking was determined. The 1964 season ended with controversy as to whether Alabama or Arkansas should be recognized as the national champion: Alabama finished the regular season at 10–0 and was ranked No. 1 in the final AP and UPI Coaches Polls. The AP and UPI did not conduct post-bowl game polling at that time. Accordingly, and despite a loss in the 1965 Orange Bowl to No. 5 Texas, Alabama remained the national champion in the AP and UPI polls. Arkansas, ranked No. 2 in the AP and UPI polls, defeated No. 6 Nebraska in the Cotton Bowl, had also defeated common opponent Texas in Austin, and finished as the only undefeated and untied major college team. In polling conducted after the bowl games, a five-man committee of the Football Writers Association of America (FWAA) selected Arkansas as the winner of the Grantland Rice Trophy as the top college football team in the country. Arkansas received four of five first-place votes, with Texas receiving the fifth vote. Alabama did not receive a single vote for first, second, or third place. Arkansas is also recognized as the 1964 national champion by Billingsley Report, College Football Researchers Association, Helms Athletic Foundation, National Championship Foundation, Poling System, Sagarin, and Sagarin (ELO-Chess). After a one-year trial run in 1965, the AP Poll began its current practice of naming their national champion at the conclusion of the bowl games in 1968. The UPI Poll followed suit in 1974, after its choice for national champions in each of 1965, 1970, and 1973 lost their respective bowl games. Conference and program changes The Missouri Valley Intercollegiate Athletic Association changed its official name to the Big Eight Conference prior the 1964 season; this name remained until the league's dissolution and formation of the Big 12 Conference in 1996. The Southland Conference began its first season of play with five members, all former independents from the states of Arkansas and Texas. September In the preseason poll released on September 14, Mississippi (Ole Miss) was ranked first and Oklahoma second. Big Ten rivals Illinois and Ohio State were ranked No. 3 and No. 5 respectively, while 1963 champion Texas was No. 4. On September 19, No. 1 Mississippi beat Memphis State 30–0 at home, while No. 2 Oklahoma beat Maryland 13–3 on the road at College Park. No. 4 Texas defeated Tulane 31–0 at home. The following week (September 26), No. 1 Mississippi was upset 27–21 by a late Kentucky touchdown at Jackson. Ole Miss would finish just 5–5–1 after posting a 46–4–3 mark over the previous five years. In its first season after the retirement of longtime head coach Bud Wilkinson, No. 2 Oklahoma was crushed by the USC Trojans, 40–14, before a record home crowd. Neither Mississippi nor Oklahoma would return to the AP Poll at any point for the rest of the year. No. 3 Illinois beat California 20–14, and No. 4 Texas shut out Texas Tech 23–0. No. 5 Ohio State defeated SMU at home, 27–8. No. 6 Alabama beat Tulane 36–6. 
In the poll that followed, the Texas Longhorns were the new No. 1 and USC No. 2, followed by No. 3 Illinois, No. 4 Alabama, and No. 5 Ohio State. October On October 3, No. 1 Texas beat Army 17–6 at home. Meanwhile, No. 2 USC lost 17–7 at Michigan State and No. 3 Illinois won 17–6 over Northwestern. No. 4 Alabama beat Tulane in a neutral site game at Mobile, 36–6. No. 5 Ohio State beat Indiana at home, 17–9. Previously unranked Kentucky earned a spot in the next poll after beating No. 7 Auburn 20–0 in Birmingham for its second straight upset of a top-ten team. Two games, Duke at Tulane and Florida at LSU, were postponed until the end of the season due to the threat of Hurricane Hilda, which made landfall in Louisiana that day. The next top five: No. 1 Texas, No. 2 Illinois, No. 3 Alabama, No. 4 Ohio State, and No. 5 Kentucky. Top-ranked Texas beat Oklahoma 28–7 at Dallas on October 10. Visiting No. 4 Ohio State shut out No. 2 Illinois 26–0, and No. 3 Alabama beat North Carolina State 21–0. No. 5 Kentucky, previously 3–0, was beaten 48–6 by Florida State, the start of a four-game losing streak en route to a 5–5 season. Two road wins moved teams into the top five. No. 6 Notre Dame, enjoying a resurgence under new coach Ara Parseghian, won 34–7 at Air Force and No. 8 Michigan won 17–10 at No. 9 Michigan State. The top 5 were No. 1 Texas, No. 2 Ohio State, No. 3 Alabama, No. 4 Notre Dame, and No. 5 Michigan. On October 17, No. 8 Arkansas beat No. 1 Texas at Austin, 14–13, stopping a late two-point conversion attempt. No. 2 Ohio State beat the USC Trojans in Columbus, 17–0. No. 3 Alabama and No. 4 Notre Dame remained unbeaten, defeating Tennessee (19–8) and UCLA (24–0) respectively. No. 5 Michigan lost to Purdue 21–20. No. 6 Nebraska, which had beaten Kansas State 47–0 (and outscored its opponents 171–34 in five wins), moved into the top five. The rankings were No. 1 Ohio State, No. 2 Notre Dame, No. 3 Alabama, No. 4 Arkansas, and No. 5 Nebraska. October 24 had No. 1 Ohio State over Wisconsin at home, 28–3. No. 2 Notre Dame beat Stanford 26–7, No. 3 Alabama beat No. 9 Florida 17–14. No. 4 Arkansas beat Wichita State 17–0, and No. 5 Nebraska beat Colorado 21–3. The top five remained unchanged. October 31, No. 1 Ohio State edged Iowa 21–19 while No. 2 Notre Dame defeated Navy 40–0, causing the two teams to switch spots in the next poll. No. 3 Alabama (23–6 over Ole Miss), No. 4 Arkansas (17–0 over Texas A&M) and No. 5 Nebraska (9–0 over Missouri) remained unbeaten and received the same rankings. November November 7, No. 1 Notre Dame beat the Pitt Panthers at Pittsburgh 17–15. Meanwhile, No. 2 Ohio State suffered its first loss to unranked (3–4) Penn State, 27–0. No. 3 Alabama (17–9 over No. 8 LSU), No. 4 Arkansas (21–0 vs. Rice) and No. 5 Nebraska (14–7 over Kansas) stayed unbeaten. No. 6 Texas (7–1), whose lone loss had been to Arkansas, won 20–14 at Baylor. The next poll was No. 1 Notre Dame, No. 2 Alabama, No. 3 Arkansas, No. 4 Nebraska, and No. 5 Texas. November 14, No. 1 Notre Dame defeated Michigan State 34–7, and No. 2 Alabama beat No. 10 Georgia Tech in Atlanta, 14–7, to stay unbeaten. Also unblemished were No. 3 Arkansas (44–0 over SMU) and No. 4 Nebraska (27–14 vs. Oklahoma State). With two weeks still to go in the regular season, all three of the preceding teams had clinched their conference championships (the SEC, SWC, and Big 8 respectively). No. 5 Texas won 28–13 over TCU. The poll remained unchanged. November 21, No. 1 Notre Dame beat Iowa in South Bend, 28–0. No. 
2 Alabama was idle. No. 3 Arkansas beat Texas Tech 17–0 to close its regular season with five straight shutouts and a 10–0 record. No. 4 Nebraska suffered its first loss at Oklahoma, 17–7. No. 5 Texas was idle. In a foreshadowing of future battles, No. 6 Michigan faced off against No. 7 Ohio State with the Big Ten title and a berth in the Rose Bowl on the line. The Wolverines blanked the Buckeyes 10–0 and earned the conference championship. In the November 23 AP poll, unbeaten Notre Dame, Alabama, and Arkansas were first, second, and third, followed by No. 4 Michigan and No. 5 Texas. November 26–28: Thanksgiving Day saw No. 2 Alabama finish the regular season unbeaten (10–0) with a win over Auburn in Birmingham. No. 5 Texas beat Texas A&M 26–7 to finish 10–1. On November 28 in Los Angeles, No. 1 Notre Dame led USC 17–0 at halftime but lost, 20–17. The Trojans shared the AAWU conference title with No. 8 Oregon State, and a controversial tiebreaker sent the Beavers to face Michigan in the Rose Bowl. With only Alabama and Arkansas remaining unbeaten, both with records of 10–0, the final AP poll was taken on November 30. Alabama took over the top spot and recognition as the NCAA national champion. Arkansas was No. 2, Notre Dame dropped to No. 3, and Michigan and Texas stayed at No. 4 and No. 5. Unusually, the SEC and Big 8 champions did not play in the Sugar and Orange Bowls this year. Alabama won the SEC championship, but a "no repeat rule" prevented them from playing in the Sugar Bowl for a second straight year; instead, runner-up LSU (ranked No. 7 by the AP) was matched against Syracuse. The Orange Bowl invited Alabama and Texas on November 21. The Cotton Bowl had already set up a meeting between Big 8 winner Nebraska and Southwestern Conference champ Arkansas, in what the organizers hoped would be a meeting of undefeated teams; the arrangements were finalized before Nebraska lost to Oklahoma in their last game of the regular season. Notre Dame, which was undefeated and the presumptive champion at the time the bowls were being set up, also lost its last game. (Notre Dame had a longstanding policy against playing in bowl games, which was not rescinded until the 1969 season.) Thus, the season ended with only two undefeated teams, but the early bowl commitments prevented the possibility of a No. 1 vs. No. 2 showdown. Conference standings Bowl games Major bowls Friday, January 1, 1965 Top-ranked Alabama, led by quarterback Joe Namath, fell to No. 5 Texas 21–17 in the Orange Bowl, the first night postseason bowl game. In the final minutes, down by four and facing 4th-and-goal at the Texas one-yard line, Namath's quarterback sneak was denied by the Longhorn defense. In the Cotton Bowl, quarterback Fred Marshall drove No. 2 Arkansas to a touchdown with 4:41 left to beat No. 6 Nebraska 10–7. Notable members of the 1964 Arkansas team include Jerry Jones, who would later become a billionaire as owner of the Dallas Cowboys of the NFL, and Jimmy Johnson, whom Jones would hire as coach of the Cowboys. No. 5 Michigan routed No. 8 Oregon State 34–7 in the Rose Bowl, while in the Sugar Bowl, No. 7 LSU beat unranked Syracuse 10–7 on a late field goal. A five-member committee of the Football Writers Association of America awarded Arkansas the "Grantland Rice Trophy" as the No. 1 team in a poll taken after the bowl games. The Helms Athletic Foundation, which also took polls after the bowl games, named Arkansas as the national champions. 
Notre Dame was named as the National Football Foundation's national champion. In 1965, the AP's final poll came after the bowl games, but the policy did not become permanent until 1968. The Coaches' Poll adopted the same policy in 1974, after similar issues in 1970 and 1973. These selectors, including the AP Poll and the Coaches' Poll, were nationally syndicated in newspapers and magazines during the 1964 football season. Other bowls Prior to the 1975 season, the Big Ten and Pac-8 (AAWU) conferences allowed only one postseason participant each, for the Rose Bowl. Notre Dame did not play in the postseason for 44 consecutive seasons (1925–1968). Heisman Trophy John Huarte, QB - Notre Dame, 1,026 points Jerry Rhome, QB - Tulsa, 952 Dick Butkus, C-LB - Illinois, 505 Bob Timberlake, QB-K - Michigan, 361 Jack Snow, WR - Notre Dame, 187 Tucker Frederickson, FB - Auburn, 184 Craig Morton, QB - California, 181 Steve DeLong, NG - Tennessee, 176 Cosmo Iacavazzi, RB - Princeton, 165 Brian Piccolo, RB - Wake Forest, 124 Joe Namath, QB - Alabama Gale Sayers, RB - Kansas Bob Berry, QB - Oregon Archie Roberts, QB - Columbia Source: See also 1964 NCAA University Division football rankings 1964 College Football All-America Team 1964 NCAA College Division football season 1964 NAIA football season References
48364
https://en.wikipedia.org/wiki/Software%20architecture
Software architecture
Software architecture refers to the fundamental structures of a software system and the discipline of creating such structures and systems. Each structure comprises software elements, relations among them, and properties of both elements and relations. The architecture of a software system is a metaphor, analogous to the architecture of a building. It functions as a blueprint for the system and the developing project, laying out the tasks necessary to be executed by the design teams. Software architecture is about making fundamental structural choices that are costly to change once implemented. Software architecture choices include specific structural options from possibilities in the design of the software. For example, the systems that controlled the Space Shuttle launch vehicle had the requirement of being very fast and very reliable. Therefore, an appropriate real-time computing language would need to be chosen. Additionally, to satisfy the need for reliability the choice could be made to have multiple redundant and independently produced copies of the program, and to run these copies on independent hardware while cross-checking results. Documenting software architecture facilitates communication between stakeholders, captures early decisions about the high-level design, and allows reuse of design components between projects. Scope Opinions vary as to the scope of software architectures: Macroscopic system structure: this refers to architecture as a higher-level abstraction of a software system that consists of a collection of computational components together with connectors that describe the interaction between these components. The important stuff—whatever that is: this refers to the fact that software architects should concern themselves with those decisions that have high impact on the system and its stakeholders. That which is fundamental to understanding a system in its environment Things that people perceive as hard to change: since designing the architecture takes place at the beginning of a software system's lifecycle, the architect should focus on decisions that "have to" be right the first time. Following this line of thought, architectural design issues may become non-architectural once their irreversibility can be overcome. A set of architectural design decisions: software architecture should not be considered merely a set of models or structures, but should include the decisions that lead to these particular structures, and the rationale behind them. This insight has led to substantial research into software architecture knowledge management. There is no sharp distinction between software architecture versus design and requirements engineering (see Related fields below). They are all part of a "chain of intentionality" from high-level intentions to low-level details. Characteristics Software architecture exhibits the following: Multitude of stakeholders: software systems have to cater to a variety of stakeholders such as business managers, owners, users, and operators. These stakeholders all have their own concerns with respect to the system. Balancing these concerns and demonstrating that they are addressed is part of designing the system. This implies that architecture involves dealing with a broad variety of concerns and stakeholders, and has a multidisciplinary nature. Separation of concerns: the established way for architects to reduce complexity is to separate the concerns that drive the design. 
Architecture documentation shows that all stakeholder concerns are addressed by modeling and describing the architecture from separate points of view associated with the various stakeholder concerns. These separate descriptions are called architectural views (see for example the 4+1 architectural view model). Quality-driven: classic software design approaches (e.g. Jackson Structured Programming) were driven by required functionality and the flow of data through the system, but the current insight is that the architecture of a software system is more closely related to its quality attributes such as fault-tolerance, backward compatibility, extensibility, reliability, maintainability, availability, security, usability, and other such –ilities. Stakeholder concerns often translate into requirements on these quality attributes, which are variously called non-functional requirements, extra-functional requirements, behavioral requirements, or quality attribute requirements. Recurring styles: like building architecture, the software architecture discipline has developed standard ways to address recurring concerns. These "standard ways" are called by various names at various levels of abstraction. Common terms for recurring solutions are architectural style, tactic, reference architecture and architectural pattern. Conceptual integrity: a term introduced by Fred Brooks in his 1975 book The Mythical Man-Month to denote the idea that the architecture of a software system represents an overall vision of what it should do and how it should do it. This vision should be separated from its implementation. The architect assumes the role of "keeper of the vision", making sure that additions to the system are in line with the architecture, hence preserving conceptual integrity. Cognitive constraints: an observation first made in a 1967 paper by computer programmer Melvin Conway that organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations. As with conceptual integrity, it was Fred Brooks who introduced it to a wider audience when he cited the paper and the idea in his elegant classic The Mythical Man-Month, calling it "Conway's Law." Motivation Software architecture is an "intellectually graspable" abstraction of a complex system. This abstraction provides a number of benefits: It gives a basis for analysis of software systems' behavior before the system has been built. The ability to verify that a future software system fulfills its stakeholders' needs without actually having to build it represents substantial cost-saving and risk-mitigation. A number of techniques have been developed to perform such analyses, such as ATAM or by creating a visual representation of the software system. It provides a basis for re-use of elements and decisions. A complete software architecture or parts of it, like individual architectural strategies and decisions, can be re-used across multiple systems whose stakeholders require similar quality attributes or functionality, saving design costs and mitigating the risk of design mistakes. It supports early design decisions that impact a system's development, deployment, and maintenance life. Getting the early, high-impact decisions right is important to prevent schedule and budget overruns. It facilitates communication with stakeholders, contributing to a system that better fulfills their needs. 
Communicating about complex systems from the point of view of stakeholders helps them understand the consequences of their stated requirements and the design decisions based on them. Architecture gives the ability to communicate about design decisions before the system is implemented, when they are still relatively easy to adapt. It helps in risk management. Software architecture helps to reduce risks and chance of failure. It enables cost reduction. Software architecture is a means to manage risk and costs in complex IT projects. History The comparison between software design and (civil) architecture was first drawn in the late 1960s, but the term "software architecture" did not see widespread usage until the 1990s. The field of computer science had encountered problems associated with complexity since its formation. Earlier problems of complexity were solved by developers by choosing the right data structures, developing algorithms, and by applying the concept of separation of concerns. Although the term "software architecture" is relatively new to the industry, the fundamental principles of the field have been applied sporadically by software engineering pioneers since the mid-1980s. Early attempts to capture and explain software architecture of a system were imprecise and disorganized, often characterized by a set of box-and-line diagrams. Software architecture as a concept has its origins in the research of Edsger Dijkstra in 1968 and David Parnas in the early 1970s. These scientists emphasized that the structure of a software system matters and getting the structure right is critical. During the 1990s there was a concerted effort to define and codify fundamental aspects of the discipline, with research work concentrating on architectural styles (patterns), architecture description languages, architecture documentation, and formal methods. Research institutions have played a prominent role in furthering software architecture as a discipline. Mary Shaw and David Garlan of Carnegie Mellon wrote a book titled Software Architecture: Perspectives on an Emerging Discipline in 1996, which promoted software architecture concepts such as components, connectors, and styles. The University of California, Irvine's Institute for Software Research's efforts in software architecture research is directed primarily in architectural styles, architecture description languages, and dynamic architectures. IEEE 1471-2000, "Recommended Practice for Architecture Description of Software-Intensive Systems", was the first formal standard in the area of software architecture. It was adopted in 2007 by ISO as ISO/IEC 42010:2007. In November 2011, IEEE 1471–2000 was superseded by ISO/IEC/IEEE 42010:2011, "Systems and software engineering – Architecture description" (jointly published by IEEE and ISO). While in IEEE 1471, software architecture was about the architecture of "software-intensive systems", defined as "any system where software contributes essential influences to the design, construction, deployment, and evolution of the system as a whole", the 2011 edition goes a step further by including the ISO/IEC 15288 and ISO/IEC 12207 definitions of a system, which embrace not only hardware and software, but also "humans, processes, procedures, facilities, materials and naturally occurring entities". This reflects the relationship between software architecture, enterprise architecture and solution architecture. Architecture activities There are many activities that a software architect performs. 
A software architect typically works with project managers, discusses architecturally significant requirements with stakeholders, designs a software architecture, evaluates a design, communicates with designers and stakeholders, documents the architectural design and more. There are four core activities in software architecture design. These core architecture activities are performed iteratively and at different stages of the initial software development life-cycle, as well as over the evolution of a system. Architectural analysis is the process of understanding the environment in which a proposed system will operate and determining the requirements for the system. The input or requirements to the analysis activity can come from any number of stakeholders and include items such as: what the system will do when operational (the functional requirements) how well the system will perform runtime non-functional requirements such as reliability, operability, performance efficiency, security, compatibility defined in ISO/IEC 25010:2011 standard development-time of non-functional requirements such as maintainability and transferability defined in ISO 25010:2011 standard business requirements and environmental contexts of a system that may change over time, such as legal, social, financial, competitive, and technology concerns The outputs of the analysis activity are those requirements that have a measurable impact on a software system's architecture, called architecturally significant requirements. Architectural synthesis or design is the process of creating an architecture. Given the architecturally significant requirements determined by the analysis, the current state of the design and the results of any evaluation activities, the design is created and improved. Architecture evaluation is the process of determining how well the current design or a portion of it satisfies the requirements derived during analysis. An evaluation can occur whenever an architect is considering a design decision, it can occur after some portion of the design has been completed, it can occur after the final design has been completed or it can occur after the system has been constructed. Some of the available software architecture evaluation techniques include Architecture Tradeoff Analysis Method (ATAM) and TARA. Frameworks for comparing the techniques are discussed in frameworks such as SARA Report and Architecture Reviews: Practice and Experience. Architecture evolution is the process of maintaining and adapting an existing software architecture to meet changes in requirements and environment. As software architecture provides a fundamental structure of a software system, its evolution and maintenance would necessarily impact its fundamental structure. As such, architecture evolution is concerned with adding new functionality as well as maintaining existing functionality and system behavior. Architecture requires critical supporting activities. These supporting activities take place throughout the core software architecture process. They include knowledge management and communication, design reasoning and decision making, and documentation. Architecture supporting activities Software architecture supporting activities are carried out during core software architecture activities. These supporting activities assist a software architect to carry out analysis, synthesis, evaluation, and evolution. For instance, an architect has to gather knowledge, make decisions and document during the analysis phase. 
Knowledge management and communication is the act of exploring and managing knowledge that is essential to designing a software architecture. A software architect does not work in isolation. They get inputs, functional and non-functional requirements, and design contexts, from various stakeholders; and provides outputs to stakeholders. Software architecture knowledge is often tacit and is retained in the heads of stakeholders. Software architecture knowledge management activity is about finding, communicating, and retaining knowledge. As software architecture design issues are intricate and interdependent, a knowledge gap in design reasoning can lead to incorrect software architecture design. Examples of knowledge management and communication activities include searching for design patterns, prototyping, asking experienced developers and architects, evaluating the designs of similar systems, sharing knowledge with other designers and stakeholders, and documenting experience in a wiki page. Design reasoning and decision making is the activity of evaluating design decisions. This activity is fundamental to all three core software architecture activities. It entails gathering and associating decision contexts, formulating design decision problems, finding solution options and evaluating tradeoffs before making decisions. This process occurs at different levels of decision granularity while evaluating significant architectural requirements and software architecture decisions, and software architecture analysis, synthesis, and evaluation. Examples of reasoning activities include understanding the impacts of a requirement or a design on quality attributes, questioning the issues that a design might cause, assessing possible solution options, and evaluating the tradeoffs between solutions. Documentation is the act of recording the design generated during the software architecture process. System design is described using several views that frequently include a static view showing the code structure of the system, a dynamic view showing the actions of the system during execution, and a deployment view showing how a system is placed on hardware for execution. Kruchten's 4+1 view suggests a description of commonly used views for documenting software architecture; Documenting Software Architectures: Views and Beyond has descriptions of the kinds of notations that could be used within the view description. Examples of documentation activities are writing a specification, recording a system design model, documenting a design rationale, developing a viewpoint, documenting views. Software architecture topics Software architecture description Software architecture description involves the principles and practices of modeling and representing architectures, using mechanisms such as architecture description languages, architecture viewpoints, and architecture frameworks. Architecture description languages An architecture description language (ADL) is any means of expression used to describe a software architecture (ISO/IEC/IEEE 42010). Many special-purpose ADLs have been developed since the 1990s, including AADL (SAE standard), Wright (developed by Carnegie Mellon), Acme (developed by Carnegie Mellon), xADL (developed by UCI), Darwin (developed by Imperial College London), DAOP-ADL (developed by University of Málaga), SBC-ADL (developed by National Sun Yat-Sen University), and ByADL (University of L'Aquila, Italy). 
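Although each of the ADLs above has its own notation, the general idea of a machine-readable component-and-connector description can be illustrated with a small, hypothetical sketch (plain Python rather than any particular ADL); every element and service name below is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    provides: set = field(default_factory=set)   # services offered
    requires: set = field(default_factory=set)   # services consumed

@dataclass
class Connector:
    source: str   # requiring component
    target: str   # providing component
    service: str

# A toy component-and-connector view (names are illustrative only).
components = {
    "web_ui":   Component("web_ui",   requires={"order_api"}),
    "orders":   Component("orders",   provides={"order_api"}, requires={"storage"}),
    "database": Component("database", provides={"storage"}),
}
connectors = [
    Connector("web_ui", "orders", "order_api"),
    Connector("orders", "database", "storage"),
]

# A simple well-formedness check: every connector must bind a required
# service of its source to a provided service of its target.
for c in connectors:
    ok = (c.service in components[c.source].requires
          and c.service in components[c.target].provides)
    print(f"{c.source} -> {c.target} ({c.service}): {'ok' if ok else 'ill-formed'}")
```

Capturing the description in a machine-readable form like this is what lets viewpoints, frameworks, and analysis tools be applied consistently to the same underlying model.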
Architecture viewpoints
Software architecture descriptions are commonly organized into views, which are analogous to the different types of blueprints made in building architecture. Each view addresses a set of system concerns, following the conventions of its viewpoint, where a viewpoint is a specification that describes the notations, modeling, and analysis techniques to use in a view that expresses the architecture in question from the perspective of a given set of stakeholders and their concerns (ISO/IEC/IEEE 42010). The viewpoint specifies not only the concerns framed (i.e., to be addressed) but also the presentation, the model kinds used, the conventions used, and any consistency (correspondence) rules needed to keep a view consistent with other views.
Architecture frameworks
An architecture framework captures the "conventions, principles and practices for the description of architectures established within a specific domain of application and/or community of stakeholders" (ISO/IEC/IEEE 42010). A framework is usually implemented in terms of one or more viewpoints or ADLs.
Architectural styles and patterns
An architectural pattern is a general, reusable solution to a commonly occurring problem in software architecture within a given context. Architectural patterns are often documented as software design patterns. Following traditional building architecture, a "software architectural style" is a specific method of construction, characterized by the features that make it notable (architectural style). There are many recognized architectural patterns and styles, among them:
Blackboard
Client-server (2-tier, 3-tier, n-tier, and cloud computing exhibit this style)
Component-based
Data-centric
Event-driven (or implicit invocation)
Layered (or multilayered architecture)
Microservices architecture
Monolithic application
Peer-to-peer (P2P)
Pipes and filters
Plug-ins
Reactive architecture
Representational state transfer (REST)
Rule-based
Service-oriented
Shared nothing architecture
Space-based architecture
Some treat architectural patterns and architectural styles as the same; some treat styles as specializations of patterns. What they have in common is that both patterns and styles are idioms for architects to use: they "provide a common language" or "vocabulary" with which to describe classes of systems.
Software architecture and agile development
There are also concerns that software architecture leads to too much Big Design Up Front, especially among proponents of agile software development. A number of methods have been developed to balance the trade-offs of up-front design and agility, including the agile method DSDM, which mandates a "Foundations" phase during which "just enough" architectural foundations are laid. IEEE Software devoted a special issue to the interaction between agility and architecture.
Software architecture erosion
Software architecture erosion (or "decay") refers to the gap observed between the planned and actual architecture of a software system as realized in its implementation. Software architecture erosion occurs when implementation decisions either do not fully achieve the architecture-as-planned or otherwise violate constraints or principles of that architecture. As an example, consider a strictly layered system, where each layer can only use services provided by the layer immediately below it. Any source code component that does not observe this constraint represents an architecture violation.
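The layered constraint just described is the kind of rule that conformance tools (for example, reflexion-model tools) check against code. The following is a minimal sketch, assuming a hypothetical three-layer system with a hand-listed dependency table; a real tool would extract the dependencies from source code instead, and the module names are invented.

```python
# Layers ordered top to bottom; the rule: a module may only use the layer
# directly below its own (module-to-layer mapping is hypothetical).
LAYERS = ["presentation", "business", "data"]
MODULE_LAYER = {
    "ui_forms": "presentation",
    "billing": "business",
    "orders_repo": "data",
}

# Dependencies as (user, used) pairs; in practice these would be mined
# from imports or call graphs rather than listed by hand.
DEPENDENCIES = [
    ("ui_forms", "billing"),      # presentation -> business: allowed
    ("ui_forms", "orders_repo"),  # presentation -> data: skips a layer
    ("billing", "orders_repo"),   # business -> data: allowed
]

def violations(deps):
    """Yield dependencies that skip a layer, stay in the same layer, or point upward."""
    for user, used in deps:
        src = LAYERS.index(MODULE_LAYER[user])
        dst = LAYERS.index(MODULE_LAYER[used])
        if dst != src + 1:          # only the layer immediately below is allowed
            yield user, used

for user, used in violations(DEPENDENCIES):
    print(f"architecture violation: {user} -> {used}")
```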
If not corrected, such violations can transform the architecture into a monolithic block, with adverse effects on understandability, maintainability, and evolvability. Various approaches have been proposed to address erosion. "These approaches, which include tools, techniques, and processes, are primarily classified into three general categories that attempt to minimize, prevent and repair architecture erosion. Within these broad categories, each approach is further broken down reflecting the high-level strategies adopted to tackle erosion. These are process-oriented architecture conformance, architecture evolution management, architecture design enforcement, architecture to implementation linkage, self-adaptation and architecture restoration techniques consisting of recovery, discovery, and reconciliation." There are two major techniques to detect architectural violations: reflexion models and domain-specific languages. Reflexion model (RM) techniques compare a high-level model provided by the system's architects with the source code implementation. There are also domain-specific languages with a focus on specifying and checking architectural constraints. Software architecture recovery Software architecture recovery (or reconstruction, or reverse engineering) includes the methods, techniques, and processes to uncover a software system's architecture from available information, including its implementation and documentation. Architecture recovery is often necessary to make informed decisions in the face of obsolete or out-of-date documentation and architecture erosion: implementation and maintenance decisions diverging from the envisioned architecture. Practices exist to recover software architecture as static program analysis. This is a part of subjects covered by the software intelligence practice. Related fields Design Architecture is design but not all design is architectural. In practice, the architect is the one who draws the line between software architecture (architectural design) and detailed design (non-architectural design). There are no rules or guidelines that fit all cases, although there have been attempts to formalize the distinction. According to the Intension/Locality Hypothesis, the distinction between architectural and detailed design is defined by the Locality Criterion, according to which a statement about software design is non-local (architectural) if and only if a program that satisfies it can be expanded into a program that does not. For example, the client–server style is architectural (strategic) because a program that is built on this principle can be expanded into a program that is not client–server—for example, by adding peer-to-peer nodes. Requirements engineering Requirements engineering and software architecture can be seen as complementary approaches: while software architecture targets the 'solution space' or the 'how', requirements engineering addresses the 'problem space' or the 'what'. Requirements engineering entails the elicitation, negotiation, specification, validation, documentation and management of requirements. Both requirements engineering and software architecture revolve around stakeholder concerns, needs and wishes. There is considerable overlap between requirements engineering and software architecture, as evidenced for example by a study into five industrial software architecture methods that concludes that "the inputs (goals, constraints, etc.) 
are usually ill-defined, and only get discovered or better understood as the architecture starts to emerge" and that while "most architectural concerns are expressed as requirements on the system, they can also include mandated design decisions". In short, required behavior impacts solution architecture, which in turn may introduce new requirements. Approaches such as the Twin Peaks model aim to exploit the synergistic relation between requirements and architecture. Other types of 'architecture' Computer architecture Computer architecture targets the internal structure of a computer system, in terms of collaborating hardware components such as the CPU – or processor – the bus and the memory. Systems architecture The term systems architecture has originally been applied to the architecture of systems that consists of both hardware and software. The main concern addressed by the systems architecture is then the integration of software and hardware in a complete, correctly working device. In another common – much broader – meaning, the term applies to the architecture of any complex system which may be of technical, sociotechnical or social nature. Enterprise architecture The goal of enterprise architecture is to "translate business vision and strategy into effective enterprise". Enterprise architecture frameworks, such as TOGAF and the Zachman Framework, usually distinguish between different enterprise architecture layers. Although terminology differs from framework to framework, many include at least a distinction between a business layer, an application (or information) layer, and a technology layer. Enterprise architecture addresses among others the alignment between these layers, usually in a top-down approach. See also Archimate Architectural pattern (computer science) Anti-pattern Attribute-driven design C4 model Computer architecture Distributed Data Management Architecture Distributed Relational Database Architecture Systems architecture Systems design Software Architecture Analysis Method Time-triggered system References Further reading - This book covers the fundamental concepts of the discipline. The theme is centered on achieving quality attributes of a system. - This book describes what software architecture is and shows how to document it in multiple views, using UML and other notations. It also explains how to complement the architecture views with behavior, software interface, and rationale documentation. Accompanying the book is a wiki that contains an example of software architecture documentation. - On the distinction between architectural design and detailed design. External links Explanation on IBM Developerworks Collection of software architecture definitions at Software Engineering Institute (SEI), Carnegie Mellon University (CMU) International Association of IT Architects (IASA Global), formerly known as the International Association for Software Architects (IASA) SoftwareArchitecturePortal.org – website of IFIP Working Group 2.10 on Software Architecture SoftwareArchitectures.com – an independent resource of information on the discipline Software Architecture, chapter 1 of Roy Fielding's REST dissertation When Good Architecture Goes Bad The Spiral Architecture Driven Development – the SDLC based on the Spiral model aims to reduce the risks of ineffective architecture Software Architecture Real Life Case Studies Edsger W. Dijkstra
5140058
https://en.wikipedia.org/wiki/Charles%20F.%20Van%20Loan
Charles F. Van Loan
Charles Francis Van Loan (born September 20, 1947) is an emeritus professor of computer science and the Joseph C. Ford Professor of Engineering at Cornell University. He is known for his expertise in numerical analysis, especially matrix computations. In 2016, Van Loan became the Dean of Faculty at Cornell University.
Biography
Originally from Orange, New Jersey, Van Loan attended the University of Michigan, where he obtained a B.S. in applied mathematics (1969) and an M.A. (1970) and Ph.D. (1973) in mathematics. His PhD dissertation was entitled "Generalized Singular Values with Algorithms and Applications" and his thesis adviser was Cleve Moler. Following a postdoctorate at the University of Manchester, he joined the Department of Computer Science at Cornell University in 1975 and served as the Department Chair from 1999 to 2006. During the 1988–1989 academic year, Van Loan taught at Oxford University during his sabbatical.
Van Loan ran the Computer Science Graduate Program at Cornell from 1982 to 1987 and was Director of Undergraduate Studies from 1994 to 1998 and from 1999 to 2003. He was awarded the Ford chair in 1998 and held the position of department chairman from July 1999 to June 2006. In the spring of 2016, Van Loan retired from the Computer Science Department and was appointed Dean of Faculty, replacing Joseph Burns. Van Loan is the first emeritus professor to hold the position of Dean of Faculty.
Honors and awards
Van Loan won the Robert Paul Advising Award in 1998 and the James and Martha D. McCormick Advising Award in 2003. Other awards Van Loan has won include the Merrill Scholar Faculty Impact Award in 1998 and 2009 and the James and Mary Tien Teaching Award, College of Engineering, in 2009. In 2018 he was awarded the John von Neumann Lecture prize by the Society for Industrial and Applied Mathematics.
Books
Van Loan's best-known book is Matrix Computations, 3/e (Johns Hopkins University Press, 1996), written with Gene H. Golub. He is also the author of Handbook for Matrix Computations (SIAM, 1988), Computational Frameworks for the Fast Fourier Transform (SIAM, 1992), Introduction to Computational Science and Mathematics (Jones and Bartlett, 1996), and Introduction to Scientific Computation: A Matrix-Vector Approach Using MATLAB (2nd ed., Prentice-Hall, 1999). His latest book is Insight Through Computing: A MATLAB Introduction to Computational Science and Engineering (SIAM, 2009), written with K-Y Daisy Fan.
References
External links
Home page at Cornell University
Living people
Numerical analysts
University of Michigan College of Literature, Science, and the Arts alumni
Cornell University faculty
Fellows of the Society for Industrial and Applied Mathematics
1947 births
20th-century American mathematicians
21st-century American mathematicians
162176
https://en.wikipedia.org/wiki/Rogue%20%28video%20game%29
Rogue (video game)
Rogue (also known as Rogue: Exploring the Dungeons of Doom) is a dungeon crawling video game by Michael Toy and Glenn Wichman with later contributions by Ken Arnold. Rogue was originally developed around 1980 for Unix-based mainframe systems as a freely distributed executable. It was later included in the official Berkeley Software Distribution 4.2 operating system (4.2BSD). Commercial ports of the game for a range of personal computers were made by Toy, Wichman, and Jon Lane under the company A.I. Design and financially supported by the Epyx software publishers. Additional ports to modern systems have since been made by other parties using the game's now-open source code.
In Rogue, players control a character as they explore several levels of a dungeon seeking the Amulet of Yendor located in the dungeon's lowest level. The player-character must fend off an array of monsters that roam the dungeons. Along the way, players can collect treasures that can help them offensively or defensively, such as weapons, armor, potions, scrolls, and other magical items. Rogue is turn-based, taking place on a square grid represented in ASCII or another fixed character set, giving players time to determine the best move to survive. Rogue implements permadeath as a design choice to make each action by the player meaningful: should the player-character lose all his health via combat or other means, that player-character is simply dead. The player must then restart with a fresh character, as the dead character cannot respawn or be brought back by reloading from a saved state. Moreover, no game is the same as any previous one, as the dungeon levels, monster encounters, and treasures are procedurally generated for each playthrough.
Rogue was inspired by text-based computer games such as the 1971 Star Trek game and Colossal Cave Adventure released in 1976, along with the high fantasy setting from Dungeons & Dragons. Toy and Wichman, both students at the University of California, Santa Cruz, worked together to create their own text-based game but looked to incorporate elements of procedural generation to create a new experience each time the user played the game. Toy later worked at the University of California, Berkeley, where he met Arnold, the lead developer of the curses programming library that Rogue depended on to mimic a graphical display. Arnold helped Toy to optimize the code and incorporate additional features into the game. The commercial ports came about after Toy met Lane while working for the Olivetti company, and Toy engaged Wichman again to help with designing graphics and various ports.
Rogue became popular in the 1980s among college students and other computer-savvy users in part due to its inclusion in 4.2BSD. It inspired programmers to develop a number of similar titles such as Hack (1982) and Moria (1983), though as Toy, Wichman, and Arnold had not released the source code at this time, these new games introduced different variations atop Rogue. A long lineage of games grew out from these titles. While Rogue was not the first dungeon-crawling game with procedural generation features, it introduced the roguelike subgenre: procedurally generated RPG dungeon crawlers with Dungeons-and-Dragons-like items (armor, weapons, potions, and magic scrolls) that also feature permadeath (permanent death) and an overhead graphical view, albeit via ASCII drawings, as opposed to text descriptions in natural language such as can be seen in Adventure/Colossal Cave and the original Zork games.
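The overhead ASCII view mentioned above can be illustrated with a short sketch; the map, its layout, and the helper functions below are invented for illustration and only follow the symbol conventions described in this article (@ for the player, capital letters for monsters).

```python
# A toy overhead view in the spirit of Rogue's display: walls (- and |),
# floor (.), a door (+), the player (@), and monsters as capital letters.
# The level itself is hard-coded here purely for illustration.
level = [
    "---------",
    "|.......|",
    "|.@...Z.|",   # @ = player, Z = zombie
    "|.......+",   # + = door
    "---------",
]

def draw(rows):
    for row in rows:
        print(row)

def find(rows, symbol):
    """Return (row, column) of the first occurrence of a symbol, or None."""
    for r, row in enumerate(rows):
        c = row.find(symbol)
        if c != -1:
            return r, c
    return None

draw(level)
print("player at", find(level, "@"), "- zombie at", find(level, "Z"))
```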
Gameplay
In Rogue, the player assumes the typical role of an adventurer of early fantasy role-playing games. The game starts at the uppermost level of an unmapped dungeon with myriad monsters and treasures. The goal is to fight one's way to the bottom level, retrieve the Amulet of Yendor ("Rodney" spelled backwards), then ascend to the surface. Monsters in the levels become progressively more difficult to defeat. Until the Amulet is retrieved, the player cannot return to earlier levels.
User interface
In the original text-based versions, all aspects of the game, including the dungeon, the player character, and monsters, are represented by letters and symbols within the ASCII character set. Monsters are represented by capital letters (such as Z for zombie), and accordingly there are twenty-six varieties. This type of display makes it appropriate for a non-graphical terminal. Later ports of Rogue apply extended character sets to the text user interface or replace it with graphical tiles.
The basic movement keys (h, left; j, down; k, up; and l, right) are the same as the cursor control keys in the vi editor. Other game actions also use single keystrokes—q to quaff a potion, w to wield a weapon, e to eat some food, etc. In the DOS version, the cursor keys specify movement, and the fast-move keys (H, J, K, and L) are supplanted by use of the scroll lock key.
Each dungeon level consists of a grid of three rooms by three rooms (potentially); dead-end hallways sometimes appear where rooms would be expected. Lower levels can also include a maze in the place of a room. Unlike most adventure games of the time of the original design, the dungeon layout and the placement of objects within are randomly generated.
Development
At UC Santa Cruz
The concept of Rogue originated with Michael Toy and Glenn Wichman. Toy grew up in Livermore, California, where his father was a nuclear scientist. Once a year, the father's workplace allowed its employees' families to visit, which included allowing them to use the facility's mainframe system to play games. Toy took interest in the text-based Star Trek game (1971), which represented space combat through characters on screen and required players to make strategic decisions each turn. Toy took to learning programming and recreating this game on other computer systems that he could access, including the Processor Technology Sol-20 and the Atari 400.
Toy subsequently enrolled in computer science at the University of California, Santa Cruz (UCSC) in the late 1970s. Working first on UCSC's PDP-11 and then its VAX-11, Toy began exploring what games were available over ARPANET, the predecessor of the current Internet. One game that intrigued him was Colossal Cave Adventure (also known as Adventure) (1976) by William Crowther and Don Woods. Adventure, considered the first text-based adventure game, challenged the player to explore a cave system through descriptions given by the computer and commands issued by the player. Toy was impressed by the game and started writing his own.
Toy soon met Wichman, another student at UCSC who was also writing his own adventure game. Wichman had created his own variations on traditional role-playing games such as Dungeons & Dragons while growing up. Wichman chose UCSC specifically to study game design so as to become a board game developer, and this led him into the computer sciences for the opportunity to play and develop games. The two became friends, shared an apartment, and challenged each other with their own adventure game creations.
Of the two, Toy was more proficient at coding, while Wichman had a better sense of the design of these games. Toy and Wichman soon found that most adventure games suffered from a lack of replayability, in that the game did not change on separate playthroughs.
Around this time, ca. 1980, BSD Unix had started to gain a foothold as the operating system for many of the University of California's campuses. One element of the BSD distribution at this point was the curses programming library by Ken Arnold. curses enabled a programmer to place characters at any point on a terminal, effectively allowing for "graphical" interfaces. When Toy saw this library, he and Wichman quickly realized the potential for it. After crafting a few games using curses to learn the library, they came up with the idea of an adventure game in the flavor of Dungeons & Dragons, but, to address their concerns with the static nature of adventure games, they wanted to include elements that would change every time the game was played. The two came up with a narrative, that of an adventurer setting out to explore and find treasures in the Dungeons of Doom, specifically the Amulet of Yendor (the name "Rodney" spelled backwards, which they envisioned as a renowned wizard in the game's narration). Wichman came up with the name Rogue, based on the idea that unlike the party-based systems of Dungeons & Dragons, the player's character was going at this alone. They also wanted to make sure the name was short to make it simple to type out on command lines.
As Toy was more proficient at programming, he led the development of the game in the C language, which generally produced fast, effective code. Wichman learned the language from Toy as they went along while providing significant input on the design of the game. The first two major aspects of the game to be developed were the method of displaying the dungeon on screen to the player and how to generate the dungeon in a random manner. Limited by choices of what a terminal could display, they stuck to ASCII-based characters, such as . for empty floor space, + for doors, and | and - for walls of the dungeon. They also opted to use the "at" symbol (@) to represent the player, considering this showed the player "where they're at". For the dungeon, they found initial attempts at purely random generation to be weak, in some cases having a stairway end up in a room inaccessible to players. They found a solution through procedural generation, where each level would start from the idea of a 3x3 tic-tac-toe grid, with a room of varying size occupying each space in the grid and hallways then created to connect the rooms.
Once they could have their character move about these randomly created dungeons, they then added equipment, magic items, and monsters. With magic items, they wanted the effects of these items to be a mystery on each run through, and thus would initially present the items to the player only by a descriptor such as its color, and only later in the game give the true name of the item once the player experimented or used another means to identify it. For monsters, they wanted to have more advanced intelligence routines as the player got to deeper depths of the dungeons, but had started running into memory limits on the VAX-11, and simply made the monsters stronger with more health to pose more of a challenge. The two started testing the game with other students at UCSC, finding that despite the limited graphics, players were filling the gaps with their own imagination.
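The 3x3-grid generation idea described above might be sketched roughly as follows; the room sizes, corridor routine, and map characters here are simplified guesses at the general approach, not a reconstruction of Rogue's actual algorithm.

```python
import random

GRID, CELL_W, CELL_H = 3, 20, 7   # a 3x3 grid of cells, one room per cell

def generate_level(seed=None):
    rng = random.Random(seed)
    width, height = GRID * CELL_W, GRID * CELL_H
    dungeon = [["#"] * width for _ in range(height)]   # start fully solid
    centers = {}

    # Carve one randomly sized room inside each cell of the 3x3 grid.
    for gy in range(GRID):
        for gx in range(GRID):
            w, h = rng.randint(4, CELL_W - 2), rng.randint(3, CELL_H - 2)
            x = gx * CELL_W + rng.randint(1, CELL_W - w - 1)
            y = gy * CELL_H + rng.randint(1, CELL_H - h - 1)
            for yy in range(y, y + h):
                for xx in range(x, x + w):
                    dungeon[yy][xx] = "."
            centers[(gx, gy)] = (x + w // 2, y + h // 2)

    # Connect neighbouring rooms with L-shaped corridors so that no room
    # (or a stairway placed inside one) ends up unreachable.
    def corridor(a, b):
        (x1, y1), (x2, y2) = a, b
        for x in range(min(x1, x2), max(x1, x2) + 1):
            dungeon[y1][x] = "."
        for y in range(min(y1, y2), max(y1, y2) + 1):
            dungeon[y][x2] = "."

    for gy in range(GRID):
        for gx in range(GRID):
            if gx + 1 < GRID:
                corridor(centers[(gx, gy)], centers[(gx + 1, gy)])
            if gy + 1 < GRID:
                corridor(centers[(gx, gy)], centers[(gx, gy + 1)])
    return ["".join(row) for row in dungeon]

for row in generate_level(seed=1):
    print(row)
```

Connecting every room to its grid neighbours is one simple way to avoid the inaccessible-room problem the pair ran into with purely random placement.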
Playtester feedback helped them to improve the procedural generation routines to balance the game's challenge. One element that fell out from playtesting was the use of permadeath. Toy wanted to move away from the notion of simply learning the right sequence of steps to complete within adventure games, and instead make the player focus on finding the right moves to avoid the character's death at that moment; Wichman later called this idea "consequence persistence". Initially, a Rogue game had to be completed in one sitting, but by demand of playtesters, Toy and Wichman added the ability to save the state of the game, so that players could continue a game across sessions. They soon found players were "save scumming", reloading the game from the save file, an approach counter to their design goals. They changed this so that the save file was erased upon reloading the game, thus making a character's death effectively permanent. They subsequently added a scoreboard feature that let players rank their progress with others, rewarding players with more points for surviving as deep as possible into the dungeons and making the Amulet of Yendor a lucrative goal. Around 1982, Toy's attention to Rogue and computer games caused him to suffer poor academic performance, and he was kicked out of the school, shortly finding employment at University of California, Berkeley (UCB) in their computer lab. Toy took the Rogue code with him to continue its development. Wichman, still enrolled at UCSC, continued to help develop Rogue for a time, such as adding armor elements, but the logistics of working over the distance made it difficult for him to keep up, and he let Toy fully take over development. At UC Berkeley Prior to Toy's arrival at UCB, Ken Arnold had gotten to play Rogue, which had been distributed as an executable across many of the UC campuses. Though impressed with the game, he expressed frustration at the inefficient means the game updated the screen via his curses library over a modem line. He had ideas for how to fix it, but at this point Toy and Wichman had opted not to release the code. When Toy arrived at UCB in 1982, he sought out Arnold to get insight into the nature of how the curses library worked. After the two got to know each other, Toy allowed him access to Rogues source code. In addition to helping to improve the interface and rendering of the game, Arnold helped to improve the procedural generation aspects of the game. With its popularity on the UCB servers, Rogue was selected as one of the game titles included in the 1983 distribution of 4.2 BSD, which spread across ARPANET and quickly gained popularity among colleges and facilities with access to this hardware. Among its fans included UNIX's co-developer Ken Thompson working at Bell Labs; Dennis Ritchie had joked at the time that Rogue was "the biggest waste of CPU cycles in history". Rogues distribution in 4.2 BSD did not include its source code, so after Toy and Arnold separately left UCB, they took the code with them, making it difficult for anyone to build off it. Rogue source was eventually added under a BSD software license within 4.3 BSD in 1986, putting it into the open source. At A.I. Design Toy left UCB sometime before 1984 and took a consulting position with Olivetti, an Italian typewriter company that at the time were starting development of their own computer based on the IBM Personal Computer (IBM PC) operating system. There, he met one of Olivetti's computer system administrators, Jon Lane. 
Lane had previously seen Rogue's popularity at the United States location he managed, had played the game himself, and was familiar with Ritchie's observations on Rogue. Upon meeting Toy, Lane proposed the idea of porting Rogue to the IBM PC as a commercial product, to which Toy agreed. They founded the company A.I. Design to port and market the game. Though Toy's source code was necessary for the porting, Lane had to redevelop many of the routines for the game's interface. Lane took advantage of the more graphical Code page 437 character set on the PC to expand the number of symbols used to represent the dungeon, such as using a happy-face ☺ for the player-character. They also took steps to avoid potential copyright issues with TSR, the company that owned Dungeons & Dragons at that time, by changing the names of monsters like kobolds that were unique to that game. Toy and Lane initially funded the publishing, distribution, and promotion of the IBM PC version themselves, and though they continued to gain sales, they were only able to break even, as they lacked the reach of a larger distributor.
Around 1984, Robert Borch, the vice president of publishing at Epyx, discovered that Rogue had become popular among several of Epyx's employees, who suggested that Epyx should help fund ports to other systems. Though Borch felt the game had only niche appeal, he followed this advice and contracted A.I. Design to port the game to the Apple Macintosh and Commodore Amiga, upon which Epyx would take over distribution and marketing. Toy obtained a Macintosh and took the lead in porting the game to that system. Both Toy and Lane recognized that they could implement improved graphics with the Macintosh version, but neither had the art skills to make the icons. Toy reached out to Wichman to help with these graphics. Wichman was initially cautious because his credit for Rogue in the PC version had been cast as a "contribution" equal to that of the UCSC playtesters rather than as equal to Toy, Arnold, or Lane. However, he agreed to help and joined A.I. Design. Much of the Macintosh version was developed in concert by Toy, Wichman, and Lane in a cabin at the Squaw Valley Ski Resort. Following this, Epyx requested that Wichman lead the development of the Atari ST version, with the company providing Wichman a system to work on. This work occurred alongside Toy's work on the Amiga version. Wichman enlisted help from an Epyx in-house artist, Michael Kosaka, to create the art for the Atari ST version. Epyx would also fund A.I. Design to port the game to other systems, including the TRS-80 Color Computer.
Borch recognized the difficulty of marketing Rogue through traditional methods compared to other games on the market at that time, and opted to push the title through software catalogs rather than retail channels. Though it sold well initially, Rogue's sales quickly declined and the game was considered a commercial flop. Besides the competition from more graphically interesting games, Wichman attributed the failure to the fact that the commercial version of Rogue was essentially the same game previously offered for free via BSD and did not pose a new challenge. Epyx eventually went bankrupt in 1989, and A.I. Design disbanded. None of Toy, Wichman, Arnold, or Lane profited greatly from Rogue, though they became renowned in the industry for their participation in the game.
Other ports In 1988, the budget software publisher Mastertronic released a commercial port of Rogue for the Amstrad CPC, Commodore 64, Atari 8-bit, and ZX Spectrum computers. Numerous clones exist for modern operating systems such as Microsoft Windows, Mac OS X, Palm OS, Linux, BSD OSs, and iOS. It is even included in the base distribution of NetBSD and DragonflyBSD. Automated play Because the input and output of the original game is over a terminal interface, it is relatively easy in Unix to redirect output to another program. One such program, Rog-O-Matic, was developed in 1981 to play and win the game, by four graduate students in the Computer Science Department at Carnegie-Mellon University in Pittsburgh: Andrew Appel, Leonard Harney, Guy Jacobson and Michael Loren Mauldin. Ken Arnold said that he liked to make "sure that every subsequent version of rogue had a new feature in it that broke Rogue-O-Matic." Nevertheless, it remains a noted study in expert system design and led to the development of other game-playing programs, typically called "bots". Some of these bots target other roguelikes, in particular Angband. Reception In March 1984, Jerry Pournelle named the version of Rogue for the IBM PC as his "game of the month", describing it as "a real time trap. I found myself thinking 'just one more try' far too often". The game was reviewed in 1986 in Dragon #112 by Hartley and Pattie Lesser in the "Role of Computers" column. In a subsequent column, the reviewers gave the IBM and Mac versions of the game 3½ out of 5 stars. Compute! favorably reviewed Epyx's Amiga version as improving on the text-based original, stating that "the game will give you many hours of gaming fun". In 2009, Rogue was named #6 on the "Ten Greatest PC Games Ever" list by PC World. Legacy Because of Rogues popularity at colleges in the early 1980s, other users sought to expand or create similar games to Rogue. However, as neither Toy, Wichman, nor Arnold released the source code of the game, these efforts generally required the programmers to craft the core game elements from scratch to mimic Rogue. Though there were multiple titles that tried this, the two most significant ones were Moria (1983) and Hack (1982). Both games spawned a family of improved versions and clones over the next several years, leading to a wide number of games in a similar flavor. These games, which generally feature turn-based exploration and combat in a high fantasy setting in a procedurally-generated dungeon and employing permadeath, are named roguelike games in honor of Rogues impact. Most of the graphical interface conventions used in Rogue were reused within these other roguelikes, such as the use of @ to represent the player-character. Toy, Wichman, and Arnold reunited onstage for the first time in 30 years in an event called "Roguelike Celebration" at San Francisco in 2016. The gameplay mechanics of Rogue were influential in the creation of Torneko no Daibōken: Fushigi no Dungeon, the very first game in the Mystery Dungeon series by Chunsoft. References External links A Guide to the Dungeons of Doom - the original paper by Michael Toy and Kenneth Arnold describing the game Rogue 1984 – The DOS Game, the History, the Science Rogue Central @ coredumpcentral.org Information, documentation, screenshots, and various versions for download and online play. 
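The terminal redirection noted under automated play above can be sketched with a pseudo-terminal; this is a minimal illustration only, with the binary name and keystroke chosen arbitrarily, and is not how Rog-O-Matic itself was implemented.

```python
import os
import pty
import subprocess

# Run the game on a pseudo-terminal so it behaves as if a person were at
# the keyboard; a bot can then read the screen output and write keystrokes.
# "rogue" is assumed to be on PATH; the keystroke below is arbitrary.
master, slave = pty.openpty()
game = subprocess.Popen(["rogue"], stdin=slave, stdout=slave, stderr=slave, close_fds=True)
os.close(slave)

screen = os.read(master, 4096).decode("ascii", errors="replace")  # current screen contents
os.write(master, b"j")   # send "move down", as a bot such as Rog-O-Matic would
game.terminate()
```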
Michael Toy, Glenn Wichman, and Ken Arnold panel at the 2016 Roguelike Celebration 1980 video games Role-playing video games Amiga games Amstrad CPC games Atari 8-bit family games Atari ST games Commodore 64 games CP/M games Software that uses ncurses Epyx games Linux games Classic Mac OS games Mainframe games Mastertronic games Roguelike video games TRS-80 Color Computer games Unix games Video games with textual graphics ZX Spectrum games Open-source video games Public-domain software with source code Video games developed in the United States
39839002
https://en.wikipedia.org/wiki/Hochschule%20Bonn-Rhein-Sieg%20University%20of%20Applied%20Sciences
Hochschule Bonn-Rhein-Sieg University of Applied Sciences
The Hochschule Bonn-Rhein-Sieg University of Applied Sciences is a German university of applied sciences with more than 9,500 students and 150 professors. Its campus comprises three distinct locations, situated in Sankt Augustin, Rheinbach and Hennef/Sieg (all in the vicinity of Cologne and Bonn).
History
General Information
The Hochschule Bonn-Rhein-Sieg was founded on 1 January 1995 by the German Federal State of North Rhine-Westphalia. Its formal establishment was part of an agreement that compensated Bonn for its loss of status as capital of the Federal Republic of Germany. Until the end of 2004, the University of Applied Sciences was funded by the Federal Government of Germany. From 2005 it became an establishment of the German Federal State of North Rhine-Westphalia; since 1 January 2007 it has been an autonomous body of public law as defined by the German Higher Education Autonomy Act (Hochschulfreiheitsgesetz, HfG). The University of Applied Sciences was renamed "Hochschule Bonn-Rhein-Sieg" on 1 January 2009, with the abbreviation being "H-BRS". In October 2011, Hochschule Bonn-Rhein-Sieg joined the European University Association (EUA).
Locations
The University comprises three locations, including five departments. The Departments of Computer Science as well as Electrical Engineering, Mechanical Engineering and Technical Journalism (EMT) are located at Sankt Augustin, the Department of Natural Sciences can be found at Rheinbach, and the Department of Management Sciences is located at both campuses. The Hennef site houses the Department of Social Security Studies. Bonn-Rhein-Sieg University's administrative buildings are located at the Sankt Augustin site. In Bonn the University of Applied Sciences runs the Bonn-Aachen International Center for Information Technology (B-IT) in collaboration with Aachen Technical University (Rheinisch-Westfälische Technische Hochschule) and the University of Bonn (Rheinische Friedrich-Wilhelms-Universität).
Departments and degree programmes
The degree programmes offered by H-BRS are recognised throughout the European Union within the framework of the European Credit Transfer System (ECTS), meaning that ECTS credits obtained at H-BRS can be recognised towards a similar degree at any other European university. The university currently offers 36 degree programmes in five departments, divided into Bachelor's and Master's programmes. In addition, there are further education and certificate study programmes. The university has the following departments:
The Department of Management Sciences
The Department of Computer Science
The Department of Electrical Engineering, Mechanical Engineering and Technical Journalism (EMT)
The Department of Natural Sciences
The Department of Social Security Studies
Central facilities and research institutes
Central facilities
The Language Centre
The Language Centre works in close cooperation with all the departments, offering tailor-made courses to suit their individual requirements. At present the Language Centre offers general and subject-specific courses in 14 languages, which are, for the most part, held by native speakers. The range of courses is extended according to the students' individual requirements and needs.
Furthermore, the Language Centre provides international students with the opportunity to sit internationally recognised language tests in English and German as a foreign language, such as the German Language Proficiency Test for the Admission of International Students to German Universities (DSH), which is mandatory for study at H-BRS. In addition to offering foreign-language courses and testing, the Language Centre conducts seminars in Intercultural Communication. The University and District Library The buildings of the University and District Library, which also serves as district library for the Rhein-Sieg District, are located at Sankt Augustin and Rheinbach. The Library provides its customers with a versatile collection of books, periodicals, digital media and databases, which can in part be accessed from home. In addition, it offers regular art exhibitions, book readings and manifold information services as well as e-learning facilities. The Language Centre's Computer-Assisted Language Learning Laboratory (CALL) and a self-access centre are integrated into the Library. Research institutes The Centre for Entrepreneurship, Innovation and SMEs (CENTIM) The Centre for Ethics and Responsibility (ZEV) The Centre for Teaching Development and Innovation (ZIEL) The Graduate Institute (GI) The Institute for Detection Technologies (IDT) The Institute for Management (IfM) The Institute for Media Research and Development (IMEA) The Institute of Safety and Security Research (ISF) The Institute for Social Innovations (ISI) The Institute of Technology, Resource and Energy-Efficient Engineering (TREE) The Institute of Visual Computing (IVC) The International Centre for Sustainable Development (IZNE) Student life National Code of Conduct and international partnerships The Hochschule Bonn-Rhein-Sieg University of Applied Sciences has agreed to accept the "National Code of Conduct on Foreign Students at German Universities", passed by the German Rectors’ Conference in 2009. The Code of Conduct is aimed at strengthening internationalisation at German universities by securing and continuously enhancing the quality of support provided to international students. The guiding principle is, wherever possible, to grant international students the same rights as German or EU students enjoy and, over and above that, to offer them the services and assistance that they particularly need. The Code of Conduct is a voluntary commitment by the participating universities and contains fundamental standards relevant to the areas of information, marketing and admission as well as academic, language and social support. International students coming to the Bonn-Rhein-Sieg University of Applied Sciences can rely upon compliance with the standards. This voluntary commitment demonstrates H-BRS' undertaking to provide appropriate support, which is an essential condition for the sustainable success of international students and researchers. Hochschule Bonn-Rhein-Sieg currently maintains partnerships with approximately 60 universities all over the world. The Bonn Student Union (Studierendenwerk Bonn) The Bonn Student Union (Studierendenwerk Bonn) looks after the interests of all students in the surrounding region, including those enrolled at the Bonn-Rhein-Sieg University of Applied Sciences. It is responsible for H-BRS' food service, provides accommodation and helps with student finance as well as childcare. It is the students’ contact point for all matters relating to student welfare. 
The student executive bodies and committees The interests of the Student Body, which comprises all the students enrolled at Bonn-Rhein-Sieg University, are represented by the following executive bodies and committees: the Student Parliament (StuPa), the General Students’ Committee (AStA), the student councils within each department as well as BRSU's central executive bodies, where the student representatives hold several seats. Every student at Bonn-Rhein-Sieg University can be elected into one of these institutions for a one-year term of office. One of the boards of the AStA, dealing with cultural issues, initiates intercultural exchange projects. In addition, the student councils offer help and advice to international students. The International Welcome Centre The International Welcome Centre is a meeting and service point aimed at providing support to all international students and guest academics before and during their study period at the Hochschule Bonn-Rhein-Sieg University of Applied Sciences. Here students can obtain all information relevant to the formalities required, to accommodation as well as life in Germany and on the campus. The International Office The International Office organises regular excursions, boat trips and cultural events for international students. Through its proximity to the Rhine, the Siebengebirge mountains, the Eifel, the High Fens region and the Nürburgring, the Rhein-Sieg District provides excellent leisure opportunities. The nearby cities of Bonn, Aachen and Cologne, with their theatres, concert halls, museums, art galleries and cathedrals, offer a versatile programme of activities from the cultural point of view, too. In collaboration with the Bonn-Rhein-Sieg Employment Agency (Agentur für Arbeit Bonn/Rhein-Sieg), the International Centre also runs a project aimed at supporting international students who wish to work in Germany when graduating from university. Study Buddies The Study Buddy programme was initiated by the Department of Natural Sciences. Study Buddies are students in higher semesters who volunteer to look after students during the first weeks of their stay in Germany. This may involve contacting them via email, picking them up at the airport or the railway station, explaining the pitfalls of the German registration procedures, showing them around the University or simply sharing a coffee and chatting about the new experiences gained. Out of Campus Day Once a year, the Department of Natural Sciences organises an intercultural festival called “Out of Campus Day” in collaboration with the General Students’ Committee (AStA) and the Student Parliament (StuPA). The festival is aimed at providing students with information on a study period abroad, at promoting communication between German and international students and at celebrating the international atmosphere at the Rheinbach Campus. Apart from providing students with comprehensive information on partner universities and exchange programmes, degrees, costs and funding possibilities, the Student Council organises a musical programme and a wide variety of games. HELP – support for students and employees with family commitments HELP is a contact point aimed at giving support and advice to students and employees who have questions on how to balance their study/job and family commitments. HELP is responsible for collecting and communicating relevant information on childcare facilities and holiday childcare schemes to parents or members of BRSU with relatives in need of care. 
H-BRS also provides specially equipped study rooms to parents with young children. During their holidays, primary and secondary school children can be looked after within the framework of a project called "Try it". The Bonn-Rhein-Sieg Runners and other sports activities Members and alumni of H-BRS have participated in the Bonn marathon organised by Deutsche Post on several occasions. This has now led to the formation of a sports team called “Bonn-Rhein-Sieg Runners”. Hochschule Bonn-Rhein-Sieg also has an online portal for students interested in sports. Additionally, the University of Bonn offers a sports programme that comes out at the beginning of each semester. It includes numerous activities in which students can take part using the facilities of the University of Bonn. The Doppelpunkt university newspaper The doppelpunkt university newspaper is published twice a year. It covers a wide range of topics relating to H-BRS itself, tuition and research, university policy, international issues, the jobs market and miscellaneous items. Students are invited to make their own contributions to the newspaper. The Doppelpunkt also offers various online services, such as a virtual job fair and an online residential market. Reputation, rankings and contests The Hochschule Bonn-Rhein-Sieg has been given excellent ratings and been presented with a number of achievement awards in many areas over the past few years. The “Family-Friendly University” In March 2007 the Bonn-Rhein-Sieg University of Applied Sciences was awarded the basic certification as a “Family-Friendly University” for providing family-friendly facilities to students and employees alike - including childcare facilities, study rooms for parents with children, alternating telework and much else. In June 2010 the certification was extended for an additional three years. The 2007 European E-Quality Seal On the occasion of the 2008 ERASMUS Conference of the German Academic Exchange Service (DAAD), the Hochschule Bonn-Rhein-Sieg University of Applied Sciences was awarded the 2007 European E-Quality Seal. H-BRS was the only university in the German Federal State of North-Rhine Westphalia to win this award in 2008. Along with the Hochschule Bonn-Rhein-Sieg, seven other universities throughout Germany were among the prize winners. The DAAD awards the E-Quality Seal for special merits and achievements relating to the exchange of German and international students and lecturers under the ERASMUS Scheme. Internal innovation and teaching awards Since 2010, H-BRS has presented an internal innovation award aimed at honouring innovative ideas coming from within the University; at raising awareness of dedication to tuition, research and transfer and at giving new impetus to innovation in the Rhein-Sieg District. The focus of the award shifts from year to year. In addition, the Hochschule Bonn-Rhein-Sieg grants an internal award for excellent performance in tuition every two years. The purpose of the award is to draw people's attention to high-quality teaching performance and to foster this; to give incentives to all teaching staff; to provide background information on tuition at universities, to raise awareness of the importance of tuition at H-BRS and to confer more responsibility on students. Participation in international contests and organisation of international events B-IT-bots contest The b-IT-bots team, which includes members of the University of Applied Sciences, regularly takes part in various RoboCup contests. 
Among others, it won the titles of German Champion (in 2009 and 2010) and World Champion (2009) in the RoboCup@Home contest. The team consists of professors from the Department of Computer Science, academic staff and students of the master's degree programme in Autonomous Systems. Formula Student The student team BRS Motorsport regularly participates in “Formula Student”, a globally renowned construction and design contest for students. Teams from approximately 270 universities and universities of applied sciences all over the world develop one prototype each for a single-seated formula one racing car, as well as drawing up concepts for a fictitious production rate of 1,000 vehicles per year. FrOSCon The two-day Free and Open Source Software Conference (FrOSCon) on issues relating to software and open source is held once a year by the association of the same name in collaboration with the local Linux/Unix User Group and the Department of Computer Science. The Hochschule Bonn-Rhein-Sieg in digital media Hochschule Bonn-Rhein-Sieg is mentioned in the computer game “Deus Ex: Human Revolution”. In the last stage of the game the player can find an e-book containing information on a speech delivered by Hugh Darrows at the University of Applied Sciences in 2016. (The game refers to the year 2027). Hugh Darrows is the inventor of augmentation technology, which is subject to controversy according to the game. BusinessCampus Rhein-Sieg GmbH To support students and graduates wishing to set up their own businesses, the Hochschule Bonn-Rhein-Sieg, the Rhein-Sieg District and Kreissparkasse Köln (Cologne savings bank) have jointly founded BusinessCampus Rhein-Sieg GmbH, i.e. the operating company of the business incubators at Sankt Augustin and Rheinbach. Here young entrepreneurs can rent offices at favourable prices, using the infrastructure and services provided. At each of the two locations in Sankt Augustin and Rheinbach, there is also a dining hall and a hall of residence. Notable alumni Katrin Bauerfeind (*1982), TV presenter Martin Kläser (*1987), poker player Julia Seeliger (*1979), journalist and politician Marco Knauf, Niclas Lecloux and Inga Koster, founders of the True Fruits business enterprise Witali Malykin (* 1982), Chess player See also List of German universities Wikipedia website in German References Links Website of the Hochschule Bonn-Rhein-Sieg - English Website of the Hochschule Bonn-Rhein-Sieg - German Language Centre University and District Library Newspaper of the University - In German Educational institutions established in 1995 Education in Bonn Universities and colleges in North Rhine-Westphalia 1995 establishments in Germany
68914413
https://en.wikipedia.org/wiki/VEGA%20Microprocessors
VEGA Microprocessors
VEGA Microprocessors are a portfolio of indigenous processors developed by C-DAC. The portfolio includes several 32-bit/64-bit single/multi-core, superscalar, in-order/out-of-order high-performance processors based on the RISC-V ISA. It also features India’s first indigenous 64-bit, superscalar, out-of-order processor, which is the main highlight of the portfolio. The Centre for Development of Advanced Computing (C-DAC) is an autonomous Scientific Society, operating under the Ministry of Electronics and Information Technology (MeitY), Govt. of India. The Microprocessor Development Programme (MDP) was initiated and funded by MeitY with the mission objective of indigenously designing and developing a family of microprocessors, related IPs and the complete ecosystem, to enable fully indigenous product development that meets various requirements in the strategic, industrial and commercial sectors. As part of the project, C-DAC has successfully developed the VEGA series of microprocessors in soft IP form, which include 32-bit single-core (in-order), 64-bit single-core (in-order and out-of-order), 64-bit dual-core (out-of-order), and 64-bit quad-core (out-of-order) designs. These high-performance processors are based on the open-source RISC-V Instruction Set Architecture. The tape-out of some of these processor chips has also been planned. VEGA processors are used in the “Swadeshi Microprocessor Challenge - Innovate Solutions for #Atmanirbhar Bharat”. Processor Variants There are many variants of VEGA microprocessors, including: VEGA ET1031 VEGA ET1031 is a compact and efficient 32-bit, 3-stage in-order processor based on the RISC-V Instruction Set Architecture. This microprocessor can be used as an effective workhorse in low-power IoT applications. It implements the RISC-V RV32IM Instruction Set Architecture and contains a high-performance multiply/divide unit, a configurable AXI4 or AHB external interface, an optional MPU (Memory Protection Unit), a Platform Level Interrupt Controller and an advanced Integrated Debug Controller. VEGA AS1061 VEGA AS1061 is a 64-bit, 6-stage in-order pipelined processor based on the RISC-V 64GC (RV64IMAFDC) Instruction Set Architecture. It is mainly aimed at low-power embedded applications. The core has a highly optimized 6-stage in-order pipeline with supervisor support and is capable of booting Linux or other operating systems. The pipeline is highly configurable and can support the RISC-V RV64 IMAFDC extensions. The AXI or AHB standard interface provided enables ease of system integration, and a JTAG interface is provided for debug support. It also supports a highly optimized branch predictor with BTB, BHT and RAS, along with an optional Memory Management Unit (MMU), configurable L1 caches, a Platform Level Interrupt Controller, etc. VEGA AS1161 VEGA AS1161 features an out-of-order processing engine with a 16-stage pipeline, enabling it to meet next-generation computational requirements. The design supports the RISC-V 64G (RV64IMAFD) Instruction Set Architecture in a 13-16 stage out-of-order pipeline implementation. The processor supports single- and double-precision floating-point instructions, and a fully featured memory subsystem with a Memory Management Unit and page-based virtual memory for Linux-based applications. The AS1161 is optimized for high performance, integrating an advanced branch predictor for efficient branch execution, along with instruction and data caches. Features also include a PLIC and vectored interrupts for serving various types of system events. 
An AXI4- / ACE, AHB- compliant external interface facilitates ease of system integration. There is also a WFI mode for power management, and JTAG debug interface for development support. VEGA AS2161 VEGA AS2161 features a dual core out-of-order processing engine with a 16-stage pipeline for high performance compute requirements. The design supports RISC-V 64G (RV64IMAFD) Instruction Set Architecture in a 13-16 stage out-of-order pipeline implementation. The processor also supports single and double precision floating point instructions, and MMU for Linux based applications. This high-performance application core comes with advanced branch prediction for efficient branch execution, Instruction and Data caches. This is ideal for applications requiring high-throughput performance e.g., Media server, Single Board Computer, Storage, Networking etc. A Cache coherent interconnect along with a highly optimized L2 cache is also part of the design. VEGA AS4161 VEGA AS4161 features a quad core out-of-order processing engine with a 16-stage pipeline for high performance compute requirements. The design supports RISC-V 64G (RV64IMAFD) Instruction Set Architecture in a 13-16 stage out-of-order pipeline implementation along with advanced branch prediction unit, L1 Caches, MMU, TLB etc. This is ideal for applications requiring high-throughput performance e.g., Storage, Networking, etc. An AXI4- / ACE, AHB- compliant external interface is used to connect multiple cores to the interconnect and memory subsystem. A Cache coherent interconnect along with a highly optimized L2 cache is a part of the design. Peripherals C-DAC has a wide range of System and Peripheral IPs under the brand name ASTRA which are Silicon proven, consisting of the robust RTL, extensively verified and fully synthesizable technology independent IP cores which form the building blocks for an SoC implementation. Some of the peripherals include: EROTG1 - USB On-the-Go controller ERUSBHC - USB Host Controller ERUSB2 - USB Function controller ERPCIe - PCI Express Endpoint Controller ERSATAII - SATA Host Controller ERMAC - Ethernet Media Access Controller (10/100 Mbps) ERGMAC - Gigabit Ethernet Media Access Controller ER15530 - Manchester Encoder Decoder core ERVIC - Vectored Interrupt Controller ER146818 - Real Time Clock ERTIMER - Timer ER16C450 - UART ERGPIO - General Purpose Input-Output Controller ERSPIM - Serial Peripheral Interface Master Controller ERPLIC - Platform Level Interrupt Controller ERWDT - Watch Dog Timer ERIIC - Inter-Integrated Circuit Master Controller ERQSPI - Quad SPI ERPWM - Pulse Width Modulator ERSMC - Static Memory Controller (SRAM/NOR/eMMC) ERDMAC - Direct Memory Access Controller ERI2S - Integrated Inter-IC Sound Bus ERDEBUG - RISC-V Debug module ERSDRAM - SDRAM Controller ERSDHOST - SD Host Controller ERDISPLAY - Display Controller ERAXIBUS - AXI Bus Interconnect ERAXIAPB - AXI to APB Bus converter SoCs THEJAS32 THEJAS32 SoC is built around VEGA ET1031, a 32-bit high performance microcontroller class processor consisting of a 3-stage in-order RISC-V based core. The peripherals available in THEJAS32 SoC are GPIO, Interrupt Controller, Timers, RAM, SPI, UART, I2C, PWM and ADC. This is targeted for applications like sensor fusion, smart meters, small IoT devices, wearable devices, electronic toys, etc. This SoC is ported on to Digilent Artix-7 35T FPGA Development Board, extensively used by the Swadeshi Microprocessor Challenge Participants. 
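The ISA strings quoted above (RV32IM, RV64IMAFDC, RV64IMAFD and so on) are standard RISC-V extension sets, so software can discover at run time which extensions a given core implements by reading the architecture's misa control and status register. The following C sketch only illustrates that standard RISC-V mechanism; it is not taken from C-DAC's VEGA SDK or documentation, and it assumes bare-metal execution in machine mode (misa is a machine-mode CSR) with a RISC-V GCC toolchain.
#include <stdint.h>

/* Read the standard RISC-V misa CSR (accessible in machine mode only). */
static inline unsigned long read_misa(void)
{
    unsigned long value;
    __asm__ volatile ("csrr %0, misa" : "=r"(value));
    return value;
}

/* Write the single-letter extension names encoded in misa into `out`,
 * which must have room for at least 27 characters. Bit 0 stands for
 * extension 'A', bit 8 for 'I', bit 12 for 'M', and so on. */
void list_isa_extensions(char *out)
{
    unsigned long isa = read_misa();
    int n = 0;
    for (int bit = 0; bit < 26; bit++) {
        if (isa & (1UL << bit))
            out[n++] = (char)('A' + bit);
    }
    out[n] = '\0';   /* e.g. "ACDFIMSU" on an RV64GC core with S and U modes */
}
On a core described here as RV64IMAFDC with supervisor support, the resulting string would be expected to contain at least the letters A, C, D, F, I and M; on an RV32IM design it would reduce to I and M. On cores where misa is not readable or reads as zero (which the RISC-V specification permits), the extension set has to be taken from documentation instead.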
THEJAS64 THEJAS64 SoC is built around VEGA AS1061, a 64-bit processor with a 6-stage in-order pipeline optimized for high performance. This processor includes an efficient branch predictor as well as instruction and data caches, and is targeted at applications like IoT devices, motor control, wearable devices, high-performance embedded systems, consumer electronics and industrial automation. The peripherals available in this SoC are GPIO, Interrupt Controller, Timers, DDR3 RAM, SPI, UART, I2C, PWM, ADC and 10/100 Ethernet. This SoC is ported onto the Digilent Artix-7 100T, Nexys A7 and Genesys2 FPGA development boards, extensively used by Swadeshi Microprocessor Challenge participants. VEGA Ecosystem The proposed SoCs from C-DAC will contain a single-, dual- or quad-core processor integrated with in-house developed, silicon-proven peripheral IPs suitable for various application domains such as strategic, industrial, automotive, health and consumer. The complete ecosystem available for embedded systems design with the VEGA processors consists of board support packages, an SDK with an integrated toolchain, IDE plug-ins and a debugger for development, testing and debugging. Linux and other standard operating systems have been ported and are also available as part of the ecosystem. Tapeouts Development Boards References External links VEGA Processors YouTube channel CNX Software: Article System on a chip Superscalar microprocessors Manycore processors Microcontrollers Embedded microprocessors 32-bit microprocessors 64-bit microprocessors Technology companies of India Science and technology in India
29077886
https://en.wikipedia.org/wiki/XRumer
XRumer
XRumer is a piece of software made for spamming online forums and comment sections. It is marketed as a program for search engine optimization and was created by BotmasterLabs. It is able to register and post to forums (forum spam) with the aim of boosting search engine rankings. The program is able to bypass security techniques commonly used by many forums and blogs to deter automated spam, such as account registration, client detection, many forms of CAPTCHAs, and e-mail activation before posting. The program utilises SOCKS and HTTP proxies in an attempt to make it more difficult for administrators to block posts by source IP, and features a proxy checking tool to verify the integrity and anonymity of the proxies used. In addition, the software can avoid the suspicions of forum administrators by first registering to make a post in the form of a question which mentions the spam product ("Where can I get...?"), before registering another account to post a spam link which mentions the product. The side effect of these innocent-looking posts is that helpful forum visitors may search on a search engine (e.g. Google) for the product and themselves post a link to help out, thus bolstering the product's Google ranking without falling afoul of forum posting policies. The software is also capable of avoiding detection by making posts in off-topic, spam and overflow sections of forums, thus attempting to keep its activities in high-activity, low-content areas of the targeted forum. The software is also used to spam other platforms, including website comment sections. Method of operation XRumer is capable of posting to blogs and guestbooks in addition to its main role as an automated forum posting tool. It can also create forum profiles complete with a signature, in an attempt to avoid alerting forum administrators with any off-topic forum posts. The software is also able to collect and answer the text-based security questions (e.g. "what is 2+2?") that forums often use upon registration. Since its latest version, XRumer can collect such security questions from multiple sources and is much more effective at defeating them. The helper program Hrefer is also included. This software is used to automatically parse results from search engines including Google, Yahoo, Bing and Yandex for forums and blogs that can then be used as a target list for the main XRumer application. According to The Register, as of October 2008, XRumer could defeat the captchas of Hotmail and Gmail. This enables the software to create accounts with these free email services, which are used to register in forums that it posts to. XRumer also posts slowly initially, in an attempt to avoid detection by posting unnaturally fast. Between 2009 and 2011, XRumer no longer recognized Hotmail and Gmail captchas due to a change in captcha format; users of XRumer could only defeat such captchas by utilizing external human captcha-solving services. 
The easiest method to defeat Xrumer is to simply require the first post of any new forum member or blog poster to be approved before it can appear. There are several resources that help block forum spam, which reference reports of forum spam by username and IP address. If a user/IP has appeared in the site's lists, it is highly likely that it is a black-hat user of XRumer. Common defensive actions by webmasters are to institute IP-based posting bans on subnetworks used by the spammers. The spam messages in a forum typically take the form of "link spam" which will often be included in older topics and private messages (PMs) leaving the newer threads and posted messages "clear" of apparent spam. Sophisticated spammers will copy posts from other areas of the site, giving the appearance of a valid, on-topic reply. The best clue that it is a spammer is that the links in the user profile are completely unrelated to the forum topic, and the posted messages, while seemingly within the general topic of the forum, will be non-sequiturs and out-of-place within the topic thread. Alternatively, the spammers post generic "I am excited to begin posting and contributing here." messages that are content-neutral. The damage caused to forums is classified in several areas: first and foremost, the admin time to clean the forum; second, the server bandwidth to accommodate the spam postings; third, the storage requirements at the forum server for the spam messages that are devoid of content; fourth, the community alienation and irritation around seeing spam; fifth, the offense to innocent forum members if their posts are mistaken as spam or their accounts suspended in error for suspected spamming; and sixth, the lowering of the information-to-noise ratio of the forum, which diminishes the value of the forum, skewing usage/active user statistics used to determine advertising rates. E-mail account creation As per the latest update to XRumer 7 the software is able to automatically register e-mail accounts on mail.ru (Russian IP addresses only) and Gmail. Support for creating e-mail accounts in an automated fashion on Hotmail and AOL has been completely removed. The technique employed by XRumer to bypass the CAPTCHA protection in Gmail and mail.ru is Averaging. A captcha is a challenge-response test frequently used by internet services in order to verify that the user is actually a human rather than a computer program. Commonly, captchas are dynamically created images of random numbers and/or letters. These images are distorted in some way so that the human eye can still recognize them but with the goal to make automatic recognition impossible. Captchas are used by free-mail services to prevent automatic creation of a huge number of email accounts and to protect automatic form submissions on blogs, forums and article directories. As of November 2012, Xrumer has once again cracked Recaptcha, and is able to successfully post to Forums/Blogs that use it. Averaging is a common method in physics to reduce noise in input data. The averaging attack can be used on image-based captchas if the following conditions are met: The predominant distortion in the captcha is of noise-like nature. It is possible to extract a series of different images with the same information encoded in them. Averaging of a series of images can be used to improve image quality (reduce distortion, or improve signal-to-noise ratio, so to say) of captchas and hence to make them more easily recognizable by OCR (optical character recognition) systems. 
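As a rough illustration of the averaging idea described here (this is not XRumer's actual code, and the fixed frame size and pre-aligned input are assumptions made for the example), the following C sketch computes the per-pixel arithmetic mean of several grayscale reloads of the same captcha:
#include <stddef.h>
#include <stdint.h>

/* Average `count` grayscale frames of width*height pixels into `out`.
 * Random, noise-like distortion tends to cancel out across reloads,
 * while the fixed payload (the characters) is reinforced, so the
 * averaged image is easier for an OCR stage to read. */
void average_captcha_frames(const uint8_t *const *frames, size_t count,
                            size_t width, size_t height, uint8_t *out)
{
    if (count == 0)
        return;
    size_t pixels = width * height;
    for (size_t p = 0; p < pixels; p++) {
        unsigned long sum = 0;
        for (size_t f = 0; f < count; f++)
            sum += frames[f][p];
        out[p] = (uint8_t)(sum / count);   /* per-pixel arithmetic mean */
    }
}
In practice the reloaded images would also have to be decoded and registered (aligned) before averaging, and the averaged result would then be handed to an OCR step of the kind mentioned above.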
The fact that noise and payload behave differently on "reload" is exploited. This allows the program to separate them and hence defeat the captcha without the need for a sophisticated algorithm. References External links Spamming Black hat search engine optimization Internet bots
577742
https://en.wikipedia.org/wiki/Photograph%20manipulation
Photograph manipulation
Photograph manipulation involves the transformation or alteration of a photograph using various methods and techniques to achieve desired results. Some photograph manipulations are considered to be skillful artwork, while others are considered to be unethical practices, especially when used to deceive the public. Other examples include being used for political propaganda, or to improve the appearance of a product or person, or simply as entertainment or practical jokes. Depending on the application and intent, some photograph manipulations are considered an art form because it involves the creation of unique images and in some instances, signature expressions of art by photographic artists. Ansel Adams employed some of the more common manipulations using darkroom exposure techniques, burning (darkening) and dodging (lightening) a photograph. Other examples of photo manipulation include retouching photographs using ink or paint, airbrushing, double exposure, piecing photos or negatives together in the darkroom, scratching instant films, or through the use of software-based manipulation tools applied to digital images. Software applications have been developed for the manipulation of digital images, ranging from professional applications to very basic imaging software for casual users. Manipulation techniques Photo manipulation dates back to some of the earliest photographs captured on glass and tin plates during the 19th century. The practice began not long after the creation of the first photograph (1825) by Joseph Nicéphore Niépce who developed heliography and made the first photographic print from a photoengraved printing plate. Traditional photographic prints can be altered using various methods and techniques that involve manipulation directly to the print, such as retouching with ink, paint, airbrushing, or scratching Polaroids during developing (Polaroid art). Negatives can be manipulated while still in the camera using double-exposure techniques, or in the darkroom by piecing photos or negatives together. Some darkroom manipulations involved techniques such as bleaching to artfully lighten or totally wash-out parts of the photograph, or hand coloring for aesthetic purposes or to mimic a fine art painting. In the early 19th century, photography and the technology that made it possible were rather crude and cumbersome. While the equipment and technology progressed over time, it was not until the late 20th century that photography evolved into the digital realm. In the 20th century, digital retouching became available with Quantel computers running Paintbox in professional environments, which, alongside other contemporary packages, were effectively replaced in the market by editing software for graphic imaging, such as Adobe Photoshop and GIMP. At the onset, digital photography was considered by some to be a radical new approach and was initially rejected by photographers because of its substandard quality. The transition from film to digital has been an ongoing process, although much progress was made in the early 21st century as a result of innovation that has greatly improved digital image quality while reducing the bulk and weight of cameras and equipment. Whereas manipulating photographs with tools such as Photoshop and GIMP is generally skill-intensive and time-consuming, the 21st century has seen the arrival of image editing software powered by advanced algorithms which allow complex transformations to be mostly automated. 
For example, beauty filters which smooth skin tone and create more visually pleasing facial proportions (for example, by enlarging a subject's eyes) are available within a number of widely used social media apps such as Instagram and TikTok, and can be applied in real time to live video. Such features are also available in dedicated image editing mobile applications like Facetune. Some, such as FaceApp use deep learning algorithms to automate complex, content-aware transformations, such as changing the age or gender of the subject of a photo, or modifying their facial expression. The term deepfake was coined in 2017 to refer to realistic images generated with deep learning techniques. The alterations can be done for entertainment purposes, or more nefarious purposes such as spreading disinformation. The information can be used to conduct malicious attacks, political gains, financial crime, or fraud. It can also be used in pornographic style videos, where an individual's face is placed on the body of a porn star. Political and ethical issues Photo manipulation has been used to deceive or persuade viewers or improve storytelling and self-expression. As early as the American Civil War, photographs were published as engravings based on more than one negative. In 1860, a photograph of the politician John Calhoun was manipulated and his body was used in another photograph with the head of the president of the United States, Abraham Lincoln. This photo credits itself as the first manipulated photo. Joseph Stalin made use of photo retouching for propaganda purposes. On May 5, 1920 his predecessor Vladimir Lenin held a speech for Soviet troops that Leon Trotsky attended. Stalin had Trotsky retouched out of a photograph showing Trotsky in attendance. In a well known case of damnatio memoriae ("condemnation of memory") image manipulation, NKVD leader Nikolai Yezhov, after his execution in 1940, was removed from an official press photo where he was pictured with Stalin; historians subsequently nicknaming him the "Vanishing Commissar". Such censorship of images in the Soviet Union was common. The pioneer among journalists distorting photographic images for news value was Bernarr Macfadden: in the mid-1920s, his "composograph" process involved reenacting real news events with costumed body doubles and then photographing the dramatized scenes—then pasting faces of the real news-personalities (gathered from unrelated photos) onto his staged images. In the 1930s, artist John Heartfield used a type of photo manipulation known as the photomontage to critique Nazi propaganda. Some ethical theories have been applied to image manipulation. During a panel on the topic of ethics in image manipulation Aude Oliva theorized that categorical shifts are necessary in order for an edited image to be viewed as a manipulation. In Image Act Theory, Carson Reynolds extended speech act theory by applying it to photo editing and image manipulations. In "How to Do Things with Pictures", William J. Mitchell details the long history of photo manipulation and discusses it critically. Use in journalism A notable incident of controversial photo manipulation occurred over a photograph that was altered to fit the vertical orientation of a 1982 National Geographic magazine cover. The altered image made two Egyptian pyramids appear closer together than they actually were in the original photograph. 
The incident triggered a debate about the appropriateness of falsifying an image, and raised questions regarding the magazine's credibility. Shortly after the incident, Tom Kennedy, director of photography for National Geographic stated, "We no longer use that technology to manipulate elements in a photo simply to achieve a more compelling graphic effect. We regarded that afterwards as a mistake, and we wouldn’t repeat that mistake today." There are other incidents of questionable photo manipulation in journalism. One such incident occurred in early 2005 after Martha Stewart was released from prison. Newsweek used a photograph of Stewart's face on the body of a much slimmer woman for their cover, suggesting that Stewart had lost weight while in prison. Speaking about the incident in an interview, Lynn Staley, assistant managing editor at Newsweek said, "The piece that we commissioned was intended to show Martha as she would be, not necessarily as she is." Staley also explained that Newsweek disclosed on page 3 that the cover image of Martha Stewart was a composite. Image manipulation software has affected the level of trust many viewers once had in the aphorism "the camera never lies". Images may be manipulated for fun, aesthetic reasons, or to improve the appearance of a subject but not all image manipulation is innocuous, as evidenced by the Kerry Fonda 2004 election photo controversy. The image in question was a fraudulent composite image of John Kerry taken on June 13, 1971 and Jane Fonda taken in August, 1972 sharing the same platform at a 1971 antiwar rally, the latter of which carried a fake Associated Press credit with the intent to change the public's perspective of reality. There is a growing body of writings devoted to the ethical use of digital editing in photojournalism. In the United States, for example, the National Press Photographers Association (NPPA) established a Code of Ethics which promotes the accuracy of published images, advising that photographers "do not manipulate images [...] that can mislead viewers or misrepresent subjects." Infringements of the Code are taken very seriously, especially regarding digital alteration of published photographs, as evidenced by a case in which Pulitzer prize-nominated photographer Allan Detrich resigned his post following the revelation that a number of his photographs had been manipulated. In 2010, a Ukrainian photographerStepan Rudik, winner of the 3rd prize story in Sports Featureswas disqualified due to violation of the rules of the World Press Photo contest. "After requesting RAW-files of the series from him, it became clear that an element had been removed from one of the original photographs." As of 2015, up to 20% of World Press Photo entries that made it to the penultimate round of the contest were disqualified after they were found to have been manipulated or post-processed with rules violations. Scientific fraud Retouching human subjects A common form of photographic manipulation, particularly in advertising, fashion, and glamour photography involves edits intended to enhance the appearance of the subject. Common transformations include smoothing skin texture, erasing scars, pimples, and other skin blemishes, slimming the subject's body, and erasing wrinkles and folds. Commentators have raised concerns that such practices may lead to unrealistic expectations and negative body image among the audience. 
Use in fashion The photo manipulation industry has often been accused of promoting or inciting a distorted and unrealistic image of selfmost specifically in younger people. The world of glamour photography is one specific industry that has been heavily involved with the use of photo manipulation (what many consider to be a concerning element as many people look up to celebrities in search of embodying the 'ideal figure'). Manipulation of a photo to alter a model's appearance can be used to change features such as skin complexion, hair color, body shape, and other features. Many of the alterations to skin involve removing blemishes through the use of the healing tool in Photoshop. Photo editors may also alter the color of hair to remove roots or add shine. Additionally, the model's teeth and eyes may be made to look whiter than they are in reality. Makeup and piercings can even be edited into pictures to look as though the model was wearing them when the photo was taken. Through photo editing, the appearance of a model may be drastically changed to mask imperfections. In an article entitled "Confessions of a Retoucher: how the modeling industry is harming women", a professional retoucher who has worked for mega-fashion brands shares the industry's secrets. Along with fixing imperfections like skin wrinkles and smoothing features, the size of the model is manipulated by either adding or subtracting visible weight. Reverse retouching is just as common as making models skinnier, “distorting the bodies of very thin models to make them appear more robust in a process called reverse retouching. It is almost worse than making someone slimmer because the image claims you can be at an unhealthy weight but still look healthy. In reality, you can't, you have to Photoshop it". Reverse retouching includes eliminating shadows from protruding bones, adding flesh over body parts, color correcting, and removing hair generated for warmth from extreme weight loss. Professionals are saying that if an image is not labeled "not retouched," then the public can assume that photograph has been modified. As the fashion industry continues to use photos that have been manipulated to idealize body types, there is a need for education about how unreal and unhealthy these images are and the negative implications they are promoting. A digital manipulation expert, who edited and altered a lot of images for the fashion industry and wants to remain private, says it is normal to digitally manipulate a photograph of a model to make them appear thinner, regardless of actual weight. Generally, photographs are edited to remove the appearance of up to . However, in the past 20 years, the practice has changed, as more celebrities are on social media and the public is now more aware of their actual appearances; it is likely that significant alterations would be noticed. The retoucher notes that the industry's goal is to make significant income in advertising, and that the unrealistic ideals cycle will continue as they have to maintain this. Since 2012, Seventeen Magazine announced they will no longer manipulate photos of their models. 14 year old Julia Bluhm petitioned that the magazine use a minimum of one unaltered photo in their spread. The petition received over 84,000 signatures. On social media Not only are photos being manipulated by professionals for the media, but also with the rise of social media, everyone has easy access to edit photos they post online. 
Countless mobile phone applications such as Facetune have been created to allow smartphone users to modify personal images. These applications allow people to edit virtually every aspect of themselves in the photo. With social media users and the younger generation being exposed to an extreme amount of imagery that has been manipulated, the consequences include body ideals that are unachievable. In advertising Photo manipulation has been used in advertisement for television commercials and magazines to make their products or the person look better and more appealing than how they look in reality. Some tricks that are used with photo manipulation for advertising are: fake grill marks with eye-liner, using white glue instead of milk, or using deodorant to make vegetables look glossy. Celebrity opposition Photo manipulation has triggered negative responses from both viewers and celebrities. This has led to celebrities refusing to have their photos retouched in support of the American Medical Association that has decided that "[we] must stop exposing impressionable children and teenagers to advertisements portraying models with body types only attainable with the help of photo editing software". These include Keira Knightley, Brad Pitt, Andy Roddick, Jessica Simpson, Lady Gaga and Zendaya. Brad Pitt had a photographer, Chuck Close, take photos of him that emphasized all of his flaws. Chuck Close is known for his photos that emphasize all skin flaws of an individual. Pitt did so in an effort to speak out against media using image manipulation software and manipulating celebrities' photos in an attempt to hide their flaws. Kate Winslet spoke out against photo manipulation in media after GQ magazine altered her body, making it look unnaturally thin. 42-year-old Cate Blanchett appeared on the cover of Intelligent Life's 2012 March/April issue, makeup-free and without digital retouching for the first time. In April 2010, Britney Spears agreed to release "un-airbrushed images of herself next to the digitally altered ones". The fundamental motive behind her move was to "highlight the pressure exerted on women to look perfect". In 2014, Hungarian pop vocalist and songwriter Boggie produced two music videos that achieved global attention for its stance on whitewashing in the beauty industry: the #1 MAHASZ chart hit "Parfüm" (Hungarian version) and "Nouveau Parfum" (French version) from her self-titled album Boggie, which reached two Billboard charts (#3 Jazz Album, #17 World Music Album). In the videos, the artist is shown singing as she is extensively retouched in real-time, ending with a side-by-side comparison of her natural and manipulated images as the song fades out. Corporate opposition Multiple companies have begun taking the initiative to speak out against the use of photo manipulation when advertising their products. Two companies that have done so include Dove and Aerie. Dove created the Dove Self-Esteem Fund and also the Dove Campaign for Real Beauty as a way to try to help build confidence in young women. They want to emphasize what is known as real beauty, or untouched photographs, in the media now. Also, Aerie has started their campaign #AerieREAL. They have a line of undergarments now that goes by that name with the intention of them being for everyone. Also, their advertisements state that the model has not been retouched in any way. They also add in their advertisements that "The real you is sexy." 
The American Medical Association stated that is opposed to the use of photo manipulation. Dr. McAneny made a statement that altering models to such extremes creates unrealistic expectations in children and teenagers regarding body image. He also said that the practice of digitally altering the weight of models in photographs should be stopped, so that children and teenager are not exposed to body types that cannot be attained in reality. The American Medical Associations as a whole adopted a policy to work with advertisers to work on setting up guidelines for advertisements to try to limit how much digital image manipulation is used. The goal of this policy is to limit the amount of unrealistic expectations for body image in advertisement. Government opposition Governments are exerting pressure on advertisers, and are starting to ban photos that are too airbrushed and edited. In the United Kingdom the Advertising Standards Authority has banned an advertisement by Lancôme featuring Julia Roberts for being misleading, stating that the flawless skin seen in the photo was too good to be true. The US is also moving in the direction of banning excessive photo manipulation where a CoverGirl model's ad was banned because it had exaggerated effects, leading to a misleading representation of the product. In 2015, France proceeded to pass a law that battles against the use of unrealistic body images and anorexia in the fashion industry. This includes modeling and photography. The models now have to show they are healthy and have a BMI of over 18 through a note from their doctor. Employers breaking this law will be fined and can serve a jail sentence up to six months. When a creator of a photograph does not disclose that the picture is edited or retouched, no matter how small the edit, they may also receive a fine or 30% of the costs of what they used to create their ad. In 2021, Norway enacted legislation making it a requirement to label digital manipulations of the bodies of persons when depicted in advertising. Failure to do so is punishable by a fine. Support Some editors of magazine companies do not view manipulating their cover models as an issue. In an interview with the editor of the French magazine Marie Claire, she stated that their readers are not idiots and that they can tell when a model has been retouched. Also, some who support photo manipulation in the media state that the altered photographs are not the issue, but that it is the expectations that viewers have that they fail to meet, such as wanting to have the same body as a celebrity on the cover of their favorite magazine. Opinion polling A survey done by United Kingdom based fashion store New Look showed that 90% of the individuals surveyed would prefer seeing a wider variety of body shapes in media. This would involve them wanting to see cover models that are not all thin, but some with more curves than others. The survey also talked about how readers view the use of photo manipulation. One statistic stated that 15% of the readers believed that the cover images are accurate depictions of the model in reality. Also, they found that 33% of women who were surveyed are aiming for a body that is impossible for them to attain. Dove and People Weekly also did a survey to see how photo manipulation affects the self-esteem of females. In doing this, they found that 80% of the women surveyed felt insecure when seeing photos of celebrities in the media. 
Of the women surveyed who had lower self-esteem, 70% of them do not believe that their appearance is pretty or stylish enough in comparison to cover models. Social and cultural implications The growing popularity of image manipulation has raised concern as to whether it allows for unrealistic images to be portrayed to the public. In her article "On Photography" (1977), Susan Sontag discusses the objectivity, or lack thereof, in photography, concluding that "photographs, which fiddle with the scale of the world, themselves get reduced, blown up, cropped, retouched, doctored and tricked out". A practice widely used in the magazine industry, the use of photo manipulation on an already subjective photograph creates a constructed reality for the individual and it can become difficult to differentiate fact from fiction. With the potential to alter body image, debate continues as to whether manipulated images, particularly those in magazines, contribute to self-esteem issues in both men and women. In today's world, photo manipulation has a positive impact by developing the creativity of one's mind or maybe a negative one by removing the art and beauty of capturing something so magnificent and natural or the way it should be. According to The Huffington Post, "Photoshopping and airbrushing, many believe, are now an inherent part of the beauty industry, as are makeup, lighting and styling". In a way, these image alterations are "selling" actual people to the masses to affect responses, reactions, and emotions toward these cultural icons. "Photoshop" as a verb The terms "photoshop", "photoshopped" and "photoshopping", derived from Adobe Photoshop, are ubiquitous and widely used colloquially and academically when referencing image editing software as it relates to digital manipulation and alteration of photographs. The term commonly refers to digital editing of photographs regardless of which software program is used. Trademark owner Adobe Inc. object to what they refer to as misuse of their trademarked software name, and consider it an infringement on their trademark to use terms such as "photoshopped" or "photoshopping" as a noun or verb, in possessive form or as a slang term, to prevent "genericization" or "genericide" of the company's trademark. Separately, the Free Software Foundation advises against using "photoshop" as a verb because Adobe Photoshop is proprietary software. In popular culture, the term photoshopping is sometimes associated with montages in the form of visual jokes, such as those published on Fark and in Mad magazine. Images may be propagated memetically via e-mail as humor or passed as actual news in a form of hoax. An example of the latter category is "Helicopter Shark", which was widely circulated as a so-called "National Geographic Photo of the Year" and was later revealed to be a hoax. Photoshop contests are games organized online with the goal of creating humorous images around a theme. Gallery See also References External links Digital Tampering in the Media, Politics, and Law – a collection of digitally manipulated photos of political interest Hoax Photo Gallery – more manipulated photos Erased figures in Kagemni's tomb — discusses political image manipulation with an example from Ancient Egypt Digital art Photographic techniques Photojournalism controversies Photography controversies Photography forgeries
1481515
https://en.wikipedia.org/wiki/Elektron%20%28company%29
Elektron (company)
Elektron is a Swedish developer and manufacturer of musical instruments founded in 1998, as well as having its headquarters, R&D and production in Gothenburg, Sweden. They produce mainly electronic musical instruments, but have also made effects units and software. Since 2012, there have been branch offices in Los Angeles and in Tokyo. Musicians who use Elektron instruments include Panda Bear, Timbaland, The Knife, Sophie Xeon, Depeche Mode, and Autechre. Product History The first Elektron product was an analog/digital hybrid tabletop synthesizer called the SidStation. Its sound engine was a Commodore 64 SID chip. During the years 2001-2003, Elektron released the Machinedrum (a 16-voice digital drum machine) and the Monomachine (a programmable 6-voice synthesizer using single-cycle waveforms). These instruments were, like the SidStation, housed in a brushed aluminum casing. Since then, the range of products has been extended to include the following hardware: The Model:Cycles (a beginner FM groovebox), The Digitone (an intermediate FM synthesizer groovebox), The Octatrack (a sampler), Digitakt (a sampler), Analog Keys & Analog Four (keyboard/tabletop 4-voice analog synthesizer), Analog Rytm (an 8-voice hybrid analog/digital drum machine) and Analog Heat (an 8-effects programmable analog sound processing unit). In 2015 Elektron released Overbridge (a software package used to integrate Elektron Analog hardware into a DAW) as a complement to the Analog range of instruments. In late 2016, Elektron expanded its product range by launching the Analog Drive, an 8-in-one effects drive pedal for electric guitar and bass guitar. Early years Elektron started working on its first electronic instrument in 1997. At the time, it was a school project, a mandatory course part of the Computer Science program at the Chalmers University of Technology in Gothenburg, Sweden. The three founders were Daniel Hansson, Anders Gärder and Mikael Räim. Hansson recalls: "There were a number of projects to choose from: build a digital land-line phone, a bicycle trip computer, or a beeper. None of that seemed fun or challenging enough, so I suggested we build a synthesizer instead!" The synthesizer, called the SidStation, was initially made in a test run of ten units. The project was deemed commercially viable, so in 1998 a company was started to nurture it and Elektron ESI was born. Following the SidStation, Elektron released the Machinedrum and the Monomachine. When development of the Octatrack (in 2009) began, Jonas Hillman stepped in to provide the management, structural reform and capital needed to get the company growing. Since then, Jonas has been acting CEO, majority owner and business spokesperson for the company, subsequently renamed Elektron Music Machines. With Hillman at the helm, Elektron offices were opened in Los Angeles and Tokyo. The Gothenburg office remains the company headquarters. The product portfolio has been expanded to include music production software (Overbridge) as well as analog synthesizer hardware. Music Hardware Analog Rytm Released in 2014. It has analog synthesis as well as digital effects and samples. This 8-voice hybrid drum machine quickly found its way to the electronic music community. At the time, there was a trend among musicians of going back to making music using real hardware. Analog Four was released in December 2012. The Analog Four is a tabletop 4-voice (monophonic or polyphonic) analog synthesizer with digital controls, a programmable step sequencer and digital send effects. 
It is also capable of CV/gate, which makes it possible to interact with practically any classic analog synthesizer or drum machine from the 1960s and onwards. The Analog Four can be described as a "compact modular" in the sense that each of the four voices has a set of pre-defined modules (oscillators and waveform generators, filters, envelopes, amplifiers and LFO:s and an overarching set of send effects (delay, reverb, chorus effect) that may be combined, routed and modulated in conventional as well as unconventional ways. Octatrack DPS-1 was released in January 2011. The Octatrack is an 8-voice sampler with built-in sequencer and eight MIDI control tracks. The samples (streamed from an CF card, recorded live from an external source or from one of the eight on-board tracks) can be time stretched, inverted, sliced, re-shuffled, re-sampled and triggered in many different ways. Digitakt First showcased at the NAMM trade show in January 2017. An 8-voice digital drum machine. Digitone Released in 2017, an 8-voice digital FM synthesizer. Digitone Keys Eight voice polyphonic synthesizer keyboard with 37-key velocity and pressure sensitive keyboard with aftertouch using the Digitone’s intuitive sound engine. Model:Cycles A six track FM based groovebox. Model:Samples In January 2019, Elektron announced a new sample based instrument, the Model:Samples. Similarly to the Digitakt, it features tracks dedicated to sample playback, as well as a basic subtractive engine (filter, amplifier, amp envelope, LFO) and effects. It also features Elektron's signature parameter locking and sequencing capabilities. However, unlike the Digitakt, the Model:Samples has only 6 tracks instead of 8, and cannot be sampled into directly (though new samples can be loaded onto it from a computer). Analog Heat A Stereo Analog Sound Processor. Has an assignable LFO, envelope and multi-mode filter. It has 8 main effects (including band-specific amplification, enhancement and distortion) and an equalizer. The Analog Heat is targeted at studio engineers as well as live performers. It can be programmed and played manually or connected to a DAW by installing Elektron's Overbridge software. Discontinued Instruments SidStation was the first Elektron instrument, released to the public in 1999. A tabletop synthesizer built on the SID chip originally found in the Commodore 64 home computer. The synthesizer had three voices, three oscillators (with various possible forms of synthesis like AM, FM and ring modulation, four rubber-coated potentiometers to control parameters including filter cut-off frequency, amplitude envelope and LFO, a jog wheel for patch selection and a telephone keypad for programming input. Even at the outset, the SidStation was a strictly limited product: it had to be discontinued when the last stock of virgin SID chips was depleted. Butchering even a single Commodore 64 for parts was out of the question. The original SidStation had a polished aluminum casing. There were a couple of special editions including red-tinted and blue tinted versions, as well as a carbon black version called SidStation Ninja. Machinedrum A digital, 16-voice drum machine. The first version, the SPS-1, was released in 2001. The style of the product and packaging was largely designed by Jesper Kouthoofd, who went on to found the Stockholm-based company Teenage Engineering in 2005. The Machinedrum SPS-1UW ("User Wave") was introduced in 2005, which added sampling capabilities to the Machinedrum. 
A midi interface unit called TM-1 ("Turbo Midi") was released shortly thereafter, making transfer speeds via MIDI up to ten times faster. Monomachine A programmable 6-voice digital synthesizer using single-cycle waveforms. A keyboard version, the SFX-6, was released in 2003, followed by the SFX-60 tabletop in 2004. After the release of the Monomachine, Elektron parted with its investors, making the company a privately owned company. In late 2007, the Machinedrum and the Monomachine were updated to MKII versions with improved hardware specifications and functionality. Analog Keys was released in November 2013. It is a development of the Analog Four architecture. It has a 37-key keyboard and an analog joystick for modulation and tweaking. The Analog Keys has more dedicated control keys than its tabletop sibling, including a jog wheel for quick access to pre-programmed sounds. The Analog Keys was designed to be a hands-on, tactile instrument fit for live performances while maintaining all the sequencing, sound crafting and programming capabilities of the Analog Four. At the 2016 Game Awards show in California, the artist Sonic Mayhem did a memorable performance using the Analog Keys. He performed a cover of the theme song from the classic Doom (1993 video game). Analog Drive Was a multi-functional drive pedal, with an audio equalizer and 8 different effects settings (ranging from non-destructive amplification and enhancement to harmonic, fuzzy or complete distortion). The drive targets a new customer segment for Elektron: electric guitar and bass players. Music Software Overbridge was released in 2015. This software lets a musician plug their hardware (Analog Keys, Analog Four, Analog Rytm, Analog Heat) into a computer via USB and access it using a DAW. The DAW is "fooled" into thinking the analog hardware is an audio plug-in instrument that can be played, programmed, automated and recorded just like a software instrument. If an Analog Keys or Analog Four is connected, Overbridge can also be used to control virtually any classic modular synth or drum machine with CV/gate or DIN sync capabilities. In other words, with an Overbridge setup, it is possible to play and automate, for example, a VCS 3 from 1969 from a laptop. Overbridge was in beta for 5 years since its inception, leading to much criticism from users, but has moved out of beta state, to a public release on April 16, 2020. Musicians Musicians who use Elektron instruments include Sophie Xeon, Warpaint, Kid Koala, Del tha Funky Homosapien, Susanne Sundfør, John Frusciante, The Knife, Air, Nine Inch Nails, New Order, Jean-Michel Jarre, Youth Code, Wilco, Aux 88, Cevin Key, Smashing Pumpkins, Mogwai, The Horrors, Plaid, Factory Floor, Matt McJunkins, Arcane Roots, The Bug, The Chemical Brothers, Thom Yorke and many others. Awards and accolades Octatrack won the MusicRadar "Sampler of the Year" award in 2011. The Analog Four received the FutureMusic "Hardware of the Year" award in 2013. The Analog Rytm received the FutureMusic "Hardware of the Year" award in 2014 and the Electronic Musician "Editor's Choice Award" in 2015. Overbridge won the Red Dot "best of the best" design award in 2016, and the Design S premium award. Analog Heat won the Music Radar "best hardware/outboard effects of the year" in 2016. 
See also The Octatrack The Monomachine The SidStation References External links Music equipment manufacturers Synthesizer manufacturing companies Swedish brands Musical instrument manufacturing companies of Sweden Manufacturing companies based in Gothenburg Privately held companies of Sweden Swedish companies established in 1998
719412
https://en.wikipedia.org/wiki/AUTOEXEC.BAT
AUTOEXEC.BAT
AUTOEXEC.BAT is a system file that was originally on DOS-type operating systems. It is a plain-text batch file in the root directory of the boot device. The name of the file is an abbreviation of "automatic execution", which describes its function in automatically executing commands on system startup; the filename was coined in response to the 8.3 filename limitations of the FAT file system family.
Usage
AUTOEXEC.BAT is read upon startup by all versions of DOS, including MS-DOS version 7.x as used in Windows 95 and Windows 98. Windows ME only parses environment variables as part of its attempts to reduce legacy dependencies, but this can be worked around. The filename was also used by DCP, an MS-DOS derivative by the former East German VEB Robotron.
In Korean versions of MS-DOS/PC DOS 4.01 and higher (except for PC DOS 7 and 2000), if the current country code is set to 82 (for Korea), no /P:filename is given and no default AUTOEXEC.BAT is found, COMMAND.COM will look for a file named KAUTOEXE.BAT instead, in order to ensure that the DBCS frontend drivers will be loaded even without properly set up CONFIG.SYS and AUTOEXEC.BAT files.
Under DOS, the file is executed by the primary copy of the command-line processor (typically COMMAND.COM) once the operating system has booted and CONFIG.SYS processing has finished. While DOS by itself provides no means to pass batch file parameters to COMMAND.COM for AUTOEXEC.BAT processing, the alternative command-line processor 4DOS supports a 4DOS.INI AutoExecParams directive and a //AutoExecParams= startup option to define such parameters. Under Concurrent DOS, Multiuser DOS and REAL/32, three initial parameters are passed to either the corresponding STARTxxy.BAT (if it exists) or the generic AUTOEXEC.BAT startup file: %1 holds the virtual console number, %2 the 2-digit terminal number (xx) (with 00 being the main console) and %3 the 1-digit session number (y).
Windows NT and its descendants Windows XP and Windows Vista parse AUTOEXEC.BAT when a user logs on. As with Windows ME, anything other than setting environment variables is ignored.
Unlike CONFIG.SYS, the commands in AUTOEXEC.BAT can be entered at the interactive command-line interpreter. They are just standard commands that the computer operator wants to be executed automatically whenever the computer is started, and can include other batch files. AUTOEXEC.BAT is most often used to set environment variables such as keyboard, soundcard, printer, and temporary file locations. It is also used to initiate low-level system utilities, such as the following:
Virus scanners
Disk caching software
Mouse drivers
Keyboard drivers
CD drivers
Miscellaneous other drivers
Example
In early versions of DOS, AUTOEXEC.BAT was by default very simple. The DATE and TIME commands were necessary as early PC and XT class machines did not have a battery backed-up real-time clock by default.
@ECHO OFF
CLS
DATE
TIME
VER
In non-US environments, the keyboard driver (like KEYB FR for the French keyboard) was also included. Later versions were often much expanded with numerous third-party device drivers. The following is a basic DOS 5 type AUTOEXEC.BAT configuration, consisting only of essential commands:
@ECHO OFF
PROMPT $P$G
PATH C:\DOS;C:\WINDOWS
SET TEMP=C:\TEMP
SET BLASTER=A220 I7 D1 T2
LH SMARTDRV.EXE
LH DOSKEY
LH MOUSE.COM /Y
This configuration sets common environment variables, loads a disk cache, places common directories into the default PATH, and initializes the DOS mouse and keyboard drivers. 
The PROMPT command sets the prompt to "C:\>" (when the working directory is the root of the C drive) instead of simply "C>" (the default prompt, indicating only the working drive and not the directory therein). In general, device drivers were loaded in CONFIG.SYS and programs were loaded in AUTOEXEC.BAT. Some devices, such as mice, could be loaded either as a device driver in CONFIG.SYS or as a TSR in AUTOEXEC.BAT, depending upon the manufacturer. In MS-DOS 6.0 and higher, a DOS boot menu is configurable. This can be of great help to users who wish to have optimized boot configurations for various programs, such as DOS games and Windows.
@ECHO OFF
PROMPT $P$G
PATH C:\DOS;C:\WINDOWS
SET TEMP=C:\TEMP
SET BLASTER=A220 I7 D1 T2
GOTO %CONFIG%
:WIN
LH SMARTDRV.EXE
LH MOUSE.COM /Y
WIN
GOTO END
:XMS
LH SMARTDRV.EXE
LH DOSKEY
GOTO END
:END
The GOTO %CONFIG% line tells DOS to jump to the label matching the menu entry that was selected from the boot menu defined in CONFIG.SYS. Each of these profiles is then named here and configured with the desired drivers and utilities. At the end of each specific configuration, a GOTO command redirects DOS to the :END section. Lines after :END are used by all profiles. Dual-booting DOS and Windows 9x When installing Windows 95 over a preexisting DOS/Windows installation, CONFIG.SYS and AUTOEXEC.BAT are renamed to CONFIG.DOS and AUTOEXEC.DOS. This is intended to ease dual booting between Windows 9x and DOS. When booting into DOS, they are temporarily renamed CONFIG.SYS and AUTOEXEC.BAT. Backups of the Windows 9x versions are made as .W40 files. Windows 9x also installs MSDOS.SYS, a plain-text configuration file containing switches that determine how the system boots. One of these, BootGUI, controls whether the system automatically starts the Windows graphical interface: if BootGUI=0 is set, the system boots to a DOS prompt instead, and Windows can still be loaded by typing WIN at the prompt (which runs WIN.COM). With this setting, the system's operation essentially becomes that of a DOS/Windows pairing like earlier Windows versions. When installing Caldera DR-DOS 7.02 and higher, the Windows version retains the name AUTOEXEC.BAT, while the file used by the DR-DOS COMMAND.COM is named AUTODOS7.BAT, referred to by the startup parameter /P:filename.ext in the SHELL directive. DR-DOS also differentiates the CONFIG.SYS file by using the name DCONFIG.SYS. OS/2 The equivalent of AUTOEXEC.BAT under OS/2 is the STARTUP.CMD file; however, genuine DOS sessions booted under OS/2 continue to use AUTOEXEC.BAT. Windows NT On Windows NT and its derivatives, Windows 2000, Windows Server 2003 and Windows XP, the equivalent file is called AUTOEXEC.NT and is located in the %SystemRoot%\system32 directory. The file is not used during the operating system boot process; it is executed when the MS-DOS environment is started, which occurs when a DOS application is loaded. The AUTOEXEC.BAT file may still often be found in the root directory of the boot drive on Windows NT. Windows considers only the SET and PATH statements it contains, in order to define environment variables global to all users. Setting environment variables through this file may be useful if, for example, MS-DOS is also booted from this drive (which requires that the drive be FAT-formatted) or to keep the variables across a reinstall.
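As an illustration only, a root-level AUTOEXEC.BAT kept for this purpose might contain nothing but SET and PATH statements; the directory names below are placeholders, not defaults:
REM Parsed by Windows NT at logon; only SET and PATH lines are honored
SET TEMP=C:\TEMP
PATH C:\DOS;C:\UTILS
Anything else placed in such a file is simply ignored by Windows.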
This is an exotic usage today, so the file usually remains empty. The Tweak UI applet from the Microsoft PowerToys collection allows this feature (Parse AUTOEXEC.BAT at logon) to be turned on or off. See also
COMMAND.COM
IBMBIO.COM / IO.SYS
IBMDOS.COM / MSDOS.SYS
SHELL (CONFIG.SYS directive)
CONFIG.SYS
AUTORUN.INF
References DOS files Configuration files MSX-DOS
2545484
https://en.wikipedia.org/wiki/English%20in%20computing
English in computing
The English language is sometimes described as the lingua franca of computing. In comparison to other sciences, where Latin and Greek are often the principal sources of vocabulary, computer science borrows more extensively from English. In the past, due to the technical limitations of early computers and the lack of international standardization on the Internet, computer users were limited to using English and the Latin alphabet. However, this historical limitation is less present today, due to innovations in internet infrastructure and increases in computer speed. Most software products are localized in numerous languages, and the invention of the Unicode character encoding has resolved problems with non-Latin alphabets. Some limitations have only been changed recently, such as with domain names, which previously allowed only ASCII characters. English is seen as having this role due to the prominence of the United States and the United Kingdom, both English-speaking countries, in the development and popularization of computer systems, computer networks, software and information technology. History Computer science has an ultimately mathematical foundation, which was laid by non-English-speaking cultures. The first mathematically literate societies in the Ancient Near East recorded methods for solving mathematical problems in steps. The word "algorithm" comes from the name of the famous medieval Arabic mathematician al-Khwārizmī, who contributed to the spread of Hindu-Arabic numerals, and the first systematic treatment of binary numbers was completed by Leibniz, a German mathematician. Leibniz wrote his treatise on the topic in French, the lingua franca of science at the time, and innovations in what is now called computer hardware occurred outside an English tradition, with Pascal inventing the first mechanical calculator and Leibniz improving it. Interest in building computing machines first emerged in the 19th century, with the coming of the Second Industrial Revolution. The origins of computing in an English tradition began in this era with Charles Babbage's conceptualization of the Difference Engine and the Analytical Engine, George Boole's work on logic, and Herman Hollerith's invention of the tabulating machine for use in the 1890 United States census. At the time, Britain enjoyed near-complete hegemonic power in the West at the height of the Pax Britannica, and America was experiencing an economic and demographic boom. By the interwar period in the early 20th century, the most important mathematics related to the development of computing was being done in English, which was also beginning to become the new lingua franca of science. Influence on other languages The computing terminology of many languages borrows from English. Some language communities actively resist this trend, while in other cases English is used extensively and more directly. This section gives some examples of the use of English loans in other languages and mentions any notable differences. Bulgarian Both English and Russian have influenced Bulgarian computing vocabulary. However, in many cases the borrowed terminology is translated, not transcribed phonetically. Combined with the use of Cyrillic, this can make it difficult to recognize loanwords. For example, the Bulgarian term for motherboard is "дънна платка" (literally "bottom board").
компютър – computer
твърд диск – hard disk
дискета – floppy disk; like the French disquette
уеб сайт – (phonetic) web site; but also "интернет страница" – internet page
Faroese The Faroese language has a sparse scientific vocabulary based on the language itself. Many Faroese scientific words are borrowed or modified versions of especially Nordic and English equivalents. The vocabulary is constantly evolving; new words often die out, and only a few survive and become widely used. Examples of successful words include "telda" (computer), "kurla" (at sign) and "ambætari" (server). French In French, there are some generally accepted English loanwords, but there is also a distinct effort to avoid them. In France, the Académie française is responsible for the standardisation of the language and often coins new technological terms. Some of them are accepted in practice, but often the English loans remain predominant. In Quebec, the Office québécois de la langue française has a similar function.
email/mail (in Europe); courriel (mainly in French-speaking Canada, but increasingly used in French-speaking Europe); mél. (only used as an abbreviation, similar to "tél."); more formally courrier électronique
pourriel – spam
hameçonnage, phishing – phishing
télécharger – to download
site web – website
lien, hyperlien – hyperlink
base de données – database
caméra web, webcaméra, short webcam – webcam
amorcer, démarrer, booter – to boot
redémarrer, rebooter – to reboot
arrêter, éteindre – to shut down
amorçable, bootable – bootable
surfréquençage, surcadençage, overclocking – overclocking
refroidissement à l'eau – watercooling
tuning PC – case modding
German In German, English words are very often used as well:
nouns: Computer, Website, Software, E-Mail, Blog
verbs: downloaden, booten, crashen
Japanese Japanese uses the katakana alphabet for foreign loanwords, a wide variety of which are in use today. English computing terms remain prevalent in modern Japanese vocabulary.
コンピューター (konpyūtā) - computer
コーダー (kōdā) - coder
コーデック (kōdekku) - codec
ダウンロード (daunrōdo) - download
リンク (rinku) - link
Using a keyboard layout suited to the romanization of Japanese, a user may type in the Latin script in order to display Japanese, including hiragana, katakana, and kanji. Usually, when writing in Japanese on a computer keyboard, the text is input in roman transcription, optionally according to Hepburn, Kunrei, or Nippon romanization; the common Japanese word-processing programs allow for all three. Long vowels are input according to how they are written in kana; for example, a long o is input as ou, instead of an o with a circumflex or macron (ô or ō). As letters are keyed in, they are automatically converted, as specified, into either hiragana or katakana, and these kana phrases are in turn converted, as desired, into kanji. Icelandic The Icelandic language has its own vocabulary of scientific terms. Still, English loans exist, and are mostly used in casual conversation, whereas the Icelandic words might be longer or not as widespread. Norwegian It is quite common to use English words with regard to computing in all Scandinavian languages.
nouns: mail (referring to e-mail), software, blogg (from "blog"), spam
verbs: å boote, å spamme, å blogge
Polish Polish terminology derived from English:
dżojstik: joystick
kartrydż, kartridż: cartridge
interfejs: interface
mejl: e-mail
Russian
History of computer hardware in Soviet Bloc countries
Computer Russification
Spanish English influence on the software industry and the internet in Latin America has significantly shaped the Castilian lexicon. Frequently untranslated terms, and their Spanish equivalents:
email: correo electrónico
mouse (only in Latin America): ratón (mainly in Spain)
messenger: mensajero (only in Spain)
webcam: cámara web, webcam
website: página web, sitio web
blog: bitácora, blog
ban/banned: baneado (Latin America), vetar, vetado
web: red, web
Not translated:
flog
Undecided Many computing terms in Spanish share a common root with their English counterpart. In these cases, both terms are understood, but the Spanish is preferred for formal use:
link vs enlace or vínculo
net vs red
Character encoding Early computer software and hardware had very little support for character sets other than the Latin alphabet. As a result, it was difficult or impossible to represent languages based on other scripts. The ASCII character encoding, created in the 1960s, supported only 128 different characters in a 7-bit format. With the use of additional software it was possible to provide support for some languages, for instance those based on the Cyrillic alphabet. However, complex-script and logographic languages like Chinese or Japanese need more characters than the 256-character limit imposed by 8-bit character encodings. Some computers created in the former USSR had native support for the Cyrillic alphabet. The widespread adoption of Unicode, and of UTF-8 on the web, resolved most of these historical limitations. ASCII remains the de facto standard for command interpreters, programming languages and text-based communication protocols, but it is slowly dying out.
Mojibake – text presented as "unreadable" when software fails due to character encoding issues.
Programming language The syntax of most programming languages uses English keywords, and it could therefore be argued that some knowledge of English is required in order to use them. Some studies have shown that programmers who are not native English speakers report English as their biggest obstacle to programming proficiency. However, all programming languages are formal languages; they are very different from any natural language, including English. Some examples of non-English programming languages:
Arabic: ARLOGO, قلب
Bengali: BangaBhasha
Chinese: Chinese BASIC
Dutch: Superlogo
French: LSE, WinDev, Pascal (although the English version is more widespread)
Hebrew: Hebrew Programming Language
Icelandic: Fjölnir
Indian languages: Hindawi Programming System
Russian: Glagol
Spanish: Lexico
Portuguese: Portugol
Communication protocols Many application protocols use text strings for requests and parameters, rather than the binary values commonly used in lower-layer protocols. The request strings are generally based on English words, although in some cases the strings are contractions or acronyms of English expressions, which can render them somewhat cryptic to anyone not familiar with the protocol, whatever their proficiency in English.
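To illustrate how such English-derived strings appear on the wire, the following is a sketch of an HTTP request typed by hand over a raw connection (assuming a host that still answers plain, unencrypted HTTP on port 80); the first three lines are typed by the user, followed by an empty line, after which the server replies:
telnet example.com 80
GET / HTTP/1.0
Host: example.com

HTTP/1.0 200 OK
Content-Type: text/html
...
The request verb and header names are recognizably English, while the reply begins with a numeric status code.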
Nevertheless, the use of word-like strings is a convenient mnemonic device that allows a person skilled in the art (and with sufficient knowledge of English) to execute the protocol manually from a keyboard, usually for the purpose of finding a problem with the service. Examples:
FTP: USER, PASS (password), PASV (passive), PORT, RETR (retrieve), STOR (store), QUIT
SMTP: HELO (hello), MAIL, RCPT (recipient), DATA, QUIT
HTTP: GET, PUT, POST, HEAD (headers), DELETE, TRACE, OPTIONS
It is notable that response codes, that is, the strings sent back by the recipient of a request, are typically numeric: for instance, in HTTP (and these are also borrowed by some other protocols):
200 OK – request succeeded
301 Moved Permanently – to redirect the request to a new address
404 Not Found – the requested page does not exist
This is because response codes also need to convey unambiguous information, but can have various nuances that the requester may optionally use to vary its subsequent actions. To convey all such "sub-codes" with alphabetic words would be unwieldy, and would negate the advantage of using pseudo-English words. Since responses are usually generated by software, they do not need to be mnemonic. Numeric codes are also more easily analyzed and categorized when they are processed by software, rather than by a human testing the protocol by manual input. Localization BIOS Many personal computers have a BIOS chip that displays text in English during boot time. Keyboard shortcut Keyboard shortcuts are usually defined in terms of English keywords, such as CTRL+F for find. English on the World Wide Web English is the largest language on the World Wide Web, with 27% of internet users. English speakers Web user percentages usually focus on raw comparisons of the first language of those who access the web. Just as important is a consideration of second- and foreign-language users; i.e., the first language of a user does not necessarily reflect which language he or she regularly employs when using the web. Native speakers English-language users appear to be a plurality of web users, consistently cited as around one-third of the overall total (nearly one billion). This reflects the relative affluence of English-speaking countries and their high Internet penetration rates. This lead may be eroding, due mainly to a rapid increase in Chinese users. First-language users among other relatively affluent countries appear generally stable, the two largest being German and Japanese, each of which holds between 5% and 10% of the overall share. World Wide Web content One widely quoted figure for the amount of web content in English is 80%. Other sources show figures five to fifteen points lower, though still well over 50%. There are two notable facts about these percentages: The share of English web content is greater than the share of first-language English users by as much as 2 to 1. Given the enormous lead it already enjoys and its increasing use as a lingua franca in other spheres, English web content may continue to dominate even as the proportion of English first-language Internet users declines. This is a classic positive feedback loop: new Internet users find it helpful to learn English and employ it online, thus reinforcing the language's prestige and forcing subsequent new users to learn English as well. Certain other factors (some predating the medium's appearance) have propelled English into a majority web-content position. Most notable in this regard is the tendency for researchers and professionals to publish in English to ensure maximum exposure.
The largest database of medical bibliographical information, for example, shows English was the majority language choice for the past forty years and its share has continually increased over the same period. The fact that non-Anglophones regularly publish in English only reinforces the language's dominance. English has a rich technical vocabulary (largely because native and non-native speakers alike use it to communicate technical ideas) and many IT and technical professionals use English regardless of country of origin (Linus Torvalds, for instance, comments his code in English, despite being from Finland and having Swedish as his first language). Notes English language Computing and society Internet culture Natural language and computing English as a global language
2338535
https://en.wikipedia.org/wiki/Information%20operations%20condition
Information operations condition
INFOCON (short for information operations condition) is a threat level system in the United States similar to FPCON. It is a defense system based primarily on the status of information systems and is a method used by the military to defend against a computer network attack. Description There are five levels of INFOCON, which were recently changed to correlate more closely to DEFCON levels. They are:
INFOCON 5 describes a situation where there is no apparent hostile activity against computer networks. Operational performance of all information systems is monitored, and password systems are used as a layer of protection.
INFOCON 4 describes an increased risk of attack. Increased monitoring of all network activities is mandated, and all Department of Defense end users must make sure their systems are secure. Internet usage may be restricted to government sites only, and backing up files to removable media is recommended.
INFOCON 3 describes when a risk has been identified. Security review of important systems is a priority, and the Computer Network Defense system's alertness is increased. All unclassified dial-up connections are disconnected.
INFOCON 2 describes when an attack has taken place but the Computer Network Defense system is not at its highest alertness. Non-essential networks may be taken offline, and alternate methods of communication may be implemented.
INFOCON 1 describes when attacks are taking place and the Computer Network Defense system is at maximum alertness. Any compromised systems are isolated from the rest of the network.
Similar concepts in private-sector computing ThreatCon (Symantec) Symantec's ThreatCon service no longer exists; Broadcom has acquired Symantec. In popular culture In the TV series Crisis, the US government goes to INFOCON 2 when Francis Gibson launches a massive cyber attack upon the United States, nearly bringing it to war with China. See also
Alert state
Attack (computing)
LERTCON
DEFCON
EMERGCON
FPCON (THREATCON)
Threat (computer)
WATCHCON
References Alert measurement systems
2921836
https://en.wikipedia.org/wiki/SS%20Exodus
SS Exodus
Exodus 1947 was a packet steamship that was built in the United States in 1928 as President Warfield for the Baltimore Steam Packet Company. From her completion in 1928 until 1942 she carried passengers and freight across Chesapeake Bay between Norfolk, Virginia and Baltimore, Maryland. From 1942 President Warfield served in the Second World War as a barracks and training ship for the British Armed Forces. In 1944 she was commissioned into the United States Navy as USS President Warfield (IX-169), a station and accommodation ship for the D-Day landing on Omaha Beach. In 1947 she was renamed Exodus 1947 to take part in Aliyah Bet. She took 4,515 Jewish migrants from France to Mandatory Palestine. Most were Holocaust survivors who had no legal immigration certificates for Palestine. The Royal Navy boarded her in international waters and took her to Haifa, where ships were waiting to return the migrants to refugee camps in Europe. Building Pusey and Jones built President Warfield in Wilmington, Delaware, as hull number 399. She was launched in 1927 and completed in 1928. She was a sister ship of the Baltimore Steam Packet Co's State of Maryland and State of Virginia, which had been completed in 1922 and 1923 respectively. The ship was originally to be called Florida. However, S. Davies Warfield, who was President of the Baltimore Steam Packet Co and its parent company, the Seaboard Air Line Railroad, died while she was being built, so she was named President Warfield in his honor. Like her sisters, President Warfields registered length was , her beam was and her depth was . As built, her tonnages were and . She had a single propeller, powered by a quadruple expansion steam engine. Baltimore Steam Packet The Baltimore Steam Packet Co registered President Warfield in Baltimore. Her US official number was 227753, and until 1933 her code letters were MOVN. Until 1942 President Warfield and her sisters worked a packet route on Chesapeake Bay between Norfolk, Virginia and Baltimore, Maryland. She was built as a coal-burner, but in 1933 she was converted to oil fuel. In 1934 her code letters were superseded by the new call sign KGQC. President Warfield was modernised with the installation of a fire sprinkler system in 1938, and wireless direction finding and ship-to-shore telephone in 1939. Second World War On July 12, 1942 the War Shipping Administration (WSA) acquired President Warfield and several other US East Coast packet ships for the British Ministry of War Transport. Having been built only for service in the relatively sheltered waters of Chesapeake Bay, President Warfield needed to be altered to cross the North Atlantic safely. Her superstructure was cut back, and a "turtle-back" covering was built onto the forward end of her superstructure to withstand heavy seas. She was fitted with cargo ship masts and derricks. She was fitted with one three-inch 12-pounder gun on her stern as main armament, plus four 20mm anti-aircraft guns. She was repainted in plain gray. The alterations increased President Warfields tonnages to and . In September 1942 President Warfield sailed to Britain via Boston, Halifax, Nova Scotia and St John's, Newfoundland. From Boston onward she was escorted in convoys. Coast Lines of Liverpool, England, provided British Merchant Navy crews for President Warfield and the other coastal packet ships to bring them from the USA to Britain. 
A crew commanded by Captain JR Williams took over President Warfield, and on September 21, 1942 she left St John's in Convoy RB 1 to Liverpool. Convoy RB 1 Convoy RB 1 officially comprised eight merchant ships, escorted by two Royal Navy destroyers: and . On the afternoon of September 25 a U-boat wolf pack attacked the convoy about west of Ireland. fired a spread of four torpedoes, two of which hit RB 1's commodore ship, the packet ship Boston, sinking her with the loss of 17 of her crew. HMS Veteran and the packet ships New Bedford and Northland rescued 49 survivors. The packet ship Southland twice sighted a periscope, but each time drove off the submarine with rapid fire from her 12-pounder gun. A torpedo was fired at President Warfield, but the packet boat quickly turned parallel to it, and the torpedo passed by about off her port beam. Two minutes later President Warfield sighted a submarine near her port quarter, and opened fire with her 12-pounder. HMS Veteran joined in the action with depth charges. Just before midnight, fired a spread of two torpedoes, hitting Bostons sister ship, New York, which was the vice-commodore's ship. 38 men were killed, the survivors abandoned ship, and an hour and a half later sank her drifting hulk. HMS Veteran stopped to rescue survivors, but torpedoed the destroyer, sinking her with all hands in the small hours of September 26. The convoy dispersed, but the attack continued. Late on the evening of September 26, torpedoed the steamship Yorktown, sinking her with the loss of 18 men. Two days later the destroyer rescued 42 survivors. President Warfield escaped further attack, and reached Belfast, Northern Ireland independently. Other surviving ships from the convoy reached Derry and Greenock. The convoy lost a total of three packet ships, one destroyer and 131 men, but the other five ships safely reached the British Isles. Posthumous decorations were awarded to some of the officers lost. In May 1943 the master and chief engineer of each of the five surviving ships, including Captain Williams and his Chief Engineer, was made an OBE. European war service From Belfast, President Warfield continued to England, where she moored in the River Torridge at Instow on the north coast of Devon. There she served as a Combined Operations training and barracks ship for the Royal Marines and Commandos. She provided accommodation for 105 officers and 500 other ranks. In July 1943 the British Government returned President Warfield to US control. On May 21, 1944 she was commissioned into the US Navy as USS President Warfield, with the pennant number IX-169. In July she served as a station and accommodations ship off Omaha Beach on the coast of Normandy. After service in England and on the Seine in France, she arrived at Norfolk, Virginia, July 25, 1945. She left active Navy service on September 13, was struck from the US Naval Vessel Register on October 11 and was returned to the War Shipping Administration on November 14. She then spent about a year moored in the James River, where she was one of many ships laid up as surplus after the end of the war. Jewish refugees After World War II, about 250,000 European Jews were living in displaced persons camps in Germany and Austria. Zionist organizations began organizing an underground network known as the Brichah ("flight", in Hebrew), which moved thousands of Jews from the camps to ports on the Mediterranean where ships took them to Palestine. This was part of the Aliyah Bet immigration that began after the war. 
At first many made their way to Palestine on their own. Later, they received financial and other help from sympathizers around the world. The ships were crewed mostly by volunteers from the United States, Canada and Latin America. Under Aliyah Bet, more than 100,000 people tried to illegally migrate to Palestine. The British government opposed large-scale immigration. Displaced person camps run by American, French and Italian officials often turned a blind eye to the situation, with only British officials restricting movement in and out of their camps. In 1945 the British government reaffirmed its 1939 policy limiting Jewish immigration, which it had adopted after a quarter of a million European Jews arrived fleeing Nazism and Palestine's indigenous Arab population rebelled. The British government deployed naval and military forces to turn back the refugees. More than half of the 142 voyages were stopped by British patrols, and most intercepted migrants were sent to internment camps in Cyprus, to the Atlit detention camp in Palestine, or to Mauritius. About 50,000 people ended up in camps, more than 1,600 drowned at sea, and only a few thousand reached Palestine. Of the 64 vessels that sailed in Aliyah Bet, Exodus 1947 was the largest. She carried 4,515 passengers, the largest-ever number of illegal immigrants to Palestine. The story received much international attention, thanks in large part to dispatches from American journalist Ruth Gruber. The incident took place near the end of Aliyah Bet and toward the end of the British mandate, after which Britain withdrew from Palestine and the state of Israel was founded. Historians say Exodus 1947 helped unify the Jewish community of Palestine and the Holocaust-survivor refugees in Europe, as well as significantly deepening international sympathy for the plight of Holocaust survivors and rallying support for the idea of a Jewish state. One called the story of the Exodus 1947 a "spectacular publicity coup for the Zionists." Voyage preparations On November 9, 1946 the Potomac Shipwrecking Company of Washington, D.C. bought President Warfield from the WSA for $8,028. The company was acting for the Haganah Jewish paramilitary organization, and two days later sold her for $40,000 to the Weston Trading Company of New York, which was a Haganah front organization. Zionist supporters in Baltimore funded her purchase. Haganah transferred her to Mossad LeAliyah Bet, the branch of Haganah that ran Aliyah Bet. Haganah spent another $125,000 to $130,000 repairing, overhauling and modifying the ship for her voyage to Palestine. Britain had recently announced that it would begin deporting illegal immigrants to Cyprus rather than Atlit. Mossad LeAliyah Bet responded by deciding that migrants should resist capture. President Warfield was deemed well-suited for this because she was relatively fast, sturdy enough to not easily capsize, made of steel which would help her to withstand ramming, and taller than the Royal Navy destroyers that would be trying to board her. President Warfield was also chosen because of her derelict condition. It was risky to put passengers on her, and it was felt this would either compel the British authorities to let her pass the blockade because of the danger, or damage Britain's international reputation. For months, teams of Palestinians and Americans worked on Exodus 1947 in order to make it harder for British forces to take her over. Metal pipes, designed to spray out steam and boiling oil, were installed around the ship's perimeter.
Lower decks were covered in nets and barbed wire. Her engine room, boiler room, wheelhouse and radio room were covered in wire and reinforced to prevent entry by British soldiers. Haganah re-registered President Warfield under the Honduran flag of convenience. On February 25, 1947 she left Baltimore for Marseille, but she ran into bad weather in the Virginia Capes and then a heavy sea about east of Diamond Shoals. Her forward hold began to leak, and she radioed the United States Coast Guard for help. The tanker E. W. Sinclair picked up her distress message, found President Warfield and stood by. The coast guard cutter USCGC arrived to tow her back to safety, but the weather eased and President Warfield was able to reach Norfolk, VA under her own power. After her damage was surveyed in Norfolk, President Warfield spent a fortnight in Philadelphia being repaired. She then sailed via the Azores to Porto Venere in Italy, where she was refitted and bunkered. In July 1947 she arrived at Sète on the south coast of France. Voyage to Palestine As a packet boat President Warfield had been certificated to carry 540 passengers. In the war she had been converted to provide berths for 605 troops. But more than 4,500 Jewish refugees arrived in Sète. Haganah issued them with 2,000 forged passports, with visas for Colombia, with which French immigration officers allowed them to embark on President Warfield. Each passport was used more than once in the same boarding, with a crewman collecting them and passing them back to refugees still waiting in the queue. Haganah secured the immigration officers' co-operation with bottles of Cognac and a group of Jewish young women to keep them occupied. According to Israeli historian Aviva Halamish, the ship was never meant to "sneak out toward the shores of Palestine," but rather "to burst openly through the blockade, by dodging and swiftly nipping through, beaching herself on a sand bank and letting off her cargo of immigrants at the beach." The ship was too large and unusual to go unnoticed. Even as people began boarding the ship at the port of Sète near Montpellier, a Royal Air Force aircraft circled overhead and a Royal Navy warship waited a short distance out at sea. President Warfield left Sète sometime between two and four in the morning of July 11, 1947, claiming to be bound for Istanbul. She carried 4,515 refugees, including 1,600 men, 1,282 women, 1,017 young people, and 655 teenagers. Palmach (Haganah's military wing) captain Ike Aronowicz was her captain and Haganah commissioner Yossi Harel commanded the operation. The ship was manned by a crew of some 35 volunteers, mostly American Jews. All Aliyah Bet ships were renamed with Hebrew names designed to inspire and rally the Jews of Palestine. As soon as President Warfield had left port, Mossad LeAliyah Bet renamed her Exodus 1947 (and, in Hebrew, Yetz'iat (sic) Tasbaz, or Yetzi'at Eiropa Tashaz, "Flight from Europe 5707") after the biblical Jewish Exodus from Egypt to Canaan. The name was proposed by Israeli politician and military figure Moshe Sneh, who at the time was in charge of illegal migration for the Jewish Agency. The name was later described by Israel's second Prime Minister Moshe Sharett (then Shertok) as "a stroke of genius, a name which by itself, says more than anything which has ever been written about it." As the ship left port, the sloop and RAF aircraft shadowed her. Later, the destroyer relieved Mermaid.
Each day during the voyage, the Royal Navy ship shadowing her came within hailing distance of Exodus 1947 and asked whether she was carrying any migrants to Palestine. Instead of answering the question, Exodus 1947 responded by playing one of Edward Elgar's Pomp and Circumstance Marches over her public address system. Exodus 1947 carried enough supplies to last two weeks. Passengers were given cooked meals, hot drinks, soup, and one liter of drinking water daily. They washed in salt water. The ship had only 13 lavatories. A British military doctor, inspecting the ship after the battle, said that it was badly over-crowded, but that hygiene was satisfactory and the ship appeared well prepared to cope with casualties. Several babies were born during the week-long journey. One woman, Paula Abramowitz, died in childbirth. Her infant son died a few weeks later, in Haifa. Interception During the journey, the people aboard Exodus 1947 prepared to be intercepted. The ship was divided into sections crewed by different groups, and each held resistance practice sessions. Her defences were augmented with sandbags around her wheelhouse and chicken wire along her upper decks. Small arms were issued to key personnel. As Exodus 1947 neared Palestinian territorial waters, her Royal Navy escort was increased to five destroyers and two minesweepers, led by the light cruiser . At about 0200 hrs on July 18, about from the Palestinian coast, two Royal Navy destroyers came alongside Exodus 1947, one on either side, converged on her, and jammed her between them. The destroyer struck Exodus 1947's port side, holing her saloon deck above the waterline. Exodus 1947 released her liferafts to fall onto the decks of the two destroyers. The destroyers dropped gangways onto Exodus 1947 and sent a boarding party of 50 Royal Marines, armed with clubs and tear gas, onto the packet boat. Passengers and Haganah members aboard resisted the Marines. The second officer, an American Machal volunteer, Bill Bernstein, died from a skull fracture after being clubbed in the wheelhouse. Two passengers died of gunshot wounds. Two British sailors were treated afterwards for fractured scapulae, and one for a head injury and lacerated ear. About ten Exodus 1947 passengers and crew were treated for mild injuries resulting from the boarding, and about 200 were treated for illnesses and maladies unrelated to it. Due to the high public profile of Exodus 1947, the British government decided to deport the migrants back to France. Foreign Secretary Ernest Bevin suggested this and the request was relayed to General Sir Alan Cunningham, High Commissioner for Palestine, who agreed with the plan after consulting the Navy. Before then, intercepted migrants were placed in internment camps on Cyprus, which was at the time a British colony. This new policy was meant to be a signal to both the Jewish community and the European countries which assisted immigration that whatever they sent to Palestine would be sent back to them. Not only should it clearly establish the principle of as applies to a complete shipload of immigrants, but it will be most discouraging to the organisers of this traffic if the immigrants ... end up by returning whence they came. Repatriation Attempted return to France The Royal Navy brought Exodus 1947 into Haifa, where her passengers were transferred to three larger and more seaworthy ships for deportation: Empire Rival, Ocean Vigour and Runnymede Park. Members of the United Nations Special Committee on Palestine (UNSCOP) witnessed the transfer.
The three ships left Haifa on July 19 for Port-de-Bouc near Marseilles. Foreign Secretary Bevin insisted that the French get their ship back as well as its passengers. When the ships arrived at Port-de-Bouc on August 2, the French Government said it would allow disembarkation of the passengers only if it was voluntary on their part. Haganah agents, both on board the ships and using launches with loudspeakers, encouraged the passengers not to disembark. Thus the migrants refused to disembark and the French refused to cooperate with British attempts at forced disembarkation. This left the British with the option of returning the passengers to Germany. Realizing that they were not bound for Cyprus, the migrants conducted a 24-day hunger strike and refused to cooperate with the British authorities. Media coverage of the contest of wills put pressure on Britain to find a solution. The matter was reported to the UNSCOP members who had been deliberating in Geneva. For three weeks the refugees on the ships held firm in difficult conditions, rejecting offers of alternative destinations. Britain concluded that the only option was to send the Jews to camps in the British-controlled zone of post-war Germany. Operation Oasis The ships went from Marseille to Hamburg, which was then in the British occupation zone. Britain realized that returning the refugees to camps in Germany would elicit a public outcry, but Germany was the only territory under British control that could immediately accommodate so many people. Britain's position was summed up by John Coulson, a diplomat at the British Embassy in Paris, in a message to the Foreign Office in London in August 1947: "You will realize that an announcement of decision to send immigrants back to Germany will produce violent hostile outburst in the press. ... Our opponents in France, and I dare say in other countries, have made great play with the fact that these immigrants were being kept behind barbed wire, in concentration camps and guarded by Germans." Coulson advised that Britain apply as best they could a counter-spin to the story: "If we decide it is convenient not to keep them in camps any longer, I suggest that we should make some play that we are releasing them from all restraint of this kind in accordance with their wishes and that they were only put in such accommodation for the preliminary necessities of screening and maintenance." The mission of bringing the Jewish refugees of Exodus 1947 back to Germany was known in diplomatic and military circles as "Operation Oasis." Disembarkation in Germany On August 22 a Foreign Office cable warned diplomats that they should be ready to emphatically deny that the Jews would be housed in former concentration camps in Germany and that German guards would not be used to keep the Jews in the refugee camps. It further added that British guards would be withdrawn once the Jews were screened. On September 7 Empire Rival, Ocean Vigour and Runnymede Park reached Hamburg, where the migrants were successfully disembarked. Relations between the British personnel on the ships and the passengers were afterwards said by the passengers to have been mostly amicable. Everyone realized there was going to be trouble at the forced disembarkation and some of the Jewish passengers apologized in advance. A number were injured in confrontations with British troops that involved the use of batons and fire hoses. The passengers were sent back to displaced persons camps in Am Stau near Lübeck and Pöppendorf. 
Although most of the women and children disembarked voluntarily, the men had to be carried off by force. The British identified Runnymede Park as the ship most likely to suffer resistance. A confidential report of the time noted: "It was known that the Jews on the Runnymede Park were under the leadership of a young, capable and energetic fanatic, Morenci Miry Rosman, and throughout the operation it had been realised that this ship might give trouble." 100 Royal Military Police and 200 soldiers of the Sherwood Foresters were ordered aboard her to eject the Jewish migrants. The officer in charge of the operation, Lt. Col Gregson, later gave a frank assessment of the operation which left up to 33 Jews, including four women, injured. 68 Jews were held in custody to be put on trial for unruly behaviour. Only three soldiers were hurt. Gregson later admitted that he had considered using tear gas against the migrants. He concluded: The Jew is liable to panic and 800–900 Jews fighting to get up a stairway to escape tear smoke could have produced a deplorable business. ... It is a very frightening thing to go into the hold full of yelling maniacs when outnumbered six or eight to one." Describing the assault, the officer wrote to his superiors: "After a very short pause, with a lot of yelling and female screams, every available weapon up to a biscuit and bulks of timber was hurled at the soldiers. They withstood it admirably and very stoically till the Jews assaulted and in the first rush several soldiers were downed with half a dozen Jews on top kicking and tearing ... No other troops could have done it as well and as humanely as these British ones did...It should be borne in mind that the guiding factor in most of the actions of the Jews is to gain the sympathy of the world press." One of the official observers who witnessed the violence was Dr Noah Barou, secretary of the British section of the World Jewish Congress, who described young soldiers beating Holocaust survivors as a "terrible mental picture": "They went into the operation as a football match ... and it seemed evident that they had not had it explained to them that they were dealing with people who had suffered a lot and who are resisting in accordance with their convictions. ... People were usually hit in the stomach and this in my opinion explains that many people who did not show any signs of injury were staggering and moving very slowly along the staircase giving the impression that they were half-starved and beaten up." When the people walked off the ship, many of them, especially younger people, were shouting to the troops 'Hitler commandos', 'gentleman fascists', 'sadists'. One young girl "came to the top of the stairs and shouted to the soldiers, 'I am from Dachau.' And when they did not react she shouted 'Hitler commandos'." The British denied using excessive force, yet conceded that in one case a Jew "was dragged down the gangway by the feet with his head bumping on the wooden slats". A homemade bomb with a timed fuse was found aboard Empire Rival. It was apparently set to detonate after the Jews had been removed. Camp conditions The treatment of the refugees at the camps caused an international outcry after it was claimed that the conditions could be likened to German concentration camps. Dr Barou was once again on hand to witness events. He reported that conditions at Camp Poppendorf were poor and claimed that it was being run by a German camp commandant. That was denied by the British. 
It turned out that Barou's reports had been untrue. There was no German commandant or guards but there were German staff carrying out duties inside the camp, in accordance with the standard British military practice of using locally employed civilians for non-security related duties. But the Jewish allegations of cruel and insensitive treatment would not go away and on October 6, 1947 the Foreign Office sent a telegram to the British commanders in the region demanding to know whether the camps really were surrounded with barbed wire and guarded by German staff. Final destination A telegram written by Jewish leaders of the camps on October 20, 1947 makes clear the wishes and determination of the refugees to find a home in Palestine: The would-be migrants to Palestine were housed in Nissen huts and tents at Poppendorf and Am Stau but inclement weather made the tents unsuitable. The DPs were then moved in November 1947 to Sengwarden near Wilhelmshaven and Emden. For many of the illegal immigrants this was only a transit point as the Brichah managed to smuggle most of them into the U.S. zone, from where they again attempted to enter Palestine. Most had successfully reached Palestine by the time of the Israeli Declaration of Independence. Of the 4,500 would-be immigrants to Palestine there were only 1,800 remaining in the two Exodus 1947 camps by April 1948. Within a year, over half of the original Exodus 1947 passengers had made other attempts at emigrating to Palestine, which ended in detention in Cyprus. Britain continued to hold the detainees of the Cyprus internment camps until it formally recognized the State of Israel in January 1949, when they were transferred to Israel. Jewish retaliation On September 29, 1947, Zionist Irgun and Lehi militants blew up the Palestine Police Force headquarters in Haifa in retaliation for the British deportation of Jewish migrants who arrived on Exodus 1947. 10 people were killed and 54 injured, of which 33 were British. Four British policemen, four Arab policemen, an Arab woman and a 16-year-old were killed. The 10-storey building was so heavily damaged that it was later demolished. They used a barrel bomb, described by police as a "brand new method" and the first use of a barrel bomb by Jewish forces. Irgun went on to make many more barrel bomb attacks in 1947–48. Fate of the ship After her historic voyage in 1947, the damaged Exodus 1947, along with many other Aliyah Bet ships, was moored to the breakwater of Haifa port. In December 1947 the Palestine Railways' Ports Authority advertised the ships for sale in British shipping journals. The advertisement warned that some of the ships were fit only for scrap. But no-one bought Exodus 1947. The founding of the State of Israel in 1948 brought massive migration of European Jewish refugees from displaced persons camps to Israel. There was little time or money to focus on the meaning of Exodus 1947. Abba Khoushy, the Mayor of Haifa, proposed in 1950 that the "Ship that Launched a Nation" should be restored and converted into a floating museum of the Aliyah Bet. As the ship was being restored, an unexplained fire broke out aboard her on August 26, 1952. Fireboats fought the fire all day, but she burned down to her waterline. Her hulk was towed north of the Kishon River and scuttled near Shemen Beach. Two significant relics of Exodus 1947 were returned to the USA. 
Her ship's bell is in the Mariners' Museum in Newport News, Virginia, and her steam whistle is on the roof of the New York Central Iron Works in Hagerstown, Maryland. In 1964 a salvage effort was made to raise her steel hull for scrap. The effort failed and she sank again. In 1974 another effort was made to raise her wreck for salvage. She was refloated and was being towed toward the Kishon River when she sank again. Parts of Exodus 1947's hull remained visible as a home for fish and a destination for fishermen until the mid-2000s. The Port of Haifa may have built its modern container ship quay extensions on top of the wreck. The quay where the wreck may be buried is a security zone and not accessible today. An unsuccessful dive effort was made to find the wreck of Exodus 1947 in October 2016. In historic recognition of the ship, the first Israeli memorial to Exodus 1947 was dedicated in a ceremony on July 18, 2017. The memorial, designed by Israeli sculptor Sam Philipe, is made of bronze in the shape of an anchor mounted on a relief map of the country, symbolically representing the role Exodus 1947 played in the birth of the modern State of Israel. The monument is outside the International Cruise Ship Terminal in the port of Haifa. Cultural references In 1958, the book Exodus by Leon Uris, based partly on the story of the ship, was published, though the ship called Exodus in the book is not the same vessel but a smaller one, and the "real" Exodus appears in the novel under a different name. In 1960, the film Exodus, directed by Otto Preminger and starring Paul Newman and based on the novel, was released. In 1997, the documentary film Exodus 1947, directed by Elizabeth Rodgers and Robby Henson and narrated by Morley Safer, was broadcast nationally in the USA on PBS television. See also
Patria disaster
Struma disaster
Antoinette Feuerwerker
David Feuerwerker
John Stanley Grauel
Samuel Herschel Schulman
Rose Warfman
Underground to Palestine
References 1928 ships Barracks ships of the United States Navy Jewish immigrant ships Jews and Judaism in Mandatory Palestine Maritime incidents in 1947 Ships built by Pusey and Jones Ships sunk as breakwaters Steamships of the United States World War II auxiliary ships of the United Kingdom World War II auxiliary ships of the United States
18075582
https://en.wikipedia.org/wiki/SafeNet
SafeNet
SafeNet, Inc. was an information security company based in Belcamp, Maryland, United States, which was acquired in August 2014 by the French security company Gemalto. Gemalto was, in turn, acquired by Thales Group in 2019. The former SafeNet's products include solutions for enterprise authentication, data encryption, and key management. SafeNet's software monetization products are sold under the Thales Sentinel brand. SafeNet was notably one of the largest suppliers of encryption technology to the United States Government. On 8 August 2014, Gemalto announced that it had signed a definitive agreement to acquire 100% of the share capital of SafeNet from Vector Capital for US$890 million on a debt free/cash free basis. A subsequent acquisition of Gemalto by French rival Thales Group was completed on 2 April 2019. History 1983: SafeNet, Inc is founded in 1983 in Timonium, MD as Industrial Resource Engineering by two former NSA engineers, Alan Hastings and technical visionary Douglas Kozlay. 1986: Anthony A. Caputo becomes a silent investor in the company. 1987: Anthony Caputo takes the helm of the company as CEO and changes the name to Information Resource Engineering. 1988: Lawrence Livermore Labs becomes IRE's first major customer. The company moves its operations to White Marsh, MD. 1989: IRE goes public in an IPO initially trading on the OTC pink sheets. 1989: IRE rapidly becomes the leader in banking communications security with seven of the top ten U.S. banks as customers and encryption devices used by SWIFT (global interbank transfer system). An end-to-end encryption system was developed that secured data over an X.25 public network, providing the world's first "virtual private network." Early adopters included the Bank of Montreal and Citibank. 1994: IRE acquires Connective Strategies, Inc., a manufacturer of voice and data ISDN products. October 1995: IRE acquires Swiss crypto manufacturer Gretag Data Systems for $4 million. 1996: MCI Communications launches the first commercial VPN service using SafeNet VPN technology. SafeNet is adopted by thirteen of the top fifteen U.S. banks within two years. 1997: IRE stock plunges 50% in one day on announcement of the restructuring of the MCI contract. July 1998: Major shareholder fails in hostile takeover attempt of IRE. 2000: IRE is renamed to SafeNet, Inc. after the VPN product line. 2001: Company co-founder Doug Kozlay leaves SafeNet to form Biometric Associates, a technology company focused on biometric-based authentication and identity solutions. 2002: SafeNet acquires a Dutch company Securealink. With the collapse of the tech "bubble," SafeNet, as a public company with a strong sales channel, was able to acquire a series of promising security companies at deep discounts. February 2003: SafeNet acquires Cylink and Raqia Networks October 2003: SafeNet acquires the OEM business of SSH Communications Security November 2003: SafeNet moves its corporate offices to Belcamp, MD. March 2004: SafeNet acquires Rainbow Technologies. December 2004: SafeNet acquires Datakey, Inc April 2005: SafeNet acquires DMDSecure B.V. June 2005: SafeNet acquires MediaSentry December 2005: SafeNet acquires Eracom Technologies AG 2006: SafeNet was involved in the options backdating controversy. As a result, both the chief executive officer and the chief financial officer resigned, and in 2008 the company's former CFO was sentenced to six months in prison for manipulating employee stock options. 
April 2007: the Californian private equity company Vector Capital buys SafeNet for $634 million, taking it private April 2008: SafeNet acquires Ingrian Networks, Inc. May 2008: SafeNet acquires Beep Science AS March 2009: SafeNet's parent company, Vector Capital, acquires Aladdin Knowledge Systems April 2009: SafeNet sells MediaSentry to ArtistDirect December 2009: SafeNet acquires Assured Decisions, LLC December 2012: SafeNet sells its Government Solutions business unit to Raytheon August 2014: SafeNet announces it is to be acquired by Gemalto by Q4 2014 February 2015: SafeNet Assured Technologies is launched as a fully owned subsidiary of Gemalto to provide high assurance data security products and technologies to the U.S. government. April 2019: Thales Group closes the acquisition of Gemalto. Thales Cloud Protection and Licensing was formed to serve the global community, and SafeNet Assured Technologies, the entity serving the U.S. government, becomes Thales Trusted Cyber Technologies. Current and former subsidiaries The former Rainbow Technologies subsidiary SafeNet Government Solutions, formerly SafeNet Mykotronx, is still based in Torrance, California, with offices in Irvine, California and Columbia, Maryland. The company was founded in 1979 as Myko Enterprises. It changed its name to Mykotronx, Inc. in 1987 and merged with SafeNet as part of the SafeNet merger with Rainbow in 2004. SafeNet Government Solutions, LLC has been operationally merged into SafeNet, and the lines between the two organizations have been intentionally blurred for financial reasons. SafeNet Government Solutions is no longer considered a subsidiary. SafeNet Government Solutions provides information security and communications security technology for the US government. The firm has an indefinite-delivery, indefinite-quantity contract for its KIV-7 line of commercial off-the-shelf cryptographic devices that provide protection for digital and voice communications through the TOP SECRET level, used by agencies such as the National Security Agency (NSA) and the National Reconnaissance Office. Other products include the KOV-14 Fortezza Plus PC card, which was developed as part of the NSA's NSSI program and is used on Secure Terminal Equipment. They previously developed the Clipper chip. In 2009, Vector Capital acquired Aladdin Knowledge Systems and placed it under SafeNet, with the annotation 'under common management'. In 2010, the two companies were officially merged. References External links U.S. Subsidiary Website American companies established in 1983 American companies disestablished in 2014 Software companies established in 1983 Software companies established in 2014 Defunct software companies of the United States Computer security software companies Copyright enforcement companies Cryptography companies Software companies based in Maryland 1983 establishments in Maryland 2014 disestablishments in Maryland 2014 mergers and acquisitions American subsidiaries of foreign companies
30843620
https://en.wikipedia.org/wiki/Development%20of%20Duke%20Nukem%20Forever
Development of Duke Nukem Forever
The video game Duke Nukem Forever spent 15 years in development, from 1996 to 2011. It is a first-person shooter for Windows, PlayStation 3 and Xbox 360, developed by 3D Realms, Triptych Games, Gearbox Software and Piranha Games. It is the sequel to the 1996 game Duke Nukem 3D, as part of the long-running Duke Nukem video game series. Intended to be groundbreaking, it became an infamous example of vaporware due to its severely protracted development schedule; it had been in development under 3D Realms since 1996. Director George Broussard, one of the creators of the original Duke Nukem game, announced the development in 1997, and promotional information for the game was released from 1997 until its release in 2011. After repeatedly announcing and deferring release dates, 3D Realms announced in 2001 that Duke Nukem Forever would be released "when it's done". In 2009, 3D Realms was downsized, resulting in the loss of the game's development team. Statements indicated that the project was due to "go gold" soon, with pictures of final development. Take-Two Interactive, which owns the Duke Nukem Forever publishing rights, filed a lawsuit in 2009 against 3D Realms over their "failure to finish development". 3D Realms responded that Take-Two's legal interest was limited to their publishing rights. The case was settled with prejudice, with details undisclosed, in 2010. On September 3, 2010, 14 years after the start of development, Duke Nukem Forever was announced by 2K Games to be in development at Gearbox Software, with an expected release date of 2011. After 15 years of development, Duke Nukem Forever was released on June 10, 2011, to mostly negative reviews.

Background
Scott Miller was a lifelong gamer who released his text-based video games as shareware in the 1980s. By 1988, the shareware business was a $10 to $20 million a year market, but the distribution method had never been tried for video games. Miller found that gamers were not willing to pay for something they could get for free, so he came up with the idea of offering only the opening levels of his games; players could purchase the game to receive the rest. George Broussard, whom Miller met while he was in high school, joined Miller at his company, Apogee, which published and marketed games developed by other companies. While Miller was quiet, with a head for business, Broussard was an enthusiastic "creative impresario". Apogee (which adopted the 3D Realms brand name in 1994) grew from a small startup to a successful corporation. Among the games they published were id Software's Commander Keen in 1990 and Wolfenstein 3D in 1992. Commander Keen was met with great success and inspired the development of many sidescrollers for the DOS platform, including many developed by Apogee and using the same engine that powered the Keen games, and Wolfenstein was highly successful, popularizing 3D gaming and establishing the first-person shooter (FPS) genre. In 1994, Broussard began working on 3D Realms' own first-person shooter. Rather than the faceless marine of other games, players controlled Duke Nukem, the protagonist of two 2D platform games from Apogee, Duke Nukem (1991) and Duke Nukem II (1992). Broussard described Duke as a combination of the film stars John Wayne, Clint Eastwood and Arnold Schwarzenegger. After a year and a half of work, Duke Nukem 3D was released in January 1996. Among the game aspects that appealed to players were environmental interaction and adult content, including blood and strippers.
Buoyed by the success, Broussard announced a follow-up, Duke Nukem Forever.

1997–1998: Quake II engine
3D Realms announced Duke Nukem Forever on April 27, 1997. Barely a year after the release of Duke Nukem 3D, its graphics and its game engine, the Build engine, were antiquated. For Forever, Broussard licensed id Software's superior Quake II engine. The licensing cost was steep—estimates were as high as $500,000—but Broussard reasoned that it would save the time needed to write a new engine. Because the Quake II engine was not finished, 3D Realms began development with the Quake engine, planning to incorporate the Quake II features as they were completed. Broussard and Miller decided to fund Duke Nukem Forever using the profits from Duke Nukem 3D and other games, turning marketing and publishing rights over to GT Interactive. In August and September, the first screenshots of Duke Nukem Forever were released in PC Gamer. As 3D Realms did not receive the Quake II engine code until November 1997, the screenshots were mockups made with the Quake engine. 3D Realms unveiled the first video footage of Duke Nukem Forever using the Quake II engine at the 1998 E3 conference, showcasing Duke fighting on a moving truck and firefights with aliens. While critics were impressed, Broussard was not happy with progress.

1998–2003: Unreal Engine
Soon after E3, a programmer suggested that 3D Realms make the switch to Unreal Engine, a new engine developed by Epic Games. The Unreal Engine was more realistic than Quake II and was better suited to producing open spaces; 3D Realms had struggled to render the Nevada desert. They unanimously agreed to the change, which meant discarding much of their work, including significant changes they had made to the Quake II engine. In June 1998, 14 months after announcing that they would use the Quake II engine, 3D Realms announced that they had switched to Unreal Engine. Broussard said that Duke Nukem Forever would not be significantly delayed and would be back to where it was at E3 within a month to six weeks. He also said that no content seen in the E3 trailer would be lost. However, according to programmer Chris Hargrove, the change amounted to a complete restart. By the end of 1999, Duke Nukem Forever had missed several release dates and was largely unfinished; half of its weapons remained concepts. Broussard responded to criticism of the development time by calling it the price of modern game development. A significant factor contributing to the protracted development was that Broussard was continually looking to add new elements. 3D Realms employees would joke that they had to stop Broussard from seeing new games, as he would want to include portions of them in Duke Nukem Forever. Later in 1999, Broussard decided to upgrade to a new version of Unreal Engine designed for multiplayer. Employees recalled that Broussard did not have a plan for what the game would look like. At the same time, GT Interactive was facing higher-than-expected losses and hired Bear Stearns to look into selling the company or merging it. Later that year, Infogrames Entertainment announced it was purchasing a controlling interest in GT Interactive. The publishing rights for Duke Nukem Forever passed to Gathering of Developers in early December 2000. To placate anxious fans, Broussard decided to create another trailer for E3 2001, the first public showing in three years.
The video showed a couple of minutes of footage, including a Las Vegas setting and a demonstration of the player interacting with a vending machine to buy a sandwich. The trailer impressed viewers and Duke Nukem was the talk of the convention. IGN reported on the graphics: "Characters come to life with picturesque facial animations that are synced perfectly with speech, hair that swings as they bob their heads, eyes that follow gazes, and more. The particle effects system, meanwhile, boasts impressive explosion effects with shimmering fire, shattered glass, and blood spilt in every direction ... Add in real-time lighting effects, interactive environments, and a variation in locales unequaled in any other first-person shooter and you begin to see and understand why Duke Nukem Forever has been one of the most hotly anticipated titles over the last couple of years." Staff at 3D Realms recalled a sense of elation after the presentation: "We were so far ahead of other people at the time." While many staff expected Broussard to make a push for finishing the game, he still did not have a finished product in mind. Following the death of one of Gathering of Developers' co-founders and continuing financial problems, their Texas offices were shut down and the operation was absorbed into parent company Take-Two Interactive.

2003–2006: Conflict with Take-Two
By 2003, only 18 people at 3D Realms were working on Duke Nukem Forever. One former employee said that Broussard and Miller were still operating on a "1995 mentality", before games became large-team, big-budget development affairs. Because they were financing the project themselves, the developers could also ignore pressure from their publisher; their standard reply to the question of when Duke Nukem Forever would ship was "when it's done". In 2003, Take-Two CEO Jeffrey Lapin reported that the game would not be out that year. He said the company was writing off $5.5 million from its earnings due to Duke Nukem Forever's lengthy development. Broussard responded that "Take-Two needs to STFU ... We don’t want Take-Two saying stupid-ass things in public for the sole purposes of helping their stock. It's our time and our money we are spending on the game. So either we're absolutely stupid and clueless, or we believe in what we are working on." Later that year, Lapin said 3D Realms had told him that Duke Nukem Forever was expected by the end of 2004 or the beginning of 2005. In 2004, GameSpot reported that Duke Nukem Forever had switched to the Doom 3 engine. Many gaming news sites emailed Broussard, asking him to confirm or deny the rumor. After receiving no answer from him, they published the rumor as fact, but Broussard explicitly denied it soon after. Soon afterwards, 3D Realms replaced the game's Karma physics system with one designed by Meqon, a relatively unknown Swedish firm. Closed-door demonstrations of the technology suggested that the physics would be superior to the critically acclaimed Half-Life 2. Rumors suggested that the game would appear at E3 2005. While 3D Realms' previously canceled Prey was shown, Duke Nukem Forever was not. Broussard said in January 2006 that many of Duke Nukem Forever's elements were finished, and that the team was "basically pulling it all together and trying to make it fun". Later that year, Broussard demonstrated samples of the game, including an early level, a vehicle sequence, and a few test rooms. Among the features seen was the interactive use of an in-game computer to send actual emails.
Broussard seemed contrite and affected by the long delays; while a journalist demoed the game, Broussard referenced note cards and constantly apologized for the state of the game. In a filing with the US Securities and Exchange Commission, Take-Two revealed they had renegotiated the Duke Nukem Forever deal, with Take-Two receiving $4.25 million instead of $6 million on release of the game. Take-Two offered a $500,000 bonus if Duke Nukem Forever was released by 2007. However, Broussard said that 3D Realms did not care about the bonus, and would "never ship a game early". Staff were tired of the delays. Duke Nukem Forever was the only 3D game many had ever worked on, giving them little to put on a resume, and as much of 3D Realms' payment hinged on profit-sharing after release, the continual delays meant deferred income. By August 2006, between 7 and 10 employees had left since 2005, a majority of the Duke Nukem Forever team, which by this point had shrunk to around 18 staff. While Shacknews speculated that the departures would lead to further delays, 3D Realms denied this, stating that the employees had left over a number of months and that the game was moving ahead. Creative director Raphael van Lierop, hired in 2007, played through the completed content and realized that there was more finished than he expected. Van Lierop told Broussard that he felt they could push the game and "blow everyone out of the water", but Broussard felt it was still two years from completion.

2007–2009: Final years with 3D Realms
The delays strained Broussard and Miller's relationship. By the end of 2006, Broussard appeared to become serious about finishing the game. On January 25 and May 22, 2007, Broussard posted two Gamasutra job ads with small screenshots of Duke Nukem and an enemy. The team quickly doubled in size; among the new hires was project lead Brian Hook, who became the first person to resist Broussard's requests for changes. On December 19, 2007, 3D Realms released the first Duke Nukem Forever trailer in more than six years. It was made by 3D Realms employees as part of holiday festivities. While Broussard refused to give a release date, he said that "you can expect more frequent media releases [and] we have considerable work behind us". While the Dallas Business Journal reported a 2008 release date, Broussard said that this was based on a misunderstanding. In-game footage appeared in the 2008 premiere episode of The Jace Hall Show. Filmed entirely on hand-held cameras and not originally expected to be publicly released, the video showed host Jason Hall playing a level at 3D Realms' offices. The footage was shot six months prior to the episode's air date; according to Broussard, it contained particle and combat effects that had since been replaced. The game did not appear at E3 2008, which Miller described as "irrelevant". As Duke Nukem Forever neared completion, funding began to run out. Having spent more than $20 million of their own money, Broussard and Miller asked Take-Two for $6 million to complete the game. According to Broussard and Miller, Take-Two initially agreed, but then only offered $2.5 million. Take-Two maintained that they offered $2.5 million up front and another $2.5 million on completion. Broussard rejected the counteroffer, and on May 6, 2009, suspended development.

2009–2010: Layoffs and downsizing
3D Realms laid off the Duke Nukem Forever staff on May 8, 2009, due to lack of funding; inside sources claimed it would operate as a smaller company.
Take-Two stated that they retained the publishing rights for Duke Nukem Forever, but were not funding it. Previously unreleased screenshots, concept art, pictures of models and a goodbye message from 3D Realms were posted by alleged former employees. Similar leaks followed after May 8, 2009. In 2009, Take-Two filed a lawsuit against 3D Realms over their failure to complete Duke Nukem Forever, citing $12 million paid to Infogrames in 2000 for the publishing rights. 3D Realms argued that they had not received that money, as it was a direct agreement between Infogrames and Take-Two. The lawsuit seemed to be over a contractual breach, but not regarding the $12 million. Take-Two asked for a restraining order and a preliminary injunction to make 3D Realms keep the Duke Nukem Forever assets intact during proceedings, but the court denied the request for a temporary restraining order. In December 2009, Miller denied that development had ceased, and confirmed only that the team had been laid off. Around this time, a former 3D Realms staff member released a showreel with footage of Duke Nukem Forever. It was mistaken for a trailer, which confused the public. The video was taken down soon after. 3D Realms planned to hire an external developer to complete the game while continuing to downsize, and ended development on another game, Duke Begins. An unofficial compilation of gameplay footage was also released in December 2009. By 2010, 3D Realms and Take-Two had settled the lawsuit and dismissed it with prejudice. 2010–2011: Gearbox revival and release Despite the discontinuation of internal game development at 3D Realms, development did not cease entirely. Nine ex-employees, including key personnel such as Allen Blum, continued development throughout 2009 from their homes. These employees would later become Triptych Games, an independent studio housed in the same building as Gearbox, with whom they collaborated on the project. After ceasing internal game development, 3D Realms approached game developers Gearbox Software and asked them if they were interested in helping Triptych Games polish the near-finished PC version and port it to the consoles. Gearbox CEO Randy Pitchford, who had worked on an expansion to Duke Nukem 3D and very briefly on Forever before he left to found Gearbox, felt that "Duke can't die" and decided that he was going to help "in Duke’s time of need". He started providing funding for the game and contacted 2K Games' president to persuade his company that Gearbox and Triptych could complete the development of the game and get it released on all platforms in time. Duke Nukem Forever was originally intended to be a PC exclusive game; however, 2K and Gearbox had hired Piranha Games to port the game designed for PC to Xbox 360 and PlayStation 3 and added a multiplayer to raise sales. The game was re-announced at the Penny Arcade Expo 2010 on September 3, 2010. It was the first time in the game's development history that gamers were able to actually try the game—according to Pitchford, "the line has gotten up to four hours long to see the game". Gearbox Software subsequently purchased the Duke Nukem intellectual property from 3D Realms, and 2K Games held the exclusive long-term publishing rights of the game. Development was almost complete with only minor polishing to be done before the game was to be released in 2011. 
A playable demo of Duke Nukem Forever was released once Gearbox had settled on the timing, with purchasers of the Game of the Year Edition of Borderlands gaining early access. The demo was unexpectedly different from the versions shown at PAX and Firstlook. Those who purchased Borderlands on Valve's Steam prior to October 12, 2010, got the code for the demo without the need to buy the Game of the Year edition of the game. Duke Nukem Forever was initially scheduled for release on May 3 in the United States and May 6 internationally; after another delay it was finally released on June 10 internationally and on June 14 in North America, nearly four weeks after the game had 'gone gold' after 15 years.

Press coverage
Wired News awarded Duke Nukem Forever its Vaporware Award several times. It placed second in June 2000 and topped the list in 2001 and 2002. Wired created the Vaporware Lifetime Achievement Award exclusively for DNF and awarded it in 2003. Broussard accepted the award, simply stating, "We're undeniably late and we know it." In 2004, the game did not make the top 10; Wired editors said that they had given DNF the Lifetime Achievement Award to get it off the list. However, upon readers' demands, Wired reconsidered and DNF won first place in 2005, 2006, and 2007. In 2008, Wired staff officially considered removing DNF from their annual list, citing that "even the best jokes get old eventually", only to reconsider upon viewing the handheld camera footage of the game in The Jace Hall Show, awarding the game first place once again. In 2009, Wired published Wired News' Vaporware Awards 2009: Duke Nukem Forever was excluded from consideration on the grounds that the project was finally dead. With the game since in development at Gearbox Software and a subsequent playable demo, Duke made a comeback with an unprecedented 11th place award on Wired's 2010 Vaporware list. When the GameSpy editors compiled a list of the "Top 25 Dumbest Moments in Gaming History" in June 2003, Duke Nukem Forever placed #18. Duke Nukem Forever has drawn a number of jokes related to its development timeline. The video gaming media and public in general have routinely suggested names in place of Forever, calling it "Never", "(Taking) Forever", "Whenever", "ForNever", "Neverever", and "If Ever". The game has also been ridiculed as Duke Nukem: Forever In Development; "Either this is the longest game ever in production or an elaborate in-joke at the expense of the industry".

Footnotes

Citations

Additional references
—also published in Wired 18 (1), January 2010 print issue.

External links
Duke Nukem Forever News Archive at 3D Realms web site
The Duke Nukem Forever List - further history and comparisons of other things that happened during the time of development

Duke Nukem Forever
Duke Nukem
21332465
https://en.wikipedia.org/wiki/Think%20Tools
Think Tools
Think Tools AG (SWX: TTO) was a Swiss IT company that rose and fell with the dot com bubble in Europe. The company was founded by the philosopher Albrecht von Müller as a consultancy company in 1993.

The Initial Public Offering
In the company's IPO on March 24, 2000, at the peak of the dot com bubble, the shares were issued at CHF 270. On the first trading day the share price peaked at CHF 1050, driven by investor excitement generated by the company's connections to the World Economic Forum and promises that the company would play a central role in transitioning the world to a knowledge economy through the software tools the company claimed to be developing. The peak share price corresponded to a market valuation of CHF 2.52 billion.

Software Tools for Knowledge Economy
In addition to consultancy, the company had developed tools for supporting corporate problem solving by allowing its users to graphically visualize their thinking: a kind of intelligent notepad to assist the user in solving problems. However, the software, called Think Tools Suite, did not have any knowledge representation, problem-solving, or reasoning capability by itself. The company's advertising made general claims about the software's ability to strengthen organizations' capacity to generate, store, and utilize knowledge, and presented the tools as the result of a decade-long research program at the Max Planck Institute of Physics into ways to support cognitive processes. Nelson Mandela was cited as a reference client who used the Think Tools software "for policy formulation and policy coordination between the national, provincial and local levels". The company was profitable prior to the IPO, with a net income of CHF 3.9 million on sales of CHF 10.6 million in 1999, mostly from services to companies with close relations to the World Economic Forum, but these figures were only a very small fraction of the value of the company's stock after the IPO (corresponding to an extremely high price-to-earnings ratio), characteristic of dot com bubble companies before the share price collapse. In the peak year, 2000, sales were CHF 25.3 million, but the company made a CHF 19.8 million loss.

Controversies and the Collapse of the Company
The downward slope of the share price resulted from analysts reporting the stock to be seriously overvalued, from allegations that the Think Tools Suite product plagiarized the work of the biochemistry professor Frederic Vester, from the collapse of sales after the IPO, and from the company's inability to progress toward implementing the visions that had raised investors' expectations extremely high. It also turned out that the information about the company and its product had been misleading. For example, most of the reference clients at the time of the IPO did not actually use the Think Tools Suite for anything. The company's business model was to sell the Think Tools Suite software licence for a one-time fee, up to CHF 1 million per licence, and the market of companies willing to buy the software was very quickly exhausted. In 2001 sales were only CHF 3.4 million, and the company made a CHF 14.5 million loss. On November 2, 2001, shares traded at CHF 26. Only after the company's failure became apparent did investors start questioning whether the Think Tools software actually had the capabilities claimed by von Müller. On January 3, 2003, the company's shares traded at CHF 8.20. Sales in 2003 were CHF 1.9 million, with operating expenses of CHF 12.2 million.
In May 2004 Think Tools agreed to merge with the Swiss IT company redIT AG. As a result, redIT AG became a publicly traded company in the Swiss stock exchange. In October 2004 redIT AG sold the rights to the main remaining Think Tools asset, the Think Tools Suite, to the Parmenides Foundation (also founded by Albrecht von Müller). The sales price was CHF 350,000. Parmenides Foundation renamed Think Tools Suite to Parmenides Eidos Suite - Visual Reasoning and Knowledge Representation, and is selling it. Legacy The company has still been portrayed positively years after it ceased to exist: "A Swiss based advisory firm with a high powered board comprising former Prime Ministers, financiers, industrialist and the founder of the World Economic Forum, Klaus Schwab. 'A brilliant experience, Think Tools had advised DASA on developing the A-380 Airbus, Nelson Mandela and the South African government on development policy options and a variety of corporates and governmental entities around the world.'” After 2004, Think Tools products and ideas continued to be discussed in scholarly articles about scenario development. The German government used the software for critical decision making projects. Related controversies The Swiss Federal Banking Commission fined Swiss private bank Vontobel CHF 21 million after the Swiss stock exchange alerted the commission to irregularities in Think Tools' IPO, which was underwritten by Vontobel. Vontobel had anticipated a high post-IPO share price and had retained 52.14 per cent of the shares that should have been sold at the IPO, and sold the shares after the IPO when their price was substantially higher than the CHF 270 issue price. References Dot-com bubble Defunct companies of Switzerland
28710881
https://en.wikipedia.org/wiki/College%20of%20Engineering%2C%20Ewha%20Womans%20University
College of Engineering, Ewha Womans University
The College of Engineering is one of eleven major academic divisions (or colleges) of the Ewha Womans University. Established in 1996 with four departments, Computer Science, Electronics Engineering, Environmental Science, and Architecture, the college currently offers B.S., M.S., and Ph.D. degrees. History The college of engineering at Ewha Womans University was established in 1996 as the world’s first women’s college of engineering. Approximately 1,100 undergraduate and 120 graduate students have been studying at the college under the three divisions — Division of Computer & Electronics Engineering, Division of Architecture, and Division of Environmental & Food Science. 1980s - 1981: The Department of Computer Science founded in the College of Liberal Arts and Sciences 1990s - 1993: The Department of Environmental Science founded in the College of Natural Science - 1994: The Department of Electronics Engineering and the Department of Architecture founded in the College of Natural Science - 1996: Dean KiHo Lee takes office (first dean) - 1996: The College of Engineering founded. The four departments, Computer Science, Electronics Engineering, Environmental Science, and Architecture transferred to the College of Engineering - 1996: The Department of Environmental Science renamed as the Department of Environmental Science & Engineering - 1997: Dean Yoon-Kyoo Jhee takes office (second dean) - 1998: The Department of Computer Science renamed as the Department of Computer Science & Engineering - 1999: Dean Yeoung-Soo Shin takes office (third dean) - 1999: The College of Engineering organized into 2 divisions: Computer Science & Electronics Engineering, and Architecture & Environmental System Engineering. 2000s - 2000: The Department of Electronics Engineering renamed as the Department of Information Electronics Engineering. The Department of Environmental Science & Engineering renamed as Environmental Science & Engineering Major School - 2000: System reorganized - 2001: Dean Seung Soo Park takes office (fourth dean) - 2003: Dean Yeoung-Soo Shin takes office (fifth dean) - 2005: Dean Yeoung-Soo Shin takes office (sixth dean) - 2006: School system reorganized. 
- 2006: The College of Engineering organized into three divisions: Computer Information Communication, Architecture, and Environmental & Food Technology - 2006: The Department of Computer Science & Engineering and the Department of Information Electronics Engineering integrated into the Department of Computer Information Communication Engineering - 2006: The Department of Architecture divided into Architecture Design Major and Architectural Engineering Major - 2006: The Department of Environmental Science & Engineering and the Department of Food Sciences & Technology reorganized into the Division of Environmental and Food Technology - 2006: Introduced the Accreditation Program for Engineering Education - 2007: Dean Myoung-Hee Kim takes office (seventh dean) - 2007: The Division of Computer Information Communication Engineering renamed as the Division of Information Communication Engineering, and divided into Computer Science & Engineering Major and Information Electronics Major - 2008: The Division of Information Communication Engineering renamed as the Division of Computer Information Communication - 2008: Computer Science & Engineering Major renamed as Computer Engineering Major - 2008: Information Electronics Major renamed as Electronics Engineering Major - 2009: Dean Sang-Ho Lee takes office (eighth dean) - 2011: Dean Kwang-Ok Kim takes office (ninth dean) Academics Division of Computer and Electronics Engineering Department of Computer Science and Engineering Division of Architecture The Division of Architecture was founded in 1994 in the College of Natural Sciences as the Department of Architecture. Architecture covers up-to-date design environment from museum to skyscraper to daily surrounding environment, as well as all surrounding facilities including houses, museums, concert halls, schools, and offices. Until 2005, the Department of Architecture was part of the College of Engineering, and since 2006, it expanded to the Division of Architecture as an independent division. The division consists of 2 majors, Architecture Major and Architectural Engineering Major. In the Architecture Major, students study design and planning, and in the Architectural Engineering Major, students study core engineering technology and basic engineering knowledge, which includes structural engineering, environmental technology and construction management. Department of Architecture The curriculum consists of five areas: Architectural Design: the center of the entire course of study Communication using Various Media Cultural Context including art, history, environment, and cities Practical Affairs such as building codes and ethics Technologies: engineering knowledge. 
Department of Architectural Engineering The Department of Architectural Engineering operates a 4-year engineering education accreditation program, dividing the subject into the following areas: Building Structure: designing structures to secure the safety of buildings, sustainability of structures, green technology of structures, retrofitting of structural members, fire safety design of structures Bio-mechanical Engineering;safety evaluation of heart valves, fatigue analysis of bio-composites Building Environment Planning and Energy Efficient Building M/E system: realizing sustainable development and energy-saving, low-carbon green buildings Building Performance Simulation: simulation validation and testing, and simulation to support commissioning, controls and monitoring Governmental Policy for Energy-Efficient Green Building: design guideline for building energy saving, green building certification and rating system, and building energy efficiency rating system Construction Management: financial and operational management of construction projects. Building Materials: understanding the physical and mechanical properties of concrete, stone, wood, glass, etc. In the graduate program, various projects are actively underway in such areas as structure repair and reinforcement, structural damage prediction, fire-resistance technology, energy-saving buildings, insulation technology, radiant cooling/heating technology, and construction management technology. Division of Environmental and Food Science The Division of Environmental and Food Science consists of Department of Environmental Science & Engineering and Department of Food Science and Engineering. Environmental and Food Science is a field of study for the 21st century where human beings, nature and technology coexist. Protecting nature is deeply concerned with preserving life, thus studying the environment implies protecting life. Food science and engineering is applied science which requires traditional food technology and complex integration of advanced technology for future industry. Department of Environmental Science and Engineering The Department of Environmental Science & Engineering has performed environmental projects in Environmental management Policies Information System Monitoring Environmental Impact Assessment Media Environment Low-carbon Green Growth Department of Food Science and Engineering Academic resources Accreditation Board for Engineering Education of Korea Accreditation in engineering education is a quality assurance program run by the Accreditation Board for Engineering Education of Korea. The College of Engineering introduced the accreditation program for engineering education in 2005, and established the Center for Innovation in Engineering Education in 2006. It acquired accreditation in engineering education in 2009 as a result of organizing and operating recipient-oriented and performance-driven in-depth courses to actively respond to the demand of the fast-changing society. Research institutes The College of Engineering at Ewha Womans University is conducting systemic and in-depth research at its various research institutes. 
Severe Storm Research Center: was established to minimize damage from meteorological disasters by improving the accuracy of weather forecasts Center for Climate/Environment Change Prediction Research: was established to help the government and industries develop strategies for responding to medium-to-long term climate change through the specific and accurate prediction of the climate-driven changes in environments and ecosystems Center for Computer Graphics and Virtual Reality: was established in 1999 to research and develop future core technologies based on computer graphics and virtual reality Environmental Research Institute: was established in 1971 to research and provide solutions to environmental problems caused by industrialization, urbanization, and population increase by putting together research personnel from within Ewha Embedded Software Research Center: has its focus on establishing the research base for the area of embedded software, closely related to industrial demand, and fostering professional women engineers specializing in this area. Government-funded medium-to-large scale research projects Currently, the College of Engineering is carrying out a variety of government-funded medium-to-large scale research projects. The BK21 Project: comprising three core research teams, is an advanced-level human resource development project that provides 200 to 250 million KRW in research funds annually to young researchers in master, doctoral, or post-doctoral programs with aims of building a world-class graduate school for training excellent researchers. The NRL (National Research Laboratory) Project: aims to strengthen the research capabilities in the areas of core fundamental technologies that should be strategically fostered at the national level. As a participant of the project, two research laboratories in the Department of Environmental Science & Engineering are given 200 million won of research fund a year. Since being selected as an Engineering Research Center by the National Research Foundation in 2009, the Center for Climate/Environment Change Prediction Research is developing an integral prediction system considering the interaction among climate, environment, and ecosystem with domestic researchers in relevant fields. It has been granted approximately 1.2 billion won a year for research. The IT Original Technology Development Program: aims to develop computer-based animation technologies featuring virtual entertainment contents such as computer games and special effects for films. It has been and will be granted 1 billion won of research fund per year from 2008 to 2013. Exchange students program Ewha Womans University offers "International Exchange and Study Abroad Program" to exchange and visiting students. This program has been established for students who are interested in a wide range of studies including topics on Asia and Korea, and is open to undergraduate and graduate students from any accredited institution of higher education. Students have the option to study for one or two semesters at Ewha as a non-degree seeking student. All international students enrolled in our program may register in any of the courses offered by Ewha, either in English or Korean, depending on their language proficiency. Location The college of engineering is located inside the main campus of Ewha Womans University, Seoul, Korea. References Ewha Womans University
13050
https://en.wikipedia.org/wiki/Guru%20Meditation
Guru Meditation
Guru Meditation started as an error notice displayed by the Commodore Amiga computer when it crashes. It is now also used by Varnish, a software component used by many content-heavy websites. This has led to many internet users seeing a 'Guru Meditation' message (sometimes spelled "Guru Mediation") when these websites suffer crashes or other issues. It is analogous to the "Blue Screen of Death" in Microsoft Windows operating systems, or a kernel panic in Unix. It has also been used as a message for unrecoverable errors in software packages such as VirtualBox and other operating systems (see Legacy section below). Origins The term "Guru Meditation Error" originated as an in-house joke in Amiga's early days. The company had a product called the Joyboard, a game controller much like a joystick but operated by the feet, similar to the Wii Balance Board. Early in the development of the Amiga computer operating system, the company's developers became so frustrated with the system's frequent crashes that, as a relaxation technique, a game was developed where a person would sit cross-legged on the Joyboard, resembling an Indian guru. The player tried to remain extremely still; the winner of the game stayed still the longest. If the player moved too much, a "guru meditation" error occurred. The final unlockable balance activity in Wii Fit represents a similar game. The same activity is unlocked from the start in Wii Fit Plus. Description of "Guru Meditation" errors on the Amiga The alert occurred when there was a fatal problem with the system. If the system had no means of recovery, it could display the alert, even in systems with numerous critical flaws. In extreme cases, the alert could even be displayed if the system's memory was completely exhausted. The text of the alert messages was completely baffling to most users. Only highly technically adept Amiga users would know, for example, that exception 3 was an address error, and meant the program was accessing a word on an unaligned boundary. Users without this specialized knowledge would have no recourse but to look for a "Guru" or to simply reboot the machine and hope for the best. Technical description (Amiga) When a Guru Meditation is displayed, the options are to reboot by pressing the left mouse button, or to invoke ROMWack by pressing the right mouse button. ROMWack is a minimalist debugger built into the operating system which is accessible by connecting a 9600 bit/s terminal to the serial port. The alert itself appears as a black rectangular box located in the upper portion of the screen. Its border and text are red for a normal Guru Meditation, or green/yellow for a Recoverable Alert, another kind of Guru Meditation. The screen goes black, and the power and disk-activity LEDs may blink immediately before the alert appears. In AmigaOS 1.x, programmed in ROMs known as Kickstart 1.1, 1.2 and 1.3, the errors are always red. In AmigaOS 2.x and 3.x, recoverable alerts are yellow, except for some very early versions of 2.x where they were green. 
Dead-end alerts are always red and terminal in all OS versions except in a rare series of events, as in when a deprecated Kickstart (example: 1.1) program conditionally boots from disk on a more advanced Kickstart 3.x ROM Amiga running in compatibility mode (therefore eschewing the on-disk OS) and crashes with a red Guru Meditation but subsequently restores itself by pressing the left mouse button, the newer Kickstart recognizing an inadvised low level chipset call for the older ROM directly poking the hardware, and addressing it. The error is displayed as two fields, separated by a period. The format is #0000000x.yyyyyyyy in case of a CPU error, or #aabbcccc.dddddddd in case of a system software error. The first field is either the Motorola 68000 exception number that occurred (if a CPU error occurs) or an internal error identifier (such as an "Out of Memory" code), in case of a system software error. The second can be the address of a Task structure, or the address of a memory block whose allocation or deallocation failed. It is never the address of the code that caused the error. If the cause of the crash is uncertain, this number is rendered as 48454C50, which stands for "HELP" in hexadecimal ASCII characters (48=H, 45=E, 4C=L, 50=P). Guru Meditation handler There was a commercially available error handler for AmigaOS, before version 2.04, called GOMF (Get Outta My Face) made by Hypertek/Silicon Springs Development corp. It was able to deal with many kinds of errors and gave the user a choice to either remove the offending process and associated screen, or allow the machine to show the Guru Meditation. In many cases, removal of the offending process gave one the choice to save one's data and exit running programs before rebooting the system. When the damage was not extensive, one was able to continue using the machine. However, it did not save the user from all errors, as one may have still seen this error occasionally. Recoverable Alerts Recoverable Alerts are non-critical crashes in the computer system. In most cases, it is possible to resume work and save files after a Recoverable Alert, while a normal, red Guru Meditation always results in an immediate reboot. It is, however, still recommended to reboot as soon as possible after encountering a Recoverable Alert, because the system may be in an unpredictable state that can cause data corruption. System software error codes The first byte specifies the area of the system affected. The top bit will be set if the error is a dead end alert. Legacy AmigaOS versions 4.0 and onwards replaced "Guru Meditation" with "Grim Reaper", but briefly mentions the Guru Meditation number in the prompt box. MorphOS displays an "Application Is Meditating" error message. Attempting to close the application may revive the operating system, but restarting is still recommended. Varnish references Guru Meditation for severe errors. The ESP8266 and ESP32 microcontrollers will display "Guru Meditation Error: Core X panic'ed" (where X is 0 or 1 depending on which core crashed) along with a core dump and stack trace. VirtualBox uses the term "Guru Meditation" for severe errors in the virtual machine monitor, for example caused by a triple fault in the virtual machine. NewPipe displays the message "Sorry, that should not have happened. Guru Meditation." in error reports. E23 displays a "Guru Meditation" and restarts when severe errors occur. See also Screen of death References AmigaOS Amiga Screens of death Computer error messages
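To make the error-code format described above concrete, the following short Python sketch (illustrative only, not Amiga code; the helper name and the example code value are invented for demonstration) splits a Guru Meditation string into its two hexadecimal fields and confirms that the "cause uncertain" value 48454C50 is the ASCII spelling of "HELP":

def split_guru_code(code):
    # Split a "#xxxxxxxx.yyyyyyyy" Guru Meditation string into two integers.
    first, second = code.lstrip("#").split(".")
    return int(first, 16), int(second, 16)

subsystem, detail = split_guru_code("#00000003.00C0FFEE")  # example values only
print(hex(subsystem), hex(detail))                         # exception 3 (address error) and a task address

# The special "cause uncertain" marker decodes to the ASCII string "HELP":
print(bytes.fromhex("48454C50").decode("ascii"))           # prints HELP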
14778169
https://en.wikipedia.org/wiki/Iperf
Iperf
Iperf is a tool for network performance measurement and tuning. It is a cross-platform tool that can produce standardized performance measurements for any network. Iperf has client and server functionality, and can create data streams to measure the throughput between the two ends in one or both directions. Typical iperf output contains a time-stamped report of the amount of data transferred and the throughput measured.

The data streams can be either Transmission Control Protocol (TCP) or User Datagram Protocol (UDP):
UDP: When used for testing UDP capacity, iperf allows the user to specify the datagram size and provides results for the datagram throughput and the packet loss.
TCP: When used for testing TCP capacity, iperf measures the throughput of the payload. Iperf uses 1024 × 1024 for mebibytes and 1000 × 1000 for megabytes.

Iperf is open-source software written in C, and it runs on various platforms including Linux, Unix and Windows (either natively or inside Cygwin). The availability of the source code enables the user to scrutinize the measurement methodology. Iperf is a compatible reimplementation of the ttcp program. It was developed at the National Center for Supercomputing Applications at the University of Illinois by the Distributed Applications Support Team (DAST) of the National Laboratory for Applied Network Research (NLANR), which was shut down on December 31, 2006, on termination of funding by the United States National Science Foundation.

iperf3
Iperf3 is a rewrite of iperf from scratch to create a smaller, simpler code base. It also includes a library version which enables other programs to use the provided functionality. Another change is that iperf3 is single-threaded while iperf2 is multi-threaded. Iperf3 was started in 2009, with the first release in January 2014. Iperf3 is not backwards compatible with iperf2. Iperf3 does not officially support Windows, only Linux; Vivien Guéant compiled it for Windows in 2016, but that build has not been maintained since. Many Linux distributions have up-to-date versions of iperf3 in their native package repositories (as of December 2021).

See also
Netperf
Nuttcp
NetPIPE
bwping
Flowgrind
Measuring network throughput
Packet generation model

References

External links
Iperf 2 & Iperf 3 Comparison Table

Network performance
Software using the BSD license
Free software programmed in C
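As an illustration of how the client/server measurements described above can be consumed by other programs, the following Python sketch (an example under stated assumptions, not part of iperf) runs an iperf3 client with JSON output and reads the summary throughput. The server address is a placeholder, an iperf3 server is assumed to be already running there, and the JSON field names reflect recent iperf3 releases and should be checked against the version in use:

import json
import subprocess

def measure_tcp_throughput(server, seconds=5):
    # Run "iperf3 -c <server> -t <seconds> -J" and return the received rate in Mbit/s.
    proc = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(proc.stdout)
    # For TCP tests the end-of-run summary sits under end.sum_received / end.sum_sent.
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

print(round(measure_tcp_throughput("192.0.2.10"), 1), "Mbit/s")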
4258536
https://en.wikipedia.org/wiki/ASCOM%20%28standard%29
ASCOM (standard)
ASCOM (an abbreviation for AStronomy Common Object Model) is an open initiative to provide a standard interface to a range of astronomy equipment including mounts, focusers and imaging devices in a Microsoft Windows environment. History ASCOM was invented in late 1997 and early 1998 by Bob Denny, when he released two commercial programs and several freeware utilities that showcased the technology. He also induced Doug George to include ASCOM capabilities in commercial CCD camera control software. The first observatory to adopt ASCOM was Junk Bond Observatory, in early 1998. It was used at this facility to implement a robotic telescope dedicated to observing asteroids. The successful use of ASCOM there was covered in an article in Sky & Telescope magazine. This helped ASCOM to become more widely adopted. The ASCOM standards were placed under the control of the ASCOM Initiative, a group of astronomy software developers who volunteered to develop the standards further. Under the influence of Denny, George, Tim Long, and others, ASCOM developed into a set of device driver standards. In 2004, over 150 astronomy-related devices were supported by ASCOM device drivers, which were released as freeware. Most of the drivers are also open source. As ASCOM developed, the term became less associated with the Component Object Model, and has been used more broadly to describe not only the standards and software based on them, but also to describe an observing system architecture and a robotic telescope design philosophy. In 2004, ASCOM remained formally a reference to the Component Object Model, but the term is expected to stand on its own as new technologies such as Microsoft .NET take over functions provided by the Component Object Model, and additional ASCOM projects are adopted that dilute its concentration on device drivers. The release of version 6 of the ASCOM Platform in June 2011 marked a transition to an open source development paradigm, with several developers contributing to the effort and all of the platform source code being made available under a Creative Commons license. Initially, the Platform developer team used servers hosted by TiGra Networks (Long's IT consulting company) for source code control, issue tracking and project management, with server licenses contributed by Atlassian and JetBrains. In 2012, due in part to differences in development style, TiGra Networks' involvement with the software development effort ceased and the source code was relocated to SourceForge. What is it? The Ascom Platform is a collection of computer drivers for different astronomy-related devices. It uses agreed standards that allow different computer programs ('apps') and devices to communicate with each other simultaneously. This means that you can have things like mounts, focusers, cameras and filter wheels all controlled by a single computer, even with several computers sharing access to those resources. For example, you can use one program to find targets and another to guide your telescope, with both of them sharing control of your mount at the same time. An ASCOM driver acts as an abstraction layer between the client and hardware thus removing any hardware dependency in the client, and making the client automatically compatible with all devices that supports the minimum required properties and methods. For example, this abstraction allows an ASCOM client to use an imaging device without needing to know whether the device is attached via a serial or network connection. 
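As a minimal sketch of this driver abstraction (an illustration only, not taken from the ASCOM documentation), the Python example below talks to a driver through its COM interface using the pywin32 package. The ProgID "ASCOM.Simulator.Telescope" is assumed to be the telescope simulator installed with the ASCOM Platform, and the member names follow the standard Telescope interface; both should be checked against the Platform version in use, and the same pattern applies from VBScript or any other COM-capable language.

import win32com.client

# The client sees only the standard interface; whether the mount is attached
# over a serial port, USB or the network is the driver's concern.
telescope = win32com.client.Dispatch("ASCOM.Simulator.Telescope")
telescope.Connected = True                  # open the link through the driver

if telescope.CanSlew:                       # optional features are discovered via Can... properties
    telescope.Tracking = True
    telescope.SlewToCoordinates(5.5, 30.0)  # right ascension in hours, declination in degrees
    print(telescope.RightAscension, telescope.Declination)

telescope.Connected = False                 # release the device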
ASCOM defines a collection of required Properties and Methods that ASCOM-compliant software can use to communicate with an ASCOM-compliant device. ASCOM also defines a range of optional Properties and Methods to take advantage of common features that may not be available for every manufacturer's device. By testing various properties, an ASCOM client application can determine what features are available for use. Properties and Methods are accessible via scripting interfaces, allowing control of devices by standard scripting applications such as VBScript and JavaScript. In fact, any language that supports access to Microsoft COM objects can interface with ASCOM.

An ASCOM Platform software package is available for download which installs some common libraries and documentation as well as a collection of ASCOM drivers for a broad range of equipment. Additional ASCOM drivers for devices not included in the ASCOM Platform package can be downloaded and installed separately. Although ASCOM is predominantly used by the amateur community, because the standard is freely available it is also used in some professional installations.

Licensing
There are no particular licensing requirements other than that the ASCOM logo may only be used if the client application is ASCOM compatible, and an ASCOM driver must implement all the required properties and methods (but need not implement any of the optional properties and methods).

End user
From an astronomer's point of view, it is a simple matter of installing the ASCOM platform and suitable client software; no programming is required. ASCOM drivers allow computer-based control of devices, for example allowing planetarium software to direct a telescope to point at a selected object. Using a combination of mount, focuser and imaging device ASCOM drivers, it is possible to build a fully automated environment for deep sky imaging.

Developer
Developers can enhance the power of ASCOM by writing their own clients using the scripting or object interface.

ASCOM Alpaca
A recent initiative called ASCOM Alpaca is currently under development. The Alpaca API uses RESTful techniques and TCP/IP to enable ASCOM applications and devices to communicate across modern network environments. This will enable ASCOM-compatible devices to work across different operating systems, including Linux and macOS, in the near future.

References
ASCOM Standards web site
Cedric Thomas, ASCOM Developer web site

See also
INDI

Astronomy software
31437122
https://en.wikipedia.org/wiki/Bullying%20in%20information%20technology
Bullying in information technology
As a result of advances in technology, information technology has become a highly important economic sector. Although it is relatively new, this industry still experiences many of the workplace culture problems of older industries. Bullying is common in IT, leading to high sickness rates, low morale, poor productivity and high staff turnover. Deadline-driven project work and stressed-out managers take their toll on IT professionals. Bullying in IT is most commonly downwards hierarchical (such as manager to employee) but can also be horizontal (such as employee to employee) or upwards hierarchical (such as employee to manager).

Incidence
In 2002, a survey of UK staff by Mercer Human Resource Consulting found that 21% of respondents in the IT industry had been bullied once or more in the previous year. Seven per cent claimed to have suffered chronic bullying.
In 2005, the Chartered Management Institute conducted a survey of IT managers finding that more than three out of 10 managers had been bullied during the previous three years.
In 2008, the Chartered Management Institute conducted a survey of IT managers finding that 61% had witnessed bullying between peers and 26% had witnessed subordinates bullying their managers.
In 2008, a survey of IT professionals carried out by the trade union Unite showed that 65% believed they had been bullied at work, and 22% had taken time off work because of stress caused by bullying.
In 2014, IDG Connect conducted research which showed that 75% of 650 IT professionals surveyed claimed to have been bullied at work, while 85% said they had seen others bullied. This report formed part of an extensive series of articles conducted by the editor. The press release stated: “These results in no way prove that things are worse in IT than elsewhere and are weighted by the self-selecting nature of the study. However, via a blend of new statistics, detailed feedback from over 400 in-depth testimonials, along with insight from a range of industry experts, this report paints a pretty comprehensive picture of a seemingly endemic problem.”

Impact
Impacts of a bullying culture can include:
Victims are stressed and take more sick leave
Damage to the productivity, morale and performance of the whole company
High staff turnover, which may exacerbate any skill shortages
Giving a company a bad reputation, making it harder to get good staff and get new business.

Manifestations
Victims reported:
Unachievable deadlines (see setting up to fail)
Excessive monitoring and supervision (see micromanagement)
Constant criticism on minor matters (see hypercriticism)
Being ridiculed, humiliated and intimidated
Being marginalised and asked to do pointless or ill-defined work below their capabilities.
Being ostracised (see social rejection)

Comments from victims and researchers
Comments from victims and researchers include:
Internal grievance procedures are commonly not impartial and are open to abuses of power - the bully himself may have a close relationship with the investigating party and there may be conflicts of interest (see quasi-judicial)
Complaining against the bully may only make the bullying worse, and the victim may fear repercussions
Victims often conclude that the only solution is to leave the company or the Information Technology industry altogether
IT professionals may be technically brilliant but often lack social skills (soft skills)
Some managers may be technically "hands-off" and feel threatened by the technical skills of their subordinates
There is often a culture of cronyism, protectionism and jobs for the boys
Middle managers are prone to being bullied from above and/or below in the hierarchy
A high percentage of senior IT management possess very poor people skills or may have a narcissistic personality
Mediation needs to be done by a fully trained, impartial external mediator with a good track record, and such mediators are not available at most workplaces
A system to report bullying anonymously may be helpful
Victims often have misplaced loyalty to a company and stay too long, trusting that they would get support
Bullying in IT is often encouraged and institutionalized as a way of boosting competitiveness among employees
Information Technology bullies may use their advanced computer skills to hack into their victim's computers and/or participate in cyberbullying against their victim

Case studies
Collins T, Bullied NHS information manager gets £150,000, Computer Weekly, 13 January 2010
Customer services manager for software development department in a government organisation being privatised - Case Histories at BullyOnline No 2 - Tim Field
Bullying in computing and IT in the NHS - Case Histories at BullyOnline No 42
Bullying in the computing industry - Case Histories at BullyOnline No 100
Bullied IT project manager

See also

References

External links
Chatham R, Beat the bullies, Computer Weekly, 31 October 2008
Cone E, Getting IT Bullies to Behave, CIO Insight, 5 August 2007

Abuse
Information technology
Workplace bullying
5821677
https://en.wikipedia.org/wiki/True%20and%20false%20%28commands%29
True and false (commands)
In Unix-like operating systems, true and false are commands whose only function is to always return with a predetermined exit status. Programmers and scripts often use the exit status of a command to assess success (exit status zero) or failure (non-zero) of the command. The true and false commands represent the logical values of command success, because true returns 0, and false returns 1. Usage The commands are usually employed in conditional statements and loops of shell scripts. For example, the following shell script repeats the echo hello loop until interrupted: while true do echo hello done The commands can be used to ignore the success or failure of a sequence of other commands, as in the example: make … && false Setting a user's login shell to false, in /etc/passwd, effectively denies them access to an interactive shell, but their account may still be valid for other services, such as FTP. (Although nologin, if available, may be more fitting for this purpose, as it prints a notification before terminating the session.) The programs take no "actual" parameters; in the GNU version, the standard parameter --help displays a usage summary and --version displays the program version. Null command The true command is sometimes substituted with the very similar null command, written as a single colon (:). The null command is built into the shell, and may therefore be more efficient if true is an external program (true is usually a shell built-in command). The example above can be rewritten using : instead of true: while : do echo hello done The null command may take parameters, which are ignored. It is also used as a no-op dummy command for side-effects such as assigning default values to shell variables through the ${parameter:=word} parameter expansion form. For example, from bashbug, the bug-reporting script for Bash: : ${TMPDIR:=/tmp} : ${EDITOR=$DEFEDITOR} : ${USER=${LOGNAME-`whoami`}} See also List of Unix commands Two-valued logic IEFBR14 Notes References External links Manual pages true(1): Do nothing, successfully – GNU Coreutils reference false(1): Do nothing, unsuccessfully – GNU Coreutils reference true(1): Return true value – FreeBSD manual page false(1): Return false value – FreeBSD manual page Standard Unix programs Unix SUS2008 utilities
5431578
https://en.wikipedia.org/wiki/Newton%20Lee
Newton Lee
Newton Lee is a computer scientist who is an author and administrator in the field of education and technology commercialization. He is known for his total information awareness book series. Education Lee holds a B.S. and M.S. in computer science from Virginia Tech, and an electrical engineering degree and honorary doctorate from Vincennes University. He was a 2021 graduate of the FBI Citizens Academy and the founding president of the Los Angeles chapter of the Virginia Tech Alumni Association. Career Lee is editor and curator of SpringerBriefs in Computer Science, Springer International Series on Computer Entertainment and Media Technology, and Springer Encyclopedia of Computer Graphics and Games. Previously, Lee was adjunct professor of Media Technology at Woodbury University, senior producer and lead engineer at The Walt Disney Company, research scientist at VTLS where he created the world's first annotated multimedia OPAC for the U.S. National Agricultural Library, computer science and artificial intelligence researcher at AT&T Bell Laboratories where he created Bell Labs' first-ever commercial AI tool, and research staff member at the Institute for Defense Analyses conducting military-standard Ada research for the U.S. Department of Defense (DoD). In 2003, Lee founded the nonprofit Computers in Entertainment. It was published by the Association for Computing Machinery (ACM) for which Lee interviewed Roy E. Disney, Quincy Jones, and George Lucas. He oversaw the journal and magazine for 15 years from 2003 to 2018, which is the longest term held as editor-in-chief in the history of ACM. Bibliography Encyclopedia Encyclopedia of Computer Graphics and Games (2018), Academic books Disney Stories: Getting to Digital, 2nd Edition (2020), with Krystina Madej, The Transhumanism Handbook (2019), Emotion in Video Game Soundtracking (2018), with Duncan Williams, Game Dynamics: Best Practices in Procedural and Dynamic Game Content Generation (2017), with Oliver Korn, Digital Da Vinci: Computers in the Arts and Sciences (2014), Digital Da Vinci: Computers in Music (2014), Disney Stories: Getting to Digital (2012), with Krystina Madej, Total information awareness book series Facebook Nation: Total Information Awareness, 3rd Edition (2021), Google It: Total Information Awareness (2016), Counterterrorism and Cybersecurity: Total Information Awareness, 2nd Edition (2015), Facebook Nation: Total Information Awareness, 2nd Edition (2014), Counterterrorism and Cybersecurity: Total Information Awareness (2013), Facebook Nation: Total Information Awareness (2012), Movies, games, and music Lee was credited as a software engineer for Disney's Animated Storybooks (1994 video game) (featured on Billboard) and Pocahontas (1996 video game). Lee was also credited for web design and game development for the 2015 documentary film Finding Noah: The Search for Noah's Ark. Lee has executive produced dance-pop songs that have charted on U.S. Billboard, U.K. Music Week, and U.S. iTunes HOT 100 (Electronic) as well as appeared on American Idol and Lifetime original movie Cheyenne. In 2016, he curated and released the Google It (Soundtrack) music album consisting of Roger Hodgson's The Logical Song, Princess X's Free, and six cover songs from the Beatles, the Eagles, R.E.M., Pink Floyd, and Zager and Evans. Politics Lee is the chairman of the California Transhumanist Party and the Education and Media Advisor for the United States Transhumanist Party. Previously, he was a campaign advisor to Zoltan Istvan for the 2016 U.S. 
presidential election. Philanthropy Lee is the founding president of the 501(c)(3) nonprofit Institute for Education, Research, and Scholarships (IFERS) acknowledged by Alan Kay and Quincy Jones at the American Film Institute on November 4, 2006 during the Computers in Entertainment awards ceremony as well as by Senator Richard D. Roth of the California State Senate on March 14, 2016 in support of Type 1 diabetes awareness and research. On Earth Day on April 22, 2018, Lee was a member of the IFERS-sponsored 2018 Earth Day Peace Conference and Movie Screening. Advisory boards Lee has served on the advisory boards at Virginia Tech, University of Southern California, National University of Singapore, Murdoch University, Digital Hollywood, and the High School Music Company. References External links Newton Lee Institute for Education, Research, and Scholarships (IFERS) California Transhumanist Party ACM Computers in Entertainment American transhumanists Living people American computer scientists American online publication editors Virginia Tech alumni Vincennes University alumni Year of birth missing (living people)
7109707
https://en.wikipedia.org/wiki/Pollock-Krasner%20Foundation
Pollock-Krasner Foundation
The Pollock-Krasner Foundation was established in 1985 for the purpose of providing financial assistance to individual working artists of established ability. It was established at the bequest of Lee Krasner, who was an American abstract expressionist painter and the widow of fellow painter Jackson Pollock. Krasner left approximately $23 million in cash, securities, and art to the foundation. Activities The foundation provides grants to artists internationally based on "recognizable artistic merit and demonstrable financial need". The foundation also gives out Lee Krasner Awards. These awards are based on the same criteria as grants but also recognize a lifetime of artistic achievement and are by nomination only. By 1988, the foundation had already granted over $1.5 million to about 300 "worthy artists who are in need". Authentication board The Pollock-Krasner Authentication Board, established by the Pollock-Krasner Foundation to examine and rule (for no charge) on disputed works, operated for six years (1990-1996) before dissolving after the completion of the Pollock catalogue raisonné. The board considered hundreds of previously unknown works but admitted only a handful. The foundation still receives legal challenges based on its inclusions and exclusions—a version of authentication in its own right. See also Pollock-Krasner House and Studio References External links Pollock-Krasner Foundation website Jackson Pollock Arts foundations based in the United States 1985 establishments in New York (state)
1249749
https://en.wikipedia.org/wiki/Sri%20Lanka%20Institute%20of%20Information%20Technology
Sri Lanka Institute of Information Technology
The Sri Lanka Institute of Information Technology (also known as SLIIT) is a non-profit private university in Sri Lanka specialising in technology and management. It has two campuses and four regional centres, the main campus being based in Malabe and a Metropolitan Campus in Colombo. SLIIT is a member of the Association of Commonwealth Universities and International Association of Universities, and has several partnerships with international universities. History Founding Lalith Athulathmudali, who was then the Minister of Trade and Shipping, originally had the Mahapola Trust Fund acquire land in Malabe in the 1980s with the expectation of building a Technological University and the headquarters of the Mahapola Trust Fund. Minister of Commerce Kingsley Wickramaratne and Richard Pathirana would eventually use the land for establishing SLIIT. The Sri Lanka Institute of Information Technology was established in 1999 by incorporation under the Companies Act of 2007 as a nonprofit company by guarantee (registration number GL 24) with the ability to award Bachelor of Science degrees following amendments to the Universities Act the same year, thus gaining recognition from the Minister of Higher Education. SLIIT was established primarily to educate and train information technology professionals. Expansions Initially limited to the fields of information technology and computing, in 2007 SLIIT expanded into new fields of study. These included electronic engineering and business management in collaboration with Sheffield Hallam University. SLIIT also expanded its presence from Colombo and its suburbs to other parts of the country by establishing centers in Kandy and Matara, making SLIIT accessible in six locations. Presently, SLIIT operates two campuses - the Colombo Metropolitan Campus at Kollupitiya and the Malabe Campus - with centers in Matara, Kandy, Jaffna, and Kurunegala, and an undergraduate student population of over 7,000. A further 1,000 students follow master's degree, postgraduate diploma and other professional development programmes. In 2011, SLIIT established its faculty of business after it was accredited by the UGC to award Bachelor of Business Administration degrees in Human Capital Management, Accounting and Finance, Marketing Management, Quality Management and Management Information Systems. This was followed in 2012 with the establishment of the Colombo Academy of Hospitality Management (CAHM). The project is a joint venture of SLIIT, the Colombo Academy of Hospitality Management (CAHM), and the William Angliss Institute of Australia. It has been developed in line with international standards, housing a training kitchen, banqueting facility, training restaurant, model bedrooms, an IT training center, and team rooms for students' practical training, to prepare students for the degree of Bachelor of Tourism and Hospitality Management. The faculty of engineering was formed in 2013, awarding its own Bachelor of Science in Engineering degrees and master's degrees from partner universities such as Curtin University and Sheffield Hallam University. SLIIT Computing was established as a privately managed subsidiary of SLIIT in Kollupitiya to further expand undergraduate studies. In 2015, SLIIT established its School of Architecture, offering a three-year degree in architecture. In 2016, SLIIT introduced 16 new research-based degrees, including PhD and MPhil degrees approved by the Ministry of Higher Education, Technology and Innovation of Sri Lanka.
Campuses SLIIT has two main campuses in the Western Province - a Metropolitan campus in central Colombo, and a flagship suburban campus at Malabe. The Malabe campus covers an area of about , making it the larger of the two campuses, and houses the faculties of Computing, Business and Engineering, as well as the main library. Located between Kaduwela and Malabe along the New Kandy Road, it is the only SLIIT campus with sporting facilities. The smaller Metropolitan campus is located within the Bank of Ceylon Merchant Tower at Kollupitiya, hosting mostly postgraduate facilities. The institute also has regional centers in Kandy, Matara, Jaffna, and Kurunegala. Governance The company (incorporated under the Companies Act) is governed by a board of directors headed by a chairman; the current chairman is Professor L. Rathnayake, who succeeded founding chairman Professor Sam Karunaratne. Due to government ownership of the institute, many of the Board's seats are held by government officials on an ex-officio basis. The Board delegates power over academic and administrative affairs of the institute to the President, who also acts as CEO of the institute. Professor Lalith Gamage is the current President. Faculties and schools Faculty of Computing The origins of the Faculty of Computing go back to the formation of SLIIT in 1999, when it introduced degrees in information technology. Departments Information Technology Computer Systems and Network Engineering Software Engineering Information Systems Engineering Cyber Security Interactive Media Data Science Faculty of Business The Faculty of Business (Business School) was established in 2011, offering BBA degree programs. Departments Business Management Accounting and Finance Business Analytics Human Capital Management Marketing Management Quality Management Logistics and Supply Chain Management Information Management Faculty of Engineering Established in 2013, the Faculty of Engineering offers BSc engineering degree programs. Departments Civil Engineering Electrical & Electronic Engineering Mechanical Engineering Materials Engineering Faculty of Humanities and Sciences The Faculty of Science and Education was established in 2017, with the introduction of degree programs in biotechnology, natural sciences, law, education, psychology and nursing. School of Science School of Science - BSc (Hons) Biotechnology - The first degree course in the School of Science is Biotechnology. Established in 2019, the programme had six batches of students in education by 2021, enrolling nearly 200 new students annually. It provides the first and only Biotechnology degree course available in Sri Lanka and is equipped with state-of-the-art laboratory facilities for microbiology, molecular biology, tissue culture and chemistry. In addition, it maintains close relations with SLINTEC and Peradeniya University. School of Education - BEd (Hons) Biological Sciences, BEd (Hons) Physical Sciences, BEd (Hons) English School of Law - LLB (Hons) Law [LJMU] School of Nursing - Higher Diploma in Nursing, BSc (Hons) Nursing - International [LJMU] School of Psychology - Higher Diploma in Psychology, BSc (Hons) Psychology [LJMU] English Language Teaching Unit Mathematics Unit - BSc (Hons) Financial Mathematics and Applied Statistics [Expected in 2021] Faculty of Graduate Studies The Faculty of Graduate Studies is a graduate school that conducts taught and research postgraduate degree programs in the fields of engineering and computing.
Colombo Academy of Hospitality Management The Colombo Academy of Hospitality Management (CAHM) was started in 2012 as a joint venture between SLIIT and the William Angliss Institute of TAFE, offering degrees in tourism and hospitality management. SLIIT Academy SLIIT Academy (Private) Limited, previously known as SLIIT Computing, was established as a subsidiary of the Sri Lanka Institute of Information Technology under the Companies Act No. 17 of 1982 in 2011 and was registered with the Registrar of Companies under the No. N (PVS) 29707 on 31 December 2001. SLIIT Academy (Pvt.) Ltd was established to provide educational opportunities to a wider range of students who wish to progress their higher education with an industry-oriented learning experience. SLIIT Academy offers foundation programs and degrees in IT, business, engineering, and health and life sciences, as well as a postgraduate MSc in Project Management. Its programs combine theory with practice, with the aim of preparing students to enter the workforce with a sound academic background. Foundation Programs University of Bedfordshire (UOB) Degree Programs Foundation Certificate in Information Technology Foundation Certificate in Business Management Curtin University - SLIIT Academy Foundation Certificate programs in Information Technology Engineering Business Health & Life Sciences Higher Diploma Programs Electronic & Electrical Engineering Information Technology Business Management Degree Programs Departments Computer Science and Software Engineering - UOB Computer Networking - UOB Electronic & Electrical Engineering - LJM Business Administration - UOB Human Resources Management - UOB Postgraduate Programs Project Management – Liverpool John Moores University (LJMU) School of Architecture The School of Architecture was established in 2015, offering BSc and MSc degree programs in architecture from Liverpool John Moores University. Academics Affiliations Partnerships with foreign universities allow students to finish their degree at these universities following an initial study period in Sri Lanka. Research SLIIT conducts IT and computer-related research and is a partner of ConceptNursery.com, Sri Lanka's technology incubator. The SLIIT Research Symposium is held annually. Professional courses SLIIT conducts the Cisco Network Academy courses (CCNA, CCNP and CCSP) under the regional Cisco Network Academy in Sri Lanka. SLIIT is also one of the Microsoft Academies in Sri Lanka, offering certification courses. Controversies Ownership The Sri Lanka Institute of Information Technology (SLIIT) was established as a government-owned but privately managed institute and was built on a property that belonged to the Mahapola Higher Education Scholarship Trust Fund. In 2015, the SLIIT Board of Directors transferred the assets of SLIIT to a company limited by guarantee, which was set up to manage SLIIT. The government auditor's department said such a transfer was in contravention of the law and resulted in Mahapola not receiving the correct amount of funds from SLIIT. In September 2018, Dr. Rajapakshe submitted a Cabinet paper attempting to establish SLIIT as a fee-levying institution under the Government, but this has so far been unsuccessful.
In 2021, SLIIT was asked to appear before the Committee On Public Enterprises (Sri Lanka) for investigations, but it refused to appear before the Committee, citing that it was not state-owned, and COPE expressed its displeasure. In 2020, the Sri Lankan parliament appointed a committee to investigate the misuse of public funds by the SLIIT; this committee included 22 members of the Sri Lankan parliament, and its final report was released in April 2021. According to this report, the top management of the SLIIT had taken a series of informal steps to vest state resources in private ownership. Hence, the committee recommended that legal action be taken under the Public Property Act against parties who alienated state ownership and management of the Sri Lanka Institute of Information Technology without proper authority. Rankings Sport The Malabe campus has a range of sporting facilities including cricket, rugby, basketball, netball, volleyball, tennis, table tennis, badminton, chess and carrom. SLIIT sporting teams compete in a range of local and national competitions. See also Information Technology in Sri Lanka Further reading Section 25A of the Universities Act No. 16 of 1978 of Sri Lanka 21 November 2012 - Extra Ordinary Gazette 1785/22 References External links 1999 establishments in Sri Lanka Business schools in Sri Lanka Educational institutions established in 1999 Information technology institutes Information technology research institutes Schools of informatics Universities and colleges in Colombo District Engineering universities and colleges in Sri Lanka Universities in Sri Lanka
32547823
https://en.wikipedia.org/wiki/Router
Router
Router may refer to: Router (computing), a computer networking device Router (woodworking), a rotating cutting tool Router plane, a woodworking hand plane See also Rooter (disambiguation) Route (disambiguation) Routing (disambiguation)
3766667
https://en.wikipedia.org/wiki/Piranha%20%28disambiguation%29
Piranha (disambiguation)
A piranha, or piraña, is a carnivorous freshwater fish. Film Piranha (1978 film), 1978 horror film Piranha II: The Spawning, 1981 sequel to the 1978 film Piranha (1995 film), 1995 remake of the 1978 film Piranha 3D, 2010 remake of the 1978 film Piranha 3DD, 2012 sequel to the 2010 film Piranha (film series), a horror comedy franchise Piranha (1972 film), horror film unrelated to the film series Piranha (2006 film), Russian film Piranhas (2019 film), a 2019 Italian film Music The Piranhas, a British ska-influenced punk band Piranha (album), 2000 album by the Fullerton College Jazz Band "Piranha" (song), by The Grace from their album Graceful 4 "Piranha", song by The Prodigy from the album Invaders Must Die "PirANhA", song by rock band Tripping Daisy from the album I Am an Elastic Firecracker "Piranha", song by metal band Exodus from the album Bonded by Blood Publications The Piranha, Trinity College, Dublin's student satirical newspaper Piranha Press, imprint of DC comics (1989 to 1993) Piranha Brothers, a Monty Python sketch Piranha Club, powerful and eponymous fraternity in the comic strip Ernie/Piranha Club Software and video games Piranha (software), a data mining system Piranha Bytes, a German game developer Piranha Interactive Publishing, an American software publisher Piranha Games, a Canadian software developer Piranha Software, a defunct software label of Macmillan Publishing Petey Piranha, a character in Nintendo's Mario game series Military ALR Piranha, an aircraft project undertaken by the Swiss Air Force MAA-1 Piranha, a Brazilian air-to-air missile Mowag Piranha, a type of armoured fighting vehicle USS Piranha (SS-389), a Balao-class submarine Other uses Piranhas (baseball), a nickname for some of the hitters on the Minnesota Twins baseball team Pirana, Rajasthan, a village in India Bertone Pirana, a show car based on the Jaguar E-Type Piranha solution, a cleaning solution Piraña (Efteling), a river rapids ride in amusement park Efteling Piranha (comics), a fictional character appearing in Marvel Comics Piranha (compositing software), a digital imaging application Piranhas, Alagoas, Brazil Piranhas, Goiás, Brazil See also Paraná (disambiguation) Piraha (disambiguation)
30954277
https://en.wikipedia.org/wiki/Palomar%E2%80%93Leiden%20survey
Palomar–Leiden survey
The Palomar–Leiden survey (PLS) was a successful astronomical survey to study faint minor planets in a collaboration between the U.S Palomar Observatory and the Dutch Leiden Observatory, and resulted in the discovery of thousands of asteroids, including many Jupiter trojans. The original PLS-survey took place in 1960, and was followed by three Palomar–Leiden Trojan survey campaigns, launched in 1971, 1973 and 1977. Its principal investigators were the astronomers Ingrid and Cornelis van Houten at Leiden and Tom Gehrels at Palomar. For the period of the entire survey (1960–1977), the trio of astronomers are credited with the discovery of 4,637 numbered minor planets, which received their own provisional designation, such as 6344 P-L, 4835 T-1 and 3181 T-2. PLS was one of the most productive minor planet surveys ever conducted: five new asteroid families were discovered, gaps at 1:3 and 2:5 orbital resonances with Jupiter were revealed, and hundreds of photographic plates were taken with Palomar's Samuel Oschin telescope. These plates are still used in their digitized form for the precovery of minor planets today. Summary Approximately 5,500 minor planets were discovered during the Palomar–Leiden survey and its subsequent Trojan campaigns. A total of 4,622 minor planets have been numbered so far and are directly credited to the survey's principal investigators – Cornelis Johannes van Houten, Ingrid van Houten-Groeneveld and Tom Gehrels – by the Minor Planet Center (see ), which is responsible for the designation of minor bodies in the Solar System. Discoveries included members of the Hungaria and Hilda family, which are asteroids from the inner- and outermost regions of the asteroid belt, respectively, as well as a large number of Jupiter trojans. P-L  Palomar–Leiden survey (1960), discovered more than 2,000 asteroids (1,800 with orbital information) in eleven nights. This number was increased to 2,400 including 19 Trojans after further analysis of the plates. A total of 130 photographic plates were taken. T-1   the first Palomar–Leiden Trojan survey (1971), discovered approximately 500 asteroids including 4 Jupiter trojans in nine nights. A total of 54 photographic plates were taken. T-2   the second Palomar–Leiden Trojan survey (1973), discovered another 1,200 asteroids including 18 Jupiter trojans in eight nights. A total of 78 photographic plates were taken. T-3   the third Palomar–Leiden Trojan survey (1977), discovered an additional 1,400 asteroids including 24 Jupiter trojans in seven nights. A total of 68 photographic plates were taken. Naming The discovered bodies received a custom provisional designation. For example, the asteroid 2040 P-L is the 2040th minor planet in the original Palomar-Leiden survey, while the asteroid 4835 T-1 was discovered during the first Trojan-campaign. The majority of these bodies have since been assigned a number and many are already named. The custom identifier in the provisional designation "P-L" stands for "Palomar–Leiden", named after Palomar Observatory and Leiden Observatory. For the three Trojan campaigns, the survey designation prefixes "T-1", "T-2" and "T-3" stand for "Trojan". Surveys The PLS was originally intended as an extension of the Yerkes–McDonald asteroid survey (1950–1952), which was initiated by Dutch–American astronomer Gerard Kuiper. While this survey was limited to a magnitude of up to 16, PLS could study minor planets up to a visual magnitudes of 20. 
However, it only covered a portion of the ecliptic about the vernal equinox, with the target areas selected to minimize the number of background stars. Photographic plates were taken by Tom Gehrels, of the Lunar and Planetary Laboratory in Arizona, using the 48-inch Schmidt camera at Palomar Observatory. The orbital elements were computed at the Cincinnati Observatory, which was the site of the Minor Planet Center at the time. All other aspects of the program were conducted at Leiden Observatory in the Netherlands. Original PLS-survey During September and October 1960, the first 130 photographic plates were taken, with each plate spanning and having a limiting magnitude of 20.5. The observed region covered an area of . The Zeiss blink comparator from the Heidelberg Observatory was adapted to perform blink comparison of the plates. This resulted in the discovery of a large number of asteroids; typically 200–400 per plate. A subset of these objects had sufficient data to allow orbital elements to be computed. The mean error in their positions was as small as 0.6″, which corresponded to 0.009 mm on the plates. The resulting mean error in magnitude estimation was 0.19. Trojan surveys The third Palomar–Leiden Trojan survey was performed in 1977, resulting in the discovery of 26 Jupiter trojans. In total, there were three Trojan campaigns, designated T-1, T-2, and T-3, which discovered 3570 asteroids. Another small extension of the survey was reported in 1984, adding 170 new objects for a combined total of 2,403. List of discovered minor planets See also List of minor planet discoverers National Geographic Society – Palomar Observatory Sky Survey (NGS-POSS) References 1960 in California 1960 in science 1961 in science Asteroid surveys Astronomical surveys
183594
https://en.wikipedia.org/wiki/GNU%20Savannah
GNU Savannah
GNU Savannah is a project of the Free Software Foundation, initiated by Loïc Dachary, which serves as a collaborative software development management system for free software projects. Savannah currently offers CVS, GNU arch, Subversion, Git, Mercurial, Bazaar, mailing list, web hosting, file hosting, and bug tracking services. Savannah initially ran on the same SourceForge software that at the time was used to run the SourceForge portal. Savannah's website is split into two domain names: savannah.gnu.org for software that is officially part of the GNU Project, and savannah.nongnu.org for all other software. Unlike SourceForge or GitHub, Savannah focuses on hosting free software projects and has very strict hosting policies, including a ban against the use of non-free formats (such as Adobe Flash) to ensure that only free software is hosted. When registering a project, project submitters have to state which free software license the project uses. Project owners cannot delete their submitted projects at will, and the staff has a policy of refusing all deletion requests unless the project was approved by mistake or has always been empty. History Loïc Dachary installed SourceForge on a server located in Boston for the benefit of the GNU Project (specifically, to power GNU Savannah's website). When, as a contributor to SourceForge, he found out that it was to be turned into proprietary software, he forked it and named it Savannah (since it was the software running the GNU Project's Savannah website and had no other name). People contributing to GNU Savannah were called savannah-hackers from that day on, as it was at first more a quick hack than anything else. CERN took interest in the source code and hired Mathieu Roy, a savannah-hacker, to work in Geneva. This led to the development of Savane (software) starting in 2003. In 2003, Vincent Caron, a friend of Loïc Dachary, found out that the security of the server located in Boston had been compromised. A new server was bought by the Free Software Foundation to provide a clean reinstall of the software. When this server was put in place, after a four-month outage without any public news, only Free Software Foundation employees had access to it. Notably, savannah-hackers had no access and found out that Richard M. Stallman had decided to move GNU Savannah to GForge because it was "seriously maintained". In response, Vincent Caron, Loïc Dachary and Mathieu Roy put up an alternative instance of the software called Gna!, with a specific constitution inspired by the Debian Social Contract designed to prevent any unexpected takeover. GNU Savannah was totally or partly offline for months and, ultimately, did not move to GForge, which itself turned into proprietary software. See also Savane (software) - the software running GNU Savannah Puszcza - a sister software development hosting site of GNU Savannah maintained by long-time GNU volunteer Sergey Poznyakoff in Ukraine Gna! – a sister software development hosting site of GNU Savannah, by Free Software Foundation France (now deactivated) Comparison of open-source software hosting facilities Fusionforge – the continuation of the original open-source GForge GForge – another fork of SourceForge, now proprietary SourceForge – software used by GNU Savannah before fork Apache Allura – the continuation of the original SourceForge software References External links Savannah Project hosting websites Open-source software hosting facilities
25172490
https://en.wikipedia.org/wiki/Dmailer
Dmailer
Dmailer was a French company which specialized in portable backup and synchronization software for devices, including USB flash drives, memory cards, external hard disk drives, MP3 players, embedded phone memories, SIM cards and flash-based memory cards for mobile phones. Serving both consumers and original equipment manufacturers (OEMs), Dmailer designed, developed, manufactured and marketed portable backup and synchronization software. Dmailer software products were bundled with SanDisk, Western Digital, Verbatim Corporation, Imation, LaCie, Lexar and other manufacturers’ portable storage products worldwide. Dmailer licensed its patent-pending synchronization engine technology to a number of companies. Dmailer’s data backup software supports both local and online backup. On March 23, 2010, Dmailer launched a free version of its backup software (Dmailer Backup v3). The software application allows users to perform live backup and online backup of any storage device, as well as to back up PC and Mac platforms locally, via USB drive, CD, mobile device or external hard drive. No software installation is needed to restore data. It included 2 GB of free storage online through Dmailer Online, an online storage service that backed up secure copies to a remote server for access from any computer. Dmailer Backup v3 restores data to its original location or elsewhere and also performs customized backup. All product customer support, the Dmailer Sync Product and Dmailer Online service are discontinued. Dmailer technical assets have been bought by Gemalto and are now included in the YuuWaa service. Corporation Company history Established in 2001, Dmailer’s head office was located in Marseille, France, with a regional American office in Chicago, Illinois and a local presence in Australia. The company was founded in December 2001 by Brigitte Léonardi, winner of the "National Contest for Assistance to the Creation of Innovative Technology Companies", awarded by the French Ministry for Research (ANVAR). Management team Dmailer’s executives represented nine nationalities and ten spoken languages and international business experience spanning America (North and South), Europe and Asia Pacific. Lucas Leonardi (CEO) Philippe Leca (CFO) Benoit Gantaume (CSO) Eric Dumas (CTO) Anthony Reyes (EVP Marketing & Services) Products Dmailer offered backup, synchronization and multimedia software. The synchronization engine was included in some USB flash drives, memory cards, and external hard disk drives. Products included Dmailer Sync, Dmailer Backup, the Dmailer Online service, and Mediamove. Product history Dmailer first launched an email management application in 2001. In French, “demêler” means “To sort, to organize” and it was precisely the goal of the first software of the company: help people manage their emails. In 2004, the company developed its synchronization software called “Dmailer Sync” that let the users synchronize their PCs to a portable USB drive. In the same year, Dmailer made its first OEM deal with SanDisk and developed a product called “Cruzer Sync”, which is basically Dmailer Sync but branded for SanDisk Cruzer drives. In 2006, Dmailer started shipping Dmailer Sync along with Western Digital hard drives branded as WD Sync. In 2007 and 2008, Dmailer Sync was shipped along with Lacie and Lexar drives as “Lacie sync” and “Lexar sync” respectively. In 2009, Dmailer came out with its backup software called “Dmailer Backup” which is shipped with Maxell, EdgeTech, Verbatim and Lexar portable drives. 
Dmailer Backup is also available for free as a standalone product for customers. Both Dmailer Sync and Dmailer Backup are coupled with “Dmailer Online”, an online storage service that lets the users backup their files online in secure servers for anytime and anywhere access. In 2009, Dmailer announced MediaMove. It is media management software that lets the user preview media files, move them from a phone, camera or MP3 to a PC or Mac and share them with friends or upload them to online media-sharing services. Product reviews On April 1, 2010, “Addictive Tips” reviewed Dmailer Backup software and called it a “Kick-Ass backup tool” On April 7, 2010, “SmashingApps” called Dmailer backup as an alternative to DropBox On May 7, 2010 Jon Jacobi from PC World reviewed Dmailer Backup and said that it is far friendlier and well worth a look Dmailer products are bundled with SanDisk, Western Digital, Verbatim Corporation, Imation, LaCie, Lexar and other manufacturers of portable storage products worldwide. Dmailer licenses its patent-pending synchronization engine technology to a number of companies. Features Dmailer Backup: Custom configuration: backup by file type, size and date Manage multiple backups, maintain log files and view backup summaries Password protection and AES 128-bit encryption Automatic versioning controls File selection for automatic and continuous backup 2 GB of free online storage Online folder sharing Supports Windows XP, Vista and 7 Supports Mac OS 10.5 and 10.6 Available in 19 languages 600-pixel vertical resolutions for Netbooks Awards Dmailer received Deloitte’s Technology Fast50 in France and Technology Fast500 EMEA awards in 2008 and 2009, respectively. References External links SearchDataBackup.com: Dmailer Announces New Backup Software - Dmailer Backup v2.9 (posted January 6, 2010) Storagenewsletter.com: Dmailer Showcases mediamove (posted February 24, 2010) SearchDataBackup.com: Online Backup Storage Provider Dmailer Gives Away Free Local Data Backup (posted March 25, 2010) Backup software Software companies of France File hosting for macOS File hosting for Windows
612334
https://en.wikipedia.org/wiki/GNU%20Parted
GNU Parted
GNU Parted (the name being the conjunction of the two words PARTition and EDitor) is a free partition editor, used for creating and deleting partitions. This is useful for creating space for new operating systems, reorganising hard disk usage, copying data between hard disks, and disk imaging. It was written by Andrew Clausen and Lennert Buytenhek. It consists of a library, libparted, and a command-line front-end, parted, that also serves as a reference implementation. , GNU Parted runs only under Linux and GNU/Hurd. Other front-ends nparted is the newt-based frontend to GNU Parted. Projects have started for an ncurses frontend, that also could be used in Windows (with GNUWin32 Ncurses). fatresize offers a command-line interface for FAT16/FAT32 non-destructive resize and uses the GNU Parted library. Graphical front-ends GParted and KDE Partition Manager are graphical programs using the parted libraries. They are adapted for GNOME and KDE respectively; two major desktop environments for Unix-like installations. They are often included as utilities on many live CD distributions to make partitioning easier. QtParted was another graphical front-end based on Qt that is no longer being actively maintained. Pyparted (also called python-parted) is the Python front-end for GNU Parted. Linux distributions that come with this application by default include Slackware, Knoppix, sidux, SystemRescueCD, and Parted Magic. Limitations Parted previously had support for operating on filesystems within partitions (creating, moving, resizing, copying). This support was removed in version 3.0. See also List of disk partitioning software util-linux: fdisk cfdisk sfdisk gpart gparted FIPS Master Boot Record manager References External links Free partitioning software Parted Software using the GPL license
9646826
https://en.wikipedia.org/wiki/Network%20agility
Network agility
Network Agility is an architectural discipline for computer networking. It can be defined as: The ability of network software and hardware to automatically control and configure itself and other network assets across any number of devices on a network. With regards to network hardware, network agility is used when referring to automatic hardware configuration and reconfiguration of network devices e.g. routers, switches, SNMP devices. Network agility, as a software discipline, borrows from many fields, both technical and commercial. On the technical side, network agility solutions leverage techniques from areas such as: Service-oriented architecture (SOA) Object-oriented design Architectural patterns Loosely coupled data streaming (e.g.: web services) Iterative design Artificial intelligence Inductive scheduling On-demand computing Utility computing Commercially, network agility is about solving real-world business problems using existing technology. It forms a three-way bridge between business processes, hardware resources, and software assets. In more detail, it takes, as input: 1 the business processes – i.e. what the network must achieve in real business terms; the hardware that resides within the network; and the set of software assets that run on this hardware. Much of this input can be obtained through automatic discovery – finding the hardware, its types and locations, software, licenses etc. The business processes can be inferred to a certain degree, but it is these processes that business managers need to be able to control and organize. Software resources discovered on the network can take a variety of forms – some assets may be licensed software products, others as blocks of software service code that can be accessed via some service enterprise portal, such as (but not necessarily) web services. These services may reside in-house, or they may be 'on-demand' via an on-line subscription service. Indeed, the primary motivation of network agility is to make the most efficient use of the resources available, wherever they may reside, and to identify areas where business process goals are not being satisfied to some benchmark level (and ideally to offer possible solutions). Network agility tools are then in a position to optimize the existing hardware to run software assets as needed to achieve the business process goals. As network usage is never linear, the hardware/software mix requirements will change dynamically over various time segments (weekly, quarterly, annually etc.), and step changes will be required from time to time when business-process goals change/evolve/are updated (e.g. during/after a company re-organization). The benefits to business of the network agility approach are obvious – cost savings in software licensing and higher efficiency of hardware assets – leading to better productivity. See also Service-oriented analysis and design Object-oriented design Design patterns SOA governance Business-driven development Semantic service-oriented architecture Enterprise service bus Finite-state machine Scheduling (computing) Representational state transfer Service component architecture Comparison of business integration software Service-oriented infrastructure Enterprise application integration Grid computing Distributed computing References Erl Thomas, Service-Oriented Architecture: Concepts, Technology, and Design (Prentice Hall) 2005, Jerome F. 
DiMarzio, Network Architecture and Design: A Field Guide for IT Consultants (Sams) 2001-5, University of California, Methodology for Developing Web Design Patterns (White Paper) Computer networking
16779711
https://en.wikipedia.org/wiki/Advanced%20Vector%20Extensions
Advanced Vector Extensions
Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and Advanced Micro Devices (AMD). They were proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge processor shipping in Q1 2011 and later by AMD with the Bulldozer processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme. AVX2 (also known as Haswell New Instructions) expands most integer commands to 256 bits and introduces new instructions. It was first supported by Intel with the Haswell processor, which shipped in 2013. AVX-512 expands AVX to 512-bit support using a new EVEX prefix encoding proposed by Intel in July 2013 and first supported by Intel with the Knights Landing co-processor, which shipped in 2016. In conventional processors, AVX-512 was introduced with Skylake server and HEDT processors in 2017. Advanced Vector Extensions AVX uses sixteen YMM registers to perform a Single Instruction on Multiple pieces of Data (see SIMD). Each YMM register can hold and do simultaneous operations (math) on: eight 32-bit single-precision floating point numbers or four 64-bit double-precision floating point numbers. The width of the SIMD registers is increased from 128 bits to 256 bits, and the registers are renamed from XMM0–XMM7 to YMM0–YMM7 (in x86-64 mode, from XMM0–XMM15 to YMM0–YMM15). The legacy SSE instructions can still be used via the VEX prefix to operate on the lower 128 bits of the YMM registers. AVX introduces a three-operand SIMD instruction format called the VEX coding scheme, where the destination register is distinct from the two source operands. For example, an SSE instruction using the conventional two-operand form a = a + b can now use a non-destructive three-operand form c = a + b, preserving both source operands. Originally, AVX's three-operand format was limited to instructions with SIMD operands (YMM), and did not include instructions with general purpose registers (e.g. EAX). The format was later used for coding new instructions on general purpose registers in extensions such as BMI. VEX coding is also used for instructions operating on the k0-k7 mask registers that were introduced with AVX-512. The alignment requirement of SIMD memory operands is relaxed. Unlike their non-VEX coded counterparts, most VEX coded vector instructions no longer require their memory operands to be aligned to the vector size. Notably, the VMOVDQA instruction still requires its memory operand to be aligned. The new VEX coding scheme introduces a new set of code prefixes that extends the opcode space, allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 128 bits. The VEX prefix can also be used on the legacy SSE instructions, giving them a three-operand form and making them interact more efficiently with AVX instructions without the need for VZEROUPPER and VZEROALL. The AVX instructions support both 128-bit and 256-bit SIMD. The 128-bit versions can be useful to improve old code without needing to widen the vectorization and to avoid the penalty of going from SSE to AVX; they are also faster on some early AMD implementations of AVX. This mode is sometimes known as AVX-128. New instructions These AVX instructions are in addition to the ones that are 256-bit extensions of the legacy 128-bit SSE instructions; most are usable on both 128-bit and 256-bit operands.
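To make the 256-bit registers and the three-operand VEX form more concrete, the following minimal C sketch (an illustration added here, not drawn from the article's sources) adds eight single-precision floats in one operation using the compiler intrinsics declared in immintrin.h. It assumes a CPU with AVX and a compiler invoked with AVX enabled, for example gcc -mavx example.c.

#include <immintrin.h>  /* AVX intrinsics (__m256, _mm256_* functions) */
#include <stdio.h>

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    float c[8];

    __m256 va = _mm256_loadu_ps(a);     /* eight floats fill one 256-bit YMM register */
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);  /* non-destructive form: vc = va + vb */
    _mm256_storeu_ps(c, vc);            /* write the eight sums back to memory */

    for (int i = 0; i < 8; ++i)
        printf("%g ", c[i]);            /* prints 11 22 33 44 55 66 77 88 */
    printf("\n");
    return 0;
}

The _mm256_add_ps intrinsic maps to the VEX-encoded VADDPS instruction, whose destination register is distinct from its two source registers, matching the non-destructive c = a + b form described above; the unaligned loads reflect the relaxed alignment rules for VEX-coded memory operands.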
CPUs with AVX Intel Sandy Bridge processors, Q1 2011 Sandy Bridge E processors, Q4 2011 Ivy Bridge processors, Q1 2012 Ivy Bridge E processors, Q3 2013 Haswell processors, Q2 2013 Haswell E processors, Q3 2014 Broadwell processors, Q4 2014 Skylake processors, Q3 2015 Broadwell E processors, Q2 2016 Kaby Lake processors, Q3 2016(ULV mobile)/Q1 2017(desktop/mobile) Skylake-X processors, Q2 2017 Coffee Lake processors, Q4 2017 Cannon Lake processors, Q2 2018 Whiskey Lake processors, Q3 2018 Cascade Lake processors, Q4 2018 Ice Lake processors, Q3 2019 Comet Lake processors (only Core and Xeon branded), Q3 2019 Tiger Lake (Core, Pentium and Celeron branded) processors, Q3 2020 Rocket Lake processors, Q1 2021 Alder Lake processors, 2021 Gracemont processors, 2021 Not all CPUs from the listed families support AVX. Generally, CPUs with the commercial denomination Core i3/i5/i7/i9 support them, whereas Pentium and Celeron CPUs do not. AMD: Jaguar-based processors and newer Puma-based processors and newer "Heavy Equipment" processors Bulldozer-based processors, Q4 2011 Piledriver-based processors, Q4 2012 Steamroller-based processors, Q1 2014 Excavator-based processors and newer, 2015 Zen-based processors, Q1 2017 Zen+-based processors, Q2 2018 Zen 2-based processors, Q3 2019 Zen 3 processors, Q4 2020 Issues regarding compatibility between future Intel and AMD processors are discussed under XOP instruction set. VIA: Nano QuadCore Eden X4 Zhaoxin: WuDaoKou-based processors (KX-5000 and KH-20000) Compiler and assembler support Absoft supports with - flag. The Free Pascal compiler supports AVX and AVX2 with the -CfAVX and -CfAVX2 switches from version 2.7.1. RAD studio (v11.0 Alexandria) supports AVX2 and AVX512. The GNU Assembler (GAS) inline assembly functions support these instructions (accessible via GCC), as do Intel primitives and the Intel inline assembler (closely compatible to GAS, although more general in its handling of local references within inline code). GCC starting with version 4.6 (although there was a 4.3 branch with certain support) and the Intel Compiler Suite starting with version 11.1 support AVX. The Open64 compiler version 4.5.1 supports AVX with - flag. PathScale supports via the - flag. The Vector Pascal compiler supports AVX via the - flag. The Visual Studio 2010/2012 compiler supports AVX via intrinsic and switch. Other assemblers such as MASM VS2010 version, YASM, FASM, NASM and JWASM. Operating system support AVX adds new register-state through the 256-bit wide YMM register file, so explicit operating system support is required to properly save and restore AVX's expanded registers between context switches. The following operating system versions support AVX: DragonFly BSD: support added in early 2013. FreeBSD: support added in a patch submitted on January 21, 2012, which was included in the 9.1 stable release Linux: supported since kernel version 2.6.30, released on June 9, 2009. macOS: support added in 10.6.8 (Snow Leopard) update released on June 23, 2011. OpenBSD: support added on March 21, 2015. Solaris: supported in Solaris 10 Update 10 and Solaris 11 Windows: supported in Windows 7 SP1, Windows Server 2008 R2 SP1, Windows 8, Windows 10 Windows Server 2008 R2 SP1 with Hyper-V requires a hotfix to support AMD AVX (Opteron 6200 and 4200 series) processors, KB2568088 Advanced Vector Extensions 2 Advanced Vector Extensions 2 (AVX2), also known as Haswell New Instructions, is an expansion of the AVX instruction set introduced in Intel's Haswell microarchitecture. 
AVX2 makes the following additions: expansion of most vector integer SSE and AVX instructions to 256 bits Gather support, enabling vector elements to be loaded from non-contiguous memory locations DWORD- and QWORD-granularity any-to-any permutes vector shifts. Sometimes three-operand fused multiply-accumulate (FMA3) extension is considered part of AVX2, as it was introduced by Intel in the same processor microarchitecture. This is a separate extension using its own flag and is described on its own page and not below. New instructions CPUs with AVX2 Intel Haswell processor (only Core and Xeon branded), Q2 2013 Haswell E processor, Q3 2014 Broadwell processor, Q4 2014 Broadwell E processor, Q3 2016 Skylake processor (only Core and Xeon branded), Q3 2015 Kaby Lake processor (only Core and Xeon branded), Q3 2016 (ULV mobile)/Q1 2017 (desktop/mobile) Skylake-X processor, Q2 2017 Coffee Lake processor (only Core and Xeon branded), Q4 2017 Cannon Lake processor, Q2 2018 Cascade Lake processor, Q2 2019 Ice Lake processor, Q3 2019 Comet Lake processor (only Core and Xeon branded), Q3 2019 Tiger Lake (Core, Pentium and Celeron branded) processor, Q3 2020 Rocket Lake processor, Q1 2021 Alder Lake processor, 2021 Gracemont processors, 2021 AMD Excavator processor and newer, Q2 2015 Zen processor, Q1 2017 Zen+ processor, Q2 2018 Zen 2 processor, Q3 2019 Zen 3 processor, 2020 VIA: Nano QuadCore Eden X4 AVX-512 AVX-512 are 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for x86 instruction set architecture proposed by Intel in July 2013, and are supported with Intel's Knights Landing processor. AVX-512 instruction are encoded with the new EVEX prefix. It allows 4 operands, 8 new 64-bit opmask registers, scalar memory mode with automatic broadcast, explicit rounding control, and compressed displacement memory addressing mode. The width of the register file is increased to 512 bits and total register count increased to 32 (registers ZMM0-ZMM31) in x86-64 mode. AVX-512 consists of multiple extensions not all meant to be supported by all processors implementing them. The instruction set consists of the following: AVX-512 Foundation adds several new instructions and expands most 32-bit and 64-bit floating point SSE-SSE4.1 and AVX/AVX2 instructions with EVEX coding scheme to support the 512-bit registers, operation masks, parameter broadcasting, and embedded rounding and exception control AVX-512 Conflict Detection Instructions (CD) efficient conflict detection to allow more loops to be vectorized, supported by Knights Landing AVX-512 Exponential and Reciprocal Instructions (ER) exponential and reciprocal operations designed to help implement transcendental operations, supported by Knights Landing AVX-512 Prefetch Instructions (PF) new prefetch capabilities, supported by Knights Landing AVX-512 Vector Length Extensions (VL) extends most AVX-512 operations to also operate on XMM (128-bit) and YMM (256-bit) registers (including XMM16-XMM31 and YMM16-YMM31 in x86-64 mode) AVX-512 Byte and Word Instructions (BW) extends AVX-512 to cover 8-bit and 16-bit integer operations AVX-512 Doubleword and Quadword Instructions (DQ) enhanced 32-bit and 64-bit integer operations AVX-512 Integer Fused Multiply Add (IFMA) fused multiply add for 512-bit integers. AVX-512 Vector Byte Manipulation Instructions (VBMI) adds vector byte permutation instructions which are not present in AVX-512BW. 
AVX-512 Vector Neural Network Instructions Word variable precision (4VNNIW) vector instructions for deep learning. AVX-512 Fused Multiply Accumulation Packed Single precision (4FMAPS) vector instructions for deep learning. VPOPCNTDQ count of bits set to 1. VPCLMULQDQ carry-less multiplication of quadwords. AVX-512 Vector Neural Network Instructions (VNNI) vector instructions for deep learning. AVX-512 Galois Field New Instructions (GFNI) vector instructions for calculating Galois field. AVX-512 Vector AES instructions (VAES) vector instructions for AES coding. AVX-512 Vector Byte Manipulation Instructions 2 (VBMI2) byte/word load, store and concatenation with shift. AVX-512 Bit Algorithms (BITALG) byte/word bit manipulation instructions expanding VPOPCNTDQ. AVX-512 Bfloat16 Floating-Point Instructions (BF16) vector instructions for AI acceleration. AVX-512 Half-Precision Floating-Point Instructions (FP16) vector instructions for operating on floating-point and complex numbers with reduced precision. Only the core extension AVX-512F (AVX-512 Foundation) is required by all implementations, though all current processors also support CD (conflict detection); computing coprocessors will additionally support ER, PF, 4VNNIW, 4FMAPS, and VPOPCNTDQ, while central processors will support VL, DQ, BW, IFMA, VBMI, VPOPCNTDQ, VPCLMULQDQ etc. The updated SSE/AVX instructions in AVX-512F use the same mnemonics as AVX versions; they can operate on 512-bit ZMM registers, and will also support 128/256 bit XMM/YMM registers (with AVX-512VL) and byte, word, doubleword and quadword integer operands (with AVX-512BW/DQ and VBMI). CPUs with AVX-512 : AVX-512 is disabled by default in Alder Lake processors. On some motherboards with some BIOS versions, AVX-512 can be enabled in the BIOS, but this requires disabling E-cores. Compilers supporting AVX-512 GCC 4.9 and newer Clang 3.9 and newer ICC 15.0.1 and newer Microsoft Visual Studio 2017 C++ Compiler AVX-VNNI AVX-VNNI is a VEX-coded variant of the AVX512-VNNI instruction set extension. It provides the same set of operations, but is limited to 256-bit vectors and does not support any additional features of EVEX encoding, such as broadcasting, opmask registers or accessing more than 16 vector registers. This extension allows to support VNNI operations even when full AVX-512 support is not implemented in the processor. CPUs with AVX-VNNI Intel Alder Lake processor, 2021 Applications Suitable for floating point-intensive calculations in multimedia, scientific and financial applications (AVX2 adds support for integer operations). Increases parallelism and throughput in floating point SIMD calculations. Reduces register load due to the non-destructive instructions. Improves Linux RAID software performance (required AVX2, AVX is not sufficient) Software Blender uses AVX, AVX2 and AVX-512 in the Cycles render engine. Bloombase uses AVX, AVX2 and AVX-512 in their Bloombase Cryptographic Module (BCM). Botan uses both AVX and AVX2 when available to accelerate some algorithms, like ChaCha. Crypto++ uses both AVX and AVX2 when available to accelerate some algorithms, like Salsa and ChaCha. OpenSSL uses AVX- and AVX2-optimized cryptographic functions since version 1.0.2. This support is also present in various clones and forks, like LibreSSL Prime95/MPrime, the software used for GIMPS, started using the AVX instructions since version 27.x. dav1d AV1 decoder can use AVX2 on supported CPUs. 
Software
Blender uses AVX, AVX2 and AVX-512 in the Cycles render engine.
Bloombase uses AVX, AVX2 and AVX-512 in their Bloombase Cryptographic Module (BCM).
Botan uses both AVX and AVX2 when available to accelerate some algorithms, like ChaCha.
Crypto++ uses both AVX and AVX2 when available to accelerate some algorithms, like Salsa and ChaCha.
OpenSSL uses AVX- and AVX2-optimized cryptographic functions since version 1.0.2. This support is also present in various clones and forks, like LibreSSL.
Prime95/MPrime, the software used for GIMPS, has used AVX instructions since version 27.x.
dav1d AV1 decoder can use AVX2 on supported CPUs.
dnetc, the software used by distributed.net, has an AVX2 core available for its RC5 project and will soon release one for its OGR-28 project.
Einstein@Home uses AVX in some of their distributed applications that search for gravitational waves.
Folding@home uses AVX on calculation cores implemented with the GROMACS library.
Helios uses AVX and AVX2 hardware acceleration on 64-bit x86 hardware.
Horizon: Zero Dawn uses AVX1 in its Decima game engine.
RPCS3, an open source PlayStation 3 emulator, uses AVX2 and AVX-512 instructions to emulate PS3 games.
Network Device Interface, an IP video/audio protocol developed by NewTek for live broadcast production, uses AVX and AVX2 for increased performance.
TensorFlow since version 1.6 requires a CPU supporting at least AVX.
x264, x265 and VTM video encoders can use AVX2 or AVX-512 to speed up encoding.
Various CPU-based cryptocurrency miners (like pooler's cpuminer for Bitcoin and Litecoin) use AVX and AVX2 for various cryptography-related routines, including SHA-256 and scrypt.
libsodium uses AVX in the implementation of scalar multiplication for the Curve25519 and Ed25519 algorithms, AVX2 for BLAKE2b, Salsa20 and ChaCha20, and AVX2 and AVX-512 in the implementation of the Argon2 algorithm.
libvpx, the open source reference implementation of the VP8/VP9 encoder/decoder, uses AVX2 or AVX-512 when available.
FFTW can utilize AVX, AVX2 and AVX-512 when available.
LLVMpipe, a software OpenGL renderer in Mesa using the Gallium and LLVM infrastructure, uses AVX2 when available.
glibc uses AVX2 (with FMA) for optimized implementations (i.e. expf, sinf, powf, atanf, atan2f) of various mathematical functions in libc.
The Linux kernel can use AVX or AVX2, together with AES-NI, as an optimized implementation of the AES-GCM cryptographic algorithm.
The Linux kernel uses AVX or AVX2 when available, in optimized implementations of multiple other cryptographic ciphers: Camellia, CAST5, CAST6, Serpent, Twofish, MORUS-1280, and other primitives: Poly1305, SHA-1, SHA-256, SHA-512, ChaCha20.
POCL, a portable Computing Language that provides an implementation of OpenCL, makes use of AVX, AVX2 and AVX-512 when possible.
.NET and .NET Framework can utilize AVX and AVX2 through the generic System.Numerics.Vectors namespace.
.NET Core, starting from version 2.1 and more extensively after version 3.0, can directly use all AVX and AVX2 intrinsics through the System.Runtime.Intrinsics.X86 namespace.
EmEditor 19.0 and above uses AVX2 to speed up processing.
Native Instruments' Massive X softsynth requires AVX.
Microsoft Teams uses AVX2 instructions to create a blurred or custom background behind video chat participants, and for background noise suppression.
simdjson, a JSON parsing library, uses AVX2 to achieve improved decoding speed.
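Software such as the packages listed above typically has to decide at run time whether AVX2 or AVX-512 code paths can be used at all. A minimal sketch of that dispatch logic is shown below; it relies on the GCC/Clang builtin __builtin_cpu_supports (other compilers expose CPUID differently), and the process_* functions are placeholders invented for the example rather than real library entry points.

```c
#include <stdio.h>

/* Placeholder implementations of the same routine; only the selection
   logic matters here.  Real code would use 256/512-bit intrinsics in
   the wider variants instead of falling back to the scalar loop. */
static void process_scalar(const float *in, float *out, int n)
{
    for (int i = 0; i < n; i++) out[i] = in[i] + 1.0f;
}
static void process_avx2(const float *in, float *out, int n)   { process_scalar(in, out, n); }
static void process_avx512(const float *in, float *out, int n) { process_scalar(in, out, n); }

typedef void (*process_fn)(const float *, float *, int);

/* Pick the widest implementation the running CPU reports support for.
   __builtin_cpu_supports queries CPUID feature flags at run time. */
static process_fn select_impl(void)
{
    if (__builtin_cpu_supports("avx512f"))
        return process_avx512;
    if (__builtin_cpu_supports("avx2"))
        return process_avx2;
    return process_scalar;
}

int main(void)
{
    printf("AVX2:     %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
    printf("AVX-512F: %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
    float in[8] = {0}, out[8];
    process_fn fn = select_impl();
    fn(in, out, 8);
    return 0;
}
```

Real libraries usually perform this selection once at startup (or through ifunc resolvers), and some deliberately prefer narrower vectors even when AVX-512 is available because of the downclocking behavior discussed in the next section.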
Downclocking
Since AVX instructions are wider and generate more heat, some Intel processors have provisions to reduce the Turbo Boost frequency limit when such instructions are being executed. On Skylake and its derivatives, the throttling is divided into three levels:
L0 (100%): The normal turbo boost limit.
L1 (~85%): The "AVX boost" limit. Soft-triggered by 256-bit "heavy" (floating-point unit: FP math and integer multiplication) instructions. Hard-triggered by "light" (all other) 512-bit instructions.
L2 (~60%): The "AVX-512 boost" limit. Soft-triggered by 512-bit heavy instructions.
The frequency transition can be soft or hard. A hard transition means the frequency is reduced as soon as such an instruction is spotted; a soft transition means that the frequency is reduced only after reaching a threshold number of matching instructions. The limit is per-thread.
In Ice Lake, only two levels persist:
L0 (100%): The normal turbo boost limit.
L1 (~97%): Triggered by any 512-bit instructions, but only when single-core boost is active; not triggered when multiple cores are loaded.
Rocket Lake processors do not trigger frequency reduction upon executing any kind of vector instructions regardless of the vector size. However, downclocking can still happen due to other reasons, such as reaching thermal and power limits.
Downclocking means that using AVX in a mixed workload with an Intel processor can incur a frequency penalty despite it being faster in a "pure" context. Avoiding the use of wide and heavy instructions helps minimize the impact in these cases. AVX-512VL allows for using 256-bit or 128-bit operands with AVX-512 instructions, making it a sensible default for mixed loads.
On supported and unlocked variants of processors that down-clock, the ratios are adjustable and may be turned off (set to 0x) entirely via Intel's Overclocking / Tuning utility or in the BIOS if supported there.
See also
Memory Protection Extensions
Scalable Vector Extension for ARM - a new vector instruction set (supplementing VFP and NEON) similar to AVX-512, with some additional features.
References
External links
Intel Intrinsics Guide
Intel Intrinsics Guide (Available Now)
x86 Assembly Language Reference Manual
X86 instructions
SIMD computing
Advanced Micro Devices technologies
29995700
https://en.wikipedia.org/wiki/1958%20USC%20Trojans%20football%20team
1958 USC Trojans football team
The 1958 USC Trojans football team represented the University of Southern California (USC) in the 1958 NCAA University Division football season. In their second year under head coach Don Clark, the Trojans compiled a 4–5–1 record (4–2–1 against conference opponents), finished in third place in the Pacific Coast Conference, and outscored their opponents by a combined total of 151 to 120. Tom Maudlin led the team in passing with 41 of 95 passes completed for 535 yards, four touchdowns and 15 interceptions. Don Buford led the team in rushing with 64 carries for 306 yards. Hillard Hill was the leading receiver with 11 catches for 319 yards and five touchdowns. Four Trojans were recognized by either the Associated Press (AP) or the conference coaches on the 1958 All-Pacific Coast Conference football team: end Marlin McKeever (AP-1; Coaches-1); tackle Dan Ficca (AP-1; Coaches-2); guard Frank Florentino (Coaches-1 [tie]); and halfback Don Buford (Coaches-2). Schedule References External links Game program: USC vs. Washington State at Spokane – October 25, 1958 USC USC Trojans football seasons USC Trojans football
2858982
https://en.wikipedia.org/wiki/Digital%20signature%20transponder
Digital signature transponder
The Texas Instruments digital signature transponder (DST) is a cryptographically enabled radio-frequency identification (RFID) device used in a variety of wireless authentication applications. The largest deployments of the DST include the Exxon-Mobil Speedpass payment system (approximately 7 million transponders), as well as a variety of vehicle immobilizer systems used in many late model Ford, Lincoln, Mercury, Toyota, and Nissan vehicles.
The DST is an unpowered "passive" transponder which uses a proprietary block cipher to implement a challenge–response authentication protocol. Each DST tag contains a quantity of non-volatile RAM, which stores a 40-bit encryption key. This key is used to encipher a 40-bit challenge issued by the reader, producing a 40-bit ciphertext, which is then truncated to produce a 24-bit response transmitted back to the reader. Verifiers (who also possess the encryption key) verify this challenge by computing the expected result and comparing it to the tag response. Transponder encryption keys are user programmable, using a simple over-the-air protocol. Once correctly programmed, transponders may be "locked" through a separate command, which prevents further changes to the internal key value. Each transponder is factory provisioned with a 24-bit serial number and an 8-bit manufacturer code. These values are fixed, and cannot be altered.
The DST40 cipher
Until 2005, the DST cipher (DST40) was a trade secret of Texas Instruments, made available to customers under non-disclosure agreement. This policy was likely instituted due to the cipher's non-standard design and small key size, which rendered it vulnerable to brute-force keysearch. In 2005, a group of students from the Johns Hopkins University Information Security Institute and RSA Laboratories reverse-engineered the cipher using an inexpensive Texas Instruments evaluation kit, schematics of the cipher leaked onto the Internet, and black-box techniques [1] (i.e., querying transponders via the radio interface, rather than dismantling them to examine the circuitry). Once the cipher design was known, the team programmed several FPGA devices to perform brute-force key searches based on known challenge/response pairs. Using a single FPGA device, the team was able to recover a key from two known challenge/response pairs in approximately 11 hours (average case). With an array of 16 FPGA devices, they reduced this time to less than one hour.
DST40 is a 200-round unbalanced Feistel cipher, in which L0 is 38 bits and R0 is 2 bits. The key schedule is a simple linear feedback shift register, which updates every three rounds, resulting in some weak keys (e.g., the zero key). Although the cipher is potentially invertible, the DST protocol makes use of only the encipher mode. When used in the protocol with the 40-to-24-bit output truncation, the resulting primitive is more aptly described as a Message Authentication Code rather than an encryption function. Although a truncated block cipher represents an unusual choice for such a primitive, this design has the advantage of precisely bounding the number of collisions for every single key value. The DST40 cipher is one of the most widely used unbalanced Feistel ciphers in existence.
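The keysearch described above can be sketched in a few lines of C. Note that the round function below is a stand-in: the actual reverse-engineered DST40 cipher is not reproduced here, and the key range searched is deliberately tiny so the demo finishes quickly, whereas the real attack swept the full 2^40 key space on FPGA hardware. The structure, however, mirrors the attack: collect two challenge/response pairs, then test candidate keys against both, since a single 24-bit response only narrows a 40-bit key to roughly 2^16 candidates.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the DST40 cipher: any 40-bit keyed mixing function will do
   for illustrating the search; this is NOT the real cipher. */
static uint64_t toy_cipher40(uint64_t key, uint64_t challenge)
{
    uint64_t x = challenge & 0xFFFFFFFFFFULL;
    for (int r = 0; r < 40; r++) {
        x ^= key;
        x = ((x << 7) | (x >> 33)) & 0xFFFFFFFFFFULL;  /* 40-bit rotate left */
        x = (x + 0x5DEECE66DULL) & 0xFFFFFFFFFFULL;
    }
    return x;
}

/* The tag returns only 24 bits of the 40-bit cipher output (assumed here to
   be the top 24 bits; the exact truncation is an assumption of the sketch). */
static uint32_t toy_response(uint64_t key, uint64_t challenge)
{
    return (uint32_t)(toy_cipher40(key, challenge) >> 16) & 0xFFFFFF;
}

int main(void)
{
    /* Secret key we pretend not to know; kept small so the demo is fast.
       The real search covers all 2^40 keys (hours on a single FPGA). */
    const uint64_t secret = 0x00000A1B2CULL;
    const uint64_t c1 = 0x123456789AULL, c2 = 0x0FEDCBA987ULL;
    const uint32_t r1 = toy_response(secret, c1);
    const uint32_t r2 = toy_response(secret, c2);

    /* Exhaustive search over a restricted range, confirming each candidate
       against the second challenge/response pair. */
    for (uint64_t k = 0; k <= 0x00000FFFFFULL; k++) {
        if (toy_response(k, c1) == r1 && toy_response(k, c2) == r2)
            printf("candidate key: %010llx\n", (unsigned long long)k);
    }
    return 0;
}
```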
Reaction and fixes
The vulnerability exposed by the Hopkins team indicated potential security threats to millions of vehicles which were protected using DST-based immobilizer systems, and to the Exxon-Mobil Speedpass system. The ideal solution to this problem is to replace the DST transponder in these systems with a device provisioned with a more robust cryptographic circuit utilizing a longer key length. However, the cost of recalling these security systems is prohibitively high, and, as of October 2005, neither Texas Instruments, Exxon-Mobil, nor any vehicle manufacturer had announced such a recall. Currently the most effective protections against this attack rely on user vigilance, e.g., protecting transponder keys, auditing Speedpass invoices for fraud, and optionally using a metallic shield (such as aluminum foil) to prevent unauthorized scanning of DST tags. This vulnerability has also spawned the creation of the RSA Blocker Tag and RFID Blocking Wallets.
References
[1] S. Bono, M. Green, A. Stubblefield, A. Rubin, A. Juels, M. Szydlo. "Security Analysis of a Cryptographically-Enabled RFID Device". In Proceedings of the USENIX Security Symposium, August 2005.
Radio-frequency identification
48583477
https://en.wikipedia.org/wiki/Samuel%20N.%20Alexander
Samuel N. Alexander
Samuel Nathan Alexander (February 22, 1910 in Wharton, Texas – December 9, 1967 in Chevy Chase, Maryland) was an American computer pioneer who developed SEAC, one of the earliest computers.
Career
Alexander studied at the University of Oklahoma, earning a bachelor's degree in 1931, and at the Massachusetts Institute of Technology, earning a master's degree in 1933. After that, he was an engineer in the laboratory of the Simplex Wire and Cable Company, worked on electronic instrument development for the U.S. Navy, and was Senior Project Engineer at the Bendix Aviation Corporation. In 1946 he went to the National Bureau of Standards, where he headed the electronic computer laboratory until 1954, when he became head of the data processing department. From 1964 to his death in 1967 he headed the Department of Information Technology. Alexander assisted Margaret R. Fox in developing a series of college computer courses beginning in 1966.
At the National Bureau of Standards in Washington, he developed the SEAC computer (Standards Eastern Automatic Computer). At first it was named the National Bureau of Standards (NBS) Interim Computer. It was one of many computers built at that time along the lines of John von Neumann's design in universities, laboratories and government organizations, but it was intended only as an interim solution until industry could provide better computers. In this case, they were waiting for a computer by UNIVAC (Alexander was also involved in its design), whose delivery had been delayed. Alexander's chief architect was Ralph J. Slutz (1917–2005), who had previously worked with John von Neumann to build a computer at the Institute for Advanced Study.
The SEAC was the first fully functional electronic computer with internal program memory (stored program) in the U.S. It was also the first computer with semiconductor logic (first 10,500, later 16,000 germanium diodes), in addition to the 747 and later 1,600 vacuum tubes. The computer operated for 14 years and was originally intended for training purposes within government agencies, but some of the earliest assemblers and compilers were built for it. It was the fastest fully functional computer for about a year, until the UNIVAC I came out in 1951. It also served as a model for other government computers, e.g., at the National Security Agency. Alexander also initiated the prototype DYSEAC at the NBS, a successor to the SEAC, which was built for the US Signal Corps. It was delivered in 1954 and could be transported in a truck.
He was an advisor to the US government and, in 1956, to Sweden and, in 1957, to India. In 1967 he received the Harry H. Goode Memorial Award, and in 1981 he was awarded the Computer Pioneer Award from the IEEE Computer Society. He was a member of the Washington Academy of Sciences.
References
Major portions of content were translated from German Wikipedia.
American computer scientists
History of computing hardware
Massachusetts Institute of Technology alumni
University of Oklahoma alumni
1910 births
1967 deaths
Computer science educators
Bendix Corporation people
12790012
https://en.wikipedia.org/wiki/List%20of%20router%20and%20firewall%20distributions
List of router and firewall distributions
This is a list of router and firewall distributions, which are operating systems designed for use as routers and/or firewalls. See also List of router firmware projects Comparison of router software projects References Free routing software Router
47261745
https://en.wikipedia.org/wiki/1st%20Mechanized%20Infantry%20Brigade%20%28North%20Macedonia%29
1st Mechanized Infantry Brigade (North Macedonia)
The 1st Mechanized Infantry Brigade, as a higher joint-tactical unit, represents a major combat force of the North Macedonia army that provides prepared forces for the protection and support of national interests and provides support during natural disasters, epidemics and other dangers. Outside the territory of the Republic of North Macedonia, the declared units participate in peacekeeping operations and fulfill international military responsibilities.
History
The 14th MMNABr set off on 18 September, immediately after its formation, via the village of Budinarci to stop the advance of the fascist troops towards Vinica. The next task of the brigade was to liberate Kochani. After that it proceeded towards Kochani – Shtip – Sveti Nikole. Later during the war it was engaged in the operation to liberate Skopje as well as in fighting the enemy forces in the direction of Tetovo. On 12 January 1945 it took part in the fighting on the Sremski Front in the area between Shid and the river Danube, where it suffered its biggest losses. On 26 May it operated near Celje, Slovenia, and on its way back it took part in the battles on the territory of Bosnia and Herzegovina. On 15 June 1945 it entered Skopje with a parade march and ended its famous combat career. 423 members of the brigade gave their lives fighting against the fascist forces. The 1st Mechanized Infantry Brigade is the successor of this famous unit and carries on its tradition.
The 1st Mechanized Infantry Brigade is directly connected with the formation of the Army of the independent Republic of North Macedonia. The first servicemen in the ARM were from this unit. During the past years the brigade went through a process of transformation and is now defined as a small, modern, mobile and professional unit prepared to cope with all the challenges of modern times. The personnel and units of the 1st Mechanized Infantry Brigade have participated in many courses, seminars, military exercises and international peacekeeping missions at home and abroad. The knowledge gained was used in preparing the personnel to take part in the peacekeeping missions in Afghanistan, Iraq, Bosnia and Lebanon. The first big contingent from this brigade (the size of an infantry company) was deployed to the ISAF mission in Afghanistan. The same unit has successfully participated in the military exercises "Macedonian Flash" 1, 3, 4, 5 and 6 and has conducted NATO self-evaluation at level 1 and level 2 according to the NATO concept for operational capabilities. On 18 August 2007 the Brigade was awarded a "Decoration for merit" by the President of the Republic of North Macedonia.
Tasks Preparation for combat operations of the forces for conducting offensive and defensive operations; Performing tactical marches; Planning and implementation of continuous training for combat terrorist groups; Planning and implementation of continuous training of commands and units of the 1st MIB according to NATO standards and procedures, and ensuring full implementation of the system for management training and leadership development and training for instructors; Giving support to the forces of the Ministry of Interior in dealing with the threats, risks and threats to the security of North Macedonia; Providing support to the state government, local governments, citizens and non-governmental organizations and institutions in dealing with natural disasters and epidemics, tactical and technological and other disasters; Rapid deployment of forces in multinational joint operations led by NATO or in peace keeping operations led by NATO, the UN and the EU; Readiness of the Middle Infantry Battalion Group (MIBG) for NATO-led operations; Training and readiness of the company declared in the forces of EUBG; Participation and assistance in humanitarian operations in the region and beyond; Developing the ability to defend the forces from improvised explosive devices; Implementation of effective command, control and safety communication skills; Survival and protection of forces in conditions of close combat, threats from chemical, biological, radioactive and nuclear weapons. Missions Within the peace keeping mission ISAF in Afghanistan our unit has participated with two rotations (January and July). During an ongoing mission a preparations for the next mission are conducted. We have participated in ISAF with a Mechanized Infantry Company for securing the base and the staff personnel. The whole personnel of these units are under JOC operative command. The benefit ARM gained from the participation in ISAF mission is that the lessons learned will be used in the future. For a country with modest economics capacity, as North Macedonia is, to have highly trained and professional soldiers, who in the last five years, have greatly contributed to the image of the ISAF mission and the Alliance is a great honor. North Macedonia and its citizens are proud of the ARM servicemen who are not only the guards of the state sovereignty, but are ambassadors of the peace together with their NATO colleagues who have contributed to the development of the democracy in Afghanistan. Structure 1st Mechanized Infantry Battalion "Scorpions" Mission: To conduct offensive and defensive operations for defense and security of the territorial integrity and sovereignty of North Macedonia ; To cooperate with MIA in securing the borders; To support the civilian authorities dealing with the consequences of the natural disasters and catastrophes; To take part in peace support missions and humanitarian missions abroad. Tasks: To prepare for combat tasks; Tactical march; To take firing position To prepare for level 2 NATO assessment according to the concept for operational capability; Tactical movement; To conduct defense tasks; To conduct attacks; Tasks for fighting against sabotage terroristic groups; To take part in peace keeping missions. The 1st Mechanized Infantry battalion as part of the 1st Mechanized Infantry Brigade located in Shtip started operating in Kumanovo in September 2000 under the name "Homour and Strength" land is located in Kristijan Todorovki Karposh barracks. 
Major Joco Micev was appointed a commander of the battalion and his deputy was Captain 1st class Sinisha Stamenov. Later, Major Sinisha Stamenov becomes a commander and his deputy was Major Ljupcho Dimitrov. From 2005 our unit conducts tasks in Kumanovo garrison and has established cooperation with local government of town Kumanovo and many private and state subjects. Many awards and plaques are evident of this fruitful cooperation. 2nd Mechanized Infantry Battalion "Scorpions" Mission: To conduct offensive and defensive operations in order to protect the integrity and sovereignty of the territory North Macedonia; To cooperate with MIA in the area of border guard; To support civilian government when alleviating the aftermaths of natural and other disasters; To support peacekeeping operations and humanitarian operations out of R. of North Macedonia; Combat readiness level has been reached and maintained making progress in many areas, primarily training and preparing for TELA; Battalion units have completely reached the necessary combat readiness level. Tasks: Personnel physical readiness; Firearms training of individuals and units; Training according to BMP; Specialized –professional training; Personnel training for peacekeeping missions participation; English language training. 2nd Mechanized Infantry Battalion was formed in 1996 under the name of "Scorpions" as a unit of the 11th Infantry Brigade.Since it was formed up to present moment the battalion has taken part in 24 international exercises organized by NATO PfP countries. It contributed with personnel and technique and represented ARM and Republic of North Macedonia. 3rd Mechanized Infantry Battalion "Leopards" Mission: To prepare and organize the defence and protection of the territorial integrity and sovereignty and independence of the Republic of North Macedonia. To assist, within it area of operation, the MIA during operations if the security of the country is endangered. To support the state authorities, the local government and various institutions and organizations as well as the citizens when dealing with natural disasters, epidemics, technical and technological catastrophes in its area of operation. To take part in peace keeping missions and conflicts prevention NATO, UN, OSCE and EU led activities. To contribute to dealing with regional conflicts and crises as well as to protect the wider interests of the North Macedonia. Tasks: To alert, move, transport and rapidly deploy the units to places, areas and regions To plan, organize and conduct offensive and defensive operations and urban area operations To control the territory, to seal directions and to secure regions To give combat-service supports to the units in peace time, during crisis and war To conduct command and control To support MIA when dealing with threats, risks and dangers for the security of the North Macedonia To support the state authorities and local government units when dealing with natural disasters, epidemics and technical and technological catastrophes To take part in peace keeping missions and conflicts prevention missions abroad and to protect wider interests of RM To plan and conduct personnel and commands training according to NATO standards and procedures and to fully implement the system for training management, to develop leaders – training instructors. 4th Mechanized Infantry Battalion "Leopards" Mission: To prepare and organize the defence and protection of the territorial integrity and sovereignty and independence of the North Macedonia. 
To assist, within it area of operation, the MIA during operations if the security of the country is endangered. To support the state authorities, the local government and various institutions and organizations as well as the citizens when dealing with natural disasters, epidemics, technical and technological catastrophes in its area of operation. To take part in peace keeping missions and conflicts prevention NATO, UN, OSCE and EU led activities. To contribute to dealing with regional conflicts and crises as well as to protect the wider interests of North Macedonia. Tasks: To alert, move, transport and rapidly deploy the units to places, areas and regions To plan, organize and conduct offensive and defensive operations and urban area operations To control the territory, to seal directions and to secure regions To give combat-service supports to the units in peace time, during crisis and war To conduct command and control To support MIA when dealing with threats, risks and dangers for the security of the Republic of North Macedonia To support the state authorities and local government units when dealing with natural disasters, epidemics and technical and technological catastrophes To take part in peace keeping missions and conflicts prevention missions abroad and to protect wider interests of RM To plan and conduct personnel and commands training according to NATO standards and procedures and to fully implement the system for training management, to develop leaders – training instructors. Artillery Battalion Mission: To prepare its fire support forces for the units of the 1st mib, for defense and protection of the territorial integrity and sovereignty of the Republic of North Macedonia in all weather and field conditions, preparation of the declared unit habt 105 mm for operations in support of peace and prevention of conflicts and crises in operations led by NATO, UN, OSCE, EU and other international alliances and to contribute to the protection of the wider interests of the Republic of North Macedonia. Tasks: Alerting the unit and taking possession of areas and positions Tactical Movement and taking possession of fire positions and areas Coordination of the fire support and artillery fire management Conducting and maintaining command and control Implementation of combat support Performing mobilization and rapid integration of the active members Preparation of the declared unit Providing support to the government, the local government units, the citizens and the non-governmental organizations and institutions in dealing with natural disasters and epidemics – technical technological and other disasters Planning and carrying out training to achieve NATO standards. The Artillery Battalion is a unit which in its composition has a high percentage of functional technique, excellent professionals, impeccable command and vast experience that affirms this unit in all areas of training, operations and execution of assigned tasks. Armor Battalion Mission: To train and prepare the soldiers, officers and units for defense and protection of the territorial integrity, independence and sovereignty of the Republic of North Macedonia, from all possible threats, as well as in dealing with natural disasters or other accidents. Applying the standards and following an order issued by the 1st Mechanized Infantry Battalion to perform marching and carry out mobile, combined, defense and attack operations in the area of responsibility. Tasks: Planned, organized and on-time operational development and maneuver. 
Occupation of the planned locations and regions. Organization and carrying out of tactical exercise on the level of a platoon, a company and a battalion. Preparation and training of the forces to provide help in case of natural disasters or other accidents. Securing facilities of strategic importance. With the new formation of the Army of the Republic of North Macedonia on 19 December 2011 the Armor Battalion T-55 was reformed and the T-72 "A" Armour Battalion was formed under the command of GS of ARM. The new battalion consisted of 31 T-72 tanks and 11-OT BMP engaging 147 officers and professional soldiers. Engineer Battalion Mission: Prepared to give engineer support to the commands and units of ARM. Prepared to cooperate with the units of the MoI in crisis situations, gives support to the civil authorities in case of disasters and other crisis, as well as provides support to help authorities deal with consequences of natural disasters and armed conflicts. Tasks: Plans, organises and conducts engineer reconnaissance, Performs tactical march, Maintains the facilities for protection and carries out activities for the needs of the commands and units of ARM, Prepares and eliminates smaller explosive obstacles and prepares and deals with artificial obstacles, Secures the movement and maneuver of the Combat Service Support units (CSS), Prepares and eliminates smaller explosive obstacles and eliminates artificial obstacles in cooperation with MOI units in case of crisis, Conducts humanitarian and logistic operations supporting the civil authorities in case of danger and helps them deal with the consequences of natural disasters and armed conflicts, Assists civil authorities in reconstructing infrastructure damaged during natural disasters and armed conflicts. NBC Protection Company Mission: To provide a high level of training and combat readiness in peacetime and alertness for organization and execution of tasks for NBC protection support for the Forces and their need for rapid reaction in all conditions. To prepare and execute mobilization of the reserve forces and reach full readiness for organization and execution of tasks for NBC protection support for the needs of the Forces under the command of the 1st Mechanized Infantry Brigade in performing combat tasks in assigned areas. Tasks: Providing a high level of training of officers, soldiers and the unit as a whole to perform purpose oriented tasks in peace and in war;Training and exercising overall unit composition in the procedures of signal alarm;Training and exercising overall unit composition in the procedures of moving in a region undertaking all combat security measures;Execution of tactical actions and procedures by the NBC reconnaissance units while organizing and conducting NBC protection control;Execution of tactical actions and procedures by the NBC decontamination units while organizing and conducting NC decontamination of personnel, weapons, technical assets, motor vehicles, land and buildings. Signal Company Mission: The Signal Company establishes all connections scheduled for the needs of the 1st Mechanized Infantry Brigade under the Signal Plan "BRAN". History: The signal company is part of the 1st Mechanized Infantry Brigade and is located in the barracks "N.H. - Petrovec". Its beginning was in September 2000 in the barracks "Jane Sandanski" - Shtip where it becomes a part of 1st Infantry Brigade (1st IB) under the motto "Honor and Strength". 
First commander of the brigade was Captain Zoranco Trenev, who after several years with the transformation in 1st infantry brigade into 1st Mechanized Infantry Brigade (1st MIB) in March 2006 was replaced by the next company commander Captain Zoran Aleksov. Tasks: Alert and unit assembly. Tactical march. Establishment and maintenance of all scheduled connections. Force protection. Equipment See also Army of the Republic of North Macedonia North Macedonia Air Force Special Operations Regiment The Rangers Battalion Ceremonial Guard Battalion Military Reserve Force (North Macedonia) Military Service for Security and Intelligence North Macedonia Military units and formations of North Macedonia Military units and formations established in 2007
30658899
https://en.wikipedia.org/wiki/Opaque%20binary%20blob
Opaque binary blob
Opaque binary blob (OBB) is a term used in network engineering and computer science to refer to a sizeable piece of data that looks like binary garbage from the outside to entities that do not know what the blob denotes or carries, but that makes sense to entities that have access permission and access functions for it. It is also a pejorative term for compiled code distributed without the source code being made available (see: binary blob).
Use in networks
At least one network protocol, the Advanced Message Queuing Protocol, uses the terminology of OBBs.
Use in the computer field
Android operating systems, starting with version 2.3 (code named Gingerbread), use OBBs to refer to multiple files bundled in one blob, possibly even an entire file system in one file. These OBBs are available through the Storage Manager interface in Android. This is done as a means of abstraction, so multiple applications running on the operating system can more easily access the OBB. For example, if there were a map database (a map OBB), multiple applications running on Android 2.3 could access the same maps. This eliminates the need to maintain different map data for different applications with similar functions and features. Many HD games on the Android platform use their own OBB files to allow storage of large files on the device's external SD card.
Tuxedo middleware also uses OBBs to refer to C and C++ arrays, or typed data buffers. This is probably the oldest reference to OBBs used in a computer system.
When a vendor distributes software in an object binary form without any mention of its inner workings or code, it is called a "proprietary OBB", a "proprietary blob" or just a binary blob. This practice is meant to protect the company's intellectual property and probably to keep a competitive edge (see: proprietary software). It also prevents hackers from improving the system or subverting it. As an example, Nvidia Tegra has such a "proprietary OBB".
See also
Binary blob
References
Operating system technology
8789773
https://en.wikipedia.org/wiki/Joe%20Barr
Joe Barr
Joe Barr (October 19, 1944 – July 11, 2008) was an American technology journalist, an editor and writer for the SourceForge sites Linux.com and IT Manager's Journal. A former programmer, Barr had worked on everything from microcomputers like the TRS-80 Model I to IBM mainframes with acres of DASD, writing code in more than a dozen languages, including RPG II, 370 ALC, COBOL, BASIC, TIBOL, MASM, and C, much of that experience coming in his 13 years with Ross Perot's EDS. As a writer, Barr first gained notoriety and, according to Ziff-Davis' Spencer F. Katt, a cult-like following for his zine, The Dweebspeak Primer. Barr began writing about personal computing in 1994, and primarily about Linux and open source in 1998, when he began writing for IDG's LinuxWorld.com. The MPlayer project made him even better known by dedicating a derogatory page to him in their documentation after he wrote a piece entitled MPlayer: The project from hell. In 2001, Barr was awarded a Silver Medal by the American Society of Business Publication Editors in the category of Original Web Commentary for his LinuxWorld.com article entitled Dumbing Down Linux. In his last years he worked at OSTG, writing articles, columns, and commentary for NewsForge and Linux.com. Barr's first book, CLI for Noobies, was published in 2007 by the SourceForge Community Press. He also was an enthusiastic amateur radio operator using his callsign W5CT. Barr died on July 11, 2008. References External links The Dweebspeak Primer CLI for Noobies--A primer on the Linux command line (web page about the book) 1944 births 2008 deaths Geeknet American technology journalists American male journalists 20th-century American journalists Amateur radio people
423418
https://en.wikipedia.org/wiki/Meet-in-the-middle%20attack
Meet-in-the-middle attack
The meet-in-the-middle attack (MITM), a known plaintext attack, is a generic space–time tradeoff cryptographic attack against encryption schemes that rely on performing multiple encryption operations in sequence. The MITM attack is the primary reason why Double DES is not used and why a Triple DES key (168-bit) can be brute-forced by an attacker with 2^56 space and 2^112 operations.
Description
When trying to improve the security of a block cipher, a tempting idea is to encrypt the data several times using multiple keys. One might think this doubles or even n-tuples the security of the multiple-encryption scheme, depending on the number of times the data is encrypted, because an exhaustive search on all possible combinations of keys (simple brute force) would take 2^(n·k) attempts if the data is encrypted with k-bit keys n times.
The MITM is a generic attack which weakens the security benefits of using multiple encryptions by storing intermediate values from the encryptions or decryptions and using those to improve the time required to brute force the decryption keys. This makes a meet-in-the-middle attack (MITM) a generic space–time tradeoff cryptographic attack.
The MITM attack attempts to find the keys by using both the range (ciphertext) and domain (plaintext) of the composition of several functions (or block ciphers) such that the forward mapping through the first functions is the same as the backward mapping (inverse image) through the last functions, quite literally meeting in the middle of the composed function. For example, although Double DES encrypts the data with two different 56-bit keys, Double DES can be broken with 2^57 encryption and decryption operations.
The multidimensional MITM (MD-MITM) uses a combination of several simultaneous MITM attacks as described above, where the meeting happens in multiple positions in the composed function.
History
Diffie and Hellman first proposed the meet-in-the-middle attack on a hypothetical expansion of a block cipher in 1977. Their attack used a space–time tradeoff to break the double-encryption scheme in only twice the time needed to break the single-encryption scheme.
In 2011, Bo Zhu and Guang Gong investigated the multidimensional meet-in-the-middle attack and presented new attacks on the block ciphers GOST, KTANTAN and Hummingbird-2.
Meet-in-the-middle (1D-MITM)
Assume someone wants to attack an encryption scheme with the following characteristics for a given plaintext P and ciphertext C:
C = ENC_k2(ENC_k1(P))
P = DEC_k1(DEC_k2(C))
where ENC is the encryption function, DEC the decryption function defined as ENC^-1 (inverse mapping) and k1 and k2 are two keys.
The naive approach to brute-forcing this encryption scheme is to decrypt the ciphertext with every possible k2, and decrypt each of the intermediate outputs with every possible k1, for a total of 2^|k1| × 2^|k2| (or 2^(|k1|+|k2|)) operations.
The meet-in-the-middle attack uses a more efficient approach. By decrypting C with k2, one obtains the following equivalence:
ENC_k1(P) = DEC_k2(C)
The attacker can compute ENC_k1(P) for all values of k1 and DEC_k2(C) for all possible values of k2, for a total of 2^|k1| + 2^|k2| (or 2^(|k1|+1), if k1 and k2 have the same size) operations. If the result from any of the ENC_k1(P) operations matches a result from the DEC_k2(C) operations, the pair of k1 and k2 is possibly the correct key. This potentially-correct key is called a candidate key. The attacker can determine which candidate key is correct by testing it with a second test-set of plaintext and ciphertext.
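The following self-contained sketch illustrates the 1D-MITM procedure just described on a deliberately tiny example: a toy, invertible 16-bit "cipher" with 16-bit keys (invented for the demonstration, not a real cipher) is used for double encryption, a lookup table of all forward encryptions of P is built, and every backward decryption of C is matched against it, with candidates confirmed on a second plaintext/ciphertext pair.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy invertible 16-bit "cipher" with a 16-bit key; NOT a real cipher,
   only here so the whole key space fits in a demo-sized table. */
static uint16_t enc(uint16_t k, uint16_t x)
{
    for (int r = 0; r < 4; r++) {
        x ^= k;
        x = (uint16_t)((x << 5) | (x >> 11));   /* rotate left 5  */
        x = (uint16_t)(x + 0x9E37);
    }
    return x;
}

static uint16_t dec(uint16_t k, uint16_t x)
{
    for (int r = 0; r < 4; r++) {
        x = (uint16_t)(x - 0x9E37);
        x = (uint16_t)((x >> 5) | (x << 11));   /* rotate right 5 */
        x ^= k;
    }
    return x;
}

int main(void)
{
    /* Unknown keys used for double encryption: C = enc(k2, enc(k1, P)). */
    const uint16_t k1_true = 0xBEEF, k2_true = 0x1234;
    const uint16_t P1 = 0x0001, P2 = 0xCAFE;          /* two known plaintexts */
    const uint16_t C1 = enc(k2_true, enc(k1_true, P1));
    const uint16_t C2 = enc(k2_true, enc(k1_true, P2));

    /* Forward table: for every k1, store k1 under the intermediate value
       enc(k1, P1).  Collisions are kept in simple chained lists. */
    static int32_t head[65536];
    static int32_t next[65536];
    for (int i = 0; i < 65536; i++) head[i] = -1;
    for (uint32_t k1 = 0; k1 < 65536; k1++) {
        uint16_t mid = enc((uint16_t)k1, P1);
        next[k1] = head[mid];
        head[mid] = (int32_t)k1;
    }

    /* Backward pass: for every k2, look up dec(k2, C1) in the table and
       confirm each candidate pair on the second plaintext/ciphertext pair. */
    for (uint32_t k2 = 0; k2 < 65536; k2++) {
        uint16_t mid = dec((uint16_t)k2, C1);
        for (int32_t k1 = head[mid]; k1 != -1; k1 = next[k1]) {
            if (enc((uint16_t)k2, enc((uint16_t)k1, P2)) == C2)
                printf("recovered keys: k1=%04x k2=%04x\n",
                       (unsigned)k1, (unsigned)k2);
        }
    }
    return 0;
}
```

With real ciphers the structure is the same, but the forward table holds 2^|k1| entries and is typically sorted, hashed, or distributed across machines rather than direct-indexed as in this toy.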
The MITM attack is one of the reasons why the Data Encryption Standard (DES) was replaced with Triple DES and not Double DES. An attacker can use a MITM attack to brute-force Double DES with 2^57 operations and 2^56 space, making it only a small improvement over DES. Triple DES uses a "triple length" (168-bit) key and is also vulnerable to a meet-in-the-middle attack in 2^56 space and 2^112 operations, but is considered secure due to the size of its keyspace.
MITM algorithm
Compute the following:
ENC_k1(P) for every possible value of k1, and save each result together with the corresponding k1 in a set A.
DEC_k2(C) for every possible value of k2, and compare each new result with the set A.
When a match is found, keep the pair (k1, k2) as a candidate key-pair in a table T. Test pairs in T on a new pair of (P, C) to confirm validity. If the key-pair does not work on this new pair, do MITM again on a new pair of (P, C).
MITM complexity
If the keysize is k, this attack uses only 2^(k+1) encryptions (and decryptions) and O(2^k) memory to store the results of the forward computations in a lookup table, in contrast to the naive attack, which needs 2^(2·k) encryptions but O(1) space.
Multidimensional MITM (MD-MITM)
While 1D-MITM can be efficient, a more sophisticated attack has been developed: the multidimensional meet-in-the-middle attack, also abbreviated MD-MITM. This is preferred when the data has been encrypted using more than 2 encryptions with different keys. Instead of meeting in the middle (one place in the sequence), the MD-MITM attack attempts to reach several specific intermediate states using the forward and backward computations at several positions in the cipher.
Assume that the attack has to be mounted on a block cipher, where the encryption and decryption are defined as before, that is, a plaintext P is encrypted multiple times using a repetition of the same block cipher:
C = ENC_kn(... ENC_k2(ENC_k1(P)) ...)
The MD-MITM has been used for cryptanalysis of, among many, the GOST block cipher, where it has been shown that a 3D-MITM has significantly reduced the time complexity for an attack on it.
MD-MITM algorithm
Compute the following: and save each together with corresponding in a set . and save each together with corresponding in a set . For each possible guess on the intermediate state compute the following: and for each match between this and the set , save and in a new set . and save each together with corresponding in a set . For each possible guess on an intermediate state compute the following: and for each match between this and the set , check also whether it matches with and then save the combination of sub-keys together in a new set . For each possible guess on an intermediate state compute the following: Use the found combination of sub-keys on another pair of plaintext/ciphertext to verify the correctness of the key.
Note the nested element in the algorithm. The guess on every possible value of sj is done for each guess on the previous sj-1. This makes up an element of exponential complexity in the overall time complexity of this MD-MITM attack.
MD-MITM complexity
Time complexity of this attack without brute force, is ⋅⋅ Regarding the memory complexity, it is easy to see that are much smaller than the first built table of candidate values: as i increases, the candidate values contained in must satisfy more conditions, and thereby fewer candidates will pass on to the end destination .
An upper bound of the memory complexity of MD-MITM is then where denotes the length of the whole key (combined).
The data complexity depends on the probability that a wrong key may pass (obtain a false positive), which is , where is the intermediate state in the first MITM phase.
The size of the intermediate state and the block size is often the same! Considering also how many keys that are left for testing after the first MITM-phase, it is . Therefore, after the first MITM phase, there are , where is the block size. For each time the final candidate value of the keys are tested on a new plaintext/ciphertext-pair, the number of keys that will pass will be multiplied by the probability that a key may pass which is . The part of brute force testing (testing the candidate key on new -pairs, have time complexity , clearly for increasing multiples of b in the exponent, number tends to zero. The conclusion on data complexity is by similar reasoning restricted by that around -pairs. Below is a specific example of how a 2D-MITM is mounted: A general example of 2D-MITM This is a general description of how 2D-MITM is mounted on a block cipher encryption. In two-dimensional MITM (2D-MITM) the method is to reach 2 intermediate states inside the multiple encryption of the plaintext. See below figure: 2D-MITM algorithm Compute the following: and save each together with corresponding in a set A and save each together with corresponding in a set B. For each possible guess on an intermediate state s between and compute the following: and for each match between this and the set A, save and in a new set T. and for each match between this and the set B, check also whether it matches with T for if this is the case then: Use the found combination of sub-keys on another pair of plaintext/ciphertext to verify the correctness of the key. 2D-MITM complexity Time complexity of this attack without brute force, is where |⋅| denotes the length. Main memory consumption is restricted by the construction of the sets A and B where T is much smaller than the others. For data complexity see subsection on complexity for MD-MITM. See also Birthday attack Wireless security Cryptography 3-subset meet-in-the-middle attack Partial-matching meet-in-the-middle attack References Cryptographic attacks
31681546
https://en.wikipedia.org/wiki/Illinois%20Structural%20Health%20Monitoring%20Project
Illinois Structural Health Monitoring Project
The Illinois Structural Health Monitoring Project (ISHMP) is a structural health monitoring project devoted to researching and developing hardware and software systems to be used for distributed real-time monitoring of civil infrastructure. The project focuses on monitoring bridges, and aims to reduce the cost and installation effort of structural health monitoring equipment. It was founded in 2002 by Professor Bill F. Spencer and Professor Gul Agha of the University of Illinois at Urbana–Champaign. The project aims to minimize the cost of monitoring structures through developing low cost wireless networks of sensor boards, each equipped with an embedded computer. The Illinois Structural Health Monitoring Project also focuses on creating a software toolsuite that can simplify the development of other structural health monitoring devices. Currently, ISHMP has a wireless sensor network set up on the Jindo Bridge in South Korea. Each sensor board in the network uses real-time data to collect a multitude of different data, and then the microcomputer processes the data and determines the current state of the bridge. Overview The Illinois Structural Health Monitoring Project was founded in 2002 when Professor Bill F. Spencer, director of the Smart Structures Technology Laboratory, and Professor Gul Agha, director of the Open Systems Laboratory, began a collaborative effort between the two laboratories at the University of Illinois at Urbana–Champaign. The project aims to develop reliable wireless hardware and software for distributed real-time structural health monitoring of various infrastructure using multiple sensors on a single structure. Each sensor's data corresponding to a specific region on the structure is used to assess the overall health of the structure. The Illinois Structural Health Monitoring Project's underlying goal is to minimize the cost of infrastructure inspections though using inexpensive and reliable wireless sensor arrays, significantly reducing the need for physical human inspection. Its main focus has been to monitor bridges using sensor networks. While other wired bridge monitoring systems require excessive amounts of cables and man hours to install, installing a wireless sensor network would prove much less expensive. The Illinois Structural Health Monitoring Project receives support from the National Science Foundation, Intel Corporation, and the Vodafone-U.S. Foundation Graduate Fellowship. Technology Instead of using a single centralized point for collecting data from every sensor in a network, the ISHMP uses sensor platforms with embedded computers, such as Intel's Imote2. The Illinois Structural Health Monitoring Project has designed, developed, and tested various sensors that can stack onto these embedded computers and sense data such as vibrations, humidity levels, and wind speeds, to name a few. Using various power harvesting devices, there is no need for wiring the sensors to an electrical network. Initial tests for the sensor systems were run on a scale model of a truss bridge. The Illinois Structural Health Monitoring Project has also developed an open source toolsuite that contains a software library of customizable services for structural health monitoring platforms. This simplifies the development of structural health monitoring applications for other sensor systems. In 2008, a dense array of sensors was deployed on the Jindo Bridge in South Korea and was the first dense deployment of a wireless sensor network on a cable-stayed bridge. 
This new bridge monitoring system is fully autonomous, and sends out an e-mail when a problem arises. The system wakes up for a few minutes at a time to collect, analyze, and send data, in order to conserve battery power. Developments Software The ISHMP has developed a software toolsuite with open source services needed for structural health monitoring applications on a network of Intel's Imote2 smart sensors. These services include application services, application tools, and utilities. The application services allow for the implementation of structural health monitoring algorithms on the Imote2 and include tests for both the PC and Imote2. The tools allow for data collection from the sensors on the network, perform damage detection on the structure, and test for radio communication quality. Software developed ISHMP Services Toolsuite, version 3.0.0 ISHMP-GUI, version 1.1.0 Hardware The ISHMP has also developed a sensor board that is produced by MEMSIC. It is designed to work with the Imote2 smart sensor platform, and is optimized for structural health monitoring applications. The sensor board provides the information output required to comprehend the data collected by the individual sensors. It includes a three axis accelerometer, a light sensor, a temperature sensor, and humidity sensors. It can also accommodate one additional external analog input signal. Hardware developed ISM400 See also Structural health monitoring References External links Illinois Structural Health Monitoring Project Smart Structures Technology Laboratory Open Systems Laboratory University of Illinois at Urbana–Champaign Structural engineering
13574519
https://en.wikipedia.org/wiki/Guitar%20Rig
Guitar Rig
Guitar Rig is an amp and effects modeling software package developed by Native Instruments. The software can function either as a standalone application, or as a plug-in for other software. It was originally released in 2004. Overview The Guitar Rig environment is a modular system, providing capabilities for multiple amplifiers, effects pedals and rack mounted hardware. Primarily designed for electric guitar and bass, the software uses amplifier modeling to allow real-time digital signal processing in both standalone and DAW environments via plug-in (VST/DXi/RTAS/AU). The software simulates a number of devices such as preamplifiers, cabinets and microphones under pseudonyms (such as renaming the Shure SM57 as the "Dynamic 57"). The system allows customisation of module parameters – either through manipulation of the graphical interface, use of a MIDI controller or employment of the RigKontrol foot control pedal. Settings can be saved as presets and exported and shared with other users. Version 3 included a redesigned interface to allow improved preset categorisation and customised interfaces for "live" use. Rig Kontrol The Rig Kontrol is a foot-operated USB and MIDI controller. It contains an audio interface and DI box, allowing integration with live sound environments. The device can operate Guitar Rig using eight switches and an expression pedal. Earlier versions of the device contained only six switches and an expression pedal, and did not support interfacing functionality. History Versions 1-2 (2004-2006) Guitar Rig was first released on both macOS & Windows in September 2004. At this time, it was a hardware/software hybrid system, with the Rig Kontrol hardware preamp and foot controller feeding into the software. The software featured a drag-and-drop interface and a selection of 3 tube amplifier emulations (some with multiple preamp variations). Version 2 of the software, released in February 2006, added a number of additional amp options (including the software's first bass amps), added a new looping feature, and expanded the Rig Kontrol hardware. Versions 3-5 (2007-2019) Guitar Rig 3, released in December 2007, included the option to purchase the software independently of the hardware. The updated software also featured a streamlined view dedicated to live performances. Before version 3 was released, the graphical user interface simulated amplifier logos and design construction; however, version 3, this was replaced with a generic name. Released in October 2009, Guitar Rig version 4 introduced improved cabinet and microphone modeling software, alongside the "Control Room" feature, which allowed for greater customisation of preamp, cabinet and microphone combinations. Announced in August 2011 and released in September 2011, version 5 of the software included an expanded version of the Control Room feature, and introduced the ability to perform sidechaining within the software. Version 6 (2020-present) Version 6 of Guitar Rig, announced in September 2020 and released on October 1, 2020, was the first major update for the software since 2011. This new version included a redesigned user interface, "Intelligent Circuit Modelling" (an amp reproduction system based on machine learning), 3 new amp options (including 1 new bass amp), and additional effects. References External links Native Instruments Guitar-related software
32629
https://en.wikipedia.org/wiki/Video%20game%20console
Video game console
A video game console is an electronic device that outputs a video signal or image to display a video game that can be played with a game controller. These may be home consoles which are generally placed in a permanent location connected to a television or other display device and controlled with a separate game controller, or handheld consoles that include their own display unit and controller functions built into the unit and can be played anywhere. Hybrid consoles combine elements of both home and handheld consoles. Video game consoles are a specialized form of a home computer geared towards video game playing, designed with affordability and accessibility to the general public in mind, but lacking in raw computing power and customization. Simplicity is achieved in part through the use of game cartridges or other simplified ways of distribution, easing the effort of launching a game. However, this leads to ubiquitous proprietary formats that creates competition for market share. More recent consoles have shown further confluence with home computers, making it easy for developers to release games on multiple platforms. Further, modern consoles can serve as replacements for media players with capabilities to playback films and music from optical media or streaming media services. Video game consoles are usually sold on a 5-7 year cycle called a generation, with consoles made with similar technical capabilities or made around the same time period grouped into the generations. The industry has developed a razorblade model for selling consoles at low profit or at a loss while making revenue on the licensing fees for each game sold, with planned obsolescence to draw consumers into the next console generation. While numerous manufacturers have come and gone in the history of the console market, there have always been two or three dominant leaders in the market, with the current market led by Sony (with their PlayStation brand), Microsoft (with their Xbox brand), and Nintendo (currently producing the Switch console and its lightweight derivative). History The first video game consoles emerged in the early 1970s. Ralph H. Baer devised the concept of playing simple spot-based games on a television screen in 1966, which later became the basis of the Magnavox Odyssey in 1972. Inspired by the table tennis game on the Odyssey, Nolan Bushnell, Ted Dabney, and Allan Alcorn at Atari, Inc. developed the first successful arcade game, Pong, and looked to develop that into a home version, which was released in 1975. The first consoles were dedicated to only a set group of games built into the hardware. Programmable consoles using swappable ROM cartridges were introduced with the Fairchild Channel F in 1976 though popularized with the Atari 2600 released in 1977. Handheld consoles emerged from technology improvements in handheld electronic games as these shifted from mechanical to electronic/digital logic, and away from light-emitting diode (LED) indicators to liquid-crystal displays (LCD) that resembled video screens more closely, with the Microvision in 1979 and Game & Watch in 1980 being early examples, and fully realized by the Game Boy in 1989. 
Since the 1970s, both home and handheld consoles have become more advanced following global changes in technology, including improved electronic and computer chip manufacturing to increase computational power at lower costs and size, the introduction of 3D graphics and hardware-based graphic processors for real-time rendering, digital communications such as the Internet, wireless networking and Bluetooth, and larger and denser media formats as well as digital distribution. Following the same type of Moore's law progression, home consoles were grouped into generations, each lasting approximately five years, with consoles within each sharing similar technology specifications and features such as processor word size. While there is no standard definition or breakdown of home consoles by generation, the definition of these generations used by Wikipedia, including representative consoles, is shown below. Types There are primarily three types of video game consoles: Home consoles, handhelds, and hybrid consoles. Home video game consoles are devices that are generally meant to be connected to a television or other type of monitor, and with power supplied through an outlet, thus requiring the unit to be used in fixed locations, typically at home in one's living room. Separate game controllers, connected through wired or wireless connections, are used to provide input to the game. Early examples include the Atari 2600, the Nintendo Entertainment System, and the Sega Genesis, while newer examples include the Wii U, the PlayStation 4, and the Xbox One. Specific types of home consoles include: Microconsoles, home consoles which lack the computing power of other home consoles released in the same period, and thus are generally less expensive. A common form of microconsole is based on Android or iOS mobile software, allowing the consoles to access the respective library of games for those platforms, as well as features such as cloud gaming. These consoles also typically include support for other apps available for the underlying operating system, including those that support video streaming services like Netflix and Hulu, making microconsoles also compete in the same space as "over-the-top" media providers that aimed to serve content directly to the living room television. Such consoles include the Ouya, the Nvidia Shield and Apple TV. Plug and play consoles, specialized versions of microconsoles that come with a fixed selection of games on the system and do not give the consumer any ability to add more games. These are considered dedicated consoles for this reason, though tech-savvy consumers often have found ways to hack the console to install additional functionality onto it, voiding the manufacturers' warranty. The units usually come with the console unit, one or more controllers, and the required components for power and video hookup. Many of the recent releases have been for distributing a number of retro games for a specific console platform. Examples of these include the Atari Flashback series, the NES Classic Edition, and the Sega Genesis Mini. Handheld TV games, specialized plug-and-play consoles where the console unit itself serves as its own controller so that the consumer simply connects the device to their television and to a power source, or in some cases, are battery-powered. According to video game historian Frank Cifaldi, these systems gained popularity around 2003 as they were cheap to manufacture and were relatively inexpensive at each by manufacturers such as Jakks Pacific.
However, they also led to a surge of models that used counterfeit Nintendo chips manufactured in China, creating a flood of clones that could not easily be tracked. Handheld video game consoles are devices that typically build a screen and game controller features into the case, and contain a rechargeable battery or battery compartment. This allows the unit to be carried around and played anywhere. Examples of such include the Game Boy, the PlayStation Portable, and the Nintendo 3DS. Hybrid video game consoles are devices which can be used either as a handheld or as a home console, with either a wired connection or docking station that connects the console unit to a television screen and fixed power source, and the potential to use a separate controller. While prior handhelds like the Sega Nomad and PlayStation Portable, or home consoles such as the Wii U, have had these features, some consider the Nintendo Switch to be the first true hybrid console. Most consoles are considered programmable consoles and have means for the player to switch between different games: this most often can be through a physical game cartridge or game card or through optical media, or with the onset of digital distribution, via internal or external digital storage device with software downloaded via the Internet through a dedicated storefront supported by the manufacturer of the console. Some consoles are considered dedicated consoles, in which games available for the console are "baked" onto the hardware, either by being programmed via the circuitry or set in the read-only flash memory of the console, and cannot be added to or changed directly by the user. The user can typically switch between games on dedicated consoles using hardware switches on the console, or through in-game menus. Dedicated consoles were common in the first generation of home consoles, such as the Magnavox Odyssey and the home console version of Pong, and more recently have been used for retro-consoles such as the NES Classic Edition and Sega Genesis Mini. Components Console unit Early console hardware was designed as customized printed circuit boards (PCBs), selecting existing integrated circuit chips that performed known functions, or programmable chips like erasable programmable read-only memory (EPROM) chips that could perform certain functions. Persistent computer memory was expensive, so dedicated consoles were generally limited to the use of processor registers for storage of the state of a game, thus limiting the complexities of such titles. Pong in both its arcade and home format had a handful of logic and calculation chips that used the current input of the players' paddles and registers storing the ball's position to update the game's state and send it to the display device. Even with more advanced integrated circuits (ICs) of the time, designers were limited to what could be done through the electrical process rather than through programming as normally associated with video game development. Improvements in console hardware followed with improvements in microprocessor technology and semiconductor device fabrication. Manufacturing processes have been able to reduce the feature size on chips (typically measured in nanometers), allowing more transistors and other components to fit on a chip, and at the same time increasing the circuit speeds and the potential frequency the chip can run at, as well as reducing thermal dissipation.
Chips were able to be made on larger dies, further increasing the number of features and effective processing power. Random-access memory became more practical with the higher density of transistors per chip, but to address the correct blocks of memory, processors needed to be updated to use larger word sizes and allow for larger bandwidth in chip communications. All these improvements did increase the cost of manufacturing but at a rate far less than the gains in overall processing power, which helped to make home computers and consoles inexpensive for the consumer, all related to Moore's law of technological improvements. For consoles of the 1980s and 1990s, these improvements were evident in marketing during the "bit wars" of the late 1980s and 1990s, where console manufacturers focused on their console's processor word size as a selling point. Consoles since the 2000s are more similar to personal computers, building in memory, storage features, and networking capabilities to avoid the limitations of the past. The confluence with personal computers eased software development for both computer and console games, allowing developers to target both platforms. However, consoles differ from computers as most of the hardware components are preselected and customized between the console manufacturer and hardware component provider to assure a consistent performance target for developers. Whereas personal computer motherboards are designed with the need to allow consumers to add their desired selection of hardware components, the fixed set of hardware for consoles enables console manufacturers to optimize the size and design of the motherboard and hardware, often integrating key hardware components into the motherboard circuitry itself. Often, multiple components such as the central processing unit and graphics processing unit can be combined into a single chip, otherwise known as a system on a chip (SoC), which is a further reduction in size and cost. In addition, consoles tend to focus on components that give the unit high game performance such as the CPU and GPU, and as a tradeoff to keep their prices in expected ranges, use less memory and storage space compared to typical personal computers. In comparison to the early years of the industry, where most consoles were made directly by the company selling the console, many consoles of today are generally constructed through a value chain that includes component suppliers, such as AMD and NVidia for CPU and GPU functions, and contract manufacturers including electronics manufacturing services, factories which assemble those components into the final consoles such as Foxconn and Flextronics. Completed consoles are then usually tested, distributed, and repaired by the company itself. Microsoft and Nintendo both use this approach to their consoles, while Sony maintains all production in-house with the exception of their component suppliers. Some of the common elements that can be found within console hardware include: Motherboard The primary PCB that all of the main chips, including the CPU, are mounted on. Daughterboard A secondary PCB that connects to the motherboard that would be used for additional functions. These may include components that can be easily replaced later without having to replace the full motherboard. Central processing unit (CPU) The main processing chip on the console that performs most of the computational workload.
The console's CPU is generally defined by its word size (such as 8-bit or 64-bit), and its clock speed or frequency in hertz. For some CPUs, the clock speed can be variable in response to software needs. In general, larger word sizes and faster clock speeds indicate better performance, but other factors will impact the actual speed. Another distinguishing feature for a console's CPU is the instruction set architecture. The instruction set defines low-level machine code to be sent to the CPU to achieve specific results on the chip. Differences in the instruction set architecture of the CPUs of consoles in a given generation can make for difficulty in software portability. This has been used by manufacturers to keep software titles exclusive to their platform as one means to compete with others. Consoles prior to the sixth generation typically used chips that the hardware and software developers were most familiar with, but as personal computers stabilized on the x86 architecture, console manufacturers followed suit to help easily port games between computer and console. Newer CPUs may also feature multiple processing cores, which are also identified in their specification. Multi-core CPUs allow for multithreading and parallel computing in modern games, such as one thread for managing the game's rendering engine, one for the game's physics engine, and another for evaluating the player's input. Graphical processing unit (GPU) The processing unit that performs rendering of data from the CPU to the video output of the console. In the earlier console generations, this was generally limited to simple graphic processing routines, such as bitmapped graphics and manipulation of sprites, all otherwise involving integer mathematics while minimizing the amount of memory required to complete these routines, as memory was expensive. For example, the Atari 2600 used its own Television Interface Adaptor that handled video and audio, while the Nintendo Entertainment System used the Picture Processing Unit. For consoles, these GPUs were also designed to send the signal in the proper analog format to a cathode ray television, NTSC (used in Japan and North America) or PAL (mostly used in Europe). These two formats differed by their refresh rates, 60 versus 50 Hertz, and consoles and games that were manufactured for PAL markets used the CPU and GPU at lower frequencies. The introduction of real-time polygonal 3D graphics rendering in the early 1990s—not just an innovation in video games for consoles but in arcade and personal computer games—led to the development of GPUs that were capable of performing the floating-point calculations needed for real-time 3D rendering. In contrast to the CPU, modern GPUs for consoles and computers, principally made by AMD and NVidia, are highly parallel computing devices with a number of compute units/streaming multiprocessors (depending on vendor, respectively) within a single chip. Each compute unit/streaming multiprocessor contains a scheduler, a number of subprocessing units, memory caches and buffers, and dispatching and collecting units which also may be highly parallel in nature. Modern console GPUs can be run at a different frequency from the CPU, even at variable frequencies to increase processing power at the cost of higher energy draw. The performance of GPUs in consoles can be estimated through floating-point operations per second (FLOPS) and more commonly in teraflops (TFLOPS = 10^12 FLOPS).
However, particularly for consoles, this is considered a rough number as several other factors such as the CPU, memory bandwidth, and console architecture can impact the GPU's true performance. Coprocessors Additional processors used to handle other dedicated functions on the console. Many early consoles featured an audio coprocessor, for example. Northbridge The processor unit that, outside of the CPU and GPU, typically manages the fastest processing elements on the computer. Typically this involves communication of data between the CPU, the GPU, and the on-board RAM, and subsequently sending and receiving information with the southbridge. Southbridge The counterpart of the northbridge, the southbridge is the processing unit that handles slower processing components of the console, typically those of input/output (I/O) with some internal storage and other connected devices like controllers. BIOS The console's BIOS (Basic Input/Output System) is the fundamental instruction set baked into a firmware chip on the console circuit board that the console uses when it is first turned on to direct operations. In older consoles, prior to the introduction of onboard storage, the BIOS effectively served as the console's operating system, while in modern consoles, the BIOS is used to direct loading of the console's operating system off internal memory. Random-access memory (RAM) Memory storage that is designed for fast reading and writing, often used in consoles to store large amounts of data about a game while it is being played to avoid reading from the slower game media. RAM typically does not retain its contents after the console is powered off. Besides the amount of RAM available, a key measurement of performance for consoles is the RAM's bandwidth, how fast, in bytes per second, the RAM can be written to and read from. This is data that must be transferred to and from the CPU and GPU quickly as needed without requiring these chips to carry large memory caches themselves. Internal storage Newer consoles have included internal storage devices, such as flash memory, hard disk drives (HDD) and solid-state drives (SSD), to save data persistently. Early applications of internal storage were for saving game states; more recently it is used to store the console's operating system, game patches and updates, games downloaded through the Internet, additional content for those games, and additional media such as purchased movies and music. Most consoles provide the means to manage the data on this storage while respecting the copyrights on the system. Newer consoles, such as the PlayStation 5 and Xbox Series X, use high-speed SSDs not only for storage but to augment the console's RAM, as the combination of their I/O speeds and the use of decompression routines built into the system software gives overall read speeds that approach that of the onboard RAM. Power supply Besides converting AC power from a wall socket to the DC power needed by the console electronics, the power supply also helps to regulate that power in cases of power surges. Some consoles' power supplies are built into the unit, so that the consumer plugs the unit directly into a wall socket, but more often, the console ships with an AC adapter, colloquially known as a "power brick", that converts the power outside of the unit. On handheld units the power supply will either be from a battery compartment, or optionally from a direct power connection from an AC adapter, or from a rechargeable battery pack built into the unit.
Cooling systems More advanced computing systems generate heat, and require active cooling systems to keep the hardware at safe operating temperatures. Many newer consoles are designed with cooling fans, engineered cooling fins, internal layouts, and strategically-placed vents on the casing to assure good convective heat transfer for keeping the internal components cool. Media reader Since the introduction of game cartridges, nearly all consoles have a cartridge port/reader or an optical drive for game media. In later console generations, some console revisions have offered options without a media reader as a means to reduce the console's cost, letting the consumer rely on digital distribution for game acquisition, such as with the Xbox One S All-Digital Edition or the PlayStation 5 Digital Edition. Case All consoles are enclosed in a case to protect the electronics from damage and to constrain the air flow for cooling. Input/output ports Ports for connecting power, controllers, televisions or video monitors, external storage devices, Internet connectivity, and other features are placed in strategic locations on the console. Controller connections are typically offered on the front of the console, while power and most other connections are usually found on the back to keep cables out of the way. Controllers All game consoles require player input through a game controller to provide a method to move the player character in a specific direction and a variety of buttons to perform other in-game actions such as jumping or interacting with the game world. Though controllers have become more feature-rich over the years, they still provide less control over a game compared to personal computers or mobile gaming. The type of controller available to a game can fundamentally change the style of how a console game will or can be played. However, this has also inspired changes in game design to create games that accommodate the comparatively limited controls available on consoles. Controllers have come in a variety of styles over the history of consoles. Some common types include: Paddle A unit with a single knob or dial and usually one or two buttons. Turning the knob typically allows one to move an on-screen object along one axis (such as the paddle in a table tennis game), while the buttons can have additional features. Joystick A unit that has a long handle that can pivot freely along multiple directions along with one or more buttons. The unit senses the direction that the joystick is pushed, allowing for simultaneous movement in two directions within a game. Gamepad A unit that contains a variety of buttons, triggers, and directional controls - either D-pads or analog sticks or both. These have become the most common type of controller since the third generation of console hardware, with designs becoming more detailed to give a larger array of buttons and directional controls to players while maintaining ergonomic features. Numerous other controller types exist, including those that support motion controls, touchscreen support on handhelds and some consoles, and specialized controllers for specific types of games, such as racing wheels for racing games, light guns for shooting games, and musical instrument controllers for rhythm games. Some newer consoles also include optional support for mouse and keyboard devices.
A controller may be attached to the console through a wired connection, or in some unique cases like the Famicom, hardwired to the console, or connected through a wireless connection. Controllers require power, either provided by the console via the wired connection, or from batteries or a rechargeable battery pack for wireless connections. Controllers are nominally built into a handheld unit, though some newer ones allow for separate wireless controllers to also be used. Game media While the first game consoles were dedicated game systems, with the games programmed into the console's hardware, the Fairchild Channel F introduced the ability to store games in a form separate from the console's internal circuitry, thus allowing the consumer to purchase new games to play on the system. Since the Channel F, nearly all game consoles have featured the ability to purchase and swap games in some form, though those forms have changed with improvements in technology. ROM cartridge or game cartridge The Read-only Memory (ROM) cartridge was introduced with the Fairchild Channel F. A ROM cartridge consists of a printed circuit board (PCB) housed inside of a plastic casing, with a connector allowing the device to interface with the console. The circuit board can contain a wide variety of components, at minimum the read-only memory with the software written on it. Later cartridges were able to introduce additional components onto the circuit board like coprocessors, such as Nintendo's Super FX chip, to enhance the performance of the console. Some consoles such as the TurboGrafx-16 used a smart card-like technology to flatten the cartridge to a credit-card-sized system, which helped to reduce production costs, but limited additional features that could be included onto the circuitry. PCB-based cartridges waned with the introduction of optical media during the fifth generation of consoles. More recently, ROM cartridges have been based on high memory density, low cost flash memory, which allows for easier mass production of games. Sony used this approach for the PlayStation Vita, and Nintendo continues to use ROM cartridges for its 3DS and Switch products. Optical media Optical media, such as CD-ROM, DVD, and Blu-ray, became the primary format for retail distribution with the fifth generation. The CD-ROM format had gained popularity in the 1990s, in the midst of the fourth generation, and as a game medium, CD-ROMs were cheaper and faster to produce, offered much more storage space and allowed for the potential of full-motion video. Several console manufacturers attempted to offer CD-ROM add-ons to fourth generation consoles, but these were nearly as expensive as the consoles themselves and did not fare well. Instead, the CD-ROM format became integrated into consoles of the fifth generation, with the DVD format present across most by the seventh generation and Blu-ray by the eighth. Console manufacturers have also used proprietary disc formats for copy protection, such as the Nintendo optical disc used on the GameCube, and Sony's Universal Media Disc on the PlayStation Portable. Digital distribution Since the seventh generation of consoles, most consoles include integrated connectivity to the Internet and both internal and external storage for the console, allowing for players to acquire new games without game media.
All three of Nintendo, Sony, and Microsoft offer an integrated storefront for consumers to purchase new games and download them to their console, retaining the consumers' purchases across different consoles, and offering sales and incentives at times. Cloud gaming As Internet access speeds improved throughout the eighth generation of consoles, cloud gaming gained further attention as a distribution format. Instead of downloading games, the consumer plays them directly from a cloud gaming service, with inputs performed on the local console sent through the Internet to the server and the rendered graphics and audio sent back. Latency in network transmission remains a core limitation for cloud gaming at the present time. While magnetic storage, such as tape drives and floppy disks, had been popular for software distribution with early personal computers in the 1980s and 1990s, this format did not see much use in console systems. There were some attempts, such as the Bally Astrocade and APF-M1000 using tape drives, as well as the Disk System for the Nintendo Famicom, and the Nintendo 64DD for the Nintendo 64, but these had limited applications, as magnetic media was more fragile and volatile than game cartridges. External storage In addition to built-in internal storage, newer consoles often give the consumer the ability to use external storage media to save game data, downloaded games, or other media files from the console. Early iterations of external storage were achieved through the use of flash-based memory cards, first used by the Neo Geo but popularized with the PlayStation. Nintendo continues to support this approach by extending the storage capabilities of the 3DS and Switch, standardizing on the current SD card format. As consoles began incorporating the use of USB ports, support for USB external hard drives was also added, such as with the Xbox 360. Online services With Internet-enabled consoles, console manufacturers offer both free and paid-subscription services that provide value-added services atop the basic functions of the console. Free services generally offer user identity services and access to a digital storefront, while paid services allow players to play online games, interact with other users through social networking, use cloud saves for supported games, and gain access to free titles on a rotating basis. Examples of such services include the Xbox network, PlayStation Network, and Nintendo Switch Online. Console add-ons Certain consoles saw various add-ons or accessories that were designed to attach to the existing console to extend its functionality. The best examples of this were the various CD-ROM add-ons for consoles of the fourth generation such as the TurboGrafx CD, Atari Jaguar CD, and the Sega CD. Other examples of add-ons include the 32X for the Sega Genesis, intended to allow owners of the aging console to play newer games but which had several technical faults, and the Game Boy Player for the GameCube to allow it to play Game Boy games. Accessories Consumers can often purchase a range of accessories for consoles outside of the above categories. These can include: Video camera While these can be used on Internet-connected consoles like webcams on personal computers for communication with friends, video camera applications on consoles are more commonly used in augmented reality/mixed reality and motion sensing games.
Devices like the EyeToy for PlayStation consoles and the Kinect for Xbox consoles were center-points for a range of games that supported these devices on their respective systems. Standard Headsets Headsets provide a combination of headphones and a microphone for chatting with other players without disturbing others nearby in the same room. Virtual reality headsets Some virtual reality (VR) headsets can operate independently of consoles or use personal computers for their main processing system. , the only direct VR support on consoles is the PlayStation VR, though support for VR on other consoles is planned by the other manufacturers. Docking station For handheld systems as well as hybrids such as the Nintendo Switch, the docking station makes it easy to insert a handheld to recharge its battery and, if supported, to connect the handheld to a television screen. Game development for consoles Console development kits Console or game development kits are specialized hardware units that typically include the same components as the console and additional chips and components to allow the unit to be connected to a computer or other monitoring device for debugging purposes. A console manufacturer will make the console's dev kit available to registered developers months ahead of the console's planned launch to give developers time to prepare their games for the new system. These initial kits will usually be offered under special confidentiality clauses to protect trade secrets of the console's design, and will be sold at a high cost to the developer as part of keeping this confidentiality. Newer consoles that share features in common with personal computers may no longer use specialized dev kits, though developers are still expected to register and purchase access to software development kits from the manufacturer. For example, any consumer Xbox One can be used for game development after paying a fee to Microsoft to register one's intent to do so. Licensing Since the release of the Nintendo Famicom / Nintendo Entertainment System, most video game console manufacturers employ a strict licensing scheme that limits what games can be developed for their platform. Developers and their publishers must pay a fee, typically based on a royalty per unit sold, back to the manufacturer. The cost varies by manufacturer but was estimated to be about per unit in 2012. With additional fees, such as branding rights, this has generally worked out to be an industry-wide 30% royalty rate paid to the console manufacturer for every game sold. This is in addition to the cost of acquiring the dev kit to develop for the system. The licensing fee may be collected in a few different ways. In the case of Nintendo, the company generally has controlled the production of game cartridges with its lockout chips and optical media for its systems, and thus charges the developer or publisher for each copy it makes as an upfront fee. This also allows Nintendo to review the game's content prior to release and veto games it does not believe appropriate to include on its system. These restrictions led to over 700 unlicensed games for the NES, and numerous others on other Nintendo cartridge-based systems, from developers that had found ways to bypass the hardware lockout chips and sell without paying any royalties to Nintendo, such as Atari through its subsidiary Tengen. This licensing approach was similarly used by most other cartridge-based console manufacturers using lockout chip technology.
With optical media, where the console manufacturer may not have direct control over the production of the media, the developer or publisher typically must establish a licensing agreement to gain access to the console's proprietary storage format for the media as well as to use the console and manufacturer's logos and branding for the game's packaging, paid back through royalties on sales. In the transition to digital distribution, where now the console manufacturer runs digital storefronts for games, license fees apply to registering a game for distribution on the storefront - again gaining access to the console's branding and logo - with the manufacturer taking its cut of each sale as its royalty. In both cases, this still gives console manufacturers the ability to review and reject games they believe unsuitable for the system and deny licensing rights. With the rise of indie game development, the major console manufacturers have all developed entry level routes for these smaller developers to be able to publish onto consoles at far lower costs and reduced royalty rates. Programs like Microsoft's ID@Xbox give developers most of the needed tools for free after validating the small development size and needs of the team. Similar licensing concepts apply for third-party accessory manufacturers. Emulation and backward compatibility Consoles, like most consumer electronic devices, have limited lifespans. There is great interest in preservation of older console hardware for archival and historical purposes, and games from older consoles, as well as arcade and personal computer games, remain of interest. Computer programmers and hackers have developed emulators that can be run on personal computers or other consoles, simulating the hardware of older consoles and allowing games from those consoles to be run. The development of software emulators of console hardware is established to be legal, but there are unanswered legal questions surrounding copyrights, including acquiring a console's firmware and copies of a game's ROM image, which laws such as the United States' Digital Millennium Copyright Act make illegal save for certain archival purposes. Even though emulation itself is legal, Nintendo is recognized to be highly protective of any attempts to emulate its systems and has taken early legal actions to shut down such projects. To help support older games and console transitions, manufacturers started to support backward compatibility on consoles in the same family. Sony was the first to do this on a home console with the PlayStation 2, which was able to play original PlayStation content, and backward compatibility subsequently became a sought-after feature across many consoles that followed. Backward compatibility functionality has included direct support for previous console games on the newer consoles such as within the Xbox console family, the distribution of emulated games such as Nintendo's Virtual Console, or using cloud gaming services for these older games as with the PlayStation Now service. Market Distribution Consoles may be shipped in a variety of configurations, but typically will include one base configuration that includes the console, one controller, and sometimes a pack-in game. Manufacturers may offer alternate stock keeping unit (SKU) options that include additional controllers and accessories or different pack-in games. Special console editions may feature unique cases or faceplates with art dedicated to a specific video game or series and are bundled with that game as a special incentive for its fans.
Pack-in games are typically first-party games, often featuring the console's primary mascot characters. The more recent console generations have also seen multiple versions of the same base console system either offered at launch or presented as a mid-generation refresh. In some cases, these simply replace some parts of the hardware with cheaper or more efficient parts, or otherwise streamline the console's design for production going forward; the PlayStation 3 underwent several such hardware refreshes during its lifetime due to technological improvements such as significant reduction of the process node size for the CPU and GPU. In these cases, the hardware revision model will be marked on packaging so that consumers can verify which version they are acquiring. In other cases, the hardware changes create multiple lines within the same console family. The base console unit in all revisions shares fundamental hardware, but options like internal storage space and RAM size may be different. Those systems with more storage and RAM would be marked as a higher performance variant available at a higher cost, while the original unit would remain as a budget option. For example, within the Xbox One family, Microsoft released the mid-generation Xbox One X as a higher performance console, the Xbox One S as the lower-cost base console, and a special Xbox One S All-Digital Edition revision that removed the optical drive on the basis that users could download all games digitally, offered at even a lower cost than the Xbox One S. In these cases, developers can often optimize games to work better on the higher-performance console with patches to the retail version of the game. In the case of the Nintendo 3DS, the New Nintendo 3DS featured upgraded memory and processors, with new games that could only be run on the upgraded units and could not be run on an older base unit. There have also been a number of "slimmed-down" console options with reduced hardware components that significantly lowered the price at which the console could be sold to the consumer, but these either left certain features off the console, such as the Wii Mini that lacked any online components compared to the Wii, or required the consumer to purchase additional accessories and wiring if they did not already own them, such as the New-Style NES that was not bundled with the required RF hardware to connect to a television. Pricing Consoles when originally launched in the 1970s and 1980s were about , and with the introduction of the ROM cartridge, each game averaged about . Over time the launch price of base console units has generally risen to about , with the average game costing . Exceptionally, the period of transition from ROM cartridges to optical media in the early 1990s saw several consoles with high price points exceeding and going as high as . As a result, sales of these first optical media consoles were generally poor. When adjusted for inflation, the price of consoles has generally followed a downward trend, from from the early generations down to for current consoles. This is typical for any computer technology, with the improvements in computing performance and capabilities outpacing the additional costs to achieve those gains. Further, within the United States, the price of consoles has generally remained consistent, being within 0.8% to 1% of the median household income, based on the United States Census data for the console's launch year.
Since the Nintendo Entertainment System, console pricing has stabilized on the razorblade model, where the consoles are sold at little to no profit for the manufacturer, but they gain revenue from each game sold due to console licensing fees and other value-added services around the console (such as Xbox Live). Console manufacturers have even been known to take losses on the sale of consoles at the start of a console's launch with the expectation of recovering the loss through revenue sharing and later cost reductions on the console as they switch to less expensive components and manufacturing processes without changing the retail price. Consoles have been generally designed to have a five-year product lifetime, though manufacturers have considered their entries in the more recent generations to have longer lifetimes of seven to potentially ten years. Competition The competition within the video game console market as a subset of the video game industry is an area of interest to economists, given its relatively modern history, its rapid growth to rival that of the film industry, and frequent changes compared to other sectors. Effects of unregulated competition on the market were twice seen early in the industry. The industry had its first crash in 1977 following the release of the Magnavox Odyssey, Atari's home versions of Pong and the Coleco Telstar, which led other third-party manufacturers, using inexpensive General Instrument processor chips, to make their own home consoles which flooded the market by 1977. The video game crash of 1983 was fueled by multiple factors including competition from lower-cost personal computers, but unregulated competition was also a factor, as numerous third-party game developers, attempting to follow on the success of Activision in developing third-party games for the Atari 2600 and Intellivision, flooded the market with poor quality games, and made it difficult for even quality games to sell. Nintendo implemented a lockout chip, the Checking Integrated Circuit, when releasing the Nintendo Entertainment System in Western territories, as a means to control which games were published for the console. As part of their licensing agreements, Nintendo further prevented developers from releasing the same game on a different console for a period of two years. This served as one of the first means of securing console exclusivity for games that existed beyond the technical limitations of console development. The Nintendo Entertainment System also brought the concept of a video game mascot as the representation of a console system as a means to sell and promote the unit, and for the NES this was Mario. The use of mascots in businesses had been a tradition in Japan, and this had already proven successful in arcade games like Pac-Man. Mario was used to serve as an identity for the NES as a humor-filled, playful console. Mario caught on quickly when the NES was released in the West, and when the next generation of consoles arrived, other manufacturers pushed their own mascots to the forefront of their marketing, most notably Sega with the use of Sonic the Hedgehog. The Nintendo and Sega rivalry that involved their mascots' flagship games served as part of the fourth console generation's "console wars". Since then, manufacturers have typically positioned their mascot and other first-party games as key titles in console bundles used to drive sales of consoles at launch or at key sales periods such as near Christmas.
Another type of competitive edge used by console manufacturers around the same time was the notion of "bits", or the size of the word used by the main CPU. The TurboGrafx-16 was the first console to push on its bit-size, advertising itself as a "16-bit" console, though this only referred to part of its architecture while its CPU was still an 8-bit unit. Despite this, manufacturers found consumers became fixated on the notion of bits as a console selling point, and over the fourth, fifth and sixth generations, these "bit wars" played heavily into console advertising. The use of bits waned as CPU architectures no longer needed to increase their word size and instead had other means to improve performance, such as through multicore CPUs. Generally, increased console numbers give rise to more consumer options and better competition, but the exclusivity of titles made the choice of console an "all-or-nothing" decision for most consumers. Further, with the number of available consoles growing with the fifth and sixth generations, game developers became pressured to choose which systems to focus on, and ultimately narrowed their target choice of platforms to those that were the best-selling. This caused a contraction in the market, with major players like Sega leaving the hardware business after the Dreamcast but continuing in the software area. Effectively, each console generation was shown to have two or three dominant players. Competition in the console market in the 2010s and 2020s is considered an oligopoly between three main manufacturers: Nintendo, Sony, and Microsoft. The three use a combination of first-party games exclusive to their console and negotiate exclusive agreements with third-party developers to have their games be exclusive for at least an initial period of time to drive consumers to their console. They also work with CPU and GPU manufacturers to tune and customize computer hardware to make it more amenable and effective for video games, leading to lower-cost hardware needed for video game consoles. Finally, console manufacturers also work with retailers to help with promotion of consoles, games, and accessories. While there is little margin for retailers on the console hardware relative to the manufacturer's suggested retail price, these arrangements with the manufacturers can secure better profits on sales of game and accessory bundles and premier product placement. These all form network effects, with each manufacturer seeking to maximize the size of their network of partners to increase their overall position in the competition. Of the three, Microsoft and Sony, both with their own hardware manufacturing capabilities, remain at a leading edge approach, attempting to gain a first-mover advantage over the other with the adoption of new console technology. Nintendo is more reliant on its suppliers and thus, rather than trying to compete feature for feature with Microsoft and Sony, has taken a "blue ocean" strategy since the Nintendo DS and Wii. See also Unlockable game Video game clone Further reading References External links American inventions Bundled products or services Console
23010950
https://en.wikipedia.org/wiki/NewBlue
NewBlue
NewBlue was founded in 2006 in San Diego, California, developing software for the post-production video industry. The company began by developing video effects, transitions, and titling software for consumer and professional video editing software host applications. The company has licensed its software ("plugins") for use in Avid Media Composer, Grass Valley EDIUS, Sony VEGAS, MAGIX Movie Edit Pro, Corel VideoStudio, Pinnacle Studio and CyberLink PowerDirector. At the urging of a leading manufacturer of live video production switchers, NewBlue entered the live production side of the video industry in 2016. Titler Live was launched that year and is used by a variety of broadcast, education, sports, house of worship, corporate and government customers. NewBlue’s live video production technology has also been licensed by Telestream, PrestoSports, and Broadcast Pix. NewBlue has a broad portfolio of more than ten patented technologies in cloud video production, data-driven graphics, live and post graphics, real-time graphics rendering, and live-to-post video production. Patent information can be viewed at the US Patent and Trademark Office. In July 2021, NewBlue used the Blackmagic Design SDK to combine the ATEM with the NewBlue graphics software. The combination provides on-air graphics and live switching. Products Titler Live. Graphics software for live video productions. Fusion 2. A short-depth 1U rackmount chassis running Titler Live Broadcast. VividCast. All-in-one live video streaming software. TotalFX. Titling, tools and effects plugins for video editing. Titler Pro. Titling software for video editors. References Software companies based in California Video editing software Special effects companies Software companies of the United States Software companies established in 2005 2005 establishments in California American companies established in 2005
40092174
https://en.wikipedia.org/wiki/Dalilah%20Muhammad
Dalilah Muhammad
Dalilah Muhammad (born February 7, 1990) is an American track and field athlete who specializes in the 400 meters hurdles. She is the 2016 Rio Olympics champion and 2020 Tokyo Olympics silver medalist, becoming at the latter the second-fastest woman of all time in the event with her personal best of 51.58 seconds. Muhammad was second at both the 2013 and 2017 World Championships before taking her first gold in 2019, setting the former world record of 52.16 s. She is only the second female 400 m hurdler in history, after Sally Gunnell, to have won the Olympic and World titles and broken the world record. At both the 2019 World Championships and Tokyo Games, she also took gold as part of the women's 4×400 metres relay team. Muhammad won the 400 m hurdles at the 2007 World Youth Championships, and placed second in the event at the 2009 Pan American Junior Championships. Collegiately, she ran for the USC Trojans, for whom she was a four-time All-American at the NCAA Outdoor Championships. She was also the 2013, 2016, and 2017 American national champion and a two-time Diamond League winner. Early life Dalilah Muhammad was born February 7, 1990, in Jamaica, Queens, New York City, to parents Nadirah and Askia Muhammad. Athletic career High school and college track Dalilah Muhammad competed in various track and field events at high school, including the hurdles, sprints, and high jump. While at Benjamin N. Cardozo High School in Bayside, Queens, she won the 2008 New York State and Nike Outdoor Nationals titles in the 400 m hurdles. During that period, she also gained her first international experience. At the 2007 World Youth Championships in Athletics, she took the 400 m hurdles gold medal. Muhammad earned the 2007 Gatorade Female Athlete of the Year award for New York State. In 2008, she enrolled at the University of Southern California on a sports scholarship, majoring in business. Joining the USC Trojans track team, she competed extensively in her first season. At the Pacific-10 Conference meet, she was runner-up in the 400 m hurdles, fourth in the 4×400-meter relay, and also set a personal record of 13.79 seconds as a finalist in the 100-meter hurdles. The NCAA Outdoor Championship saw her set a 400 m hurdles best of 56.49 seconds and finish in third place in the final. She won the national junior title that year and was the silver medalist at the 2009 Pan American Junior Athletics Championships. In her second year at USC, she was a runner-up at the Pac-10 championships but narrowly missed out on the NCAA final. The 2011 outdoor season saw her repeat her second-place finish at the Pac-10 championships, and a personal record of 56.04 seconds in the NCAA semi-finals led to a sixth-place finish in the 400 m hurdles final. In 2012, she set personal records in the sprint hurdles events, running 8.23 seconds for the 60-meter hurdles and 13.33 seconds for the 100 m hurdles. She ranked fifth in the latter event at the Pac-12 meet, where she placed third in the 400 m hurdles. She was again an NCAA finalist in her specialty, coming in fifth, and she also participated in the heats at the 2012 United States Olympic Trials. She ended her career as a USC Trojan athlete as the school's third fastest ever 400 m hurdler and a four-time NCAA All-American. Professional After graduating from USC, she chose to compete professionally in the 400 m hurdles. She improved her personal best in the 2013 season with runs of 55.97 and then 54.94 seconds in California. In her IAAF Diamond League debut, she placed fourth at the Shanghai Golden Grand Prix with a time of 54.74 seconds.
She won at the Memorial Primo Nebiolo in Italy in 54.66, then placed third at the Bislett Games in Norway with a run of 54.33 seconds. At the 2013 USA Outdoor Track and Field Championships, she improved her personal record by half a second with a run of 53.83 in the final to win her first national title in the 400 m hurdles. Muhammad has represented Nike since 2013. At the 2014 USA Outdoor Track and Field Championships, Muhammad qualified for the 400 m hurdles but did not start. At the 2015 USA Outdoor Track and Field Championships, she placed 7th with a time of 57.31. At the 2016 United States Olympic Trials, she won the 400-meter hurdles in 52.88. At the 2016 Summer Olympics, she won gold in the event. She defended her title at the 2017 USA Outdoor Track and Field Championships, winning with a new personal best of 52.64. Muhammad broke the 400-meter hurdles world record at the 2019 USA Outdoor Track and Field Championships with a time of 52.20 seconds, improving Yuliya Pechonkina's 16-year-old record of 52.34 (2003). Muhammad is only the second woman in the history of the 400 m hurdles, after Sally Gunnell, to have won the Olympic title and broken the world record. In September, the IAAF ratified Muhammad's time as the official world record. She won the gold medal at the 2019 World Championships, improving her time by 0.04 seconds and setting a new world record of 52.16 seconds. At the end of the season she was selected for the Jackie Joyner-Kersee Award by the U.S.A. Track and Field federation, and was named World Women's Athlete of the Year by Track and Field News, voted first choice by 24 of the publication's 36-member panel. Track statistics Information from World Athletics profile unless otherwise noted. Personal bests International championships 400 m hurdles circuit wins and titles Diamond League winner (2): 2017, 2018 2016 (2): London, Lausanne 2017 (1): Brussels 2018 (3): Shanghai, Oslo, Zürich 2019 (2): Doha, Rome 2021 (1): Eugene Prefontaine Classic () National championships NCAA results from Track & Field Results Reporting System. See also Muslim women in sport Notes References External links 1990 births Living people American female hurdlers African-American female track and field athletes African-American Muslims 21st-century Muslims Athletes (track and field) at the 2016 Summer Olympics Athletes (track and field) at the 2020 Summer Olympics Benjamin N. Cardozo High School alumni Diamond League winners Medalists at the 2016 Summer Olympics Medalists at the 2020 Summer Olympics Olympic female hurdlers Olympic gold medalists for the United States in track and field Olympic silver medalists for the United States in track and field People from Jamaica, Queens Sportspeople from Queens, New York Track & Field News Athlete of the Year winners Track and field athletes from New York City USA Outdoor Track and Field Championships winners USC Trojans women's track and field athletes World Athletics Championships athletes for the United States World Athletics Championships medalists World Athletics Championships winners 21st-century African-American sportspeople 21st-century African-American women
12438527
https://en.wikipedia.org/wiki/IP%20fragmentation%20attack
IP fragmentation attack
IP fragmentation attacks are a kind of computer security attack based on how the Internet Protocol (IP) requires data to be transmitted and processed. Specifically, such attacks invoke IP fragmentation, a process used to partition messages (the service data unit (SDU); typically a packet) from one layer of a network into multiple smaller payloads that can fit within the lower layer's protocol data unit (PDU). Every network link has a maximum size of messages that may be transmitted, called the maximum transmission unit (MTU). If the SDU plus metadata added at the link layer exceeds the MTU, the SDU must be fragmented. IP fragmentation attacks exploit this process as an attack vector. Part of the TCP/IP suite is the Internet Protocol (IP) which resides at the Internet Layer of this model. IP is responsible for the transmission of packets between network end points. IP includes some features which provide basic measures of fault-tolerance (time to live, checksum), traffic prioritization (type of service) and support for the fragmentation of larger packets into multiple smaller packets (ID field, fragment offset). The support for fragmentation of larger packets provides a protocol allowing routers to fragment a packet into smaller packets when the original packet is too large for the supporting datalink frames. IP fragmentation exploits (attacks) use the fragmentation protocol within IP as an attack vector. According to [Kurose 2013], in one type of IP fragmentation attack "the attacker sends a stream of small fragments to the target host, none of which has an offset of zero. The target can collapse as it attempts to rebuild datagrams out of the degenerate packets." Another attack involves sending overlapping fragments with non-aligned offsets, which can leave vulnerable operating systems unsure how to reassemble them, causing some to crash. Process IP packets are encapsulated in datalink frames, and, therefore, the link MTU affects larger IP packets and forces them to be split into pieces equal to or smaller than the MTU size. This can be accomplished by several approaches: To set the IP packet size equal to or smaller than the MTU of the directly attached medium and delegate all further fragmentation of packets to routers, meaning that routers decide if the current packet should be re-fragmented or not. This offloads a lot of work onto routers, and can also result in packets being segmented by several IP routers one after another, resulting in very peculiar fragmentation. To preview all links between source and destination and select the smallest MTU in this route, assuming there is a unique route. This way we make sure that the fragmentation is done by the sender, using a packet-size smaller than the selected MTU, and there is no further fragmentation en route. This solution, called Path MTU Discovery, allows a sender to fragment/segment a long Internet packet, rather than relying on routers to perform IP-level fragmentation. This is more efficient and more scalable. It is therefore the recommended method in the current Internet. The problem with this approach is that each packet is routed independently; they may well typically follow the same route, but they may not, and so a probe packet to determine fragmentation may follow a path different from paths taken by later packets. Three fields in the IP header are used to implement fragmentation and reassembly: the "Identification", "Flags" and "Fragment Offset" fields. Flags: A 3-bit field which indicates whether the packet is part of a fragmented data frame or not.
Bit 0: reserved, must be zero.
Bit 1: (DF) 0 = May Fragment, 1 = Don't Fragment.
Bit 2: (MF) 0 = Last Fragment, 1 = More Fragments.
Fragment Offset specifies the fragment's position within the original packet, measured in 8-byte units. Accordingly, every fragment except the last must contain a multiple of 8 bytes of data. The Fragment Offset field is 13 bits wide, so it can hold 8192 (2^13) distinct values, but a packet cannot carry 8192 * 8 = 65,536 bytes of data because the "Total Length" field of the IP header records the total size including the header and the data. An IP header is at least 20 bytes long, so the maximum value for "Fragment Offset" is restricted to 8189, which leaves room for 3 bytes in the last fragment. Because an IP internet can be connectionless, fragments from one packet may be interleaved with those from another at the destination. The "Identification" field uniquely identifies the fragments of a particular packet. The source system sets the "Identification" field in each packet to a value that is unique for all packets which use the same source IP address, destination IP address, and "Protocol" values, for the lifetime of the packet on the internet. This way the destination can distinguish which incoming fragments belong to a given packet and buffer all of them until the last fragment is received. The last fragment sets the "More Fragments" bit to 0, which tells the receiving station to start reassembling the data once all fragments have been received.
The following real-life fragmentation example was obtained by using the Ethereal protocol analyzer to capture ICMP echo request packets. To reproduce it, open a terminal and type ping ip_dest -n 1 -l 65000 (Windows ping syntax). The results are as follows:
No. Time Source Destination Protocol Info
1 0.000000 87.247.163.96 66.94.234.13 ICMP Echo (ping) request
2 0.000000 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=1480)
3 0.002929 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=2960)
4 6.111328 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=4440)
5 6.123046 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=5920)
6 6.130859 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=7400)
7 6.170898 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=8880)
8 6.214843 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=10360)
9 6.239257 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=11840)
10 6.287109 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=13320)
11 6.302734 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=14800)
12 6.327148 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=16280)
13 6.371093 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=17760)
14 6.395507 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=19240)
15 6.434570 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=20720)
16 6.455078 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=22200)
17 6.531250 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=23680)
18 6.550781 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=25160)
19 6.575195 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=26640)
20 6.615234 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=28120)
21 6.634765 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=29600)
22 6.659179 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=31080)
23 6.682617 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=32560)
24 6.699218 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=34040)
25 6.743164 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=35520)
26 6.766601 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=37000)
27 6.783203 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=38480)
28 6.806640 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=39960)
29 6.831054 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=41440)
30 6.850586 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=42920)
31 6.899414 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=44400)
32 6.915039 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=45880)
33 6.939453 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=47360)
34 6.958984 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=48840)
35 6.983398 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=50320)
36 7.023437 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=51800)
37 7.046875 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=53280)
38 7.067382 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=54760)
39 7.090820 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=56240)
40 7.130859 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=57720)
41 7.151367 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=59200)
42 7.174804 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=60680)
43 7.199218 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=62160)
44 7.214843 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=63640)
45 7.258789 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=65120)
The first packet details:
No. Time Source Destination Protocol Info
1 0.000000 87.247.163.96 66.94.234.13 ICMP Echo (ping) request
Frame 1 (1514 bytes on wire, 1514 bytes captured)
Ethernet II, Src: OmronTat_00:00:00 (00:00:0a:00:00:00), Dst: 40:0f:20:00:0c:00 (40:0f:20:00:0c:00)
Internet Protocol, Src: 87.247.163.96 (87.247.163.96), Dst: 66.94.234.13 (66.94.234.13)
Internet Control Message Protocol
Type: 8 (Echo (ping) request)
Code: 0
Checksum: 0x6b7d
Identifier: 0x0600
Sequence number: 0x0200
Data (1472 bytes)
The second packet details:
No. Time Source Destination Protocol Info
2 0.000000 87.247.163.96 66.94.234.13 IP Fragmented IP protocol (proto=ICMP 0x01, off=1480)
Frame 2 (1514 bytes on wire, 1514 bytes captured)
Ethernet II, Src: OmronTat_00:00:00 (00:00:0a:00:00:00), Dst: 40:0f:20:00:0c:00 (40:0f:20:00:0c:00)
Internet Protocol, Src: 87.247.163.96 (87.247.163.96), Dst: 66.94.234.13 (66.94.234.13)
Data (1480 bytes)
Note that only the first fragment contains the ICMP header; all remaining fragments are generated without the ICMP header.
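The offsets in the capture above follow directly from the Ethernet MTU and the IPv4 header sizes. The Python sketch below is illustrative only and is not tied to any capture tool; the payload size is an assumption chosen so that the printed offsets line up with the 45 frames and the final offset of 65120 shown above (the exact size used when the capture was taken may have differed from the command quoted earlier).

```python
# Illustrative sketch: how a large ICMP echo request is split into IP fragments.
# The MTU and header sizes are standard Ethernet/IPv4 values; the payload size
# below is an assumption chosen so the output matches the capture shown above.

MTU = 1500              # Ethernet MTU in bytes
IP_HEADER = 20          # minimal IPv4 header, no options
ICMP_HEADER = 8         # ICMP echo header
PING_PAYLOAD = 65500    # assumed ping payload size (hypothetical, see note above)

def fragment_offsets(payload_len, mtu=MTU, ip_header=IP_HEADER):
    """Return (offset_in_bytes, data_len) for each IP fragment of a payload."""
    max_data = (mtu - ip_header) // 8 * 8   # per-fragment data, rounded to 8-byte units
    fragments = []
    offset = 0
    while offset < payload_len:
        data_len = min(max_data, payload_len - offset)
        fragments.append((offset, data_len))
        offset += data_len
    return fragments

if __name__ == "__main__":
    icmp_message = ICMP_HEADER + PING_PAYLOAD    # the IP payload to be fragmented
    frags = fragment_offsets(icmp_message)
    for i, (off, length) in enumerate(frags, start=1):
        mf = 1 if i < len(frags) else 0          # More Fragments flag
        # The on-wire Fragment Offset field stores off // 8 (8-byte units).
        print(f"frame {i:2d}: off={off:5d} (field value {off // 8:4d}), data={length}, MF={mf}")
    # Extra header bytes relative to one unfragmented packet:
    print("additional overhead:", (len(frags) - 1) * IP_HEADER, "bytes")
```

Each printed offset advances by 1480 bytes (185 eight-byte units), matching the off= values reported by Ethereal, and the final line anticipates the overhead formula given in the next section.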
Two important points about this example:
Only the first fragment contains the full upper-layer header, so the other fragments look like beheaded packets.
Additional overhead is imposed on the network because every fragment carries its own IP header: additional overhead = (number_of_fragments - 1) * (ip_header_len).
Exploits
IP fragment overlapped
The IP fragment overlapped exploit occurs when two fragments contained within the same IP packet have offsets that indicate that they overlap each other in positioning within the packet. This could mean that fragment A is either completely or partially overwritten by fragment B. Some operating systems do not properly handle fragments that overlap in this manner and may throw exceptions or behave in other undesirable ways upon receipt of overlapping fragments. This is the basis for the teardrop attack. Overlapping fragments may also be used in an attempt to bypass Intrusion Detection Systems. In this exploit, part of an attack is sent in fragments along with additional random data; later fragments may overwrite the random data with the remainder of the attack. If the completed packet is not properly reassembled at the IDS, the attack will go undetected.
IP fragmentation buffer full
The IP fragmentation buffer full exploit occurs when there is an excessive amount of incomplete fragmented traffic detected on the protected network. This could be due to an excessive number of incomplete fragmented packets, a large number of fragments for individual packets, or a combination of the quantity of incomplete packets and the size or number of fragments in each packet. This type of traffic is most likely an attempt to bypass security measures or Intrusion Detection Systems by intentional fragmentation of attack activity.
IP fragment overrun
The IP fragment overrun exploit occurs when a reassembled fragmented packet exceeds the declared IP data length or the maximum packet length. By definition, no IP packet should be larger than 65,535 bytes. Systems that try to process such oversized packets can crash, and receipt of such packets can be indicative of a denial-of-service attempt.
IP fragment too many packets
The "Too Many Packets" exploit is identified by an excessive number of incomplete fragmented packets detected on the network. This is usually either a denial-of-service attack or an attempt to bypass security measures. An example of "Too Many Packets", "Incomplete Packet" and "Fragment Too Small" is the Rose Attack.
IP fragment incomplete packet
This exploit occurs when a packet cannot be fully reassembled due to missing data. This can indicate a denial-of-service attack or an attempt to defeat packet filter security policies.
IP fragment too small
If an IP fragment is too small, it indicates that the fragment was likely intentionally crafted. Any fragment other than the final fragment that is less than 400 bytes could be considered too small. Small fragments may be used in denial-of-service attacks or in an attempt to bypass security measures or detection.
Fragmentation for evasion
Network infrastructure equipment such as routers, load balancers, firewalls and IDS have inconsistent visibility into fragmented packets. For example, a device may subject the initial fragment to rigorous inspection and auditing, but might allow all additional fragments to pass unchecked. Some attacks may use this fact to evade detection by placing incriminating payload data in fragments; a simple detection sketch follows below.
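As a concrete illustration of the overlap, overrun and tiny-fragment conditions described above, here is a minimal Python sketch of the kind of per-datagram check a reassembler or IDS might apply. It is not taken from any particular product; the 400-byte threshold is the rule of thumb quoted above, and the data layout is a hypothetical simplification.

```python
# Minimal sketch of per-datagram fragment sanity checks, assuming each fragment
# is described by (offset_bytes, data_len, more_fragments). Thresholds and
# structure are illustrative, not taken from any specific IDS or OS.

MAX_DATAGRAM = 65535          # maximum legal IPv4 packet size
MIN_NONFINAL_FRAGMENT = 400   # "too small" rule of thumb from the text above

def check_fragments(fragments):
    """Return a list of warnings for one datagram's fragments."""
    warnings = []
    frags = sorted(fragments, key=lambda f: f[0])
    prev_end = 0
    for offset, length, more in frags:
        if offset % 8 != 0:
            warnings.append(f"offset {offset} is not a multiple of 8")
        if more and length < MIN_NONFINAL_FRAGMENT:
            warnings.append(f"non-final fragment at offset {offset} is only {length} bytes (tiny fragment)")
        if offset < prev_end:
            warnings.append(f"fragment at offset {offset} overlaps previous data ending at {prev_end}")
        prev_end = max(prev_end, offset + length)
    if prev_end + 20 > MAX_DATAGRAM:   # 20-byte IP header plus reassembled data
        warnings.append(f"reassembled size {prev_end} bytes of data exceeds the 65,535-byte limit (overrun)")
    if frags and frags[0][0] != 0:
        warnings.append("no fragment with offset zero (incomplete packet)")
    return warnings

# Example: an overlapping pair plus a tiny non-final fragment.
suspect = [(0, 1480, True), (1480, 1480, True), (2000, 8, True), (2968, 200, False)]
for w in check_fragments(suspect):
    print("warning:", w)
```

A real reassembler would also track gaps and timeouts (the "buffer full" and "incomplete packet" conditions above), but even this much is enough to flag teardrop-style overlaps and tiny-fragment evasion attempts.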
Devices operating in "full" proxy mode are generally not susceptible to this subterfuge. References External links W. Richard Stevens' Home Page Internet security
6821624
https://en.wikipedia.org/wiki/PlayStation%203%20accessories
PlayStation 3 accessories
Various accessories for the PlayStation 3 video game console have been produced by Sony and third-party companies. These include controllers, audio and video input devices such as microphones and video cameras, and cables for better sound and picture quality. The controllers include the DualShock 3, a keypad that attaches to that controller, the PlayStation Move motion controller (comparable in concept to the Wii Remote), and miscellaneous special-purpose controllers. Headsets (mostly used for communications, not game audio) are the major A/V devices, followed by cameras and other input devices. Finally, a composite video cable set, USB cable sets, and memory adaptors complete the accessories.
Game controllers
Sixaxis
The Sixaxis Wireless Controller (SCPH-98040/CECHZC1) (trademarked "SIXAXIS") was the official wireless controller for the PlayStation 3 until it was succeeded by the DualShock 3. In Japan, individual Sixaxis controllers were available for purchase simultaneously with the console's launch. All Sixaxis controllers, with the exception of those bundled with a console, were sold without a USB to USB mini cable. "Sixaxis" also refers to the motion sensing technology used in both the Sixaxis and DualShock 3 controllers. The controller's design is an evolution of the DualShock 2, retaining its pressure-sensitive buttons, layout and basic shape. Unlike the DualShock 2, however, it is a Bluetooth wireless controller (it will also function as a wired controller via USB) and features motion sensing technology. It does not feature vibration motors (these were re-added in the DualShock 3). The L2 and R2 buttons were replaced with analog triggers and the precision of the analog sticks was increased from 8-bit to 10-bit. In place of the "Analog" button is a button labeled with the PlayStation logo, which allows access to the system menu. The underside of the case is also slightly enlarged to accommodate the internal battery. The Sixaxis is constructed of slightly translucent plastic, rather than the opaque plastic used on the DualShock 2 (and the later DualShock 3).
DualShock 3
Replacing the Sixaxis as the standard PlayStation 3 controller, the DualShock 3 (SCPH-98050/CECHZC2, trademarked "DUALSHOCK 3") features the same functions and design (including "Sixaxis" motion sensing), but with vibration feedback capability. Cosmetically, the DualShock 3 is nearly identical to the Sixaxis, with the only differences being that "DUALSHOCK 3" is printed on the top (with the original "SIXAXIS" label moved down) and that the body is made of opaque plastic rather than the slightly translucent plastic used on the Sixaxis. The vibration function does not interfere with the motion sensing function, and both functions can be used at once. Like the Sixaxis, it is a wireless controller with a mini-USB port on the rear that is used for charging and allows play while charging. Released alongside new PlayStation 3 models in Japan on January 11, 2008, the DualShock 3 was initially available in Black and Ceramic White colors, matching the color options for the new console models. On March 6, a Satin Silver DualShock 3 was released in Japan, again alongside a new console color. The black DualShock 3 was released in the United States on April 2 and in Europe on July 2. On October 30, 2008, the DualShock 3 became the standard controller packaged with PlayStation 3 consoles, starting with the (non-PS2-backwards compatible) 80 GB models.
Both controllers can also be used on the PSP Go via Bluetooth (this requires a PlayStation 3 system for the initial connection).
Charging stand
An official charging stand for PlayStation 3 controllers was released in Japan on April 21, 2011. It is capable of charging two controllers simultaneously and is powered by a wall plug. Third-party charging stands are available in regions outside of Japan.
Wireless keypad
The wireless keypad peripheral (CECHZK1x, where x is a region code) was launched in Europe on November 28, 2008, in North America in early December 2008, and in Japan in late 2008. As well as acting as a keyboard, the wireless keypad features a touchpad button (labeled as a pointing hand, similar to the pointer used in the web browser), which allows the surface of the keypad to be used as a touchpad, letting users move the pointer by sliding their fingers around the keypad surface. When in touchpad mode, the left and right arrow buttons act as left and right mouse buttons, respectively. Although designed to be directly attached to the controller, the keypad features an internal battery and an independent Bluetooth connection, and does not connect to the controller electronically in any way, meaning it can function separately from the controller. The keypad must first be connected to the PlayStation 3 via a USB mini-B to USB-A cable or put into Bluetooth discovery mode (by holding down the "blue" modifier key when switching the device on) so it can be paired and subsequently used. Discovery mode can also be used to pair the keypad with other Bluetooth-compatible devices such as computers and mobile phones, where it will function as both a keyboard and a touchpad (where supported by the host device). The keypad also features two shortcut buttons, letting users jump to the "Friends" screen and "Message Box" on the XMB during game play.
PlayStation Move
PlayStation Move is a motion-sensing game controller platform for the PlayStation 3 (PS3) video game console by Sony Computer Entertainment (SCE). It was previously named PlayStation Motion Controller. Based on a handheld motion controller wand, PlayStation Move uses the PlayStation Eye webcam to track the wand's position, and inertial sensors in the wand to detect its motion (similar to the Wii Remote). First revealed on June 2, 2009, PlayStation Move was launched in September 2010 in most countries and in October 2010 in Japan. Hardware available at launch included the main PlayStation Move motion controller and an optional PlayStation Move navigation controller.
Buzz! Buzzer
The Buzz! Buzzer is a special controller designed specifically for the Buzz! quiz game series. The controller features a large red buzzer button and four smaller coloured buttons for answer selection. Both wired and wireless versions are available and come bundled with Buzz! games. A four-buzzer set acts as a single USB device and connects to a USB port on the PlayStation 3. Wireless versions connect via a USB dongle, with each dongle able to support up to 4 wireless buzzers at a time. A second dongle is required for additional buzzers (for 8-player games). Both the wired and wireless versions of the buzzers are compatible with both the PlayStation 2 and the PlayStation 3.
Logitech Driving Force GT
Released on December 13, 2007, the Logitech Driving Force GT is a PlayStation 3 racing wheel peripheral intended for use with racing games. It is manufactured and distributed by Logitech International S.A. of Romanel-sur-Morges, Switzerland.
Its features include 900° steering (2.5 turns lock-to-lock) with force feedback, via a full-sized (45 cm diameter), MOMO-styled steering wheel and full-sized throttle and brake pedals, individually sprung to simulate real pedal efforts. It also has the standard PlayStation 3 gamepad buttons (with gray-colored face-button symbols), a PS/Home button (labeled PS) and L3/R3 buttons. Other wheels include the Fanatec Porsche 911 Turbo S Racing Wheel, which features force feedback, a 6-gear stick shifter and 3 pedals (gas/brake/clutch).
Logitech Cordless Precision Controller
The Logitech Cordless Precision Controller offers similar functions to the Sixaxis and DualShock 3 wireless controllers, except that it uses 2.4 GHz USB wireless technology that gives the user 30 feet (10 m) of room to play. The controller uses two AA batteries, which provide up to 50 hours of continuous gaming. After five minutes of inactivity, the gamepad goes into sleep mode. The controller may also be used on a PC, as the dongle acts as a standard USB HID.
Blu-ray Disc remotes
The PS3 is compatible with any Bluetooth Blu-ray Disc/DVD remote control. With a USB or Bluetooth adapter it is also compatible with many Blu-ray Disc/DVD and universal remote controls. Unlike the PS2, the PS3 does not have an infrared receiver; all compatible remote controls use Bluetooth instead.
Blu-ray Disc Remote Control
The Blu-ray Disc Remote Control (CECHZR1) is a Bluetooth remote control which features standard Blu-ray Disc and DVD remote functions such as chapter display/select and one-touch menu control. In addition, it has all the standard PlayStation buttons: the d-pad, the cross, circle, square and triangle buttons, L1, L2, L3, R1, R2, R3, Start, Select and a PS/Home button for turning the PS3 on and off and going to the XMB.
Media/Blu-ray Disc Remote Control
The Media/Blu-ray Disc Remote Control (CECHZRC1) controls the PlayStation 3, TV (including switching between 2D and 3D modes on 3D TVs) and audio system, has enhanced controls for Blu-ray Disc movies, streaming movies and music, and is compatible with services available on the PS3 system such as Netflix. It was released on October 24, 2011.
Rhythm game peripherals
Various rhythm game peripherals are available for the PlayStation 3, including guitar controllers, drum kit controllers, microphones, DJ turntables, and a keyboard controller. Most of these peripherals were produced for one of three franchises: Guitar Hero, Rock Band and SingStar.
uDraw GameTablet
The uDraw GameTablet is a graphics tablet designed to be used with various games. It was produced by THQ and released for the PlayStation 3 on November 15, 2011. The PlayStation 3 and Xbox 360 versions of the uDraw were a commercial failure and were discontinued in February 2012; THQ filed for bankruptcy the following year.
Tony Hawk Shred Board
A wireless skateboard controller for the Tony Hawk: Shred (and Ride) games. It replaces the previous Tony Hawk Ride board, also by Activision (the Ride board is not forward compatible with the Shred game).
USB controllers
Most commercial USB controllers are compatible with the PlayStation 3, as it supports standard USB human interface devices. This includes gamepads, joysticks and steering wheel controllers. A limitation of this is that not all such controllers provide the same range of inputs as a Sixaxis/DualShock 3 controller (fewer buttons or joysticks, for example), so they may not be practical in all games. When such a controller is used with games which require Sixaxis functionality or the use of the analog buttons, usability is also limited.
Many PlayStation 2 games which were programmed to use the analog functionality of the PlayStation 2 controller's buttons will not accept non-analog input; therefore, Sixaxis or DualShock 3 controllers must be used (though this could potentially be solved with future firmware updates). Non-standard USB controllers such as Xbox 360 wired controllers are not compatible with the PlayStation 3. These often also require specific drivers for use on PCs (Windows XP and up).
Other compatible input devices
It is possible for game developers to add support for additional devices, and title software updates can further extend compatibility. Additionally, most standard USB or Bluetooth keyboards and mice will work on the PS3. A keyboard and mouse can be used to navigate the XMB or the console's web browser. A keyboard and mouse will work in games specifically programmed to use them, and in backwards compatibility mode for supported PSOne and PS2 games.
Audio/Visual Peripherals
Surround Bar
On October 13, 2010, Sony announced an official surround sound system for the PS3 through the official PlayStation YouTube channel.
Headsets
The PlayStation 3 does not support game audio through USB headsets. However, most commercial USB headsets can be used for voice communication. In addition, the PlayStation 3 supports some PlayStation 2 USB accessories, including the USB SOCOM: U.S. Navy SEALs headset by Logitech, the SingStar microphones and the built-in microphone on the EyeToy for video and voice chat (although the EyeToy Play game associated with the EyeToy is not available for use on European PlayStation 3s). Since the PlayStation 3 supports Bluetooth technology, any type of wireless headset is compatible with the system; however, Bluetooth wireless headsets are not compatible with PlayStation 2 games which use the USB headsets (due to being programmed for them only), and therefore the USB headsets must still be used (though this could potentially be solved with future firmware updates). On September 12, 2007, Logitech announced a new Cordless Vantage Headset for the PlayStation 3. The Blu-ray Disc retail version of Warhawk comes bundled with a Jabra BT125 Bluetooth headset in North America and the Jabra BT135 in Europe. Mad Catz also produces a NASCAR/Dale Earnhardt Jr headset in Amp and National Guard colors.
Official wireless Bluetooth headset
On June 27, 2008, it was announced that the headset that would be paired with the Blu-ray Disc version of SOCOM: U.S. Navy SEALs Confrontation would be the official Bluetooth headset for the PlayStation 3. It comes with a charging cradle so that it may charge while connected to one of the system's USB ports, which is marketed as a convenient way to store the headset when not in use. The official headset allows for high-quality voice chat, and provides volume level, battery level, charging status and connection status indicators on the PS3's on-screen display. The headset can be used as a microphone when docked in the charging cradle; voice output from the PS3 is automatically transferred to the TV in this case. The official PS3 headset is also compatible with the PSP Go, as well as Bluetooth-capable PCs and mobile phones. In November 2010, Sony announced that it would be producing a new version of the Bluetooth headset, which is 30% smaller and would replace the existing model. The redesigned headset also features stronger noise cancellation technology.
An "Urban Camouflage" version of the headset was released on April 19, 2011 in the US to coincide with the launch of SOCOM 4: US Navy SEALs.
PlayStation 3 Wireless Stereo Headset
On September 6, 2011, Sony released its first wireless stereo headset, which allows users to hear both in-game audio and voice chat. The headset runs independently of the HDMI, optical and A/V outputs, and instead connects wirelessly via a USB dongle (which can also be used to connect it to a PC or Mac). The headset requires system software update version 3.70. Other features include virtual surround sound (up to 7.1; media dependent) and on-screen status notifications. Sony added an app for the PS3 and PS4 that allows the user to change the sound settings of the headset. Several game developers have created settings specifically for their games.
PlayTV
Officially announced on August 22, 2007, PlayTV is a twin-channel DVB-T tuner peripheral with digital video recorder (DVR) software which allows users to record television programs to the PlayStation 3 hard drive for later viewing, even while playing a game. The device was launched in the UK on September 19, 2008, with other regions in Europe following. It can also be used on a PSP via Remote Play to watch live and recorded TV, and to schedule new recordings. It was reported that Australia would receive the PlayTV accessory only two months after Europe; however, after a delay of just over a year, PlayTV was finally released in Australia on November 27, 2009. The PlayTV accessory comes bundled with an overlay sticker that fits onto the face of the Blu-ray Disc Remote Control to show PlayTV-specific functions, which are mapped to the remote's existing buttons. A similar device, known as Torne, has been released for the Japanese market based on the Japanese ISDB-T digital terrestrial standard. Since North American markets, including the United States, Canada and Mexico, use the ATSC digital standard, neither the DVB-T-based PlayTV device nor the ISDB-T-based Torne was released in these territories, nor can they be used to pick up broadcasts there. torne (CECH-ZD1J) is an ISDB-T tuner peripheral for the Japanese market which, like PlayTV, comes with DVR software. It was first announced on January 14, 2010 for release on March 18 of the same year. Like PlayTV, it is capable of recording and playing back live TV, even while in a game or playing other media (e.g. a DVD or Blu-ray Disc), and can be accessed on the PSP via Remote Play. Unlike PlayTV, torne features PS3 trophy support. In June 2010 Sony released torne software version 2.00, which enables MPEG-4 AVC compression, allowing recordings to be compressed to a third of the size of the originally captured MPEG-2 streams. The update also added the ability to watch, fast-forward and rewind programs while they are still recording, and to update the user's PSN status.
PlayStation Eye
The PlayStation Eye is an updated version of the EyeToy USB webcam designed for the PlayStation 3. It does not work with PS2 EyeToy games, but the PS3 does support the PlayStation 2 EyeToy, using its camera and microphone functionalities. A firmware update enabled the PlayStation 3 to support all USB webcams which use the USB Video Class.
A/V cables
Both official and standard third-party HDMI cables are compatible.
For analog video, official D-Terminal (Japan only) and component (YPBPR) A/V cables are available and all RF-modulator, composite, S-Video, RGB SCART and YPBPR component cables for the PlayStation and PlayStation 2 are compatible with the PlayStation 3, as they utilize the same "A/V Multi Out" port. On the audio side, A/V cables connected to the "A/V Multi out" allow 2.0ch (stereo), while optical "Digital out" (TOSLINK) allows both 2.0ch (LPCM) and 5.1ch (Dolby Digital & DTS) and "HDMI out" (Ver.1.3) supports 2.0ch, 5.1ch and 7.1ch (various formats). Units sold in NTSC regions are SD/ED NTSC, 720p, 1080i and 1080p compliant, while those available in PAL regions are compatible with SD/ED PAL, 720p, 1080i and 1080p. An NTSC system (480i/480p) cannot output PAL (576i/576p) games and DVDs (DVD-Video/DVD-Audio) – however PAL units can display "All Region" NTSC DVDs. This regional lock does not affect HD output (720p/1080i/1080p) – except for Blu-ray Disc movies. HD line HDMI cable: 1080p (HD), 1080i (HD), 720p (HD), 576p (ED PAL), 480p (ED NTSC), 480i (SD NTSC) D-Terminal (D端子) cable (SCPH-10510) Japanese market D5: 1080p (HD), 720p (HD), 480p (ED NTSC) /480i (SD NTSC) D4: 720p (HD), 480p (ED NTSC) /480i (SD NTSC) D3: 1080i (HD), 480p (ED NTSC) /480i (SD NTSC) D2: 480p (ED NTSC) /480i (SD NTSC) D1: 480i (SD NTSC) Component A/V (YPBPR) cable (SCPH-10490): 1080p (HD), 1080i (HD), 720p (HD), 576p (ED PAL) /576i (SD PAL), 480p (ED NTSC) /480i (SD NTSC) SD line RGB SCART (Péritel) cable (SCPH-10142): 576i (SD PAL), 480i (SD NTSC) European market A/V Multi (AVマルチ) cable (VMC-AVM250): 480p (ED NTSC) /480i (SD NTSC) Japanese market S-Video cable (SCPH-10480): 576i (SD PAL), 480i (SD NTSC) A/V (Composite video) cable (SCPH-10500) (bundled with all systems): 576i (SD PAL), 480i (SD NTSC) Storage peripherals Memory card adapter The PlayStation 3 Memory Card Adapter (CECHZM1) is a device that allows data to be transferred from a PlayStation or PlayStation 2 memory card to the PlayStation 3's hard disk. At launch, the device did not support transferring saved game files back to a memory card, but upon the release of the PlayStation 3 system software version 1.80, the user is now able to transfer PS1 and PS2 game saves from the PS3 directly onto a physical Memory Card via the adapter. PlayStation 2 saved game files can also be transferred between PlayStation 3 users via other current memory card formats. The device connects to the PlayStation 3's USB port on one end through a USB Mini-B cable (not included with adapter, but it was included with the console itself), and features a PlayStation 2 memory card port on the other end. The adapter works with every PlayStation 3 model, regardless of whether it is compatible with PlayStation 2 games or not. The adapter was available for purchase simultaneously with the console's launch. The Memory Card Adapter was released on 25 May 2007 in the UK. Other accessories AC adapter charging kit The AC adapter (CECHZA1) charging kit allows the charging of two USB-powered devices, such as the DualShock 3, Sixaxis, PSP (2000, 3000 and Go models), wireless keypad and wireless headset via a wall power plug, eliminating the need to have a PS3 running to charge the accessories. It includes an AC adapter, one 1.5m/4.92 ft. long USB cable (Type A – Mini-B) and one 2 m/6.56 ft long AC power cable. 
USB 2.0 Cable Pack The USB 2.0 Cable Pack contains two USB cables (Type A – Mini-B) allowing controllers and other USB-powered devices to be recharged while playing a game by plugging them into the console or powered USB hub (hub must be connected to a host device, such as a console, to charge Sixaxis or DualShock 3 controllers). The included cables feature 24-karat gold connectors. Printers Canon, Epson, and Hewlett-Packard each have several printers which are compatible with the system. See also DualShock References External links PlayStation3 Accessories Sony Wireless Bluetooth Headset Video game controllers
895493
https://en.wikipedia.org/wiki/Clock%20DVA
Clock DVA
Clock DVA are a musical group from Sheffield, England, whose style has touched on industrial, post-punk, and EBM. The group was formed in 1978 by Adolphus "Adi" Newton and Steven "Judd" Turner. Like that of their contemporaries Heaven 17, Clock DVA's name was inspired by the Russian-influenced Nadsat language of Anthony Burgess's novel A Clockwork Orange. Dva is Russian for "two".
History
1978–1981: White Souls in Black Suits and Thirst
Newton had previously worked with members of Cabaret Voltaire in a collective called The Studs and with Ian Craig Marsh and Martyn Ware in a band called The Future. He formed the first lineup of Clock DVA in 1978 with Judd Turner (bass), David J. Hammond (guitar), Roger Quail (drums) and Charlie Collins (saxophone, clarinet) (born 26 September 1952, Sheffield). Clock DVA was originally known for making a form of experimental electronic music involving treated tape loops and synthesizers such as the EMS Synthi E. Clock DVA became associated with industrial music with the 1980 release of their cassette album White Souls in Black Suits on Throbbing Gristle's Industrial Records. Paul Widger joined on guitar. The LP Thirst, released on Fetish Records, followed in 1981 to a favourable critical reaction, knocking Adam and the Ants' Dirk Wears White Sox from the top of the NME Indie Charts, by which time the band had combined musique concrète techniques with standard rock instrumentation. "4 Hours", the single from Thirst, was later covered by former Bauhaus bassist David J on his 1985 solo EP Blue Moods Turning Tail. The band split up in 1981, with the non-original members of the band (Quail, Collins, and Widger) going on to form The Box. Turner died in September 1981 due to an accidental drug overdose.
1982–1984: Advantage
In 1982, Newton formed a new version of the band that included future Siouxsie and the Banshees guitarist John Valentine Carruthers and signed a deal with the major label Polydor Records. The single "High Holy Disco Mass" (which was released under the name DVA) and the EP Passions Still Aflame were released in 1982, preceding the release of the album Advantage (with singles "Resistance" and "Breakdown") in 1983. Trouser Press characterized Advantage as "their strongest, most powerful LP, a funky concoction of intense dance-powered bass/drums drive with splatters of feedback, angst-ridden vocals by Newton, tape interruptions and dollops of white-noise sax and trumpet." After a European tour in 1983, however, the band split acrimoniously. Adi Newton went on to form The Anti-Group, or T.A.G.C. They released several albums continuing in a similar vein to the early Clock DVA, yet more experimental.
1987–1994: Buried Dreams, Man-Amplified and Sign
In 1987, Newton reactivated DVA and invited Dean Dennis and Paul Browse back into the fold to aid Newton's use of computer-aided sampling techniques which he had been developing in The Anti-Group. They released Buried Dreams (1989), an electronic album which (along with its single "The Hacker") received critical acclaim as a pioneering work in the cyberpunk genre. It is also rumored to have been the CD found in Jeffrey Dahmer's stereo at the time of his arrest, according to a 1990s piece published by Alternative Press. Browse left the group in 1989 and was replaced by Robert E. Baker. The album Man-Amplified (1992), an exploration of cybernetics, was the next release. Digital Soundtracks (1992), an instrumental album, followed. Following Dennis's departure from the group, Newton and Baker produced the album Sign (1993).
1995–2007: Hiatus
After the release of Sign and related singles, Clock DVA toured Europe (line-up: Newton & Baker with Andrew McKenzie and Ari Newton) and Newton relocated to Italy. However, their Italian record label at the time, Contempo, folded, which caused a number of problems. Collective, an anthology album and box set, was released in 1994. Newton began working on new material with Brian Williams, Graeme Revell (from SPK) and Paul Haslinger, but continued problems with record labels eventually caused Newton and Clock DVA to take a long break from the music scene. In 1998, Czech record label Nextera released a reissue of Buried Dreams, sanctioned by Dean Dennis and Paul Browse but not by Newton.
2008–present: Reactivation
Adi Newton reactivated Clock DVA, along with his creative partner Jane Radion Newton, in 2008. Since 2011, Clock DVA has performed at several electronic music festivals and venues throughout Europe with a new line-up consisting of Newton, Maurizio "TeZ" Martinucci and Shara Vasilenko. In November 2011, a new Clock DVA track, "Phase IV", was featured on a Wroclaw Industrial Festival compilation album. In January 2012, German record label Vinyl on Demand announced Horology, a vinyl box set compilation of early (1978–1980) Clock DVA material. Later on, the demo recordings included in this box set, namely Lomticks of Time, 2nd, Sex Works Beyond Entanglement, Deep Floor and Fragment, were reissued separately in early 2016. A historical overview exhibition of Clock DVA (photographs, video and audio) took place at the Melkweg cultural centre, Amsterdam, Netherlands, in February/March 2012. In July 2013, a new Clock DVA album called Post-Sign was released on Anterior Research. It was produced and composed by Adi Newton in 1994–95 as an instrumental companion album to Sign, though it remained unreleased at that time due to problems with record labels. According to Adi Newton, Mute Records were set to re-release the eight Clock DVA albums remastered in a box set in 2012. In 2013, Clock DVA played at the Incubate festival in Tilburg, The Netherlands. In 2014, Clock DVA released the album Clock 2 on a USB drive through their label Anterior Research. This limited edition release consists of 3 new studio tracks and various remixes of them, in addition to 4 video files. A 12" called Re-Konstructor / Re-Kabaret 13 was released shortly afterwards. A version of the release was also made available on streaming sites and for digital download. A further EP, Neo Post Sign, containing tracks recorded in 1995–96 but omitted from the Post-Sign album, was released in early 2015. Also in 2014, members of Clock DVA and fellow Sheffield band In the Nursery joined former Cabaret Voltaire vocalist Stephen Mallinder in a performance under the name IBBERSON. The performance took place at the newly built John Pemberton Lecture Theatres in the School of Health and Related Research at the University of Sheffield, which was constructed in the approximate location of the original Western Works studio where the earliest Clock DVA recordings were made. The name "IBBERSON" is a reference to a sign which used to hang outside the studio building. In July 2015, another vinyl box set of early material was released on Vinyl on Demand.
Horology 2 - Clockdva, The Future & Radiophonic Dvations features recordings made in late 1977/1978 period by Adi Newton prior to and during the formative period of development of Clock DVA, including the original The Future recordings made by the trio of Adi Newton, Martyn Ware, and Ian Craig Marsh before The Future developed into The Human League and Clock DVA. In September 2016, Clock DVA performed a series of live dates in the United States. Discography Albums 1980 – White Souls in Black Suits (Industrial Records) 1981 – Thirst (Fetish Records) 1983 – Advantage (Polydor Records) 1989 – Buried Dreams (Interfisch Records) 1990 – Transitional Voices (live) (Interfisch Records) 1992 – Man-Amplified (Contempo) 1992 – Digital Soundtracks (Contempo) 1993 – Sign (Contempo) 1994 – Collective (anthology) (Cleopatra) / 3xCD boxset (Hyperium/Sub-Mission) 2012 – Horology – DVAtion 78/79/80 6xLP boxset (Vinyl On Demand) 2013 – Post-Sign (Anterior Research Media Comm) 2014 – Clock 2 (Anterior Research Media Comm) - 4GB USB Device 2015 - Horology 2 - Clockdva, The Future & Radiophonic Dvations 5xLP boxset (Vinyl On Demand) 2016 - Lomticks of Time LP (Vinyl On Demand) 2016 - 2nd 2xLP (Vinyl On Demand) 2016 - Sex Works Beyond Entanglement LP (Vinyl On Demand) 2016 - Deep Floor LP (Vinyl On Demand) 2016 - Fragment LP (Vinyl On Demand) 2019 - Horology 3 4xLP boxset (Vinyl On Demand) Singles & EPs 1978 – Lomticks of Time (not on label) 1978 – 2nd (Dvation) 1979 – Deep Floor (Dvation) 1979 – Fragment (Dvation) 1979 – Group Fragments (Dvation) 1981 – 4 Hours (Fetish Records) 1982 – Passions Still Aflame (Polydor) 1982 – High Holy Disco Mass (Polydor) 1983 – Resistance (Polydor) 1983 – Breakdown (Polydor) 1988 – The Hacker (Interfisch) 1988 – The Act (Interfisch) 1988 – Hacker/Hacked (Interfisch) 1989 – Sound Mirror (Interfisch) 1991 – Final Program (Contempo) 1992 – Bitstream (Contempo) 1992 – Black Words on White Paper (Contempo) 1992 – Virtual Reality Handbook (Minus Habens) 1993 – Voice Recognition Test (Contempo) 1993 – Eternity (Contempo) 2014 - Re-Konstructor / Re-Kabaret 13 (Anterior Research Media Comm) 2015 - Neo Post Sign (Anterior Research Media Comm) 2016 - Neoteric (Anterior Research Media Comm) 2019 - Neoteric RMX4 12" (Anterior Research Media Comm) 2020 - Horology IV 2x7" with book (Vinyl On Demand) Video 1993 – Kinetic Engineering (Contempo) References Further reading External links Anterior Research/Clock DVA official website The Clock DVA on Myspace Last.FM group Review on TrouserPress British industrial music groups English electronic music groups English new wave musical groups English post-punk music groups British electronic body music groups Cyberpunk music Mute Records artists Musical groups established in 1978 Musical groups from Sheffield Wax Trax! Records artists Industrial Records artists 1978 establishments in England Cassette culture 1970s–1990s
13192
https://en.wikipedia.org/wiki/Hacking
Hacking
Hacking may refer to: Places Hacking, an area within Hietzing, Vienna, Austria People Douglas Hewitt Hacking, 1st Baron Hacking (1884–1950), British Conservative politician Ian Hacking (born 1936), Canadian philosopher of science David Hacking, 3rd Baron Hacking (born 1938), British barrister and peer Sports Hacking (falconry), the practice of raising falcons in captivity then later releasing into the wild Hacking (rugby), tripping an opposing player Pleasure riding, horseback riding for purely recreational purposes, also called hacking Shin-kicking, an English martial art also called hacking Technology Hacker, a computer expert with advanced technical knowledge Hacker culture, activity within the computer programmer subculture Security hacker, someone who breaches defenses in a computer system Cybercrime, which involves security hacking Phone hacking, gaining unauthorized access to phones ROM hacking, the process of modifying a video game's program image Other uses Roof and tunnel hacking, unauthorized exploration of roof and utility tunnel spaces See also Hack (disambiguation) Hacker (disambiguation) Hacks (disambiguation) List of hacker groups
33835161
https://en.wikipedia.org/wiki/Prep%20%26%20Landing%3A%20Naughty%20vs.%20Nice
Prep & Landing: Naughty vs. Nice
Prep & Landing: Naughty vs. Nice is a 2011 computer-animated 3-D television special, produced by Walt Disney Animation Studios and directed by Kevin Deters and Stevie Wermers-Skelton. It aired on December 5, 2011, on the ABC TV channel. The special is the second and final half-hour Christmas special, and the fourth and final short film in the Prep & Landing series, after Prep & Landing, Tiny's BIG Adventure, and Operation: Secret Santa.
Plot
The beginning of the special introduces the Coal Elf Brigade, a special unit of Christmas elves resembling coal miners that is responsible for delivering lumps of coal to naughty children. While this may seem cruel to some, the brigade adds small, encouraging notes to the lumps, such as "Try harder next year," in an attempt to steer the children back to the nice list. With the Big 2-5 fast approaching, Wayne and Lanny must race to recover classified North Pole technology that has fallen into the hands of a hacker identified only as "jinglesmell1337." Desperate to prevent Christmas from descending into chaos, Wayne seeks out (at the insistence of Magee) the foremost Naughty Kid expert to aid in the mission, a bombastic member of the Coal Elf Brigade who also happens to be his estranged younger (but larger) brother, Noel. Reluctant to take the extroverted Noel along with him, Wayne relents, and Noel joins the Prep & Landing team on the mission. During the trip, Noel and Wayne reminisce about their childhood, when they worked together far better than they do now. As the trio arrives at the hacker's house, Wayne sets off a booby trap, imperiling the entire team; Noel manages to defend himself, Wayne takes a particular beating from the trap's various mechanisms, and Lanny makes it into the hacker's room, only to accidentally "sparkle" himself and end up taken captive. The hacker then reveals herself to be Grace Goodwin, whose sole mission is to get herself off the naughty list. She believes she had been set up by her toddler brother, Gabriel, who had destroyed her favorite toy and, by crying, ruined her chance to ask Santa for a new one. After a somewhat intoxicated Lanny suggests using the "magic word" to get the password for the device that will get her off the list, she does just that: she uses the word "please" as the password, since genuinely naughty kids never say "please." At first, she appears successful in changing her status from naughty to nice, but the device malfunctions, threatening to place the entire planet on the naughty list unless she and the team can pull off a risky operation to fix the problem. Meanwhile, Wayne is particularly bitter at being "shown up" by his younger brother, prompting a fight in the street in front of Grace's house in which Wayne goes as far as to say that he wishes he never had a brother. Shocked and hurt by this statement, Noel (who always idolized Wayne growing up) asks Wayne to say he didn't mean it, tells him that he had always looked up to him and that Wayne was his hero, and then throws the Christmas present he had intended to give Wayne at him. The gift, a toy sled that Wayne had wanted as a kid but was never able to get, prompts Wayne to reconcile with Noel and carry out the mission. Grace, watching the whole argument unfold, learns a powerful lesson and gains a newfound appreciation for her younger brother. Wayne then receives a call from Magee, who tells him that the device is causing bigger problems: every single child is being transferred to the naughty list.
Wayne tells them that the antenna on the Conduct Calculator is broken, so Mr. Thistleton tells him to fix it and attach it to a powerful antenna to reverse the damage. Grace realizes that this is all her fault, and Noel asks how they are going to find an antenna until Lanny, who had come outside too, spots a nearby building with a strong antenna. Grace helps fix the Conduct Calculator as they make their way to the building; once it is fixed, she tosses the calculator to Wayne and apologizes for being naughty. As Noel and Wayne climb the building, all the presents are being sucked up into the tube as the tree farm continues transferring every child onto the naughty list. Once Wayne and Noel reach the top, they realize that they cannot go near the antenna because of the electrical hazard. Wayne decides to tie the super sled to the device, but Noel cannot get a good shot because the flags are being blown in the wind. Wayne therefore jumps off the building with Noel before activating the parachute on his hat to fly up higher so Noel can get a clear shot at the antenna. Noel fires the grapple and lets go of it as it is pulled towards the antenna and sticks to it with the calculator attached, causing the satellite to return to normal; all the children are transferred back to the nice list and all the presents drop back into Santa's sack, saving Christmas. The next morning, the scene at the Goodwin house shows Gabriel giving Grace her new Christmas present, a replacement toy for the one he had destroyed a year prior. Meanwhile, back at the North Pole, Wayne and Noel both win the title of "Elves of the Year" for their efforts and cooperation (although the headline of the local paper misprints Wayne's name as "Dwayne").
Cast
Dave Foley as Wayne
Derek Richardson as Lanny
Sarah Chalke as Magee
Rob Riggle as Noel
Chris Parnell as Mr. Thistleton
W. Morgan Sheppard as 'The Big Guy' Santa Claus
Emily Alyn Lind as Grace Goodwin
Hayes MacArthur as Thrasher
Phil LaMarr as Crumbles
Christopher Harrison as Gene the Salesman
Grace Potter as Carol
Kevin Deters as Hop With Me Bunny
Release
The special was released on the Prep & Landing: Totally Tinsel Collection DVD and Blu-ray on November 6, 2012, together with Prep & Landing, Operation: Secret Santa, and Tiny's BIG Adventure. The 3D version of the special is being screened at the Muppet*Vision 3D theatre at Disney California Adventure in Anaheim, CA.
Awards
On December 5, 2011, the day of its first broadcast, Prep & Landing: Naughty vs. Nice was nominated for eleven Annie Awards in seven categories by the International Animated Film Association, ASIFA-Hollywood. It won four awards, one each for character animation, character design, music, and storyboarding.
References External links American films English-language films American Christmas films 2010s American animated films 2011 computer-animated films 2011 films 2011 television films Films scored by Michael Giacchino Christmas television specials 2010s Christmas films 2010s Disney animated short films 2011 3D films
24601568
https://en.wikipedia.org/wiki/Arthur%20Levenson
Arthur Levenson
Arthur J. Levenson (February 15, 1914 – August 12, 2007) was a cryptographer, United States Army officer and NSA official who worked on the Japanese J19 and the German Enigma codes.
Biography
Arthur J. Levenson was born in Brooklyn, New York. He earned a B.S. in Mathematics from the City College of New York. He did graduate work in mathematics at New York University and Columbia University. He attained the rank of lieutenant colonel in the U.S. Army. Levenson was a graduate of the National War College. Levenson and his wife Marjorie West (1917–2011) are buried at Arlington National Cemetery.
World War II Service
At the beginning of World War II, the Army called Mr. Levenson to active duty from the Enlisted Reserve Corps. He was approved for Signal Corps Officer Candidate School at Fort Monmouth. Levenson was selected by Major William P. Bundy to be a member of the 6811th Signal Company. The 6811th Signal Company joined the British wartime code-breaking organization at Bletchley Park in Britain. At Bletchley Park, Levenson worked against both the ENIGMA and TUNNY German cipher machines in the famous Hut 6. Levenson developed a friendship with British cryptanalysts Alan Turing and Hugh Alexander. After V-E Day, Levenson was sent with a group of British and American officers to Germany, assigned to track down German cipher equipment and to locate and interrogate German cryptanalysts. An anecdote in his obituary in The Washington Post illustrates his operational contribution: In a 1999 PBS documentary about the decoding project, Mr. Levenson said the team at Bletchley sometimes deciphered the German messages before German forces in the field could read them. "If it was something hot," he said, "it'd get out in the field before the German commander got his." In one case, Mr. Levenson said, the team decoded a message from German military leader Erwin Rommel and determined that German tanks were converging at a spot in Normandy where U.S. paratroopers were planning to jump. "They were going to drop one of the airborne divisions right on top of a German tank division," Mr. Levenson said in the documentary. "They would have been massacred." At the last moment, plans were changed, and the paratroopers averted disaster.
National Security Agency
After completing his service overseas, he remained in the cryptologic business as a civilian with the organizations that eventually evolved into the National Security Agency. He was a member and subsequently Chief of the Technical Consultants Group, the prestigious cryptanalytic organization where the most difficult problems were attacked. During that period he initiated the program for sending out selected NSA working mathematicians to participate in the recruitment of promising college math students, a program that greatly enhanced the quality of the growing NSA professional work force. When the Office of Production in NSA was restructured to better focus its attacks, he was selected to organize and serve as the first Chief of ADVA, an organization dedicated to the exploitation of Soviet high-grade encryption systems. He led the design and implementation of the technical attack team. He took the lead in procuring high-level government support for the project from experts like William O. Baker, head of the Bell Laboratories and longtime member of the President's Foreign Intelligence Advisory Board. Levenson became chief of A Group, the major NSA organization devoted to analyzing Soviet Bloc communications.
Under his leadership, A Group was refocused to enhance the timeliness of its Signals Intelligence reporting to the intelligence community. Before he retired in December 1973, Levenson served as Chief of the Machine Processing Organization, responsible for the maintenance and operation of the large NSA facility which housed both commercial off-the-shelf computers and highly sophisticated special-purpose machines. Levenson brought computer management professionals from private industry into the organization, opening it up to innovation from outside the elite cryptologic workforce. Levenson retired with 32 years of Agency service.
Data Encryption Standard
In 1976, after retiring from NSA, Levenson and NSA and NBS representatives met with the Stanford University cryptography team which had publicly criticized NSA's proposed Data Encryption Standard, DES, as being too easy to crack. In this meeting with Whitfield Diffie, Martin Hellman, and Paul Baran, he tried to convince the critics that "56 [bits] is quite adequate", because (among other reasons) a brute-force attack would never be the weakest link in the security of systems that used DES. NSA succeeded in getting broad adoption of 56-bit DES, particularly in the financial industry. This allowed NSA and other countries to decipher most of the world's financial transactions, until the EFF DES cracker convinced banks to switch to stronger ciphers in 1998.
Family
He was married for 62 years to Marjorie West Levenson of Washington. He had three children, David West Levenson of Warren, N.J., Sarah Stromeyer of Austin and Rebecca Levenson Smith of Silver Spring, and two grandchildren.
Awards
Levenson was awarded the NSA Exceptional Civilian Service Award in 1969 and was inducted into the NSA Hall of Honor in 2009.
References External links American cryptographers Modern cryptographers 20th-century American Jews 2007 deaths Burials at Arlington National Cemetery United States Army colonels 1914 births 21st-century American Jews
149504
https://en.wikipedia.org/wiki/Spiral%20model
Spiral model
The spiral model is a risk-driven software development process model. Based on the unique risk patterns of a given project, the spiral model guides a team to adopt elements of one or more process models, such as incremental, waterfall, or evolutionary prototyping.
History
This model was first described by Barry Boehm in his 1986 paper, "A Spiral Model of Software Development and Enhancement". In 1988 Boehm published a similar paper to a wider audience. These papers introduce a diagram that has been reproduced in many subsequent publications discussing the spiral model. These early papers use the term "process model" to refer to the spiral model as well as to incremental, waterfall, prototyping, and other approaches. However, the spiral model's characteristic risk-driven blending of other process models' features is already present in these early papers. In later publications, Boehm describes the spiral model as a "process model generator", where choices based on a project's risks generate an appropriate process model for the project. Thus, the incremental, waterfall, prototyping, and other process models are special cases of the spiral model that fit the risk patterns of certain projects. Boehm also identifies a number of misconceptions arising from oversimplifications in the original spiral model diagram. He says the most dangerous of these misconceptions are: that the spiral is simply a sequence of waterfall increments; that all project activities follow a single spiral sequence; and that every activity in the diagram must be performed, and in the order shown. While these misconceptions may fit the risk patterns of a few projects, they are not true for most projects. In a National Research Council report, this model was extended to include risks related to human users. To better distinguish them from "hazardous spiral look-alikes", Boehm lists six characteristics common to all authentic applications of the spiral model.
The six invariants of the spiral model
Authentic applications of the spiral model are driven by cycles that always display six characteristics. Boehm illustrates each with an example of a "dangerous spiral look-alike" that violates the invariant.
Define artifacts concurrently
Sequentially defining the key artifacts for a project often lowers the possibility of developing a system that meets stakeholder "win conditions" (objectives and constraints). This invariant excludes "hazardous spiral look-alike" processes that use a sequence of incremental waterfall passes in settings where the underlying assumptions of the waterfall model do not apply. Boehm lists these assumptions as follows:
The requirements are known in advance of implementation.
The requirements have no unresolved, high-risk implications, such as risks due to cost, schedule, performance, safety, user interfaces, organizational impacts, etc.
The nature of the requirements will not change very much during development or evolution.
The requirements are compatible with all the key system stakeholders' expectations, including users, customer, developers, maintainers, and investors.
The right architecture for implementing the requirements is well understood.
There is enough calendar time to proceed sequentially.
In situations where these assumptions do apply, it is a project risk not to specify the requirements and proceed sequentially. The waterfall model thus becomes a risk-driven special case of the spiral model.
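Boehm's checklist above lends itself to a simple illustration of the spiral model's role as a "process model generator". The sketch below is purely illustrative; the flag names and the decision rule are assumptions made for this example and are not taken from Boehm. It treats each waterfall assumption as a boolean and suggests a waterfall-style special case only when all of them hold, falling back to risk-driven spiral cycles otherwise.

```python
# Illustrative sketch of the spiral model as a "process model generator":
# each field mirrors one of the waterfall assumptions listed above. The names
# and the decision rule are assumptions made for this example, not Boehm's.

from dataclasses import dataclass

@dataclass
class ProjectRiskProfile:
    requirements_known_up_front: bool
    no_high_risk_requirement_implications: bool
    requirements_stable: bool
    stakeholder_expectations_compatible: bool
    architecture_well_understood: bool
    enough_calendar_time: bool

def suggest_process(profile: ProjectRiskProfile) -> str:
    """Suggest a special case of the spiral model for a project's risk pattern."""
    if all(vars(profile).values()):
        # All waterfall assumptions hold: proceeding sequentially is the low-risk option.
        return "waterfall (risk-driven special case of the spiral model)"
    if not profile.requirements_known_up_front or not profile.requirements_stable:
        # Unclear or volatile requirements: resolve them with prototyping cycles first.
        return "spiral cycles emphasizing evolutionary prototyping"
    # Otherwise iterate in increments, revisiting risks at each cycle.
    return "spiral cycles delivering incremental builds"

# Example: requirements are stable but the architecture is unproven.
profile = ProjectRiskProfile(True, True, True, True, False, True)
print(suggest_process(profile))   # -> spiral cycles delivering incremental builds
```

In practice the choice is of course richer than three branches, but the point is the one made above: process decisions follow from the project's risk pattern rather than from a fixed sequence of phases.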
Perform four basic activities in every cycle This invariant identifies the four activities that must occur in each cycle of the spiral model: Consider the win conditions of all success-critical stakeholders. Identify and evaluate alternative approaches for satisfying the win conditions. Identify and resolve risks that stem from the selected approach(es). Obtain approval from all success-critical stakeholders, plus commitment to pursue the next cycle. Project cycles that omit or shortchange any of these activities risk wasting effort by pursuing options that are unacceptable to key stakeholders, or are too risky. Some "hazardous spiral look-alike" processes violate this invariant by excluding key stakeholders from certain sequential phases or cycles. For example, system maintainers and administrators might not be invited to participate in definition and development of the system. As a result, the system is at risk of failing to satisfy their win conditions. Risk determines level of effort For any project activity (e.g., requirements analysis, design, prototyping, testing), the project team must decide how much effort is enough. In authentic spiral process cycles, these decisions are made by minimizing overall risk. For example, investing additional time testing a software product often reduces the risk due to the marketplace rejecting a shoddy product. However, additional testing time might increase the risk due to a competitor's early market entry. From a spiral model perspective, testing should be performed until the total risk is minimized, and no further. "Hazardous spiral look-alikes" that violate this invariant include evolutionary processes that ignore risk due to scalability issues, and incremental processes that invest heavily in a technical architecture that must be redesigned or replaced to accommodate future increments of the product. Risk determines degree of details For any project artifact (e.g., requirements specification, design document, test plan), the project team must decide how much detail is enough. In authentic spiral process cycles, these decisions are made by minimizing overall risk. Considering requirements specification as an example, the project should precisely specify those features where risk is reduced through precise specification (e.g., interfaces between hardware and software, interfaces between prime and sub-contractors). Conversely, the project should not precisely specify those features where precise specification increases the risk (e.g., graphical screen layouts, the behavior of off-the-shelf components). Use anchor point milestones Boehm's original description of the spiral model did not include any process milestones. In later refinements, he introduces three anchor point milestones that serve as progress indicators and points of commitment. These anchor point milestones can be characterized by key questions. Life Cycle Objectives. Is there a sufficient definition of a technical and management approach to satisfying everyone's win conditions? If the stakeholders agree that the answer is "Yes", then the project has cleared this LCO milestone. Otherwise, the project can be abandoned, or the stakeholders can commit to another cycle to try to get to "Yes." Life Cycle Architecture. Is there a sufficient definition of the preferred approach to satisfying everyone's win conditions, and are all significant risks eliminated or mitigated? If the stakeholders agree that the answer is "Yes", then the project has cleared this LCA milestone. 
Otherwise, the project can be abandoned, or the stakeholders can commit to another cycle to try to get to "Yes." Initial Operational Capability. Is there sufficient preparation of the software, site, users, operators, and maintainers to satisfy everyone's win conditions by launching the system? If the stakeholders agree that the answer is "Yes", then the project has cleared the IOC milestone and is launched. Otherwise, the project can be abandoned, or the stakeholders can commit to another cycle to try to get to "Yes." "Hazardous spiral look-alikes" that violate this invariant include evolutionary and incremental processes that commit significant resources to implementing a solution with a poorly defined architecture. The three anchor point milestones fit easily into the Rational Unified Process (RUP), with LCO marking the boundary between RUP's Inception and Elaboration phases, LCA marking the boundary between Elaboration and Construction phases, and IOC marking the boundary between Construction and Transition phases. Focus on the system and its life cycle This invariant highlights the importance of the overall system and the long-term concerns spanning its entire life cycle. It excludes "hazardous spiral look-alikes" that focus too much on initial development of software code. These processes can result from following published approaches to object-oriented or structured software analysis and design, while neglecting other aspects of the project's process needs. References Software development process
31051559
https://en.wikipedia.org/wiki/Legendre%20pseudospectral%20method
Legendre pseudospectral method
The Legendre pseudospectral method for optimal control problems is based on Legendre polynomials. It is part of the larger theory of pseudospectral optimal control, a term coined by Ross. A basic version of the Legendre pseudospectral method was originally proposed by Elnagar and his coworkers in 1995. Since then, Ross, Fahroo and their coworkers have extended, generalized and applied the method to a large range of problems. An application that has received wide publicity is the use of their method for generating real-time trajectories for the International Space Station. Fundamentals There are three basic types of Legendre pseudospectral methods: One based on Gauss-Lobatto points First proposed by Elnagar et al. and subsequently extended by Fahroo and Ross to incorporate the covector mapping theorem. Forms the basis for solving general nonlinear finite-horizon optimal control problems. Incorporated in several software products: DIDO, OTIS, PSOPT One based on Gauss-Radau points First proposed by Fahroo and Ross and subsequently extended (by Fahroo and Ross) to incorporate a covector mapping theorem. Forms the basis for solving general nonlinear infinite-horizon optimal control problems. Forms the basis for solving general nonlinear finite-horizon problems with one free endpoint. One based on Gauss points First proposed by Reddien Forms the basis for solving finite-horizon problems with free endpoints Incorporated in several software products: GPOPS, PROPT Software The first software to implement the Legendre pseudospectral method was DIDO in 2001. Subsequently, the method was incorporated in the NASA code OTIS. Years later, many other software products emerged at an increasing pace, such as PSOPT, PROPT and GPOPS. Flight implementations The Legendre pseudospectral method (based on Gauss-Lobatto points) has been implemented in flight by NASA on several spacecraft through the use of the software DIDO. The first flight implementation was on November 5, 2006, when NASA used DIDO to maneuver the International Space Station to perform the Zero Propellant Maneuver. The Zero Propellant Maneuver was discovered by Nazareth Bedrossian using DIDO. See also DIDO Chebyshev pseudospectral method Ross–Fahroo pseudospectral methods Ross–Fahroo lemma Covector mapping principle References Optimal control Numerical analysis Control theory
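For readers who want to see the basic machinery behind the Gauss-Lobatto variant described above, the sketch below computes the Legendre-Gauss-Lobatto (LGL) nodes, quadrature weights, and differentiation matrix on [-1, 1] with NumPy. It is an editor's minimal illustration using standard textbook formulas, not code drawn from DIDO, OTIS, or the other packages mentioned.

```python
# Minimal numerical sketch (editor's illustration) of the Legendre-Gauss-Lobatto
# ingredients used by the Gauss-Lobatto variant described above: collocation
# nodes, quadrature weights, and the differentiation matrix on [-1, 1].
import numpy as np
from numpy.polynomial import legendre as L

def lgl_nodes_weights_diffmatrix(N):
    """Return the N+1 LGL nodes, quadrature weights, and differentiation matrix."""
    PN = np.zeros(N + 1); PN[N] = 1.0             # coefficients of P_N in the Legendre basis
    # Interior LGL nodes are the roots of P_N'(x); the endpoints are -1 and +1.
    x = np.concatenate(([-1.0], np.sort(L.legroots(L.legder(PN))), [1.0]))
    PN_x = L.legval(x, PN)                        # P_N evaluated at the nodes
    w = 2.0 / (N * (N + 1) * PN_x**2)             # LGL quadrature weights
    # Standard LGL differentiation matrix.
    D = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i, j] = PN_x[i] / (PN_x[j] * (x[i] - x[j]))
    D[0, 0] = -N * (N + 1) / 4.0
    D[N, N] = N * (N + 1) / 4.0
    return x, w, D

# Quick check: the weights integrate 1 to 2 and x**2 to 2/3, and D differentiates
# a polynomial of degree <= N exactly (here d/dx of x**3 at the nodes).
x, w, D = lgl_nodes_weights_diffmatrix(8)
print(np.sum(w), np.sum(w * x**2))          # ~2.0 and ~0.6667
print(np.max(np.abs(D @ x**3 - 3 * x**2)))  # ~0 (machine precision)
```

In a pseudospectral optimal control setting, these nodes become the collocation points, the weights discretize the cost integral, and the matrix D turns the dynamics constraint into algebraic equations for a nonlinear programming solver.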
63020932
https://en.wikipedia.org/wiki/BlueLink%20%28company%29
BlueLink (company)
Bluelink, formerly known as Shadow Inc., is a political technology company that develops mobile apps designed to register, organize, and mobilize liberal voters. The company gained attention after its IowaReporterApp software failed during the 2020 Iowa Democratic caucuses. History The company, originally named Groundbase, was launched in December 2016 by Gerard Niemira and Krista Davis, who both worked on the digital outreach team for Hillary Clinton's 2016 presidential campaign. In January 2019, the company was acquired by Acronym, a liberal-leaning political nonprofit organization for which Niemira was also the COO and CTO. Shadow is a for-profit company. Acronym's CEO, Tara McGowan, described Shadow as "a political technology company". Shadow Inc. was then incorporated in Colorado in September 2019. Acronym CEO McGowan said in late January 2020 that Acronym is the sole investor in Shadow. In May 2020, Shadow Inc. was renamed Bluelink. Locations The company is registered in Colorado, lists its Denver office as its location, and says it has further offices in New York and Seattle. Shadow and its parent company Acronym shared addresses in Denver, Colorado, and Washington, DC. Management In May 2019, Gerard Niemira, CEO and CTO of Acronym, became the CEO of Shadow when it was renamed upon the purchase of the "nearly bankrupt company" Groundbase by Acronym. Niemira was formerly a product manager for Hillary Clinton's 2016 campaign, worked at kiva.org, and worked as an intern for Representative Eliot Engel in 2005. James "Jimmy" Hickey, COO, was an engineering manager for Hillary Clinton's 2016 campaign and previously was part of Sprinklr and Bloomberg Philanthropies. Krista Davis, CTO and Chief Architect, was previously a software engineer for Hillary Clinton's 2016 campaign. She also worked at Google for eight years. The current CEO is Irene Tollinger, who took over after Niemira stepped down. Products Shadow Inc. developed software for the campaigns of numerous Democratic candidates as well as mobile software applications for the 2020 Iowa Democratic caucuses and 2020 Nevada Democratic caucuses. Lightrail Lightrail moves information between different data sources for campaign messaging and data integration purposes. It is the company's flagship product. Acronym announced in November 2019 that Lightrail had been made available to all state parties and national candidates, under a trial contract with the Democratic National Committee. Party insiders have described it as "common in Democratic politics." IowaReporterApp The Iowa Democratic Party paid Shadow Inc. $63,183 to develop the IowaReporterApp. Before the 2020 Iowa caucus, the app and its developer were kept secret from the public by the Democratic party, although it was made public that there would be an app used for the caucus. The company published a new build (Version 1.1) of the IowaReporterApp two days before the caucuses. A bug in the code of the app caused the app to fail at the time of the 2020 Iowa Democratic caucuses. Gerard Niemira, the CEO of Shadow Inc., which created the app, stated that technology used by Democrats in prior elections was a "shitshow" and "tangled morass". Shadow came under scrutiny for its lack of maturity and the means it used to distribute the IowaReporterApp to users. Shadow required iOS device users to use Apple's behind-the-scenes beta-testing infrastructure, TestFlight, while Android users had to use the third-party app TestFairy to download the app.
Furthermore, a reporter from Motherboard downloaded the app to two Android devices, but it only opened on one of them. The next day, the site published an APK of the app. It also solicited the opinions of various cybersecurity experts. Dan Guido, the head of Trail of Bits, described the app as "hastily thrown together." Not-for-profit ProPublica commissioned a security audit of the app. It was determined to be "insecure", meaning an external entity could have hacked it. The app was reviewed by the Democratic National Committee in advance. David Bergstein said on behalf of the DNC that it was confident that security was being taken "extremely seriously." Several security and app experts have criticized the amateurish nature of the app. App-development expert Kasra Rahjerdi said "the app was clearly done by someone following a tutorial. It’s similar to projects I do with my mentees who are learning how to code." A team of researchers at Stanford University, including former Facebook chief security officer Alex Stamos, said that while analyzing the app, they found potentially concerning code within it, including hard-coded API keys. The app was written in React Native, authenticated through Auth0, and sent data to Google Cloud Functions. Nevada caucus app The Nevada Democratic Party paid Shadow $58,000 for its caucus reporting app, though the party announced it would no longer use the app and instead used Google Forms on iPads. Campaign and corporate affiliations Shadow Inc. developed software for the campaigns of numerous Democratic candidates. The Joe Biden, Pete Buttigieg and Kirsten Gillibrand presidential campaigns all made payments to the company. The Buttigieg campaign paid $42,500 to the company in July 2019 for "software rights and subscriptions" for a text-message service. The Biden campaign paid the company $1,225, also for a text-messaging service, while the Gillibrand campaign paid the company $37,400 for software and fundraising consulting. The Texas and Wisconsin state Democratic parties also contracted Shadow Inc. for undisclosed services. Shadow Inc. has worked closely with Democratic Party-affiliated companies, including Lockwood Strategy and FWIW Media. Shadow Inc.'s parent company Acronym has received large donations from hedge fund managers Seth Klarman and Donald Sussman and venture capitalist Michael Moritz, as well as film directors and producers Jeffrey Katzenberg and Steven Spielberg. References External links (now redirecting to bluelink.org) Software companies of the United States Software companies established in 2016 Internet activism 2020 United States presidential election American companies established in 2016
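The hard-coded API keys reported by the Stanford researchers are a general class of client-app weakness rather than anything specific to this app. The snippet below is a generic, editor-written Python illustration (the actual app was written in React Native); the environment-variable name is hypothetical.

```python
# Generic illustration of the issue flagged above (not the IowaReporterApp's
# actual code). A key compiled into a shipped client can be read by anyone
# who unpacks the package, so it is effectively public.
import os

HARDCODED_API_KEY = "sk_live_EXAMPLE_ONLY_not_a_real_key"   # anti-pattern

def get_api_key_insecure() -> str:
    # Recoverable from the distributed binary or app bundle.
    return HARDCODED_API_KEY

def get_api_key() -> str:
    # Preferred: inject the secret at runtime (variable name is hypothetical).
    key = os.environ.get("REPORTING_API_KEY")
    if key is None:
        raise RuntimeError("No API key configured; secrets should not ship in the client.")
    return key
```

Any string compiled into a distributed client can be recovered by unpacking the package, which is why secrets are normally issued per user at runtime or kept on the server side.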
41391791
https://en.wikipedia.org/wiki/Jason%20Nieh
Jason Nieh
Jason Nieh is a Professor of Computer Science and Co-Director of the Software Systems Laboratory at Columbia University. He is most well known for his work on virtualization. He was one of the early pioneers of operating-system-level virtualization, which led to the development of Linux containers and Docker, was an early proponent of desktop virtualization, and developed key technologies for mobile virtualization, including the Linux ARM hypervisor, KVM ARM. He was also the first to introduce virtual machines and virtual appliances to teach hands-on computer science courses such as operating systems, which has now become common practice at many universities. Nieh was the technical advisor to nine States regarding the Microsoft antitrust settlement and has been an expert witness before the United States International Trade Commission. He was Chief Scientist of Desktone, which was purchased by VMware, and currently holds the same position at Cellrox. Recognition He won the Sigma Xi Young Investigator Award, seven IBM Awards, and various best paper awards including the 2004 International Conference on Mobile Computing and Networking Best Paper Award, the 2011 Symposium on Operating Systems Principles Best Paper Award, and the 2012 SIGCSE Best Paper Award. He was elected as an ACM Fellow in 2019 "for contributions to operating systems, virtualization, and computer science education". He received a 2021 Guggenheim Fellowship. References External links Jason Nieh Living people 20th-century births American computer scientists Columbia University faculty Year of birth missing (living people) Fellows of the Association for Computing Machinery
30370212
https://en.wikipedia.org/wiki/Trust%20on%20first%20use
Trust on first use
Trust on first use (TOFU), or trust upon first use (TUFU), is an authentication scheme used by client software which needs to establish a trust relationship with an unknown or not-yet-trusted endpoint. In a TOFU model, the client will try to look up the endpoint's identifier, usually either the public identity key of the endpoint, or the fingerprint of said identity key, in its local trust database. If no identifier exists yet for the endpoint, the client software will either prompt the user to confirm they have verified the purported identifier is authentic, or if manual verification is not assumed to be possible in the protocol, the client will simply trust the identifier which was given and record the trust relationship into its trust database. If in a subsequent connection a different identifier is received from the opposing endpoint, the client software will consider it to be untrusted. TOFU implementations In the SSH protocol, most client software (though not all) will, upon connecting to a not-yet-trusted server, display the server's public key fingerprint, and prompt the user to verify they have indeed authenticated it using an authenticated channel. The client will then record the trust relationship into its trust database. A new identifier will cause a blocking warning that requires manual removal of the currently stored identifier. In HTTP Public Key Pinning, browsers will always accept the first public key returned by the server, and with HTTP Strict Transport Security, browsers will obey the redirection rule for the duration specified by the max-age directive. The XMPP client Conversations uses Blind Trust Before Verification, where all identifiers are blindly trusted until the user demonstrates will and ability to authenticate endpoints by scanning the QR-code representation of the identifier. After the first identifier has been scanned, the client will display a shield symbol for messages from authenticated endpoints, and a red background for others. In Signal the endpoints initially blindly trust the identifier and display non-blocking warnings when it changes. The identifier can be verified either by scanning a QR-code, or by exchanging the decimal representation of the identifier (called Safety Number) over an authenticated channel. The identifier can then be marked as verified. This changes the nature of identifier change warnings from non-blocking to blocking. In Jami and Ricochet, for example, the identifier is the user's call-sign itself. The ID can be exchanged over any channel, but until the identifier is verified over an authenticated channel, it is effectively blindly trusted. The identifier change also requires an account change; thus a MITM attack on the same account requires access to the endpoint's private key. In WhatsApp the endpoint initially blindly trusts the identifier, and by default no warning is displayed when the identifier changes. If the user demonstrates will and ability to authenticate endpoints by accessing the key fingerprint (called Security Code), the client will prompt the user to enable non-blocking warnings when the identifier changes. The WhatsApp client does not allow the user to mark the identifier as verified. In Telegram's optional secret chats the endpoints blindly trust the identifier. A changed identifier spawns a new secret chat window instead of displaying any warning. The identifiers can be verified by comparing the visual or hexadecimal representation of the identifier. The Telegram client does not allow the user to mark the identifier as verified.
In Keybase the clients can cross-sign each other's keys, which means trusting a single identifier allows verification of multiple identifiers. Keybase acts as a trusted third party that verifies a link between a Keybase account and the account's signature chain that contains the identifier history. The identifier used in Keybase is either the hash of the root of the user's signature chain, or the Keybase account name tied to it. Until the user verifies the authenticity of the signature chain's root hash (or the Keybase account) over an authenticated channel, the account and its associated identifiers are essentially blindly trusted, and the user is susceptible to a MITM attack. Model strengths and weaknesses The single largest strength of any TOFU-style model is that a human being must initially validate every interaction. A common application of this model is the use of ssh-rpc 'bot' users between computers, whereby public keys are distributed to a set of computers for automated access from centralized hosts. The TOFU aspect of this application forces a sysadmin (or other trusted user) to validate the remote server's identity upon first connection. For end-to-end encrypted communication, the TOFU model allows authenticated encryption without the complex procedure of obtaining personal certificates, which are vulnerable to CA compromise. Compared to the Web of Trust, TOFU has less maintenance overhead. The largest weakness of TOFU models that require manual verification is their inability to scale for large groups or computer networks. The maintenance overhead of keeping track of identifiers for every endpoint can quickly scale beyond the capabilities of the users. In environments where the authenticity of the identifier cannot be verified easily enough (for example, the IT staff of a workplace or educational facility might be hard to reach), the users tend to blindly trust the identifier of the opposing endpoint. Accidentally approved identifiers of attackers may also be hard to detect if the man-in-the-middle attack persists. As a new endpoint always involves a new identifier, no warning about a potential attack is displayed. This has caused a misconception among users that it is safe to proceed without verifying the authenticity of the initial identifier, regardless of whether the identifier is presented to the user or not. Warning fatigue has pushed many messaging applications to remove blocking warnings to prevent users from reverting to less secure applications that do not feature end-to-end encryption in the first place. Out-of-sight identifier verification mechanisms reduce the likelihood that secure authentication practices are discovered and adopted by the users. First known use of the term The first known formal use of the term TOFU or TUFU was by CMU researchers Dan Wendlandt, David Andersen, and Adrian Perrig in their research paper "Perspectives: Improving SSH-Style Host Authentication With Multi-Path Probing", published in 2008 at the Usenix Annual Technical Conference. Moxie Marlinspike mentioned Perspectives and the term TOFU in the DEF CON 18 proceedings, with reference to comments made by Dan Kaminsky, during the panel discussion "An Open Letter, A Call to Action".
An audience member suggested that the SSH public key infrastructure (PKI) model was superior to the SSL/TLS PKI model, to which Marlinspike replied: Related work on the subject Work toward creating visual representations of server certificate 'fingerprint' hashes has been implemented into OpenSSH in the form of ASCII art. The intention is for users to visually recognize a 'graphical' image, instead of a long string of letters and numbers. The original research paper was written by Adrian Perrig and Dawn Song at the Carnegie Mellon University Computer Science Department. The originator of the 'TUFU' acronym was describing the inspiration for the 'Perspectives Firefox Plug In', which was designed to strengthen the SSL/TLS PKI model by contacting network notaries whenever the browser connects to an HTTPS website. Prior work The topics of trust, validation, and non-repudiation are fundamental to all work in the field of cryptography and digital security. See also List of information technology acronyms Man-in-the-middle attack References External links "DEF CON 18 Schedule, Open Letter - Call to Action" Firefox extension that contacts network notaries whenever the browser connects to an HTTPS website Hash Visualization in OpenSSH Computer security
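A minimal sketch of the generic TOFU check described at the start of this article: look the endpoint's identifier up in a local trust store, record it on first contact, and flag any later mismatch. This is an editor's illustration; the store location, function names, and return values are invented and do not come from OpenSSH, Signal, or the other clients discussed above.

```python
# Minimal sketch of the generic TOFU check described above (editor's
# illustration; the store path and policy are invented, not taken from
# any specific client).
import json, os

TRUST_STORE = os.path.expanduser("~/.tofu_store.json")   # hypothetical location

def load_store() -> dict:
    try:
        with open(TRUST_STORE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def check_endpoint(endpoint_id: str, presented_fingerprint: str) -> str:
    """Return 'trusted-first-use', 'trusted', or 'MISMATCH' for a connection attempt."""
    store = load_store()
    known = store.get(endpoint_id)
    if known is None:
        # First contact: record the identifier (optionally after prompting the user).
        store[endpoint_id] = presented_fingerprint
        with open(TRUST_STORE, "w") as f:
            json.dump(store, f)
        return "trusted-first-use"
    if known == presented_fingerprint:
        return "trusted"
    # A different identifier on a later connection is the case TOFU flags.
    return "MISMATCH"

# Expected on a first run: trusted-first-use, trusted, MISMATCH
print(check_endpoint("example.org:22", "SHA256:abc..."))
print(check_endpoint("example.org:22", "SHA256:abc..."))
print(check_endpoint("example.org:22", "SHA256:zzz..."))
```

Real clients differ mainly in how they handle the mismatch case (blocking warnings, non-blocking warnings, or silently starting a new session), which is the design space surveyed above.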
2164341
https://en.wikipedia.org/wiki/Eckert%E2%80%93Mauchly%20Computer%20Corporation
Eckert–Mauchly Computer Corporation
The Eckert–Mauchly Computer Corporation (EMCC) (March 1946 – 1950) was founded by J. Presper Eckert and John Mauchly. It was incorporated on December 22, 1947. After building the ENIAC at the University of Pennsylvania, Eckert and Mauchly formed EMCC to build new computer designs for commercial and military applications. The company was initially called the Electronic Control Company, changing its name to Eckert–Mauchly Computer Corporation when it was incorporated. In 1950, the company was sold to Remington Rand, which later merged with Sperry Corporation to become Sperry Rand, and survives today as Unisys. Founding Before founding Eckert–Mauchly Computer Corporation, Mauchly researched the computing needs of potential clients. Over a period of six months in 1944 he prepared memos and kept detailed notes of his conversations. For instance, Mauchly met with United States Census Bureau official William Madow to discuss the computing equipment they desired. The Census Bureau was particularly keen on reducing the number of punch cards it had to manage with each census. This meeting led to Madow making a trip to see ENIAC in person. Mauchly also met with Lt. Colonel Solomon Kullback, an official at the Army Signal Corps, to discuss codes and ciphers. Kullback said there was a need for many "faster and more flexible" computers at his agency. Mauchly responded by carefully analyzing EDVAC's potential encryption and decryption abilities. Eckert and Mauchly thus believed there was strong government demand for their future products. By the spring of 1946, Eckert and Mauchly had procured a U.S. Army contract for the University of Pennsylvania and were already designing the EDVAC, the successor machine to the ENIAC, at the university's Moore School of Electrical Engineering. However, new university policies that would have forced Eckert and Mauchly to sign over intellectual property rights for their inventions led to their resignation, which caused a lengthy delay in the EDVAC design efforts. After seeking to join IBM and John von Neumann's team at the Institute for Advanced Study in Princeton, New Jersey, they decided to start their own company in Philadelphia, Pennsylvania. UNIVAC Mauchly persuaded the United States Census Bureau to order an "EDVAC II" computer, a model that was soon renamed UNIVAC, receiving a contract in 1948 that called for having the machine ready for the 1950 census. Eckert hired a staff that included a number of the engineers from the Moore School, and the company launched an ambitious program to design and manufacture large-scale computing machines. A major achievement was the use of magnetic tape for high-speed storage. During development Mauchly continued to solicit new customers and started a software department. They developed applications, starting with the world's first compiler for the language Short Code. The core group of programmers were also hired from the Moore School: Kathleen McNulty, Betty Holberton, Grace Hopper, and Jean Bartik. Accusations of communist infiltration EMCC also received contracts for one UNIVAC machine each for the Army, Navy, and Air Force. These contracts were eventually canceled after the company was accused of having hired engineers with "Communistic leanings" during the McCarthy era. The company lost its clearance for government work. Company president and chief salesman Mauchly was banned from the company property.
He challenged the accusations, but it took two years before a hearing allowed him to work at his company again; by then the UNIVAC was seriously behind schedule. The programming to allow the UNIVAC I to be used in predicting the outcome of the 1952 Presidential election had to be done by Mauchly and University of Pennsylvania statistician Max Woodbury at Mauchly's home in Ambler, Pennsylvania. BINAC and fiscal difficulties Cash flow was poor and the UNIVAC would not be finished for quite some time, so EMCC decided to take on another project that would be done quickly. This was the BINAC, a small computer (compared to ENIAC) for the Northrop corporation. Original estimates for the development costs proved to be extremely unrealistic, and by the summer of 1948, EMCC had just about run out of money, but it was temporarily saved by Harry L. Straus, vice president of the American Totalisator Company, a Baltimore company that made electromechanical totalisators. Straus felt that EMCC's work, besides being promising in general terms, might have some application in the race track business, and invested $500,000 in the company. Straus became chairman of the EMCC board, and American Totalisator received 40 percent of the stock. When Straus was killed in an airplane crash in October 1949, American Totalisator's directors withdrew their support. BINAC was eventually delivered in 1949, but Northrop complained that it never worked well for them. (It had worked fine in acceptance tests at EMCC, but Northrop, citing security concerns, refused to allow any EMCC employees onto their site to reassemble it after shipping. Instead, Northrop hired a newly graduated electrical engineer to assemble it. EMCC claimed that the fact that it worked at all after this was testimony to the quality of the design.) It was generally believed at EMCC that Northrop allowed BINAC to sit, disassembled, in their parking lot for a long time before any effort toward assembly was made. Sale to Remington Rand As had happened with BINAC, EMCC's estimates of delivery dates and costs proved to be optimistic, and the company was soon in financial difficulty again. In early 1950, the company was for sale; potential buyers included National Cash Register and Remington Rand. Remington Rand made the first offer, and purchased EMCC on February 15, 1950, whereupon it became the "Eckert-Mauchly Computer Corp Subsidiary of Remington Rand", later the "Eckert-Mauchly Division of Remington Rand", then the UNIVAC division of Remington Rand, and finally the "Remington Rand UNIVAC division of Sperry Rand Corp". The first UNIVAC was not delivered until March 1951, over a year after EMCC was acquired by Remington Rand, and too late to help much for the 1950 census. However, upon acceptance at the company premises, truckload after truckload of punched cards arrived to be recorded on tape (by what were jokingly called the card-to-pulp converters) for processing by UNIVAC. The US Census Bureau used the prototype UNIVAC on EMCC premises for months. Mauchly resigned from Remington Rand in 1952; his 10-year contract with them ran until 1960, and prohibited him from working on other computer projects during that time. Remington Rand merged with Sperry Corporation in 1955, and in 1975, the division was renamed Sperry UNIVAC. The company's corporate descendant today is Unisys. References External links John W.
Mauchly and the Development of the ENIAC Computer Machine Launched a World of Change, by Kay Mauchly Antonelli, one of the first ENIAC programmers, and wife of J. W. Mauchly Oral history interview with Isaac Levin Auerbach Oral history interview by Nancy B. Stern, 10 April 1978. Charles Babbage Institute, University of Minnesota. Auerbach recounts his experiences at Electronic Control Company (later the Eckert-Mauchly Computer Company) during 1947–49. He evaluates BINAC, UNIVAC, and the roles of the National Bureau of Standards, Northrop Aircraft, Raytheon, Remington Rand, and IBM. Oral history interview with Earl Edgar Masterson, Charles Babbage Institute, University of Minnesota. Masterson recounts his job interview with J. Presper Eckert and Fraser Welch and his work with the Eckert–Mauchly Computer Corporation, especially his work with the UNIVAC I and his design of a functional high-speed printer 1946 establishments in Pennsylvania 1986 disestablishments in Pennsylvania American companies established in 1950 American companies disestablished in 1986 Companies based in Philadelphia Computer companies established in 1950 Computer companies disestablished in 1986 Defunct companies based in Pennsylvania Defunct computer companies of the United States Defunct computer hardware companies Electronics companies established in 1950 Electronics companies disestablished in 1986 Technology companies established in 1950 Technology companies disestablished in 1986 Unisys
5091478
https://en.wikipedia.org/wiki/FLOSS%20Weekly
FLOSS Weekly
FLOSS Weekly is a free and open-source software (FLOSS) themed netcast from the TWiT Network. The show premiered on April 7, 2006, and features interviews with prominent guests from the free software/open source community. It was originally hosted by Leo Laporte; his cohost for the first seventeen episodes was Chris DiBona and subsequently Randal Schwartz. In May 2010, Schwartz took over from Laporte as lead host. May 2020 saw Doc Searls take over the host role in episode 578. Many influential people from the free and open-source community have appeared on the show, including Kent Beck, Ward Cunningham, Miguel de Icaza, Rasmus Lerdorf, Tim O'Reilly, Guido van Rossum, Linus Torvalds, and Jimmy Wales. Show topics are wide in variety, and have for example included ZFS, Mifos, Asterisk, and the OSU Open Source Lab. History FLOSS Weekly was started by Leo Laporte, who runs the TWiT podcast network, and Chris DiBona, now the open source program manager at Google. FLOSS is an acronym for Free/Libre Open Source Software. The show was intended to be a weekly interview with the biggest names and influences in open source software. Episode one of FLOSS Weekly appeared on April 7, 2006. Towards the end of 2006, episodes began to appear less frequently, dropping to a monthly basis. DiBona's newborn baby and commitments at Google were cited as reasons for the show's stagnation, and on the seventeenth episode, Laporte appealed for other co-hosts to share the burden. This was DiBona's final appearance on the show as the host. He returned as a guest for the show's 100th episode. The show went on an unannounced three-month hiatus, re-appearing on July 20, 2007, with a new co-host, Randal Schwartz, who had previously appeared on the show as a guest. Schwartz has since taken over organizing guests for the show, and has restored the show to a predominantly weekly schedule (with occasional gaps from scheduling conflicts or last minute cancellations). Starting with episode 69, Jono Bacon was a somewhat regular co-host, even filling in for Randal when Randal wasn't available. The show was nominated for the 2009 Podcast Awards in the Technology/Science category. In May 2010, the show began publishing a video feed (along with many of the rest of the TWiT network shows), and moved to an earlier recording time. As a result of the new recording time, Leo Laporte stepped down as the lead host, and Jono Bacon could no longer regularly co-host. Randal Schwartz was and now Doc Searls is supported by a rotating panel of co-hosts, selected on the basis of availability and appropriateness for the guest. The list includes Aaron Newcomb, Dan Lynch, Simon Phipps, Jonathan Bennett and Shawn Powers and has previously included Guillermo Amaral, Gareth Greenaway, Joe Brockmeier and Randi Harper. Format Most episodes feature the primary developer or developers of a particular open source software project. The show is an open discussion, with the host and co-host asking questions about the nature of the project. Typically, the interviewers will ask the guests about the history of the project, and its development model (such as which language it is written in, which version control system is used, and what development environment the author uses). Some shows, such as the interviews with Jon "maddog" Hall and Simon Phipps, are not specific to an open source project, and feature more general topics, such as the philosophy of free and open-source software. 
Shows begin and end with a brief discussion between the hosts, before and after calling the guest. Often the guests are interviewed via Skype, with Laporte's staff at TWiT being responsible for the audio recording and production. FLOSS Weekly has been supported by advertising and donations. In October 2006, FLOSS Weekly had 31,661 downloads of episode 14. See also List of FLOSS Weekly episodes TWiT.tv References External links Technology podcasts 2006 podcast debuts Interview podcasts Audio podcasts
7564233
https://en.wikipedia.org/wiki/Z-80%20SoftCard
Z-80 SoftCard
The Z-80 SoftCard is a plug-in Apple II processor card developed by Microsoft to turn the computer into a CP/M system based upon the Zilog Z80 central processing unit (CPU). Becoming the most popular CP/M platform and Microsoft's top revenue source for 1980, it was eventually renamed the Microsoft SoftCard, and was succeeded by Microsoft's Premium Softcard IIe for the Apple IIe. Overview Introduced in 1980 as Microsoft's first hardware product, and bundled with the Microsoft BASIC programming language, the Z-80 SoftCard is an Apple II processor card that enables the Apple II to run CP/M, an operating system from Digital Research. This gives Apple II users access to many more business applications, including compilers and interpreters for several high-level languages. CP/M, one of the earliest cross-platform operating systems, is easily adaptable to a wide range of auxiliary chips and peripheral hardware, but it requires an Intel 8080-compatible CPU, which the Zilog Z80 is, but which the Apple's CPU, the MOS Technology 6502, is not. The SoftCard has a Zilog Z80 CPU plus some 74LS00 series TTL chips to adapt that processor's bus to the Apple bus. As CP/M requires contiguous memory from address zero, which the Apple II doesn't have, addresses are translated in order to move non-RAM areas to the top of memory. History The SoftCard was Paul Allen's idea. Its original purpose was to simplify porting Microsoft's computer-language products to the Apple II. The SoftCard was developed by Tim Paterson of Seattle Computer Products (SCP). SCP built prototypes, Don Burtis of Burtronix redesigned the card, and California Computer Systems manufactured it for Microsoft. Unsure whether the card would sell, Microsoft first demonstrated it publicly at the West Coast Computer Faire in March 1980. Microsoft also released a version for the Apple IIe, the Premium Softcard IIe. The card has functionality equivalent to the Extended 80-Column Text Card, including its 64 kB RAM, so would save money for users who wanted CP/M capability, additional memory, and 80-column text. Reception Sales The SoftCard's immediate success surprised Microsoft. Although unprepared to take orders at the West Coast Computer Faire, a Microsoft executive accepted 1,000 business cards from interested parties on the first day; Compute! reported that the company was "inundated" with orders. The SoftCard became the company's largest revenue source in 1980, selling 5,000 units in three months at $349 each, with high sales continued for several years. The SoftCard was the single most-popular platform to run CP/M, and Z-80 cards became very popular Apple II peripherals. By 1981 Microsoft, Lifeboat Associates, and Peachtree Software published their CP/M software on Apple-format disks. Critical reception Compute! witnessed the SoftCard's debut in March 1980 at the West Coast Computer Faire, calling it "an Apple breakthru". InfoWorld in 1981 called the SoftCard "a fascinating piece of hardware". While criticizing the "computerese" of the CP/M documentation, the magazine wrote "if you need a lightweight, portable Z80 computer, the Apple/SoftCard combination is a perfect pair." BYTE wrote, "Because of the flexibility that it offers Apple users, I consider the Softcard an excellent buy .. The price is reasonable, and it works". InfoWorld in 1984 also favorably reviewed the SoftCard IIe, approving of its ability to also replace the Extended 80-Column Text Card. 
The magazine concluded that it "is a good system among several good systems on the market", especially for those who wanted to run Microsoft BASIC or wanted functionality beyond CP/M. Alternatives Following Microsoft's success, several other companies developed Z80 cards for the Apple II as well, including a CP/M card developed by Advanced Logic Systems named "The CP/M Card" (with a 6 MHz Z80 and 64 kB RAM) and Digital Research's CP/M Gold Card for CP/M Pro 3.0 (with 64 or 192 kB RAM). Other independent designs came from Applied Engineering, PCPI (with its 6 MHz Appli-Card), Cirtech, and IBS. There were also about a dozen SoftCard clone manufacturers. References External links AppleLogic website, showing peripheral cards for the Apple II series of computers, including the Microsoft Softcard Apple II family Compatibility cards Computer-related introductions in 1980 Microsoft hardware
604067
https://en.wikipedia.org/wiki/Nemetschek
Nemetschek
Nemetschek Group is a vendor of software for architects, engineers and the construction industry. The company develops and distributes software for planning, designing, building and managing buildings and real estate, as well as for media and entertainment. History 20th century The company was founded by Prof. Georg Nemetschek in 1963 and initially went by the name of Ingenieurbüro für das Bauwesen (engineering firm for the construction industry), focusing on structural design. It was one of the first companies in the industry to use computers and developed software for engineers, initially for its own requirements. In 1977 Nemetschek started distributing its program Statik 97/77 for civil engineering. At the Hanover Fair in 1980, Nemetschek presented a software package for integrated calculation and design of standard components for solid construction. This was the first software enabling Computer-aided engineering (CAE) on microcomputers, and the product remained unique on the market for many years. In 1981 Nemetschek Programmsystem GmbH was founded and was responsible for software distribution; Georg Nemetschek's engineering firm continued to be in charge of program development. The main product, Allplan – a CAD system for architects and engineers, was launched in 1984. This allowed designers to model buildings in three dimensions. Nemetschek began to expand internationally in the 1980s. By 1996 the company had subsidiaries in eight European countries and distribution partners in nine European countries; since 1992 it has also had a development site in Bratislava, Slovakia. The first acquisitions were made at the end of the 1990s, including the structural design program vendor Friedrich + Lochner. The company, operating as Nemetschek AG since 1994, went public in 1999 (it has been listed in the Prime Standard market segment and the TecDAX in Frankfurt ever since). 21st century Two major company takeovers followed in 2000: the American firm Diehl Graphsoft (now Vectorworks) and Maxon Computer GmbH, with its Cinema 4D software for visualization and animation. In 2006 Nemetschek acquired Hungary's Graphisoft (for its key product ArchiCAD), and Belgium's SCIA International. In November 2013 Nemetschek acquired the MEP software provider Data Design System (DDS). On 31 October 2014 the acquisition of Bluebeam Software, Inc. was concluded. At the end of 2015 Solibri was acquired. Since 2016, the company has operated as Nemetschek SE. Later that year, SDS/2 was acquired. In 2017, it acquired dRofus and RISA. MCS Solutions was acquired in 2018 and later rebranded as Spacewell. Other acquisitions have been completed at a brand level (for example, Redshift Rendering Technologies, Red Giant and Pixologic were acquired by Maxon, DEXMA by Spacewell). Since 18 September 2018, Nemetschek is listed in the MDAX in addition to its TecDAX listing. Among others, Nemetschek is a member of the BuildingSMART e.V. and the Deutsche Gesellschaft für Nachhaltiges Bauen (DGNB) (German Sustainable Building Council), actively advocating for open building information modeling (BIM) standards ("open BIM") in the AEC/O industry. Business units Since 2008, Nemetschek has acted as a holding company with four business units: Planning & Design (Architecture and Civil Engineering) Build & Construct Manage & Operate Media & Entertainment. The holding company maintains 13 product brands, covering the whole building lifecycle, from planning to operations. 
See also Comparison of CAD editors for architecture, engineering and construction (AEC) References External links Nemetschek SE website Companies based in Munich Software companies established in 1963 Software companies of Germany Companies listed on the Frankfurt Stock Exchange Building information modeling German brands Company in the TecDAX Company in the MDAX 1963 establishments in West Germany
30990178
https://en.wikipedia.org/wiki/National%20Cyber%20Security%20Alliance
National Cyber Security Alliance
The National Cyber Security Alliance (NCSA), a 501(c)(3) non-profit organization founded in 2001, is a United States-based nonprofit, public-private partnership promoting cybersecurity and privacy education and awareness. NCSA works with a broad array of stakeholders in government, industry and civil society. NCSA's primary federal partner is the Cybersecurity and Infrastructure Security Agency within the U.S. Department of Homeland Security. NCSA's Board of Directors includes representatives from ADP; AIG; American Express; Bank of America; Cofense; Comcast Corporation; Eli Lilly and Company; ESET North America; Facebook; Intel Corporation; KnowBe4; Lenovo; LogMeIn, Inc.; Marriott International; Mastercard; Microsoft Corporation; Mimecast; NortonLifeLock; Proofpoint; Raytheon; Trend Micro, Inc.; Uber; U.S. Bank; Visa and Wells Fargo. NCSA's core efforts include Cyber Security Awareness Month (October); Data Privacy Day (Jan. 28); and CyberSecure My Business. Cybersecurity Awareness Month was launched by the National Cyber Security Alliance (NCSA) and the U.S. Department of Homeland Security (DHS) in October 2004 as a broad effort to help all Americans stay safer and more secure online. When Cybersecurity Awareness Month first began, the awareness efforts centered around advice like updating your antivirus software twice a year to mirror similar efforts around changing batteries in smoke alarms during daylight saving time. Since NCSA and DHS began these combined efforts, the month has grown in reach and participation. Operated in many respects as a grassroots campaign, the month's effort has grown to include the participation of a multitude of industry participants that engage their customers, employees and the general public in awareness, as well as college campuses, nonprofits and other groups. Between 2009 and 2018, the month's theme was “Our Shared Responsibility.” The theme reflected the role that all users – from large enterprises to individual computer users – have in securing the digital assets in their control. In 2009, DHS Secretary Janet Napolitano launched Cybersecurity Awareness Month at an event in Washington, D.C., becoming the highest-ranking government official to participate in the month's activities. In subsequent years, leading administration officials from DHS, the White House and other agencies have regularly participated in events across the United States. In 2010, the kickoff of Cybersecurity Awareness Month also included the launch of the STOP. THINK. CONNECT. campaign. President Barack Obama's proclamation for the month includes STOP. THINK. CONNECT. as the national cybersecurity education and awareness message. Also in 2010, NCSA began moving the launch of the month to sites around the country. Starting in 2011, NCSA and DHS developed the concept of weekly themes during the month. This idea was based on feedback from stakeholders that the many aspects of cybersecurity should be better articulated, making it easier for other groups to align with specific themes. Themes have included education, cybercrime, law enforcement, mobility, critical infrastructure and small and medium-sized businesses. References 501(c)(3) organizations Computer security organizations Consumer organizations in the United States
61470955
https://en.wikipedia.org/wiki/Eliza%20%28video%20game%29
Eliza (video game)
Eliza is a visual novel game developed by Zachtronics, released on Microsoft Windows, macOS, and Linux on August 12, 2019, with a Nintendo Switch version on October 10, 2019. Gameplay Eliza is a visual novel. The player takes the role of Evelyn, a young woman who was initially successful in the high-tech industry of Seattle before burning out, and was mostly disconnected from life for about three years. In the present, she serves as the human proxy for Eliza, a virtual counselling program named after the real-life ELIZA, one of the earliest attempts at artificial intelligence in the 1960s. Evelyn reads off what the program produces to tell the clients. Evelyn is required to stay on the script produced by Eliza, but this creates difficult choices for her in how she talks to the clients and to others including old friends. Later, Evelyn is given the opportunity to read "off-script", creating scenarios that may affect her directly. The player has the ability to choose how Evelyn behaves in some cases, affecting how the story progresses. After finishing the game, the player can go back to previous chapters and make different decisions to see how this affects the various endings. Like most of Zachtronics' recent games, the game also includes a variant of solitaire played with kabufuda cards, should the player need to take their mind off the story. Development Unlike most of Zachtronics' other games, where founder Zach Barth has led development, Eliza's production was led by Matthew Seiji Burns (Matt Burns), who had joined Zachtronics as writer and music composer. Prior to joining Zachtronics, Burns had worked at Treyarch and 343 Industries, and had gone through a "crunch culture" there that left him feeling burned out, and thus joined the smaller Zachtronics. With the studio between titles, Burns had been inspired to develop a game about burnout to reflect his own experiences. He recalled a demo for SimSensei and MultiSense, a DARPA project at the University of Southern California that was a virtual therapist to be used for soldiers returning from overseas to see if they had symptoms of PTSD, the program asking the soldiers questions through a computer-generated avatar and monitoring their facial and body responses. He found the demo "unsettling" as it "was just so strange to see this computer analyzing someone with such a human problem" and thought about how that could apply on a larger scale. Burns was also inspired by other wellness mobile apps such as BetterHelp that offered similar virtual counselling. Many of these apps were based on the simplicity of ELIZA, one of the first natural language processing demonstrations developed in the 1960s to simulate a virtual therapist; Eliza was named in honor of this program. Burns also considered how many of these virtual wellness apps collected users' private data to use for seemingly unknown purposes or to share with other companies without users' knowledge, drawing this into the story of Eliza. Burns was initially planning on developing the game himself, but after discussing his ideas with Barth, was convinced to let it be developed as a Zachtronics title. While the game is vastly different in gameplay from Zachtronics' usual style, Barth considered the story that Burns had come up with to be on theme with other Zachtronics games. As both Barth and Burns had lived in the Seattle area for years, they opted to use Seattle as the setting for the game.
Burns felt that Seattle was a city driven by high technology, particularly Microsoft and Amazon.com, but one that had gone through several boom-and-bust cycles, reflecting the burnout depicted in the game. Eliza was announced in early August 2019, and released on August 12, 2019, for Microsoft Windows, macOS, and Linux. A version for the Nintendo Switch followed on October 10, 2019. Reception Eliza received "generally favorable" reviews according to review aggregator Metacritic. Accolades The game was nominated for "Best Storytelling" at the 2019 Golden Joystick Awards, and for the Seumas McNally Grand Prize and Excellence in Narrative at the Independent Games Festival Awards. References External links 2019 video games Linux games MacOS games Nintendo Switch games Video games developed in the United States Windows games Western visual novels Video games featuring female protagonists Video games set in Seattle
46348654
https://en.wikipedia.org/wiki/Deus%20Ex%3A%20Mankind%20Divided
Deus Ex: Mankind Divided
Deus Ex: Mankind Divided is an action role-playing video game developed by Eidos Montréal and published by Square Enix's European subsidiary in August 2016 for Microsoft Windows, PlayStation 4, and Xbox One. Versions for Linux and macOS systems were released in 2016 and 2017, respectively. It is the fourth main title in the Deus Ex series, and a sequel to the 2011 game Deus Ex: Human Revolution. The gameplay—combining first-person shooter, stealth, and role-playing elements—features exploration and combat in environments connected to the main hub of Prague and quests that grant experience and allow customization of the main character's abilities with Praxis Kits. Conversations between characters have a variety of responses, with options in conversations and at crucial story points affecting how events play out. Players can complete Breach, a cyberspace-set challenge mode, in addition to the main campaign. Breach was later released as a free, standalone product. Set in 2029, two years after Human Revolution, the world is divided between normal humans and those with advanced, controversial artificial organs dubbed "augmentations". After a violent event known as the Aug Incident, augmented people have been segregated; this prompts heated debate and an era of "mechanical apartheid". Main protagonist Adam Jensen, equipped with advanced new augmentations after Human Revolution, is a double agent for the hacker group Juggernaut Collective to expose the Illuminati, which is orchestrating events behind the scenes. The story explores themes of transhumanism and discrimination, using the series' recurring cyberpunk setting and conspiracy theory motif. Production of Mankind Divided began after completion of the Human Revolution expansion The Missing Link. The gameplay and graphics engine were rebuilt from scratch for next-generation hardware. A greater focus on realism and the story's darker themes resulted in a subdued color range compared to the previous game. Human Revolution composer Michael McCann returned to write the score with newcomers Sascha Dikiciyan and Ed Harrison. The game was announced in 2015, after a lengthy promotional campaign. Subsequent marketing slogans were criticized by journalists, and a divisive tier-based preorder campaign was cancelled due to player backlash. Post-launch, story-based downloadable content was released in 2016. Critical reception of Mankind Divided was positive, and the game's narrative, graphics and gameplay were praised. Criticism focused on the brevity of its campaign and the handling of its themes. Gameplay Deus Ex: Mankind Divided is an action role-playing game with first-person shooter and stealth mechanics. Players take the role of Adam Jensen, a man equipped with mechanical cybernetic implants called augmentations. The game's environments, ranging from open-world hubs to more scripted environments, are explored in first person; actions such as hiding behind cover, conversing with non-playable characters (NPCs) and some attack animations switch to a third-person view. In these environments, players can find NPCs that will advance the main story quest and optional side quests; completing quests and other actions such as finding hidden areas reward Adam with experience points (EXP). EXP unlock Praxis Points to upgrade his abilities. Also accessible are black-market vendors which supply equipment, materials and weapons for credits, the in-game currency. 
Players can approach situations in a number of ways; a violent approach shoots their way through environments while using cover to hide from enemy fire. Adam can take a stealthy approach, avoiding guards and security devices (again using cover to avoid enemy sight lines). He can move between cover elements and around corners while staying hidden. The melee takedown system offers lethal and non-lethal options, in addition to a variety of lethal and non-lethal weapons. Adam can move the bodies of enemies into hiding places, preventing them from being seen and raising an alarm. Adam's augmentations can be acquired and upgraded with Praxis Kits bought from vendors, found in the game environments or automatically unlocked by gathering enough EXP; higher-level augmentations require more Praxis Kits to unlock. Augmentation functions range from passive enhancements to Adam's vision or damage resistance; to active upgrades, such as increased strength or the ability to fall from great heights without being injured. Some augmentations are dependent on Adam's energy level, deactivating after energy has been drained. Other "Overclock" abilities force players to deactivate another augmentation to allow them to work. Non-lethal and lethal weapons, bought or picked up from enemies, can be modified with parts salvaged from other areas. New components and elements, such as the single-use multitool unlocking devices, can be bought from vendors or built from salvage in each area. Using salvage to craft new components requires blueprints discovered in the overworld. Adam can hack a variety of devices, with the hacking divided into two modes. The first has Adam hacking static devices such as computers, which triggers a minigame allowing players to capture points (nodes) and access a device. The second mode involves hacking devices such as laser traps and security robots, triggering an altered minigame where zones on a graph must be triggered to deactivate a device within a time limit. Adam converses with NPCs about the main and side quests, and some of the conversations add information to the game's world. He has several conversation options, which affect its outcome; choosing the correct option can help complete objectives, and choosing an incorrect option closes the route and forces the player to find an alternate solution. A "social" augmentation better reads an NPC's expression and evaluates their psychological profile, improving the chance of selecting the correct dialogue option. Most boss battles can be negated by using certain dialogue options. In Breach mode, the player is a hacker infiltrating the Palisade Bank to retrieve data from Deus Ex companies and escape within a time limit. Similar to an arcade game with a surreal, polygonal graphic style, the player has an avatar and navigates environments with unique augmentations. The enemy monitor alters its responses, depending on player approach to a level. Although Mankind Divided does not have a multiplayer mode, Breach has leaderboards which allow players to compare scores and positions online. Its rewards for completing levels are random or alterations to gameplay elements in individual maps. Synopsis Setting Mankind Divided is set in 2029, two years after Deus Ex: Human Revolution. The Deus Ex series is set in a cyberpunk future rife with secret organizations and conspiracies, including the Illuminati. 
Before Human Revolution, advances in biotechnology and cybernetics led to the development of "augmentations", artificial organs capable of enhancing human performance. Augmentation requires the use of Neuropozyne, a scarce, expensive immunosuppressive drug which prevents the body from rejecting the augmentation. They also created social divides between "augs", humans who have accepted augmentation technology; and normal humans who are either morally opposed to it, too poor to afford it, or whose bodies actively reject it. During Human Revolution, the Illuminati planned to place limitations on augmented people with a biochip. They are opposed by Adam Jensen, chief of security at the pro-augmentation corporation Sarif Industries, who is heavily augmented after an attack on his employers critically injures him. Illuminati member Hugh Darrow subverts the Illuminati's plan in order to prejudice humanity against augmentations, broadcasting a signal from the Arctic research base Panchaea which drove anyone with the biochip insane; the mass chaos is later called the Aug Incident. Jensen stops the signal, and has a choice: to broadcast stories supporting Darrow, the Illuminati or his employer, or to destroy Panchaea and let humanity decide. Jensen destroys Panchaea in the canonical ending, but social trauma from the Aug Incident and the Illuminati's manipulation cause augmented people to be stigmatized. Humanity has imposed a "mechanical apartheid" on augmented people by Mankind Divided, isolating them in ghettos and stripping them of their rights. The story focuses on events in Prague, with some events set in Dubai and London. Several factions play key roles in the game world. One of the most prominent is the Illuminati, a group of corporate elites which influences society for its own aims. The Illuminati are opposed by the Juggernaut Collective, a group of hacktivists led by the shadowy Janus and precursors of underground movements in the original Deus Ex. The two main factions in Mankind Divided are Task Force 29 (TF29), an Interpol-run anti-terrorist team based in Prague; and the Augmented Rights Coalition (ARC), originally an aid group for augmented people and now a controversial body opposing the abuse of the augmented. Characters Human Revolution protagonist Adam Jensen returns as the lead character. Presumed dead after Panchaea's destruction, he was rescued and secretly implanted with advanced augmentations. Due to a genetic trait which allows augmentations without Neuropozyne, Jensen occupies a middle ground between humans who mistrust augmented people and those whose augmentations are decaying due to a lack of Neuropozyne. Adam becomes part of TF29 as a double agent for the Juggernaut Collective to expose the Illuminati, interacting with the Collective's unseen leader Janus through Alex Vega. His co-workers in TF29 are director Jim Miller and psychologist Delara Auzenne. Adam's main opponents are ARC leader Talos Rucker and Viktor Marchenko, a member of ARC who becomes a terrorist. Central characters in the downloadable content (DLC) episodes are Frank Pritchard, an old associate from Sarif Industries; Shadowchild, a skilled hacker with a grudge against the Palisade Bank corporation; and Hector Guerrero, an undercover agent in the "Pent House" prison for augmented criminals. Plot During a mission in Dubai for TF29, Adam is attacked by an augmented mercenary group and narrowly escapes. 
He returns to Prague and speaks to Vega; they are involved in a bomb attack, which damages Adam's augmentations. After repairing them and learning about the hidden augmentations planted during his recovery after Panchaea, Adam spies on a meeting between Miller and his superiors and learns that the recent attacks will be attributed to ARC by the United Nations leadership. Adam is sent by Miller to the Golem City ghetto and confronts Rucker, who dies after confirming that ARC was not responsible for the attacks. The Illuminati-aligned Marchenko takes Rucker's place, and begins steering ARC towards militancy. Adam learns that TF29 director Joseph Manderley and VersaLife CEO Bob Page—prominent Illuminati members—used Orchid, a biological weapon, to kill Rucker. Rucker's death causes unrest in the augmented population, and Prague imposes martial law. With help from Vega and Janus, Adam learns about two opportunities to confront Marchenko: Orchid data stored in a Palisade Bank vault, and Allison Stanek (a fanatical, augmented ex-soldier who helped produce the bomb). By either route, Adam infiltrates Marchenko's base in the Swiss Alps and Marchenko injects him with Orchid. Adam survives because of his genetic traits, and gives an Orchid sample to Vega for analysis when he returns to Prague. After spying on a local crime family, he learns that Marchenko is planning an attack on a London summit hosted by influential CEO Nathaniel Brown. Brown is lobbying against the Human Restoration Act, an Illuminati-backed law which would permanently segregate the augmented in the isolated metropolis of Rabi'ah. Adam fails to convince Brown of the threat and confronts Marchenko's men after they infiltrate the summit, poisoning Miller with Orchid. Miller's fate depends on Adam's earlier actions—if Adam fails to save Brown, his death at the hands of ARC galvanizes support for the Human Restoration Act; saving Brown empowers him to block the act. After confronting Marchenko, Adam can kill or apprehend him. Vega vows that the Juggernaut Collective will pursue Manderley and Page, and Adam insists that Vega introduce him to Janus. In a post-credits scene, a council of Illuminati members (led by Lucius DeBeers) convenes and decides to watch Adam closely. DeBeers then tells Auzenne, his TF29 agent, that they are using Adam to find Janus. The narrative is expanded with the DLC series, "Jensen's Stories". In Desperate Measures, Adam discovers that footage of the bombing was edited by a member of Tarvos Security to protect a family member. In System Rift, Adam is tasked by Pritchard to break into the Palisade's Blade vault and investigate the logistics of Rabi'ah; he infiltrates the vault with help from Shadowchild. When Pritchard's avatar is trapped in the system, Shadowchild and Adam punch a hole in the Blade's firewall as a diversion so he can escape. In A Criminal Past, Adam talks with Auzenne about an early mission, in which he went undercover in the Pent House when Guerrero went dark. After contacting Guerrero and being involved in a prison riot, Adam discovers Junkyard: an augmentation-harvesting ring which uses the Fixer, an inmate. Guerrero has become affiliated with Junkyard and wants to kill the Fixer after he discovers their identities. Adam can defuse the situation or take sides (leading to different fates for Guerrero and the Fixer), asking Auzenne if she would kill to protect a mission. 
Development Eidos Montréal developed Human Revolution as a prequel of Deus Ex and a reboot of the series after several years of dormancy. Although Human Revolution was greeted with skepticism during its development, it was released in 2011 to critical and commercial success. Lead writer Mary DeMarle said that the team had no plans for a sequel during production of Human Revolution, since their primary goal was to return Deus Ex to the public eye. As development finished, the core team realized that they needed to produce a sequel. The sequel was originally to be produced by Obsidian Entertainment. Studio CEO Feargus Urquhart estimated that their version would have been released in 2014, but the plan failed to materialize due to unspecified circumstances. Production of Mankind Divided began in 2011 after the completion of The Missing Link, an expansion of Human Revolution. The team aimed to improve and streamline the experience of Human Revolution with Mankind Divided, keeping well-received elements intact and polishing those which had been criticized at launch or left untouched due to time constraints. The sequel's production took five years, with its long development explained by DeMarle and gameplay director Patrick Fortier as due to upgraded technology and depth of narrative. Production was completed on July 29, 2016, with Eidos Montréal confirming that the game was gold (indicating that it was being prepared for duplication and release). Scenario According to DeMarle, the team met to discuss where to go from Human Revolution. Inspired by the Aug Incident at the end of Human Revolution, they wanted to explore its impact and aftermath. Although Human Revolution ended with a player choice, the team realized that the world's population would be too busy dealing with the tragedy to notice. This allowed the development of a sequel where each player's choice was valid. DeMarle was in charge of the narrative design, overseeing a large group of writers who were split into teams; some handled the main narrative with DeMarle, others the side quests, and others helping with elements such as dialogue trees with boss characters. One of the contributors to the scenario was James Swallow, who had previously written additional media and helped with the writing of Human Revolution and Deus Ex: The Fall. Describing Mankind Divided narrative design, producer Olivier Proulx said that the team wanted to redesign the narrative structure with less opportunity for players to use a save to play through several set endings (as in Human Revolution). Key plot twists were present through to the end of the game, impacting dialogue and story options. Some plot elements were left unresolved by the end of Mankind Divided, attributed by DeMarle to production time limits and problems caused by the game's narrative detail. About where the narrative was supposed to go, Fortier said that the team wanted to evolve the first game's focus on transhumanism. This led to incorporating the theme of discrimination, apparently the logical outcome of the social divisions created by augmentations. Although the themes aligned with contemporary real-world events, Fortier said that this was primarily coincidental. These elements played into the series' cyberpunk setting and its focus on conspiracy theories. Prague was chosen because the team wanted to focus on Europe after much of Human Revolution was set in America. Prague was a good example of a city with a combination of old and new architecture. 
The team also chose it because of the myth of the golem (originating, according to Fleur Marty, in central Europe), reflected by the Golem City ghetto. While creating her narrative, DeMarle needed to remember the established Deus Ex narrative. She approached it as history written by the winner, with established fact in the original Deus Ex not being accurate. This fit with the seeking of truth, a theme of Mankind Divided. Supplementary information in the game helped connect Mankind Divided to Human Revolution and future Deus Ex games. The Illuminati, key antagonists in the series, were portrayed differently in Mankind Divided than they were in the original Deus Ex, where they were part of a "'90s-era X-Files-style paranoia". DeMarle wrote them as a loosely aligned elite, where each member pursues their own goals. She compared the Illuminati of Mankind Divided with the bankers described in the book, Too Big to Fail. Both were too arrogant to unite in a common cause, and the bankers were the closest she could get to the fictional Illuminati. An early decision brought Human Revolution protagonist Adam Jensen back for Mankind Divided. According to Proulx, his "badass" persona made him a favorite of the staff. DeMarle did not see Adam as having a long life in the Deus Ex series, and he died at the end of one of her drafts for Human Revolution. A core part of Adam's portrayal in Mankind Divided was his acceptance of his augmentations after they were forced on him. Described by game director Jean-François Dugas as "a tool and a weapon", Adam accepted his augmented status in Mankind Divided and decided to use it for the greater good and his personal goals. Although Human Revolution portrayed Adam as reactive, in Mankind Divided DeMarle insisted that he be rewritten as proactive. His interactions with Miller had to take into account Adam's reworked persona and the necessities of a mission-based game. Elias Toufexis returned to voice Adam, and was called in to begin recording in 2013. About returning to the role, Toufexis described it as easy since he knew Adam's character better; it was still difficult, however, since Adam's personality was defined by the player. Toufexis needed to have several versions of Adam in memory, so he could change his voice accordingly. Game design Discussing the game design of Mankind Divided, director Jean-François Dugas said that although their first game was characterized by their overall "naiveté", Mankind Divided required courage to bring Deus Ex gameplay to "the next level". They had a solid base with Human Revolution, but the team wanted to evolve from that base and address problems raised by players and critics. Issues included balance problems, stiff mechanics, weak combat and boss fights which seemed to penalize a non-lethal playing style. Some of these problems were resolved in the Human Revolution director's cut, and feedback from that enabled the team to further tailor and balance the design of Mankind Divided. The gameplay had to reflect the narrative surrounding Jensen's acceptance of his augmentations. The team focused on creating an immersive environment and opportunities for player choice (from non-linear exploration to primitively completing objectives) on a large and small scale. The Praxis upgrade system was carried over from Human Revolution, and weapons were based on their real-life counterparts. 
The AI system was upgraded, with two different subsystems for open combat and stealth which would react differently and transition smoothly in response to player actions. Augmentations were based on telemetry from Human Revolution which indicated what was popular with players. The team evaluated boss battles in the context of Mankind Divided, including classic bosses who needed to be fought and encounters which could be navigated verbally. Although Fortier did not want classic boss battles, he realized that the game's mechanical limitations necessitated them. In response to complaints about the outsourced boss battles in Human Revolution, those in Mankind Divided were developed in-house; every boss was navigable with conversation or non-lethal options. Breach mode originated when the team wanted to diverge from the main game's realism. A "live" team, led by Fleur Marty, created Breach to bring Mankind Divided mechanics into a non-realistic setting. Although the team wanted to experiment with a multiplayer mode, it would be difficult to implement and explain in the Deus Ex setting and they decided on an asynchronous system. They implemented elements which encouraged fast completion. Early builds had a more difficult path back to each level's exit, which was changed due to its negative impact on stealth-based gameplay. The team included microtransactions, but Marty said in an interview that players were not required to pay anything. Their aim was to make Breach "lighter than Hearthstone": a mode which was part of a retail game rather than a free-to-play, standalone mode. Technology Mankind Divided runs on the Dawn Engine, a game engine designed for eighth-generation gaming hardware. It was built on Glacier 2, an in-house engine created by IO Interactive. The team had several choices of engine technology after Human Revolution. They included the Unreal Engine, used by another team in the company to develop the 2014 Thief reboot; the CDC engine, created by Crystal Dynamics for Tomb Raider: Underworld and its upcoming 2013 series reboot, and the IO Interactive Glacier 2 engine used by Hitman: Absolution. Due to its more-extensive tool suite, Eidos Montréal chose Glacier 2 and expanded its basic framework to create the Dawn Engine. The team introduced physics-based rendering (a new animation system) as part of the redesign, and the engine was optimized for the game's narrative base. Creating the environments was a challenge for the developers, who wanted to be as realistic as possible within the game's planned design and available technology. The characters' hair, designed to appear as lifelike as possible, was animated with in-house technology based on TressFX. Dedicated lighting programming allowed realism in changing light conditions. Environmental scale was troublesome; interior rooms with realistic proportions and high detail were too large to fit in their exterior structures, and the team used programming tricks to maintain the illusion of reality. Multiple lighting filters were created at different levels to achieve a dynamic, realistic lighting system, and the game's use of light dovetailed with its artistic themes. Due to a lack of console-specific functions, in-house technology was used for the anti-aliasing filters to maintain a smooth image when navigating its environments. An element of the original Glacier 2 engine which was carried over into the Dawn Engine was the "entity system", an advanced AI system which allowed the quick creation of new AI behavior based on existing designs. 
This eliminated the need for a dedicated AI programmer. The advantage of Dawn Engine technology was its ability to hold a large number of entities at one time, which was suited to large game environments. Its "entity-driven" architecture allowed artists deep involvement with the environment and level-design teams. The more-powerful technology allowed the inclusion of more interactive and environmental objects and elements central to the Deus Ex series, such as air ducts and cover. The team wanted to avoid obvious walls preventing the player from moving beyond the map boundaries, so they integrated areas into the environment and looped major streets to create an illusion of size. Art design Martin Dubeau was Mankind Divided art director, and Human Revolution art director Jonathan Jacques-Belletête remained as executive art director. Like Human Revolution, Renaissance and Baroque artists (including Johannes Vermeer, Rembrandt and Leonardo da Vinci) provided design inspirations for the team. Humanity's attitude to augmentation was described as a metaphorical and overt expression of the transition between the Dark Ages and the Age of Enlightenment, with gold remaining a symbolic representation of pro-augmentation factions. Black and gold were the dominant colors for Human Revolution, but Mankind Divided made less use of these colors due to its narrative; instead, it used blue and gray to create a subdued setting. Its game world aimed to make locations recognizable while adding futuristic, cyberpunk elements, since the game was set in the near future. Dubeau represented the more-subdued tone and violent reaction against augmentations by using "cold, raw, and opaque materials" for modern architecture. Golem City, Prague's ghetto for augmented citizens, was based on Kowloon Walled City and corporate housing; for Dubai, the team was inspired by the work of Zaha Hadid. Adam's outfit was changed, referring to his first appearance in Human Revolution and his evolution after that game. Although Jacques-Belletête was satisfied with the first game's design, Dubeau wanted to use it as the starting point for a new design. After working on it internally for some time, the art team decided to collaborate with an external partner. They contacted Errolson Hugh, co-founder of the Berlin-based design house Acronym, to design Adam's coat. His new outfit aimed at a military feel while incorporating stylistic elements from its earlier form, fashion elements common to Hugh's work, and practical fastenings and alterations to accommodate Adam's augmentations. Marchenko's design reflected a life of work and hardship. The character of Rucker was designed as symbolic of his own hardships and his position in ARC. His death scene included a sunset which, according to Dubeau, represented the death of the cyber-Renaissance. Police-officer designs reflected the theme of regression, and their body armor was modeled on medieval knights. In addition to Adam's coat, Hugh and Acronym collaborated on the game's general clothing design. Its costume design was inspired by the work of fashion designer Gareth Pugh. Music Human Revolution composer Michael McCann returned as a co-composer for Mankind Divided. McCann was joined by Sascha Dikiciyan, whose work included Borderlands and Tron: Evolution. According to audio director Steve Szczepkowski, the team wanted to build on Human Revolution musical foundation while communicating the darker themes of Mankind Divided; due to the game's scale, another composer was sought to join McCann. 
After hearing Dikiciyan's music, Szczepkowski talked with him about joining the team; Dikiciyan shared his enthusiasm for the series, and came on board. Breach-mode music was composed by Ed Harrison, and the game's ending theme was composed by Misha Mansoor of the progressive metal band Periphery. Two soundtrack albums for Mankind Divided were released on December 2, 2016: a standard edition and an extended edition, with additional tracks. Release Mankind Divided was confirmed in a 2013 press release from Eidos Montréal about the Deus Ex series as part of a series-wide project, the "Deus Ex Universe", with future games and additional media designed to expand on the series' setting. The game was leaked a day before its official announcement, in early April 2015, for PlayStation 4, Xbox One and Microsoft Windows personal computers (PC). It was the culmination of the three-day "Can't Kill Progress" promotional event, organised by Eidos Montréal and publisher Square Enix, which featured a live Twitch stream of a man pacing, sleeping and meditating in a nondescript room. Viewers could change the camera angle and vote on how the man should act during his interrogation. The campaign, inspired by Deus Ex choice-based narrative and gameplay, intended to alert fans that the series had returned. The trailer was produced by Visual Works, Square Enix' CGI department. Visual Works had been involved with Deus Ex and Eidos Montréal since Square Enix acquired the series' previous owner, Eidos Interactive, in 2009. Adam's character model was based on original CGI models from Human Revolution and Eidos Montréal's design documents. Eidos Montréal collaborated with Visual Works on the trailer's aesthetic design and content. Although their previous projects had settings based on European fantasy or advanced science-fiction futures, the team used the real-world Prague for the bombing scenes. Action scenes were worked out in advance with the motion-capture actors. The most difficult scene for Visual Works was where Adam activated the Titan Armor augmentation to block a hail of bullets. The PC version was created by Nixxes Software and Advanced Micro Devices (AMD), who focused on the control scheme and graphics options for different computer systems. Nixxes and AMD enabled the game to perform smoothly on DirectX 12 systems. The DirectX 12 system included a new application programming interface which was similar to that used for the console versions, allowing equivalent optimization and exchange of technological improvements. A priority for the PC version's graphics was improving bokeh and depth of field to create a more-realistic environment. The effects were implemented with AOFX, part of AMD's GPUOpen middleware tool. Another enhancement was to the TressFX hair effect, which was altered so much by AMD that it was designated a new graphics tool called PureHair. Mankind Divided was originally scheduled for release on February 23, 2016. In November 2015, however, the team announced that its release would be delayed until August 23, exactly five years to the day Human Revolution was released. According to Eidos Montréal, when the team had a fully playable version it needed extra time to polish the game to player-standard quality. The game was released in standard and digital-deluxe editions, which included access to DLC and in-game items such as Praxis Kits. As a pre-order bonus for the PC version, an announcer pack featuring Adam's voice was released for the Dota 2 multiplayer online battle arena game. 
Ports for Linux and macOS were developed and published by Feral Interactive. The Linux port was released on November 3, 2016, and the macOS version on December 12, 2017. These versions were also released in standard and digital-deluxe editions, with in-game items and DLC. Controversy Shortly after the game was announced, Mankind Divided was criticized online for using the word "apartheid" as part of its marketing. The criticism stemmed from the word's historic association with apartheid, a system of racial segregation in South Africa for much of the 20th century. Members of the game's staff defended their use of the term due to its relevance to the story's subject matter. Additional controversy arose among the public and other game developers over Mankind Divided's use of the phrase "Aug Lives Matter" in its promotional concept art. The slogan is very similar to Black Lives Matter, an activist movement. According to the developers, the similarity was a coincidence and its choice predated the movement's emergence in 2013. The original marketing campaign for Mankind Divided revolved around a five-tier pre-order program: players who pre-ordered the game could pick items from tiers (each with its own pre-order bonuses) that unlocked depending on worldwide pre-order numbers. The highest tier would have allowed players access to the game four days before its release date. According to Square Enix, the system was intended to give players more freedom in choosing pre-order content, rather than having developers choose the pre-order packages released in each region. Game journalists and fans were highly critical of the campaign. Due to the negative reaction, the system was canceled and all content was made available to those who pre-ordered the game or purchased a Day 1 edition. Mankind Divided was criticized after release for its use of microtransactions, compounded by rumors about internal troubles with the game's development. Post-release content After the game's release, the development team focused on post-release content and downloadable content (DLC) ranging from story-based episodes to updates of Breach. A free, standalone version of Breach was released on Steam on January 24, 2017; Deus Ex: Mankind Divided – VR Experience, a non-interactive virtual reality tour of four environments in Mankind Divided, was released the same day. The DLC was available as separate purchases and as part of the season pass included with the deluxe edition. The story DLC was released under the umbrella title of "Jensen's Stories". Desperate Measures, a brief mission set after the main game's events, was released as a pre-order bonus before becoming available as a free download in January 2017. The next DLC, System Rift, was released on September 23, 2016, one month after the game's release. In addition to a new location to explore, System Rift's narrative explained the Breach mode. The final DLC expansion, A Criminal Past, was released on February 23, 2017. Related media A five-issue comic titled Deus Ex Universe: Children's Crusade was issued by the comic branch of Titan Books between February and July 2016. The comic was written by Alexander C. Irvine and drawn by John Aggs. The comics were collected and released as a single volume on August 30. A spin-off novel written by Swallow, Deus Ex: Black Light, was published by Titan Books on August 23, 2016. According to Swallow, Black Light fills in some of the story gaps between Human Revolution and Mankind Divided, concluding with Jensen's acceptance of work with TF29. 
Black Light was one of several possibilities for filling the story gap, with other options being a comic series or an online animated series. In addition to these, Swallow wrote a novella called Deus Ex: Hard Line featuring Alex Vega, and Irvine and Aggs collaborated on a mini-comic called Deus Ex Universe: The Dawning Darkness. Initially pre-order exclusives, Hard Line and The Dawning Darkness were released as free downloads. Reception According to the video-game review aggregator Metacritic, Deus Ex: Mankind Divided received favorable reviews. The PS4 version of System Rift also received positive reviews, with a score of 75 out of 100 points based on 11 reviews. The PS4 version of A Criminal Past was praised, but the PC version had "mixed or average" reviews. Nick Plessas of Electronic Gaming Monthly enjoyed the narrative's enclosed nature and how the game gave Adam personality while keeping him an enigma for players. Eurogamer's Edwin Evans-Thirlwell found its story structure among the best in the game's genre, but Andrew Reiner of Game Informer felt that Mankind Divided sacrificed the narrative's potential to address its themes. Nicholas Tan, writing for Game Revolution, said the narrative's success lay "in its ability to juggle and weave multiple complex and modern themes together". GameSpot's Edmond Tran enjoyed the game's tone and narrative, but thought that some players might find it overly complex. Phil Savage of GamesRadar praised the conspiracy narrative carried over from Human Revolution, but felt that it left the game's main narrative and characters underdeveloped. IGN's Vince Ingenito called the plot "well produced", but Andy Kelly of PC Gamer found the side-mission stories more interesting than the main narrative. Polygon's Arthur Gies called the environmental storytelling and its side missions the game's greatest strengths. A number of reviewers criticized the brevity of Mankind Divided's campaign and problems with the writing of its characters and main narrative. The game's handling of its themes was criticized, particularly in contrast to its pre-release controversy. Reiner was positive about the visuals and music, but called the voice acting "hit or miss". Tan also praised the game's graphics, and Tran enjoyed exploring the environments of Prague due to their design and the combination of realistic and futuristic architecture. Savage found the setting superior to that of Human Revolution, noting some poor facial animations for minor characters. Ingenito also noted inconsistent facial animations, but said that Prague was "gorgeously realized". Kelly praised the graphics and called the level design "brilliant", and Gies enjoyed Prague's seamless nature compared to other contemporary games. Plessas was generally positive about the upgrade systems and balance, but found the AI mentally deficient. Noting the game's emphasis on stealth and hacking, Evans-Thirlwell enjoyed the freedom for players to approach missions as desired. Reiner found the action-based approach less appealing due to the AI and dull implementation, but enjoyed the stealth mechanics and associated augmentations. Tan praised the expanded options for players, noting that hacking had been downplayed in comparison with Human Revolution. Savage called Mankind Divided "a game that's best enjoyed slowly and deliberately" due to its combination of augmentation upgrades and new customization options. 
Tran praised the variety of approaches to different situations and the augmentation systems and removal of boss battles, but felt that some of the new augmentations had a negligible impact. Ingenito positively noted the improvements to combat and the UI compared to Human Revolution, and praised the new additions such as gun customization. Kelly called Mankind Divided "a great immersive sim with some of the best level design in the series", praising the gameplay variety and additional augmentation options. Gies also enjoyed the variety of gameplay options, but noted its subtle push towards a stealthy, non-lethal approach contradicting its core concept of choice. The Breach mode was generally praised. Sales Although Mankind Divided topped gaming sales charts during its week of release in the United Kingdom, the game had a weaker debut than Human Revolution because of a lower install base on consoles than its predecessor. Mankind Divided was the third-bestselling game of August in North America, with console-game sales increases for the period partially attributed to its release. In Square Enix's 2016 fiscal report, Mankind Divided and other 2016 titles including Final Fantasy XV were cited as factors in their net-profit increase. Accolades Future After the release of Mankind Divided, rumors began circulating that a prospective sequel had been cancelled and Square Enix had put the Deus Ex series on hold because of disappointing sales. The rumors were compounded after Eidos Montréal shifted to work on Shadow of the Tomb Raider and an untitled, licensed game based on the comic properties of Marvel Entertainment, later revealed to be Marvel's Guardians of the Galaxy. Later in 2017 and 2018, Square Enix and Eidos Montréal denied rumors that the series was cancelled. Although Eidos Montréal was not working on another Deus Ex title because of its other projects, it would return to the Deus Ex franchise when it had the available staff and the inclination. References Notes Citations External links Postcyberpunk 2016 video games Fiction set in 2029 Action role-playing video games Cyberpunk video games Mankind Divided Dystopian video games Eidos Interactive games Hacking video games Experimental medical treatments in fiction Transhumanism in video games Single-player video games Square Enix games Stealth video games Video game prequels Video game sequels Video games developed in Canada Video games scored by Michael McCann Video games scored by Sascha Dikiciyan Video games set in the 2020s Video games set in Arizona Video games set in the Czech Republic Video games set in Dubai Video games set in London Video games set in Switzerland Linux games MacOS games PlayStation 4 games PlayStation 4 Pro enhanced games Windows games Xbox One games Xbox One X enhanced games Immersive sims
2187031
https://en.wikipedia.org/wiki/WindowShade
WindowShade
WindowShade was a control panel extension for the classic Mac OS that allowed a user to double-click a window's title bar to "roll up" the window like a windowshade. When the window was "rolled up", only the title bar of the window was visible; the window's content area disappeared, allowing easier manipulation of the windows on the screen. History It debuted in System 7.5, but disappeared in Mac OS 8, when the feature was implemented as a part of the Appearance Manager. A widget was added to the title bar in addition to the double-click method of collapsing a window. The entire feature disappeared with the release of Mac OS X; windows could be minimized to the Dock on the new system or, starting with Mac OS X 10.3, moved aside with Exposé. However, several third-party utilities, such as WindowShade X for Unsanity's Application Enhancer software, have brought the ability back to Mac OS. It has since reappeared as a commercial haxie and offers other features, like translucent windows and minimize-in-place. WindowShade X from Unsanity stopped working in Mac OS 10.7, and other third-party developers have since released applications such as WindowMizer from RGB World that keep the WindowShade feature working on Mac OS X 10.6 and greater. The WindowShade control panel itself stems from a third-party utility originally written for System 6.0.7 by Rob Johnston. Apple purchased the rights to this software from the developer for use in System 7.5. Other operating systems Some window managers for Unix-like operating systems have a similar feature allowing windows to be set to "roll up" when the user double-clicks the title bar of a window. Some window managers provide a titlebar button to access the functionality. While Microsoft Windows does not expose such a feature by default, in some versions if a window is minimized while no taskbar is available, the said window will become a "shade" at the bottom of the screen. An intentional shading implementation for Windows is provided by third-party software vendors. References Macintosh operating systems user interface
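The roll-up interaction that WindowShade provided is simple to reproduce. The following is a minimal, illustrative sketch in Python/Tkinter (unrelated to Apple's original control panel code): double-clicking the mock title bar collapses the window to the bar alone, and double-clicking again restores the content area, mimicking the windowshade behavior described above.

    import tkinter as tk

    def toggle_shade(event=None):
        # Collapse or restore the content area, leaving only the "title bar" visible.
        if content.winfo_manager():          # content currently shown -> roll up
            content.pack_forget()
        else:                                # content hidden -> roll back down
            content.pack(fill="both", expand=True)

    root = tk.Tk()
    root.title("WindowShade-style demo")

    title_bar = tk.Label(root, text="Double-click here to shade/unshade",
                         bg="gray75", relief="raised", padx=4, pady=4)
    title_bar.pack(fill="x")
    title_bar.bind("<Double-Button-1>", toggle_shade)

    content = tk.Frame(root, width=320, height=200, bg="white")
    content.pack(fill="both", expand=True)

    root.mainloop()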
44059712
https://en.wikipedia.org/wiki/Goverlan%20Systems%20Management
Goverlan Systems Management
Goverlan Reach Systems Management is remote support software created and distributed by Goverlan, Inc. It is an on-premises client management product designed for medium to large enterprises, covering remote control, Active Directory management, global configuration change management, and reporting within a Windows IT infrastructure. History Goverlan Reach, the primary product of Goverlan, Inc., was conceived and created in 1996 out of its creators' work supporting worldwide help desks at an investment bank in New York City. The product was later commercialized and Goverlan Inc was incorporated in 1998. Features The Goverlan Reach Remote Support Software is used for remote support, IT process automation, IT management, software installation, inventory, and remote control. Other features include displaying system information, mapping printers, and Wake-on-LAN settings. Remote Control Goverlan Reach RC is a remote desktop support software option for IT specialists. Goverlan allows for remote control and desktop sharing. With Goverlan, administrators can remotely shadow multiple client sessions in a single pane, and multiple administrators can participate in a single remote control session. In addition, an administrator can capture screenshots or video recordings during a remote session. Goverlan Remote Control also supports remote assistance with the ability to connect to computers over the internet, transfer files, view multiple sessions on one screen, and control the bandwidth used during a remote session. Goverlan supports Citrix XenApp and Microsoft Terminal Services shadowing. Behind-the-scenes Systems Management The Goverlan Administration & Diagnostics tool integrates into an existing Active Directory (AD) organizational unit (OU) structure for Windows systems management. Goverlan can perform remote administration on a single machine, a group of machines, or an entire domain. Goverlan is compatible with VDI, RDP, and Citrix deployments. Global IT Process Automation The Goverlan IT Process Automation module allows IT administrators to manage software updates, generate reports, add or remove registry keys, and perform other actions that can be applied to a single computer or a network. Scope Actions allow IT administrators to execute configuration management tasks on client machines; query machines; collect information on logged-in users, hardware, software, or processes; and remotely monitor workstations in real time, as opposed to retrieving information from a database. IT administrators may also use Goverlan for patch management to push patches to servers or workstations. WMIX WMIX is Goverlan's free WMI explorer, which generates WMI queries using the WQL wizard and exports custom queries to other Windows systems. The WMIX tool makes use of pre-existing Windows Management Instrumentation scripts within an interface. A technician can generate a VBScript by defining parameters and clicking the generate script button. Technologies LDAP – The Lightweight Directory Access Protocol is used by Goverlan for Active Directory integration. WMI – The Windows Management Instrumentation technology is used by Goverlan to expose agent-free systems management services to Windows systems. Intel vPro AMT – The Intel Active Management Technology allows the out-of-band management of Intel vPro-ready systems regardless of the system's power state. 
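To illustrate the kind of WQL query a WMI explorer such as WMIX helps build, the sketch below runs two standard queries against the local WMI service using the third-party Python "wmi" package. It is a generic example, not output of, or code from, the Goverlan tools themselves.

    # Generic WQL examples of the kind a WMI explorer helps construct.
    # Requires Windows and the third-party "wmi" package (pip install wmi).
    import wmi

    conn = wmi.WMI()  # connect to the local machine's WMI service

    # Equivalent to: SELECT Caption, Version FROM Win32_OperatingSystem
    for os_info in conn.Win32_OperatingSystem():
        print(os_info.Caption, os_info.Version)

    # Raw WQL also works; DriveType=3 restricts the result to local fixed disks.
    wql = "SELECT DeviceID, FreeSpace FROM Win32_LogicalDisk WHERE DriveType=3"
    for disk in conn.query(wql):
        print(disk.DeviceID, disk.FreeSpace)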
Security Goverlan Systems Management Software provides the following security features: AES 256 bit Encryption (Windows Vista and later) or RSA 128 bit Encryption (Windows XP and earlier). Microsoft Security Support Provider Interface technology (SSPI) securely authenticates the identity of the person initiating a connection. SSPI is also used to impersonate the identity of this person on the client machine. Using the identity and privileges of the person who initiated the remote control session, the remote control session is either authorized or rejected. Central or machine level auditing of executed remote control sessions. Agents communicate through a single, encrypted TCP port. Limitations Goverlan's desktop software can only be installed on Windows based computers (Windows XP and Above). Goverlan client agents can only be installed on Windows based computers (Windows 2000 and above) Goverlan requires the installation of client agents. However, client agents can be installed via a network rather than independently. See also Remote support List of systems management systems Comparison of remote desktop software Remote desktop software Desktop sharing References External links Goverlan, Inc. Official Site Remote desktop Mobile device management software Information technology management Help desk software System administration Computer access control Remote administration software Windows remote administration software Systems management Software companies of the United States
1065885
https://en.wikipedia.org/wiki/APT-RPM
APT-RPM
APT-RPM is a version of the Advanced Packaging Tool modified to work with the RPM Package Manager. It was originally ported to RPM by Alfredo Kojima and then further developed and improved by Gustavo Niemeyer, both working for the Conectiva Linux distribution at the time. In March 2005, the maintainer of the program, Gustavo Niemeyer, announced that he would not continue developing it and that he would instead focus on Smart Package Manager, which was planned as a successor to APT-RPM. In March 2006, development was picked up again by Panu Matilainen from Red Hat at a new home, introducing basic multilib functionality and support for common repository metadata. Distributions Some distributions using APT-RPM for package management are: ALT Linux: APT-RPM is the main, officially supported way to upgrade packages from the ALT Linux repositories in ALT Linux distributions since 2001. PCLinuxOS: APT-RPM is the backend for the only official way to upgrade packages in this distribution. Vine Linux: APT-RPM has been the main, officially supported way to upgrade packages in Vine Linux distributions since 2001. See also Package manager Advanced Packaging Tool References External links APT-RPM home page apt4rpm SourceForge project page Free package management systems Lua (programming language)-scriptable software
15693698
https://en.wikipedia.org/wiki/I.MX
I.MX
The i.MX range is a family of Freescale Semiconductor (now part of NXP) proprietary microcontrollers for multimedia applications based on the ARM architecture and focused on low-power consumption. The i.MX application processors are SoCs (System-on-Chip) that integrate many processing units into one die, like the main CPU, a video processing unit and a graphics processing unit for instance. The i.MX products are qualified for automotive, industrial and consumer markets. Most of them are guaranteed for a production lifetime of 10 to 15 years.Many devices use i.MX processors, such as Ford Sync, Kobo eReader, Amazon Kindle, Zune (except for Zune HD), Sony Reader, Onyx Boox readers/tablets, SolidRun SOM's (including CuBox), Purism's Librem 5, some Logitech Harmony remote controls and Squeezebox radio, some Toshiba Gigabeat mp4 players. The i.MX range was previously known as the "DragonBall MX" family, the fifth generation of DragonBall microcontrollers. i.MX originally stood for "innovative Multimedia eXtension". The i.MX products consist of hardware (processors and development boards) and software optimized for the processor. i.MX 1 series Launched in 2001/2002, the i.MX / MX-1 series is based on the ARM920T architecture. i.MX1 = 200 MHz ARM920T i.MXS = 100 MHz ARM920T i.MXL = 150-200 MHz ARM920T i.MX 2 series The i.MX2x series is a family of processors based on the ARM9 architecture (ARM926EJ-S), designed in CMOS 90 nm process. i.MX 21 family The i.MX21 family is designed for low power handheld devices. It was launched in 2003. i.MX21 = 266 MHz ARM9 platform + CIF VPU (decode/encode) + security i.MX21S = 266 MHz ARM9 platform + security i.MX 27 family The i.MX27 family is designed for videotelephony and video surveillance. It was launched in 2007. i.MX27 = 400 MHz ARM9 platform + D1 VPU (decode/encode) + IPU + security i.MX27L = 400 MHz ARM9 platform + IPU + security i.MX 25 family The i.MX25 family was launched in 2009. It especially integrates key security features in hardware. The high-end member of the family, i.MX258, integrates a 400 MHz ARM9 CPU platform + LCDC (LCD controller) + security block and supports mDDR-SDRAM at 133 MHz. i.MX258 (industrial) = 400 MHz ARM9 platform + LCDC (with touch screen support) + security i.MX257 (consumer/industrial) = 400 MHz ARM9 platform + LCDC (with touch screen support) i.MX253 (consumer/industrial) = 400 MHz ARM9 platform + LCDC + security (no touch) i.MX255 (automotive) = 400 MHz ARM9 platform + LCDC (with touch screen support) + security i.MX251 (automotive) = 400 MHz ARM9 platform + security i.MX 23 family The i.MX233 processor (formerly known as SigmaTel STMP3780 of the STMP37xx family), launched in 2009, integrates a Power Management Unit (PMU) and a stereo audio codec within the silicon, thus removing the need for external power management chip and audio codec chip. i.MX233 (consumer) = 454 MHz ARM9 platform + LCD Controller (with touch screen support) + Pixel Pipeline + security + Power Management Unit + audio codec. Provided in 128LQFP or 169 BGA packages. i.MX 28 family The i.MX28 family was launched in 2010. It especially integrates key security features in hardware, an ADC and the power management unit. It supports mDDR, LV-DDR2, DDR2-SDRAM at 200 MHz. 
i.MX287 (industrial) = 454 MHz ARM9 platform + LCDC (with touch screen support) + security + power management + dual CAN interface + dual Ethernet + L2 Switch i.MX286 (industrial) = 454 MHz ARM9 platform + LCDC (with touch screen support) + security + power management + dual CAN interface + single Ethernet i.MX285 (automotive) = 454 MHz ARM9 platform + LCDC (with touch screen support) + security + power management + dual CAN interface i.MX283 (consumer/industrial) = 454 MHz ARM9 platform + LCDC (with touch screen support) + security + power management + single Ethernet i.MX281 (automotive) = 454 MHz ARM9 platform + security + power management + dual CAN interface + single Ethernet i.MX280 (consumer/industrial) = 454 MHz ARM9 platform + security + power management + single Ethernet i.MX 3 series The i.MX3x series is a family of processors based on the ARM11 architecture (ARM1136J(F)-S mainly), designed in CMOS 90 nm process. i.MX 31 family The i.MX31 was launched in 2005. It integrates a 532 MHz ARM1136JF-S CPU platform (with vector floating point unit, L1 caches and 128KB L2 caches) + Video Processing Unit (VPU) + 3D GPU (OpenGL ES 1.1) + IPU + security block. It supports mDDR-SDRAM at 133 MHz. The 3D and VPU acceleration is provided by the PowerVR MBX Lite. i.MX31 (consumer/industrial/automotive) = 532 MHz ARM1136 platform + VPU + 3D GPU + IPU + security i.MX31L (consumer/industrial/automotive) = 532 MHz ARM1136 platform + VPU + IPU + security i.MX 37 family The i.MX37 processor is designed for Portable Media Players. It was launched in 2008. i.MX 37 (consumer) = 532 MHz ARM1176 CPU platform + D1 VPU (multiformat D1 decode) + IPU + security block It supports mDDR-SDRAM at 133 MHz. i.MX 35 family The i.MX35 family is the replacement of i.MX31. It was launched in 2009. The high-end member of the family, i.MX357, integrates a 532 MHz ARM1136J(F)-S CPU platform (with Vector Floating Point unit, L1 caches and 128KB L2 cache) + 2.5D GPU (OpenVG 1.1) + IPU + security block. It supports DDR2-SDRAM at 133 MHz. i.MX357 (consumer/industrial) = 532 MHz ARM1136J(F)-S CPU platform + 2.5D GPU + IPU + security i.MX353 (consumer/industrial) = 532 MHz ARM1136J(F)-S CPU platform + IPU + security i.MX356 (automotive) = 532 MHz ARM1136J(F)-S CPU platform + 2.5D GPU + IPU + security i.MX355 (automotive) = 532 MHz ARM1136J(F)-S CPU platform + IPU + security i.MX351 (automotive) = i.MX355 with no LCD interface i.MX 5 series The i.MX5x series is based on the ARM Cortex A8 core. It comprises two families: the i.MX51 family (high-end multimedia devices like smartbook or automotive infotainment) and the i.MX50 family (eReaders). It is designed in CMOS 65 nm process. Freescale licensed ATI's Imageon technology in 2007, and some i.MX5 models include an Imageon z460 GPU. i.MX 51 family The high-end member of the family, i.MX515, integrates an 800 MHz ARM Cortex A8 CPU platform (with NEON co-processor, Vector Floating Point Unit, L1 caches and 256KB L2 cache) + multi-format HD 720p decode / D1 encode hardware video codecs (VPU, Video Processing Unit) + Imageon 3D GPU (OpenGL ES 2.0) + 2.5D GPU (OpenVG 1.1) + IPU + security block. It especially supports DDR2 SDRAM at 200 MHz. The imx51 family was launched in 2009. 
i.MX515 (consumer/industrial) = 800 MHz ARM Cortex A8 platform (600 MHz for industrial) + HD VPU + 3D GPU + 2.5D GPU + IPU + security i.MX513 (consumer/industrial) = 800 MHz ARM Cortex A8 platform (600 MHz for industrial) + HD VPU + IPU i.MX512 (consumer/industrial) = 800 MHz ARM Cortex A8 platform (600 MHz for industrial) + IPU i.MX516 (automotive) = 600 MHz ARM Cortex A8 platform + HD VPU + 3D GPU + 2.5D GPU + IPU + security block i.MX514 (automotive) = 600 MHz ARM Cortex A8 platform + 3D GPU + 2.5D GPU + IPU + security block i.MX 50 family The i.MX508 processor is the result of Freescale collaboration with E Ink. It is dedicated for eReaders. Launched in 2010, it integrates the E Ink display controller within the silicon to save both BOM cost and space on the PCB. It especially supports LP-DDR2 SDRAM at 400 MHz. i.MX507 (consumer) = ARM Cortex A8 platform + E Ink display controller. Builds on the i.MX508. i.MX508 (consumer) = 800 MHz ARM Cortex A8 platform + 2.5D GPU + Pixel Pipeline + E Ink display controller. i.MX 53 family i.MX535 was announced in June 2010. Shipped since the first quarter of 2011. i.MX537 (industrial) = 800 MHz ARM Cortex A8 platform + Full HD VPU (1080p decode) + 3D GPU + 2.5D GPU + IPU + security + IEEE1588 i.MX535 (consumer) = 1 GHz ARM Cortex A8 platform + Full HD VPU (1080p decode) + 3D GPU + 2.5D GPU + IPU + security i.MX536 (automotive) = 800 MHz ARM Cortex A8 platform + Full HD VPU (1080p decode) + 3D GPU + 2.5D GPU + IPU + security i.MX534 (automotive) = 800 MHz ARM Cortex A8 platform + 3D GPU + 2.5D GPU + IPU + security i.MX 6 series The i.MX 6 series are based on the ARM Cortex A9 solo, dual or quad cores (in some cases Cortex A7) and typically comes with one or more Vivante GPUs. It is designed in CMOS 40 nm process. i.MX 6 Solo, Dual and Quad were announced in January 2011, during Consumer Electronics Show in Las Vegas. "Plus" versions with 1.2 GHz are currently only available via special request to NXP. Vivante GC2000 achieves ~19 GFLOPS for a 594 MHz shader clock and ~23 GFLOPS for a 720 MHz shader clock. i.MX 7 series The i.MX 7 series is based on the low-power ARM Cortex A7 CPU core with a secondary ARM Cortex M4 real-time co-processor. It is designed 28 nm fully depleted silicon-on-insulator (FDSOI) process. So far only low-powered single and dual-core models, designed for IoT applications have been released. i.MX 7Solo and i.MX 7Dual were announced in September 2013. i.MX 8 series There are four major different series of the i.MX 8: i.MX 8 series i.MX 8M series, i.MX 8ULP series, i.MX 8X series. Each series differs significantly from each other and are not pin compatible. Within each series some versions are pin compatible. Each series also has a suffix such as Quad, Dual, Plus, Max or a combination thereof, for example: QuadMax or DualPlus. The i.MX 8 series has many variants but it is not clear how the name corresponds to a feature set. In previous CPU series the naming convention clearly corresponds to a function or feature set, but this is not the case with i.MX 8. The i.MX 8 series was announced in September 2013 and is based on the ARMv8-A 64-bit CPU architecture. NXP have written that the i.MX 8 series is designed for Driver Information Systems (car computers) and applications have been released. In May 2016 the i.MX 8 became available as a multisensory enablement kit (MEK) based on i.MX 8. 
Slides from NXP FTF found on the web indicated an initial total of 5 variants (with a main level of categorization into "Dual" and "Quad") with varying CPU and GPU capabilities. The CPU was suggested to include varying counts of Cortex-A72, Cortex-A53 and Cortex-M4 cores, while the GPU was either one or two units of the Vivante GC7000VX. Other publications supported this general picture, some even including photos of an evaluation kit named the "Multisensory Enablement Kit" (MEK), which NXP later promoted as a development support product. The i.MX 8 was announced in Q1 2017, based around three products. Two variants include four Cortex-A53 cores. All versions include one or two Cortex-A72 CPU cores and two Cortex-M4F CPU cores. All i.MX 8 SoCs include Vivante GC7000 Series GPUs. The QuadPlus uses GC7000Lite cores, while the QuadMax includes two full GC7000 GPUs. Standard Key Features: Advanced Security, Ethernet with AVB, USB 3.0 with PHY, MMC/SDIO, UART, SPI, I²C, I²S, Timers, Secure RTC, Media Processor Engine (Neon™), Integrated Power Management. i.MX 8 Main features Fast multi-OS platform deployment via advanced full-chip hardware virtualization and domain protection Deploy rich, fully independent graphics content across 4x HD screens or 1x 4K screen Ensure all displays are always-on via SafeAssure® Fail-over capable Display Controllers Incorporate Vision and Speech Recognition interactivity via a powerful vision pipeline and audio processing subsystem Rapidly deploy multiple products by utilizing pin & power compatible packages and software friendly copy-exact IP blocks Android™*, Linux®*, FreeRTOS, QNX™*, Green Hills®, Dornerworks* XEN™* Automotive AEC-Q100 Grade 3 (-40° to 125 °C Tj), Industrial (-40° to 105 °C Tj), Consumer (-20° to 105 °C Tj) Fully supported on NXP's 10 and 15-year Longevity Program i.MX 8M The i.MX 8M series was announced on January 4 at CES 2017. Main features: Up to four 1.5 GHz ARM Cortex-A53 processors Cortex-M4F for real-time processing LPDDR4, DDR4 and DDR3(L) memory support Two USB 3.0 interfaces with PHY and Type-C support Two PCIe interfaces (1-lane each) with L1 substates for fast wakeup and low power HDMI 2.0a and MIPI-DSI (4-lane) display interfaces Up to two MIPI-CSI2 (4-lane) camera interfaces Gigabit Ethernet MAC with Audio Video Bridging (AVB) and EEE capability 4K UltraHD resolution and 10-bit High Dynamic Range (HDR) in H.264, H.265 and VP9 support Up to 4Kp60 resolution on the HDMI 2.0a output and 1080p60 resolution on the MIPI-DSI (4-lane) interface OpenGL ES 3.1, OpenCL 1.2, OpenGL 3.0, OpenVG and Vulkan support i.MX 8M Mini The i.MX 8M Mini is NXP's first embedded multi-core heterogeneous applications processor built using 14LPC FinFET process technology. At its heart is a scalable core complex of up to four Arm Cortex-A53 cores running at up to 2 GHz, plus a Cortex-M4-based real-time processing domain at 400+ MHz. i.MX 8M Mini core options are used for consumer, audio, industrial, and machine learning training and inferencing applications across a range of cloud providers. 
Features Heterogeneous Multi-core Processing Architecture Quad-core Arm Cortex-A53 core up to 2 GHz Cortex-M4 at speeds of 400+MHz 1080p video encode and decode 2D and 3D graphics Display and camera interfaces Multi-channel audio and digital microphone inputs Connectivity (I2C, SAI, UART, SPI, SDIO, USB, PCIe, Gigabit Ethernet) Low-power and standard DDR memory support Multiple pin-compatible product offerings Consumer and Industrial i.MX 8X The i.MX 8X series were announced on March 14, 2017. Main features: Up to four 1.2 GHz Cortex-A35 processors Cortex-M4F for real-time processing Latest cryptography standards (AES, flashless SHE, elliptical curve cryptography, key storage) ECC memory Tensilica HiFi 4 DSP for audio pre- and post- processing, key word detection and speech recognition 28 nm FD-SOI process i.MX RT series As of August 2020, this family consists of Cortex-M7 devices (400–600 MHz with up to 2 MB of SRAM) and Cortex-M33 devices (200–300 MHz with up to 5 MB of SRAM). Rather than provide on-chip flash, this series supplies larger amounts of fast SRAM. Introduced at up to 600 MHz on a 40 nm nodes with plans for 1 GHz on 28 nm. The inaugural device from this series was the i.MX RT1050, introduced in the fall of 2017. NXP supports the open source PyTorch Glow neural-network compiler in its eIQ machine learning software. This especially targets IoT applications. As of August 2020, the i.MX RT1170 is in preproduction status. It is slated for 1 GHz performance on the Cortex-M7, and provides an additional Cortex-M4 co-processor. For peripherals, the RT1170 provides two Gb Ethernet ports, not found elsewhere in this product family. The part is fabricated in 28 nm FD-SOI. The processors run in separate clock and power domains, otherwise everything is shared between the two cores except for the private L1 caches. Related series For the automotive market a very similar series currently using ARM Cortex-A53 and/or ARM Cortex-M4 cores got presented in mid-2015 using the prefix S32. Software support Freescale proposes a layered approach of software with selection of software components optimized for its chips. The i.MX board support packages (BSP), common across all i.MX nodes, consists of kernel optimization, hardware drivers and unit tests. The company also provides multimedia Codecs (ARM and Video processing unit accelerated). i.MX also includes middleware with reuse of open source frameworks like multimedia framework plugins, power management, security/DRM or graphics (OpenGL/OpenVG). Linux Freescale i.MX development kits include a Linux software stack with a GNOME Mobile environment. On the i.MX51 family, the reference user interface is Ubuntu. The last Ubuntu version supported is 10.04.1 (still available on mirrors). Ubuntu dropped the "official" i.MX51 family support since version 10.10. Since Ubuntu 11.10 support for the i.MX53 Quickstart board is available as a preinstalled desktop or server SD card. The OpenEmbedded Linux distribution supports several i.MX platforms. Commercial Linux support is available from companies like Lanedo, TimeSys, MontaVista, Wind River Systems and Mentor Graphics. FreeBSD Support for the Freescale i.MX51 was added to FreeBSD on 2013-03-20. Support for other members of the i.MX5 family has been added since. Support for the Freescale i.MX 6 family was added to FreeBSD on 2013-10-31. NetBSD NetBSD 6.0 comes with support for the Freescale i.MX51. In version 7.0, support for i.MX 6 based boards was added. 
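On i.MX boards running Linux with one of the BSPs described above, the SoC and board can usually be identified at run time from the device tree the kernel was booted with. The sketch below is only an illustration: /proc/device-tree/model is standard on device-tree-based ARM Linux systems, while the /sys/devices/soc0 attributes are an assumption that depends on the kernel and BSP in use.

    # Illustrative sketch: identify an i.MX board from information exposed by Linux.
    from pathlib import Path

    def read_node(path):
        p = Path(path)
        # Device-tree strings are NUL-terminated; sysfs attributes end with a newline.
        if not p.exists():
            return None
        return p.read_bytes().rstrip(b"\x00\n").decode(errors="replace")

    model = read_node("/proc/device-tree/model")    # board/SoC description string, if present
    soc_id = read_node("/sys/devices/soc0/soc_id")  # present only if the SoC bus driver is enabled

    print("Board model:", model or "unknown")
    print("SoC ID:", soc_id or "unavailable")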
OpenBSD
Support for Freescale's i.MX 6 series SoCs was added to OpenBSD's head on 2013-09-06.

RISC OS
i.MX support in RISC OS has been available since 2019.

Windows CE
Freescale i.MX development kits include WinCE.

Android
In February 2010, Freescale launched an Android platform for the i.MX5x family.

Chromium
In early 2010, Freescale demoed Chromium OS running on the i.MX515 processor. The company has not disclosed any further plans for Chromium or Chrome.

Real-time OS
Freescale has a range of partners providing real-time operating systems and software solutions running on the i.MX processors, such as Trinity Convergence, Adeneo, Thundersoft, Intrinsyc, Wind River Systems, QNX, Green Hills, SYSGO and Mentor Graphics.

wolfSSL
wolfSSL includes support for the i.MX6 in all versions from (and including) wolfSSL v3.14.0. wolfSSL also provides additional support for using the Cryptographic Assistance and Assurance Module (CAAM) on the i.MX6 (an illustrative client sketch follows at the end of this entry).

Reference designs
In January 2010, Freescale announced the first platform of its Smart Application Blueprint for Rapid Engineering (SABRE) series. It is a smartbook (tablet form factor with a 7" resistive touch screen) running on the i.MX515. In February 2010, Freescale demoed the SABRE platform for eReaders, based on the i.MX515. Many more reference boards are mentioned and supported through the Freescale i.MX community website. These include:
i.MX23EVK
i.MX25PDK
i.MX28EVK
MX37PDK
i.MX35PDK
i.MX51EVK
i.MX53QSB (LOCO)

See also
List of Freescale Microcontrollers
eBook reader
Automotive infotainment
Chumby
Device tree
Smartbook
UDOO

References

External links
i.MX Applications processor - Product page
Freescale i.MX community website
Freescale Public GIT - Linux Kernel and U-Boot
Chips&Media's video codec technology used in Freescale's i.MX 6 series

ARM architecture
Microcontrollers
System on a chip
NXP Semiconductors
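To give a concrete flavour of the wolfSSL support noted above, the sketch below sets up a TLS 1.2 client over a plain TCP connection using wolfSSL's high-level API. It is a minimal, hedged example rather than an i.MX-specific one: the server address, port and CA file path are placeholders, and any CAAM offload configured at build time is expected to be transparent at this API level (consult the wolfSSL documentation for the relevant build options).

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #include <wolfssl/options.h>   /* build-time configuration generated by the wolfSSL build */
    #include <wolfssl/ssl.h>

    int main(void)
    {
        /* Plain TCP connection first; 192.0.2.1:443 is a placeholder address. */
        int sockfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(443);
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);
        if (connect(sockfd, (struct sockaddr *)&addr, sizeof addr) != 0) {
            perror("connect");
            return 1;
        }

        /* TLS on top of the socket. */
        wolfSSL_Init();
        WOLFSSL_CTX *ctx = wolfSSL_CTX_new(wolfTLSv1_2_client_method());
        if (ctx == NULL)
            return 1;

        /* CA certificate used to verify the server; the path is a placeholder. */
        if (wolfSSL_CTX_load_verify_locations(ctx, "/etc/ssl/ca-cert.pem", NULL)
                != WOLFSSL_SUCCESS) {
            fprintf(stderr, "failed to load CA certificate\n");
            return 1;
        }

        WOLFSSL *ssl = wolfSSL_new(ctx);
        wolfSSL_set_fd(ssl, sockfd);
        if (wolfSSL_connect(ssl) == WOLFSSL_SUCCESS) {
            const char msg[] = "hello over TLS\n";
            wolfSSL_write(ssl, msg, (int)strlen(msg));
        } else {
            fprintf(stderr, "TLS handshake failed\n");
        }

        wolfSSL_free(ssl);
        wolfSSL_CTX_free(ctx);
        wolfSSL_Cleanup();
        close(sockfd);
        return 0;
    }

Real use would add a matching certificate verification policy, read handling and error reporting; the sketch only shows the handshake and a single write.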
46370275
https://en.wikipedia.org/wiki/Bring%20your%20own%20encryption
Bring your own encryption
Bring your own encryption (BYOE), also known as bring your own key (BYOK), is a cloud computing security marketing model that aims to help cloud service customers use their own encryption software and manage their own encryption keys.

BYOE allows cloud service customers to use a virtualized instance of their own encryption software together with the business applications they are hosting in the cloud, in order to encrypt their data. The hosted business application is set up so that all of its data is processed by the encryption software, which writes the ciphertext version of the data to the cloud service provider's physical data store and decrypts ciphertext data on retrieval requests. This gives the enterprise perceived control of its own keys: it produces its own master key on an internal hardware security module (HSM), which is then transmitted to the HSM within the cloud. Data owners may believe their data is secured because the master key lies in the enterprise's HSM and not in that of the cloud service provider. When the data is no longer needed (i.e. when cloud users choose to abandon the cloud service), the keys can simply be deleted; that practice is called crypto-shredding. (A minimal illustrative sketch follows this entry.)

See also
Cloud computing security
Encryption
Trust no one (Internet security)

References

Cloud computing
Cloud infrastructure
Cryptography
Data protection
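The workflow described in the entry above (encrypt under a customer-held key before anything reaches the provider's store, decrypt on retrieval, and delete the key to crypto-shred the data) can be illustrated with any standard authenticated cipher. The C sketch below uses libsodium's secretbox API purely as an illustration; in a real BYOE deployment the key would be generated and held in the customer's HSM rather than in process memory, and the actual upload to and download from the cloud store are omitted.

    #include <stdio.h>
    #include <sodium.h>

    int main(void)
    {
        if (sodium_init() < 0)
            return 1;

        /* In a real BYOE deployment this key would be generated and kept in the
           customer's HSM; generating it in process memory is purely illustrative. */
        unsigned char key[crypto_secretbox_KEYBYTES];
        crypto_secretbox_keygen(key);

        const unsigned char message[] = "customer record destined for the cloud store";
        unsigned char nonce[crypto_secretbox_NONCEBYTES];
        randombytes_buf(nonce, sizeof nonce);

        /* Encrypt locally; only the ciphertext and nonce would ever be uploaded. */
        unsigned char ciphertext[crypto_secretbox_MACBYTES + sizeof message];
        crypto_secretbox_easy(ciphertext, message, sizeof message, nonce, key);

        /* On retrieval, decrypt locally with the customer-held key. */
        unsigned char decrypted[sizeof message];
        if (crypto_secretbox_open_easy(decrypted, ciphertext, sizeof ciphertext,
                                       nonce, key) != 0) {
            fprintf(stderr, "decryption failed\n");
            return 1;
        }
        printf("%s\n", decrypted);

        /* Crypto-shredding: once the key is erased (here and in the HSM),
           the ciphertext left in the provider's store is unrecoverable. */
        sodium_memzero(key, sizeof key);
        return 0;
    }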
27379159
https://en.wikipedia.org/wiki/Conference%20on%20Web%20and%20Internet%20Economics
Conference on Web and Internet Economics
Conference on Web and Internet Economics (WINE) (prior to 2013, the Workshop on Internet & Network Economics) is an interdisciplinary workshop devoted to the analysis of algorithmic and economic problems arising in the Internet and the World Wide Web. Submissions are peer reviewed, and the conference proceedings are published by Springer-Verlag. The conference has been held every year since 2005.

Previous sessions include:
WINE 2005: Hong Kong, China : Proceedings: Lecture Notes in Computer Science 3828, Springer 2005
WINE 2006: Patras, Greece : Proceedings: Lecture Notes in Computer Science 4286, Springer 2006
WINE 2007: San Diego, CA, USA : Proceedings: Lecture Notes in Computer Science 4858, Springer 2007
WINE 2008: Shanghai, China : Proceedings: Lecture Notes in Computer Science 5385, Springer 2008
WINE 2009: Rome, Italy : Proceedings: Lecture Notes in Computer Science 5929, Springer 2009
WINE 2010: Stanford, CA, USA : Proceedings: Lecture Notes in Computer Science 6484, Springer 2010
WINE 2011: Singapore : Proceedings: Lecture Notes in Computer Science 7090, Springer 2011
WINE 2012: Liverpool, UK : Proceedings: Lecture Notes in Computer Science 7695, Springer 2012
WINE 2013: Cambridge, MA, USA : Proceedings: Lecture Notes in Computer Science 8289, Springer 2013
WINE 2014: Beijing, China : Proceedings: Lecture Notes in Computer Science 8877, Springer 2014
WINE 2015: Amsterdam, The Netherlands : Proceedings: Lecture Notes in Computer Science 9470, Springer 2015
WINE 2016: Montreal, Canada : Proceedings: Lecture Notes in Computer Science 10123, Springer 2016
WINE 2017: Bangalore, India : Proceedings: Lecture Notes in Computer Science 10660, Springer 2017
WINE 2018: Oxford, UK : Proceedings: Lecture Notes in Computer Science 11316, Springer 2018
WINE 2019: New York, NY, USA : Proceedings: Lecture Notes in Computer Science 11920, Springer 2019

Sources
WINE at DBLP
https://cs.mcgill.ca/~wine2016/history.html
http://lcm.csa.iisc.ernet.in/wine2017/papers.html

Computer networking conferences
Recurring events established in 2005
37844211
https://en.wikipedia.org/wiki/Advanced%20Manufacturing%20Software
Advanced Manufacturing Software
Advanced Manufacturing Software (AMS) is an enterprise resource planning (ERP) software product based on the Microsoft Dynamics AX platform. The product is part of the Certified for Microsoft Dynamics family and is intended to assist with finance, discrete manufacturing, distribution, customer relationship management (CRM), supply chains, analytics and electronic commerce for Industrial Equipment Manufacturers, Automotive & Aerospace Manufacturers and High Tech & Electronics Manufacturers with facilities in the United States and globally. The software product is developed by I.B.I.S. Inc., which holds AMR Research's Discrete Manufacturing Industry certification.

Features
Advanced Manufacturing Software contains the following discrete-manufacturing-specific modules:
Microsoft Dynamics AX Core ERP Functionality
Lean Manufacturing
Field Service Management & Asset Service Management
Customer Relationship Management
Enterprise Asset Management
Shop Floor Integration
Integrated Electronic Data Interchange (EDI)
Plant Maintenance
Scheduling
Forecasting Tools
Financial Management
Human Resources Management (HR)
Product Engineering
Project Management for Manufacturing
Sales Estimating / Quoting
Visual Project Planning
Warehouse Management System (WMS)
Warehouse inventory transfers
Warehouse control system

References

Competition
Epicor
Infor
Syspro
Abas Business Software
Oracle E-Business Suite

External links
Advanced Manufacturing Software's Official Site
Official Advanced Manufacturing Software Microsoft Dynamics Pinpoint Profile

Microsoft Dynamics
Production and manufacturing software
Accounting software
2043010
https://en.wikipedia.org/wiki/List%20of%20Microsoft%20operating%20systems
List of Microsoft operating systems
The following is a list of operating systems written and published by Microsoft. For the codenames that Microsoft gave their operating systems, see Microsoft codenames. For another list of versions of Microsoft Windows, see List of Microsoft Windows versions.

MS-DOS
See MS-DOS Versions for a full list.

Windows
Windows 10/11 and Windows Server 2016/2019/2022

Windows Mobile
Windows Mobile 2003
Windows Mobile 2003 SE
Windows Mobile 5
Windows Mobile 6

Windows Phone

Xbox gaming
Xbox system software
Xbox 360 system software
Xbox One and Xbox Series X/S system software

OS/2

Unix and Unix-like
Xenix
Nokia X platform

Microsoft Linux distributions
Azure Sphere
SONiC
Windows Subsystem for Linux
CBL-Mariner

Other operating systems
MS-Net
LAN Manager
MIDAS
Singularity
Midori
Zune
KIN OS
Nokia Asha platform
Barrelfish

Time line

See also
List of Microsoft topics
List of operating systems

External links
Concise Microsoft O.S. Timeline, by Bravo Technology Center

Microsoft operating systems
9214409
https://en.wikipedia.org/wiki/John%20C.S.%20Lui
John C.S. Lui
John Chi-Shing Lui is a Hong Kong computer scientist. He was the chairman of the Department of Computer Science & Engineering at the Chinese University of Hong Kong. He received his Ph.D. in Computer Science from UCLA. As a Ph.D. student at UCLA, he spent a summer working in IBM's Thomas J. Watson Research Center. After graduating, he joined the IBM Almaden Research Laboratory/San Jose Laboratory and participated in various research and development projects on file systems and parallel I/O architectures. He later joined the Department of Computer Science and Engineering at the Chinese University of Hong Kong. Over the past several summers, he has been a visiting professor in the computer science departments at UCLA, Columbia University, the University of Maryland at College Park, Purdue University, the University of Massachusetts Amherst and the Università degli Studi di Torino in Italy. He is active in INFOCOM events and related service work, and he leads a group of research students in the Advanced Networking & System Research Group.

His research interests include theory and mathematics; his current research covers theoretical topics in data networks, distributed multimedia systems, network security, OS design issues, and mathematical optimization and performance evaluation theory.

Lui has received various departmental teaching awards and the CUHK Vice-Chancellor's Exemplary Teaching Award in 2001. He is a co-recipient of the IFIP WG 7.3 Performance 2005 Best Student Paper Award. He is currently an associate editor of the Performance Evaluation journal, a Fellow of the Association for Computing Machinery (2009), a Fellow of the IEEE (2010), and an elected member of IFIP WG 7.3. Lui was the TPC co-chair of ACM Sigmetrics 2005 and is on the Board of Directors of ACM Sigmetrics. Lui was the general co-chair of the International Conference on Network Protocols (ICNP) 2006. His personal interests include films and general reading.

References

External links
John C.S. Lui, CUHK, CS

Living people
Year of birth missing (living people)
Senior Members of the IEEE
Fellows of the Association for Computing Machinery
University of California, Los Angeles alumni
56880276
https://en.wikipedia.org/wiki/Adrian%20Chmielarz
Adrian Chmielarz
Adrian Chmielarz (born 1971 in Lubin) is a Polish video game designer, programmer, creative director, producer and writer specializing in adventure games and first-person shooters. Chmielarz has co-founded and led Metropolis Software, People Can Fly and The Astronauts. He is one of the most famous Polish video gaming figures, as well as one of the most divisive figures in the industry.

Life and career

Piracy business and amateur game development
Born in Lubin on April 9, 1971, Adrian Chmielarz moved into game development in a roundabout way. In 1985, as a teenager, Chmielarz attended the first Polcon science fiction convention in Błażejewko, where he first discovered an affinity for computers. He soon went through a Star Wars fan phase during which he interacted with a computer for the first time. By the late 1980s, he had become fascinated with computer games such as Knight Lore and Bugsy by reading about them in Przegląd Techniczny. He began saving for a ZX Spectrum despite never having used one before. His first experience playing games involved typing each line of code from gaming magazines into his friend's computer, though each time the computer was turned off the games were wiped, as there was no way to save them.

Chmielarz was driven by a desire to buy a computer with his own money, knowing that his parents had been forced into the black market to "put food on the table". In 1987, Chmielarz began supporting himself financially by traveling 40 miles each day to a bazaar in Wrocław to sell bootleg foreign films on VHS tapes copied from a friend (that type of copyright infringement was not illegal in Poland until 1994). The Wrocław marketplace where such goods were sold often had access to newer titles earlier. He noted that while an Englishman could buy a game on the day of release, the average Pole would often have to wait up to five weeks and become impatient during that time, leading to this natural solution. According to Chmielarz, "many people would buy games, if only it would be possible."

At one point, Chmielarz set up a distribution deal with the future founders of what would become the Polish distribution company CD Projekt, whereby they would drop cassette tapes full of pirated games at a local train station. After picking them up, to get an advantage over his competitors at the bazaar, he would add subroutines to alter gameplay (such as changing the number of lives or adding invulnerability); he would also crack the games himself and then apply his own anti-piracy protection measures to prevent other pirates from copying and selling them.

Eventually, his bootleg business expanded into a brick-and-mortar company which sold different types of media, including movies and games, while also building computers for the local business market. However, large companies started to enter Poland and the market became crowded. Although he ran a computer engineering company, times were getting tougher and only giants with big money could survive in the market. Chmielarz decided to leave his profitable business and study at Wrocław University of Technology. However, he became bored and left without finishing his degree; he would later regret wasting his time at college.

By 1990, Chmielarz had his own computers, and his obsession led to him playing and making games in all his free time. He sent the results of his experiments with creating video games to the editorial offices of the magazines Komputer and Bajtek, winning a subscription to the latter as a result. One of these early titles was an erotic game, Erotic Fun, which sold well but brought no long-term profit; he later deemed this a good business lesson about exploiting an opportunity in the gaming market. Some of his other early games included the text adventure games Kosmolot Podróżnik and Sekretny Dziennik Adriana Mole, which he wrote on the Timex Computer 2048.

Professional game development
In 1992, Adrian Chmielarz and Grzegorz Miechowski co-founded the video game development and publishing company Metropolis Software. The group realised that they could fill a gap in the untapped Polish software market, in which hundreds of thousands of people owned computers but were unable to become fully immersed in adventure games because they did not understand English. Chmielarz was not worried about the Polish gaming market being a small niche, as he knew the trail had already been blazed by the developer xLand. Furthermore, he assessed that while the local market was not yet active, it was potentially big, noting the number of people who attended conventions. This project evolved into Chmielarz's first commercially released video game, the 1993 point-and-click adventure Tajemnica Statuetki. Some of Chmielarz's next projects, such as another point-and-click adventure game, Teenagent (1995), the scrolling shooter Katharsis (1997), and the tactical role-playing game Gorky 17 (1999, known as Odium in North America), were also published outside Poland. Due to an internal conflict, Chmielarz left Metropolis in 1999.

Founding the video game development studio People Can Fly in 2002, he went on to create the successful first-person shooter Painkiller (2004) and its follow-ups Painkiller: Battle Out of Hell (2004), Painkiller: Hell Wars (2006), and Painkiller: Hell & Damnation (2012). A partnership with Epic Games and work on the Gears of War series of third-person shooters, in which he personally went from a multiplayer level designer on the first two games to being the original creative director of Gears of War: Judgment (2013), led to Epic acquiring People Can Fly in 2007 and to the creation of their next first-person shooter, Bulletstorm (2011).

After leaving People Can Fly (by then fully owned by Epic) in 2012, Chmielarz formed the independent video game studio The Astronauts, which developed and published its debut game, the first-person adventure The Vanishing of Ethan Carter, in 2014. His next game, a return to the first-person shooter genre, Witchfire, is to be released "when it's done".

Chmielarz has also written commentary articles for Polish video game magazines, including his monthly columns "Gawędy bez fai" and "Gawędy po fai" in Secret Service and NEO+. In English, he has written blogs at Gamasutra and Medium.

References

External links
Adrian Chmielarz at MobyGames

1971 births
Living people
People from Lubin
Polish video game designers
Video game directors
Video game producers
Video game critics
Video game programmers
Video game writers