id | url | title | text
---|---|---|---|
59666 | https://en.wikipedia.org/wiki/Gosling%20Emacs | Gosling Emacs | Gosling Emacs (often shortened to "Gosmacs" or "gmacs") is a discontinued Emacs implementation written in 1981 by James Gosling in C.
Gosling initially allowed Gosling Emacs to be redistributed with no formal restrictions, as the "Emacs commune" had required since the 1970s, but later sold it to UniPress. The disputes with UniPress inspired the creation of the first formal license for Emacs, which later became the GPL; an explicit license had become necessary after the U.S. Congress extended copyright to software in 1980.
Features
Gosling Emacs was especially noteworthy because of the effective redisplay code, which used a dynamic programming technique to solve the classical string-to-string correction problem. The algorithm was quite sophisticated; that section of the source was headed by a skull-and-crossbones in ASCII art, warning any would-be improver that even if they thought they understood how the display code worked, they probably did not.
Distribution
Since Gosling had permitted its unrestricted redistribution, Richard Stallman used some Gosling Emacs code in the initial version of GNU Emacs. Among other things, he rewrote part of the Gosling code headed by the skull-and-crossbones comment and made it "...shorter, faster, clearer and more flexible."
In 1983 UniPress began selling Gosling Emacs on Unix for $395 and on VMS for $2,500, marketing it as "EMACS–multi-window text editor (Gosling version)".
Controversially, UniPress asked Stallman to stop distributing his version of Emacs for Unix.
UniPress never took legal action against Stallman or his nascent Free Software Foundation, believing "hobbyists and academics could never produce an Emacs that could compete" with their product. All Gosling Emacs code was removed from GNU Emacs by version 16.56 (July 1985), with the possible exception of a few particularly involved sections of the display code. The latest versions of GNU Emacs (since August 2004) do not feature the skull-and-crossbones warning.
Extension language
Its extension language, Mocklisp, has a syntax that appears similar to Lisp, but Mocklisp does not have lists or any other structured datatypes. The Mocklisp interpreter, built by Gosling and a collaborator, was replaced by a full Lisp interpreter in GNU Emacs.
References
Christopher Kelty, "EMACS, grep, and UNIX: authorship, invention and translation in software", https://web.archive.org/web/20110728022656/http://www.burlingtontelecom.net/~ashawley/gnu/emacs/ConText-Kelty.pdf
Emacs
Unix text editors
1981 software |
39687564 | https://en.wikipedia.org/wiki/Lori%20A.%20Clarke | Lori A. Clarke | Lori A. Clarke is an American computer scientist noted for her research on software engineering.
Biography
Clarke received a B.A. in Mathematics from the University of Rochester in 1969. She received a Ph.D. in Computer Science from the University of Colorado in 1976.
She then joined the Department of Computer Science at the University of Massachusetts Amherst as an assistant professor in 1976. While there, she was promoted to associate professor in 1981 and to professor in 1986. In 2011, she became chair of the School of Computer Science, and in 2015 she became a professor emerita.
She was a board member of SIGSOFT from 1985 to 2001, serving as chair from 1993 to 1997. She was a board member of CRA from 1999 to 2009. She is also noted for her leadership in broadening participation in computing: she has been a member of the CRA-W board since 2001 and was co-chair of CRA-W from 2005 to 2008.
Awards
In 1998 she was named an ACM Fellow.
Her other notable awards include:
IEEE Fellow in 2011 for contributions to software testing and verification.
ACM SIGSOFT Outstanding Research Award, 2011
ACM SIGSOFT Distinguished Service Award, 2002
References
External links
University of Massachusetts Amherst: Lori A. Clarke, Department of Computer Science
Living people
American computer scientists
American women computer scientists
University of Massachusetts Amherst faculty
Fellows of the Association for Computing Machinery
Fellow Members of the IEEE
Year of birth missing (living people)
21st-century American women |
11335534 | https://en.wikipedia.org/wiki/Enterprise%20Integration%20Patterns | Enterprise Integration Patterns | Enterprise Integration Patterns is a book by Gregor Hohpe and Bobby Woolf and describes 65 patterns for the use of enterprise application integration and message-oriented middleware in the form of a pattern language.
The integration (messaging) pattern language
The pattern language presented in the book consists of 65 patterns structured into 9 categories, which largely follow the flow of a message from one system to the next through channels, routing, and transformations. The book includes an icon-based pattern language, sometimes nicknamed "GregorGrams" after one of the authors. Excerpts from the book (short pattern descriptions) are available on the supporting website (see External links).
Integration styles and types
The book distinguishes four top-level alternatives for integration:
File Transfer
Shared Database
Remote Procedure Invocation
Messaging
The following integration types are introduced:
Information Portal
Data Replication
Shared Business Function
Service Oriented Architecture
Distributed Business Process
Business-to-Business Integration
Tightly Coupled Interaction vs. Loosely Coupled Interaction
Messaging
Message Channel
Message
Pipes and Filters
Message Router
Message Translator
Message Endpoint
Message Channel
Point-to-Point Channel
Publish-Subscribe Channel
Datatype Channel
Invalid Message Channel
Dead Letter Channel
Guaranteed Delivery
Channel Adapter
Messaging Bridge
Message Bus
Message Construction
Command Message
Document Message
Event Message
Request-Reply
Return Address
Correlation Identifier
Message Sequence
Message Expiration
Format Indicator
Message Router
Content-Based Router
Message Filter
Dynamic Router
Recipient List
Splitter
Aggregator
Resequencer
Composed Message Processor
Scatter-Gather
Routing Slip
Process Manager
Message Broker
Message Transformation
Envelope Wrapper
Content Enricher
Content Filter
Claim Check
Normalizer
Canonical Data Model
Message Endpoint
Messaging Gateway
Messaging Mapper
Transactional Client
Polling Consumer
Event-Driven Consumer
Competing Consumers
Message Dispatcher
Selective Consumer
Durable Subscriber
Idempotent Receiver
Service Activator
System Management
Control Bus
Detour
Wire Tap
Message History
Message Store
Smart Proxy
Test Message
Channel Purger
The pattern language remains relevant today, for instance in cloud application development and integration and in the Internet of things. In 2015, the two authors reunited for the first time since the book's publication for a retrospective and interview in IEEE Software.
Implementation
Enterprise Integration Patterns are implemented in many open source integration solutions. Notable implementations include Spring Integration, Apache Camel, Red Hat Fuse, Mule ESB and Guaraná DSL.
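As a rough illustration of how these patterns look in such a framework, the following sketch uses Apache Camel's Java DSL to express a Content-Based Router; the endpoint URIs and the "orderType" header are invented for the example and are not taken from the book or from any particular system:

    import org.apache.camel.builder.RouteBuilder;

    // Hypothetical Content-Based Router: consume orders from one Message Channel
    // and forward each message to a channel chosen from the message's own content.
    public class OrderRouting extends RouteBuilder {
        @Override
        public void configure() {
            from("jms:queue:orders")                               // incoming Message Channel
                .choice()                                          // Content-Based Router
                    .when(header("orderType").isEqualTo("widget"))
                        .to("jms:queue:widgetOrders")              // widget orders go here
                    .otherwise()
                        .to("jms:queue:gadgetOrders");             // everything else goes here
        }
    }

The route reads straight off the pattern vocabulary above: a Message Channel feeds a Content-Based Router, which selects among outgoing channels based on the content of each message.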
See also
Enterprise messaging system
Loose coupling
Software design pattern
References
External links
American non-fiction books
2003 non-fiction books
Software engineering books
Software design patterns
Enterprise application integration
Message-oriented middleware
Monographs |
5334934 | https://en.wikipedia.org/wiki/Debug%20%28command%29 | Debug (command) | The line-oriented debugger DEBUG is an external command in operating systems such as DOS, OS/2 and Windows (only in 16-bit/32-bit versions).
DEBUG can act as an assembler, disassembler, or hex dump program allowing users to interactively examine memory contents (in assembly language, hexadecimal or ASCII), make changes, and selectively execute COM, EXE and other file types. It also has several subcommands which are used to access specific disk sectors, I/O ports and memory addresses.
Overview
Traditionally, all computers and operating systems have included a maintenance function, used to determine whether a program is working correctly. DEBUG was originally written by Tim Paterson to serve this purpose in 86-DOS. When Paterson began working for Microsoft in the early 1980s, he brought the program with him. DEBUG has been included in MS-DOS/PC DOS and certain versions of Microsoft Windows. Originally named DEBUG.COM, the executable was renamed to DEBUG.EXE with DOS 5.0.
Windows XP and later versions included DEBUG for the MS-DOS subsystem to maintain MS-DOS compatibility. The 16-bit DOS commands are not available on 64-bit editions of Windows.
The MS-DOS/PC DOS DEBUG has several limitations:
In assembly/disassembly modes it only supports 8086 opcodes.
It can only access 16-bit registers and not 32-bit extended registers.
When the "N" subcommand for naming files is used, the filename is stored from offset DS:5D to DS:67 (the Program Segment Prefix File Control Block area), meaning that the program can only save files in FAT 8.3 filename format.
Enhanced DEBUG packages include the DEBUG command in Novell DOS 7, OpenDOS 7.01, and DR-DOS 7.02 and higher, a reimplementation of Digital Research's Symbolic Instruction Debugger (SID/SID86), which shipped with earlier versions of DR DOS. It is fully compatible with the DEBUG command-line syntax of MS-DOS/PC DOS but offers many enhancements, including support for 16-bit and 32-bit opcodes up to the Pentium, an extended mode (/X) with dozens of additional commands and sub-modes, a much-enhanced command-line syntax with user-definable macros, and symbolic debugging facilities with named registers, loaded symbol tables, mathematical operations and base conversions, as well as a commenting disassembler. Some versions also used DPMS to function as a "stealth-mode" protected-mode debugger.
The FreeDOS version of DEBUG was developed by Paul Vojta and is licensed under the MIT License.
A 32-bit clone "DEBUGX" version supporting 32-bit DPMI programs exists as well. Andreas "Japheth" Grech, the author of the HX DOS extender, developed enhanced DEBUG versions 0.98 to 1.25, and former PC DOS developer Vernon C. Brooks added versions 1.26 to 1.32.
Syntax
DEBUG [[drive:][path] filename [parameters]]
When DEBUG is started without any parameters, the DEBUG prompt, a "-", appears. The user can then enter one of several one- or two-letter subcommands, including "A" to enter assembler mode, "D" to perform a hexadecimal dump, "T" to trace and "U" to unassemble (disassemble) a program in memory.
DEBUG can also be used as a "DEBUG script" interpreter using the following syntax.
DEBUG < filename
A script file may contain DEBUG subcommands and assembly language instructions. This method can be used to create or edit binary files from batch files.
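For example, the following hypothetical script (the file name, length and contents are made up for illustration) creates a 16-byte file of zero bytes when run as DEBUG < MAKEZERO.TXT:

    N ZEROS.BIN
    F 0100 010F 00
    R CX
    10
    W
    Q

Here N names the output file, F fills offsets 0100 through 010F with 00, R CX followed by 10 sets the length to 10h (16) bytes, W writes that many bytes starting at offset 0100, and Q exits DEBUG.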
Using for non-debugging purposes
The DEBUG utility is useful for editing binary files in an environment where only DOS is installed. It can also be used to edit disk sectors, which is one method of removing boot-sector viruses.
Availability
Although technical documentation for the DEBUG command was removed with the release of MS-DOS 3.3, the command was retained in the standard distribution, unlike what was done with EXE2BIN.
DEBUG in other operating systems
The operating systems Intel ISIS-II and iRMX 86, DEC TOPS-10 and TOPS-20, THEOS/OASIS, Zilog Z80-RIO, Stratus OpenVOS, PC-MOS, and AROS also provide a DEBUG command.
See also
List of DOS commands
DDT (CP/M command) (Dynamic Debugging Technique)
SID (Symbolic Instruction Debugger)
SYMDEB
CodeView
Turbo Debugger
SoftICE
References
External links
Debug | Microsoft Docs
Open source DEBUG implementation that comes with MS-DOS v2.0
Assemblers
Debuggers
Disassemblers
External DOS commands
Microsoft free software
OS/2 commands |
1092120 | https://en.wikipedia.org/wiki/Remote%20direct%20memory%20access | Remote direct memory access | In computing, remote direct memory access (RDMA) is a direct memory access from the memory of one computer into that of another without involving either one's operating system. This permits high-throughput, low-latency networking, which is especially useful in massively parallel computer clusters.
Overview
RDMA supports zero-copy networking by enabling the network adapter to transfer data from the wire directly to application memory or from application memory directly to the wire, eliminating the need to copy data between application memory and the data buffers in the operating system. Such transfers require no work by CPUs and no involvement of caches or context switches, and they continue in parallel with other system operations. This reduces latency in message transfer.
However, this strategy presents several problems related to the fact that the target node is not notified of the completion of the request (single-sided communications).
Acceptance
As of 2018, RDMA had achieved broader acceptance as a result of implementation enhancements that enable good performance over ordinary networking infrastructure. For example, RDMA over Converged Ethernet (RoCE) is now able to run over either lossy or lossless infrastructure. In addition, iWARP enables an Ethernet RDMA implementation at the physical layer using TCP/IP as the transport, combining the performance and latency advantages of RDMA with a low-cost, standards-based solution. The RDMA Consortium and the DAT Collaborative have played key roles in the development of RDMA protocols and APIs for consideration by standards groups such as the Internet Engineering Task Force and the Interconnect Software Consortium.
Hardware vendors have started working on higher-capacity RDMA-based network adapters, with rates of 100 Gbit/s reported. Software vendors, such as Red Hat and Oracle Corporation, support these APIs in their latest products, and engineers have started developing network adapters that implement RDMA over Ethernet.
Both Red Hat Enterprise Linux and Red Hat Enterprise MRG have support for RDMA. Microsoft supports RDMA in Windows Server 2012 via SMB Direct. VMware's ESXi product also supports RDMA as of 2015.
Common RDMA implementations include the Virtual Interface Architecture, RDMA over Converged Ethernet (RoCE), InfiniBand, Omni-Path and iWARP.
References
External links
RDMA Consortium
A Remote Direct Memory Access Protocol Specification
A Tutorial of the RDMA Model
"Why Compromise?" // HPCwire, Gilad Shainer (Mellanox Technologies), 2006
A Critique of RDMA for high-performance computing
Computer memory
Operating system technology
Local area networks |
12293 | https://en.wikipedia.org/wiki/Graphical%20user%20interface | Graphical user interface | The graphical user interface (GUI or ) is a form of user interface that allows users to interact with electronic devices through graphical icons and audio indicator such as primary notation, instead of text-based user interfaces, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.
The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, smartphones and smaller household, office and industrial controls. The term GUI tends not to be applied to other lower-resolution types of interfaces, such as video games (where the head-up display (HUD) is preferred), or to interfaces that do not use flat screens, such as volumetric displays, because the term is restricted to two-dimensional display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.
User interface and interaction design
Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks.
The visible graphical interface features of an application are sometimes referred to as chrome or GUI (pronounced gooey). Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller allows flexible structures in which the interface is independent of and indirectly linked to application functions, so the GUI can be customized easily. This allows users to select or design a different skin at will, and eases the designer's work to change the interface as user needs evolve. Good user interface design relates to users more, and to system architecture less.
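As a rough sketch of that separation, not tied to any particular GUI toolkit, the following plain-Java example keeps the model independent of the view and controller; all class and method names here are invented for illustration:

    import java.util.ArrayList;
    import java.util.List;

    // Model: holds state and notifies registered observers, knowing nothing about views.
    class CounterModel {
        private int value;
        private final List<Runnable> listeners = new ArrayList<>();
        void addListener(Runnable l) { listeners.add(l); }
        int getValue() { return value; }
        void increment() {
            value++;
            listeners.forEach(Runnable::run);   // tell observers the state changed
        }
    }

    // View: renders the model whenever it changes; could be swapped for another "skin".
    class CounterView {
        CounterView(CounterModel model) {
            model.addListener(() -> System.out.println("count = " + model.getValue()));
        }
    }

    // Controller: translates user gestures into model updates.
    class CounterController {
        private final CounterModel model;
        CounterController(CounterModel model) { this.model = model; }
        void onClick() { model.increment(); }
    }

    public class MvcSketch {
        public static void main(String[] args) {
            CounterModel model = new CounterModel();
            new CounterView(model);
            new CounterController(model).onClick();   // prints "count = 1"
        }
    }

Because the view only subscribes to model changes and the controller only forwards user input to the model, either can be replaced, for example to apply a different skin, without touching the underlying logic.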
Large widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message, or drawing. Smaller ones usually act as a user-input tool.
A GUI may be designed for the requirements of a vertical market as application-specific graphical user interfaces. Examples include automated teller machines (ATM), point of sale (POS) touchscreens at restaurants, self-service checkouts used in a retail store, airline self-ticket and check-in, information kiosks in a public space, like a train station or a museum, and monitors or control screens in an embedded industrial application which employ a real-time operating system (RTOS).
Cell phones and handheld game systems also employ application specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation multimedia center combinations.
Examples
Components
A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information.
A series of elements forming a visual language have evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, text fields, canvases, menus, pointer (WIMP) paradigm, especially in personal computers.
The WIMP style of interaction uses a virtual input device to represent the position of a pointing device's interface, most often a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices, graphics hardware, and positioning of the pointer.
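To make the WIMP elements concrete, here is a minimal, hypothetical example using Java's standard Swing toolkit: a window containing a menu and a button, both driven by pointer events (the titles and labels are made up):

    import javax.swing.*;

    // Minimal WIMP-style window: a frame with a menu bar and a clickable widget.
    public class WimpDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame window = new JFrame("WIMP demo");           // the "window"
                JMenuBar menuBar = new JMenuBar();                 // the "menus"
                JMenu fileMenu = new JMenu("File");
                JMenuItem quit = new JMenuItem("Quit");
                quit.addActionListener(e -> window.dispose());     // command picked with the pointer
                fileMenu.add(quit);
                menuBar.add(fileMenu);
                window.setJMenuBar(menuBar);

                JButton button = new JButton("Click me");          // a widget
                button.addActionListener(e -> button.setText("Clicked"));
                window.add(button);

                window.setSize(300, 200);
                window.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
                window.setVisible(true);                           // hand off to the windowing system
            });
        }
    }

From here the windowing system and window manager described above take over: they draw the window decorations, route pointer and keyboard events to the widgets, and manage the window's position among others on the desktop.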
In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment in which the display represents a desktop, on which documents and folders of documents can be placed. Window managers and other software combine to simulate the desktop environment with varying degrees of realism.
Entries may appear in a list to make space for text and details, or in a grid for compactness and larger icons with little space underneath for text. Variations in between exist, such as a list with multiple columns of items and a grid of items with rows of text extending sideways from the icon.
Multi-row and multi-column layouts commonly found on the web are "shelf" and "waterfall". The former is found on image search engines, where images appear with a fixed height but variable width, and is typically implemented with the CSS declaration display: inline-block;. A waterfall layout, found on Imgur and TweetDeck, with a fixed width but variable height per item, is usually implemented by specifying column-width:.
Post-WIMP interface
Smaller mobile devices such as personal digital assistants (PDAs) and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP user interfaces.
As of 2011, some touchscreen-based operating systems such as Apple's iOS (iPhone) and Android use the class of GUIs named post-WIMP. These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse.
Interaction
Human interface devices for efficient interaction with a GUI include a computer keyboard, especially used together with keyboard shortcuts; pointing devices for cursor (or rather pointer) control: mouse, pointing stick, touchpad, trackball and joystick; virtual keyboards; and head-up displays (translucent information devices at eye level).
There are also actions performed by programs that affect the GUI. For example, there are components like inotify or D-Bus to facilitate communication between computer programs.
History
Early efforts
Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program. It used a light pen to create and manipulate objects in engineering drawings in realtime with coordinated graphics. In the late 1960s, researchers at the Stanford Research Institute, led by Douglas Engelbart, developed the On-Line System (NLS), which used text-based hyperlinks manipulated with a then-new device: the mouse. (A 1968 demonstration of NLS became known as "The Mother of All Demos.") In the 1970s, Engelbart's ideas were further refined and extended to graphics by researchers at Xerox PARC and specifically Alan Kay, who went beyond text-based hyperlinks and used a GUI as the main interface for the Smalltalk programming language, which ran on the Xerox Alto computer, released in 1973. Most modern general-purpose GUIs are derived from this system.
The Xerox PARC user interface consisted of graphical elements such as windows, menus, radio buttons, and check boxes. The concept of icons was later introduced by David Canfield Smith, who had written a thesis on the subject under the guidance of Kay. The PARC user interface employs a pointing device along with a keyboard. These aspects can be emphasized by using the alternative term and acronym for windows, icons, menus, pointing device (WIMP). This effort culminated in the 1973 Xerox Alto, the first computer with a GUI, though the system never reached commercial production.
The first commercially available computer with a GUI was the 1979 PERQ workstation, manufactured by Three Rivers Computer Corporation. Its design was heavily influenced by the work at Xerox PARC. In 1981, Xerox eventually commercialized the Alto in the form of a new and enhanced system – the Xerox 8010 Information System – more commonly known as the Xerox Star. These early systems spurred many other GUI efforts, including Lisp machines by Symbolics and other manufacturers, the Apple Lisa (which introduced the concept of the menu bar and window controls) in 1983, the Apple Macintosh 128K in 1984, and the Atari ST with Digital Research's GEM and the Commodore Amiga in 1985. Visi On was released in 1983 for IBM PC compatible computers, but was never popular due to its high hardware demands. Nevertheless, it was a crucial influence on the contemporary development of Microsoft Windows.
Apple, Digital Research, IBM and Microsoft used many of Xerox's ideas to develop products, and IBM's Common User Access specifications formed the basis of the user interfaces used in Microsoft Windows, IBM OS/2 Presentation Manager, and the Unix Motif toolkit and window manager. These ideas evolved to create the interface found in current versions of Microsoft Windows, and in various desktop environments for Unix-like operating systems, such as macOS and Linux. Thus most current GUIs have largely common idioms.
Popularization
GUIs were a hot topic in the early 1980s. The Apple Lisa was released in 1983, and various windowing systems existed for DOS operating systems (including PC GEM and PC/GEOS). Individual applications for many platforms presented their own GUI variants. Despite the GUI's advantages, many reviewers questioned the value of the entire concept, citing hardware limits and problems in finding compatible software.
In 1984, Apple released a television commercial introducing the Apple Macintosh during CBS's telecast of Super Bowl XVIII, with allusions to George Orwell's novel Nineteen Eighty-Four. The goal of the commercial was to make people think about computers, identifying the user-friendly interface as a personal computer that departed from prior business-oriented systems; the ad became a signature representation of Apple products.
Windows 95, accompanied by an extensive marketing campaign, was a major success in the marketplace at launch and shortly became the most popular desktop operating system.
In 2007, with the iPhone and later in 2010 with the introduction of the iPad, Apple popularized the post-WIMP style of interaction for multi-touch screens, and those devices were considered to be milestones in the development of mobile devices.
The GUIs familiar to most people as of the mid-late 2010s are Microsoft Windows, macOS, and the X Window System interfaces for desktop and laptop computers, and Android, Apple's iOS, Symbian, BlackBerry OS, Windows Phone/Windows 10 Mobile, Tizen, WebOS, and Firefox OS for handheld (smartphone) devices.
Comparison to other interfaces
Command-line interfaces
Since the commands available in command line interfaces can be many, complex operations can be performed using a short sequence of words and symbols. Custom functions may be used to facilitate access to frequent actions.
Command-line interfaces are more lightweight, as they only recall information necessary for a task; for example, no preview thumbnails or graphical rendering of web pages. This allows greater efficiency and productivity once many commands are learned. But reaching this level takes some time because the command words may not be easily discoverable or mnemonic. Also, using the command line can become slow and error-prone when users must enter long commands comprising many parameters or several different filenames at once. However, windows, icons, menus, pointer (WIMP) interfaces present users with many widgets that represent and can trigger some of the system's available commands.
GUIs can become quite hard to use when dialogs are buried deep in a system or moved to different places during redesigns. Also, icons and dialog boxes are usually harder for users to script.
WIMPs extensively use modes, as the meaning of all keys and clicks on specific positions on the screen are redefined all the time. Command-line interfaces use modes only in limited forms, such as for current directory and environment variables.
Most modern operating systems provide both a GUI and some level of a CLI, although the GUIs usually receive more attention. The GUI is usually WIMP-based, although occasionally other metaphors surface, such as those used in Microsoft Bob, 3dwm, or File System Visualizer.
GUI wrappers
Graphical user interface (GUI) wrappers find a way around the command-line interface versions (CLI) of (typically) Linux and Unix-like software applications and their text-based user interfaces or typed command labels. While command-line or text-based applications allow users to run a program non-interactively, GUI wrappers atop them avoid the steep learning curve of the command-line, which requires commands to be typed on the keyboard. By starting a GUI wrapper, users can intuitively interact with, start, stop, and change its working parameters, through graphical icons and visual indicators of a desktop environment, for example.
Applications may also provide both interfaces, and when they do the GUI is usually a WIMP wrapper around the command-line version. This is especially common with applications designed for Unix-like operating systems. The latter used to be implemented first because it allowed the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also allows users to run the program in a shell script.
Three-dimensional graphical user interfaces (3D GUIs)
Several attempts have been made to create a multi-user three-dimensional environment or 3D GUI, including Sun's Project Looking Glass, Metisse, which was similar to Project Looking Glass, BumpTop, where users can manipulate documents and windows with realistic movement and physics as if they were physical documents, and the Croquet Project, which moved to the Open Cobalt and Open Croquet efforts.
The zooming user interface (ZUI) is a related technology that promises to deliver the representation benefits of 3D environments without their usability drawbacks of orientation problems and hidden objects. It is a logical advance on the GUI, blending some three-dimensional movement with two-dimensional or 2.5D vector objects. In 2006, Hillcrest Labs introduced the first zooming user interface for television.
For typical computer displays, three-dimensional is a misnomer—their displays are two-dimensional, for example, Metisse characterized itself as a "2.5-dimensional" UI. Semantically, however, most graphical user interfaces use three dimensions. With height and width, they offer a third dimension of layering or stacking screen elements over one another. This may be represented visually on screen through an illusionary transparent effect, which offers the advantage that information in background windows may still be read, if not interacted with. Or the environment may simply hide the background information, possibly making the distinction apparent by drawing a drop shadow effect over it.
Some environments use the methods of 3D graphics to project virtual three-dimensional user interface objects onto the screen. These are often shown in use in science fiction films (see below for examples). As the processing power of computer graphics hardware increases, this becomes less of an obstacle to a smooth user experience.
Three-dimensional graphics are currently mostly used in computer games, art, and computer-aided design (CAD). A three-dimensional computing environment can also be useful in other uses, like molecular graphics, aircraft design and Phase Equilibrium Calculations/Design of unit operations and chemical processes.
Technologies
The use of three-dimensional graphics has become increasingly common in mainstream operating systems, from creating attractive interfaces, termed eye candy, to functional purposes only possible using three dimensions. For example, user switching is represented by rotating a cube whose faces represent each user's workspace, and window management is represented via a Rolodex-style flipping mechanism in Windows Vista (see Windows Flip 3D). In both cases, the operating system transforms windows on the fly while continuing to update their content.
Interfaces for the X Window System have also implemented advanced three-dimensional user interfaces through compositing window managers such as Beryl, Compiz and KWin using the AIGLX or XGL architectures, allowing the use of OpenGL to animate user interactions with the desktop.
In science fiction
Three-dimensional GUIs appeared in science fiction literature and films before they were technically feasible or in common use. For example, the 1993 American film Jurassic Park features Silicon Graphics' three-dimensional file manager File System Navigator, a real-life file manager for Unix operating systems. The film Minority Report has scenes of police officers using specialized 3D data systems. In prose fiction, three-dimensional user interfaces have been portrayed as immersible environments like William Gibson's Cyberspace or Neal Stephenson's Metaverse. Many futuristic imaginings of user interfaces rely heavily on object-oriented user interface (OOUI) style and especially object-oriented graphical user interface (OOGUI) style.
See also
Apple Computer, Inc. v. Microsoft Corp.
Console user interface
Computer icon
Distinguishable interfaces
General Graphics Interface (software project)
GUI tree
Human factors and ergonomics
Look and feel
Natural user interface
Ncurses
Object-oriented user interface
Organic user interface
Rich web application
Skeuomorph
Skin (computing)
Theme (computing)
Text entry interface
User interface design
Vector-based graphical user interface
Notes
References
External links
Evolution of Graphical User Interface in last 50 years by Raj Lal
The men who really invented the GUI by Clive Akass
Graphical User Interface Gallery, screenshots of various GUIs
Marcin Wichary's GUIdebook, Graphical User Interface gallery: over 5500 screenshots of GUI, application and icon history
The Real History of the GUI by Mike Tuck
In The Beginning Was The Command Line by Neal Stephenson
3D Graphical User Interfaces (PDF) by Farid BenHajji and Erik Dybner, Department of Computer and Systems Sciences, Stockholm University
Topological Analysis of the Gibbs Energy Function (Liquid-Liquid Equilibrium Correlation Data). Including a Thermodynamic Review and a Graphical User Interface (GUI) for Surfaces/Tie-lines/Hessian matrix analysis - University of Alicante (Reyes-Labarta et al. 2015-18)
Software architecture
American inventions
3D GUIs
computer information |
10774874 | https://en.wikipedia.org/wiki/Colin%20Simpson%20%28author%29 | Colin Simpson (author) | Colin Simpson is a Canadian entrepreneur, software developer, and the author of seven textbooks, including the bestseller Principles of Electronics. With over 500,000 of his textbooks in print, Dr. Simpson is considered as an expert in the teaching of electronics and electronics simulation technology. He has won numerous awards including the Award of Excellence from the Association of Canadian Community Colleges (ACCC), the TVOntario Lifelong Learning Challenge Award, and the Codie award from the Software Publishers Association. Simpson holds two patents in electronics laboratory simulation and control systems technology, and is recognized as an authority on distance education and the integration of laboratory simulation software. He has been interviewed by the CBC, PBS, CTV, TVOntario, The Globe and Mail, Toronto Star, Chicago Tribune, and has lectured at universities around the world. Simpson has a Ph.D. in Electrical Engineering from the University of Hawaii and a Doctorate of Letters from Nipissing University.
Early years
During his tenure as an electronics professor at George Brown College in Toronto, Simpson found that students who were financially disadvantaged and unable to purchase electronics simulation software were achieving poorer grades than their counterparts who were able to purchase such products. At the time, simulation software was prohibitively expensive for a typical student, and Simpson decided to develop his own electronics circuit simulator and make it available free of charge to all students. Simpson approached computer programmer John (Bud) Skinner with this idea, and development work began on a product that ultimately became CircuitLogix. As a result of using this software, students' grades improved significantly, and it also removed a very divisive issue from the classroom. In 2005, Simpson launched the commercial version of CircuitLogix, called CircuitLogix Pro, and in 2012 it reached the milestone of 250,000 licensed users, becoming the first electronics simulation product to have a global installed base of a quarter-million customers in over 100 countries.
Simpson was one of the first electronics professors to use simulation software, and his fourth book, Principles of Electronics, was written specifically for use with simulation software. At the time, there was considerable opposition among the electronics education community regarding the use of simulation software for the delivery of electronics curriculum. Many educators felt that a "hands on" methodology was the only valid method of learning electronics, and that simulation was a less-effective substitute. Simpson embarked on a series of lectures, conference presentations and meetings with accrediting organizations throughout 1996, where he demonstrated that electronics simulation software could achieve identical results to laboratory experiments performed with real equipment.
In 1997, Simpson's Electronics Technician distance education program (ET) received approval and accreditation from the Ministry of Training, Colleges and Universities (MTCU). In its first year, the program enrolled over 500 students from 17 countries, with over 30 companies sponsoring employees, and has since become the largest distance education program of its kind in the world.
With over 10,000 students studying electronics at a distance, Simpson's ET distance education program has effectively broken down the barriers that prevent students from accessing technical course material online. Of note, the program has broken the gender barrier in the study of electronics. Typically, less than 2% of students who study electronics in colleges and universities are female; in the ET distance education program, almost 20% of the students are female, which has been attributed to the accessibility of the learning material and to the integrative multimedia courseware designed to scaffold student learning and accommodate learning style differences. The asynchronous learning methodology of Simpson's online technical programs has also attracted a large percentage of mature students in their 40s and 50s who require greater flexibility in their studies.
Robotics and beyond
In 2008, Simpson published his book, Introduction to Robotics. He also oversaw the development of a new robotics simulator software package, RoboLogix, which was completed in 2009 and was designed by John (Bud) Skinner using specifications derived from Simpson's research in robotics, algorithms, and simulation technologies. In 2009, Simpson launched the Robotics Technician online program, which presently has over 800 students in 15 countries.
In recent years, Simpson has continued his work in simulation and control systems and in 2006, launched the online PLC Technician program, which was based on his book, Programmable Logic Controllers. The program is now the largest of its kind in the world and provides training to employees in over 100 companies. In 2013, Simpson and Skinner released their first PLC simulation product, PLCLogix, which is designed to simulate the operation of Rockwell Automation's Logix 5000 PLC and is featured in the online PLC Technician program.
One of the main features of PLCLogix is its ability to simulate real world manufacturing environments using interactive 3D animations. These interactive animations are based on Simpson's Programmable Logic Controllers textbook and range from traffic lights to batch mixing to production lines and feature bipeds and other avatars that operate in the various worlds. The integration of ladder programs with these 3D worlds provides a method for programming using real-time computing and observing the operation of complex control devices and systems.
In 2014, Simpson and Skinner released LogixSim, which is a software suite consisting of CircuitLogix, RoboLogix, PLCLogix, and 3DLab. LogixSim's versatility and wide range of simulation capabilities has made it very popular as an educational technology resource in Colleges and Universities. In 2015, Simpson launched his sixth online program, Automation Technician, which uses LogixSim to provide training in automation and control systems including electro-mechanics, robotics and PLCs.
Personal life
Colin Simpson was born in North Bay, Ontario to parents of British heritage. He is the great-great grandson of renowned slavery abolitionist James Phillippo who built one of the first churches in Jamaica, Phillippo Baptist Church, and helped found several Free Villages.
In addition to his work in electronics and simulation technologies, Simpson is also an accomplished musician and producer. In his early 20s, he was a member of the recording group Champion, who achieved gold record status in Canada and were nominated for a CASBY Award in 1989. In an interview with Canadian Musician magazine, Simpson attributed his early interest in electronics to the necessity of repairing, maintaining and designing the audio equipment used by bands he performed in. Once he "retired" from the music business at the age of 26, he pursued this love of electronics as a professor, author, inventor, and innovator.
Awards
In 1996, Simpson and Joe Koenig were joint recipients of the Award of Excellence, from the Software Publishers Association for their work in simulation technologies and multimedia.
In 1998, Simpson's electronics program received the Program Excellence Award, from the Association of Canadian Community Colleges, a consortium of 155 Colleges. It was the first time a distance education program had earned this award and was noted by ACCC President, Gerald Brown, as a "landmark achievement in the field of distance education".
In 2003, Simpson's Electronics Technician program received a $1 million grant from the Government of Ontario for the development of a "virtual campus" to support students who were enrolled in 85 cities and towns throughout the province. The award was presented by TVOntario President, Isabel Bassett.
In 2014, Simpson received the Platinum Author award from McMillan-Warner Publishing for reaching 500,000 textbook sales globally.
In 2015, Simpson was awarded an Honorary Doctorate degree from Nipissing University for his global contribution to post-secondary education as an author and software developer.
Bibliography
Industrial Electronics, Prentice-Hall, 1995
Introduction to Electric Circuits and Machines, Prentice-Hall, 1992
Introduction to Robotics, McMillan-Warner, 2008
Lab Manual for Principles of Electronics, Prentice-Hall, 2002
Principles of Advanced PLCs, McMillan-Warner, 2016
Principles of DC/AC Circuits, Prentice-Hall, 1998
Principles of Electronics, Prentice-Hall, 2002
Programmable Logic Controllers, Prentice-Hall, 2006
Study Guide to Accompany Principles of Electronics, Prentice-Hall, 2002
References
Businesspeople from Toronto
Canadian educational theorists
Canadian educators
Canadian technology writers
Computer programmers
Musicians from Toronto
People from North Bay, Ontario
Writers from Toronto
Living people
Year of birth missing (living people) |
16629 | https://en.wikipedia.org/wiki/KDE | KDE | KDE is an international free software community that develops free and open-source software. As a central development hub, it provides tools and resources that allow collaborative work on this kind of software. Well-known products include the Plasma Desktop (the default desktop environment on many Linux distributions), Frameworks and a range of cross-platform applications like Krita or digiKam designed to run on Unix and Unix-like desktops, Microsoft Windows and Android.
Origins
KDE (back then called the K(ool) Desktop Environment) was founded in 1996 by Matthias Ettrich, a student at the University of Tübingen.
At the time, he was troubled by certain aspects of the Unix desktop. Among his concerns was that none of the applications looked or behaved alike. In his opinion, desktop applications of the time were too complicated for end users. In order to solve the issue, he proposed the creation of a desktop environment in which users could expect the applications to be consistent and easy to use. His initial Usenet post spurred significant interest, and the KDE project was born.
The name KDE was intended as a wordplay on the existing Common Desktop Environment, available for Unix systems. CDE was an X11-based user environment jointly developed by HP, IBM, and Sun through the X/Open consortium, with an interface and productivity tools based on the Motif graphical widget toolkit. It was supposed to be an intuitively easy-to-use desktop computer environment. The K was originally suggested to stand for "Kool", but it was quickly decided that the K should stand for nothing in particular. Therefore, the KDE initialism expanded to "K Desktop Environment" before it was dropped altogether in favor of simply KDE in a rebranding effort.
In the beginning Matthias Ettrich chose to use Trolltech's Qt framework for the KDE project. Other programmers quickly started developing KDE/Qt applications, and by early 1997, a few applications were being released. On 12 July 1998 the first version of the desktop environment, called KDE 1.0, was released. The original GPL licensed version of this toolkit only existed for platforms which used the X11 display server, but with the release of Qt 4, LGPL licensed versions are available for more platforms. This allowed KDE software based on Qt 4 or newer versions to theoretically be distributed to Microsoft Windows and OS X.
The KDE Marketing Team announced a rebranding of the KDE project components on November 24, 2009. Motivated by the perceived shift in objectives, the rebranding focused on emphasizing both the community of software creators and the various tools supplied by the KDE, rather than just the desktop environment.
What was previously known as KDE 4 was split into KDE Plasma Workspaces, KDE Applications, and KDE Platform (now KDE Frameworks) bundled as KDE Software Compilation 4. Since 2014, the name KDE no longer stands for K Desktop Environment, but for the community that produces the software.
Software releases
KDE Projects
The KDE community maintains multiple free-software projects. The project formerly referred to as KDE (or KDE SC (Software Compilation)) nowadays consists of three parts:
KDE Plasma, a graphical desktop environment with customizable layouts and panels, supporting virtual desktops and widgets. Written with Qt 5 and KDE Frameworks 5.
KDE Frameworks, a collection of libraries and software frameworks built on top of Qt (formerly known as 'kdelibs' or 'KDE Platform').
KDE Gear, utility applications (like Kdenlive or Krita) mostly built on KDE Frameworks and which are often part of the official KDE Applications release.
Other projects
KDE neon
KDE neon is a software repository that uses Ubuntu LTS as a core. It aims to provide the users with rapidly updated Qt and KDE software, while updating the rest of the OS components from the Ubuntu repositories at the normal pace. KDE maintains that it is not a "KDE distribution," but rather an up-to-date archive of KDE and Qt packages.
WikiToLearn
WikiToLearn, abbreviated WTL, is one of KDE's newer endeavors. It is a wiki (based on MediaWiki, like Wikipedia) that provides a platform to create and share open source textbooks. The idea is to have a massive library of textbooks for anyone and everyone to use and create. Its roots lie in the University of Milan, where a group of physics majors wanted to share notes and then decided that it was for everyone and not just their internal group of friends. They have become an official KDE project with several universities backing it.
Contributors
Developing KDE software is primarily a volunteer effort, although various companies, such as Novell, Nokia, or Blue Systems employ or employed developers to work on various parts of the project. Since a large number of individuals contribute to KDE in various ways (e.g. code, translation, artwork), organization of such a project is complex. A mentor program helps beginners to get started with developing and communicating within KDE projects and communities.
Communication within the community takes place via mailing lists, IRC, blogs, forums, news announcements, wikis and conferences. The community has a Code of Conduct for acceptable behavior within the community.
Development
Currently the KDE community uses the Git revision control system. The KDE GitLab instance (named "invent") gives an overview of all projects hosted by KDE's Git repository system. Phabricator is used for task management.
On 20 July 2009, KDE announced that the one millionth commit had been made to its Subversion repository. On October 11, 2009, Cornelius Schumacher, a main developer within KDE, wrote about the estimated cost (using the COCOMO model with SLOCCount) to develop a KDE software package with 4,273,291 LoC, which would be about US$175,364,716. This estimation does not include Qt, Calligra Suite, Amarok, Digikam, and other applications that are not part of KDE core.
The Core Team
The overall direction is set by the KDE Core Team. These are developers who have made significant contributions within KDE over a long period of time. This team communicates using the kde-core-devel mailing list, which is publicly archived and readable, but joining requires approval. KDE does not have a single central leader who can veto important decisions. Instead, the KDE core team consists of several dozen contributors who make decisions not through formal votes but through discussion. Developers also organize into topical teams. For example, the KDE Edu team develops free educational software. These teams work mostly independently and do not all follow a common release schedule. Each team has its own messaging channels, both on IRC and on the mailing lists.
KDE Patrons
A KDE Patron is an individual or organization supporting the KDE community by donating at least 5000 Euro (depending on the company's size) to the KDE e.V.
As of October 2017, there are six such patrons: Blue Systems, Canonical Ltd., Google, Private Internet Access, SUSE, and The Qt Company.
Community structure
Mascot
The KDE community's mascot is a green dragon named Konqi. Konqi's appearance was officially redesigned with the coming of Plasma 5, with Tyson Tan's entry (seen on the right) winning the redesign competition on the KDE Forums.
Katie is a female dragon. She was presented in 2010 and serves as a mascot for the KDE women's community.
Other dragons with different colors and professions were added to Konqi as part of the Tyson Tan redesign concept. Each dragon has a pair of letter-shaped antlers that reflect their role in the KDE community. Kandalf the wizard was the former mascot for the KDE community during its 1.x and 2.x versions. Kandalf's similarity to the character of Gandalf led to speculation that the mascot was switched to Konqi due to copyright infringement concerns, but this has never been confirmed by KDE.
KDE e.V. organization
The financial and legal matters of KDE are handled by KDE e.V., a German non-profit organization. Among others, it owns the KDE trademark and the corresponding logo. It also accepts donations on behalf of the KDE community, helps to run the servers, assists in organizing and financing conferences and meetings, but does not influence software development directly.
Local communities
In many countries, KDE has local branches. These are either informal organizations (KDE India) or like the KDE e.V., given a legal form (KDE France). The local organizations host and maintain regional websites, and organize local events, such as tradeshows, contributor meetings and social community meetings.
Identity
KDE has community identity guidelines (CIG) for definitions and recommendations which help the community to establish a unique, characteristic, and appealing design. The KDE official logo displays the white trademarked K-Gear shape on a blue square with mitred corners. Copying of the KDE Logo is subject to the LGPL. Some local community logos are derivations of the official logo.
Many KDE applications have a K in the name, mostly as an initial letter. The K in many KDE applications is obtained by spelling a word which originally begins with C or Q differently, for example Konsole and Kaffeine, while some others prefix a commonly used word with a K, for instance KGet. However, the trend is not to have a K in the name at all, such as with Stage, Spectacle, Discover and Dolphin.
Collaborations with other organizations
Wikimedia
On 23 June 2005, the chairman of the Wikimedia Foundation announced that the KDE community and the Wikimedia Foundation had begun efforts towards cooperation. Fruits of that cooperation are MediaWiki syntax highlighting in Kate and access to Wikipedia content within KDE applications, such as Amarok and Marble.
On 4 April 2008, the KDE e.V. and Wikimedia Deutschland opened shared offices in Frankfurt. In September 2009 KDE e.V. moved to shared offices with Free Software Foundation Europe in Berlin.
Free Software Foundation Europe
In May 2006, KDE e.V. became an Associate Member of the Free Software Foundation Europe (FSFE).
On 22 August 2008, KDE e.V. and FSFE jointly announced that, after working with FSFE's Freedom Task Force for one and a half years, KDE had adopted FSFE's Fiduciary Licence Agreement. Using it, KDE developers can – on a voluntary basis – assign their copyrights to KDE e.V.
In September 2009, KDE e.V. and FSFE moved into shared offices in Berlin.
Commercial enterprises
Several companies actively contribute to KDE, like Collabora, Erfrakon, Intevation GmbH, Kolab Konsortium, Klarälvdalens Datakonsult AB (KDAB), Blue Systems, and KO GmbH.
Nokia used Calligra Suite as the base for their Office Viewer application for Maemo/MeeGo. They also contracted KO GmbH to bring MS Office 2007 file format filters to Calligra. Nokia also employed several KDE developers directly, either to use KDE software for MeeGo (e.g. KCal) or as sponsorship.
The software development and consulting companies Intevation GmbH of Germany and the Swedish KDAB use Qt and KDE software – especially Kontact and Akonadi for Kolab – for their services and products, therefore both employ KDE developers.
Others
KDE participates in freedesktop.org, an effort to standardize Unix desktop interoperability.
In 2009 and 2011, GNOME and KDE co-hosted their conferences Akademy and GUADEC under the Desktop Summit label.
In December 2010 KDE e.V. became a licensee of the Open Invention Network.
Many Linux distributions and other free operating systems are involved in the development and distribution of the software, and are therefore also active in the KDE community. These include commercial distributors such as SUSE/Novell or Red Hat but also government-funded non-commercial organizations such as the Scientific and Technological Research Council of Turkey with its Linux distribution Pardus.
In October 2018, Red Hat declared that KDE Plasma was no longer supported in future updates of Red Hat Enterprise Linux, though it continues to be part of Fedora. The announcement came shortly after the announcement of the business acquisition of Red Hat by IBM for close to US$43 billion.
Activities
The two most important conferences of KDE are Akademy and Camp KDE. Each event is on a large scale, both thematically and geographically. Akademy-BR and Akademy-es are local community events.
Akademy
Akademy is the annual world summit, held each summer at varying venues in Europe. The primary goals of Akademy are to act as a community-building event, to communicate the achievements of the community, and to provide a platform for collaboration with community and industry partners. Secondary goals are to engage local people and to provide space for getting together to write code. KDE e.V. assists with procedures, advice and organization. Akademy includes a conference, the KDE e.V. general assembly, marathon coding sessions, BoFs (birds-of-a-feather sessions) and a social program. BoFs meet to discuss specific sub-projects or issues.
The KDE community held its first conference, KDE One, in Arnsberg, Germany, in 1997 to discuss the first KDE release. Initially, each conference was numbered after the release and not held regularly. Since 2003 the conferences have been held once a year, and since 2004 they have been named Akademy.
The yearly Akademy conference presents the Akademy Awards, which the KDE community gives to KDE contributors to recognize outstanding contributions to KDE. There are three awards: best application, best non-application and jury's award. The winners are chosen by the previous year's winners. The first winners received a framed picture of Konqi signed by all attending KDE developers.
Camp KDE
Camp KDE is another annual contributors' conference of the KDE community. The event provides a regional opportunity for contributors and enthusiasts to gather and share their experiences. It is free to all participants. It is intended to ensure that KDE is not seen as Euro-centric. KDE e.V. helps with travel and accommodation subsidies for presenters, BoF leaders, organizers and core contributors. It has been held in North America since 2009.
In January 2008, KDE 4.0 Release Event was held at the Google headquarters in Mountain View, California, USA to celebrate the release of KDE SC 4.0. The community realized that there was a strong demand for KDE events in the Americas, therefore Camp KDE was produced.
Camp KDE 2009, the premiere meeting of KDE in the Americas, was held at the Travellers Beach Resort in Negril, Jamaica, sponsored by Google, Intel, iXsystems, KDE e.V. and Kitware. The event included 1–2 days of presentations, BoF meetings and hackathon sessions. Camp KDE 2010 took place at the University of California, San Diego (UCSD) in La Jolla, USA. The schedule included presentations, BoFs, hackathons and a day trip. It started with a short introduction by Jeff Mitchell, the principal organizer of the conference, who recounted some of the history of Camp KDE and some statistics about the KDE community. The talks were relatively well attended, with attendance increasing over the previous year to around 70 people. On 19 January, the social event was a tour of a local brewery. Camp KDE 2011 was held at Hotel Kabuki in San Francisco, USA, and was co-located with the Linux Foundation Collaboration Summit. The schedule included presentations, hackathons and a party at Noisebridge. The conference opened with an introduction given by Celeste Lyn Paul.
SoK (Season of KDE)
Season of KDE is an outreach program hosted by the KDE community. Students are assigned mentors from the KDE community who help bring their projects to fruition.
Other community events
conf.kde.in was the first KDE and Qt conference in India. The conference, organized by KDE India, was held at R.V. College of Engineering in Bangalore, India. The first three days of the event had talks, tutorials and interactive sessions; the last two days were a focused code sprint. The conference was opened by its main organizer, Pradeepto Bhattacharya, and over 300 people attended the opening talks. The Lighting of the Auspicious Lamp ceremony was performed to open the conference. The first session was by Lydia Pintscher, who gave the talk "So much to do – so little time". At the event, Project Neon announced its return on 11 March 2011, providing nightly builds of the KDE Software Compilation. The conference was closed by keynote speaker and long-time KDE developer Sirtaj.
Día KDE (KDE Day) is an Argentine event focused on KDE, featuring talks and workshops. Its purposes are to spread the free software movement among the population of Argentina, to introduce the KDE community and the software environment it develops, to get to know and strengthen KDE-AR, and generally to bring the community together to have fun. The event is free.
A release party celebrates the release of a new version of the KDE SC (twice a year). KDE also participates in other conferences that revolve around free software.
Notable uses
Brazil's primary school education system operates computers running KDE software, with more than 42,000 schools in 4,000 cities, thus serving nearly 52 million children. The base distribution is called Educational Linux, which is based on Kubuntu. Besides this, thousands more students in Brazil use KDE products in their universities. KDE software is also running on computers in Portuguese and Venezuelan schools, with respectively 700,000 and one million systems reached.
Through Pardus, a local Linux distribution, many sections of the Turkish government make use of KDE software, including the Turkish Armed Forces, Ministry of Foreign Affairs, Ministry of National Defence, Turkish Police, and the SGK (Social Security Institution of Turkey), although these departments often do not exclusively use Pardus as their operating system.
CERN (European Organization for Nuclear Research) is using KDE software.
Germany uses KDE software in its embassies around the world, representing around 11,000 systems.
NASA used the Plasma Desktop during the Mars Mission.
Valve Corporation's new handheld gaming computer, the Steam Deck, is reported to use the Plasma Desktop as part of its environment.
See also
KDE Projects
List of KDE applications
Free software community
Trinity Desktop Environment
References
External links
KDE.News, news announcements
KDE Wikis
1996 establishments in Germany
1996 software
Free and open-source software organizations
Free software projects |
25480888 | https://en.wikipedia.org/wiki/Seadragon%20Software | Seadragon Software | Seadragon Software was a team within the Microsoft Live Labs. Its product, Seadragon, is a web optimized visualization technology that allows graphics and photos to be smoothly browsed, regardless of their size. Seadragon is the technology powering Microsoft's Silverlight, Pivot, Photosynth and the standalone cross-platform Seadragon application for iPhone and iPad.
Seadragon technology allows one to view extremely large and high resolution images without the loading time or latency typically associated with large images. The developers behind Seadragon also allow users to upload photos and create their own Seadragon style image to be viewed online.
History
Founded in 2003, the company that would eventually become Seadragon Software was originally named Sand Codex. Based in Princeton, New Jersey, Sand Codex moved to Seattle in 2004 to accommodate founder Blaise Agüera y Arcas's wife's new role at the University of Washington.
In 2005 Sand Codex received $4 million in angel and venture capital funding, including $2 million from the Madrona Venture Group. It was after this injection of capital that the company changed its name to Seadragon Software.
In early 2006, Seadragon Software was acquired by Microsoft and organized within the newly formed Live Labs, a midpoint between Microsoft's online product groups and MSR, under Dr. Gary William Flake.
Silverlight 2 was released in 2008 with the Deep Zoom feature, marking the first publicly shipped Seadragon software. Seadragon made further contributions in Silverlight 3 and announced others for Silverlight 4.
Photosynth launched in the summer of 2008; almost two years after its Community Technology Preview, the public could now create and view synths. The Photosynth team officially broke off from Seadragon to join MSN.
Implementations
Seadragon Ajax is a pure JavaScript implementation of the Seadragon technology, released by Microsoft as an open-source library. It is now under active development as OpenSeadragon.
The Deep Zoom feature of Microsoft's Silverlight technology is an adaptation of Seadragon technology.
Seadragon Mobile was an iPhone app (no longer available) created from Seadragon technology.
How Seadragon works
Seadragon technology is based around two distinct platforms, one being Asynchronous JavaScript and XML, the other being Microsoft’s Silverlight with DeepZoom application. Using the Silverlight version requires that the user downloads the Microsoft Silverlight application. Alternatively, the AJAX version requires only the standard JavaScript web plug-ins available in most browsers and portable devices. AJAX technology has allowed for the increased interaction and rich user experiences which are typically characteristic of Web 2.0 enabled websites.
For the creation of Seadragon-style content, when one uploads a picture it is converted into a number of Deep Zoom Image (DZI) format files, which can be combined to make up a Deep Zoom Collection (DZC). These Deep Zoom Images form a digital tiled mosaic of small (256×256) images, with each tile representing a portion of the image at one specific resolution. This Deep Zoom format allows only the pixels needed for a particular view on the screen to be loaded at any one time, which results in a more effective use of bandwidth and computer resources. It also means that the amount of data needing to be transferred at any one time is proportional to the number of pixels on the screen, rather than all the pixels (data) of an image being loaded at once as with standard image formats. The figurative “secret sauce” behind Seadragon is the technology that allows for the seamlessly smooth transition between the tiles and layers among the Deep Zoom Collection (DZC) files that make up an image.
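To make the tile-pyramid idea concrete, the following sketch (in Python, illustrative only: the level arithmetic follows the general power-of-two pyramid described above rather than the exact DZI specification, and all names are hypothetical) computes how many pyramid levels a large image needs and which 256×256 tiles intersect a given viewport.

```python
import math

TILE = 256  # tile size mentioned above

def max_level(width, height):
    """Highest level of a power-of-two pyramid; level 0 would be a 1x1 image."""
    return math.ceil(math.log2(max(width, height)))

def tiles_for_view(width, height, level, view):
    """Return the (column, row) tile indices that intersect a viewport
    given in full-resolution pixel coordinates (x0, y0, x1, y1)."""
    scale = 2 ** (max_level(width, height) - level)   # downsampling factor at this level
    x0, y0, x1, y1 = view
    col0, row0 = int(x0 / scale) // TILE, int(y0 / scale) // TILE
    col1, row1 = int((x1 - 1) / scale) // TILE, int((y1 - 1) / scale) // TILE
    return [(c, r) for r in range(row0, row1 + 1) for c in range(col0, col1 + 1)]

# A 40,000 x 30,000 pixel image viewed through a 1,000 x 800 pixel window
# needs only 16 of the 256-pixel tiles at the matching pyramid level:
print(max_level(40000, 30000))                         # 16
print(len(tiles_for_view(40000, 30000, 12, (0, 0, 16000, 12800))))   # 16
```

Only the tiles returned for the current level and viewport need to be fetched, which is why the data transferred stays proportional to the number of pixels on screen.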
File format
All current implementations of Seadragon technology make use of the Deep Zoom Images, consisting of either a single Deep Zoom Image or a Deep Zoom Collection.
Examples
Photosynth uses Seadragon for its D3D viewer and Silverlight for its default viewer.
Pivot uses a combination of Seadragon and WPF to render images and collections.
ChronoZoom is a timeline for Big History being developed for the International Big History Association by Microsoft Research and originally by the University of California, Berkeley and Microsoft Live Labs
The Bing search app for the iPhone builds its mapping experience on top of Seadragon Mobile.
SimpleDL uses Ajax Seadragon for its image display.
References
External links
Seadragon Ajax
OpenSeadragon
Zoomo - image sharing website, uses OpenSeadragon for its image display.
Zoomable - conversion tool, API available
The Seadragon Showcase page lists a number of examples live on the web.
Seadragon
Microsoft software
Graphics software
IOS software |
36658496 | https://en.wikipedia.org/wiki/Troy%20Book | Troy Book | Troy Book is a Middle English poem by John Lydgate relating the history of Troy from its foundation through to the end of the Trojan War. It is in five books, comprising 30,117 lines in ten-syllable couplets. The poem's major source is Guido delle Colonne's Historia destructionis Troiae.
Background
Troy Book was Lydgate's first full-scale work. It was commissioned from Lydgate by the Prince of Wales (later Henry V), who wanted a poem that would show the English language to be as fit for a grand theme as the other major literary languages, "Ywriten as wel in oure langage / As in Latyn and in Frensche it is." Lydgate tells us that he began writing the poem at four o'clock on the afternoon of Monday, 31 October 1412; he completed it in 1420.
It has been argued that Lydgate intended Troy Book as an attempt to outdo Chaucer's Trojan romance Troilus and Criseyde, and certainly the frequent recurrence of tributes to Chaucer's excellence as a poet is a notable feature of the poem. The poem emphasizes the disastrous results of political discord and militarism, and also presents the conventional medieval themes of the power of Fortune to influence earthly affairs and the vanity of worldly things.
Publication
Troy Book survives in 23 manuscripts, testifying to the popularity of the poem during the 15th century. It was printed first by Richard Pynson in 1513, and second by Thomas Marshe in 1555. A modernized version sometimes attributed to Thomas Heywood, called The Life and Death of Hector, appeared in 1614. Troy Book exercised an influence on Robert Henryson, Thomas Kyd, and Christopher Marlowe, and was one of Shakespeare's sources for Troilus and Cressida.
Criticism
Modern critics have generally made moderate claims for Troy Book’s literary merit. Antony Gibbs judged the poem to be of uneven quality, adding that "its couplet form indulges Lydgate's fatal garrulity." Douglas Gray found some good writing to praise, and particularly singled out the eloquence and pathos of some of Lydgate's rhetorical laments, descriptions, and speeches.
Reference edition
The reference edition of Troy Book is that by Henry Bergen, published as volumes 97, 103, 106 and 126 of the Early English Text Society Extra Series between 1906 and 1935.
Modern renditions
Two modernised versions of Troy Book are available:
John Lydgate's Troy Book: A Middle English Iliad (The Troy Myth in Medieval Britain Book 1) by D M Smith (2019 Kindle) - complete
John Lydgate Troy Book: The Legend of the Trojan War by D.J. Favager (2019 Kindle) - complete
Notes
Sources
External links
Introduction by Robert R. Edwards to the TEAMS edition of selections from Troy Book
The TEAMS edition
Online abbreviated version in modern English verse by D.J. Favager
Epic poems in English
1420 books
15th-century poems
Middle English poems
Poems about cities
Trojan War literature |
9634115 | https://en.wikipedia.org/wiki/Business%20informatics | Business informatics | Business informatics (BI) is a discipline combining economics, economics of digitization, business administration, information technology (IT), and concepts of computer science. Business informatics centers on creating software and hardware systems which ultimately provide organizations with effective operation based on the application of information technology. This focus on systems adds value to the analysis of economics and information technology. The BI discipline was created in Germany (in German: Wirtschaftsinformatik). It is an established academic discipline, including bachelor, master, diploma and PhD programs, in Austria, Belgium, France, Germany, Hungary, Ireland, The Netherlands, Russia, Sweden, Switzerland and Turkey, and is becoming established in an increasing number of other countries, including Australia, Bosnia and Herzegovina, Malaysia, Mexico, Poland and India.
Business informatics as an integrative discipline
BI shows similarities to information systems (IS), which is a well-established discipline originating from North America. However, a few differences make business informatics a distinct discipline of its own:
Business informatics includes information technology, like the relevant portions of applied computer science, to a larger extent than information systems do.
Business informatics includes significant construction and implementation-oriented elements. I.e. one major focus lies in the development of solutions for business problems rather than the ex post investigation of their impact.
Information systems (IS) focuses on empirically explaining the phenomena of the real world. IS has been said to have an "explanation-oriented" focus, in contrast to the "solution-oriented" focus that dominates BI. IS researchers try to explain the phenomena of acceptance and influence of IT in organizations and society by applying an empirical approach. To do so, qualitative and quantitative empirical studies are usually conducted and evaluated. In contrast, BI researchers mainly focus on the creation of IT solutions for challenges they have observed or assumed, and thereby focus more on the possible future uses of IT.
Tight integration between research and teaching following the Humboldtian ideal is another goal in business informatics. Insights gained in actual research projects become part of the curricula quite fast since most researchers are also lecturers at the same time. The pace of scientific and technological progress in BI is quite rapid, therefore subjects taught are under permanent reconsideration and revision. In its evolution, the BI discipline is fairly young. Therefore, significant hurdles have to be overcome in order to further establish its vision.
Career prospects
Specialists in Business Informatics can work both in research and in commerce. In business, there are various uses, which may vary depending on professional experience. Fields of employment may include:
Consulting
(Information) System Development
Sales
Systems Analysis and Organization
In consulting, a clear line must be drawn between strategic consulting and IT consulting.
Journal
Business & Information Systems Engineering
See also
Master of Business Informatics
References
Academic disciplines
Information systems
Information technology management |
30796401 | https://en.wikipedia.org/wiki/Stengart%20v.%20Loving%20Care%20Agency%2C%20Inc. | Stengart v. Loving Care Agency, Inc. | Stengart v. Loving Care Agency, Inc., 990 A.2d 650 (2010) was a New Jersey Supreme Court case that provided guidance to employees as to what extent they may expect privacy and confidentiality in personal e-mails composed on company-owned computers. Through its decision, the court ruled on two key issues which concluded that there should be a "reasonable" expectation of privacy in personal e-mails on company computers, and that attorney–client communication privileges and privacy should not be violated. On March 30, 2010, Chief Justice Stuart Rabner and the New Jersey Supreme Court affirmed the appellate court's decision by overturning the previous ruling made by the trial court. The trial court previously determined that a company-created policy provided sufficient warning to employees that all communications and activities performed on company-owned computers were subject to review by the employer and that there should be no expectation of privacy because of such policies.
Prior history of the case
The plaintiff, Marina Stengart, was a former employee of Loving Care Agency, Inc., a provider of care services for children and adults. In December 2007, Stengart resigned from her position at Loving Care due to gender discrimination issues, which ultimately led to an action against Loving Care Agency, Inc. Just prior to her resignation, Stengart wrote several e-mails to her lawyer from her personal, password-protected e-mail account using a company-owned laptop. In preparation for the action being taken by Stengart, Loving Care Agency, Inc. hired a computer forensics expert to create a forensic disk image of the hard drive in the computer used by Stengart while employed with the company.
During discovery, the plaintiff was made aware of Loving Care's possession of the e-mails from the company-owned laptop that Stengart used. Upon learning of the acquisition of the e-mails by Loving Care, Stengart's attorney, Donald Jacobs filed a motion in Bergen County court for all e-mails to be returned and that copies be destroyed. Mr. Jacobs' motion was denied, and the trial court held that Loving Care's use of the e-mails would be permitted in court. Mr. Jacobs' motion was denied by the trial judge on the basis that such e-mails were not protected by client-attorney privilege because the company's policy indicated that the e-mails were a part of the company's property.
Unsatisfied with the trial judge's decision to allow the e-mails to remain with the defendant, Stengart and her attorney decided to present their case to the New Jersey Superior Court, Appellate Division. The appellate judge subsequently reversed the decision of the trial court by saying that the trial court had not exhibited proper respect to attorney–client privacy, and thereby ordered that the case be heard in front of a judge in the Chancery Division. Furthermore, the appellate court also warned that Loving Care's counsel could have sanctions imposed upon them for violating the attorney–client privileges and obtaining copies of e-mails that they did not have rights to. Once again however, the decision of the court was challenged, but this time by Loving Care Agency. After the decision was made by the appellate court, Loving Care Agency then decided to take the case to the New Jersey Supreme Court for the final ruling.
New Jersey Supreme Court decision
During the New Jersey Supreme Court's hearing of the case, the court set out to determine whether or not Loving Care's computer use policy gave Stengart sufficient notice that she should not expect privacy while using the company-owned laptop. After hearing the facts of the case and reviewing the previous holdings from the lower courts, the Supreme Court affirmed the decision previously made by the appellate court that Marina Stengart had reasonable expectations that her attorney–client communications would remain private. The decision of the court was formed on the basis of a few principles: (1) that Loving Care's policy was "ambiguous" and did not specify that personal, password-protected e-mails were subject to company review, (2) that a reasonable expectation of privacy could have been created by the company's allowing of "personal use" of the computer, and (3) that company interests cannot infringe on attorney–client privileges. In addition to the faults in the wording of the company's policy, the New Jersey Supreme Court also pointed out faults in the company's ideas as to what the company can and cannot do. In this case, the court re-emphasized the importance of respecting attorney–client privileges and warned Loving Care's attorneys that by reading the e-mails, they were in violation of the Rules of Professional Conduct.
Although the ruling of the Supreme Court affirmed the earlier decision of the appellate court in favor of Marina Stengart, there was an important difference in the decisions by the two courts. This difference between the Supreme Court and the Appellate Court was in how each court decided where the case would go next. In the ruling by the appellate court, it was decided that the case should be remanded to the Chancery Division because the court determined that Loving Care's attorneys did in fact violate the Rules of Professional Conduct and that consideration should be given as to whether or not sanctions should be imposed on Loving Care's counsel. The Supreme Court, however, decided to modify the appellate court's judgment by requiring that the case be remanded back to the trial court for a decision as to what sanctions, if any, should be imposed on Loving Care's counsel.
Importance of the case
The New Jersey Supreme Court case of Stengart v. Loving Care is important for several reasons. Because law pertaining to computers is relatively new, this case set the precedent for future cases to follow. The ruling in this case helped create the boundaries between the interests of the employer and the rights of the employee and changed the way employers create and implement their policies pertaining to the use of company-owned computers and software. Prior to this case, the common belief was that a company owns all electronic information found on their computer and can use it however they desire. However, Stengart v. Loving Care challenged that idea and clarified these issues that are common among employers and employees.
One of the most important questions this case answered was in regards to whether or not an employer was legally allowed to obtain and review information from its employee's attorney–client communications. The Stengart v. Loving Care case clarified that although a company could create a policy and require an employee to abide by it, legally, they could not require an individual to give up their rights such as attorney–client privileges. The court further went on to explain that a company can limit or prevent the use of a company-owned computer; however, accessing personal and private information of the employee is off limits.
Employers also learned a valuable lesson in this case by realizing how important a clear and unambiguous policy is. The decision of Stengart v. Loving Care has led many employers to re-draft their policies so as to avoid future misunderstandings and uncertainties. Such company electronic communication and Information Technology (IT) policies now commonly describe how information can be gathered from company computers, the storage capabilities of company computers, and ways in which monitoring of information will be conducted. This also has helped employers eliminate any questions or doubt as to what information and monitoring methods the company is entitled to and also what the employee can and cannot do.
Lastly, this case has emphasized the importance of respecting attorney–client privileges and what implications can come about if these rights are neglected by an opposing counsel. This case made it clear that the forensic reviews of an employee's computer files are to follow strict guidelines and protocols. These protocols apply not only to the acquisition of the forensic files, but also to the handling and presentation of such information to the court. Also, any forensic files obtained that contain privileged material must be separated from permissible evidence and such information must be disclosed to the opposing counsel and the court for necessary action. As evident in the case of Stengart v. Loving Care, a counsel's failure to abide by these guidelines could result in the evidence being deemed inadmissible in court and could bring about court sanctions on the parties involved in the acquisition of the private and restricted information.
Although the decision of Stengart v. Loving Care applies only in the state of New Jersey, the case has provided guidance for many other states in the U.S. and has resulted in the adoption of this ruling throughout the country. With relatively few laws pertaining to computer forensics and information technology (IT), this case has proved important because of the standards it sets and the answers it provides to major issues and concerns in IT law. This case has provided an important reference that other jurisdictions can turn to, not only in the U.S. but in other countries as well, as its relevance is evident worldwide.
References
See also
Chancery Division
New Jersey Supreme Court
New Jersey Superior Court, Appellate Division
2010 in American law
Internet privacy case law
United States Internet case law
United States Third-Party Doctrine
New Jersey state case law
United States attorney–client privilege case law
Bergen County, New Jersey |
21737745 | https://en.wikipedia.org/wiki/Uzi%20Vishkin | Uzi Vishkin | Uzi Vishkin (born 1953) is a computer scientist at the University of Maryland, College Park, where he is Professor of Electrical and Computer Engineering at the University of Maryland Institute for Advanced Computer Studies (UMIACS). Uzi Vishkin is known for his work in the field of parallel computing. In 1996, he was inducted as a Fellow of the Association for Computing Machinery, with the following citation: "One of the pioneers of parallel algorithms research, Dr. Vishkin's seminal contributions played a leading role in forming and shaping what thinking in parallel has come to mean in the fundamental theory of Computer Science."
Biography
Uzi Vishkin was born in Tel Aviv, Israel. He completed his B.Sc. (1974) and M.Sc. in Mathematics at the Hebrew University, before earning his D.Sc. in Computer Science at the Technion (1981). He then spent a year working at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York. From 1982 to 1984, he worked in the department of computer science at New York University and remained affiliated with it until 1988. From 1984 until 1997 he worked in the computer science department of Tel Aviv University, serving as its chair from 1987 to 1988. Since 1988 he has been with the University of Maryland, College Park.
PRAM-on-chip
A notable rudimentary abstraction—that any single instruction available for execution in a serial program executes immediately—made serial computing simple. A consequence of this abstraction is a step-by-step (inductive) explication of the instruction available next for execution.
The rudimentary parallel abstraction behind the PRAM-on-chip concept, dubbed Immediate Concurrent Execution (ICE) in , is that indefinitely many instructions available for concurrent execution execute immediately. A consequence of ICE is a step-by-step (inductive) explication (also known as lock-step) of the instructions available next for concurrent execution. Moving beyond the serial von Neumann computer (the only successful general-purpose platform to date), the aspiration of the PRAM-on-chip concept is that computer science will again be able to augment mathematical induction with a simple one-line computing abstraction. A chronological overview of the evolution of the PRAM-on-chip concept and its hardware and software prototyping follows.
In the 1980s and 1990s, Uzi Vishkin co-authored several articles that helped build a theory of parallel algorithms in a mathematical model called parallel random access machine (PRAM), which is a generalization for parallel computing of the standard serial computing model, the random-access machine (RAM). The parallel machines needed for implementing the PRAM model had not yet been built at the time, and quite a few challenged the ability to ever build such machines. Concluding in 1997 that the transistor count on chip as implied by Moore's Law would allow building a powerful parallel computer on a single silicon chip within a decade, he developed a PRAM-On-Chip vision that called for building a parallel computer on a single chip that allows programmers to develop their algorithms for the PRAM model. He went on to invent the explicit multi-threaded (XMT) computer architecture that enables implementation of this PRAM theory, and led his research team to complete, in January 2007, a 64-processor computer named Paraleap that demonstrates the overall concept. The XMT concept was presented in , , the XMT 64-processor computer in , in and most recently in , where it was shown that lock-step parallel programming (using ICE) can achieve the same performance as the fastest hand-tuned multi-threaded code on XMT systems. Such an inductive lock-step approach stands in contrast to the multi-threaded programming approaches of other many-core systems, which are known for challenging programmers. The demonstration of XMT comprised several hardware and software components, as well as teaching PRAM algorithms in order to program the XMT Paraleap, using a language called XMTC. Since making parallel programming easy is one of the biggest challenges facing computer science today, the demonstration also sought to include teaching the basics of PRAM algorithms and XMTC programming to students ranging from high school to graduate school.
Parallel algorithms
In the field of parallel algorithms, Uzi Vishkin co-authored the paper that contributed the work-time (WT) (sometimes called work-depth) framework for conceptualizing and describing parallel algorithms. The WT framework was adopted as the basic presentation framework in the parallel algorithms books and , as well as in the class notes . In the WT framework, a parallel algorithm is first described in terms of parallel rounds. For each round, the operations to be performed are characterized, but several issues can be suppressed. For example, the number of operations at each round need not be clear, processors need not be mentioned and any information that may help with the assignment of processors to jobs need not be accounted for. Second, the suppressed information is provided. The inclusion of the suppressed information is, in fact, guided by the proof of a scheduling theorem due to . The WT framework is useful since while it can greatly simplify the initial description of a parallel algorithm, inserting the details suppressed by that initial description is often not very difficult. Similarly, first casting an algorithm in the WT framework can be very helpful for programming it in XMTC. explains the simple connection between the WT framework and the more rudimentary ICE abstraction noted above.
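As a rough illustration of the scheduling step (not taken from Vishkin's papers; the round sizes below are made up for a balanced-tree summation of 16 numbers), a WT description listing x_i operations in round i can be run on p processors in sum over i of ceil(x_i/p) steps, which is at most W/p + D for total work W and D rounds:

```python
import math

def scheduled_time(round_ops, p):
    """Run a WT-style description on p processors: the x_i operations of
    round i are assigned to processors in batches, taking ceil(x_i / p) steps."""
    return sum(math.ceil(x / p) for x in round_ops)

# Illustrative round sizes for a balanced-tree summation of 16 numbers:
rounds = [8, 4, 2, 1]                       # operations per parallel round
work, depth, p = sum(rounds), len(rounds), 3
print(scheduled_time(rounds, p))            # 7 steps on 3 processors
print(math.ceil(work / p) + depth)          # 9, the W/p + D style upper bound
```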
In the field of parallel and distributed algorithms, one of the seminal papers co-authored by Uzi Vishkin is . This work introduced an efficient parallel technique for graph coloring. The Cole–Vishkin algorithm finds a vertex colouring in an n-cycle in O(log* n) synchronous communication rounds. This algorithm is nowadays presented in many textbooks, including Introduction to Algorithms by Cormen et al., and it forms the basis of many other distributed algorithms for graph colouring.
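A minimal sketch of the bit-index color-reduction step underlying this technique is shown below (Python; it assumes the cycle is given by a successor map and that the initial colors are distinct node identifiers, and it is an illustration rather than the authors' own formulation). Each application keeps adjacent colors distinct while shrinking the number of bits per color roughly logarithmically, which is what yields the O(log* n) round count.

```python
def cole_vishkin_round(color, succ):
    """One color-reduction round on a directed cycle: each node compares its
    current color with its successor's, finds the lowest bit position where
    they differ, and takes 2*position + (its own bit there) as its new color."""
    new = {}
    for v, c in color.items():
        diff = c ^ color[succ[v]]
        k = (diff & -diff).bit_length() - 1       # lowest differing bit position
        new[v] = 2 * k + ((c >> k) & 1)
    return new

# A 5-cycle initially colored by node identity (a trivially valid coloring):
succ = {0: 1, 1: 2, 2: 3, 3: 4, 4: 0}
colors = {v: v for v in succ}
colors = cole_vishkin_round(colors, succ)         # adjacent nodes still differ
print(colors)                                     # {0: 0, 1: 1, 2: 0, 3: 1, 4: 5}
```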
Other contributions by Uzi Vishkin and various co-authors include parallel algorithms for list ranking, lowest common ancestor, spanning trees, and biconnected components.
Selected publications
Notes
References
This survey paper cites 16 papers co-authored by Vishkin
Cites 36 papers co-authored by Vishkin
This survey paper cites 20 papers co-authored by Vishkin
Cites 19 papers co-authored by Vishkin
Mathematics Genealogy Project: Uzi Vishkin.
ISI Web of Knowledge, highly cited researchers: Uzi Vishkin.
External links
Home page of Uzi Vishkin.
Home page of the XMT project, with links to a software release, on-line tutorial and to material for teaching parallelism.
Uzi Vishkin in DBLP.
American computer scientists
Israeli computer scientists
Theoretical computer scientists
Researchers in distributed computing
Fellows of the Association for Computing Machinery
1953 births
Living people |
8460815 | https://en.wikipedia.org/wiki/Information%20Sharing%20and%20Customer%20Outreach | Information Sharing and Customer Outreach | The United States government's Information Sharing and Customer Outreach office or ISCO was one of five directorates within the office of the chief information officer (CIO) under the Office of the Director of National Intelligence (ODNI). ISCO changed its name and function to Information Technology Policy, Plans, and Requirements (ITPR) in July 2007. Established by at least February 2006, ISCO is led by the Deputy Associate Director of National Intelligence for Information Sharing and Customer Outreach, which is currently Mr. Richard A. Russell. ISCO's information sharing and customer outreach responsibilities extend beyond the United States Intelligence Community and cross the entire U.S. government.
History
President George W. Bush issued Executive Order 13328 in February 2004 which established a bi-partisan commission to advise him on ways to improve the intelligence capabilities of the United States. In June 2005, based on findings of the commission, the Director of National Intelligence was empowered to establish a CIO. Another recommendation of the commission which was endorsed by the President established a Program Manager Information Sharing Environment for implementing an Information Sharing Environment (ISE) under the Director of National Intelligence. The ISE is defined in Section 1016 of IRTPA 2004, which requires the President to establish an Information Sharing Environment (ISE) “for the sharing of terrorism information in a manner consistent with national security and with applicable legal standards relating to privacy and civil liberties” and the IRTPA defines the ISE to mean “an approach that facilitates the sharing of terrorism information.” Accordingly, ISCO was established at this time to support the PM ISE as a support Directorate within the ADNI CIO.
The Intelligence Authorization Act for Fiscal Year 2007 was released on 25 May 2006. This recognized the DNI's Chief Information Officer in order to better reflect his legislative responsibilities:
The ADNI CIO Directorates formed were:
Intelligence Community Governance;
Intelligence Community Enterprise Architecture;
Information Sharing and Customer Outreach;
Intelligence Community Information Technology Management; and
Enterprise Services.
The Deputy Associate Director of National Intelligence, DADNI for short, has established relationships between ISCO and various counterparts in government and industry. Mr. Russell has spoken at several information sharing conferences and was a presenter at an ODNI-sponsored Information Sharing Symposium, 21–24 August 2006. Mr. Russell has championed the use of Intellipedia throughout the Intelligence Community. ISCO has a distributed network of Customer Advocates throughout the Intelligence Community who work to identify and remove barriers to information sharing across the enterprise. U.S. government stakeholders and their partners in industry have begun to collaborate on the business of information sharing.
Why it was created
The Homeland Security Act of 2002 and the Intelligence Reform and Terrorism Prevention Act of 2004 mandated that policies be implemented to require the sharing of information across the Intelligence Community.
Vision: "A fully integrated DNI entity dedicated to improving information sharing throughout the National Intelligence Enterprise and beyond, to include those engaged in protecting and securing America, its assets, and its people." - DADNI for Information Sharing and Customer Outreach.
Mission: ISCO shall identify, develop, advocate and support improvements in the information sharing capabilities of Intelligence Community assets with the support of the PM ISE, DHS, DOJ, DoD, and other intelligence collectors, producers, and consumers, to assure all intelligence information is available to those who need it, when they need it.
Functions
Areas of responsibilities of the ISCO are:
Identifying and responding to Intelligence Customer and DNI Needs
Inter-departmental Intelligence Needs
Outreach, Public Relations
Performance Measurement
Developing a National/State/Local/Sharing Strategy
International Sharing
Interfacing with the Program Managers
A primary function of the officers assigned to ISCO is to provide customer advocacy for the Intelligence Community. These officers are part of a distributed network of Customer Advocates spread throughout the entire U.S. government. ISCO promotes and enforces the National Information Exchange Model, or NIEM, which sets forth a defined data standard for information sharing.
Leadership
Chief Information Officer
The Intelligence Reform and Terrorism Prevention Act of 2004 established the position of the Associate Director of National Intelligence and Chief Information Officer (ADNI CIO). The ADNI CIO is charged with directing and managing activities relating to information technology for the Intelligence Community and the Office of the Director of National Intelligence.
The ADNI CIO reports directly to the Director of National Intelligence and has four primary areas of responsibility:
Manage activities relating to the information technology infrastructure and enterprise architecture of the Intelligence Community.
Exercise procurement approval authority over all information technology items related to the enterprise architecture of all Intelligence Community components.
Direct and manage all information technology-related procurement for the Intelligence Community.
Ensure all expenditures for information technology and research and development activities are consistent with the Intelligence Community enterprise architecture and the strategy of the Director for such architecture.
The current CIO is Maj. Gen. Dale Meyerrose, Ret.
Principal Deputy Associate Director of National Intelligence and Deputy Chief Information Officer
Ms. Michele R. Weslander served as the first Principal Deputy Associate Director of National Intelligence and Deputy Chief Information Officer in the Office of the Director of National Intelligence from 3 January 2006 until 1 July 2007. Prior to her appointment to the Director of National Intelligence staff, Ms. Weslander served as the Deputy Technical Executive of the National Geospatial-Intelligence Agency (NGA). Her previous assignments at the NGA were as the Director of the Horizontal Integration Office, InnoVision Directorate; and the National Geospatial-Intelligence Officer for Multi-INT. She was appointed to the Senior Executive ranks in August 2002.
Acronyms
ODNI - Office of the Director of National Intelligence
ADNI - Associate Director of National Intelligence
DADNI - Deputy Associate Director of National Intelligence
ISCO - Information Sharing Customer Outreach
ISE - Information Sharing Environment
PM ISE - Program Manager, Information Sharing Environment
ICES - Intelligence Community Enterprise Services
IIS - Institute for Information Sharing
IAS - Inter-Agency Support
NIEM - National Information Exchange Model
ICEA - Intelligence Community Enterprise Architecture
ICITM - Intelligence Community Information Technology Management
See also
Director of National Intelligence
Program Manager Information Sharing Environment
Information Sharing Council
National Information Exchange Model
Intellipedia
References
External links
Office of the Director of National Intelligence Official Site
ISE Official Site
Institute for Information Sharing
The Future of Wikis in the Government
United States intelligence agencies
Knowledge sharing |
82866 | https://en.wikipedia.org/wiki/Alcathous | Alcathous | Alcathous (; Ancient Greek: Ἀλκάθοος) was the name of several people in Greek mythology:
Alcathous, a Calydonian prince, son of King Porthaon and Euryte, daughter of Hippodamas. He was the brother of Oeneus (successor of Porthaon), Agrius, Melas, Leucopeus, and Sterope. Alcathous was the second suitor of Hippodamia and was thus slain by her father Oenomaus, like all the other suitors except Pelops.
Alcathous, a possible son of Agrius, who together with his brother Lycopeus died at the hands of his cousin Tydeus, who then went into exile to Argos.
Alcathous, son of Pelops, who killed the Cithaeronian lion.
Alcathous, one of the guardians of Thebes. He was killed by Amphiaraus during the war of the Seven against Thebes.
Alcathous, a Trojan soldier in the company of Paris and Agenor. He was a son of Aesyetes and husband of Hippodamia, sister of Aeneas. Alcathous' mother may have been Cleomestra, daughter of Tros, and he may thus have been a brother of Antenor and Assaracus. Alcathous was slain by Idomeneus, king of Crete.
Alcathous, another Trojan warrior, killed by Achilles in the Trojan War.
Alcathous, one of the companions of Aeneas. He was killed by Caedicus, one of the warriors of Turnus.
Alcathous, another, otherwise unknown personage of this name is mentioned by Virgil.
Notes
See also
2241 Alcathous
References
Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website.
Diodorus Siculus, The Library of History translated by Charles Henry Oldfather. Twelve volumes. Loeb Classical Library. Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd. 1989. Vol. 3. Books 4.59–8. Online version at Bill Thayer's Web Site
Diodorus Siculus, Bibliotheca Historica. Vol 1-2. Immanel Bekker. Ludwig Dindorf. Friedrich Vogel. in aedibus B. G. Teubneri. Leipzig. 1888–1890. Greek text available at the Perseus Digital Library.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library.
Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library
Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library.
Publius Papinius Statius, The Thebaid translated by John Henry Mozley. Loeb Classical Library Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1928. Online version at the Topos Text Project.
Publius Papinius Statius, The Thebaid. Vol I-II. John Henry Mozley. London: William Heinemann; New York: G.P. Putnam's Sons. 1928. Latin text available at the Perseus Digital Library.
Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library.
Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library.
Quintus Smyrnaeus, The Fall of Troy translated by Way. A. S. Loeb Classical Library Volume 19. London: William Heinemann, 1913. Online version at theio.com
Quintus Smyrnaeus, The Fall of Troy. Arthur S. Way. London: William Heinemann; New York: G.P. Putnam's Sons. 1913. Greek text available at the Perseus Digital Library.
Trojans
Characters in the Aeneid
Characters in Seven against Thebes
Aetolian characters in Greek mythology
Theban characters in Greek mythology
Characters in Greek mythology |
531432 | https://en.wikipedia.org/wiki/Autostereogram | Autostereogram | An autostereogram is a single-image stereogram (SIS), designed to create the visual illusion of a three-dimensional (3D) scene from a two-dimensional image. Most people with normal binocular vision can see the depth in autostereograms, but to do so they must overcome the normally automatic coordination between accommodation (focus of the eyes) and horizontal vergence (angle of the eyes). The illusion is one of depth perception and involves stereopsis: depth perception arising from the different perspective each eye has of a three-dimensional scene, called binocular parallax.
About 5% of people have disordered binocular vision that prevents them from seeing the depth in autostereograms or in conventional stereograms viewed through a stereoscope. To illustrate the depth for such people, the second image has had the binocular parallax replaced by motion parallax: the alteration in the position of points in the scene at different distances from a viewer's eyes as the viewer's head moves. That is, this is a wiggle stereogram.
The simplest type of autostereogram consists of horizontally repeating patterns (often separate images) and is known as a wallpaper autostereogram. When viewed with proper vergence, the repeating patterns appear to float above or below the background. The well-known Magic Eye books feature another type of autostereogram called a random dot autostereogram, similar to the first example, above. In this type of autostereogram, every pixel in the image is computed from a pattern strip and a depth map. A hidden 3D scene emerges when the image is viewed with the correct vergence.
Autostereograms are similar to normal stereograms except they are viewed without a stereoscope. A stereoscope presents 2D images of the same object from slightly different angles to the left eye and the right eye, allowing us to reconstruct the original object via binocular disparity. When viewed with the proper vergence, an autostereogram does the same, the binocular disparity existing in adjacent parts of the repeating 2D patterns.
There are two ways an autostereogram can be viewed: wall-eyed and cross-eyed. Most autostereograms (including those in this article) are designed to be viewed in only one way, which is usually wall-eyed. Wall-eyed viewing requires that the two eyes adopt a relatively parallel angle, while cross-eyed viewing requires a relatively convergent angle. An image designed for wall-eyed viewing if viewed correctly will appear to pop out of the background, whereas if viewed cross-eyed it will instead appear as a cut-out behind the background and may be difficult to bring entirely into focus.
History
In 1593, Giambattista della Porta viewed one page of a book with one eye and another page with the other eye. He was able to read one of the pages, the other being invisible, and switch "the visual virtue" to read the other page, the first becoming invisible. This is an early example of dissociating vergence from accommodation—a necessary ability for seeing autostereograms. However, Porta saw competition between images viewed by the two eyes, binocular rivalry.
It was not until 1838 that Charles Wheatstone published an example of cooperation between the images in the two eyes: stereopsis (binocular depth perception). He explained that the depth arose from differences in the horizontal positions of the images in the two eyes. He supported his explanation by showing flat, two-dimensional pictures with such horizontal differences, stereograms, separately to the left and right eyes through a stereoscope he invented based on mirrors. From such pairs of flat images, people experienced the illusion of depth.
In 1844, David Brewster discovered the "wallpaper effect". He noticed that when he stared at repeated patterns in wallpapers while varying his vergence, he could see them either behind the wall (with wall-eyed vergence) or in front of the wall (with cross-eyed vergence). This is the basis of wallpaper-style autostereograms.
In 1939 Boris Kompaneysky published the first random-dot stereogram, containing a hand-drawn image of the face of Venus, intended to be viewed with a device.
In 1959, Bela Julesz, vision scientist, psychologist, and MacArthur Fellow, invented random dot stereograms while working at Bell Laboratories on recognizing camouflaged objects from aerial pictures taken by spy planes. At the time, many vision scientists assumed that stereopsis required prior analysis of visible contours of images in each eye, but Julesz showed it occurs with images with no such visible contours in each of the eyes. The contours of the depth object become visible only after stereopsis had processed the differences in the horizontal positions of dots in the two eyes' images.
Japanese designer Masayuki Ito, following Julesz, created a single image stereogram in 1970 and Swiss painter Alfons Schilling created a handmade single-image stereogram in 1974, after creating more than one viewer and meeting with Julesz. Having experience with stereo imaging in holography, lenticular photography, and vectography, he developed a random-dot method based on closely spaced vertical lines in parallax.
In 1979, Christopher Tyler of Smith-Kettlewell Institute, a student of Julesz and a visual psychophysicist, combined the theories behind single-image wallpaper stereograms and random-dot stereograms (the work of Julesz and Schilling) to create the first black-and-white random-dot autostereogram with the assistance of computer programmer Maureen Clarke using Apple II and BASIC. Stork and Rocca published the first scholarly paper and provided software for generating random-dot stereograms. This type of autostereogram allows a person to see 3D shapes from a single 2D image without the aid of optical equipment. In 1991 computer programmer Tom Baccei and artist Cheri Smith created the first color random-dot autostereograms, later marketed as Magic Eye.
A computer procedure that extracts the hidden geometry back out of an autostereogram image was described by Ron Kimmel. In addition to classical stereo, it adds smoothness as an important assumption in the surface reconstruction.
In the late 1990s, many children's magazines featured autostereograms. Even gaming magazines like Nintendo Power had sections specifically made for these illusions.
How they work
Simple wallpaper
Stereopsis, or stereo vision, is the visual blending of two similar but not identical images into one, with resulting visual perception of solidity and depth. In the human brain, stereopsis results from complex mechanisms that form a three-dimensional impression by matching each point (or set of points) in one eye's view with the equivalent point (or set of points) in the other eye's view. Using binocular disparity, the brain derives the points' positions in the otherwise inscrutable z-axis (depth).
When the brain is presented with a repeating pattern like wallpaper, it has difficulty matching the two eyes' views accurately. By looking at a horizontally repeating pattern, but converging the two eyes at a point behind the pattern, it is possible to trick the brain into matching one element of the pattern, as seen by the left eye, with another (similar looking) element, beside the first, as seen by the right eye. With the typical wall-eyed viewing, this gives the illusion of a plane bearing the same pattern but located behind the real wall. The distance at which this plane lies behind the wall depends only on the spacing between identical elements.
Autostereograms use this dependence of depth on spacing to create three-dimensional images. If, over some area of the picture, the pattern is repeated at smaller distances, that area will appear closer than the background plane. If the distance of repeats is longer over some area, then that area will appear more distant (like a hole in the plane).
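The geometry behind this can be sketched with similar triangles: under wall-eyed viewing, pattern elements repeating at spacing s fuse into a plane whose distance from the eyes is D·e/(e − s), where e is the eye separation and D the distance to the image. A small illustration of this simplified model (Python; the numbers are arbitrary and accommodation is ignored):

```python
def apparent_distance(eye_separation, image_distance, repeat_spacing):
    """Distance from the eyes to the fused virtual plane under wall-eyed viewing,
    by similar triangles; requires repeat_spacing < eye_separation.
    All arguments must use the same units."""
    return image_distance * eye_separation / (eye_separation - repeat_spacing)

# Eyes 6.5 cm apart looking at a pattern 50 cm away:
print(apparent_distance(6.5, 50, 2.0))   # ~72 cm: the plane appears ~22 cm behind the wall
print(apparent_distance(6.5, 50, 3.0))   # ~93 cm: wider spacing appears deeper still
```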
People who have never been able to perceive 3D shapes hidden within an autostereogram find it hard to understand remarks such as, "the 3D image will just pop out of the background, after you stare at the picture long enough", or "the 3D objects will just emerge from the background". It helps to illustrate how 3D images "emerge" from the background from a second viewer's perspective. If the virtual 3D objects reconstructed by the autostereogram viewer's brain were real objects, a second viewer observing the scene from the side would see these objects floating in the air above the background image.
The 3D effects in the example autostereogram are created by repeating the tiger rider icons every 140 pixels on the background plane, the shark rider icons every 130 pixels on the second plane, and the tiger icons every 120 pixels on the highest plane. The closer a set of icons are packed horizontally, the higher they are lifted from the background plane. This repeat distance is referred to as the depth or z-axis value of a particular pattern in the autostereogram. The depth value is also known as Z-buffer value.
The brain is capable of almost instantly matching hundreds of patterns repeated at different intervals in order to recreate correct depth information for each pattern. An autostereogram may contain some 50 tigers of varying size, repeated at different intervals against a complex, repeated background. Yet, despite the apparent chaotic arrangement of patterns, the brain is able to place every tiger icon at its proper depth.
Depth maps
Autostereograms where patterns in a particular row are repeated horizontally with the same spacing can be read either cross-eyed or wall-eyed. In such autostereograms, both types of reading will produce similar depth interpretation, with the exception that the cross-eyed reading reverses the depth (images that once popped out are now pushed in).
However, icons in a row do not need to be arranged at identical intervals. An autostereogram with varying intervals between icons across a row presents these icons at different depth planes to the viewer. The depth for each icon is computed from the distance between it and its neighbor at the left. These types of autostereograms are designed to be read in only one way, either cross-eyed or wall-eyed. All autostereograms in this article are encoded for wall-eyed viewing, unless specifically marked otherwise. An autostereogram encoded for wall-eyed viewing will produce inverse patterns when viewed cross-eyed, and vice versa. Most Magic Eye pictures are also designed for wall-eyed viewing.
The wall-eyed depth map example autostereogram to the right encodes 3 planes across the x-axis. The background plane is on the left side of the picture. The highest plane is shown on the right side of the picture. There is a narrow middle plane in the middle of the x-axis. Starting with a background plane where icons are spaced at 140 pixels, one can raise a particular icon by shifting it a certain number of pixels to the left. For instance, the middle plane is created by shifting an icon 10 pixels to the left, effectively creating a spacing consisting of 130 pixels. The brain does not rely on intelligible icons which represent objects or concepts. In this autostereogram, patterns become smaller and smaller down the y-axis, until they look like random dots. The brain is still able to match these random dot patterns.
The distance relationship between any pixel and its counterpart in the equivalent pattern to the left can be expressed in a depth map. A depth map is simply a grayscale image which represents the distance between a pixel and its left counterpart using a grayscale value between black and white. By convention, the closer the distance is, the brighter the color becomes.
Using this convention, a grayscale depth map for the example autostereogram can be created with black, gray and white representing shifts of 0 pixels, 10 pixels and 20 pixels, respectively as shown in the greyscale example autostereogram. A depth map is the key to creation of random-dot autostereograms.
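In code, this convention is simply a linear mapping from gray level to pixel shift; a minimal sketch (Python, using the 20-pixel maximum shift of the example above):

```python
def shift_from_gray(gray, max_shift=20):
    """Map a grayscale depth value (0 = black = far, 255 = white = near)
    to a leftward pixel shift, matching the 0/10/20-pixel example above."""
    return round(gray / 255 * max_shift)

print(shift_from_gray(0), shift_from_gray(128), shift_from_gray(255))   # 0 10 20
```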
Random-dot
A computer program can take a depth map and an accompanying pattern image to produce an autostereogram. The program tiles the pattern image horizontally to cover an area whose size is identical to the depth map. Conceptually, at every pixel in the output image, the program looks up the grayscale value of the equivalent pixel in the depth map image, and uses this value to determine the amount of horizontal shift required for the pixel.
One way to accomplish this is to make the program scan every line in the output image pixel-by-pixel from left to right. It seeds the first series of pixels in a row from the pattern image. Then it consults the depth map to retrieve appropriate shift values for subsequent pixels. For every pixel, it subtracts the shift from the width of the pattern image to arrive at a repeat interval. It uses this repeat interval to look up the color of the counterpart pixel to the left and uses its color as the new pixel's own color.
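The scan just described fits in a few lines. The sketch below (Python; the 140-pixel pattern width and 20-pixel maximum shift echo the earlier figures, the pattern strip is seeded with random dots, and the result is a bare grid of grayscale values rather than an image file) follows the same left-to-right, copy-from-the-left procedure:

```python
import random

def random_dot_autostereogram(depth_map, pattern_width=140, max_shift=20):
    """Build a wall-eyed autostereogram row by row from a grayscale depth map
    (a 2-D list of values 0-255, brighter = nearer), seeding each row with
    random dots and copying shifted pixels from the left, as described above."""
    height, width = len(depth_map), len(depth_map[0])
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if x < pattern_width:
                out[y][x] = random.randint(0, 255)          # seed strip of random dots
            else:
                shift = depth_map[y][x] * max_shift // 255  # nearer pixels shift more
                interval = pattern_width - shift            # shorter repeat = closer plane
                out[y][x] = out[y][x - interval]            # copy counterpart to the left
    return out

# A flat background with a raised rectangle in the middle:
depth = [[255 if 200 < x < 400 and 100 < y < 200 else 0 for x in range(600)]
         for y in range(300)]
image = random_dot_autostereogram(depth)    # grayscale pixel values, ready to save or display
```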
Unlike the flat depth planes created by simple wallpaper autostereograms, subtle changes in spacing specified by the depth map can create the illusion of smooth gradients in distance. This is possible because the grayscale depth map allows individual pixels to be placed on one of 2^n depth planes, where n is the number of bits used by each pixel in the depth map. In practice, the total number of depth planes is determined by the number of pixels used for the width of the pattern image. Each grayscale value must be translated into pixel space in order to shift pixels in the final autostereogram. As a result, the number of depth planes must be smaller than the pattern width.
The fine-tuned gradient requires a pattern image more complex than standard repeating-pattern wallpaper, so typically a pattern consisting of repeated random dots is used. When the autostereogram is viewed with proper viewing technique, a hidden 3D scene emerges. Autostereograms of this form are known as Random Dot Autostereograms.
Smooth gradients can also be achieved with an intelligible pattern, assuming that the pattern is complex enough and does not have big, horizontal, monotonic patches. A big area painted with monotonic color without change in hue and brightness does not lend itself to pixel shifting, as the result of the horizontal shift is identical to the original patch. The following depth map of a shark with smooth gradient produces a perfectly readable autostereogram, even though the 2D image contains small monotonic areas; the brain is able to recognize these small gaps and fill in the blanks (illusory contours). While intelligible, repeated patterns are used instead of random dots, this type of autostereogram is still known by many as a Random Dot Autostereogram, because it is created using the same process.
Animated
When a series of autostereograms are shown one after another, in the same way moving pictures are shown, the brain perceives an animated autostereogram. If all autostereograms in the animation are produced using the same background pattern, it is often possible to see faint outlines of parts of the moving 3D object in the 2D autostereogram image without wall-eyed viewing; the constantly shifting pixels of the moving object can be clearly distinguished from the static background plane. To eliminate this side effect, animated autostereograms often use shifting background in order to disguise the moving parts.
When a regular repeating pattern is viewed on a CRT monitor as if it were a wallpaper autostereogram, it is usually possible to see depth ripples. This can also be seen in the background to a static, random-dot autostereogram. These are caused by the sideways shifts in the image due to small changes in the deflection sensitivity (linearity) of the line scan, which are then interpreted as depth. This effect is especially apparent at the left hand edge of the screen where the scan speed is still settling after the flyback phase. On a TFT LCD, which functions differently, this effect does not occur. Higher quality CRT displays also have better linearity and exhibit less or none of this effect.
Mechanisms for viewing
Much advice exists about seeing the intended three-dimensional image in an autostereogram. While some people may quickly see the 3D image in an autostereogram with little effort, others must learn to train their eyes to decouple eye convergence from lens focusing.
Not every person can see the 3D illusion in autostereograms. Because autostereograms are constructed based on stereo vision, persons with a variety of visual impairments, even those affecting only one eye, are unable to see the three-dimensional images.
People with amblyopia (also known as lazy eye) are unable to see the three-dimensional images. Children with poor or dysfunctional eyesight during a critical period in childhood may grow up stereoblind, as their brains are not stimulated by stereo images during the critical period. If such a vision problem is not corrected in early childhood, the damage becomes permanent and the adult will never be able to see autostereograms. It is estimated that some 1 percent to 5 percent of the population is affected by amblyopia.
3D perception
Depth perception results from many monocular and binocular visual clues. For objects relatively close to the eyes, binocular vision plays an important role in depth perception. Binocular vision allows the brain to create a single Cyclopean image and to attach a depth level to each point in it.
The brain uses coordinate shift (also known as parallax) of matched objects to identify depth of these objects. The depth level of each point in the combined image can be represented by a grayscale pixel on a 2D image, for the benefit of the reader. The closer a point appears to the brain, the brighter it is painted. Thus, the way the brain perceives depth using binocular vision can be captured by a depth map (Cyclopean image) painted based on coordinate shift.
The eye operates like a photographic camera. It has an adjustable iris which can open (or close) to allow more (or less) light to enter the eye. As with any camera except pinhole cameras, it needs to focus light rays entering through the iris (aperture in a camera) so that they focus on a single point on the retina in order to produce a sharp image. The eye achieves this goal by adjusting a lens behind the cornea to refract light appropriately.
When a person stares at an object, the two eyeballs rotate sideways to point to the object, so that the object appears at the center of the image formed on each eye's retina. In order to look at a nearby object, the two eyeballs rotate towards each other so that their eyesight can converge on the object. This is referred to as cross-eyed viewing. To see a faraway object, the two eyeballs diverge to become almost parallel to each other. This is known as wall-eyed viewing, where the convergence angle is much smaller than that in cross-eyed viewing.
Stereo-vision based on parallax allows the brain to calculate depths of objects relative to the point of convergence. It is the convergence angle that gives the brain the absolute reference depth value for the point of convergence from which absolute depths of all other objects can be inferred.
Simulated 3D perception
The eyes normally focus and converge at the same distance in a process known as accommodative convergence. That is, when looking at a faraway object, the brain automatically flattens the lenses and rotates the two eyeballs for wall-eyed viewing. It is possible to train the brain to decouple these two operations. This decoupling has no useful purpose in everyday life, because it prevents the brain from interpreting objects in a coherent manner. To see a man-made picture such as an autostereogram where patterns are repeated horizontally, however, decoupling of focusing from convergence is crucial.
By focusing the lenses on a nearby autostereogram where patterns are repeated and by converging the eyeballs at a distant point behind the autostereogram image, one can trick the brain into seeing 3D images. If the patterns received by the two eyes are similar enough, the brain will consider these two patterns a match and treat them as coming from the same imaginary object. This type of visualization is known as wall-eyed viewing, because the eyeballs adopt a wall-eyed convergence on a distant plane, even though the autostereogram image is actually closer to the eyes. Because the two eyeballs converge on a plane farther away, the perceived location of the imaginary object is behind the autostereogram. The imaginary object also appears bigger than the patterns on the autostereogram because of foreshortening.
The following autostereogram shows three rows of repeated patterns. Each pattern is repeated at a different interval to place it on a different depth plane. The two non-repeating lines can be used to verify correct wall-eyed viewing. When the autostereogram is correctly interpreted by the brain using wall-eyed viewing, and one stares at the dolphin in the middle of the visual field, the brain should see two sets of flickering lines, as a result of binocular rivalry.
While there are six dolphin patterns in the autostereogram, the brain should see seven "apparent" dolphins on the plane of the autostereogram. This is a side effect of the pairing of similar patterns by the brain. There are five pairs of dolphin patterns in this image. This allows the brain to create five apparent dolphins. The leftmost pattern and the rightmost pattern by themselves have no partner, but the brain tries to assimilate these two patterns onto the established depth plane of adjacent dolphins despite binocular rivalry. As a result, there are seven apparent dolphins, with the leftmost and the rightmost ones appearing with a slight flicker, not dissimilar to the two sets of flickering lines observed when one stares at the 4th apparent dolphin.
Because of foreshortening, the difference in convergence needed to see repeated patterns on different planes causes the brain to attribute different sizes to patterns with identical 2D sizes. In the autostereogram of three rows of cubes, while all cubes have the same physical 2D dimensions, the ones on the top row appear bigger, because they are perceived as farther away than the cubes on the second and third rows.
Viewing techniques
If one has two eyes, fairly healthy eyesight, and no neurological conditions which prevent the perception of depth, then one is capable of learning to see the images within autostereograms. "Like learning to ride a bicycle or to swim, some pick it up immediately, while others have a harder time."
As with a photographic camera, it is easier to make the eye focus on an object when there is intense ambient light. With intense lighting, the eye can constrict the pupil, yet allow enough light to reach the retina. The more the eye resembles a pinhole camera, the less it depends on focusing through the lens. In other words, the degree of decoupling between focusing and convergence needed to visualize an autostereogram is reduced. This places less strain on the brain. Therefore, it may be easier for first-time autostereogram viewers to "see" their first 3D images if they attempt this feat with bright lighting.
Vergence control is important in being able to see 3D images. Thus it may help to concentrate on converging/diverging the two eyes to shift images that reach the two eyes, instead of trying to see a clear, focused image. Although the lens adjusts reflexively in order to produce clear, focused images, voluntary control over this process is possible. The viewer alternates instead between converging and diverging the two eyes, in the process seeing "double images" typically seen when one is drunk or otherwise intoxicated. Eventually the brain will successfully match a pair of patterns reported by the two eyes and lock onto this particular degree of convergence. The brain will also adjust eye lenses to get a clear image of the matched pair. Once this is done, the images around the matched patterns quickly become clear as the brain matches additional patterns using roughly the same degree of convergence.
When one moves one's attention from one depth plane to another (for instance, from the top row of the chessboard to the bottom row), the two eyes need to adjust their convergence to match the new repeating interval of patterns. If the level of change in convergence is too high during this shift, sometimes the brain can lose the hard-earned decoupling between focusing and convergence. For a first-time viewer, therefore, it may be easier to see the autostereogram, if the two eyes rehearse the convergence exercise on an autostereogram where the depth of patterns across a particular row remains constant.
In a random dot autostereogram, the 3D image is usually shown in the middle of the autostereogram against a background depth plane (see the shark autostereogram). It may help to establish proper convergence first by staring at either the top or the bottom of the autostereogram, where patterns are usually repeated at a constant interval. Once the brain locks onto the background depth plane, it has a reference convergence degree from which it can then match patterns at different depth levels in the middle of the image.
The majority of autostereograms, including those in this article, are designed for divergent (wall-eyed) viewing. One way to help the brain concentrate on divergence instead of focusing is to hold the picture in front of the face, with the nose touching the picture. With the picture so close to their eyes, most people cannot focus on the picture. The brain may give up trying to move eye muscles in order to get a clear picture. If one slowly pulls back the picture away from the face, while refraining from focusing or rotating eyes, at some point the brain will lock onto a pair of patterns when the distance between them matches the current convergence degree of the two eyeballs.
Another way is to stare at an object behind the picture in an attempt to establish proper divergence, while keeping part of the eyesight fixed on the picture to convince the brain to focus on the picture. A modified method has the viewer focus on their reflection on a reflective surface of the picture, which the brain perceives as being located twice as far away as the picture itself. This may help persuade the brain to adopt the required divergence while focusing on the nearby picture.
For crossed-eyed autostereograms, a different approach needs to be taken. The viewer may hold one finger between their eyes and move it slowly towards the picture, maintaining focus on the finger at all times, until they are correctly focused on the spot that will allow them to view the illusion.
None of these techniques, however, is known to overcome stereoblindness, especially in persons for whom it is, or may be, permanent.
Terminology
Stereogram and autostereogram
Stereogram was originally used to describe a pair of 2D images viewed in a stereoscope to present a 3D image to viewers. The "auto" in autostereogram describes an image that does not require a stereoscope. The term stereogram is now often used interchangeably with autostereogram. Dr. Christopher Tyler, inventor of the autostereogram, consistently refers to single image stereograms as autostereograms to distinguish them from other forms of stereograms.
Random dot stereogram (RDS)
Random dot stereogram describes a pair of 2D images containing random dots which, when viewed with a stereoscope, produce a 3D image. The term is now often used interchangeably with random dot autostereogram.
Single image stereogram (SIS)
Single image stereogram (SIS). SIS differs from earlier stereograms in its use of a single 2D image instead of a stereo pair and is viewed without a device. Thus, the term is often used as a synonym of autostereogram. When the single 2D image is viewed with proper eye convergence, it causes the brain to fuse different patterns perceived by the two eyes into a virtual 3D image, hidden within the 2D image, without the aid of any optical equipment. SIS images are created using a repeating pattern. Programs for their creation include Mathematica.
Random dot autostereogram/hidden image stereogram
Also known as a single image random dot stereogram (SIRDS), this term refers to autostereograms where the hidden 3D image is created using a random pattern of dots within one image, shaped by a depth map within a dedicated stereogram rendering program.
Wallpaper autostereogram/object array stereogram/texture offset stereogram
Wallpaper autostereogram is a single 2D image where recognizable patterns are repeated at various intervals to raise or lower each pattern's perceived 3D location in relation to the display surface. Despite the repetition, these are a type of single image autostereogram.
Single image random text stereogram (SIRTS)
A single image random text ASCII stereogram is an alternative to SIRDS using random ASCII text instead of dots to produce a 3D form of ASCII art.
Map textured stereogram
In a map textured stereogram, "a fitted texture is mapped onto the depth image and repeated a number of times" resulting in a pattern where the resulting 3D image is often partially or fully visible before viewing.
See also
Diplopia
Lenticular printing
Notes
References
Bibliography
N. E. Thing Enterprises (1993). Magic Eye: A New Way of Looking at the World. Kansas City: Andrews and McMeel.
Tyler, C.W. and Clarke, M.B. (1990) "The Autostereogram". Stereoscopic Displays and Applications, Proc. SPIE Vol. 1258:182–196.
Marr, D. and Poggio, T. (1976). "Cooperative computation of stereo disparity". Science, 194:283–287; October 15.
Julesz, B. (1964). "Binocular depth perception without familiarity cues". Science, 145:356–363.
Julesz, B. (1963). "Stereopsis and binocular rivalry of contours". Journal of the Optical Society of America, 53:994–999.
Julesz, B. and J.E. Miller. (1962). "Automatic stereoscopic presentation of functions of two variables". Bell System Technical Journal, 41:663–676; March.
Scott B. Steinman, Barbara A. Steinman and Ralph Philip Garzia. (2000). Foundations of Binocular Vision: A Clinical perspective. McGraw-Hill Medical.
Ron Kimmel. (2002) 3D Shape Reconstruction from Autostereograms and Stereo. Journal of Visual Communication and Image Representation, 13:324–333.
External links
Scholarpedia article on autostereograms Peer-reviewed article on autostereograms by Christopher Tyler
Stereograma - A Free Open-Source Cross-Platform Stereogram Generator
Autostereograms - 3D Magic eye, SIRDS - Gallery Images
Online ASCII stereogram generator
3D imaging
Optical illusions
Stereoscopy |
23274033 | https://en.wikipedia.org/wiki/Information%20technology%20planning | Information technology planning | Information technology planning (ITP) is a discipline within the information technology and information systems. Making the planning process for information technology investments and decision-making a quicker, more flexible, and more thoroughly aligned process is the concern of ITP. According to Architecture & Governance Magazine, (Strategic) ITP became an overarching discipline within the Strategic Planning domain in which enterprise architecture is now one of several capabilities.
Arguments in favor
IT takes too long to adjust plans to meet business needs. By the time IT is prepared, opportunities have passed and the plans are obsolete. IT doesn't have the means to understand how it currently supports business strategy. The linkage between IT's capabilities — and their associated costs, benefits, and risks — and business needs is not mapped out. Additionally, information gathering and number crunching hold the process back.
IT makes plans that don’t reflect what IT will actually do or what the business actually needs. In the end, business doesn’t understand how IT contributes to the execution of strategy. IT doesn’t start planning with a clear picture of which demand is truly strategic or which actions will have the biggest impact. Information regarding business needs and the costs, benefits, and risks of IT capabilities comes from sources of varying quality. IT then makes planning decisions based on misleading information.
IT's plans often end up rigid and unverifiable. Plans don’t include contingencies that reduce the impact of change, nor have they been verified as the best plan of action via comparison to alternatives and scenarios. IT simply doesn’t have the time and information for it. Manually preparing multiple plans and selecting the best one would take too long for most organizations — especially considering the availability of the information needed for a comparison.
Strategies for providing an information technology planning capability
According to Forrester Research, there are several recognized strategies for providing an information technology planning capability.
A repository of application data. Planning tools provide a common inventory of application data including costs, life cycles, and owners, so that planners have easy access to the information that drives their decisions.
Capability maps. Forrester recommends using capability maps to link IT's capabilities to the critical business processes they support. These tools provide a graphical view that clearly outlines how the business capabilities that IT provides are linked to IT's efforts. This can also be known as an IT roadmap or technology roadmap.
Gap analysis tools. Alongside capability maps, planning tools capture information about the future state of business capabilities as dictated by business strategy. Users leverage this functionality to identify the areas where IT capabilities need to be built, enhanced, or scaled back — driving IT’s strategy.
Modeling and analytic capability. These tools enable planning teams to create a variety of plans, which can then be compared to one another to weigh the pros, cons, and risks of each. In addition, their impact on architecture and current initiatives becomes visible. This keeps plans relevant, provides teams with the foresight to plan holistically, and enables IT to communicate the plan clearly.
Reporting tools. Reports guide the planning team’s decisions — for example, which applications have redundant capabilities, have not been upgraded, or are plagued with costly issues. IT’s strategic decisions are therefore more easily justified.
Results
Companies like Barclays Bank, Accenture and Vodafone, as well as government agencies like the Department of Homeland Security and Los Alamos National Laboratory, have made investments in strategic IT planning capabilities, and returns on investment as great as 700% have been validated for these kinds of projects.
See also
Requirement prioritization
Strategic management
Strategic technology plan
Technology roadmap
References
External links
Forrester Research, Tools for IT Planning
The 2008 A&G Reader Survey: The Rise of Strategic IT Planning and Executive Involvement, Architecture & Governance Magazine
The Forrester Wave: Business Process Analysis, EA Tools, And IT Planning, Q1 2009
Strategic IT Planning Comes of Age, April 2009, Architecture & Governance Magazine
Information technology management |
29581865 | https://en.wikipedia.org/wiki/CP-823/U | CP-823/U | The CP-823/U, Univac 1830, was the first digital airborne 30-bit computing system. It was engineered, built and tested as the A-NEW MOD3 prototype computer for the Lockheed P-3 Orion.
In 1963, the US Navy Dept., Bureau of Weapons, Naval Air Development Center contracted Univac Defense Systems Division of Sperry-Rand to perform a study of the feasibility of a central digital avionics computer for the Navy’s Project A-NEW, the ASW (Anti-Submarine Warfare) development for the Lockheed P-3 Orion. The idea was to develop and build the first central digital computing system able to coordinate the many sensors, MPD (Multipurpose Display) and tactical air command functions.
The study, “Final Report on Avionics Unit Computer Study 10-21-63”, concluded that a miniature, modular, digital avionics computer could be engineered, built and tested using current developing technologies.
After a meeting in January 1964 with representatives from Univac and the Naval Air Development Center, contracts worth almost $2 million were awarded to Univac Defense Systems Division to engineer, build and test the first digital 30-bit Airborne computer, the CP-823/U (Univac 1830) engineering prototype, for the A-NEW MOD3 test aircraft.
This would be Univac's first computer to use flatpack monolithic integrated circuits, using a diode-transistor logic (DTL) silicon chip. This technology was simultaneously being developed for use in the Univac 1824 for the missile guidance program. It was also their first computer to lay the electronics flat on a printed circuit card, instead of on-end like the cordwood block electronics modules (Burndy packs).
The CP-823/U Computing System, Serial A1, (Univac 1830), A-NEW MOD3 was delivered to the Naval Air Development Center, Johnsville, Pa in 1965. It consisted of a Control Console (Maintenance Panel), combined Airborne Power Supply, Central Processor, 32,000 30-bit Memory unit, four Airborne I/O units, Ground I/O unit and cables.
The Univac 1830, Navy designated CP-823/U, was a digital electronic computing machine which received problems and data and produced answers in numerical form. It used parallel binary arithmetic and logic operations; word length was 30 bits. All of the Central Processor (C.P.) logic and I/O logic control was microelectronic circuitry, constructed of integrated, monolithic semiconductor elements (resistors, diodes and transistors contained within a single chip of silicon). The only logic cards that were not microelectronic were the Master Clock cards in the C.P. and the input amplifier and output data driver cards in the I/O units.
In the A-NEW integrated system, the CP-823/U airborne digital computer performed many functions aboard the Lockheed P-3 Orion test aircraft. It continuously computed the aircraft’s latitude and longitude, calculated optimum deployment of sonobuoys, kept tabs on their location with respect to the moving aircraft and determined estimated target positions from data supplied by all aircraft sensors. The computer used statistical techniques to derive several possible courses of action, displaying these and the computed probability of success, for final selection by the aircraft commander. Other tasks which were performed by the integrated Anti-Submarine Warfare Prototype CP-823/U computer included: Search and Correlation, Automatic and Extended Tracking, Enemy Submarine Identification, Attack and Post Flight Evaluation.
The extensive testing (1965–1968) of the CP-823/U (Univac 1830) prototype Computing System integrated with the rest of the A-NEW MOD3 sensor and display system in and out of the P3 Orion test aircraft, eventually resulted in the U.S. Navy, Bureau of Weapons approval. On June 24, 1966 UNIVAC received a contract for design, development, testing and delivery of the computer. Production deliveries began in 1967. The resulting general purpose computer was the CP-901 / ASQ-114 (Univac 1830A), used in the Lockheed P-3C Orion ASW aircraft.
References
Avionics computers |
67286145 | https://en.wikipedia.org/wiki/PC%20F%C3%BAtbol%205.0 | PC Fútbol 5.0 | PC Fútbol 5.0 is a PC sports manager video game and football-themed software developed by Dinamic Multimedia and released in Spain in January 1997. It was the fifth entry in the PC Fútbol series, covering the 1996–97 football season, and the first edition that was developed for Windows 95 (though an MS-DOS version was also released, the last one in the franchise) and to include Internet features. Apart from the game proper, the software included additional features such as an electronic database with information on all three tiers' players, a table calculator (Seguimiento manual), a football lottery assistant (Proquinielas) and an online football newsletter (Infofútbol). A printed 1995–96 annual was also included along with the CD.
Gameplay
The software was divided into eleven sections, listed as follows:
Instrucciones (Instructions)
Historia (History)
Base de datos (Database)
Seguimiento (Monitoring)
Seguimiento manual (Manual monitoring)
Actualización online (Online update)
Infofútbol
Partido amistoso (Friendly match)
Liga manager
Liga promanager
Proquinielas
Teams
The game featured all teams in La Liga, Segunda División and, for the first time, Segunda División B, for a total of 122 teams available in the Manager and Promanager league modes. Additionally, all teams that participated in the UEFA Champions League, the Cup Winners' Cup and the UEFA Cup (except for those eliminated in the latter two competitions' qualifying rounds), as well as a selection of South American teams, were available as playable teams in the friendly match mode.
Manager and Promanager modes
The Manager mode allowed the player to play with a single Primera, Segunda or Segunda B team, while Promanager was a password-protected career mode that had him choose initially from a number of lower-ranked Segunda División B teams, with increasingly higher-ranked teams made available as seasons passed if the established objectives were met. Both modes offer four different approaches to the gameplay, having the player act as either the club's coach (Entrenador), manager (Manager), chairman (President) or all three. The matches can be either played (Arcade), watched (Visionado) or have its result and statistics displayed in half-time and full-time score boards (Resultado). For the first time, a mobile-camera 3D engine was developed for the two former modes, even though the players still were 2D models. Either a keyboard or a joystick could be used, allowing for local multiplayer, and up to 20 teams could be used in each save slot.
Narration
Michael Robinson and Chus del Río were in charge of the narration.
Release
PC Fútbol 5.0 was scheduled to be released in October 1996, like previous PC Fútbol editions, but the ambitious scope of the software proved too taxing for the seven-member programming team and the production was eventually delayed to December. However, part of the software was turned into Gremlin Interactive's Premier Manager 97 and released in November in the United Kingdom. The full software still could not be finished in time for its intended Spanish release, and its production was eventually completed in December for a January 1997 release, just in time for the Reyes Magos celebration. The game was a major commercial success for Dinamic and sold 305,000 copies in Spain.
Trivia
Raúl Martínez, who played in Segunda B for Valencia's farm team, had a far higher in-game basic rating (72) than the other players in the category and could be sold for any release clause, regardless of how high, which made him very popular among the game's players. Martínez claims that when he signed for Yeclano the following season, its coach admitted that the player had caught his attention through his rating in the game.
References
1997 video games
Association football video games
Association football management video games
DOS games
Windows games
Video games developed in Spain
Video games set in 1996
Video games set in Europe
Europe-exclusive video games
Video games with expansion packs |
3666797 | https://en.wikipedia.org/wiki/Russia%E2%80%93European%20Union%20relations | Russia–European Union relations | Russian–European Union relations are the international relations between the European Union (EU) and Russia. The relations of individual member states of the European Union and Russia vary, though a 1990s common foreign policy outline towards Russia was the first such EU foreign policy agreed. Furthermore, four European Union-Russia Common Spaces are agreed as a framework for establishing better relations. The latest EU-Russia strategic partnership was signed in 2011, but it was later challenged by the European Parliament in 2015 following the annexation of Crimea and the war in Donbas.
Russia borders five EU member states: Finland, Estonia, Latvia, Lithuania and Poland.
Gas disputes
The Russia–Ukraine gas dispute of 2009 damaged Russia's reputation as a gas supplier. After a deal was struck between Ukraine and the EU on 23 March 2009 to upgrade Ukraine's gas pipelines, Russian Energy Minister Sergei Shmatko said the plan appeared to draw Ukraine legally closer to the European Union and might harm Moscow's interests. The Russian Foreign Ministry called the deal "an unfriendly act" on 26 March 2009. Professor Irina Busygina of the Moscow State Institute of International Relations has said that Russia has better relations with certain leaders of some EU countries than with the EU as a whole because the EU has no prospect of a common foreign policy.
In September 2012, the European Commission (EC) opened an antitrust investigation relating to Gazprom's contracts in central and eastern Europe. Russia responded by enacting, also in September 2012, legislation hindering foreign investigations. In 2013, the poorest members of the EU usually paid the highest prices for gas from Gazprom.
The commission's investigation was delayed due to Russia's military intervention in Ukraine. In April 2015, the EC accused Gazprom of unfair pricing and restricting competition. The European Commissioner for Competition, Margrethe Vestager, stated that "All companies that operate in the European market – no matter if they are European or not – have to play by our EU rules. I am concerned that Gazprom is breaking EU antitrust rules by abusing its dominant position on EU gas markets." Gazprom said it was "outside of the jurisdiction of the EU" and described itself as "a company which in accordance with the Russian legislation performs functions of public interest and has a status of strategic state-controlled entity." Lithuanian president Dalia Grybauskaitė said that the Kremlin was using Gazprom as "a tool of political and economic blackmail in Europe".
In October 2016, General Leonid Ivashov explained on Russian Channel One that Russia's engagement in the Syrian Civil War was critical to prevent construction of hydrocarbon pipelines from the Middle East to Europe, which would be catastrophic for Gazprom and, in turn, for the budget of the Russian Federation.
Russian opposition to the invasion of Iraq
Russia strongly opposed the U.S.-led 2003 invasion of Iraq. Some EU member states, including Poland and Britain, agreed to join the United States in the "coalition of the willing". The foreign ministers of Russia, France and Germany made a joint declaration that they would "not allow" passage of a UN Security Council resolution authorising war against Iraq.
Tensions over Association Agreements
The run-up to the 2013 Vilnius Summit between the EU and its eastern neighbours saw what The Economist called a "raw geopolitical contest" not seen in Europe since the end of the Cold War, as Russia attempted to persuade countries in its "near abroad" to join its new Eurasian Economic Union rather than sign Association Agreements with the EU. The Russian government under president Putin succeeded in convincing Armenia (in September) and Ukraine (in November) to halt talks with the EU and instead begin negotiations with Russia.
Nevertheless, the EU summit went ahead with Moldova and Georgia proceeding towards agreements with the EU despite Russia's opposition. Widespread protests in Ukraine resulted in then-President Viktor Yanukovych leaving Ukraine for Russia in February 2014. Russia subsequently began a military intervention in Ukraine. This action was condemned as an invasion by the European Union, which imposed visa bans and asset freezes against some Russian officials. The Council of the European Union stated that "Russia's violation of international law and the destabilisation of Ukraine [...] challenge the European security order at its core."
Russia views some of the countries that applied to join the EU or NATO after the fall of the Iron Curtain as part of its sphere of influence. It has criticised their admission and frequently said that NATO is "moving its infrastructure closer to the Russian border". The expansion of NATO into the Baltic states of Lithuania, Latvia, and Estonia, as well as the proposed accession of Georgia and Ukraine, are among Russia's main examples of what it describes as NATO's encroachment on its sphere of influence. NATO Deputy Secretary General Alexander Vershbow responded that NATO's major military infrastructure in Eastern Europe is no closer to the Russian border than it was at the end of the Cold War, and that Russia itself maintains a large military presence in neighbouring countries.
Sanctions over Ukraine
Since the 2014 annexation of Crimea and the military intervention in Donbas, the European Union has imposed sanctions on the Russian Federation, initially involving visa bans and a freeze of the assets of 170 individuals and 44 entities involved in these operations. The EU sanctions have been continuously extended and are in force as of 2020. Russia's embargo on food imports from the EU also remains in force.
In May 2020 European Commission President Ursula von der Leyen and European Council President Charles Michel responded to calls from Russia for relaxation of the sanctions to facilitate the COVID-19 response explaining that the EU sanctions are "deliberately narrowly framed in order to limit the risks of unintended consequences or harm to the wider population" and none of them prevents "export of food or medicines, vaccines or medical equipment". As the original reasons for sanctioning were not removed by Russia, the sanctions were extended for another year.
Russian political influence and financial links
Moscow increased its efforts to expand its political influence using a wide range of methods, including funding of political movements in Europe, increased spending on propaganda in European languages, operating a range of media broadcasting in EU languages and web brigades, with some observers suspecting the Kremlin of trying to weaken the EU and its response to the Ukrainian crisis.
Russia has formed close ties with Eurosceptic and populist parties belonging to both ends of the political spectrum. By the end of 2014, a number of European far-right and far-left parties were receiving different forms of financial or organisational support from Russia in an attempt to build a common anti-European and pro-Russian front in the European Union. Among the far-right parties involved were the Freedom Party of Austria (FPÖ), Alternative for Germany (AfD), National Democratic Party of Germany (NPD), France's National Front, Italy's Lega Nord, Hungary's Jobbik, Bulgaria's Attack (Ataka), and Latvian Russian Union. Among far-left parties, representatives of Die Linke, Communist Party of Greece, Syriza and others attended numerous events organized by Russia such as "conservative conferences" and the Crimean referendum. In the European Parliament, the European United Left–Nordic Green Left has been described as a "reliable partner" of Russian politics, voting against resolutions condemning events such as Russia's military intervention in Ukraine, and supporting Russian policies, e.g. in Syria.
Konstantin Rykov and Timur Prokopenko, both closely tied to United Russia and Russian Federation's Presidential Administration, were the key figures in funneling money to these parties. Agence France-Presse stated that "From the far right to the radical left, populist parties across Europe are being courted by Russia's Vladimir Putin who aims to turn them into allies in his anti-EU campaign" and that "A majority of European populist parties have sided with Russia over Ukraine." During the Russian military intervention in Ukraine, British politicians Nigel Farage of the far-right and Jeremy Corbyn of the far-left both defended Russia, saying the West had "provoked" it.
Luke Harding wrote in The Guardian that the Front National's MEPs were a "pro-Russian bloc." In 2014, the Nouvel Observateur said that the Russian government considered the Front National "capable of seizing power in France and changing the course of European history in Moscow's favour." According to the French media, party leaders had frequent contact with Russian ambassador Alexander Orlov and Marine Le Pen made multiple trips to Moscow. In November 2014, Marine Le Pen confirmed a €9 million loan from a Russian bank to the Front National. The Independent said the loans "take Moscow's attempt to influence the internal politics of the EU to a new level." Reinhard Bütikofer stated, "It's remarkable that a political party from the motherland of freedom can be funded by Putin's sphere – the largest European enemy of freedom." Boris Kagarlitsky said, "If any foreign bank gave loans to a Russian political party, it would have been illegal, or at least it would have been an issue which could lead to a lot of scandal" and the party would be required to register as a "foreign agent." Le Pen denied a Mediapart report that a senior Front National member said it was the first installment of a €40 million loan. In April 2015, a Russian hacker group published texts and emails between Timur Prokopenko, a member of Putin's administration, and Konstantin Rykov, a former Duma deputy with ties to France, discussing Russian financial support to the Front National in exchange for its support of Russia's annexation of Crimea.
In June 2015, Marine Le Pen launched a new political group within the EU Parliament, Europe of Nations and Freedom (ENF), composed of members of the Front National, Party for Freedom, Lega Nord, the Freedom Party of Austria (FPÖ), Flemish Interest (VB) and the Congress of the New Right (KNP). Reviewing votes in the EU Parliament on resolutions critical of Russia or measures not in the Kremlin's interests (e.g., the EU-Ukraine Association Agreement), Hungary's Political Capital Institute found that the future ENF members voted "no" in 93% of cases, European United Left–Nordic Green Left in 78% of cases, and Europe of Freedom and Direct Democracy in 67% of cases. The writers stated that "It would therefore be logical to conclude, as others have done before, that there is a pro-Putin coalition in the European Parliament consisting of anti-EU and radical parties."
The Financial Times and Radio Free Europe reported on Syriza's ties with Russia and extensive correspondence with Aleksandr Dugin, who called for a "genocide" of Ukrainians. The EUobserver reported that Tsipras had a "pro-Russia track record" and that Syriza's MEPs had voted against the Ukraine–European Union Association Agreement, criticism of the Russian annexation of Crimea, and criticism of the pressure on civil rights group Memorial. The Moscow Times stated that "The terms used in Russia's anti-Europe rhetoric also seem to have infiltrated Tsipras' vocabulary." Russia also developed ties with Hungarian Prime Minister Viktor Orbán (Fidesz), who praised Vladimir Putin's "illiberal democracy" and was described by Germany's former foreign minister Joschka Fischer as a "Putinist". Hungary allowed a Russian billionaire to renovate a memorial in Budapest, which some Hungarians called illegal, to Soviet soldiers who died fighting against the Hungarian Revolution of 1956, and Putin visited it in February 2015. Orban's government dropped plans to put the expansion of the Paks Nuclear Power Plant out to tender and awarded the contract to Rosatom after Russia offered a generous loan. Zoltán Illés said that Russia was "buying influence".
Two new organisations – European Centre for Geopolitical Analysis and "Agency for Security and Cooperation in Europe" (ASCE) – recruiting mostly European far-right politicians, were also heavily involved in positive public relations during the 2014 Russian military intervention in Ukraine, observing Donbas general elections and presenting a pro-Russian point of view on various events there. In 2014, a number of officials in Europe and NATO provided circumstantial evidence that protests against hydraulic fracturing may be sponsored by Gazprom. Russian officials have on numerous occasions warned Europe that fracking "poses a huge environmental problem" in spite of Gazprom itself being involved in shale gas surveys in Romania (and not facing any protests) and reacted aggressively to any criticism by environmental organisations.
A significant part of the funding of anti-EU and extremist parties passes through the St Basil the Great fund operated by Konstantin Malofeev.
In February 2015, a group of Spanish nationals was arrested in Madrid for joining a Russian-backed armed group in the war in Donbas. Travelling through Moscow, they were met by a "government official" and sent to Donetsk, where they saw French and other foreign fighters, "half of them communists, half Nazis".
In March 2015, the Russian nationalist party Rodina organized the International Russian Conservative Forum in Saint Petersburg, inviting a majority of its far-right and far-left (including openly neo-Nazi) supporters from abroad, many of whom had visited a similar event in Crimea in 2014: Udo Voigt, Jim Dowson, Nick Griffin, Jared Taylor, Roberto Fiore, Georgios Epitidios (Golden Dawn) and others.
Since 2012, a fund created by the Foreign Affairs Ministry of Russia (Fund for the Legal Protection and Support of Russian Federation Compatriots Living Abroad) has transferred €224,000 to the "Latvian Human Rights Committee", which was founded by pro-Russian politician Tatjana Ždanoka. Latvijas Televīzija reported that only projects which supported Russia's foreign policy objectives were eligible for funding.
In June 2015, the European Parliament stated that Russia was "supporting and financing radical and extremist parties in the EU" and called for monitoring of such activities. France's National Front, UKIP, and Jobbik voted against the resolution. These and other extreme right organisations are part of Russia-sponsored World National-Conservative Movement. In July 2016, Estonian foreign affairs minister Marina Kaljurand said, "The parade that we have seen of former and current European leaders to Moscow calling for rapprochement — and tacitly agreeing to the dismantling of Europe — has been disheartening for those of us who understand that a unified Europe with a strong American partnership is the only reason we have a choice at all about where our futures should be."
In June 2016, Czech foreign minister Lubomír Zaorálek stated that Russia was supporting right-wing populists to "divide and conquer" the EU. In October 2016, the EU held talks on Russian funding of far-right and populist parties.
In 2018 the Czech counter-intelligence service BIS published a report documenting a significant increase in activity by Russia- and China-backed actors to influence regulators and political bodies. In 2020 a detailed analysis was published of Russian intelligence actions and active measures between 2002 and 2011 aimed at preventing a ballistic missile defense component from being deployed in the country, involving "manipulation of media events, outputs, and reports and abusing cultural and social events". This also included attempts to recruit the Russian-speaking population in the country, but the majority was not interested in supporting the policy of Vladimir Putin.
Allegations of Russian intimidation and destabilisation of EU states
In July 2009, central and eastern European leaders – including former presidents Václav Havel, Valdas Adamkus, Aleksander Kwaśniewski, Vaira Vīķe-Freiberga, Lech Wałęsa – signed an open letter stating:
Latvian journalist Olga Dragilyeva stated that "Russian-language media controlled by the Russian government and NGOs connected with Russia have been cultivating dissatisfaction among the Russian-speaking part of the population" in Latvia. National security agencies in Lithuania, Estonia and Latvia have linked Moscow to local pro-Russian groups. In June 2015, a Chatham House report stated that Russia used "a wide range of hostile measures against its neighbours", including energy cut-offs, trade embargoes, subversive use of Russian minorities, malicious cyber activity, and co-option of business and political elites.
In 2015, the U.K. media said that the Russian leadership under Putin saw the fracturing of political unity within the EU, and especially of the political unity between the EU and the U.S., as among its main strategic goals, one of the means of achieving this being support for Europe's far-right and hard Eurosceptic political parties. In October 2015, Putin said that Washington treated European countries "like vassals who are being punished, rather than allies."
In November 2015, the president of Bulgaria, Rosen Plevneliev, said that Russia had launched a massive hybrid warfare campaign "aimed at destabilising the whole of Europe", giving repeated violations of Bulgarian airspace and cyber-attacks as examples.
In January 2016, senior UK government officials were reported to have registered their growing fears that "a new cold war" was now unfolding in Europe, with "Russian meddling" allegedly taking on a breadth, range and depth greater than previously thought: "It really is a new Cold War out there. Right across the EU we are seeing alarming evidence of Russian efforts to unpick the fabric of European unity on a whole range of vital strategic issues." The situation prompted the US Congress to instruct James R. Clapper, the U.S. Director of National Intelligence, to conduct a major review of Russian clandestine funding of European parties over the previous decade.
On numerous occasions Russia was also accused of actively supporting the United Kingdom's withdrawal from the European Union through channels such as Russia Today and the Russian embassy in London. An analysis of the Russian government's English-language news service, Sputnik, found "a systematic bias in favour of the "Out" campaign which was too consistent to be the result of accident or error."
In February 2016, a film circulating in Hungary, in which recruited students expressed anger at the policy of the USA, was identified as a version of a Russian movie with the same script funded by a pro-Putin organisation, Officers’ Daughters. Published in March 2016, Swedish security service Säpo's annual report stated that Russia was engaged in "psychological warfare" using "extreme movements, information operations and misinformation campaigns" aimed at policy makers and the general public.
In June 2016, Russian Foreign Minister Sergey Lavrov stated that Russia will never attack any NATO country, saying: "I am convinced that all serious and honest politicians know perfectly well than Russia will never attack a member state of NATO. We have no such plans." He also said: "In our security doctrine it is clearly stated that one of the main threats to our safety is the further expansion of NATO to the east."
In late 2016 media in a number of states, including Finland, Estonia and Montenegro, accused Russia of preparing the ground for a possible future armed takeover of their territories. In Montenegro an armed coup was actually in progress but was prevented by security services on the day of the election, 16 October, with over 20 people arrested. A group of 20 citizens of Serbia and Montenegro "planned to break into the Montenegro Parliament on election day, kill Prime Minister Milo Djukanovic and bring a pro-Russian coalition to power" according to Montenegro's chief prosecutor Milivoje Katnić, who added that the group was led by two Russian citizens who fled the country before the arrests, and by an "unspecified number of Russian operatives" in Serbia who were deported shortly after. A few days after the failed coup, Leonid Reshetnikov was dismissed by Putin from his duties as head of the Russian Institute for Strategic Studies, which also had a branch in Belgrade where it supported anti-NATO and pro-Russian parties. In 2019 a number of Montenegrin politicians and pro-Russian activists were convicted for the attempted coup, as were two Russian GRU officers, Eduard Shishmakov and Vladimir Popov (both convicted in absentia).
In 2017 a cache of emails was leaked demonstrating funding of far-right and far-left movements in Europe through a Belarusian citizen, Alyaksandr Usovsky, who funnelled hundreds of thousands of euros from the Russian nationalist and oligarch Konstantin Malofeyev and reported to Russian State Duma Deputy Konstantin Zatulin. Usovsky confirmed the authenticity of the emails.
In 2017 three Alternative for Germany parliamentary deputies confirmed that together they had received a sponsored private jet visit to Moscow worth $29,000, which caused significant controversy in Germany.
In 2019 a transcript was published from a meeting in Moscow at which representatives of the Italian nationalist Lega party were offered "tens of millions of dollars" of funding. The delegation to Moscow included Italy's deputy prime minister Matteo Salvini. In 2020 Dutch media published chat transcripts of the far-right politician Thierry Baudet indicating that his anti-Ukraine actions were inspired, and possibly financially supported, by Vladimir Kornilov, a Russian described by Baudet as someone "who works for president Putin".
In 2020 a Spanish court examined transcripts of calls between the Catalan independence activist Victor Terradellas and a group of Russians who came forward with an offer of up to 10,000 military personnel, repayment of Catalan debt and recognition of Catalan independence by the Russian Federation in exchange for Catalan recognition of Crimea. Frequent arrivals in Spain of the known GRU operative Denis Sergeev, coinciding with major Catalan independence events, raised questions about the involvement of GRU Unit 29155 in escalating the protests.
On 28 April 2021 the European Parliament passed a resolution that condemned Russia's "hostile behaviour towards and outright attacks on EU Member States" explicitly mentioning suspected GRU operation in Czechia in 2014, the poisoning and imprisonment of Alexei Navalny and escalation of the war in Donbass. The resolution called, among other things, for discontinuation of the Nord Stream 2 project.
Intelligence activities
A Russian spy, Sergey Cherepanov, operated in Spain from the 1990s to June 2010 under a false identity, "Henry Frith".
In its 2013 report, the Security Information Service noted the presence of an "extremely high" number of Russian intelligence officers in the Czech Republic. The Swedish Security Service's 2014 annual report named Russia as the biggest intelligence threat, describing its espionage against Sweden as "extensive".
According to a May 2016 report for the European Council on Foreign Relations, Russia was engaged in "massive and voracious intelligence-gathering campaigns, fueled by still-substantial budgets and a Kremlin culture that sees deceit and secret agendas even where none exist."
One of the main figures perceived as the contact in Russia for the European far right and far left is Sergey Naryshkin, who in 2016 was appointed chief of Russia's Foreign Intelligence Service (SVR).
In 2018 the head of British MI6 warned that "perpetual confrontation" with the West is a core feature of Russian foreign policy.
Since 2009, in Estonia alone, 20 people have been tried and convicted as operatives or agents of Russian intelligence services, the largest number of any NATO country. Of these, 11 convicts worked for the FSB, two for the SVR, five for the GRU, and one for an undisclosed service. Seven were low-grade intelligence sources or couriers, primarily involved in smuggling various goods (e.g. cigarettes) to and from Russia and thus easily recruited. More importantly, five of the convicted were officials of Estonian law enforcement or the army.
In 2020 German prosecutors issued an arrest warrant for Dmitry Badin, a GRU operative, for his involvement in the 2015 hacking of the Bundestag.
In March 2021 Bulgarian security services arrested six people, including officials in Bulgaria's defence ministry, suspected of collecting intelligence for Russia. One of those arrested, who holds dual Russian-Bulgarian nationality, acted as the contact person between the suspects and the Russian embassy. Earlier, in 2020, five Russian diplomats and a technical assistant had been expelled from Bulgaria for involvement in illegal intelligence operations.
Cyber attacks
In 2007, following the Estonian government's decision to remove a statue of a Soviet soldier, the Baltic country's major commercial banks, government agencies, media outlets, and ATMs were targeted by a coordinated cyber attack which was later traced to Russia.
In April 2015, the French television channel TV5 Monde was targeted by a cyber attack which claimed to represent ISIL but French sources said their investigation was leading to Russia. In May 2015, the Bundestag's computer system was shut down for days due to a cyberattack carried out by a hacker group that was likely "being steered by the Russian state", according to the Federal Office for the Protection of the Constitution in Germany. The agency's head, Hans-Georg Maaßen, said that, in addition to spying, "lately Russian intelligence agencies have also shown a willingness to conduct sabotage."
British Prime Minister Theresa May accused Russia of "threatening the international order", "seeking to weaponise information" and "deploying its state-run media organisations to plant fake stories". She mentioned Russia's meddling in German federal election in 2017, after German government officials and security experts said there was no Russian interference.
Concerns about foreign influence in the 2018 Swedish general election have been raised by the Swedish Security Service and others, leading to various countermeasures. According to the Oxford Internet Institute, eight of the top 10 "junk news" sources during the election campaign were Swedish, and "Russian sources comprised less than 1% of the total number of URLs shared in the data sample."
Military doctrines
In 2009, Wprost reported that Russian military exercises had included a simulated nuclear attack on Poland. In June 2012, Russian general Nikolay Makarov said that "cooperation between Finland and NATO threatens Russia's security. Finland should not desire NATO membership, rather it should preferably have closer military cooperation with Russia." In response, Finnish Prime Minister Jyrki Katainen said that "Finland will make its own decisions and [do] what is best for Finland. Such decisions will not be left to Russian generals." In April 2013, Svenska Dagbladet reported that Russia had simulated a bombing run in March on the Stockholm region and southern Sweden, using two Tu-22M3 Backfire heavy bombers and four Su-27 Flanker fighter jets. A nuclear attack against Sweden was part of the training exercises.
In May 2014, Russia's Deputy Prime Minister Dmitri Rogozin joked that he would return in a TU-160 after his plane was barred from Romania's airspace. Requesting an explanation, Romania's foreign ministry stated that "the threat of using a Russian strategic bomber plane by a Russian deputy prime minister is a very grave statement under the current regional context." Rogozin has also stated that Russia's defence sector has "many other ways of travelling the world besides tourist visas" and "tanks don't need visas".
In October 2014, Denmark's Defence Intelligence Service stated that in June of the same year Russian military jets "equipped with live missiles" had simulated an attack on the island of Bornholm as 90,000 people visited for the annual Folkemødet meeting.
In November 2014, the European Leadership Network reviewed 40 incidents involving Russia in a report titled Dangerous Brinkmanship, finding that they "add up to a highly disturbing picture of violations of national airspace, emergency scrambles, narrowly avoided midair collisions, close encounters at sea, simulated attack runs, and other dangerous actions happening on a regular basis over a very wide geographical area." In March 2015, Russia's ambassador to Denmark, Mikhail Vanin, stated that Danish warships "will be targets for Russian missiles" if the country joined NATO's missile defense system. Danish foreign minister Martin Lidegaard said the statements were "unacceptable" and "crossed the line". A few days later, Russian Foreign Ministry spokesman Aleksandr Lukashevich said that Russia could "neutralize" a missile defense system in Denmark. In April 2015, Sweden, Norway, Denmark, Finland, and Iceland decided to increase their military cooperation, telling Aftenposten: "The Russian military are acting in a challenging way along our borders, and there have been several infringements of the borders of the Baltic nations. Russia's propaganda and political manoeuvring are contributing to sowing discord between the nations, as well as inside organisations like NATO and the EU". In June 2015, Russia's ambassador to Sweden, Viktor Tatarintsev, told Dagens Nyheter that if Sweden joins NATO "there will be counter measures. Putin pointed out that there will be consequences, that Russia will have to resort to a response of the military kind and re-orientate our troops and missiles."
In April 2015, the Russian navy disrupted NordBalt cable-laying in Lithuania's exclusive economic zone. From April 2013 to November 2015, Russia held seven large-scale military exercises (65,000 to 160,000 personnel) whereas NATO exercises were generally much smaller in size, with the largest composed of 36,000 personnel. Estonia criticised Russia's military exercises, saying that they "dwarfed" NATO's and were offensive rather than defensive, "simulating the invasion of its neighbors, the destruction and seizure of critical military and economic infrastructure, and targeted nuclear strikes on NATO allies and partners."
In 2016, Sweden revised its military strategy doctrine. Parliamentary Defense Committee chairman Allan Widman stated, "The old military doctrine was shaped after the last Cold War when Sweden believed that Russia was on the road to becoming a real democracy that would no longer pose a threat to this country and its neighbors." In April 2016, Russian Foreign Minister Sergey Lavrov stated that Russia would "have to take the necessary military-technical action" if Sweden joined NATO; Swedish Prime Minister Stefan Löfven responded, "We demand respect [...] in the same way that we respect other countries' decisions about their security and defence policies."
Russian military activities in Ukraine and Georgia caused particular alarm in countries which are geographically close to Russia and those which experienced decades of Soviet military occupation. Poland's foreign minister Witold Waszczykowski stated, "We have to reject any type of wishful thinking with regard to pragmatic cooperation with Russia as long as it keeps on invading its neighbours." Following the annexation of Crimea, Lithuania reinstated conscription, increased its defense spending, called on NATO to deploy more troops to the Baltics, and published three guides on surviving emergencies and war. Lithuanian president Dalia Grybauskaitė stated, "I think that Russia is terrorising its neighbours and using terrorist methods". Estonia increased training of Estonian Defence League members and encouraged more citizens to own guns. Brigadier General Meelis Kiili stated, "The best deterrent is not only armed soldiers, but armed citizens, too." In March 2017, Sweden decided to reintroduce conscription due to Russia's military drills in the Baltics and aggression in Ukraine.
In his speech at the RUSI Land Warfare Conference in June 2018, the Chief of the General Staff Mark Carleton-Smith said that British troops should be prepared to "fight and win" against the "imminent" threat of hostile Russia. Carleton-Smith said: "The misplaced perception that there is no imminent or existential threat to the UK – and that even if there was it could only arise at long notice – is wrong, along with a flawed belief that conventional hardware and mass are irrelevant in countering Russian subversion...". In a November 2018 interview with the Daily Telegraph, Carleton-Smith said that "Russia today indisputably represents a far greater threat to our national security than Islamic extremist threats such as al-Qaeda and ISIL. ... We cannot be complacent about the threat Russia poses or leave it uncontested."
In 2020 German media reported that members of the German far-right extremist National Democratic Party (NPD) and The Third Way party had attended military training in the Russian Federation.
Assassinations and abductions
Alexander Litvinenko, who had defected from the FSB and become a British citizen, died from radioactive polonium-210 poisoning carried out in England in November 2006. Relations between the U.K. and Russia cooled after a British murder investigation indicated that Russia's Federal Protective Service was behind his poisoning. Investigation into the poisoning revealed traces of radioactive polonium left by the assassins in multiple places as they travelled across Europe, including Hamburg in Germany.
In September 2014, the FSB crossed into Estonia and abducted Eston Kohver, an officer of the Estonian Internal Security Service. Brian Whitmore of Radio Free Europe stated that the case "illustrates the Kremlin's campaign to intimidate its neighbors, flout global rules and norms, and test NATO's defenses and responses."
Between 2015 and 2017, officers Denis Sergeev, Alexey Kalinin and Mikhail Opryshko, all from GRU Unit 29155, travelled frequently to Spain, allegedly in relation to the upcoming 2017 Catalan independence referendum. The same group was also linked to a failed attempt to assassinate arms dealer Emilian Gebrev in Bulgaria in 2015 and to interference with the Brexit referendum in 2016.
On 4 March 2018, Sergei Skripal, a former Russian military intelligence officer who acted as a double agent for the UK's intelligence services in the 1990s and early 2000s, and his daughter Yulia were poisoned with a Novichok nerve agent in Salisbury, England. The UK Prime Minister Theresa May requested a Russian explanation by the end of 13 March 2018. She said that the UK Government would "consider in detail the response from the Russian State" and in the event that there was no credible response, the government would "conclude that this action amounts to an unlawful use of force by the Russian State against the United Kingdom" and measures would follow.
In 2019 a Russian operative was arrested in Germany after he assassinated a Chechen refugee, Zelimkhan Khangoshvili. In response Germany expelled two Russian diplomats.
In April 2021 Czechia expelled 18 Russian intelligence operatives working under diplomatic cover after a police investigation linked two GRU officers, Alexander Mishkin and Anatoly Chepiga, to the 2014 Vrbětice ammunition warehouse explosions.
Use of migration issues
In January 2016, several Finnish authorities suspected that Russians were enabling migrants to enter Finland, and Yle, the national public-broadcasting company, reported that a Russian border guard had admitted the Federal Security Service's involvement. In March, NATO General Philip Breedlove stated, "Together, Russia and the Assad regime are deliberately weaponizing migration in an attempt to overwhelm European structures and break European resolve". A Russian state-run channel, supported by Sergey Lavrov, broadcast a false story that a 13-year-old German-Russian girl who had briefly disappeared had been raped by migrants in Berlin and that German officials were covering it up. Germany's foreign minister suggested that Russia was using the case "for political propaganda, and to inflame and influence what is already a difficult debate about migration within Germany."
In Bulgaria a number of Russian citizens (most notably Igor Zorin and Yevgeniy Shchegolikhin) have cooperated with far-right and anti-immigrant movements, for example by organising paramilitary training for "voluntary border patrols".
Propaganda
Russian government-funded media and political organisations have primarily targeted far-right circles in Europe, attempting to create an image of Russia as the last defender of traditional, conservative and Christian values.
Russian and pro-Russian media and organisations have produced fake stories and distorted real events. One of the most widely distributed fake stories was that of 13-year old Lisa F. In March 2017 a Russian TV team reportedly paid Swedish teenagers to stage a scene of anti-government protests in Rinkeby. The scale of this campaign resulted in a number of EU countries taking individual actions. The Czech Republic noted that Russia had set up about 40 Czech-language websites publishing conspiracy theories and false reports. According to the state secretary for European affairs, "The key goal of Russian propaganda in the Czech Republic is to sow doubts into the minds of the people that democracy is the best system to organise a country, to build negative images of the European Union and Nato, and [to] discourage people from participation in the democratic processes." An analyst for the Lithuanian military stated, "We have a pretty huge and long lasting disinformation campaign against our society". Lithuania has given three-month bans to Russian channels; Foreign Minister Linas Linkevičius stated, "A lie is not an alternative point of view". The head of Finland's governmental communication department, Markku Mantila, said that Russian propaganda sought to create suspicions against Finland's leaders, the European Union, and NATO. He stated, "There is a systematic lying campaign going on... It is not a question of bad journalism, I believe it is controlled from the center."
The European Union has taken a number of steps at various levels to counter hostile propaganda and disinformation. The EU Action Plan Against Disinformation of 2018 explicitly mentions Russia as the main source of threats, and the East StratCom Task Force is an EU body that has worked since 2015 on recording, fact-checking and debunking hostile disinformation. The Council of the EU also runs a dedicated disinformation working group (ERCHT) for analysis and planning of responses to disinformation. A number of Eastern and Central European countries run their own open-source intelligence institutions whose objective is to analyse events and influence emanating from Russia. Among these are the Centre for Polish-Russian Dialogue and Understanding (CPRDIP), the Estonian Center of Eastern Partnership, and the Polish Centre for Eastern Studies (OSW).
In November 2016, the EU Parliament passed an anti-propaganda resolution. EU Disinformation Review is a news feed analysing and debunking most notable fake stories distributed in Russian media. In 2018 the European Commission initiated a new Action Plan to counter "disinformation that fuels hatred, division, and mistrust in democracy" as well as interference with elections, "with evidence pointing to Russia as a primary source of these campaigns".
In June 2021 a Russian advertising firm, Fazze, attempted to recruit numerous YouTube and Instagram influencers for paid posts spreading false claims about several COVID-19 vaccines manufactured by European companies.
In 2022 the draft "Report on foreign interference in all democratic processes in the European Union, including disinformation" of the European Parliament's Special Committee on Foreign Interference in all Democratic Processes in the European Union, including Disinformation (INGE), condemned the activities of RT (Russia Today), Sputnik and numerous other Russian agencies.
Russian medical aid to Italy
On 22 March 2020, after a phone call with Italian Prime Minister Giuseppe Conte, Russian president Vladimir Putin arranged for the Russian army to send military medics, special disinfection vehicles, and other medical equipment to Italy, which was the European country hardest hit by the 2019–20 coronavirus pandemic. The President of Lombardy, Attilio Fontana, and Italian Foreign Minister Luigi Di Maio expressed their gratitude to Russia. According to some analysts, Russia's medical aid was an attempt to shape positive perceptions of the country at a time of global uncertainty.
Russian minorities in the EU
The OSCE mission monitoring the 2006 parliamentary elections in Latvia mentioned that
As reported in the European Commissioner for Human Rights' 2007 report on Latvia, in 2006 there were 411,054 non-citizens in the country, 66.5% of them belonging to the Russian minority.
In 2017, there were 0.9 million ethnic Russians in the Baltic States, having declined from 1.7 million in 1989, the year of the last census during the Soviet era.
Beginning in 2019, instruction in the Russian language was to be gradually discontinued in private colleges and universities in Latvia, as well as in general instruction at Latvian public high schools, except for subjects related to the culture and history of the Russian minority, such as Russian language and literature classes.
Trade
The EU is Russia's largest trading partner, accounting for 52.3% of all Russian foreign trade in 2008; 75% of foreign direct investment (FDI) stock in Russia also comes from the EU. The EU exported €105 billion of goods to Russia in 2008, and Russia exported €173.2 billion of goods to the EU. Energy and fuel supplies account for 68.2% of Russian exports to the EU.
Russia and the EU are both members of the World Trade Organisation (WTO). The EU and Russia are currently implementing the common spaces (see below) and negotiating a replacement for the current Partnership and Cooperation Agreement in order to strengthen bilateral trade.
The joint "Partnership for modernization"
On 18 November 2009, at the Russia–EU summit in Stockholm, the "Partnership for Modernization" (PM) initiative was put forward as one of the main vectors for deepening the EU–Russia strategic relationship.
The goal of the Partnership is to help address the problems of modernizing Russia's economy and to adapt the whole complex of Russia–EU relations accordingly, drawing on the experience of the existing "sectoral" dialogue mechanisms between Russia and the EU.
At the summit in Rostov-on-Don (June 2010), the leaders of Russia and the EU signed a joint statement on the Partnership for Modernization. The document set out the priorities and the scope for intensified cooperation on modernization between Russia and the EU.
According to the joint statement, the priority areas of the Partnership for Modernization include: expanding opportunities for investment in key sectors driving growth and innovation; enhancing and deepening bilateral trade and economic cooperation, including the creation of favourable conditions for small and medium-sized enterprises; promoting the alignment of technical regulations and standards as well as a high level of intellectual property protection; transport; promoting the development of a sustainable low-carbon economy and energy efficiency, and supporting international negotiations on combating climate change; enhancing cooperation in innovation, research and development, and space; ensuring balanced development by addressing the regional and social consequences of economic restructuring; ensuring the effective functioning of the judiciary and strengthening the fight against corruption; and promoting people-to-people relations and strengthening dialogue with civil society to encourage the participation of individuals and business. The list of areas of cooperation is not exhaustive, and further areas can be added as necessary. The EU and Russia will encourage the implementation of specific projects within the framework of the Partnership for Modernization.
To coordinate this work, Russia and the EU appointed national coordinators: Deputy Minister of Economic Development A. A. Slepnev on the Russian side and, on the EU side, the Deputy Director-General for External Relations of the European Commission, H. Mingarelli (succeeded in 2011 by Gunnar Wiegand, Director for Russia at the European External Action Service).
An analysis of the existing formats of cooperation with European partners concluded that the Partnership should build on the achievements made within the four Russia–EU Common Spaces rather than replace the existing "road maps", and should not become a reason for creating new structural add-ons. The Russia–EU sectoral dialogues were recognised as the main mechanisms of the initiative.
The national coordinators, in cooperation with the co-chairs of the Russia–EU sectoral dialogues, developed an implementation plan for the Partnership containing specific joint projects in the priority areas of cooperation.
On 11 May 2011, the Russian Ministry of Economic Development held an enlarged meeting of representatives of the EU–Russia sectoral dialogues involved in implementing the Partnership for Modernization, chaired by the initiative's national coordinators.
During the meeting the parties discussed progress on the Partnership's work plan and identified priorities for the second half of 2011, measures to support projects (including attracting resources from international financial institutions), and the participation of business in implementing the Partnership's tasks.
To create financial mechanisms for cooperation within the framework of the Partnership, Vnesheconombank signed memoranda of understanding with the European Bank for Reconstruction and Development (EBRD) and the European Investment Bank (EIB). The documents envisage allocating up to a combined $2 billion to finance Partnership projects, provided the projects meet the financial institutions' criteria and are approved by the parties' authorised management bodies.
The priority areas selected for financing include energy efficiency, transport, innovation initiatives related to small and medium-sized enterprises (including business incubators, technology parks, business technology centres, infrastructure and financial services for SMEs), and the commercialisation of innovations in several sectors, including those above, pharmaceuticals, and environmental protection.
On the sidelines of the Russia–EU summit in Nizhny Novgorod on 9–10 June 2011, the coordinators signed a joint report on the Partnership summarising the work accomplished and giving examples of practical activities and projects implemented to date under the work plan.
Within the framework of the work plan, a provision establishing a new Dialogue on Trade and Investment between the Russian Ministry of Economic Development and the European Commission's Directorate-General for Trade was signed at the same summit. The Dialogue is co-chaired on the Russian side by Deputy Minister of Economic Development A. A. Slepnev and on the EU side by P. Balazs, Deputy Director-General of the Directorate-General for Trade. It covers EU–Russia trade and investment relations, including the WTO obligations of the European Union and Russia and the current trade and economic agreements between them.
Other issues
Kaliningrad
The Russian exclave of Kaliningrad Oblast has, since 2004, been surrounded on land by EU members. As a result, the Oblast has been isolated from the rest of the federation by the stricter border controls that had to be introduced when Poland and Lithuania joined the EU, and which were further tightened before they joined the Schengen Area. The new difficulties Russians in Kaliningrad face in reaching the rest of Russia are a minor source of tension.
In July 2011 the European Commission put forward proposals to classify the whole of Kaliningrad as a border area. This would allow Poland and Lithuania to issue special permits for Kaliningrad residents to pass through those two countries without requiring a Schengen visa. From 2012 to 2016, visa-free travel was allowed between the Kaliningrad region and northern Poland.
Energy
Russia has a significant role in the European energy sector as the largest exporter of oil and natural gas to the EU. In 2007, the EU imported from Russia 185 million tonnes of crude oil, which accounted for 32.6% of total oil imports, and 100.7 million tonnes of oil equivalent of natural gas, which accounted for 38.7% of total gas imports. A number of disputes in which Russia used pipeline shutdowns as what was described as a "tool for intimidation and blackmail" caused the European Union to significantly increase its efforts to diversify its energy sources.
During an anti-trust investigation initiated against Gazprom in 2011, a number of internal company documents were seized that documented "abusive practices" in an attempt to "segment the internal [EU] market along national borders" and impose "unfair pricing". In August 2021 Russia unexpectedly reduced the volume of gas sent to the European Union, causing sudden gas price increases on the European market, reportedly "to support its case in starting flows via Nord Stream 2". The next month, it announced that a "rapid" start-up of the newly completed Nord Stream 2 pipeline, which had long been contested by various EU countries, would resolve the problems.
Siberian flights
There have been agreements on other matters such as the withdrawal of taxes on EU flights over Siberia.
Meat from Poland
Further problems include a ban by Russia on Polish meat exports (due to allegations of low quality and unsafe meat exported from the country), which caused Poland to veto proposed EU-Russia pacts concerning issues such as energy and migration; an oil blockade on Lithuania; and concerns by Latvia and Poland on the Nord Stream pipeline. In 2007 Polish meat was allowed to be exported to Russia.
2014 Russian food embargo
Announced on 6 August 2014 by President Putin, the embargo banned European food imports in response to EU sanctions.
Partnership and Cooperation Agreement
The legal basis for relations between the EU and Russia is the Partnership and Cooperation Agreement (PCA). Signed in June 1994 and in force since December 1997, the PCA was supposed to be valid for 10 years; since 2007 it has been renewed automatically each year until it is replaced by a new agreement. The PCA provides a political, economic and cultural framework for relations between Russia and the EU. It is primarily concerned with promoting trade, investment and harmonious economic relations. However, it also mentions the parties' shared "[r]espect for democratic principles and human rights as defined in particular in the Helsinki Final Act and the Charter of Paris for a new Europe" and a commitment to international peace and security. A replacement agreement has been under negotiation since 2008; following that and Russia's WTO entry, a more detailed agreement is to be negotiated.
Russian exports to the EU have very few restrictions, except for the steel sector.
The Four Common Spaces
Russia has chosen not to participate in the European Union's European Neighbourhood Policy (ENP), as it aspires to be an "equal partner" of the EU (as opposed to the "junior partnership" that Russia sees in the ENP). Consequently, Russia and the European Union agreed to create four Common Spaces for cooperation in different spheres. In practice there are no substantial differences (besides naming) between the sum of these agreements and the ENP Action Plans (adopted jointly by the EU and its ENP partner states). In both cases the final agreement is based on provisions from the EU acquis communautaire and is jointly discussed and adopted. For this reason, the Common Spaces receive funding from the European Neighbourhood and Partnership Instrument (ENPI), which also funds the ENP.
At the St. Petersburg Summit in May 2003, the EU and Russia agreed to reinforce their co-operation by creating, in the long term, four common spaces in the framework of the Partnership and Cooperation Agreement of 1997: a common economic space; a common space of freedom, security and justice; a space of co-operation in the field of external security; and a space of research, education, and cultural exchange.
The Moscow Summit in May 2005 adopted a single package of Road Maps for the creation of the four Common Spaces. These expand on the ongoing cooperation as described above, set out further specific objectives, and determine the actions necessary to make the common spaces a reality. They thereby determine the agenda for co-operation between the EU and Russia for the medium-term.
The London Summit in October 2005 focused on the practical implementation of the Road Maps for the four Common Spaces.
Common Economic Space
The objective of the common economic space is to create an open and integrated market between the EU and Russia. This space is intended to remove barriers to trade and investment and promote reforms and competitiveness, based on the principles of non-discrimination, transparency, and good governance.
Among the wide range of actions foreseen, a number of new dialogues are to be launched. Cooperation will be stepped up on regulatory policy, investment issues, competition, financial services, telecommunications, transport, energy, space activities and space launching, etc. Environment issues including nuclear safety and the implementation of the Kyoto Protocol also figure prominently.
Common Space of Freedom, Security and Justice
Work on this space has already made a large step forward with the conclusion of negotiations on the Visa Facilitation and Readmission Agreements. Both the EU and Russia are in the process of ratifying these agreements. The visa dialogue will continue with a view to examining the conditions for a mutual visa-free travel regime as a long-term perspective. In a 15 December 2011 statement given after an EU-Russia summit, the President of the European Commission confirmed the launch of the "Common Steps towards visa-free travel" with Russia. Russia hoped to sign a deal on visa-free travel as early as January 2014.
Cooperation on combating terrorism and other forms of international illegal activities such as money laundering, the fight against drugs and trafficking in human beings will continue as well as on document security through the introduction of biometric features in a range of identity documents. The EU support to border management and reform of the Russian judiciary system are among the highlights of this space.
With a view to contributing to the concrete implementation of the road map, the Justice and Home Affairs PPC met on 13 October 2005 and agreed to organise clusters of conferences and seminars, bringing together experts and practitioners on counter-terrorism, cyber-crime, document security and judicial cooperation. There was also agreement about developing greater cooperation between the European Border Agency (FRONTEX) and the Federal Border Security Service of Russia.
Common Space on External Security
The road map underlines the shared responsibility of the parties for an international order based on effective multilateralism, their support for the central role of the UN, and for the effectiveness in particular of the OSCE and the Council of Europe. The parties will strengthen their cooperation on security and crisis management in order to address global and regional challenges and key threats, notably terrorism and the proliferation of weapons of mass destruction (WMD). They will give particular attention to securing stability in the regions adjacent to Russian and EU borders (the "frozen conflicts" in Transnistria, Abkhazia, South Ossetia, Nagorno-Karabakh).
EU activities in this area are done in the framework of its Common Foreign and Security Policy.
Common Space on Research, Education, Culture
This space builds on the long-standing relations with Russia through its participation in EU Research and Development activities and the 6th FPRD in particular, and under the TEMPUS programme. It aims at capitalising on the strength of the EU and Russian research communities and cultural and intellectual heritage by reinforcing links between research and innovation and closer cooperation on education such as through convergence of university curricula and qualifications. It also lays a firm basis for cooperation in the cultural field. A European Studies Institute co-financed by both sides will be set up in Moscow for the start of the academic year 2006/7.
Russia and the EU continue to work together under Horizon 2020, which runs from 2014 to 2020.
Visa liberalization dialogue
On 4 May 2010, the EU and Russian Federation raised the prospect of beginning negotiations on a visa-free regime between their territories. However it was announced by the Council of Ministers of the EU that the EU is not completely ready to open up the borders due to high risk of increase in human trafficking and drug imports into Europe and because of the loose borders of Russia with Kazakhstan. They will instead work towards providing Russia with a "roadmap for visa-free travel." While this does not legally bind the EU to providing visa-free access to the Schengen area for Russian citizens at any specific date in the future, it does greatly improve the chances of a new regime being established and obliges the EU to actively consider the notion, should the terms of the roadmap be met. Russia on the other hand has agreed that should the roadmap be established, it will ease access for EU citizens for whom access is not visa-free at this point, largely as a result of Russian foreign policy which states that "visa free travel must be reciprocal between states." Both the EU and Russia acknowledge, however, that there are many problems to be solved before visa-free travel is introduced.
The dialogue was temporarily frozen by the EU in March 2014 during the 2014 Crimean crisis. In 2015, Jean-Maurice Ripert, the current French Ambassador to Russia, stated that France would be interested in abolishing short-term Sсhengen visas for Russians; in 2016, the Spanish Minister of Industry José Manuel Soria made a similar statement on behalf of Spain. In June 2016, EEAS released a Russian-language video describing the necessary conditions for the visa-free regime. The same year, a number of EU officials, including the head of EEAS' Russia Division Fernando Andresen Guimarães, said that they would like to restart negotiations on visa abolishment; the Czech President Milos Zeman also spoke out in favor of visa-free regime for Russians. On 24 May 2016, the German think tank DGAP released a report called "The Eastern Question: Recommendations for Western Policy", discussing the renewed Western strategy towards Russia in the wake of increased tensions between Putin's regime and EU. Their recommendations include visa liberalization for Russian citizens in order to "improve people-to-people contacts and to send a strong signal that there is no conflict with Russian society". Likewise, the chairman of Munich Security Conference Wolfgang Ischinger suggested granting "visa-free entry to countries of the Schengen area for ordinary Russian citizens, who are not to blame for the Ukrainian crisis and have nothing to do with sanctions". On 29 August 2017, the German politician and member of Parliamentary Assembly of the Council of Europe Marieluise Beck published a piece in Neue Zürcher Zeitung with a number of recommendations for EU on dealing with Russia and counteracting Kremlin propaganda; one of them is visa-free regime for Russians in order to incorporate Russians into Western values and promote democratic change in Russia. In October 2018, the member of SPD and Bundestag deputy Dirk Wiese suggested granting visa-free EU entry to young Russians in order to facilitate student exchange programs. In July 2019, the German politician and chairman of the Petersburg Dialogue Ronald Pofalla stated his support for visa-free regime for the young Russians, and said that he will be negotiating for it in the second half of 2019. Later that month, the German Minister of Foreign Affairs Heiko Maas said that visa-free regime "is a matter we want to pursue further. We may not be able to decide it alone, but we intend to sit down with our Schengen partners to see what can be done".
EU membership discussion
Among the most vocal supporters of Russian membership of the EU has been former Italian Prime Minister Silvio Berlusconi. In an article published in Italian media on 26 May 2002, he said that the next step in Russia's growing integration with the West should be EU membership. On 17 November 2005, he commented with regard to the prospect of such a membership that he was "convinced that even if it is a dream ... it is not too distant a dream and I think it will happen one day." Berlusconi has made similar comments on other occasions as well. Later, in October 2008, he said: "I consider Russia to be a Western country and my plan is for the Russian Federation to be able to become a member of the European Union in the coming years" and stated that he had held this vision for years.
Russia's permanent representative to the EU commented on this by saying that Russia has no plans to join the EU. Vladimir Putin has said that Russia joining the EU would not be in the interests of either Russia or the EU, although he advocated close integration in various dimensions, including the establishment of four common spaces between Russia and the EU, among them united economic, educational and scientific spaces, as declared in the 2003 agreement.
Michael McFaul claimed in 2001 that Russia was "decades away" from qualifying for EU membership. Former German Chancellor Gerhard Schröder has said that though Russia must "find its place both in NATO, and, in the longer term, in the European Union, and if conditions are created for this to happen", such membership is not economically feasible in the near future.
Czech President Miloš Zeman stated that he "dreams" of Russia joining EU.
According to a number of surveys carried out by Deutsche Welle in 2012, from 36% to 54% of Russians supported Russia joining the EU, and about 60% of them saw the EU as an important partner for their country. Young people in particular have a positive image of the European Union.
Russian and EU public opinion
A February 2014 poll conducted by the Levada Center, Russia's largest independent polling organization, found that nearly 80% of Russian respondents had a "good" impression of the EU. This changed dramatically in 2014 with the Ukrainian crisis: 70% of respondents came to take a hostile view of the EU, compared to 20% viewing it positively.
A Levada poll released in August 2018 found that 68% of Russian respondents believe that Russia needs to dramatically improve relations with Western countries. 42% of Russians polled said they had a positive view of the EU, up from 28% in May 2018. A Levada poll released in February 2020 found that 80% of Russian respondents believe that Russia and the West should become friends and partners. 49% of Russians polled said they had a positive view of the EU. However, with the exception of Bulgaria, Slovakia and Greece, the share of residents in the rest of the EU countries polled by Pew Research Center with positive views of Russia is considerably below 50%.
Russia's foreign relations with EU member states
See also
Armenia–European Union relations
Belarus–European Union relations
Common Economic Space of the CIS
EU-Russia Centre
European Union Association Agreement
Northern Dimension
Russia in the European energy sector
Ukraine–European Union relations
References
Further reading
External links
Permanent Mission of the Russian Federation to the European Union
The official site of the EU's delegation to Russia
European External Action Service: The EU's relations with Russia
European Union Institute for Security Studies: Research on EU-Russia Relations
Trade information between EU and Russia, Animated infographic, European Parliamentary Research Service
The Russo-Georgian War and Beyond: towards a European Great Power Concert, Danish Institute of International Studies
The EU and Russia cease to be a priority for each other: The squabble over WTO membership reveals the defunct state of the strategic partnership, FIIA Comment (15) 2012, The Finnish Institute of International Affairs
Contemplated enlargements of the European Union
Multilateral relations of Russia
Third-country relations of the European Union |
25199958 | https://en.wikipedia.org/wiki/Oscar%20Nierstrasz | Oscar Nierstrasz | Oscar Marius Nierstrasz (born ) is a Professor at the Computer Science Institute (IAM) at the University of Berne, and a specialist in software engineering and programming languages. He is active in the field of
programming languages and mechanisms to support the flexible composition of high-level, component-based abstractions,
tools and environments to support the understanding, analysis and transformation of software systems to more flexible, component-based designs,
secure software engineering to understand the challenges current software systems face in terms of security and privacy, and
requirement engineering to support stakeholders and developers to have moldable and clear requirements.
He has led the Software Composition Group at the University of Bern since 1994 (as of December 2011).
Life
Nierstrasz was born in Laren, the Netherlands. He lived there for three years before his parents, Thomas Oscar Duyck (born 1930) and Meta Maria van den Bos (1936–1988), moved to Canada. He developed an early interest in mathematics and computer science, and took his bachelor's degree in the Departments of Pure Mathematics and of Combinatorics and Optimization at the University of Waterloo in 1979.
He enrolled for master's studies in the Department of Computer Science at the University of Toronto in 1981 and continued there with his Ph.D. under the supervision of Prof. D. Tsichritzis. During his postgraduate work at the university, Nierstrasz worked on "message flow analysis". He finished his Ph.D. in 1984 and then worked at the FORTH Institute of Computer Science in Crete for one year.
Since 1985, Nierstrasz has lived in Switzerland. He was a member of the Object System Group at the Centre Universitaire d'Informatique of the University of Geneva, Switzerland (1985–1994). There he met his wife, Angela Margiotta Nierstrasz; they married in May 1994. In late 1994, he moved to Bern, Switzerland, to work as a professor.
Career
In late 1994, he joined the University of Bern as a professor, where he led the Software Composition Group from 1994 to December 2021. He has also served as a dean of the Computer Science Institute (IAM) at the University of Berne. During his career, he supervised 40 Ph.D. students and almost 100 bachelor's and master's theses.
He has made various contributions to the software engineering research community:
Nierstrasz co-authored several books such as Object-Oriented Reengineering Patterns and Pharo by Example. He was editor of the Journal of Object Technology from 2010 to 2013, succeeding the founding editor, Richard Wiener.
CyberChair, an online submission and reviewing system, is based on Nierstrasz's publication "Identify the Champion", in which he described the peer review process for contributions to scientific conferences using an organizational pattern language.
His Erdős number is 3, via David M. Jackson, E. Rodney Canfield, and Paul Erdős.
Nierstrasz won the Senior Dahl–Nygaard Prize in 2013.
References
External links
Personal page at the University of Berne
Nierstrasz Family Web Site
1957 births
Living people
University of Toronto alumni
Software engineering researchers
Programming language researchers
Computer science writers
University of Bern faculty |
417934 | https://en.wikipedia.org/wiki/Office%20Assistant | Office Assistant | The Office Assistant is a discontinued intelligent user interface for Microsoft Office that assisted users by way of an interactive animated character which interfaced with the Office help content. It was included in Microsoft Office for Windows (versions 97 to 2003), in Microsoft Publisher and Microsoft Project (versions 98 to 2003), Microsoft FrontPage (versions 2002 and 2003), and Microsoft Office for Mac (versions 98 to 2004).
The default assistant in the English version was named Clippit (commonly nicknamed Clippy), after a paperclip. The character was designed by Kevan J. Atteberry. Clippit was the default and by far the most notable Assistant (partly because in many cases the setup CD was required to install the other assistants), which also led to it being called simply the Microsoft Paperclip. The original Clippit in Office 97 was given a new look in Office 2000.
The feature drew a strongly negative response from many users. Microsoft turned off the feature by default in Office XP, acknowledging its unpopularity in an ad campaign spoofing Clippit. The feature was removed altogether in Office 2007 and Office 2008 for Mac, as it continued to draw criticism even from Microsoft employees.
In July 2021, Microsoft used Twitter to show off a redesign of Clippit (referred to as "Clippy" in the tweet), saying that if the tweet received 20,000 likes the company would replace the paperclip emoji in Microsoft 365 with the character. The tweet quickly surpassed 20,000 likes, and Microsoft then announced that it would make the replacement. In November 2021, Microsoft officially updated its design of the paperclip emoji (📎) on Windows 11 to be Clippit/"Clippy".
Overview
According to Alan Cooper, the "Father of Visual Basic", the concept of Clippit was based on a "tragic misunderstanding" of research conducted at Stanford University, showing that the same part of the brain in use while using a mouse or keyboard was also responsible for emotional reactions while interacting with other human beings and thus is the reason people yell at their computer monitors. Microsoft concluded that if humans reacted to computers the same way they react to other humans, it would be beneficial to include a human-like face in their software. As people already related to computers directly as they do with humans, the added human-like face emerged as an annoying interloper distracting the user from the primary conversation.
First introduced in Microsoft Office 97, the Office Assistant was codenamed TFC during development. It appeared when the program determined the user could be assisted by using Office wizards, searching help, or advising users on using Office features more effectively. It also presented tips and keyboard shortcuts. For example, typing an address followed by "Dear" would cause the Assistant to appear with the message, "It looks like you're writing a letter. Would you like help?"
Assistants
Apart from Clippit, other Office Assistants were also available:
The Dot (a shape-shifting smiley-faced red ball)
Hoverbot (a robot)
The Genius (a caricature of Albert Einstein, removed in Office XP but available as a downloadable add-on)
Office Logo (a jigsaw puzzle)
Mother Nature (a globe)
Scribble (an origami-esque cat)
Power Pup (a superhero dog)
Will (a caricature of William Shakespeare).
In many cases the Office installation CD was necessary to activate a different Office assistant character, so the default character, Clippit, remains widely known compared to other Office Assistants.
In Office 2000, the Hoverbot, Scribble, and Power Pup assistants were replaced by:
F1 (a robot)
Links (a cat)
Rocky (a dog)
The Clippit and Office Logo assistants were also redesigned. The removed assistants later resurfaced as downloadable add-ons.
The Microsoft Office XP Multilingual Pack had two more assistants: an animated secretary and a version of the Monkey King for Asian-language users in non-Asian Office versions. Native-language versions provided additional representations, such as Kairu the dolphin in Japanese.
A small image of Clippit can be found in Office 2013 or newer, which could be enabled by going to Options and changing the theme to "School Supplies". Clippit would then appear on the ribbon.
Technology
The Office Assistant used technology initially from Microsoft Bob and later Microsoft Agent, offering advice based on Bayesian algorithms. From Office 2000 onward, Microsoft Agent (.acs) replaced the Microsoft Bob-descended Actor (.act) format as the technology supporting the feature. Users can add other assistants to the folder where Office is installed for them to show up in the Office application, or install them in the Microsoft Agent folder in the System32 folder. Microsoft Agent-based characters have richer forms and colors, and are not enclosed within a boxed window. Furthermore, the Office Assistant could use the Lernout & Hauspie TruVoice Text-to-Speech Engine to provide output speech capabilities to Microsoft Agent, but it required SAPI 4.0. The Microsoft Speech Recognition Engine allowed the Office Assistant to accept speech input.
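The paragraph above notes only that the Assistant's advice relied on Bayesian algorithms. As a purely illustrative sketch of what Bayesian suggestion-ranking can look like, the following Python snippet scores hypothetical help topics from hypothetical user-action "signals" with a tiny naive Bayes model; all names, data and the model itself are assumptions made for illustration and do not reflect Microsoft's actual implementation.

```python
from collections import defaultdict
from math import log

# Invented training data: which help topic a user needed after which signals.
observations = [
    (["typed_dear", "typed_address"], "letter_wizard"),
    (["typed_dear"], "letter_wizard"),
    (["inserted_table", "typed_numbers"], "table_help"),
    (["typed_numbers"], "table_help"),
]

topic_counts = defaultdict(int)                        # counts for P(topic)
signal_counts = defaultdict(lambda: defaultdict(int))  # counts for P(signal | topic)
for signals, topic in observations:
    topic_counts[topic] += 1
    for s in signals:
        signal_counts[topic][s] += 1

def rank_topics(signals):
    """Rank topics by log P(topic) + sum of log P(signal | topic)."""
    total = sum(topic_counts.values())
    scores = {}
    for topic, count in topic_counts.items():
        score = log(count / total)
        for s in signals:
            # Add-one smoothing so an unseen signal does not zero out a topic.
            score += log((signal_counts[topic][s] + 1) / (count + 2))
        scores[topic] = score
    return sorted(scores, key=scores.get, reverse=True)

print(rank_topics(["typed_dear"]))   # ['letter_wizard', 'table_help']
```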
Compatibility
The Microsoft Agent components that it requires are not included in Windows 7 or later; however, they can be downloaded from the Microsoft website. Installation of Microsoft Agent on Windows 8, Windows 8.1, Windows 10 and Windows 11 is also possible. When desktop compositing with Aero glass is enabled on Windows Vista or 7, or when running on Windows 8 or newer, the normally transparent space around the Office Assistant becomes solid-colored pink, blue, or green.
Additional downloadable assistants
Since their introduction, more assistants have been released and have been exclusively available via download.
Bosgrove (a butler)
Courtney (a flying car driver)
Earl (a surfboarding alien)
Genie (a genie)
Kairu the Dolphin, otherwise known as Chacha (available for East Asian editions, downloadable for Office 97)
Lynx
Max (a Macintosh Plus computer) (Macintosh)
Merlin (a wizard)
Peedy (a green parrot, which was ultimately reused in the first iteration of the notorious BonziBuddy software)
Robby (a robot)
Rover (a golden retriever, also featured as Windows XP Explorer's search companion.)
The Monkey King (available for East Asian editions, downloadable for Office 97)
The 12 assistants for Office 97 could be downloaded from the Microsoft website.
Criticism and parodies
The program was widely reviled among users as intrusive and annoying, and was criticized even within Microsoft. Microsoft's internal codename TFC had a derogatory origin: Steven Sinofsky states that "C" stood for "clown", while allowing his readers to guess what "TF" might stand for. Smithsonian Magazine called Clippit "one of the worst software design blunders in the annals of computing". Time magazine included Clippit in a 2010 article listing the fifty worst inventions.
In July 2000, the online comic strip User Friendly ran a series of panels featuring Clippit. In 2001, a Microsoft advertising campaign for Office XP included the (now defunct) website officeclippy.com, which highlighted the disabling of Clippit in the software. It featured the animated adventures of Clippit (voiced by comedian Gilbert Gottfried) as he learned to cope with unemployment ("X… XP… As in, ex-paperclip?!") and parodied behaviors of the Office assistant. Curiously, one of these ("Clippy Faces Facts") uses the same punchline as one of the User Friendly comic strips. These videos can be downloaded from Microsoft's website as self-contained Flash Player executables. Clippit ends up in an office as a floppy disk ejecting pin.
In August 2001, internet comedian JamesWeb made a parody version of Windows called Windows RG, which people could run in web browsers. The fake operating system frequently crashes and displays error messages. The parody of Windows features a basic version of Word. Upon starting it up, a Clippit-style character informs the user "It looks like you're probably not writing a letter. I like letters. I think you should." He tells the user to start the letter with the words "MILK SPONGE" before inserting an image of a paperclip into the document which he claims is his brother. This crashes the program. Windows RG informs the user that "paperclip.exe has performed 94,708 illegal opperations [sic] and will now be shot." The Word program is then closed for having performed an illegal operation "(killed a paperclip)".
There is a Clippit parody in the Plus! Dancer application included in Microsoft Plus! Digital Media Edition which is later included as Windows Dancer in Windows XP Media Center Edition 2005. The dancing character Boo Who? is wearing a ghost outfit, roughly having the shape of Clippit's body, with a piece of wire visible underneath. Occasionally, the white sheet slips, and reveals the thin curve of steel. The description mentions "working for a short while for a Redmond, WA based software company, where he continued to work until being retired in 2001". Clippit is also included as a player character in Microsoft Bicycle Card Games and Microsoft Bicycle Board Games. It was also used in the "Word Crimes" music video by "Weird Al" Yankovic.
Vigor is a piece of Clippit-inspired parody software—a version of the vi text editor featuring a rough-sketched Clippit.
On April 1, 2014, Clippit appeared as an Office Assistant in Office Online as part of an April Fools' Day joke. Several days later, an easter egg was found in the then-preview version of Windows Phone 8.1. When asked if she likes Clippit, the personal assistant Cortana would answer "Definitely. He taught me how important it is to listen." or "What's not to like? That guy took a heck of a beating and he's still smiling." Her avatar occasionally turned into a two-dimensional Metro-style Clippit for several seconds. This easter egg is still available in the full release version of the Windows Phone operating system and Windows 10. A Clippit easter egg is also found in Apple's personal assistant, Siri, although it is less flattering, saying "Clippy?! Don't get me started." or "The less said about Clippy the better." In Google Assistant, when asked if she trusts, knows, likes, or is Clippy, she responds with "Clippy? Clippy is legendary", with a smiling emoji at the end; when asked why, she has no idea why Clippy is legendary. When asked if she knows who Clippy is, she says she remembers what the user has told her, answering "Clippy is an office."
The built-in linting tool of the Rust programming language, which was created in 2014, is named Clippy as a reference to Microsoft's Clippy.
On April 1, 2015, Tumblr created a parody of Clippit, Coppy, as an April Fools joke. Coppy is an anthropomorphized photocopier that behaved in similar ways to Clippit, asking the user if they want help. Coppy would engage the reader in a series of pointless questions, with a dialogue box written in Comic Sans MS, deliberately designed to be extremely annoying.
In popular culture
After featuring Clippit's tomb in a movie to promote Office 2010, the character was relaunched as the main character of the game Ribbon Hero 2, which is an interactive tutorial released by Microsoft in 2011. In the game, Clippy needs a new job and accidentally goes inside a time machine, travelling to different ages solving problems with Word, Excel, PowerPoint, and OneNote. Other Office Assistant names are also featured during the "Future Age" as planets of the future solar system.
In "Search Committee", the seventh season finale of The Office aired in May 2011, Darryl calls Microsoft and asks whether they still have Clippit while trying to build a résumé.
In 2015 a music video was released for the song "Ghost" (by Delta Heavy) in which the abandoned Clippit is stuck between the software of the mid-nineties but then travels to the contemporary web and regains its place by hacking itself into any digital system.
Clippit made a cameo appearance in the Drawn Together episode "The One Wherein There Is a Big Twist, Part II", where he offered to help Wooldoor Sockbat with his suicide note.
Clippit is portrayed as a romantic interest in "Conquered by Clippy", a comedic/erotic story by Leonard Delaney.
In the ninth episode of Season 3 of HBO's Silicon Valley, originally aired in June, 2016, a new animated character called "Pipey", clearly based on Microsoft's Clippit, provides help to users of the Pied Piper platform.
In The Amazing World of Gumball episode "The Void", Gumball and Darwin Watterson enter the Void, a dimension wherein people and things that have been deemed as the world's "mistakes" are placed after having been removed from existence. As the two are trying to escape the dimension with their forgotten friend, Molly Collins, they encounter Clippit, who asks Gumball if he is writing an email. Gumball then knocks him out with a nearby disco shoe.
In the tabletop role-playing game show "Dimension 20, A Starstruck Odyssey", the planet-scale AI called Gnosis is revealed to be Clippy when the first message it successfully creates is, "IT LOOKS LIKE YOU'RE TRYING TO WRITE A LETTER".
See also
Microsoft Bob
Ms. Dewey
Tafiti
Tay (bot)
Talking Moose
Virtual assistant
References
External links
Clippy discontinued in Office 12
Download additional Agents Office 97 (Quiet Office Logo, Kairu, Earl, F1)
Download Office 97 Assistant: Kairu the Dolphin
Clippy returns in Microsoft's April Fools' pranks
Luke Swartz — Why People Hate the Paperclip – Academic paper on why people hate the Office Assistant
Microsoft Agent Ring - download more unofficial characters
"Farewell Clippy: What's Happening to the Infamous Office Assistant in Office XP" (April 2001) at Microsoft.com
Human–computer interaction
Fictional shapeshifters
Microsoft Office
Technical communication |
194839 | https://en.wikipedia.org/wiki/Hebern%20rotor%20machine | Hebern rotor machine | The Hebern Rotor Machine was an electro-mechanical encryption machine built by combining the mechanical parts of a standard typewriter with the electrical parts of an electric typewriter, connecting the two through a scrambler. It is the first example (though just barely) of a class of machines known as rotor machines that would become the primary form of encryption during World War II and for some time after, and which included such famous examples as the German Enigma.
History
Edward Hugh Hebern was a building contractor who was jailed in 1908 for stealing a horse. It is claimed that, with time on his hands, he started thinking about the problem of encryption, and eventually devised a means of mechanizing the process with a typewriter. He filed his first patent application for a cryptographic machine (not a rotor machine) in 1912. At the time he had no funds to be able to spend time working on such a device, but he continued to produce designs. Hebern made his first drawings of a rotor-based machine in 1917, and in 1918 he built a model of it. In 1921 he applied for a patent for it, which was issued in 1924. He continued to make improvements, adding more rotors. Agnes Driscoll, the chief civilian employee of the US Navy's cryptography operation (later to become OP-20-G) between WWI and WWII, spent some time working with Hebern before returning to Washington and OP-20-G in the mid-'20s.
Hebern was so convinced of the future success of the system that he formed the Hebern Electric Code company with money from several investors. Over the next few years he repeatedly tried to sell the machines both to the US Navy and Army, as well as to commercial interests such as banks. None was terribly interested, as at the time cryptography was not widely considered important outside governments. It was probably because of William F. Friedman's confidential analysis of the Hebern machine's weaknesses (substantial, though repairable) that its sales to the US government were so limited; Hebern was never told of them. Perhaps the best indication of a general distaste for such matters was the statement by Henry Stimson in his memoirs that "Gentlemen do not read each other's mail." It was Stimson, as Secretary of State under Hoover, who withdrew State Department support for Herbert Yardley's American Black Chamber, leading to its closing.
Eventually his investors ran out of patience, and sued Hebern for stock manipulation. He spent another brief period in jail, but never gave up on the idea of his machine. In 1931 the Navy finally purchased several systems, but this was to be his only real sale.
There were three other patents for rotor machines issued in 1919, and several other rotor machines were designed independently at about the same time. The most successful and widely used was the Enigma machine.
Description
The key to the Hebern design was a disk with electrical contacts on either side, known today as a rotor. Linking the contacts on either side of the rotor were wires, with each letter on one side being wired to another on the far side in a random fashion. The wiring encoded a single substitution alphabet.
When the user pressed a key on the typewriter keyboard, a small amount of current from a battery flowed through the key into one of the contacts on the input side of the disk, through the wiring, and back out a different contact. The power then operated the mechanicals of an electric typewriter to type the encrypted letter, or alternately simply lit a bulb or paper tape punch from a teletype machine.
Normally such a system would be no better than the single-alphabet systems of the 16th century. However the rotor in the Hebern machine was geared to the keyboard on the typewriter, so that after every keypress, the rotor turned and the substitution alphabet thus changed slightly. This turns the basic substitution into a polyalphabetic one similar to the well known Vigenère cipher, with the exception that it required no manual lookup of the keys or cyphertext. Operators simply turned the rotor to a pre-chosen starting position and started typing. To decrypt the message, they turned the rotor around in its socket so it was "backwards", thus reversing all the substitutions. They then typed in the ciphertext and out came the plaintext.
Better yet, several rotors can be placed such that the output of the first is connected to the input of the next. In this case the first rotor operates as before, turning once with each keypress. Additional rotors are then advanced by a cam on the rotor beside them, each one turning one position after its neighbour completes a full turn. In this way the number of such alphabets increases dramatically. For a rotor with 26 letters in its alphabet, five such rotors "stacked" in this fashion allow for 26^5 = 11,881,376 different possible substitutions.
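To make the mechanism concrete, the following short Python sketch models a stack of rotors with this odometer-style stepping. It is only an illustration of the principle: the wirings used are simply convenient, valid 26-letter permutations (they happen to be the well-known Enigma rotor I–III tables, not Hebern's wirings), and details such as exactly when a rotor steps relative to the keypress are simplified.

import string
ALPHA = string.ascii_uppercase

def make_rotor(wiring):
    # wiring is a 26-letter permutation; keep forward and inverse tables
    fwd = [ALPHA.index(c) for c in wiring]
    inv = [0] * 26
    for i, o in enumerate(fwd):
        inv[o] = i
    return fwd, inv

def step(positions):
    # odometer stepping: the first rotor turns on every keypress and
    # carries to the next rotor after completing a full turn
    for i in range(len(positions)):
        positions[i] = (positions[i] + 1) % 26
        if positions[i] != 0:
            break

def transform(letter, rotors, positions, decrypt=False):
    x = ALPHA.index(letter)
    pairs = list(zip(rotors, positions))
    if decrypt:
        pairs.reverse()                      # undo the rotors in reverse order
    for (fwd, inv), p in pairs:
        table = inv if decrypt else fwd
        x = (table[(x + p) % 26] - p) % 26   # offset by the rotor's position
    return ALPHA[x]

def run(text, rotors, start, decrypt=False):
    positions = list(start)
    out = []
    for letter in text:
        step(positions)                      # the substitution changes with every keypress
        out.append(transform(letter, rotors, positions, decrypt))
    return "".join(out)

# example wirings: Enigma rotors I-III, used here only as valid permutations
rotors = [make_rotor("EKMFLGDQVZNTOWYHXUSPAIBRCJ"),
          make_rotor("AJDKSIRUXBLHWTMCQGZNPYFVOE"),
          make_rotor("BDFHJLCPRTXVZNYEIWGAKMUSQO")]
ciphertext = run("ATTACKATDAWN", rotors, [0, 0, 0])
assert run(ciphertext, rotors, [0, 0, 0], decrypt=True) == "ATTACKATDAWN"

Running the same text through the same starting position with the inverse tables, in reverse rotor order, recovers the plaintext, which is the software analogue of turning the rotor around in its socket as described above.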
William F. Friedman attacked the Hebern machine soon after it came on the market in the 1920s. He quickly "solved" any machine that was built similar to the Hebern, in which the rotors were stacked with the rotor at one end or the other turning with each keypress, the so-called fast rotor. In these cases the resulting ciphertext consisted of a series of single-substitution cyphers, each one 26 letters long. He showed that fairly standard techniques could be used against such systems, given enough effort.
Of course, this fact was itself a great secret. This may explain why the Army and Navy were unwilling to use Hebern's design, much to his surprise.
References
External links
The Hebern Code machines
Cryptographic hardware
Rotor machines |
23504265 | https://en.wikipedia.org/wiki/AVSnap | AVSnap | AVSnap is a freeware audio/visual system integration and design software package that was created by Altinex Inc. in 2004. The software started as a way to create a visual routing diagram of an audio/visual system, similar to an A/V schematic or a computer network diagram. It provides a design environment for creating audio/visual diagrams and layouts.
Functionality
AV System design – create library symbols, assign snap-point properties, connect symbols with a cable object, and generate a list of materials and a list of cables.
AV System layout – create symbols for front or back panels, position them in a rack and connect them with cable, providing a wiring diagram to the technician for rack wiring.
Flow chart the process – switch AVSnap into flow chart mode and develop simple or complex flow charts for your business.
Presentation mode – create a design over multiple pages and then use F5 to switch to presentation mode. All keys work the same way as in PowerPoint.
Communication mode – test all protocols for communicating with equipment using communication mode. Press the telephone icon to transform AVSnap into a powerful HyperTerminal with a twist: there are two windows, a terminal window and a notepad. Jot down commands and then send them through a COM port or over the network.
AVSnap Meeting is hosted by Altinex servers — just select to start a meeting, then call up your colleagues and provide them with the session ID. If they have AVSnap, they select Join meeting and enter the session ID. Now you can present your design over the web.
Language editor – if you want to use your native language with AVSnap, select the language editor and type in all of the text displayed (there are over 1000 messages). Once done, switch to your own language for simplicity.
GUI design — set the page format to pixels to reveal the GUI design environment. Design buttons, sliders and video objects, import PNG files to create a GUI background, and then press F11 for the full development language. Run the created program on a PC or a standalone touch panel.
Web server (license required) – anything you design in AVSnap can be served on the web: graphics, pages, GUIs or anything else. A GUI can be used to control equipment over the web.
Libraries
Altinex Inc. has combined groups of preset icons into libraries that users can select from to create their diagrams. Users can create their own libraries or choose from ones included from other companies including Simtrol, Calypso Systems, and partner Analog Way. Currently AVSnap estimates that it has about 3000 users (based on downloads and registrations).
History
The origins of AVSnap go back to 1993, when there were very few vendors that provided software for designing AV systems. Many AV professionals were looking for a development tool that could be used in-house and that also provided an easy exchange of information.
The first attempt to design AVSnap started in 2001. After a year of experimenting with features, the final draft of the software was developed. It took another three years to fine-tune performance and to come up with the final name.
After the initial design, AVSnap was released to the general public in 2004. The first version of the software was a simple graphic editor that allowed easy AV system design. Eventually it evolved into the development system it is now.
See also
List of CAD companies
Stardraw
AutoCAD
References
External links
AVSnap homepage
Altinex homepage
2004 software
Computer-aided design software |
8890242 | https://en.wikipedia.org/wiki/Virtual%20Iron | Virtual Iron | Virtual Iron Software, located in Lowell, Massachusetts, sold proprietary software for virtualization and management of a virtual infrastructure. Co-founded by Alex Vasilevsky, Virtual Iron was among the first companies to offer virtualization software fully supporting Intel VT-x and AMD-V hardware-assisted virtualization.
In May 2009, Oracle Corporation agreed to acquire Virtual Iron Software, Inc., subject to customary closing conditions. Oracle then declined to offer any updates or patches for existing customers, even updates and patches developed before the purchase. On June 19, 2009, The Register reported that Oracle had killed the Virtual Iron product.
Virtual Iron platform
Virtual Iron software ran unmodified 32-bit and 64-bit guest operating systems with near-native performance. A virtualization manager offered access to control, automate, modify and monitor virtual resources. Virtualization services were automatically deployed on supported hardware without additional software. The platform was based on the open source Xen hypervisor.
Virtual Iron, like other virtualization software, provided server consolidation, business continuity and capacity management.
The Virtual Iron platform consisted of a virtualization manager, virtualization servers and a hypervisor. The virtualization manager (VI-Center), a Java-based application, allowed for central management of the virtualized servers. A physical server could have many virtualized servers, which ran as unmodified guest operating systems.
Virtual Iron could use both physical-storage or virtual-storage access models. However, the use of a virtual-storage access model leveraged SAN storage to create a fault-tolerant iSCSI or Fibre Channel based cluster of virtual nodes. The VI Center installed on both Windows and Linux. After installation, the administrator had to configure a "management network" for the purpose of communicating with nodes in the cluster. The VI Center used the management network to PXE boot any server that was connected and correctly configured (for PXE boot).
The included LiveRecovery tool could configure high availability. Additionally, CPU or power-consumption load-balancing was configurable using the LiveCapacity or LivePower tools respectively. Additional features included disk and virtual machine cloning (snapshots), IPMI/ILO support, etc.
"Native virtualization"
Virtual Iron had implemented full virtualization (requiring hardware-assisted virtualization which it called native virtualization) over paravirtualization. Native virtualization allowed for unmodified guest operating systems and had the advantage of hardware advances for better performance. Virtual Iron, Inc claimed to have pioneered the implementation of native virtualization.
Virtual Iron discussed paravirtualization and native virtualization in its blog:
Virtual Iron has decided against paravirtualization in favor of "native virtualization." With hardware advances coming out of Intel and AMD, we see native virtualization capable of matching physical hardware performance without any of the complexity and engineering efforts involved in paravirtualizing an OS. From our discussions with a broad range of users, they simply do not want to roll out modified OSs unless the trade-off is heavily in their favor. This Faustian trade-off is no longer necessary.
See also
Comparison of platform virtualization software
Virtual machine
Platform virtualization
x86 virtualization
References
External links
Virtual Iron Home Page
Virtual Iron Virtualization News and Support and Training Resources
Software companies of the United States
Virtualization software
Companies established in 2003 |
638229 | https://en.wikipedia.org/wiki/Technical%20support | Technical support | Technical support (abbreviated as tech support) is an advice service provided, usually over the phone, to help people who have problems using a computer. Presently most large and mid-size companies have outsourced their tech support operations. Many companies provide discussion boards for users of their products to interact; such forums allow companies to reduce their support costs without losing the benefit of customer feedback.
Outsourcing technical support
With the increasing use of technology in modern times, there is a growing requirement to provide technical support. Many organizations locate their technical support departments or call centers in countries or regions with lower costs. Dell was amongst the first companies to outsource their technical support and customer service departments to India in 2001. There has also been a growth in companies specializing in providing technical support to other organizations. These are often referred to as MSPs (Managed Service Providers).
For businesses needing to provide technical support, outsourcing allows them to maintain high availability of service. Such need may result from peaks in call volumes during the day, periods of high activity due to the introduction of new products or maintenance service packs, or the requirement to provide customers with a high level of service at a low cost to the business. For businesses needing technical support assets, outsourcing enables their core employees to focus more on their work in order to maintain productivity. It also enables them to utilize specialized personnel whose technical knowledge base and experience may exceed the scope of the business, thus providing a higher level of technical support to their employees.
Multi-level tech support
Technical support is often subdivided into tiers, or levels, in order to better serve a business or customer base. The number of levels a business uses to organize their technical support group is dependent on the business' needs regarding their ability to sufficiently serve their customers or users. The reason for providing a multi-tiered support system instead of one general support group is to provide the best possible service in the most efficient possible manner. Success of the organizational structure is dependent on the technicians' understanding of their level of responsibility and commitments, their customer response time commitments, and when to appropriately escalate an issue and to which level. A common support structure revolves around a three-tiered technical support system. Remote computer repair is a method for troubleshooting software related problems via remote desktop connections.
L1 Support
The first job of a Tier I specialist is to gather the customer's information and to determine the customer's issue by analyzing the symptoms and figuring out the underlying problem. When analyzing the symptoms, it is important for the technician to identify what the customer is trying to accomplish so that time is not wasted on "attempting to solve a symptom instead of a problem."
Once identification of the underlying problem is established, the specialist can begin sorting through the possible solutions available. Technical support specialists in this group typically handle straightforward and simple problems while "possibly using some kind of knowledge management tool." This includes troubleshooting methods such as verifying physical layer issues, resolving username and password problems, uninstalling/reinstalling basic software applications, verification of proper hardware and software set up, and assistance with navigating around application menus. Personnel at this level have a basic to general understanding of the product or service and may not always contain the competency required for solving complex issues. Nevertheless, the goal for this group is to handle 70–80% of the user problems before finding it necessary to escalate the issue to a higher level.
L2 Support
Tier II (or Level 2, abbreviated as T2 or L2) is a more in-depth technical support level than Tier I and therefore costs more as the technicians are more experienced and knowledgeable on a particular product or service. It is synonymous with level 2 support, support line 2, administrative level support, and various other headings denoting advanced technical troubleshooting and analysis methods. Technicians in this realm of knowledge are responsible for assisting Tier I personnel in solving basic technical problems and for investigating elevated issues by confirming the validity of the problem and seeking for known solutions related to these more complex issues. However, prior to the troubleshooting process, it is important that the technician review the work order to see what has already been accomplished by the Tier I technician and how long the technician has been working with the particular customer. This is a key element in meeting both the customer and business needs as it allows the technician to prioritize the troubleshooting process and properly manage their time.
If a problem is new and/or personnel from this group cannot determine a solution, they are responsible for elevating this issue to the Tier III technical support group. In addition, many companies may specify that certain troubleshooting solutions be performed by this group to help ensure the intricacies of a challenging issue are solved by providing experienced and knowledgeable technicians. This may include, but is not limited to, onsite installations or replacement of various hardware components, software repair, diagnostic testing, or the utilization of remote control tools to take over the user's machine for the sole purpose of troubleshooting and finding a solution to the problem.
L3 Support
Tier III (or Level 3, abbreviated as T3 or L3) is the highest level of support in a three-tiered technical support model responsible for handling the most difficult or advanced problems. It is synonymous with level 3 support, 3rd line support, back-end support, support line 3, high-end support, and various other headings denoting expert level troubleshooting and analysis methods. These individuals are experts in their fields and are responsible for not only assisting both Tier I and Tier II personnel, but with the research and development of solutions to new or unknown issues. Note that Tier III technicians have the same responsibility as Tier II technicians in reviewing the work order and assessing the time already spent with the customer so that the work is prioritized and time management is sufficiently utilized. If it is at all possible, the technician will work to solve the problem with the customer as it may become apparent that the Tier I and/or Tier II technicians simply failed to discover the proper solution. Upon encountering new problems, however, Tier III personnel must first determine whether or not to solve the problem and may require the customer's contact information so that the technician can have adequate time to troubleshoot the issue and find a solution. It is typical for a developer or someone who knows the code or backend of the product, to be the Tier 3 support person.
In some instances, an issue may be so problematic to the point where the product cannot be salvaged and must be replaced. Such extreme problems are also sent to the original developers for in-depth analysis. If it is determined that a problem can be solved, this group is responsible for designing and developing one or more courses of action, evaluating each of these courses in a test case environment, and implementing the best solution to the problem. While not universally used, a fourth level often represents an escalation point beyond the organization. L4 support is generally a hardware or software vendor.
Scams
A common scam typically involves a cold caller claiming to be from a technical support department of a company like Microsoft. Such cold calls are often made from call centers based in India to users in English-speaking countries, although increasingly these scams operate within the same country. The scammer will instruct the user to download a remote desktop program and once connected, use social engineering techniques that typically involve Windows components to persuade the victim that they need to pay in order for the computer to be fixed and then proceeds to steal money from the victim's credit card.
See also
Call center
Call board
Customer service
Comparison of issue-tracking systems
Comparison of help desk issue tracking software
Help desk
Help desk software
References
Help desk
Outsourcing
Customer service
Computer telephony integration
Computer occupations
Remote desktop
Telephony |
885341 | https://en.wikipedia.org/wiki/LSM | LSM | LSM may refer to:
Science
Lanthanum strontium manganite, a crystal used as a cathode material
LSm, a family of RNA-binding proteins
LSM-775, a psychedelic drug similar to LSD, although less potent
Modane Underground Laboratory (Laboratoire Souterrain de Modane), a particle physics laboratory in France
Laser scanning microscopy, a microscopy technique used in biology and nano-crystal imaging
Least squares method, a method in regression analysis
Sports
Long-stick midfielder, a player position in field lacrosse
Malaysia Super League (Liga Super Malaysia), a top-tier association football league in Malaysia
Technology
Land Surface Model (LSM version 1.0), a unidimensional computational model
Latent semantic mapping, for modelling data relationships
Level-set method, for numerical analysis of interfaces and shapes
Linear scheduling method, a project scheduling method for repetitive activities
Linear synchronous motor, an electric motor
Linux Security Modules, a modular framework for security checks in Linux
Linux Software Map, file format
Liquid state machine, a type of neural network
Log-structured merge-tree, a data structure
Multicam (LSM), Live Slow Motion instant-replay software developed by EVS
Education
Lourdes School of Mandaluyong, Philippines
Louvain School of Management, Belgium
Organizations
Lesbian Sex Mafia, a female support group and BDSM organization
Living Stream Ministry, religious publisher
Lutheran Student Movement – USA
Little St Mary's, an Anglo-Catholic parish in Cambridge
Other uses
Landing Ship Medium, a U.S. Navy amphibious warfare ship class
Libre Software Meeting, an annual free software event in France
San Martín Line (Linea San Martín), commuter rail line in Buenos Aires
Local store marketing, a marketing term
Mexican Sign Language (Lengua de Señas Mexicana)
Latvijas Sabiedriskais medijs (Latvian Public Broadcasting), a publicly funded radio and television organization in Latvia
Living Standards Measure, a classification of Standard of living in South Africa
See also |
26613365 | https://en.wikipedia.org/wiki/Scalos | Scalos | Scalos is a desktop replacement for the original Amiga Workbench GUI, based on a subset of APIs and its own front-end window manager of the same name. Scalos is not an AmigaOS replacement, although its name might suggest otherwise. Its goal is to emulate the real Workbench behaviour while integrating additional functionality and an enhanced look. As stated on its website, the name "Scalos" was inspired by the fictional time-accelerated planet Scalos in the Star Trek episode "Wink of an Eye".
History
Scalos is a former commercial product originally written in 1999 by programmer Stefan Sommerfield for a software house called AlienDesign. The purpose was to recreate the mouse-and-click experience on Amiga, offering an alternative to the Workbench interface present in versions 3.0 and 3.1 of AmigaOS (at that time already considered obsolete).
A group of English programmers known as Satanic Dreams Software (a software firm developing for Windows, Macintosh and Linux) took over development. Release versions 1.1 and 1.2 (internally versions 39.2) came out in 2000 as freeware; these may be found on Aminet, the official Amiga online repository. Scalos was finally open sourced in 2012.
The last release candidate is version 41.8 RC1; it is compatible with AmigaOS 3 for the Motorola 68000 family of processors, with AmigaOS 4 and MorphOS on PowerPC machines, and with AROS, at the moment on computers with processors from Intel 80386 onwards. The Scalos project can be found on SourceForge.
Versions
v1.0 (V39.201) – November 1999
v1.1 (V39.212) – 1999 (?)
v1.2b (39.220) – June 6, 2000
v1.2d (39.222) – 2000 (latest public beta executable)
v1.3 (40.7) (beta) – August 2, 2001
v1.3 (40.22) – September 25, 2002
v1.4 (40.32) (beta) March 31, 2005
v1.6 (41.4) – March 27, 2007
v1.7 (41.5) – August 12, 2007
(41.6) – March 12, 2009
(41.7) (beta) – March 15, 2010
(41.8) (RC1) – August 25, 2012
Features
Scalos is a Workbench-compatible replacement which is declared by its developers 100-percent compatible with the original Amiga interface. It features internal 64-bit arithmetic which allows support for hard disks over 64 GB, and a complete internal multitasking system (each window drawn on the desktop is represented in the system by its own task). It is completely adjustable by the user, and features a system for drawing and managing windows (as in the standard Amiga Intuition system). Each window may have its own background pattern (sporting an optimized pattern routine and scaling) and automatic content-refresh. Menus are editable. Standard Amiga "Palette" and windows "Pattern" preferences have been replaced with new ones. Scalos maintains its own API and its own plug-in system for the benefit of developers who want to create software for Scalos and enhance the system.
Scalos supports the Amiga NewIcons replacement icons and the Amiga GlowIcons set introduced with AmigaOS 3.5 as standard icon sets, including thumbnail previews of files as icons. It therefore provides a complete Amiga icon datatype system capable of supporting various types of icons, including PNG icons with alpha channel and transparencies, and scalable icons (the aforementioned NewIcons and GlowIcons). Scalos is also fully truecolor-compliant.
References
External links
Scalos Homepage
Scalos 1.2 Info Page at Aminet Amiga Official Repository
Scalos article at AmigaHistory site.
Amiga software
AmigaOS
AmigaOS 4 software
AROS software
MorphOS software
Desktop shell replacement |
11952646 | https://en.wikipedia.org/wiki/SWsoft | SWsoft | SWsoft was a privately held server automation and virtualization software company and the parent company of Parallels. SWsoft developed software for running data centers, particularly for web-hosting services companies, application service providers, and managed service providers. SWsoft products included applications for operating system-level virtualization, which enables users to run multiple operating systems, including Windows, Mac OS X, Linux, and Solaris, on a single computer.
The company was founded in 1997 and maintained its headquarters in Herndon, Virginia with additional offices throughout North America, Europe, and Asia. Its research and development offices were located in Moscow, Russia and it had sales offices in Germany and Singapore.
In December 2007, SWsoft announced its plans to change its name to Parallels in 2008 and ship its products under the Parallels brand name.
Company history
1997 - SWsoft founded
2001
Virtuozzo released
HSPcomplete released
2003
SWsoft acquires automation firms Yippi-Yeah! E-Business GmbH (makers of Confixx) and Plesk Inc (makers of Plesk)
PEM datacenter released
Open Fusion launched
2004
Announces partnership with Acronis
Plesk 7.0 released
SiteBuilder beta released
Acquires Parallels, Inc. - but keeps this secret.
2007 - December 12: SWsoft announces that it will change its name to "Parallels" in 2008.
2007 - December: SWsoft acquires WebHostAutomation Ltd developers of HELM Control Panel.
January 2008 - SWsoft officially becomes Parallels, Inc.
Uses
SWsoft’s virtualization software is predominantly used to automate data center and server management and to consolidate multiple servers onto one Windows- or Linux-based physical server. The company’s products are developed predominantly for web hosting companies, service providers, and corporations.
Although the company’s software reportedly uses fewer system resources because it does not require each virtualized server to have an independent operating system, its overall flexibility is limited. For example, each virtualized Virtuozzo server must have the same version of the same operating system, and when running Linux, the operating system's kernel must be modified from the standard version.
References
Software companies established in 1997
Virtualization software
Software companies based in Virginia
Privately held companies based in Virginia
Software companies of the United States |
615354 | https://en.wikipedia.org/wiki/Bombe | Bombe | The bombe () was an electro-mechanical device used by British cryptologists to help decipher German Enigma-machine-encrypted secret messages during World War II. The US Navy and US Army later produced their own machines to the same functional specification, albeit engineered differently both from each other and from Polish and British bombes.
The British bombe was developed from a device known as the "bomba" (), which had been designed in Poland at the Biuro Szyfrów (Cipher Bureau) by cryptologist Marian Rejewski, who had been breaking German Enigma messages for the previous seven years, using it and earlier machines. The initial design of the British bombe was produced in 1939 at the UK Government Code and Cypher School (GC&CS) at Bletchley Park by Alan Turing, with an important refinement devised in 1940 by Gordon Welchman. The engineering design and construction was the work of Harold Keen of the British Tabulating Machine Company. The first bombe, code-named Victory, was installed in March 1940 while the second version, Agnus Dei or Agnes, incorporating Welchman's new design, was working by August 1940.
The bombe was designed to discover some of the daily settings of the Enigma machines on the various German military networks: specifically, the set of rotors in use and their positions in the machine; the rotor core start positions for the message—the message key—and one of the wirings of the plugboard.
The Enigma machine
The Enigma is an electro-mechanical rotor machine used for the encryption and decryption of secret messages. It was developed in Germany in the 1920s. The repeated changes of the electrical pathway from the keyboard to the lampboard implement a polyalphabetic substitution cipher, which turns plaintext into ciphertext and back again. The Enigma's scrambler contains rotors with 26 electrical contacts on each side, whose wiring diverts the current to a different position on the two sides. When a key is pressed on the keyboard, an electric current flows through an entry drum at the right-hand end of the scrambler, then through the set of rotors to a reflecting drum (or reflector) which turns it back through the rotors and entry drum, and out to illuminate one of the lamps on the lampboard.
At each key depression, the right-hand or "fast" rotor advances one position, which causes the encipherment to change. In addition, once per rotation, the right-hand rotor causes the middle rotor to advance; the middle rotor similarly causes the left-hand (or "slow") rotor to advance. Each rotor's position is indicated by a letter of the alphabet showing through a window. The Enigma operator rotates the wheels by hand to set the start position for enciphering or deciphering a message. The three-letter sequence indicating the start position of the rotors is the "message key". There are 26 × 26 × 26 = 17,576 different message keys and the same number of different positions of the set of three rotors. By opening the lid of the machine and releasing a compression bar, the set of three rotors on their spindle can be removed from the machine and their sequence (called the "wheel order" at Bletchley Park) altered. Multiplying 17,576 by the six possible wheel orders gives 105,456 different ways that the scrambler can be set up.
Although 105,456 is a large number, it does not guarantee security. A brute-force attack is possible: one could imagine using 100 code clerks who each tried to decode a message using 1000 distinct rotor settings. The Poles developed card catalogs so they could easily find rotor positions; Britain built "EINS" (a common German word, meaning the number one) catalogs. Less intensive methods were also possible. If all message traffic for a day used the same rotor starting position, then frequency analysis for each position could recover the polyalphabetic substitutions. If different rotor starting positions were used, then overlapping portions of a message could be found using the index of coincidence. Many major powers (including the Germans) could break Enigma traffic if they knew the rotor wiring. The German military knew the Enigma was weak.
In 1930, the German army introduced an additional security feature, a plugboard (Steckerbrett in German; each plug is a Stecker, and the British cryptologists also used the word) that further scrambled the letters. The Enigma encryption is a self-inverse function, meaning that it substitutes letters reciprocally: if A is transformed into R, then R is transformed into A. The plugboard transformation maintained the self-inverse quality, but the plugboard wiring, unlike the rotor positions, does not change during the encryption. This regularity was exploited by Welchman's "diagonal board" enhancement to the bombe, which vastly increased its efficiency. With six plug leads in use (leaving 14 letters "unsteckered"), there were 100,391,791,500 possible ways of setting up the plugboard.
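The plugboard figure quoted above follows from a standard counting argument: choose which letters are to be steckered and pair them off, discounting both the order of the pairs and the order of the two letters within each pair. A few lines of Python (a sketch for illustration, not part of any historical tooling; the function name plugboard_settings is this sketch's own) reproduce the number:

from math import factorial

def plugboard_settings(leads, letters=26):
    # choose 2*leads letters out of 26 and pair them up; divide by the
    # orderings of the pairs (leads!) and within each pair (2**leads)
    return factorial(letters) // (
        factorial(letters - 2 * leads) * factorial(leads) * 2 ** leads)

print(plugboard_settings(6))    # 100391791500 - six plug leads, as quoted above
print(plugboard_settings(10))   # 150738274937250 - the ten leads used from 1939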
An important feature of the machine from a cryptanalyst's point of view, and indeed Enigma's Achilles' heel, was that the reflector in the scrambler prevented a letter from being enciphered as itself. Any putative solution that gave, for any location, the same letter in the proposed plaintext and the ciphertext could therefore be eliminated.
In the lead-up to World War II, the Germans made successive improvements to their military Enigma machines. By January 1939, additional rotors had been introduced so that three rotors were chosen from a set of five (hence there were now 60 possible wheel orders) for the army and air force Enigmas, and three out of eight (making 336 possible wheel orders) for the navy machines. In addition, ten leads were used on the plugboard, leaving only six letters unsteckered. This meant that the air force and army Enigmas could be set up in 1.5 × 10^19 ways. In 1941 the German navy introduced a version of Enigma with a rotatable reflector (the M4 or Four-rotor Enigma) for communicating with its U-boats. This could be set up in 1.8 × 10^20 different ways.
Four-rotor Enigma
By late 1941 a change in German Navy fortunes in the Battle of the Atlantic, combined with intelligence reports, convinced Admiral Karl Dönitz that the Allies were able to read the German Navy's coded communications, and a fourth rotor with unknown wiring was added to German Navy Enigmas used for U-boat communications, producing the Triton system, known at Bletchley Park as Shark. This was coupled with a thinner reflector design to make room for the extra rotor. The Triton was designed in such a way that it remained compatible with three-rotor machines when necessary: one of the extra 'fourth' rotors, the 'beta', was designed so that when it was paired with the thin 'B' reflector, and the rotor and ring were set to 'A', the pair acted as a 'B' reflector coupled with three rotors. Fortunately for the Allies, in December 1941, before the machine went into official service, a submarine accidentally sent a message with the fourth rotor in the wrong position, and then retransmitted the message with the rotor in the correct position to emulate the three-rotor machine. In February 1942 the change in the number of rotors used became official, and the Allies' ability to read German submarines' messages ceased until a snatch from a captured U-boat revealed not only the four-rotor machine's ability to emulate a three-rotor machine, but also that the fourth rotor did not move during a message. This along with the aforementioned retransmission eventually allowed the code breakers to figure out the wiring of both the 'beta' and 'gamma' fourth rotors.
The first half of 1942 was the "Second Happy Time" for the German U-boats, with renewed success in attacking Allied shipping. This was due to the security of the new Enigma and the Germans' ability to read Allied convoy messages sent in Naval Cipher No. 3. Between January and March 1942, German submarines sank 216 ships off the US east coast. In May 1942 the US began using the convoy system and requiring a blackout of coastal cities so that ships would not be silhouetted against their lights, but this yielded only slightly improved security for Allied shipping. The Allies' failure to change their cipher for three months, together with the fact that Allied messages never contained any raw Enigma decrypts (or even mentioned that they were decrypting messages), helped convince the Germans that their messages were secure. Conversely, the Allies learned that the Germans had broken the naval cipher almost immediately from Enigma decrypts, but lost many ships due to the delay in changing the cipher.
The principle of the bombe
The following settings of the Enigma machine must be discovered to decipher German military Enigma messages. Once these are known, all the messages for that network for that day (or pair of days in the case of the German navy) could be decrypted.
Internal settings (that required the lid of the Enigma machine to be opened)
The selection of rotors in use in the Enigma's scrambler, and their positions on the spindle (Walzenlage or "wheel order"). Possible wheel orders numbered 60 (three rotors from a choice of five) for army and air force networks and 336 (three rotors from a choice of eight) for the naval networks.
The positions of the alphabet rings' turnover notch in relation to the core of each rotor in use (Ringstellung or "ring settings"). There are 26 possible ring settings for each rotor.
External settings (that could be changed without opening the Enigma machine)
The plugboard connections (Steckerverbindungen or "stecker values"). The ten leads could be arranged in 150,738,274,937,250 (approximately 151 trillion) different combinations.
The scrambler rotor positions at the start of enciphering the message key (the Grundstellung or "indicator-setting") — up to May 1940; or thereafter the initial positions of each rotor at the start of enciphering the message (the "message key") from which the indicator-setting could be derived. There are 26 × 26 × 26 = 17,576 possible three-letter keys.
The bombe identified possible initial positions of the rotor cores and the stecker partner of a specified letter for a set of wheel orders. Manual techniques were then used to complete the decryption process. In the words of Gordon Welchman, "... the task of the bombe was simply to reduce the assumptions of wheel order and scrambler positions that required 'further analysis' to a manageable number".
Structure
The bombe was an electro-mechanical device that replicated the action of several Enigma machines wired together. A standard German Enigma employed, at any one time, a set of three rotors, each of which could be set in any of 26 positions. The standard British bombe contained 36 Enigma equivalents, each with three drums wired to produce the same scrambling effect as the Enigma rotors. A bombe could run two or three jobs simultaneously.
Each job would have a menu that had to be run against a number of different wheel orders. If the menu contained 12 or fewer letters, three different wheel orders could be run on one bombe; if more than 12 letters, only two.
In order to simulate Enigma rotors, each rotor drum of the bombe had two complete sets of contacts, one for input towards the reflector and the other for output from the reflector, so that the reflected signal could pass back through a separate set of contacts. Each drum had 104 wire brushes, which made contact with the plate onto which they were loaded. The brushes and the corresponding set of contacts on the plate were arranged in four concentric circles of 26. The outer pair of circles (input and output) were equivalent to the current in an Enigma passing in one direction through the scrambler, and the inner pair equivalent to the current flowing in the opposite direction.
The interconnections within the drums between the two sets of input and output contacts were both identical to those of the relevant Enigma rotor. There was permanent wiring between the inner two sets of contacts of the three input/output plates. From there, the circuit continued to a plugboard located on the left-hand end panel, which was wired to imitate an Enigma reflector and then back through the outer pair of contacts. At each end of the "double-ended Enigma", there were sockets on the back of the machine, into which 26-way cables could be plugged.
The bombe drums were arranged with the top one of the three simulating the left-hand rotor of the Enigma scrambler, the middle one the middle rotor, and the bottom one the right-hand rotor. The top drums were all driven in synchrony by an electric motor. For each full rotation of the top drums, the middle drums were incremented by one position, and likewise for the middle and bottom drums, giving the total of 26 × 26 × 26 = 17,576 positions of the 3-rotor Enigma scrambler.
The drums were colour-coded according to which Enigma rotor they emulated: I red; II maroon; III green; IV yellow; V brown; VI cobalt (blue); VII jet (black); VIII silver.
At each position of the rotors, an electric current would or would not flow in each of the 26 wires, and this would be tested in the bombe's comparator unit. For a large number of positions, the test would lead to a logical contradiction, ruling out that setting. If the test did not lead to a contradiction, the machine would stop.
The operator would record the candidate solution by reading the positions of the indicator drums and the indicator unit on the Bombe's right-hand end panel. The operator then restarted the run. The candidate solutions, stops as they were called, were processed further to eliminate as many false stops as possible. Typically, there were many false bombe stops before the correct one was found.
The candidate solutions for the set of wheel orders were subject to extensive further cryptanalytical work. This progressively eliminated the false stops, built up the set of plugboard connections and established the positions of the rotor alphabet rings. Eventually, the result would be tested on a Typex machine that had been modified to replicate an Enigma, to see whether that decryption produced German language.
Bombe menu
A bombe run involved a cryptanalyst first obtaining a crib — a section of plaintext that was thought to correspond to the ciphertext. Finding cribs was not at all straightforward; it required considerable familiarity with German military jargon and the communication habits of the operators. However, the codebreakers were aided by the fact that the Enigma would never encrypt a letter to itself. This helped in testing a possible crib against the ciphertext, as it could rule out a number of cribs and positions, where the same letter occurred in the same position in both the plaintext and the ciphertext. This was termed a crash at Bletchley Park.
Once a suitable crib had been decided upon, the cryptanalyst would produce a menu for wiring up the bombe to test the crib against the ciphertext. The following is a simplified explanation of the process of constructing a menu. Suppose that the crib is ATTACKATDAWN to be tested against a certain stretch of ciphertext, say, WSNPNLKLSTCS. The letters of the crib and the ciphertext were compared to establish pairings between the ciphertext and the crib plaintext. These were then graphed as in the diagram. It should be borne in mind that the relationships are reciprocal so that A in the plaintext associated with W in the ciphertext is the same as W in the plaintext associated with A in the ciphertext. At position 1 of the plaintext-ciphertext comparison, the letter A is associated with W, but A is also associated with P at position 4, K at position 7 and T at position 10. Building up these relationships into such a diagram provided the menu from which the bombe connections and drum start positions would be set up.
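The pairing step can be expressed very compactly. The short Python sketch below (an illustration only, not a tool that was used at Bletchley Park) lists the menu links for the crib and ciphertext above, and also checks for a "crash" — a position where a letter would apparently encipher to itself, which the Enigma could not do and which would rule out that placement of the crib:

# Build the menu links between crib and ciphertext letters and check for crashes.
crib       = "ATTACKATDAWN"
ciphertext = "WSNPNLKLSTCS"

links = []
for pos, (p, c) in enumerate(zip(crib, ciphertext), start=1):
    if p == c:
        raise ValueError(f"crash at position {pos}: {p} enciphered to itself")
    links.append((pos, p, c))

for pos, p, c in links:
    print(f"position {pos:2d}: {p} <-> {c}")
# Position 10, for example, links A <-> T, one edge of the A-T-L-K loop
# discussed in the next paragraph.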
In the illustration, there are three sequences of letters which form loops (or cycles or closures), ATLK, TNS and TAWCN. The more loops in the menu, the more candidate rotor settings the bombe could reject, and hence the fewer false stops.
Alan Turing conducted a very substantial analysis (without any electronic aids) to estimate how many bombe stops would be expected according to the number of letters in the menu and the number of loops. Some of his results are given in the following table. Recent bombe simulations have shown similar results.
Stecker values
The German military Enigma included a plugboard (Steckerbrett in German) which swapped letters (indicated here by P) before and after the main scrambler's change (indicated by S). The plugboard connections were known to the cryptanalysts as Stecker values. If there had been no plugboard, it would have been relatively straightforward to test a rotor setting; a Typex machine modified to replicate Enigma could be set up and the crib letter A encrypted on it, and compared with the ciphertext, W. If they matched, the next letter would be tried, checking that T encrypted to S and so on for the entire length of the crib. If at any point the letters failed to match, the initial rotor setting would be rejected; most incorrect settings would be ruled out after testing just two letters. This test could be readily mechanised and applied to all 17,576 settings of the rotors.
However, with the plugboard, it was much harder to perform trial encryptions because it was unknown what the crib and ciphertext letters were transformed to by the plugboard. For example, in the first position, P(A) and P(W) were unknown because the plugboard settings were unknown.
Turing's solution to working out the stecker values (plugboard connections) was to note that, even though the values for, say, P(A) or P(W), were unknown, the crib still provided known relationships amongst these values; that is, the values after the plugboard transformation. Using these relationships, a cryptanalyst could reason from one to another and, potentially, derive a logical contradiction, in which case the rotor setting under consideration could be ruled out.
A worked example of such reasoning might go as follows: a cryptanalyst might suppose that P(A) = Y. Looking at position 10 of the crib:ciphertext comparison, we observe that A encrypts to T, or, expressed as a formula (writing S10 for the scrambler transformation at position 10):
T = P(S10(P(A)))
Due to the function P being its own inverse, we can apply it to both sides of the equation and obtain the following:
P(T) = S10(P(A))
This gives us a relationship between P(A) and P(T). If P(A) = Y, and for the rotor setting under consideration S10(Y) = Q (say), we can deduce that
P(T) = S10(P(A)) = S10(Y) = Q
While the crib does not allow us to determine what the values after the plugboard are, it does provide a constraint between them. In this case, it shows how P(T) is completely determined if P(A) is known.
Likewise, we can also observe that T encrypts to L at position 8. Using the value just obtained for P(T), we can deduce the steckered value for L as well using a similar argument, to get, say,
P(L) = S8(P(T)) = S8(Q) = G
Similarly, in position 6, K encrypts to L. As the Enigma machine is self-reciprocal, this means that L at the same position would also encrypt to K. Knowing this, we can apply the argument once more to deduce a value for P(K), which might be:
P(K) = S6(P(L)) = S6(G) = F
And again, the same sort of reasoning applies at position 7, where A encrypts to K, closing the loop and yielding a value for P(A):
P(A) = S7(P(K)) = S7(F) = E (say)
However, in this case, we have derived a contradiction, since, by hypothesis, we assumed that P(A) = Y at the outset. This means that the initial assumption must have been incorrect, and so that (for this rotor setting) P(A) ≠ Y (this type of argument is termed reductio ad absurdum or "proof by contradiction").
The cryptanalyst hypothesised one plugboard interconnection for the bombe to test. The other stecker values and the ring settings were worked out by hand methods.
Automated deduction
To automate these logical deductions, the bombe took the form of an electrical circuit. Current flowed around the circuit near-instantaneously, and represented all the possible logical deductions which could be made at that position. To form this circuit, the bombe used several sets of Enigma rotor stacks wired up together according to the instructions given on a menu, derived from a crib. Because each Enigma machine had 26 inputs and outputs, the replica Enigma stacks are connected to each other using 26-way cables. In addition, each Enigma stack rotor setting is offset a number of places as determined by its position in the crib; for example, an Enigma stack corresponding to the fifth letter in the crib would be four places further on than that corresponding to the first letter.
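The chain of deductions that the bombe's wiring carried out electrically can be imitated in software. The following Python sketch is purely illustrative: the scramblers are random self-reciprocal, fixed-point-free permutations standing in for the real rotor-and-reflector transformation at each crib position, the function and variable names are the sketch's own rather than anything from the bombe literature, and the test simply propagates the relation P(b) = Si(P(a)) along the menu links from a single hypothesised stecker value, rejecting the hypothesis as soon as a contradiction appears.

import random
import string
ALPHA = string.ascii_uppercase

def random_scrambler(rng):
    # a fixed-point-free, self-inverse permutation of the alphabet, used here
    # as a stand-in for the plugboard-free Enigma scrambler at one position
    letters = list(ALPHA)
    rng.shuffle(letters)
    s = {}
    for a, b in zip(letters[0::2], letters[1::2]):
        s[a], s[b] = b, a
    return s

def test_hypothesis(menu, scramblers, letter, partner):
    # menu: list of (position, plain_letter, cipher_letter) links
    # returns the deduced stecker pairs, or None if a contradiction is found
    deduced = {letter: partner, partner: letter}      # P is self-inverse
    changed = True
    while changed:
        changed = False
        for pos, a, b in menu:
            for x, y in ((a, b), (b, a)):             # each link works both ways
                if x in deduced:
                    v = scramblers[pos][deduced[x]]   # P(y) = S_pos(P(x))
                    if deduced.get(y, v) != v or deduced.get(v, y) != y:
                        return None                   # inconsistent: reject
                    if y not in deduced:
                        deduced[y], deduced[v] = v, y
                        changed = True
    return deduced

rng = random.Random(0)
crib, ct = "ATTACKATDAWN", "WSNPNLKLSTCS"
menu = [(i, p, c) for i, (p, c) in enumerate(zip(crib, ct), start=1)]
scramblers = {i: random_scrambler(rng) for i, _, _ in menu}
# Try every hypothesis for the stecker partner of A; most are contradicted.
survivors = [g for g in ALPHA if test_hypothesis(menu, scramblers, "A", g)]
print(survivors)

With the loops in this menu, most of the 26 hypotheses for the stecker partner of A are contradicted almost immediately, which is the pruning effect of loops described above; the real bombe made the equivalent test electrically, across all 26 wires at once, as its drums stepped through the 17,576 scrambler positions.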
In practice
Practical bombes used several stacks of rotors spinning together to test multiple hypotheses about possible setups of the Enigma machine, such as the order of the rotors in the stack.
While Turing's bombe worked in theory, it required impractically long cribs to rule out sufficiently large numbers of settings. Gordon Welchman came up with a way of using the symmetry of the Enigma stecker to increase the power of the bombe. His suggestion was an attachment called the diagonal board that further improved the bombe's effectiveness.
The British Bombe
The Polish cryptologic bomba (Polish: bomba kryptologiczna; plural bomby) had been useful only as long as three conditions were met. First, the form of the indicator had to include the repetition of the message key; second, the number of rotors available had to be limited to three, giving six different "wheel orders" (the three rotors and their order within the machine); and third, the number of plug-board leads had to remain relatively small so that the majority of letters were unsteckered. Six machines were built, one for each possible rotor order. The bomby were delivered in November 1938, but barely a month later the Germans introduced two additional rotors for loading into the Enigma scrambler, increasing the number of wheel orders by a factor of ten. Building another 54 bomby was beyond the Poles' resources. Also, on 1 January 1939, the number of plug-board leads was increased to ten. The Poles therefore had to return to manual methods, the Zygalski sheets.
Alan Turing designed the British bombe on a more general principle, the assumption of the presence of text, called a crib, that cryptanalysts could predict was likely to be present at a defined point in the message. This technique is termed a known plaintext attack and had been used to a limited extent by the Poles, e.g., the Germans' use of "ANX" — "AN", German for "To", followed by "X" as a spacer.
A £100,000 budget for the construction of Turing's machine was acquired and the contract to build the bombes was awarded to the British Tabulating Machine Company (BTM) at Letchworth. BTM placed the project under the direction of Harold 'Doc' Keen. Each machine was about 7 feet wide, 6 feet 6 inches tall and 2 feet deep, and weighed about a ton. On the front of each bombe were 108 places where drums could be mounted. The drums were in three groups of 12 triplets. Each triplet, arranged vertically, corresponded to the three rotors of an Enigma scrambler. The bombe drums' input and output contacts went to cable connectors, allowing the bombe to be wired up according to the menu. The 'fast' drum rotated at a speed of 50.4 rpm in the first models and 120 rpm in later ones, when the time to set up and run through all 17,576 possible positions for one rotor order was about 20 minutes.
The first bombe was named "Victory". It was installed in "Hut 1" at Bletchley Park on 18 March 1940. It was based on Turing's original design and so lacked a diagonal board. On 26 April 1940, the Royal Navy captured a German trawler (Schiff 26, the Polares) flying a Dutch flag; included in the capture were some Enigma keys for 23 to 26 April. Bletchley retrospectively attacked some messages sent during this period using the captured material and an ingenious Bombe menu where the Enigma fast rotors were all in the same position. In May and June 1940, Bletchley succeeded in breaking six days of naval traffic, 22–27 April 1940. Those messages were the first breaks of Kriegsmarine messages of the war, "[b]ut though this success expanded Naval Section's knowledge of the Kriegsmarines's signals organization, it neither affected naval operations nor made further naval Enigma solutions possible." The second bombe, named "Agnus dei", later shortened to "Agnes", or "Aggie", was equipped with Welchman's diagonal board, and was installed on 8 August 1940; "Victory" was later returned to Letchworth to have a diagonal board fitted. The bombes were later moved from "Hut 1" to "Hut 11". The bombe was referred to by Group Captain Winterbotham as a "Bronze Goddess" because of its colour. The devices were more prosaically described by operators as being "like great big metal bookcases".
During 1940, 178 messages were broken on the two machines, nearly all successfully. Because of the danger of bombes at Bletchley Park being lost if there were to be a bombing raid, bombe outstations were established, at Adstock, Gayhurst and Wavendon, all in Buckinghamshire.
In June–August 1941 there were 4 to 6 bombes at Bletchley Park, and when Wavendon was completed, Bletchley, Adstock and Wavendon had a total of 24 to 30 bombes. When Gayhurst became operational there were a total of 40 to 46 bombes, and it was expected that the total would increase to about 70 bombes run by some 700 Wrens (Women's Royal Naval Service). But in 1942 with the introduction of the naval four-rotor Enigma, "far more than seventy bombes" would be needed. New outstations were established at Stanmore and Eastcote, and the Wavendon and Adstock bombes were moved to them, though the Gayhurst site was retained. The few bombes left at Bletchley Park were used for demonstration and training purposes only.
Production of bombes by BTM at Letchworth in wartime conditions was nowhere near as rapid as the Americans later achieved at NCR in Dayton, Ohio.
Sergeant Jones was given the overall responsibility for Bombe maintenance by Edward Travis. Later Squadron Leader and not to be confused with Eric Jones, he was one of the original bombe maintenance engineers, and experienced in BTM techniques. Welchman said that later in the war when other people tried to maintain them, they realised how lucky they were to have him. About 15 million delicate wire brushes on the drums had to make reliable contact with the terminals on the template. There were 104 brushes per drum, 720 drums per bombe, and ultimately around 200 bombes.
After World War II, some fifty bombes were retained at RAF Eastcote, while the rest were destroyed. The surviving bombes were put to work, possibly on Eastern bloc ciphers. Smith cites the official history of the bombe as saying that "some of these machines were to be stored away but others were required to run new jobs and sixteen machines were kept comparatively busy on menus." and "It is interesting to note that most of the jobs came up and the operating, checking and other times maintained were faster than the best times during the war periods."
Response to the four-rotor Enigma
A program was initiated by Bletchley Park to design much faster bombes that could decrypt the four-rotor system in a reasonable time. There were two streams of development. One, code-named Cobra, with an electronic sensing unit, was produced by Charles Wynn-Williams of the Telecommunications Research Establishment (TRE) at Malvern and Tommy Flowers of the General Post Office (GPO). The other, code-named Mammoth, was designed by Harold Keen at BTM, Letchworth. Initial delivery was scheduled for August or September 1942. The dual development projects created considerable tension between the two teams, both of which cast doubts on the viability of the opposing team's machine. After considerable internal rivalry and dispute, Gordon Welchman (by then, Bletchley Park's Assistant Director for mechanisation) was forced to step in to resolve the situation. Ultimately, Cobra proved unreliable and Mammoth went into full-scale production.
Unlike the situation at Bletchley Park, the United States armed services did not share a combined cryptanalytical service. Indeed, there was considerable rivalry between the US Army's facility, the Signals Intelligence Service (SIS), and that of the US Navy known as OP-20-G. Before the US joined the war, there was collaboration with Britain, albeit with a considerable amount of caution on Britain's side because of the extreme importance of Germany and her allies not learning that its codes were being broken. Despite some worthwhile collaboration amongst the cryptanalysts, their superiors took some time to achieve a trusting relationship in which both British and American bombes were used to mutual benefit.
In February 1941, Captain Abe Sinkov and Lieutenant Leo Rosen of the US Army, and US Naval Lieutenants Robert Weeks and Prescott Currier, arrived at Bletchley Park bringing, amongst other things, a replica of the 'Purple' cipher machine for the Bletchley Park's Japanese section in Hut 7. The four returned to America after ten weeks, with a naval radio direction finding unit and many documents including a 'paper Enigma'.
The main response to the Four-rotor Enigma was the US Navy bombe, which was manufactured in much less constrained facilities than were available in wartime Britain.
US Navy Bombe
Colonel John Tiltman, who later became Deputy Director at Bletchley Park, visited the US Navy cryptanalysis office (OP-20-G) in April 1942 and recognised America's vital interest in deciphering U-boat traffic. The urgent need, combined with doubts about the British engineering workload and slow progress, prompted the US to start investigating designs for a Navy bombe, based on the full blueprints and wiring diagrams received by US Naval Lieutenants Robert Ely and Joseph Eachus at Bletchley Park in July 1942. Funding for a full $2 million Navy development effort was requested on 3 September 1942 and approved the following day.
Commander Edward Travis, Deputy Director, and Frank Birch, Head of the German Naval Section, travelled from Bletchley Park to Washington in September 1942. With Carl Frederick Holden, US Director of Naval Communications, they established, on 2 October 1942, a UK:US accord which may have "a stronger claim than BRUSA to being the forerunner of the UKUSA Agreement," being the first agreement "to establish the special Sigint relationship between the two countries," and "it set the pattern for UKUSA, in that the United States was very much the senior partner in the alliance." It established a relationship of "full collaboration" between Bletchley Park and OP-20-G.
An all electronic solution to the problem of a fast bombe was considered, but rejected for pragmatic reasons, and a contract was let with the National Cash Register Corporation (NCR) in Dayton, Ohio. This established the United States Naval Computing Machine Laboratory. Engineering development was led by NCR's Joseph Desch.
Alan Turing, who had written a memorandum to OP-20-G (probably in 1941), was seconded to the British Joint Staff Mission in Washington in December 1942, because of his exceptionally wide knowledge about the bombes and the methods of their use. He was asked to look at the bombes that were being built by NCR and at the security of certain speech cipher equipment under development at Bell Labs. He visited OP-20-G, and went to NCR in Dayton on 21 December. He was able to show that it was not necessary to build 336 Bombes, one for each possible rotor order, by utilising techniques such as Banburismus. The initial order was scaled down to 96 machines.
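The figure of 336 is simply the number of ordered selections of three rotors from the eight available for the naval Enigma; the sketch below illustrates the count (it is not Turing's argument, which rested on methods such as Banburismus to avoid testing every rotor order):

```python
from itertools import permutations

# 336 = 8 * 7 * 6: ordered choices of 3 rotors from the 8 naval Enigma rotors
rotor_orders = list(permutations(range(1, 9), 3))
print(len(rotor_orders))  # 336
```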
The US Navy bombes used drums for the Enigma rotors in much the same way as the British bombes. They had eight Enigma-equivalents on the front and eight on the back. The fast drum rotated at 1,725 rpm, 34 times the speed of the early British bombes. 'Stops' were detected electronically using thermionic valves (vacuum tubes)—mostly thyratrons—for the high-speed circuits. When a 'stop' was found the machine over-ran as it slowed, reversed to the position found and printed it out before restarting. The running time for a 4-rotor run was about 20 minutes, and for a 3-rotor run, about 50 seconds. Each machine was wide, high, deep and weighed 2.5 tons.
The first machine was completed and tested on 3 May 1943. By 22 June, the first two machines, called 'Adam' and 'Eve', had broken a particularly difficult German naval cipher, the Offizier settings for 9 and 10 June. A P Mahon, who had joined the Naval Section in Hut 8 in 1941, recorded the achievement in his official 1945 "History of Hut Eight 1939-1945".
These bombes were faster, and soon more available, than the British bombes at Bletchley Park and its outstations. Consequently, they were put to use for Hut 6 as well as Hut 8 work, an arrangement noted in Alexander's "Cryptographic History of Work on German Naval Enigma".
Production was stopped in September 1944 after 121 bombes had been made. The last-manufactured US Navy bombe is on display at the US National Cryptologic Museum. Jack Ingram, former Curator of the museum, describes being told of the existence of a second bombe and searching for it but not finding it whole. Whether it remains in storage in pieces, waiting to be discovered, or no longer exists, is unknown.
US Army Bombe
The US Army Bombe was physically very different from the British and US Navy bombes. The contract for its creation was signed with Bell Labs on 30 September 1942. The machine was designed to analyse 3-rotor, not 4-rotor traffic. It was known as "003" or "Madame X". It did not use drums to represent the Enigma rotors, using instead telephone-type relays. It could, however, handle one problem that the bombes with drums could not. The set of ten bombes consisted of a total of 144 Enigma-equivalents, each mounted on a rack approximately long high and wide. There were 12 control stations which could allocate any of the Enigma-equivalents into the desired configuration by means of plugboards. Rotor order changes did not require the mechanical process of changing drums, but were achieved in about half a minute by means of push buttons. A 3-rotor run took about 10 minutes.
Bombe rebuild
In 1994 a group led by John Harper of the BCS Computer Conservation Society started a project to build a working replica of a bombe. The project required detailed research and took 13 years of effort before the replica was completed; it was then put on display at the Bletchley Park museum. In March 2009 it won an Engineering Heritage Award. The Bombe rebuild was relocated to The National Museum of Computing at Bletchley Park in May 2018, with the new gallery officially re-opening on 23 June 2018.
See also
Cryptanalysis of the Enigma
Colossus computer
Heath Robinson
Jean Valentine (bombe operator)
Notes
References
New updated edition of Welchman's 1982 book, with an addendum consisting of a 1986 paper written by Welchman that corrects his misapprehensions in the 1982 edition.
(also National Archives and Records Administration Record Group 457, File 35701.)
External links
A bombe simulator (in Javascript)
Museum of Learning - Bombe: The Challenge Of The Four Rotor Enigma Machine
Enigma and the Turing Bombe by N. Shaylor, 17 April 1997. Includes a simulator (a Java applet and C)
Dayton Codebreakers — documentary on the US Navy's Bombe; information on Desch, personnel of the US Naval Computing Machine Laboratory.
A simulator for both Turing and US Navy Bombes
Breaking German Navy Ciphers - The U534 Enigma M4 messages: Cracked with a Turing Bombe software
Enigma Cipher Machines at Crypto Museum.
1930s computers
Computer-related introductions in 1939
Cryptanalytic devices
Electro-mechanical computers
Early British computers
English inventions
World War II military equipment of the United Kingdom
Bletchley Park
Alan Turing
NCR Corporation |
22127703 | https://en.wikipedia.org/wiki/Juan%20Antonio%20Arguelles%20Rius | Juan Antonio Arguelles Rius | Juan Antonio Arguelles Rius (November 2, 1978 – June 3, 2007), also known as Arguru (sometimes Argu), was a prolific music software programmer and electronic musician, producer and songwriter, responsible for such applications as NoiseTrekker and Directwave. He co-founded the company discoDSP and was later hired by Image-Line, where he was involved in the development of Deckadance and FL Studio 7. Arguru died in a car accident on June 3, 2007.
Biography
Juan Antonio Arguelles was born on November 2, 1978, in Málaga, Spain. In 1997, Arguru started out as one of the most productive plugin developers of the Jeskola Buzz scene. In 2000, he and Frank Cobos (known as "Freaky") began mixing psytrance as a duo in Málaga under the name Alienated Buddha. They released the album Inpsyde on Out of Orion in February 2002. In May 2000 he created Psycle, which he developed until version 1.0 and then released into the public domain.
Arguru co-founded the software company discoDSP with George Reales in July 2002. discoDSP is known for developing audio plugins such as Discovery, Discovery Pro, Vertigo, Bliss, Phantom and Corona. Discovery is notable for being the first commercial VSTi plugin that was available on both Windows and Linux. He left the company in 2004 to begin working for Image-Line. While at Image-Line, he contributed to the development of FL Studio and was the primary programmer for the DJ software Deckadance, released in 2007.
Other projects
Aodix 1, 2 and 3 were basic sequencer programs that used the Amiga-style "tracker" interface, with an integrated sampler and integrated synth that were progressively dropped in favor of a VST host.
Aodix 4 is a digital audio workstation program that is advertised as "the ultimate bridge between tracking and sequencing", co-authored with Zafer Kantar, and with the participation of Paul Merchant and Marc De Haar. Aodix 4 brought several innovative technologies to trackers, such as pattern zoom, subtick timing and VST support with modular routing. Aodix 4 was commercial software at its first release, but was re-released as freeware after the death of Juan Antonio.
NoiseTrekker was a Windows tracker with MIDI, an internal synth, two TB303 emulations and DSP support, featuring a classic Amiga-style interface. NoiseTrekker code was used as the basis for the first version of Renoise, one of the most modern and actively developed music trackers.
Psycle is a complete modular music creation environment with a tracker interface. Versions up to 1.0 were created by Arguru; after that it was released into the public domain and development was continued by other people. Current versions are released under the GNU GPL and are available for all major platforms.
Death
On June 3, 2007, near the city of Benalmádena (Málaga, Spain), Arguru lost control of the car he was driving and crashed into an RV, dying in the accident. His funeral was held on June 4, 2007, in the park cemetery of Málaga.
The music software community was shocked by Arguru's unexpected death and expressed strong support for his relatives.
Discography
Inpsyde by Alienated Buddha (Out of Orion, 2002)
Legacy
In 2007, deadmau5 and Chris Lake wrote a song titled "Arguru" for the album Random Album Title in memory of Arguru.
References
External links
discoDSP
Arguru software
Psycle's official community
Official free repository of his songs composed with Psycle
Spanish electronic musicians
1978 births
2007 deaths
20th-century Spanish musicians
Road incident deaths in Spain |
28513849 | https://en.wikipedia.org/wiki/Workwriter | Workwriter | Workwriter was word processing software written in C, in 1983 and 1984, by Peter P. Vekinis, similar in features and operation to the dedicated word processors marketed by AES Data Inc.
The software was sold to TANDY Europe, a subsidiary of Tandy Corporation (more than 4000 copies were shipped in French, English and German), for the Amstrad CPC 6128 personal computer under the CP/M operating system. Britain's PC World magazine published an article on Workwriter in its November 1984 issue. It was also made available for the IBM PC, under the MS-DOS and PC DOS operating systems, and for other similar computers.
Workwriter had a simple user interface and was a video-based word processor; in other words, a user could type text anywhere on the screen. This was unlike other word processors, which displayed structured text on a screen (the user could not move the cursor outside the text and type). This ability made Workwriter easy to use, since the user was presented with a physical page metaphor. Pages were 80 characters by 99 lines long, and pages were strung together to create chapters and documents. The software offered columns, character-based graphics, justification and multiple character sets. It supported many printers, including those with daisy wheels.
The software was further improved with additional features (such as built-in serial communications, pagination and index creation) and was finally sold, becoming the word processing component of legal software in Belgium.
References
External links
Vekinis.com/VintageDosSoftware.html
Vekinis.com
Word processors |
58740076 | https://en.wikipedia.org/wiki/Ben%20Delo | Ben Delo | {{Infobox person
| name = Ben Delo
| image = Ben Delo on stage at The Spectator's "Who's afraid of Bitcoin?" conference.jpg
| alt = Delo at The Spectator'''s "Who’s afraid of Bitcoin?” conference, October 2018
| caption = Delo at The Spectators "Who’s afraid of Bitcoin?” conference, October 2018
| birth_name = Ben Peter Delo
| birth_date =
| birth_place = Sheffield, England
| death_place =
| occupation = Mathematician, computer programmer, entrepreneur
| years_active =
| education = Lord Williams's School
| known_for = Co-founding BitMEX
| notable_works =
Ben Peter Delo (born 24 February 1984) is a British mathematician, computer programmer, and entrepreneur. He is a co-founder of BitMEX and, according to The Sunday Times, Britain's youngest self-made billionaire.
Early life and education
Born in Sheffield, Delo was educated at Lord Williams's School and graduated from the University of Oxford in 2005 with a double first-class honours degree in Mathematics and Computer Science.
Career
Delo began his career as a software engineer at IBM, where he was named as an inventor on several patents granted by the United States Patent and Trademark Office and the Intellectual Property Office.
He then went on to develop high-frequency trading systems at hedge funds and banks such as GSA Capital and J.P. Morgan, dealing predominantly in kdb+/Q. His expertise covers the design, architecture, and implementation of quantitative infrastructure, systems, and tools.
In 2014, Delo met Arthur Hayes and Sam Reed, and they co-founded BitMEX, a cryptocurrency derivatives trading platform.
Philanthropy
In October 2018, Delo gave £5 million to his Oxford alma mater Worcester College, endowing two teaching fellowships in perpetuity and becoming the youngest major donor in the College's history.
In April 2019, Delo signed The Giving Pledge, a programme orchestrated by Bill Gates and Warren Buffett, announcing his intention to give away at least half of his wealth during his lifetime. In his pledge letter, Delo states that his initial philanthropic interests are focused on long-term and large-scale problems and reducing catastrophic risks, and that he is inspired by the philosophy of effective altruism.
That month, Delo also became a member of Giving What We Can, a community of people who have pledged to give at least 10% of their income to effective charities.
In March 2020, in collaboration with Oxford University, Delo initiated and funded a cross-sectional survey, using nanopore technology (which sequences the whole genome of pathogens) as a diagnostic tool, to determine the level of community-based infection of COVID-19 in the UK. According to Professor Mike Bonsall, “Using rapid diagnostics, we will explore a new method of pathogen detection, which if widely adopted could prove crucial to early containment of future novel disease outbreaks.”
Bank Secrecy Act violations
On 1 October 2020, the Commodity Futures Trading Commission and the US Department of Justice simultaneously formally charged BitMEX and its co-founders, including Delo, with various violations of American law. Delo and three others were charged with violating the Bank Secrecy Act by failing to implement an adequate anti-money laundering program. Further, the regulators alleged that the BitMEX trading platform was required to have registered aspects of its operations in the US and had failed to do so. According to a spokesman for BitMEX’s parent company, Delo and his co-founders "intend to defend the allegations vigorously".
On 24 February 2022, the Attorney for the Southern District of New York announced that Delo and his BitMEX cofounder had pled guilty to Bank Secrecy Act violations for "willfully failing to establish, implement, and maintain an anti-money laundering ('AML') program at BitMEX". Under the terms of their plea agreements, Delo and his cofounder each agreed to pay a $10 million criminal fine representing pecuniary gain derived from the offense.
References
External links
Ben Delo on Bloomberg
British computer programmers
British mathematicians
Giving Pledgers
21st-century philanthropists
Hong Kong billionaires
Hong Kong businesspeople
Living people
People associated with effective altruism
People educated at Lord Williams's School
Alumni of Worcester College, Oxford
Fellows of Worcester College, Oxford
Alumni of the University of Oxford
1984 births |
43541618 | https://en.wikipedia.org/wiki/The%20Radio%20Hacker%27s%20Codebook | The Radio Hacker's Codebook | The Radio Hacker's Codebook is a book for computer enthusiasts written by George Sassoon. The book explains how to receive international radioteletype signals, convert them with a circuit and then decode them on a microcomputer; in this case the computer is the now-superseded Research Machines 380Z. Programs to perform these functions are given, written in machine code and BASIC. However, legal and moral issues relating to intercepting messages are not covered. Other radioteletype subjects covered include the FEC and automatic repeat request modes used in maritime radiocommunications.
The book also includes an exposition of encryption, including the public-key RSA cipher, and presciently expounds on the lack of privacy in the cashless society. Code examples are also given, using the Sharp PC-320I, for encoding and decoding messages in the manner of the German Enigma machine. The book claims that the Enigma was kept secret for long periods because understanding it could compromise the American M-209 cipher machine, and that it was still being sold to other countries. Other encryption topics covered include the Data Encryption Standard, the Vernam cipher (one-time pad), pseudo-random number generators, transposition ciphers and substitution ciphers.
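The book's own listings were written in BASIC and machine code for the hardware of the day; purely as an illustration of one of the simpler topics it covers, the Vernam cipher (one-time pad), a minimal modern sketch in Python might look like this (not taken from the book):

```python
import os

def vernam(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte.

    With a truly random, single-use key at least as long as the message
    this is the one-time pad; the same operation both encrypts and decrypts.
    """
    if len(key) < len(data):
        raise ValueError("key must be at least as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"RADIOTELETYPE TEST"
key = os.urandom(len(message))          # single-use random key
ciphertext = vernam(message, key)
assert vernam(ciphertext, key) == message
```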
The Radio Hacker's Codebook is idiosyncratic, revealing a personal quest by Sassoon to decrypt military signals on the radio spectrum. However, he would have had no chance of breaking the modern encryption schemes described in the book.
The Radio Hacker's Codebook broke ground in having a low price for a technical computing book: it sold for £6.95, well below the typical £10+ price for computing books at the time. It was published by Duckworth and had 239 pages.
References
Cryptanalytic software |
58848958 | https://en.wikipedia.org/wiki/1997%20Troy%20State%20Trojans%20football%20team | 1997 Troy State Trojans football team | The 1997 Troy State Trojans football team represented Troy State University in the 1997 NCAA Division I-AA football season. The Trojans played their home games at Veterans Memorial Stadium in Troy, Alabama and competed in the Southland Conference. Troy State finished the season unranked after being ranked #2 in the nation during the early part of the season.
Schedule
References
Troy State
Troy Trojans football seasons
Troy State Trojans football |
40870403 | https://en.wikipedia.org/wiki/Jonathan%20Katz%20%28computer%20scientist%29 | Jonathan Katz (computer scientist) | Jonathan Katz is a professor in the Department of Computer Science at the University of Maryland who conducts research on cryptography and cybersecurity. In 2019-2020 he was a faculty member in the Volgenau School of Engineering at George Mason University, where he held the title of Eminent Scholar in Cybersecurity. In 2013–2019 he was director of the Maryland Cybersecurity Center at the University of Maryland.
Biography
Katz received BS degrees in mathematics and chemistry from MIT in 1996, followed by a master's degree in chemistry from Columbia University in 1998. After transferring to the computer science department, he received M.Phil. and PhD degrees in computer science from Columbia University in 2001 and 2002, respectively. Katz's doctoral advisors were Zvi Galil, Moti Yung, and Rafail Ostrovsky. While in graduate school, he worked as a research scientist at Telcordia Technologies (now ACS).
Katz was on the faculty in the computer science department of the University of Maryland from 2002 to 2019. From 2013 to 2019 he was director of the Maryland Cybersecurity Center there. He joined the Department of Computer Science of George Mason University as professor of computer science and Eminent Scholar in Cybersecurity in 2019, before returning to the University of Maryland one year later. Katz has held visiting positions at UCLA, IBM T.J. Watson Research Center, and the École Normale Supérieure. He was a member of the DARPA Computer Science Study Group in 2009-2010. He also works as a consultant in the fields of cryptography and computer security.
Research
Katz has worked on various aspects of cryptography, computer security, and theoretical computer science. His doctoral thesis was on designing protocols secure against man-in-the-middle attacks, most notably describing an efficient protocol for password-based authenticated key exchange. He has also worked in the areas of secure multi-party computation, public-key encryption, and digital signatures. He has served on the program committees of numerous conferences, including serving as co-program chair for the annual Crypto conference in 2016 and 2017 and co-program chair for the ACM Conference on Computer and Communications Security in 2019-2020. He is also currently an editor of the Journal of Cryptology, the premier journal of the field.
Awards
Katz received the Humboldt Research Award to support collaborative research with colleagues in Germany during 2015. He also received the University of Maryland "Distinguished Scholar-Teacher" award in 2017. In 2019 Katz was named an IACR Fellow for his research contributions in public-key cryptography and cryptographic protocols along with his service and educational contributions to the cryptographic field. He also received the ACM SIGSAC Outstanding Contribution Award in 2019 for "his commitment to education in cryptography, through teaching and research, and for dedication to the advancement and increased influence of cryptographic research."
Books
Introduction to Modern Cryptography (with Yehuda Lindell). According to WorldCat, the book is held in 310 libraries. The second edition of this book was published in 2014.
Digital Signatures (Springer, 2010). According to WorldCat, the book is held in 348 libraries.
Coeditor with Moti Yung of Applied Cryptography and Network Security: 5th International Conference, ACNS 2007, Zhuhai, China, June 5–8, 2007 : Proceedings. Berlin: Springer, 2007.
References
External links
Living people
Modern cryptographers
American cryptographers
American computer scientists
Place of birth missing (living people)
1974 births
University of Maryland, College Park faculty
Massachusetts Institute of Technology School of Science alumni
Columbia University alumni |
22701397 | https://en.wikipedia.org/wiki/MACS3 | MACS3 | The MACS3 Loading Computer System is a computer-controlled loading system for commercial vessels, developed by Navis. Prior to October 2017 it was offered by Interschalt maritime systems GmbH, and before that by Seacos Computersysteme & Software GmbH.
MACS3 consists of computer hardware and a range of software that aim to minimize the operational workload of loading a vessel and to prevent any hard limits from being breached.
Design principles
The software architecture and user interface of the MACS3 Loading Computer System are designed according to the standard ISO 16155:2006 Ships and marine technology - Computer applications - Shipboard loading instruments, and to the following Rules and Recommendations:
DNVGL Class Guideline DNVGL-CG-0053 : Approval and certification of the software of loading computer systems
ABS Guidance Notes for the Application of Ergonomics to Marine Systems
BV Rules for the Classification of Steel Ships, Pt C, Ch 3, Sec 3 "COMPUTER BASED SYSTEMS"
IACS Recommendation on Loading Instruments (No 48)
Software structure
The software of MACS3 Loading Computer System includes the MACS3 Basic Loading Program, performing functions of Categories A and B according to the ISO 16155:2006 and (optionally) a range of additional modules and programs, performing functions of Category C:
PolarCode Basic Module
BELCO Container Management Module
DAGO Dangerous Goods Modules
StowMan Stowage Planning
SEALASH Lashing Module
MIXCARGO General Cargo Module
RoRo Module
Crane Operation Module
Bulk Carrier Modules
Tanker Modules
BallastMAN Ballast Water Exchange Module
Voyage History
DastyMAN Damage Stability
Online Program (Tanks Online)
The system runs under Windows 10 Professional 64-bit.
Programs and Features
MACS3 Basic Loading Program
MACS3 Basic Loading Program is designed for all vessel types (container ship, tanker, bulk carrier, general cargo, RoRo, passenger ship) in accordance with the unified IACS Requirement L5 "Onboard Computers for Stability Calculations". It is approved by all leading classification societies:
LR,
ABS,
DNVGL,
BV,
ClassNK,
KR,
CCS.
MACS3 Basic Loading Program performs:
Ship stability and strength calculations, covering all pertinent international regulations such as IMO A.749
Numerical and graphical results for metacentric height GM, trim, heel, draft, shear forces, bending moments and torsion (a simplified GM/GZ calculation sketch follows below)
Metacentric height GM check against various approved GM requirement curves
GZ curve for dynamic stability
Automatic wind pressure calculation
Automatic ballast tank optimization
Tank plan with visual editing
Optional online measurement of tank levels and draft (sold separately)
User interface with tabbed main window for multiple views, all fully customizable
Screen and print reports in PDF, HTML and XML formats
MACS3 Basic Loading Program supports a client–server software architecture for distributed cargo management and allows the complete loading condition (containers, tanks, general cargo and constant items) to be stored in a single compressed mxml file, making it easy to exchange loading conditions between ship and office.
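As a rough, generic illustration of the GM and GZ checks listed above (this is not MACS3 code and is greatly simplified; a real loading computer works from the vessel's hydrostatic tables and applies free-surface and other corrections), a small-angle sketch might look like:

```python
import math

def metacentric_height(km: float, kg: float) -> float:
    """GM = KM - KG: height of the metacentre above the centre of gravity, in metres."""
    return km - kg

def gz_small_angle(gm: float, heel_deg: float) -> float:
    """Small-angle approximation of the righting lever: GZ ~ GM * sin(heel)."""
    return gm * math.sin(math.radians(heel_deg))

# Hypothetical figures, for illustration only
km, kg = 11.8, 10.6                 # metres, from hydrostatics / the loading condition
gm = metacentric_height(km, kg)
required_gm = 0.15                  # commonly cited IMO intact-stability minimum, metres
print(f"GM = {gm:.2f} m, requirement met: {gm >= required_gm}")
print(f"GZ at 10 degrees heel ~ {gz_small_angle(gm, 10):.3f} m")
```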
BELCO Container Management Module
BELCO enhances the MACS3 Basic Loading Program with easy-to-use container management features, enabling the creation of a valid stowage plan for container ships. It is tightly integrated into MACS3.NET, so any changes to the container cargo are immediately reflected in the MACS3 stability and strength calculations. The results of MACS3 and BELCO can be displayed simultaneously at any time.
The MACS3 screen and print reporting in PDF, HTML and XML formats is also fully available to BELCO.
Data Features:
Verified Gross Mass (VGM) functionality
Variable container sizes
Wide range of information per container: ISO 6346 (both old and new) and custom types, weight, ports of loading/discharge/final/transshipment, operator etc.
Calculation of vertical, transversal and longitudinal centre of gravity for each container
Full UN EDIFACT/BAPLIE support (BAPLIE 1.5, 2.0, 2.1, 2.2 and 3.1)
UN LOCODE database
Port rotation with date/time and quays
Cargo Handling
Efficient pre-stowage and pre-discharge functions, bay-, row-, tier- or port-wise
Visual editing of reefer positions and hot areas
Loading/discharge list
Plan view
Fully functional layer view
Fully functional single page view
Top view with various criteria
Block shift
Multi-step Undo
Hatch-cover handling
Symbolic presentation of pier
Result table with free selection of row/column criteria, including sub-rows, sub-columns and subtotals
On-the-fly and combined checks:
Visibility (IMO and Panama) check with blind sectors in relation to trim/draft change
Segregation of Dangerous goods,
Restows,
Lashing forces,
Lashing inventory,
Stack weights (maximum stack weight according to the shipyard, as well as 20' stacks weights in holds according to the Classification Societies),
"Flying" containers,
Reefer plug positions,
Hatch cover clearance,
Compatibility of container types with ship design,
Load and destination locations according to the UN Locode databases and to the Port Call List,
Overdimensions,
Handling instructions and Loading Remarks (like "away from boiler", "on-deck-only"),
Container numbers,
False empties, etc.
Visualization
Multiple bay views with individual settings
Visualization of hatch covers and tweendecks
About 150 different container criteria for visualization and statistics
Up to 12 information areas with text per container
Containers can be coloured by a variety of criteria, e.g. by port of discharge
Multiple colours per container
Longitudinal section and top view with tanks, holds, containers and visibility lines
Commodity list with possible restrictions
Realistic 3D view – both Helicopter and Personal (Walking) modes
DAGO Dangerous Goods Modules
Checks the fulfilment of the stowage and segregation requirements imposed by the latest version of the IMDG code (DAGO Part I)
Includes a database of dangerous goods with all relevant information from the IMDG code and the Emergency Schedules (EmS) (DAGO Part I)
Unlimited number of dangerous goods both per container and per ship
Company-specific blacklists of IMDG classes and UN numbers
CFR 49, list of CDC goods
Takes orientation of reefer containers into account
Fire Fighting and Safety Plan (DAGO Part II)
Medical First Aid Guide (MFAG) (DAGO Part III)
StowMan Stowage Planning
Stowage planning for on board use
Import and export of container data in text files or EDIFACT / BAPLIE, export of MOVINS, COARRI, COPRAR
SEALASH Lashing Module
Calculation of forces in container securing systems according to the rules of the classification societies LR, ABS, DNVGL, BV, ClassNK, KR, CCS.
The following lashing notations are supported: Route Specific Container Stowage (RSCS and RSCS+ for both long haul voyages and limited short voyages, taking into account the weather forecasts) of DNVGL; CSSA and CSSA-R for ClassNK; Voyage- and Weather-dependent Boxmax (V,W) notation for LR; Unrestricted and Worldwide for BV; CLP and route-specific CLP-V notation for ABS.
Modelling of several internal and external lashing systems, checking the physical possibility to lash
Calculation of lash forces per stack, with exceeding values in red
Flying hints show exceeding lashing forces for containers
Calculation of maximum weight of an additional container
Visual lashing mode
Lashing equipment inventory
Check of available lash eyes
Main parameter table
MIXSTOW 3D and Steel Coil: General Cargo, Ro/Ro and Ferry-Modules
Manages all kinds of cargo: trailers, single parts, homogeneous surface cargo
Definition of polygon shaped cargo
Visual arrangement of cargo with drag-and-drop
Automatic display of the documents associated with single cargo units
Visual alignment of centres of gravity
User-extendable cargo types library (with cargo geometry definition)
Lane/SECU-Loading, Top-loading, Multi-Loading in areas, Side-View-Loading
Free rotation of cargo
Loading checks (load capacity, dangerous cargo, overlapping, hit testing)
Cutting stock optimization for available stowage area
Zooming/scrolling, meter- and frame- rulers and grids
Crane Operation Module
Simulates a crane operation in single or combined mode
Visual presentation of the reach and the safe working load in a top view
Ballasting using heeling tanks in shift tanks mode
All relevant stability and strength criteria can be supervised during simulation
Logging of crane motion and results
Bulk Carrier Modules
Bulk strength for calculation of longitudinal strength in flooded condition according to IACS rule 17
BULKLIM checks load limitations depending on the structure of the double bottom given by the class
LoadMan for optimization of cargo in holds and loading / discharging sequences
Grain Program for calculation of grain stability and reports, e.g. according to the National Cargo Bureau (United States of America)
Tanker Modules
Proven Ullage Report including ASTM table based volume correction
Vessel experience factor and cargo history
Dangerous goods data base (IBC, CHRIS Code) printout of substance information page, cargo segregation and compatibility check
MFAG code and EMS integration, emergency simulation and easy links to the information relevant for the actual loaded cargo
BallastMAN Ballast Water Exchange Module
BallastMAN manages a ship's ballast water exchange on the high seas in accordance with IMO Resolution A.868 (20).
Sequences of tank emptying and filling can be completely planned and executed, ensuring that the exchange is carried out both safely and efficiently. During the planning stage, BallastMAN determines the fastest and safest sequence, continually calculating stability and longitudinal strength. During the execution stage, BallastMAN monitors the tank levels online, issues instructions on when to operate pumps and valves, and warns if a crucial deviation from the plan is detected.
BallastMAN creates the Ballast Water Reporting Form required by U.S. Coast Guard / NBIC, AQIS and New Zealand
Voyage History
Pre-calculates the changes in stability during a voyage resulting from the consumption of bunker
Graphical and numerical history and reports
DastyMAN Damage Stability Calculation
Deterministic damage stability calculation using the lost buoyancy method
Required damage conditions as laid down by the classification society are calculated automatically when cargo or bunker has been changed
The calculation results are checked against the appropriate IMO criteria, e.g. IBC-code, SOLAS
Online Program
HSMS (Hull Stress Monitoring System) interface
Interfaces to a wide range of tank automation systems
Recurring automatic update of loading condition by the current tank readings
Each tank may be switched online or offline individually
Market Penetration
The onboard loading computer MACS3 is used in a wide range of container vessels, multipurpose vessels, bulk carriers, tanker vessels, RoRo vessels and passenger vessels. Its ship library includes more than 6,500 ship profiles. In the container vessel segment, MACS3 holds a share of approximately 65%.
The MACS3 loading computer software has been in use for training purposes since 1999. Initially only available to German naval schools, it is now also deployed at European and Chinese maritime universities.
References
External links
MACS3 Loading Computer System on the page of Navis
Port infrastructure
Transport software |
610868 | https://en.wikipedia.org/wiki/DjVu | DjVu | DjVu ( , like French "déjà vu") is a computer file format designed primarily to store scanned documents, especially those containing a combination of text, line drawings, indexed color images, and photographs. It uses technologies such as image layer separation of text and background/images, progressive loading, arithmetic coding, and lossy compression for bitonal (monochrome) images. This allows high-quality, readable images to be stored in a minimum of space, so that they can be made available on the web.
DjVu has been promoted as providing smaller files than PDF for most scanned documents. The DjVu developers report that color magazine pages compress to 40–70 kB, black-and-white technical papers compress to 15–40 kB, and ancient manuscripts compress to around 100 kB; a satisfactory JPEG image typically requires 500 kB. Like PDF, DjVu can contain an OCR text layer, making it easy to perform copy and paste and text search operations.
Free creators, manipulators, converters, Web browser plug-ins, and desktop viewers are available. DjVu is supported by a number of multi-format document viewers and e-book reader software on Linux (Okular, Evince), Windows (Okular, SumatraPDF), and Android (FBReader, EBookDroid, PocketBook).
History
The DjVu technology was originally developed by Yann LeCun, Léon Bottou, Patrick Haffner, Paul G. Howard, Patrice Simard, and Yoshua Bengio at AT&T Labs from 1996 to 2001.
Prior to the standardization of PDF in 2008, DjVu had been considered superior due to it being an open file format in contrast to the proprietary nature of PDF at the time. The declared higher compression ratio (and thus smaller file size), and the claimed ease of converting large volumes of text into DjVu format, were other arguments for DjVu's superiority over PDF in the technology landscape of 2004. Independent technologist Brewster Kahle in a 2004 talk on IT Conversations discussed the benefits of allowing easier access to DjVu files.
The DjVu library distributed as part of the open-source package DjVuLibre has become the reference implementation for the DjVu format. DjVuLibre has been maintained and updated by the original developers of DjVu since 2002.
The DjVu file format specification has gone through a number of revisions, the most recent being from 2005.
Role in the software ecosystem
The primary usage of the DjVu format has been the electronic distribution of documents with a quality comparable to that of printed documents. As that niche is also the primary usage for PDF, it was inevitable that the two formats would become competitors. It should however be observed that the two formats approach the problem of delivering high resolution documents in very different ways: PDF primarily encodes graphics and text as vectorised data, whereas DjVu primarily encodes them as pixmap images. This means PDF places the burden of rendering the document on the reader, whereas DjVu places that burden on the creator.
For a number of years, significantly overlapping with the period when DjVu was being developed, there were no PDF viewers for free operating systems; a particular stumbling block was the rendering of vectorised fonts, which are essential for combining small file size with high resolution in PDF. Since displaying DjVu was a simpler problem for which free software was available, there were suggestions that the free software movement should employ DjVu instead of PDF for distributing documentation; rendering for creating DjVu is in principle not much different from rendering for a device-specific printer driver, and DjVu can as a last resort be generated from scans of paper media. However, when FreeType 2.0 in 2000 began to provide rendering of all major vectorised font formats, that specific advantage of DjVu began to erode.
In the 2000s, with the growth of the World Wide Web and before the widespread adoption of broadband, DjVu was often adopted by digital libraries as their format of choice, thanks to its integration with software like Greenstone and the Internet Archive, browser plugins which allowed advanced online browsing, smaller file sizes for comparable quality of book scans and other image-heavy documents, and support for embedding and searching full text from OCR.
Some features such as the thumbnail previews were later integrated in the Internet Archive's BookReader, and DjVu browsing was deprecated in its favour as, around 2015, some major browsers stopped supporting NPAPI and the DjVu plugins with it.
DjVu.js Viewer attempts to replace the missing plugins.
Technical overview
File structure
The DjVu file format is based on the Interchange File Format and is composed of hierarchically organized chunks. The IFF structure is preceded by a 4-byte AT&T magic number. Following is a single FORM chunk with a secondary identifier of either DJVU or DJVM for a single-page or a multi-page document, respectively.
All the chunks can be contained in a single file in the case of so-called bundled documents, or in several files: one file for every page plus some files with shared chunks.
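Based on the layout described above (the 4-byte AT&T magic number followed by an IFF FORM chunk whose secondary identifier is DJVU or DJVM), a minimal header check can be sketched as follows; this illustrates only the documented outer structure, not a full parser:

```python
import struct

def djvu_document_kind(path: str) -> str:
    """Inspect the outer header and report whether the file is single- or multi-page."""
    with open(path, "rb") as f:
        if f.read(4) != b"AT&T":
            raise ValueError("missing AT&T magic number - not a DjVu file")
        # 'FORM', 4-byte big-endian chunk length, then the secondary identifier
        form_id, _form_length, secondary = struct.unpack(">4sI4s", f.read(12))
        if form_id != b"FORM":
            raise ValueError("expected an IFF FORM chunk after the magic number")
    if secondary == b"DJVU":
        return "single-page document"
    if secondary == b"DJVM":
        return "multi-page (bundled) document"
    return f"other FORM type: {secondary.decode('ascii', 'replace')}"
```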
Chunk types
Compression
DjVu divides a single image into many different images, then compresses them separately. To create a DjVu file, the initial image is first separated into three images: a background image, a foreground image, and a mask image. The background and foreground images are typically lower-resolution color images (e.g., 100 dpi); the mask image is a high-resolution bilevel image (e.g., 300 dpi) and is typically where the text is stored. The background and foreground images are then compressed using a wavelet-based compression algorithm named IW44. The mask image is compressed using a method called JB2 (similar to JBIG2). The JB2 encoding method identifies nearly identical shapes on the page, such as multiple occurrences of a particular character in a given font, style, and size. It compresses the bitmap of each unique shape separately, and then encodes the locations where each shape appears on the page. Thus, instead of compressing a letter "e" in a given font multiple times, it compresses the letter "e" once (as a compressed bit image) and then records every place on the page it occurs.
Optionally, these shapes may be mapped to UTF-8 codes (either by hand or potentially by a text recognition system) and stored in the DjVu file. If this mapping exists, it is possible to select and copy text.
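The shape-dictionary idea described above can be illustrated conceptually. The toy sketch below is not the JB2 codec: it performs no bitmap compression, no "nearly identical" shape matching and no arithmetic coding; it merely shows how repeated glyphs collapse into one stored shape plus a list of positions (the optional character mapping would attach a text code to each stored shape):

```python
def build_shape_dictionary(glyphs):
    """glyphs: iterable of (bitmap, x, y), where bitmap is a hashable
    representation of the glyph (e.g. a tuple of row tuples).

    Returns (shapes, placements): each distinct bitmap is stored once in
    `shapes`, and every occurrence is recorded as (shape_index, x, y).
    """
    shapes = []        # unique bitmaps, stored once each
    index_of = {}      # bitmap -> index into `shapes`
    placements = []    # (shape_index, x, y) for every occurrence on the page

    for bitmap, x, y in glyphs:
        if bitmap not in index_of:
            index_of[bitmap] = len(shapes)
            shapes.append(bitmap)
        placements.append((index_of[bitmap], x, y))
    return shapes, placements

# Toy example: the same "e" bitmap appears three times but is stored only once
e = ((0, 1, 0), (1, 1, 1), (1, 0, 0))
t = ((1, 1, 1), (0, 1, 0), (0, 1, 0))
shapes, placements = build_shape_dictionary([(e, 10, 40), (t, 22, 40), (e, 34, 40), (e, 55, 40)])
print(len(shapes), len(placements))  # 2 unique shapes, 4 placements
```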
Since JB2 (also called DjVuBitonal) is a variation on JBIG2, working on the same principles, both compression methods have the same problems when performing lossy compression. In 2013 it emerged that Xerox photocopiers and scanners had been substituting digits for similar-looking ones, for example replacing a 6 with an 8. A DjVu document has been spotted in the wild with character substitutions, such as an n with bleeding serifs turning into a u and an o with a spot inside turning into an e. Whether lossy compression has occurred is not stored in the file. Thus the DjView viewing application cannot warn the user that glyph substitutions might have occurred, either when opening a lossy compressed file or in the Information or Metadata dialogue boxes.
Format licensing
DjVu is an open file format, though it is covered by patents. The file format specification is published, as is source code for the reference library. The original authors distribute an open-source implementation named "DjVuLibre" under the GNU General Public License. The rights to the commercial development of the encoding software have been transferred to different companies over the years, including AT&T Corporation, LizardTech, Celartem and Cuminas.
Celartem acquired LizardTech and Extensis.
Support
DjVu is not widely supported by scanning and viewing software. While viewers can be downloaded, opening DjVu files is not supported by default in most operating systems, the main exception being most Linux distributions.
In 2002, the DjVu file format was chosen by the Internet Archive as a format in which its Million Book Project provides scanned public-domain books online (along with TIFF and PDF). In February 2016, the Internet Archive announced that DjVu would no longer be used for new uploads, citing among other reasons the format's declining use and the difficulty of maintaining its Java applet-based viewer for the format.
Wikimedia Commons, a media repository used by Wikipedia among others, conditionally permits PDF and DjVu media files.
See also
International Image Interoperability Framework (IIIF)
JBIG2
Comparison of e-book formats
References
External links
A collection of DjVu documents (mostly unbundled)
DjVuLibre site
The site of DjVu.js Viewer usable with the current Firefox and Chrome
pdf2djvu Jakub Wilk's tools
djvu.org (maintained by an anonymous webmaster)
djvu.com ("DjVu Universe") (Caminova Corporation)
Cuminas Corporation – Software Downloads
Cuminas DjVu SDK DjVu decoder/encoder library
An actual link to a (2001) DjVu document
Computer-related introductions in 1998
Computer file formats
Electronic documents
Electronic publishing
Filename extensions
Graphics file formats
Office document file formats
Open formats |
920598 | https://en.wikipedia.org/wiki/QSound%20Labs | QSound Labs | QSound Labs is an audio technology company based in Calgary, Canada. It is primarily a developer and provider of audio enhancement technologies for entertainment and communications devices and software. The company is best known as a pioneer of 3D audio effects, beginning with speaker-targeted positional 3D technology applied to arcade video games and professional music and film soundtrack production. QSound was founded by Larry Ryckman (CEO), Danny Lowe, and John Lees. Jimmy Iovine served as SVP of Music and Shelly Yakus as VP of Audio Engineering in its formative years.
History
The flagship technology first known simply as "QSound" saw its initial commercial application in the early 1990s, notably in Capcom arcade games and on many music releases by prominent artists. The first two QSound album titles were Sting's The Soul Cages and Madonna's The Immaculate Collection.
From the original speaker-targeted QSound 3D process used in producer-side applications, QSound Labs developed a suite of positional and enhancement spatial audio technologies, including positional audio for stereo speakers, multi-channel speaker systems and stereo earphones; stereo expansion; and virtual surround, under several technology names. There is no longer any single process referred to as "QSound."
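QSound's actual algorithms are proprietary and are not described here; purely as a generic illustration of the simplest textbook form of stereo expansion (mid/side widening), which is unrelated to QSound's own processing, a sketch might look like:

```python
def widen_stereo(left, right, width=1.5):
    """Generic mid/side stereo widening: scale the side (difference) signal.

    width > 1 widens the stereo image, width < 1 narrows it. This is a
    textbook technique, not QSound's proprietary processing.
    """
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0
        side = (l - r) / 2.0 * width
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r

# Tiny example with made-up sample values
left = [0.10, 0.30, -0.20, 0.05]
right = [0.05, 0.25, -0.10, 0.00]
wide_l, wide_r = widen_stereo(left, right)
```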
Although the hardware QSystem professional mixing processors and plug-in producer-side software tools were significant product offerings in the 1990s, most QSound technology is now incorporated in end-user products (such as video game software, computer sound cards and home entertainment electronics) by means of analog integrated circuits, digital signal processor (DSP) software libraries, host processor software and the like. QSound's iQ internet audio enhancement software, their first downloadable, stand-alone consumer product, ultimately spawned a successful product line including the iQFX series of plug-ins for RealNetworks' RealPlayer.
In addition to spatial audio processing, the company has broadened its product line to include a long list of audio effects and controls such as static equalization, adaptive spectral enhancement, dynamic range controls, reverberation, and many other standard audio effects.
In 2003, QSound added a software MIDI wavetable ringtone synthesizer to its line-up. The effects suite and synthesizer are licensed in the form of software libraries to mobile phone manufacturers and providers of related technology (e.g. DSP's and DSP operating systems), in order to provide polyphonic ringtone rendering and enhanced music or mobile TV playback on hand-held devices.
In 2009, the company was delisted from the NASDAQ exchange. Around 2012, the copyright date on QSound Labs' website was updated. Out of the 40 patents filed by QSound Labs, only one is still active; the rest have either been abandoned or expired. The current status of the company is unknown; as such, it is assumed to be inactive.
Further reading
"Sound Engine Roundup" (Part 2: 3D APIs) - QSound
QSound Spotlight - QUsers - Who's Using QSound?
See also
Aureal Semiconductor
Creative Technology
Sensaura
References
Companies based in Calgary |
24925986 | https://en.wikipedia.org/wiki/Ivanovo%20State%20University | Ivanovo State University | Ivanovo State University (IvSU) is a university in Ivanovo, about 300 km east of Moscow, Russia. The university was founded in 1918; before 1974 it was called Ivanovo State Pedagogical Institute.
IvSU has accreditation and a license from the Russian Ministry of Education. It is a member of the Eurasian Association of Universities and the Russian Association of Classical Universities, and co-operates with educational institutions in Germany (Passau University and the Berlin Technical University), Sweden (University of Uppsala), Denmark (Aarhus University and the Institute of Business and Technology in Herning) and China (Xiangtan University), among many others. The training of international students started in the 1960s with programs for students from socialist countries. In 1982 a separate Department of Russian as a Foreign Language was created in order to prepare international students to study in Russia. It was the first university in the region to offer degrees that were unusual in the Soviet system: economics (1976), law (1962), sociology (1999) and applied programming.
IvSU offers different types of programs, from the traditional five-year specialist degree to four-year bachelor's and two-year master's degrees, following the modernization of the Russian education system and the Bologna Process. The university also has 10 specialized councils that grant the degree of Kandidat Nauk, as well as the postgraduate degree of Doktor Nauk, in the fields of history, technical science, linguistics, philosophy and economics.
History
Ivanovo State University was founded on 21 December 1918 as Ivanovo-Voznesensk Institute of National Education. On 12 July 1923 it was transformed into a teacher training college which in its turn served as a basis for creating a teacher training institute in August 1932. That institute was named after Dmitry Furmanov. Ivanovo State University – as a successor of that institute – has been successfully functioning since 1973.
Facilities and resources
The structure of IvSU:
9 academic buildings;
3 halls of residence with all modern conveniences;
a university library and reading-rooms;
sports facilities;
a health centre;
computer centres and multimedia classrooms;
educational and scientific laboratories;
zoological and archaeological museums, the museum "Writers of Ivanovo Region";
a botanical garden and a vivarium.
Educational programmes
IvSU students are trained in several educational programmes:
Bachelor's degree
Mathematics
Mathematics and Computer Science
Fundamental Computer Science and Information Technology
Physics
Chemistry
Biology
Applied Informatics
Nanotechnology and Microsystems Engineering
Psychology
Economics
Management
Human Resource Management
Sociology
Social Work
Jurisprudence
International Relations
Advertising and Public Relations
Journalism
Teacher Education
Psychological and Pedagogical Education
Philology
History
Physical Education
Master's degree
Mathematics
Mathematics and Computer Science
Physics
Chemistry
Biology
Applied Informatics
Psychology
Economics
Management
Finance and Credit
Sociology
Jurisprudence
International Relations
Journalism
Psychological and Pedagogical Education
Philology
History
Specialist's degree
Fundamental and Applied Chemistry
PhD Degree
Mathematics and Mechanics
Chemical Sciences
Mechanical Engineering
Psychological Sciences
Economics
Sociological Sciences
Education and Pedagogical Sciences
Linguistics and Literature Studies
Historical Sciences and Archeology
Philosophy, Ethics, and Religious Studies
International cooperation
Ivanovo State University is an important international scientific and educational centre in the region. The successful work of the centre is based on the competence of the university's staff and the promotion of cross-cultural tolerance.
Today IvSU has partner relations with educational and scientific institutions in Bulgaria, Belarus, Vietnam, Germany, Italy, Kazakhstan, China, Poland, Romania, Serbia, Tajikistan, Finland, the Czech Republic, Sweden, Uzbekistan and Denmark.
The existing agreements give IvSU students and faculty an opportunity to improve their professional knowledge and language skills, to get acquainted with the culture of various countries and to gain experience of cross-cultural communication through exchange education, participation in language programmes, scientific internships and educational tours.
International activity
Ivanovo State University started training foreign specialists in 1978. Over 500 students have come from over 40 countries across Asia, Africa and Europe to study at the university, including Congo, China, Turkmenistan, Georgia, Germany, Vietnam, the Czech Republic, Guinea-Bissau and Nigeria.
IvSU graduates have gone on to work in various countries of the world: Germany, Vietnam, Holland, China, Canada, the US, Mongolia, Cameroon, Syria, Angola, etc.
IvSU offers foreign students the following educational programmes:
An additional educational programme that provides training for foreign citizens and stateless persons to master a professional educational programme in Russian (preparatory department).
Additional professional education under the programme "Russian as a Foreign Language" (professional retraining), which gives the right to teach Russian as a foreign language at the initial stage and to translate.
Additional education under the programme "Russian as a Foreign Language" as part of coursework and internships, exchange education and advanced training, "Methods of Teaching Russian as a Foreign Language."
The main professional programme "Russian language and culture in the modern world" (Master's degree) for foreign citizens.
Ivanovo State University offers foreign students many opportunities for developing their creativity and for cultural enrichment. The issues of intercultural dialogue and interethnic tolerance are discussed at the annual inter-university scientific and practical conference "A World without Borders". Foreign students take an active part in this conference. The "Welcome Centre" is a student association within the framework of the federal project "Your Route is Russia!" Residents of the centre actively involve foreign students of IvSU in their work. IvSU takes an active part in the concert and exhibition devoted to International Student Day.
Career centre
The main goal of the centre is to help IvSU undergraduate and graduate students adapt to the present-day labor market and to provide careers advice. The centre also helps high school graduates to choose their future profession.
About 95-97% of IvSU graduates find employment within half a year after their graduation. Around 70% of them work directly in the fields they majored in. Over 30% of graduate students successfully manage to combine their studies with work.
Students' life
IvSU students take an active part in a number of all-Russian forums and workshops such as "Students' Russia", contests "Student of the Year" and "Univervision", all-Russian "Student Forum", "Territory of Meanings", "XXI century leader" workshop etc.
Student projects include:
"Your Choice" – a multilevel school for most active students;
"Hello! You’re a student!" – a project aimed at first-year students;
"Initiation to Studentship" – a contest/performance;
"Hello, we’re looking for talented people!" a contest/performance;
"The World without Borders" – an intercultural project;
"Dance Week" – a dance contest;
"Students' Spring" – a creativity contest;
"MegaQR" – a photo quest, etc.
Dormitories
Ivanovo State University offers its students three comfortable dormitories located next to the University buildings. All the dormitories have the possibility of accessing the Internet.
There are beds, wardrobes, bedside tables, desks and bookshelves in the dormitory rooms. Students can use washing machines, refrigerators and cooking stoves. Dormitories have study rooms, winter gardens and sports halls.
Famous graduates of Ivanovo State University
Famous scientists have worked at the University: biologist S. N. Bogolyubsky (first rector of the institute), mathematicians N. N. Luzin, A. Ya. Khinchin, D. E. Menchov, A. I. Maltsev, physicists A. I Nekrasov, R. V. Kunitsky, V. S. Sorokin, historians D. M. Petrushevsky, A. A. Kizevetter, A. Z. Manfred, A. P. Kazhdan, writers N. F. Belchikov, D. E Maximov, philosopher A. I. Uyomov and others.
Among the graduates are the writers Mikhail Dudin, Oleg Gorelov, Pavel Kupriyanovsky, Leonid Taganov, Dmitry Bushuev and Igor Zhukov, as well as Moscow Region Prosecutor Alexander Anikin. Another of the university's graduates is Alexei Vasilkov, First Deputy Prosecutor of the Vologda Oblast and Honorary Worker of the Prosecutor's Office of the Russian Federation.
Scientific research
Ivanovo State University carries out fundamental and applied research in 19 scientific fields. These fields cover practically the whole spectrum of current scientific trends:
The Natural and Engineering Sciences – 6 fields,
The Humanities – 7 fields,
The Social Sciences – 6 fields.
Collaboration of the scientific and educational centres results in coordination of educational process and research:
Nanomaterials Research Institute
Intelligentsia Studies Research Institute
The Tribological Centre of Science and Education
The Chemical Physics Centre of Science and Education
The Centre of Studies focusing on the 'Problems of Economic Reliability of Production Systems'
The Centre of Studies focusing on the 'Problems of Mathematics and Computer Sciences'
The Archaeological Centre of Science and Education
The Research Centre of Studying the German Law
The Laboratory of Communicative Behavior of a Human Being
The Laboratory of the Research in the Post-Soviet Era
The Public Laboratory of Research in Criminal and Legal Law
The Centre of Studies focusing on the Problems of Economic Reliability of Production Systems
The Center of Industrial and Information Technologies
The Centre of Studies focusing on the Written Word in the Context of Culture
The Centre of Ethnic and National Research
Current Problems of Modern Lexicography
The Research Centre of Regional Development
The Research of Poetics of Classical Russian Literature
IvSU publishes a number of scientific journals, among them 'Liquid Crystals and Their Application', 'Woman in Russian Society' and 'Intelligentsia and the World', which are included in the special list of journals recommended for publishing the main results of research towards the academic degrees of Candidate and Doctor of Sciences.
Besides, the journal 'Liquid Crystals and Their Application' is included into the international citation data bases 'Web of Science' and 'Scopus'. In 2017 the journal 'Woman in Russian Society' was also included into 'Scopus' international citation database.
Sports activity
About 3,000 students are engaged in physical education at the University. The University Sport Festivals, Freshman Sport Festival, University personal championships, Sport Fests, Health Days, Teachers and Staff Sports Festival, Family Festival, and Skiing and Running Festivals are held annually.
Representative teams take part in regional sport festivals among universities, and the best athletes of the University participate in national and international competitions.
Students can choose a suitable sports club:
Athletic gymnastics
Aerobics
Badminton
Basketball
Volleyball
Kayaking
Kickboxing
Athletics
Ski race
Table tennis
Powerlifting
Sambo
Combat Sambo
Chess
Football
Wushu
References
External links
Universities in Ivanovo Oblast
Educational institutions established in 1918
Ivanovo
1918 establishments in Russia |
14682402 | https://en.wikipedia.org/wiki/Jack%20Henry%20%26%20Associates | Jack Henry & Associates | Jack Henry & Associates, Inc. is a technology company and payment processing service for the financial services industry. It serves more than 9,000 customers nationwide, and operates through three primary brands. Headquartered in Monett, Missouri, JHA made $1.55 billion in annual revenue during fiscal 2019.
History
Jack Henry & Associates (commonly referred to as JHA) was formed in 1976 by Jack Henry and Jerry Hall in Monett, Missouri. In 1977, Jack Henry & Associates was incorporated and generated $115,222 in revenue.
On November 20, 1985, an initial public offering made JHA a public company trading 3,125,000 common shares on the NASDAQ exchange under the symbol JKHY. In 1992, JHA began to aggressively acquire companies that expanded its product offering and its client base. It acquired Symitar in 2000, establishing its second brand, which serves the credit union industry. In 2004, JHA acquired a number of companies and products that can be sold outside JHA's core client base to all financial services organizations regardless of charter, asset size, or core processing platform. In 2006, JHA launched its third primary brand – ProfitStars – to encompass the specialized products and services assembled through its acquisitions. In 2012, JHA announced $1 billion in annual revenue.
Corporate structure
Jack Henry & Associates, Inc. (NASDAQ: JKHY) operates through three primary brands listed below.
Jack Henry Banking
Jack Henry Banking is a provider of integrated computer systems for banks ranging from de novo banks to large, established institutions. Jack Henry Banking currently serves approximately 1,000 banks.
Symitar
Symitar was founded in 1984 and acquired by JHA in 2000. Symitar is a provider of integrated computer systems for credit unions of all sizes. Symitar's product is Episys, a software application used to manage a credit union's member base and process transactions.
ProfitStars
ProfitStars is a core-agnostic solution provider to banks and credit unions. It is headquartered in Allen, TX.
Acquisitions
2019
Geezeo
2018
BOLTS Technologies, Inc.
Agiletics, Inc.
2017
Ensenta Corporation
Vanguard Software Group
2015
Bayside Business Solutions
2014
Banno
2010
iPay Technologies
2009
Pemco Technologies
Goldleaf Financial Solutions, Inc.
2007
AudioTel Corporation
Gladiator Technology Services
2006
US Banking Alliance
2005
Profitstar Inc.
Select Payment Processing, Inc.
Verinex Technologies, Inc.
Optinfo, Inc.
TWS, Inc.
Synergy
Stratika
Tangent Analytics, LLC
References
American companies established in 1976
Financial services companies of the United States
Software companies of the United States
Software companies based in Missouri
Financial services companies established in 1976
Software companies established in 1976
1976 establishments in Missouri
Companies listed on the Nasdaq
Banking software companies
1980s initial public offerings
Barry County, Missouri |
4636 | https://en.wikipedia.org/wiki/Break%20key | Break key | The Break key (or the symbol ⎉) of a computer keyboard refers to breaking a telegraph circuit and originated with 19th century practice. In modern usage, the key has no well-defined purpose, but while this is the case, it can be used by software for miscellaneous tasks, such as to switch between multiple login sessions, to terminate a program, or to interrupt a modem connection.
Because the break function is usually combined with the pause function on one key since the introduction of the IBM Model M 101-key keyboard in 1985, the Break key is also called the Pause key. It can be used to pause some computer games.
History
A standard telegraph circuit connects all the keys, sounders and batteries in a single series loop. Thus the sounders actuate only when both keys are down (closed, also known as "marking" — after the ink marks made on paper tape by early printing telegraphs). So the receiving operator has to hold their key down or close a built-in shorting switch in order to let the other operator send. As a consequence, the receiving operator could interrupt the sending operator by opening their key, breaking the circuit and forcing it into a "spacing" condition. Both sounders stop responding to the sender's keying, alerting the sender. (A physical break in the telegraph line would have the same effect.)
The teleprinter operated in a very similar fashion except that the sending station kept the loop closed (logic 1, or "marking") even during short pauses between characters. Holding down a special "break" key opened the loop, forcing it into a continuous logic 0, or "spacing", condition. When this occurred, the teleprinter mechanisms continually actuated without printing anything, as the all-0s character is the non-printing NUL in both Baudot and ASCII. The resulting noise got the sending operator's attention.
This practice carried over to teleprinter use on time-sharing computers. A continuous spacing (logical 0) condition violates the rule that every valid character has to end with one or more logic 1 (marking) "stop" bits. The computer (specifically the UART) recognized this as a special "break" condition and generated an interrupt that typically stopped a running program or forced the operating system to prompt for a login. Although asynchronous serial telegraphy is now rare, the key once used with terminal emulators can still be used by software for similar purposes.
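Although asynchronous serial telegraphy is now rare, a break condition can still be generated programmatically on a serial port. The following minimal sketch uses the third-party pyserial library, which is an assumption here (it is not mentioned in this article), along with an illustrative port name, to hold the transmit line in the spacing state for a fraction of a second.

```python
# Sketch: sending a serial "break" condition with the pyserial library.
# The port name, baud rate and duration below are illustrative assumptions.
import serial

def send_serial_break(port="/dev/ttyUSB0", baud=9600, duration=0.25):
    """Hold the TX line in the spacing (logic 0) state for `duration` seconds."""
    with serial.Serial(port, baudrate=baud, timeout=1) as ser:
        # pyserial drives the line into a continuous spacing condition,
        # which the receiving UART reports as a break/framing event.
        ser.send_break(duration=duration)

if __name__ == "__main__":
    send_serial_break()
```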
Sinclair
On the Sinclair ZX80 and ZX81 computers, the Break is accessed by pressing . On the Sinclair ZX Spectrum it is accessed by . The Spectrum+ and later computers have a dedicated key. It does not trigger an interrupt but will halt any running BASIC program, or terminate the loading or saving of data to cassette tape. An interrupted BASIC program can usually be resumed with the CONTINUE command. The Sinclair QL computer, without a key, maps the function to .
BBC Micro
On a BBC Micro computer, the key generates a hardware reset which would normally cause a warm restart of the computer. A cold restart is triggered by pressing . If a filing system is installed, will cause the computer to search for and load or run a file called !Boot on the filing system's default device (e.g. floppy disk 0, network user BOOT). The latter two behaviours were inherited by the successor to Acorn MOS, RISC OS. These behaviours could be changed or exchanged in software, and were often used in rudimentary anti-piracy techniques.
Because of the BBC Micro's near universal usage in British schools, later versions of the machine incorporated a physical lock on the Break key to stop children from intentionally resetting the computer.
Modern keyboards
On many modern PCs, interrupts screen output by BIOS until another key is pressed. This is effective during boot in text mode and in a DOS box in Windows safe mode with 50 lines. On early keyboards without a key (before the introduction of 101/102-key keyboards) the Pause function was assigned to , and the Break function to ; these key-combinations still work with most programs, even on modern PCs with modern keyboards. Pressing the dedicated key on 101/102-key keyboards sends the same scancodes as pressing , then , then releasing them in the reverse order would do; additionally, an E1hex prefix is sent, which enables 101/102-key-aware software to discern the two situations, while older software usually just ignores the prefix. The key is different from all other keys in that it sends no scancodes at all on release in PS/2 modes 1 or 2, so it is impossible to determine whether this key is being held down with older devices. In PS/2 mode 3 or USB HID mode, there is a release scancode, so it is possible to determine whether this key is being held down on modern computers.
On modern keyboards, the key is usually labeled Pause with Break below, sometimes separated by a line: , or Pause on the top of the keycap and Break on the front, or only Pause without Break at all. In most Windows environments, the key combination brings up the system properties.
Keyboards without Break key
Compact and notebook keyboards often do not have a dedicated key.
Substitutes for :
or or on certain Lenovo laptops.
or on certain Dell laptops.
on some other Dell laptops.
on Samsung.
on certain HP laptops.
on certain HP laptops.
on certain Logitech (LOGI) keyboards.
Substitutes for :
or or on certain Lenovo laptops.
on certain Dell laptops.
on certain HP laptops.
on certain HP laptops.
on certain Microsoft Surface Book laptops.
For some Dell laptops, without a key, press the and select "Interrupt".
Usage for breaking the program's execution
While both and combination are commonly implemented as a way of breaking the execution of a console application, they are also used for similar effect in integrated development environments. Although these two are often considered interchangeable, compilers and execution environments usually assign different signals to these. Additionally, in some kernels (e.g. miscellaneous DOS variants) is detected only at the time OS tries reading from a keyboard buffer and only if it's the only key sequence in the buffer, while is often translated instantly (e.g. by INT 1Bh under DOS). Because of this, is usually a more effective choice under these operating systems; sensitivity for these two combinations can be enhanced by the BREAK=ON CONFIG.SYS statement.
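As an illustration of the distinction drawn above, the following Python sketch registers separate handlers for the two console interrupt signals. The exact routing of key combinations to signals depends on the operating system and console, so this is an illustrative sketch rather than a definitive recipe; SIGBREAK is only exposed by CPython on Windows.

```python
# Sketch: distinguishing the two console interrupt signals in Python.
# SIGINT is typically raised by Ctrl+C; on Windows consoles CPython also
# exposes SIGBREAK for the Ctrl+Break combination.
import signal
import time

def on_sigint(signum, frame):
    print("Caught SIGINT (usually Ctrl+C)")

def on_sigbreak(signum, frame):
    print("Caught SIGBREAK (usually Ctrl+Break); exiting")
    raise SystemExit(1)

signal.signal(signal.SIGINT, on_sigint)
if hasattr(signal, "SIGBREAK"):          # Windows-only attribute
    signal.signal(signal.SIGBREAK, on_sigbreak)

print("Running; press Ctrl+C or Ctrl+Break...")
while True:
    time.sleep(1)
```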
See also
System request
Scroll lock
Num lock
References
External links
Computer keys
Out-of-band management |
2997698 | https://en.wikipedia.org/wiki/Logical%20Methods%20in%20Computer%20Science | Logical Methods in Computer Science | Logical Methods in Computer Science (LMCS) is a peer-reviewed open access scientific journal covering theoretical computer science and applied logic. It opened to submissions on September 1, 2004. The editor-in-chief is Stefan Milius (Friedrich-Alexander Universität Erlangen-Nürnberg).
History
The journal was initially published by the International Federation for Computational Logic, and then by a dedicated non-profit. It moved to the Episciences platform in 2017. The first editor-in-chief was Dana Scott. In its first year, the journal received 75 submissions.
Abstracting and indexing
The journal is abstracted and indexed in Current Contents/Engineering, Computing & Technology, Mathematical Reviews, Science Citation Index Expanded, Scopus, and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2016 impact factor of 0.661.
References
External links
Publications established in 2005
Computer science journals
Open access journals
Logic journals
Logic in computer science
Formal methods publications
Quarterly journals
English-language journals |
41451044 | https://en.wikipedia.org/wiki/Brandon%20Carswell | Brandon Carswell | Brandon Carswell (born May 22, 1989) is an American football wide receiver who is currently a free agent. He played college football for the USC Trojans. Carswell signed as an undrafted free agent after the 2012 NFL Draft.
High school
Carswell attended Milpitas High School in Milpitas, CA. Carswell chose to attend USC after being recruited by many other top schools.
College career
Carswell saw his most playing time during his redshirt senior year, after having considered transferring due to the sanctions against the program.
Professional career
San Francisco 49ers
Carswell was signed by the Oakland Raiders as an undrafted free agent after the 2012 NFL Draft. He was released on September 12, 2012. In 2013, he signed with the 49ers but was placed on injured reserve. He was waived on April 11, 2014.
References
External links
San Francisco 49ers bio
USC Trojans bio
1989 births
Living people
People from Milpitas, California
Players of American football from California
American football wide receivers
USC Trojans football players
San Francisco 49ers players
Sportspeople from Santa Clara County, California |
66764208 | https://en.wikipedia.org/wiki/Kewala%27s%20Typing%20Adventure | Kewala's Typing Adventure | is a 1996 Australian educational typing-themed video game, featuring a koala protagonist named Kewala. It was developed by Sydney-based software company Typequick, and localised by Japan Data Pacific for the Japanese market. The game was renamed Typequick for Students in 1997 and, by 2002, was called Success With Typing for Students.
The game sees the player follow the true blue (authentically Australian) koala protagonist Kewala on an adventure through Australian landscapes to the magical Kingdom of Eaz, learning how to type through tutorials on where to place fingers and touch-typing practice through sentences that advance Kewala's movements.
The game has received a positive reception from critics. Consistent praise was given to how the game's educational qualities were masked behind a highly entertaining adventure, as well as the rare showcase of local Australian landmarks. Additionally, the game has received various awards including the Software Product of the Year in the Social/Life skills category at Japan's 1997 SOFTIC Award ceremony.
Premise and gameplay
The game features a 10 to 15-hour interactive adventure about a true blue (authentically Australian) koala named Kewala as he treks through Australia on an emu, then surfs with whales to the magical Kingdom of Eaz, as the player masters their typing skills. The game records the player's progress and typing speed and will return them to the next lesson upon re-entering. The game begins with a tutorial on where to place fingers, and then with nonsense words like "assa" and "saas", with players soon progressing to complete sentences. The CD-ROM came with a hardcover binder with details of each typing lesson for teachers. The game emphasizes the importance of posture and finger positioning for typing. According to Typequick, the game helps children with dyslexia and other special needs overcome writing difficulties.
Development
Prior to this game's release, Sydney-based software company Typequick had been a successful Australian software company for 15 years; its software was widely used across Australian universities, TAFEs and schools and was the biggest selling typing program in Japan. Founded by Noel McIntosh in 1982, the company had sold $25 million worth of typing programs, with over half its gross profits from overseas sales, by 1997. The Australian Financial Review reported in 1990 that the United States Department of Defense had bought 1600 copies of the original title, while the company would win the 1992 CODiE Award for Best Special Needs Program for another title, Talking TypeQuick for the Blind.
Kewala's Typing Adventure was designed for 11 to 25 year olds. It was written in the Borland C++ integrated development environment and includes 64 background scenes, 2500 animated images, and 20 talking characters. While the product was at the higher end of the pricing scale, Typequick assessed that retailers like Harvey Norman would funnel in-store promotion toward the product as it was good quality, locally supported (by a manufacturer-distributer with in-depth knowledge of the product), and could offer the retailer a higher profit margin. 'Kewala' was registered as a trademark on January 3, 1997 for a "computer software used for keyboard training, and instruction manual therefore, sold together as a unit".
While the game was originally called Kewala's Typing Adventure, Typequick found that product sales were below expectations, so it hired a design agency and conducted market research, which found the original packaging looked "too gamey". The agency was tasked with designing a package that would appeal to parents, teachers, and students, and came up with Typequick for Students. The 1997 re-release came with a multi-user license for schools. By 2002, the title was also called Success With Typing for Students. Typequick had brought its original package to Japan and localised it in 1986; in 1994, Japan Data Pacific signed a contract to become the domestic distributor of Typequick games, including Kewala's Typing Adventure, which was also localised into Japanese.
Reception
The Age felt the title was "full of clever tricks and cool sounds to hold the attention" of its young players, and that the "program's technique creeps up on you quietly". In a separate article, the newspaper noted the program's strange technique of teaching players to use two spaces after a full-stop (the default setting). The Sydney Morning Herald thought the "skills are cleverly integrated into an amusing trek around Australia", and that it played as "an adventure story with music, animation and games". The newspaper later wrote that "youngsters get to play a complete, highly entertaining adventure game and, along the way, end up competent typists". Supporting Children with Motor Co-ordination Difficulties felt the game's structured lessons "motivate pupils to learn to touch type".
Typequick for Students won the Software Information Centre's Japanese Top Education Course of 1997 award, and was additionally awarded Software Product of the Year in the Social/Life skills category at Japan's 1997 SOFTIC Award ceremony as the best software product in circulation in Japan throughout the year.
Within three months of its release, Typequick saw its sales double; the company noted that sales of Typequick Classic were at a 1:1 ratio with the new product. Typequick's website asserts that Kewala's Typing Adventure annually teaches over 250,000 students how to touch-type. In December that year, it was voted into the finalist review programs of the 1998 SPA (USA) Excellence in Software Awards for the Best Home Education Product for Teenagers and Adults. It was voted PC User Magazine's #1 product two years in a row at Australian Macworld.
References
Translation
Citation
External links
Main page
1996 video games
Children's educational video games
Typing video games
MacOS games
Windows games
Video games about animals
Fictional koalas
Video games developed in Australia |
61970060 | https://en.wikipedia.org/wiki/Mark%20Tehranipoor | Mark Tehranipoor | Mark M. Tehranipoor is an Iranian American academic researcher specializing in hardware security and trust, electronics supply chain security, IoT security, and reliable and testable VLSI design. He is the Intel Charles E. Young Preeminence Endowed Professor in Cybersecurity at the University of Florida and serves as the Director of the Florida Institute for Cybersecurity Research. He is an IEEE fellow and a co-founder of the International Symposium on Hardware Oriented Security and Trust (HOST). Tehranipoor also serves as a co-director of the Air Force Office of Scientific Research CYAN and MEST Centers of Excellence.
Life
Tehranipoor earned a Bachelor of Science (B.S.) in Electrical Engineering from Tehran Polytechnic University in 1997. After completing his Master of Science (M.S.) in Electrical Engineering from the University of Tehran in 2000, he received the Texas Public Educational Grant and moved to the United States to pursue his Ph.D. in Electrical and Computer Engineering at the University of Texas at Dallas, completing the doctorate in two years and eight months.
He spent two years at the University of Maryland Baltimore County as an assistant professor before moving to the University of Connecticut.
At UConn, he started to publish a series of books on Hardware Security, with the first being the Introduction to Hardware Security and Trust.
He initiated and established three centers of excellence in the area of hardware and cyber security; the Center for Hardware Assurance and Engineering (CHASE), Comcast Center of Excellence on Security Innovation (CSI), and the Connecticut Cybersecurity Center (C3).
He received the National Science Foundation (NSF) CAREER Award in 2008 and AFOSR MURI Award in 2014.
Later in 2015, Tehranipoor moved to the University of Florida, acquiring the title of Intel Charles E. Young Preeminence Endowed Professor in Cybersecurity. He serves as the Director of the Florida Institute for Cybersecurity Research. He is an IEEE fellow, ACM fellow and a co-founder of the International Symposium on Hardware Oriented Security and Trust. Tehranipoor also serves as a co-director of the Air Force Office of Scientific Research CYAN and MEST Centers of Excellence.
Publications and patents
Tehranipoor has published over 500 papers and holds 8 patents, with at least 20 more pending.
Books
Mark Tehranipoor, Emerging Topics in Hardware Security, Springer, 2020.
Navid Asadi, Md. Tanjid Rahman and Mark M. Tehranipoor, Physical Assurance: For Electronic Devices and Systems, Springer, 2020.
Swarup Bhunia and Mark M. Tehranipoor, Hardware Security: A Hands-on Learning Approach, Elsevier, Morgan Kaufmann imprint, 2018.
M. Tehranipoor, D. Forte, G. Rose, and S. Bhunia, Security Opportunities in Nano Devices and Emerging Technologies, CRC Press, 2017.
S. Bhunia and M. Tehranipoor, The Hardware Trojan War: Attacks, Myths, and Defenses, Springer, 2017.
D. Forte, S. Bhunia, and M. Tehranipoor, Hardware Protection through Obfuscation, Springer, 2017.
P. Mishra, S. Bhunia, and M. Tehranipoor, Hardware IP Security and Trust, Springer, 2017.
M. Tehranipoor, U. Guin, and D. Forte, Counterfeit Integrated Circuits: Detection and Avoidance, Springer, 2015.
M. Tehranipoor, H. Salmani, and X. Zhang, Integrated Circuit Authentication: Hardware Trojans and Counterfeit Detection, Springer, July 2013.
M. Tehranipoor and C. Wang, Introduction to Hardware Security and Trust, Springer, August 2011.
M. Tehranipoor, K. Peng, and K. Chakrabarty, Test and Diagnosis for Small-Delay Defects, Springer, September 2011.
M. Tehranipoor, Emerging Nanotechnologies: Test, Defect Tolerance, and Reliability, Springer, November 2007.
M. Tehranipoor and N. Ahmed, Nanometer Technology Designs: High-Quality Delay Tests, Springer, December 2007.
Select articles
M. Tehranipoor and F. Koushanfar, "A Survey of Hardware Trojan Taxonomy and Detection," IEEE Design and Test of Computers, 2010.
U. Guin, K. Huang, D. DiMase, J. Carulli, M. Tehranipoor, Y. Makris, "Counterfeit Integrated Circuits: A Rising Threat in the Global Semiconductor Supply Chain," Proceedings of IEEE, 2014.
H. Salmani, M. Tehranipoor, and J. Plusquellic, "A Novel Technique for Improving Hardware Trojan Detection and Reducing Trojan Activation Time," IEEE Transactions on VLSI (TVLSI), 2012.
X. Xu, B. Shakiya, M. Tehranipoor, and D. Forte, "Novel Bypass Attack and BDD-based Tradeoff Analysis Against all Known Logic Locking Attacks," Conference on Cryptographic Hardware and Embedded Systems (CHES), 2017.
K. Xiao and M. Tehranipoor, "BISA: Built-In Self-Authentication for Preventing Hardware Trojan Insertion," Int. IEEE Symposium on Hardware-Oriented Security and Trust (HOST), 2013.
A. Nahiyan, K. Xiao, D. Forte, Y. Jin, and M. Tehranipoor, "AVFSM: A Framework for Identifying and Mitigating Vulnerabilities in FSMs," Design Automation Conference (DAC), 2016.
U. Guin, Q. Shi, D. Forte, and M. Tehranipoor, "FORTIS: A Comprehensive Solution for Establishing Forward Trust for Protecting IPs and ICs," ACM Transactions on Design Automation of Electronic Systems (TODAES), 2016.
M. T. Rahman, M. S. Rahman, H. Wang, S. Tajik, W. Khalil, F. Farahmandi, D. Forte, N. Asadi, and M. Tehranipoor, "Defense-in-Depth: A Recipe for Logic Locking to Prevail," Integration, the VLSI Journal, 2020.
Patents
Embedded ring oscillator network for integrated circuit security and threat detection, 2014, M. Tehranipoor, X. Wang, X. Zhang, US 8850608 B2, WO 2012122309 A3 (Grant)
Methods and Systems for Hardware Piracy Prevention, 2014, M. Tehranipoor and N. Tuzzio, US9071428 B2 (Grant)
Methods and Systems for Preventing Hardware Trojan Insertion, M. Tehranipoor and K. Xiao, US20140283147 A1 (Grant)
Photon-Counting Security Tagging and Verification Using Optically Encoded QR Codes, B. Javidi, A. Markman, and M. Tehranipoor, US20150295711 A1 (Grant)
UCR: An Unclonable Environmentally-Sensitive Chipless RFID Tag, Jan. 15, 2019, M. Tehranipoor, D. Forte, K. Yang, and H. Shen, 10181065 (Grant)
Vanishing Via for Hardware IP Protection Against Reverse Engineering, 2017, S. Bhunia, M. Tehranipoor, D. Forte, N. Asadi, and H. Shen, 10283459 (Grant)
Unclonable environmentally-sensitive chipless RFID tag with a plurality of slot resonators, 10181065, Mark Tehranipoor, Kun Yang, Haoting Shen, and Domenic Forte (Grant)
Layout-Driven Method to Assess Vulnerability of ICs to Microprobing Attacks, M. Tehranipoor, D. Forte, N. Asadi, and Q. Shi, U.S. Patent No. 10,573,605 (Grant)
References
University of Florida faculty
Year of birth missing (living people)
Living people |
28718893 | https://en.wikipedia.org/wiki/Xerox%20Operating%20System | Xerox Operating System | The Xerox Operating System (XOS) was an operating system for the XDS Sigma series of computers "optimized for direct replacement of IBM DOS/360 installations" and to provide real-time and timesharing support.
The system was developed, beginning in 1969, for Xerox by the French firm CII (now Bull).
XOS was more successful in Europe than in the US, but was unable to compete with IBM. By 1972 there were 35 XOS installations in Europe, compared to 2 in the US.
References
External links
XOS Documentation at Bitsavers
XOS: the Xerox operating system, general information digest
Discontinued operating systems
Proprietary operating systems
Operating System |
43935218 | https://en.wikipedia.org/wiki/Steel%20City%20Wrestling | Steel City Wrestling | Steel City Wrestling (SCW) was a professional wrestling promotion that was founded in Latrobe, Pennsylvania in 1994 by Norm Connors. It was the top promotion in the Pittsburgh metropolitan area during the 1990s, along with the National Wrestling Alliance-affiliated Pro Wrestling eXpress, and was regarded by many in the industry as one of the best independent promotions on the East Coast of the United States.
For many years, SCW was the home promotion of Pittsburgh "legends" such as Lord Zoltan and T. Rantula as well as many prominent indy stars in the region including Cueball Carmichael, Dennis Gregory, Lou Marconi, Jimmy Cicero, Frank Stalletto, Tom Brandi, Mike Quackenbush, Reckless Youth, and The Bad Street Boys (Joey Matthews and Christian York). The promotion also regularly featured talent from Extreme Championship Wrestling. Future ECW stars Julio Dinero, Stevie Richards, and The Blue Meanie all started their careers in SCW. Unlike its Philadelphia counterpart, however, the promotion had a much more "family friendly" atmosphere. In addition, SCW co-hosted the original Deaf Wrestlefest benefit shows with Lord Zoltan for the Western Pennsylvania School for the Deaf from 1994 to 2000.
History
Early years
Steel City Wrestling was started by Norm Connors in the fall of 1994. On October 8, 1994, the promotion crowned its first heavyweight champion in Connellsville, Pennsylvania when T. Rantula defeated Shane Douglas in a 4-man tournament final. Lord Zoltan also beat Scott McKeever for the SCW Light Heavyweight Championship. Two weeks earlier, at an ACW show in Munhall, Lou Marconi and Dereck Stone had won the SCW Tag Team Championship after defeating Beauty & The Beast (Frank Stalletto and Futureshock). That same year, SCW co-hosted the first of Lord Zoltan's Deaf Wrestlefest shows for the Western Pennsylvania School for the Deaf in Edgewood, Pennsylvania. The event would become an annual supercard for the promotion, attracting many former National Wrestling Alliance and World Wrestling Federation alumni, as well as top indy stars, and became the school's most important fundraiser during its original 6-year run. Connors, who had performed as "heel" manager Notorious Norm on the local independent circuit, was able to use his connections to bring in legendary WWF wrestlers such as King Kong Bundy, Koko B. Ware, Virgil, and "Superfly" Jimmy Snuka and pitted them against local stars. Bundy, in particular, would agree to wrestle on SCW shows at a reduced cost due to his personal friendship with Connors. This was critical to SCW's early success as "big name" wrestlers could often bring in thousands of dollars for an independent show.
Cooperation with Extreme Championship Wrestling
SCW often cooperated with Extreme Championship Wrestling, another local up-and-coming promotion out of Philadelphia, and regularly featured ECW talent. SCW was among the early independent promotions Cactus Jack wrestled for, in between ECW and Japan, and ended T. Rantula's first title reign in New Castle on March 19, 1995. A number of ECW wrestlers were directly involved in SCW storylines. On October 21, 1995, Stevie Richards turned on Frank Stalletto, attacking him with Raven, immediately after winning the SCW tag team titles from Black & Blue (Black Cat and Lou Marconi) in Connellsville. SCW mainstays Marconi and Stalletto won the titles from Stevie Richards and Brian Rollins in St. Mary's, Pennsylvania a month later. They would go on to become one of SCW's most successful tag teams. Mikey Whipwreck and Pablo Marquez also battled each other during the show to earn a title shot at light heavyweight champion Lord Zoltan.
That same year, The Blue Meanie attracted the attention of Raven and Stevie Richards while working at a 2-day SCW event in Pittsburgh. Raven had the idea that the unusual-looking wrestler would be perfect as a "lackey" of Stevie Richards, who was his own comic sidekick in ECW, and brought him to Philadelphia as a member of Raven's Nest. The Blue Meanie and Richards continued appearing in SCW and regained the SCW Tag Team Championship at Deaf Wrestlefest 1996. They held the belts for nearly two years before the title was vacated due to an injury suffered by Richards.
Connors-Lazarchik partnership
These early SCW shows ran sporadically due to Connors' activity as a wrestling manager on the independent circuit and, specifically, his commitments to the National Wrestling Alliance-affiliated Pro Wrestling eXpress. In April 1996, Connors met Andrew Lazarchik, then a student at LaRoche College, at a wrestling show at Carlynton High School. This chance meeting would be the beginning of a four-year partnership between the two men. Lazarchik joined Connors in PWX as a color commentator on its late-night television show. On September 21, 1997, an interpromotional PWX-SCW show was held at Pittsburgh's Sullivan Hall. At the end of that year, Connors and Lazarchik left the company due to creative differences with PWX management. They decided to run SCW full-time and began promoting shows in January 1998. Connors, still retaining ownership of the promotion, was the head booker and wrote the majority of the storylines. As vice president, Lazarchik handled the promotional side of the company by overseeing advertising and designing promotional material. Both men were also active SCW performers. Connors, continuing his "gimmick" as manager Notorious Norm, had an on-air role as SCW President while Lazarchik became "heel" manager "Hot Shot" Drew Lazario.
Their "home arena" was initially at the White Oak Athletic Association in White Oak, Pennsylvania before moving to the SCW Arena in Irwin, Pennsylvania later that year. SCW held shows throughout the Pittsburgh metropolitan area, especially the Mon Valley, before branching out to Ohio and West Virginia. In 1998 alone, with its weekly shows drew over 300 wrestling fans, SCW held 25 shows of which 21 made a profit and 4 broke even. Additionally, the company's mailing list increased from 100 to 600. Lazarchik partially attributed wrestling's popularity in the region during this period to the World Wrestling Federation's sold-out shows at the Pittsburgh Civic Arena that year. During a time when the WWF's "Attitude Era" influenced the 1990s wrestling boom, the promoters prided themselves on being a "family friendly" company boasting that "some of our biggest fans are senior citizens and little kids". Younger wrestling fans especially had the opportunity to interact with wrestlers during SCW shows. SCW also held wrestling shows to raise money for local schools and fire departments. Charitable organizations would pay them a "set fee" for the costs setting up the show, such as purchasing insurance for the venture and turning over 5% to the Pennsylvania Athletic Commission, while the charity would collect proceeds from the ticket sales. One of these benefit shows, "Brawl at Sullivan Hall" in Mount Washington, became one of the promotion's annual supercards.
On February 8, 1998, Cactus Jack and The Blue Meanie captured the vacant tag team titles from Lou Marconi and Frank Stalletto in Irwin, Pennsylvania. That same show also saw Reckless Youth end the three-year reign of Lord Zoltan as SCW Junior Heavyweight Champion. SCW was among the battlegrounds during Reckless Youth's feuds with Christian York and Mike Quackenbush. On May 1, 1998, Stevie Richards returned to SCW after a six-month absence to help Lou Marconi beat Tom Brandi for the SCW Heavyweight Championship. In one of his first matches after undergoing neck surgery, Richards defeated Frank Stalletto at an SCW show later that month. On May 23, SCW co-hosted an interpromotional show with MAPW in Medina, Ohio. The following night, SCW held a show at Ainsworth Field in Erie, Pennsylvania featuring The Pitbulls (Pittbull #1 and Pitbull #2) and The Bushwhackers (Bushwhacker Butch and Bushwhacker Luke). Stevie Richards served as special guest referee in a match between Tom Brandi and Corporal Punishment. The promotion also began airing a weekly Friday night television show, Steel City Wrestling TV, on WNPA. On October 18, 1998, Don Montoya was crowned the first SCW Television Champion following his victory over Joey Matthews in the finals of a one-night 8-man championship tournament. At the end of the year, The Bad Street Boys (Joey Matthews and Christian York) captured the SCW Tag Team Championship from Blue Meanie and Super Nova in Irwin, Pennsylvania.
On February 21, 1999, Cody Michaels won the SCW Heavyweight Championship from Dennis Gregory at the SCW Arena with the help of longtime friend Shane Douglas. On May 15, 1999, SCW was one of twelve independent promotions from across the country to participate in the Break the Barrier supercard at Philadelphia's Viking Hall. The promotion was officially represented by Mike Quackenbush, Lou Marconi, and Don Montoya who wrestled in a Three Way Dance for the SCW "Lord of the Dance" Championship. Jay Kirell of CagesideSeats.com called their bout "by far the match of the night" and is credited for greatly enhancing the early career of Quackenbush. The title was created specifically to be defended in Three Way Dance matches. A week later at a SCW show in Cambridge, Ohio, Mankind was the special referee in a wild brawl between T. Rantula and Lou Marconi. He and Notorious Norm got into an altercation near the end of the match which saw the WWF superstar attack Connors (and subsequently Marconi) with Mr. Socko allowing T. Rantula to win the bout.
Feud with High Society
In his role as SCW President, Notorious Norm was often challenged by various heel factions attempting to "take over" the company. The most serious "threat" to the promotion was High Society (Tom Brandi, Cueball Carmichael, and Jimmy Cicero) who managed to gain control of 40% of SCW by the summer of 1999. On June 5, SCW made its debut in Jeannette, Pennsylvania where Cody Michaels lost the SCW Heavyweight title to Cueball Carmichael. Carmichael won the bout due to outside interference from Dennis Gregory who had lost the belt to Michaels four months earlier. In the main event, WWF Light Heavyweight Champion Gillberg defeated Rich Myers.
High Society would temporarily win control of SCW when Carmichael defeated Notorious Norm in a singles match on September 19, 1999. Little Jeannie also defeated Lexi Fife during the show to become the first SCW Women's Champion; she had defeated manager Drew Lazario in the semi-finals earlier that night. SCW was profiled by the Pittsburgh Post-Gazette that same month.
Demise
In the spring of 2000, Connors decided to close SCW. Although the promotion was still highly popular, Connors chose to focus on his regular career as a funeral director. He had been able to promote wrestling events on the weekend while at mortuary school; however, he felt his work schedule significantly limited his time to book shows. Connors had been struggling with both since his graduation the previous summer. The promotion's final show was held that summer. Connors addressed the crowd at the conclusion of the show to thank the fans, wrestlers, and Lazarchik. SCW was regarded by many in the industry as one of the top promotions on the East Coast at the time of its close.
Reunions
On October 25, 2000, Lazarchik booked an "unofficial" reunion show in T. Rantula's Far North Wrestling. It was held at Blazer's Family Fun Center in Irwin with Don Montoya, Mike Quackenbush, and Reckless Youth in a Three Way Dance for the main event. Lazarchik planned another similar show in Irwin on December 13, 2000.
International Wrestling Cartel
In the spring of 2001, Norm Connors resumed promoting wrestling events. He partnered with B94 morning radio host Bubba The Bulldog to form the International Wrestling Cartel in West Mifflin, Pennsylvania. The promotion used many former Steel City Wrestling stars, as well as younger indy wrestlers, and was considered the successor of SCW. Connors ran the IWC for eight years before selling the promotion to Chuck Roberts and retiring once again.
Steel City Wrestlefest
On May 24, 2008, the International Wrestling Cartel and Pro Wrestling eXpress hosted an interpromotional supercard entitled "Steel City Wrestlefest" at the Rostraver Ice Garden in Belle Vernon, Pennsylvania. It was held as a charity event for the Cystic Fibrosis Foundation. The main event was a Tables, Ladders, and Chairs match between Justin Idol and CJ Sensation, which Idol won. Two featured bouts were scheduled on the undercard, including a "Best of Pittsburgh" Three Way Dance involving Dennis Gregory, Bad Boy BA, and Jimmy Vegas, with Dominic DeNucci as special guest referee, and a first-ever singles match between Sterling James Keenan and Chris Masters. The rest of the card involved a mix of SCW alumni as well as top indy stars from throughout the country. Both Kurt Angle and Bruno Sammartino were also scheduled to appear at the show, with Angle to meet local radio host Bubba the Bulldog in a wrestling match.
Former personnel
Championships and programming
Championships
Programming
See also
Deaf Wrestlefest
References
Further reading
Foley, Mick. Have a Nice Day!: A Tale of Blood and Sweatsocks. New York: HarperCollins, 1999.
Foley, Mick. Foley is Good: And the Real World is Faker Than Wrestling. New York: HarperCollins, 2002.
Gorman, Jeff. This Side of the Mic. Lincoln, Nebraska: iUniverse, 2006.
External links
Steel City Wrestling at Cagematch.net
The Steel City Wrestling Unofficial Fan Page
Official Steel City Wrestling message board (defunct)
2000 disestablishments in Pennsylvania
Entertainment companies disestablished in 2000
Entertainment companies established in 1994
1994 establishments in Pennsylvania
American independent professional wrestling promotions based in Pennsylvania
Sports in Pittsburgh |
2344610 | https://en.wikipedia.org/wiki/Certified%20Server%20Validation | Certified Server Validation | Certified Server Validation (CSV) is a technical method of email authentication intended to fight spam. Its focus is the SMTP HELO-identity of mail transfer agents.
CSV was designed to address the problems of MARID and the ASRG, as defined in detail as the intent of Lightweight MTA Authentication Protocol (LMAP) in an expired ASRG draft.
As of January 3, 2007, all Internet Drafts have expired and the mailing list has been closed down since there had been no traffic for 6 months.
Principles of operation
CSV considers two questions at the start of each SMTP session:
Does a domain's management authorize this MTA to be sending email?
Do reputable independent accreditation services consider that domain's policies and practices sufficient for controlling email abuse?
CSV answers these questions as follows: to validate an SMTP session from an unknown sending SMTP client using CSV, the receiving SMTP server:
Obtains the remote IP address of the TCP connection.
Extracts the domain name from the HELO command sent by the SMTP client.
Queries DNS to confirm the domain name is authorized for use by the IP (CSA).
Asks a reputable Accreditation Service if it has a good reputation (DNA).
Determines the level of trust to give to the sending SMTP client, based on the results of (3) and (4)
If the level of trust is high enough, process all email from that session in the traditional manner, delivering or forwarding without the need for further validation. If the level of trust is too low, return an error showing the reason for not trusting the sending SMTP client. If the level of trust is in between, document the result in a header in each email delivered or forwarded, and/or perform additional checks.
If the answers to both of the questions at the top of this article are 'Yes', then receivers can expect the email received to be email they want. Mail sources are motivated to make the answers yes, and it's easy for them to do so (unless their email flow is so toxic that no reputable independent accreditation service will vouch for them). CSV is designed to be efficient and elegant, and in this respect it certainly beats SPF's coverage of HELO identities.
Client SMTP Authorization (CSA) was a proposed mechanism whereby a domain admin can advertise which mail servers are legitimate originators of mail from his/her domain.
This is done by providing appropriate SRV RRs in the DNS infrastructure.
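A minimal sketch of the DNS lookup behind the CSA step (step 3 above) is shown below. It uses the third-party dnspython library; the "_client._smtp" owner-name label and the example host name are assumptions taken from the expired draft rather than from a deployed standard.

```python
# Sketch of the CSA lookup step using the dnspython library.
# The "_client._smtp" label follows the expired CSA draft; treat it, and
# the interpretation of the returned SRV fields, as assumptions here.
import dns.resolver

def csa_lookup(helo_domain):
    """Return SRV records advertising whether `helo_domain` authorizes its
    hosts to act as SMTP clients, or None if no CSA policy is published."""
    qname = f"_client._smtp.{helo_domain}"
    try:
        return list(dns.resolver.resolve(qname, "SRV"))
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no CSA policy published for this HELO name

# Hypothetical usage:
# records = csa_lookup("mail.example.com")
```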
External links
CSV home page and CLEAR list archive
CSV specification
CSA Client SMTP Authorization
DNA Domain Name Accreditation
John Leslie's CSV material
Dave Crocker's anti-spam material; not just CSV
Datamation article "an idea that's so simple and brilliant that it could actually succeed."
Tony's Quick Guide to CSA
Email authentication |
3818435 | https://en.wikipedia.org/wiki/Fsn%20%28file%20manager%29 | Fsn (file manager) | File System Navigator (fsn; pronounced "fusion") is an experimental application to view a file system in 3D, made by SGI for IRIX systems.
Even though it was never developed into a fully functional file manager, it gained some fame after appearing in the movie Jurassic Park in 1993. In a scene in the film, the character Lex Murphy, played by Ariana Richards, finds a computer displaying the interface. She exclaims, "It's a UNIX system! I know this!" and proceeds to restart the building's access control system, locking the control room's doors. After the release of the film, some perceived the visualization as an example of media misrepresentation of computers, citing the computer game-like display as an unrealistic Hollywood mockup while being unaware of the program's legitimate existence.
See also
File System Visualizer, a free clone of fsn for various Unix-like operating systems
GopherVR, a 3D visualisation tool for the Gopher hypertext protocol
References
External links
fsv fsn clone for Linux and other Unix-like operating systems.
Unix file system-related software
3D file managers
IRIX software
Jurassic Park |
21464146 | https://en.wikipedia.org/wiki/Symposium%20on%20Theory%20of%20Computing | Symposium on Theory of Computing | The Annual ACM Symposium on Theory of Computing (STOC) is an academic conference in the field of theoretical computer science. STOC has been organized annually since 1969, typically in May or June; the conference is sponsored by the Association for Computing Machinery special interest group SIGACT. Acceptance rate of STOC, averaged from 1970 to 2012, is 31%, with the rate of 29% in 2012.
As writes, STOC and its annual IEEE counterpart FOCS (the Symposium on Foundations of Computer Science) are considered the two top conferences in theoretical computer science, considered broadly: they “are forums for some of the best work throughout theory of computing that promote breadth among theory of computing researchers and help to keep the community together.” includes regular attendance at STOC and FOCS as one of several defining characteristics of theoretical computer scientists.
Awards
The Gödel Prize for outstanding papers in theoretical computer science is presented alternately at STOC and at the International Colloquium on Automata, Languages and Programming (ICALP); the Knuth Prize for outstanding contributions to the foundations of computer science is presented alternately at STOC and at FOCS.
Since 2003, STOC has presented one or more Best Paper Awards to recognize papers of the highest quality at the conference. In addition, the Danny Lewin Best Student Paper Award is awarded to the author(s) of the best student-authored paper in STOC. The award is named in honor of Daniel M. Lewin, an American-Israeli mathematician and entrepreneur who co-founded Internet company Akamai Technologies, and was one of the first victims of the September 11 attacks.
History
STOC was first organised on 5–7 May 1969, in Marina del Rey, California, United States. The conference chairman was Patrick C. Fischer, and the program committee consisted of Michael A. Harrison, Robert W. Floyd, Juris Hartmanis, Richard M. Karp, Albert R. Meyer, and Jeffrey D. Ullman.
Early seminal papers in STOC include Cook (1971), which introduced the concept of NP-completeness (see also Cook–Levin theorem).
Location
STOC was organised in Canada in 1992, 1994, 2002, and 2008, and in Greece in 2001; all other meetings in 1969–2009 have been held in the United States. STOC was part of the Federated Computing Research Conference (FCRC) in 1993, 1996, 1999, 2003, 2007, and 2011.
Invited speakers
2004
2005
2006
2007
2008
2009
2010
2011
2013
2014
2015
2016
2017
See also
Conferences in theoretical computer science.
List of computer science conferences contains other academic conferences in computer science.
List of computer science awards
Notes
References
External links
STOC proceedings information in DBLP.
STOC proceedings in the ACM Digital Library.
Citation Statistics for FOCS/STOC/SODA, Piotr Indyk and Suresh Venkatasubramanian, July 2007.
Theoretical computer science conferences
Recurring events established in 1969
Association for Computing Machinery conferences |
47947460 | https://en.wikipedia.org/wiki/SOMA%20Messenger | SOMA Messenger | SOMA Messenger is a cross-platform instant messaging and communication application that specializes in video calls and voice calls for smartphones. Users can also send each other text messages, emoticons, images, videos, voice messages, contacts, user location as well as create group chats, group video calls and conference calls.
It was first released in July 2015 and grew to 10 million users within 30 days of its release, making it one of the fastest-growing messaging apps globally. As of 1 July 2020, the app is no longer available for iOS or Android.
In August 2015, it was the most downloaded app on both iOS and Android in every country in the Middle East.
SOMA Messenger is headquartered in San Francisco, California with branch offices in China and the United Arab Emirates, with Latin America and Europe offices opening soon. SOMA Messenger has a total of 35 employees globally.
Etymology
The name of the software, SOMA, stands for "Simple Optimized Messaging App". It is also a reference to the SoMa (South of Market) district of San Francisco, California.
History
Instanza Inc. was founded in 2011 at Harvard University to address the problem of communicating with people across different time zones. It was one of the first startups incubated in the Harvard Innovation Lab just outside Boston.
At the time, the company developed the communication app Coco Voice; four years later, it launched SOMA Messenger, a messenger for the global community. It offered a broader set of user-friendly communication tools focused on consumer security and privacy.
Technical
SOMA Messenger was launched with the claim of being the world's fastest messenger app, delivering high-quality voice calls, video calls and text messaging with a wide range of other communication features and capabilities with no buffering.
SOMA Messenger uses proprietary distributed server technology with servers strategically placed in multiple countries to provide fast speeds and stability. In addition to making voice and video calls more stable, SOMA Messenger's distributed server infrastructure allows for voice and video calls to be accessed in high quality in almost every country in the world. Each interaction on SOMA Messenger is handled by a server in the nearest country to the user.
Features
SOMA Messenger supports unlimited free service with no additional charges for international calls or messages. After downloading SOMA Messenger, users can invite their friends to also sign up for the service. Users can send text messages to their contacts even if the recipient hasn't downloaded the app with these messages including a link for them to download SOMA Messenger as well.
Users can select from a variety of template status messages, and can track their usage statistics via their profile. Users can see the number of messages, calls and photos they've sent and received.
In November 2015, SOMA Messenger announced that it launched free, high-quality group video and voice calls on mobile, for up to four participants. The new video and voice features function cohesively within a group communication, allowing users the choice to have the conversation via text, voice, video, or combination of all three.
SOMA Messenger supports text message, push-to-talk voice message, video message, emoticons as well as contact and location sharing.
SOMA Messenger supports group messaging with up to 500 people.
SOMA Messenger supports end-to-end encryption for all messages and communication using a cryptographic protocol based on 2048-bit RSA and 256-bit AES.
Security
SOMA Messenger has strict security policies that apply to all messages, including voice and video calls, texts, images, or voice messages, everywhere in the world. All messages and calls are encrypted using a combination of 2048-bit RSA and 256-bit AES.
All messages, no matter where they come from or where they are sent, are permanently deleted from SOMA Messenger's servers immediately after delivery. Undelivered messages expire and are permanently deleted after seven days from the server. Messages are never stored on SOMA Messenger's servers or in any cloud after they are deleted, and phone numbers stored in users address books cannot be seen by SOMA Messenger. Chat history and message content is stored only on a user's device.
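SOMA's wire protocol has not been published in detail, but the general technique described above, wrapping a fresh 256-bit AES key for each message under a 2048-bit RSA public key, is known as hybrid encryption. The following sketch, using the Python cryptography package with illustrative names and parameters, shows that pattern in general terms; it is not SOMA's implementation.

```python
# Generic hybrid-encryption sketch (NOT SOMA's actual protocol): a random
# 256-bit AES key encrypts the message, and a 2048-bit RSA public key
# encrypts that AES key. All names and parameters are illustrative.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = recipient_key.public_key()
OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_message(plaintext: bytes):
    aes_key = AESGCM.generate_key(bit_length=256)   # fresh per-message key
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    wrapped_key = public_key.encrypt(aes_key, OAEP)  # RSA-wrap the AES key
    return wrapped_key, nonce, ciphertext

def decrypt_message(wrapped_key, nonce, ciphertext):
    aes_key = recipient_key.decrypt(wrapped_key, OAEP)
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)
```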
In September 2015, SOMA Messenger released security updates for Android and iOS. The updates included optimization of the messenger's secure encryption algorithm. The makers claim it is "safe enough for the CIA."
Privacy
SOMA Messenger asks for only the essential user permissions required for the app to function. These permissions are requested in order for the app's basic functions to work such as sending photos, recording voice messages, voice and video chatting, and location sharing. End-to-end encryption is used for maximum privacy and SOMA Messenger stores the key only on a user's device. SOMA Messenger does not have access to users’ address books or phone numbers.
In September 2015, SOMA Messenger updated the app's user permission requests by removing all redundant permissions. As of the update, SOMA Messenger requires fewer permissions than similar messaging services such as WhatsApp and Skype.
See also
Comparison of instant messaging clients
References
External links
Official Website
Android (operating system) software
IOS software
Instant messaging clients
Cross-platform mobile software
Communication software
Companies based in California
Companies based in San Francisco |
40934 | https://en.wikipedia.org/wiki/Component | Component | Component may refer to:
In engineering, science, and technology
Generic systems
System components, an entity with discrete structure, such as an assembly or software module, within a system considered at a particular level of analysis
Lumped element model, a model of spatially distributed systems
Electrical
Component video, a type of analog video information that is transmitted or stored as two or more separate signals
Electronic components, the constituents of electronic circuits
Symmetrical components, in electrical engineering, analysis of unbalanced three-phase power systems
Mathematics
Color model, a way of describing how colors can be represented, typically as multiple values or color components
Component (group theory), a quasi-simple subnormal sub-group
Connected component (graph theory), a maximal connected subgraph
Connected component (topology), a maximal connected subspace of a topological space
Vector component, result of the decomposition of a vector into various directions
Software
Component (UML), definition of component in the Unified Modeling Language
Component-based software engineering, a field within software engineering dealing with reusable software elements
Software component, a reusable software element with a specification, used in component-based software engineering
Other sciences
Component (thermodynamics), a chemically independent constituent of a phase of a system
Other uses
Component (VTA), a light-rail station in San Jose, California
Part of the grammatical structure of a sentence, a concept relating to the catena
Component ingredient, in a culinary dish
See also
Composition (disambiguation)
Decomposition (disambiguation)
Giant component
Identity component
Irreducible component
Spare part
Strongly connected component
Tangential and normal components
:Category:Components |
16872621 | https://en.wikipedia.org/wiki/ARPA%20Host%20Name%20Server%20Protocol | ARPA Host Name Server Protocol | The ARPA Host Name Server Protocol (NAMESERVER), is an obsolete network protocol used in translating a host name to an Internet address. IANA Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) port 42 for NAMESERVER; this port is more commonly used by the Windows Internet Name Service (WINS) on Microsoft operating systems.
Application
The NAMESERVER protocol is used by the DARPA Trivial Name Server, a server process called tnamed that is provided in some implementations of UNIX.
Replacement
Support for the NAMESERVER protocol has been deprecated, and may not be available in the latest implementations of all UNIX operating systems. The Domain Name System (DNS) has replaced the ARPA Host Name Server Protocol and the DARPA Trivial Name Server.
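For comparison, the host-name-to-address translation that NAMESERVER once provided is nowadays obtained through the DNS-backed system resolver. The short sketch below, with an illustrative host name, shows the modern equivalent in Python.

```python
# Resolving a host name to IP addresses through the system's DNS-backed
# resolver. The host name "example.org" is illustrative.
import socket

def resolve(hostname):
    infos = socket.getaddrinfo(hostname, None)
    # Deduplicate the addresses returned for the various socket types.
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    print(resolve("example.org"))
```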
See also
List of TCP and UDP port numbers
List of Unix operating systems
Domain Name System
References
External links
IEN 116 Internet Name Server
Development of the domain name system
Application layer protocols
Domain Name System |
31736337 | https://en.wikipedia.org/wiki/Libav | Libav | Libav is an abandoned free software project, forked from FFmpeg in 2011, that contains libraries and programs for handling multimedia data.
History
Fork from FFmpeg
The Libav project was a fork of the FFmpeg project. It was announced on March 13, 2011, by a group of FFmpeg developers. The split stemmed from disagreements over project management and differing goals: FFmpeg supporters wanted to keep up development velocity in favour of more features, while Libav supporters and developers wanted to improve the state of the code and take the time to design better APIs.
The maintainer of the FFmpeg packages for Debian and Ubuntu, being one of the group of developers who forked FFmpeg, switched the packages to this fork in 2011. Hence, most software on these systems that depended on FFmpeg automatically switched to Libav. On July 8, 2015, Debian announced it would return to FFmpeg for various, technical reasons. Several arguments justified this step. Firstly, FFmpeg had a better record of responding to vulnerabilities than Libav. Secondly, Mateusz "j00ru" Jurczyk, a security-oriented developer at Google, argued that all issues he found in FFmpeg were fixed in a timely manner, while Libav was still affected by various bugs. Finally, FFmpeg supported a far wider variety of codecs and containers than Libav.
Libav is an abandoned software project, with Libav developers either returning to FFmpeg, moving to other multimedia projects like the AV1 video codec, or leaving the multimedia field entirely.
Confusion
At the beginning of this fork, Libav and FFmpeg separately developed their own versions of the ffmpeg command. Libav then renamed their ffmpeg to avconv to distance themselves from the FFmpeg project. During the transition period, when a Libav user typed ffmpeg, there was a message telling the user that the ffmpeg command was deprecated and avconv had to be used instead. This confused some users into thinking that FFmpeg (the project) was dead.
This message was removed upstream when ffmpeg was finally removed from the Libav sources. In June 2012, on Ubuntu 12.04, the message was re-worded, but that new "deprecated" message caused even more user confusion. Starting with Ubuntu 15.04 "Vivid", FFmpeg's ffmpeg is back in the repositories again.
To further complicate matters, Libav chose a name that was used by FFmpeg to refer to its libraries (libavcodec, libavformat, etc.). For example, the libav-user mailing list, for questions and discussions about using the FFmpeg libraries, is unrelated to the Libav project.
Software using Libav instead of FFmpeg
Debian followed Libav when it was announced, and announced it would return to FFmpeg for Debian Stretch (9.0).
MPlayer2, a defunct fork of MPlayer, used Libav exclusively, but could be used with GStreamer with its public API. The MPV media player no longer supports Libav due to missing API changes.
Legal aspects
Codecs
Libav contains more than 100 codecs. Many codecs that compress information have been claimed by patent holders. Such claims may be enforceable in countries like the United States which have implemented software patents, but are considered unenforceable or void in countries that have not implemented software patents.
Logo
The Libav logo uses a zigzag pattern that references how MPEG video codecs handle entropy encoding. It was previously the logo of the FFmpeg project until Libav was forked from it. Following the fork, in 2011 one of the Libav developers Måns Rullgård claimed copyright over the logo and requested FFmpeg cease and desist from using it. FFmpeg subsequently altered their logo into a 3D version.
Google Summer of Code participation
Libav participated in the Google Summer of Code program in 2011 and 2012.
With participation in the Google Summer of Code, Libav has had many new features and improvements developed, including a WMVP/WVP2 decoder, hardware accelerated H.264 decoding on Android, and G.723.1 codec support.
Technical details
Components
Libav primarily consists of libavcodec, which is an audio/video codec library used by several other projects, libavformat, which is an audio/video container muxing and demuxing library, and avconv, which is a multimedia manipulation tool similar to FFmpeg's ffmpeg or GStreamer's gst-launch-1.0 command.
The command-line programs:
avconv: A video and audio converter that can also grab from a live audio/video source (a brief usage sketch follows this list).
avserver: A streaming server for both audio and video.
avplay: A very simple and portable media player using the Libav libraries and the SDL library.
avprobe: Gathers information from multimedia streams and prints it in human- and machine-readable fashion.
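As a rough illustration of how avconv is typically driven from other software, the following Python sketch simply shells out to it. This is a hedged example, not part of any Libav API: the file names are hypothetical placeholders, avconv must be on the PATH, and the output codecs are chosen by avconv itself from the target extension.
import subprocess

def convert(source, target):
    # Invoke avconv: -i selects the input file, the final argument is the output.
    # check=True raises CalledProcessError if avconv exits with a non-zero status.
    subprocess.run(["avconv", "-i", source, target], check=True)

convert("input.mkv", "output.mp4")   # hypothetical file names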
The libraries:
libavcodec: A library containing all the Libav audio/video encoders and decoders.
libavfilter: The substitute for vhook which allows the video/audio to be modified or examined between the decoder and the encoder.
libavformat: A library containing demuxers and muxers for audio and video container formats.
libavresample: A library containing audio resampling routines.
libavutil: A helper library containing routines common to different parts of Libav. This library includes Adler-32, CRC, MD5, SHA-1, an LZO decompressor, a Base64 encoder/decoder, and DES, RC4 and AES encrypters/decrypters.
libswscale: A library containing video image scaling and colorspace/pixel-format conversion routines.
Contained codecs
Numerous free and open-source implementations of existing algorithms for the (usually lossy) compression and decompression of audio or video data, called codecs, are available. Note that an algorithm can be subject to patent law in some jurisdictions. The following lists the codecs contained in the Libav libraries:
Video codecs
Libav includes video decoders and/or encoders for the following formats:
Adobe Flash Player related video codecs: Screen video, Screen video 2, Sorenson 3 Codec, VP6 and Flash Video (FLV)
Asus v1
Asus v2
AVS (decoding only)
CamStudio (decoding only)
Cinepak (decoding only)
Creative YUV (CYUV, decoding only)
Dirac (via libschroedinger)
DNxHD
Duck TrueMotion v1 (decoding only)
Duck TrueMotion v2 (decoding only)
Flash Screen Video
FFV1
ITU-T video standards: H.261, H.262/MPEG-2 Part 2, H.263 and H.264/MPEG-4 AVC
H.263
H.264/MPEG-4 AVC (native decoder, encoding through x264)
H.265/HEVC since 2014-02-12
Huffyuv
id Software RoQ Video
Intel Indeo (decoding only)
ISO/IEC/ITU-T JPEG image standards: JPEG, JPEG-LS and JPEG 2000
Lagarith (decoding only)
LOCO (decoding only)
DVD Forum standards related / Dolby audio codecs: MLP (a.k.a. TrueHD) and AC-3
Mimic (decoding only)
MJPEG
MPEG-1
MPEG-2/H.262
ISO/IEC MPEG video standards: MPEG-1 Part 2, H.262/MPEG-2 Part 2, MPEG-4 Part 2 and H.264/MPEG-4 AVC
MPEG-4 Part 2 (the format used for example by the popular DivX and Xvid codecs)
On2 VP8 (native decoder, encoding through libvpx)
On2: Duck TrueMotion 1, Duck TrueMotion 2, VP3, VP5, VP6 and VP8
Apple ProRes
Apple Computer QuickDraw (decoding only)
QuickTime related video codecs: Cinepak, Motion JPEG, ProRes, Sorenson 3 Codec, Animation codec (RLE), Apple Video (RPZA), Graphics Codec (SMC)
RAD Game Tools: Smacker video and Bink video
RenderWare: TXD
RealVideo RV10 and RV20
RealVideo RV30 and RV40 (decoding only)
RealPlayer related video codecs: RealVideo 1, 2, 3 and 4
VC-1 (decoding only)
Smacker video (decoding only)
Sorenson SVQ1
Sorenson SVQ3 (decoding only)
Theora (native decoder, encoding through libtheora)
Sierra VMD Video (decoding only)
VMware VMnc (decoding only)
Westwood Studios VQA (decoding only)
Windows Media Player related video codecs: Microsoft RLE, Microsoft Video 1, Cinepak, Indeo 2, 3 and 5, Motion JPEG, Microsoft MPEG-4 v1, v2 and v3, WMV1, WMV2 and WMV3 (a.k.a. VC-1)
SMPTE video standards: VC-1 (a.k.a. WMV3), VC-2 (a.k.a. Dirac), VC-3 (a.k.a. AVID DNxHD) and DPX image
Wing Commander/Xan Video (decoding only)
Audio codecs
Libav includes decoders and encoders for the following formats:
8SVX (decoding only)
Adobe Flash Player related audio codecs: Adobe SWF ADPCM and Nellymoser Asao
AAC
AC-3
3GPP vocoder standards: AMR-NB, AMR-WB (a.k.a. G.722.2)
ITU-T vocoder standards: G.711 μ-law, G.711 A-law, G.721 (a.k.a. G.726 32k), G.722, G.722.2 (a.k.a. AMR-WB), G.723 (a.k.a. G.726 24k and 40k), G.723.1, G.726, G.729 and G.729D
Apple Lossless
ATRAC3 (decoding only)
Cook Codec (decoding only)
DTS (encoder is highly experimental)
EA ADPCM (decoding only)
E-AC-3
FLAC (24/32 bit support for decoding only)
GSM 06.10 (native decoder, encoding through libgsm)
GSM related voice codecs: Full Rate
Intel Music Coder (decoding only)
Meridian Lossless Packing / Dolby TrueHD (decoding only)
Monkey's Audio (decoding only)
MP2
MP3 (native decoder, encoding through LAME)
ISO/IEC MPEG audio standards: MP1, MP2, MP3, AAC, HE-AAC and MPEG-4 ALS
Nellymoser Asao Codec in Flash
NTT: TwinVQ
Opus (via libopus)
QCELP (decoding only)
QDM2 (decoding only)
QuickTime related audio codecs: QDesign Music Codec 2 and ALAC
RealAudio 1.0
RealAudio 2.0 (decoding only)
RealPlayer related audio codecs: RealAudio 3, 6, 7, 8, 9 and 10 (a.k.a. ralf for RealAudioLosslessFormat)
RealPlayer related voice codecs: RealAudio 1, 2 (variant of G.728), 4 and 5
Shorten (decoding only)
SMPTE audio standards: SMPTE 302M
Sony: ATRAC1 and ATRAC3
Speex (via libspeex)
Truespeech
TTA (decoding only)
TwinVQ (decoding only)
Vorbis
WavPack (decoding only)
Windows Media Audio 1
Windows Media Audio 2
Windows Media Audio 9 Professional (decoding only)
Windows Media Audio Voice (decoding only)
Windows Media Player related audio codecs: WMA1, WMA2, WMA Pro, and WMA Lossless
Windows Media Player related voice codecs: WMA Voice and MS-GSM
Supported file formats
In addition to the aforementioned codecs, Libav also supports several file formats (file formats designed to contain audio and/or video data and subtitles are simply called "containers"):
ASF
AVI and also input from AviSynth
BFI
CAF
FLV
GXF, General eXchange Format, SMPTE 360M
IFF
RL2
ISO base media file format (including QuickTime, 3GP and MP4)
Matroska (including WebM)
Maxis XA
MPEG program stream
MPEG transport stream (including AVCHD)
MXF, Material eXchange Format, SMPTE 377M
MSN Webcam stream
NUT
NUV (MythTV NuppelVideo file format)
Ogg
OMA
TXD
WTV
WebP
Supported protocols
Support for several communications protocols is also contained in Libav. Here is a list:
IETF standards: TCP, UDP, Gopher, HTTP, RTP, RTSP and SDP
Apple related protocols: HTTP Live Streaming
RealMedia related protocols: RealMedia RTSP/RDT
Adobe related protocols: RTMP, RTMPT (via librtmp), RTMPE (via librtmp), RTMPTE (via librtmp) and RTMPS (via librtmp)
Microsoft related protocols: MMS over TCP and MMS over HTTP
See also
VLC media player, a cross-platform media player that uses libavcodec as its codec base and adds further codecs
Open source codecs and containers
References
External links
C (programming language) libraries
Cross-platform free software
Free codecs
Free computer libraries
Free music software
Free software programmed in C
Free video conversion software
Multimedia frameworks
Video libraries
Software that uses FFmpeg |
10633237 | https://en.wikipedia.org/wiki/Universal%20Character%20Set%20characters | Universal Character Set characters | The Unicode Consortium (UC) and the International Organization for Standardization (ISO) collaborate on the Universal Character Set (UCS). The UCS is an international standard to map characters used in natural language, mathematics, music, and other domains to machine-readable values. By creating this mapping, the UCS enables computer software vendors to interoperate and transmit UCS-encoded text strings from one to another. Because it is a universal map, it can be used to represent multiple languages at the same time. This avoids the confusion of using multiple legacy character encodings, which can result in the same sequence of codes having multiple meanings and thus be improperly decoded if the wrong one is chosen.
UCS has a potential capacity to encode over 1 million characters. Each UCS character is abstractly represented by a code point, which is an integer between 0 and 1,114,111, used to represent each character within the internal logic of text-processing software (1,114,112 = 2^20 + 2^16 = 17 × 2^16 code points, hexadecimal 110000). As of Unicode 14.0, released in September 2021, 288,512 (26%) of these code points are allocated, including 144,762 (13%) assigned characters, 137,468 (12.3%) reserved for private use, 2,048 for surrogates, and 66 designated noncharacters, leaving 825,600 (74%) unallocated. The number of encoded characters is made up as follows:
144,532 graphical characters (some of which do not have a visible glyph, but are still counted as graphical)
230 special purpose characters for control and formatting.
ISO maintains the basic mapping of characters from character name to code point. Often the terms "character" and "code point" will get used interchangeably. However, when a distinction is made, a code point refers to the integer of the character: what one might think of as its address. While a character in UCS 10646 includes the combination of the code point and its name, Unicode adds many other useful properties to the character set, such as block, category, script, and directionality.
In addition to the UCS, Unicode also provides other implementation details such as:
transcoding mappings between UCS and other character sets
different collations of characters and character strings for different languages
an algorithm for laying out bidirectional text, where text on the same line may shift between left-to-right and right-to-left
a case-folding algorithm
Computer software end users enter these characters into programs through various input methods. Input methods can be through keyboard or a graphical character palette.
The UCS can be divided in various ways, such as by plane, block, character category, or character property.
Character reference overview
An HTML or XML numeric character reference refers to a character by its Universal Character Set/Unicode code point, and uses the format
&#nnnn;
or
&#xhhhh;
where nnnn is the code point in decimal form, and hhhh is the code point in hexadecimal form. The x must be lowercase in XML documents. The nnnn or hhhh may be any number of digits and may include leading zeros. The hhhh may mix uppercase and lowercase, though uppercase is the usual style.
In contrast, a character entity reference refers to a character by the name of an entity which has the desired character as its replacement text. The entity must either be predefined (built into the markup language) or explicitly declared in a Document Type Definition (DTD). The format is the same as for any entity reference:
&name;
where name is the case-sensitive name of the entity. The semicolon is required.
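To make the two reference forms concrete, here is a minimal sketch using Python's standard html module; the accented letter é (U+00E9) is used purely as an example, and the decimal, hexadecimal and named forms all resolve to the same code point.
import html

assert html.unescape("&#233;") == "é"       # decimal numeric character reference for U+00E9
assert html.unescape("&#xE9;") == "é"       # hexadecimal form of the same code point
assert html.unescape("&eacute;") == "é"     # predefined character entity reference
print(f"U+{ord('é'):04X}")                  # prints: U+00E9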
Planes
Unicode and ISO divide the set of code points into 17 planes, each capable of containing 65,536 distinct characters, or 1,114,112 in total. As of 2021 (Unicode 14.0), ISO and the Unicode Consortium have only allocated characters and blocks in seven of the 17 planes. The others remain empty and reserved for future use.
Most characters are currently assigned to the first plane: the Basic Multilingual Plane. This is to help ease the transition for legacy software since the Basic Multilingual Plane is addressable with just two octets. The characters outside the first plane usually have very specialized or rare use.
Each plane corresponds with the value of the one or two hexadecimal digits (0—9, A—F) preceding the four final ones: hence U+24321 is in Plane 2, U+4321 is in Plane 0 (implicitly read U+04321), and U+10A200 would be in Plane 16 (hex 10 = decimal 16). Within one plane, the range of code points is hexadecimal 0000—FFFF, yielding a maximum of 65536 code points. Planes restrict code points to a subset of that range.
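As a small illustrative sketch (not part of any standard API), the plane of a code point can be recovered by discarding the low 16 bits, exactly as the hexadecimal reading above suggests:
def plane_of(code_point):
    # The plane number is the code point divided by 0x10000 (a right shift by 16 bits).
    if not 0 <= code_point <= 0x10FFFF:
        raise ValueError("outside the UCS code space")
    return code_point >> 16

assert plane_of(0x4321) == 0      # Basic Multilingual Plane
assert plane_of(0x24321) == 2
assert plane_of(0x10FFFD) == 16   # last private-use code point, Plane 16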
Blocks
Unicode adds a block property to UCS that further divides each plane into separate blocks. Each block is a grouping of characters by their use such as "mathematical operators" or "Hebrew script characters". When assigning characters to previously unassigned code points, the Consortium typically allocates entire blocks of similar characters: for example all the characters belonging to the same script or all similarly purposed symbols get assigned to a single block. Blocks may also maintain unassigned or reserved code points when the Consortium expects a block to require additional assignments.
The first 256 code points in the UCS correspond with those of ISO 8859-1, the most popular 8-bit character encoding in the Western world. As a result, the first 128 characters are also identical to ASCII. Though Unicode refers to these as Latin script blocks, these two blocks contain many characters that are commonly useful outside of the Latin script. In general, not all characters in a given block need be of the same script, and a given script can occur in several different blocks.
Categories
Unicode assigns to every UCS character a general category and subcategory. The general categories are: letter, mark, number, punctuation, symbol, or control (in other words a formatting or non-graphical character).
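A minimal sketch using Python's standard unicodedata module shows how these general categories are exposed programmatically; the sample characters are arbitrary.
import unicodedata

for ch in ["A", "é", "5", ",", "€", "\u0301", "\u00A0"]:
    # Print the code point, its two-letter general category, and its character name.
    print(f"U+{ord(ch):04X}  {unicodedata.category(ch)}  {unicodedata.name(ch)}")
# Output (annotated):
# U+0041  Lu  LATIN CAPITAL LETTER A           (letter, uppercase)
# U+00E9  Ll  LATIN SMALL LETTER E WITH ACUTE  (letter, lowercase)
# U+0035  Nd  DIGIT FIVE                       (number, decimal digit)
# U+002C  Po  COMMA                            (punctuation, other)
# U+20AC  Sc  EURO SIGN                        (symbol, currency)
# U+0301  Mn  COMBINING ACUTE ACCENT           (mark, nonspacing)
# U+00A0  Zs  NO-BREAK SPACE                   (separator, space)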
Types include:
Modern, Historic, and Ancient Scripts. As of 2021 (Unicode 14.0), the UCS identifies 159 scripts that are, or have been, used throughout the world. Many more are in various stages of approval for future inclusion in the UCS.
International Phonetic Alphabet. The UCS devotes several blocks (over 300 characters) to characters for the International Phonetic Alphabet.
Combining Diacritical Marks. An important advance conceived by Unicode in designing the UCS and related algorithms for handling text was the introduction of combining diacritic marks. By providing accents that can combine with any letter character, the Unicode and the UCS reduce significantly the number of characters needed. While the UCS also includes precomposed characters, these were included primarily to facilitate support within UCS for non-Unicode text processing systems.
Punctuation. Along with unifying diacritical marks, the UCS also sought to unify punctuation across scripts. However, many scripts also contain their own punctuation when that punctuation has no similar semantics in other scripts.
Symbols. Many mathematics, technical, geometrical and other symbols are included within the UCS. This provides distinct symbols with their own code point or character rather than relying on switching fonts to provide symbolic glyphs.
Currency.
Letterlike. These symbols appear like combinations of common Latin script letters, such as ℅. Unicode designates many of the letterlike symbols as compatibility characters, usually because they can be represented in plain text by a composing sequence of characters for which a single glyph is substituted: for example, substituting the glyph ℅ for the composed sequence of characters c/o.
Number Forms. Number forms primarily consist of precomposed fractions and Roman numerals. Like other areas of composing sequences of characters, the Unicode approach prefers the flexibility of composing fractions by combining characters together; in this case, to create fractions, one combines numbers with the fraction slash character (U+2044). As an example of the flexibility this approach provides, there are nineteen precomposed fraction characters included within the UCS, but there are infinitely many possible fractions. By using composing characters, the infinity of fractions is handled with 11 characters (the digits 0–9 and the fraction slash); no character set could include code points for every precomposed fraction. Ideally a text system should present the same glyphs for a fraction whether it is one of the precomposed fractions (such as ⅓) or a composing sequence of characters (such as 1⁄3); doing so ensures that precomposed fractions and combining-sequence fractions appear consistent next to each other. In practice, however, text renderers such as web browsers are typically not that sophisticated in their Unicode and text handling.
Arrows.
Mathematical.
Geometric Shapes.
Legacy Computing.
Control Pictures. Graphical representations of many control characters.
Box Drawing.
Block Elements.
Braille Patterns.
Optical Character Recognition.
Technical.
Dingbats.
Miscellaneous Symbols.
Emoticons.
Symbols and Pictographs.
Alchemical Symbols.
Game Pieces (chess, checkers, go, dice, dominoes, mahjong, playing cards, and many others).
Chess Symbols
Tai Xuan Jing.
Yijing Hexagram Symbols.
CJK. Devoted to ideographs and other characters to support languages in China, Japan, Korea (CJK), Taiwan, Vietnam, and Thailand.
Radicals and Strokes.
Ideographs. By far the largest portion of the UCS is devoted to ideographs used in languages of Eastern Asia. While the glyph representation of these ideographs has diverged in the languages that use them, the UCS unifies these Han characters in what Unicode refers to as Unihan (for Unified Han). With Unihan, the text layout software must work together with the available fonts and these Unicode characters to produce the appropriate glyph for the appropriate language. Despite unifying these characters, the UCS still includes over 92,000 Unihan ideographs.
Musical Notation.
Duployan shorthands.
Sutton SignWriting.
Compatibility Characters. Several blocks in the UCS are devoted almost entirely to compatibility characters. Compatibility characters are those included for support of legacy text handling systems that do not make a distinction between character and glyph the way Unicode does. For example, many Arabic letters are represented by a different glyph when the letter appears at the end of a word than when the letter appears at the beginning of a word. Unicode's approach prefers to have these letters mapped to the same character for ease of internal machine text processing and storage. To complement this approach, the text software must select different glyph variants for display of the character based on its context. Over 4000 characters are included for such compatibility reasons.
Control Characters.
Surrogates. The UCS includes 2048 code points in the Basic Multilingual Plane (BMP) for surrogate code point pairs. Together these surrogates allow any code point in the sixteen other planes to be addressed by using two surrogate code points. This provides a simple built-in method for encoding the roughly 20.1-bit UCS code space within a 16-bit encoding such as UTF-16. In this way UTF-16 can represent any character within the BMP with a single 16-bit code unit. Characters outside the BMP are then encoded using two 16-bit code units (4 octets in total), the surrogate pairs.
Private Use. The consortium provides several private use blocks and planes that can be assigned characters within various communities, as well as operating system and font vendors.
Noncharacters. The consortium guarantees certain code points will never be assigned a character and calls these code points noncharacters. The last two code points of each plane (those ending in FFFE and FFFF) are such code points. There are a few others interspersed throughout the Basic Multilingual Plane, the first plane.
Special-purpose characters
Unicode codifies over a hundred thousand characters. Most of those represent graphemes for processing as linear text. Some, however, either do not represent graphemes, or, as graphemes, require exceptional treatment. Unlike the ASCII control characters and other characters included for legacy round-trip capabilities, these other special-purpose characters endow plain text with important semantics.
Some special characters can alter the layout of text, such as the zero-width joiner and zero-width non-joiner, while others do not affect text layout at all, but instead affect the way text strings are collated, matched or otherwise processed. Other special-purpose characters, such as the mathematical invisibles, generally have no effect on text rendering, though sophisticated text layout software may choose to subtly adjust spacing around them.
Unicode does not specify the division of labor between font and text layout software (or "engine") when rendering Unicode text. Because the more complex font formats, such as OpenType or Apple Advanced Typography, provide for contextual substitution and positioning of glyphs, a simple text layout engine might rely entirely on the font for all decisions of glyph choice and placement. In the same situation a more complex engine may combine information from the font with its own rules to achieve its own idea of best rendering. To implement all recommendations of the Unicode specification, a text engine must be prepared to work with fonts of any level of sophistication, since contextual substitution and positioning rules do not exist in some font formats and are optional in the rest. The fraction slash is an example: complex fonts may or may not supply positioning rules in the presence of the fraction slash character to create a fraction, while fonts in simple formats cannot.
Byte order mark
When appearing at the head of a text file or stream, the byte order mark (BOM) U+FEFF hints at the encoding form and its byte order.
If the stream's first byte is 0xFE and the second 0xFF, then the stream's text is not likely to be encoded in UTF-8, since those bytes are invalid in UTF-8. It is also not likely to be UTF-16 in little-endian byte order because 0xFE, 0xFF read as a 16-bit little endian word would be U+FFFE, which is meaningless. The sequence also has no meaning in any arrangement of UTF-32 encoding, so, in summary, it serves as a fairly reliable indication that the text stream is encoded as UTF-16 in big-endian byte order. Conversely, if the first two bytes are 0xFF, 0xFE, then the text stream may be assumed to be encoded as UTF-16LE because, read as a 16-bit little-endian value, the bytes yield the expected 0xFEFF byte order mark. This assumption becomes questionable, however, if the next two bytes are both 0x00; either the text begins with a null character (U+0000), or the correct encoding is actually UTF-32LE, in which the full 4-byte sequence FF FE 00 00 is one character, the BOM.
The UTF-8 sequence corresponding to U+FEFF is 0xEF, 0xBB, 0xBF. This sequence has no meaning in other Unicode encoding forms, so it may serve to indicate that the stream is encoded as UTF-8.
The Unicode specification does not require the use of byte order marks in text streams. It further states that they should not be used in situations where some other method of signaling the encoding form is already in use.
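The sniffing logic described above can be sketched in a few lines. This is a simplified illustration rather than a complete detector: it assumes the buffer holds at least the first four bytes of the stream, and, as noted above, a UTF-16LE stream beginning with a null character would be indistinguishable from UTF-32LE by this test alone.
from typing import Optional

def sniff_bom(data: bytes) -> Optional[str]:
    # Order matters: the UTF-32LE BOM begins with the same bytes as the UTF-16LE BOM.
    if data.startswith(b"\xEF\xBB\xBF"):
        return "UTF-8"
    if data.startswith(b"\xFF\xFE\x00\x00"):
        return "UTF-32LE"
    if data.startswith(b"\x00\x00\xFE\xFF"):
        return "UTF-32BE"
    if data.startswith(b"\xFF\xFE"):
        return "UTF-16LE"
    if data.startswith(b"\xFE\xFF"):
        return "UTF-16BE"
    return None  # no BOM: the encoding must be signalled some other way

assert sniff_bom("text".encode("utf-8-sig")) == "UTF-8"
assert sniff_bom("text".encode("utf-16")) in ("UTF-16LE", "UTF-16BE")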
Mathematical invisibles
Primarily for mathematics, the Invisible Separator (U+2063) provides a separator between characters where punctuation or space may be omitted such as in a two-dimensional index like ij. Invisible Times (U+2062) and Function Application (U+2061) are useful in mathematics text where the multiplication of terms or the application of a function is implied without any glyph indicating the operation. Unicode 5.1 introduces the Mathematical Invisible Plus character as well (U+2064) which may indicate that an integral number followed by a fraction should denote their sum, but not their product.
Fraction slash
The fraction slash character (U+2044) has special behavior in the Unicode Standard: (section 6.2, Other Punctuation)
The standard form of a fraction built using the fraction slash is defined as follows: any sequence of one or more decimal digits (General Category = Nd), followed by the fraction slash, followed by any sequence of one or more decimal digits. Such a fraction should be displayed as a unit, such as ¾. If the displaying software is incapable of mapping the fraction to a unit, then it can also be displayed as a simple linear sequence as a fallback (for example, 3/4). If the fraction is to be separated from a previous number, then a space can be used, choosing the appropriate width (normal, thin, zero width, and so on). For example, 1 + ZERO WIDTH SPACE + 3 + FRACTION SLASH + 4 is displayed as 1¾.
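The standard form defined above can be matched mechanically. The following minimal sketch uses a Python regular expression in which \d corresponds to General Category Nd; the sample sentence is invented.
import re

FRACTION = re.compile(r"\d+\u2044\d+")   # digits, U+2044 FRACTION SLASH, digits

text = "Mix 1\u20442 cup of sugar with 3\u20444 cup of flour."
print(FRACTION.findall(text))            # ['1⁄2', '3⁄4']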
By following this Unicode recommendation, text processing systems yield sophisticated symbols from plain text alone. Here the presence of the fraction slash character instructs the layout engine to synthesize a fraction from all consecutive digits preceding and following the slash. In practice, results vary because of the complicated interplay between fonts and layout engines. Simple text layout engines tend not to synthesize fractions at all, and instead draw the glyphs as a linear sequence as described in the Unicode fallback scheme.
More sophisticated layout engines face two practical choices: they can follow Unicode's recommendation, or they can rely on the font's own instructions for synthesizing fractions. By ignoring the font's instructions, the layout engine can guarantee Unicode's recommended behavior. By following the font's instructions, the layout engine can achieve better typography because placement and shaping of the digits will be tuned to that particular font at that particular size.
The problem with following the font's instructions is that the simpler font formats have no way to specify fraction synthesis behavior. Meanwhile, the more complex formats do not require the font to specify fraction synthesis behavior and therefore many do not. Most fonts of complex formats can instruct the layout engine to replace a plain text sequence such as "1⁄2" with the precomposed "½" glyph. But because many of them will not issue instructions to synthesize fractions, a plain text string such as "221⁄225" may well render as 22½25 (with the ½ being the substituted precomposed fraction, rather than synthesized). In the face of problems like this, those who wish to rely on the recommended Unicode behavior should choose fonts known to synthesize fractions or text layout software known to produce Unicode's recommended behavior regardless of font.
Bidirectional neutral formatting
Writing direction is the direction glyphs are placed on the page in relation to forward progression of characters in the Unicode string. English and other languages of Latin script have left-to-right writing direction. Several major writing scripts, such as Arabic and Hebrew, have right-to-left writing direction. The Unicode specification assigns a directional type to each character to inform text processors how sequences of characters should be ordered on the page.
While lexical characters (that is, letters) are normally specific to a single writing script, some symbols and punctuation marks are used across many writing scripts. Unicode could have created duplicate symbols in the repertoire that differ only by directional type, but chose instead to unify them and assign them a neutral directional type. They acquire direction at render time from adjacent characters. Some of these characters also have a bidi-mirrored property indicating the glyph should be rendered in mirror-image when used in right-to-left text.
The render-time directional type of a neutral character can remain ambiguous when the mark is placed on the boundary between directional changes. To address this, Unicode includes characters that have strong directionality, have no glyph associated with them, and are ignorable by systems that do not process bidirectional text:
Arabic letter mark (U+061C)
Left-to-right mark (U+200E)
Right-to-left mark (U+200F)
Surrounding a bidirectionally neutral character by the left-to-right mark will force the character to behave as a left-to-right character while surrounding it by the right-to-left mark will force it to behave as a right-to-left character. The behavior of these characters is detailed in Unicode's Bidirectional Algorithm.
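A small sketch using Python's standard unicodedata module shows the directional types involved: neutral punctuation reports "ON" and takes its direction from context, while the invisible marks carry strong directionality of their own.
import unicodedata

print(unicodedata.bidirectional("!"))        # 'ON'  other neutral: direction comes from context
print(unicodedata.bidirectional("A"))        # 'L'   strong left-to-right letter
print(unicodedata.bidirectional("\u05D0"))   # 'R'   Hebrew alef, strong right-to-left letter
print(unicodedata.bidirectional("\u061C"))   # 'AL'  ARABIC LETTER MARK
print(unicodedata.bidirectional("\u200E"))   # 'L'   LEFT-TO-RIGHT MARK
print(unicodedata.bidirectional("\u200F"))   # 'R'   RIGHT-TO-LEFT MARK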
Bidirectional general formatting
While Unicode is designed to handle multiple languages, multiple writing systems and even text that flows either left-to-right or right-to-left with minimal author intervention, there are special circumstances where the mix of bidirectional text can become intricate, requiring more author control. For these circumstances, Unicode includes the following characters to control the complex embedding of left-to-right text within right-to-left text and vice versa:
Left-to-right embedding (U+202A)
Right-to-left embedding (U+202B)
Pop directional formatting (U+202C)
Left-to-right override (U+202D)
Right-to-left override (U+202E)
Left-to-right isolate (U+2066)
Right-to-left isolate (U+2067)
First strong isolate (U+2068)
Pop directional isolate (U+2069)
Interlinear annotation characters
Interlinear Annotation Anchor (U+FFF9)
Interlinear Annotation Separator (U+FFFA)
Interlinear Annotation Terminator (U+FFFB)
Script-specific
Prefixed format control
Arabic Number Sign (U+0600)
Arabic Sign Sanah (U+0601)
Arabic Footnote Marker (U+0602)
Arabic Sign Safha (U+0603)
Arabic Sign Samvat (U+0604)
Arabic Number Mark Above (U+0605)
Arabic End of Ayah (U+06DD)
Syriac Abbreviation Mark (U+070F)
Arabic Pound Mark Above (U+0890)
Arabic Piastre Mark Above (U+0891)
Kaithi Number Sign (U+110BD)
Kaithi Number Sign Above (U+110CD)
Egyptian Hieroglyphs
Egyptian Hieroglyph Vertical Joiner (U+13430)
Egyptian Hieroglyph Horizontal Joiner (U+13431)
Egyptian Hieroglyph Insert At Top Start (U+13432)
Egyptian Hieroglyph Insert At Bottom Start (U+13433)
Egyptian Hieroglyph Insert At Top End (U+13434)
Egyptian Hieroglyph Insert At Bottom End (U+13435)
Egyptian Hieroglyph Overlay Middle (U+13436)
Egyptian Hieroglyph Begin Segment (U+13437)
Egyptian Hieroglyph End Segment (U+13438)
Brahmi
Brahmi Number Joiner (U+1107F)
Brahmi-derived script dead-character formation (Virama and similar diacritics)
Devanagari Sign Virama (U+094D)
Bengali Sign Virama (U+09CD)
Gurmukhi Sign Virama (U+0A4D)
Gujarati Sign Virama (U+0ACD)
Oriya Sign Virama (U+0B4D)
Tamil Sign Virama (U+0BCD)
Telugu Sign Virama (U+0C4D)
Kannada Sign Virama (U+0CCD)
Malayalam Sign Vertical Bar Virama (U+0D3B)
Malayalam Sign Circular Virama (U+0D3C)
Malayalam Sign Virama (U+0D4D)
Sinhala Sign Al-Lakuna (U+0DCA)
Thai Character Phinthu (U+0E3A)
Thai Character Yamakkan (U+0E4E)
Lao Sign Pali Virama (U+0EBA)
Myanmar Sign Virama (U+1039)
Tagalog Sign Virama (U+1714)
Tagalog Sign Pamudpod (U+1715)
Hanunoo Sign Pamudpod (U+1734)
Khmer Sign Viriam (U+17D1)
Khmer Sign Coeng (U+17D2)
Tai Tham Sign Sakot (U+1A60)
Tai Tham Sign Ra Haam (U+1A7A)
Balinese Adeg Adeg (U+1B44)
Sundanese Sign Pamaaeh (U+1BAA)
Sundanese Sign Virama (U+1BAB)
Batak Pangolat (U+1BF2)
Batak Panongonan (U+1BF3)
Syloti Nagri Sign Hasanta (U+A806)
Syloti Nagri Sign Alternate Hasanta (U+A82C)
Saurashtra Sign Virama (U+A8C4)
Rejang Virama (U+A953)
Javanese Pangkon (U+A9C0)
Meetei Mayek Virama (U+AAF6)
Kharoshthi Virama (U+10A3F)
Brahmi Virama (U+11046)
Brahmi Sign Old Tamil Virama (U+11070)
Kaithi Sign Virama (U+110B9)
Chakma Virama (U+11133)
Sharada Sign Virama (U+111C0)
Khojki Sign Virama (U+11235)
Khudawadi Sign Virama (U+112EA)
Grantha Sign Virama (U+1134D)
Newa Sign Virama (U+11442)
Tirhuta Sign Virama (U+114C2)
Siddham Sign Virama (U+115BF)
Modi Sign Virama (U+1163F)
Takri Sign Virama (U+116B6)
Ahom Sign Killer (U+1172B)
Dogra Sign Virama (U+11839)
Dives Akuru Sign Halanta (U+1193D)
Dives Akuru Virama (U+1193E)
Nandinagari Sign Virama (U+119E0)
Zanabazar Square Sign Virama (U+11A34)
Zanabazar Square Subjoiner (U+11A47)
Soyombo Subjoiner (U+11A99)
Bhaiksuki Sign Virama (U+11C3F)
Masaram Gondi Sign Halanta (U+11D44)
Masaram Gondi Virama (U+11D45)
Gunjala Gondi Virama (U+11D97)
Historical Viramas with other functions
Tibetan Mark Halanta (U+0F84)
Myanmar Sign Asat (U+103A)
Limbu Sign Sa-I (U+193B)
Meetei Mayek Apun Iyek (U+ABED)
Chakma Maayyaa (U+11134)
Mongolian Variation Selectors
Mongolian Free Variation Selector One (U+180B)
Mongolian Free Variation Selector Two (U+180C)
Mongolian Free Variation Selector Three (U+180D)
Mongolian Vowel Separator (U+180E)
Generic Variation Selectors
Variation Selector-1 through -16 (U+FE00–U+FE0F)
Variation Selector-17 through -256 (U+E0100–U+E01EF)
Tag characters (U+E0001 and U+E0020–U+E007F)
Tifinagh
Tifinagh Consonant Joiner (U+2D7F)
Ogham
Ogham Space Mark (U+1680)
Ideographic
Ideographic variation indicator (U+303E)
Ideographic Description (U+2FF0–U+2FFB)
Musical Format Control
Musical Symbol Begin Beam (U+1D173)
Musical Symbol End Beam (U+1D174)
Musical Symbol Begin Tie (U+1D175)
Musical Symbol End Tie (U+1D176)
Musical Symbol Begin Slur (U+1D177)
Musical Symbol End Slur (U+1D178)
Musical Symbol Begin Phrase (U+1D179)
Musical Symbol End Phrase (U+1D17A)
Shorthand Format Control
Shorthand Format Letter Overlap (U+1BCA0)
Shorthand Format Continuing Overlap (U+1BCA1)
Shorthand Format Down Step (U+1BCA2)
Shorthand Format Up Step (U+1BCA3)
Deprecated Alternate Formatting
Inhibit Symmetric Swapping (U+206A)
Activate Symmetric Swapping (U+206B)
Inhibit Arabic Form Shaping (U+206C)
Activate Arabic Form Shaping (U+206D)
National Digit Shapes (U+206E)
Nominal Digit Shapes (U+206F)
Others
Object Replacement Character (U+FFFC)
Replacement Character (U+FFFD)
Characters vs Code Points
The term "character" is not well defined, and what we are referring to most of the time is the grapheme. A grapheme is represented visually by its glyph. The typeface (often erroneously referred to as font) used can depict visual variations of the same character. It is possible that two different graphemes can have the exact same glyph or are visually so close that the average reader cannot tell them apart.
A grapheme is almost always represented by one code point; for example, LATIN CAPITAL LETTER A is represented by the single code point U+0041.
The grapheme LATIN CAPITAL LETTER A WITH DIAERESIS Ä is an example of a character that can be represented by more than one code point: it can be U+00C4, or the sequence U+0041 U+0308. U+0041 is the familiar A, and U+0308 is COMBINING DIAERESIS ̈ , a combining diacritical mark.
When a combining mark is adjacent to a non-combining mark code point, text rendering applications should superimpose the combining mark onto the glyph represented by the other code point to form a grapheme according to a set of rules.
The word BÄM would therefore be three graphemes. It may be made up of three code points or more depending on how the characters are actually composed.
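The Ä example can be checked directly with Python's standard unicodedata module. This is a minimal sketch: NFC normalization converts the combining sequence to the precomposed form, showing that the two encodings of the same grapheme are canonically equivalent even though they use different code points.
import unicodedata

precomposed = "\u00C4"           # LATIN CAPITAL LETTER A WITH DIAERESIS
decomposed  = "\u0041\u0308"     # LATIN CAPITAL LETTER A + COMBINING DIAERESIS

print(precomposed == decomposed)                                   # False: different code points
print(unicodedata.normalize("NFC", decomposed) == precomposed)     # True: canonically equivalent
print(len(precomposed), len(decomposed))                           # 1 2: code points per grapheme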
Whitespace, joiners, and separators
Unicode provides a list of characters it deems whitespace characters for interoperability support. Software implementations and other standards may use the term to denote a slightly different set of characters. For example, Java does not consider U+00A0 NO-BREAK SPACE or U+0085 NEXT LINE to be whitespace, even though Unicode does. Whitespace characters are characters typically designated for use in programming environments; often they have no syntactic meaning there and are ignored by machine interpreters. Unicode designates the legacy control characters U+0009 through U+000D and U+0085 as whitespace characters, as well as all characters whose General Category property value is Separator. There are 25 whitespace characters in total as of Unicode 14.0.
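As an illustration of how implementations differ, the following sketch uses Python's str.isspace(), whose definition overlaps with, but is not identical to, Unicode's White_Space property; the sample characters are drawn from the lists in this section.
for ch in ["\u0009", "\u0020", "\u0085", "\u00A0", "\u2028", "\u200B"]:
    print(f"U+{ord(ch):04X} isspace={ch.isspace()}")
# U+0009 isspace=True   (character tabulation)
# U+0020 isspace=True   (space)
# U+0085 isspace=True   (next line)
# U+00A0 isspace=True   (no-break space)
# U+2028 isspace=True   (line separator)
# U+200B isspace=False  (zero width space is not a whitespace character)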
Grapheme joiners and non-joiners
The zero-width joiner (U+200D) and zero-width non-joiner (U+200C) control the joining and ligation of glyphs. The joiner does not cause characters that would not otherwise join or ligate to do so, but when paired with the non-joiner these characters can be used to control the joining and ligating properties of the surrounding two joining or ligating characters. The Combining Grapheme Joiner (U+034F) is used to distinguish two base characters as one common base or digraph, mostly for underlying text processing, collation of strings, case folding and so on.
Word joiners and separators
The most common word separator is a space (U+0020). However, there are other word joiners and separators that also indicate a break between words and participate in line-breaking algorithms. The No-Break Space (U+00A0) also produces a baseline advance without a glyph but inhibits rather than enabling a line-break. The Zero Width Space (U+200B) allows a line-break but provides no space: in a sense joining, rather than separating, two words. Finally, the Word Joiner (U+2060) inhibits line breaks and also involves none of the white space produced by a baseline advance.
Other separators
Line Separator (U+2028)
Paragraph Separator (U+2029)
These provide Unicode with native paragraph and line separators independent of the legacy encoded ASCII control characters such as carriage return (U+000D), line feed (U+000A), and next line (U+0085). Unicode does not provide for other ASCII formatting control characters, which presumably then are not part of the Unicode plain text processing model. These legacy formatting control characters include Tab (U+0009), Line Tabulation or Vertical Tab (U+000B), and Form Feed (U+000C), which is also thought of as a page break.
Spaces
The space character (U+0020) typically input by the space bar on a keyboard serves semantically as a word separator in many languages. For legacy reasons, the UCS also includes spaces of varying sizes that are compatibility equivalents for the space character. While these spaces of varying width are important in typography, the Unicode processing model calls for such visual effects to be handled by rich text, markup and other such protocols. They are included in the Unicode repertoire primarily to handle lossless roundtrip transcoding from other character set encodings. These spaces include:
En Quad (U+2000)
Em Quad (U+2001)
En Space (U+2002)
Em Space (U+2003)
Three-Per-Em Space (U+2004)
Four-Per-Em Space (U+2005)
Six-Per-Em Space (U+2006)
Figure Space (U+2007)
Punctuation Space (U+2008)
Thin Space (U+2009)
Hair Space (U+200A)
Medium Mathematical Space (U+205F)
Aside from the original ASCII space, the other spaces are all compatibility characters. In this context this means that they effectively add no semantic content to the text, but instead provide styling control. Within Unicode, this non-semantic styling control is often referred to as rich text and is outside the thrust of Unicode's goals. Rather than using different spaces in different contexts, this styling should instead be handled through intelligent text layout software.
Three other writing-system-specific word separators are:
Mongolian Vowel Separator (U+180E)
Ideographic Space (U+3000): behaves as an ideographic separator and generally rendered as white space of the same width as an ideograph.
Ogham Space Mark (U+1680): this character is sometimes displayed with a glyph and other times as only white space.
Line-break control characters
Several characters are designed to help control line-breaks either by discouraging them (no-break characters) or suggesting line breaks such as the soft hyphen (U+00AD) (sometimes called the "shy hyphen"). Such characters, though designed for styling, are probably indispensable for the intricate types of line-breaking they make possible.
Break Inhibiting
Non-breaking hyphen (U+2011)
No-break space (U+00A0)
Tibetan Mark Delimiter Tsheg Bstar (U+0F0C)
Narrow no-break space (U+202F)
The break inhibiting characters are meant to be equivalent to a character sequence wrapped in the Word Joiner U+2060. However, the Word Joiner may be appended before or after any character that would allow a line-break to inhibit such line-breaking.
Break Enabling
Soft hyphen (U+00AD)
Tibetan Mark Intersyllabic Tsheg (U+0F0B)
Zero-width space (U+200B)
Both the break inhibiting and break enabling characters participate with other punctuation and whitespace characters to enable text imaging systems to determine line breaks within the Unicode Line Breaking Algorithm.
Types of code point
All code points given some kind of purpose or use are considered designated code points. Of those, they may be assigned to an abstract character, or otherwise designated for some other purpose.
Assigned characters
The majority of code points in actual use have been assigned to abstract characters. This includes private-use characters, which though not formally designated by the Unicode standard for a particular purpose, require a sender and recipient to have agreed in advance how they should be interpreted for meaningful information interchange to take place.
Private-use characters
The UCS includes 137,468 private-use characters, which are code points for private use spread across three different blocks, each called a Private Use Area (PUA). The Unicode standard recognizes code points within PUAs as legitimate Unicode character codes, but does not assign them any (abstract) character. Instead, individuals, organizations, software vendors, operating system vendors, font vendors and communities of end-users are free to use them as they see fit. Within closed systems, characters in the PUA can operate unambiguously, allowing such systems to represent characters or glyphs not defined in Unicode. In public systems their use is more problematic, since there is no registry and no way to prevent several organizations from adopting the same code points for different purposes. One example of such a conflict is Apple's use of U+F8FF for the Apple logo, versus the ConScript Unicode Registry's use of U+F8FF for a character in the Klingon script.
The Basic Multilingual Plane (Plane 0) contains 6,400 private-use characters in the block named Private Use Area, which ranges from U+E000 to U+F8FF. The Private Use Planes, Plane 15 and Plane 16, each have their own PUAs of 65,534 private-use characters (with the final two code points of each plane being noncharacters). These are Supplementary Private Use Area-A, which ranges from U+F0000 to U+FFFFD, and Supplementary Private Use Area-B, which ranges from U+100000 to U+10FFFD.
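A minimal sketch of a private-use check: membership can be tested either against the three ranges listed above or via the general category "Co" that Unicode assigns to private-use code points (the helper name is illustrative only).
import unicodedata

def is_private_use(code_point):
    # The three Private Use Areas: BMP PUA, Plane 15 and Plane 16.
    return (0xE000 <= code_point <= 0xF8FF or
            0xF0000 <= code_point <= 0xFFFFD or
            0x100000 <= code_point <= 0x10FFFD)

assert is_private_use(0xF8FF)                     # the Apple logo / Klingon conflict example
assert unicodedata.category("\uE000") == "Co"     # "Other, private use"
assert not is_private_use(0x0041)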
PUAs are a concept inherited from certain Asian encoding systems. These systems had private use areas to encode what the Japanese call gaiji (rare characters not normally found in fonts) in application-specific ways.
Surrogates
The UCS uses surrogates to address characters outside the initial Basic Multilingual Plane without resorting to more-than-16-bit byte representations. There are 1024 "high" surrogates (D800–DBFF) and 1024 "low" surrogates (DC00–DFFF). By combining a pair of surrogates, the remaining characters in all the other planes can be addressed (1024 × 1024 = 1048576 code points in the other 16 planes). In UTF-16, they must always appear in pairs, as a high surrogate followed by a low surrogate, thus using 32 bits to denote one code point.
A surrogate pair denotes the code point
0x10000 + (H - 0xD800) × 0x400 + (L - 0xDC00)
where H and L are the numeric values of the high and low surrogates respectively.
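The arithmetic above translates directly into code. The following sketch combines and splits surrogate pairs, using U+1F600 as an arbitrary supplementary-plane example.
def combine_surrogates(high, low):
    # Map a high/low surrogate pair back to the supplementary code point it denotes.
    assert 0xD800 <= high <= 0xDBFF and 0xDC00 <= low <= 0xDFFF
    return 0x10000 + (high - 0xD800) * 0x400 + (low - 0xDC00)

def split_into_surrogates(code_point):
    # Inverse operation: split a supplementary code point into its surrogate pair.
    assert 0x10000 <= code_point <= 0x10FFFF
    offset = code_point - 0x10000
    return 0xD800 + (offset >> 10), 0xDC00 + (offset & 0x3FF)

assert combine_surrogates(0xD83D, 0xDE00) == 0x1F600     # U+1F600 GRINNING FACE
assert split_into_surrogates(0x1F600) == (0xD83D, 0xDE00)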
Since high surrogate values in the range DB80–DBFF always produce values in the Private Use planes, the high surrogate range can be further divided into (normal) high surrogates (D800–DB7F) and "high private use surrogates" (DB80–DBFF).
Isolated surrogate code points have no general interpretation; consequently, no character code charts or names lists are provided for this range. In the Python programming language, individual surrogate codes are used to embed undecodable bytes in Unicode strings.
The unhyphenated term "noncharacter" refers to 66 code points (labeled <not a character>) permanently reserved for internal use, and therefore guaranteed to never be assigned to a character. Each of the 17 planes has its two ending code points set aside as noncharacters. So, the noncharacters are: U+FFFE and U+FFFF on the BMP, U+1FFFE and U+1FFFF on Plane 1, and so on, up to U+10FFFE and U+10FFFF on Plane 16, for a total of 34 code points. In addition, there is a contiguous range of another 32 noncharacter code points in the BMP: U+FDD0..U+FDEF. Software implementations are therefore free to use these code points for internal use. One particularly useful example of a noncharacter is the code point U+FFFE. This code point has the reverse UTF-16/UCS-2 byte sequence of the byte order mark (U+FEFF). If a stream of text contains this noncharacter, this is a good indication the text has been interpreted with the incorrect endianness.
Versions of the Unicode standard from 3.1.0 to 6.3.0 claimed that noncharacters "should never be interchanged". Corrigendum #9 of the standard later stated that this was leading to "inappropriate over-rejection", clarifying that "[Noncharacters] are not illegal in interchange nor do they cause ill-formed Unicode text", and removing the original claim.
Reserved code points
All other code points, being those not designated, are referred to as being reserved. These code points may be assigned for a particular use in future versions of the Unicode standard.
Characters, grapheme clusters and glyphs
Whereas many other character sets assign a character for every possible glyph representation of the character, Unicode seeks to treat characters separately from glyphs. This distinction is not always clear-cut, but a few examples will help illustrate it. Often two or more characters may be combined typographically to improve the readability of the text. For example, the three-letter sequence "ffi" may be treated as a single glyph. Other character sets would often assign a code point to this ligature glyph in addition to the individual letters: "f" and "i".
In addition, Unicode approaches diacritic modified letters as separate characters that, when rendered, become a single glyph. For example, an "o" with diaeresis: "ö". Traditionally, other character sets assigned a unique character code point for each diacritic modified letter used in each language. Unicode seeks to create a more flexible approach by allowing combining diacritic characters to combine with any letter. This has the potential to significantly reduce the number of active code points needed for the character set. As an example, consider a language that uses the Latin script and combines the diaeresis with the upper- and lower-case letters "a", "o", and "u". With the Unicode approach, only the diaeresis diacritic character needs to be added to the character set to use with the Latin letters: "a", "A", "o", "O", "u", and "U": seven characters in all. A legacy character set needs to add six precomposed letters with a diaeresis in addition to the six code points it uses for the letters without diaeresis: twelve character code points in total.
Compatibility characters
UCS includes thousands of characters that Unicode designates as compatibility characters. These are characters that were included in UCS in order to provide distinct code points for characters that other character sets differentiate, but would not be differentiated in the Unicode approach to characters.
The chief reason for this differentiation was that Unicode makes a distinction between characters and glyphs. For example, when writing English in a cursive style, the letter "i" may take different forms whether it appears at the beginning of a word, the end of a word, the middle of a word or in isolation. Languages such as Arabic written in an Arabic script are always cursive. Each letter has many different forms. UCS includes 730 Arabic form characters that decompose to just 88 unique Arabic characters. However, these additional Arabic characters are included so that text processing software may translate text from other character sets to UCS and back again without any loss of information crucial for non-Unicode software.
However, for UCS and Unicode in particular, the preferred approach is to always encode or map that letter to the same character no matter where it appears in a word. Then the distinct forms of each letter are determined by the font and text layout software methods. In this way, the internal memory for the characters remains identical regardless of where the character appears in a word. This greatly simplifies searching, sorting and other text processing operations.
Character properties
Every character in Unicode is defined by a large and growing set of properties. Most of these properties are not part of the Universal Character Set itself. The properties facilitate text processing, including collation or sorting of text, identifying words, sentences and graphemes, rendering or imaging text, and so on. Core properties include the character's name, block, general category, script and directionality, as noted above; many others are documented in the Unicode Character Database.
Unicode provides an online database to interactively query the entire Unicode character repertoire by the various properties.
See also
ConScript Unicode Registry
Unicode compatibility characters
References
External links
Unicode Consortium
decodeunicode.org Unicode Wiki with all 98884 graphic characters of Unicode 5.0 as gifs, full text search
Unicode Characters by Property
IEC standards
Unicode |
2268030 | https://en.wikipedia.org/wiki/Humphrey%20Appleby | Humphrey Appleby | Sir Humphrey Appleby is a fictional character from the British television series Yes Minister and Yes Prime Minister. He was played originally by Sir Nigel Hawthorne, and later by Henry Goodman, both on stage and in a new television series of Yes, Prime Minister adapted from the stage show. In Yes Minister, he is the Permanent Secretary for the Department of Administrative Affairs (a fictional department of the British government). In the last episode of Yes Minister, "Party Games", he becomes Cabinet Secretary, the most powerful position in the Civil Service and one he retains during Yes, Prime Minister. Hawthorne's portrayal won the BAFTA Award for Best Light Entertainment Performance four times: 1981, 1982, 1986 and 1987.
Fictional biography
Sir Humphrey was educated at Winchester College and Baillie College, Oxford (clearly based on Balliol College, Oxford; Humphrey is frequently seen wearing a Balliol College tie), where he read literae humaniores and received a first. After National Service in the Army Education Corps, he entered the Civil Service. From 1950 to 1956 he was successively the Regional Contracts Officer, an assistant principal in the Scottish Office, on secondment from the War Office (where, as revealed in "The Skeleton in the Cupboard", he was responsible for the relinquishing of £40,000,000 worth of military installations due to a lack of understanding of Scottish law). In 1964, he was brought into the newly formed Department of Administrative Affairs, where he worked until his appointment as Cabinet Secretary. He is recommended for a KBE award early on in the series in "The Official Visit". The Dean of Baillie describes him as "too clever by half" and "smug" (The Bishop's Gambit).
On Humphrey's possible private situation, Jonathan Lynn, one of the creators of Yes, Minister and Yes, Prime Minister, commented: "We always supposed that Sir Humphrey lived in Haslemere, had a son at Winchester and a daughter at Bedales and that his wife was a sensible woman who made cakes for church socials and enjoyed walking the family bulldog. I think that Humphrey's hobbies were reading (mainly biographies), listening to classical music, and occasionally visiting the RSC, the National Theatre or the Royal Opera House, where he was on the Board. His holidays were probably spent walking in the Lake District and, occasionally, sailing in Lymington. On the whole, he had a slightly warmer relationship with his dog than his family."
The book adaptation of the first series was published in 1981, but with a fictional publication date of 2017. In the foreword, the 'editors' Lynn and Jay state that they had "a few conversations" with Sir Humphrey before the "advancing years, without in any way impairing his verbal fluency, disengaged the operation of his mind from the content of his speech," indicating that his speech had transitioned from merely sounding like overly verbose nonsense to actually being overly verbose nonsense. The third volume (published 1983, but dated September 2019) notes that the editors learned from "the few lucid moments of Sir Humphrey Appleby's last ravings" at St Dympna's Hospital for the Elderly Deranged. The fifth and final volume (published 1987, dated May 2024) makes it explicit that Sir Humphrey is dead, and thanks his widow for her cooperation. Politico's Book of the Dead states that Sir Humphrey (like Nigel Hawthorne) died in 2001.
Honours
Sir Humphrey has been appointed a Knight Grand Cross of the Order of the Bath (GCB), a Knight Commander of the Order of the British Empire (KBE) and a Member of the Royal Victorian Order (MVO).
Character
Sir Humphrey is a master of obfuscation and manipulation, often making long-winded statements to confuse and fatigue the listener. An example is the following monologue from the episode The Death List: "In view of the somewhat nebulous and inexplicit nature of your remit, and the arguably marginal and peripheral nature of your influence within the central deliberations and decisions within the political process, there could be a case for restructuring their action priorities in such a way as to eliminate your liquidation from their immediate agenda." Addressing his Minister, he means to suggest by this that a terrorist group which had previously conspired to assassinate the Minister is no longer planning to do so, as they believe he is simply not important enough politically. Sir Humphrey is committed to maintaining the status quo for the country in general and for the Civil Service in particular, and will stop at nothing to do so—whether that means baffling his opponents with technical jargon, employing a dizzying array of stalling and delaying tactics, withholding information or concealing vital documents in mammoth piles of papers and reports, strategically appointing allies to supposedly impartial boards, or setting up an interdepartmental committee to immobilise his Minister's proposals with red tape, and occasionally outright lying. Throughout the series, he serves as Permanent Secretary at the Department of Administrative Affairs, with Jim Hacker as minister; he is appointed Cabinet Secretary shortly before Hacker's elevation to the role of Prime Minister, which he was instrumental in bringing to pass.
Sir Humphrey frequently uses both his mastery of the English language and even his superb grasp of Latin and Greek grammar to perplex his political master and to obscure relevant issues under discussion. However, his habit of using language as a tool of confusion and obstruction is so deeply ingrained that he is sometimes unable to speak clearly and directly even when he honestly wishes to be clearly understood. He genuinely believes that the Civil Service knows what the average person needs and is the most qualified body to run the country, the joke being that not only is Sir Humphrey, as a high-ranking Oxford-educated Civil Servant, quite out of touch with the average person but also the Civil Service judges what is "best for Britain" to be that which in actuality is best for the Civil Service. Jim Hacker, on the other hand, tends to regard what is best for Britain as being whatever is best for his political party or his own chances of re-election. As a result, Sir Humphrey and Hacker often clash.
He still holds women to be the fairer sex and is thus overly courteous, frequently addressing them as "Dear lady". Like Hacker, Sir Humphrey enjoys the finer things in life, and is regularly seen drinking sherry and dining at fine establishments, often with his fellow civil servant Sir Arnold Robinson, who was Cabinet Secretary throughout Yes, Minister. Sir Humphrey is also on the board of governors of the National Theatre and attends many of the gala nights of the Royal Opera House. His interests also extend to cricket, art and theatre.
Humphrey is usually smooth, calm and collected within his element of bureaucracy and procedure, but has become so adept at working within and maintaining the system of government that, whenever anything unexpected is sprung on him, whether it be Hacker ordering him to negotiate with a rogue councillor, or honours in his department being made dependent on economies within the rationale of meritocracy, Humphrey immediately crumbles, on a few occasions being reduced to stuttering out garbled platitudes such as "the beginning of the end" or "it cuts at the very roots", although he usually regains his composure fairly quickly and pushes things back on track.
In a Radio Times interview to promote the first series of Yes, Prime Minister, Nigel Hawthorne observed, "He's raving mad of course. Obsessive about his job. He'd do anything to keep control. In fact, he does go mad in one episode. Quite mad."
Relationships
In Yes Minister, Sir Humphrey maintains a civil and outwardly deferential but fundamentally adversarial relationship with his new minister, Jim Hacker. When keeping the Minister busy is not sufficient to prevent him from proposing new policy, Sir Humphrey is not above deceiving or even blackmailing him. He frequently manipulates Hacker by describing new proposals that he is opposed to as "very brave" or "extremely courageous", playing upon Hacker's fear as a politician of anything which may fly in the face of prevailing public opinion.
He has a slightly more amicable relationship with his subordinate, the Minister's Principal Private Secretary, Bernard Woolley. He frequently lectures the naïve Woolley in the realities of political matters. When Woolley's loyalty to the Minister is inconvenient to Sir Humphrey's plans, he readily makes oblique threats about Woolley's job prospects should he defy Sir Humphrey. However, he is equally quick to defend Woolley from outsiders. His closest on-screen friendships are with Sir Arnold Robinson, Cabinet Secretary during Yes Minister; Sir Frederick "Jumbo" Stewart, Permanent Secretary of the Foreign and Commonwealth Office; and the banker Sir Desmond Glazebrook. He is married, although his wife plays virtually no role in either series and is only seen once: next to him in bed in the Series One episode "Big Brother".
Real-life references
Sir Humphrey has become a stereotype associated with civil servants, and the phrase "Bowler-hatted Sir Humphreys" is sometimes used when describing their image. Satirical and investigative magazine Private Eye often refers to Sir Humphrey with the definite article 'the' to indicate someone in the civil service the magazine considers of similar character, e.g. "[name] is the present Sir Humphrey at the Department for Rural Affairs". Jonathan Lynn wrote in his book Comedy Rules (2011) that Sir Humphrey was named after a friend of his at Cambridge, Humphrey Barclay.
A spoof obituary for Sir Humphrey appears in Politico's Book of the Dead, written by his creators, Antony Jay and Jonathan Lynn, which includes some biographical details, including dates of birth and death, which he shares with Nigel Hawthorne, the actor who portrayed him.
Sir Humphrey was voted the 45th greatest comedy character in Channel 4's 2007 "The World's Greatest Comedy Characters" poll. He was also voted 31st in a poll of "100 Greatest TV Characters", also on Channel 4.
Upon Nigel Hawthorne's death, the following appeared on the Editorial page of The Ottawa Citizen under the heading "No, Minister":
"It is sadly that we report on Sir Nigel Hawthorne, elsewhere referred to as Sir Humphrey Appleby. While it would be premature to commit ourselves to a definitive position on his merits or even his existence, a committee is being struck to consider the possibility of a decision, in the fullness of time, to regret his passing, if any."
The character was resurrected for the 2010 general election campaign in a series of short sketches on BBC Two's late evening current affairs programme Newsnight. The sketches were written by Jay and Lynn, and Sir Humphrey was played by Henry Goodman.
Henry Goodman also played the part of Sir Humphrey in the 2010 stage production of Yes, Prime Minister.
Humphrey, a cat employed as the Chief Mouser to the Cabinet Office at 10 Downing Street from 1989 to 1997, was named after Sir Humphrey.
References
Yes Minister characters
Fictional civil servants
Television characters introduced in 1980
Fictional knights
Fictional University of Oxford people
British male characters in television |
5253397 | https://en.wikipedia.org/wiki/Security%20Identifier | Security Identifier | In the context of the Microsoft Windows NT line of operating systems, a Security Identifier (commonly abbreviated SID) is a unique, immutable identifier of a user, user group, or other security principal. A security principal has a single SID for life (in a given domain), and all properties of the principal, including its name, are associated with the SID. This design allows a principal to be renamed (for example, from "Jane Smith" to "Jane Jones") without affecting the security attributes of objects that refer to the principal.
Overview
Windows grants or denies access and privileges to resources based on access control lists (ACLs), which use SIDs to uniquely identify users and their group memberships. When a user logs into a computer, an access token is generated that contains the user and group SIDs and the user's privilege level. When the user requests access to a resource, the access token is checked against the ACL to permit or deny a particular action on a particular object.
SIDs are useful for troubleshooting issues with security audits and with Windows server and domain migrations.
The format of a SID can be illustrated using the following example: "S-1-5-21-3623811015-3361044348-30300820-1013". Here "S" indicates that the string is a SID, "1" is the revision level, "5" is the identifier authority value (NT Authority), "21-3623811015-3361044348-30300820" identifies the local machine or domain, and "1013" is the relative identifier (RID) of the particular account or group.
Identifier Authority Values
Known identifier authority values include 0 (Null Authority), 1 (World Authority), 2 (Local Authority), 3 (Creator Authority), 5 (NT Authority, used for most user and group accounts), 15 (App Package Authority, used among other things for capability SIDs) and 16 (Mandatory Label Authority, used for integrity levels).
Identifying a capability SID:
If the SID is found among the capability SIDs listed in the registry, it is a capability SID. By design, it will not resolve into a friendly name.
If the SID is not found there, it is not a known capability SID and can be troubleshot as a normal unresolved SID. Keep in mind that there is a small chance that the SID is a third-party capability SID, in which case it will not resolve into a friendly name.
Microsoft Support warns against deleting capability SIDs from either the registry or file system permissions: removing a capability SID from file system or registry permissions may cause a feature or application to function incorrectly, and once a capability SID has been removed, it cannot be added back through the user interface.
S-1-5 Subauthority Values
Virtual Accounts are defined for a fixed set of class names, but the account name itself is not predefined, so a nearly unlimited number of accounts is available within each class. The names take the form "Account Class\Account Name", for example "AppPoolIdentity\Default App Pool". The SID is based on a SHA-1 hash of the lower-case name. Because each Virtual Account maps to a distinct SID, each can be given permissions separately; this avoids the shared-permissions problem that arises when every service runs under the same NT AUTHORITY account (such as "NT AUTHORITY\Network Service").
Machine SIDs
The machine SID (S-1-5-21) is stored in the SECURITY registry hive located at SECURITY\SAM\Domains\Account; this key has two values, F and V. The V value is a binary value that has the computer SID embedded within it at the end of its data (the last 96 bits). (Some sources state that it is stored in the SAM hive instead.) A backup is located at SECURITY\Policy\PolAcDmS\@.
The machine SID subauthority format is used for domain SIDs too. A machine is considered its own local domain in this case.
Decoding Machine SID
The machine SID is stored in raw-bytes form in the registry. To convert it into the more common numeric form, one interprets it as three little-endian 32-bit integers, converts them to decimal, and adds hyphens between them.
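For illustration, the conversion can be sketched in Python as follows; this assumes the 12 raw machine-SID bytes have already been read from the V value, and the sample bytes are chosen so that they decode to the example SID quoted earlier.

import struct

def decode_machine_sid(raw: bytes) -> str:
    # Interpret the last 12 bytes as three little-endian 32-bit integers
    # and format them as the machine SID S-1-5-21-a-b-c.
    a, b, c = struct.unpack("<III", raw[-12:])
    return "S-1-5-21-{}-{}-{}".format(a, b, c)

raw_bytes = bytes.fromhex("c7f7fed7" + "7c7755c8" + "945ace01")
print(decode_machine_sid(raw_bytes))  # S-1-5-21-3623811015-3361044348-30300820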
Other Uses
The machine SID is also used by some free-trial programs, such as Start8, to identify the computer so that it cannot restart the trial.
Service SIDs
Service SIDs are a feature of service isolation, a security feature introduced in Windows Vista and Windows Server 2008. Any service with the "unrestricted" SID-type property will have a service-specific SID added to the access token of the service host process. The purpose of Service SIDs is to allow permissions for a single service to be managed without necessitating the creation of service accounts, an administrative overhead.
Each service SID is a local, machine-level SID generated from the service name using the following formula:
S-1-5-80-{SHA-1(service name in upper case encoded as UTF-16)}
The sc.exe command (sc showsid <service_name>) can be used to display the service SID for an arbitrary service name.
The service can also be referred to as NT SERVICE\<service_name> (e.g. "NT SERVICE\dnscache").
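For illustration, the formula above can be sketched in Python; the grouping of the 20-byte SHA-1 digest into five little-endian 32-bit sub-authorities is an assumption here, reflecting the usual binary layout of SID sub-authorities.

import hashlib
import struct

def service_sid(service_name: str) -> str:
    # SHA-1 of the service name in upper case, encoded as UTF-16LE,
    # per the formula S-1-5-80-{SHA-1(service name in upper case)}.
    digest = hashlib.sha1(service_name.upper().encode("utf-16-le")).digest()
    # Split the digest into five 32-bit sub-authorities (assumed little-endian).
    subauthorities = struct.unpack("<5I", digest)
    return "S-1-5-80-" + "-".join(str(x) for x in subauthorities)

print(service_sid("dnscache"))  # the SID for NT SERVICE\dnscache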
Duplicated SIDs
In a workgroup of computers running Windows NT/2K/XP, it is possible for a user to have unexpected access to shared files or to files stored on removable storage. This can be prevented by setting access control lists on a susceptible file, so that the effective permissions are determined by the user SID. If this user SID is duplicated on another computer, a user of the second computer with the same SID could have access to the files that the user of the first computer has protected. This can happen when machine SIDs are duplicated by disk cloning, which is common with pirated copies, because user SIDs are built from the machine SID plus a sequential relative ID.
When the computers are joined into a domain (Active Directory or NT domain for instance), each computer is provided a unique Domain SID which is recomputed each time a computer enters a domain. This SID is similar to the machine SID. As a result, there are typically no significant problems with duplicate SIDs when the computers are members of a domain, especially if local user accounts are not used. If local user accounts are used, there is a potential security issue similar to the one described above, but the issue is limited to the files and resources protected by local users, as opposed to by domain users.
Duplicated SIDs are usually not a problem with Microsoft Windows systems, although other programs that rely on SIDs for identification or security might be affected.
Microsoft used to provide Mark Russinovich's "NewSID" utility as a part of Sysinternals to change a machine SID. It was retired and removed from download on November 2, 2009. Russinovich's explanation is that neither he nor the Windows security team could think of any situation in which duplicate machine SIDs could cause problems, because machine SIDs are never responsible for gating network access.
At present, the only supported mechanism for duplicating disks for Windows operating systems is through use of SysPrep, which generates new SIDs.
See also
Access control
Access Control Matrix
Discretionary Access Control (DAC)
Globally Unique Identifier (GUID)
Mandatory Access Control (MAC)
Role-Based Access Control (RBAC)
Capability-based security
Post-cloning operations
References
External links
Official
ObjectSID and Active Directory
Microsoft TechNet: Server 2003: Security Identifiers Technical Reference
MSKB154599: How to Associate a Username with a Security Identifier
MSKB243330: Well-known security identifiers in Windows operating systems
Support tools for Windows Server 2003 and Windows XP
Security Identifiers - Windows Security docs
Other
Why Understanding SIDs is Important
Microsoft Security Descriptor (SID) Attributes : Tutorial Article about SID handling / converting in scripts
Identifiers
Microsoft Windows security technology
Unique identifiers
Windows NT architecture |
54957678 | https://en.wikipedia.org/wiki/Red%20Tent%20%28shelter%29 | Red Tent (shelter) | The Red Tent was the tent in which the survivors of the airship Italia took shelter after it fell onto the pack ice in the Arctic at around 10:33 on 25 May 1928, until their rescue on 12 July by the Soviet icebreaker Krasin.
History
The Red Tent was designed by the engineer Felice Trojani as part of the emergency equipment for the crew members who were to be lowered from the airship onto the ice at the North Pole; the equipment also included the Ondina 33 radio, with which the radio operator Giuseppe Biagi was able first to transmit the SOS and then to guide the rescuers to the survivors. The design of the tent was preceded by a careful study of those used in previous polar expeditions, and it was manufactured by the Moretti company in Milan.
The Red Tent had a box-shaped base of 2.75 × 2.75 m and 1 m in height, topped by a pyramidal section about one meter high. Access was through a circular opening one meter in diameter, closed by a windproof sleeve. The outer walls and the floor were made of undyed raw silk, while the inner walls were of blue silk; the color was chosen to reduce the effects of snow blindness.
The tent, designed to accommodate up to four people, ended up housing nine (including Umberto Nobile and Natale Cecioni, both with leg injuries), the mascot dog Titina, part of the radio and the batteries that powered it. Once the tent had been recovered from among the materials scattered on the pack ice, it was raised by Trojani, while Mariano and Viglieri drove the pegs into the ice and tightened the guy ropes, weighting the edges with recovered food and other heavy items. On the floor were placed the boxes containing the navigation charts and the only surviving sleeping bag, which, cut open, accommodated the two injured men, Cecioni and Nobile, next to the lit catalytic stove.
The altimeters available at the time were not reliable enough to judge the airship's height above the ground, so a more effective system was used: glass vials filled with fuchsine were dropped from the airship's cabin, and the time of their fall, from release until the vial shattered and stained the ice red, was measured with a special stopwatch made in Rome by Hausmann.
In order to make the tent visible from above, the survivors decided to use the dye from the fallen marker vials to paint red stripes on it. Once radio communications were established, the rescuers were made aware of the new color, and journalists coined the name "Red Tent". The continuous, harsh light of the Arctic summer faded the delicate aniline dye in just a few days, returning the tent to its original color.
The Red Tent was recovered, along with Einar Lundborg's airplane and all the camp materials, by the crew of the Krasin. Upon returning to Italy, Umberto Nobile donated the Red Tent to the City of Milan, the sponsor of the polar expedition, which assigned it to the museum of the Castello Sforzesco, today the National Museum of Science and Technology Leonardo da Vinci, where Felice Trojani erected it for the last time. The Red Tent was on public display until the mid-1990s and is now awaiting a long and delicate restoration that will once again allow it to be exhibited.
References
Felice Trojani, The Tail of Minosse: A Man's Life, the Story of an Enterprise, 9th edition, Milan 2007, Ugo Mursia.
Felice Trojani, The Last Flight, 4th edition, Milan 2008, Ugo Mursia.
Tents |
1323037 | https://en.wikipedia.org/wiki/1982%20in%20video%20games | 1982 in video games | 1982 was the peak year for the golden age of arcade video games as well as the second generation of video game consoles. Many games were released that would spawn franchises, or at least sequels, including Dig Dug, Pole Position, Mr. Do!, Zaxxon, Q*bert, Time Pilot and Pitfall! The year's highest-grossing video game was Namco's arcade game Pac-Man, for the third year in a row, while the year's best-selling home system was the Atari 2600 (Atari VCS). Additional game consoles added to a crowded market, notably the ColecoVision and Atari 5200. Troubles at Atari late in the year triggered the video game crash of 1983.
Financial performance
The US arcade video game market is worth $4.3 billion.
The US home video game market is worth $3.8 billion.
The Japanese home video game market is approaching ¥300 billion.
Highest-grossing arcade games
The highest-grossing arcade game of 1982 was Pac-Man, which by 1982 had accumulated the largest total worldwide revenue of any arcade game.
Japan
In Japan, the following titles were the highest-grossing arcade video games of 1982, according to the annual Game Machine chart.
United States
In the United States, the following titles were the highest-grossing arcade games of 1982, according to RePlay and Cash Box magazines and the Amusement & Music Operators Association (AMOA).
The following table lists the top-grossing titles of each month in 1982, according to the RePlay and Play Meter charts.
Best-selling home video games
The following titles were 1982's best-selling home video games.
Best-selling home systems
Events
December 27 – Starcade, a video game television game show, debuts on TBS in the United States.
Major awards
Electronic Games holds the third Arcade Awards, for games released during 1980–1981. Pac-Man wins the best arcade game award, Asteroids (Atari VCS) wins the best console game award, and Star Raiders (Atari 8-bit family) wins the best computer game award.
Pac-Man wins the Video Software Dealers Association's VSDA Award for Best Videogame.
Business
Eidansha Boshu Service Center shortens its name to Enix and in August establishes itself as a computer game publisher.
New companies:
Argonaut
Artech
Compile
Cosmi
Data Age
Distinctive Software
Dragon Data
Electronic Arts
English Software
First Star Software
Gamestar
Imagine Software
Llamasoft
Lucasfilm Games
Martech
MicroProse
Richard Shepherd Software
System 3
Ultimate Play the Game
US Games
Notable releases
Games
Arcade
January – Sega releases Zaxxon, which introduces isometric graphics and looks far more 3D than any other raster game at the time.
January 13 – Midway releases Ms. Pac-Man (despite it being copyrighted as 1981); it is (as the name suggests) the sequel to Pac-Man, but was created without Namco's authorization. They also release Baby Pac-Man and Pac-Man Plus without Namco's authorization later in the year; the former is a pinball/video game hybrid.
April 19 – Namco releases Dig Dug, manufactured by Atari in North America.
August – Nintendo releases Donkey Kong Jr., the sequel to Donkey Kong.
August – Taito releases parallax scroller Jungle Hunt.
September 24 – Namco releases Pole Position, one of the first games with stereophonic and quadraphonic sound. Featuring a pseudo-3D, third-person, rear-view perspective, it becomes the most popular racing game of its time.
September – Sega releases maze game Pengo, starring a cute penguin.
October – Namco releases Super Pac-Man, the third title in the Pac-Man series.
October – Universal releases Mr. Do! solely as a conversion kit, the first game in the series.
October – Gottlieb releases Q*bert.
November – Konami releases Time Pilot.
Bally/Midway releases the Tron arcade game before the movie.
Atari releases Gravitar which, though extraordinarily difficult, inspires a number of gravity-based home computer games.
Williams Electronics releases Joust, Robotron: 2084, and the second game of the year with parallax scrolling, Irem's Moon Patrol. Robotron popularizes the twin-stick control scheme for fast action games.
Data East releases BurgerTime.
Taito releases Front Line, which creates the blueprint for mid-80s, vertically scrolling, commando games.
Electro Sport releases Quarter Horse, the first Laserdisc video game.
Kangaroo is one of the first Donkey Kong-inspired games to become popular in arcades.
Gottlieb releases Reactor.
Console
February – Atari releases Haunted House for the 2600, which is later considered one of the first survival horror games.
March – Atari's Atari 2600 version of Pac-Man hits stores.
April – Activision releases Pitfall!, which goes on to sell 4 million copies.
May – Atari releases Yars' Revenge.
August – Overlooked arcade games are revitalized as ColecoVision launch titles, including Cosmic Avenger, Mouse Trap, Lady Bug, and Venture.
October – Atari releases Swordquest: Earthworld, the first title in a planned four-game contest.
December – Atari releases E.T. the Extra-Terrestrial. Written in five and a half weeks, it is one of the games that sparked the video game crash of 1983.
Activision releases River Raid, Megamania, Barnstorming, Chopper Command, and Starmaster for the Atari 2600. River Raid becomes one of the all-time bestselling games for the system.
Starpath releases Dragonstomper (the only RPG for the Atari 2600) and Escape From the Mindmaster.
Parker Brothers releases Star Wars: The Empire Strikes Back for the Atari 2600, which is the first Star Wars video game.
Imagic releases Demon Attack, Atlantis, Cosmic Ark, and Dragonfire for the 2600. Atlantis sells over a million copies while Demon Attack doubles that.
Computer
March 11 – Infocom releases their first non-Zork title, Deadline.
August 24 – Ultima II: The Revenge of the Enchantress is released.
November – Microsoft Flight Simulator 1.0 is released for MS-DOS. It becomes a standard compatibility test for early PC clones.
Big Five Software releases the widely ported Miner 2049er, a platformer with ten screens compared to the four of Donkey Kong.
Brøderbund releases Choplifter for the Apple II.
Edu-Ware releases Prisoner 2 for the Apple II, Atari, and IBM PC.
Koei releases The Dragon and Princess, the earliest known Japanese RPG, for NEC's PC-8001 home computer platform. It is an early example of tactical turn-based combat in the RPG genre.
Koei releases Night Life, the first erotic computer game (eroge). The company also released another erotic title, an early role-playing adventure game with color graphics, owing to the eight-color palette of the PC-8001 computer. It became a hit, helping Koei become a major software company.
Pony Canyon releases Spy Daisakusen, another early Japanese RPG. Based on the Mission: Impossible franchise, it replaces the traditional fantasy setting with a modern espionage setting.
Sir-Tech Software, Inc. releases Wizardry II: The Knight of Diamonds, the second scenario in the Wizardry series.
Sierra On-Line releases Time Zone for the Apple II. Written and directed by Roberta Williams, the graphical adventure game shipped with 6 double-sided floppy disks and cost US$99.
Synapse releases Necromancer and Shamus for the Atari 8-bit family.
Hiroyuki Imabayashi's Sokoban is released for the NEC PC-8801 and becomes an oft-cloned puzzle game concept.
Datamost releases the action/adventure game Aztec for the Apple II.
The Arcade Machine from Broderbund is one of the first general-purpose game creation kits.
Hardware
Arcade
January – Sega releases the Sega Zaxxon, an arcade system board that introduces isometric graphics.
September – Namco releases the Namco Pole Position, the first arcade system board to use 16-bit microprocessors, with two Zilog Z8002 processors. It is capable of pseudo-3D, sprite-scaling, and displays up to 3840 colors.
Console
May – Emerson releases the Arcadia 2001.
August – Starpath releases the Starpath Supercharger add-on for the Atari 2600
August – Coleco launches ColecoVision in North America, the first console with versions of Donkey Kong and Sega's isometric Zaxxon.
November – General Consumer Electronics releases the Vectrex with built-in vector monitor.
November – Atari renames the venerable Atari Video Computer System to the Atari 2600.
Atari releases the Atari 5200, a lightly modified Atari 8-bit computer with analog joysticks and no keyboard.
Entex releases the Adventure Vision tabletop console.
Computer
July – Timex Sinclair releases a modified ZX81 in the US as the TS1000. It is the first sub-$100 home computer.
Commodore Business Machines releases the Commodore 64 home computer, which would become one of the best-selling computers of all time.
NEC releases the NEC PC-98, which would become the Japanese market leader and one of the best-selling computers of all time. It is released as the APC overseas.
Sharp releases the X1.
Sinclair Research releases the ZX Spectrum home computer, which would become Britain's best-selling computer.
Dragon Data, initially a subsidiary of Mettoy, releases the Dragon 32 home microcomputer.
Notes
References
Video games by year |
817919 | https://en.wikipedia.org/wiki/PyQt | PyQt | PyQt is a Python binding of the cross-platform GUI toolkit Qt, implemented as a Python plug-in. PyQt is free software developed by the British firm Riverbank Computing. It is available under similar terms to Qt versions older than 4.5; this means a variety of licenses including GNU General Public License (GPL) and commercial license, but not the GNU Lesser General Public License (LGPL). PyQt supports Microsoft Windows as well as various flavours of UNIX, including Linux and MacOS (or Darwin).
PyQt implements around 440 classes and over 6,000 functions and methods including:
a substantial set of GUI widgets
classes for accessing SQL databases (ODBC, MySQL, PostgreSQL, Oracle, SQLite)
QScintilla, Scintilla-based rich text editor widget
data aware widgets that are automatically populated from a database
an XML parser
SVG support
classes for embedding ActiveX controls on Windows (only in commercial version)
To automatically generate these bindings, Phil Thompson developed the tool SIP, which is also used in other projects.
In August 2009, Nokia, the then owners of the Qt toolkit, released PySide, providing similar functionality, but under the LGPL, after failing to reach an agreement with Riverbank Computing to change its licensing terms to include LGPL as an alternative license.
PyQt main components
PyQt4 contains the following Python modules.
The QtCore module contains the core non-GUI classes, including the event loop and Qt's signal and slot mechanism. It also includes platform independent abstractions for Unicode, threads, mapped files, shared memory, regular expressions, and user and application settings.
The QtGui module contains the majority of the GUI classes. These include a number of table, tree and list classes based on the model–view–controller design pattern. Also provided is a sophisticated 2D canvas widget capable of storing thousands of items including ordinary widgets.
The QtNetwork module contains classes for writing UDP and TCP clients and servers. It includes classes that implement FTP and HTTP clients and support DNS lookups. Network events are integrated with the event loop making it very easy to develop networked applications.
The QtOpenGL module contains classes that enable the use of OpenGL in rendering 3D graphics in PyQt applications.
The QtSql module contains classes that integrate with open-source and proprietary SQL databases. It includes editable data models for database tables that can be used with GUI classes. It also includes an implementation of SQLite.
The QtSvg module contains classes for displaying the contents of SVG files. It supports the static features of SVG 1.2 Tiny.
The QtXml module implements SAX and DOM interfaces to Qt's XML parser.
The QtMultimedia module implements low-level multimedia functionality. Application developers would normally use the phonon module.
The QtDesigner module contains classes that allow Qt Designer to be extended using PyQt.
The Qt module consolidates the classes contained in all of the modules described above into a single module. This has the advantage that you don't have to worry about which underlying module contains a particular class. It has the disadvantage that it loads the whole of the Qt framework, thereby increasing the memory footprint of an application. Whether you use this consolidated module, or the individual component modules is down to personal taste.
The uic module implements support for handling the XML files created by Qt Designer that describe the whole or part of a graphical user interface. It includes classes that load an XML file and render it directly, and classes that generate Python code from an XML file for later execution.
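For illustration, loading a form created with Qt Designer at run time through the uic module can be sketched as follows; "dialog.ui" is a placeholder file name, and the same loadUi function is also available in PyQt5.

#! /usr/bin/env python3
# Minimal sketch: load a Qt Designer .ui file and display it directly.
import sys
from PyQt4 import uic
from PyQt4.QtGui import QApplication

app = QApplication(sys.argv)
widget = uic.loadUi("dialog.ui")  # "dialog.ui" is a placeholder file name
widget.show()
sys.exit(app.exec_())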
PyQt5 contains the following Python modules:
QtQml Module
QtQuick Module
QtCore Module
QtGui Module
QtPrintSupport Module
QtWidgets Module
QGLContext Module
QGLFormat Module
QGLWidget Module
QtWebKit Module
QtWebKitWidgets Module
Versions
PyQt version 4 works with both Qt 4 and Qt 5. PyQt version 5 only supports Qt version 5, and drops support for features that are deprecated in Qt 5.
Hello World example
The below code shows a small window on the screen.
PyQt4
#! /usr/bin/env python3
# Character Encoding: UTF-8
#
# Here we provide the necessary imports.
# The basic GUI widgets are located in QtGui module.
import sys
from PyQt4.QtGui import QApplication, QWidget
# Every PyQt4 application must create an application object.
# The application object is located in the QtGui module.
app = QApplication(sys.argv)
# The QWidget widget is the base class of all user interface objects in PyQt4.
# We provide the default constructor for QWidget. The default constructor has no parent.
# A widget with no parent is called a window.
root = QWidget()
root.resize(320, 240) # The resize() method resizes the widget.
root.setWindowTitle("Hello, World!") # Here we set the title for our window.
root.show() # The show() method displays the widget on the screen.
sys.exit(app.exec_()) # Finally, we enter the mainloop of the application.
PyQt5
#! /usr/bin/env python3
# Character Encoding: UTF-8
#
# Here we provide the necessary imports.
# The basic GUI widgets are located in QtWidgets module.
import sys
from PyQt5.QtWidgets import QApplication, QWidget
# Every PyQt5 application must create an application object.
# The application object is located in the QtWidgets module.
app = QApplication([])
# The QWidget widget is the base class of all user interface objects in PyQt5.
# We provide the default constructor for QWidget. The default constructor has no parent.
# A widget with no parent is called a window.
root = QWidget()
root.resize(320, 240) # The resize() method resizes the widget.
root.setWindowTitle("Hello, World!") # Here we set the title for our window.
root.show() # The show() method displays the widget on the screen.
sys.exit(app.exec_()) # Finally, we enter the mainloop of the application.
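The signal and slot mechanism provided by QtCore can be illustrated with a further minimal sketch, which is not part of the original listings above; the widget and function names are arbitrary examples.

#! /usr/bin/env python3
# Minimal PyQt5 sketch of Qt's signal and slot mechanism:
# the button's clicked signal is connected to a plain Python function.
import sys
from PyQt5.QtWidgets import QApplication, QPushButton

def on_clicked():
    print("Button clicked")

app = QApplication(sys.argv)
button = QPushButton("Click me")
button.clicked.connect(on_clicked)  # connect the signal to the slot
button.show()
sys.exit(app.exec_())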
Notable applications that use PyQt
Anki, a spaced repetition flashcard program
Calibre, an E-book management application
Dropbox, a file hosting service
Eric Python IDE
fman, a cross-platform file manager
Frescobaldi, a score editor for LilyPond music files
Kodos, a Python Regular expression Debugger
Leo, an outliner and literate programming editor
Ninja-IDE, an extensible open-source Python IDE
OpenLP, an open-source lyrics projection program
OpenShot, a video editing program
Orange, a data mining and visualization framework
Puddletag, an open-source, cross-platform ID3 tag editor
QGIS, a free software desktop Geographic Information Systems (GIS) application
qutebrowser, a web browser with Vim-style key bindings and a minimal GUI.
qt-recordMyDesktop, a Qt4 frontend for recordMyDesktop
Spyder, a Python data science IDE
TortoiseHg, a graphical interface for the Mercurial source management program (Hg)
Veusz, a scientific plotting application
See also
PyGTK (Python wrappers for GTK)
PySide (Alternative Python wrapper for the Qt toolkit)
wxPython (Python wrapper for the wx widgets collection)
Kivy
Tkinter (bundled with Python)
References
Further reading
External links
PyQt and PyKDE community Wiki
PyQt5 Tutorial Series
PyQT4 tutorial series
Tutorials
Tutorial
Articles with example Python (programming language) code
Cross-platform free software
Free computer libraries
Free software programmed in C++
Free software programmed in Python
Python (programming language) libraries
Qt (software)
Widget toolkits |
8205967 | https://en.wikipedia.org/wiki/DX10 | DX10 | DX10 was a general-purpose, international, multitasking operating system designed to operate
with the Texas Instruments 990/10, 990/10A and 990/12 minicomputers using the memory mapping feature.
The Disk Executive Operating System (DX10)
DX10 was a versatile disk-based operating system capable of supporting a wide
range of commercial and industrial applications.
DX10 was also a multiterminal system capable of making each of several users
appear to have exclusive control of the system.
DX10 was an international operating system designed to meet the commercial
requirements of the United States, most European countries, and Japan.
DX10 supported several models of video display terminals (VDTs), most of which permitted users to enter, view, and process data in their own language.
DX10 Capabilities
DX10 required a basic hardware configuration, but allowed additional members of an extensive group of peripherals to be included in the configuration.
During system generation, the user could configure DX10 to support peripheral devices that were not members of the 990 family, as well as devices that required realtime support.
This capability required that the user also provide software control for these devices.
The user communicated with DX10 easily through the System Command Interpreter (SCI).
SCI was designed to provide simple, convenient interaction between the user and
DX10 in a conversational format.
Through SCI the user had access to complete control of DX10.
SCI was flexible in its mode of communication: while it was convenient for interactive use through a data terminal, it could also be accessed in batch mode.
DX10 was capable of extensive file management.
The built-in file structures included key indexed files, relative record files, and sequential files.
A group of file control utilities existed for copying and modifying files, and for controlling file parameters.
DX10 Features
DX10 offered a number of features that provided convenient use of the minicomputer's capabilities:
Easy system generation for systems with custom device configurations. With proper preparation, peripheral devices that were not part of the 990 computer family could be interfaced through DX10.
A macro assembler for translating assembly language programs into executable machine code.
A text editor for entering source code or data into accessible files.
Support of high-level languages, including Fortran, COBOL, Pascal, RPG II, and BASIC.
A link editor and extended debugging facilities were provided to further support program development.
References
External links
Dave Pitts' TI 990 page — Includes a simulator and DX10 Operating System images.
Proprietary operating systems
Texas Instruments |
12858749 | https://en.wikipedia.org/wiki/George%20Morrow%20%28computers%29 | George Morrow (computers) | George Morrow (January 30, 1934 – May 7, 2003) was part of the early microcomputer industry in the United States. Morrow promoted and improved the S-100 bus used in many early microcomputers. Called "one of the microcomputer industry's iconoclasts" by Richard Dalton in the Whole Earth Software Catalog, Morrow was also a member of the Homebrew Computer Club.
Early life and education
Born in Detroit in 1934, Morrow was a high school dropout. At the age of 28, he decided to return to school, receiving a bachelor's degree in physics from Stanford University, followed by a master's degree in mathematics from the University of Oklahoma. He sought a PhD in mathematics from UC Berkeley, but while there became fascinated by computers and began working as a programmer in the computer lab there. Meanwhile, the Altair 8800 made its debut in 1975, and Morrow began attending meetings of the Homebrew Computer Club.
Career
Starting in 1976, he designed and sold computers, computer parts, and accessories under several company names, including Thinker Toys (changed after CBS threatened a suit as it was too close to their trademark Tinker Toys) and restarted the business as Morrow Designs. His initial product was an Intel 8080 board with an octal-notation keypad, but it proved unappealing to hobbyists who preferred the binary notation and flip switches of the Altair 8800. Afterwards, he attempted a 16-bit machine based on the National Semiconductor PACE CPU with the help of Bill Godbout, Chuck Grant, and Mark Greenberg. Differences between him and the latter two led to their leaving to found North Star Computers. He then sold 4 KB S-100 memory boards before attempting a new computer with Howard Fulmer in 1977.
The Equinox 100 was a powerful machine in an attractive cabinet, but it failed to attract much attention because it used an 8080 at a time when the Z80 was rapidly taking over. Morrow turned to selling floppy drives for S-100 machines; the package, which proved quite popular, included an 8" external drive, a controller board, CP/M, and CBASIC. In 1982, he issued the Morrow Micro Decision line, a group of single-board Z80 machines designed to answer the high price of computer hardware. A single-drive 200k system sold for under $2000 equipped with a terminal, which placed it squarely in competition with other CP/M systems. The Micro Decisions were respectable business machines with "no sex appeal" but an extensive software bundle, and came in a desktop case like that of the IBM Displaywriter they were intended to compete against.
The Micro Decision series was introduced in late 1982 and offered with either one or two single-sided, 3/4-height floppy drives, using a 40-track disk format with five 1024-byte sectors per track for an unformatted capacity of about 200k. The floppy controller in the Micro Decision was based around the NEC µPD765 FDC found in the IBM PC rather than the more common WD 17xx series FDCs. Console I/O was provided by a Lear Siegler ADM-20 terminal. The ADM-20 had graphics capability, but since it lacked any provision for switching out of graphics mode short of power cycling the terminal, Morrow did not support the use of this feature. Later on, Morrow offered Liberty 50 terminals and officially supported the use of graphics on them. Early Micro Decisions had no Centronics port and used one of two RS-232 ports for connecting the terminal and a printer or modem. The DIP switches that set the baud rate of the ports could only be reached by taking the cover off. A connector for attaching two external floppy drives was also provided.
The Micro Decision went through two major PCB revisions and three case revisions. The version 2.0 PCB was introduced in spring 1983 and added improved data-separation circuitry to the floppy controller. The 34-pin external floppy port was changed to a Centronics port, and adding a third and fourth floppy drive required putting them on the internal chain inside the case. Early Micro Decisions had a power supply that was inadequate for more than two internal floppy drives; the version 2.0 PCB came with a more substantial PSU that also had a detachable power cord. There were several ROM and CP/M revisions as well; all ROM revisions except the final one were unable to access floppy drive #4 due to a bug. The final ROM revision (v3.1) also incorporated several OS features that had previously been provided on disk. The version 2.0 PCB also included a 40-pin expansion connector. An Intel 8253 timer was added to provide more flexible setting of the baud rates on the RS-232 ports. Although on paper the UART serial controller chip could operate at 19,200 baud, a design flaw in the serial port circuitry prevented the use of speeds greater than 9600 bps. At the same time as the version 2.0 PCBs were introduced, Morrow also began offering the MD-3, which had two double-sided, half-height floppy drives for a 400k storage capacity.
The final machine in the MD series was the MD-11, a substantially upgraded machine with 128k of memory, CP/M 3.0, and an optional 10MB hard disk.
The CP/M platform was rapidly displaced by the newer (yet very similar) MS-DOS/PC DOS platforms. The 16-bit 8086 architecture of the new machines enabled them to break the 64 KB RAM limit of CP/M and address up to a megabyte of RAM. CP/M's fortunes were not helped by the user-oriented marketing of Microsoft and IBM and the lack of the same from Digital Research.
In 1985, Morrow released its first IBM-compatible computer, a lunchbox portable known as the Morrow Pivot II (based on its unique form factor where neither the keyboard nor the monitor folded away from the case). Produced by an outside OEM manufacturer, the same model was licensed by Morrow to Zenith Data Systems, who sold it as the Z-171. In addition to its lower cost and more prominent brand name, Zenith won an extremely profitable contract to sell computers to the US government, after the president of Morrow Designs left to go to work for Zenith. Morrow filed for bankruptcy by the end of the year.
Following the collapse of his computer businesses, Morrow devoted the rest of his life to his hobby of collecting original 78 RPM jazz and dance records from the 1920s and 1930s. Until his death, he digitally transcribed and restored thousands of recordings using a computer system he developed, reissuing them under his Old Masters label. He died in May 2003 from aplastic anemia.
See also
Bill Godbout
Cromemco
Computer Chronicles
References
External links
Morrow documentation at bitsavers.org
Quotations from Chairman Morrow
American computer scientists
1934 births
2003 deaths |
641111 | https://en.wikipedia.org/wiki/.dwg | .dwg | DWG (from drawing) is a proprietary binary file format used for storing two- and three- dimensional design data and metadata. It is the native format for several CAD packages including DraftSight, AutoCAD, BricsCAD, IntelliCAD (and its variants), Caddie and Open Design Alliance compliant applications. In addition, DWG is supported non-natively by many other CAD applications. The .bak (drawing backup), .dws (drawing standards), .dwt (drawing template) and .sv$ (temporary automatic save) files are also DWG files.
Version history
History
DWG (denoted by the .dwg filename extension) was the native file format for the Interact CAD package, developed by Mike Riddle in the late 1970s, and subsequently licensed by Autodesk in 1982 as the basis for AutoCAD. From 1982 to 2009, Autodesk created versions of AutoCAD which wrote no fewer than 18 major variants of the DWG file format, none of which is publicly documented.
The DWG format is probably the most widely used format for CAD drawings. Autodesk estimates that in 1998 there were in excess of two billion DWG files in existence.
There are several claims to control of the DWG format. As the biggest and most influential creator of DWG files, it is Autodesk who designs, defines, and iterates the DWG format as the native format for their CAD applications. Autodesk sells a read/write library, called RealDWG, under selective licensing terms for use in non-competitive applications. Several companies have attempted to reverse engineer Autodesk's DWG format and offer software libraries to read and write Autodesk DWG files. The most successful is the Open Design Alliance, a non-profit consortium created in 1998 by a number of software developers (including competitors to Autodesk), which released a read/write/view library called the OpenDWG Toolkit, based on the MarComp AUTODIRECT libraries. (ODA has since rewritten and updated that code.)
In 1998, Autodesk added file verification to AutoCAD R14.01, through a function called DWGCHECK. This function was supported by an encrypted checksum and product code (called a "watermark" by Autodesk), written into DWG files created by the program. In 2006 Autodesk modified AutoCAD 2007, to include "TrustedDWG technology", a function which would embed a text string within DWG files written by the program: "Autodesk DWG. This file is a Trusted DWG last saved by an Autodesk application or Autodesk licensed application." This helped Autodesk software users ensure that the files they were opening were created by an Autodesk, or RealDWG application, reducing risk of incompatibilities. AutoCAD would pop up a message, warning of potential stability problems, if a user opened a 2007 version DWG file which did not include this text string.
In 2008 the Free Software Foundation asserted the need for an open replacement for the DWG format, as neither RealDWG nor DWGdirect are licensed on terms that are compatible with free software license like the GNU GPL. Therefore, the FSF placed the goal 'Replacement for OpenDWG libraries' in 10th place on their High Priority Free Software Projects list. Created in late 2009, GNU LibreDWG is a free software library released under the terms of the GNU GPLv3 license. It can read DWG files from version R13 up to 2021, and write R2000 DWG files.
Also in 2008 Autodesk and Bentley Systems agreed on exchange of software libraries, including Autodesk RealDWG, to improve the ability to read and write the companies' respective DWG and DGN formats in mixed environments with greater fidelity.
In addition, the two companies will facilitate work process interoperability between their AEC applications through supporting the reciprocal use of available Application Programming Interfaces (APIs).
Autodesk trademark
On November 13, 2006, Autodesk sued the Open Design Alliance alleging that its DWGdirect libraries infringed Autodesk's trademark for the word "Autodesk", by writing the TrustedDWG watermark (including the word "AutoCAD") into DWG files it created. Nine days later, Autodesk's attorneys won a broad and deep temporary restraining order against the Open Design Alliance. In April 2007, the suit was settled, essentially on Autodesk's terms, with Autodesk modifying the warning message in AutoCAD 2008 (to make it somewhat less alarming), and the Open Design Alliance removing support for writing the TrustedDWG watermark from its DWGdirect libraries. The effect of the temporary restraining order and subsequent consent decree was, from one point of view, to render the Open Design Alliance's DWGdirect libraries incapable of creating DWG files that are 100% compatible with AutoCAD. Others point out that the failure of "100% compatibility" means only that loading such a drawing triggers an essentially irrelevant warning message when the file is opened in AutoCAD.
In 2006, Autodesk applied for registration of US trademarks on "DWG", "DWG EXTREME", "DWG TRUECONVERT", "REALDWG", "DWGX", "DWG TRUEVIEW".
As early as 1996, Autodesk had disclaimed exclusive use of the DWG mark in US trademark filings. Out of these applications, only TRUSTEDDWG has been registered as a trademark by the USPTO. The REALDWG and DWGX registrations were opposed by SolidWorks. The DWG EXTREME, DWG TRUECONVERT, and DWG TRUEVIEW trademark registration applications all received substantial resistance, with the USPTO examining attorney requiring Autodesk to disclaim exclusive use of DWG as a condition for their registration.
In a non-final action in May 2007, the USPTO examining attorney refused to register the two DWG marks, as they are "merely descriptive" of the use of DWG as a file format name. In September 2007, Autodesk responded, claiming that DWG has gained a "secondary meaning," separate from its use as a generic file format name.
As of June 22, 2008, all of Autodesk's DWG-related trademark registration proceedings were suspended by the USPTO, pending disposition of trademark opposition and cancellation petitions Autodesk had filed against the Open Design Alliance and Dassault Systèmes SolidWorks Corporation. The USPTO office actions notifying Autodesk of this noted that Autodesk was not the exclusive source of files with the format name DWG, and Autodesk does not control the use of DWG by others, either as a trademark or as a file format name, among other points.
In 2006, Autodesk filed an opposition with the USPTO to the trademark registration of DWGGATEWAY by SolidWorks. Autodesk subsequently filed a petition for cancellation of SolidWorks' trademark registration for DWGEDITOR. In both cases, Autodesk's basis was that they had "been using the DWG name with its CAD software products since at least as early as 1983." The opposition and cancellation actions were consolidated, and suspended pending disposition of Autodesk's US District Court suit against SolidWorks.
In early 2007, Autodesk petitioned the USPTO to cancel the Open Design Alliance's "OpenDWG" trademarks, claiming that they had been abandoned. This cancellation action was suspended pending disposition of Autodesk's US District Court suit against SolidWorks.
In 2008, Autodesk sued SolidWorks in US District Court, arguing that through its marketing efforts, the term "DWG" has lost its original generic meaning and taken on a secondary meaning referring specifically to Autodesk's proprietary drawing file format, and therefore any use of "DWG" in competitive products amounted to trademark infringement. In January 2010, on the morning that trial was scheduled to begin, Autodesk and SolidWorks settled the suit, with SolidWorks acknowledging Autodesk's trademark rights for DWG, surrendering its trademark registrations for its DWG related projects, and withdrawing its opposition to Autodesk's DWG-related trademark registrations.
In April 2010, Autodesk and the Open Design Alliance settled their suit, with the Open Design Alliance agreeing to cancel its DWG-based trademark registrations and cease use of DWG and DWG-based trademarks in its product marketing and branding. Because there was no adjudication in either case, the agreements between the parties are not binding upon the USPTO. In March 2010, the Office of the Deputy Commissioner for Trademark Examination Policy at the USPTO determined that evidence submitted by the Open Design Alliance two years earlier was relevant and supported a reasonable ground for refusal to register DWG as a trademark.
In June 2011 the USPTO issued a final refusal to register DWG as a trademark owned by Autodesk.
Autodesk appealed the decision. The USPTO affirmed in 2013 their refusal to recognise DWG as a trademark. Despite this, Autodesk websites still claimed DWG as a trademark after the decision.
In late 2014 Autodesk again lost, this time at the United States District Court for the Eastern District of Virginia. The judge dismissed all their arguments.
In 2015, Autodesk's website had a section titled "About DWG", in which the company tried to draw a distinction between .dwg as a file format and the DWG technology environment.
DWG support in freemium and free software
As neither RealDWG nor DWGdirect are licensed on terms that are compatible with free software licenses like the GNU GPL, in 2008 the Free Software Foundation asserted the need for an open replacement for the DWG format. Therefore, the FSF placed the goal 'Replacement for OpenDWG libraries' in 10th place on their High Priority Free Software Projects list. Forked in late 2009 from libDWG, GNU LibreDWG can read all DWG files from version R13 on. But the LibreDWG library, offered under the GNU GPLv3, could initially not be used by most targeted FOSS graphic software, such as FreeCAD, LibreCAD and Blender, because of a GPLv2/GPLv3 license incompatibility. A GPLv2 licensed alternative is the libdxfrw project, which can read simple DWGs. The licenses of some of these CAD programs were only recently changed so that they can use LibreDWG's GPLv3 code.
FreeCAD is a free and open-source application that can work with DWG files by using the proprietary ODA File Converter for .dwg and .dxf files from the Open Design Alliance (ODA). The ODA also provides a freeware stand-alone viewer for .dwg and .dgn files, ODA Drawings Explorer, which runs on Windows, Linux, and Mac OS X.
LibreCAD is a free and open-source 2D CAD application that can open DWG and DXF files using its own library.
Autodesk DWG TrueView is a freeware, closed source, stand-alone DWG viewer with DWG TrueConvert software included, built on the same viewing engine as AutoCAD software. The freeware Autodesk Design Review software adds a possibility to open DWG files in Design Review to take advantage of measure and markup capabilities, sheet set organization, and status tracking.
See also
AutoCAD DXF
BricsCAD
CAD
Comparison of CAD software
Comparison of CAD, CAM and CAE file viewers
Design Web Format
FreeCAD
GstarCAD
IntelliCAD
LibreDWG
LibreCAD
OpenDWG
Open Design Alliance
ShareCAD
References
External links
LibreDWG is a work in progress developing Free Software libraries to support DWG files.
Teigha is a software development platform used to create engineering applications including CAD with native support of .dwg and .dgn files.
Specification of the .dwg file format provided by Open Design Alliance.
cad-blocks Example .dwg architecture files.
Autodesk
DWG
DWG
DWG
Open formats |
31314339 | https://en.wikipedia.org/wiki/Proteus%20%28programming%20language%29 | Proteus (programming language) | Proteus (PROcessor for TExt Easy to USe) is a fully functional, procedural programming language created in 1998 by Simone Zanella. Proteus incorporates many functions derived from several other
languages: C, BASIC, Assembly, Clipper/dBase;
it is especially versatile in dealing with strings, having hundreds of dedicated functions; this makes it one of the richest languages for text manipulation.
Proteus owes its name to a Greek god of the sea (Proteus), who took care of Neptune's crowd and gave responses; he was renowned for being able to transform himself, assuming different shapes. Transforming data from one form to another is the main usage of this language.
Introduction
Proteus was initially created as a multiplatform (DOS, Windows, Unix) system utility, to manipulate text and binary files and to create CGI scripts. The language was later focused on Windows, by
adding hundreds of specialized functions for: network and serial communication, database interrogation, system service creation, console applications, keyboard emulation, ISAPI scripting (for IIS).
Most of these additional functions are only available in the Windows flavour of the interpreter, even though a Linux
version is still available.
Proteus was designed to be practical (easy to use, efficient, complete), readable and consistent.
Its strongest points are:
powerful string manipulation;
comprehensibility of Proteus scripts;
availability of advanced data structures: arrays, queues (single or double), stacks, bit maps, sets, AVL trees.
The language can be extended by adding user functions written in Proteus or DLLs created in C/C++.
Language features
At first sight, Proteus may appear similar to Basic because of its straight syntax, but similarities are limited
to the surface:
Proteus has a fully functional, procedural approach;
variables are untyped, do not need to be declared, can be local or public and can be passed by value or by reference;
all the typical control structures are available (if-then-else; for-next; while-loop; repeat-until; switch-case);
new functions can be defined and used as native functions.
Data types supported by Proteus are only three: integer numbers, floating point numbers and strings.
Access to advanced data structures (files, arrays, queues, stacks, AVL trees, sets and so on) takes place by
using handles, i.e. integer numbers returned by item creation functions.
Type declaration is unnecessary: variable type is determined by the function applied – Proteus converts on the fly
every variable when needed and holds previous data renderings, to avoid performance degradation caused by
repeated conversions.
There is no need to add parentheses in expressions to determine the evaluation order, because the language is fully functional (there are no operators).
Proteus includes hundreds of functions for:
accessing file system;
sorting data;
manipulating dates and strings;
interacting with the user (console functions)
calculating logical and mathematical expressions.
Proteus supports associative arrays (called sets) and AVL trees, which are very useful and powerful to quickly sort
and lookup values.
Two types of regular expressions are supported:
extended (Unix like);
basic (Dos like, having just the wildcards "?" and "*").
Both types of expressions can be used to parse and compare data.
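As a rough analogy in Python rather than Proteus (no Proteus source is quoted here), the two styles can be compared by matching the same file name with an extended regular expression and with a DOS-style wildcard pattern; the file name and patterns are arbitrary examples.

import fnmatch
import re

name = "report_2024.txt"

# Extended (Unix-like) regular expression.
print(bool(re.fullmatch(r"report_\d{4}\.txt", name)))  # True

# Basic (DOS-like) pattern using only the wildcards "?" and "*".
print(fnmatch.fnmatch(name, "report_*.t?t"))  # True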
The functional approach and the extensive library of built-in functions allow the programmer to write very short but powerful scripts; to keep them comprehensible, medium-length keywords were adopted.
The user, besides writing new high-level functions in Proteus, can add new functions in C/C++ by following the
guidelines and using the templates available in the software development kit; the new functions can be invoked
exactly the same way as the predefined ones, passing expressions by value or variables by reference.
Proteus is an interpreted language: programs are loaded into memory, pre-compiled and run; since the number of
built-in functions is large, execution speed is usually very good and often comparable to that of compiled programs.
One of the most interesting features of Proteus is the possibility of running scripts as services or
ISAPI scripts.
Running a Proteus script as a service, started as soon as the operating system has finished loading, gives many
advantages:
no user needs to log in to start the script;
a service can be run with different privileges so that it cannot be stopped by a user.
This is very useful to protect critical processes in industrial environments (data collection, device monitoring),
or to prevent the operator from inadvertently closing a utility (keyboard emulation).
The ISAPI version of Proteus can be used to create scripts run through Internet Information Services and is
equipped with specific functions to cooperate with the web server.
For intellectual property protection Proteus provides:
script encryption;
digital signature of the scripts, by using the development key (which is unique);
the option to enable or disable the execution of a script (or part of it) by using the key of the customer.
Proteus is appreciated because it makes it relatively easy to write short, powerful and comprehensible scripts; the large number of built-in functions, together with the examples in the manual, keeps the learning curve gentle.
The development environment includes a source code editor with syntax highlighting and a context-sensitive guide.
Proteus does not need to be installed: the interpreter is a single executable (below 400 Kb) that
does not require additional DLLs to be run on recent Windows systems.
Synopsis and licensing
The main features of this language are:
fully functional, procedural language;
multi-language support: Proteus is available in several languages (keywords and messages);
no data types: all variables can be used as integer numbers, floating point numbers or strings; variables are interpreted according to the functions being applied – Proteus keeps different representations of their values between calls, to decrease execution time in case of frequent conversions between one type and the other;
no pre-allocated structures: all data used by Proteus are dynamically allocated at execution time; there are no limits on: recursion, maximum data size, number of variables, etc.;
no operators: Proteus is a completely functional language – there are no operators; thus, there is no ambiguity when evaluating expressions and parenthesis are not needed;
large library of predefined functions: Proteus is not a toy-language, it comes with hundreds of library functions ready to be used for working on strings, dates, numbers, for sorting, searching and so on;
advanced data access (DAO), pipes, Windows sockets, serial ports: in the Windows version, Proteus includes hundreds of system calls which are operating system-specific;
clear and comprehensible syntax: the names of the library functions resemble those of corresponding functions in C, Clipper/Flagship and Assembly; by using medium-length keywords, Proteus programs are very easy to understand;
native support for high-level data structures: arrays, queues (single or double), stacks, bit maps, sets, AVL trees are already available in Proteus and do not require additional code or libraries to be used;
ISAPI DLL and Windows Service versions: Proteus is available as a Windows service or as an ISAPI DLL (for using together with Microsoft Internet Information Server);
user libraries: it is possible to write user defined functions (UDF) in separate files, and include them (even conditionally and recursively) inside new programs; UDFs can be referenced before or after the definition; it is also possible to write external functions in Visual C++ and invoke them from a Proteus script;
native support for Ms-Dos/Windows, Macintosh and Unix text files (all versions);
three models for dates (English, American, Japanese), with functions to check them and to do calculations according to the Gregorian calendar;
epoch setting for 2-digit-year dates;
support for time in 12- and 24-hour format;
support for simple (Dos-like) and extended (Unix-like) regular expressions, in all versions;
intellectual property protection, by using digital signature and cryptography;
extensive library of functions to write interactive console programs.
Proteus is available in a demo version (script execution limited to three minutes) and a registered version, protected by a USB dongle. At the moment, it is available as a Windows or Ubuntu package and is distributed by SZP.
Example programs
Hello World
The following example prints out "Hello world!".
CONSOLELN "Hello World!"
Extract two fields
The following example reads the standard input (CSV format, separator ";") and prints out the first two fields separated by "|":
CONSOLELN TOKEN(L, 1, ";") "|" TOKEN(L, 2, ";")
By default, Proteus scripts read an input file and write to an output file; the predefined identifier L receives the value of each input line in turn. The function TOKEN returns the requested item of the string; the third parameter is the delimiter. String concatenation is implicit.
The same program can be written in this way:
H = TOKNEW(L, ";")
CONSOLELN TOKGET(H, 1) "|" TOKGET(H, 2)
TOKFREE(H)
In this case, TOKNEW builds the list of tokens in the line and returns a handle to it, and TOKGET retrieves the individual tokens; this is more efficient when several items in the string need to be accessed.
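For readers more familiar with mainstream scripting languages, a rough Python equivalent of the same filter is shown below; it only illustrates the behaviour described above (reading standard input line by line, splitting on ";" and printing the first two fields separated by "|") and says nothing about how Proteus itself is implemented.
import sys
# Read ";"-separated lines from standard input and print the first
# two fields separated by "|", mirroring the Proteus one-liner.
for line in sys.stdin:
    fields = line.rstrip("\n").split(";")
    if len(fields) >= 2:
        print(fields[0] + "|" + fields[1])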
External links
Proteus user manual
Text-oriented programming languages |
50701110 | https://en.wikipedia.org/wiki/Language%20documentation%20tools%20and%20methods | Language documentation tools and methods | The field of language documentation in the modern context involves a complex and ever-evolving set of tools and methods, and the study and development of their use - and, especially, identification and promotion of best practices - can be considered a sub-field of language documentation proper. Among these are ethical and recording principles, workflows and methods, hardware tools, and software tools.
Principles and workflows
Researchers in language documentation often conduct linguistic fieldwork to gather the data on which their work is based, recording audiovisual files that document language use in traditional contexts. Because the environments in which linguistic fieldwork often takes place may be logistically challenging, not every type of recording tool is necessary or ideal, and compromises must often be struck between quality, cost and usability. It is also important to envision one's complete workflow and intended outcomes; for example, if video files are made, some amount of processing may be required to expose the audio component to processing in various ways by different software packages.
Ethics
Ethical practices in language documentation have been the focus of much recent discussion and debate. The Linguistic Society of America has prepared an Ethics Statement, and maintains an Ethics Discussion Blog which is primarily focused on ethics in the language documentation context. The morality of ethics protocols has itself been brought into question by George van Driem. Most postgraduate programs that involve some form of language documentation and description require researchers to submit their proposed protocols to an internal Institutional Review Board which ensures that research is being conducted ethically. Minimally, participants should be informed of the process and the intended use of the recordings, and give recorded audible or written permission for the audiovisual materials to be used for linguistic investigation by the researcher(s). Many participants will want to be named as consultants, but others will not - this will determine whether the data needs to be anonymized or restricted from public access.
Data Formats
Adhering to standards for formats is critical for interoperability between software tools. Many individual archives or data repositories have their own standards and requirements for data deposited on their servers - knowledge of these requirements ought to inform the data collection strategy and tools used, and should be part of a data management plan developed before the start of research. Some example guidelines from well-used repositories are given below:
Endangered Languages Archive (ELAR) guidelines
Max Planck Institute Archive accepted formats
Yale University Library audiovisual guidelines
Most current archive standards for video use MPEG-4 (H264) as an encoding or storage format, which includes an AAC audio stream (generally of up to 320 kbit/s). Audio archive quality is at least WAV 44.1 kHz, 16-bit.
Principles for recording
Since the documentation of languages is often difficult, and many of the languages that linguists work with are endangered (they may not be spoken in the near future), it is recommended to record at the highest quality possible given the limitations of a recorder. For video, this means recording at HD resolution (1080p or 720p) or higher when possible, while for audio it means recording minimally in uncompressed PCM, 44,100 samples per second, 16-bit resolution. Arguably, however, good recording technique (isolation, microphone selection and usage, using a tripod to minimize blur) is more important than resolution. A microphone that gives a clear recording of a speaker telling a folktale (high signal/noise ratio) in MP3 format (perhaps via a phone) is better than an extremely noisy recording in WAV format where all that can be heard are cars going by. To ensure that good recordings can be obtained, linguists should practice with their recording devices as much as possible and compare the results to observe which techniques yield the best results.
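As a quick, automated check against these minimums, a short script can inspect a recording's parameters before it is archived. The sketch below uses only Python's standard wave module; the file name is a placeholder, and the 44.1 kHz / 16-bit threshold simply restates the figures given above.
import wave
# Placeholder file name; substitute the path of an actual recording.
with wave.open("session_recording.wav", "rb") as w:
    rate = w.getframerate()        # samples per second
    bits = w.getsampwidth() * 8    # sample width in bytes, converted to bits
    channels = w.getnchannels()
print(rate, "Hz,", bits, "bit,", channels, "channel(s)")
if rate < 44100 or bits < 16:
    print("Warning: below the common archive minimum of 44.1 kHz, 16-bit PCM")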
Workflows
For many linguists the end-result of making recordings is language analysis, often investigation of a language's phonological or syntactic properties using various software tools. This requires transcription of the audio, generally in collaboration with native speakers of the language in question. For general transcription, media files can be played back on a computer (or other device capable of playback) and paused for transcription in a text editor. Other (cross-platform) tools to assist this process include Audacity and Transcriber, while a program like ELAN (described further below) can also perform this function.
Programs like Toolbox or FLEx are often preferred by linguists who want to be able to interlinearize their texts, as these programs build a dictionary of forms and parsing rules to help speed up analysis. Unfortunately, media files are generally not linked by these programs (as opposed to ELAN, in which linked files are preferred), making it difficult to view or listen back to recordings to check transcriptions. There is currently a workaround for Toolbox that allows timecodes to reference an audio file and enable playback (of a complete text or a referenced sentence) from within Toolbox - in this workflow, time-alignment of text is performed in Transcriber, and then the relevant timecodes and text are converted into a format that Toolbox can read.
Hardware
Video+audio recorders
Recorders that capture video typically record audio as well. However, the audio does not always meet the minimum requirements and recommended best practices for language documentation (uncompressed WAV format, 44.1 kHz, 16-bit), and is often not useful for linguistic purposes such as phonetic analysis. Many video devices instead record to a compressed audio format such as AAC or MP3, which is combined with the video stream in a wrapper of various kinds. Exceptions to this general rule are the following Video+Audio recorders:
The Zoom series, particularly the Q8, Q4n, and Q2n, which record to multiple video and audio resolutions/formats, most notably WAV (44.1/48/96 kHz, 16/24-bit).
When using a video recorder that does not record audio in WAV format (such as most DSLR cameras), it is recommended to record audio separately on another recorder, following some of the guidelines below. As with the audio recorders described below, many video recorders also accept microphone input of various kinds (generally through a 1/8-inch or TRS connector) - this can ensure a high-quality backup audio recording that is in sync with the recorded video, which can be helpful in some cases (e.g. for transcription).
Audio recorders and microphones
Audio-only recorders can be used in scenarios where video is impractical or otherwise undesirable. In most cases it is advantageous to combine the use of an audio-only recorder with one or more external microphones; however, many modern audio recorders include built-in microphones which are usable if cost or setup speed are important concerns. Digital (solid state) recorders are preferred for most language documentation scenarios. Modern digital recorders achieve a very high level of quality at a relatively low price. Some of the most popular field recorders are found in the Zoom range, including the H1, H2, H4, H5 and H6. The H1 is particularly suitable for situations in which cost and user-friendliness are major desiderata. Other popular recorders for situations where size is a factor are the Olympus LS-series and the Sony Digital Voice recorders (though in the latter case, ensure that the device can record to WAV/Linear PCM format).
Several types of microphone can be effectively used in language documentation scenarios, depending on the situation (especially, including factors such as number, position and mobility of speakers) and on budget. In general, condenser microphones should be selected rather than dynamic microphones. It is an advantage in most fieldwork situations if a condenser microphone is self-powered (via a battery); however, when power is not a major factor, phantom-powered models can also be used. A stereo microphone setup is needed whenever more than one speaker is involved in a recording; this can be achieved via an array of two mono microphones, or by a dedicated stereo microphone.
Directional microphones should be used in most cases, in order to isolate a speaker's voice from other potential noise sources. However, omnidirectional microphones may be preferred in situations involving larger numbers of speakers arrayed in a relatively large space. Among directional microphones, cardioid microphones are suitable for most applications, however in some cases a hypercardioid ("shotgun") microphone may be preferred.
Good quality headset microphones are comparatively expensive, but can produce recordings of extremely high quality in controlled situations. Lavalier or "lapel" microphones may be used in some situations, however, depending on the microphone they can produce recordings which are inferior to a headset microphone for phonetic analysis, and are subject to some of the same concerns that headset microphones are in terms of restriction of a recording to a single speaker - while other speakers may be audible on the recording, they will be backgrounded in relation to the speaker wearing the lavalier microphone.
Some good quality microphones used for film-making and interviews include the Røde VideoMic shotgun and the Røde lavalier series, Shure headworn mics and Shure lavaliers. Depending on the recorder and microphone, additional cables (XLR, stereo/mono converter or a TRRS to TRS adapter) will be necessary.
Other recording tools
Electrical power generation, storage and management
Computer systems
Accessories
Software
There is as yet no single software suite which is designed to or able to handle all aspects of a typical language documentation workflow. Instead, there is a large and increasing number of packages designed to handle various aspects of the workflow, many of which overlap considerably. Some of these packages use standard formats and are inter-operable, whereas others are much less so.
SayMore
SayMore is a language documentation package developed by SIL International in Dallas which primarily focuses on the initial stages in language documentation, and aims for a relatively uncomplicated user experience.
The primary functions of SayMore are: (a) audio recording (b) file import from recording device (video and/or audio) (c) file organization (d) metadata entry at session and file levels (e) association of AV files with evidence of informed consent and other supplementary objects (such as photographs) (f) AV file segmentation (g) transcription/translation (h) BOLD-style Careful Speech annotation and Oral Translation.
SayMore files can be further exported for annotation in FLEx, and metadata can be exported in .csv and IMDI formats for archiving.
ELAN
ELAN is developed by The Language Archive at the Max Planck Institute for Psycholinguistics in Nijmegen. ELAN is a full-featured transcription tool, particularly useful for researchers with complex annotation needs/goals.
FLEx
FieldWorks Language Explorer (FLEx) is developed by SIL International (formerly the Summer Institute of Linguistics, Inc.) in Dallas. FLEx allows the user to build a "lexicon" of the language, i.e. a word-list with definitions and grammatical information, and also to store texts from the language. Within the texts, each word or part of a word (i.e. a "morpheme") is linked to an entry in the lexicon. For new projects and for students learning for the first time, FLEx is now the best tool for interlinearising and dictionary-making.
Toolbox
Field Linguist's Toolbox (usually called Toolbox) is a precursor of FLEx and has been one of the most widely used language documentation packages for some decades. Previously known as Shoebox, Toolbox's primary functions are construction of a lexical database, and interlinearization of texts through interaction with the lexical database. Both lexical database and texts can be exported to a word processing environment, in the case of the lexical database using the Multi-Dictionary Formatter (MDF) conversion tool. It is also possible to use Toolbox as a transcription environment. By comparison with ELAN and FLEx, Toolbox has relatively limited functionality, and is felt by some to have an unintuitive design and interface. However, a large number of projects have been carried out in the Shoebox/Toolbox environment over its lifespan, and its user base continues to enjoy its advantages of familiarity, speed, and community support. Toolbox also has the advantage of working directly with human-readable text files that can be opened in any text editor and easily manipulated and archived. Toolbox files can also be easily converted for storage in XML (recommended for archives), such as with open-source Python libraries like Xigt, intended for computational uses of IGT data.
Tools for automating components of the workflow
Language documentation may be partially automated thanks to a number of software tools, including:
eSpeak
HTK
Lingua Libre, a libre online tool that allows recording a large number of words and phrases in a short period (up to 1,000 words/hour with a clean word list and an experienced user). It automates the classic procedure for recording audio and video pronunciation files (for spoken and signed languages). Once the recording is done, the platform automatically uploads clean, well-cut, well-named and app-friendly files directly to Wikimedia Commons (it is possible to download datasets for a specific language).
Maus
Prosodylab Aligner
Sox
Literature
The peer-reviewed journal Language Documentation and Conservation has published a large number of articles focusing on tools and methods in language documentation.
See also
LRE Map Language resources map
Searchable by Resource Type, Language(s), Language type, Modality, Resource Use, Availability, Production Status, Conference(s), Resource name
Richard Littauer's GitHub catalog
A catalog of "open-source code that would be useful for documenting, conserving, developing, preserving, or working with endangered languages".
RNLD software page
Research Network for Linguistic Diversity's page on linguistic software.
References
Language documentation |
21838352 | https://en.wikipedia.org/wiki/League%20of%20Legends | League of Legends | League of Legends (LoL), commonly referred to as League, is a 2009 multiplayer online battle arena video game developed and published by Riot Games. Inspired by Defense of the Ancients, a custom map for Warcraft III, Riot's founders sought to develop a stand-alone game in the same genre. Since its release in October 2009, the game has been free-to-play and is monetized through purchasable character customization. The game is available for Microsoft Windows and macOS.
In the game, two teams of five players battle in player versus player combat, each team occupying and defending their half of the map. Each of the ten players controls a character, known as a "champion", with unique abilities and differing styles of play. During a match, champions become more powerful by collecting experience points, earning gold, and purchasing items to defeat the opposing team. In the game's main mode, Summoner's Rift, a team wins by pushing through to the enemy base and destroying their "Nexus", a large structure located within.
League of Legends received generally positive reviews; critics highlighted its accessibility, character designs, and production value. The game's long lifespan has resulted in a critical reappraisal, with reviews trending positively; the negative and abusive in-game behavior of its players, criticized since early in the game's lifetime, persists despite Riot's attempts to fix the problem. In 2019, the game regularly peaked at eight million concurrent players, and its popularity has led to tie-ins such as music videos, comic books, short stories, and an animated series, Arcane. Its success has also spawned several spin-off video games, including a mobile version, a digital collectible card game and a turn-based role-playing game, among others. A massively multiplayer online role-playing game based on the property is in development.
Regularly cited as the world's largest esport, the game has an international competitive scene consisting of 12 leagues. These domestic leagues culminate in the annual League of Legends World Championship. The 2019 event registered over 100 million unique viewers, peaking at a concurrent viewership of 44 million. Domestic and international events have been broadcast on livestreaming websites such as Twitch, YouTube, Bilibili, and on cable television sports channel ESPN.
Gameplay
League of Legends is a multiplayer online battle arena (MOBA) game in which the player controls a character ("champion") with a set of unique abilities from an isometric perspective. There are 157 champions available to play. Over the course of a match, champions gain levels by accruing experience points (XP) through killing enemies. Items can be acquired to increase champions' strength, and are bought with gold, which players accrue passively over time and earn actively by defeating the opposing team's minions, champions, or defensive structures. In the main game mode, Summoner's Rift, items are purchased through a shop menu available to players only when their champion is in the team's base. Each match is discrete; levels and items do not transfer from one match to another.
Summoner's Rift
Summoner's Rift is the flagship game mode of League of Legends and the most prominent in professional-level play. The mode has a ranked competitive ladder; a matchmaking system determines a player's skill level and generates a starting rank from which they can climb. There are nine tiers; the least skilled are Iron, Bronze, and Silver, and the highest are Master, Grandmaster, and Challenger.
Two teams of five players compete to destroy the opposing team's "Nexus", which is guarded by the enemy champions and defensive structures known as "turrets". Each team's Nexus is located in their base, where players start the game and reappear after death. Non-player characters known as minions are generated from each team's Nexus and advance towards the enemy base along three lanes guarded by turrets: top, middle, and bottom. Each team's base contains three "inhibitors", one behind the third tower from the center of each lane. Destroying one of the enemy team's inhibitors causes stronger allied minions to spawn in that lane, and allows the attacking team to damage the enemy Nexus and the two turrets guarding it. The regions in between the lanes are collectively known as the "jungle", which is inhabited by "monsters" that, like minions, respawn at regular intervals. Like minions, monsters provide gold and XP when killed. Another, more powerful class of monster resides within the river that separates each team's jungle. These monsters require multiple players to defeat and grant special abilities to their slayers' team. For example, teams can gain a powerful allied unit after killing the Rift Herald, permanent strength boosts by killing dragons, and stronger, more durable minions by killing Baron Nashor.
Summoner's Rift matches can last from as little as 15 minutes to over an hour. Although the game does not enforce where players may go, conventions have arisen over the game's lifetime: typically one player goes in the top lane, one in the middle lane, one in the jungle, and two in the bottom lane. Players in a lane kill minions to accumulate gold and XP (termed "farming") and try to prevent their opponent from doing the same. A fifth champion, known as a "jungler", farms the jungle monsters and, when powerful enough, assists their teammates in a lane.
Other modes
Besides Summoner's Rift, League of Legends has two other permanent game modes. ARAM ("All Random, All Mid") is a five-versus-five mode like Summoner's Rift, but on a map called Howling Abyss with only one long lane, no jungle area, and with champions randomly chosen for players. Given the small size of the map, players must be vigilant in avoiding enemy abilities.
Teamfight Tactics is an auto battler released in June 2019 and made a permanent game mode the following month. As with others in its genre, players build a team and battle to be the last one standing. Players do not directly affect combat but position their units on a board for them to fight automatically against opponents each round. Teamfight Tactics is available for iOS and Android and has cross-platform play with the Windows and macOS clients.
Other game modes have been made available temporarily, typically aligning with in-game events. Ultra Rapid Fire (URF) mode was available for two weeks as a 2014 April Fools Day prank. In the mode, champion abilities have no resource cost, significantly reduced cooldown timers, increased movement speed, reduced healing, and faster attacks. A year later, in April 2015, Riot disclosed that they had not brought the mode back because its unbalanced design resulted in player "burnout". The developer also said the costs associated with maintaining and balancing URF were too high. Other temporary modes include One for All and Nexus Blitz. One for All has players pick a champion for all members of their team to play. In Nexus Blitz, players participate in a series of mini-games on a compressed map.
Development
Pre-release
Riot Games' founders Brandon Beck and Marc Merrill had an idea for a spiritual successor to Defense of the Ancients, known as DotA. A mod for Warcraft III: Reign of Chaos, DotA required players to buy Warcraft III and install custom software; The Washington Post's Brian Crecente said the mod "lacked a level of polish and was often hard to find and set up". Phillip Kollar of Polygon noted that Blizzard Entertainment supported Warcraft III with an expansion pack, then shifted their focus to other projects while the game still had players. Beck and Merrill sought to create a game that would be supported over a significantly longer period.
Beck and Merrill held a DotA tournament for students at the University of Southern California, with an ulterior goal of recruitment. There they met Jeff Jew, later a producer on League of Legends. Jew was very familiar with DotA and spent much of the tournament teaching others how to play. Beck and Merrill invited him to an interview, and he joined Riot Games as an intern. Beck and Merrill recruited two figures involved with DotA: Steve Feak, one of its designers, and Steve Mescon, who ran a support website to assist players. Feak said early development was highly iterative, comparing it to designing DotA.
A demonstration of League of Legends built in the Warcraft III game engine was completed in four months and then shown at the 2007 Game Developers Conference. There, Beck and Merrill had little success with potential investors. Publishers were confused by the game's free-to-play business model and lack of a single-player mode. The free-to-play model was untested outside of Asian markets, so publishers were primarily interested in a retail release, and the game's capacity for a sequel. In 2008, Riot reached an agreement with holding company Tencent to oversee the game's launch in China.
League of Legends was announced on October 7, 2008, for Microsoft Windows. Closed beta-testing began in April 2009. Upon the launch of the beta, seventeen champions were available. Riot initially aimed to ship the game with 20 champions but doubled the number before the game's full release in North America on October 27, 2009. The game's full name was announced as League of Legends: Clash of Fates. Riot planned to use the subtitle to signal when future content was available, but decided the subtitle was silly and dropped it before launch.
Post-release
League of Legends receives regular updates in the form of patches. Although previous games had utilized patches to ensure no one strategy dominated, League of Legends patches made keeping pace with the developer's changes a core part of the game. In 2014, Riot standardized their patch cadence to once approximately every two or three weeks.
The development team includes hundreds of game designers and artists. In 2016, the music team had four full-time composers and a team of producers creating audio for the game and its promotional materials. The game has over 150 champions, and Riot Games periodically overhauls the visuals and gameplay of the oldest in the roster. Although only available for Microsoft Windows at launch, a Mac version of the game was made available in March 2013.
Revenue model
League of Legends uses a free-to-play business model. Several forms of purely cosmetic customization—for example, "skins" that change the appearance of champions—can be acquired after buying an in-game currency called Riot Points (RP). Skins have five main pricing tiers, ranging from $4 to $25. As virtual goods, they have high profit margins. A loot box system has existed in the game since 2016; these are purchasable virtual "chests" with randomized, cosmetic items. These chests can be bought outright or acquired at a slower rate for free by playing the game. The practice has been criticized as a form of gambling. In 2019, Riot Games' CEO said that he hoped loot boxes would become less prevalent in the industry. Riot has also experimented with other forms of monetization. In August 2019, they announced an achievement system purchasable with Riot Points. The system was widely criticized for its high cost and low value.
In 2014, Ubisoft analyst Teut Weidemann said that only around 4% of players paid for cosmetics—significantly lower than the industry standard of 15 to 25%. He argued the game was only profitable because of its large player base. In 2017, the game had a revenue of US$2.1 billion; in 2018, a lower figure still positioned it as one of the highest-grossing games of that year. In 2019, the number rose to $1.5 billion, and again to $1.75 billion in 2020. According to the magazine Inc., players collectively played three billion hours every month in 2016.
Plot
Before 2014, players existed in-universe as political leaders, or "Summoners", commanding champions to fight on the Fields of Justice—for example, Summoner's Rift—to avert a catastrophic war. Sociologist Matt Watson said the plot and setting were bereft of the political themes found in other role-playing games, and presented in reductive "good versus evil" terms. In the game's early development, Riot did not hire writers, and designers wrote character biographies only a paragraph long.
In September 2014, Riot Games rebooted the game's fictional setting, removing summoners from the game's lore to avoid "creative stagnation". Luke Plunkett wrote for Kotaku that, although the change would upset long-term fans, it was necessary as the game's player base grew in size. Shortly after the reboot, Riot hired Warhammer writer Graham McNeill. Riot's storytellers and artists create flavor text, adding "richness" to the game, but very little of this is seen as a part of normal gameplay. Instead, that work supplies a foundation for the franchise's expansion into other media, such as comic books and spin-off video games. The Fields of Justice were replaced by a new fictional setting—a planet called Runeterra. The setting has elements from several genres—from Lovecraftian horror to traditional sword and sorcery fantasy.
Reception
League of Legends received generally favorable reviews on its initial release, according to review aggregator website Metacritic. Many publications noted the game's high replay value. Kotaku reviewer Brian Crecente admired how items altered champion play styles. Quintin Smith of Eurogamer concurred, praising the amount of experimentation offered by champions. Comparing it to Defense of the Ancients, Rick McCormick of GamesRadar+ said that playing League of Legends was "a vote for choice over refinement".
Given the game's origins, other reviewers frequently compared aspects of it to DotA. According to GamesRadar+ and GameSpot, League of Legends would feel familiar to those who had already played DotA. The game's inventive character design and lively colors distinguished the game from its competitors. Smith concluded his review by noting that, although there was not "much room for negativity", Riot's goal of refining DotA had not yet been realized.
Although Crecente praised the game's free-to-play model, GameSpy's Ryan Scott was critical of the grind required for non-paying players to unlock key gameplay elements, calling it unacceptable in a competitive game. Many outlets said the game was underdeveloped. A physical version of the game was available for purchase from retailers; GameSpot's Kevin VanOrd said it was an inadvisable purchase because the value included $10 of store credit for an unavailable store. German site GameStar noted that none of the bonuses in that version were available until the launch period had ended and refused to carry out a full review. IGN's Steve Butts compared the launch to the poor state of CrimeCraft's release earlier in 2009; he indicated that features available during League of Legends' beta were removed for the release, even for those who purchased the retail version. Matches took unnecessarily long to find, leaving players with long queue times, and GameRevolution mentioned frustrating bugs.
Some reviewers addressed toxicity in the game's early history. Crecente wrote that the community was "insular" and "whiney" when losing. Butts speculated that League of Legends inherited many of DotA's players, who had developed a reputation for being "notoriously hostile" to newcomers.
Reassessment
Regular updates to the game have resulted in a reappraisal by some outlets; IGN's second reviewer, Leah B. Jackson, explained that the website's original review had become "obsolete". Two publications increased their original scores: GameSpot from 6 to 9, and IGN from 8 to 9.2. The variety offered by the champion roster was described by Steven Strom of PC Gamer as "fascinating"; Jackson pointed to "memorable" characters and abilities. Although the items had originally been praised at release by other outlets such as Kotaku, Jackson's reassessment criticized the lack of item diversity and viability, noting that the items recommended to the player by the in-game shop were essentially required because of their strength.
While reviewers were pleased with the diverse array of play styles offered by champions and their abilities, Strom thought that the female characters still resembled those in "horny Clash of Clans clones" in 2018. Two years before Strom's review, a champion designer responded to criticism by players that a young, female champion was not conventionally attractive. He argued that limiting female champions to one body type was constraining, and said progress had been made in Riot's recent releases.
Comparisons persisted between the game and others in the genre. GameSpot's Tyler Hicks wrote that new players would pick up League of Legends quicker than DotA and that the removal of randomness-based skills made the game more competitive. Jackson described League of Legends' rate of unlock for champions as "a model of generosity", but less generous than DotA's sequel, Dota 2 (2013), produced by Valve, wherein characters are unlocked by default. Strom said the game was fast-paced compared to Dota 2's "yawning" matches, but slower than those of Blizzard Entertainment's "intentionally accessible" MOBA Heroes of the Storm (2015).
Accolades
At the first Game Developers Choice Online Awards in 2010, the game won four major awards: Best Online Technology, Game Design, New Online Game, and Visual Arts. At the 2011 Golden Joystick Awards, it won Best Free-to-Play Game. Music produced for the game won a Shorty Award, and was nominated at the Hollywood Music in Media Awards. League of Legends has received awards for its contribution to esports. It was nominated for Best Esports Game at The Game Awards in 2017 and 2018, then won in 2019 and 2020. Specific events organized by Riot for esports tournaments have been recognized by awards ceremonies. Also at The Game Awards, Riot won Best Esports Event for the 2019 and 2020 League World Championships. At the 39th Sports Emmy Awards in 2018, League of Legends won Outstanding Live Graphic Design for the 2017 world championship; as part of the pre-competition proceedings, Riot used augmented reality technology to have a computer-generated dragon fly across the stage.
Player behavior
League of Legends' player base has a longstanding reputation for "toxicity"—negative and abusive in-game behavior, with a survey by the Anti-Defamation League indicating that 76% of players have experienced in-game harassment. Riot Games has acknowledged the problem and responded that only a small portion of the game's players are consistently toxic. According to Jeffrey Lin, the lead designer of social systems at Riot Games, the majority of negative behavior is committed by players "occasionally acting out". Several major systems have been implemented to tackle the issue. One such measure is basic report functionality; players can report teammates or opponents who violate the game's code of ethics. The in-game chat is also monitored by algorithms that detect several types of abuse. An early system was the "Tribunal"—players who met certain requirements were able to review reports sent to Riot. If enough players determined that the messages were a violation, an automated system would punish them. Lin said that eliminating toxicity was an unrealistic goal, and the focus should be on rewarding good player behavior. To that effect, Riot reworked the "Honor system" in 2017, allowing players to award teammates with virtual medals following games, for one of three positive attributes. Acquiring these medals increases a player's "Honor level", rewarding them with free loot boxes over time.
In esports
League of Legends is one of the world's largest esports, described by The New York Times as its "main attraction". Online viewership and in-person attendance for the game's esports events outperformed those of the National Basketball Association, the World Series, and the Stanley Cup in 2016. For the 2019 and 2020 League of Legends World Championship finals, Riot Games reported peak concurrent viewership of 44 million and 45 million, respectively. Harvard Business Review said that League of Legends epitomized the birth of the esports industry.
Riot Games operates 12 regional leagues internationally, four of which—China, Europe, Korea, and North America—have franchised systems. In 2017, this system comprised 109 teams and 545 players. League games are typically livestreamed on platforms such as Twitch and YouTube. The company sells streaming rights to the game; the North American league playoff is broadcast on cable television by sports network ESPN. In China, the rights to stream international events such as the World Championships and the Mid-Season Invitational were sold to Bilibili in Fall 2020 for a three-year deal reportedly worth US$113 million, while exclusive streaming rights for the domestic and other regional leagues are owned by Huya Live. The game's highest-paid professional players have commanded salaries of above $1 million—over three times that of the highest-paid Overwatch players. The scene has attracted investment from businesspeople otherwise unassociated with esports, such as retired basketball player Rick Fox, who founded his own team. In 2020, his team's slot in the North American league was sold to the Evil Geniuses organization for $33 million.
Spin-offs and other media
Games
For the 10th anniversary of League of Legends in 2019, Riot Games announced several games at various stages of production that were directly related to the League of Legends intellectual property (IP). A stand-alone version of Teamfight Tactics was announced for mobile operating systems iOS and Android at the event and released in March 2020. The game has cross-platform play with the Windows and macOS clients. Legends of Runeterra, a free-to-play digital collectible card game, launched in April 2020 for Microsoft Windows; the game features characters from League of Legends. League of Legends: Wild Rift is a version of the game for mobile operating systems Android, iOS, and unspecified consoles. Instead of porting the game from League of Legends, Wild Rift's character models and environments were entirely rebuilt. A single-player, turn-based role-playing game, Ruined King: A League of Legends Story, was released in 2021 for PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, Nintendo Switch, and Windows. It was the first title released under Riot Games' publishing arm, Riot Forge, wherein non-Riot studios develop games using League of Legends characters. In December 2020, Greg Street, vice-president of IP and Entertainment at Riot Games, announced that a massively multiplayer online role-playing game based on the game is in development. Song of Nunu: A League of Legends Story, a third-person adventure game revolving around the champion Nunu's search for his mother, with the help of the yeti Willump, was announced for a planned release in 2022. It is being developed by Tequila Works, the creators of Rime.
Music
Riot Games' first venture into music was in 2014 with the virtual heavy metal band Pentakill, promoting a skin line of the same name. Pentakill is composed of seven champions, and their music was primarily made by Riot Games' in-house music team but featured cameos by Mötley Crüe drummer Tommy Lee and Danny Lohner, a former member of industrial rock band Nine Inch Nails. Their second album, Grasp of the Undying, reached Number 1 on the iTunes metal charts in 2017.
Pentakill was followed by K/DA, a virtual K-pop girl group composed of four champions. As with Pentakill, K/DA is promotional material for a skin line by the same name. The group's debut single, "Pop/Stars", which premiered at the 2018 League of Legends World Championship, garnered over 400 million views on YouTube and sparked widespread interest from people unfamiliar with League of Legends. After a two-year hiatus, Riot Games released a second single from K/DA in August 2020.
In 2019, Riot created a virtual hip hop group called True Damage, featuring the champions Akali, Yasuo, Qiyana, Senna, and Ekko. The vocalists performed a live version of the group's debut song, "Giants", during the opening ceremony of the 2019 League of Legends World Championship, alongside holographic versions of their characters. The in-game cosmetics promoted by the music video featured a collaboration with fashion house Louis Vuitton.
Comics
Riot announced a collaboration with Marvel Comics in 2018. Riot had previously experimented with releasing comics through its website. Shannon Liao of The Verge noted that the comic books were "a rare opportunity for Riot to showcase its years of lore that has often appeared as an afterthought". The first comic was League of Legends: Ashe—Warmother, which debuted in 2018, followed by League of Legends: Lux that same year. A print version of the latter was released in 2019.
Animated series
In a video posted to celebrate the tenth anniversary of League of Legends, Riot announced an animated television series, Arcane, the company's first production for television. Arcane was a collaborative effort between Riot Games and animation studio Fortiche Productions. In an interview with The Hollywood Reporter, head of creative development Greg Street said the series is "not a light-hearted show. There are some serious themes that we explore there, so we wouldn't want kids tuning in and expecting something that it's not." The series is set in the utopian city of Piltover and its underground, oppressed sister city, Zaun. The series was set to be released in 2020 but was delayed by the COVID-19 pandemic. On November 6, 2021, Arcane premiered on Netflix following the 2021 League of Legends World Championship, and was available through Tencent Video in China. The series received critical acclaim upon release, with IGN's Rafael Motomayor asking rhetorically if the series marked the end of the "video game adaptation curse". It stars Hailee Steinfeld as Vi, Ella Purnell as Jinx, Kevin Alejandro as Jayce, and Katie Leung as Caitlyn. Following the season one finale, Netflix announced a second season was in development.
Notes
References
External links
2009 video games
Esports games
Free-to-play video games
MacOS games
Multiplayer online battle arena games
Multiplayer video games
Riot Games games
Science fantasy video games
Tencent
Video games developed in the United States
Video games containing loot boxes
Windows games |
1737023 | https://en.wikipedia.org/wiki/BUNCH | BUNCH | The BUNCH was the nickname for the group of mainframe computer competitors of IBM in the 1970s. The name is derived from the names of the five companies: Burroughs, UNIVAC, NCR, Control Data Corporation (CDC), and Honeywell. These companies were grouped together because the market share of IBM was much higher than all of its competitors put together.
During the 1960s, IBM and these five computer manufacturers, along with RCA and General Electric, had been known as "IBM and the Seven Dwarfs". The description of IBM's competitors changed after GE's 1970 sale of its computer business to Honeywell and RCA's 1971 sale of its computer business to Sperry (who owned UNIVAC), leaving only five "dwarves". The companies' initials thus lent themselves to a new acronym, BUNCH. International Data Corporation estimated in 1984 that BUNCH would receive less than $2 billion of an estimated $11.4 billion in mainframe computer sales that year, with IBM receiving most of the remainder. IBM so dominated the mainframe market that observers expected the BUNCH to merge or exit the industry. The BUNCH followed IBM into the microcomputer market with their own PC compatibles, but unlike that company they did not quickly adjust to retail sales of smaller computers.
Digital Equipment Corporation (DEC), at one point the second-largest company in the industry, was sometimes added to the group, extending the acronym to DeBUNCH.
Fate of BUNCH
Burroughs & UNIVAC In September 1986, after Burroughs purchased Sperry (the parent company of UNIVAC), the name of the company was changed to Unisys.
NCR In 1982, NCR became involved in open systems architecture, starting with the UNIX-powered TOWER 16/32, and placed more emphasis on computers smaller than mainframes. NCR was acquired by AT&T Corporation in 1991. A restructuring of AT&T in 1996 led to its re-establishment on 1 January 1997 as a separate company. In 1998, NCR sold its computer hardware manufacturing assets to Solectron and ceased to produce general-purpose computer systems.
Control Data Corporation Control Data Corporation is now Syntegra (USA), a subsidiary of British company BT Group's BT Global Services.
Honeywell In 1991, Honeywell's computer division was sold to French computer company Groupe Bull.
Other mainframe manufacturers during the 1960s and 1970s
Bendix Corporation introduced the G-15 in 1956 and the G-20 in 1961, with the G-21 shortly afterwards. Control Data Corporation purchased the Bendix computer division in 1963.
Philco sold military computers as well as the commercial TRANSAC S-1000 and TRANSAC S-2000; Ford Motor Company purchased Philco in December 1961.
Scientific Data Systems (later known as Xerox Data Systems after its purchase by Xerox in 1969) also sold mainframe computers, but with around 1% market share, it was not a major factor in the marketplace. Xerox closed the division in 1975, with most rights sold to Honeywell.
In 1976, Cray Research (a company supported by Seymour Cray's former employer Control Data Corporation) released the Cray-1 vector computer.
Digital Equipment Corporation (DEC) was founded in 1957 to manufacture computer components, and made many small computers, notably the PDP series. These computers were classified as minicomputers rather than mainframes, but often competed for the same business. DEC later sold to Compaq, which was acquired by Hewlett-Packard.
Hewlett-Packard (HP) was founded in 1939 manufacturing advanced electronic equipment. In the mid-1960s HP started producing minicomputers, which competed with mainframe computers.
Amdahl Corporation, founded in 1970 by Gene Amdahl, an IBM Fellow, entrepreneur, and chief architect of the IBM System/360 range, was an information technology company specializing in IBM mainframe-compatible computer products, and became a wholly owned subsidiary of Fujitsu in 1997.
International Computers Limited was a large British computer hardware, computer software and computer services company that operated from 1968 until 2002. The company was progressively acquired by Fujitsu, and in April 2002 it was rebranded as Fujitsu.
References
Defunct computer companies of the United States
Burroughs Corporation
Unisys
NCR Corporation
Control Data Corporation
Honeywell
Computing acronyms |
223321 | https://en.wikipedia.org/wiki/Worse%20is%20better | Worse is better | Worse is better (also called the New Jersey style) is a term conceived by Richard P. Gabriel in an essay of the same name to describe the dynamics of software acceptance. It refers to the argument that software quality does not necessarily increase with functionality: that there is a point where less functionality ("worse") is a preferable option ("better") in terms of practicality and usability. Software that is limited, but simple to use, may be more appealing to the user and market than the reverse.
As for the oxymoronic title, Gabriel calls it a caricature, declaring the style bad in comparison with "The Right Thing". However, he also states that it "has better survival characteristics than the-right-thing" development style and is superior to the "MIT Approach" with which he contrasted it.
The essay was included into the 1994 book The UNIX-HATERS Handbook, and has been referred to as the origin of the notion of a conceptual split between developers on the east and west coasts of the United States.
Origin
Gabriel was a Lisp programmer when he formulated the concept in 1989, presenting it in his essay "Lisp: Good News, Bad News, How to Win Big". A section of the article, titled "The Rise of 'Worse is Better'", was widely disseminated beginning in 1991, after Jamie Zawinski found it in Gabriel's files at Lucid Inc. and emailed it to friends and colleagues.
Characteristics
New Jersey style
In The Rise of Worse is Better, Gabriel claimed that "Worse-is-Better" (also the "New Jersey style" or "east coast") is a model of software design and implementation which has the characteristics (in approximately descending order of importance):
Simplicity The design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
Correctness The design should be correct in all observable aspects. It is slightly better to be simple than correct.
Consistency The design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either complexity or inconsistency in the implementation.
Completeness The design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.
The MIT approach
Gabriel contrasted his philosophy with what he called the "MIT/Stanford style of design" or "MIT approach" (also known as the "west coast" approach or "the Right Thing"), which he described as:
Simplicity The design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
Correctness The design must be correct in all observable aspects. Incorrectness is simply not allowed.
Consistency The design must be consistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
Completeness The design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.
Gabriel argued that early Unix and C, developed by Bell Labs, are examples of the worse-is-better design approach. He also calls them "the ultimate computer viruses".
Effects
Gabriel argued that "Worse is better" produced more successful software than the MIT approach: As long as the initial program is basically good, it will take much less time and effort to implement initially and it will be easier to adapt to new situations. Porting software to new machines, for example, becomes far easier this way. Thus its use will spread rapidly, long before a program developed using the MIT approach has a chance to be developed and deployed (first-mover advantage). Once it has spread, there will be pressure to improve its functionality, but users have already been conditioned to accept "worse" rather than the "right thing":
Gabriel credits Jamie Zawinski for excerpting the worse-is-better sections of "Lisp: Good News, Bad News, How to Win Big" and e-mailing them to his friends at Carnegie Mellon University, who sent them to their friends at Bell Labs, "who sent them to their friends everywhere." He apparently connected these ideas to those of Richard Stallman and saw related ideas that are important in the design philosophy of Unix, and more generally in the open-source movement, both of which were central to the development of Linux.
In December 2000 Gabriel answered his earlier essay with one titled Worse Is Better Is Worse under the pseudonym Nickieben Bourbaki (an allusion to Nicolas Bourbaki), while also penning Is Worse Really Better?, applying the concept to C++'s success in the field of object-oriented programming despite the existence of more elegant languages designed around the concept.
The UNIX-HATERS Handbook includes Worse is Better as an appendix, and frames the concept in terms of worse-is-better in the form of Unix being "evolutionarily superior" to its competition.
See also
Gresham's Law
Less is more
Minimum viable product
Perfect is the enemy of good
Progressive disclosure
Satisficing
References
English phrases
Software development philosophies
Software design
Programming principles
Quality management |
1317114 | https://en.wikipedia.org/wiki/ICMP%20Router%20Discovery%20Protocol | ICMP Router Discovery Protocol | In computer networking, the ICMP Internet Router Discovery Protocol (IRDP), also called the Internet Router Discovery Protocol, is a protocol for computer hosts to discover the presence and location of routers on their IPv4 local area network. Router discovery is useful for accessing computer systems on other nonlocal area networks. The IRDP is defined by the IETF RFC 1256 standard, with the Internet Control Message Protocol (ICMP) upon which it is based defined in IETF RFC 792. IRDP eliminates the need to manually configure routing information.
Router discovery messages
To enable router discovery, the IRDP defines two kinds of ICMP messages:
The ICMP Router Solicitation message is sent from a computer host to any routers on the local area network to request that they advertise their presence on the network.
The ICMP Router Advertisement message is sent by a router on the local area network to announce its IP address as available for routing.
When a host boots up, it sends solicitation messages to IP multicast address 224.0.0.2. In response, one or more routers may send advertisement messages. If there is more than one router, the host usually picks the first message it gets and adds that router to its routing table. Independently of a solicitation, a router may periodically send out advertisement messages. These messages are not considered a routing protocol, as they do not determine a routing path, just the presence of possible gateways.
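As an illustration of how these messages are put together on the wire, the sketch below builds an RFC 1256 Router Solicitation (ICMP type 10, code 0, followed by a checksum and four reserved bytes) in Python and sends it to the all-routers multicast group. This is only a hedged, illustrative sketch, not part of RFC 1256 or of any particular host implementation: the helper names (internet_checksum, build_solicitation, send_solicitation) are made up for this example, real hosts perform router discovery inside the operating system's network stack, and sending raw ICMP from user space normally requires administrative privileges.

```python
# Illustrative sketch only: construct and send an ICMP Router Solicitation
# (RFC 1256, type 10) using the Python standard library.
import socket
import struct

ALL_ROUTERS_MULTICAST = "224.0.0.2"   # destination of solicitations
ICMP_ROUTER_SOLICITATION = 10         # RFC 1256 message types
ICMP_ROUTER_ADVERTISEMENT = 9

def internet_checksum(data: bytes) -> int:
    """One's-complement checksum used by ICMP."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_solicitation() -> bytes:
    # Type (1 byte), code (1 byte), checksum (2 bytes), 4 reserved bytes.
    header = struct.pack("!BBHI", ICMP_ROUTER_SOLICITATION, 0, 0, 0)
    checksum = internet_checksum(header)
    return struct.pack("!BBHI", ICMP_ROUTER_SOLICITATION, 0, checksum, 0)

def send_solicitation() -> None:
    # Raw ICMP sockets typically require root/administrator privileges.
    with socket.socket(socket.AF_INET, socket.SOCK_RAW,
                       socket.IPPROTO_ICMP) as sock:
        sock.sendto(build_solicitation(), (ALL_ROUTERS_MULTICAST, 0))
```

A host would then read Router Advertisement replies (type 9) from the same socket and add an advertised router to its routing table, as described above.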
Extensions
The IRDP strategy has been used in the development of the IPv6 neighbor discovery protocol. These use ICMPv6 messages, the IPv6 analog of ICMP messages. Neighbor discovery is governed by IETF standards RFC 4861 and RFC 4862.
IRDP plays an essential role in mobile networking through the IETF standard RFC 3344, where it is used for Mobile IPv4 (MIPv4) agent discovery.
See also
Dynamic Host Configuration Protocol
References
External links
RFC 1256: ICMP Router Discovery Messages
Internet Standards
Internet protocols |
11405272 | https://en.wikipedia.org/wiki/Hydra%20%28operating%20system%29 | Hydra (operating system) | Hydra (stylized as HYDRA) is an early, discontinued, capability-based, object-oriented microkernel designed to support a wide range of possible operating systems to run on it. Hydra was created as part of the C.mmp project at Carnegie-Mellon University in 1971.
The name is based on the ancient Greek mythological creature the hydra.
Hydra was designed to be modular and secure, and intended to be flexible enough for easy experimentation.
The system was implemented in the programming language BLISS.
References
Capability systems
Carnegie Mellon University software
Microkernels
Microkernel-based operating systems
Object-oriented operating systems |
54216864 | https://en.wikipedia.org/wiki/Amphimachus%20of%20Caria | Amphimachus of Caria | In Greek mythology, Amphimachus (; Ancient Greek: Ἀμφίμαχος derived from ἀμφί amphi "on both sides, in all directions, surrounding" and μάχη mache "battle") was the son of Nomion.
Mythology
Amphimachus and his brother Nastes were captains of the Carian contingent on the side of the Trojans in the Trojan War. Either he or his brother was killed by Achilles; according to the commentary on the Iliad by Thomas D. Seymour, it was his brother Nastes who was killed and whose armour and golden ornaments were subsequently stripped.
And Nastes again led the Carians, uncouth of speech, who held Miletus and the mountain of Phthires, dense with its leafage, and the streams of Maeander, and the steep crests of Mycale. These were led by captains twain, Amphimachus and Nastes—Nastes and Amphimachus, the glorious children of Nomion. And he came to the war all decked with gold, like a girl, fool that he was; but his gold in no wise availed to ward off woeful destruction; nay, he was slain in the river beneath the hands of the son of Aeacus, swift of foot; and Achilles, wise of heart, bare off the gold.
Notes
References
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library.
People of the Trojan War
Characters in Greek mythology |
29761832 | https://en.wikipedia.org/wiki/Civica | Civica | The Civica Group is an international software business focused on the public sector supporting more than 3,000 major customers in ten countries. It is a privately owned group of companies headquartered in London, UK, and with regional head offices in Australia, Singapore and North America.
History
Civica was set up through a management buy-out from the Sanderson Group in 1999, backed by venture capitalist group Alchemy Partners and led by Simon Downing who became chief executive. The Civica Group was officially formed from the public sector business of the Sanderson Group in 2002. The group was subsequently listed on the Alternative Investment Market of the London Stock Exchange in 2004. It was delisted in 2008 when it was acquired by 3i for £190 million.
In September 2009 Civica acquired in4tek, a community health and social care record software specialist based in Altrincham. In4tek’s software is used by providers and local authorities in the UK and Canada.
In April 2011 Civica UK, a wholly owned subsidiary of Civica Group, acquired specialised document and records management services provider Cave Tab. Cave Tab is a supplier to the public sector including local government, social housing, health care, enforcement and defence, as well as commercial customers. In May 2011 it acquired PSCAL, a financial management software and services provider for the NHS.
In June 2012 Civica acquired Gateway Computing Ltd, publisher of WinDIP Enterprise, a document and records management software suite.
In 2013 Civica acquired Corero Business Systems, a software provider whose system, "Corero", now called "Education Suite", is used within educational institutions.
In 2014 Civica acquired both Coldharbour Systems, a healthcare software provider, and Keystone Asset Management, an asset management specialist for the housing market. Also in 2014, Civica acquired Asidua, a local government contact management, application and telecoms specialist.
In 2016 Civica acquired Norwel Legal and IPL (Information Processing Limited). Norwel Computer Services supplies legal practice and case management solutions, and IPL provides mobile and digital solutions to both the private and public sectors.
In 2016 Civica also acquired the government digital specialist SFW Ltd, through which it converted SFW India Pvt Ltd into Civica's offshore resourcing centre in Vadodara, India.
In December 2016 Civica acquired Abritas, a social housing software developer and software-as-a-service provider based in Reading, Berkshire.
It was sold to Partners Group in July 2017 for £1.06 billion. At that point it had about 3,700 staff.
In 2017 Civica was awarded the Civic Compliance contract for the State Government of Victoria, covering both enforcement and customer service. The contract had previously been held by Tenix Solutions since 2002.
Operations
Civica offers specialised and managed services across the following market sectors:
Local Government
Social housing
Parking enforcement & public protection
Libraries and education
Mobile fleet and asset management
Public protection
Pension administration
Health and Social Care
Legal
Civica services fall into seven broad categories:
Service transformation
Process improvement and departmental efficiency
Resource support and optimisation
Outsourced and managed IT services
IT professional services
Mobile working
Software asset management
Local Government
Civica provides specialist systems and outsourcing services to local authorities and their partners in the UK, Europe and Australia. Services offered include corporate-level service transformations, electronic document management (EDM) systems, managed parking infrastructures, fully integrated IT infrastructures, E-payments systems and fully integrated database e-service solutions. The group works with 94 per cent of UK local authorities as well as most police forces in England and Wales.
Its systems manage £4 billion of local revenues and £1 billion in secure electronic transactions per year on behalf of local authorities including Manchester City Council, Sunderland City Council, Hammersmith and Fulham, Enfield and Haringey.
Social housing
Civica supplies housing management and data systems to housing development agencies with a focus on automating business processes. Products offered by Civica include integrated modern front office systems, mobile and collaborative working platforms, back office applications and infrastructure. Around one million social housing properties are managed through Civica systems.
Enforcement
Civica supplies Automatic Number Plate Recognition (ANPR) to UK police forces as well as retail centres, hospitals and other commercial premises. The group provides anti-social behaviour reporting systems for 170 local authorities to support council and police forces' community safety partnerships.
Education and libraries
Civica provides software and IT solutions to help improve education and learning services, ranging from shared learning environments and leading library management services to software licensing frameworks. The group also provides managed IT services including virtual learning environments (VLE) for schools in Sheffield and Luton under the Building Schools for the Future (BSF) programme. Civica supplies many thousands of schools and 50 UK universities.
Through its SPYDUS software, Civica supplies libraries with managed services and computer systems automating stock ordering, lending and tracking systems including the supply of remote access to library catalogues. Civica manages the library systems for the Government of Singapore's Future Schools initiative. In the UK, Civica provides tailored library services for the eleven library services that are members of the South East Library Management Services (SELMS) consortium – with over three million library users in the region. In total, the group manages over 1,500 libraries worldwide.
Mobile fleet and asset management
Civica has supplied asset and fleet management services to the public sector for 25 years. The company currently services around 500 customers including around 100 local authorities. Civica offers Tranman fleet software to local government, enterprises and the emergency services. It won Fleet Excellence awards in 2003, 2004, 2005 and 2006 as well as a Green Fleet award in 2006 for its use.
External links
https://web.archive.org/web/20101127064512/http://www.guardianpublic.co.uk/civca-survey-council-managers Civica conference coverage,The Guardian
https://web.archive.org/web/20101127234310/http://www.guardianpublic.co.uk/public-private-sector-partnerships Civica Chief Executive Simon Downing discusses public-private partnerships, The Guardian
https://web.archive.org/web/20101214141917/http://www.civicaplc.com/NR/rdonlyres/2E6C2C25-581F-45FF-8078-4075386FE54E/1725/CivicaBrochure.pdf Civica service brochure
https://web.archive.org/web/20110722194851/http://www.localgov.co.uk/index.cfm?method=news.detail&id=66168 Civica company profile, Localgov.co.uk
References
Outsourcing companies |
2325864 | https://en.wikipedia.org/wiki/Field%20service%20management | Field service management | Field service management (FSM) refers to the management of a company's resources employed at or en route to the property of clients, rather than on company property. Examples include locating vehicles, managing worker activity, scheduling and dispatching work, ensuring driver safety, and integrating the management of such activities with inventory, billing, accounting and other back-office systems. FSM most commonly refers to companies who need to manage installation, service, or repairs of systems or equipment. It can also refer to software and cloud-based platforms that aid in field service management.
Industry examples
Field service management is used to manage resources in several industries.
In the telecommunications and cable industry, technicians install cable or run phone lines into residences or business establishments.
In healthcare, mobile nurses who provide in-home care for elderly or disabled.
In gas utilities, engineers are dispatched to investigate and repair suspected leaks.
In heavy engineering, mining, industrial and manufacturing settings, technicians are dispatched for preventative maintenance and repair.
In property maintenance, including landscaping, irrigation, and home and office cleaning.
In HVAC industry, technicians have the expertise and equipment to investigate units in residential, commercial, and industrial environments.
In the postal and packaging industry, technicians find the exact locations of customers and deliver or receive packages.
Requirements
Field service management must meet certain requirements:
Customer expectations: Customers expect their service not to be disrupted, and to be restored immediately if it is
Underutilized equipment: Expensive industrial equipment in mining or oil and gas can cost millions when sitting idle
Low employee productivity: Managers are unable to monitor field employees, which may reduce productivity
Safety: Safety of drivers and vehicles on the road and while on the job site is a concern both for individuals and their employers
Cost: Rising cost of fuel, vehicle maintenance, and parts inventory
Service to sales: Increasingly, companies expect their services department to generate revenues.
Dynamic environment: Continuously balancing between critical tickets, irate customers, productive employees, and optimized routes make scheduling, routing, and dispatching very challenging
Data and technology: Many times, the data for analytics is missing, stale, or inaccurate.
Software
FSM software has evolved significantly in the past 10 years; however, the market for FSM software remains fragmented. The software can be deployed either on-premises or as a hosted or cloud-based system. Typically, FSM software is integrated with backend systems such as service management, billing, accounting, parts inventory, and other HR systems.
The large majority of FSM companies are fee-for-service and offer different features and functionality that vary from one company to the next. Whereas one company will provide most, if not all, of the desirable features in field service management, another will be missing one or up to several functions. Pricing is dependent on several factors: a company's size, business needs, number of users, carrier selection, and planned data usage. Some popular fee structures are pay-per-franchise, pay-per-use/administrators, and pay-per-field technician/employee. Costs can range from $20.00 per month for an unbundled solution that does not include carrier data charges to upwards of $200.00. It is not uncommon, although not always the case, for there to be other fees incurred with the use of the FSM platform; namely, fees for software, extra technical support, and additional training.
For the enterprise market, Gartner estimates that market penetration for field service applications has reached 25% of the addressable market. Software sales in the FSM market can only be approximated. Gartner's research puts the revenue for packaged field service dispatch and workforce management software applications, not including service revenue, at approximately $1.2 billion in 2012, with a compound annual growth rate of 12.7%.
Mobility
Companies are using mobile computing to improve communication with the field, increase productivity, streamline work processes, and enhance customer service and loyalty. Field service software can be used for scheduling and routing optimization, automated vehicle location, remote vehicle diagnostics, driver logs and hours-of-service tracking, inventory management, field worker management and driver safety. Mobile software may use databases containing details about customer-premises equipment, access requirements, and parts inventory. Some field service management software integrates with other software such as accounting programs.
Mobility can
Provide real-time analysis of mobile work status
Increase the first-time-fix rate
Reduce overhead or administration costs of paper-based field service management and data entry
Preserve e-audit trail for full regulatory compliance
Increase productivity
Shorten billing cycles
See also
Enterprise asset management
CMMS
Computer-assisted dispatch
Field force automation
Mobile enterprise application framework
Service chain optimization
Service management
Strategic service management
Workforce management
References
Management by type
Information systems |
8384084 | https://en.wikipedia.org/wiki/Cyn.in | Cyn.in | Cyn.in is an open-source enterprise collaborative software built on top of Plone a content management system written in the Python programming language which is a layer above Zope. Cyn.in is developed by Cynapse a company founded by Apurva Roy Choudhury and Dhiraj Gupta which is based in India. Cyn.in enables its users to store, retrieve and organize files and rich content in a collaborative, multiuser environment.
Cyn.in comes in three flavors. Cyn.in Community Edition is released under the GNU General Public License version 3, is based on open standards and is completely "free" to use. Cyn.in Enterprise Editions are commercially supported, certified and tested by Cynapse. The on-premises appliance is designed for businesses that want to install the software on their own infrastructure behind their firewall. With the On-Demand Service, Cynapse hosts the software for businesses on secure cloud servers.
History
Cyn.in was developed and released in late 2006 as closed-source enterprise bliki software, built on the .NET Framework and offered as SaaS by Cynapse. In June 2008, Cynapse, the company behind Cyn.in, released a new version of Cyn.in and open-sourced the project. This release was built on the popular open source Plone - Zope - Python framework. With this release, Cynapse intended to expand its focus into the enterprise collaboration domain. While the new release still supported blogs and wikis, Cyn.in had evolved to include enterprise collaboration tools including file repositories, event calendars, image galleries and more. The company decided to discontinue using the bliki terminology, and Cyn.in is now described as collaboration software.
Concepts
Application convergence
The cyn.in collaborative information management system attempts to bring together the core concepts of:
Personal information management
Organization-wide knowledge and document management
Information and file collaboration
Knowledge transfer
Content publishing
Spaces
Information can be made available in four different location namespaces, called Personal Space, Shared Space, Intranet Space and Web Space, within the cyn.in application. Each Space has distinct authorization and functionality rules; for example, the Intranet Space of a cyn.in site may only be accessed by its members, whereas the Web Space allows public Internet access.
Notes
Information and files in cyn.in are stored together in a common container format called a Note. A user can create any number of Notes in the system; however, a Note can only reside in one Space at a time.
Taxonomy and categorization
Notes can have one or more SlashTags. SlashTags are the hierarchical tags used in cyn.in to categorize Notes and to build navigation trees and dynamic pop-out menus. SlashTags offer taxonomical advantages when compared to traditional folder-based systems (illustrated by the sketch following this list) because they enable:
direct navigational access to each Note
multiple presences of the same Note in the navigation system
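A minimal sketch, assuming a hypothetical TagIndex class rather than cyn.in's actual Plone-based implementation, shows how a hierarchical tag index can deliver both of these properties: the same Note is registered under two SlashTag paths and is then reachable by navigating to either branch, which a single-location folder tree cannot offer.

```python
# Illustrative sketch only (not cyn.in code): notes indexed by hierarchical tags.
from collections import defaultdict

class TagIndex:
    def __init__(self):
        self._notes_by_tag = defaultdict(set)

    def tag(self, note_id: str, slashtag: str) -> None:
        """Register a note under a hierarchical tag such as 'projects/2009/q1'."""
        self._notes_by_tag[slashtag].add(note_id)

    def notes_under(self, prefix: str) -> set:
        """All notes reachable by navigating to `prefix` or any of its child tags."""
        return {
            note
            for tag, notes in self._notes_by_tag.items()
            if tag == prefix or tag.startswith(prefix + "/")
            for note in notes
        }

index = TagIndex()
index.tag("budget-note", "finance/2009")
index.tag("budget-note", "projects/office-move")   # same Note, second tag
assert "budget-note" in index.notes_under("finance")
assert "budget-note" in index.notes_under("projects")
```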
Key Features
Due to the emergent nature of the open source and hosted service model, the exact feature specification of the cyn.in service is updated regularly. The following core features are currently visible:
Wiki support
Blog support
Calendaring
Microblogging
Bookmark Directories
Discussion Boards
Audio and Video Galleries
File Repositories
Image Galleries and slideshow views of images
Collaboration Spaces
Customizable Permissions and Access control
Integrated WYSIWYG Word Processor
WYSIWYG creation and editing of HTML content tables
Content Ratings
Content Tagging support
People Directory
Working Copy support
Link and reference integrity maintenance
Automatic locking and unlocking
Complete revision history of all content and files
Workflow capabilities
Integrated Full Text indexing of (Word, Excel, PowerPoint, PDF, HTML, Text and other file formats)
Rules engine for content
Auto-generated tables of contents
Multi-file uploads
Live Search
Faceted Search
i18n support
Accessibility compliant
Time-based publishing and expiry of content
Standards-compliant XHTML and CSS
RSS syndication of content and files
Automatic image scaling and thumbnail generation
Cross-platform
Comment capabilities on any content
WebDAV support
Backup support
Cut/copy/paste operations on content
Email notifications
Desktop Client
Automatic Backlinking
Granular rights based content editing and user-to-user collaboration
Access rights based AJAX user interface
Co - author rich content, files and documents
Selectively move content from private to public spaces
Sensible, easy-to-remember URLs
Server based image resizing for preview and download
Applications
Designed to be used generically, the cyn.in bliki service can be applied in the following business applications:
Knowledge management
Document management
Enterprise content management
Digital asset management
Online file system
Version control system
Group Collaboration
Pricing model
The cyn.in service is made available for purchase by businesses in the Software-as-a-Service (SaaS) model at a flexible per-user cost. cyn.in is a multi-tenant system; each customer may purchase one or more cyn.in sites, each of which is located at a user-selectable subdomain of the main cyn.in service. Each site allows a set of users to log into it to access internal functionality; the service offers a central user authentication system and thus allows the same users to be members of different cyn.in sites as well.
A free version is also available for individual professionals with some limitations in storage and the maximum number of users that are allowed.
Awards
2009
Les Trophées du Libre (nominee)
SourceForge Community Choice Awards - Finalist in 3 categories - Best New Project, Best Commercial Open Source Project & Best Visual Design
See also
Enterprise social software
List of collaborative software
List of content management systems
References
External links
Official Cyn.in Software Buy / Purchase Link
Cynapse Blog
User and Administration Wiki Documentation
Cyn.in listed as an add-on product on the Plone Website
Cyn.in Desktop client listing in the Flex showcase
Blog hosting services
Free content management systems
Free wiki software
Cross-platform software
Free business software
Free groupware
Groupware
Zope
Python (programming language) software
Adobe Integrated Runtime platform software
2006 software |
3818799 | https://en.wikipedia.org/wiki/SheepShaver | SheepShaver | SheepShaver is an open-source PowerPC Apple Macintosh emulator originally designed for BeOS and Linux. The name is a play on ShapeShifter, a Macintosh II emulator for AmigaOS (made obsolete by Basilisk II), which is in turn not to be confused with a third-party preference pane for Mac OS X with the same name. The ShapeShifter and SheepShaver projects were originally conceived and programmed by Christian Bauer. However, currently, the main developer behind SheepShaver is Gwenolé Beauchesne.
History
SheepShaver was originally commercial software when first released in 1998, but after the demise of Be Inc., the maker of BeOS, it became open source in 2002. It can be run on both PowerPC and x86 systems; however, it runs more slowly on an x86 system than on a PowerPC system, because of the translation between the PowerPC and Intel x86 instruction sets. SheepShaver has also been ported to Microsoft Windows.
Because it is free software, a few variants exist that simplify the installation process on Intel-based Macs:
'SheepShaver Wrapper' is built on SheepShaver but does some of the bundling work for the user.
'Chubby Bunny' also simplifies the setup process for running Mac OS 9 under emulation on Intel Macs running OS X.
Features
SheepShaver is capable of running Mac OS 7.5.2 through 9.0.4 (though it needs the image of an Old World ROM to run Mac OS 8.1 or below), and can be run inside a window so that the user can run classic Mac OS and either BeOS, Intel-based Mac OS X, Linux, or Windows applications at the same time.
Although SheepShaver does have Ethernet support and CD-quality sound output, it does not emulate the memory management unit. While adding MMU emulation has been discussed, the feature has not been added because of the effort required in implementing it, the impact on performance it will have and the lack of time on the part of the developers.
See also
PearPC
vMac
Basilisk II
Classic Environment
Executor
References
External links
SheepShaver for x86
E-Maculation forum on SheepShaver
SheepShaver for Windows setup guide
SheepShaver for OSX setup guide
Gwenole Beauchesne's SheepShaver page (archived)
Free emulation software
Linux emulation software
MacOS emulation software
Macintosh platform emulators
PowerPC emulators |
10842168 | https://en.wikipedia.org/wiki/Ursula%20Martin | Ursula Martin | Ursula Hilda Mary Martin (born 3 August 1953) is a British computer scientist, with research interests in theoretical computer science and formal methods. She is also known for her activities aimed at encouraging women in the fields of computing and mathematics. Since 2019, she has served as a professor at the School of Informatics, University of Edinburgh.
From 2014 to 2018, Martin was a Professor of Computer Science in the Department of Computer Science at the University of Oxford, and holds an EPSRC Established Career Fellowship. Prior to this she held a chair of Computer Science in the School of Electronic Engineering and Computer Science at Queen Mary, University of London, where she was Vice-Principal of Science and Engineering from 2005 to 2009.
Education
Martin was born in London on 3 August 1953 to Anne Louise (née Priestman) and Captain Geoffrey Richard Martin. She was educated at Abbey College at Malvern Wells. In 1975 she graduated with an MA from Girton College, Cambridge, and in 1979 with a PhD from the University of Warwick, both in mathematics.
Career and research
Martin began in mathematics working in group theory, later moving into string rewriting systems. She has held academic posts at University of Illinois at Urbana-Champaign, the University of Manchester and Royal Holloway, University of London. She has made sabbatical visits to Massachusetts Institute of Technology and SRI International (Menlo Park). In 2004 she was a Visiting Fellow at the Oxford Internet Institute.
From 1992 to 2002, Martin was Professor of Computer Science at the University of St Andrews in Scotland. She was the first female professor at the University since its foundation in 1411.
From 2003 to 2005, Martin was seconded to the University of Cambridge Computer Laboratory part-time and served as the director of the Women@CL project to lead local, national and international initiatives for women in computing, supported by Microsoft Research and Intel Cambridge Research. She was a Fellow of Newnham College, Cambridge.
Martin has served as an advisory editor for the Annals of Pure and Applied Logic journal (published by Elsevier) and on the editorial boards for The Journal of Computation and Mathematics (London Mathematical Society) and Formal Aspects of Computing (Springer-Verlag).
Publications
Her publications include
with Christopher Hollings and Adrian Rice, Ada Lovelace: The Making of a Computer Scientist, Bodleian Library, 2018, 114 pp.
Honours and awards
Martin was appointed Commander of the Order of the British Empire (CBE) in the 2012 New Year Honours for services to computer science. In 2017 she was elected a Fellow of the Royal Society of Edinburgh (FRSE) and the Royal Academy of Engineering (FREng).
References
1953 births
Living people
Alumni of Girton College, Cambridge
Alumni of the University of Warwick
British computer scientists
Formal methods people
University of Illinois at Urbana–Champaign faculty
Academics of the University of Manchester
Academics of Royal Holloway, University of London
Academics of the University of St Andrews
Fellows of Newnham College, Cambridge
Fellows of the Royal Academy of Engineering
Female Fellows of the Royal Academy of Engineering
Academics of Queen Mary University of London
Members of the Department of Computer Science, University of Oxford
Academic journal editors
British women computer scientists
Commanders of the Order of the British Empire
Fellows of the Royal Society of Edinburgh
21st-century women engineers |
6487479 | https://en.wikipedia.org/wiki/Alvey | Alvey | The Alvey Programme was a British government sponsored research programme in information technology that ran from 1984 to 1990. The programme was a reaction to the Japanese Fifth Generation project, which aimed to create a computer using massively parallel computing/processing. The programme was not focused on any specific technology such as robotics, but rather supported research in knowledge engineering in the United Kingdom. It has been likened in operations to the U.S. Defense Advanced Research Projects Agency (DARPA) and Japan's ICOT.
Background
During the early 1980s, Japan invited the United Kingdom to become a part of the Fifth Generation Project. In October 1981, a Department of Industry mission to Japan consisting of academics, civil servants and business representatives explored collaboration opportunities and attended the Fifth Generation conference. Informed by negotiations between ICL and Fujitsu, conducted to "ensure the survival of ICL", which suggested that collaboration would only be possible in "very specific areas agreed upon by individual companies", the mission concluded that emulating the Japanese approach would be preferable to any attempt at participating in the Japanese programme.
In response, a committee was created, chaired by John Alvey, a technology director at British Telecom. The resulting report proposed a different course of action from the Japanese initiative and became the basis for the UK's rejection of the Fifth Generation project and the creation of its own Alvey Programme. The programme's fundamental goal was to improve advanced information technology in the UK and so address the declining performance of this sector. It operated from 1984 until 1990.
Alvey was not involved in the programme itself.
The main focus areas of the Alvey Programme were as follows:
Advanced Microelectronics and VLSI
Intelligent Knowledge Based Systems (IKBS) or Artificial Intelligence (AI)
Software Engineering
Man-Machine Interaction (including Natural Language Processing)
Alongside these areas, the provision of a communications infrastructure was a component of the programme. Various areas of endeavour were incorporated into the main focus areas. For example, systems architecture, specifically parallel processing, featured in the VLSI endeavour.
References
Brian Oakley and Kenneth Owen, Alvey: Britain's Strategic Computing Initiative, MIT Press, 1990.
Chris Rigatuso, Takeshi Tachi, Dennis Sysvester & Mark Soper, Collaboration between Firms in Information Technology, Berkeley, EE 290X Group G.
Richard Tyler, The Daily Telegraph, Feb 9th 2010.
1984 establishments in the United Kingdom
1990 disestablishments in the United Kingdom
History of artificial intelligence
History of computing in the United Kingdom
Research and development in the United Kingdom
Research projects |
2891685 | https://en.wikipedia.org/wiki/Front%20Row%20%28software%29 | Front Row (software) | Front Row is a discontinued media center software application for Apple's Macintosh computers and Apple TV for navigating and viewing video, photos, podcasts, and music from a computer, optical disc, or the Internet through a 10-foot user interface (similar to Kodi and Windows Media Center). The software relies on iTunes and iPhoto and is controlled by an Apple Remote or the keyboard function keys. The first version was released October 2005, with two major revisions since. Front Row was removed and discontinued in Mac OS X 10.7.
Versions
Introduction
Front Row was first unveiled on October 12, 2005 with the new iMac G5 (along with the built-in iSight camera, the Apple Remote, and Photo Booth). The software was billed as an alternative interface for playing and running iPhoto, DVD Player, and iTunes (Internet radio stations could be played by adding the station to a playlist in iTunes).
Apple TV
The next incarnation, released in the original Apple TV software in March 2007, was a complete, stand-alone application that played content directly from libraries. Among the features added were more prominent podcasts and TV show menus, trailer streaming, a settings menu, streaming content from computers on the local network, and album and video art for local media. In the summer of 2007, Apple released an update adding streaming of YouTube videos.
Version two
Released in November 2007 with Mac OS X v10.5 (Leopard), version two of Front Row included the new features introduced with the Apple TV (except for the YouTube viewer), a different opening transition, the ending of AirTunes functionality, and a launcher application in addition to the Command+Escape keyboard shortcut.
Front Row 2 has an undocumented plug-in architecture, for which various third-party plugins are now available, based on reverse-engineering the Front Row environment. Because it uses QuickTime to render video, Front Row can utilize any codec installed in QuickTime, including DivX, Xvid, and WMV, and play DVD images copied to the hard disk. However, because Front Row does not use QuickTime X, it lacks support for certain codec features like Sample Aspect Ratio.
"Take 2"
In January 2008, Apple announced an update branded "Apple TV Take Two" for Apple TV Software. In addition to the prominent addition of direct downloads for movies, TV episodes, and podcasts via the iTunes Store, movie rentals, the ability to view online photos from Flickr or MobileMe (branded .Mac at the time), and the ability to stream audio to AirTunes were added. This update did away with Front Row and introduced a new interface for the original Apple TV in which content was organized into six categories, all of which appeared in a large square box on the screen upon startup (movies, TV shows, music, YouTube, podcasts, and photos) and presented in the initial menu, along with a "Settings" option for configuration, including software updates.
Discontinuation
Front Row was discontinued with the July 2011 release of Mac OS X Lion (v 10.7). The software appeared in neither the early Developer Previews nor the final version.
While it was initially possible to reinstall Front Row by copying the frameworks and application into OS X Lion, iTunes v 10.4 on 22 July 2011 broke compatibility, causing those who updated iTunes to lose access to their music through Front Row.
References
Apple Inc. software
ITunes
MacOS media players
MacOS-only software made by Apple Inc. |
22186649 | https://en.wikipedia.org/wiki/Padre%20%28software%29 | Padre (software) | Padre (short for "Perl Application Development and Refactoring Environment") is a multi-language software development platform comprising an IDE and a plug-in system to extend it. It is written primarily in Perl and is used to develop applications in this language.
Padre is written in Perl 5 but can be extended by any language running on top of the Parrot virtual machine, such as Raku, through its plug-in system and its integration with Parrot. Development officially started in June 2008, but Padre has reused components that were already available on CPAN, and the latest version of Padre is itself always available on CPAN. Most importantly, it uses the Perl bindings of wxWidgets for the windowing system, and PPI to correctly parse and highlight Perl and to allow refactoring. The primary advantages of Padre for Perl developers are full and easy access to the source code of their editor, and a unique set of "Perl intuition" features that allow the IDE to understand details about project structure and content without needing to be told by the user.
Architecture
Padre employs plug-ins in order to provide all of its functionality on top of the runtime system. All the functionality except the core Perl 5 support is implemented as plug-ins. Padre has plug-ins for HTML and XML editing.
This plug-in mechanism is a lightweight framework. In addition to allowing Padre to be extended using other programming languages, the plug-in framework allows Padre to work with networking applications such as telnet, and database management systems. The plug-in architecture supports writing any desired extension to the environment, such as for configuration management, version control systems (Subversion, Git) support, etc.
Padre's widgets are implemented by wxWidgets, an open source, cross-platform toolkit written in C++.
Features
Bookmark Support
Code Folding
Session Support
Diff Feature
CPAN Explorer Tool
Graphical Debugger Tool
Version Control Tool
Notable plug-ins
Version Control: Subversion, Git, Mercurial
Languages: Raku, Parrot, HTML, XML, CSS, LaTeX
Editor Compatibility: Vim
Helper tool for Catalyst
See also
Comparison of integrated development environments
References
External links
Padre on MetaCPAN
Plug-ins on CPAN
Free integrated development environments
Linux integrated development environments
Debuggers
Cross-platform free software
Free software programmed in Perl
Perl software
Software that uses wxWidgets
Software using the Artistic license
Software that uses Scintilla |
45285362 | https://en.wikipedia.org/wiki/Five%20Nights%20at%20Freddy%27s%203 | Five Nights at Freddy's 3 | Five Nights at Freddy's 3 is an indie survival horror video game developed and published by Scott Cawthon. It is the third installment in the Five Nights at Freddy's series, and chronologically takes place thirty years after the events of the first game, in a horror-themed attraction based on the chain of restaurants featured in the first two games. The player takes on the role of a security guard who must defend themselves from a decrepit animatronic called Springtrap that roams the attraction.
Cawthon first teased a third Five Nights at Freddy's installment on his website in January 2015. The game was released on Steam on March 2, 2015, for Android devices on March 6, 2015, and for iOS devices on March 12, 2015. Nintendo Switch, PlayStation 4 and Xbox One ports were released on November 29, 2019.
The game received mixed reviews from critics, who praised the game's mechanics, but criticized it for its lackluster jumpscares and for lacking the charm of its predecessors. The fourth game in the series, Five Nights at Freddy's 4, was released on July 23, 2015.
Gameplay
The gameplay deviates slightly from that of the previous games in the series. In keeping with the first two installments, players are tasked with surviving night shifts as a security guard, with each night lasting from 12 a.m. to 6 a.m. (a few minutes of real time). However, unlike the previous two games, only one animatronic is able to attack the player and end the game. Several animatronics from earlier games return as "phantoms" that cannot harm the player directly, but can hinder their efforts to survive the night.
The game takes place in a horror-themed attraction called "Fazbear's Fright" through which a single animatronic called "Springtrap" roams. The player must monitor two separate security camera systems, one for the rooms and corridors throughout the facility, and one for the ventilation ductwork, in order to track Springtrap's movements. They can seal off the air vents at certain points to block Springtrap's progress towards the office, but the door or air vent that leads directly into the office remains permanently open. In addition to observing the camera systems, the player must watch the status of three operating systems and reboot them whenever they malfunction. These systems control the cameras, a set of audio devices that can be used to lure Springtrap away from the player's position, and the facility's ventilation. The camera and audio systems malfunction after prolonged or repeated usage. The ventilation system fails at random, or when a "phantom" animatronic jumpscares the player. Some of these animatronics can also cause audio and camera failure; for instance "Phantom Mangle" can affect audio as well as ventilation. Failure to keep the latter running can cause the player to momentarily lose vision as well as hallucinate and see multiple animatronics in the building. If Springtrap enters the office, he jumpscares the player and the game ends.
The game consists of five nights, increasing in difficulty, and completing all five unlocks an even more challenging "Nightmare" night. Going through the game normally grants a star at the fifth night. Several Atari-style minigames are also playable within the main game; completing all of them unlocks the game's "good ending" and grants access to bonus content as well as a second star. If the player completes the "Nightmare" night, they will unlock the cheat menu and a third star. The cheat menu offers a range of options, including a mode to make the animatronics more aggressive and increase the game's difficulty. Other cheats include a radar to track Springtrap and the ability to decrease the length of nights. Completing the "Nightmare" night with only the aggressive cheat enabled grants the fourth star.
Plot
The player assumes the role of a newly-hired employee at Fazbear's Fright, a horror-themed attraction inspired by the family restaurant Freddy Fazbear's Pizza that closed thirty years prior. During the week before the attraction's official opening, the employee must watch over the facility from a security office each night from 12 a.m. to 6 a.m., using a network of surveillance cameras placed in rooms and air vents. They must monitor the status of three operating systems – cameras, audio, and ventilation – and reboot them whenever they begin to malfunction. Camera problems cause the camera feeds to become obscured by static, and if ventilation fails, the employee's vision begins to black out. The employee may also see hallucinatory "phantom" animatronics that resemble animatronics from the restaurant franchise, which can cause system malfunctions but cannot directly harm the player.
After the first night, the staff at Fazbear's Fright uncover an older deteriorated, rabbit-like animatronic named Springtrap; the employee must thereafter prevent it from entering the office and killing them. As the nights progress, the employee hears a series of instructional cassette tapes that instruct employees how to operate animatronic suits which can function as both an animatronic and a costume for humans. The tapes also discuss a "safe room," an additional emergency room which "is not included in the digital map layout programmed in the animatronics or the security cameras, is hidden to customers, invisible to animatronics, and is always off-camera".
However, recordings on later nights discourage the use of the suits. The recording on the fourth night states the suits are no longer considered suitable for employees following "an unfortunate incident at the sister location involving multiple and simultaneous spring lock failures." To replace the faulty suits, the recording states that temporary costumes would be provided, although questions about their appropriateness should be avoided. The recording which plays during the fifth night reminds employees that the safe room is for employees only and that customers should never be taken there. Also, after discovering that one of the special suits was "noticeably moved," it reminds employees that the suits are considered unsafe to wear.
Atari-style minigames that are playable between nights provide insight into the restaurant's lore. The first four nights' minigames depict animatronics from the previous two games following a dark purple animatronic before being violently disassembled by a purple figure, the serial killer that was previously seen in the minigames of Five Nights at Freddy's 2. In the fifth night's minigame, the ghosts of the figure's victims corner him, and he seeks protection by hiding inside a yellow suit. However, the suit's spring-lock mechanism fails, crushing the man in the process, and the children fade away, leaving the figure to seemingly bleed to death. It is implied that the suit is now possessed by the spirit of the figure, which became Springtrap.
Unlike the previous games, Five Nights at Freddy's 3 contains two endings, depending on whether the player has found and completed all of the hidden minigames within the main game. Some of these are only available on specific nights, while others can be accessed during any night. The "bad ending" is attained by completing the game without completing all the hidden minigames and shows a screen depicting the heads of four of the five animatronics from the first game with lit-up eyes, along with one head that is obscured by darkness where only the glowing eyes are visible, speculated to be Golden Freddy's head. Completing all the hidden minigames before completing the game earns the "good ending", which is the same screen as described previously but with the animatronics' eyes not lit up, and with the one obscured head missing. This screen has been speculated to suggest the souls of children killed by the purple figure and possessing the restaurant's animatronics have been put to rest.
In the sixth "Nightmare" night, an archived recording states that all Freddy Fazbear's Pizza locations' safe rooms will be permanently sealed, instructing employees that they are "not to be mentioned to family, friends or insurance representatives". When the night is completed, a newspaper clipping reveals that Fazbear's Fright was destroyed in a fire shortly after the events of the game and that any salvageable items from the attraction are to be auctioned off. However, brightening the image reveals Springtrap in the background, implying its survival.
Development
In January 2015, a new image was uploaded to Scott Cawthon's website, teasing a third entry in the series. A short while later, a second image was released, depicting the redesigned animatronics from the second game apparently scrapped. Various teaser images followed, before a trailer was released on January 26, 2015. The game was posted (and later accepted) onto Steam Greenlight the same day.
A demo for the game was released to selected YouTubers on March 1, 2015, with the full game being released hours later on March 2, 2015. On March 6, 2015, a mobile port was released for Android devices, and for iOS on March 12, 2015. Ports for Nintendo Switch, PlayStation 4 and Xbox One were released on November 29, 2019.
Reception
Metacritic's aggregate reviews for Five Nights at Freddy's 3 has received an average score of 68 out of 100.
Omri Petitte from PC Gamer gave Five Nights at Freddy's 3 a score of 77 out of 100, praising the reworked camera system and Springtrap, but commenting on how the jumpscares from the other animatronics "felt a little stale by the third night." In a more critical review, Nic Rowen from Destructoid gave the game a 6.5 out of 10, stating that although he saw the game as "by far the most technically proficient and mechanically satisfying installment yet," he disliked Springtrap and Fazbear's Fright for lacking the "charm of the original cast and locations."
References
External links
Five Nights at Freddy's 3 on IndieDB
2015 video games
Android (operating system) games
3
Indie video games
IOS games
Point-and-click adventure games
Horror video games
Video games about robots
Single-player video games
Works about missing people
Video game sequels
Video games developed in the United States
Video games with alternate endings
Windows games
PlayStation 4 games
Nintendo Switch games
Xbox Cloud Gaming games
Xbox One games
Clickteam Fusion games |
42073392 | https://en.wikipedia.org/wiki/SciSys | SciSys | SCISYS PLC was a pan-European computer software and services company based in the United Kingdom and Germany. The company was formed in 1980 as Science Systems and was acquired by CGI in 2019.
Overview
SCISYS Group PLC was a medium-sized software and IT services company. The dual-listed company (ESM:SCC)/(Lon:SSY) was a developer of information and communications technology services, e-business, web and mobile applications, editorial newsroom solutions and advanced technology solutions. The SCISYS Group comprised four operational companies: SCISYS UK Ltd, SCISYS Deutschland GmbH, SCISYS Media Solutions GmbH and Xibis Ltd.
History
SCISYS was formed in 1980 as Science Systems and its shares were listed on London's Alternative Investment Market in 1997.
In 2000, Science Systems acquired CODA, an accounting software company, and in 2002 the group holding company was renamed CODASciSys plc. The CODA business was demerged as a separately listed company in 2006. Following the demerger, the original, remaining business became SciSys plc in 2006.
In 2007, SciSys purchased a private Bochum-based German company, VCS AG. VCS produced software, computer systems and telecommunications systems for broadcasting, and services for public- and private-sector satellites.
In January 2012 the VCS business was renamed SCISYS Deutschland GmbH as part of a wider integration exercise. As part of this, in May 2012, the UK operations of SciSys were also rebranded as SCISYS. These two changes brought the business divisions together, operating under the same name and branding.
In October 2012, SCISYS purchased the space-market elements of the German company MakaluMedia.
In December 2014, SCISYS announced the acquisition of Xibis Limited, a mobile and web-development firm.
In November 2016, SCISYS announced the acquisition of ANNOVA Systems GmbH a provider of software solutions for media and broadcast, in particular its OpenMedia newsroom computer systems.
In May 2017, SCISYS announced that the Media & Broadcast division of SCISYS Deutschland has moved from Bochum to new offices in nearby Dortmund. The Bochum location remains the German headquarters and the home of its space division.
In 2018, SCISYS moved its headquarters to Ireland, which was seen as a response to the then-pending Brexit.
In December 2019, SCISYS was acquired by CGI, an independent IT and business consulting services firm.
References
Companies established in 1980
Companies listed on the Alternative Investment Market
Software companies of the United Kingdom
Companies based in Wiltshire
Management consulting firms of the United Kingdom
Technology companies of the United Kingdom
Information technology consulting firms
Information technology companies of the United Kingdom
Mass media technology
Defence companies of the United Kingdom
Government software |
4373961 | https://en.wikipedia.org/wiki/Mactracker | Mactracker | Mactracker is a freeware application containing a complete database of all Apple hardware models and software versions, created and actively developed by Ian Page. The database includes (by no means exhaustive) the Lisa (under its later name, Macintosh XL), Classic Macintosh (1984–1996), printers, scanners, QuickTake digital cameras, iSight, iPod, iPhone, iPad, Apple Watch, AirPort, along with all versions of the Classic Mac OS, macOS, and iOS. For each model of desktop and portable computer, audio clips of the corresponding startup chime or chime of death are also included.
Sources for the history and text used in Mactracker are credited to Lukas Foljanty, Glen D. Sanford, and English Wikipedia. WidgetWidget and The Iconfactory provided many icons of the hardware. Versions are available for macOS and iOS (both iPhone and iPad run the same universal application, which adapts to the device). Versions for Windows and the clickwheel iPod have been discontinued.
Reviews
Macworld: 5 mice
MacUpdate: 5 stars (user reviews)
VersionTracker: 5 stars (user reviews)
References
External links
Mactracker – official site
Utilities for macOS
Utilities for Windows
IPod software |
68343321 | https://en.wikipedia.org/wiki/Pegasus%20Project%20revelations%20in%20India | Pegasus Project revelations in India | In India, the Pegasus Project investigations alleged that the Pegasus spyware was used on ministers, opposition leaders, political strategist and tacticians, journalists, activists, minority leaders, supreme court judges, religious leaders, administrators like Election Commissioners and heads of Central Bureau of Investigation (CBI). Some of these phones were later analysed and were confirmed to have been targeted by the Pegasus spyware.
The Pegasus Project was a collaborative investigative journalism initiative undertaken by 17 media organisations. Pegasus is spyware developed by the NSO Group, an Israeli technology and cyber-arms firm, that can be secretly deployed on mobile phones and other devices running most versions of Android and iOS. Pegasus is capable of reading text messages, tracking calls, collecting passwords, tracking location, accessing the target device's microphone and camera, and harvesting information from apps. Since Pegasus is classified as a cyber-arm by the Israeli government, only national governments can purchase the spyware, and only after authorisation by the Israeli government. The Pegasus Project initiative investigated the use of the Pegasus spyware by governments on journalists, opposition politicians, activists and business people. A target list consisting of 50,000 phone numbers, which could possibly have been targeted by the spyware, was leaked to Forbidden Stories, which spawned this investigation. 300 of these numbers were from India. The presence of a phone number on the list does not confirm the use of Pegasus; only forensic examination of a phone can confirm whether the spyware was present. The Supreme Court appointed a technical committee and asked individuals who suspect snooping by the government to submit their phones for investigation.
List of targets
The following are the list of some of the alleged Indian targets as revealed by the Pegasus Project investigations:
Rahul Gandhi, an Indian politician and main rival of Indian Prime Minister Narendra Modi, was targeted on two of his cellphones. He would go on to claim that "all [his] phones are tapped".
Five close friends and other Indian National Congress party officials were in the leaked list of potential targets.
Prashant Kishor, a political strategist and tactician, who is linked with several of Prime Minister Narendra Modi's rivals, was also targeted. The presence of the spyware on his phone was later confirmed.
Ashok Lavasa, an ex-Election Commissioner of India who flagged Prime Minister Narendra Modi's poll code violation in the 2019 Indian general election was targeted.
Alok Verma, who was ousted as the head of the Central Bureau of Investigation (CBI) by Prime Minister Narendra Modi in 2018 was a target.
Numerous Indian politicians including Deputy Chief Minister of Karnataka G. Parameshwara, as well as close aides of then Chief Minister H. D. Kumaraswamy and senior Congress leader Siddaramaiah.
Abhishek Banerjee, a West Bengal politician of the AITC, and nephew of incumbent Chief Minister of West Bengal Mamata Banerjee.
Siddharth Varadarajan, a New Delhi–based American investigative journalist and founder of The Wire. Varadarajan joined the Pegasus Project investigation.
Umar Khalid, a left-wing student activist and leader of the Democratic Students' Union, was added to the list in late 2018 and then charged with sedition. He was arrested in September 2020 for organising riots; the evidence provided was taken from his phone. He is currently in jail awaiting trial.
Stan Swamy, an Indian Jesuit father and activist. Swamy died in July 2021 at the age of 84 after contracting COVID-19 in prison.
Collaborators Hany Babu, Shoma Sen and Rona Wilson were also in the project's list of alleged targets.
Ashwini Vaishnaw, Minister of Electronics and Information Technology who assumed office less than 3 weeks before the investigation was revealed.
The inner circle to the 14th Dalai Lama Tenzin Gyatso, the 17th Karmapa Ogyen Trinley Dorje and other Tibetan figures, including:
Tempa Tsering, the Dalai Lama's envoy to Delhi
Tenzin Taklha, the Dalai Lama's senior aide
Chhimey Rigzen, the Dalai Lama's senior aide
Lobsang Tenzin, the 5th Samdhong Rinpoche, who is responsible for the selection of the Dalai Lama's successor
Lobsang Sangay, former president of the Tibetan government-in-exile
Penpa Tsering, president of the Tibetan government-in-exile
11 phone numbers associated with a female employee of the Supreme Court of India and her immediate family, who accused the former Chief Justice of India, Ranjan Gogoi, of sexual harassment, were also allegedly found in the database, indicating the possibility that their phones were snooped on.
VK Jain, personal assistant and aide of Arvind Kejriwal
Reactions
Judiciary
The Campaign for Judicial Accountability and Reforms (CJAR) released the following statement: "Such large-scale intrusive surveillance into the personal phones of political leaders, journalists, activists is a flagrant violation of the right to privacy as upheld by the Hon'ble Supreme Court and an affront on the civil liberties of citizens." CJAR stated that this was an attack on the independence of the judiciary and called for a response from the highest court in the land. It also stated that the snooping was patently illegal.
CJAR further stated that the allegations of hacking of a sitting Supreme Court judge, of ex-CJI Ranjan Gogoi's staffer, who accused Gogoi of sexual harassment, and of her family members were of "grave seriousness". It raised the question of why software reportedly sold only to "vetted governments" for national security purposes was used to spy on Gogoi's staffer, and also demanded a probe into improper collusion between the executive and the judiciary.
In the press release, CJAR further stated: "The victimisation of the woman by the police, the many questionable judicial orders passed by benches headed by Justice Gogoi favourable to the Union Government, and Justice Gogoi's nomination to the Rajya Sabha soon after his retirement raises serious questions about such collusion." It recommended that a Special Investigation Team headed by a retired Supreme Court judge be set up to investigate the Pegasus snooping scandal and to probe whether there was any improper collusion between the Union Government and the former Chief Justice of India.
Journalists
In a tweet, the Press Club of India (PCI) issued a statement: "This is the first time in the history of this country that all pillars of our democracy — judiciary, Parliamentarians, media, executives & ministers — have been spied upon. This is unprecedented and the PCI condemns unequivocally. The snooping has been done for ulterior motives. What is disturbing is that a foreign agency, which has nothing to do with the national interest of the country, was engaged to spy on its citizens. This breeds distrust and will invite anarchy. The Govt should come out clean on this front and clarify." Similarly, the Editors Guild of India also released a statement directed against the alleged spying by the Indian government, saying: "This act of snooping essentially conveys that journalism and political dissent are now equated with 'terror'. How can a constitutional democracy survive if governments do not make an effort to protect freedom of speech and allows surveillance with such impunity?" It asked for a Supreme Court-monitored enquiry into the matter, and further demanded that the inquiry committee include people of impeccable credibility from different walks of life—including journalists and civil society—so that it can independently investigate the facts around the extent and intent of the snooping carried out using the services of Pegasus.
Senior journalists N Ram and Sashi Kumar approached the Supreme Court to seek an investigation into the matter. They stated that the targeted surveillance was a "grossly disproportionate invasion of the right to privacy". A bench headed by Chief Justice of India N. V. Ramana is set to hear the petition.
Some news articles claimed that Amnesty International had never said that the leaked phone numbers were a list of NSO's Pegasus spyware targets. These claims were repeated by some BJP leaders. However, these reports were later shown to be false, and Amnesty issued a statement saying that it categorically stands by the findings of the investigation and that the data is irrefutably linked to potential targets of NSO Group's Pegasus spyware.
Union Government
The government has so far not denied the use of the Pegasus spyware in its responses. It has also denied the opposition's requests for a probe, an investigation, or an independent Supreme Court inquiry into the matter. The former IT minister of India, Ravi Shankar Prasad, asked: "If more than 45 nations are using Pegasus as NSO has said, why is only India being targeted?"
Many members of the government and the ruling BJP, such as Minister of Home and Internal Security Amit Shah, Minister of Electronics & Information Technology Ashwini Vaishnaw, Uttar Pradesh Chief Minister Yogi Adityanath, Karnataka Chief Minister Basavaraj Bommai, Assam Chief Minister Himanta Biswa Sarma and Devendra Fadnavis, have alleged that the Pegasus Project is an international conspiracy to malign Indian democracy and defame India. Vaishnaw stated that the report came one day before the monsoon session of Parliament and that "this cannot be a coincidence", and Yogi Adityanath said "the opposition, knowingly or unknowingly, is falling prey to the international conspiracy." Devendra Fadnavis said that "the opposition doesn't want the parliament to run smoothly. Pegasus project is a conspiracy to ruin the nation".
The official response of the Government of India to The Washington Post stated that "[t]he allegations regarding government surveillance on specific people has no concrete basis or truth associated with it whatsoever" and that such news reports were an attempt to "malign the Indian democracy and its institutions". They further stated that each case of interception, monitoring and decryption is approved by the Union Home Secretary and that there exists an oversight mechanism in the form of a review committee headed by the Union Cabinet Secretary and that any such interceptions have been done under the due process of law.
Ashwini Vaishnaw, in a statement in Parliament, said that the reports were "highly sensational" and had "no factual basis". He further stated that NSO itself had rubbished the claims. He said that the existence of numbers on a list was not sufficient evidence that the spyware was used, that the report itself stated the same, and that such claims could not be corroborated without physical examination of the phones.
Amit Shah in a statement on his blog insinuated that this was an attempt to disrupt the monsoon session of the parliament and that the opposition parties were "jumping on a bandwagon" and were trying to "derail anything progressive that comes up in Parliament". He stated that the report was an attempt to "derail India's development trajectory through their conspiracies".
Replying to allegations from the opposition, Minister of State in Ministry of Home Affairs Ajay Kumar Mishra said that there is no reason for a probe and the people who made the allegations are "political failures".
Opposition
On July 27, 2021, ten opposition parties met to coordinate their response to move joint adjournment notice in Lok Sabha to discuss the Pegasus snooping issue. The meeting was attended by leaders of Congress, NCP, Shiv Sena, DMK, National Conference, RSP, IUML, BSP and CPI M.
On July 28, 2021, a similar meeting of leaders of fourteen opposition parties with the same aim took place. Leaders from the Congress, DMK, NCP, Shiv Sena, RJD, SP, CPI M, CPI, NC, AAP, IUML, RSP, KCM and VCK were in attendance.
Indian National Congress
The Indian National Congress accused Prime Minister Narendra Modi of "treason" and compromising national security following the release of the reports and called for the removal of Minister of Home and Internal Security Amit Shah and an investigation of the role of Prime Minister Narendra Modi into the affair. Rahul Gandhi, the leader of the Indian National Congress, later claimed that the Prime Minister "hit the soul of democracy" by using Pegasus. He went on to chair a meeting consisting of many opposition parties to push jointly in the Parliament for discussion on Pegasus.
Shashi Tharoor, a senior leader of the Indian National Congress, is the head of the Parliamentary Standing Committee on Information Technology. The Committee summoned officials of the Ministry of Information Technology and the Ministry of Home Affairs for questioning regarding the matter. However, the committee was unable to meet, as the BJP members of the committee refused to register their attendance despite being present, and the quorum was not met. In reaction, a member of the committee stated: "what happened today shows the lengths to which BJP is willing to go in order to not discuss the Pegasus spyware issue".
Tharoor also demanded a Supreme Court judge-monitored probe into the issue and stated that the opposition would continue to raise the issue in Parliament until the central government agreed to discuss it. He alleged that the central government made use of taxpayers' money for political gains.
The former home minister and finance minister, P Chidambaram, pointed out that other countries such as France and Israel have begun their own investigation into the matter, and that "only Indian government not concerned about Pegasus issue". He also said that the Prime Minister must make a statement in the Parliament to clarify whether or not snooping has been done by the central government. He further stated that the government has an "ostrich-like attitude" because of their avoidance of discussion on the subject in parliament.
All India Trinamool Congress
West Bengal Chief Minister Mamata Banerjee, from the AITC, alleged that the central government intends to "turn India into a surveillance state" where "democracy is in danger". She also claimed that Pegasus was used to keep track of the meetings between her and Prashant Kishor during the 2021 West Bengal elections. She also said that she had to tape her phone to avoid being spied on.
She accused the central government of sitting idle on the matter and set up a judicial inquiry panel to investigate the allegations revealed by the Pegasus Project. She further sought an all-party meeting to discuss the matter.
CPI (M)
Advocate M L Sharma and John Brittas, a member of the Rajya Sabha from the CPI (M) party, filed a petition that sought a court-monitored probe into the allegations of unauthorised use of the Pegasus spyware on Indian citizens. The petition alleged that the use of Pegasus on Indian citizens was a "serious attack upon Indian democracy, judiciary and country's security."
The petition stated that there were two possibilities — either the snooping was carried out by the Government of India or a foreign agency. It said that if the act was carried out by the government, it would amount to unauthorised spying on Indian citizens and that the spending of resources for political gain by the ruling party should not be allowed. The petition stated if the snooping was carried out by a foreign actor, that it would be an "act of external aggression" that needed to be investigated.
Shiv Sena
Sanjay Raut, a politician from Shiv Sena, stated that "mobile phones have become virtual bombs" and are used to keep tabs on opponents of the ruling party. He questioned the government about the source of funding for the use of Pegasus. He called for a probe into the matter, stating that snooping on 300 phones would have cost at least $48 million.
Investigations
The central government denied the request for an investigation or court monitored enquiry into the matter.
West Bengal Chief Minister Mamata Banerjee set up an inquiry commission to probe the Pegasus spyware allegations. Her cabinet approved the formation of a two-member inquiry commission led by retired Supreme Court Justice Madan B. Lokur along with former Calcutta High Court Acting Chief Justice Jyotirmay Bhattacharya. Chief Minister Banerjee said that she wished for a central committee, but had to set up a state government committee due to the central government's inaction on the matter. Some politicians, such as Dilip Ghosh, dismissed the committee and called it "drama" to divert people's attention. On 28 October 2021, the Supreme Court of India, agreeing that a centrally monitored inquiry was needed, ordered an independent probe into the issue by a three-member committee. The West Bengal government agreed not to have its commission function in parallel with the committee of the Supreme Court; however, the state commission continued to operate until December 2021, when it was ordered directly by the Supreme Court to cease.
See also
WhatsApp snooping scandal
Tek Fog
References
Political scandals in India
Modi administration
Espionage scandals and incidents
Malware toolkits
2021 in India |
50399682 | https://en.wikipedia.org/wiki/Predictive%20engineering%20analytics | Predictive engineering analytics | Predictive engineering analytics (PEA) is a development approach for the manufacturing industry that helps with the design of complex products (for example, products that include smart systems). It concerns the introduction of new software tools, the integration between those, and a refinement of simulation and testing processes to improve collaboration between analysis teams that handle different applications. This is combined with intelligent reporting and data analytics. The objective is to let simulation drive the design, to predict product behavior rather than to react on issues which may arise, and to install a process that lets design continue after product delivery.
Industry needs
In a classic development approach, manufacturers deliver discrete product generations. Before bringing those to market, they use extensive verification and validation processes, usually by combining several simulation and testing technologies. But this approach has several shortcomings when looking at how products are evolving. Manufacturers in the automotive industry, the aerospace industry, the marine industry or any other mechanical industry all share similar challenges: they have to re-invent the way they design to be able to deliver what their customers want and buy today.
Complex products that include smart systems
Products include, besides the mechanics, ever more electronics, software and control systems. Those help to increase performance for several characteristics, such as safety, comfort, fuel economy and many more. Designing such products using a classic approach, is usually ineffective. A modern development process should be able to predict the behavior of the complete system for all functional requirements and including physical aspects from the very beginning of the design cycle.
The use of new materials and manufacturing methods
To achieve reduced costs or fuel economy, manufacturers need to continually consider adopting new materials and corresponding manufacturing methods. That makes product development more complex, as engineers cannot rely on their decades of experience anymore, like they did when working with traditional materials, such as steel and aluminium, and traditional manufacturing methods, such as casting. New materials such as composites, behave differently when it comes to structural behavior, thermal behavior, fatigue behavior or noise insulation for example, and require dedicated modeling.
On top of that, as design engineers do not always know all manufacturing complexities that come with using these new materials, it is possible that the "product as manufactured" is different from the "product as designed". Of course all changes need to be tracked, and possibly even an extra validation iteration needs to be done after manufacturing.
Product development continues after delivery
Today's products include many sensors that allow them to communicate with each other, and to send feedback to the manufacturer. Based on this information, manufacturers can send software updates to continue optimizing behavior, or to adapt to a changing operational environment. Products will create the internet of things, and manufacturers should be part of it. A product "as designed" is never finished, so development should continue when the product is in use. This evolution is also referred to as Industry 4.0, or the fourth industrial revolution. It challenges design teams, as they need to react quickly and make behavioral predictions based on an enormous amount of data.
The inclusion of predictive functionality
The ultimate intelligence a product can have is that it remembers the individual behavior of its operator and takes that into consideration. In this way, it can, for example, anticipate certain actions, predict failure or maintenance, or optimize energy consumption in a self-regulating manner. That requires a predictive model inside the product itself, or accessible via the cloud. This model should run very fast and behave exactly like the actual product. It requires the creation of a digital twin: a replica of the product that remains in sync over its entire product lifecycle.
Ever increasing pressure on time, cost, quality and diversification
Consumers today can get easy access to products that are designed in any part of the world. That puts enormous pressure on time-to-market, cost and product quality. It is a trend that has been going on for decades, but with people making ever more buying decisions online, it has become more relevant than ever. Products can easily be compared in terms of price and features on a global scale, and reactions on forums and social media can be very harsh when product quality is not optimal. On top of this, consumers in different parts of the world have different preferences, and even different standards and regulations may apply.
As a result, modern development processes should be able to convert very local requirements into a global product definition, which then should be rolled out locally again, potentially with part of the work being done by engineers in local affiliates. That calls for a firm globally operating product lifecycle management system that starts with requirements definition. And the design process should have the flexibility to effectively predict product behavior and quality for various market needs.
Enabling processes and technologies
Dealing with these challenges is exactly the aim of a predictive engineering analytics approach for product development. It refers to a combination of tools deployment and a good alignment of processes. Manufacturers gradually deploy the following methods and technologies, to an extent that their organization allows it and their products require it:
Deploying a closed-loop systems-driven product development process
In this multi-disciplinary simulation-based approach, the global design is considered as a collection of mutually interacting subsystems from the very beginning. From the very early stages on, the chosen architecture is virtually tested for all critical functional performance aspects simultaneously. These simulations use scalable modeling techniques, so that components can be refined as data becomes available. Closing the loop happens on 2 levels:
Concurrent development of the mechanical components with the control systems
Inclusion of data of products in use (in case of continued development the actual product)
Closed-loop systems driven product development aims at reducing test-and-repair. Manufacturers implement this approach to pursue their dream of designing right the first time.
Increasing the use of 1D multi-physics system simulation
1D system simulation, also referred to as 1D CAE or mechatronics system simulation, allows scalable modeling of multi-domain systems. The full system is presented in a schematic way, by connecting validated analytical modeling blocks of electrical, hydraulic, pneumatic and mechanical subsystems (including control systems). It helps engineers predict the behavior of concept designs of complex mechatronics, either transient or steady-state.
Manufacturers often have validated libraries available that contain predefined components for different physical domains. Or if not, specialized software suppliers can provide them. Using those, the engineers can do concept predictions very early, even before any Computer-aided Design (CAD) geometry is available. During later stages, parameters can then be adapted.
1D system simulation calculations are very efficient. The components are analytically defined, and have input and output ports. Causality is created by connecting inputs of a component to outputs of another one (and vice versa). Models can have various degrees of complexity, and can reach very high accuracy as they evolve. Some model versions may allow real-time simulation, which is particularly useful during control systems development or as part of built-in predictive functionality.
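As a purely illustrative sketch of this block-based modeling style (the component models, parameters and numbers below are invented for this example, and real 1D CAE tools use far more sophisticated solvers), the following Python code connects an analytically defined torque source to a rotating inertia with damping and integrates the coupled system over time.

dt, t_end = 0.001, 2.0

class TorqueSource:                      # e.g. a strongly simplified motor model
    def output(self, t):
        return 0.5 if t < 1.0 else 0.0   # torque in N·m

class RotatingInertia:                   # e.g. a load with viscous friction
    def __init__(self, inertia=0.01, damping=0.05):
        self.J, self.d = inertia, damping
        self.omega = 0.0                 # state: angular speed in rad/s

    def step(self, torque_in, dt):
        domega = (torque_in - self.d * self.omega) / self.J
        self.omega += domega * dt        # explicit Euler integration
        return self.omega

source, load = TorqueSource(), RotatingInertia()
t = 0.0
while t < t_end:
    load.step(source.output(t), dt)      # output port of one block feeds the input of the other
    t += dt
print(round(load.omega, 3))              # predicted speed at the end of the run

Because the blocks are analytical and lightweight, such a model can be evaluated very quickly and refined as more detailed component data becomes available, which is the core idea behind 1D system simulation.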
Improving 3D simulation technologies
3D simulation or 3D CAE is usually applied at a more advanced stage of product development than 1D system simulation, and can account for phenomena that cannot be captured in 1D models. The models can evolve into highly detailed representations that are very application-specific and can be very computationally intensive.
3D simulation or 3D CAE technologies were already essential in classic development processes for verification and validation, often proving their value by speeding up development and avoiding late-stage changes. 3D simulation or 3D CAE are still indispensable in the context of predictive engineering analytics, becoming a driving force in product development. Software suppliers put great effort into enhancements, by adding new capabilities and increasing performance on modeling, process and solver side. While such tools are generally based on a single common platform, solution bundles are often provided to cater for certain functional or performance aspects, while industry knowledge and best practices are provided to users in application verticals. These improvements should allow 3D simulation or 3D CAE to keep pace with ever shorter product design cycles.
Establishing a strong coupling between 1D simulation, 3D simulation and controls engineering
As the closed-loop systems-driven product development approach requires concurrent development of the mechanical system and controls, strong links must exist between 1D simulation, 3D simulation and control algorithm development. Software suppliers achieve this through offering co-simulation capabilities for Model-in-the-Loop (MiL), Software-in-the-Loop (SiL) and Hardware-in-the-Loop (HiL) processes.
Model-in-the-Loop
Already when evaluating potential architectures, 1D simulation should be combined with models of control software, as the electronic control unit (ECU) will play a crucial role in achieving and maintaining the right balance between functional performance aspects when the product will operate. During this phase, engineers cascade down the design objectives to precise targets for subsystems and components. They use multi-domain optimization and design trade-off techniques. The controls need to be included in this process. By combining them with the system models in MiL simulations, potential algorithms can be validated and selected.
In practice, MiL involves co-simulation between virtual controls from dedicated controller modeling software and scalable 1D models of the multi-physical system. This provides the right combination of accuracy and calculation speed for investigation of concepts and strategies, as well as controllability assessment.
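A minimal sketch of this MiL co-simulation idea is shown below (the plant model, controller gains and time steps are invented for illustration): a virtual controller model, here a simple PI speed controller, and a 1D plant model exchange signals at a fixed communication step, allowing the control strategy to be assessed long before any hardware exists.

dt = 0.001                      # plant integration step
comm_step = 0.01                # controller sampling / co-simulation exchange step

class PlantModel:               # first-order 1D stand-in for the physical system
    def __init__(self):
        self.speed = 0.0
    def step(self, actuation, dt):
        self.speed += (actuation - 0.1 * self.speed) * dt / 0.05
        return self.speed

class VirtualController:        # stand-in for the controller model (MiL)
    def __init__(self, kp=0.4, ki=2.0):
        self.kp, self.ki, self.integral = kp, ki, 0.0
    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

plant, controller = PlantModel(), VirtualController()
actuation, t = 0.0, 0.0
while t < 2.0:
    if round(t / dt) % round(comm_step / dt) == 0:   # signal exchange point
        actuation = controller.update(1.0, plant.speed, comm_step)
    plant.step(actuation, dt)
    t += dt
print(round(plant.speed, 3))    # should settle close to the setpoint of 1.0

The same loop structure carries over to SiL and HiL, where the controller block is progressively replaced by generated code and, finally, the real control hardware running against a real-time plant model.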
Software-in-the-Loop
After the conceptual control strategy has been decided, the control software is further developed while constantly taking the overall global system functionality into consideration. The controller modeling software can generate new embedded C-code and integrate it in possible legacy C-code for further testing and refinement.
Using SiL validation on a global, full-system multi-domain model helps anticipate the conversion from floating point to fixed point after the code is integrated in the hardware, and refine gain scheduling when the code action needs to be adjusted to operating conditions.
SiL is a closed-loop simulation process to virtually verify, refine and validate the controller in its operational environment, and includes detailed 1D and/or 3D simulation models.
Hardware-in-the-Loop
During the final stages of controls development, when the production code is integrated in the ECU hardware, engineers further verify and validate using extensive and automated HiL simulation. The real ECU hardware is combined with a downsized version of the multi-domain global system model, running in real time. This HiL approach allows engineers to complete upfront system and software troubleshooting to limit the total testing and calibration time and cost on the actual product prototype.
During HiL simulation, the engineers verify if regulation, security and failure tests on the final product can happen without risk. They investigate interaction between several ECUs if required. And they make sure that the software is robust and provides quality functionality under every circumstance. When replacing the global system model running in real-time with a more detailed version, engineers can also include pre-calibration in the process. These detailed models are usually available anyway since controls development happens in parallel to global system development.
Closely aligning simulation with physical testing
Evolving from verification and validation to predictive engineering analytics means that the design process has to become more simulation-driven. Physical testing remains a crucial part of that process, both for validation of simulation results as well as for the testing of final prototypes, which would always be required prior to product sign-off. The scale of this task will become even bigger than before, as more conditions and parameters combinations will need to be tested, in a more integrated and complex measurement system that can combine multiple physical aspects, as well as control systems.
Besides, also in other development stages, combining test and simulation in a well aligned process will be essential for successful predictive engineering analytics.
Increasing realism of simulation models
Modal testing or experimental modal analysis (EMA) was already essential in verification and validation of pure mechanical systems. It is a well-established technology that has been used for many applications, such as structural dynamics, vibro-acoustics, vibration fatigue analysis, and more, often to improve finite element models through correlation analysis and model updating. The context was however very often trouble-shooting.
As part of predictive engineering analytics, modal testing has to evolve, delivering results that increase simulation realism and handle the multi-physical nature of modern, complex products. Testing has to help define realistic model parameters, boundary conditions and loads. Besides mechanical parameters, different quantities need to be measured. Testing also needs to be capable of validating multi-body models and 1D multi-physical simulation models. In general, a whole new range of testing capabilities (some modal-based, some not) in support of simulation becomes important, and much earlier in the development cycle than before.
Using simulation for more efficient testing
As the number of parameters and their mutual interaction explodes in complex products, testing efficiency is crucial, both in terms of instrumentation and definition of critical test cases. A good alignment between test and simulation can greatly reduce the total test effort and boost productivity.
Simulation can help to analyze upfront which locations and parameters are most effective to measure for a certain objective. It also allows engineers to investigate the coupling between certain parameters, so that the number of sensors and test conditions can be minimized.
On top of that, simulation can be used to derive certain parameters that cannot be measured directly. Here again, a close alignment between simulation and testing activities is a must. Especially 1D simulation models can open the door to a large number of new parameters that cannot be directly accessed with sensors.
Creating hybrid models
As complex products are in fact combinations of subsystems which are not necessarily developed concurrently, systems and subsystems development increasingly requires setups that combine hardware, simulation models and measurement input. These hybrid modeling techniques allow realistic real-time evaluation of system behavior very early in the development cycle. Obviously, this requires dedicated technologies as well as a very good alignment between simulation (both 1D and 3D) and physical testing.
Tightly integrating 1D and 3D CAE, as well as testing in the complete product lifecycle management process
Tomorrow's products will live a life after delivery. They will include predictive functionalities based on system models, adapt to their environment, feed information back to design, and more. From this perspective, design and engineering are more than turning an idea into a product. They are an essential part of the digital thread through the entire product value chain, from requirements definition to product in use.
Closing the loop between design and engineering on one hand, and product in use on the other, requires that all steps are tightly integrated in a product lifecycle management software environment. Only this can enable traceability between requirements, functional analysis and performance verification, as well as analytics of use data in support of design. It will allow models to become digital twins of the actual product. They remain in-sync, undergoing the same parameter changes and adapting to the real operational environment.
See also
smart systems
control systems
3D simulation or 3D CAE
Industrie 4.0
Internet of Things
real-time simulation
Hardware-in-the-Loop (HiL)
co-simulation
Modal testing
product lifecycle management
digital twins
References
Product lifecycle management
Engineering disciplines |
63303438 | https://en.wikipedia.org/wiki/Eric%20Gilbert | Eric Gilbert | Eric Gilbert is an American computer scientist and the John Derby Evans Associate Professor in the University of Michigan School of Information, with a courtesy appointment in CSE. He is known for his work designing and analyzing social media.
Education and early life
Gilbert received a B.S. with highest distinction in Mathematics & Computer Science from the University of Illinois at Urbana-Champaign in 2001. While in college, Gilbert worked as a software engineer on the influential social and learning computing system PLATO. After completing his undergraduate work, he served in Teach For America as a Math and Computer Science teacher at Paul Robeson High School in Chicago. Gilbert obtained a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2011.
Career and research
Gilbert joined the School of Interactive Computing within the Georgia Institute of Technology College of Computing in 2011 as an Assistant Professor. There, he led the comp.social lab. After receiving tenure in 2017 at Georgia Tech, Gilbert moved to the School of Information at the University of Michigan as the John Derby Evans Endowed Professor of Information in 2018. He is also appointed within Computer Science and Engineering at Michigan.
Gilbert has made foundational contributions to the fields of social computing and HCI. His research focuses on studying existing—as well as designing new—social media systems. According to Google Scholar, Gilbert's work has been cited over 8,000 times, and he has an h-index of 37.
Media
Gilbert's research—for example on Reddit, Twitter, and Facebook—is frequently covered in the mainstream press.
Awards and honors
Gilbert was one of the recipients of the National Science Foundation CAREER Awards in 2016, the Sigma Xi Young Faculty Award in 2015, and the UIUC CS Distinguished Alumni Award in 2018. During his PhD work, Gilbert won the inaugural Google Ph.D. Fellowship. Gilbert has also won 5 best paper awards from ACM SIGCHI conferences, and received 6 best paper honorable mentions.
Selected works
Jhaver, Shagun, Amy Bruckman, and Eric Gilbert. "Does Transparency in Moderation Really Matter? User Behavior After Content Removal Explanations on Reddit." Proceedings of the ACM on Human-Computer Interaction 3. CSCW (2019): 1-27.
Chandrasekharan, Eshwar, et al. "The Internet's Hidden Rules: An Empirical Study of Reddit Norm Violations at Micro, Meso, and Macro Scales." Proceedings of the ACM on Human-Computer Interaction 2. CSCW (2018): 1-25.
Chandrasekharan, Eshwar, et al. "You Can't Stay Here: The Efficacy of Reddit's 2015 Ban Examined Through Hate Speech." Proceedings of the ACM on Human-Computer Interaction 1. CSCW (2017): 1-22.
Mitra, Tanushree, Graham Wright, and Eric Gilbert. "Credibility and the Dynamics of Collective Attention." Proceedings of the ACM on Human-Computer Interaction 1. CSCW (2017): 1-17.
Hutto, Clayton J., and Eric Gilbert. "Vader: A parsimonious rule-based model for sentiment analysis of social media text." Eighth international AAAI conference on weblogs and social media. 2014.
Gilbert, Eric, and Karrie Karahalios. "Predicting tie strength with social media." Proceedings of the SIGCHI conference on human factors in computing systems. 2009.
Gilbert, Eric, and Karrie Karahalios. "Widespread worry and the stock market." Fourth International AAAI Conference on Weblogs and Social Media. 2010.
References
University of Illinois at Urbana–Champaign alumni
Teach For America alumni
University of Michigan faculty
American computer scientists
Year of birth missing (living people)
Living people |
45074771 | https://en.wikipedia.org/wiki/Software%20bill%20of%20materials | Software bill of materials | A software bill of materials (SBOM) is a list of components in a piece of software. Software vendors often create products by assembling open source and commercial software components. The SBOM describes the components in a product. It is analogous to a list of ingredients on food packaging: where you might consult a label to avoid foods that cause allergies, SBOMs can help organizations or persons avoid consumption of software that could harm them.
The concept of a BOM is well-established in traditional manufacturing as part of supply chain management. A manufacturer uses a BOM to track the parts it uses to create a product. If defects are later found in a specific part, the BOM makes it easy to locate affected products.
Usage
An SBOM is useful both to the builder (manufacturer) and the buyer (customer) of a software product. Builders often leverage available open source and third-party software components to create a product; an SBOM allows the builder to make sure those components are up to date and to respond quickly to new vulnerabilities. Buyers can use an SBOM to perform vulnerability or license analysis, both of which can be used to evaluate risk in a product.
While many companies use a Microsoft Excel document for general BOM management, an SBOM kept in a spreadsheet carries additional risks and issues. SBOMs gain greater value when collectively stored in a repository that can be part of other automation systems and easily queried by other applications.
Understanding the supply chain of software, obtaining an SBOM, and using it to analyze known vulnerabilities are crucial in managing risk.
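As a simple, hypothetical illustration of this use case, the following Python sketch stores SBOM entries in a machine-readable structure and checks them against known-vulnerable component versions. The field names, component names and advisory data are invented for this example and do not follow any particular SBOM standard such as SPDX or CycloneDX.

# Hypothetical SBOM entries for one product (illustrative field names only)
sbom = [
    {"supplier": "ExampleCorp", "component": "libexamplecrypto", "version": "1.2.3"},
    {"supplier": "OpenExample", "component": "example-logging",  "version": "2.14.0"},
]

# Hypothetical advisory data: (component, version) pairs with known vulnerabilities
known_vulnerable = {("example-logging", "2.14.0")}

# Query the SBOM to find components in the product that need patching or review
affected = [entry for entry in sbom
            if (entry["component"], entry["version"]) in known_vulnerable]
print(affected)

Keeping such records in a queryable repository rather than a spreadsheet is what allows this kind of check to be automated across an organization's entire software portfolio.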
Legislation
The Cyber Supply Chain Management and Transparency Act of 2014 was US legislation that proposed to require government agencies to obtain SBOMs for any new products they purchase. It would also have required obtaining SBOMs for "any software, firmware, or product in use by the United States Government". Though it ultimately did not pass, the act raised awareness in government and spurred later legislation such as the "Internet of Things Cybersecurity Improvement Act of 2017".
The US Executive Order on Improving the Nation’s Cybersecurity of May 12, 2021 ordered NIST to issue guidance within 90 days to "include standards, procedures, or criteria regarding" several topics in order to "enhance the security of the software supply chain," including "providing a purchaser a Software Bill of Materials (SBOM) for each product." Also mandated within 60 days was for NTIA to "publish minimum elements for an SBOM."
The NTIA minimum elements were published on July 12, 2021, and also "describes SBOM use cases for greater transparency in the software supply chain, and lays out options for future evolution."
References
Supply chain management
Software project management
Software development process |
43339 | https://en.wikipedia.org/wiki/Packet%20switching | Packet switching | In telecommunications, packet switching is a method of grouping data into packets that are transmitted over a digital network. Packets are made of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by an operating system, application software, or higher layer protocols. Packet switching is the primary basis for data communications in computer networks worldwide.
In the early 1960s, American computer scientist Paul Baran developed the concept Distributed Adaptive Message Block Switching, with the goal of providing a fault-tolerant, efficient routing method for telecommunication messages as part of a research program at the RAND Corporation, funded by the US Department of Defense. This concept contradicted then-established principles of pre-allocation of network bandwidth, exemplified by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of British computer scientist Donald Davies at the National Physical Laboratory (United Kingdom) in 1965. Davies is credited with coining the modern term packet switching and inspiring numerous packet switching networks in the decade following, including the incorporation of the concept into the design of the ARPANET in the United States.
Concept
A simple definition of packet switching is:
Packet switching allows delivery of variable bit rate data streams, realized as sequences of packets, over a computer network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. As they traverse networking hardware, such as switches and routers, packets are received, buffered, queued, and retransmitted (stored and forwarded), resulting in variable latency and throughput depending on the link capacity and the traffic load on the network. Packets are normally forwarded by intermediate network nodes asynchronously using first-in, first-out buffering, but may be forwarded according to some scheduling discipline for fair queuing, traffic shaping, or for differentiated or guaranteed quality of service, such as weighted fair queuing or leaky bucket. Packet-based communication may be implemented with or without intermediate forwarding nodes (switches and routers). In case of a shared physical medium (such as radio or 10BASE5), the packets may be delivered according to a multiple access scheme.
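As a rough illustration of these ideas, the following Python sketch (purely illustrative; the class and field names are invented for this example) models a packet as a header plus payload and a network node that stores, queues and forwards packets in first-in, first-out order.

from collections import deque
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # header: source address
    dst: str        # header: destination address
    seq: int        # header: sequence number
    payload: bytes  # data handed to the application at the destination

class StoreAndForwardNode:
    def __init__(self, name, routing_table):
        self.name = name
        self.buffer = deque()               # first-in, first-out buffering
        self.routing_table = routing_table  # destination address -> next hop

    def receive(self, packet):
        self.buffer.append(packet)          # store

    def forward(self):
        while self.buffer:
            packet = self.buffer.popleft()  # forward in arrival order
            self.routing_table[packet.dst].receive(packet)

class Host:
    def __init__(self):
        self.delivered = []
    def receive(self, packet):
        self.delivered.append(packet.payload)   # payload extracted for the application

host_b = Host()
node = StoreAndForwardNode("N1", routing_table={"B": host_b})
node.receive(Packet(src="A", dst="B", seq=1, payload=b"hello"))
node.forward()
print(host_b.delivered)   # [b'hello']

The variable latency and throughput mentioned above arise precisely because real packets wait in such buffers behind other traffic before being retransmitted on the next link.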
Packet switching contrasts with another principal networking paradigm, circuit switching, a method which pre-allocates dedicated network bandwidth specifically for each communication session, each having a constant bit rate and latency between nodes. In cases of billable services, such as cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching may be characterized by a fee per unit of information transmitted, such as characters, packets, or messages.
A packet switch has four components: input ports, output ports, routing processor, and switching fabric.
History
The concept of switching small blocks of data was first explored independently by Paul Baran at the RAND Corporation in the early 1960s in the US and Donald Davies at the National Physical Laboratory (NPL) in the UK in 1965.
In the late 1950s, the US Air Force established a wide area network for the Semi-Automatic Ground Environment (SAGE) radar defense system. Recognizing vulnerabilities in this network, the Air Force sought a system that might survive a nuclear attack to enable a response, thus diminishing the attractiveness of the first strike advantage by enemies (see Mutual assured destruction). Baran developed the concept of distributed adaptive message block switching in support of the Air Force initiative. The concept was first presented to the Air Force in the summer of 1961 as briefing B-265, later published as RAND report P-2626 in 1962, and finally in report RM 3420 in 1964. Report P-2626 described a general architecture for a large-scale, distributed, survivable communications network. The work focuses on three key ideas: use of a decentralized network with multiple paths between any two points, dividing user messages into message blocks, and delivery of these messages by store and forward switching.
Davies independently developed a similar message routing concept in 1965. He coined the term packet switching, and proposed building a commercial nationwide data network in the UK. He gave a talk on the proposal in 1966, after which a person from the Ministry of Defence (MoD) told him about Baran's work. Roger Scantlebury, a member of Davies' team, met Lawrence Roberts at the 1967 Symposium on Operating Systems Principles and suggested it for use in the ARPANET. Davies had chosen some of the same parameters for his original network design as did Baran, such as a packet size of 1024 bits. In 1966, Davies proposed that a network should be built at the laboratory to serve the needs of NPL and prove the feasibility of packet switching. To deal with packet permutations (due to dynamically updated route preferences) and with datagram losses (unavoidable when fast sources send to slow destinations), he assumed that "all users of the network will provide themselves with some kind of error control", thus inventing what came to be known as the end-to-end principle. After a pilot experiment in 1969, the NPL Data Communications Network entered service in 1970.
Leonard Kleinrock conducted research into queueing theory for his doctoral dissertation at MIT in 1961-2 and published it as a book in 1964 in the field of message switching. In 1968, Lawrence Roberts contracted with Kleinrock to carry out theoretical work at UCLA to model the performance of the ARPANET, which underpinned the development of the network in the early 1970s. The NPL team also carried out simulation work on packet networks, including datagram networks.
The French CYCLADES network, designed by Louis Pouzin in the early 1970s, was the first to implement the end-to-end principle of Davies and make the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a service of the network itself. His team was thus the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit service while using a best-effort network service, an early contribution to what would become the Transmission Control Protocol (TCP).
In May 1974, Vint Cerf and Bob Kahn described the Transmission Control Program, an internetworking protocol for sharing resources using packet switching among the nodes. The specification of the Transmission Control Program was then published in RFC 675 (Specification of Internet Transmission Control Program), written by Vint Cerf, Yogen Dalal and Carl Sunshine in December 1974. This monolithic protocol was later layered as the Transmission Control Protocol, TCP, atop the Internet Protocol, IP.
Complementary metal–oxide–semiconductor (CMOS) VLSI (very-large-scale integration) technology led to the development of high-speed broadband packet switching during the 1980s–1990s.
Beginning in the mid-1990s, Leonard Kleinrock sought to be recognized as the "father of modern data networking". However, Kleinrock's claims that his work in the early 1960s originated the concept of packet switching and that this work was the source of the packet switching concepts used in the ARPANET are disputed, including by Robert Taylor, Paul Baran, and Donald Davies. Baran and Davies are recognized by historians and the U.S. National Inventors Hall of Fame for independently inventing the concept of digital packet switching used in modern computer networking including the Internet.
Connectionless and connection-oriented modes
Packet switching may be classified into connectionless packet switching, also known as datagram switching, and connection-oriented packet switching, also known as virtual circuit switching. Examples of connectionless systems are Ethernet, Internet Protocol (IP), and the User Datagram Protocol (UDP). Connection-oriented systems include X.25, Frame Relay, Multiprotocol Label Switching (MPLS), and the Transmission Control Protocol (TCP).
In connectionless mode each packet is labeled with a destination address, source address, and port numbers. It may also be labeled with the sequence number of the packet. This information eliminates the need for a pre-established path to help the packet find its way to its destination, but means that more information is needed in the packet header, which is therefore larger. The packets are routed individually, sometimes taking different paths resulting in out-of-order delivery. At the destination, the original message may be reassembled in the correct order, based on the packet sequence numbers. Thus a virtual circuit carrying a byte stream is provided to the application by a transport layer protocol, although the network only provides a connectionless network layer service.
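A minimal Python sketch of the reassembly step described above (the function and data are invented for illustration): each datagram carries a sequence number, packets may arrive out of order after taking different paths, and the destination restores the original byte stream by sorting on those sequence numbers.

def reassemble(received_packets):
    # received_packets: list of (sequence_number, payload) tuples, in arrival order
    ordered = sorted(received_packets, key=lambda p: p[0])
    return b"".join(payload for _, payload in ordered)

# Packets took different paths and arrived out of order at the destination
arrived = [(2, b"lo wor"), (1, b"hel"), (3, b"ld")]
assert reassemble(arrived) == b"hello world"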
Connection-oriented transmission requires a setup phase to establish the parameters of communication before any packet is transferred. The signaling protocols used for setup allow the application to specify its requirements and discover link parameters. Acceptable values for service parameters may be negotiated. The packets transferred may include a connection identifier rather than address information and the packet header can be smaller, as it only needs to contain this code and any information, such as length, timestamp, or sequence number, which is different for different packets. In this case, address information is only transferred to each node during the connection setup phase, when the route to the destination is discovered and an entry is added to the switching table in each network node through which the connection passes. When a connection identifier is used, routing a packet requires the node to look up the connection identifier in a table.
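The table lookup described above can be sketched as follows in Python (the class, port numbers and connection identifiers are hypothetical): the setup phase installs an entry mapping an input port and incoming connection identifier to an output port and outgoing identifier, after which data packets need only carry the short connection identifier rather than full addressing.

class VirtualCircuitSwitch:
    def __init__(self):
        # (input_port, incoming_connection_id) -> (output_port, outgoing_connection_id)
        self.switching_table = {}

    def setup(self, in_port, in_conn_id, out_port, out_conn_id):
        # performed once, during the connection setup phase
        self.switching_table[(in_port, in_conn_id)] = (out_port, out_conn_id)

    def forward(self, in_port, conn_id, payload):
        out_port, out_conn_id = self.switching_table[(in_port, conn_id)]
        return out_port, (out_conn_id, payload)   # relabel and send on the output port

switch = VirtualCircuitSwitch()
switch.setup(in_port=1, in_conn_id=7, out_port=3, out_conn_id=12)
print(switch.forward(1, 7, b"data"))   # -> (3, (12, b'data'))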
Connection-oriented transport layer protocols such as TCP provide a connection-oriented service by using an underlying connectionless network. In this case, the end-to-end principle dictates that the end nodes, not the network itself, are responsible for the connection-oriented behavior.
Packet switching in networks
Packet switching is used to optimize the use of the channel capacity available in digital telecommunication networks, such as computer networks, and minimize the transmission latency (the time it takes for data to pass across the network), and to increase robustness of communication.
Packet switching is used in the Internet and most local area networks. The Internet is implemented by the Internet Protocol Suite using a variety of link layer technologies. For example, Ethernet and Frame Relay are common. Newer mobile phone technologies (e.g., GSM, LTE) also use packet switching. Packet switching is associated with connectionless networking because, in these systems, no connection agreement needs to be established between communicating parties prior to exchanging data.
X.25 is a notable use of packet switching in that, despite being based on packet switching methods, it provides virtual circuits to the user. These virtual circuits carry variable-length packets. In 1978, X.25 provided the first international and commercial packet switching network, the International Packet Switched Service (IPSS). Asynchronous Transfer Mode (ATM) also is a virtual circuit technology, which uses fixed-length cell relay connection oriented packet switching.
Technologies such as Multiprotocol Label Switching (MPLS) and the Resource Reservation Protocol (RSVP) create virtual circuits on top of datagram networks. MPLS and its predecessors, as well as ATM, have been called "fast packet" technologies. MPLS, indeed, has been called "ATM without cells". Virtual circuits are especially useful in building robust failover mechanisms and allocating bandwidth for delay-sensitive applications.
Packet-switched networks
The history of packet-switched networks can be divided into three overlapping eras: early networks before the introduction of X.25 and the OSI model; the X.25 era when many postal, telephone, and telegraph (PTT) companies provided public data networks with X.25 interfaces; and the Internet era.
Early networks
Research into packet switching at the National Physical Laboratory (NPL) began with a proposal for a wide-area network in 1965, and a local-area network in 1966. ARPANET funding was secured in 1966 by Bob Taylor, and planning began in 1967 when he hired Larry Roberts. The NPL network, ARPANET, and SITA HLN became operational in 1969. Before the introduction of X.25 in 1973, about twenty different network technologies had been developed. Two fundamental differences involved the division of functions and tasks between the hosts at the edge of the network and the network core. In the datagram system, operating according to the end-to-end principle, the hosts have the responsibility to ensure orderly delivery of packets. In the virtual call system, the network guarantees sequenced delivery of data to the host. This results in a simpler host interface but complicates the network. The X.25 protocol suite uses this network type.
AppleTalk
AppleTalk is a proprietary suite of networking protocols developed by Apple in 1985 for Apple Macintosh computers. It was the primary protocol used by Apple devices through the 1980s and 1990s. AppleTalk included features that allowed local area networks to be established ad hoc without the requirement for a centralized router or server. The AppleTalk system automatically assigned addresses, updated the distributed namespace, and configured any required inter-network routing. It was a plug-n-play system.
AppleTalk implementations were also released for the IBM PC and compatibles, and the Apple IIGS. AppleTalk support was available in most networked printers, especially laser printers, some file servers and routers. AppleTalk support was terminated in 2009, replaced by TCP/IP protocols.
ARPANET
The ARPANET was a progenitor network of the Internet and one of the first networks, along with ARPA's SATNET, to run the TCP/IP suite using packet switching technologies.
BNRNET
BNRNET was a network which Bell-Northern Research developed for internal use. It initially had only one host but was designed to support many hosts. BNR later made major contributions to the CCITT X.25 project.
CYCLADES
The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the early ARPANET design and to support network research generally. It was the first network to use the end-to-end principle and make the hosts responsible for reliable delivery of data, rather than the network itself. Concepts of this network influenced later ARPANET architecture.
DECnet
DECnet is a suite of network protocols created by Digital Equipment Corporation, originally released in 1975 in order to connect two PDP-11 minicomputers. It evolved into one of the first peer-to-peer network architectures, thus transforming DEC into a networking powerhouse in the 1980s. Initially built with three layers, it later (1982) evolved into a seven-layer OSI-compliant networking protocol. The DECnet protocols were designed entirely by Digital Equipment Corporation. However, DECnet Phase II (and later) were open standards with published specifications, and several implementations were developed outside DEC, including one for Linux.
DDX-1
DDX-1 was an experimental network from Nippon PTT. It mixed circuit switching and packet switching. It was succeeded by DDX-2.
EIN
The European Informatics Network (EIN), originally called COST 11, was a project beginning in 1971 to link networks in Britain, France, Italy, Switzerland and Euratom. Six other European countries also participated in the research on network protocols. Derek Barber directed the project and Roger Scantlebury led the UK technical contribution; both were from NPL. Work began in 1973 and it became operational in 1976 including nodes linking the NPL network and CYCLADES. The transport protocol of the EIN was the basis of the one adopted by the International Networking Working Group. EIN was replaced by Euronet in 1979.
EPSS
The Experimental Packet Switched Service (EPSS) was an experiment of the UK Post Office Telecommunications, based on the Coloured Book protocols defined by the UK academic community in 1975. It was the first public data network in the UK when it began operating in 1977. Ferranti supplied the hardware and software. The handling of link control messages (acknowledgements and flow control) was different from that of most other networks.
GEIS
As General Electric Information Services (GEIS), General Electric was a major international provider of information services. The company originally designed a telephone network to serve as its internal (albeit continent-wide) voice telephone network.
In 1965, at the instigation of Warner Sinback, a data network based on this voice-phone network was designed to connect GE's four computer sales and service centers (Schenectady, New York, Chicago, and Phoenix) to facilitate a computer time-sharing service.
After going international some years later, GEIS created a network data center near Cleveland, Ohio. Very little has been published about the internal details of their network. The design was hierarchical with redundant communication links.
IPSANET
IPSANET was a semi-private network constructed by I. P. Sharp Associates to serve their time-sharing customers. It became operational in May 1976.
IPX/SPX
The Internetwork Packet Exchange (IPX) and Sequenced Packet Exchange (SPX) are Novell networking protocols from the 1980s derived from Xerox Network Systems' IDP and SPP protocols, respectively which date back to the 1970s. IPX/SPX was used primarily on networks using the Novell NetWare operating systems.
Merit Network
Merit Network, an independent nonprofit organization governed by Michigan's public universities, was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development. With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host-to-host connection was made between the IBM mainframe systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit. In October 1972, connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP; additionally, public universities in Michigan joined the network. All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.
NPL
In 1965, Donald Davies of the National Physical Laboratory (United Kingdom) designed and proposed a national commercial data network based on packet switching. The proposal was not taken up nationally but, in 1966, he designed a local network using "interface computers", today known as routers, to serve the needs of NPL and prove the feasibility of packet switching.
By 1968 Davies had begun building the NPL network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions. In 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986. NPL, followed by ARPANET, were the first two networks to use packet switching, and were interconnected in the early 1970s.
Octopus
Octopus was a local network at Lawrence Livermore National Laboratory. It connected sundry hosts at the lab to interactive terminals and various computer peripherals including a bulk storage system.
Philips Research
Philips Research Laboratories in Redhill, Surrey developed a packet switching network for internal use. It was a datagram network with a single switching node.
PUP
PARC Universal Packet (PUP or Pup) was one of the two earliest internetworking protocol suites; it was created by researchers at Xerox PARC in the mid-1970s. The entire suite provided routing and packet delivery, as well as higher level functions such as a reliable byte stream, along with numerous applications. Further developments led to Xerox Network Systems (XNS).
RCP
RCP was an experimental network created by the French PTT. It was used to gain experience with packet switching technology before the specification of TRANSPAC was frozen. RCP was a virtual-circuit network in contrast to CYCLADES which was based on datagrams. RCP emphasised terminal-to-host and terminal-to-terminal connection; CYCLADES was concerned with host-to-host communication. TRANSPAC was introduced as an X.25 network. RCP influenced the X.25 specification.
RETD
Red Especial de Transmisión de Datos (RETD) was a network developed by Compañía Telefónica Nacional de España. It became operational in 1972 and thus was the first public network.
SCANNET
"The experimental packet-switched Nordic telecommunication network SCANNET was implemented in Nordic technical libraries in the 1970s, and it included first Nordic electronic journal Extemplo. Libraries were also among first ones in universities to accommodate microcomputers for public use in the early 1980s."
SITA HLN
SITA is a consortium of airlines. Its High Level Network (HLN) became operational in 1969 at about the same time as ARPANET. It carried interactive traffic and message-switching traffic. As with many non-academic networks, very little has been published about it.
Systems Network Architecture
Systems Network Architecture (SNA) is IBM's proprietary networking architecture created in 1974. An IBM customer could acquire hardware and software from IBM and lease private lines from a common carrier to construct a private network.
Telenet
Telenet was the first FCC-licensed public data network in the United States. Telenet was incorporated in 1973 and started operations in 1975. It was founded by Bolt Beranek & Newman with Larry Roberts as CEO as a means of making packet switching technology public. Telenet initially used a proprietary virtual connection host interface, but changed the host interface to X.25 and the terminal interface to X.29. It went public in 1979 and was then sold to GTE.
Tymnet
Tymnet was an international data communications network headquartered in San Jose, CA that utilized virtual call packet switched technology and used X.25, SNA/SDLC, BSC and ASCII interfaces to connect host computers (servers) at thousands of large companies, educational institutions, and government agencies. Users typically connected via dial-up connections or dedicated asynchronous serial connections. The business consisted of a large public network that supported dial-up users and a private network business that allowed government agencies and large companies (mostly banks and airlines) to build their own dedicated networks. The private networks were often connected via gateways to the public network to reach locations not on the private network. Tymnet was also connected to dozens of other public networks in the U.S. and internationally via X.25/X.75 gateways.
XNS
Xerox Network Systems (XNS) was a protocol suite promulgated by Xerox, which provided routing and packet delivery, as well as higher level functions such as a reliable stream, and remote procedure calls. It was developed from PARC Universal Packet (PUP).
X.25 era
There were two kinds of X.25 networks. Some such as DATAPAC and TRANSPAC were initially implemented with an X.25 external interface. Some older networks such as TELENET and TYMNET were modified to provide an X.25 host interface in addition to older host connection schemes. DATAPAC was developed by Bell-Northern Research which was a joint venture of Bell Canada (a common carrier) and Northern Telecom (a telecommunications equipment supplier). Northern Telecom sold several DATAPAC clones to foreign PTTs including the Deutsche Bundespost. X.75 and X.121 allowed the interconnection of national X.25 networks. A user or host could call a host on a foreign network by including the DNIC of the remote network as part of the destination address.
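To illustrate the international addressing just described, the following minimal Python sketch composes an X.121 address from a four-digit DNIC and a national terminal number; the national number shown is an invented example and the helper name is not from any standard or library.

def build_x121_address(dnic: str, national_number: str) -> str:
    """Compose an X.121 address: a 4-digit DNIC followed by a national
    terminal number of up to 10 digits (14 digits in total at most)."""
    if len(dnic) != 4 or not dnic.isdigit():
        raise ValueError("DNIC must be exactly 4 digits")
    if not (1 <= len(national_number) <= 10) or not national_number.isdigit():
        raise ValueError("national number must be 1 to 10 digits")
    return dnic + national_number

# Calling a host on a foreign network: prefix that network's DNIC,
# e.g. 2342 for the UK PSS network mentioned later in this article.
print(build_x121_address("2342", "1920100123"))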
AUSTPAC
AUSTPAC was an Australian public X.25 network operated by Telstra. Started by Telecom Australia in the early 1980s, AUSTPAC was Australia's first public packet-switched data network and supported applications such as on-line betting, financial applications—the Australian Tax Office made use of AUSTPAC—and remote terminal access to academic institutions, which maintained their connections to AUSTPAC up until the mid-late 1990s in some cases. Access was via a dial-up terminal to a PAD, or by linking a permanent X.25 node to the network.
ConnNet
ConnNet was a packet-switched data network operated by the Southern New England Telephone Company serving the state of Connecticut.
Datanet 1
Datanet 1 was the public switched data network operated by the Dutch PTT Telecom (now known as KPN). Strictly speaking, Datanet 1 referred only to the network and the users connected via leased lines (using the X.121 DNIC 2041), but the name was also applied to the public PAD service Telepad (using the DNIC 2049). Because the main Videotex service used the network and modified PAD devices as its infrastructure, the name Datanet 1 was used for these services as well. Although this use of the name was incorrect, the fact that all these services were managed by the same people within one department of KPN contributed to the confusion.
Datapac
DATAPAC was the first operational X.25 network (1976). It covered major Canadian cities and was eventually extended to smaller centres.
Datex-P
Deutsche Bundespost operated this national network in Germany. The technology was acquired from Northern Telecom.
Eirpac
Eirpac is the Irish public switched data network supporting X.25 and X.28. It was launched in 1984, replacing Euronet. Eirpac is run by Eircom.
Euronet
Nine member states of the European Economic Community contracted with Logica and the French company SESA to set up a joint venture in 1975 to undertake the Euronet development, using X.25 protocols to form virtual circuits. It was to replace EIN and established a network in 1979 linking a number of European countries until 1984 when the network was handed over to national PTTs.
HIPA-NET
Hitachi designed a private network system for sale as a turnkey package to multi-national organizations. In addition to providing X.25 packet switching, message switching software was also included. Messages were buffered at the nodes adjacent to the sending and receiving terminals. Switched virtual calls were not supported, but through the use of "logical ports" an originating terminal could have a menu of pre-defined destination terminals.
Iberpac
Iberpac is the Spanish public packet-switched network, providing X.25 services. Iberpac is run by Telefonica.
IPSS
In 1978, X.25 provided the first international and commercial packet switching network, the International Packet Switched Service (IPSS).
JANET
JANET was the UK academic and research network, linking all universities, higher education establishments, and publicly funded research laboratories. The X.25 network, which used the Coloured Book protocols, was based mainly on GEC 4000 series switches and ran X.25 links at up to 8 Mbit/s in its final phase before being converted to an IP-based network. The JANET network grew out of the 1970s SRCnet, later called SERCnet.
PSS
Packet Switch Stream (PSS) was the UK Post Office (later to become British Telecom) national X.25 network with a DNIC of 2342. British Telecom renamed PSS under its GNS (Global Network Service) name, but the PSS name has remained better known. PSS also included public dial-up PAD access, and various InterStream gateways to other services such as Telex.
TRANSPAC
TRANSPAC was the national X.25 network in France. It was developed locally at about the same time as DATAPAC in Canada. The development was done by the French PTT and influenced by the experimental RCP network. It began operation in 1978, and served both commercial users and, after Minitel began, consumers.
VENUS-P
VENUS-P was an international X.25 network that operated from April 1982 through March 2006. At its subscription peak in 1999, VENUS-P connected 207 networks in 87 countries.
Venepaq
Venepaq is the national X.25 public network in Venezuela. It is run by Cantv and allows both direct and dial-up connections, providing nationwide access at very low cost, both nationally and internationally. Venepaq allows connections from 19.2 kbit/s to 64 kbit/s for direct connections, and 1200, 2400 and 9600 bit/s for dial-up connections.
Internet era
When Internet connectivity was made available to anyone who could pay for an ISP subscription, the distinctions between national networks blurred. The user no longer saw network identifiers such as the DNIC. Some older technologies such as circuit switching have resurfaced with new names such as fast packet switching. Researchers have created some experimental networks to complement the existing Internet.
CSNET
The Computer Science Network (CSNET) was a computer network funded by the U.S. National Science Foundation (NSF) that began operation in 1981. Its purpose was to extend networking benefits to computer science departments at academic and research institutions that could not be directly connected to ARPANET due to funding or authorization limitations. It played a significant role in spreading awareness of, and access to, national networking and was a major milestone on the path to development of the global Internet.
Internet2
Internet2 is a not-for-profit United States computer networking consortium led by members from the research and education communities, industry, and government. The Internet2 community, in partnership with Qwest, built the first Internet2 Network, called Abilene, in 1998 and was a prime investor in the National LambdaRail (NLR) project. In 2006, Internet2 announced a partnership with Level 3 Communications to launch a brand new nationwide network, boosting its capacity from 10 Gbit/s to 100 Gbit/s. In October, 2007, Internet2 officially retired Abilene and now refers to its new, higher capacity network as the Internet2 Network.
NSFNET
The National Science Foundation Network (NSFNET) was a program of coordinated, evolving projects sponsored by the National Science Foundation (NSF) beginning in 1985 to promote advanced research and education networking in the United States. NSFNET was also the name given to several nationwide backbone networks operating at speeds of 56 kbit/s, 1.5 Mbit/s (T1), and 45 Mbit/s (T3) that were constructed to support NSF's networking initiatives from 1985-1995. Initially created to link researchers to the nation's NSF-funded supercomputing centers, through further public funding and private industry partnerships it developed into a major part of the Internet backbone.
NSFNET regional networks
In addition to the five NSF supercomputer centers, NSFNET provided connectivity to eleven regional networks and through these networks to many smaller regional and campus networks in the United States. The NSFNET regional networks were:
BARRNet, the Bay Area Regional Research Network in Palo Alto, California;
CERFNET, California Education and Research Federation Network in San Diego, California, serving California and Nevada;
CICNet, the Committee on Institutional Cooperation Network via the Merit Network in Ann Arbor, Michigan and later as part of the T3 upgrade via Argonne National Laboratory outside of Chicago, serving the Big Ten Universities and the University of Chicago in Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin;
Merit/MichNet in Ann Arbor, Michigan serving Michigan, formed in 1966, still in operation as of 2016;
MIDnet in Lincoln, Nebraska serving Arkansas, Iowa, Kansas, Missouri, Nebraska, Oklahoma, and South Dakota;
NEARNET, the New England Academic and Research Network in Cambridge, Massachusetts, added as part of the upgrade to T3, serving Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont, established in late 1988, operated by BBN under contract to MIT, BBN assumed responsibility for NEARNET on 1 July 1993;
NorthWestNet in Seattle, Washington, serving Alaska, Idaho, Montana, North Dakota, Oregon, and Washington, founded in 1987;
NYSERNet, New York State Education and Research Network in Ithaca, New York;
JVNCNet, the John von Neumann National Supercomputer Center Network in Princeton, New Jersey, serving Delaware and New Jersey;
SESQUINET, the Sesquicentennial Network in Houston, Texas, founded during the 150th anniversary of the State of Texas;
SURAnet, the Southeastern Universities Research Association network in College Park, Maryland and later as part of the T3 upgrade in Atlanta, Georgia serving Alabama, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, and West Virginia, sold to BBN in 1994; and
Westnet in Salt Lake City, Utah and Boulder, Colorado, serving Arizona, Colorado, New Mexico, Utah, and Wyoming.
National LambdaRail
The National LambdaRail was launched in September 2003. It was a 12,000-mile high-speed national computer network owned and operated by the U.S. research and education community that ran over fiber-optic lines. It was the first transcontinental 10 Gigabit Ethernet network. It operated with a high aggregate capacity of up to 1.6 Tbit/s and a 40 Gbit/s bitrate, with plans for an upgrade to 100 Gbit/s. The upgrade never took place, and NLR ceased operations in March 2014.
TransPAC, TransPAC2, and TransPAC3
TransPAC2 and TransPAC3 were continuations of the TransPAC project, a high-speed international Internet service connecting research and education networks in the Asia-Pacific region to those in the US. TransPAC is part of the NSF's International Research Network Connections (IRNC) program.
Very high-speed Backbone Network Service (vBNS)
The Very high-speed Backbone Network Service (vBNS) came on line in April 1995 as part of a National Science Foundation (NSF) sponsored project to provide high-speed interconnection between NSF-sponsored supercomputing centers and select access points in the United States. The network was engineered and operated by MCI Telecommunications under a cooperative agreement with the NSF. By 1998, the vBNS had grown to connect more than 100 universities and research and engineering institutions via 12 national points of presence with DS-3 (45 Mbit/s), OC-3c (155 Mbit/s), and OC-12c (622 Mbit/s) links on an all OC-12c backbone, a substantial engineering feat for that time. The vBNS installed one of the first ever production OC-48c (2.5 Gbit/s) IP links in February 1999 and went on to upgrade the entire backbone to OC-48c.
In June 1999 MCI WorldCom introduced vBNS+ which allowed attachments to the vBNS network by organizations that were not approved by or receiving support from NSF. After the expiration of the NSF agreement, the vBNS largely transitioned to providing service to the government. Most universities and research centers migrated to the Internet2 educational backbone. In January 2006, when MCI and Verizon merged, vBNS+ became a service of Verizon Business.
See also
CompuServe
Multi-bearer network
Optical burst switching
Packet radio
Public switched data network
Time-Driven Switching - a bufferless approach to packet switching
Transmission delay
Virtual private network
References
Bibliography
Paul Baran et al., On Distributed Communications, Volumes I-XI (RAND Corporation Research Documents, August, 1964)
Paul Baran, On Distributed Communications: I Introduction to Distributed Communications Network (RAND Memorandum RM-3420-PR. August 1964)
Paul Baran, On Distributed Communications Networks, (IEEE Transactions on Communications Systems, Vol. CS-12 No. 1, pp. 1–9, March 1964)
D. W. Davies, K. A. Bartlett, R. A. Scantlebury, and P. T. Wilkinson, A digital communications network for computers giving rapid response at remote terminals (ACM Symposium on Operating Systems Principles. October 1967)
R. A. Scantlebury, P. T. Wilkinson, and K. A. Bartlett, The design of a message switching Centre for a digital communication network (IFIP 1968)
Lawrence Roberts, The Evolution of Packet Switching (Proceedings of the IEEE, November, 1978)
Further reading
Hafner, Katie Where Wizards Stay Up Late (Simon and Schuster, 1996) pp 52–67
Norberg, Arthur; O'Neill, Judy E. Transforming Computer Technology: Information Processing for the Pentagon, 1962-1982 (Johns Hopkins University, 1996)
External links
Oral history interview with Paul Baran. Charles Babbage Institute University of Minnesota, Minneapolis. Baran describes his working environment at RAND, as well as his initial interest in survivable communications, and the evolution, writing and distribution of his eleven-volume work, "On Distributed Communications". Baran discusses his interaction with the group at ARPA who were responsible for the later development of the ARPANET.
NPL Data Communications Network NPL video, 1970s
Packet Switching History and Design, site reviewed by Baran, Roberts, and Kleinrock
Paul Baran and the Origins of the Internet
20+ articles on packet switching in the 1970s
"An Introduction to Packet Switched Networks", Phrack, 05/3/88
Computer networking
History of the Internet
Network protocols |
7016168 | https://en.wikipedia.org/wiki/Token%20Ring | Token Ring | Token Ring is a computer networking technology used to build local area networks. It was introduced by IBM in 1984, and standardized in 1989 as IEEE 802.5.
It uses a special three-byte frame called a token that is passed around a logical ring of workstations or servers. This token passing is a channel access method providing fair access for all stations, and eliminating the collisions of contention-based access methods.
Token Ring was a successful technology, particularly in corporate environments, but was gradually eclipsed by the later versions of Ethernet.
History
A wide range of different local area network technologies were developed in the early 1970s, of which one, the Cambridge Ring, had demonstrated the potential of a token passing ring topology, and many teams worldwide began working on their own implementations. At the IBM Zurich Research Laboratory, Werner Bux and Hans Müller, in particular, worked on the design and development of IBM's Token Ring technology, while early work at MIT led to the Proteon 10 Mbit/s ProNet-10 Token Ring network in 1981, the same year that workstation vendor Apollo Computer introduced their proprietary 12 Mbit/s Apollo Token Ring (ATR) network running over 75-ohm RG-6U coaxial cabling. Proteon later evolved a 16 Mbit/s version that ran on unshielded twisted pair cable.
1985 IBM launch
IBM launched their own proprietary Token Ring product on October 15, 1985. It ran at 4 Mbit/s, and attachment was possible from IBM PCs, midrange computers and mainframes. It used a convenient star-wired physical topology and ran over shielded twisted-pair cabling. Shortly thereafter it became the basis for the IEEE 802.5 standard.
During this time, IBM argued that Token Ring LANs were superior to Ethernet, especially under load, but these claims were debated.
In 1988 the faster 16 Mbit/s Token Ring was standardized by the 802.5 working group. An increase to 100 Mbit/s was standardized and marketed during the wane of Token Ring's existence and was never widely used. While a 1000 Mbit/s standard was approved in 2001, no products were ever brought to market and standards activity came to a standstill as Fast Ethernet and Gigabit Ethernet dominated the local area networking market.
Comparison with Ethernet
Ethernet and Token Ring have some notable differences:
Token Ring access is more deterministic, compared to Ethernet's contention-based CSMA/CD
Ethernet supports a direct cable connection between two network interface cards by the use of a crossover cable or through auto-sensing if supported. Token Ring does not inherently support this feature and requires additional software and hardware to operate on a direct cable connection setup.
Token Ring eliminates collision by the use of a single-use token and early token release to alleviate the down time. Ethernet alleviates collision by carrier sense multiple access and by the use of an intelligent switch; primitive Ethernet devices like hubs can precipitate collisions due to repeating traffic blindly.
Token Ring network interface cards contain all of the intelligence required for speed autodetection, routing and can drive themselves on many Multistation Access Units (MAUs) that operate without power (most MAUs operate in this fashion, only requiring a power supply for LEDs). Ethernet network interface cards can theoretically operate on a passive hub to a degree, but not as a large LAN and the issue of collisions is still present.
Token Ring employs access priority in which certain nodes can have priority over the token. Unswitched Ethernet does not have a provision for an access priority system as all nodes have equal access to the transmission medium.
Multiple identical MAC addresses are supported on Token Ring (a feature used by S/390 mainframes). Switched Ethernet cannot support duplicate MAC addresses without reprimand.
Token Ring was more complex than Ethernet, requiring a specialized processor and licensed MAC/LLC firmware for each interface. By contrast, Ethernet included both the (simpler) firmware and the lower licensing cost in the MAC chip. The cost of a Token Ring interface using the Texas Instruments TMS380C16 MAC and PHY was approximately three times that of an Ethernet interface using the Intel 82586 MAC and PHY.
Initially both networks used expensive cable, but once Ethernet was standardized for unshielded twisted pair with 10BASE-T (Cat 3) and 100BASE-TX (Cat 5(e)), it had a distinct advantage and sales of it increased markedly.
Even more significant when comparing overall system costs was the much-higher cost of router ports and network cards for Token Ring vs Ethernet. The emergence of Ethernet switches may have been the final straw.
Operation
Stations on a Token Ring LAN are logically organized in a ring topology with data being transmitted sequentially from one ring station to the next with a control token circulating around the ring controlling access. Similar token passing mechanisms are used by ARCNET, token bus, 100VG-AnyLAN (802.12) and FDDI, and they have theoretical advantages over the CSMA/CD of early Ethernet.
A Token Ring network can be modeled as a polling system where a single server provides service to queues in a cyclic order.
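As a rough illustration of that polling model, the sketch below applies the standard cycle-time result E[C] = R / (1 - rho), where R is the total token walk (switchover) time around the ring and rho is the summed station utilisation; the figures are invented for illustration only.

def mean_cycle_time(walk_times, arrival_rates, mean_service_times):
    """Mean token cycle time for a cyclic polling system: R / (1 - rho)."""
    R = sum(walk_times)                                  # total switchover (token-passing) time
    rho = sum(lam * s for lam, s in zip(arrival_rates, mean_service_times))
    if rho >= 1:
        raise ValueError("offered load exceeds ring capacity (rho >= 1)")
    return R / (1 - rho)

# Four stations, 0.1 ms walk time each, each contributing 20% utilisation:
print(mean_cycle_time([0.1e-3] * 4, [200.0] * 4, [1e-3] * 4))   # about 2 ms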
Access control
The data transmission process goes as follows:
Empty information frames are continuously circulated on the ring.
When a computer has a message to send, it seizes the token. The computer will then be able to send the frame.
The frame is then examined by each successive workstation. The workstation that identifies itself to be the destination for the message copies it from the frame and changes the token back to 0.
When the frame gets back to the originator, it sees that the token has been changed to 0 and that the message has been copied and received. It removes the message from the frame.
The frame continues to circulate as an "empty" frame, ready to be taken by a workstation when it has a message to send.
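The cycle described above can be illustrated with a small simulation; this sketch is a simplification and does not model real IEEE 802.5 framing or timing.

def simulate_ring(stations, cycles=2):
    """stations maps each station name to a queue of (destination, payload) frames."""
    order = list(stations)                      # ring order of the stations
    for _ in range(cycles):
        for name in order:                      # the token visits each station in turn
            queue = stations[name]
            if queue:                           # the station seizes the token and transmits
                dest, payload = queue.pop(0)
                print(f"{name} sends to {dest}: {payload!r}")
            else:                               # nothing to send: pass the free token on
                print(f"{name} passes the free token")

simulate_ring({"A": [("C", "hello")], "B": [], "C": [("A", "reply")]})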
Multistation Access Units and Controlled Access Units
Physically, a Token Ring network is wired as a star, with 'MAUs' in the center, 'arms' out to each station, and the loop going out-and-back through each.
A MAU could present in the form of a hub or a switch; since Token Ring had no collisions, many MAUs were manufactured as hubs. Although Token Ring runs on LLC, it includes source routing to forward packets beyond the local network. The majority of MAUs are configured in a 'concentration' configuration by default, but later MAUs, such as the IBM 8226, also supported a feature allowing them to act as splitters rather than exclusively as concentrators.
Later IBM would release Controlled Access Units that could support multiple MAU modules known as a Lobe Attachment Module. The CAUs supported features such as Dual-Ring Redundancy for alternate routing in the event of a dead port, modular concentration with LAMs, and multiple interfaces like most later MAUs. This offered a more reliable setup and remote management than with an unmanaged MAU hub.
Cabling and interfaces
Cabling is generally IBM "Type-1", a heavy two-pair 150 Ohm shielded twisted pair cable. This was the basic cable for the "IBM Cabling System", a structured cabling system that IBM hoped would be widely adopted. Unique hermaphroditic connectors, referred to as IBM Data Connectors in formal writing or colloquially as Boy George connectors, were used. The connectors have the disadvantage of being quite bulky, requiring at least 3 × 3 cm of panel space, and being relatively fragile. Their advantages are that they are genderless and have superior shielding over standard unshielded 8P8C. Connectors at the computer were usually DE-9 female.
In later implementations of Token Ring, Cat 4 cabling was also supported, so 8P8C ("RJ45") connectors were used on MAUs, CAUs and NICs, with many of the network cards supporting both 8P8C and DE-9 for backwards compatibility.
Technical details
Frame types
Token
When no station is sending a frame, a special token frame circles the loop. This special token frame is repeated from station to station until arriving at a station that needs to send data.
Tokens are 3 bytes in length and consist of a start delimiter, an access control byte, and an end delimiter.
Abort frame
Used to abort transmission by the sending station.
Data
Data frames carry information for upper-layer protocols, while command frames contain control information and have no data for upper-layer protocols. Data/command frames vary in size, depending on the size of the Information field.
Starting delimiter Consists of a special bit pattern denoting the beginning of the frame. The bits from most significant to least significant are J,K,0,J,K,0,0,0. J and K are code violations. Since Manchester encoding is self-clocking, and has a transition for every encoded bit 0 or 1, the J and K codings violate this, and will be detected by the hardware. Both the Starting Delimiter and Ending Delimiter fields are used to mark frame boundaries.
Access control This byte field consists of the following bits from most significant to least significant bit order: P,P,P,T,M,R,R,R. The P bits are priority bits, T is the token bit which when set specifies that this is a token frame, M is the monitor bit which is set by the Active Monitor (AM) station when it sees this frame, and R bits are reserved bits.
Frame control A one-byte field that contains bits describing the data portion of the frame contents which indicates whether the frame contains data or control information. In control frames, this byte specifies the type of control information.
Frame type – 01 indicates LLC frame IEEE 802.2 (data) and ignore control bits;
00 indicates MAC frame and control bits indicate the type of MAC control frame
Destination address A six-byte field used to specify the destination(s) physical address.
Source address Contains physical address of sending station. It is a six-byte field that is either the local assigned address (LAA) or universally assigned address (UAA) of the sending station adapter.
Data A variable-length field of 0 or more bytes containing MAC management data or upper-layer information; the maximum allowable size depends on ring speed, up to a maximum length of 4500 bytes.
Frame check sequence A four-byte field used to store the calculation of a CRC for frame integrity verification by the receiver.
Ending delimiter The counterpart to the starting delimiter, this field marks the end of the frame and consists of the following bits from most significant to least significant: J,K,1,J,K,1,I,E. I is the intermediate frame bit and E is the error bit.
Frame status A one-byte field used as a primitive acknowledgment scheme on whether the frame was recognized and copied by its intended receiver.
A = 1, Address recognized
C = 1, Frame copied
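As an illustration of the access control field layout described above, the sketch below packs and unpacks the P,P,P,T,M,R,R,R bits of a single byte; the starting and ending delimiters use Manchester code violations (J, K) and so cannot be represented as ordinary byte values here. The example value is arbitrary.

def pack_access_control(priority, token_bit, monitor_bit, reservation):
    """Pack the AC byte, most significant bit first: P,P,P,T,M,R,R,R."""
    assert 0 <= priority <= 7 and 0 <= reservation <= 7
    return (priority << 5) | (token_bit << 4) | (monitor_bit << 3) | reservation

def unpack_access_control(ac):
    return {
        "priority":    (ac >> 5) & 0b111,
        "token_bit":   (ac >> 4) & 0b1,
        "monitor_bit": (ac >> 3) & 0b1,
        "reservation":  ac       & 0b111,
    }

ac = pack_access_control(priority=3, token_bit=1, monitor_bit=0, reservation=1)
print(f"{ac:08b}", unpack_access_control(ac))    # 01110001 and the decoded fields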
Active and standby monitors
Every station in a Token Ring network is either an active monitor (AM) or standby monitor (SM) station. There can be only one active monitor on a ring at a time. The active monitor is chosen through an election or monitor contention process.
The monitor contention process is initiated when the following happens:
a loss of signal on the ring is detected.
an active monitor station is not detected by other stations on the ring.
a particular timer on an end station expires such as the case when a station hasn't seen a token frame in the past 7 seconds.
When any of the above conditions take place and a station decides that a new monitor is needed, it will transmit a "claim token" frame, announcing that it wants to become the new monitor. If that token returns to the sender, it is OK for it to become the monitor. If some other station tries to become the monitor at the same time then the station with the highest MAC address will win the election process. Every other station becomes a standby monitor. All stations must be capable of becoming an active monitor station if necessary.
The active monitor performs a number of ring administration functions. The first function is to operate as the master clock for the ring in order to provide synchronization of the signal for stations on the wire. Another function of the AM is to insert a 24-bit delay into the ring, to ensure that there is always sufficient buffering in the ring for the token to circulate. A third function for the AM is to ensure that exactly one token circulates whenever there is no frame being transmitted, and to detect a broken ring. Lastly, the AM is responsible for removing circulating frames from the ring.
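The outcome of monitor contention can be sketched as follows; the MAC addresses are invented and the code ignores the claim-token exchange itself, showing only the highest-address rule described above.

def elect_active_monitor(claiming_macs):
    """Of the stations transmitting claim-token frames, the highest MAC address wins."""
    return max(claiming_macs, key=lambda mac: int(mac.replace(":", ""), 16))

claimants = ["00:00:93:00:12:ab", "00:00:93:00:45:01", "00:00:93:00:02:ff"]
print("active monitor:", elect_active_monitor(claimants))
# every other claimant reverts to being a standby monitor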
Token insertion process
Token Ring stations must go through a 5-phase ring insertion process before being allowed to participate in the ring network. If any of these phases fail, the Token Ring station will not insert into the ring and the Token Ring driver may report an error.
Phase 0 (Lobe Check) – A station first performs a lobe media check. A station is wrapped at the MSAU and is able to send 2000 test frames down its transmit pair which will loop back to its receive pair. The station checks to ensure it can receive these frames without error.
Phase 1 (Physical Insertion) – A station then sends a 5-volt signal to the MSAU to open the relay.
Phase 2 (Address Verification) – A station then transmits MAC frames with its own MAC address in the destination address field of a Token Ring frame. When the frame returns and if the Address Recognized (AR) and Frame Copied (FC) bits in the frame-status are set to 0 (indicating that no other station currently on the ring uses that address), the station must participate in the periodic (every 7 seconds) ring poll process. This is where stations identify themselves on the network as part of the MAC management functions.
Phase 3 (Participation in ring poll) – A station learns the address of its Nearest Active Upstream Neighbour (NAUN) and makes its address known to its nearest downstream neighbour, leading to the creation of the ring map. Station waits until it receives an AMP or SMP frame with the AR and FC bits set to 0. When it does, the station flips both bits (AR and FC) to 1, if enough resources are available, and queues an SMP frame for transmission. If no such frames are received within 18 seconds, then the station reports a failure to open and de-inserts from the ring. If the station successfully participates in a ring poll, it proceeds into the final phase of insertion, request initialization.
Phase 4 (Request Initialization) – Finally a station sends out a special request to a parameter server to obtain configuration information. This frame is sent to a special functional address, typically a Token Ring bridge, which may hold timer and ring number information the new station needs to know.
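The five phases can be summarised as a simple sequential check, as in the sketch below; each phase function is a stand-in that a real adapter driver would replace with hardware and MAC operations.

PHASES = [
    ("Lobe check",              lambda: True),  # loop 2000 test frames on the lobe
    ("Physical insertion",      lambda: True),  # signal the MSAU to open its relay
    ("Address verification",    lambda: True),  # send frames to own MAC, check A/C bits
    ("Ring poll participation", lambda: True),  # learn the NAUN, answer AMP/SMP frames
    ("Request initialization",  lambda: True),  # query the ring parameter server
]

def insert_into_ring():
    for name, phase_ok in PHASES:
        if not phase_ok():
            print(f"insertion failed at phase: {name}")
            return False
    print("station inserted into the ring")
    return True

insert_into_ring()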
Optional priority scheme
In some applications there is an advantage to being able to designate one station as having a higher priority. Token Ring specifies an optional scheme of this sort, as does the CAN bus (widely used in automotive applications), but Ethernet does not.
In the Token Ring priority MAC, eight priority levels, 0–7, are used. When the station wishing to transmit receives a token or data frame with a priority less than or equal to the station's requested priority, it sets the priority bits to its desired priority. The station does not immediately transmit; the token circulates around the medium until it returns to the station. Upon sending and receiving its own data frame, the station downgrades the token priority back to the original priority.
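A minimal sketch of the raise-then-downgrade behaviour is shown below; it models only the priority bookkeeping, not the reservation signalling of the full 802.5 priority MAC.

class PriorityToken:
    def __init__(self):
        self.priority = 0
        self.stack = []                       # earlier priorities to restore

    def raise_priority(self, requested):
        if requested >= self.priority:        # station may raise the circulating priority
            self.stack.append(self.priority)
            self.priority = requested

    def downgrade_after_send(self):
        if self.stack:                        # restore the original priority
            self.priority = self.stack.pop()

tok = PriorityToken()
tok.raise_priority(5)
print(tok.priority)          # 5 while the high-priority station transmits
tok.downgrade_after_send()
print(tok.priority)          # back to 0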
Devices that support 802.1Q and 802.1p define eight access priority levels and corresponding traffic types.
Interconnection with Ethernet
Bridging solutions for Token Ring and Ethernet networks included the AT&T StarWAN 10:4 Bridge, the IBM 8209 LAN Bridge and the Microcom LAN Bridge. Alternative connection solutions incorporated a router that could be configured to dynamically filter traffic, protocols and interfaces, such as the IBM 2210-24M Multiprotocol Router, which contained both Ethernet and Token Ring interfaces.
See also
IBM PC Network
References
General
External links
IEEE 802.5 Web Site
Troubleshooting Cisco Router Token Ring Interfaces
Futureobservatory.org discussion of IBM's failure in Token Ring technology
What if Ethernet had failed?
Network topology
Local area networks
IEEE 802
IBM PC compatibles
IEEE standards
Serial buses
Link protocols
Systems Network Architecture |
39776 | https://en.wikipedia.org/wiki/Denial-of-service%20attack | Denial-of-service attack | In computing, a denial-of-service attack (DoS attack) is a cyber-attack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to a network. Denial of service is typically accomplished by flooding the targeted machine or resource with superfluous requests in an attempt to overload systems and prevent some or all legitimate requests from being fulfilled.
In a distributed denial-of-service attack (DDoS attack), the incoming traffic flooding the victim originates from many different sources. This effectively makes it impossible to stop the attack simply by blocking a single source.
A DoS or DDoS attack is analogous to a group of people crowding the entry door of a shop, making it hard for legitimate customers to enter, thus disrupting trade.
Criminal perpetrators of DoS attacks often target sites or services hosted on high-profile web servers such as banks or credit card payment gateways. Revenge, blackmail and activism can motivate these attacks.
History
Panix, the third-oldest ISP in the world, was the target of what is thought to be the first DoS attack. On September 6, 1996, Panix was subject to a SYN flood attack, which brought down its services for several days while hardware vendors, notably Cisco, figured out a proper defense.
Another early demonstration of the DoS attack was made by Khan C. Smith in 1997 during a DEF CON event, disrupting Internet access to the Las Vegas Strip for over an hour. The release of sample code during the event led to the online attack of Sprint, EarthLink, E-Trade, and other major corporations in the year to follow.
In September 2017, Google Cloud experienced an attack with a peak volume of 2.54 terabits per second. On March 5, 2018, an unnamed customer of the US-based service provider Arbor Networks fell victim to the largest DDoS to that date, reaching a peak of about 1.7 terabits per second. The previous record had been set a few days earlier, on March 1, 2018, when GitHub was hit by an attack of 1.35 terabits per second.
In February 2020, Amazon Web Services experienced an attack with a peak volume of 2.3 terabits per second. In July 2021, CDN provider Cloudflare boasted of protecting its client from a DDoS attack from a global Mirai botnet that peaked at 17.2 million requests per second. Russian DDoS prevention provider Yandex said it blocked an HTTP pipelining DDoS attack on September 5, 2021, that originated from unpatched Mikrotik networking gear.
Types
Denial-of-service attacks are characterized by an explicit attempt by attackers to prevent legitimate use of a service. There are two general forms of DoS attacks: those that crash services and those that flood services. The most serious attacks are distributed.
A distributed denial-of-service (DDoS) attack occurs when multiple systems flood the bandwidth or resources of a targeted system, usually one or more web servers. A DDoS attack uses more than one unique IP address or machine, often from thousands of hosts infected with malware. A distributed denial-of-service attack typically involves more than around 3–5 nodes on different networks; fewer nodes may qualify as a DoS attack but not as a DDoS attack.
Multiple machines can generate more attack traffic than one machine, multiple attack machines are harder to turn off than one attack machine, and the behavior of each attack machine can be stealthier, making it harder to track and shut down. Since the incoming traffic flooding the victim originates from different sources, it may be impossible to stop the attack simply by using ingress filtering. It also makes it difficult to distinguish legitimate user traffic from attack traffic when spread across multiple points of origin. As an alternative or augmentation of a DDoS, attacks may involve forging of IP sender addresses (IP address spoofing) further complicating identifying and defeating the attack. These attacker advantages cause challenges for defense mechanisms. For example, merely purchasing more incoming bandwidth than the current volume of the attack might not help, because the attacker might be able to simply add more attack machines.
The scale of DDoS attacks has continued to rise over recent years, by 2016 exceeding a terabit per second. Some common examples of DDoS attacks are UDP flooding, SYN flooding and DNS amplification.
Yo-yo attack
A yo-yo attack is a specific type of DoS/DDoS aimed at cloud-hosted applications which use autoscaling. The attacker generates a flood of traffic until a cloud-hosted service scales outwards to handle the increase of traffic, then halts the attack, leaving the victim with over-provisioned resources. When the victim scales back down, the attack resumes, causing resources to scale back up again. This can result in a reduced quality of service during the periods of scaling up and down and a financial drain on resources during periods of over-provisioning, while operating with a lower cost for an attacker compared to a normal DDoS attack, as it only needs to be generating traffic for a portion of the attack period.
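The cost asymmetry behind a yo-yo attack can be illustrated with a back-of-the-envelope model; every number below is an assumption chosen only to show the shape of the calculation.

def victim_waste(bursts, scale_down_lag_min, extra_instances, cost_per_instance_hour):
    """Cost of instances left running after each burst until autoscaling scales back down."""
    wasted_instance_hours = bursts * (scale_down_lag_min / 60.0) * extra_instances
    return wasted_instance_hours * cost_per_instance_hour

def attacker_spend(bursts, burst_min, gigabits_per_sec, cost_per_gb):
    """The attacker only pays for traffic during the short 'on' bursts."""
    gigabytes_sent = bursts * burst_min * 60 * gigabits_per_sec / 8
    return gigabytes_sent * cost_per_gb

print(victim_waste(bursts=24, scale_down_lag_min=15, extra_instances=20, cost_per_instance_hour=0.10))
print(attacker_spend(bursts=24, burst_min=5, gigabits_per_sec=1, cost_per_gb=0.01))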
Application layer attacks
An application layer DDoS attack (sometimes referred to as layer 7 DDoS attack) is a form of DDoS attack where attackers target application-layer processes. The attack over-exercises specific functions or features of a website with the intention to disable those functions or features. This application-layer attack is different from an entire network attack, and is often used against financial institutions to distract IT and security personnel from security breaches. In 2013, application-layer DDoS attacks represented 20% of all DDoS attacks. According to research by Akamai Technologies, there have been "51 percent more application layer attacks" from Q4 2013 to Q4 2014 and "16 percent more" from Q3 2014 to Q4 2014. In November 2017, Junade Ali, an engineer at Cloudflare, noted that whilst network-level attacks continued to be of high capacity, they were occurring less frequently. Ali further noted that although network-level attacks were becoming less frequent, data from Cloudflare demonstrated that application-layer attacks were still showing no sign of slowing down. In December 2021, following the Log4Shell security vulnerability, a second vulnerability in the open source Log4j library was discovered which could lead to application layer DDoS attacks.
Application layer
The OSI model (ISO/IEC 7498-1) is a conceptual model that characterizes and standardizes the internal functions of a communication system by partitioning it into abstraction layers. The model is a product of the Open Systems Interconnection project at the International Organization for Standardization (ISO). The model groups similar communication functions into one of seven logical layers. A layer serves the layer above it and is served by the layer below it. For example, a layer that provides error-free communications across a network provides the communications path needed by applications above it, while it calls the next lower layer to send and receive packets that traverse that path.
In the OSI model, the definition of its application layer is narrower in scope than is often implemented. The OSI model defines the application layer as being the user interface. The OSI application layer is responsible for displaying data and images to the user in a human-recognizable format and to interface with the presentation layer below it. In an implementation, the application and presentation layers are frequently combined.
Method of attack
The simplest DoS attack relies primarily on brute force, flooding the target with an overwhelming flux of packets, oversaturating its connection bandwidth, or depleting the target's system resources. Bandwidth-saturating floods rely on the attacker's ability to generate the overwhelming flux of packets. A common way of achieving this today is via distributed denial-of-service, employing a botnet.
An application layer DDoS attack is done mainly for specific targeted purposes, including disrupting transactions and access to databases. It requires fewer resources than network layer attacks but often accompanies them. An attack may be disguised to look like legitimate traffic, except it targets specific application packets or functions. The attack on the application layer can disrupt services such as the retrieval of information or search functions on a website.
Advanced persistent DoS
An advanced persistent DoS (APDoS) is associated with an advanced persistent threat and requires specialised DDoS mitigation. These attacks can persist for weeks; the longest continuous period noted so far lasted 38 days. This attack involved approximately 50+ petabits (50,000+ terabits) of malicious traffic.
Attackers in this scenario may tactically switch between several targets to create a diversion to evade defensive DDoS countermeasures but all the while eventually concentrating the main thrust of the attack onto a single victim. In this scenario, attackers with continuous access to several very powerful network resources are capable of sustaining a prolonged campaign generating enormous levels of un-amplified DDoS traffic.
APDoS attacks are characterized by:
advanced reconnaissance (pre-attack OSINT and extensive decoyed scanning crafted to evade detection over long periods)
tactical execution (attack with both primary and secondary victims but the focus is on primary)
explicit motivation (a calculated end game/goal target)
large computing capacity (access to substantial computer power and network bandwidth)
simultaneous multi-threaded OSI layer attacks (sophisticated tools operating at layers 3 through 7)
persistence over extended periods (combining all the above into a concerted, well-managed attack across a range of targets).
Denial-of-service as a service
Some vendors provide so-called "booter" or "stresser" services, which have simple web-based front ends, and accept payment over the web. Marketed and promoted as stress-testing tools, they can be used to perform unauthorized denial-of-service attacks, and allow technically unsophisticated attackers access to sophisticated attack tools. Usually powered by a botnet, the traffic produced by a consumer stresser can range anywhere from 5-50 Gbit/s, which can, in most cases, deny the average home user internet access.
Symptoms
The United States Computer Emergency Readiness Team (US-CERT) has identified symptoms of a denial-of-service attack to include:
unusually slow network performance (opening files or accessing web sites),
unavailability of a particular web site, or
inability to access any web site.
Attack techniques
Attack tools
In cases such as MyDoom and Slowloris the tools are embedded in malware and launch their attacks without the knowledge of the system owner. Stacheldraht is a classic example of a DDoS tool. It uses a layered structure where the attacker uses a client program to connect to handlers which are compromised systems that issue commands to the zombie agents which in turn facilitate the DDoS attack. Agents are compromised via the handlers by the attacker using automated routines to exploit vulnerabilities in programs that accept remote connections running on the targeted remote hosts. Each handler can control up to a thousand agents.
In other cases a machine may become part of a DDoS attack with the owner's consent, for example, in Operation Payback organized by the group Anonymous. The Low Orbit Ion Cannon has typically been used in this way. Along with High Orbit Ion Cannon a wide variety of DDoS tools are available today, including paid and free versions, with different features available. There is an underground market for these in hacker-related forums and IRC channels.
Application-layer attacks
Application-layer attacks employ DoS-causing exploits and can cause server-running software to fill the disk space or consume all available memory or CPU time. Attacks may use specific packet types or connection requests to saturate finite resources by, for example, occupying the maximum number of open connections or filling the victim's disk space with logs. An attacker with shell-level access to a victim's computer may slow it until it is unusable or crash it by using a fork bomb. Another kind of application-level DoS attack is XDoS (or XML DoS) which can be controlled by modern web application firewalls (WAFs).
Another aim of DDoS attacks may be to produce added costs for the application operator when the latter uses resources based on cloud computing. In this case, the resources used by an application are tied to a needed quality of service (QoS) level (e.g. responses should be less than 200 ms), and this rule is usually linked to automated software (e.g. Amazon CloudWatch) to raise more virtual resources from the provider to meet the defined QoS levels for the increased requests. The main incentive behind such attacks may be to drive the application owner to raise the elasticity levels to handle the increased application traffic, to cause financial losses, or to force them to become less competitive.
A banana attack is another particular type of DoS. It involves redirecting outgoing messages from the client back onto the client, preventing outside access, as well as flooding the client with the sent packets. A LAND attack is of this type.
Degradation-of-service attacks
Pulsing zombies are compromised computers that are directed to launch intermittent and short-lived floodings of victim websites with the intent of merely slowing it rather than crashing it. This type of attack, referred to as degradation-of-service, can be more difficult to detect and can disrupt and hamper connection to websites for prolonged periods of time, potentially causing more overall disruption than a denial-of-service attack. Exposure of degradation-of-service attacks is complicated further by the matter of discerning whether the server is really being attacked or is experiencing higher than normal legitimate traffic loads.
Distributed DoS attack
If an attacker mounts an attack from a single host it would be classified as a DoS attack. Any attack against availability would be classed as a denial-of-service attack. On the other hand, if an attacker uses many systems to simultaneously launch attacks against a remote host, this would be classified as a DDoS attack.
Malware can carry DDoS attack mechanisms; one of the better-known examples of this was MyDoom. Its DoS mechanism was triggered on a specific date and time. This type of DDoS involved hardcoding the target IP address before releasing the malware and no further interaction was necessary to launch the attack.
A system may also be compromised with a trojan containing a zombie agent. Attackers can also break into systems using automated tools that exploit flaws in programs that listen for connections from remote hosts. This scenario primarily concerns systems acting as servers on the web. Stacheldraht is a classic example of a DDoS tool. It uses a layered structure where the attacker uses a client program to connect to handlers, which are compromised systems that issue commands to the zombie agents, which in turn facilitate the DDoS attack. Agents are compromised via the handlers by the attacker. Each handler can control up to a thousand agents. In some cases a machine may become part of a DDoS attack with the owner's consent, for example, in Operation Payback, organized by the group Anonymous. These attacks can use different types of internet packets such as TCP, UDP, ICMP, etc.
These collections of compromised systems are known as botnets. DDoS tools like Stacheldraht still use classic DoS attack methods centered on IP spoofing and amplification like smurf attacks and fraggle attacks (types of bandwidth consumption attacks). SYN floods (a resource starvation attack) may also be used. Newer tools can use DNS servers for DoS purposes. Unlike MyDoom's DDoS mechanism, botnets can be turned against any IP address. Script kiddies use them to deny the availability of well known websites to legitimate users. More sophisticated attackers use DDoS tools for the purposes of extortion, including against their business rivals.
It has been reported that internet of things (IoT) devices have been involved in new denial-of-service attacks. In one noted attack, traffic peaked at around 20,000 requests per second and came from around 900 CCTV cameras.
UK's GCHQ has tools built for DDoS, named PREDATORS FACE and ROLLING THUNDER.
Simple attacks such as SYN floods may appear with a wide range of source IP addresses, giving the appearance of a distributed DoS. These flood attacks do not require completion of the TCP three-way handshake and attempt to exhaust the destination SYN queue or the server bandwidth. Because the source IP addresses can be trivially spoofed, an attack could come from a limited set of sources, or may even originate from a single host. Stack enhancements such as SYN cookies may be effective mitigation against SYN queue flooding but do not address bandwidth exhaustion.
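The SYN cookie idea mentioned above can be sketched conceptually: the server derives its initial sequence number from the connection parameters instead of queuing half-open state, and only commits resources when a matching ACK returns. The sketch below is a toy illustration, not the algorithm used by any particular operating system.

import hmac, hashlib, struct

SECRET = b"server-secret"          # illustrative key

def syn_cookie(src_ip, src_port, dst_port, time_slot):
    msg = f"{src_ip}:{src_port}:{dst_port}:{time_slot}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return struct.unpack(">I", digest[:4])[0]          # 32-bit initial sequence number

def ack_matches(ack_number, src_ip, src_port, dst_port, time_slot):
    """The client's ACK acknowledges ISN + 1, so the cookie can be recomputed and checked."""
    return ((ack_number - 1) & 0xFFFFFFFF) == syn_cookie(src_ip, src_port, dst_port, time_slot)

isn = syn_cookie("203.0.113.7", 40000, 80, time_slot=12345)
print(ack_matches((isn + 1) & 0xFFFFFFFF, "203.0.113.7", 40000, 80, time_slot=12345))   # True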
DDoS extortion
In 2015, DDoS botnets such as DD4BC grew in prominence, taking aim at financial institutions. Cyber-extortionists typically begin with a low-level attack and a warning that a larger attack will be carried out if a ransom is not paid in Bitcoin. Security experts recommend targeted websites to not pay the ransom. The attackers tend to get into an extended extortion scheme once they recognize that the target is ready to pay.
HTTP slow POST DoS attack
First discovered in 2009, the HTTP slow POST attack sends a complete, legitimate HTTP POST header, which includes a Content-Length field to specify the size of the message body to follow. However, the attacker then proceeds to send the actual message body at an extremely slow rate (e.g. 1 byte/110 seconds). Due to the entire message being correct and complete, the target server will attempt to obey the Content-Length field in the header, and wait for the entire body of the message to be transmitted, which can take a very long time. The attacker establishes hundreds or even thousands of such connections until all resources for incoming connections on the victim server are exhausted, making any further connections impossible until all data has been sent. It is notable that unlike many other DoS or DDoS attacks, which try to subdue the server by overloading its network or CPU, an HTTP slow POST attack targets the logical resources of the victim, which means the victim would still have enough network bandwidth and processing power to operate. Combined with the fact that the Apache HTTP Server will, by default, accept requests up to 2GB in size, this attack can be particularly powerful. HTTP slow POST attacks are difficult to differentiate from legitimate connections and are therefore able to bypass some protection systems. OWASP, an open source web application security project, released a tool to test the security of servers against this type of attack.
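One common countermeasure, sketched below as an assumption rather than a description of any specific product, is for the server to enforce a minimum body-transfer rate and drop connections that fall below it, so a trickle of one byte every ~110 seconds cannot hold a worker open indefinitely.

import time

MIN_BYTES_PER_SECOND = 100            # illustrative policy threshold

def body_rate_acceptable(bytes_received, started_at, now=None):
    """Return False when a request body is arriving too slowly to be legitimate."""
    now = time.monotonic() if now is None else now
    elapsed = max(now - started_at, 1e-6)
    return bytes_received / elapsed >= MIN_BYTES_PER_SECOND

# A client that has delivered only 3 bytes of body in 330 seconds would be dropped:
print(body_rate_acceptable(3, started_at=0.0, now=330.0))    # False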
Challenge Collapsar (CC) attack
A Challenge Collapsar (CC) attack is an attack where standard HTTP requests are sent to a targeted web server frequently. The Uniform Resource Identifiers (URIs) in the requests require complicated time-consuming algorithms or database operations which may exhaust the resources of the targeted web server.
In 2004, a Chinese hacker nicknamed KiKi invented a hacking tool to send these kinds of requests to attack an NSFOCUS firewall named Collapsar, and thus the hacking tool was known as Challenge Collapsar, or CC for short. Consequently, this type of attack got the name CC attack.
Internet Control Message Protocol (ICMP) flood
A smurf attack relies on misconfigured network devices that allow packets to be sent to all computer hosts on a particular network via the broadcast address of the network, rather than a specific machine. The attacker will send large numbers of IP packets with the source address faked to appear to be the address of the victim. Most devices on a network will, by default, respond to this by sending a reply to the source IP address. If the number of machines on the network that receive and respond to these packets is very large, the victim's computer will be flooded with traffic. This overloads the victim's computer and can even make it unusable during such an attack.
Ping flood is based on sending the victim an overwhelming number of ping packets, usually using the ping command from Unix-like hosts. It is very simple to launch, the primary requirement being access to greater bandwidth than the victim.
Ping of death is based on sending the victim a malformed ping packet, which will lead to a system crash on a vulnerable system.
The BlackNurse attack is an example of an attack taking advantage of the required Destination Port Unreachable ICMP packets.
Nuke
A Nuke is an old-fashioned denial-of-service attack against computer networks consisting of fragmented or otherwise invalid ICMP packets sent to the target, achieved by using a modified ping utility to repeatedly send this corrupt data, thus slowing down the affected computer until it comes to a complete stop.
A specific example of a nuke attack that gained some prominence is the WinNuke, which exploited the vulnerability in the NetBIOS handler in Windows 95. A string of out-of-band data was sent to TCP port 139 of the victim's machine, causing it to lock up and display a Blue Screen of Death.
Peer-to-peer attacks
Attackers have found a way to exploit a number of bugs in peer-to-peer servers to initiate DDoS attacks. The most aggressive of these peer-to-peer-DDoS attacks exploits DC++. With peer-to-peer there is no botnet and the attacker does not have to communicate with the clients it subverts. Instead, the attacker acts as a puppet master, instructing clients of large peer-to-peer file sharing hubs to disconnect from their peer-to-peer network and to connect to the victim's website instead.
Permanent denial-of-service attacks
Permanent denial-of-service (PDoS), also known loosely as phlashing, is an attack that damages a system so badly that it requires replacement or reinstallation of hardware. Unlike the distributed denial-of-service attack, a PDoS attack exploits security flaws which allow remote administration on the management interfaces of the victim's hardware, such as routers, printers, or other networking hardware. The attacker uses these vulnerabilities to replace a device's firmware with a modified, corrupt, or defective firmware image—a process which when done legitimately is known as flashing. The intent is to brick the device, rendering it unusable for its original purpose until it can be repaired or replaced.
The PDoS is a pure hardware-targeted attack that can be much faster and requires fewer resources than using a botnet in a DDoS attack. Because of these features, and the high probability of security exploits on network-enabled embedded devices, this technique has come to the attention of numerous hacking communities. BrickerBot, a piece of malware that targeted IoT devices, used PDoS attacks to disable its targets.
PhlashDance is a tool created by Rich Smith (an employee of Hewlett-Packard's Systems Security Lab) to detect and demonstrate PDoS vulnerabilities; it was presented at the 2008 EUSecWest Applied Security Conference in London.
Reflected attack
A distributed denial-of-service attack may involve sending forged requests of some type to a very large number of computers that will reply to the requests. Using Internet Protocol address spoofing, the source address is set to that of the targeted victim, which means all the replies will go to (and flood) the target. This reflected attack form is sometimes called a distributed reflective denial-of-service (DRDoS) attack.
ICMP echo request attacks (Smurf attacks) can be considered one form of reflected attack, as the flooding hosts send Echo Requests to the broadcast addresses of mis-configured networks, thereby enticing hosts to send Echo Reply packets to the victim. Some early DDoS programs implemented a distributed form of this attack.
Amplification
Amplification attacks are used to magnify the bandwidth that is sent to a victim. This is typically done through publicly accessible DNS servers that are used to cause congestion on the target system using DNS response traffic. Many services can be exploited to act as reflectors, some harder to block than others. US-CERT has observed that different services result in different amplification factors.
DNS amplification attacks involve a newer mechanism that increases the amplification effect, using a much larger list of DNS servers than seen earlier. The process typically involves an attacker sending a DNS name lookup request to a public DNS server, spoofing the source IP address of the targeted victim. The attacker tries to request as much information as possible, thus amplifying the DNS response that is sent to the targeted victim. Since the size of the request is significantly smaller than the response, the attacker is easily able to increase the amount of traffic directed at the target. SNMP and NTP can also be exploited as reflectors in an amplification attack.
An example of an amplified DDoS attack through the Network Time Protocol (NTP) is through a command called monlist, which sends the details of the last 600 hosts that have requested the time from the NTP server back to the requester. A small request to this time server can be sent using a spoofed source IP address of some victim, which results in a response 556.9 times the size of the request being sent to the victim. This becomes amplified when using botnets that all send requests with the same spoofed IP source, which will result in a massive amount of data being sent back to the victim.
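The scale of such an attack can be estimated with simple arithmetic: traffic reaching the victim is roughly the number of bots times the per-bot request rate times the request size times the amplification factor. The short Python sketch below works through one such estimate; the request size, bot count and request rate are assumed figures used only for illustration.

```python
# Rough back-of-the-envelope estimate of NTP monlist amplification.
request_bytes = 234            # assumed size of a single monlist request
amplification = 556.9          # response-to-request size ratio cited above
bots = 1_000                   # assumed number of bots spoofing the victim's IP
requests_per_sec_per_bot = 10  # assumed request rate per bot

attack_bps = bots * requests_per_sec_per_bot * request_bytes * amplification * 8
print(f"Traffic toward victim: {attack_bps / 1e9:.2f} Gbit/s")
# With these assumptions the reflected flood is on the order of 10 Gbit/s,
# generated from only about 19 Mbit/s of spoofed requests.
```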
It is very difficult to defend against these types of attacks because the response data is coming from legitimate servers. These attack requests are also sent through UDP, which does not require a connection to the server. This means that the source IP is not verified when a request is received by the server. To bring awareness of these vulnerabilities, campaigns have been started that are dedicated to finding amplification vectors which have led to people fixing their resolvers or having the resolvers shut down completely.
Mirai botnet
This attack works by using a worm to infect hundreds of thousands of IoT devices across the internet. The worm propagates through networks and systems, taking control of poorly protected IoT devices such as thermostats, Wi-Fi-enabled clocks, and washing machines. When a device becomes enslaved, its owner or user usually has no immediate indication. The IoT device itself is not the direct target of the attack; it is used as part of a larger attack. These newly enslaved devices are called slaves or bots. Once the hacker has acquired the desired number of bots, they instruct the bots to attack a chosen target. In October 2016, a Mirai botnet attacked Dyn, a DNS provider for sites such as Twitter and Netflix, and these websites were unreachable for several hours. This type of attack is not physically damaging, but it is certainly costly for any large internet company that is attacked.
R-U-Dead-Yet? (RUDY)
RUDY attack targets web applications by starvation of available sessions on the web server. Much like Slowloris, RUDY keeps sessions at halt using never-ending POST transmissions and sending an arbitrarily large content-length header value.
SACK Panic
By manipulating the maximum segment size and selective acknowledgement (SACK), a remote peer can cause a denial of service through an integer overflow in the Linux kernel, potentially causing a kernel panic. Jonathan Looney discovered the vulnerability on June 17, 2019.
Shrew attack
The shrew attack is a denial-of-service attack on the Transmission Control Protocol where the attacker employs man-in-the-middle techniques. It uses short synchronized bursts of traffic to disrupt TCP connections on the same link, by exploiting a weakness in TCP's re-transmission timeout mechanism.
Slow Read attack
A slow read attack sends legitimate application layer requests, but reads responses very slowly, thus trying to exhaust the server's connection pool. It is achieved by advertising a very small number for the TCP Receive Window size, and at the same time emptying clients' TCP receive buffer slowly, which causes a very low data flow rate.
Sophisticated low-bandwidth Distributed Denial-of-Service Attack
A sophisticated low-bandwidth DDoS attack is a form of DoS that uses less traffic and increases its effectiveness by aiming at a weak point in the victim's system design, i.e., the attacker sends traffic consisting of complicated requests to the system. Essentially, a sophisticated DDoS attack is lower in cost due to its use of less traffic, is smaller in size making it more difficult to identify, and has the ability to harm systems which are protected by flow control mechanisms.
(S)SYN flood
A SYN flood occurs when a host sends a flood of TCP/SYN packets, often with a forged sender address. Each of these packets is handled like a connection request, causing the server to spawn a half-open connection by sending back a TCP/SYN-ACK packet and waiting for the final ACK from the sender address. However, because the sender's address is forged, that final ACK never arrives. These half-open connections exhaust the server's pool of available connections, keeping it from responding to legitimate requests until after the attack ends.
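A minimal sketch of why the half-open queue is the bottleneck: the server can hold only a fixed number of half-open entries, and each spoofed SYN occupies one until a timeout reaps it. The backlog size and timeout below are assumptions chosen for illustration.

```python
import collections

BACKLOG_SIZE = 256        # assumed size of the half-open (SYN) queue
SYN_TIMEOUT = 60.0        # assumed seconds before a half-open entry is reaped

half_open = collections.OrderedDict()   # source address -> time the SYN arrived

def on_syn(src, now):
    """Return True if the SYN can be queued, False if it must be dropped."""
    # Reap entries whose handshake never completed.
    for addr, t in list(half_open.items()):
        if now - t > SYN_TIMEOUT:
            del half_open[addr]
    if len(half_open) >= BACKLOG_SIZE:
        return False          # queue full: legitimate clients are refused
    half_open[src] = now
    return True

# An attacker sending spoofed SYNs faster than BACKLOG_SIZE / SYN_TIMEOUT
# (here roughly 4 per second) keeps the queue permanently full.
```

Modern TCP stacks mitigate this with SYN cookies, which avoid storing per-connection state until the handshake completes.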
Teardrop attacks
A teardrop attack involves sending mangled IP fragments with overlapping, oversized payloads to the target machine. This can crash various operating systems because of a bug in their TCP/IP fragmentation re-assembly code. Windows 3.1x, Windows 95 and Windows NT operating systems, as well as versions of Linux prior to versions 2.0.32 and 2.1.63 are vulnerable to this attack.
(Although in September 2009, a vulnerability in Windows Vista was referred to as a "teardrop attack", this targeted SMB2 which is a higher layer than the TCP packets that teardrop used).
One of the fields in an IP header is the “fragment offset” field, indicating the starting position, or offset, of the data contained in a fragmented packet relative to the data in the original packet. If the sum of the offset and size of one fragmented packet differs from that of the next fragmented packet, the packets overlap. When this happens, a server vulnerable to teardrop attacks is unable to reassemble the packets - resulting in a denial-of-service condition.
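A reassembly routine can detect this malformed condition by checking whether each fragment's offset and length are consistent with the fragments already received. The sketch below is a simplified illustration; real IP reassembly tracks more state than shown here.

```python
def fragments_overlap(fragments):
    """fragments: list of (offset, length) tuples in bytes.
    Returns True if any two fragments overlap, as in a teardrop attack."""
    spans = sorted(fragments)
    for (off_a, len_a), (off_b, _len_b) in zip(spans, spans[1:]):
        if off_a + len_a > off_b:      # previous fragment runs into the next
            return True
    return False

# A teardrop-style pair: the second fragment claims to start inside the first.
print(fragments_overlap([(0, 1400), (800, 1400)]))   # True
print(fragments_overlap([(0, 1400), (1400, 1400)]))  # False
```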
Telephony denial-of-service (TDoS)
Voice over IP has made abusive origination of large numbers of telephone voice calls inexpensive and readily automated while permitting call origins to be misrepresented through caller ID spoofing.
According to the US Federal Bureau of Investigation, telephony denial-of-service (TDoS) has appeared as part of various fraudulent schemes:
A scammer contacts the victim's banker or broker, impersonating the victim to request a funds transfer. The banker's attempt to contact the victim for verification of the transfer fails as the victim's telephone lines are being flooded with thousands of bogus calls, rendering the victim unreachable.
A scammer contacts consumers with a bogus claim to collect an outstanding payday loan for thousands of dollars. When the consumer objects, the scammer retaliates by flooding the victim's employer with thousands of automated calls. In some cases, displayed caller ID is spoofed to impersonate police or law enforcement agencies.
A scammer contacts consumers with a bogus debt collection demand and threatens to send police; when the victim balks, the scammer floods local police numbers with calls on which caller ID is spoofed to display the victim's number. Police soon arrive at the victim's residence attempting to find the origin of the calls.
Telephony denial-of-service can exist even without Internet telephony. In the 2002 New Hampshire Senate election phone jamming scandal, telemarketers were used to flood political opponents with spurious calls to jam phone banks on election day. Widespread publication of a number can also flood it with enough calls to render it unusable, as happened by accident in 1981 with multiple +1-area code-867-5309 subscribers inundated by hundreds of misdialed calls daily in response to the song 867-5309/Jenny.
TDoS differs from other telephone harassment (such as prank calls and obscene phone calls) by the number of calls originated; by occupying lines continuously with repeated automated calls, the victim is prevented from making or receiving both routine and emergency telephone calls.
Related exploits include SMS flooding attacks and black fax or fax loop transmission.
TTL expiry attack
It takes more router resources to drop a packet with a TTL value of 1 or less than it does to forward a packet with a higher TTL value. When a packet is dropped due to TTL expiry, the router CPU must generate and send an ICMP time exceeded response. Generating many of these responses can overload the router's CPU.
UPnP attack
This attack uses an existing vulnerability in Universal Plug and Play (UPnP) protocol to get around a considerable amount of the present defense methods and flood a target's network and servers. The attack is based on a DNS amplification technique, but the attack mechanism is a UPnP router that forwards requests from one outer source to another disregarding UPnP behavior rules. Using the UPnP router returns the data on an unexpected UDP port from a bogus IP address, making it harder to take simple action to shut down the traffic flood. According to the Imperva researchers, the most effective way to stop this attack is for companies to lock down UPnP routers.
SSDP reflection attack
In 2014 it was discovered that SSDP was being used in DDoS attacks known as an "SSDP reflection attack with amplification". Many devices, including some residential routers, have a vulnerability in the UPnP software that allows an attacker to get replies from port number 1900 to a destination address of their choice. With a botnet of thousands of devices, the attackers can generate sufficient packet rates and occupy bandwidth to saturate links, causing the denial of services. The network company Cloudflare has described this attack as the "Stupidly Simple DDoS Protocol".
ARP spoofing
ARP spoofing is a common DoS attack that involves a vulnerability in the ARP protocol that allows an attacker to associate their MAC address to the IP address of another computer or gateway (like a router), causing traffic intended for the original authentic IP to be re-routed to that of the attacker, causing a denial of service.
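A simple host-side detection approach is to watch the ARP cache and flag any change in the MAC address bound to a known IP address, such as the default gateway. The sketch below illustrates the idea with made-up addresses; tools such as arpwatch implement it more completely.

```python
# Minimal sketch: warn if an IP-to-MAC binding in the ARP cache changes.
known_bindings = {}   # ip -> mac seen earlier

def check_arp_entry(ip, mac):
    previous = known_bindings.get(ip)
    if previous is None:
        known_bindings[ip] = mac
    elif previous != mac:
        # A changed binding for a stable host (e.g. the gateway) is a
        # classic sign of ARP spoofing and warrants an alert.
        print(f"WARNING: {ip} changed from {previous} to {mac}")

# Example: the gateway suddenly 'moves' to an attacker's MAC address.
check_arp_entry("192.168.1.1", "aa:bb:cc:dd:ee:01")
check_arp_entry("192.168.1.1", "de:ad:be:ef:00:01")  # triggers the warning
```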
Defense techniques
Defensive responses to denial-of-service attacks typically involve the use of a combination of attack detection, traffic classification and response tools, aiming to block traffic that they identify as illegitimate and allow traffic that they identify as legitimate. A list of prevention and response tools is provided below:
Upstream filtering
All traffic destined for the victim is diverted to pass through a "cleaning center" or "scrubbing center" via various methods such as changing the victim's IP address in the DNS system, tunneling methods (GRE/VRF, MPLS, SDN), proxies, digital cross connects, or even direct circuits. The center separates "bad" traffic (DDoS and also other common internet attacks) and only forwards good, legitimate traffic to the victim server.
The provider needs central connectivity to the Internet to manage this kind of service unless they happen to be located within the same facility as the cleaning or scrubbing center. Because DDoS attacks can overwhelm any type of hardware firewall, passing malicious traffic through large and mature networks is an increasingly effective and economically sustainable defense against DDoS.
Application front end hardware
Application front-end hardware is intelligent hardware placed on the network before traffic reaches the servers. It can be used on networks in conjunction with routers and switches. Application front-end hardware analyzes data packets as they enter the system, and then identifies them as a priority, regular, or dangerous. There are more than 25 bandwidth management vendors.
Application level Key Completion Indicators
Approaches to DDoS attacks against cloud-based applications may be based on an application layer analysis, indicating whether incoming bulk traffic is legitimate and thus triggering elasticity decisions without the economic implications of a DDoS attack. These approaches mainly rely on an identified path of value inside the application and monitor the progress of requests on this path, through markers called Key Completion Indicators.
In essence, these techniques are statistical methods of assessing the behavior of incoming requests to detect if something unusual or abnormal is going on.
An analogy is to a brick-and-mortar department store where customers spend, on average, a known percentage of their time on different activities such as picking up items and examining them, putting them back, filling a basket, waiting to pay, paying, and leaving. These high-level activities correspond to the Key Completion Indicators in service or site, and once normal behavior is determined, abnormal behavior can be identified. If a mob of customers arrived in the store and spent all their time picking up items and putting them back, but never made any purchases, this could be flagged as unusual behavior.
The department store can attempt to adjust to periods of high activity by bringing in a reserve of employees at short notice. But if it did this routinely, were a mob to start showing up but never buying anything, this could ruin the store with the extra employee costs. Soon the store would identify the mob activity and scale back the number of employees, recognizing that the mob provides no profit and should not be served. While this may make it more difficult for legitimate customers to get served during the mob's presence, it saves the store from total ruin.
In the case of elastic cloud services where a huge and abnormal additional workload may incur significant charges from the cloud service provider, this technique can be used to scale back or even stop the expansion of server availability to protect from economic loss.
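In code, the idea reduces to comparing how far incoming sessions progress along the value path against a baseline completion rate. The sketch below uses assumed numbers for the baseline and alert threshold purely for illustration.

```python
# Key Completion Indicator sketch: what fraction of sessions that start the
# value path actually reach its end (e.g. "payment") during a time window?
BASELINE_COMPLETION = 0.15   # assumed normal completion rate
ALERT_RATIO = 0.2            # alert if completion falls below 20% of baseline

def kci_alarm(sessions_started, sessions_completed):
    if sessions_started == 0:
        return False
    completion = sessions_completed / sessions_started
    return completion < BASELINE_COMPLETION * ALERT_RATIO

print(kci_alarm(1000, 160))   # False: roughly normal behaviour
print(kci_alarm(50000, 40))   # True: huge crowd, almost no completions
```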
Blackholing and sinkholing
With blackhole routing, all the traffic to the attacked DNS or IP address is sent to a "black hole" (null interface or a non-existent server). To be more efficient and avoid affecting network connectivity, it can be managed by the ISP.
A DNS sinkhole routes traffic to a valid IP address which analyzes the traffic and rejects bad packets. Sinkholing is not efficient against the most severe attacks.
IPS based prevention
Intrusion prevention systems (IPS) are effective if the attacks have signatures associated with them. However, the trend among the attacks is to have legitimate content but bad intent. Intrusion-prevention systems which work on content recognition cannot block behavior-based DoS attacks.
An ASIC-based IPS may detect and block denial-of-service attacks because it has the processing power and the granularity to analyze the attacks and act like a circuit breaker in an automated way.
DDS based defense
More focused on the problem than IPS, a DoS defense system (DDS) can block connection-based DoS attacks and those with legitimate content but bad intent. A DDS can also address both protocol attacks (such as teardrop and ping of death) and rate-based attacks (such as ICMP floods and SYN floods). A DDS uses a purpose-built system that can identify and obstruct denial-of-service attacks at greater speed than a software-based system.
Firewalls
In the case of a simple attack, a firewall could have a simple rule added to deny all incoming traffic from the attackers, based on protocols, ports, or the originating IP addresses.
More complex attacks will however be hard to block with simple rules: for example, if there is an ongoing attack on port 80 (web service), it is not possible to drop all incoming traffic on this port because doing so will prevent the server from serving legitimate traffic. Additionally, firewalls may be too deep in the network hierarchy, with routers being adversely affected before the traffic gets to the firewall. Also, many security tools still do not support IPv6 or may not be configured properly, so the firewalls often might get bypassed during the attacks.
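The rule-matching logic itself is straightforward; the difficulty described above lies in writing rules that exclude attack traffic without also excluding legitimate users. The sketch below illustrates first-match filtering with made-up example rules.

```python
from ipaddress import ip_address, ip_network

# Each rule: (source network, destination port or None for any, action).
RULES = [
    (ip_network("203.0.113.0/24"), None, "deny"),   # assumed attacking prefix
    (ip_network("0.0.0.0/0"), 80, "allow"),         # keep serving web traffic
    (ip_network("0.0.0.0/0"), None, "allow"),
]

def decide(src, dport):
    for net, port, action in RULES:
        if ip_address(src) in net and (port is None or port == dport):
            return action          # first matching rule wins
    return "deny"

print(decide("203.0.113.7", 80))   # deny: source is in the blocked prefix
print(decide("198.51.100.9", 80))  # allow
```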
Routers
Similar to switches, routers have some rate-limiting and ACL capability. They, too, are manually set. Most routers can be easily overwhelmed under a DoS attack. Cisco IOS has optional features that can reduce the impact of flooding.
Switches
Most switches have some rate-limiting and ACL capability. Some switches provide automatic and/or system-wide rate limiting, traffic shaping, delayed binding (TCP splicing), deep packet inspection and Bogon filtering (bogus IP filtering) to detect and remediate DoS attacks through automatic rate filtering and WAN Link failover and balancing.
These schemes will work as long as the DoS attack is of a kind they can prevent. For example, a SYN flood can be prevented using delayed binding or TCP splicing. Similarly, content-based DoS may be prevented using deep packet inspection, and attacks originating from or destined for dark addresses can be prevented using bogon filtering. Automatic rate filtering works as long as the rate thresholds have been set correctly, and WAN-link failover works as long as both links have a DoS/DDoS prevention mechanism.
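Automatic rate filtering of the kind mentioned above is commonly built on a token-bucket rule: each source is allowed a sustained packet rate plus a small burst, and anything beyond that is dropped. The per-source sketch below uses assumed rate and burst values for illustration.

```python
import time

RATE = 100.0     # assumed packets allowed per second per source
BURST = 200.0    # assumed maximum burst size

buckets = {}     # source address -> (tokens remaining, last update time)

def allow_packet(src, now=None):
    now = time.monotonic() if now is None else now
    tokens, last = buckets.get(src, (BURST, now))
    tokens = min(BURST, tokens + (now - last) * RATE)   # refill the bucket
    if tokens < 1.0:
        buckets[src] = (tokens, now)
        return False            # source exceeded its rate: drop the packet
    buckets[src] = (tokens - 1.0, now)
    return True
```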
Blocking vulnerable ports
For example, in an SSDP reflection attack, the key mitigation is to block incoming UDP traffic on port 1900 at the firewall.
Unintentional denial-of-service
An unintentional denial-of-service can occur when a system ends up denied not due to a deliberate attack by a single individual or group of individuals, but simply due to a sudden enormous spike in popularity. This can happen when an extremely popular website posts a prominent link to a second, less well-prepared site, for example, as part of a news story. The result is that a significant proportion of the primary site's regular users (potentially hundreds of thousands of people) click that link within a few hours, having the same effect on the target website as a DDoS attack. A VIPDoS is the same, but specifically when the link was posted by a celebrity.
When Michael Jackson died in 2009, websites such as Google and Twitter slowed down or even crashed. Many sites' servers thought the requests were from a virus or spyware trying to cause a denial-of-service attack, warning users that their queries looked like "automated requests from a computer virus or spyware application".
News sites and link sites (sites whose primary function is to provide links to interesting content elsewhere on the Internet) are most likely to cause this phenomenon. The canonical example is the Slashdot effect, which occurs when a site receives traffic from Slashdot. It is also known as "the Reddit hug of death" and "the Digg effect".
Routers have also been known to create unintentional DoS attacks: both D-Link and Netgear routers have overloaded NTP servers by flooding them without respecting the restrictions of client types or geographical limitations.
Similar unintentional denial-of-service can also occur via other media, e.g. when a URL is mentioned on television. If a server is being indexed by Google or another search engine during peak periods of activity, or does not have a lot of available bandwidth while being indexed, it can also experience the effects of a DoS attack.
Legal action has been taken in at least one such case. In 2006, Universal Tube & Rollform Equipment Corporation sued YouTube: massive numbers of would-be YouTube.com users accidentally typed the tube company's URL, utube.com. As a result, the tube company had to spend large amounts of money upgrading its bandwidth. The company appears to have taken advantage of the situation, with utube.com now carrying advertisements for revenue.
In March 2014, after Malaysia Airlines Flight 370 went missing, DigitalGlobe launched a crowdsourcing service on which users could help search for the missing jet in satellite images. The response overwhelmed the company's servers.
An unintentional denial-of-service may also result from a prescheduled event created by the website itself, as was the case with the Census in Australia in 2016. This can happen when a server provides some service at a specific time; for example, a university website that schedules grades to become available will receive far more login requests at that time than at any other.
Side effects of attacks
Backscatter
In computer network security, backscatter is a side-effect of a spoofed denial-of-service attack. In this kind of attack, the attacker spoofs (or forges) the source address in IP packets sent to the victim. In general, the victim machine cannot distinguish between the spoofed packets and legitimate packets, so the victim responds to the spoofed packets as it normally would. These response packets are known as backscatter.
If the attacker is spoofing source addresses randomly, the backscatter response packets from the victim will be sent back to random destinations. This effect can be used by network telescopes as indirect evidence of such attacks.
The term "backscatter analysis" refers to observing backscatter packets arriving at a statistically significant portion of the IP address space to determine characteristics of DoS attacks and victims.
Legality
Many jurisdictions have laws under which denial-of-service attacks are illegal.
In the US, denial-of-service attacks may be considered a federal crime under the Computer Fraud and Abuse Act with penalties that include years of imprisonment. The Computer Crime and Intellectual Property Section of the US Department of Justice handles cases of DoS and DDoS. In one example, in July 2019, Austin Thompson, aka DerpTrolling, was sentenced to 27 months in prison and $95,000 restitution by a federal court for conducting multiple DDoS attacks on major video gaming companies, disrupting their systems from hours to days.
In European countries, committing criminal denial-of-service attacks may, as a minimum, lead to arrest. The United Kingdom is unusual in that it specifically outlawed denial-of-service attacks and set a maximum penalty of 10 years in prison with the Police and Justice Act 2006, which amended Section 3 of the Computer Misuse Act 1990.
In January 2019, Europol announced that "actions are currently underway worldwide to track down the users" of Webstresser.org, a former DDoS marketplace that was shut down in April 2018 as part of Operation Power Off. Europol said UK police were conducting a number of "live operations" targeting over 250 users of Webstresser and other DDoS services.
On January 7, 2013, Anonymous posted a petition on the whitehouse.gov site asking that DDoS be recognized as a legal form of protest similar to the Occupy protests, the claim being that the two are similar in purpose.
See also
Notes
References
Further reading
PC World - Application Layer DDoS Attacks are Becoming Increasingly Sophisticated
External links
Internet Denial-of-Service Considerations
Akamai State of the Internet Security Report - Quarterly Security and Internet trend statistics
W3C The World Wide Web Security FAQ
cert.org CERT's Guide to DoS attacks. (historic document)
ATLAS Summary Report – Real-time global report of DDoS attacks.
Low Orbit Ion Cannon - The Well Known Network Stress Testing Tool
High Orbit Ion Cannon - A Simple HTTP Flooder
LOIC SLOW An Attempt to Bring SlowLoris and Slow Network Tools on LOIC
Denial-of-service attacks
Internet Relay Chat
Cyberwarfare
Types of cyberattacks
Internet outages |
52969697 | https://en.wikipedia.org/wiki/Institute%20of%20Aeronautical%20Engineering | Institute of Aeronautical Engineering | Institute of Aeronautical Engineering (IARE) is a private engineering college in Hyderabad, offering postgraduate (master's) and undergraduate (bachelor's) courses in engineering and technology. It is located near Air Force Station, Dundigal, Hyderabad, India. The institute was established in the academic year 2000. It is affiliated to Jawaharlal Nehru Technological University, Hyderabad, and approved by the AICTE.
Departments
Aeronautical Engineering
Mechanical Engineering
Civil engineering
Computer Science and Engineering
Electronics and Communication Engineering
Electrical and Electronics Engineering
Information Technology
Computer Science and Engineering (Artificial Intelligence and Machine Learning)
Computer Science and Engineering (Data Science)
Computer Science and Engineering (Cyber Security)
Computer Science and Information Technology
Master of Business Administration (MBA)
M.Tech (Aerospace Engineering)
M.Tech (CAD/CAM, Mechanical Engineering)
M.Tech (Computer Science and Engineering)
M.Tech (Embedded Systems)
M.Tech (Electrical Power Systems)
M.Tech (Structural Engineering)
Bachelor's degrees
Bachelor of Engineering
Science & humanities departments
Chemistry/Environmental Science Department
English Department
Mathematics Department
Managerial Science Department
Computer Programming Department
Physics Department
Rankings
The National Institutional Ranking Framework (NIRF) ranked it 170 among engineering colleges in 2020.
Sports
The college has a playground of two acres. It offers facilities for outdoor and indoor games such as cricket, football, volleyball, basketball, badminton, table tennis and chess.
See also
Education in India
Literacy in India
List of institutions of higher education in Telangana
References
External links
Engineering colleges in Hyderabad, India
2000 establishments in Andhra Pradesh
Educational institutions established in 2000 |
11711925 | https://en.wikipedia.org/wiki/BBK%20DAV%20College%20for%20Women%2C%20Amritsar | BBK DAV College for Women, Amritsar | The BBK DAV College for Women is a college in Amritsar, India.
BBK DAV College for Women, Amritsar, was founded in 1967 under the aegis of the DAV College Managing Committee, New Delhi.
Academics
The academic year follows a semester system consisting of two terms: July to December and January to June. All undergraduate, postgraduate and diploma programmes require full-time commitment from the students, and the system does not exempt them from any compulsory activities.
Programmes Available
BA (Bachelor of Arts) with the following subject options:
Regular Subjects
English
Hindi
Punjabi / Punjab History & Culture
Psychology
Sociology
History
Political Science
Home Science
Philosophy
Geography
Physical Education
Economics
Music Vocal
Music Instrumental
Art & Painting
Computer Science
Computer Applications
Mathematics
Vocational Subjects
Mass Communication Video Production
Still Photography Audio Production
Commercial Art
Gemology & Jewellery Design
Fashion Designing & Garment Construction
Tourism & Travel Management
Dance
Undergraduate Degree Courses
B.Voc. Entertainment Technology
B.Voc. Theatre & Stage Craft
B.Voc. Fashion Technology
B.Voc. Software Development
B.Voc. Beauty & Fitness
B.Voc. Retail Management
B.Voc. Financial Services
B.Sc. Medical with Bioinformatics as subject option (3 years)
B.Sc. Non-Medical with Bioinformatics as subject option (3 years)
B.Sc. Biotechnology (3 years)
B.Sc. Economics with Computer Science & Mathematics/ Quantitative Techniques. (3 years)
B.Sc. Computer Science with Physics, Mathematics & Computer Science (3 years)
BCA (Bachelor in Computer Applications) (3 years)
B.Sc IT (Bachelor in Science in Information Technology) (3 years)
BA English Honours (3 years)
B.Com Pass & Honours (3 years)
B.Com. Financial Services (in the pipeline)
BBA (Bachelor of Business Administration) (3 years)
BD (Bachelor in Design) - 4 Year Degree Course with specialization in Textile, Interior & Fashion Designing.
BM (Bachelor in Multimedia) (4 year Degree Course)
BFA (Bachelor in Fine Arts) (4 years)
BA Journalism & Mass Communication
Post Graduate Courses
MA Fine Arts
MA Commercial Art
MA English
Masters in Journalism & Mass Communication
Masters in Tourism Management
MA Media Studies & Production
M.Com
M.Sc. Computer Science
M.Sc. Internet Studies
M.Sc. Fashion Designing & Merchandising
Diplomas
PG Diploma in Computer Applications (PGDCA)
PG Diploma in Bioinformatics
PG Diploma in Financial Services (Banking & Insurance)
One Year Diploma Course in French
Add-On Courses
French
Communication Skills in English
Cosmetology
Aviation, Catering & Hospitality
Computer Fundamentals & Internet Applications
Computer Graphics & Animation
Interior Decoration
Office Management & Secretarial Practices
Food Preservation
Anchoring, Reporting & News Reading
Cultural activities
The college recreated history by lifting the Overall Championship Trophy in the Inter-Zonal Youth Festival of GNDU after 19 years. The college won the championship trophy by scoring 120 marks and winning positions in all 26 events in which it participated. The college emerged champion in literary, theatre and folk events while taking runner-up positions in the categories of fine arts and music. Participants won 1st position in 11 events, namely Kavishri, Folk Song, On the Spot Painting, Collage, Phulkari, Skit, Fancy Dress, Quiz, Debate, Poetry and Gidha; 2nd position in 13 events, namely Folk Orchestra, General Dance, Classical Vocal, Non Percussion, Classical Dance, Group Song, Vaar Singing, Western Solo, Rangoli, Play, Mime, Elocution and Poster Making; and 3rd position in 2 events, namely Western Group and Group Bhajan. College student Suvidha Duggal was presented the Best Actress (Theatre) Award and Simran from the Gidha team won the award for Best Dancer.
The college has ties with international institutes for educational and cultural exchange. It has links with the educational organizations WCCI (World Council for Curriculum & Instruction), World Education Foundation, Kent County Council, Technology College, Northfleet and Hextable School, and World Punjabi Congress, Lahore, Pakistan.
In July 2004, a group of 16 students and teachers presented as many as 50 programmes including the play Is Jagah Ek Gaon Tha at Woodville Halls Theatre, Gravesend, UK. The college dance troupe was part of the Indian Fair that was declared first among the 32 countries that participated in the Dubai International Dance Festival. The college artists presented a one-hour programme at Wagha Border for the release of a documentary titled Sarhad ke Rakshak. The play Umrao Jaan participated in a five-day National Drama Festival organized by Urdu Academy, New Delhi. One College student represented Punjab at the festival of Teej celebrated in Gravesend, England. In 2008-09, the college folk artistes were part of a delegation to China selected by the government of India.
Sports
The college counts among its alumni an Arjuna Awardee, Olympian Harwant Kaur, a gold medallist in the Asian Games and players at the international level. The college's investment in sports is 55 lakh rupees per year, and it provides the latest equipment for all the sports events of Guru Nanak Dev University, Amritsar. The college has made a tremendous contribution to university sports, adding a major share to Guru Nanak Dev University's feat of winning the Maulana Abul Kalam Azad All India Trophy for Sports year after year. Its players have been members of the Indian contingent for the Asian Games and Commonwealth Games. A large number of players of this college have represented the university at the intervarsity level and won gold, silver and bronze medals.
Continuing its winning streak, BBK DAV College for Women lifted the overall General Sports Championship Trophy for the seventh consecutive time along with a cash award of 35 lakh rupees. 150 outstanding players of the college who excelled in different games received approximately 35 lakh rupees as prize money from Shri Rahul Bhatnagar, IAS, Secretary, Department of Youth Affairs and Sports, Government of India, New Delhi, and Dr. Gurdeep Singh, Joint Secretary, AIU New Delhi, at the 48th Annual Sports prize distribution function 2017-18 at Dashmesh Auditorium, GNDU Campus. Five international cyclists of the college, Ms. Nayana Rajesh P., Amritha Regunath, Aleena Reji, Vaishnavi Gabhane and Ms. Aashu Sharma, received the highest cash prizes of Rs. 4,28,000, Rs. 3,46,000, Rs. 2,40,000, Rs. 2,29,000 and Rs. 1,00,000 respectively among women players. Guru Nanak Dev University organised 52 inter-college competitions; the college participated in 49 and won positions in 47 of them, including twenty-seven championships (Canoeing, Rope Malkhumb, Pistol Shooting, Rowing, Table Tennis, Tug of War, Judo, Yoga, Kayaking, Lawn Tennis, Archery Wooden, Wushu, Power Lifting, Weight Lifting, Road Cycling, Track Cycling, Basketball, Wrestling, Fencing, Rifle Shooting, Boxing, Softball, Korfball, Baseball, Yachting, Kick Boxing and Netball), seven first runner-up positions (Chess, Handball, Squash Racket, Kabaddi (c/s), Gymnastics Artistic, Football and Cricket) and nine second runner-up positions (Ball Badminton, Badminton, Archery Compound, Archery Recurve, Hockey, Indoor Hockey, Rugby, Kabaddi (n/s) and Swimming).
Department of Life Sciences
The college is re-accredited with an ‘A’ grade by NAAC, recognised as a ‘Star College’ by the Department of Biotechnology, Government of India, and designated a college with ‘Potential for Excellence’ by the UGC. It offers a blend of tradition and modernity, science and social concerns, moral values and entrepreneurial acumen.
See also
Arya Samaj
References
Education in Amritsar
Science and technology in Amritsar
Universities and colleges affiliated with the Arya Samaj
Buildings and structures in Amritsar
1967 establishments in Punjab, India
Educational institutions established in 1967 |
37578623 | https://en.wikipedia.org/wiki/Vasyl%20Stefanyk%20Precarpathian%20National%20University | Vasyl Stefanyk Precarpathian National University | Vasyl Stefanyk Precarpathian National University is one of the oldest institutions of higher education in Western Ukraine. Its history dates back to March 15, 1940, when Stanislav Teacher Training Institute was established. Later, in 1950, it was renamed Stanislav (since 1962 - Ivano-Frankivsk) Pedagogical Institute. In January 1971, the institution was renamed after a famous Ukrainian writer Vasyl Stefanyk by the decree of the Council of Ministers of the USSR.
After the dissolution of the USSR, the Precarpathian University was founded on the base of Ivano-Frankivsk State Pedagogical Institute on August 26, 1992. On September 14, 2004, the Ministry of Education and Science of Ukraine awarded the University the national status, by its decree # 204.
Currently, the University unites eight educational institutes, six faculties, three educational and consulting centers, one college, eleven research centers, Postgraduate Educational and Pre-university Training Center, Information Technology Center, the Center of Distance Learning and Knowledge Control, Teaching Management, Scientific and Research Department, Dendrological Park (Arboretum) and Botanical Garden.
University Administration
• Rector – Ihor Tsependa, Doctor of Political Sciences, Professor, Honorary Doctor of the University of Rzeszów (Poland, 2015), member of the Ukrainian Academy of Political Science.
• Vice-Rector for Research Work – Andrii Zahorodniuk, Doctor of Physics and Mathematics, Professor.
• Vice-Rector for Scientific and Pedagogical Work – Serhii Sharyn, Doctor of Physics and Mathematics.
• Vice-Rector for Scientific and Pedagogical Work – Halyna Mykhailyshyn, Doctor of Philosophy, Professor, Honored Worker of Education.
Accommodation and Catering
The campus includes six dormitories with 2,700 beds, where students and university staff can live. The housing is allocated according to a needs-based formula.
History
Galicia became a part of the USSR in September 1939. The positive outcome of this historical event was the unification of all Ukrainian lands in one state, which enabled the development of public education. At that time the Precarpathian region faced a severe shortage of teaching staff. Therefore, local authorities asked the government of the Ukrainian SSR to establish a Teacher Training Institute, and in January 1940 preparations for its opening began. As a result, on March 1 of that year the new educational establishment opened its doors.
Three departments were created in the Institute: Department of History (Dean T. Velykyi), Department of Physics and Mathematics (Dean M. Korol) and Philological Department with two sub-departments – the Ukrainian Language and Literature and the Russian Language and Literature (Dean M. Shmulenzon).
Originally, it did not have its own building. It was using the classrooms in the commercial school. Overcrowding, lack of training equipment and educational and methodological literature were the main obstacles on the way to a proper development of the Institute. Despite the difficulties, the Institute actively pursued its activities. Before the war 900 future teachers studied there. However, with the Second World War many students and teachers went to fight, and some of them were evacuated. In particular the first dean of the Philological Department L. Shmulenzon, teachers W. Chornyi, L. Berezin, I. Popovskii, L. Nasonov died in the battles somewhere between Zhytomyr and Berdychiv. The first dean of the Faculty of History P. Velykyi was killed at Stalingrad. The first director of the Institute F. Plotnytskii died in a Gestapo torture chamber in Uman. Some of those who remained alive after the war continued to work in other institutions and establishments.
During the Nazi occupation, the Teacher Training Institute was closed, its property was looted, and the school building was burnt. The losses amounted to 3 million rubles. After the liberation of the region from the German invaders, measures were taken to restore the work of the Stanislav Teacher Training Institute. The first post-war academic year began on November 1, 1944, in the building in Shevchenko Street. In July 1945, the first 13 graduates received their teaching diplomas. The years 1945-1950 were years of reconstruction.
A new stage in the work of the educational institution began in the fifties. The Stanislav Teacher Training Institute became the Pedagogical Institute according to a resolution of the Council of Ministers of the USSR on August 4, 1950. The training of highly qualified teachers began.
Unfortunately, the Institute could not avoid the domination of the totalitarian regime, which affected all aspects of institute life. The declarations and ideological policies of the USSR had to be implemented and reflected in the educational process. The staff of the Institute once again worked under difficult conditions because of changes in curricula and temporary staff reductions. The Institute was preparing teachers of a broader qualification. In 1963, the General Scientific Faculty was created. It became the fourth one alongside the History and Philology Department, the Physics and Mathematics Faculty and the Faculty of the Methodology and Pedagogy of Primary Education. The main objective of the General Science Faculty was to provide higher education for part-time students so they could keep their jobs. Hundreds of students from the universities of Kyiv, Kharkiv, Lviv, Donetsk, and Chernivtsi who lived and worked in Ivano-Frankivsk Region joined this faculty.
It is worth mentioning that scientific work was developing too. The Institute began publishing "The Memoirs" in 1956. Twenty-three dissertations were defended within a short period of time, from the institute's establishment to 1965. This number was not very impressive, but it was reasonable for a developing educational institution.
The history of the Institute during Soviet times had other sad pages as well. In August and September 1965, the end of the "Khrushchev thaw" resulted in a series of political arrests of Ukrainian "dissidents of the sixties" who spoke out against the Russification policy and injustice, and stood up for the sovereignty of Ukraine. The arrests touched the professors of the Ivano-Frankivsk Pedagogical Institute too. Valentyn Moroz, professor of recent history, who had been working at the Institute for just a year, was arrested by the investigating authorities on August 31, 1965. The court held in Lutsk delivered the judgment: four years in a hard-labor prison camp for his "anti-Soviet agitation and propaganda". After being released in September 1969, he was arrested and imprisoned again for his new articles. In 1979, V. Moroz and four other dissidents were exchanged for two Soviet spies uncovered in the USA. Abroad, V. Moroz defended his doctoral dissertation.
V. Kravets (1957-1967), Candidate of Philosophy, assistant professor, O. Ustenko (1967-1980), Candidate of Economics, professor, I. Vasyuta (1980-1982), Doctor of History, professor, I. Kocheruk (1982-1986), Candidate of Physics and Mathematics, assistant professor were at the head of the Institute.
In 1966, the Music and Pedagogical Faculty which prepared teachers of music and singing art for secondary schools was established in the Institute. The Honoured Artist of the USSR, assistant professor V. Paschenko was the first dean of the department. Some important changes took place at other faculties too. Thus, in 1969 the Historical and Philological Faculty was divided into the Historical and Pedagogical Department and the Philological one. The first of the two offered its students not only the profession of a history and social study teacher but of a methodologist of the educational work as well.
A memorable date in the history of Ivano-Frankivsk Pedagogical Institute was January 5, 1971 when the name of the famous Ukrainian writer Vasyl Stefanyk was conferred to the establishment. Nowadays the Precarpathian National University uses it with honor.
In 1971, the Philology Department opened the specialty “the English Language” and in 1975 a new Faculty of Foreign Languages was established. Since 1977 graduate students have been given the qualification of a teacher of English and German.
At the beginning of Perestroika in the mid-1980s a new period in the life of the Institute began. In 1986, the Institute got a new rector Doctor of Philological Sciences, professor, V. Kononenko. He put much energy and efforts into the development and transformation of the Institute into a modern European educational establishment, authoritative scientific centre.
First of all, the material and technical basis of the institute was strengthened. A new classroom building, a concert hall and a library holding more than 500,000 volumes began operating. In 1988, the doors of the eight-storey building of the humanities faculties were opened for students. An active process of updating the content of higher education and searching for new, more effective teaching methods began. In 1987, the Graphic Art Department was established to train teachers of Fine Arts, Drawing and Manual Training.
The proclamation of Ukrainian independence and the development of national statehood created new conditions for the development of the educational system, including higher education. New tasks were given to teachers and scientists: it was necessary to provide residents with a wider, more thorough education that could meet the standards of modern life, and this was possible only at a classical university. That is why the teaching staff of the Institute submitted an application for university status to the first President of Ukraine, L. Kravchuk. On September 26, 1992, L. Kravchuk signed the decree on the reorganization of the Ivano-Frankivsk State Pedagogical Institute into Vasyl Stefanyk Precarpathian University. At that time, the educational establishment, with its scientific potential, occupied one of the leading places among the pedagogical institutions of Ukraine. Educational activities and research work were carried out by 240 staff members at 36 departments, among them 20 Doctors of Sciences and professors and 181 Candidates of Sciences and associate professors. Most of them had already worked according to university standards and hence provided much deeper knowledge than the Pedagogical Institute had required. There were 3,950 full-time students and 2,400 part-time students.
The development of the educational establishment in the status of university gained momentum.
1994 – Department of Regional Problems of the Institute of Political and Ethnic Studies of the National Academy of Sciences of Ukraine and Vasyl Stefanyk Precarpathian University was created.
1996 – Research Institute of Ukrainian Studies was established.
1999 – Institute of Physics and Chemistry; Institute and Pedagogical College were opened for work in Kolomyia.
1997 – In the fifth year of its existence in independent Ukraine, the University was training 7,000 students and junior specialists in 22 specialties.
New faculties were established: Law, Economics, Philosophy, Natural Sciences, and Preparatory Courses. New specialties were introduced because of lack of professionals in the region: Jurisprudence, Finance and Credit, Accounting and Auditing, Psychology, Religious Studies, Polish language and literature, Chemistry, Biology, Decorative and Applied Arts, Design, and Social Pedagogy.
2001 – Faculty of Pedagogy was reorganized into Institute of Pedagogy. Faculties of Graphic Arts and Musical Pedagogy were reorganized into Institute of Culture and Arts – later into the Institute of Arts.
In 2002, Law Institute was established on the basis of the same-name Faculty. Just a year later, the Institute of Tourism started its work and welcomed the first students.
Considering the significant contribution of the Precarpathian University to the preparation of highly qualified specialists and the fruitful scientific and educational work of the teaching staff, Vasyl Stefanyk Precarpathian University was granted national status by Presidential Decree No. 958 of 21 August 2004 and by the Decree of the Ministry of Education and Science of Ukraine No. 718 of 13 September 2004.
In 2005, Doctor of Physical and Mathematical Sciences, Professor B. Ostafiichuk was elected as rector of the University. That year the Computer Maintenance Department was established in the University Scientific Library; the Institute of History and Political Science started its work at the Faculty of History. The Scientific and Research Laboratory was established at the Department of Physical Rehabilitation of the Faculty of Physical Education and Sports. A separate division of the University – Law College – was established a year later. The Publishing and Information Department started its work as a part of the Centre for Information Technology; the Department of Labour, Environmental and Agricultural Law was established at Law Institute; the Department of Statistics and Higher Mathematics was established at the Faculty of Mathematics and Computer Science. In 2008, the University founded the Centre for Bell Tolling and the Scientific and Research Centre "Psychology of Personality Development."
In 2009, the Scientific and Educational Centre "Nanomaterials in Generating and Energy Storage Devices" was established on the basis of the Precarpathian National University with the assistance of U.S. Agency for International Development (CRDF) and the Ministry of Education and Science of Ukraine within the framework of Bilateral Cooperation Program in Scientific and Technical Research and Education (CREST). Financing of the Centre was provided by CRDF and the Ministry of Education and Science on a parity basis. The Centre is the first to be supported by the U.S. Agency for International Development. The program aimed to accelerate Ukraine's transition to knowledge economy on the basis of science and higher education strengthening in Ukraine.
In 2010, the Statistical Analysis Laboratory was established; the Department of Tourism and Tourism Specializations started its work at the Institute of Tourism; the Institute of History and Political Science established the Department of Foreign Languages and Translation and the Room of Numismatics. The Centre for Innovation Activity and Technology Transfer started its work as a part of the Scientific and Research Unit of the University. Sports Complex "Olympus" was opened. The Educational and Scientific Laboratory of Carbon Nanomaterials and Supercapacitors was established a year later.
In April 2012, Doctor of Political Sciences, Professor I. Tsependa was elected rector of the University. The educational and scientific base of the University and its scientific potential began to grow; new educational divisions, courses and specialties appeared, and the number of students increased. That year, in May, the Department of Life Safety was established. The Department of Educative Work was reorganized into the Department of Educative, Psychological and Educational Work. The Educational and Scientific Institute of Postgraduate Education and Distance Learning was established. The Licensing and Accreditation Department started its work; a separate division, Ivano-Frankivsk College of SHEI "Vasyl Stefanyk Precarpathian National University", was established. In addition, two international projects were implemented successfully: the establishment of the International University Centre of Ukrainian-Polish Youth in the village of Mykulychyn, Nadvirna District, and the renovation of the unique high-altitude observatory on Mount Pip Ivan, funded by the EU.
In 2014, an educational and scientific laboratory of biological ecology was opened at the Institute of Natural Sciences, and the University became host to National Contact Points (NCPs) of the EU Horizon 2020 programs for "Nanotechnologies, advanced materials and advanced modern production". The Department of Journalism was created at the Institute of Philology the same year.
The monument to Roman Huryk was unveiled on October 14, 2014 at the University campus. Huryk, a student of the Faculty of Philosophy, was killed in February 2014 in Kyiv during the Revolution of Dignity.
Today educational and scientific work is carried out by 1003 full-time lecturers in 80 departments, among them 108 doctors of science and professors and 635 candidates of sciences and associate professors. There are 3 Academicians of the State Academies of Sciences of Ukraine, 1 Corresponding Member of the NAS of Ukraine, 1 corresponding member of the Academy of Arts of Ukraine, 3 National Artists of Ukraine, 10 Honoured Artists of Ukraine, 16 Honoured workers of Culture of Ukraine, 13 Honoured workers of Science and Technology of Ukraine, 5 Honoured workers of Arts of Ukraine, 9 Honoured workers of Education of Ukraine, 1 Honoured worker of Higher School of Ukraine, 7 Honoured lawyers of Ukraine, 3 Honoured painters of Ukraine, 1 Honoured economist of Ukraine, 3 Honoured workers of Physical Culture and Sports of Ukraine, 3 Honoured trainers of Ukraine, and 8 Honoured Masters of Sports of Ukraine.
There is a postgraduate course in 54 specialities for the Doctor of Philosophy degree and in 7 specialities for the Doctor of Science degree. About 16 thousand students are trained in 15 fields of study and 48 specialities for Junior Specialist, Bachelor's and Master's degrees. Over the time of its activity the University has trained more than 140 thousand specialists. The University maintains close relations with many scientific and educational institutions around the world, and prepares and implements joint research programs.
Today the University has seven leading scientific schools: the Research School of Physical and Chemical Problems of Materials Study of Thin Films, the School of Magnetism and Nanotechnologies, the Precarpathian History School, the History of Education and Pedagogical Thought in Galicia, the Problems of Semantic Syntax, the Precarpathian Practical Academic School of Comparative Studies, and Bioecology Scientific School "Mountain Forestry and Bioindication".
Rectors
2012 – Tsependa, Ihor Yevhenovych
2005-2012 – Ostafiichuk, Bohdan Kostiantynovych
1986-2004 – Kononenko, Vitalii Ivanovych
1982-1986 – Kucheruk, Ivan Metrofanovych
1980-1982 – Vasiuta, Ivan Kyrylovych
1967-1980 – Ustenko, Oleksandr Andriyovych
1957-1967 – Kravets, Vasyl Musiyovych
1956-1957 – Shevchenko, Ivan Hryhorovych
1953-1956 – Hrechukh, Hryhorii Teodorovych
1950-1953 – Shvetsov, Kostiantyn Ivanovych
1949-1950 – Buialo, Viktor Andriyovych
1945-1949 – Bozhko, Trokhym Zakharovych
1940-1941 – Plotnytsky, Fedir Ivanovych
Institutes and Faculties
Institute of Philology
Chair of Ukrainian Language
Chair of Ukrainian Literature
Chair of World Literature and Comparative Literature Studies
Chair of Slavonic Language
Chair of General and German Language Studies
Chair of Journalism
Institute of Pedagogy
B. Stuparyk Chair of Pedagogy
Chair of Philology and Methodology of Elementary Education
Chair of Theory and Methodology of Pre-school and Remedial Education
Chair of Mathematical and Natural Disciplines of Elementary Education
Chair of Theory and Methodology of Elementary Education
Chair of Social Pedagogy and Social Work
Chair of Art Disciplines of Elementary Education
Faculty of Philosophy
Chair of Pedagogic and Age Psychology
Chair of General and Clinical Psychology
Chair of Philosophy and Psychology
Chair of Religion Studies, Theology and Cultural Studies
Chair of Social Psychology
Faculty of Mathematics and Computer Science
Chair of Differential Equations and Applied Mathematics
Chair of Mathematical and Functional Analysis
Chair of Information Science
Chair of Statistics and Higher Mathematics
Chair of Algebra and Geometry
Chair of Informational Technologies
Faculty of Foreign Languages
Chair of English Philology
Chair of German Philology
Chair of Foreign Languages
Chair of French Philology
Faculty of Physics and Technology
Chair of Computer Engineering and Electronics
Chair of Theoretical and Experimental Physics
Chair of Materials Science and Latest Technologies
Chair of Physics and Chemistry of Solids
Faculty of Economics
Chair of Finance
Chair of Accounting and Audit
Chair of Management and Marketing
Chair of Economic Cybernetics
Chair of Theoretical and Applied Economics
Faculty of Physical Education and Sports
Chair of Theory and Methods of Physical Training and Sports
Chair of Sport and Pedagogic Disciplines
Chair of Physical Rehabilitation
Educational and Research Institute of Postgraduate Education and Distance Learning
Chair of Management and Business Administration
The Scientific Library
The Scientific Library of Vasyl Stefanyk Precarpathian National University was founded in 1944. In 2000, the library received the status of a scientific institution. It holds a substantial and unique collection (more than 600,000 books in Ukrainian, Russian, Polish, English, French, German, Chinese, Czech, Spanish and other languages, 20,000 musical publications, and 50,000 periodicals, authors’ abstracts, dissertations and graduation projects), an integrated electronic catalogue and electronic databases.
The library consists of five departments:
Scientific and Educational Literature Department;
Acquisition and Scientific Processing Department;
Information and Bibliography Department;
Scientific and Methodological Department;
Information Technology and Computer Software Department.
Readers also have access to 16 reading rooms with a capacity of 980 seats, 2 circulation desks, and 4 library departments in Kolomyia, Kalush, Rakhiv, and Chortkiv.
The Information and Bibliography Department is the library's main information centre, offering access to catalogues, indexes and bibliographic descriptions. The reference collection of the Department contains more than 6 thousand copies.
In October 2005, the automated library system “UFD/Biblioteka” was purchased; its goal is to create a unified automated bibliographic information system. A Scientific Library website has been operating since 2008, where users can find information about the divisions and units of the library as well as material useful for teaching and research. In particular, permanent access to the electronic library catalogue is available via the Internet. Through the local network and the library website, users can access the electronic library and thus the full-text scientific, educational and methodical publications by the scholars of the University. Interlibrary Loan and Electronic Document Delivery services are also provided, and students can use the services of the Ivano-Frankivsk Region libraries.
A virtual bibliographic reference service answers queries daily as they are received. The electronic reading room is equipped with 25 workplaces where visitors have access to the collection of electronic documents and local databases.
Book exhibitions are on display in the reading halls, and literary soirées, round tables and readers' conferences are organized there. The library staff provides ongoing methodological guidance for first-year, undergraduate and final-year students.
International Cooperation
In order to implement the principles of the Bologna Declaration, Vasyl Stefanyk Precarpathian National University pays particular attention to international cooperation and continues to expand and improve this activity. To realize its plans for international cooperation in education, the University has made and continues to make a number of arrangements that demonstrate its development in this sphere.
The University pursues the following objectives:
- to cooperate with the most competitive universities in Central and Eastern Europe;
- to implement major joint projects in order to be recognized and to stand out among other universities;
- to provide exchanges of students, postgraduates and lecturers with universities in Europe and Asia in order to expand cooperation with foreign partners.
Establishing joint research with international educational institutions is a core element of the University's activity.
It is important to note that Vasyl Stefanyk Precarpathian National University and the University of Warsaw (Poland) are among the creators of the Consortium between the University of Warsaw and 10 Ukrainian universities. Since 2014 the University has also been part of the Consortium of universities of the Baltic region and Ukraine.
The University has concluded 56 partnership agreements with foreign educational and scientific institutions. Its partners include academic and research institutes of the University of Warsaw, the Jagiellonian University in Kraków, the Academy of Mining and Metallurgy in Kraków, the Pomeranian Academy in Słupsk, the University of Rzeszów, Vytautas Magnus University in Kaunas, the University of Latvia, Vilnius University, the University of Valencia, the University in Port, Lund University, the University of Poitiers, the Tien Shan Classic University, the Charles University in Prague, the Brest State Technical University, the University of the Republic of San Marino, and the Stefan cel Mare University of Suceava (Romania).
Vasyl Stefanyk Precarpathian National University is a member of the following international associations and organizations: the European University Association (EUA) – Brussels, Belgium; the Eurasian Association of Universities (EAU) – Moscow, Russia.
The European Commission has supported the University in implementing the following projects:
"The International Meeting Centre of Ukrainian and Polish Students" (Warsaw, Poland, the University of Warsaw)
"The Reconstruction of Astronomical Observatory on Mountain Pip Ivan" (Warsaw, Poland, the University of Warsaw).
Students of Vasyl Stefanyk Precarpathian National University have the opportunity to take part in various international activities offered by the University: training abroad, double diploma programs (International Relations, Science of Law, Computer Science, Mathematics, Physics, Philosophy, Tourism, Primary Education, and Physical Training), participation in international conferences, student exchanges, and research work at foreign educational institutions.
In 2014-2015 the number of assignments completed abroad by students and faculty members increased in comparison with 2013-2014.
In addition, the international standing of Vasyl Stefanyk Precarpathian National University is growing: state and public institutions organize a range of international events, and the number of official foreign visits increases every year.
Scientists successfully participate in various international scientific and educational projects, and grant providers promote the development of international cooperation. In particular, within the cross-European cooperation programme, Vasyl Stefanyk Precarpathian National University has become a partner of the IANUS II Erasmus Mundus programme. It is carried out with the financial support of the European Commission, which provides mobility for bachelor's and master's students, postgraduates, lecturers, and staff of EU universities and partner-country institutions.
References
Official site
Education in Ivano-Frankivsk
National universities in Ukraine |
424851 | https://en.wikipedia.org/wiki/Pegasos | Pegasos | Pegasos is a MicroATX motherboard powered by a PowerPC 750CXe or PowerPC 7447 microprocessor, featuring three PCI slots, one AGP slot, two Ethernet ports (10/100/1000 & 10/100), USB, DDR, AC'97 sound, and FireWire. Like the PowerPC Macintosh counterparts, it boots via Open Firmware.
For hard disk drive booting, the Open Firmware implementation, called SmartFirmware, requires an RDB boot partition that contains either an affs1 or an ext2 filesystem. Note that any changes to the ext2 on-disc format may prevent booting; it is, however, possible to add some ext3 features so long as the volume can still be recognized as ext2.
The Pegasos was sold by Genesi USA, Inc., and designed by their research and design partner bplan GmbH based in Frankfurt, Germany.
There are two versions of the system: The Pegasos I and the Pegasos II.
Pegasos I
The Pegasos I supports the IBM PowerPC 750CXe (G3) CPU, has 100 Mbit/s Ethernet onboard and uses registered 168-pin PC133 SDR-SDRAM. It was discontinued after a hardware bug was discovered in the MAI Logic Articia S northbridge. Later versions of the Pegasos I came with a hardware fix designated "April"; further improvements were made in an "April 2" design, which solved further problems. It has been replaced by the Pegasos II.
Pegasos II
The Pegasos II uses a Marvell Discovery II MV64361 northbridge, removing the need for the "April" chipset fix on the previous model, and additionally offers integrated Gigabit LAN and DDR support, and the ability to use the Freescale "G4" processor line.
The 750CXe (G3) CPU boards do not require a cooling fan and have thus been marketed as "cool computing". The G4 boards are based around the Freescale MPC7447 chip with a small fan. Passive cooling solutions are possible and are sold with the "Home Media and Communication System", which is based on the Pegasos II G4.
Genesi discontinued production of the Pegasos II in 2006, as the result of new European Union legislation requiring the use of more expensive and lead-free solder under the Restriction of Hazardous Substances Directive (RoHS).
Open Desktop Workstation
The Open Desktop Workstation, or ODW, is a standardized version of the Pegasos II. It was the first open-source-based PowerPC computer and gave PowerPC a host/target development environment. Genesi has released the complete specifications (design and component listing) free of charge. The ODW-derived Home Media Center won the Best in Show award at the Freescale Technology Forum in 2005, and the ODW carries an ATI certification and a "Ready for IBM Technology" certification.
Specification
Freescale 1.0 GHz MPC7447 processor
512 MB DDR RAM (up to 2 GB in two slots, but using both slots simultaneously is possible only with the 2B5 revision)
80 GB ATA100 hard disk
Dual-Layer DVD±RW Drive
Floppy disk support
3× PCI slots (32 Bit / 33 MHz)
AGP based ATI Radeon 9250 graphics (DVI, VGA and S-Video out)
4× USB
PS/2 mouse and keyboard support
3× FireWire 400 (two external)
2× Ethernet ports, 100 Mbit/s and 1 Gbit/s
AC'97 sound - in/out, analog and digital (S/PDIF)
PC game/MIDI-port
Parallel and serial ports (supporting IrDA)
MicroATX motherboard (236×172 mm)
Small Footprint Case - (92×310×400 mm)
Operating system support
Several operating systems run on the Pegasos platform. Genesi has actively supported efforts to port and optimize operating systems and applications for its computers.
MorphOS is broadly compatible with legacy Commodore Amiga applications which profess to be "OS friendly" (meaning they do not access native Amiga hardware directly), as well as a growing number of native applications. Genesi is the primary sponsor for MorphOS.
AmigaOS 4.1 support was announced by Hyperion on 31 January 2009.
Linux distributions including Debian GNU/Linux, MontaVista Linux, openSUSE, Yellow Dog Linux, Gentoo Linux and Crux PPC are also available for the Pegasos. Support for the Pegasos as a platform device has been integrated into the Linux kernel mainline as of kernel version 2.6.13.
Mac OS – It is possible to run the Classic Mac OS and Mac OS X on the Pegasos using Mac-on-Linux, although doing so is reportedly in violation of Apple's EULA.
NetBSD support was introduced in release 5.0.
OpenBSD – Genesi hired a developer in 2002 to port OpenBSD to the Pegasos II. Both Pegasos I and Pegasos II boards were supported. The relationship ended poorly in 2004, with the developer not being paid for the work he had done (due to Genesi's cashflow problems), and owing to a lack of documentation from Genesi/bPlan, support was completely removed after one release cycle.
OpenSolaris – Genesi and Freescale were initial supporters of the OpenSolaris port to PowerPC, with the Pegasos II used as the reference platform for development.
QNX supports the Pegasos platform.
Symobi is available as a demo image.
Firmware
Pegasos I/G3 "PRE-APRIL", Board: 1A (0.1b73), CPU: 750 CX 1.0, SF: 1.1 (20020814)
Pegasos I/G3, Board: 1A1 (0.1b112), CPU: 750 CX 1.0, SF:1.1 (20021203121657)
Pegasos I/G3, Board: 1A1 (0.1b114), CPU: 750 CX 1.0, SF: 1.1 (20030317114750)
Pegasos II/G4, Board: 1.1, CPU: 744X 1.1, SF: 1.2 (20040224)
Pegasos II/G3, Board: 1.1 (0.2b1), CPU: 750 CX 1.0, SF: 1.2 (20040402193939)
Pegasos II/G4, Board: 1.1, CPU: 744X 1.1, SF: 1.1 (20040405)
Pegasos II/G4, Board: 1.1, CPU: 744X 1.1, SF: 1.2 (20040405)
Pegasos II/G4, Board: 1.2, CPU: 744X 1.2, SF: 1.1 (20040505)
Pegasos II/G4, Board: 1.0 (2B3), CPU: 744X 1.0, SF: 1.2 (20040810112413)
Pegasos II/G4, Board: 1.2, CPU: 744X 1.1, SF: 1.2 (20040810112413)
Pegasos II/G3, Board: 1.2, CPU: 750 CX 1.0, SF: 1.2 (20040810112413)
Pegasos II/G4, Board: 1.2 (2B2), CPU: 744X 1.2, SF: 1.2 (20040810112413)
Pegasos II/G4, Board: 1.2, CPU: 744X 1.2, SF: 1.2 (20050602111451)
Pegasos II/G4, Board: 1.2 (2B5), CPU: 744X 1.2, SF: 1.2 (20050808153840)
Pegasos II/G4, Board: 1.2 (2B5), CPU: 744X 1.2, SF: 1.2 (20051216161829)
IKARUS low level console
Press at a serial console (115200 baud) while booting.
a <addr> address
b
c CPU PVR value
g <> go
i <> in
l
m memory size?
o <> out
q shutdown
r read
s
v
w <32 bit value> write
x exit
z
inc address
dec address
=B00000000,I00,O00;
References
External links
Pegasos Support Forum & Community
Review of the Pegasos on Obligement magazine
Review of the Pegasos II on Obligement magazine
The Pegasos book - Free ebook about the Pegasos PowerPC and its operating systems (MorphOS, Linux, Mac OS X with Mac-on-Linux).
Pegasos II AGP support ? (in ANN forum; about controversy of 'AGP' type)
ODW specification at PowerDeveloper.org
Linux resources for ODW at Freescale
SmartFirmware
PowerPC mainboards |
23442214 | https://en.wikipedia.org/wiki/Z%2BF%20UK | Z+F UK | Z+F (Zoller & Fröhlich) is a supplier of high-speed, accurate, phase-based laser measurement and scanning systems. The company supplies laser scanning hardware, software and scanning services capturing high-resolution data. The firm covers a wide spectrum in the field of laser measurement technology: it develops hardware and software, and offers sales and product training.
Historical background
Z+F is a privately owned company, founded in 1963 by Hans Zoller and Hans Fröhlich. Since the death of Hans Zoller in 1975, the company has remained under the ownership of the Fröhlich family. The current Managing Director is Dr. Christoph Fröhlich, who is the son of Hans Fröhlich.
Z+F has several businesses within the group umbrella. These include ferrules and ferrule machines, wiring systems and laser scanning. The company has specialised in supplying high-speed laser scanning systems since the early 1990s, following the completion of a doctorate on the subject by Christoph Fröhlich.
3D Laser scanning Products
The firm's range of laser measurement systems includes the M-CAM camera systems, the PROFILER systems, the IMAGER 5006i, and the IMAGER 5006EX (3D) systems. Each of the measurement systems operates using high-speed, phase-based technology.
The IMAGER 5006 was the first true "stand-alone" laser scanner worldwide: its integrated hard disk, power supply and operating controls enable completely wireless operation. This technology has been carried over into the improved IMAGER 5006i, which has a high ambiguity interval, and the quality of its point cloud data allows for reduced post-processing.
Software
LFM (Light Form Modeller) software has been designed to work with the high-resolution data captured by the IMAGER 5006i 3D laser scanner. LFM is used to take the data from the field, through registration and viewing, to delivery on the designer's or operator's desktop.
LFM Software Packages include: LFM Register, LFM Modeller, LFM Server, LFM NetView and LFM Viewer/ViewerLite.
On 3 October 2011, AVEVA announced the acquisition of LFM (Light Form Modeller) software division of Z+F UK Limited.
Markets
Their phase-based laser scanning systems were first used in the rail industry in the early-to-mid 1990s to capture the detail of rail infrastructure and tunnel information for route planning and design work. The use of high speed, phase-based laser measurement enabled the rapid data capture of rail infrastructure within a reasonable time period that had previously not been possible.
Since that time, the growth of its laser scanning systems has extended to other markets where accurate 3D high resolution data is required and carries significant value.
The use of high-speed laser scanning within the process industry has transformed the speed and cost of data capture associated with revamp and brownfield projects where previously the use of technologies such as photogrammetry would have proved too time-consuming or prohibitively expensive. In the automotive industry high-speed laser scanning has been used for revamp projects and also to reverse engineer direct plant information into CAD for off-line simulation work associated with new model introduction.
There is an increasing use of the firm's scanning systems in heritage, architectural and civil engineering applications as clients realise the added value of the high-resolution data in these markets. In these markets, the use of laser scanning is becoming more prevalent as the processing power of computer hardware grows, and is able to handle the large data sets that laser scanners capture. Software packages such as LFM have become much more able to handle these large data sets and are increasingly offering deliverables specific to these markets such as orthophoto and 2D drawing creation.
Geographical Representation
The firm has more than 300 employees in three locations: the head office is in Wangen, Germany, and there are two more offices in Manchester, UK and Pittsburgh, USA. In addition, it is represented throughout the world via authorised re-sellers and service providers.
The firm works in collaboration with other companies, such as Leica Geosystems (who sell a version of the IMAGER 5006 as the HDS 6000 and the IMAGER 5006i as the HDS 6100). Z+F also has an OEM arrangement with Amberg for the PROFILER systems, incorporated in a trolley system for the rail and tunnelling markets. The IMAGER PRO C, essentially the same scanner, is now sold by API sensor.
References
Further reading
External links
Zoller & Fröhlich Official website
Z+F Laser Official website
Z+F UK Ltd Official website
Z+F USA Inc Official website
Companies based in Baden-Württemberg
Electronics companies of Germany |
9278355 | https://en.wikipedia.org/wiki/Chsh | Chsh | chsh (an abbreviation of "change shell") is a command on Unix-like operating systems that is used to change a login shell. Users can either supply the pathname of the shell that they wish to change to on the command line, or supply no arguments, in which case chsh allows the user to change the shell interactively.
Usage
chsh is a setuid program that modifies the /etc/passwd file, and only allows ordinary users to modify their own login shells. The superuser can modify the shells of other users, by supplying the name of the user whose shell is to be modified as a command-line argument. For security reasons, the shells that both ordinary users and the superuser can specify are limited by the contents of the /etc/shells file, with the pathname of the shell being required to be exactly as it appears in that file. (This security feature is alterable by re-compiling the source code for the command with a different configuration option, and thus is not necessarily enabled on all systems.) The superuser can, however, also modify the password file directly, setting any user's shell to any executable file on the system without reference to /etc/shells and without using chsh.
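The check against /etc/shells can be illustrated with a minimal Python sketch. This is not the actual chsh source code, only an approximation of the validation step; real implementations differ in details such as how a missing /etc/shells file is treated.

    # Minimal sketch (not the real chsh): validate a requested login shell
    # against /etc/shells, requiring an exact pathname match.
    def is_valid_login_shell(path, shells_file="/etc/shells"):
        try:
            with open(shells_file) as f:
                allowed = [line.strip() for line in f
                           if line.strip() and not line.startswith("#")]
        except FileNotFoundError:
            return False  # behaviour without /etc/shells varies between systems
        return path in allowed

    print(is_valid_login_shell("/bin/bash"))

Actual chsh implementations perform a check of this kind before rewriting the user's entry in /etc/passwd.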
On most systems, when chsh is invoked without the -s command-line option (which specifies the name of the shell), it prompts the user to select one. On Mac OS X, if invoked without the option, chsh displays a text file in the default editor (initially set to vim) allowing the user to change all of the features of their user account that they are permitted to change, the pathname of the shell being the name next to "Shell:". When the user quits vim, the changes made there are transferred to the /etc/passwd file, which only root can change directly.
Using the -s option (for example: chsh -s /bin/bash) greatly simplifies the task of changing shells.
Depending on the system, chsh may or may not prompt the user for a password before changing the shell or entering interactive mode. On some systems, use of chsh by non-root users is disabled entirely by the sysadmin.
On many Linux distributions, the chsh command is a PAM-aware application. As such, its behaviour can be tailored, using PAM configuration options, for individual users. For example, a directive that specifies the pam_listfile module can be used to deny access to individual users, by naming a file of the usernames to deny with the file= option to that module (along with the sense=deny option).
Portability
POSIX does not describe utilities such as chsh, which are used for modifying the user's entry in /etc/passwd. Most Unix-like systems provide chsh. SVr4-based systems provided a similar capability with passwd. Two of the three remaining systems (IBM AIX and HP-UX) provide chsh in addition to passwd. The exception is Solaris, where non-administrators are unable to change their shell unless a network name server such as NIS or NIS+ is installed. The obsolete SGI SVr4 system IRIX64 also lacked chsh.
See also
Comparison of command shells
References
Further reading
— some examples of invoking chsh with its options
External links
Unix user management and support-related utilities
Standard Unix programs |
231242 | https://en.wikipedia.org/wiki/Neo%20%28The%20Matrix%29 | Neo (The Matrix) | Neo (born as Thomas A. Anderson, also known as The One, an anagram for Neo) is a fictional character and the protagonist of The Matrix franchise, created by the Wachowskis. He was portrayed as a cybercriminal and computer programmer by Keanu Reeves in the films, as well as having a cameo in The Animatrix short film Kid's Story. Andrew Bowen provided Neo's voice in The Matrix: Path of Neo. In 2021, Reeves reprised his role in The Matrix Resurrections with what Vulture calls "his signature John Wick look".
In 2008, Neo was selected by Empire as the 68th Greatest Movie Character of All Time... Neo is also an anagram of "one", a reference to his destiny of being The One who would bring peace. There are claims that a nightclub in Chicago inspired the name of the character. Neo is considered to be a superhero.
Fictional character biography
Thomas A. Anderson was born in Lower Downtown, Capital City, USA on March 11, 1962 according to his criminal record, or September 13, 1971 according to his passport (both seen in the film). His mother was Michelle McGahey (the name of the first film's art director) and his father was John Anderson. He attended Central West Junior High and Owen Patterson High (named after the film's production designer). In high school, he excelled at science, math and computer courses, and displayed an aptitude for literature and history. Although he had disciplinary troubles when he was thirteen to fourteen years old, Anderson went on to become a respected member of the school community through his involvement in football and hockey.
At the start of the series, Neo is one of billions of humans neurally connected to the Matrix, unaware that the world he lives in is a simulated reality.
The Matrix
In his normal life, he is a quiet programmer for the "respectable software company" Meta Cortex; while in private, he is a computer hacker who penetrates computer systems illicitly and steals information under his hacker alias "Neo". He also sells illegal untraceable computer systems and hacking programs along with controlling computer viruses stashed on CDs and diskettes. During his time as a hacker, Anderson has learned about something known only as "The Matrix".
During the years prior to the events of The Matrix, Neo has spent his time trying to find the one man who he thought could tell him what the Matrix was; a supposed terrorist known only as Morpheus. After an encounter with another hacker, Trinity, Anderson is suddenly contacted by Morpheus via a cell phone mailed to his office, but is almost immediately captured by the virtual reality's Agents, led by Agent Smith. After refusing to cooperate with the agents, Neo has an electronic bug implanted within his Matrix-simulated body so that his actions can be tracked and those seeking to make contact from the free world can be traced and destroyed. He is then contacted by Trinity, freed from the bug, and taken to meet Morpheus.
Neo is offered a choice to remain in his everyday life and forget about the Matrix or to learn what the Matrix really is. Choosing to learn what the Matrix really is, he takes a drug (commonly called the "red pill") which is actually a program designed to disrupt his mind's neural connection to the Matrix and make it easier for his real body to be found and awakened in the real world. He wakes up disoriented and alarmed to find himself naked, weak, and hairless in a pod full of what can be assumed to be an artificial amniotic fluid. He also discovers that he is connected to a series of thick cables, by way of a number of plugs that are grafted to his body, including one plugged directly into the base of his skull which is later explained as the means through which his mind was connected to the Matrix. Upon his "birthing" into the real world, he is discovered by a machine which grabs him by the neck and removes all of his plugs and cables before flushing him out of his fluid tank down into the cold sewers below the Earth's surface.
Neo is rescued by Morpheus and his body is healed of the effects of his atrophy incurred while inside the pod. Once Neo regains consciousness and is able to walk, Morpheus tells Neo the truth about the Matrix; that it is a simulated world to which humans are connected, "a prison for your mind", as stated by Morpheus, while unknown to them, their bodies are used as a power source for a race of sentient machines which, ironically, mankind created. He also tells Neo about the One, a human with the power to manipulate the Matrix, who has been foretold to end the war between humans and machines, and says that he believes Neo is The One.
The next day, Neo begins his "training" and becomes knowledgeable in many forms of combat, as well as vehicle and weapons operations, by having various training programs uploaded directly into his brain. He also receives further instruction from Morpheus on subjects such as "freeing his mind" from the restrictions of the Matrix, essentially overcoming the physics engine the Matrix operates on. He is also made aware of the existence of its Agents, programs designed by the machines to maintain order and dominance over the human race, created with abilities such as dodging bullets, running at high speeds, jumping great distances, and physically possessing people in the Matrix. If killed, they simply take over a new host.
After several days aboard Morpheus' hovercraft, the Nebuchadnezzar, Neo is taken to meet the Oracle, who has the power of foresight within the simulated world. She tells him that he has the "gift", but appeared to be waiting for something, and that in his present life he is not the one. The Oracle warns him that a situation will arise when he will have to choose between saving his own life or that of Morpheus.
Returning from the Oracle to a landline phone, which serves as the exit for "red pills" to leave the Matrix, the crew of Morpheus's ship is betrayed by Cypher, a "red pill" who is willing to give the agents Morpheus in exchange for the promise of being mentally erased and reconnected permanently to the Matrix as an escape from the real world's bleakness. In the Matrix, SWAT troops kill Mouse. Cypher jacks out and grabs a real-world lightning gun, injuring Tank and killing Dozer. He kills Apoc and Switch, who remain jacked in, by prematurely terminating their connections. In the Matrix, Smith defeats and captures Morpheus. Neo and Trinity try to jack out, but Cypher threatens to unplug them too; he is thwarted when Tank recovers, kills Cypher, and helps Neo and Trinity jack out.
Neo learns the Agents seek to hack into Morpheus's brain in order to force him to tell them the access codes to the mainframe computer within humanity's last refuge, the city of Zion. After refusing to sacrifice Morpheus to prevent this, Neo decides to jack in and attack the building where he is being held. He and Trinity proceed to fight their way to the roof level of the building, where they are confronted by agent Jones. Neo unloads two entire magazines on Jones as he dodges each bullet effortlessly. When Jones returns fire, Neo proves capable of dodging bullets himself, fluidly moving in a way only an agent was thought to be capable of, although he is not yet as fast as them, as his leg is grazed by a bullet. Trinity then shoots Jones at point-blank range. Using an armed chopper, Neo and Trinity successfully rescue Morpheus. As Neo has just successfully rescued comrades from a building protected by heavily armed guards and agents (thought to be an unprecedented feat) Tank and Morpheus both believe Neo is indeed the One. Neo tries to tell Morpheus what the Oracle told him, but Morpheus explains that she merely told Neo what he needed to hear; had he believed himself to be the One, he would likely not have attempted the rescue, which was a necessary step to his emergence as the One.
Reaching the landline phone, Morpheus and Trinity return to the real world, but Neo is trapped by Agent Smith. Trinity admonishes Neo to run, but he stands his ground, having begun to believe that he may be the One. The two of them draw guns and fire them empty, but are able to effortlessly dodge each other's fire. Neo skillfully engages Smith in hand-to-hand combat, almost seeming to be Smith's equal. In the end Neo is briefly incapacitated and held by Smith as a subway train approaches, but at the last minute he is able to get free and backflip up onto the platform, leaving Smith to be run over. However, the agent possesses the body of the conductor and emerges from the train. Realizing that the agents' ability to possess other bodies makes this a fight he cannot win, Neo flees the subway station.
Pursued by Smith and his fellow agents, Neo is able to evade them and reach the location of the landline phone, just to be ambushed and fatally shot in the chest by Smith. Trinity, seeing Neo die in the real world while his mind is still in the Matrix, tells his evidently lifeless body that the Oracle had foretold that she would fall in love with the One. When kissed by Trinity, Neo comes back to life, finally fully emerging as the One. When the agents try to kill him again, Neo simply raises his hand and the bullets freeze in mid-air, then drop harmlessly to the ground. It is then shown that he is able to perceive, interpret and alter the computer code of the Matrix. Completely believing in his new-found powers, he effortlessly fends off agent Smith before forcing himself into the agent's body and destroying it from within. The other two agents quickly flee.
Neo is then seen leaving a message for the machines via phone; a warning that he plans to oppose the machines by freeing as many human minds as possible. He then hangs up and flies into the sky, now having completely made his final metamorphosis into becoming the One.
The Matrix Reloaded
Approximately six months have passed since the events of the first film. Neo is now fully confident in his powers as the One and is able to manipulate the artificial world within the Matrix to a tremendous degree. He no longer requires firearms, relying solely on hand-to-hand combat, and his ability to influence the coding of the Matrix allows him to stop incoming fire from multiple attackers. Dispensing with the long black trenchcoat and black shirt he wears at the conclusion of The Matrix, Neo now prefers a cassock with a high-rise mandarin collar, giving the appearance of a priest or bishop. He and Trinity are now lovers. While Zion prepares for a massive attack by over 250,000 Sentinels, one for each of Zion's 250,000 inhabitants, Neo, unsure of his purpose, seeks more advice from the Oracle. The Oracle directs him on a quest to find the Keymaker, a personified program that has access to numerous backdoors within the system. The Keymaker will be able to lead Neo safely to the Source, the programming heart of the machine world, which contains the programs sustaining the Matrix.
After the Oracle leaves, Agent Smith appears, and it is revealed that Neo had separated Smith from the rest of the Matrix code by shattering him, giving him a life independent of the machines' systems; the two of them now share a "connection" to each other. Smith is no longer an agent of the Matrix but has become more virus-like: he is able to insert his code into other systems and infect other programs and human minds to make copies of himself. Neo fends off hundreds of these Smiths and escapes. Later, he, Morpheus and Trinity steal the Keymaker from his keeper, the Merovingian. During this time, Neo stays behind to fight off the Merovingian's men and becomes separated from the others, being trapped in the Merovingian's mansion in the mountains, five hundred miles away. He flies off to help them and arrives only just in time to save Morpheus and the Keymaker as two agents crash two trucks together.
The Keymaker explains that two power stations elsewhere in the Matrix must be disabled within a short time window to disable the security system of a building where the door to the Source will appear, allowing Neo to reach it. This task is accomplished, but the Keymaker is killed, a hovercraft is destroyed and Trinity is put in jeopardy by the agents of the Matrix, a vision that Neo has seen earlier in his dreams. Entering the door, Neo finds himself confronted by the Architect, a program which created and designed the Matrix and ensures its constant stability.
The Architect presents Neo with a radically different explanation of his origins and purpose, claiming that Neo is actually the sixth "One". He goes on to say that Zion has been destroyed by the machines five times before; faced with the dilemma of allowing humanity to be destroyed or allowing the machines' preferred status quo to be reconstructed, Neo's five predecessors have helped reload or restart the Matrix before being allowed to rebuild Zion with a handful of freed humans. The productive output of the Matrix is based upon this repeated cycle of destruction and recreation, which keeps the human minds that power it in a stable structure: not every human accepts the simulated reality, and if enough of them accumulate, it could lead to a catastrophic system crash of the Matrix and the subsequent extinction of both humanity and the Machines. Neo is told that the Oracle is as much a part of that self-perpetuating design as the Architect, and that the prophecy of the One is designed to ensure the cycle runs smoothly each time.
The Architect warns of the annihilation of all human life in and out of the Matrix if he does not enter the Source door to reload the Matrix, but unlike his predecessors, Neo does not enter the door to the Source; he chooses not to believe the Architect's dire warnings of consequences and to save Trinity instead. Returning to the Matrix, he catches Trinity as she falls from an upper-story window of a power plant. She dies of a gunshot wound to the heart that she sustains during the fall, but Neo brings her back to life by reaching into her virtual body, removing the bullet and restarting her heart.
Together, they exit the Matrix and return to the real world. Neo tells Morpheus that the prophecy was merely another system of control and that Zion's destruction is imminent. A group of sentinels begins to converge on their hovercraft, so Neo and the others are forced to abandon it and flee as it is destroyed. As they flee, Neo suddenly discovers that he can now sense the sentinels, and he shuts them down telepathically; however, the effort causes him to fall unconscious. He and the crew are rescued by the hovercraft Mjolnir (also known as the Hammer), whose crew is dealing with a mystery in the form of Bane, a crew member from another ship who is the only survivor of an ill-fated attack by the Zion fleet. Unbeknownst to anyone, Bane's mind was destroyed by Smith, who at some point overwrote himself onto Bane's avatar, effectively killing Bane, and once Bane was unplugged, Smith took over his real body.
The Matrix Revolutions
The Matrix Revolutions begins immediately after the events of the second film. As a result of his struggle with the sentinels, Neo returns to consciousness to find himself caught in an isolated, train-station-like limbo accessible only by order of the Merovingian, from which his mind is unable to free itself. It is here that he meets Sati, a young program created without a purpose, who is being smuggled into the Matrix by her parents, themselves programs in the machine world. Neo remains trapped until he is freed by Trinity (assisted by Morpheus and Seraph), who threatens the Merovingian in a Mexican standoff.
After a final visit to the Oracle, Neo learns that he has powers over the machines which extend beyond the Matrix, and that Smith is spreading, threatening to destroy both the Matrix and the real world. Building on the Oracle's previous explanations about the nature of choice, Neo learns that the Architect's assertion that his choice to save Trinity would inherently lead to the extinction of humankind was incorrect, since the Architect's nature as a purely mathematical being renders him incapable of seeing past choices made by humans, which he considers mere variables in equations. After telling Neo that he now has the power to choose to end the war and defeat Smith, the Oracle states that "everything that has a beginning has an end, Neo" and informs him that, for true peace to happen, he must travel to the heavily guarded Machine City in the real world and make a truce with the Machines to save both races from extinction at Smith's hands. Soon after Neo leaves, dozens of Smiths arrive and assimilate Seraph, Sati and the Oracle.
Neo and Trinity are given the Logos, a hovercraft commanded by Morpheus's former lover, Niobe, for what seems to others a suicidal journey to the Machine City. Meanwhile, Captain Roland and the Nebuchadnezzar's surviving crew members, Link and Morpheus, return aboard the hovercraft Hammer/Mjolnir to Zion, which is now besieged and losing to the Machines. Neo and Trinity are ambushed by the stowaway Bane/Smith, who blinds Neo with an electric cable but is killed when Neo discovers an ability to "see" programs and Machines without his eyesight. With Trinity as pilot, they guide the Logos past the Machine City's defenses, but the Logos crash-lands and Trinity sustains fatal injuries and dies.
Neo encounters the Deus Ex Machina, a giant machine construct and the leader of the machines, and offers to defeat and destroy Smith in exchange for a truce. The offer is accepted, and Neo enters the Matrix to find that Smith has copied himself throughout the simulated world, now truly threatening the safety and stability of the Matrix. One copy of Smith, having assimilated the Oracle and obtained as much freedom and control over the virtual world as Neo, faces Neo alone. For a while the two fight evenly with no real advantage, but ultimately the tireless Smith begins to wear Neo out and takes control of the fight. Realizing that he cannot win with brute force, Neo allows Smith to assimilate him, which enables the machines to eradicate the Smith virus directly through his body. The process, however, also kills Neo.
Neo's body is taken away by the machines in the Machine City, while below in Zion, the machines stop their attack and depart in deference to the peace that Neo bartered. The Matrix is rebooted and a beautiful sunrise appears over the horizon, created by Sati in Neo's honor.
The Matrix Online
Although the fate of Neo was never truly revealed in the MMORPG sequel, The Matrix Online, his influence had a strong impact on several key events throughout the game.
In the beginning, scattered code fragments of Neo's "RSI" (Residual Self Image) were found on the bodies of the impostor agents who appeared across the Mega City during Chapter 1.1 (later revealed to be sentinels under the command of the General program). When held, these fragments echoed some of Neo's final thoughts in the redpill's mind. Each of the three main post-war political organizations inside the Matrix realized the possible significance of these fragments and began fighting amongst themselves to gather as many as they could.
The Oracle reveals a secret society of exiles known only as the Shapers, the only ones able to bring the fragments together in any significant way, and says that they must be protected from the impostor agents' corruption. The redpills' distrust and organizational differences proved too much for any strong unity, and a Shaper fell into the captivity of the false agents. His power was used to encode part of the One's being onto them, creating the powerful, pale-skinned "N30 Ag3nts".
In Chapter 1.2, Morpheus states that though the machines never returned Neo's physical remains (Neo's body in the real world) to Zion, they did not "recycle" them; a reference to the first film, in which Morpheus tells Neo that the machines liquefy deceased occupants of the Matrix to provide organic sustenance for its living inhabitants. This became the driving force behind Morpheus's descent into fanatical terrorism against the system in an attempt to force the machines to reveal Neo's fate, ultimately leading to his assassination.
The subject of Neo then fell to the sidelines during other struggles until the arrival of the Oligarchs in Chapter 9. The original intruder, Halborn, was notably intrigued by the life of the One and was personally shocked by the implications that Neo's ability to affect Machines outside the simulation had for his search for what he called the "Biological Interface Program".
After Halborn's removal in Chapter 10, little more was questioned until the revelation of the Trinity project, originally headed by the Oracle, in Chapter 12. It was revealed that both Neo and Trinity were actually the culmination of decades of machine research into translating human DNA perfectly into machine code, allowing them to interface directly with technology without the need for simulated interfaces.
The Matrix Resurrections
Sixty years after the events of The Matrix Revolutions, Neo is a video game designer and the creator of a video game series simply known as The Matrix, which uses Neo's previous memories in its plot. Neo is 57 years old in the Matrix program, but 60 years have passed in the real world, making him 97 years old. Neo lives under his real name, Thomas Anderson, and although he does not look his age, others see him as a man with an eyepatch and gray hair. His love, Trinity, is married to a man named Chad, has three children, and lives as "Tiffany". Neo has a series of existential crises, so his therapist has prescribed him blue pills. Neo's boss approaches him with an idea for a fourth Matrix game, but the more Neo and his crew work on the game, the fewer blue pills he takes, and he eventually stops taking them. One day, a man known as "Morpheus" approaches Neo with the red pill, but agents and a police force storm the building where Neo works in order to kill Morpheus, who is revealed to be a rogue agent who opened his mind and decided to help Neo destroy the Matrix. Morpheus kills several police officers while Neo's boss picks up an officer's gun and once again becomes Agent Smith.
The attack is revealed to be a dream, and Neo is still in his therapist's room. A drunk Neo later goes up onto a rooftop and questions his existence, where a woman with blue hair and a trench coat appears beside him. The woman introduces herself as Bugs and tries to reopen his mind through a White Rabbit tattoo. Bugs then shows Neo her crew in an abandoned theater. Morpheus, part of Bugs' group, once again offers a blue pill and a red pill to Neo, who takes the red pill. The team then "unplugs" Neo and, with the help of a friendly machine that has been hacked, he is revived in the real world aboard Bugs' ship, the Mnemosyne.
In the real world, after Morpheus helps Neo regain his memories and fighting style, the Mnemosyne travels to Io, the last human city, led by Niobe. Niobe reveals that, when Zion was nearly destroyed by the machines, she and the rest of the inhabitants escaped and built a new city. After sixty years, Niobe does not trust Neo, but she lets the Mnemosyne crew try to free Trinity with the help of Sati, the sentient program who helped Neo in Revolutions. In the Matrix, Neo's therapist is revealed to be the Analyst, a program designed to study the human psyche and the new leader of both the Matrix and the machines. He explains that after Neo and Trinity's deaths, he was able to resurrect them in order to study them. He found that suppressing their memories while keeping them in close proximity to one another made the Matrix more power-efficient and more resistant to the anomalies that caused the previous iterations to fail. However, Neo's liberation destabilized the system and triggered a fail-safe to reboot the Matrix. The Analyst stalled the reboot by convincing his superiors that threatening to kill Trinity would get Neo to return voluntarily to his pod. The Analyst then taunts Neo in the Matrix, using bullet time to threaten Trinity with a pistol, and forces Neo to meet him in a café that he and Trinity had frequented. At the café, surrounded by riot officers, Neo distracts the Analyst while the team unplugs Trinity. Trinity regains her memories and helps Neo fight the officers. Smith then fights the two but gets away. In a final attempt to be rid of Neo and Trinity, the Analyst activates the Swarm, which forces many people to try to kill Bugs' team. The team survives, and when Neo and Trinity are cornered by police helicopters, they leap from a rooftop and fly to safety. The Analyst is defeated, and Neo and Trinity finally rekindle their romance and remake the Matrix in their own way.
Other works
Neo is briefly featured in the webcomic Casey and Andy by Andy Weir, in which he is remotely disintegrated by the strip's fictionalised mad-scientist version of Andy Weir, who hacks the Matrix on behalf of Agent Bill in exchange for money.
Casting choices
When The Matrix was in its early writing stages, the Wachowskis had martial arts actor Brandon Lee in mind for the role of Neo, but Lee was accidentally killed on the set of the 1994 film The Crow. Tom Cruise, Johnny Depp, Brad Pitt, Nicolas Cage, David Schwimmer, Leonardo DiCaprio, John Cusack, and Mark Wahlberg were also approached for the role. Will Smith, who turned down the role to make Wild Wild West, later said he regretted the decision. Ewan McGregor was said to have been sent the script but turned it down. The film's producers also asked Sandra Bullock to star, which would have changed the character's gender to female, but she declined.
Powers and abilities
Neo has carried, since his conception, the Matrix's source code known as the Prime Program. This gives him the ability to freely manipulate the simulated reality of the Matrix, similar to the authority a system administrator has over a given system. He manifests these abilities as various superhuman powers.
The power Neo exhibits most often in the Matrix is akin to telekinesis: he seems capable of manipulating any object in the Matrix through will alone. By focusing this ability upon himself, he can fly at great speeds and jump great distances. While his speed is never specified, he flies from the Merovingian's mountain manor to the highway "500 miles due south" in a very short time; if it took him roughly 10 minutes to fly from the chateau in the mountains to the freeway in the Mega City, he would have been flying at around 3,000 mph, or just shy of Mach 4. His speed of flight is further exemplified by his ability to escape explosions, and the sonic boom left in his wake can overturn rows of heavy vehicles and cause massive destruction, indicating he can probably fly much faster when pressed. He has used this ability multiple times to stop bullets in mid-flight, first against the Agents and again against the Merovingian.
In addition to these abilities, Neo possesses superhuman strength and agility and is nearly invulnerable to most attacks. Although his physical endurance is immense, Neo can still be harmed or killed, as evidenced by an injury he suffers while blocking a sword attack with his bare hand. His endurance is also finite: when confronted by masses of Smith clones in the second film, Neo was forced to escape rather than continue fighting, even though he easily overpowered at least 40 Smith clones and threw about 50 off him, and upon being disconnected from the Matrix he appeared exhausted and winded. His reflexes are great enough to dodge bullets. Neo's strength and speed have never been accurately measured; he is said to be capable of at least Mach 8, and Mach 10 under stress, but his upper limit has never been shown. In the third movie, Neo has reached his apparent full strength: he is capable of withstanding a direct punch from the Smith enhanced by the Oracle's power and is able to hold his own in a prolonged fight, although his endurance is not without limit, as he can fight this Smith to a standstill but not defeat him.
In the real world, like the other rebels, Neo does not display any of the aforementioned abilities. According to the Oracle, "The power of The One extends beyond the Matrix. It reaches from here all the way back to where it came from." Apparently because of his status as The One, he has a direct connection to the Source in the real world, and can therefore affect everything connected to it, including Sentinels, although the first use of this ability overwhelms him and he falls into a coma as his mind somehow plugs into the Matrix without a physical connection. Neo also begins to perceive everything connected to the Source, including the Machine City itself, as silhouettes of golden light. This ability becomes beneficial after he is blinded in the fight against Smith/Bane, allowing him to see Smith/Bane and kill him.
In The Matrix Resurrections, the resurrected Neo at first does not have access to his powers, although he slowly regains them over time, particularly during his fight with Smith. Neo is now able to create powerful telekinetic shockwaves, particularly when touching Trinity, and while he can still stop bullets, it appears to take him more effort. At one point, he telekinetically deflects a missile into a helicopter and shields himself when Smith smashes a porcelain sink down on his head. However, he is unable to fly while trying to escape from the Analyst's forces after reuniting with Trinity, or after jumping off a roof; instead, Trinity develops the ability to fly and takes them both to safety. Neo appears to have regained his full powers by the end of the movie, as he and Trinity fly off together after confronting the Analyst.
References
External links
The Matrix Trilogy: A Man-Machine Interface Perspective: a study of the trilogy from a strictly technological as opposed to philosophical viewpoint
The Matrix (franchise) characters
American superheroes
Christ figures in fiction
Fictional Jeet Kune Do practitioners
Fictional Krav Maga practitioners
Fictional Piguaquan practitioners
Fictional Shaolin kung fu practitioners
Fictional Wing Chun practitioners
Fictional Zui Quan practitioners
Fictional aikidoka
Fictional avatars
Fictional blind characters
Fictional characters who can move at superhuman speeds
Fictional characters with disfigurements
Fictional characters with superhuman strength
Fictional cyborgs
Fictional hackers
Fictional jujutsuka
Fictional karateka
Fictional kenpō practitioners
Fictional revolutionaries
Fictional software engineers
Fictional taekwondo practitioners
Fictional technopaths
Film characters introduced in 1999
Film superheroes
Male characters in film
Martial artist characters in films
Male superheroes |
21701198 | https://en.wikipedia.org/wiki/Sergio%20Maltagliati | Sergio Maltagliati | Sergio Maltagliati (born 1960 in Pescia, Italy) is an Italian composer and visual-digital artist who works primarily on the Internet.
He had his first musical experiences with the Gialdino Gialdini Musical Band in the early 1970s.
Biography
Sergio Maltagliati, an artist from Florence, Italy, studied music at the Florence Conservatory and then began to paint, creating a new method of writing music in which the score becomes a visual composition.
For Maltagliati, the relation between sound and color creates the sense that a work of art does not end with the artist's last touch on the canvas but continues with the viewer. In this relationship between artist and viewer, neither is subordinate to the other: the viewer can take steps the artist may not dare to take, so this kind of art allows the viewer to create together with the artist.
Sergio Maltagliati has always been deeply interested in a multimedia concept of art. His education, in both music and the visual arts, has placed him in the position to incorporate sign, colour, and sound into a unitary concept of multiple perception, through analogies, contrasts, stratifications, and associations.
He is a composer who joined, at the end of the eighties, the Florentine artistic current that has been active from the end of World War II up to the present and includes Sylvano Bussotti, Giuseppe Chiari, Giancarlo Cardini, Albert Mayr, Marcello Aitiani, Daniele Lombardi and Pietro Grossi. These musicians have experimented with the interaction among sound, sign and vision, a synaesthetics of art derived from the historical avant-gardes, from Kandinskij to futurism, to Scrjabin and Schoenberg, all the way to the Bauhaus.
In 2021 the Luigi Dallapiccola Study Center included him in the Florentine Music Archive of the 20th Century, recognizing him as one of the most significant 20th-century composers active in Florence.
His work in the eighties also involved students, through didactic-educational projects, as performers, following the example of the Music Circus carried out by John Cage in 1984 in a high school in Piedmont. It was the American musician (whom Maltagliati met in 1991 in Zurich for the performance of Europeras 1 & 2) who appreciated this aspect of Maltagliati's work, because it involves young people as performers. Besides offering an approach to music and, specifically, developing a different way of listening, these works aim to extend the concept of artistic creation to the performer and, ultimately, to the user, who often has no traditional artistic training.
Since 1997, Sergio Maltagliati has worked principally with music on the Internet. One of his first compositions intended for the network, netOper@, was the first Italian interactive work for the Web, launched in the spring of 1997 and presented simultaneously in real space and cyberspace. The opera was realized in collaboration with Pietro Grossi, the legendary father of Italian music informatics.
An essential aspect of Sergio Maltagliati's approach is the idea that art is not just the fruit of the composer: creation is always the fruit of collaboration. Two of his works are strongly based on this concept: neXtOper@_1.03 (2001), a work for mobile phones, and midi_Visu@lMusiC (2005), music and images on i-mode mobile phones.
In 1999, he wrote the program autom@tedVisualMusiC, software derived from the visual HomeArt programs designed by Grossi in the 1980s, which were written in BBC BASIC on an Acorn Archimedes A310 computer. The project creates abstract designs and sound, using programming to add movement and make the piece interactive.
In 2001 he participated in the project Interview Yourself by Amy Alexander.
He currently uses a personal computer to set up interactive sound and graphics works, open compositions where the listener has a predominant and decisive role.
Instrumental compositions
1980 Fogli di Diario, lyric for mezzo-soprano, flute and piano (first performance - Belveglio, Asti, 1980).
1983 Iridem for trombone and clarinet.
1984 Inciclo for trombone solo.
1987 La Luce dell'Asia for white voices choir.
1980/89 Sintassi in Rosso e Nero for voice and piano (first performance - Villa Martini, Monsummano Terme, 1989).
1991 Sinfonia (first performance 1991).
1991 Iridem 2 for clarinet and trombone.
1993 Aria48 for voice and orchestra.
Computer Visu@lMusiC
autom@tedVisualMusic, generative visual-music software, creates images and sounds based on precise sound–symbol–colour correspondences, producing multiple variations.
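A correspondence of this kind can be sketched in a few lines of code. The following minimal Python example is an illustrative assumption only, not the actual rules or source of autom@tedVisualMusiC: it assigns each of the 12 pitch classes a hue on the colour wheel and produces a different sequence of note–colour pairs on every run.

import colorsys
import random

# Illustrative sketch of a fixed sound-symbol-colour correspondence:
# each of the 12 pitch classes gets a hue spread evenly around the
# colour wheel, and a simple random walk generates a new sequence of
# (note, colour) pairs on every run. The mapping and the walk are
# assumptions for illustration, not the rules of autom@tedVisualMusiC.

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def pitch_to_rgb(pitch_index):
    """Map a pitch class index (0-11) to an RGB colour via its hue."""
    hue = pitch_index / 12.0
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

def generate_sequence(length=16):
    """Random walk over pitch classes, yielding (note name, RGB) pairs."""
    index = random.randrange(12)
    sequence = []
    for _ in range(length):
        index = (index + random.choice([-2, -1, 1, 2])) % 12
        sequence.append((PITCH_CLASSES[index], pitch_to_rgb(index)))
    return sequence

if __name__ == "__main__":
    for note, rgb in generate_sequence():
        print(f"{note:>2} -> RGB {rgb}")

Each run prints a different sixteen-step sequence, because the pitch sequence is a random walk while the note-to-colour mapping stays fixed.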
Works
1999 Circus 5.05 (from original BBC Basic graphics software, music software autom@tedMusic 1.01).
2001 Net Surfing 3.0 (icons and random sounds from the Web, arranged and reorganized according to precise correspondences among sound, colour and image).
autom@ted MusiC 1.02
Sound Life 3.01 (computer music transcription by Pietro Grossi - software TAUMUS, TAU2 synthesizer, IBM 370/168 - CNR Institutes CNUCE and IEI, Pisa, Italy, 1980)
2002 PIxeLs
Oper@pixel (the traditional lyric opera, an unusual presence in its new digital guise). This project generates continually different audiovisual compositions using images and sound frequencies borrowed from the universe of cell phones, chat rooms and e-mail, as well as logos, ringtones, banners and small designs in ASCII code.
2003 Goldberg Variations (music by Johann Sebastian Bach).
2005 MIDI_Visu@lMusiC (from the music software Steinberg Cubase on the Atari Mega STE; free download of Music/Ringtones). An example of artistic-creative use of Cubase MIDI software on the Atari Mega STE platform: the image is drawn directly in the Key window of the software, on a large virtual musical keyboard. The research began in the early nineties, and the work is documented anthologically in the 1997 exhibition Scores for floppy disk at the Il Gabbiano Gallery in La Spezia; the catalogue, published by the City of La Spezia - Department of Culture, contains an introductory text by the musicologist Renzo Cresti and theoretical writings by the author. In 2005 the MIDI_Visu@lMusiC project was re-proposed for the i-mode mobile phone.
2008 Circus4/8
2012 autom@tedVisuaL, software which generates always different graphical variations, based on HomeArt's Q.Basic source code. The graphics are collected into HomeBooks (also available as e-books), a unique kind of book that Pietro Grossi planned in 1991. The first release, autom@tedVisuaL 1.0, produced 45 single graphic samples, which have been collected and published.
2014 autom@tedMusiC 2.0, generative music software. The program can be configured to create music in many different styles and generates a new, original composition each time play is clicked.
2019 Battimenti, autom@tedMusiC 2.5: beats (acoustics) on a musical score by the painter Romano Rizzato, with the frequencies of the original waves by Pietro Grossi. For this project Rizzato produced a graphic work, inspired by his Optical art/Kinetic art period of the 1960s-70s but conceived solely so that it could be read as a musical score. Maltagliati uses 11 sound frequencies, from 395 to 405 Hz (the same sound events recorded by Grossi at the Studio di Fonologia in Florence in 1965), combined as suggested by the calligraphy of Rizzato's score; such closely spaced frequencies interfere to produce audible beats, as illustrated in the sketch below. The sound material is then processed by the generative-music software autom@tedMusiC (version 2.5), which produces fully automatic music that is always different.
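The beating principle can be illustrated with a short, self-contained Python sketch. This is only an illustration of the acoustics under stated assumptions (the duration, sample rate and output file name are arbitrary choices), not a reconstruction of the autom@tedMusiC software: summing sine waves at the 11 frequencies from 395 to 405 Hz makes neighbouring components, one hertz apart, interfere so that the loudness rises and falls roughly once per second.

import numpy as np
import wave

SAMPLE_RATE = 44100   # samples per second (arbitrary but standard)
DURATION = 5.0        # seconds of audio to generate
t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

# The 11 frequencies named above: 395 Hz through 405 Hz inclusive.
frequencies = np.arange(395, 406)

# Sum the sine components; adjacent pairs (e.g. 400 Hz and 401 Hz)
# beat at their 1 Hz difference, and the full sum beats in a slowly
# evolving pattern. Normalise the result to the range [-1, 1].
signal = np.sum([np.sin(2 * np.pi * f * t) for f in frequencies], axis=0)
signal /= np.max(np.abs(signal))

# Write a 16-bit mono WAV file so the beating can actually be heard.
pcm = np.int16(signal * 32767)
with wave.open("battimenti_sketch.wav", "wb") as wav_file:
    wav_file.setnchannels(1)
    wav_file.setsampwidth(2)   # 2 bytes per sample = 16-bit audio
    wav_file.setframerate(SAMPLE_RATE)
    wav_file.writeframes(pcm.tobytes())

Playing the resulting battimenti_sketch.wav makes the slow amplitude fluctuations audible; the hypothetical file name and five-second duration are illustrative only.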
Performance
1985 Musica intorno alla Gabbia (Music around the Cage - homage to John Cage), for music, the art of mime and zoo-anthropomorphous pictures, with 170 performer-actors (first performance Ospedale degli Innocenti, Florence, 1985). In the structure of the work the traditional relationship between audience and stage is deliberately reversed: the performers occupied, geometrically, the vast space of the room, first placing the public on the stage and then leading them to move around the sound and visual groups. The musical part is based on the interdisciplinary value of music, inspired by John Cage's research into capturing and controlling noises as musical elements. The choreographic and scenic part of the work, created by the painter Edoardo Salvi, was an uninterrupted dynamism of forms and spatial situations, connecting image to sign and sound.
1989 Revolution. The visual part of the work is a large painting of about three hundred square metres, on which two hundred high-school students worked. The painting, divided into sixteen large strips, came together slowly, in synchronization with the music, during the performance. The musical part assembled four different sound situations: instrumental music, words, sounds and noises.
1990/91 K.1-626M. for chorus, orchestra, magnetic tape, and objects (first performance Manzoni Theatre Pistoia Italy 1990).
1993 12 free improvisations with stones and river branches (first performance – Villa Martini, Monsummano Terme, Italy, 1993).
Alla ricerca dei Silenzi perduti for 10 magnetic tapes, 7/10 audio tape with 10 performers.
1997 netOper@, an Italian opera with an international cast of performers on the Net; more than thirty (net)artists were authors and performers of the work, among them John Dunn, Pietro Grossi and Caterina Davinio.
Variazione Cromatica cinque2 for instruments and magnetic tape.
Invenzioni Cromatiche 16/44, a graphic-music work by 100 children on a musical measure of 16/4.
2001 neXtOper@ for cell phones, an interactive/collaborative work over the Internet and GSM networks (cellular phones), with an educational project involving the children of the "Giusti-Gramsci" state middle school, Monsummano Terme, Italy.
Exhibitions
International exhibitions include: HyperArt Web Gallery (New York); Istanbul Contemporary Art Museum; Rhizome.org; Mac,n-Museo di arte Contemporanea e del '900 (Monsummano Terme); Digital Pocket Gallery (Helsinki); PEAM Festival (Pescara); DigitalSoul San Francisco (USA); Melbourne Fringe Festival; Festival Carnivale Town Hall (Sydney); Electronic Language Festival (São Paulo, Brazil); Politecnico di Milano University (Italy); Salons de Musique Strasbourg (France); Sound Art Museum (Rome); Museum of Contemporary Art Mérida Yucatán (Mexico); MAXXI - National Museum of the 21st Century Arts (Rome) 2007 and 2018; Galerie De Meerse Hoofddorp (Amsterdam); Italian Cultural Institute (Cairo); PAN - Palazzo delle Arti (Naples, Italy); Museo della Civiltà Romana - Rome (Italy); Art Laboratory Berlin; Centro per l'arte contemporanea Luigi Pecci - Prato (Italy).
Bibliography
Musica Presente Tendenze e compositori di oggi by Renzo Cresti ed. Libreria Musicale Italiana - (2019)
POLITEHNIKA 2019 Scientific Conference ed. Belgrade Polytechnic - (2019)
When Sound Becomes Form Sound Experiments in Italy 1950-2000 ed. Manfredi - (2019)
The Information Society An International Journal Volume 34, 2018-Issue 3 (2018)
Encyclopedia of Computer Art by Marylynn Lyman ed. Learning Press New York - (2016)
Encyclopedia of New Media Art by Joe Street ed. Orange Apple - (2016)
80 identikit digitali by Carlo Mazzucchelli ed. Delos Digital - (2015)
INVISIBILIDADES revista ibero-americana de pesquisa em educacao, cultura e artes - ISSN 1647-0508 (2014)
Enore Zaffiri-Saggi e materiali by Andrea Valle & Stefano Bassanese (2014)
Topología, Barcelona/Athens, Triton Barcelona (2014)
HomeBook 45 unicum graphics, by Pietro Grossi, Sergio Maltagliati, ed. Lulu.com, 2012.
Random by Valentina Tanni, LINK Editions, Brescia (2011)
Sguardi Sonori/Infinite Spaces, by Ennio Morricone, Enrico Saggese, Roberto Vittori, Neil Leonard, Gualtiero Marchesi, (2010) ed. Municipality of Rome
Generative Art, by Celestino Soddu, Enrica Colabella, (2009) ed. Domus Argenia Publisher.
Il Senso Trascurato, by Luigi Agostini, (2009) Edizioni Lulu.com
Netspace: viaggio nell'arte della rete - Net Archives by Elena Giulia Rossi, (2009) Roma, ed. MAXXI Museo Nazionale delle Arti del XXI secolo.
Attraversamenti, la musica in Toscana dal 1945 ad oggi, by Daniele Lombardi, Firenze 2009, Regione Toscana, Consiglio Regionale, Edizioni dell’Assemblea.
Arte delle reti / Net Art - Elementi per un atlante: liste e linee temporali (2007) Edizioni UCAN - BOOK001
AD LIBITUM. Musica da vedere, by N.Cisternino, I.Gómez, F.Mariani e M.Ratti, (2003) SilvanaEditoriale
Information Arts, Stephen Wilson, (2003) MIT Press
Attraversamenti by Daniele Lombardi: La musica in Toscana dal 1945 ad oggi 2002 (Firenze, Maschetto&Musolino)
Partiture Assessorato alla Cultura Comune Montecatini terme (1991) CID/Arti Visive Centro Luigi Pecci Prato
CD / DVD
Suono Segno Gesto Visione a Firenze 2 / P.Grossi, G.Chiari, G.Cardini, A.Mayr, D.Lombardi, M.Aitiani, S.Maltagliati (Atopos music-2008)
CIRCUS_8 DVD video Quantum Bit Limited Edition (2009) QuBIT 005
The Wave Structure of Matter Quantum Bit Netlabel (2011) QuBIT 007
CIRCUS_5.1 DVD (digital edition) Quantum Bit Netlabel (2012) QuBIT 013
BATTIMENTI 2.5 audio CD - numbered copy of limited edition (2019)
Audio Tape
1992 Trasparenze
1993 Alla ricerca dei silenzi perduti
1995 Serie di colore per flauti
1996 Per una sola nota
1997 Partiture per floppy disk
Recorded from the original tapes, Quantum Bit Limited Edition
References
External links
Personal Site Visu@lMusiC
Works List of compositions by Sergio Maltagliati
HomeArt Images by Pietro Grossi
Pietro Grossi
1960 births
Living people
20th-century classical composers
Italian classical composers
Italian male classical composers
Italian digital artists
Italian contemporary artists
Italian performance artists
New media artists
Italian multimedia artists
Net.artists
20th-century Italian composers
20th-century Italian male musicians |
32234869 | https://en.wikipedia.org/wiki/AnitaB.org | AnitaB.org | AnitaB.org (formerly Anita Borg Institute for Women and Technology, and Institute for Women in Technology) is a global nonprofit organization based in Belmont, California. Founded by computer scientists Anita Borg and Telle Whitney, the institute's primary aim is to recruit, retain, and advance women in technology.
The institute's most prominent program is the Grace Hopper Celebration of Women in Computing Conference, the world's largest gathering of women in computing. From 2002 to 2017, AnitaB.org was led by Telle Whitney, who co-founded the Grace Hopper Celebration of Women in Computing with Anita Borg.
AnitaB.org is currently led by Brenda Darden Wilkerson, the former Director of Computer Science and IT Education for Chicago Public Schools (CPS) and founder of the original “Computer Science for All” initiative.
History
AnitaB.org was founded in 1997 by computer scientists Anita Borg and Telle Whitney as the Institute for Women in Technology. The institute was preceded by two of its current programs: Systers and the Grace Hopper Celebration of Women in Computing Conference. Systers, the first online community for women in computing, was founded in 1987 by Anita Borg. In 1994, Borg and Whitney organized the first Grace Hopper Celebration of Women in Computing.
Anita Borg served as CEO of the Institute for Women in Technology from 1997 to 2002. In 2002, Whitney became president and CEO, and in 2003, the institute was renamed the Anita Borg Institute for Women and Technology. In 2017, Whitney retired and Brenda Darden Wilkerson took over as president and CEO. The organization was also renamed AnitaB.org.
Mission
Its mission is to increase the impact of women on all aspects of technology, and increase the positive impact of technology on the world's women.
Activities
Grace Hopper Celebration of Women in Computing Conference
The Grace Hopper Celebration of Women in Computing conference is the world's largest gathering of women in computing. Named in honor of Rear Admiral Grace Murray Hopper, the conference is presented by AnitaB.org and the Association for Computing Machinery (ACM). The conference features technical sessions and career sessions, including keynote speakers, a poster session, career fair, and awards ceremony. The 2017 conference was held in Orlando, Florida. The 2018 conference was held in Houston, Texas.
The Technical Executive Forum, held annually at the Grace Hopper Celebration of Women in Computing, brings together high-level technology executives to discuss challenges and solutions for recruiting, retaining, and advancing technical women. A two-day workshop for K–12 computer science teachers is also held at the conference, hosted by the Computer Science Teachers Association and AnitaB.org.
Grace Hopper Celebration of Women in Computing India
The Grace Hopper Celebration of Women in Computing India is the largest conference for technical women in India. Established in 2010, the two-day conference is modeled after the Grace Hopper Celebration of Women in Computing and includes multiple tracks with keynote speakers, panels, social networking sessions, and a poster session.
Grace Hopper Regional Consortium
The Grace Hopper Regional Consortium is an initiative of AnitaB.org, the ACM Council on Women in Computing, and the National Center for Women & Information Technology. Two-day regional conferences attract between 50 and 200 attendees and include keynote speakers, poster sessions, panel discussions, professional development workshops, birds-of-a-feather sessions, and research presentations. There have been 17 regional conferences to date, with 12 upcoming conferences planned.
Abie Awards
The Abie Awards honor women technologists and those who support women in tech. There are a total of eight Abie Awards: the Technical Leadership Abie Award, Student of Vision Abie Award, Emerging Technologist Abie Award, Educational Abie Award in Honor of A. Richard Newton, Social Impact Abie Award, Technology Entrepreneurship Abie Award, Emerging Leader Abie Award in Honor of Denice Denton, and Change Agent Abie Award.
Previously, AnitaB.org hosted an annual Women of Vision Awards Banquet where three Abie Awards were presented. However, it was decided that it was more fitting to present the Abie Awards at Grace Hopper Celebration (GHC), the world's largest gathering of women technologists. The final Women of Vision Awards Banquet was held in 2016.
Now, five Abie Awards are presented at every GHC (the Technical Leadership Abie Award and Student of Vision Abie Award are awarded every year, while the remaining awards alternate each year). Past Abie Award winners include: Mary Lou Jepsen, Kristina M. Johnson, Mitchell Baker, Helen Greiner, Susan Landau, Justine Cassell, Deborah Estrin, Leah Jamieson, Duy-Loan Le, Radia Perlman, Nimmi Ramanujam and Pamela Samuelson.
Anita Borg Top Company for Technical Women Award
The Anita Borg Top Company for Technical Women Award recognizes companies for their recruitment, retention, and advancement of technical women. The first Anita Borg Top Company for Technical Women Award was awarded to IBM in 2011. Subsequent recipients include:
2012 – American Express
2013 – Intel
2014 – Bank of America
2015 – BNY Mellon
2016 – ThoughtWorks
2020 (Small Company) – The New York Times Company
2020 (Medium Company) – UKG
2020 (Large Company) – ADP
Anita Borg Top Company for Technical Women Workshop
The Anita Borg Top Company for Technical Women Workshop provides coverage of best practices for recruiting, retaining, and advancing technical women. Representatives from different companies learn from each other and share practices. Companies participating in the 2011 workshop included CA Technologies, Cisco, Google, IBM, Intel, Intuit, Microsoft Research, SAP, and Symantec.
TechWomen
TechWomen is a professional mentorship and exchange program funded by the U.S. Department of State’s Bureau of Educational and Cultural Affairs. The program brings 38 technical women, aged 25–42, from the Middle East and North Africa to the United States for a five-week mentoring program at technology companies in Silicon Valley. The initiative is administered by the Institute of International Education, in partnership with AnitaB.org.
Online communities
AnitaB.org runs several email lists and online groups that connect technical women. Systers, the world's largest email community of technical women in computing, predates AnitaB.org, having been founded in 1987 by Anita Borg. Systers provides a private, gender-exclusive space for women in computing to ask personal and technical questions.
Local communities
The AnitaB.org Local Communities, usually referred to as ABI.local, are a network of locally organized communities that bring women technologists together in cities around the world. These communities organize events and meetups where women in tech connect, find new opportunities and pursue their career goals. ABI.local communities have been established in cities across the globe, including Chicago, London, Nairobi, Amsterdam, Seattle, Tokyo, Houston, New York, Delhi and more.
Research
AnitaB.org publishes research about the state of women in technology. Past reports have focused on mid-level technical women, ethnic minorities in computing, senior technical women, and more.
Corporate partners
AnitaB.org is supported by corporate partners within and outside of the technology sector. Current notable partners include:
Amazon
Cisco
Facebook
Google
HP
IBM
Intel
Intuit
Microsoft
National Science Foundation (NSF)
National Security Agency (NSA)
SAP
Symantec
Thomson Reuters
Salesforce.com
In 2017, Forbes, Fortune, and other outlets reported that the organization had severed ties with Uber over its treatment of female employees and lack of engagement.
See also
Ada Initiative
The Ada Project (TAP)
Anita Borg
Sexism in the technology industry
Women in computing
References
External links
AnitaB.org
Grace Hopper Celebration of Women in Computing
Grace Hopper Celebration of Women in Computing India
TechWomen
Systers
Grace Hopper Celebration of Women in Computing Conference
Organizations for women in science and technology
Women in computing
Awards honoring women
Information technology organizations based in North America
Charities based in California
1997 establishments in California
Organizations established in 1997 |
49853809 | https://en.wikipedia.org/wiki/National%20Cyber%20Security%20Centre | National Cyber Security Centre | National Cyber Security Centre, National Cyber Security Center, or National Cybersecurity Center may refer to:
Americas
Cybersecurity and Infrastructure Security Agency, United States
Canadian Centre for Cyber Security, Canada
Asia
Cyber Security Agency (Singapore)
Indian Computer Emergency Response Team, India
National Electronic Security Authority, UAE
Europe
Agence nationale de la sécurité des systèmes d'information, France
Estonian Defence League's Cyber Unit
European Cybercrime Centre (EC3), European Union
National Cyberdefence Centre, Germany
National Cyber Security Centre (Ireland)
National Cyber Security Centre of Lithuania
National Cyber Security Centre (United Kingdom)
Oceania
Australian Cyber Security Centre
National Cyber Security Centre (New Zealand)
See also
Cooperative Cyber Defence Centre of Excellence, Tallinn, Estonia
National Intelligence Service (South Korea), oversees cyber security in South Korea |