3536499
https://en.wikipedia.org/wiki/E4M
E4M
Encryption for the Masses (E4M) is free disk encryption software for the Windows NT and Windows 9x families of operating systems. E4M is discontinued and no longer maintained. Its author, Paul Le Roux, joined Shaun Hollingworth (the author of Scramdisk) to produce the commercial encryption product DriveCrypt for the security company SecurStar. The popular source-available freeware program TrueCrypt was based on E4M's source code. However, TrueCrypt uses a different container format than E4M, which makes it impossible to use one of these programs to access an encrypted volume created by the other. Shortly after TrueCrypt version 1.0 was released in February 2004, the TrueCrypt Team reported receiving emails from Wilfried Hafner, manager of SecurStar, claiming that Paul Le Roux had stolen the source code of E4M from SecurStar as an employee. According to the TrueCrypt Team, the emails stated that Le Roux had illegally distributed E4M and authored an illegal license permitting anyone to base derivative work on E4M and distribute it freely, which Hafner alleged Le Roux had no right to do, claiming that all versions of E4M had always belonged only to SecurStar. For a time, this led the TrueCrypt Team to stop developing and distributing TrueCrypt. See also On-the-fly encryption (OTFE) Disk encryption Disk encryption software Comparison of disk encryption software References Cryptographic software Disk encryption Free software External links Archived version of official website
682298
https://en.wikipedia.org/wiki/Security-focused%20operating%20system
Security-focused operating system
This is a list of operating systems specifically focused on security. Operating systems for general-purpose usage may be secure without having a specific focus on security. Similar concepts include security-evaluated operating systems that have achieved certification from an auditing organization, and trusted operating systems that provide sufficient support for multilevel security and evidence of correctness to meet a particular set of requirements. Linux Android-based GrapheneOS is a free and open-source privacy- and security-focused Android custom ROM. Kali NetHunter is a free and open-source mobile operating system based on Kali Linux, intended primarily for Android devices. Debian-based Subgraph is a Linux-based operating system designed to be resistant to surveillance and interference by sophisticated adversaries over the Internet. Subgraph OS is designed with features that aim to reduce the attack surface of the operating system, and increase the difficulty required to carry out certain classes of attack. This is accomplished through system hardening and a proactive, ongoing focus on security and attack resistance. Subgraph OS also places emphasis on ensuring the integrity of installed software packages through deterministic compilation. Subgraph OS features a kernel hardened with the Grsecurity and PaX patchset, Linux namespaces, and Xpra for application containment, mandatory file system encryption using LUKS, resistance to cold boot attacks, and is configured by default to isolate network communications for installed applications to independent circuits on the Tor anonymity network. Tails is a security-focused Linux distribution aimed at preserving privacy and anonymity. It is meant to be run as a live CD or from a USB drive and not to write any data to a drive unless specified or unless persistence is enabled. That way, it lives in RAM and everything is purged from the system whenever it is powered off. Tails is designed to perform an emergency shutdown and erase its data from RAM if the medium on which it resides is removed. Whonix is an anonymity-focused, general-purpose operating system based on VirtualBox, Debian Linux and Tor. By design, IP and DNS leaks are impossible: not even malware running with superuser privileges can find out the user's real IP address or location. This is because Whonix consists of two (virtual) machines. One machine solely runs Tor and acts as a gateway, called Whonix-Gateway. The other machine, called Whonix-Workstation, is on a completely isolated network. It is also possible to use multiple Whonix-Workstations simultaneously through one Gateway, which provides stream isolation (though this configuration is not necessarily endorsed). All connections are forced through Tor via the Whonix-Gateway virtual machine, which is why IP and DNS leaks are impossible. Fedora-based Qubes OS is a desktop operating system based around the Xen hypervisor that allows grouping programs into a number of isolated sandboxes (virtual machines) to provide security. Windows for programs running within these sandboxes ("security domains") can be color coded for easy recognition. The security domains are configurable, they can be transient (changes to the file system will not be preserved), and their network connection can be routed through special virtual machines (for example one that only provides Tor networking). The operating system provides secure mechanisms for copy and paste and for copying files between the security domains. Gentoo-based Pentoo is a Live CD and Live USB designed for penetration testing and security assessment.
Based on Gentoo Linux, Pentoo is provided both as 32 and 64-bit installable live CD. It is built on Hardened Gentoo linux including a hardened kernel and a toolchain. Tin Hat Linux is derived from Hardened Gentoo Linux. It aims to provide a very secure, stable, and fast desktop environment that lives purely in RAM. Other Linux distributions Alpine Linux is an actively maintained lightweight musl and BusyBox-based distribution. It uses PaX and grsecurity patches in the default kernel and compiles all packages with stack-smashing protection. Annvix was originally forked from Mandriva to provide a security-focused server distribution that employs ProPolice protection, hardened configuration, and a small footprint. There were plans to include full support for the RSBAC mandatory access control system. Annvix is dormant, however, with the last version being released on 30 December 2007. EnGarde Secure Linux is a secure platform designed for servers. It has had a browser-based tool for MAC using SELinux since 2003. Additionally, it can be accompanied with Web, DNS, and email enterprise applications, specifically focusing on security without any unnecessary software. The community platform of EnGarde Secure Linux is the bleeding-edge version freely available for download. Immunix was a commercial distribution of Linux focused heavily on security. They supplied many systems of their own making, including StackGuard; cryptographic signing of executables; race condition patches; and format string exploit guarding code. Immunix traditionally releases older versions of their distribution free for non-commercial use. The Immunix distribution itself is licensed under two licenses: The Immunix commercial and non-commercial licenses. Many tools within are GPL, however, as is the kernel. Solar Designer's Openwall Project (Owl) was the first distribution to have a non-executable userspace stack, /tmp race condition protection, and access control restrictions to /proc data, by way of a kernel patch. It also features a per-user tmp directory via the pam_mktemp PAM module, and supports Blowfish password encryption. BSD-based TrustedBSD is a sub-project of FreeBSD designed to add trusted operating system extensions, targeting the Common Criteria for Information Technology Security Evaluation (see also Orange Book). Its main focuses are working on access control lists, event auditing, extended attributes, mandatory access controls, and fine-grained capabilities. Since access control lists are known to be confronted with the confused deputy problem, capabilities are a different way to avoid this issue. As part of the TrustedBSD project, there is also a port of NSA's FLASK/TE implementation to run on FreeBSD. Many of these trusted extensions have been integrated into the main FreeBSD branch starting at 5.x. OpenBSD is a research operating system for developing security mitigations. Object-capability systems These operating systems are all engineered around the object-capabilities security paradigm, where instead of having the system deciding if an access request should be granted the bundling of authority and designation makes it impossible to request anything not legitimate. CapROS EROS Genode Fiasco.OC KeyKOS seL4 Solaris-based Trusted Solaris was a security-focused version of the Solaris Unix operating system. 
Aimed primarily at the government computing sector, Trusted Solaris adds detailed auditing of all tasks, pluggable authentication, mandatory access control, additional physical authentication devices, and fine-grained access control. Trusted Solaris is Common Criteria certified. The most recent version, Trusted Solaris 8 (released 2000), received the EAL4 certification level augmented by a number of protection profiles. Telnet was vulnerable to buffer overflow exploits until patched in April 2001. See also References Operating system security Operating system technology Security
4472835
https://en.wikipedia.org/wiki/Machine-dependent%20software
Machine-dependent software
Machine-dependent software is software that runs only on a specific computer. Applications that run on multiple computer architectures are called machine-independent, or cross-platform. Many organisations opt for such software because they believe machine-dependent software is an asset that will attract more buyers. Organizations that want application software to work on heterogeneous computers may port that software to the other machines; deploying machine-dependent applications on new architectures requires such porting. This procedure includes composing, or re-composing, the application's code to suit the target platform. Porting Porting is the process of converting an application from one architecture to another. Software languages such as Java are designed so that applications can migrate across architectures without source code modifications. The term is applied when programming or equipment is changed to make it usable in a different architecture. Code that does not operate properly on a specific system must be ported to another system. Porting effort depends on several variables, including the degree to which the original environment (the source platform) differs from the new environment (the target platform) and the developers' experience with platform-specific programming languages. Many languages offer a machine-independent intermediate code that can be processed by platform-specific interpreters to address incompatibilities. The intermediate representation defines a virtual machine that can execute all modules written in the intermediate language. The intermediate code instructions are translated into distinct machine code sequences by a code generator to create executable code. The intermediate code may also be executed directly, without static conversion into platform-specific code. Approaches Port the translator; this can be written in portable code. Adapt the source code to the new machine. Execute the adjusted source using the translator with the code generator source as data; this produces the machine code for the code generator. See also Virtual machine Java (programming language) Hardware-dependent software References External links Software architecture
1566287
https://en.wikipedia.org/wiki/Jon%20Stephenson%20von%20Tetzchner
Jon Stephenson von Tetzchner
Jon Stephenson von Tetzchner (Icelandic: Jón; born 29 August 1967 in Reykjavík) is an Icelandic-Norwegian programmer and businessman. He is the co-founder and CEO of Vivaldi Technologies. Before starting the Vivaldi Web browser, he launched a community site called Vivaldi.net. Tetzchner is also a co-founder and the former CEO of Opera Software. Early life Jon Stephenson von Tetzchner is the son of the Icelander Elsa Jónsdóttir and the Norwegian Stephen von Tetzchner, a professor of psychology (and the brother of politician Michael Tetzschner). Jon Stephenson grew up around Skólabraut in the Reykjavík suburb of Seltjarnarnes with his grandparents, the doctor Jón Gunnlaugsson and Selma Kaldalóns, the daughter of the doctor and composer Sigvaldi Kaldalóns. Tetzchner went to secondary school at the Menntaskólinn í Reykjavík before continuing his studies in Norway, where he made his career. Tetzchner holds a master's degree in computer science from the University of Oslo. Opera Software Tetzchner worked at the Norwegian state phone company (now known as Telenor) from 1991 to 1995. There, he and Geir Ivarsøy developed browsing software called MultiTorg Opera. The project was abandoned by Telenor, but Ivarsøy and von Tetzchner obtained the rights to the software, formed a company named Opera Software in 1995 and continued working on the Opera browser. In 1996 it was offered for sale to the public. On 21 April 2005, Tetzchner proclaimed at an internal meeting of Opera Software that if the download numbers of the browser's new version Opera 8 reached one million within four days, he would swim across the Atlantic Ocean from Norway to the United States. Two days later, on the 23rd, the downloads reached 1,050,000 and Tetzchner had to fulfill his challenge. The Opera site covered the swim attempt and its quick failure. Under his leadership, Opera grew into a global company with more than 750 employees in 13 countries. Opera was an early pioneer in mobile web browsers, and more than 350 million people used Opera. In January 2010, Tetzchner stepped down as Chief Executive Officer of Opera Software, but he continued to serve Opera as a strategic adviser. In June 2011, Tetzchner announced that he was leaving Opera Software over disagreements with management. Vivaldi Technologies In December 2013, Tetzchner founded the company Vivaldi Technologies and launched the online community site vivaldi.net, which includes a forum, blogs, chat, photo sharing and a free email service named Vivaldi Mail. On 27 January 2015, Vivaldi Technologies announced the release of its new web browser Vivaldi. Its 1.0 version came out in April 2016. Von Tetzchner and his team aim to make a personal, feature-rich browser that takes the needs of every user into consideration. Vivaldi is self-funded, with employees holding equity, and aims to remain so. References External links Jon Stephenson von Tetzchner's blog at Vivaldi 1967 births Living people Icelandic businesspeople Opera Software employees Icelandic people of Norwegian descent von
4547
https://en.wikipedia.org/wiki/Bash%20%28Unix%20shell%29
Bash (Unix shell)
Bash is a Unix shell and command language written by Brian Fox for the GNU Project as a free software replacement for the Bourne shell. First released in 1989, it has been used as the default login shell for most Linux distributions. A version is also available for Windows 10 via the Windows Subsystem for Linux. It is also the default user shell in Solaris 11. Bash was also the default shell in all versions of Apple macOS prior to the 2019 release of macOS Catalina, which changed the default shell to zsh, although Bash remains available as an alternative shell. Bash is a command processor that typically runs in a text window where the user types commands that cause actions. Bash can also read and execute commands from a file, called a shell script. Like most Unix shells, it supports filename globbing (wildcard matching), piping, here documents, command substitution, variables, and control structures for condition-testing and iteration. The keywords, syntax, dynamically scoped variables and other basic features of the language are all copied from sh. Other features, e.g., history, are copied from csh and ksh. Bash is a POSIX-compliant shell, but with a number of extensions. The shell's name is an acronym for Bourne Again Shell, a pun on the name of the Bourne shell that it replaces and the notion of being "born again". A security hole in Bash dating from version 1.03 (August 1989), dubbed Shellshock, was discovered in early September 2014 and quickly led to a range of attacks across the Internet. Patches to fix the bugs were made available soon after the bugs were identified. History Brian Fox began coding Bash on January 10, 1988, after Richard Stallman became dissatisfied with the lack of progress being made by a prior developer. Stallman and the Free Software Foundation (FSF) considered a free shell that could run existing shell scripts so strategic to a completely free system built from BSD and GNU code that this was one of the few projects they funded themselves, with Fox undertaking the work as an employee of FSF. Fox released Bash as a beta, version .99, on June 8, 1989, and remained the primary maintainer until sometime between mid-1992 and mid-1994, when he was laid off from FSF and his responsibility was transitioned to another early contributor, Chet Ramey. Since then, Bash has become by far the most popular shell among users of Linux, becoming the default interactive shell on that operating system's various distributions (although Almquist shell may be the default scripting shell) and on Apple's macOS releases before Catalina in October 2019. Bash has also been ported to Microsoft Windows and distributed with Cygwin and MinGW, to DOS by the DJGPP project, to Novell NetWare, to OpenVMS by the GNV project, to ArcaOS, and to Android via various terminal emulation applications. In September 2014, Stéphane Chazelas, a Unix/Linux specialist, discovered a security bug in the program. The bug, first disclosed on September 24, was named Shellshock and assigned the numbers . The bug was regarded as severe, since CGI scripts using Bash could be vulnerable, enabling arbitrary code execution. The bug was related to how Bash passes function definitions to subshells through environment variables. Features The Bash command syntax is a superset of the Bourne shell command syntax. Bash supports brace expansion, command line completion (Programmable Completion), basic debugging and signal handling (using trap) since bash 2.05a among other features. 
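For illustration, a minimal sketch of signal handling with the trap builtin follows; the temporary-file name and the sleep placeholder are illustrative only and not taken from the article:

#!/usr/bin/env bash
# Register cleanup code to run when the script exits or is interrupted.
tmpfile=$(mktemp)                                  # scratch file (illustrative)
trap 'rm -f "$tmpfile"' EXIT                       # always remove it on exit
trap 'echo "Interrupted" >&2; exit 1' INT TERM     # handle Ctrl-C and termination

echo "working..." > "$tmpfile"
sleep 2                                            # stand-in for real work
echo "Done; $tmpfile is removed automatically on exit."

Because the INT and TERM handlers call exit, the EXIT trap still runs afterwards, so the temporary file is cleaned up on both normal and interrupted runs.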
Bash can execute the vast majority of Bourne shell scripts without modification, with the exception of Bourne shell scripts stumbling into fringe syntax behavior interpreted differently in Bash or attempting to run a system command matching a newer Bash builtin, etc. Bash command syntax includes ideas drawn from the KornShell (ksh) and the C shell (csh) such as command line editing, command history (history command), the directory stack, the $RANDOM and $PPID variables, and POSIX command substitution syntax $(…). When a user presses the tab key within an interactive command-shell, Bash automatically uses command line completion, since beta version 2.04, to match partly typed program names, filenames and variable names. The Bash command-line completion system is very flexible and customizable, and is often packaged with functions that complete arguments and filenames for specific programs and tasks. Bash's syntax has many extensions lacking in the Bourne shell. Bash can perform integer calculations ("arithmetic evaluation") without spawning external processes. It uses the ((…)) command and the $((…)) variable syntax for this purpose. Its syntax simplifies I/O redirection. For example, it can redirect standard output (stdout) and standard error (stderr) at the same time using the &> operator. This is simpler to type than the Bourne shell equivalent 'command > file 2>&1'. Bash supports process substitution using the <(command) and >(command) syntax, which substitutes the output of (or input to) a command where a filename is normally used. (This is implemented through /proc/fd/ unnamed pipes on systems that support that, or via temporary named pipes where necessary). When using the 'function' keyword, Bash function declarations are not compatible with Bourne/Korn/POSIX scripts (the KornShell has the same problem when using 'function'), but Bash accepts the same function declaration syntax as the Bourne and Korn shells, and is POSIX-conformant. Because of these and other differences, Bash shell scripts are rarely runnable under the Bourne or Korn shell interpreters unless deliberately written with that compatibility in mind, which is becoming less common as Linux becomes more widespread. But in POSIX mode, Bash conforms with POSIX more closely. Bash supports here documents. Since version 2.05b Bash can redirect standard input (stdin) from a "here string" using the <<< operator. Bash 3.0 supports in-process regular expression matching using a syntax reminiscent of Perl. In February 2009, Bash 4.0 introduced support for associative arrays. Associative array indices are strings, in a manner similar to AWK or Tcl. They can be used to emulate multidimensional arrays. Bash 4 also switched its license to GPL-3.0-or-later; some users suspect this licensing change is why macOS continues to use older versions. Apple finally stopped using Bash in their operating systems with the release of macOS Catalina in 2019. Brace expansion Brace expansion, also called alternation, is a feature copied from the C shell. It generates a set of alternative combinations. Generated results need not exist as files. The results of each expanded string are not sorted and left to right order is preserved:

$ echo a{p,c,d,b}e
ape ace ade abe
$ echo {a,b,c}{d,e,f}
ad ae af bd be bf cd ce cf

Users should not use brace expansions in portable shell scripts, because the Bourne shell does not produce the same output.
$ # A traditional shell does not produce the same output
$ /bin/sh -c 'echo a{p,c,d,b}e'
a{p,c,d,b}e

When brace expansion is combined with wildcards, the braces are expanded first, and then the resulting wildcards are substituted normally. Hence, a listing of JPEG and PNG images in the current directory could be obtained using:

ls *.{jpg,jpeg,png}    # expands to *.jpg *.jpeg *.png - after which,
                       # the wildcards are processed
echo *.{png,jp{e,}g}   # echo just shows the expansions -
                       # and braces in braces are possible.

In addition to alternation, brace expansion can be used for sequential ranges between two integers or characters separated by double dots. Newer versions of Bash allow a third integer to specify the increment.

$ echo {1..10}
1 2 3 4 5 6 7 8 9 10
$ echo file{1..4}.txt
file1.txt file2.txt file3.txt file4.txt
$ echo {a..e}
a b c d e
$ echo {1..10..3}
1 4 7 10
$ echo {a..j..3}
a d g j

When brace expansion is combined with variable expansion (A.K.A. parameter expansion and parameter substitution) the variable expansion is performed after the brace expansion, which in some cases may necessitate the use of the eval built-in, thus:

$ start=1; end=10
$ echo {$start..$end}        # fails to expand due to the evaluation order
{1..10}
$ eval echo {$start..$end}   # variable expansion occurs then resulting string is evaluated
1 2 3 4 5 6 7 8 9 10

Startup scripts When Bash starts, it executes the commands in a variety of dot files. Unlike Bash shell scripts, dot files do not typically have execute permission enabled nor an interpreter directive like #!/bin/bash. Legacy-compatible Bash startup example The skeleton ~/.bash_profile below is compatible with the Bourne shell and gives semantics similar to csh for the ~/.bashrc and ~/.bash_login. The [ -r filename ] && cmd is a short-circuit evaluation that tests if filename exists and is readable, skipping the part after the && if it is not.

[ -r ~/.profile ] && . ~/.profile          # set up environment, once, Bourne-sh syntax only
if [ -n "$PS1" ] ; then                    # are we interactive?
  [ -r ~/.bashrc     ] && . ~/.bashrc      # tty/prompt/function setup for interactive shells
  [ -r ~/.bash_login ] && . ~/.bash_login  # any at-login tasks for login shell only
fi                                         # End of "if" block

Operating system issues in Bash startup Some versions of Unix and Linux contain Bash system startup scripts, generally under the /etc directories. Bash calls these as part of its standard initialization, but other startup files can read them in a different order than the documented Bash startup sequence. The default content of the root user's files may also have issues, as well as the skeleton files the system provides to new user accounts upon setup. The startup scripts that launch the X window system may also do surprising things with the user's Bash startup scripts in an attempt to set up user-environment variables before launching the window manager. These issues can often be addressed using a ~/.xsession or ~/.xprofile file to read the ~/.profile, which provides the environment variables that Bash shell windows spawned from the window manager need, such as xterm or Gnome Terminal. Portability Invoking Bash with the --posix option or stating set -o posix in a script causes Bash to conform very closely to the POSIX 1003.2 standard. Bash shell scripts intended for portability should take into account at least the POSIX shell standard.
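As a brief illustrative sketch (the script body and echo check are placeholders, not from the article), POSIX mode can be requested either at invocation or from within a script:

#!/usr/bin/env bash
# Ask Bash to follow POSIX behaviour as closely as it can.
set -o posix

# The same effect can be obtained by invoking the interpreter as: bash --posix myscript.sh
[ -o posix ] && echo "POSIX mode is enabled"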
Some bash features not found in POSIX are:
Certain extended invocation options
Brace expansion
Arrays and associative arrays
The double bracket extended test construct and its regex matching
The double-parentheses arithmetic-evaluation construct (only $((…)) is POSIX)
Certain string-manipulation operations in parameter expansion
local for scoped variables
Process substitution
Bash-specific builtins
Coprocesses
$EPOCHSECONDS and $EPOCHREALTIME variables
If a piece of code uses such a feature, it is called a "bashism" – a problem for portable use. Debian's checkbashisms and Vidar Holen's ShellCheck can be used to make sure that a script does not contain these parts. The list varies depending on the actual target shell: Debian's policy allows some extensions in their scripts (as they are in the dash shell), while a script intending to support pre-POSIX Bourne shells, like autoconf's configure, is even more limited in the features it can use. Keyboard shortcuts Bash uses readline to provide keyboard shortcuts for command line editing using the default (Emacs) key bindings. Vi-bindings can be enabled by running set -o vi. Process management The Bash shell has two modes of execution for commands: batch and concurrent mode. To execute commands in batch (i.e., in sequence), they must be separated by the character ";" or placed on separate lines:

command1; command2

In this example, when command1 is finished, command2 is executed. A background execution of command1 can be requested by appending the symbol & to the end of the command; the process is then executed in the background, immediately returning control to the shell and allowing continued execution of commands.

command1 &

To execute command1 and command2 concurrently, they must be executed in the Bash shell in the following way:

command1 & command2

In this case command1 is executed in the background (the & symbol), immediately returning control to the shell, which executes command2 in the foreground. A process can be stopped and control returned to Bash by typing Ctrl+z while the process is running in the foreground. A list of all processes, both in the background and stopped, can be obtained by running jobs:

$ jobs
[1]-  Running     command1 &
[2]+  Stopped     command2

In the output, the number in brackets refers to the job id. The plus sign signifies the default process for bg and fg. The text "Running" and "Stopped" refers to the process state. The last string is the command that started the process. The state of a process can be changed using various commands. The fg command brings a process to the foreground, while bg sets a stopped process running in the background. bg and fg can take a job id as their first argument, to specify the process to act on. Without one, they use the default process, identified by a plus sign in the output of jobs. The kill command can be used to end a process prematurely, by sending it a signal. The job id must be specified after a percent sign:

kill %1

Conditional execution Bash supplies "conditional execution" command separators that make execution of a command contingent on the exit code set by a precedent command. For example:

cd "$SOMEWHERE" && ./do_something || echo "An error occurred" >&2

Here, ./do_something is only executed if the cd (change directory) command was "successful" (returned an exit status of zero), and the echo command would only be executed if either the cd or the ./do_something command returns an "error" (non-zero exit status). For all commands the exit status is stored in the special variable $?.
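A short illustrative sketch of inspecting $? directly follows; the pattern and file name are placeholders, and the output shown assumes haystack.txt exists but does not contain the pattern:

$ grep -q 'needle' haystack.txt   # grep -q is silent; only the exit status matters
$ echo $?                         # 0 = match found, 1 = no match, 2 = error
1
$ false; echo "exit status: $?"
exit status: 1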
Bash also supports if ... fi and case ... esac forms of conditional command evaluation. Bug reporting An external command called bashbug reports Bash shell bugs. When the command is invoked, it brings up the user's default editor with a form to fill in. The form is mailed to the Bash maintainers (or optionally to other email addresses). Programmable completion Bash supports programmable completion via the built-in complete, compopt, and compgen commands. The feature has been available since the beta version of 2.04 released in 2000. These commands enable complex and intelligent completion specification for commands (i.e. installed programs), functions, variables, and filenames. The complete and compgen commands specify how arguments of some available commands or options are going to be listed in the readline input. As of version 5.1, completion of a command or an option is usually activated by the Tab keystroke after typing its name. Release history See also Comparison of command shells References External links (interview with GNU Bash's maintainer, Chet Ramey) 1989 software Cross-platform free software Domain-specific programming languages Free software programmed in C GNU Project software Scripting languages Text-oriented programming languages Unix shells
40771259
https://en.wikipedia.org/wiki/Mixamo
Mixamo
Mixamo () is a 3D computer graphics technology company. Based in San Francisco, the company develops and sells web-based services for 3D character animation. Mixamo's technologies use machine learning methods to automate the steps of the character animation process, including 3D modeling to rigging and 3D animation. Mixamo is spun-off of Stanford University and has raised over $11 million from investors Granite Ventures, Keynote Ventures, Stanford University and AMD Ventures. Mixamo was acquired by Adobe Systems in 2015. History Mixamo was founded in 2008 by Stefano Corazza and Nazim Kareemi as a spin-off of Stanford University's Biomotion Lab, and started out as a cloud-based service offering animations and automatic character rigging. It launched its first online animation service in 2009. In 2010, Mixamo worked with Evolver to provide characters to customers. Later Autodesk acquired Evolver and made it proprietary, but Mixamo had already begun work on its own character creation service. Mixamo released its automatic rigging service in 2011. That was followed by the launch of its real-time facial animation product, Face Plus, in 2013, and the official launch of its Fuse 3D character creator software in March 2014. In August 2014, Mixamo launched a new pricing structure. Mixamo was acquired by Adobe Systems on June 1, 2015. Fuse In March 2014, Mixamo officially launched Fuse Character Creator at Game Developers Conference. Fuse stems out of the research done by Siddhartha Chaudhuri which originated within Vladlen Koltun's research group at Stanford and Substance technology from Allegorithmic. An early version of the service had been released in November 2013. It allows users to create a 3D character by assembling customizable body parts, clothing and textures together. Those characters can then be exported into other 3D model software packages or game engine. In March 2014, Mixamo launched Fuse 1.0 on Steam with the ability of importing and integrating user generated content in the character creation system. Fuse was updated to allow the importation and editing of character models generated by Microsoft's Kinect 2.0 in August of that year. Development of Fuse was discontinued in September 2019. On September 13, 2020, the program was removed from the Adobe Creative Cloud and is no longer available for download from Adobe. Version 1.3 is still available for download on the Steam Marketplace. Rigging service This service has been discontinued for keeping custom 3D models and models uploaded via fuse CC on Adobe's servers. You can still upload custom 3D model to the online auto rigging service and rig them, but they won't be kept. Mixamo also provides an online, automatic model rigging service known as the Auto-Rigger, which was the industry's first online rigging service. The AutoRigger applies machine learning to understand where the limbs of a 3D model are and to insert a "skeleton", or rig, into the 3D model as well as calculating the skinning weights. The service can take up to 2 minutes. 3D Character Animation Mixamo's online services include an animation store featuring downloadable 3D models and animation sequences. The animations were created at Mixamo using motion capture and cleaned up by key frame animators. All its animations work with characters created in Fuse and/or rigged with Mixamo's AutoRigger. 
Real-time facial animation In August 2013, Mixamo released Face Plus, a game development tool that allows users to record facial animation data of themselves using a standard webcam and apply the animation to a character inside the Unity game engine in real time. Face Plus was briefly included in the keynote presentation at the Unite conference in Vancouver. The technology was developed in collaboration with AMD and uses GPU acceleration. The animated short "Unplugged" was created using Face Plus technology and as of April 2014 had won several awards. Face Plus has been discontinued and, according to a September 2017 Adobe Forum post by Adobe staff JMathews, no further work on the product was being contemplated at that time. References External links "Mixamo’s Facial Expression Capturing Technology Helps Indie Developers Speed Up Character Animation" from Techcrunch Official site Unplugged – Official site Entertainment software Software companies based in California Animation software Stanford University American companies established in 2008 Companies based in San Francisco 2008 establishments in California Software companies established in 2008 2015 mergers and acquisitions Adobe Inc.
71996
https://en.wikipedia.org/wiki/BlackBerry
BlackBerry
BlackBerry was a brand of smartphones, tablets, and services originally designed and marketed by Canadian company BlackBerry Limited (formerly known as Research In Motion, or RIM). Beginning in 2016, BlackBerry Limited licensed third-party companies to design, manufacture, and market smartphones under the BlackBerry brand. The original licensees were BB Merah Putih for the Indonesian market, Optiemus Infracom for the South Asian market, and BlackBerry Mobile (a trade name of TCL Technology) for all other markets. In summer 2020, the Texas-based startup OnwardMobility signed a new licensing agreement with BlackBerry Limited to develop a new 5G BlackBerry smartphone. OnwardMobility was cooperating with BlackBerry Limited and FIH Mobile (a subsidiary of Foxconn) as they "sought to revitalize the iconic BlackBerry brand through an Android-based, next-gen Wi-Fi device." However, in a statement released on February 18, 2022, OnwardMobility said that not only would development of the new BlackBerry device not be moving forward, but the company itself would be shutting down as well. BlackBerry was one of the most prominent smartphone brands in the world, specializing in secure communications and mobile productivity, and well known for the keyboards on most of its devices. At its peak in September 2013, there were 85 million BlackBerry subscribers worldwide. However, BlackBerry lost its dominant position in the market due to the success of the Android and iOS platforms; its numbers had fallen to 23 million in March 2016 and slipped even further to 11 million in May 2017. Historically, BlackBerry devices used a proprietary operating system—known as BlackBerry OS—developed by BlackBerry Limited. In 2013, BlackBerry introduced BlackBerry 10, a major revamp of the platform based on the QNX operating system. BlackBerry 10 was meant to replace the aging BlackBerry OS platform with a new system that was more in line with the user experiences of Android and iOS platforms. The first BB10-powered device was the all-touch BlackBerry Z10, which was followed by other all-touch devices (BlackBerry Z30, BlackBerry Leap) as well as more traditional keyboard-equipped models (BlackBerry Q10, BlackBerry Classic, BlackBerry Passport). In 2015, BlackBerry began releasing Android-based smartphones, beginning with the BlackBerry Priv slider and then the BlackBerry DTEK50. On September 28, 2016, BlackBerry announced it would cease designing its own phones in favor of licensing to partners. TCL Communication became the global licensee of the brand, under the name "BlackBerry Mobile". Optiemus Infracom, under the name BlackBerry Mobile India, and BB Merah Putih also serve as licensees of the brand, serving the Indian and Indonesian markets, respectively. In 2017, BlackBerry Mobile released the BlackBerry KeyOne, which was known for having a physical keyboard below its 4.5-inch touchscreen and a long battery life; it was the last device to be designed internally by BlackBerry. Also in 2017, BlackBerry Mobile, under its partner license agreements, released the BlackBerry Aurora, BlackBerry KeyOne L/E BLACK, and the BlackBerry Motion. In June 2018, the BlackBerry Key2 was launched in international markets, and in India by licensee Optiemus Infracom. The Key2 sports a dual camera setup and incorporates features such as portrait mode and optical zoom.
In August 2018, after the launch of the BlackBerry Key2, Optiemus Infracom announced the launch of the BlackBerry Evolve and Evolve X smartphones for the Indian market sold exclusively on Amazon India. The smartphones have been conceptualized, designed and manufactured in India. As of 2019, BB Merah Putih's website has been repurposed, with BlackBerry Limited stating that only technical support will be offered for the Indonesian devices built by the company. Additionally, the operational status of Optiemus is unknown as of September 2020, as there have not been any updates posted regarding BlackBerry products in India since 2018. History Research in Motion (RIM), founded in Waterloo, Ontario, first developed the Inter@ctive Pager 900, announced on September 18, 1996. The Inter@ctive Pager 900 was a clamshell-type device that allowed two-way paging. After the success of the 900, the Inter@ctive Pager 800 was created for IBM, which bought US$10 million worth of them on February 4, 1998. The next device to be released was the Inter@ctive Pager 950, on August 26, 1998. The very first device to carry the BlackBerry name was the BlackBerry 850, an email pager, released January 19, 1999. Although identical in appearance to the 950, the 850 was the first device to integrate email and the name Inter@ctive Pager was no longer used to brand the device. The first BlackBerry device, the 850, was introduced in 1999 as a two-way pager in Munich, Germany. The name BlackBerry was coined by the marketing company Lexicon Branding. The name was chosen due to the resemblance of the keyboard's buttons to that of the drupelets that compose the blackberry fruit. The original BlackBerry devices, the RIM 850 and 857, used the DataTAC network. In 2002, the more commonly known convergent smartphone BlackBerry was released, which supports push email, mobile telephone, text messaging, Internet faxing, Web browsing and other wireless information services. BlackBerry gained market share in the mobile industry by concentrating on email. BlackBerry began to offer email service on non-BlackBerry devices, such as the Palm Treo, through the proprietary BlackBerry Connect software. The original BlackBerry device had a monochrome display while newer models installed color displays. All newer models have been optimized for "thumbing", the use of only the thumbs to type on a keyboard. The Storm 1 and Storm 2 include a SureType keypad for typing. Originally, system navigation was achieved with the use of a scroll wheel mounted on the right side of device models prior to the 8700. The trackwheel was replaced by the trackball with the introduction of the Pearl series, which allowed four-way scrolling. The trackball was replaced by the optical trackpad with the introduction of the Curve 8500 series. Models made to use iDEN networks, such as Nextel, SouthernLINC, NII Holdings, and Mike also incorporate a push-to-talk (PTT) feature, similar to a two-way radio. On January 30, 2013, BlackBerry announced the release of the Z10 and Q10 smartphones. Both models consist of touch screens: the Z10 features an all-touch design and the Q10 combines a QWERTY keyboard with touchscreen features. During the second financial quarter of 2013, BlackBerry sold 6.8 million handsets, but was eclipsed by the sales of competitor Nokia's Lumia model for the first time. On August 12, 2013, BlackBerry announced the intention to sell the company due to their increasingly unfavourable financial position and competition in the mobile industry. 
Largely due to lower than expected sales of the Z10, BlackBerry announced on September 20, 2013, that 4,500 full- and part-time positions (an estimated 40% of its operating staff) had been terminated and that its product line had been reduced from six to four models. On September 23, 2013, Fairfax Financial, which owns a 10% equity stake in BlackBerry, made an offer to acquire BlackBerry for $4.7 billion (at $9.00 per share). Following the announcement, BlackBerry announced a provisional acceptance of the offer but said it would continue to seek other offers until November 4, 2013. On November 4, 2013, BlackBerry replaced Thorsten Heins with new interim CEO John S. Chen, the former CEO of Sybase. On November 8, the BlackBerry board rejected proposals from several technology companies for various BlackBerry assets on the grounds that a break-up did not serve the interest of all stakeholders, which include employees, customers and suppliers in addition to shareholders, according to sources who did not want to be identified because the discussions were confidential. On November 13, 2013, Chen released an open message: "We are committed to reclaiming our success." In early July 2014, the TechCrunch online publication published an article titled "BlackBerry Is One Of The Hottest Stocks Of 2014, Seriously", following a 50 percent rise in the company's stock, an increase that was greater than that of peer companies such as Apple and Google; however, an analysis of BlackBerry's financial results showed that neither revenue nor profit margin had improved; instead, costs had been markedly reduced. During the same period, BlackBerry also introduced the new Passport handset—consisting of a square screen with "Full HD-class" (1,440 x 1,440) resolution and marketed to professional fields such as healthcare and architecture—promoted its Messenger app and released minor updates for the BB10 mobile operating system. On December 17, 2014, the BlackBerry Classic was introduced; it was meant to be more in line with the former Bold series, incorporating navigation buttons similar to the previous BlackBerry OS devices. When it was discontinued in June 2016, it was the last BlackBerry with a keyboard that dominated the front of the phone in the classic style. In September 2015, BlackBerry officially unveiled the BlackBerry Priv, a slider phablet with an 18-megapixel camera using a German-made lens, running the Android operating system with additional security and productivity-oriented features inspired by the BlackBerry operating systems. However, BlackBerry COO Marty Beard told Bloomberg that "The company's never said that we would not build another BB10 device." On July 26, 2016, the company hinted that another model with a physical keyboard was "coming shortly". The same day, BlackBerry unveiled a mid-range Android model with only an on-screen keyboard, the BlackBerry DTEK50, powered by the then latest version of Android, 6.0 Marshmallow (the Priv could also be upgraded to 6.0). This device featured a 5.2-inch full high-definition display. BlackBerry chief security officer David Kleidermacher stressed data security during the launch, indicating that this model included built-in malware protection and encryption of all user information. Industry observers pointed out that the DTEK50 is a re-branded version of the Alcatel Idol 4 with additional security-oriented software customizations, manufactured and designed by TCL.
In September 2016, BlackBerry Limited agreed to a licensing partnership with an Indonesian company to set up a new joint venture company called BB Merah Putih to "source, distribute, and market BlackBerry handsets in Indonesia". On October 25, 2016, BlackBerry released the BlackBerry DTEK60, the second device in the DTEK series, manufactured and designed by TCL. The device features a 5.5-inch Quad-HD touch screen display running on Qualcomm's Snapdragon 820 processor with support for Quick Charge 3.0, USB Type-C, and a fingerprint sensor. In October 2016, it was announced that BlackBerry will be working with the Ford Motor Company of Canada to develop software for the car manufacturer's connected vehicles. In February 2017, a $20m class action lawsuit against BlackBerry was announced by the former employees of the company. In March 2017, BB Merah Putih announced the BlackBerry Aurora, an Indonesian-made and sold device, running an operating system based on Android 7.0 out of the box. In March 2018, it was announced that BlackBerry would be working with Jaguar Land Rover to develop software for the car manufacturer's vehicles. In June 2018, BlackBerry in partnership with TCL Mobile and Optiemus Infracom launched the KEY2 at a global launch in New York. This is the third device to sport a keyboard while running Google's Android OS. Intellectual property litigation NTP Inc case In 2000 NTP sent notice of its wireless email patents to a number of companies and offered to license the patents to them. NTP brought a patent-infringement lawsuit against one of the companies, Research In Motion, in the United States District Court for the Eastern District of Virginia. This court is well known for its strict adherence to timetables and deadlines, sometimes referred to as the "rocket docket", and is particularly efficient at trying patent cases. The jury eventually found that the NTP patents were valid; furthermore, the jury established that RIM had infringed the patents in a "willful" manner, and the infringement had cost NTP US$33 million in damages (the greater of a reasonable royalty or lost profits). The judge, James R. Spencer, increased the damages to US$53 million as a punitive measure due to the willful nature of the infringement. He also instructed RIM to pay NTP's legal fees of US$4.5 million and issued an injunction ordering RIM to cease and desist infringing the patents—this decision would have resulted in the closure of BlackBerry's systems in the US. RIM appealed all of the findings of the court. The injunction and other remedies were stayed pending the outcome of the appeals. In March 2005 during appeal, RIM and NTP tried to negotiate a settlement of their dispute; the settlement was to be for $450 million. Negotiations broke down due to other issues. On June 10, 2005, the matter returned to the courts. In early November 2005 the US Department of Justice filed a brief requesting that RIM's service be allowed to continue because of the large number of BlackBerry users in the US Federal Government. In January 2006 the US Supreme Court refused to hear RIM's appeal of the holding of liability for patent infringement, and the matter was returned to a lower court. The prior granted injunction preventing all RIM sales in the US and use of the BlackBerry device might have been enforced by the presiding district court judge had the two parties been unable to reach a settlement. 
On February 9, 2006, the US Department of Defense (DOD) filed a brief stating that an injunction shutting down the BlackBerry service while excluding government users was unworkable. The DOD also stated that the BlackBerry was crucial for national security given the large number of government users. On February 9, 2006, RIM announced that it had developed software workarounds that would not infringe the NTP patents, and would implement those if the injunction was enforced. On March 3, 2006, after a stern warning from Judge Spencer, RIM and NTP announced that they had settled their dispute. Under the terms of the settlement, RIM has agreed to pay NTP $612.5 million (USD) in a "full and final settlement of all claims." In a statement, RIM said that "all terms of the agreement have been finalized and the litigation against RIM has been dismissed by a court order this afternoon. The agreement eliminates the need for any further court proceedings or decisions relating to damages or injunctive relief." The settlement amount is believed low by some analysts, because of the absence of any future royalties on the technology in question. On May 26, 2017, BlackBerry announced that it had reached an agreement with Qualcomm Incorporated resolving all amounts payable in connection with the interim arbitration decision announced on April 12, 2017. Following a joint stipulation by the parties, the arbitration panel has issued a final award providing for the payment by Qualcomm to BlackBerry of a total amount of U.S.$940,000,000 including interest and attorneys' fees, net of certain royalties due from BlackBerry for calendar 2016 and the first quarter of calendar 2017. KIK On November 24, 2010, Research In Motion (RIM) removed Kik Messenger from BlackBerry App World and limited the functionality of the software for its users. RIM also sued Kik Interactive for patent infringement and misuse of trademarks. In October 2013, the companies settled the lawsuit, with the terms undisclosed. Facebook and Instagram In 2018 it was reported that BlackBerry would be filing legal action against Facebook over perceived intellectual property infringements within both Facebook Messenger and WhatsApp as well as with Instagram. BlackBerry retail stores Many BlackBerry retail stores operate outside North America, such as in Thailand, Indonesia, United Arab Emirates, and Mexico. In December 2007 a BlackBerry Store opened in Farmington Hills, Michigan. The store offers BlackBerry device models from AT&T, T-Mobile, Verizon, and Sprint, the major U.S. carriers which offer smartphones. There were three prior attempts at opening BlackBerry stores in Toronto and London (UK), but they eventually folded. There are also BlackBerry Stores operated by Wireless Giant at airports in Atlanta, Boston, Charlotte, Minneapolis–St. Paul, Philadelphia, Houston, and Newark, but several have been slated for closing. On September 23, 2015, Blackberry opened its first pop-up store in Frankfurt, Germany. 2005, 2007, 2009, 2011 and 2012 outages At various stages of the company's history it suffered occasional service outages that have been referred to in the media as "embarrassing". In 2005 the company suffered a relatively short-term outage reportedly among a small handful of North America carriers. The service was restored after several hours. In 2007 the e-mail service suffered an outage which led for calls by some questioning the integrity towards BlackBerry's perceived centralized system. 
In 2009 the company had an outage reportedly covering the whole of North America. On October 10, 2011, at 10:00 UTC, a multi-day outage began in Europe, the Middle East and Africa, affecting millions of BlackBerry users. There was another outage the following day. By October 12, 2011, the BlackBerry Internet Service had gone down in North America as well. Research In Motion attributed the service disruptions to data overload caused by switch failures in its two data centres in Waterloo, Canada, and Slough, England. The outage intensified calls by shareholders for a shake-up in the company's leadership. BlackBerry estimated that the company lost between $50 million and $54 million due to the global email service failure and outage in 2011. Certification BCESA (BlackBerry Certified Enterprise Sales Associate, BCESA40 in full) is a BlackBerry certification for professional users of RIM (Research In Motion) BlackBerry wireless email devices. The certification requires the user to pass several exams relating to BlackBerry devices and all their functions, including desktop software, and to providing technical support to customers of BlackBerry devices. The BCESA, BlackBerry Certified Enterprise Sales Associate qualification, is the first of three levels of professional BlackBerry certification. BCTA (BlackBerry Certified Technical Associate) BlackBerry Certified Support Associate T2 More information on certifications is on the BlackBerry.com website. The BlackBerry Technical Certifications available are: BlackBerry Certified Enterprise Server Consultant (BCESC) BlackBerry Certified Server Support Technician (BCSST) BlackBerry Certified Support Technician (BCSTR) Products Android devices: BlackBerry Evolve X (2018) BlackBerry Evolve (2018) BlackBerry Key2 (2018) BlackBerry Motion (2017) BlackBerry Aurora (2017) BlackBerry KeyOne (2017) BlackBerry DTEK60 (2016) BlackBerry DTEK50 (2016) BlackBerry Priv (2015) BlackBerry 10 devices: BlackBerry Leap (2015) BlackBerry Classic (2014) BlackBerry Passport (2014) BlackBerry Porsche Design P'9983 (2014) BlackBerry Z3 (2014) BlackBerry Z30 (2013) BlackBerry Porsche Design P'9982 (2013) BlackBerry Q10 (2013) BlackBerry Z10 (2013) BlackBerry Q5 (2013) BlackBerry 7 devices: BlackBerry Bold series (2011): BlackBerry Bold 9900/9930/9790 BlackBerry 9720 (2013) BlackBerry Porsche Design (2012): BlackBerry Porsche Design P'9981 BlackBerry Torch series (2011): BlackBerry Torch 9810 BlackBerry Torch series (2011): BlackBerry Torch 9850/9860 BlackBerry Curve series (2011): BlackBerry 9350/9360/9370/9380 BlackBerry Curve 9320/9220 (2012) BlackBerry 6 devices: BlackBerry Torch series (2010): BlackBerry Torch 9800 BlackBerry Curve series (2010): BlackBerry Curve 9300/9330 BlackBerry Style 9670 (2010) BlackBerry Pearl series (2010): BlackBerry Pearl 3G 9100/9105 BlackBerry Bold series (2010–2011): BlackBerry Bold 9780/9788 BlackBerry 5 devices: BlackBerry Bold series (2008–2010): BlackBerry Bold 9000/9700/9650 BlackBerry Tour series (2009): BlackBerry Tour (9630) BlackBerry Storm series (2009): BlackBerry Storm 2 (9520/9550) BlackBerry Storm series (2008): BlackBerry Storm (9500/9530) BlackBerry Curve series (2009–2010): BlackBerry Curve 8900 (8900/8910/8980) BlackBerry Curve series (2009): BlackBerry Curve 8520/8530 BlackBerry 4 devices: BlackBerry 8800 series (2007): BlackBerry 8800/8820/8830 BlackBerry Pearl series (2006): BlackBerry Pearl 8100/8110/8120/8130 BlackBerry Pearl Flip series (2008): BlackBerry Pearl Flip 8220/8230 BlackBerry Curve series (2007): BlackBerry Curve
BlackBerry 3 devices: BlackBerry Java-based series: 5000, 6000 BlackBerry 2 devices: BlackBerry phone series: 7100 BlackBerry color series: 7200, 7500, 7700 BlackBerry 1 devices: BlackBerry pager models: 850, 857, 950, 957 Hardware Modern LTE-based phones such as the BlackBerry Z10 have a Qualcomm Snapdragon S4 Plus, a proprietary Qualcomm SoC based on the ARMv7-A architecture, featuring two 1.5 GHz Qualcomm Krait CPU cores and a 400 MHz Adreno 225 GPU. GSM-based BlackBerry phones incorporate an ARM 7, 9 or 11 processor. Some of the BlackBerry models (Torch 9850/9860, Torch 9810, and Bold 9900/9930) have a 1.2 GHz MSM8655 Snapdragon S2 SoC, 768 MB of system memory, and 8 GB of on-board storage. Entry-level models, such as the Curve 9360, feature a Marvell PXA940 clocked at 800 MHz. Some previous BlackBerry devices, such as the Bold 9000, were equipped with Intel XScale 624 MHz processors. The Bold 9700 featured a newer version of the Bold 9000's processor, clocked at the same speed. The Curve 8520 featured a 512 MHz processor, while BlackBerry 8000 series smartphones, such as the 8700 and the Pearl, are based on the 312 MHz ARM XScale ARMv5TE PXA900. An exception to this is the BlackBerry 8707, which is based on the 80 MHz Qualcomm 3250 chipset; this was due to the PXA900 chipset not supporting 3G networks. The 80 MHz processor in the BlackBerry 8707 meant the device was often slower to download and render web pages over 3G than the 8700 was over EDGE networks. Early BlackBerry devices, such as the BlackBerry 950, used Intel 80386-based processors. BlackBerry's latest flagship phone, the BlackBerry Z30, is built around a 5-inch Super AMOLED display with 1280×720 resolution at 295 ppi and 24-bit color depth, and is powered by Quad-Graphics and Qualcomm's dual-core 1.7 GHz MSM8960T Pro. The first BlackBerry with an Android operating system was released in late November 2015, the 192 gram/6.77 ounce BlackBerry Priv. It launched with Android version 5.1.1 but was later upgraded to version 6.0 Marshmallow. It was first available in four countries, but this increased to 31 countries by February 28, 2016. Employing a hexa-core, 64-bit Qualcomm 8992 Snapdragon 808 processor with a 600 MHz Adreno 418 GPU and 3 GB of RAM, this unit is equipped with a curved 5.4-inch (2560 x 1440) OLED display and a sliding QWERTY keyboard which is hidden when not in use; Google's voice recognition, which allows for dictating e-mails, is also available. The Priv retained the best BlackBerry 10 features. Its 3,410 mAh battery is said to provide 22.5 hours of mixed use. The 18-megapixel camera, with a Schneider-Kreuznach lens, can also record 4K video; a secondary selfie camera is also provided. Several important apps unique to the Priv were available from Google Play by mid December. Software A new operating system, BlackBerry 10, was released for two new BlackBerry models (Z10 and Q10) on January 30, 2013. At BlackBerry World 2012, RIM CEO Thorsten Heins demonstrated some of the new features of the OS, including a camera able to rewind frame-by-frame through individual faces in an image, allowing the best of several shots to be selected and stitched seamlessly into an optimal composite; an intelligent, predictive, and adapting keyboard; and a gesture-based user interface designed around the idea of "peek" and "flow". Apps are available for BlackBerry 10 devices through the BlackBerry World storefront.
The previous operating system developed for older BlackBerry devices was BlackBerry OS, a proprietary multitasking environment developed by RIM. The operating system is designed for use of input devices such as the track wheel, track ball, and track pad. The OS provides support for Java MIDP 1.0 and WAP 1.2. Previous versions allowed wireless synchronisation with Microsoft Exchange Server email and calendar, as well as with Lotus Domino email. OS 5.0 provides a subset of MIDP 2.0, and allows complete wireless activation and synchronisation with Exchange email, calendar, tasks, notes and contacts, and adds support for Novell GroupWise and Lotus Notes. The BlackBerry Curve 9360, BlackBerry Torch 9810, Bold 9900/9930, Curve 9310/9320 and Torch 9850/9860 featured the 2011 BlackBerry OS 7. Apps are available for these devices through BlackBerry World (which before 2013 was called BlackBerry App World). Third-party developers can write software using these APIs, and proprietary BlackBerry APIs as well. Any application that makes use of certain restricted functionality must be digitally signed so that it can be associated to a developer account at RIM. This signing procedure guarantees the authorship of an application but does not guarantee the quality or security of the code. RIM provides tools for developing applications and themes for BlackBerry. Applications and themes can be loaded onto BlackBerry devices through BlackBerry World, Over The Air (OTA) through the BlackBerry mobile browser, or through BlackBerry Desktop Manager. BlackBerry devices, as well as Android, iOS, and Windows Phone platforms, have the ability to use the proprietary BlackBerry Messenger, also known as BBM, software for sending and receiving encrypted instant messages, voice notes, images and videos via BlackBerry PIN. As long as your cell phone has a data plan these messages are all free of charge. Some of the features of BBM include groups, bar-code scanning, lists, shared calendars, BBM Music and integration with apps and games using the BBM social platform. In April 2013, BlackBerry announced that it was shutting down its streaming music service, BBM Music, which was active for almost two years since its launch. BlackBerry Messenger Music closed on June 2, 2013. In July 2014, BlackBerry revealed BlackBerry Assistant, a new feature for BlackBerry OS 10.3, and BlackBerry Passport hardware. The feature is a digital personal assistant to help keep you "organized, informed and productive." In December 2014, BlackBerry and NantHealth, a healthcare-focused data provider, launched a secure cancer genome browser, giving doctors the ability to access patients' genetic data on the BlackBerry Passport smartphone. In January 2022, BlackBerry announced that they would discontinue their services on all BlackBerry phones not running on Android on January 4. According to BlackBerry, "As of this date, devices running these legacy services and software through either carrier or Wi-Fi connections will no longer reliably function, including for data, phone calls, SMS and 9-1-1 functionality". Phones with BlackBerry email client Several non-BlackBerry mobile phones have been released featuring the BlackBerry email client which connects to BlackBerry servers. Many of these phones have full QWERTY keyboards. 
AT&T Tilt HTC Advantage X7500 HTC TyTN Motorola MPx220, some models Nokia 6810 Nokia 6820 Nokia 9300 Nokia 9300i Nokia 9500 Nokia Eseries phones, except models Nokia E66, Nokia E71 Qtek 9100 Qtek 9000 Samsung t719 Siemens SK65 Sony Ericsson P910 Sony Ericsson P990 Sony Ericsson M600i Sony Ericsson P1 Third-party software Third-party software available for use on BlackBerry devices includes full-featured database management systems, which can be used to support customer relationship management clients and other applications that must manage large volumes of potentially complex data. In March 2011, RIM announced an optional Android player that could play applications developed for the Android system would be available for the BlackBerry PlayBook, RIM's first entry in the tablet market. On August 24, 2011 Bloomberg News reported unofficial rumors that BlackBerry devices would be able to run Android applications when RIM brings QNX and the Android App Player to BlackBerry. On October 20, 2011, RIM officially announced that Android applications could run, unmodified, on the BlackBerry tablet and the newest BlackBerry phones, using the newest version of its operating system. Connectivity BlackBerry smartphones can be integrated into an organization's email system through a software package called BlackBerry Enterprise Server (BES) through version 5, and BlackBerry Enterprise Service (BES) as of version 10. (There were no versions 6 through 9.) Versions of BES are available for Microsoft Exchange, Lotus Domino, Novell GroupWise and Google Apps. While individual users may be able to use a wireless provider's email services without having to install BES themselves, organizations with multiple users usually run BES on their own network. BlackBerry devices running BlackBerry OS 10 or later can also be managed directly by a Microsoft Exchange Server, using Exchange ActiveSync (EAS) policies, in the same way that an iOS or Android device can. (EAS supports fewer management controls than BES does.) Some third-party companies provide hosted BES solutions. Every BlackBerry has a unique ID called a BlackBerry PIN, which is used to identify the device to the BES. BlackBerry at one time provided a free BES software called BES Express (BESX). The primary BES feature is to relay email from a corporate mailbox to a BlackBerry phone. The BES monitors the user's mailbox, relaying new messages to the phone via BlackBerry's Network Operations Center (NOC) and user's wireless provider. This feature is known as push email, because all new emails, contacts, task entries, memopad entries, and calendar entries are pushed out to the BlackBerry device immediately (as opposed to the user synchronising the data manually or having the device poll the server at intervals). BlackBerry also supports polling email, through third-party applications. The messaging system built into the BlackBerry only understands how to receive messages from a BES or the BIS, these services handle the connections to the user's mail providers. Device storage also enables the mobile user to access all data off-line in areas without wireless service. When the user reconnects to wireless service, the BES sends the latest data. A feature of the newer models of the BlackBerry is their ability to quickly track the user's current location through trilateration without the use of GPS, thus saving battery life and time. Trilateration can be used as a quick, less battery intensive way to provide location-aware applications with the co-ordinates of the user. 
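The geometry behind this can be illustrated with a small, idealized calculation: given the known positions of three cell towers and an estimated distance to each, the handset lies where the three circles intersect. The Python sketch below uses made-up tower coordinates and ranges purely to show the principle; it is not RIM's actual positioning code, which must also cope with noisy range estimates.

# Minimal 2D trilateration sketch: estimate a position from the known
# coordinates of three cell towers and the estimated distance to each.
# Idealized illustration only, with hypothetical tower positions; not
# RIM's actual positioning code, which must handle noisy range data.
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Return (x, y) satisfying |p - pi| = ri for the three towers."""
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    # Subtracting the circle equations pairwise yields two linear equations in x and y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1  # zero if the three towers are collinear
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Hypothetical towers 1 km apart; ranges in metres.
print(trilaterate((0, 0), 707.1, (1000, 0), 707.1, (0, 1000), 707.1))
# -> (500.0, 500.0)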
However, the accuracy of BlackBerry trilateration is less than that of GPS due to a number of factors, including cell tower blockage by large buildings, mountains, or distance. BES also provides phones with TCP/IP connectivity accessed through a component called MDS (Mobile Data System) Connection Service. This allows custom application development using data streams on BlackBerry devices based on the Sun Microsystems Java ME platform. In addition, BES provides network security, in the form of Triple DES or, more recently, AES encryption of all data (both email and MDS traffic) that travels between the BlackBerry phone and a BlackBerry Enterprise Server. Most providers offer flat monthly pricing via special Blackberry tariffs for unlimited data between BlackBerry units and BES. In addition to receiving email, organizations can make intranets or custom internal applications with unmetered traffic. With more recent versions of the BlackBerry platform, the MDS is no longer a requirement for wireless data access. Starting with OS 3.8 or 4.0, BlackBerry phones can access the Internet (i.e. TCP/IP access) without an MDS – formerly only email and WAP access was possible without a BES/MDS. The BES/MDS is still required for secure email, data access, and applications that require WAP from carriers that do not allow WAP access. The primary alternative to using BlackBerry Enterprise Server is to use the BlackBerry Internet Service (BIS). BlackBerry Internet Service is available in 91 countries internationally. BlackBerry Internet Service was developed primarily for the average consumer rather than for the business consumer. The service allows users to access POP3, IMAP, and Outlook Web App (not via Exchange ActiveSync) email accounts without connecting through a BlackBerry Enterprise Server (BES). BlackBerry Internet Service allows up to 10 email accounts to be accessed, including proprietary as well as public email accounts (such as Gmail, Outlook, Yahoo and AOL). BlackBerry Internet Service also supports the push capabilities of various other BlackBerry Applications. Various applications developed by RIM for BlackBerry utilise the push capabilities of BIS, such as the Instant Messaging clients (like Google Talk, Windows Live Messenger and Yahoo Messenger). The MMS, PIN, interactive gaming, mapping and trading applications require data plans like BIS (not just Wi-Fi) for use. The service is usually provisioned through a mobile phone service provider, though BlackBerry actually runs the service. BlackBerry PIN The BlackBerry PIN (Personal Identification Number) is an eight-character hexadecimal identification number assigned to each BlackBerry device. PINs cannot be changed manually on the device (though BlackBerry technicians are able to reset or update a PIN server-side), and are locked to each specific BlackBerry. BlackBerry devices can message each other using the PIN directly or by using the BlackBerry Messenger application. BlackBerry PINs are tracked by BlackBerry Enterprise Servers and the BlackBerry Internet Service and are used to direct messages to a BlackBerry device. Emails and any other messages, such as those from the BlackBerry Push Service, are typically directed to a BlackBerry device's PIN. The message can then be routed by a RIM Network Operations Center, and sent to a carrier, which will deliver the message the last mile to the device. 
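Since a PIN is just an eight-character hexadecimal string, software that stores or routes messages can sanity-check the identifier before using it as an address. The short Python helper below is a hypothetical illustration of such a check; the function name and the sample PIN are invented for this example and are not part of any BlackBerry API.

import re

# A BlackBerry PIN is an eight-character hexadecimal string. This helper
# is a hypothetical illustration only; the function name and sample PIN
# are invented for this example and are not part of any BlackBerry API.
PIN_PATTERN = re.compile(r"[0-9A-Fa-f]{8}")

def normalize_pin(pin: str) -> str:
    """Return the PIN in canonical upper-case form, or raise ValueError."""
    pin = pin.strip()
    if not PIN_PATTERN.fullmatch(pin):
        raise ValueError("not an eight-character hexadecimal PIN: " + repr(pin))
    return pin.upper()

print(normalize_pin("2100000a"))  # -> 2100000A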
In September 2012 RIM announced that the BlackBerry PIN would be replaced by users' BlackBerry ID starting in 2013 with the launch of the BlackBerry 10 platform. Competition and financial results The primary competitors of the BlackBerry are Android smartphones and the iPhone. BlackBerry has struggled to compete against both and its market share has plunged since 2011, leading to speculation that it will be unable to survive as an independent going concern. However, it has managed to maintain significant positions in some markets. Despite the loss of market share, on a global basis the number of active BlackBerry subscribers increased substantially through the years. For example, for the fiscal period during which the Apple iPhone was first released, RIM reported a subscriber base of 10.5 million BlackBerry subscribers. At the end of 2008, when Android first hit the market, RIM reported that the number of BlackBerry subscribers had increased to 21 million. After the release of the Apple iPhone 5 in September 2012, RIM CEO Thorsten Heins announced that the global subscriber base had grown to 80 million, which sparked a 7% jump in the share price. However, BlackBerry's global user base (meaning active accounts) has since declined dramatically from its peak of 80 million in June 2012, dropping to 46 million users in September 2014. Its global market share has also declined to less than 1 percent. In 2011, BlackBerry accounted for 43% of all smartphones shipped to Indonesia. By April 2014 this had fallen to 3%. The decline in the Indonesian market share mirrors a global trend for the company (0.6% in North America). The retail price of 2,199,000 Indonesian Rupiah ($189) failed to give BlackBerry the boost it needed in Indonesia. The company launched the device with a discounted offer to the first 1000 purchasers, which resulted in a stampede in the capital in which several people were injured. BlackBerry lost market share in Indonesia despite the launch of the Z3 on May 13, 2014. The new device was given a worldwide launch in the city of Jakarta and came on the back of the news that Research in Motion (RIM) was to cut hardware production costs by outsourcing production to the Taiwan-based Foxconn Group. During the report of its third quarter 2015 results on December 18, 2015, the company said that approximately 700,000 handsets had been sold, down from 1.9 million in the same quarter in 2014, and down from 800,000 in Q2 of 2015. The average sale price per unit was up from $240 to $315, however. This was expected to continue increasing with sales of the new Android-based Priv, which was selling at a premium price ($800 in Canada, for example). In Q3 of 2015, BlackBerry had a net loss of $89 million U.S., or 17 cents per share, but only a $15 million net loss, or three cents per share, when excluding restructuring charges and other one-time items. Revenue was up slightly from a year earlier, at $557 million U.S. vs. $548 million, primarily because of software sales. Chief executive officer John Chen said that he expected the company's software business to grow at or above the market rate (14 percent). At the time, the company was not ready to provide sales figures for the Android-based Priv handset, which had been released only weeks earlier and in only four countries at that time, but Chen offered this comment to analysts: "Depending on how Priv does ... there is a chance we could achieve or get closer to break-even operating profitability for our overall device business in the (fourth) quarter".
Due to a continuous reduction in BlackBerry users, in February 2016 the Blackberry headquarters in Waterloo, Ontario, Canada, slashed 35 percent of its workforce. By early 2016, Blackberry market share dropped to 0.2%. In Q4 2016, reports indicate Blackberry sold only 207,900 units—equivalent to a 0.0% market share. User base The number of active BlackBerry users since 2003 globally: Security agencies access Research in Motion agreed to give access to private communications to the governments of United Arab Emirates and Saudi Arabia in 2010, and India in 2012. The Saudi and UAE governments had threatened to ban certain services because their law enforcement agencies could not decrypt messages between people of interest. It was revealed as a part of the 2013 mass surveillance disclosures that the American and British intelligence agencies, the National Security Agency (NSA) and the Government Communications Headquarters (GCHQ) respectively, have access to the user data on BlackBerry devices. The agencies are able to read almost all smartphone information, including SMS, location, e-mails, and notes through BlackBerry Internet Service, which operates outside corporate networks, and which, in contrast to the data passing through internal BlackBerry services (BES), only compresses but does not encrypt data. Documents stated that the NSA was able to access the BlackBerry e-mail system and that they could "see and read SMS traffic". There was a brief period in 2009 when the NSA was unable to access BlackBerry devices, after BlackBerry changed the way they compress their data. Access to the devices was re-established by GCHQ. GCHQ has a tool named SCRAPHEAP CHALLENGE, with the capability of "Perfect spoofing of emails from Blackberry targets". In response to the revelations BlackBerry officials stated that "It is not for us to comment on media reports regarding alleged government surveillance of telecommunications traffic" and added that a "back door pipeline" to their platform had not been established and did not exist. Similar access by the intelligence agencies to many other mobile devices exists, using similar techniques to hack into them. The BlackBerry software includes support for the Dual EC DRBG CSPRNG algorithm which, due to being probably backdoored by the NSA, the US National Institute of Standards and Technology "strongly recommends" no longer be used. BlackBerry Ltd. has however not issued an advisory to its customers, because they do not consider the probable backdoor a vulnerability. BlackBerry Ltd. also owns US patent 2007189527, which covers the technical design of the backdoor. Usage The (formerly) advanced encryption capabilities of the BlackBerry Smartphone made it eligible for use by government agencies and state forces. On January 4, 2022, Blackberry announced that older phones running Blackberry 10, 7.1 OS and earlier will no longer work. Barack Obama Former United States president Barack Obama became known for his dependence on a BlackBerry device for communication during his 2008 Presidential campaign. Despite the security issues, he insisted on using it even after inauguration. This was seen by some as akin to a "celebrity endorsement", which marketing experts have estimated to be worth between $25 million and $50 million. His usage of BlackBerry continued until around the end of his presidency. Hillary Clinton The Hillary Clinton email controversy is associated with Hillary Clinton continuing to use her BlackBerry after assuming the office of Secretary of State. 
Use by government forces An example is the West Yorkshire Police, where the devices have allowed an increased presence of police officers along the streets and a reduction in public spending, given that each officer can perform desk work directly via the mobile device, as well as in several other areas and situations. The US Federal Government has been slow to move away from the BlackBerry platform, with a State Department spokesperson saying in 2013 that BlackBerry devices were still the only mobile devices approved by the State Department for U.S. missions abroad. The high encryption standards that made BlackBerry smartphones and the PlayBook tablet unique have since been implemented in other devices, including most Apple devices released after the iPhone 4. The Bangalore City Police is one of the few police departments in India, along with the Pune Police and Kochi Police, to use BlackBerry devices. Use by transportation staff In the United Kingdom, South West Trains and Northern Rail have issued BlackBerry devices to guards in order to improve communication between control, guards and passengers. In Canada, Toronto and many other municipalities have issued BlackBerry devices to most of their employees, including but not limited to transportation, technical, water and operations inspection staff and all management staff, in order to improve communication with contracted construction companies and winter maintenance operations, and to assist in successfully organizing multimillion-dollar contracts. The devices are the standard mobile device for receiving e-mail redirected from GroupWise. As part of its Internet of Things endeavours, the company announced plans to move into the shipping industry by adapting its smartphone devices to the communication needs of freight containers. Other users Eric Schmidt, chief executive of Google from 2001 to 2011, is a longtime BlackBerry user. Although smartphones running Google's Android mobile operating system compete with BlackBerry, Schmidt said in a 2013 interview that he uses a BlackBerry because he prefers its keyboard. The Italian criminal group known as the 'Ndrangheta was reported in February 2009 to have communicated overseas with the Gulf Cartel, a Mexican drug cartel, through the use of BlackBerry Messenger, since BBM texts are "very difficult to intercept". Kim Kardashian was also a BlackBerry user. In 2014, she reportedly said at the Code/Mobile conference that "BlackBerry has my heart and soul. I love it. I'll never get rid of it" and "I have anxiety that I will run out and I won't be able to have a BlackBerry. I'm afraid it will go extinct." Kardashian also admitted to keeping a supply of BlackBerry phones with her because she was loyal to the BlackBerry Bold model. She has since been spotted using an iPhone instead. See also BlackBerry Limited (formerly Research in Motion) BlackBerry Mobile Comparison of smartphones Index of articles related to BlackBerry OS List of BlackBerry products QWERTY Science and technology in Canada T9 (predictive text) References Bibliography Research In Motion Reports Fourth Quarter and Year-End Results For Fiscal 2005 Research In Motion Fourth Quarter and 2007 Fiscal Year End Results External links Computer-related introductions in 1999 BlackBerry Limited mobile phones Canadian brands C++ software Goods manufactured in Canada Information appliances Personal digital assistants Science and technology in Canada Pager companies
5088118
https://en.wikipedia.org/wiki/Evilspeak
Evilspeak
Evilspeak is a 1981 American horror film directed by Eric Weston and co-written by Weston and Joseph Garofalo. The film stars Clint Howard as an outcast cadet named Stanley Coopersmith, who frequently gets tormented by his mates and advisers at a military academy. Upon finding a book of black mass that belonged to the evil medieval Father Esteban, he taps through a computer to conjure Satan and summons spells and demons to get revenge upon his harassers. The movie was one of the infamous "video nasties" banned in the United Kingdom in the 1980s. Plot During the Dark Ages, Satanic leader Father Lorenzo Esteban and his followers are approached by a church official on the shore of Spain, telling them that they are banished from Spain and denied God's grace unless they renounce Satan and their evil ways. In the present, Stanley Coopersmith is a young cadet at West Andover military academy. He remains as a social outcast who is bullied by his classmates due to him being an orphan and treated unfairly by his instructors who believe him to be inept at everything. When he is punished for no clear reason by cleaning the church cellar, he finds a room belonging to Father Esteban which contains books of black magic along with Esteban's diary. He then uses his computer skills to translate the book from Latin into English. The translation describes Estaban as a Satanist and the book contains rituals for performing the Black Mass along with a promise by Esteban citing "I Will Return". Waking up late the next morning, Stanley finds his alarm clock unplugged and his clothing tied in knots, courtesy of his belligerent classmates. This causes him to be tardy for morning classes, and his teacher writes him a punishment note to be taken to the school's Colonel headmaster Kincaid's office. He is sent to the office, where he accidentally leaves the diary on the desk of the school secretary who hides it. While Stanley is being made to clean the stables as punishment for no reason, the office secretary begins to finger the jewels on the front of the diary. Trying to pry the jewels out of their settings causes the pigs in the stable to attack Stanley. Stanley returns to his dormitory room to find his belongings scattered again, and cannot find his book of black magic. He assumes that his classmates stole it, and he confronts them about the supposed theft at a local roller rink, but they deny knowledge of the book and he leaves. Stanley then goes to the school's computer lab to perform more general research on Satanism, even though his book is still missing. His allotted time in the lab runs out, and he is forced to leave with his research incomplete. Stanley appears in the church cellar with computer equipment, which is assumed to be stolen from the school's computer lab. He sets up the computer and runs some inquiries into the requirements for a black mass. Searching through various bottles in the cellar left by Father Estaban, he attempts to initiate a mass but the computer informs him that he is still missing crucial ingredients, namely blood and a consecrated host. He is nearly discovered by Reverend Jameson, the church's current pastor, who sends him off to the mess hall to eat dinner. After arriving at the mess hall too late for lunch, he befriends the school's good-natured cook who makes a meal for him and shows him a litter of puppies that his dog just had. Stanley takes the smallest pup for himself, names him Fred and hides him in the church cellar. 
Stanley steals the host from the church and then notices Esteban's portrait on the wall. Using the translation, he attempts the ritual and is suddenly attacked by his classmates, who are wearing masks and robes. After knocking him unconscious, they leave. Although Stanley thinks he has successfully performed the ritual, his computer tells him that the ritual was incomplete, and a pentagram appears on the computer screen. Stanley accidentally wakes the drunken caretaker, Sarge, who accuses him of being a thief for stealing a crowbar. Sarge attacks Stanley, who screams for help. The computer flares to life with a red pentagram on it. An unseen force then takes Sarge's head and turns it completely around, breaking his neck. Stanley discovers a catacomb filled with decapitated skeletal remains and the crypt of Father Esteban. After hiding Sarge's body, he leaves. The school's secretary attempts one last time to pry the pentagram from the black magic book she stole from Stanley. She fails, cutting her finger, which bleeds. She undresses, begins to take a shower and is fatally attacked by demonically spawned boars. After watching a beauty pageant at the school's pep rally, Stanley is attacked by his classmates, who tell him that if he tries to play in the big game tomorrow they will find and kill Fred. After witnessing the beating Stanley takes at the hands of his classmates, the hostile headmaster Kincaid kicks Stanley off the soccer team instead of punishing the bullies. After a night of drinking, Stanley's classmates make their way into Esteban's hidden room and find his computer program. After they kill Fred, the computer tells them that the blood used must be human blood. After finding Fred's mutilated body, Stanley becomes enraged. The diary appears lying on Esteban's casket. When a teacher catches Stanley in the church stealing the host, the teacher follows him to the catacombs, where Stanley is translating the rest of the diary. Stanley pledges his life to Satan, then kills his teacher on a spiked wheel and collects his blood. Unaware of the ritual being performed, Stanley's classmates, the coach, Kincaid, and Jameson are all in attendance at a service. Once the ritual is successfully performed, Esteban's soul possesses Stanley's body and takes up a sword. Meanwhile, a nail from the large crucifix hanging over the church's altar is pried out by an unseen force, flies across the room and is driven into Jameson's skull. Stanley then rises from the cellar below, engulfed in flames and wielding a sword. A pack of large black boars pours out of the hole in the floor, above which Stanley now hovers. He then decapitates Kincaid and his coach. His classmates try to flee from the church only to be devoured by the boars. In the catacombs, Bubba, the lead bully, tries to escape, only to have the caretaker come back to life and kill him by ripping out his heart. The church burns to the ground. The epilogue text states that Stanley survived the attack, but after witnessing the fiery death of his classmates, he went catatonic from shock and was committed to Sunnydale Asylum, where he remains. Stanley's true fate is revealed as his face appears on the computer screen in the cellar with the words "By the four beasts before the throne. By the fire which is about the throne. By the most holy and glorious name, Satan. I, Stanley Coopersmith will return. I WILL RETURN". Cast Clint Howard as Stanley Coopersmith R. G.
Armstrong as Sarge Joseph Cortese as Reverend Jameson Claude Earl Jones as Coach Haywood Nelson as Kowalski Don Stark as Bubba Charles Tyner as Colonel Kincaid Hamilton Camp as Hauptman Louie Gravance as Jo Jo Jim Greenleaf as Ox Lynn Hancock as Miss Friedemeyer Loren Lester as Charlie Boy Lenny Montana as Jake Leonard D'John as Tony Richard Moll as Father Lorenzo Estaban Production The film was shot in three weeks, using locations in Santa Barbara at what is now Garden Street Academy and a condemned church in South Central Los Angeles. According to DVD commentary, the dilapidated church was superficially renovated for the movie shoot, confusing a priest who previously worked there and causing him to get on his knees and pray to God. The church was burned to the ground some three days later. Release and controversy Evilspeak was released on August 22, 1981 in Japan, and February 26, 1982 in the United States. The movie was cited as a video nasty in the UK following its release on the Videospace label. It remained banned for a number of years as part of the Video Recordings Act 1984, thanks to its gory climax and themes of Satanism. The film was reclassified and re-released in 1987 but with over three minutes of cuts, which included the removal of most of the gore from the climax and all text images of the Black Mass on the computer screen. It was then subsequently passed uncut by the BBFC in 2004 and is now available in both an uncut form and a version re-edited by the distributors to tighten up the dialogue. Anton LaVey, the late founder and High priest of the Church of Satan, was a great fan of the film and considered it to be very Satanic. Actor Clint Howard said that director Eric Weston's original version of the film that was submitted to the MPAA was longer and contained more blood, gore and nudity than the unrated version of the film, especially during the shower/pig attack scene and the final confrontation. In a July 2017 interview for Gilbert Gottfried's Amazing Colossal Podcast, Howard also revealed that the film's producers made him pay for his own toupée. Critical reception AllMovie called it "essentially a gender-bent rip-off of Carrie", though "there is enough in the way of spooky atmosphere and well-staged shocks to keep less discriminating horror fans interested." PopMatters gave the film a 7/10 grade, despite writing "What started as a standard wish fulfillment/revenge scheme mixed with Satanism flounders with a lack of focus." DVD Verdict wrote "Evilspeak is a crazy movie. Like, crazy. In a good way. Unfortunately, it's also kind of boring at times, taking well over an hour to get where it's going. [...] Despite the slower spots—and there are plenty of slower spots—Evilspeak remains an enjoyably overlooked horror film just for its eccentricities." A slightly more favourable review came from TV Guide, who wrote "The directorial debut of Eric Weston, Evilspeak is remarkably engaging, imaginative and well-crafted. It contains a strong performance from Howard, plus a deliciously over-the-top nasty turn by veteran character actor R.G. Armstrong." References External links 1981 films American films 1981 horror films English-language films American films about revenge American supernatural horror films Films about Satanism Films about bullying Demons in film Films about spirit possession Video nasties American splatter films
6664689
https://en.wikipedia.org/wiki/EndNote
EndNote
EndNote is a commercial reference management software package, used to manage bibliographies and references when writing essays, reports and articles. EndNote's ownership has changed hands several times since it was launched in 1989 by Niles & Associates: in 2000 it was acquired by the Institute for Scientific Information's Research Soft Division, part of Thomson Corporation, and in 2016 by Clarivate (then named Clarivate Analytics). Features EndNote groups citations into "libraries" with the file extension *.enl and a corresponding *.data folder. There are several ways to add a reference to a library: manually, or by exporting, importing, copying from another EndNote library, or connecting from within EndNote. The program presents the user with a window containing a dropdown menu from which to select the type of reference they require (e.g., book, congressional legislation, film, newspaper article, etc.), and fields ranging from the general (author, title, year) to those specific to the kind of reference (abstract, author, ISBN, running time, etc.). Most bibliographic databases allow users to export references to their EndNote libraries. This enables the user to select multiple citations and saves the user from having to manually enter the citation information and the abstracts. Some databases (e.g., PubMed) require the user to select citations, select a specific format, and save them as .txt files. The user can then import the citations into the EndNote software. It is also possible to search library catalogs and free databases, such as PubMed, from within the EndNote software program itself. If the user fills out the necessary fields, EndNote can automatically format the citation into any of over 2,000 different styles the user chooses; for example, the same citation to Gray's Anatomy can be rendered in several different styles simply by selecting a different output style. In Windows, EndNote creates a file with an *.enl extension, along with a *.data folder containing two MySQL files, pdb.eni and sdb.eni. EndNote can be installed so that its features, like Cite While You Write, appear in the Tools menu of Microsoft Word and OpenOffice.org Writer. EndNote can export citation libraries as HTML, plain text, Rich Text Format, or XML. From version X.7.2, one library can be shared with up to 14 other EndNote users. The data is synchronized via the EndNote cloud service, with everybody having full write access to the library. EndNote can also organize PDFs on the user's hard drive (or full text on the web) through links to files or by inserting copies of PDFs. It is also possible to save a single image, document, Excel spreadsheet, or other file type to each reference in an EndNote library. Starting from EndNote X version 1.0.1, OpenDocument (ODT) files are supported through the Format Paper command. History EndNote Version 1 was released as a “Reference Database and Bibliography Maker” for Apple Macintosh in ca. 1989 by Niles & Associates (www.niles.com, currently defunct) of Emeryville/Berkeley, CA, at the list price of US$129 plus shipping.[1] As one of the earlier reviews noted, “EndNote is a citation manager, not a personal online catalog. Its focus is on inserting citations into written documents,” although it has had the “ability to import formatted references from other databases” from its very early days.[2] However, starting with version 2 in ca.
1995, reviewers considered EndNote as “a dual purpose program” that “functions as a database manager and as a bibliography maker to insert citations into word processing documents and later compiles the bibliography in the required format.”[3] Also, starting with version 2.1, EndNote has been available for Windows.[4] The early versions of EndNote had a limit on the total number of references in a library to be under 32,000.[4][5] In 1992, there were four other products competing with EndNote: Pro-Cite, Reference Manager, Papyrus[6] and Bibilostax.[7] In 1998-2015 Biblioscape was on this list as well.[8] Mendeley was added to this list in 2008. By 2021 only EndNote and Mendeley remained viable products, and the rest became defunct. With the release of version 4.0 in ca. 2000, EndNote attained most of its current functionality.[9-11] In the same year EndNote was acquired by the Institute for Scientific Information’s Research Soft Division, part of Thomson Corporation.[11] Since then, i.e. for the last 20 years, EndNote’s functionality has not had any substantial changes.[12] In 2016 EndNote was transferred from Thomson Reuters to a spin-off company, Clarivate. In September 2008, Thomson Reuters, the owners of EndNote, sued the Commonwealth of Virginia and requested an injunction against competing reference management software. George Mason University's Center for History and New Media had developed Zotero, a free/open-source extension to Mozilla Firefox. Thomson Reuters alleged that the Zotero developers reverse engineered and/or decompiled EndNote, that Zotero can transform proprietary EndNote citation style files (.ens) to the open Citation Style Language format, that they host files converted in this manner, and that they abuse the "EndNote" trademark in describing this feature. Thomson Reuters claimed that this is a violation of the site license agreement. They also added a restrictive click-thru license to their styles download website. George Mason University responded that it would not renew its site license for EndNote, that "anything created by users of Zotero belongs to those users, and that it should be as easy as possible for Zotero users to move to and from the software as they wish, without friction." The journal Nature editorialized that "the virtues of interoperability and easy data-sharing among researchers are worth restating. Imagine if Microsoft Word or Excel files could be opened and saved only in these proprietary formats, for example. It would be impossible for OpenOffice and other such software to read and save these files using open standards — as they can legally do." The case was dismissed on June 4, 2009. EndNote Web EndNote Web, a web-based implementation of EndNote, offers integration with the ISI Web of Knowledge. Import format EndNote can import records from other databases or text files. The latter are known as tagged import files and use a syntax based on Refer/BibIX. Fields are identified by a capital letter following a percent sign, separated from the value by a single space. Discrete references are separated by a single empty line. These files are typically saved using the file extension .enw.
%0 Book
%A Geoffrey Chaucer
%D 1957
%T The Works of Geoffrey Chaucer
%E F.
%I Houghton
%C Boston
%N 2nd

%0 Journal Article
%A Herbert H. Clark
%D 1982
%T Hearers and Speech Acts
%B Language
%V 58
%P 332-373

%0 Thesis
%A Cantucci, Elena
%T Permian strata in South-East Asia
%D 1990
%I University of California, Berkeley
%9 Dissertation
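The structure of these tagged records makes them straightforward to process with a small script. The following Python sketch is only an illustration of the format described above, not EndNote's own import code; it ignores continuation lines and less common tags, and the function name is chosen for this example.

# Minimal reader for the tagged (Refer/BibIX-style) import format shown
# above: each field is a percent sign, a one-character tag, a space and
# the value; records are separated by a blank line. Illustration only,
# not EndNote's own importer; continuation lines and rare tags are ignored.
def parse_tagged(text):
    records, current = [], {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line:  # a blank line ends the current record
            if current:
                records.append(current)
                current = {}
            continue
        if line.startswith("%") and len(line) > 2:
            tag, value = line[1], line[3:]
            # Repeatable tags (e.g. %A for multiple authors) are collected in a list.
            current.setdefault(tag, []).append(value)
    if current:
        records.append(current)
    return records

sample = "%0 Book\n%A Geoffrey Chaucer\n%D 1957\n%T The Works of Geoffrey Chaucer\n\n%0 Journal Article\n%A Herbert H. Clark\n%T Hearers and Speech Acts\n%D 1982"
for record in parse_tagged(sample):
    print(record["0"][0], "-", record["T"][0])
# -> Book - The Works of Geoffrey Chaucer
#    Journal Article - Hearers and Speech Acts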
Tags and fields The complete map of EndNote tags for different reference types is available on GitHub; in abbreviated form, one table lists the EndNote tags and their associated field names, and another lists the standard reference types in which each field is used. Compare this scheme with the much older refer scheme, which uses a similar syntax. Entire records are separated by a single blank line. Version history and compatibility Niles and Associates produced early versions of EndNote (see, for example, PC Mag, 24 September 1991: "EndNote (Niles & Associates) [...] stores up to 32,000 references and creates bibliographies in any style you choose."). EndNote 20.2.1 for Windows, released on 30 November 2021. EndNote 20.2 for Mac and Windows, released on 9 November 2021. EndNote 20 for Windows, released on 30 October 2020, with EndNote 20 for macOS to be released "soon". EndNote X9.3.3 for Windows & Mac, released 28 April 2020. Library files (i.e. citation databases) created with this version are in a different format from previous versions. Earlier versions of EndNote may not be able to read library files created with this version of EndNote. EndNote X9.2 for Windows & Mac, released 11 June 2019. EndNote X9.1.1 for Mac, released 29 March 2019. EndNote X9.1 for Windows & Mac, released 12 March 2019. EndNote X9 for Windows & Mac, released 1 August 2018. EndNote X8.2 for Windows, released 9 January 2018. EndNote X8 for Windows & Mac, released 8 November 2016. EndNote X7.5 for Windows & Mac, released 2 February 2016. EndNote X7.4 for Windows & Mac, released 11 August 2015. EndNote X7.3 for Windows & Mac, released 1 April 2015. EndNote X7.2 for Windows & Mac, released 30 September 2014. EndNote X7.1 for Windows & Mac, released 2 April 2014. EndNote X7.0.1 for Mac, released 13 November 2013. EndNote X7.0.2 for Windows, released 23 October 2013. EndNote X7 for Mac, released July 2013. EndNote X7 for Windows, released 20 May 2013; compatible with Microsoft Word 2013. EndNote X6 for Windows, released 6 August 2012; EndNote X6 for Mac, released Q4 2012, compatible with OS X 10.8. EndNote X5 for Mac, released September 2011; introduced official compatibility with OS X 10.7 Lion. EndNote X5 for Windows, released 21 June 2011. Compatible with Microsoft Word 2010. EndNote X4 for Mac, released 23 August 2010. Introduced official compatibility with Mac OS X 10.6 Snow Leopard. Initially not compatible with Microsoft Office 2011; a compatibility update was subsequently made available on the EndNote website. EndNote X4 for Windows, released 15 June 2010. Introduced official compatibility with Microsoft Windows 7. Introduced official compatibility with Microsoft Word 2010 (required an update via Help -> Program Updates, or directly from the EndNote website). EndNote X3 for Mac, released 26 August 2009. Not compatible with Microsoft Word 2011. EndNote X3 and later are not supported on systems running Mac OS X 10.4 Tiger. EndNote X3 for Windows, released 17 June 2009.
Not compatible with Microsoft Word 2010; Word 2010 cannot be started without disabling the EndNote Addin. EndNote X2 for Mac, released 3 September 2008. EndNote X2 for Windows, released 11 June 2008. The "Cite While You Write" feature in EndNote X2 was originally not compatible with 64-bit versions of Windows, but a patch can fix this issue. Last update: Version 12.0.4 (build 4459). EndNote X1 for Mac, released 21 August 2007. The Cite While You Write feature of EndNote X1 for Mac OS was originally only compatible with Word 10.1.2-10.1.6 and Word 2004. Due to changes in the way third-party addins were supported in Word 2008, Cite While You Write was not natively compatible with Word 2008. A patch was released on June 26, 2008 that restored Cite While You Write functionality to Word 2008. EndNote X1 for Windows, released 20 August 2007. EndNote X1 and later are compatible with Windows Vista. EndNote X for Mac, released 25 August 2006. EndNote X and later are "Universal applications" that execute natively on both PPC and Intel-based Macs. Introduced compatibility with Mac OS X 10.5 Leopard. EndNote libraries that have been opened and used with EndNote version X or greater should not be subsequently used with an EndNote version earlier than version X. EndNote X for Windows, released 9 June 2006. EndNote libraries that have been opened and used with EndNote version X or greater should not be subsequently used with an EndNote version earlier than version X. EndNote 9 for Mac, released 29 August 2005. Introduced compatibility with Mac OS X 10.4 Tiger. Due to major compatibility issues, it is not recommended to run EndNote 9 or earlier on OS X 10.5 Leopard. EndNote 9 for Windows, released 21 June 2005. EndNote 8 for Mac, released 30 November 2004. Introduced compatibility with Mac OS X 10.3 Panther. EndNote 8 for Windows, released 21 June 2004. EndNote 7 for Mac, released 26 August 2003. Not certified compatible with OS X 10.3 Panther (users can install and run EndNote 7 on a Panther system, but there are some minor compatibility issues). EndNote 7 for Windows, released 24 June 2003. EndNote 6 for Mac, released 5 August 2002. Not certified compatible with OS X 10.3 Panther (it is possible to install and run EndNote 6 on a Panther system, but there are some minor compatibility issues). EndNote 6 for Windows, released 17 June 2002. EndNote 5 for Mac, released 19 July 2001. EndNote 5 and earlier are not compatible with any version of OS X. EndNote 5 for Windows, released 11 June 2001. EndNote 4 for Mac & Windows, released 6 March 2000. EndNote 3 for Mac & Windows, released 10 March 1998. EndNote 2 for Mac & Windows was probably released in the summer of 1995. EndNote Plus for Mac See also Data schemes BibTeX – a text-based data format used by LaTeX refer – a similar, but not identical, data scheme supported on UNIX-like systems RIS – a text-based data scheme from Research Information Systems Software Comparison of reference management software – compares EndNote to other similar software References Bibliography Miller, S., EndNote. Computers and the Humanities 1989, 23 (6), 489-491. Finnegan, G. A.; Klemperer, K. E., EndNote at Dartmouth: A Double Review. The Public-Access Computer Systems Review 1990. Beckman, R., EndNote Plus 2.3. Journal of Chemical Information and Computer Sciences 1997, 37 (5), 957-958. Scott, P. J., EndNote Plus 2.1 for Windows 3.1. Journal of Chemical Information and Computer Sciences 1997, 37 (2), 410-410.
Warling, B., ENDNOTE PLUS - ENHANCED REFERENCE DATABASE AND BIBLIOGRAPHY MAKER. Journal of Chemical Information and Computer Sciences 1992, 32 (6), 755-756. Cox, J., ENDNOTE PLUS. International Journal of Information Management 1992, 12 (4), 329-330. Myers, C. J.; Lessmann, J. J.; Musselman, R. L., A chemical literature management system using endnote. Science and Technology Libraries 1992, 12 (2), 17-27. Sandford, P., Evaluation of EndNote 4 Reference Management Software. VINE 2000, 30 (4), 55-59. Etter, S. C., Endnote 4.0. Journal of Computing in Higher Education 2001, 12 (2), 91-93. Reiß, M.; Reiß, G.; Pausch, N. C., Database manager EndNote 4 - Further development and functions. Radiologe 2001, 41 (6), 511-514. Herbert, T. L., EndNote 5 for windows. Journal of Chemical Information and Computer Sciences 2002, 42 (1), 134. Gotschall, T., EndNote 20 desktop version. Journal of the Medical Library Association 2021, 109 (3), 520-522. Reference management software Bibliography file formats Clarivate
23642248
https://en.wikipedia.org/wiki/Timothee%20Besset
Timothee Besset
Timothée Besset is a French software programmer, (also known as TTimo), best known for supporting Linux, as well as some Macintosh, ports of id Software's products. He has been involved with the game ports of various id properties over the past ten years, starting with Quake III Arena. Since the development of Doom 3 he was also in charge of the multiplayer network code and various aspects of game coding for id, a role which had him heavily involved in the development of their online game QuakeLive. He has been occasionally called "zerowing", but he has never gone by that name himself. It is derived from the community oriented system zerowing.idsoftware.com, of which the Linux port pages are the most prominent. The system was actually named by Christian Antkow based on the Zero Wing meme. Life and career Besset grew up in France, and started programming in the early 1990s. In school he majored in computer science, as well as pursuing courses in chemistry, mechanics, and fluid mechanics. Through school he was also first introduced to Linux, originally only for system administration and networking, and eventually adopting it for his main system. His first serious game development project was working on QERadiant, a free game editor tool for id Software games. Through his work on the editor he got to know Robert Duffy, who was at that point working as a contractor for id. After he got hired full-time, Duffy managed to secure Timothee a contract to work on the new cross-platform GtkRadiant editor project in 2000. This eventually led to Timothee being hired to become id's official Linux port maintainer after they took back the support rights to the Linux release of Quake III Arena from the then floundering Loki Software. His first actual porting project came with the release of Return to Castle Wolfenstein in 2001, with the Linux client being released on March 16, 2002. This was followed about a year later by the release of Wolfenstein: Enemy Territory, with the Linux builds sharing the same release date as the Windows release. His next porting work came with the release of Doom 3, with him releasing the first Linux builds on October 4, 2004. Around this time he also assumed the responsibility of becoming in charge of network coding for id. On October 20, 2005 he released the Linux binaries for Quake 4. This was followed by him releasing the source code for GtkRadiant under the GNU General Public License on February 17, 2006. His next porting project was porting Enemy Territory: Quake Wars, with Linux binaries being released on October 19, 2007. He also worked on the Quake Live project, with the game entering an invitation-based closed beta in 2008 and an open beta on February 24, 2009, with Linux and Macintosh support coming on August 18, 2009. In response to fears by some in the Linux gaming community that id would abandon Linux with its future titles, on September 13, 2009 in a well publicized statement he reaffirmed id's support of Linux, stating in his blog that "Fundamentally nothing has changed with our policy regarding Linux games... I'll be damned if we don't find the time to get Linux builds done". In January 2012, Besset resigned from id Software, ending hope for future Linux builds (though Doom 3 BFG Edition came to Linux via source port). 
A year later John Carmack revealed that ZeniMax Media "doesn't have any policy of 'unofficial binaries'", and so prevented id Software from pursuing any sort of third-party builds as it had in the past, be it Linux ports or experimental releases, and he then suggested the use of Wine instead. On July 2, 2012, he was announced to have joined Frozen Sand, which is currently developing Urban Terror HD. In September 2016, he ported Rocket League to SteamOS/Linux with the help of Ryan C. Gordon Games credited Quake III Arena Urban Terror Return to Castle Wolfenstein Wolfenstein: Enemy Territory Doom 3 Doom 3: Resurrection of Evil Quake 4 Enemy Territory: Quake Wars Quake Live See also Dave D. Taylor Michael Simms Ryan C. Gordon References External links Timothee Besset's Website Living people Video game programmers Linux game porters Linux people Year of birth missing (living people)
48507649
https://en.wikipedia.org/wiki/ISO/TC%20292
ISO/TC 292
ISO/TC 292 Security and resilience is a technical committee of the International Organization for Standardization formed in 2015 to develop standards in the area of security and resilience. The Technical Management Board of ISO (TMB) decided in June 2014 to create a new ISO technical committee with the number ISO/TC 292 by merging three committees into one. The work of ISO/TC 292 officially started on 1 January 2015, and the three previous committees were dissolved and their work programmes moved to the new committee. ISO/TC 292 was also given responsibility for the ISO 28000 series (Security management in the supply chain), previously developed by ISO/TC 8. The TMB decision was made in order to clarify ISO's structural organization on security matters, and to prepare for future topics in this field by creating a de facto coordination body within the TC central structure. It was believed that ISO/TC 292 would lead to optimization and would limit and prevent conflict or duplication of work. It would also make it easier for public administrations and authorities with a general interest and protective mission to optimize their participation in ISO's work in this sector, and give non-profit organizations with limited resources a simplified structure to take part in. When ISO/TC 292 was created, the following three committees were merged: ISO/TC 223 Societal security (2001–2014) ISO/TC 247 Fraud countermeasures and controls (2009–2014) ISO/PC 284 Management system for quality of PSC operations (2013–2014) Scope ISO/TC 292 works under the following scope: Standardization in the field of security to enhance the safety and resilience of society. Excluded: Sector-specific security projects developed in other relevant ISO committees and projects developed in ISO/TC 262 and ISO/PC 278. Leadership and organization Chair 2015– Mrs Åsa Kyrk Gere Secretary 2020– Ms Susanna Björk Secretary 2017–2020 Mr Bengt Rydstedt Secretary 2017–2017 Ms Susanna Björk Secretary 2015–2016 Mr Bengt Rydstedt ISO/TC 292 currently has the following organisation: Working Group 1: Terminology Working Group 2: Continuity and organizational resilience Working Group 3: Emergency management Working Group 4: Authenticity, integrity and trust for products and documents Working Group 5: Community resilience Working Group 6: Protective security Working Group 7: Guidelines for events Working Group 8: Supply chain security Working Group 9: Crisis management Working Group 10: Preparedness Joint Working Group 1: Managing emerging risk (joint work with ISO/TC 262) CG: Communication Group DCCG: Developing Country Coordination Group UNCG: United Nations Coordination Group ISO/TC 292 is one of the larger committees in ISO, with almost 70 member countries. A wide range of experts participate in the work of ISO/TC 292, from large corporations such as Thales to start-ups such as Cypheme.
Published standards General ISO 22300:2021 Security and resilience – Vocabulary ISO/TS 22375:2018 Security and resilience – Guidelines for complexity assessment process ISO 22397:2014 Societal security – Guidelines for establishing partnering arrangements ISO 22398:2014 Societal security – Guidelines for exercises Business continuity management ISO 22301:2019 Security and resilience – Business continuity management systems – Requirements ISO 22313:2020 Security and resilience – Business continuity management systems – Guidance on the use of ISO 22301 ISO/TS 22317:2021 Security and resilience – Business continuity management systems – Guidelines for business impact analysis ISO/TS 22318:2021 Security and resilience – Business continuity management systems – Guidelines for supply chain continuity ISO/TS 22330:2018 Security and resilience – Business continuity management systems – Guidelines for people aspects on business continuity ISO/TS 22331:2018 Security and resilience – Business continuity management systems – Guidelines for business continuity strategy ISO/TS 22332:2021 Security and resilience – Business continuity management systems – Guidelines for developing business continuity plans and procedures ISO/IEC/TS 17021-6:2015 Conformity assessment – Requirements for bodies providing audit and certification of management systems – Part 6: Competence requirements for auditing and certification of business continuity management systems Emergency management ISO 22320:2018 Security and resilience – Emergency management – Guidelines for incident management ISO 22322:2015 Societal security – Emergency management – Guidelines for public warning ISO 22324:2015 Societal security – Emergency management – Guidelines for colour coded alert ISO 22325:2016 Security and resilience – Emergency management – Guidelines for capability assessment ISO 22326:2018 Security and resilience – Emergency management – Guidelines for monitoring facilities with identified hazards ISO 22327:2018 Security and resilience – Emergency management – Guidelines for implementation of a community-based landslide early warning system ISO 22328-1:2020 Security and resilience – Emergency management – Guidelines for implementation of a community-based natural disasters early warning system ISO 22329:2021 Security and resilience – Emergency management – Guidelines for the use of social media in emergencies ISO/TR 22351:2015 Societal security – Emergency management – Message structure for exchange of information Authenticity, integrity and trust for products and documents ISO 22380:2018 Security and resilience – Authenticity, integrity and trust for products and documents – General principles for product fraud risk ISO 22381:2018 Security and resilience – Authenticity, integrity and trust for products and documents – Guidelines for interoperability of product identification and authentication systems ISO 22382:2018 Security and resilience – Authenticity, integrity and trust for products and documents – Guidelines for the content, security and issuance of excise tax stamps ISO 22383:2020 Security and resilience – Authenticity, integrity and trust for products and documents – Guidelines and performance criteria for authentication solutions for material goods ISO 22384:2020 Security and resilience – Authenticity, integrity and trust for products and documents - Guidelines to establish and monitor a protection plan and its implementation ISO 16678:2014 Guidelines for interoperable object identification and related authentication systems 
to deter counterfeiting and illicit trade Security management systems ISO 28000:2007 Specification for security management systems for the supply chain ISO 28001:2007 Security management systems for the supply chain – Best practices for implementing supply chain security, assessments and plans – Requirements and guidance ISO 28002:2011 Security management systems for the supply chain – Development of resilience in the supply chain – Requirements with guidance for use ISO 28003:2007 Security management systems for the supply chain – Requirements for bodies providing audit and certification of supply chain security management systems ISO 28004-1:2007 Security management systems for the supply chain – Guidelines for the implementation of ISO 28000 Part 1: General principles ISO 28004-3:2014 Security management systems for the supply chain – Guidelines for the implementation of ISO 28000 Part 3: Additional specific guidance for adopting ISO 28000 for use by medium and small businesses (other than marine ports) ISO 28004-4:2014 Security management systems for the supply chain – Guidelines for the implementation of ISO 28000 Part 4: Additional specific guidance on implementing ISO 28000 if compliance with ISO 28001 is a management objective ISO 18788:2015 Management system for private security operations – Requirements with guidance for use Community resilience ISO 22315:2015 Societal security – Mass evacuation – Guidelines for planning ISO 22319:2017 Security and resilience – Community resilience – Guidelines for planning the involvement of spontaneous volunteers ISO 22392:2020 Security and resilience – Community resilience – Guidelines for conducting peer reviews ISO/TS 22393:2021 Security and resilience – Community resilience – Guidelines for planning recovery and renewal ISO 22395:2018 Security and resilience – Community resilience – Guidelines for supporting vulnerable persons in an emergency ISO 22396:2020 Security and resilience – Community resilience – Guidelines for information exchange between organisations Urban resilience ISO/TR 22370:2020 Security and resilience – Urban resilience – Framework and principles Organizational resilience ISO 22316:2017 Security and resilience – Organizational resilience – Principles and attributes Protective security ISO 22341:2021 Security and resilience – Protective security – Guidelines for crime prevention through environmental design Replaced or withdrawn ISO 22300:2012 Societal security – Terminology (replaced by 2018 edition) ISO 22300:2018 Security and resilience – Vocabulary (replaced by 2021 edition) ISO 22301:2012 Societal security – Business continuity management systems – Requirements (replaced by 2019 edition) ISO/TR 22312:2012 Societal security – Technological capabilities ISO 22313:2012 Societal security – Business continuity management systems – Guidance (replaced by 2020 edition) ISO 22317:2015 Societal security – Business continuity management systems – Guidelines for business impact analysis (replaced by 2021 edition) ISO 22318:2015 Societal security – Business continuity management systems – Guidelines for supply chain continuity (replaced by 2021 edition) ISO 22320:2011 Societal security – Emergency management – Requirements for incident response (replaced by 2018 edition) ISO/PAS 22399:2007 Societal security – Guideline for incident preparedness and operational continuity management (replaced by ISO 22301 and ISO 22313) ISO 12931:2012 Performance criteria for authentication solutions used to combat counterfeiting of material goods 
References External links www.iso.org www.isotc292online.org ISO standards ISO technical committees
154851
https://en.wikipedia.org/wiki/Reentrancy%20%28computing%29
Reentrancy (computing)
In computing, a computer program or subroutine is called reentrant if multiple invocations can safely run concurrently on multiple processors, or on a single processor system, where a reentrant procedure can be interrupted in the middle of its execution and then safely be called again ("re-entered") before its previous invocations complete execution. The interruption could be caused by an internal action such as a jump or call, or by an external action such as an interrupt or signal, unlike recursion, where new invocations can only be caused by an internal call. This definition originates from multiprogramming environments where multiple processes may be active concurrently and where the flow of control could be interrupted by an interrupt and transferred to an interrupt service routine (ISR) or "handler" subroutine. Any subroutine used by the handler that could potentially have been executing when the interrupt was triggered should be reentrant. Similarly, code shared by two processors and accessing shared data should be reentrant. Often, subroutines accessible via the operating system kernel are not reentrant. Hence, interrupt service routines are limited in the actions they can perform; for instance, they are usually restricted from accessing the file system and sometimes even from allocating memory. This definition of reentrancy differs from that of thread-safety in multi-threaded environments. A reentrant subroutine can achieve thread-safety, but being reentrant alone might not be sufficient to be thread-safe in all situations. Conversely, thread-safe code does not necessarily have to be reentrant (see below for examples). Other terms used for reentrant programs include "sharable code". Reentrant subroutines are sometimes marked in reference material as being "signal safe". Reentrant programs are often "pure procedures". Background Reentrancy is not the same thing as idempotence, in which the function may be called more than once yet generate exactly the same output as if it had only been called once. Generally speaking, a function produces output data based on some input data (though both are optional, in general). Shared data could be accessed by any function at any time. If data can be changed by any function (and none keep track of those changes), there is no guarantee to those that share a datum that that datum is the same as at any time before. Data has a characteristic called scope, which describes where in a program the data may be used. Data scope is either global (outside the scope of any function and with an indefinite extent) or local (created each time a function is called and destroyed upon exit). Local data is not shared by any routines, re-entering or not; therefore, it does not affect re-entrance. Global data is defined outside functions and can be accessed by more than one function, either in the form of global variables (data shared between all functions), or as static variables (data shared by all invocations of the same function). In object-oriented programming, global data is defined in the scope of a class and can be private, making it accessible only to functions of that class. There is also the concept of instance variables, where a class variable is bound to a class instance. For these reasons, in object-oriented programming, this distinction is usually reserved for the data accessible outside of the class (public), and for the data independent of class instances (static). Reentrancy is distinct from, but closely related to, thread-safety.
A function can be thread-safe and still not reentrant. For example, a function could be wrapped all around with a mutex (which avoids problems in multithreading environments), but, if that function were used in an interrupt service routine, it could starve waiting for the first execution to release the mutex. The key for avoiding confusion is that reentrant refers to only one thread executing. It is a concept from the time when no multitasking operating systems existed. Rules for reentrancy Reentrant code may not hold any static or global non-constant data without synchronization. Reentrant functions can work with global data. For example, a reentrant interrupt service routine could grab a piece of hardware status to work with (e.g., serial port read buffer) which is not only global, but volatile. Still, typical use of static variables and global data is not advised, in the sense that, except in sections of code that are synchronized, only atomic read-modify-write instructions should be used on these variables (it should not be possible for an interrupt or signal to come during the execution of such an instruction). Note that in C, even a read or write is not guaranteed to be atomic; it may be split into several reads or writes. The C standard and SUSv3 provide sig_atomic_t for this purpose, although with guarantees only for simple reads and writes, not for incrementing or decrementing. More complex atomic operations are available in C11, which provides stdatomic.h. Reentrant code may not modify itself without synchronization. The operating system might allow a process to modify its code. There are various reasons for this (e.g., blitting graphics quickly) but this generally requires synchronization to avoid problems with reentrancy. It may, however, modify itself if it resides in its own unique memory. That is, if each new invocation uses a different physical machine code location where a copy of the original code is made, it will not affect other invocations even if it modifies itself during execution of that particular invocation (thread). Reentrant code may not call non-reentrant computer programs or routines. Multiple levels of user, object, or process priority or multiprocessing usually complicate the control of reentrant code. It is important to keep track of any access or side effects that are done inside a routine designed to be reentrant. Reentrancy of a subroutine that operates on operating-system resources or non-local data depends on the atomicity of the respective operations. For example, if the subroutine modifies a 64-bit global variable on a 32-bit machine, the operation may be split into two 32-bit operations, and thus, if the subroutine is interrupted while executing, and called again from the interrupt handler, the global variable may be in a state where only 32 bits have been updated. The programming language might provide atomicity guarantees for interruption caused by an internal action such as a jump or call. Then the function f() in an expression like (global:=1) + (f()), where the order of evaluation of the subexpressions might be arbitrary in a programming language, would see the global variable either set to 1 or to its previous value, but not in an intermediate state where only part has been updated. (The latter can happen in C, because the expression has no sequence point.) The operating system might provide atomicity guarantees for signals, such as a system call interrupted by a signal not having a partial effect. The processor hardware might provide atomicity guarantees for interrupts, such as interrupted processor instructions not having partial effects.
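To make the atomicity rules above concrete, the following is a minimal C sketch, not taken from the article, showing the sig_atomic_t flag and the C11 stdatomic.h facilities mentioned in the first rule; the names got_signal, counter and handler are illustrative, and the sketch assumes atomic_int is lock-free on the target platform:

#include <signal.h>
#include <stdatomic.h>
#include <stdio.h>

static volatile sig_atomic_t got_signal = 0;  /* simple reads/writes are guaranteed atomic */
static atomic_int counter = 0;                /* C11 atomic; assumed lock-free here */

static void handler(int signo)
{
    (void)signo;
    got_signal = 1;                 /* plain write to sig_atomic_t: safe in a handler */
    atomic_fetch_add(&counter, 1);  /* atomic read-modify-write via stdatomic.h */
}

int main(void)
{
    signal(SIGINT, handler);
    while (!got_signal) {
        /* normal foreground work; the handler may run at any point */
    }
    printf("signals seen: %d\n", atomic_load(&counter));
    return 0;
}

Because the handler restricts itself to operations that are atomic (or assumed to be), the data it shares with the main loop cannot be observed in a half-updated state, which is the property the rules above are meant to preserve.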
Examples To illustrate reentrancy, this article uses as an example a C utility function, swap(), that takes two pointers and transposes their values, and an interrupt-handling routine that also calls the swap function.

Neither reentrant nor thread-safe This is an example swap function that fails to be reentrant or thread-safe. Since the tmp variable is globally shared, without synchronization, among any concurrent instances of the function, one instance may interfere with the data relied upon by another. As such, it should not have been used in the interrupt service routine isr():

int tmp;

void swap(int* x, int* y)
{
    tmp = *x;
    *x = *y;
    /* Hardware interrupt might invoke isr() here. */
    *y = tmp;
}

void isr()
{
    int x = 1, y = 2;
    swap(&x, &y);
}

Thread-safe but not reentrant The swap() function in the preceding example can be made thread-safe by making tmp thread-local. It still fails to be reentrant, and this will continue to cause problems if isr() is called in the same context as a thread already executing swap():

_Thread_local int tmp;

void swap(int* x, int* y)
{
    tmp = *x;
    *x = *y;
    /* Hardware interrupt might invoke isr() here. */
    *y = tmp;
}

void isr()
{
    int x = 1, y = 2;
    swap(&x, &y);
}

Reentrant but not thread-safe The following (somewhat contrived) modification of the swap function, which is careful to leave the global data in a consistent state at the time it exits, is reentrant; however, it is not thread-safe: since no locks are employed, it can be interrupted at any time:

int tmp;

void swap(int* x, int* y)
{
    /* Save global variable. */
    int s;
    s = tmp;

    tmp = *x;
    *x = *y;
    *y = tmp;
    /* Hardware interrupt might invoke isr() here. */

    /* Restore global variable. */
    tmp = s;
}

void isr()
{
    int x = 1, y = 2;
    swap(&x, &y);
}

Reentrant and thread-safe An implementation of swap() that allocates tmp on the stack instead of globally and that is called only with unshared variables as parameters is both thread-safe and reentrant. It is thread-safe because the stack is local to a thread, and a function acting only on local data will always produce the expected result; there is no access to shared data and therefore no data race.

void swap(int* x, int* y)
{
    int tmp;
    tmp = *x;
    *x = *y;
    *y = tmp;
    /* Hardware interrupt might invoke isr() here. */
}

void isr()
{
    int x = 1, y = 2;
    swap(&x, &y);
}

Reentrant interrupt handler A reentrant interrupt handler is an interrupt handler that re-enables interrupts early in the interrupt handler. This may reduce interrupt latency. In general, while programming interrupt service routines, it is recommended to re-enable interrupts as soon as possible in the interrupt handler. This practice helps to avoid losing interrupts.

Further examples In the following code, neither the f nor the g function is reentrant.

int v = 1;

int f()
{
    v += 2;
    return v;
}

int g()
{
    return f() + 2;
}

In the above, f() depends on a non-constant global variable v; thus, if f() is interrupted during execution by an ISR which modifies v, then reentry into f() will return the wrong value of v. The value of v and, therefore, the return value of f(), cannot be predicted with confidence: they will vary depending on whether an interrupt modified v during f()'s execution. Hence, f() is not reentrant. Neither is g(), because it calls f(), which is not reentrant.
These slightly altered versions are reentrant:

int f(int i)
{
    return i + 2;
}

int g(int i)
{
    return f(i) + 2;
}

In the following, the function is thread-safe, but not (necessarily) reentrant:

int function()
{
    mutex_lock();

    // ...
    // function body
    // ...

    mutex_unlock();
}

In the above, function() can be called by different threads without any problem. But, if the function is used in a reentrant interrupt handler and a second interrupt arises inside the function, the second routine will hang forever. As interrupt servicing can disable other interrupts, the whole system could suffer.

Notes See also Referential transparency References Works cited Further reading Concurrency (computer science) Recursion Subroutines Articles with example C code
57056726
https://en.wikipedia.org/wiki/Nokia%207%20Plus
Nokia 7 Plus
The Nokia 7 Plus is a Nokia-branded upper-mid-range smartphone running the Android operating system. It was announced on 25 February 2018, along with four other Nokia-branded phones. Specifications Software As the Nokia 7 Plus is an Android One device, it runs a near-stock version of the Android operating system. It was originally shipped with Android 8.0 Oreo; however, an update to Android 8.1 Oreo was released soon after for the device. On 8 May 2018, it was announced that the Nokia 7 Plus would be one of seven non-Google smartphones to receive the Android Pie beta. On 28 September 2018, the Android Pie update started to roll out to the Nokia 7 Plus in phases, starting in India, and on 30 November 2018 in China. On 7 January 2020, Android 10 started being rolled out on the Nokia 7 Plus. Hardware The Nokia 7 Plus has a 6.0-inch LTPS IPS LCD display, Octa-core (4x2.2 GHz Kryo 260 & 4x1.8 GHz Kryo 260) Qualcomm Snapdragon 660 processor, 4 GB of RAM and 64 GB of internal storage that can be expanded using microSD cards up to 256 GB. The phone has a 3800 mAh Li-Ion battery, 13 MP rear camera with LED flash and 16 MP front-facing camera with auto-focus. It is available in Tempered Black/Copper, White/Copper colors. The phone is designed from a single block of 6000 series aluminium. As with other mid-range and top-range phones from Nokia, the Nokia 7 Plus features exclusive rear camera optics licensed from ZEISS. The NFC area on the back of the phone is both smaller and weaker than on comparable phones. The NFC area can be found to the left of the camera housing. Development of the Nokia 7 Plus began as early as 2017, and by 31 December of the same year it was ready for release. Reception Reviews of the Nokia 7 Plus have been generally positive. Critics have praised its large display, battery life, and Android One software. Its design and build quality have also been praised. One reviewer called it the most promising Nokia smartphone "in years". Android Central praised its camera as being "one of the best" in its $400 price category, and added that the 7 Plus is "one of the best phones of the year". At the same time, the phone's slow camera processing speed was criticised. Controversies In March 2019, a report by Norwegian state broadcaster NRK found that some Nokia 7 Plus phones sent sensitive unencrypted personal information (including location and serial number) to a domain name controlled by China Telecom, every time the device was booted, and multiple times thereafter. HMD responded to the allegations, stating that carrier activation software intended for a different market was accidentally included in a batch of the devices, but that only "activation data" that could not be used to identify a user was sent and not processed, and that this error was fixed by the February 2019 software patch. Finnish authorities are investigating this as a violation of the General Data Protection Regulation (GDPR). Note References External links Nokia Android smartphones Mobile phones introduced in 2018 Mobile phones with multiple rear cameras Android (operating system) devices Mobile phones with 4K video recording
51615662
https://en.wikipedia.org/wiki/Jan-Christoph%20Borchardt
Jan-Christoph Borchardt
Jan-Christoph Borchardt (born 3 May 1989 in Minden, Germany) is a German open source interaction designer. He is primarily known for his work on Open Source Design, Terms of Service; Didn't Read, ownCloud, and now Nextcloud. Open Source Design In his bachelor thesis "Usability in Free Software" he argues that "For a software to truly be free, people need to be able to easily use it without help". His thesis has the subtitle "Freedom 4: The freedom to use the program effectively, efficiently and satisfactorily", a reference to the four freedoms of free software. He is a cofounder of Open Source Design, "a community of designers and developers pushing more open design processes and improving the user experience and interface design of open source software". To that effect he has been responsible for the introduction of the “Open Source Design room” in 2015 at FOSDEM as well as FOSSASIA in 2016. In 2013 he was a lecturer for Design in Open Source Software at the nationally recognised University of Design, Art and Media "Merz Akademie" in Stuttgart, Germany. Free software Borchardt contributes to several open-source projects and communities. This includes Shotwell (software), Diaspora (social network), elementary OS as well as the Nextcloud and ownCloud projects. In 2012 he co-founded Terms of Service; Didn't Read, a community project aiming to analyze and grade the terms of service and privacy policies of major internet sites and services. He is co-chair of the W3C Unhosted Web Community Group. Based on his belief that contributing to open-source is already difficult enough he is also a cofounder of the Stuttgart JS and Tel Aviv JS meetups. As well as several other community events such as AfricaHackTrip. ownCloud and Nextcloud Since early 2011 he has been the lead designer of ownCloud. As of 2016 after the fork of ownCloud into Nextcloud he is employed by Nextcloud as design lead. References External links Homepage of Jan-Christoph Borchardt Free software programmers Interface designers 1989 births Living people Nextcloud German computer programmers
3153858
https://en.wikipedia.org/wiki/WinFixer
WinFixer
WinFixer was a family of scareware rogue security programs developed by Winsoftware which claimed to repair computer system problems on Microsoft Windows computers if a user purchased the full version of the software. The software was mainly installed without the user's consent. McAfee claimed that "the primary function of the free version appears to be to alarm the user into paying for registration, at least partially based on false or erroneous detections." The program prompted the user to purchase a paid copy of the program. The WinFixer web page (see the image) said it "is a useful utility to scan and fix any system, registry and hard drive errors. It ensures system stability and performance, frees wasted hard-drive space and recovers damaged Word, Excel, music and video files." However, these claims were never verified by any reputable source. In fact, most sources considered this program to actually reduce system stability and performance. The sites went defunct in December 2008 after actions taken by the Federal Trade Commission. Installation methods The WinFixer application was known to infect users using the Microsoft Windows operating system, and was browser independent. One infection method involved the Emcodec.E trojan, a fake codec scam. Another involves the use of the Vundo family of trojans. Typical infection The infection usually occurred during a visit to a distributing website using a web browser. A message appeared in a dialog box or popup asking the user if they wanted to install WinFixer, or claimed a user's machine was infected with malware, and requested the user to run a free scan. When the user chose any of the options or tried to close this dialog (by clicking 'OK' or 'Cancel' or by clicking the corner 'X'), it would trigger a pop-up window and WinFixer would download and install itself, regardless of the user's wishes. "Trial" offer A free "trial" offer of this program was sometimes found in pop-ups. If the "trial" version was downloaded and installed, it would execute a "scan" of the local machine and a couple of non-existent trojans and viruses would be "discovered", but no further action would be undertaken by the program. To obtain a quarantine or removal, WinFixer required the purchase of the program. However, the alleged unwanted bugs were bogus, only serving to persuade the owner to buy the program. WinFixer application Once installed, WinFixer frequently launched pop-ups and prompted the user to follow its directions. Because of the intricate way in which the program installed itself into the host computer (including making dozens of registry edits), successful removal would have taken a fairly long time if done manually. When running, its process could be found in the task manager and be stopped, but would automatically relaunch itself after a period of time. WinFixer was also known to modify the Windows Registry so that it started up automatically with every reboot, and scanned the user's computer. Firefox popup The Mozilla Firefox browser was vulnerable to initial infection by WinFixer. Once installed, WinFixer was known to exploit the SessionSaver extension for the Firefox browser. The program caused popups on every startup asking the user to download WinFixer, by adding lines containing the word 'WinFixer' to the prefs.js file. Removal Removal of WinFixer proved difficult because it actively undid whatever the user attempted. Frequently, procedures that worked on one system would not work on another because there were a large number of variants. 
Some sites provided manual techniques to remove infections that automated cleanup tools could not remove. Domain ownership The company that made WinFixer, Winsoftware Ltd., claimed to be based in Liverpool, England (Stanley Street, postcode: 13088.) However, this address was proven to be false. The domain WINFIXER.COM on the whois database showed it was owned by a void company in Ukraine and another in Warsaw, Poland. According to Alexa Internet, the domain was owned by Innovative Marketing, Inc., 1876 Hutson St, Honduras. According to the public key certificate provided by GTE CyberTrust Solutions, Inc., the server secure.errorsafe.com was operated by ErrorSafe Inc. at 1878 Hutson Street, Belize City, BZ. Running traceroute on Winfixer domains showed that most of the domains were hosted from servers at setupahost.net, which used Shaw Business Solutions AKA Bigpipe as their backbone. Technical information Technical WinFixer was closely related to Aurora Network's Nail.exe hijacker/spyware program. In worst-case scenarios, it would embed itself in Internet Explorer and become part of the program, thus being nearly impossible to remove. The program was also closely related to the Vundo trojan. Variants Windows Police Pro Windows Police Pro was a variant of WinFixer. David Wood wrote in Microsoft TechNet that in March 2009, the Microsoft Malware Protection Center saw ASC Antivirus, the virus' first version. Microsoft did not detect any changes to the virus until the end of July that year when a second variant, Windows Antivirus Pro, appeared. Although multiple new virus versions have since appeared, the virus has been renamed only once, to Windows Police Pro. Microsoft added the virus to its Malicious Software Removal Tool in October 2009. The virus generated numerous persistent popups and messages displaying false scan reports intended to convince users that their computers were infected with various forms of malware that do not exist. When users attempted to close the popup message, they received confirmation dialog boxes that switched the "Purchase full version" and "Continue evaluating" buttons. Windows Police Pro generated a counterfeit Windows Security Center that warned users about the fake malware. Bleeping Computer and the syndicated "Propeller Heads" column recommended using Malwarebytes' Anti-Malware to remove Windows Police Pro permanently. Microsoft TechNet and Softpedia recommended using Microsoft's Malicious Software Removal Tool to get rid of the malware. Effects on the public Class action lawsuit On September 29, 2006, a San Jose woman filed a lawsuit over WinFixer and related "fraudware" in Santa Clara County Superior Court; however, in 2007 the lawsuit was dropped. In the lawsuit, the plaintiffs charged that the WinFixer software "eventually rendered her computer's hard drive unusable. The program infecting her computer also ejected her CD-ROM drive and displayed Virus warnings." Ads on Windows Live Messenger On February 18, 2007, a blog called "Spyware Sucks" reported that the popular instant messaging application Windows Live Messenger had inadvertently promoted WinFixer by displaying a WinFixer advertisement from one of Messenger's ad hosts. A similar occurrence was also reported on some MSN Groups pages. There were other reports before this one (one from Patchou, the creator of Messenger Plus!), and people had contacted Microsoft about the incidents. 
Whitney Burk of Microsoft acknowledged the problem in an official statement. Federal Trade Commission On December 2, 2008, the Federal Trade Commission requested and received a temporary restraining order against Innovative Marketing, Inc., ByteHosting Internet Services, LLC, and individuals Daniel Sundin, Sam Jain, Marc D’Souza, Kristy Ross, and James Reno, the creators of WinFixer and its sister products. The complaint alleged that the products' advertising, as well as the products themselves, violated United States consumer protection laws. However, Innovative Marketing flouted the court order and was fined $8,000 per day in civil contempt. On September 24, 2012, Kristy Ross was fined $163 million by the Federal Trade Commission for her part in this. The article goes on to say that the WinFixer family of software was simply a con but does not acknowledge that it was in fact a program that made many computers unusable. Notes References External links McAfee's Entry on WinFixer Symantec’s Entry on WinFixer and removal instructions Symantec's entry on ErrorSafe - a sister spyware application FTC complaint Rogue software Scareware Hacking in the 2000s
39968545
https://en.wikipedia.org/wiki/Terry%20Myerson
Terry Myerson
Terry Myerson (born 1972 or 1973) is an American venture partner at Madrona Venture Group and an operating executive at The Carlyle Group. Myerson was previously an Executive Vice President at Microsoft, and head of its Windows and Devices Group. Myerson graduated from Duke University in 1992 and founded Intersé Corporation, which Microsoft purchased in 1997. At Microsoft, he led software and engineering teams behind Microsoft Exchange and Windows Phone before being promoted to lead Microsoft's newly formed operating systems engineering division in July 2013. In March 2018, Myerson announced that he would leave Microsoft after a transition period. In October 2018, Myerson announced his new roles at Madrona Venture Group and The Carlyle Group in a post on his LinkedIn page. Education and career Myerson attended Duke University, where he studied in the college of arts and sciences for a semester before choosing a mechanical engineering major. While in college, he worked as a waiter and a part-time graphics creator at the Environmental Protection Agency. Upon graduation in 1992, he worked in computer graphics before starting his own company, Intersé Corporation, which made websites and data mining software before being acquired by Microsoft in 1997. Myerson received $16.5 million in stock with the acquisition. Microsoft At Microsoft, Myerson worked in business Internet services and server applications, including Site Server, BizTalk Server, and Windows Management Instrumentation. He joined the corporate email and calendar Microsoft Exchange software team in 2001, which he led for eight years. He became the head of mobile engineering near the end of 2008, and called a meeting in December that scrapped Microsoft's Windows Mobile product and programming code in favor of a completely rebuilt system designed to better compete with the iPhone. He was promoted to lead the Windows Phone operation in 2011, directly reporting to CEO Steve Ballmer. Myerson restructured the mobile team, and was responsible for hiring Joe Belfiore, who later redesigned the Windows Phone interface. Myerson also connected Microsoft with Nokia's hardware division via a personal relationship with Nokia's executive vice president of smart devices, which grew into Microsoft's biggest Windows Phone partnership. In July 2013, Myerson was promoted to executive vice president of Microsoft's new operating systems engineering division, which controlled Microsoft Windows as well as Windows Phone, Xbox system software, and various services. The Verge called Myerson "the most important man at Microsoft" after the company's executive reorganization. In 2015, Microsoft merged their Devices Group into the Operating Systems Group to form a new Windows and Device Group which was led by Myerson and which was responsible for Windows operating systems, Xbox system, Windows back-end services and the Surface and HoloLens lineup of hardware products. In March 2018 Microsoft announced it would split the Windows & Devices division into Experiences & Devices and Cloud & AI and that Myerson would leave. Personal life He has a wife and three children. He is a member of the Seattle Foundation Board of Trustees and a member of the Board of Visitors at Duke University's Pratt School of Engineering. His younger brother works at Microsoft. Myerson organized the purchase of a minority stake in the Seattle Sounders FC of Major League Soccer with 10 Seattle-area families, including Microsoft CEO Satya Nadella. 
See also Julie Larson-Green References Microsoft employees Duke University Pratt School of Engineering alumni 1970s births Living people
1067688
https://en.wikipedia.org/wiki/CherryOS
CherryOS
CherryOS was a PowerPC G4 processor emulator for x86 Microsoft Windows platforms, which allowed various Apple Inc. programs to be operated on Windows XP. Announced and made available for pre-orders on October 12, 2004, it was developed by Maui X-Stream (MXS), a startup company based in Lahaina, Hawaii and a subsidiary of Paradise Television. The program encountered a number of launch difficulties its first year, including a poorly-reviewed soft launch in October 2004, wherein Wired Magazine argued that CherryOS used code grafted directly from PearPC, an older open-source emulator. Lead developer Arben Kryeziu subsequently stated that PearPC had provided the inspiration for CherryOS, but "not the work, not the architecture. With their architecture I'd never get the speed." After further development, CherryOS 1.0 was released in its final form on March 8, 2005, with support for CD, DVD, USB, FireWire, and Ethernet. It was described as automatically detecting "hardware and network connections" and allowing "for the use of virtually any OS X-ready application," including Safari and Mail. Estimated to be compatible with approximately 70 percent of PCs, MXS again fielded accusations that CherryOS 1.0 incorporated code from PearPC. MXS argued CherryOS was "absolutely not" a knockoff," and that though "certain generic code strings and screen verbiage used in Pear PC are also used in CherryOS... they are not proprietary to the Pear PC product." Shortly afterwards the creators of PearPC were reported to be "contemplating" litigation against Maui X-Stream, and on April 6, 2005, CherryOS was announced to be on hold. A day later, CherryOS announced that "due to overwhelming demand, Cherry open source project launches May 1, 2005." History Background and development On October 12, 2004, the emulator CherryOS was announced by Maui X-Stream (MXS), a startup company based in Lahaina, Hawaii and a subsidiary of Paradise Television. At the time MXS was best known for developing software for video streaming, particularly their VX3 encoder. As a new emulator intended to allow Mac OS X to be utilized on x86 computer architecture, CherryOS was advertised as working on Windows 98, Windows 2000 or Windows XP, with features such as allowing files to be dragged from PC to Mac, the creation of multiple profiles, and support for networking and sound. With development led by MXS employee and software developer Arben Kryeziu, CherryOS was made available for pre-order on the MXS website. Some articles hailed CherryOS as a new potential competitor for programs such as MacWindows, while the Irish Times would later write that certain groups of consumers "were suspicious as to how a little-known Hawaii-based outfit... could suddenly do something that had evaded much larger firms." In explaining the suspicion, Ars Technica later noted that emulators by small developers like PearPC had reputations for working extremely slowly, meaning CherryOS's claim of operating 80 percent of the host PC's speed would have been "a major breakthrough" in the industry. When asked by the Star Bulletin, at this point Kryeziu denied any possibility that CherryOS would contain code from a rival program like Apple, MacWindows, Emulators.com, or PearPC, stating that "our lawyers have looked at this and say we're in the clear. We wrote this from scratch and we're clean as a whistle." 
According to the Star Bulletin, suspicions that CherryOS might be a hoax "were fanned" by glitches on the CherryOS home website, and three days after the site opened for pre-sales it crashed after taking 300,000 daily hits. MXS president Jim Kartes credited the crash to both unexpectedly high traffic and Mac "purists" who had hacked and destroyed the servers, and though MXS continued to accept non-digital pre-orders, by October 19 the CherryOS website was offline entirely as MXS switched to a new web host. Pre-release version Initially the company did not offer a trial version of CherryOS, citing concerns the code might be pirated. However, "as a direct result of the overwhelming response to our October 12 announcement," as of October 15 the company was readying a free beta version with a projected release date of November 25, 2004. On October 18, Kryeziu stated that a free public demo would be released within a week, and CherryOS was first registered to be trademarked in the United States on October 19, 2004. On October 19, however, Kryeziu withheld a timetable for the CherryOS release, stating the company had been pre-emptive in releasing the earlier "soft launch" version, and that CherryOS still had too many software bugs to predict a release date. Wired News reviewed a pre-release version around this time, reporting on October 22 that an expert had found distinguishing "watermarks" from PearPC's source code in CherryOS. Moreover, the pre-release version was reported to run at the same slow speed as PearPC, though Wired noted "they've actually done some work on it. They've written a whole graphical interface that makes [PearPC] easier to use." In response to the article, MXS stated that the edition tested by Wired had been a "very bad...premature version" that "is not CherryOS," and that one of the CherryOS programmers had since been fired for directly grafting elements of PearPC code into the release. A competing emulator, PearPC, had been released the year before under the GNU General Public License, which allows commercial products to use the software for profit under "certain conditions, such as acknowledging previous work." Kryeziu stated PearPC had provided the inspiration for CherryOS, but "not the work, not the architecture. With their architecture I'd never get the speed I got." He argued that some similarities between CherryOS and PearPC were a result of "the fact that they were designed to perform similar functions," and that "there are some functionalities that can only be done a certain way, and names are going to be similar or identical." Wired senior editor Leander Kahney posited that if the final CherryOS release did contain PearPC code, PearPC would be unlikely to sue Maui X-Stream for "a cut of any profits since open-source codes are protected more by an honor system than any legal basis." By October 22, Kryeziu stated to Wired that he'd been contacted by Apple Computer for an undisclosed reason that "wasn't bad."
Initial reports of certain computers encountering slow speeds and glitches were explained by MXS as "expected," as "it's got bugs. That is why we're offering a free trial download. If it doesn't work, they shouldn't buy it.... we will use the testing of consumers to improve its stability and performance." Kartes extrapolated that after development, somewhere between "60% and 70% of all PC owners" would be able to use the CherryOS product. MXS announced plans to market CherryOS throughout the summer of 2005, but withheld specifics on when it would be released for sale. BetaNews.com reviewed CherryOS upon its public release, arguing that there were again similarities between CherryOS and PearPC, including specific non-generic lines of code. Maui X-Stream president Jim Kartes denied that CherryOS had grafted in PearPC code, and on March 24, 2005, a spokesperson for CherryOS stated to the Irish Times that CherryOS 1.0 was "absolutely not" a knockoff of Pear PC, as "there are considerable differences between the two products: Both products emulate the Apple operating system but the similarity ends there." The spokesperson further explained that "certain generic code strings and screen verbiage used in Pear PC are also used in CherryOS. They are not proprietary to the Pear PC product. For example, Pear tops out at G3 emulation and CherryOS is the only stable G4 emulator on the market today. CherryOS uses multithreading architecture for speed and ease of use. Pear employs a step-by-step approach; CherryOS features a shared-drive emulator, a drag-and-drop option allows you to connect the Windows drive to a Mac environment and CherryOS is the only emulator to support sound." Kartes further stated that although PearPC introduced their code before CherryOS, that "doesn't give them a claim on certain technical aspects of our product." On March 30, 2005, Ars Technica reported that the creators of PearPC were "contemplating" litigation against Maui X-Stream. On April 6, 2005, Cherry OS was announced by its developers to be on hold "until further notice." A day later, CherryOS announced on its website that it would no longer be a commercial product, and that "due to overwhelming demand, Cherry open source project launches May 1, 2005." The trademark for CherryOS was filed as abandoned as of June 21, 2006. Technical features Overview CherryOS was a PowerPC G4 processor emulator for x86 Microsoft Windows platforms. Originally written to work with Windows 98, Windows 2000 or Windows XP, among other features Cherry OS purported to allow files to be dragged from PC to Mac, the creation of multiple profiles, support skins, and support for networking and sound. In October 2004, the program's developer announced CherryOS as having "full network capabilities" and "complete access to the host computer's hardware resources - hard drive, CPU, RAM, FireWire, USB, PCI, PCMCIA bus, Ethernet networking and modem." By October 21, 2004, the program was reported to be a 7 MB download with Velocity Engine included. At the time, MMX stated they were developing 3D acceleration for CherryOS. The program was publicly released on March 8, 2005, with support for CD, DVD, USB, FireWire, and Ethernet. It was described as automatically detecting "hardware and network connections" and allowing "for the use of virtually any OS X-ready application," including Safari and Mail by Apple. 
Estimated to be compatible with approximately 70 percent of PCs, the CherryOS system required a Pentium 4 1.6 gigahertz (GHz) CPU or equivalent hardware and Windows XP, as well as 512 megabytes of memory and 3 gigabytes of hard drive space. After the initial March 8 release, the speed of CherryOS 1.0 was reported to be variable. Karol McGuire of MXS stated that speed depended on the computer's processor, as "a processor that has inadequate space on the hard drive or that runs at less than optimum operating speeds will not allow CherryOS to perform as designed." Following the public launch, the company announced that Kryeziu would be overseeing development on "sound support and network bridging, as well as improving speed." Kryeziu explained "we think we'll have the first two issues solved fairly soon. It's the type of product that will be continually updated as we go along. We think we can make it faster than it is right now, but this will take time." Apple TOS For its year in development, there was some question in the press as to the legality of CherryOS in relation to Apple's "Use and Restrictions" agreement, which only allows Apple programs to be used on a singular "Apple-labeled computer" at one time. The publication Ars Technica notes, however, that "a PPC emulator [like CherryOS or PearPC] isn't just for violating ToS agreements and bringing down the wrath of Apple Legal. It has legitimate uses too... you could use an emulator to run a PPC version of Linux on x86 hardware, and you could even use a P2P network to get that distribution of Linux, justifying two technologies with one rationalization." Despite this fact, the Irish Times pointed out that CherryOS was marketed exclusively to run Mac OSX, which it argued was a "clear" violation of the OS X license agreement. Versions See also List of computer simulation software List of emulators Comparison of platform virtualization software References External links "CherryOS goes open source" - article by Jim Dalrymple for MacWorld (April 2005) Windows emulation software Virtualization software Vaporware PowerPC emulators Discontinued software Free emulation software
34890242
https://en.wikipedia.org/wiki/Experi-Metal%20v.%20Comerica
Experi-Metal v. Comerica
Experi-Metal, Inc., v. Comerica Bank (docket number: 2:2009cv14890) is a decision by the United States District Court for the Eastern District of Michigan in a case of a phishing attack that resulted in unauthorized wire transfers of US$1.9 million through Experi-Metal's online banking accounts. The court held Comerica liable for losses of US$560,000 that could not be recovered from the phishing attack, on the ground that the bank had not acted in good faith when it failed to recognize the transfers as fraudulent. Background Experi-Metal, a Macomb, Michigan-based company, held accounts with Comerica, headquartered in Dallas, Texas. Experi-Metal had signed up for a NetVision Wire Transfer service allowing it to send and receive payments and incoming fund transfers through the Internet. Phishing attack At approximately 7:35 am on January 22, 2009, an Experi-Metal employee opened a phishing email containing a link to a web page purporting to be a "Comerica Business Connect Customer Form". Following the email's link, the employee then proceeded to provide his security token identification, WebID and login information to a phony site. As a result, the fraudulent third parties gained access to Experi-Metal's accounts held with Comerica. In a six-and-a-half-hour period between 7:30 am and 2:02 pm, 93 fraudulent transfers were made from Experi-Metal's accounts totaling US$1,901,269.00. The majority of the transfers were directed to bank accounts in Russia, Estonia and China. Between 7:40 am and 1:59 pm, transfers totaling US$5.6 million were executed among accounts using the information obtained from the phishing attack. In one account, the transfers resulted in an overdraft of US$5 million. At 11:30 am, Comerica was alerted to the potential fraud by a telephone call from a JP Morgan Chase employee who had noticed suspicious wire transfers sent from an Experi-Metal account to a bank in Moscow, Russia. Sometime between 11:47 am and 11:59 am, Comerica alerted Experi-Metal to the transfers and confirmed that the legitimate account holder had not made any transactions during the course of the day. By 12:25 pm, Comerica put a hold on Experi-Metal's online banking transactions and began to "kill" its user session in an attempt to forcefully remove the people making the transfers from the Comerica online service. Comerica was successful in recovering a portion of the transfers. In total, US$561,399 was lost in the fraudulent transfers arising out of the phishing scheme. Opinion of the US District Court in Michigan The court considered two main issues in its decision. The first issue was whether the Experi-Metal employee whose confidential information was used to initiate the fraudulent transfers was authorized to initiate transfers on behalf of the company, and in turn, whether Comerica complied with its own security procedures in accepting the orders. The second issue was whether Comerica acted in "good faith" in accepting the orders on Experi-Metal's account. User information initiating fraudulent transfers There was some question as to whether the Experi-Metal employee who fell victim to the phishing incident was authorized to make wire transfers on behalf of the company. The issue was raised in the context of whether Comerica was complying with its security procedures when it accepted the wire transfers that were made using his account user information on January 22, 2009. 
After considering several contextual factors, the court concluded that the employee who had provided his account user information was authorized to initiate transfers with Comerica on behalf of Experi-Metal. As a result, Comerica was found to be in compliance with its own security protocols when it accepted the orders. Good faith A second issue in the case concerned the issue of 'good faith' on Comerica's part in accepting the wire transfers initiated by the fraudulent third parties. Under Michigan law, wire transfer orders are effective as orders of the customer even if they are not actually ordered by the customer, provided certain criteria are met. The issue in this case was whether the orders were accepted in good faith and in compliance with the security procedures, written agreements or instructions of the customer. If the orders made to Comerica on Experi-Metal's account were not received in "good faith", they would not be effective. While the court found that Comerica's security procedures were commercially reasonable, it found the bank failed to prove it had accepted orders for the fraudulent transfers in good faith. Under Michigan law good faith requires "honesty in fact and the observance of reasonable commercial standards for fair dealing." Because there was no suggestion that Comerica's employees acted dishonestly in accepting the fraudulent orders, the court moved to the element of the good faith test dealing with reasonable commercial standards for fair dealing. Here, the court found Comerica failed to meet the burden of proving that its employees met reasonable commercial standards of fair dealing in the context of the fraudulent transfers, and in particular with respect to the unusual overdrafts to the Experi-Metal accounts. On this last point, the court made specific reference to the overdrafts of US$5 million on an Experi-Metal account that usually had a $0 balance. Result Primarily on the basis that Experi-Metal's online wire transfer orders were not received in good faith, the court ordered Comerica to compensate Experi-Metal for its losses. Comerica reportedly reached an out of court settlement with Experi-Metal soon after the court's decision. Significance Experi-Metal v. Comerica represents a relatively early decision in an emerging area of case law relating to online banking fraud in the US. Similar US online banking fraud cases In Patco Construction v. People's United Bank a US District Court in Maine held that the defendant bank was not liable for US$588,000 in fraudulent transfers that were believed to result from Zeus keylogger malware attacks. Patco was an online banking customer and account holder at People's Bank at the time of the malware attacks. Between May 7 and May 16, 2009 unknown third parties made multiple online transfers totaling US$588,851 out of Patco's account. Ultimately, the bank was able to block US$243,406 of the fraudulent transfers. Patco alleged that its losses were related to People's Bank's deficient online security. The court found that People's Bank did suffer from some security weaknesses, but that on the whole, its security procedures were commercially reasonable. Accordingly, it found that the bank was not liable for the losses resulting from the fraudulent transfers. Although the facts of this case differ from those in Experi-Metal v. Comerica, it may be a challenge to reconcile the contrast between the two decisions. However, in July 2012, this decision was reversed by an appellate court. 
The parties later settled out of court, with People's United Bank paying the remainder of what was stolen from Patco's account, as well as $45,000 in interest. "In a landmark decision, the 1st Circuit Court of Appeals held in "Patco Construction Company, Inc. v. People's United Bank", No. 11-2031 (1st Cir. July 3, 2012) that People's United Bank (d/b/a Ocean Bank) was required to reimburse its customer, PATCO Construction Co., for approximately $580,000 that had been stolen from PATCO'S bank account. In so doing, the court reversed the decision of the U.S. District Court for the District of Maine that had granted summary judgment in the bank's favor." In Village View v. Professional Business Bank a similar claim was filed in the Superior Court of California in June 2011. Village View sued for losses incurred as a result of unauthorized and fraudulent wire transfers made from its account with Professional Business Bank on March 16–17, 2010, totaling US$195,874. The attacks began with a banking Trojan disguised as a UPS shipping receipt, which was accepted and opened into the Village View network by unsuspecting employees. The file was later found to contain malware that did several things including disabling of email notifications normally sent by the bank each time a transfer was made from Village View's account. The fraudulent transfers were made to international accounts, including banks in Latvia. Village View Escrow alleges in its claim that the unauthorized transfers were a result of Professional Business Bank's inadequate security system. Specifically, Village View alleges a failure on the part of Professional Business Bank to provide 'commercially reasonable security' procedures in accordance with California law and an accompanying failure to accept the orders for wire transfers in 'good faith.' Phishing and bank fraud trends in the US Wire transfer fraud and phishing are the sub-types of bank fraud used against Experi-Metal. Among US banking institutions, December 2011 saw US national banks targeted most frequently by phishing at 85%, followed by regional US banks at 9% and US credit unions at 6%. In terms of overall volume of phishing worldwide during the same period, the UK was a target 50% of the time, followed by the US at 28%, Brazil at 5%, South Africa at 4% and Canada at 2%. Malware such as the Zeus Trojan has been used extensively by criminals to steal personal banking information which can then be used to make fraudulent transfers out of the victims' bank accounts. In some cases, the perpetrators of the attacks have been caught and prosecuted, both within the US, as well as in other countries. Challenge of prosecuting online banking fraud While the types of activities in Experi-Metal v. Comerica might fall under the Computer Fraud and Abuse Act as an offense, the challenges of determining jurisdiction in an online environment, identifying perpetrators and collecting evidence remain as potentially significant obstacles in any attempts to enforce such legislation. References Fraud in the United States Banking technology Legal history of Michigan
243881
https://en.wikipedia.org/wiki/D%20%28programming%20language%29
D (programming language)
D, also known as Dlang, is a multi-paradigm system programming language created by Walter Bright at Digital Mars and released in 2001. Andrei Alexandrescu joined the design and development effort in 2007. Though it originated as a re-engineering of C++, D is a distinct language. It has redesigned some core C++ features, while also sharing characteristics of other languages, notably Java, Python, Ruby, C#, and Eiffel. The design goals of the language attempted to combine the performance and safety of compiled languages with the expressive power of modern dynamic languages. Idiomatic D code is commonly as fast as equivalent C++ code, while also being shorter. The language as a whole is not memory-safe but does include optional attributes designed to check memory safety. Type inference, automatic memory management and syntactic sugar for common types allow faster development, while bounds checking, design by contract features and a concurrency-aware type system help reduce the occurrence of bugs. Features D was designed with lessons learned from practical C++ usage, rather than from a purely theoretical perspective. Although the language uses many C and C++ concepts, it also discards some, or uses different approaches (and syntax) to achieve some goals. As such, it is not source compatible (nor does it aim to be) with C and C++ source code in general (some simpler code bases from these languages might by luck work with D, or require some porting). D has, however, been constrained in its design by the rule that any code that was legal in both C and D should behave in the same way. D gained some features before C++, such as closures, anonymous functions, compile-time function execution, ranges, built-in container iteration concepts and type inference. D adds to the functionality of C++ by also implementing design by contract, unit testing, true modules, garbage collection, first class arrays, associative arrays, dynamic arrays, array slicing, nested functions, lazy evaluation, scoped (deferred) code execution, and a re-engineered template syntax. C++ multiple inheritance was replaced by Java-style single inheritance with interfaces and mixins. On the other hand, D's declaration, statement and expression syntax closely matches that of C++. D retains C++'s ability to perform low-level programming including inline assembler, which typifies the differences between D and application languages like Java and C#. Inline assembler lets programmers enter machine-specific assembly code within standard D code, a method used by system programmers to access the low-level features of the processor needed to run programs that interface directly with the underlying hardware, such as operating systems and device drivers, as well as writing high-performance code (i.e. using vector extensions, SIMD) that is hard to generate by the compiler automatically. D supports function overloading and operator overloading, as well as dynamic arrays and associative arrays by default. Symbols (functions, variables, classes) can be declared in any order - forward declarations are not required. Similarly imports can be done almost in any order, and even be scoped (i.e. import some module or part of it inside a function, class or unittest only). D has built-in support for documentation comments, allowing automatic documentation generation. In D, text character strings are just arrays of characters, and arrays in D are bounds-checked, unlike those in C++. 
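As a brief illustrative sketch, not taken from the article and using arbitrary variable names, the built-in dynamic arrays, associative arrays, array slicing and run-time bounds checking mentioned above can be exercised as follows:

import std.stdio;

void main()
{
    int[] a = [1, 2, 3, 4, 5];      // built-in dynamic array
    int[] slice = a[1 .. 4];        // array slicing: refers to elements 1, 2 and 3 of a
    slice[0] = 20;                  // slices share storage, so a[1] becomes 20
    assert(a[1] == 20);

    int[string] ages = ["Alice": 30, "Bob": 25];  // built-in associative array
    ages["Carol"] = 41;

    writeln(a);              // [1, 20, 3, 4, 5]
    writeln(ages["Carol"]);  // 41
    // a[10] = 0;            // out of range: rejected by array bounds checking at run time
}

Note that a slice is a view into the original array rather than a copy, which is why the assignment through slice is visible in a.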
Specific operators for string handling exist, visually distinct from their mathematical corollaries. D has first class types for complex and imaginary numbers, and efficiently evaluates expressions involving such types. Programming paradigms D supports five main programming paradigms: concurrent (actor model) object-oriented imperative functional metaprogramming Imperative Imperative programming in D is almost identical to that in C. Functions, data, statements, declarations and expressions work just as they do in C, and the C runtime library may be accessed directly. On the other hand, some notable differences between D and C in the area of imperative programming include D's foreach loop construct, which allows looping over a collection, and nested functions, which are functions that are declared inside another and may access the enclosing function's local variables. import std.stdio; void main() { int multiplier = 10; int scaled(int x) { return x * multiplier; } foreach (i; 0 .. 10) { writefln("Hello, world %d! scaled = %d", i, scaled(i)); } } Object-oriented Object-oriented programming in D is based on a single inheritance hierarchy, with all classes derived from class Object. D does not support multiple inheritance; instead, it uses Java-style interfaces, which are comparable to C++'s pure abstract classes, and mixins, which separate common functionality from the inheritance hierarchy. D also allows the defining of static and final (non-virtual) methods in interfaces. Interfaces and inheritance in D support covariant types for return types of overridden methods. D supports type forwarding, as well as optional custom dynamic dispatch. Classes (and interfaces) in D can contain invariants which are automatically checked before and after entry to public methods, in accordance with the design by contract methodology. Many aspects of classes (and structs) can be introspected automatically at compile time (a form of reflection using type traits) and at run time (RTTI / TypeInfo), to facilitate generic code or automatic code generation (usually using compile-time techniques). Functional D supports functional programming features such as function literals, closures, recursively immutable objects and the use of higher-order functions. There are two syntaxes for anonymous functions, including a multiple-statement form and a "shorthand" single-expression notation: int function(int) g; g = (x) { return x * x; }; // longhand g = (x) => x * x; // shorthand There are two built-in types for function literals, function, which is simply a pointer to a stack-allocated function, and delegate, which also includes a pointer to the surrounding environment. Type inference may be used with an anonymous function, in which case the compiler creates a delegate unless it can prove that an environment pointer is not necessary. Likewise, to implement a closure, the compiler places enclosed local variables on the heap only if necessary (for example, if a closure is returned by another function, and exits that function's scope). When using type inference, the compiler will also add attributes such as pure and nothrow to a function's type, if it can prove that they apply. Other functional features such as currying and common higher-order functions such as map, filter, and reduce are available through the standard library modules std.functional and std.algorithm.
import std.stdio, std.algorithm, std.range; void main() { int[] a1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]; int[] a2 = [6, 7, 8, 9]; // must be immutable to allow access from inside a pure function immutable pivot = 5; int mySum(int a, int b) pure nothrow // pure function { if (b <= pivot) // ref to enclosing-scope return a + b; else return a; } // passing a delegate (closure) auto result = reduce!mySum(chain(a1, a2)); writeln("Result: ", result); // Result: 15 // passing a delegate literal result = reduce!((a, b) => (b <= pivot) ? a + b : a)(chain(a1, a2)); writeln("Result: ", result); // Result: 15 } Alternatively, the above function compositions can be expressed using Uniform function call syntax (UFCS) for more natural left-to-right reading: auto result = a1.chain(a2).reduce!mySum(); writeln("Result: ", result); result = a1.chain(a2).reduce!((a, b) => (b <= pivot) ? a + b : a)(); writeln("Result: ", result); Parallelism Parallel programming concepts are implemented in the library, and do not require extra support from the compiler. However the D type system and compiler ensure that data sharing can be detected and managed transparently. import std.stdio : writeln; import std.range : iota; import std.parallelism : parallel; void main() { foreach (i; iota(11).parallel) { // The body of the foreach loop is executed in parallel for each i writeln("processing ", i); } } iota(11).parallel is equivalent to std.parallelism.parallel(iota(11)) by using UFCS. The same module also supports taskPool which can be used for dynamic creation of parallel tasks, as well as map-filter-reduce and fold style operations on ranges (and arrays), which is useful when combined with functional operations. std.algorithm.map returns a lazily evaluated range rather than an array. This way, the elements are computed by each worker task in parallel automatically. import std.stdio : writeln; import std.algorithm : map; import std.range : iota; import std.parallelism : taskPool; /* On Intel i7-3930X and gdc 9.3.0: * 5140ms using std.algorithm.reduce * 888ms using std.parallelism.taskPool.reduce * * On AMD Threadripper 2950X, and gdc 9.3.0: * 2864ms using std.algorithm.reduce * 95ms using std.parallelism.taskPool.reduce */ void main() { auto nums = iota(1.0, 1_000_000_000.0); auto x = taskPool.reduce!"a + b"( 0.0, map!"1.0 / (a * a)"(nums) ); writeln("Sum: ", x); } Concurrency Concurrency is fully implemented in the library, and it does not require support from the compiler. Alternative implementations and methodologies of writing concurrent code are possible. The use of D typing system does help ensure memory safety. import std.stdio, std.concurrency, std.variant; void foo() { bool cont = true; while (cont) { receive( // Delegates are used to match the message type. (int msg) => writeln("int received: ", msg), (Tid sender) { cont = false; sender.send(-1); }, (Variant v) => writeln("huh?") // Variant matches any type ); } } void main() { auto tid = spawn(&foo); // spawn a new thread running foo() foreach (i; 0 .. 10) tid.send(i); // send some integers tid.send(1.0f); // send a float tid.send("hello"); // send a string tid.send(thisTid); // send a struct (Tid) receive((int x) => writeln("Main thread received message: ", x)); } Metaprogramming Metaprogramming is supported through templates, compile-time function execution, tuples, and string mixins. The following examples demonstrate some of D's compile-time features. Templates in D can be written in a more imperative style compared to the C++ functional style for templates. 
This is a regular function that calculates the factorial of a number: ulong factorial(ulong n) { if (n < 2) return 1; else return n * factorial(n-1); } Here, the use of static if, D's compile-time conditional construct, is demonstrated to construct a template that performs the same calculation using code that is similar to that of the function above: template Factorial(ulong n) { static if (n < 2) enum Factorial = 1; else enum Factorial = n * Factorial!(n-1); } In the following two examples, the template and function defined above are used to compute factorials. The types of constants need not be specified explicitly as the compiler infers their types from the right-hand sides of assignments: enum fact_7 = Factorial!(7); This is an example of compile-time function execution (CTFE). Ordinary functions may be used in constant, compile-time expressions provided they meet certain criteria: enum fact_9 = factorial(9); The std.string.format function performs printf-like data formatting (also at compile-time, through CTFE), and the "msg" pragma displays the result at compile time: import std.string : format; pragma(msg, format("7! = %s", fact_7)); pragma(msg, format("9! = %s", fact_9)); String mixins, combined with compile-time function execution, allow for the generation of D code using string operations at compile time. This can be used to parse domain-specific languages, which will be compiled as part of the program: import FooToD; // hypothetical module which contains a function that parses Foo source code // and returns equivalent D code void main() { mixin(fooToD(import("example.foo"))); } Memory management Memory is usually managed with garbage collection, but specific objects may be finalized immediately when they go out of scope. This is what the majority of programs and libraries written in D use. In case more control over memory layout and better performance is needed, explicit memory management is possible using the overloaded operators new and delete, by calling C's malloc and free directly, or by implementing custom allocator schemes (e.g. on stack with fallback, RAII-style allocation, reference counting, shared reference counting). Garbage collection can be controlled: programmers may add and exclude memory ranges from being observed by the collector, may disable and enable the collector, and may force either a generational or a full collection cycle. The manual gives many examples of how to implement different highly optimized memory management schemes for when garbage collection is inadequate in a program. In functions, structs are by default allocated on the stack, while classes are by default allocated on the heap (with only the reference to the class instance being on the stack). However, this can be changed for classes, for example by using the standard library template std.typecons.scoped, or for structs by using new and assigning to a pointer instead of to a value-based variable. In functions, static arrays (of known size) are allocated on the stack. For dynamic arrays, one can use the core.stdc.stdlib.alloca function (similar to the C function alloca) to allocate memory on the stack. The returned pointer can be used (recast) as a (typed) dynamic array by means of a slice (however, resizing the array, including appending, must be avoided, and for obvious reasons it must not be returned from the function). The scope keyword can be used to annotate parts of code, as well as variables and classes/structs, to indicate that they should be destroyed (destructor called) immediately on scope exit. Whether the memory is then deallocated also depends on the implementation and on class-vs-struct differences.
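As a minimal sketch of this deterministic destruction (illustrative only; the class name Resource is hypothetical), a class instance can be placed on the stack with std.typecons.scoped, or declared with the scope storage class, so that its destructor runs at scope exit rather than at some later collection cycle:
import std.stdio : writeln;
import std.typecons : scoped;

class Resource
{
    this()  { writeln("acquired"); }
    ~this() { writeln("released"); }
}

void main()
{
    {
        auto r = scoped!Resource();  // instance lives on the stack, not on the GC heap
        // ... use r ...
    }                                // "released" is printed here, deterministically

    scope s = new Resource();        // 'scope' local: destructor runs when main exits
    writeln("end of main");
}
Either form keeps the destructor call tied to lexical scope, which is the usual way RAII-style resource handling is expressed in D.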
std.experimental.allocator contains modular and composable allocator templates for creating custom high-performance allocators for special use cases. SafeD SafeD is the name given to the subset of D that can be guaranteed to be memory safe (no writes to memory that has not been allocated or that has been recycled). Functions marked @safe are checked at compile time to ensure that they do not use any features that could result in corruption of memory, such as pointer arithmetic and unchecked casts, and any other functions called must also be marked as @safe or @trusted. Functions can be marked @trusted for the cases where the compiler cannot distinguish between safe use of a feature that is disabled in SafeD and a potential case of memory corruption. Scope Lifetime Safety Initially under the banners of DIP1000 and DIP25 (now part of the language specification), D provides protections against certain ill-formed constructions involving the lifetimes of data. The current mechanisms in place primarily deal with function parameters and stack memory; however, it is a stated ambition of the leadership of the programming language to provide a more thorough treatment of lifetimes within D (influenced by ideas from the Rust programming language). Lifetime Safety of Assignments Within @safe code, the lifetime of an assignment involving a reference type is checked to ensure that the lifetime of the assignee is longer than that of the assigned. For example: @safe void test() { int tmp = 0; // #1 int* rad; // #2 rad = &tmp; // If the order of the declarations of #1 and #2 is reversed, this fails. { int bad = 45; // Lifetime of "bad" only extends to the scope in which it is defined. *rad = bad; // This is kosher. rad = &bad; // Lifetime of rad longer than bad, hence this is not kosher at all. } } Function Parameter Lifetime Annotations within @safe code When applied to function parameters which are either of pointer type or references, the keywords return and scope constrain the lifetime and use of that parameter. The standard dictates the behaviour of each of these annotations; an annotated example is given below. @safe: int* gp; void thorin(scope int*); void gloin(int*); int* balin(return scope int* p, scope int* q, int* r) { gp = p; // error, p escapes to global gp gp = q; // error, q escapes to global gp gp = r; // ok thorin(p); // ok, p does not escape thorin() thorin(q); // ok thorin(r); // ok gloin(p); // error, gloin() escapes p gloin(q); // error, gloin() escapes q gloin(r); // ok that gloin() escapes r return p; // ok return q; // error, cannot return 'scope' q return r; // ok } Interaction with other systems C's application binary interface (ABI) is supported, as well as all of C's fundamental and derived types, enabling direct access to existing C code and libraries. D bindings are available for many popular C libraries. Additionally, C's standard library is part of standard D. On Microsoft Windows, D can access Component Object Model (COM) code. As long as memory management is properly taken care of, many other languages can be mixed with D in a single binary. For example, the GDC compiler allows C, C++ and code in other supported languages to be linked and intermixed with D code. D code (functions) can also be marked as using the C, C++ or Pascal ABIs, and thus be passed to libraries written in these languages as callbacks. Similarly, data can be exchanged between code written in these languages in both directions. This usually restricts use to primitive types, pointers, some forms of arrays, unions, structs, and only some types of function pointers.
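A minimal sketch of this direct C interoperability (illustrative, not taken from the original text): a D program can declare a C library function with extern (C) and call it with no additional binding layer.
// The C function is declared with C linkage; core.stdc.stdio provides the same declaration.
extern (C) int printf(const char* format, ...);

void main()
{
    int answer = 42;
    printf("the answer is %d\n", answer);  // calls straight into the C runtime library
}
String literals in D are zero-terminated and convert implicitly to const(char)*, which is what makes a call like this work without any conversion helper.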
Because many other programming languages often provide a C API for writing extensions or for running the interpreter of the language, D can interface directly with these languages as well, using standard C bindings (with a thin D interface file). For example, there are bi-directional bindings for languages like Python, Lua and other languages, often using compile-time code generation and compile-time type reflection methods. Interaction with C++ code D takes a permissive but realistic approach to interoperation with C++ code. For D code marked as extern(C++), the following features are specified: The name mangling conventions shall match those of C++ on the target. For function calls, the ABI shall be equivalent. The vtable shall be matched up to single inheritance (the only level supported by the D language specification). C++ namespaces are used via the syntax extern(C++, namespace) where namespace is the name of the C++ namespace. An example of C++ interoperation The C++ side #include <iostream> using namespace std; class Base { public: virtual void print3i(int a, int b, int c) = 0; }; class Derived : public Base { public: int field; Derived(int field) : field(field) {} void print3i(int a, int b, int c) { cout << "a = " << a << endl; cout << "b = " << b << endl; cout << "c = " << c << endl; } int mul(int factor); }; int Derived::mul(int factor) { return field * factor; } Derived *createInstance(int i) { return new Derived(i); } void deleteInstance(Derived *&d) { delete d; d = 0; } The D side extern(C++) { abstract class Base { void print3i(int a, int b, int c); } class Derived : Base { int field; @disable this(); override void print3i(int a, int b, int c); final int mul(int factor); } Derived createInstance(int i); void deleteInstance(ref Derived d); } void main() { import std.stdio; auto d1 = createInstance(5); writeln(d1.field); writeln(d1.mul(4)); Base b1 = d1; b1.print3i(1, 2, 3); deleteInstance(d1); assert(d1 is null); auto d2 = createInstance(42); writeln(d2.field); deleteInstance(d2); assert(d2 is null); } Better C The D programming language has an official subset known as "Better C". This subset forbids access to D features requiring use of runtime libraries other than that of C. Enabled via the compiler flags "-betterC" on DMD and LDC, and "-fno-druntime" on GDC, Better C may only call into D code compiled under the same flag (and linked code other than D), but code compiled without the option may call into code compiled with it; this will, however, lead to slightly different behaviours due to differences in how C and D handle asserts. Features included in Better C Unrestricted use of compile-time features (for example, D's dynamic allocation features can be used at compile time to pre-allocate D data) Full metaprogramming facilities Nested functions, nested structs, delegates and lambdas Member functions, constructors, destructors, operator overloading, etc. The full module system Array slicing, and array bounds checking RAII Memory safety protections Interfacing with C++ COM classes and C++ classes assert failures are directed to the C runtime library switch with strings final switch unittest blocks printf format validation Features excluded from Better C Garbage collection TypeInfo and ModuleInfo Built-in threading (e.g.
core.thread) Dynamic arrays (though slices of static arrays work) and associative arrays Exceptions synchronized and core.sync Static module constructors or destructors History Walter Bright started working on a new language in 1999. D was first released in December 2001 and reached version 1.0 in January 2007. The first version of the language (D1) concentrated on the imperative, object oriented and metaprogramming paradigms, similar to C++. Some members of the D community, dissatisfied with Phobos, D's official runtime and standard library, created an alternative runtime and standard library named Tango. The first public Tango announcement came within days of D 1.0's release. Tango adopted a different programming style, embracing OOP and high modularity. Being a community-led project, Tango was more open to contributions, which allowed it to progress faster than the official standard library. At that time, Tango and Phobos were incompatible due to different runtime support APIs (the garbage collector, threading support, etc.). This made it impossible to use both libraries in the same project. The existence of two libraries, both widely in use, led to significant dispute, with some packages using Phobos and others using Tango. In June 2007, the first version of D2 was released. The beginning of D2's development signaled D1's stabilization. The first version of the language was placed in maintenance, only receiving corrections and implementation bugfixes. D2 introduced breaking changes to the language, beginning with its first experimental const system. D2 later added numerous other language features, such as closures, purity, and support for the functional and concurrent programming paradigms. D2 also solved standard library problems by separating the runtime from the standard library. The completion of a D2 Tango port was announced in February 2012. The release of Andrei Alexandrescu's book The D Programming Language on 12 June 2010 marked the stabilization of D2, which today is commonly referred to as just "D". In January 2011, D development moved from a bugtracker / patch-submission basis to GitHub. This has led to a significant increase in contributions to the compiler, runtime and standard library. In December 2011, Andrei Alexandrescu announced that D1, the first version of the language, would be discontinued on 31 December 2012. The final D1 release, D v1.076, was on 31 December 2012. Code for the official D compiler, the Digital Mars D compiler by Walter Bright, was originally released under a custom license, qualifying as source available but not conforming to the open source definition. In 2014, the compiler front-end was re-licensed as open source under the Boost Software License. This re-licensed code excluded the back-end, which had been partially developed at Symantec. On 7 April 2017, the whole compiler was made available under the Boost license after Symantec gave permission to re-license the back-end, too. On 21 June 2017, the D Language was accepted for inclusion in GCC. Implementations Most current D implementations compile directly into machine code for efficient execution. Production ready compilers: DMD – The Digital Mars D compiler by Walter Bright is the official D compiler; open sourced under the Boost Software License. The DMD frontend is shared by GDC (now in GCC) and LDC, to improve compatibility between compilers. Initially the frontend was written in C++, but most of it is now written in D itself (self-hosting).
The backend and machine code optimizers are based on the Symantec compiler. At first it supported only 32-bit x86, with support added for 64-bit amd64 and PowerPC by Walter Bright. Later the backend and almost the entire compiler were ported from C++ to D for full self-hosting. GCC – The GNU Compiler Collection merged GDC into GCC 9 on 29 October 2018. The first working versions of GDC with GCC, based on GCC 3.3 and GCC 3.4 on 32-bit x86 on Linux and macOS, were released on 22 March 2004. Since then GDC has gained support for additional platforms, improved performance, and fixed bugs, while tracking upstream DMD code for the frontend and language specification. LDC – A compiler based on the DMD front-end that uses LLVM as its compiler back-end. The first release-quality version was published on 9 January 2009. It supports D version 2.0. Toy and proof-of-concept compilers: D Compiler for .NET – A back-end for the D programming language 2.0 compiler. It compiles the code to Common Intermediate Language (CIL) bytecode rather than to machine code. The CIL can then be run via a Common Language Infrastructure (CLI) virtual machine. The project has not been updated in years and the author indicated the project is not active anymore. SDC – The Stupid D Compiler uses a custom front-end and LLVM as its compiler back-end. It is written in D and uses a scheduler to handle symbol resolution in order to elegantly handle the compile-time features of D. This compiler currently supports a limited subset of the language. Using the above compilers and toolchains, it is possible to compile D programs to target many different architectures, including x86, amd64, AArch64, PowerPC, MIPS64, DEC Alpha, Motorola m68k, Sparc, s390, and WebAssembly. The primary supported operating systems are Windows and Linux, but various compilers also support Mac OS X, FreeBSD, NetBSD, AIX, Solaris/OpenSolaris and Android, either as a host or target, or both. The WebAssembly target (supported via LDC and LLVM) can operate in any WebAssembly environment, like a modern web browser (Google Chrome, Mozilla Firefox, Microsoft Edge, Apple Safari) or a dedicated Wasm virtual machine. Development tools Editors and integrated development environments (IDEs) supporting syntax highlighting and partial code completion for the language include SlickEdit, Emacs, vim, SciTE, Smultron, Zeus, and Geany among others. Dexed (formerly Coedit) is a D-focused graphical IDE written in Object Pascal. Mono-D is a feature-rich cross-platform D-focused graphical IDE based on MonoDevelop / Xamarin Studio, mainly written in C#. Eclipse plug-ins for D include DDT and Descent (dead project). Visual Studio integration is provided by VisualD. Visual Studio Code integration is provided by extensions such as Dlang-Vscode or Code-D. A bundle is available for TextMate, and the Code::Blocks IDE includes partial support for the language. However, standard IDE features such as code completion or refactoring are not yet available, though they do work partially in Code::Blocks (due to D's similarity to C). The Xcode 3 plugin "D for Xcode" enables D-based projects and development. An autocompletion plugin is available for KDevelop (as well as its text editor backend, Kate). Open source D IDEs for Windows exist, some written in D, such as Poseidon, D-IDE, and Entice Designer. D applications can be debugged using any C/C++ debugger, like GDB or WinDbg, although support for various D-specific language features is extremely limited.
On Windows, D programs can be debugged using Ddbg, or Microsoft debugging tools (WinDBG and Visual Studio), after having converted the debug information using cv2pdb. The ZeroBUGS debugger for Linux has experimental support for the D language. Ddbg can be used with various IDEs or from the command line; ZeroBUGS has its own graphical user interface (GUI). DustMite is a powerful tool for minimizing D source code, useful when finding compiler or test issues. dub is a popular package and build manager for D applications and libraries, and is often integrated into IDE support. Examples Example 1 This example program prints its command line arguments. The main function is the entry point of a D program, and args is an array of strings representing the command line arguments. A string in D is an array of characters, represented by immutable(char)[]. import std.stdio: writefln; void main(string[] args) { foreach (i, arg; args) writefln("args[%d] = '%s'", i, arg); } The foreach statement can iterate over any collection. In this case, it is producing a sequence of indexes (i) and values (arg) from the array args. The index i and the value arg have their types inferred from the type of the array args. Example 2 The following shows several D capabilities and D design trade-offs in a short program. It iterates over the lines of a text file named words.txt, which contains a different word on each line, and prints all the words that are anagrams of other words. import std.stdio, std.algorithm, std.range, std.string; void main() { dstring[] [dstring] signature2words; foreach (dchar[] w; lines(File("words.txt"))) { w = w.chomp().toLower(); immutable signature = w.dup.sort().release().idup; signature2words[signature] ~= w.idup; } foreach (words; signature2words) { if (words.length > 1) { writeln(words.join(" ")); } } } signature2words is a built-in associative array that maps dstring (32-bit / char) keys to arrays of dstrings. It is similar to defaultdict(list) in Python. lines(File()) yields lines lazily, with the newline. It has to then be copied with idup to obtain a string to be used for the associative array values (the idup property of arrays returns an immutable duplicate of the array, which is required since the dstring type is actually immutable(dchar)[]). Built-in associative arrays require immutable keys. The ~= operator appends a new dstring to the values of the associative array. toLower, join and chomp are string functions that D allows the use of with a method syntax. The names of such functions are often similar to Python string methods. toLower converts a string to lower case, join(" ") joins an array of strings into a single string using a single space as separator, and chomp removes a newline from the end of the string if one is present. The w.dup.sort().release().idup expression is more readable than, but equivalent to, for example, release(sort(w.dup)).idup. This feature is called UFCS (Uniform Function Call Syntax), and allows extending any built-in or third party package types with method-like functionality. The style of writing code like this is often referred to as a pipeline (especially when the objects used are lazily computed, for example iterators / ranges) or a fluent interface. sort is a std.algorithm function that sorts the array in place, creating a unique signature for words that are anagrams of each other. The release() method on the return value of sort() is handy to keep the code as a single expression.
The second foreach iterates on the values of the associative array, it is able to infer the type of words. signature is assigned to an immutable variable, its type is inferred. UTF-32 dchar[] is used instead of normal UTF-8 char[] otherwise sort() refuses to sort it. There are more efficient ways to write this program using just UTF-8. Uses Notable organisations that use the D programming language for projects include Facebook, eBay, and Netflix. D has been successfully used for AAA games, language interpreters, virtual machines, an operating system kernel, GPU programming, web development, numerical analysis, GUI applications, a passenger information system, machine learning, text processing, web and application servers and research. See also Ddoc D Language Foundation References Further reading (distributed under CC-BY-NC-SA license). This book teaches programming to novices, but covers many advanced D topics as well. External links Digital Mars Turkish Forum Programming languages C programming language family Class-based programming languages Cross-platform software Free compilers and interpreters High-level programming languages Multi-paradigm programming languages Object-oriented programming languages Procedural programming languages Programming languages created in 2001 Statically typed programming languages Systems programming languages 2001 software Software using the Boost license Articles with example D code
4301
https://en.wikipedia.org/wiki/Beatmatching
Beatmatching
Beatmatching or pitch cue is a disc jockey technique of pitch shifting or timestretching an upcoming track to match its tempo to that of the currently playing track, and to adjust them such that the beats (and, usually, the bars) are synchronised — e.g. the kicks and snares in two house records hit at the same time when both records are played simultaneously. Beatmatching is a component of beatmixing, which employs beatmatching combined with equalization, attention to phrasing and track selection in an attempt to make a single mix that flows together and has a good structure. The technique was developed to keep people from leaving the dancefloor at the end of a song. These days it is considered basic among disc jockeys (DJs) in electronic dance music genres, and it is standard practice in clubs to keep a constant beat going through the night, even if DJs change in the middle. Beatmatching is no longer considered a novelty, and new digital software has made the technique much easier to master. Technique The beatmatching technique consists of the following steps: While a record is playing, start a second record playing, but only monitored through headphones, not being fed to the main PA system. Use the gain (or trim) control on the mixer to match the levels of the two records. Restart and slip-cue the new record at the right time, on beat with the record currently playing. If the beat on the new record hits before the beat on the current record then the new record is too fast; reduce the pitch and manually slow the speed of the new record to bring the beats back in sync. If the beat on the new record hits after the beat on the current record then the new record is too slow; increase the pitch and manually increase the speed of the new record to bring the beats back in sync. Continue this process until the two records are in sync with each other. It can be difficult to sync the two records perfectly, so manual adjustment of the records is necessary to maintain the beat synchronization. Gradually fade in parts of the new track while fading out the old track. While in the mix, ensure that the tracks are still synchronized, adjusting the records if needed. The fade can be repeated several times, for example, from the first track, fade to the second track, then back to the first, then to the second again. One of the key things to consider when beatmatching is the tempo of both songs, and the musical theory behind the songs. Attempting to beatmatch songs with completely different beats per minute (BPM) will result in one of the songs sounding too fast or too slow. When beatmatching, a popular technique is to vary the equalization of both tracks. For example, when the kicks are occurring on the same beat, a more seamless transition can occur if the lower frequencies are taken out of one of the songs and the lower frequencies of the other song are boosted. Doing so creates a smoother transition. Pitch and tempo The pitch and tempo of a track are normally linked together: spin a disc 5% faster and both pitch and tempo will be 5% higher. However, some modern DJ software can change pitch and tempo independently using time-stretching and pitch-shifting, allowing harmonic mixing. There is also a feature in modern DJ software which may be called "master tempo" or "key adjust" which changes the tempo while keeping the original pitch.
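As a simple worked example (illustrative figures, not taken from the article): to match a record playing at 125 BPM to one playing at 128 BPM, the incoming record must be sped up by a factor of 128/125 = 1.024, i.e. its pitch control must be raised by about 2.4%; on a turntable without a master tempo feature, this also raises its musical pitch by the same 2.4%.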
History Beatmatching was invented by Francis Grasso in the late 1960s and early 1970s. Initially he counted the tempo with a metronome and looked for records with the same tempo. Later a mixer was built for him by Alex Rosner which let him listen to any channel in the headphones independently of what was playing on the speakers; this became the defining feature of DJ mixers. That, together with turntables with pitch control, enabled him to mix tracks with different tempos by changing the pitch of the cued (redirected to headphones) track to match its tempo with the track being played, by ear. Essentially, the technique he originated hasn't changed since. These days beat-matching is considered central to DJing, and features making it possible are a requirement for DJ-oriented players. In 1978, the Technics SL-1200MK2 turntable was released; its comfortable and precise sliding pitch control and high-torque direct-drive motor made beat-matching easier, and it became the standard among DJs. With the advent of the compact disc, DJ-oriented Compact Disc players with pitch control and other features enabling beat-matching (and sometimes scratching), dubbed CDJs, were introduced by various companies. More recently, software with similar capabilities has been developed to allow manipulation of digital audio files stored on computers using turntables with special vinyl records (e.g. Final Scratch, M-Audio Torq, Serato Scratch Live) or a computer interface (e.g. Traktor DJ Studio, Mixxx, Virtual DJ). Other software that includes algorithmic beat-matching includes Ableton Live, which allows for realtime music manipulation and deconstruction, and Mixmeister, a DJ mixset creation tool. Freeware software such as Rapid Evolution can detect the beats per minute and determine the percent BPM difference between songs. The change from pure hardware to software is on the rise, with big DJs adding new equipment such as the laptop to their kits and dropping the burden of carrying hundreds of CDs with them. The creation of the mp3 player gave DJs an alternative tool for DJing. Limitations with mp3-player DJing equipment have meant that only second-generation equipment such as the IDJ2 or the Cortex Dmix-300 has the pitch control that alters tempo and allows for beat-matching on a digital music player. However, recent additions to the Pioneer CDJ family, such as the CDJ-2000, allow mp3 players and other digital storage devices (such as external hard drives, SD cards and USB memory sticks) to be connected to the CDJ device via USB. This allows the DJ to make use of the beat-matching capabilities of the CDJ unit whilst playing digital music files from the mp3 player or other storage device. Most modern DJ hardware and software now offers a "Sync" feature which automatically adjusts the tempo between tracks being mixed, so the DJ no longer needs to spend time and effort matching beats. This has caused some controversy in the DJ industry since almost anyone can beat-match thanks to the new function.
47471388
https://en.wikipedia.org/wiki/Carding%20%28fraud%29
Carding (fraud)
Carding is a term describing the trafficking and unauthorized use of credit cards. The stolen credit cards or credit card numbers are then used to buy prepaid gift cards to cover up the tracks. Activities also encompass exploitation of personal data and money laundering techniques. Modern carding sites have been described as full-service commercial entities. Acquisition There are a great many methods to acquire credit card and associated financial and personal data. The earliest known carding methods have also included 'trashing' for financial data, raiding mail boxes and working with insiders. Some bank card numbers can be semi-automatically generated based on known sequences via a 'BIN attack'. Carders might attempt a 'distributed guessing attack' to discover valid numbers by submitting numbers across a high number of ecommerce sites simultaneously. Today, various methodologies include skimmers at ATMs, hacking or web skimming an ecommerce or payment processing site, or even intercepting card data within a point of sale network. Randomly calling hotel room phones asking guests to 'confirm' credit card details is an example of a social engineering attack vector. Resale Stolen data may be bundled as a 'Base' or 'First-hand base' if the seller participated in the theft themselves. Resellers may buy 'packs' of dumps from multiple sources. Ultimately, the data may be sold on darknet markets and other carding sites and forums specialising in these types of illegal goods. Teenagers have gotten involved in fraud such as using card details to order pizzas. On the more sophisticated of such sites, individual 'dumps' may be purchased by zip code and country so as to avoid alerting banks about their misuse. Automatic checker services perform validation en masse in order to quickly check if a card has yet to be blocked. Sellers will advertise their dump's 'valid rate', based on estimates or checker data. Cards with a greater than 90% valid rate command higher prices. 'Cobs' or changes of billing are highly valued, where sufficient information is captured to allow redirection of the registered card's billing and shipping addresses to one under the carder's control. Full identity information may be sold as 'Fullz', inclusive of social security number, date of birth and address, to perform more lucrative identity theft. Fraudulent vendors are referred to as 'rippers', vendors who take buyers' money then never deliver. This is increasingly mitigated via forum and store based feedback systems as well as through strict site invitation and referral policies. Whilst some carding forums exist only on the dark web, today most exist on the internet, and many will use the Cloudflare network protection service. Estimated per-card prices, in US$, for stolen payment card data, 2015. Cash out Funds from stolen cards themselves may be cashed out via buying pre-paid cards or gift cards, or through reshipping goods through mules and then e-fencing through online marketplaces like eBay. Increased law enforcement scrutiny over reshipping services has led to the rise of dedicated criminal operations for reshipping stolen goods. Hacked computers may be configured with SOCKS proxy software to optimise acceptance from payment processors. Money laundering The 2004 investigation into the ShadowCrew forum also led to investigations of the online payment service E-gold, which had been launched in 1996 and was one of the preferred money transfer systems of carders at the time.
In December 2005 its owner Douglas Jackson's house and businesses were raided as a part of 'Operation Goldwire'. Jackson discovered that the service had become a bank and transfer system to the criminal underworld. After Jackson was pressured to disclose ongoing records to law enforcement, many arrests were made through to 2007. However, in April 2007 Jackson himself was indicted for money laundering, conspiracy and operating an unlicensed money transmitting business. This led to the service freezing the assets of users in 'high risk' countries and coming under more traditional financial regulation. Since 2006, Liberty Reserve had become a popular service for cybercriminals. When it was seized in May 2013 by the US government, this caused a major disruption to the cybercrime ecosystem. Today, some carders prefer to make payments between themselves with bitcoin, as well as with traditional wire services such as Western Union, MoneyGram or the Russian WebMoney service. Related services Many forums also provide related computer crime services such as phishing kits, malware and spam lists. They may also act as a distribution point for the latest fraud tutorials, either for free or commercially. ICQ was at one point the instant messenger of choice due to its anonymity, as were MSN clients modified to use PGP. Carding related sites may be hosted on botnet based fast flux web hosting for resilience against law enforcement action. Other account types like PayPal, Uber, Netflix and loyalty card points may be sold alongside card details. Logins to many sites may also be sold as backdoor access, apparently to major institutions such as banks, universities and even industrial control systems. For gift card fraud, retailers are prone to be exploited by fraudsters in their attempts to steal gift cards via bot technology or through stolen credit card information. In the context of carding fraud, using stolen credit card data to purchase gift cards is becoming an increasingly common money laundering tactic. Another way gift card fraud occurs is when a retailer's online systems which store gift card data undergo brute force attacks from automated bots. Tax refund fraud is an increasingly popular method of using identity theft to acquire prepaid cards ready for immediate cash out. Popular coupons may be counterfeited and sold also. Personal information and even medical records are sometimes available. Theft and gift card fraud may operate entirely independently of online carding operations. Cashing out in gift cards is very common as well, as "discounted gift cards" can be found for sale anywhere, making it an easy sale for a carder, and a very lucrative operation. Google hacks, popularly known as Google dorks, are also used extensively to find credit card details. History 1980s–1999 Since the 1980s in the days of the dial-up BBSes, the term carding has been used to describe the practices surrounding credit card fraud. Methods such as 'trashing', raiding mail boxes and working with insiders at stores were cited as effective ways of acquiring card details. Use of drops at places like abandoned houses and apartments, or with persuadable neighbors near such a location, was suggested. Social engineering of mail order sales representatives was suggested in order to provide passable information for card not present transactions.
Characters such as 'The Video Vindicator' would write extensive guides on 'Carding Across America', burglary, fax fraud, supporting phreaking, and advanced techniques for maximizing profits. During the 1980s, the majority of hacker arrests were attributable to carding-related activities due to the relative maturity of financial laws compared to emerging computer regulations. Started in 1989, by 1990 Operation Sundevil was launched by the United States Secret Service to crack down on use of BBS groups involved in credit card fraud and other illegal computer activities, the most highly publicised action by the US federal government against hackers at the time. The severity of the crackdown was such that the Electronic Frontier Foundation was formed in response to the violation of civil liberties. In the mid-1990s, with the rise of AOL dial-up accounts, the AOHell software became a popular tool for phishing and stealing information such as credit card details from new Internet users. Such abuse was exacerbated because prior to 1995 AOL did not validate subscription credit card numbers on account creation. Abuse was so common that AOL added "no one working at AOL will ask for your password or billing information" to all instant messenger communications. Only by 1997, when warez and phishing were pushed off the service, did these types of attacks begin to decline. December 1999 featured an unusual case of extortion when Maxim, a Russian 19-year-old, stole 25,000 users' card details from CD Universe and demanded $100,000 for their destruction. When the ransom was not paid, the information was leaked on the Internet. One of the first books written about carding, 100% Internet Credit Card Fraud Protected, featured content produced by 'Hawk' of carding group 'Universal Carders'. It described the spring 1999 hack and credit card theft at CyberCash, the stratification of carder proficiencies (script kiddie through to professional), common purchases for each type, and basic phishing schemes used to acquire credit card data. By 1999, United States offline and online credit card fraud annual losses were estimated at between $500,000 and $2 million. 2000–2006 From the early 2000s, sites like 'The Counterfeit Library', also functioning as a diploma mill, grew to prominence, with many of its members going on to join larger cybercrime websites in later years until its closure around September 2004. In 2001, Russian speaking hackers founded CarderPlanet in Odessa, which would go on to be one of the most notorious forums of its kind. In the summer of 2003, separate US Secret Service and FBI investigations led to the arrest of Albert Gonzalez, top administrator of the large ShadowCrew, who was turned informant as a part of 'Operation Firewall'. By March 2004, the administrator of 'CarderPlanet' disappeared, with Gonzalez taking over. In October 2004 dozens of ShadowCrew members were busted across the US and Canada. Carders speculate that one of the USSS infiltrators might have been detected by a fellow site member, causing the operation to be expedited. Ultimately, the closure of ShadowCrew and CarderPlanet did not reduce the degree of fraud and led to the proliferation of smaller sites. ShadowCrew admin Brett Shannon Johnson managed to avoid being arrested at this time, but was picked up in 2005 on separate charges and then turned informant. While he continued to commit tax fraud as an informant, 'Operation Anglerphish' embedded him as an admin on both ScandinavianCarding and CardersMarket.
When his continued carding activities were exposed as a part of a separate investigation in 2006, he briefly went on the run before being caught for good in August of that year. In June 2005, the credit card processing company CardSystems was hacked in what was at the time the largest personal information breach in history, with much of the stolen information making its way to carding sites. Later, in 2007, the TJX Companies breach perpetrated by Albert Gonzalez (who was still an informant at the time) would only come to the public's attention after stolen cards were detected being misused to buy large amounts of gift cards. Gonzalez's 2008 intrusion into Heartland Payment Systems to steal card data was characterized as the largest ever criminal breach of card data. Also in June 2005, UK-based carders were found to be collaborating with the Russian mafia and were arrested as a result of a National Hi-Tech Crime Unit investigation looking into Eastern European crime syndicates. Some time in 2005, J. Keith Mularski from the NCFTA headed up a sting into the popular English language site DarkMarket.ws. One of the few survivors of 'Operation Firewall', Mularski was able to infiltrate the site by taking over the handle 'Master Splyntr', which belonged to an Eastern European spammer named Pavel Kaminski. In late 2006 the site was hacked by Max Butler, who detected that user 'Master Splyntr' had logged in from the NCFTA's offices, but the warning was dismissed as inter-forum rivalry. In 2007, details of the operation were revealed to the German national police: the NCFTA had successfully penetrated the forum's inner 'family'. By October 4, 2007, Mularski announced he was shutting the site due to unwanted attention from a fellow administrator, framed as 'too much attention' from law enforcement. For several years following the site's closure, multiple arrests were made internationally. From 2004 through to 2006, CardersMarket assimilated various rival forums through marketing and the hacking of their databases. Arrested in 2007, the site's owner Max Butler was sentenced in 2010 to 13 years in prison. 2007–present From 2007 to the present, Operation Open Market, an operation run by HSI and the USSS, has targeted the primarily Russian language Carder.su organisation, believed to be operating out of Las Vegas. In 2011, alleged site owner Roman Seleznev was apprehended in the Maldives by US law enforcement, and in 2012 identity thief David Ray Camez was arrested and charged in an unprecedented use of RICO legislation. Horohorin Vladislav, identified as BadB in November 2009 in a sealed indictment from the United States attorney's office, was arrested in 2010 by the USSS in Nice, France. Vladislav created the first fully automated credit card shop and managed websites associated with stolen credit card numbers. Horohorin Vladislav is also known for being the first cyber criminal to promote his illegal activities by creating video cartoons ridiculing American card holders. In 2011, former Bulgarian ShadowCrew member Aleksi Kolarov (also known as 'APK') was finally arrested and held in Paraguay before being extradited to the United States in 2013 to face charges. In March 2012, the United States Secret Service took down Kurupt.su and arrested David Schrooten (also known as 'Fortezza' and 'Xakep') in Romania; he was extradited to the United States and sentenced to serve 12 years in federal prison, primarily for his role in trafficking credit cards he obtained by hacking other hackers.
In June 2012, the FBI seized carding and hacking forums UGNazi.com and Carders.org in a sting as a part of a 2-year investigation dubbed Operation Card Shop after setting up a honeypot forum at carderprofit.cc. In August 2013, hacker and carding forum HackBB was taken down as part of the raid on Freedom Hosting. In January 2014, fakeplastic.net was closed following an investigation by the US postal service and FBI, after collating previously seized information from TorMail, ShadowCrew and Liberty Reserve. This led to multiple arrests and prosecutions as well as the site's closure. A 2014 report from Group-IB, suggested that Russian cybercriminals could be making as much as $680 million a year based on their market research. In December 2014, the Tor based Tor Carding Forum closed following a site hack, with its administrator 'Verto' directing users to migrate to the Evolution darknet market's forums which would go on to be the largest darknet market exit scam ever seen. 'Alpha02', who was notorious for his carding guides, went on to found the AlphaBay darknet market, the first to ever deal in stolen Uber accounts. The site is working on rebuilding the damage to the reputation of markets founded by carders precipitated by the Evolution scam. Meanwhile, most Russian carders selling details do not trust the darknet markets due to the high level of law enforcement attention; however, buyers are more open. Ercan Findikoğlu, also known as "Segate" and "Predator", with others, led an international conspiracy, stole $55 million by hacking ATM card issuers and making fraudulent cards and was sentenced to eight years in prison by a federal court. Findikoğlu, a Turkish national, with a Russian wife, Alena Kovalenko, avoided capture by obscuring his cyber fingerprints and avoiding the reach of American law, but he went to Germany in December 2013, was arrested, lost a court challenge, and was extradited. Findikoğlu, as a youngster honed his skills in cyber cafes, the Turkish military, and then masterminded three complex, global financial crimes by hacking into credit card processors, eliminating the limits on prepaid cards then sending PINs and access codes to teams of cashers who, within hours withdrew cash from ATMs. In December 2012, 5,000 cashers in 20 countries withdrew $5 million, $400,000 in 700 transactions from 140 New York ATMs, in 150 minutes. Stolen cash was kicked back via wire transfers and deliveries to Turkey, Romania and Ukraine. Vladimir Drinkman, 34, a cohort of Albert Gonzalez, pleaded guilty in Camden, New Jersey, that he got credit card numbers from Heartland Payment Systems, 7-Eleven, Hannaford Bros, Nasdaq, Carrefour, JetBlue, and other companies from 2005 to 2012. (U.S. v. Drinkman, 09-cr-00626, U.S. District Court, District of New Jersey (Camden)) In February 2018, the Infraud Organization was revealed. Contemporary situation In more recent years, Russian language forums have gained dominance over English language ones, with the former considerably more adept at identifying security researchers and counterintelligence activities and strict invitation systems. Russia's lack of extradition treaty with the United States has made the country somewhat of a safe haven of cyber criminals, with the Russian foreign ministry going so far as to recommend citizens not travel abroad to countries with such treaties. Investigative journalist Brian Krebs has extensively reported on Russian carders as an ongoing game of cat and mouse. 
See also Darknet market Fencing Identity theft Internet fraud References Further reading External links http://textfiles.com/anarchy/CARDING Internet fraud Dark web Identity theft Money laundering Credit cards Organized crime activity Types of cyberattacks
1570998
https://en.wikipedia.org/wiki/Burroughs%20Medium%20Systems
Burroughs Medium Systems
The Burroughs B2500 through Burroughs B4900 was a series of mainframe computers developed and manufactured by Burroughs Corporation in Pasadena, California, United States, from 1966 to 1991. They were aimed at the business world with an instruction set optimized for the COBOL programming language. They were also known as Burroughs Medium Systems, by contrast with the Burroughs Large Systems and Burroughs Small Systems. History and architecture First generation The B2500 and B3500 computers were announced in 1966. They operated directly on COBOL-68's primary decimal data types: strings of up to 100 digits, with one EBCDIC or ASCII digit character or two 4-bit binary-coded decimal BCD digits per byte. Portable COBOL programs did not use binary integers at all, so the B2500 did not either, not even for memory addresses. Memory was addressed down to the 4-bit digit in big-endian style, using 5-digit decimal addresses. Floating point numbers also used base 10 rather than some binary base, and had up to 100 mantissa digits. A typical COBOL statement 'ADD A, B GIVING C' may use operands of different lengths, different digit representations, and different sign representations. This statement compiled into a single 12-byte instruction with 3 memory operands. Complex formatting for printing was accomplished by executing a single EDIT instruction with detailed format descriptors. Other high level instructions implemented "translate this buffer through this (e.g. EBCDIC to ASCII) conversion table into that buffer" and "sort this table using these sort requirements into that table". In extreme cases, single instructions could run for several hundredths of a second. MCP could terminate over-long instructions but could not interrupt and resume partially completed instructions. (Resumption is a prerequisite for doing page style virtual memory when operands cross page boundaries.) The machine matched COBOL so closely that the COBOL compiler was simple and fast, and COBOL programmers found it easy to do assembly programming as well. In the original instruction set, all operations were memory-to-memory only, with no visible data registers. Arithmetic was done serially, one digit at a time, beginning with most-significant digits then working rightwards to least-significant digits. This is backwards from manual right-to-left methods and more complicated, but it allowed all result writing to be suppressed in overflow cases. Serial arithmetic worked very well for COBOL. But for languages like FORTRAN or BPL, it was much less efficient than standard word-oriented computers. Three reserved memory locations were used as address indexing 'registers'. The third index register was dedicated to pointing at the current procedure's stack frame on the call/return stack. Other reserved memory locations controlled operand sizes when that size was not constant. The B3500 was similar to the B2500 but with a faster cycle time and more expansion choices. The B2500 had a maximum of 60 K bytes of core memory and a 2 microsecond cycle time. The B3500 had a maximum of 500 K bytes and a 1-microsecond cycle time. B2500/3500 weighed about . Subsequent machine generations The B2500/B3500 machines were followed by B2700/B3700/B4700 in 1972; The B2800/B3800/B4800 in 1976, The B2900/B3900/B4900 in 1980 (which was the first of the range to load its microcode from floppy disk, rather than implementing it as hardware read-only memory) and finally the Unisys V Series machines V340-V560 in 1985-90. 
Machines prior to the B4800 had no cache memory. Every operand byte or result byte required its own separate main memory cycle, which limited program performance. To compensate for this, the B3700/B4700 generation used semiconductor main memory that was faster but more expensive and power hungry than the DRAM used in competing machines. The unusual use of decimal numbers as memory addresses was initially no problem; it merely involved using 1-in-5 rather than 1-in-8 decoder logic in the core memory's row selects and bank selects. But later machines used standard memory chips that expected binary addresses. Each 1000-byte block of logical memory could be trivially mapped onto a subset of 1024 bytes in a chip with only 2.3% waste. But for denser chips and larger total memories, the entire decimal address had to be crunched into a shorter quasi binary form before sending the address to the chips, and done again for each cache or memory cycle. This conversion logic slowed the machine cycle somewhat. An attempted redesign in 1975 of the address space was called MS-3 for "Medium Systems 3rd Generation", but that project was cancelled. Machines before the B2900 allowed input numbers with 'undigit' values above 9, but arithmetic on this gave unspecified results. This was used as a form of hexadecimal arithmetic within the MCP and also by some application programmers. Later versions discontinued this and instead supported two new opcodes (binary to decimal and decimal to binary) to support addressing the hard drives available after Burroughs's acquisition of Memorex. Cancellation and retirement Unisys cancelled further V series hardware development in 1991, and support ended in 2004. In the B4900 and later machines, integer operations of 10 digits or less were now handled in parallel; only longer operands continued to use the serial method. And all floating point operations were limited to 17 digits of precision. Later Medium Systems machines added an accumulator register and accumulator/memory instructions using 32-bit, 7-digit integers and 48-bit or 80-bit floating point values, all aligned on 16-bit word boundaries. Operating system The operating system was called MCP, for Master Control Program. It shared many architectural features with the MCP of Burroughs' Large Systems stack machines, but was entirely different internally, and was coded in assembly language, not in an ALGOL-derivative. Programs had separate address spaces dynamically relocated by a base register, but otherwise there was no virtual memory; no paging and no segmentation. Larger programs were squeezed into the limited code address space by explicit overlays. The nonresident parts of MCP were also heavily overlaid. Initially, code and data shared a single 300,000 digit address space. Later machines had separate million-digit spaces for program code and process data. Instructions' address fields were extended from five digits to six digits, and four more real index registers were added. Early machines used Burroughs's head-per-track disk systems rather than the now-standard movable head platter disks. In one attempt to speed up MCP, its overlays were carefully laid out so that the likely-next overlays would soon arrive at their read head just after the current overlay completed. This was similar to time-dependent layout optimizations on early delay-line and drum computers. 
But this turned out to be impractical to maintain after software changes, and better results were consistently achieved with a totally randomized layout of all MCP overlays. Other than the operating system itself, all system software was coded in BPL (Burroughs Programming Language), a systems programming language derived from ALGOL and the Large Systems' ESPOL systems language. The initial COBOL compiler supported the ANSI 68 specification and the ENTER SYMBOLIC syntax to allow inline assembly language coding, but lacked RELATIVE and INDEXED file support; these were later added in the ANSI 74 version of the compiler, which was released in 1982. MCP allowed programs to communicate with each other via core-to-core transmissions (CRCR) or by using storage queues (STOQ), implemented as system calls using the BCT instruction and exposed to the languages (COBOL FILL FROM/INTO). This was unheard of except on the very largest IBM System/360 systems of the time, and even then it was a major operational headache to manage the interactions of the multiple program streams. Usage and legacy The Medium Systems series were very effective multi-programming machines. Even very basic versions of the B2500 could support multiprogramming on a usable scale. Larger Medium Systems processors supported major data center activities for banks and other financial institutions, as well as many businesses and government customers. The Medium System was the preferred platform for many data processing professionals. With the Medium System, a computer could be simultaneously running a batch payroll system, inputting bank checks on a MICR reader sorter, compiling COBOL applications, supporting on-line transactions, and doing test runs on new applications (colloquially called 'the mix', as the console command 'MX' would show the jobs that were executing). It was not unusual to be running eight or ten programs on a medium-size B2500. Medium System installations often had tape clusters (four drives integrated into a mid-height cabinet) for magnetic tape input and output. Free-standing tape drives were also available, but they were much more expensive. Tape was a major storage medium on these computers. In the early days it was often used for father-son batch updating; as disks became cheaper, it was relegated to a library/backup device that contained all the data files and sometimes the program files (using the MFSOLT utility) for a particular application or customer/client.
COBOL to machine code
Tape resident disk files
Job headers for card input
Card and print spooling
I did accounting system (parameter driven)
—a blank verse by unknown B2500 user
References Burroughs mainframe computers COBOL High-level language computer architecture Transistorized computers Computer-related introductions in 1966 16-bit computers
12323792
https://en.wikipedia.org/wiki/Martin%20Bauer
Martin Bauer
Martin W. Bauer is a Professor of social psychology. He directs the MSc in Social and Public Communication at the Department of Psychological and Behavioural Science at LSE. Martin Bauer was a Research Fellow in 'Public Understanding of Science' at the Science Museum in London, an academic visitor to the Maison des Sciences de l'homme in Paris, and he teaches regularly in Brazil at the Universidade Federal do Rio Grande do Sul and the Pontifícia Universidade Católica do Rio Grande do Sul. Biography Bauer was educated at the University of Bern in Switzerland and trained at the London School of Economics. He received his PhD from the London School of Economics in 1993 with a thesis titled Resistance to change: a functional analysis of reponses to technical change in a Swiss bank. He joined the LSE’s Institute of Social Psychology and Department of Methodology (formerly Methodology Institute) in September 1994. Research and Intellectual Interests Bauer’s research portfolio includes the theory of resistance in social processes, Social representations of and public attitudes to science and technology, in particular genomics and modern biotechnology. The key question with which he is concerned is: how does public opinion influence the techno-scientific developments? Bauer is known for developing the toblerone model of social representations with George Gaskell. He is also known for his edited handbook with George Gaskell, Qualitative Researching with Text, Image and Sound: A Practical Handbook (2000). Publications (selection) Books Biotechnology - the making of a global controversy (Cambridge, CUP, 2002, with G Gaskell) Pesquisa qualitativa con texto, imagem e som (Petropolis Brazil, Editora VOZES, 2002, with G Gaskell) Biotechnology 1996-2000 - the years of controversy (London, Science Museum, 2001, with G Gaskell) Qualitative researching with text, image and sound - a practical handbook (London, Sage, 2000, with G Gaskell) Biotechnology in Public - A European source book (London, Science Museum, 1998; with J Durant & G Gaskell) Resistance to new Technology - nuclear power, information technology and biotechnology (Cambridge, CUP, 1995) Papers and book chapters Bauer MW (2005) The mass media and the biotechnology controversy, International Journal of Public Opinion Research, 17 (1), 5-22 [special issue] Bauer MW (2005) Distinguishing GREEN from RED biotechnology - cultivation effects of the elite press, International Journal of Public Opinion Research, 17 (1), 63-89. Bauer MW, S Howard, V Hagenhoff, G Gasperoni & Maria Rusanen (2005) The BSE and CJD crisis in the press, in: C Dora (ed) Health, Hazard and Public Debate: Lessons for Risk Communication from the BSE/CJD saga, Geneva, WHO, 125-164 [chapter 6]. Dowler E, J Green, MW Bauer, G Gasperoni (2005) Assessing public perceptions: issues and methods, in: C Dora (ed) Health, Hazard and Public Debate: Lessons for Risk Communication from the BSE/CJD saga, Geneva, WHO, 40-60 [chapter 3] Bauer MW (2004) Long-term trends in public sensitivities about genetic identification: 1973-2002, in: Gardar Árnason and Salvör Nordal (eds) Blood and Data - Ethical, Legal and Social Aspects of Human Genetic Databases, Reykjavik: University of Iceland Press 2004, p143-161 (chapter 16). Bauer MW (2004) The vicissitudes of 'public understanding of science': from 'literacy' to 'science in society', in: Science meets Society, Lisbon, Gulbenkian Foundation, p37-63. 
U Flick & M Bauer (2004) Teaching qualitative research, in: Flick U, E vonKardoff & I Steinke (eds) A Companion to Qualitative Research, London, Sage, 340-49 [translation from German 2000]. Bauer MW (2003) O dominio publico da DNA: tendencies a longo prazo, Ciencia & Ambiente, (UFSM, Rio Grande do Sul) Maio, 129-140. Gregory J & M W Bauer (2003) CPS INC: l'avenir de la communication de la science, in: B Schiele (ed) Les Nouveaux Territoires de la Culture Scientifique, Montreal, Canada, chapter 3, 41-65. Bauer MW (2002) Arenas, platforms and the biotechnology movement, Science Communication, 24, 144-161. Bauer MW (2002) Controversial medical and agri-food biotechnology: a cultivation analysis, Public Understanding of Science, 2, 11, 1-19. Bauer MW and G Gaskell (2002) The biotechnology movement, in: Bauer MW & G Gaskell (eds) Biotechnology - the making of a global controversy, Cambridge, CUP, 379-404. Bauer MW & H Bonfadelli (2002) Controversy, media coverage and public knowledge, in: Bauer, MW & G Gaskell (eds) Biotechnology - the making of a global controversy, Cambridge, CUP With S Howard (2001) Psychology in the Press, 1988 to 1999, The Psychologist, 14 (12), 632-636 [BPS centenary 1901-2001, special media watch]. Bauer MW, M Kohring, J Gutteling & A Allansdottir (2001) The dramatisation of biotechnology in the elite mass media, in: G Gaskell & MW Bauer (eds) Biotechnology 1996-2000 - the years of controversy, London, Science Museum, 35-52. Bauer MW (2001) Biotechnology: Ethical Framing in the Elite Press, Noticie di Politeia, 17 (63), 51-66 []. Bauer MW (2001) "Risiko oder Ethik? Vergleichendes zur Öffentlichkeit der Gentechnik." In: M Weber und P Hoyningen-Huene (Hg.), Ethische Probleme in den Biowissenschaften. Heidelberg, Synchron Wissenschaftsverlag, 149-168. With Gaskell, N Allum, J Durant et al. (2000) Biotechnology and the European Public, Nature Biotechnology, Sept, Vol 18, 935-38. Bauer MW, Petkova K, P Boyadjjewa (2000) Public knowledge of and attitudes to science - alternative measures, Science, Technology and Human Values, 25, 1, 30-51. Bauer MW & B Aarts (2000) Corpus construction: a principle for qualitative data collection, in: Bauer M & G Gaskell (eds) Qualitative researching with text, image and sound: a practical handbook, London, Sage, 19-37 Bauer MW (2000) Analysing noise and music as social data, in: Bauer MW & G Gaskell (eds) Qualitative researching with text, image and sound: a practical handbook, London, Sage, 263-280. MW Bauer (2000) Classic content analysis: a review, in: Bauer MW & G Gaskell (eds) Qualitative researching with text, image and sound: a practical handbook, London, Sage, 131-151. Bauer MW (2000) 'Science in the media' as cultural indicator: contextualising surveys with media analysis, in: Dierkes M and C von Grote (eds) Between understanding and trust: the public, science and technology, Reading, Harwood Academics Publisher, 157-178. Bauer M W and G Gaskell (1999) Towards a paradigm for research on social representations, Journal for the Theory of Social Behaviour, 29 (2), 163-186. With G Gaskell, J Durant, NC Allum (1999) Worlds apart? The reception of genetically modified food in Europe and the United States, Science, 285, July 15, 1-4 [retracted 9 June 2000]. Bauer M (1998) The medicalisation of science news: from the 'rocket-scalpel' to the 'gene-meteorite' complex, Social Science Information, 37, 731-751. 
[] Bauer M (1998) Die moderne Gentechnik und die oeffentliche Meinung, Civitas, 53, 3/4, 48-55 Bauer M (1998) La longue duree of popular science, 1830–present, In Deveze-Berthet D (ed) La promotion de la culture scientifique et technique: ses acteur et leurs logic, Actes du colloque des 12 et 13 decembre 1996, 75-92. Bauer M & J Durant (1997) Astrology in present-day Britain: an approach from the sociology of knowledge, Cosmos and Culture - Journal of the History of Astrology and Cultural Astronomy, 1, 1-17 With G Gaskell & J Durant et al. (1997) Europe ambivalent on biotechnology, Nature, 387, 345-347 (June) Bauer M and H Joffe (1996) The `I don't know' response in social research - meanings of self-attributed ignorance, Social Science Information, 35,1, 6-13 [editorial to special issue] Bauer M (1996) Socio-economic correlates of DK-responses in knowledge surveys, Social Science Information, 35,1, 39-68 Durant, J; A Hansen, and M Bauer (1996) Public understanding of human genetics; in: T Marteau and M Richards (eds) The troubled helix: social and psychological implications of the new human genetics, Cambridge University Press, 235-248. Bauer, M (1995) Technophobia: a misleading conception of resistance to new technology; in: Bauer, M. (ed) Resistance to new technology - nuclear power, information technology, biotechnology, Cambridge, Cambridge University Press, 97-124. Bauer, M (1995) Towards a functional analysis of resistance; in: Bauer, M. (ed) Resistance to new technology - nuclear power, information technology, biotechnology, Cambridge, Cambridge University Press, 393-418. Bauer M, J Durant and G Evans (1994) European public perceptions of science, International Journal of Public Opinion Research, 6, 2, 163-186 Bauer M (1994) Science and Technology in the British Press, 1946-1986, in: B Schiele, M Amyot and C Benoit (eds) When Science becomes Culture, Boucherville, University of Ottawa Press - vol II, . Bauer, M and I Schoon (1993a) Mapping variety in Public Understanding of Science, Public Understanding of Science, 2, 2, 141-155. Bauer, M. (1991) Resistance to change - a monitor of new technology, Systems Practice, 4, 3, 181-196 References External links Bauer's page at LSE Social psychologists Academics of the London School of Economics Living people Year of birth missing (living people)
881518
https://en.wikipedia.org/wiki/Discretionary%20access%20control
Discretionary access control
In computer security, discretionary access control (DAC) is a type of access control defined by the Trusted Computer System Evaluation Criteria "as a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject (unless restrained by mandatory access control)." Discretionary access control is commonly discussed in contrast to mandatory access control (MAC). Occasionally, a system as a whole is said to have "discretionary" or "purely discretionary" access control when that system lacks mandatory access control. On the other hand, systems can implement both MAC and DAC simultaneously, where DAC refers to one category of access controls that subjects can transfer among each other, and MAC refers to a second category of access controls that imposes constraints upon the first. Implementations The meaning of the term in practice is not as clear-cut as the definition given in the TCSEC standard, because the TCSEC definition of DAC does not impose any implementation. There are at least two implementations: with owner (as a widespread example) and with capabilities. With owner The term DAC is commonly used in contexts that assume that every object has an owner that controls the permissions to access the object, probably because many systems do implement DAC using the concept of an owner. But the TCSEC definition does not say anything about owners, so technically an access control system doesn't have to have a concept of ownership to meet the TCSEC definition of DAC. Users (owners) have under this DAC implementation the ability to make policy decisions and/or assign security attributes. A straightforward example is the Unix file mode which represent write, read, and execute in each of the 3 bits for each of User, Group and Others. (It is prepended by another bit that indicates additional characteristics). With capabilities As another example, capability systems are sometimes described as providing discretionary controls because they permit subjects to transfer their access to other subjects, even though capability-based security is fundamentally not about restricting access "based on the identity of subjects". In general, capability systems do not allow permissions to be passed "to any other subject"; the subject wanting to pass its permissions must first have access to the receiving subject, and subjects generally only have access to a strictly limited set of subjects consistent with the principle of least privilege. See also References Citations Sources P. A. Loscocco, S. D. Smalley, P. A. Muckelbauer, R. C. Taylor, S. J. Turner, and J. F. Farrell. The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments. In Proceedings of the 21st National Information Systems Security Conference, pp. 303–14, Oct. 1998. PDF version. Computer access control Computer security models Access control
2625749
https://en.wikipedia.org/wiki/Floppy-disk%20controller
Floppy-disk controller
A floppy-disk controller (FDC) is a special-purpose integrated circuit (IC or "chip") and associated disk controller circuitry that directs and controls reading from and writing to a computer's floppy disk drive (FDD). The FDC is responsible for reading data presented by the host computer and converting it to the drive's on-disk format using one of a number of encoding schemes, like FM encoding (single density) or MFM encoding (double density), and for reading those formats back and returning the data to its original binary values. Depending on the platform, data transfers between the controller and host computer would be controlled by the computer's own microprocessor, or an inexpensive dedicated microprocessor like the MOS 6507 or Zilog Z80. Early controllers required additional circuitry to perform specific tasks like providing clock signals and setting various options. Later designs included more of this functionality on the controller and reduced the complexity of the external circuitry; single-chip solutions were common by the later 1980s. By the 1990s, the floppy disk was increasingly giving way to hard drives, which required similar controllers. In these systems, the controller was often combined with a microcontroller to handle data transfer over standardized connectors like SCSI and IDE that could be used with any computer. In more modern systems, the FDC, if present at all, is typically part of the many functions provided by a single super I/O chip. Overview A floppy disk stores binary data not as a series of values, but as a series of changes in value. Each of these changes, recorded in the polarity of the magnetic recording media, causes a voltage to be induced in the drive head as the disk surface rotates past it. It is the timing of these polarization changes and the resulting spikes of voltage that encode the ones and zeros of the original data. One of the functions of the controller is to turn the original data into the proper pattern of polarizations during writing, and then recreate it during reads. As the storage is based on timing, and that timing is easily affected by mechanical and electrical disturbances, accurately reading the data requires some sort of reference signal, the clock. As the on-disk timing is constantly changing, the clock signal has to be provided by the disk itself. To do this, the original data is modified with extra transitions so that the clock signal is encoded in the data; clock recovery is then used during reads to recreate the original signal. Some controllers require this encoding to be performed externally, but most designs provide standard encodings like FM and MFM. The controller also provides a number of other services to control the drive mechanism itself. These typically include moving the drive head to center over the separate tracks on the disk, determining the speed of rotation and attempting to keep it relatively steady, tracking the location of the head and returning it to zero, and sometimes the functionality to format a disk based on simple inputs like the number of tracks, sectors per track and number of bytes per sector. To produce a complete system, the controller has to be combined with additional circuitry or software that acts as a bridge between the controller and the host system. In some systems, like the Apple II and IBM PC, this is controlled by software running on the computer's host microprocessor and the drive interface is connected directly to the processor using an expansion card.
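Returning to the encodings discussed in the overview: below is a minimal sketch of FM and MFM bit-level encoding, assuming the data arrives as a list of bits and ignoring address marks, sync fields and write precompensation. It is an illustration of the clock-insertion idea, not any particular controller's logic.

```python
def fm_encode(bits):
    """FM (single density): a clock transition before every data bit."""
    out = []
    for d in bits:
        out += [1, d]
    return out

def mfm_encode(bits):
    """MFM (double density): a clock transition only between two 0 data bits,
    so the same flux-transition rate carries twice the data."""
    out = []
    prev = 0           # assume the preceding data bit was 0 (a simplification)
    for d in bits:
        clock = 1 if (prev == 0 and d == 0) else 0
        out += [clock, d]
        prev = d
    return out

# mfm_encode([1, 0, 0, 1]) -> [0, 1, 0, 0, 1, 0, 0, 1]
```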
On other systems, like the Commodore 64 and Atari 8-bit family, there is no direct path from the controller to the host CPU and a second processor like the MOS 6507 or Zilog Z80 is used inside the drive for this purpose. The original Apple II controller was in the form of a plug-in card on the host computer. It could support two drives, and the drives eliminated most of the normal onboard circuitry. This allowed Apple to arrange a deal with Shugart Associates for a simplified drive that lacked most of its normal circuitry. This meant that the combined cost of a single drive and controller card was roughly the same as on other systems, but a second drive could be connected for a smaller additional cost. The IBM PC took a more conventional approach: its adapter card could support up to four drives, and on the PC data was moved to and from the drives by direct memory access (DMA), with the controller signalling the processor on IRQ 6. The diagram below shows a conventional floppy disk controller which communicates with the CPU via an Industry Standard Architecture (ISA) bus or similar bus and communicates with the floppy disk drive over a 34-pin ribbon cable. An alternative arrangement that is more usual in recent designs has the FDC included in a super I/O chip which communicates via a Low Pin Count (LPC) bus. Most of the floppy disk controller (FDC) functions are performed by the integrated circuit but some are performed by external hardware circuits. The list of functions performed by each is given below.
Floppy disk controller (FDC) functions:
Translate data bits into FM, MFM, M²FM, or GCR format to be able to record them
Interpret and execute commands such as seek, read, write, format, etc.
Error detection with checksum generation and verification, such as CRC
Synchronize data with a phase-locked loop (PLL)
External hardware functions:
Selection of which floppy disk drive (FDD) to address
Switching on the floppy drive motor
Reset signal for the floppy controller IC
Enable/disable interrupt and DMA signals in the floppy disk controller (FDC)
Data separation logic
Write pre-compensation logic
Line drivers for signals to the controller
Line receivers for signals from the controller
Input/output ports for common x86-PC controller The FDC has three I/O ports:
Data port
Main status register (MSR)
Digital control port
The first two reside inside the FDC IC while the control port is in the external hardware. The addresses of these three ports are as follows. Data port This port is used by the software for three different purposes: While issuing a command to the FDC IC, command and command parameter bytes are issued to the FDC IC through this port. The FDC IC stores the different parameters and the command in its internal registers. After a command is executed, the FDC IC stores a set of status parameters in the internal registers. These are read by the CPU through this port. The different status bytes are presented by the FDC IC in a specific sequence. In the programmed and interrupt modes of data transfer, the data port is used for transferring data between the FDC IC and the CPU using IN or OUT instructions. Main status register (MSR) This port is used by the software to read the overall status information regarding the FDC IC and the FDDs. Before initiating a floppy disk operation, the software reads this port to confirm that the FDC and the disk drives are ready and to verify the status of the previously initiated command.
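Before each byte moves through the data port described above, software checks the main status register. A minimal sketch follows; the bit positions used (RQM in bit 7, DIO in bit 6) are those commonly documented for µPD765-compatible controllers and are treated here as assumptions, and the actual read of the MSR would use the platform's port-input mechanism, which is not shown.

```python
# Main status register bit masks, as commonly documented for
# µPD765-compatible controllers (treated as assumptions here).
RQM = 0x80  # Request for Master: controller ready to exchange a byte
DIO = 0x40  # Data direction: set when the controller has a byte for the CPU

def can_write_command(msr_value):
    """True when a command byte may be written to the data port:
    RQM must be set and DIO must indicate CPU-to-controller direction."""
    return bool(msr_value & RQM) and not (msr_value & DIO)

# Typical use: poll the MSR port with the platform's port-input primitive
# until can_write_command(...) is true, then write one command byte; repeat
# for each command and parameter byte.
```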
The different bits of this register represent: Digital control port This port is used by the software to control certain FDD and FDC IC functions. The bit assignments of this port are: Interface to the floppy disk drive The controller connects to the drive using a 34-conductor flat ribbon cable, with connectors for the host, the 3.5" drive, and the 5.25" drive. This type of cable is called a universal connector. In the IBM PC family and compatibles, a twist in the cable is used to distinguish disk drives by the socket to which they are connected. All drives are installed with the same drive select address set, and the twist in the cable interchanges the drive select lines at the socket. The drive at the furthest end of the cable would additionally have a terminating resistor installed to maintain signal quality. More detailed descriptions of the interface signals, including alternative meanings, are contained in manufacturers' specifications for drives. Format data Many mutually incompatible floppy disk formats are possible; aside from the physical format on the disk, incompatible file systems are also possible.
Sides:
SS (or 1S) – Single sided
DS (or 2S) – Double sided
Density:
SD (or 1D) – Single density (FM)
DD (or 2D) – Double density (most often MFM)
QD (or 4D) – Quad density
HD – High density
ED – Extra-high density
TD – Triple density
"3mode" floppy drive Primarily in Japan there are 3.5" high-density floppy drives that support three modes of disk formats instead of the normal two – 1440 KB (2 MB unformatted), 1.2 MB (1.6 MB unformatted) and 720 KB (1 MB unformatted). Originally, the high-density mode for 3.5" floppy drives in Japan only supported a capacity of 1.2 MB instead of the 1440 KB capacity that was used elsewhere. While the more common 1440 KB format spun at 300 rpm, the 1.2 MB format instead spun at 360 rpm, thereby closely resembling the 1.2 MB format with 15 sectors per track previously found on 5.25" high-density floppy drives. Later Japanese floppy drives incorporated support for both high-density formats (as well as the double-density format), hence the name 3mode. Some BIOSes have a configuration setting to enable this mode for floppy drives supporting it. See also Western Digital FD1771 Integrated Woz Machine (IWM) Paula (Amiga controller) Floppy disk drive interface List of floppy disk formats References ISO/IEC 8860-1:1987 Double-Density (DD) ISO/IEC 9529-1:1989 High-Density (HD) ISO 10994-1:1992 Extra-high-density (ED) ECMA-147 Further reading External links viralpatel.net A Tutorial on Programming Floppy Disk Controller isdaman.com Programming Floppy Disk Controllers Computer storage devices Floppy disk computer storage Integrated circuits
63467555
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20A31
Samsung Galaxy A31
The Samsung Galaxy A31 is a mid-range Android smartphone developed by Samsung Electronics as part of its 2020 A-series smartphone lineup. It was announced on March 24, 2020, and first released on April 27, 2020, as the successor to the Galaxy A30 and A30s. The phone comes preinstalled with Android 10 and Samsung's custom One UI 2.1 software overlay. Specifications Hardware The Samsung Galaxy A31 has a plastic build with a glass front. The phone's 6.4-inch, FHD Super AMOLED panel has a screen-to-body ratio of 84.9% and an aspect ratio of 20:9 to match that of other Samsung phones sold in 2020. An optical, under-display fingerprint reader replaces the rear-mounted one seen on the previous A30 models. The new L-shaped rear camera system (similar to the ones seen on newer Samsung phones) uses four cameras: a 48 MP main sensor with an f/2.0 aperture, an 8 MP ultra-wide lens with a 123° field of view, and 5 MP macro and depth cameras, both with an f/2.4 aperture. A U-shaped screen cut-out houses a single 20 MP selfie camera lens. Both front and back camera systems can record video at a maximum of 1080p at 30 fps. The phone has a large 5000 mAh battery and supports fast charging at 15 watts. Customers, depending on the region, can choose from a range of new colour selections, such as Prism Crush Black, Prism Crush Blue, Prism Crush Red and Prism Crush White. Software The phone comes with Android 10 and Samsung's custom One UI 2.1 software overlay. Depending on the region, it can support contactless NFC payments through Samsung Pay and various other payment apps that can be installed separately. The software experience is comparable to that of other 2020 Samsung devices, and it offers many of the software perks found on costlier Samsung devices, such as Edge Screen and Edge Lighting. As with most other Samsung phones released during 2020, the Link to Windows option from the Microsoft-Samsung partnership comes as standard and can be accessed in the Android notification panel. Based on Samsung's software update schedule, the phone should be eligible for two major Android upgrades. The A31 received the Android 11 update with One UI 3.1 in May 2021, although the rollout timing varies by region. In August 2021, Samsung gradually rolled out a major security update to the A31. Reception The Samsung Galaxy A31 received mixed reviews, with most reviewers praising the screen, the excellent battery life, the software and the overall build quality. They criticized the performance relative to the phone's main competition, the phone's poor camera quality, and the lack of a night mode and various other camera features. Reviewers were also reluctant to recommend the device, considering that even within Samsung's own mid-range A-series one could get a better overall package by spending just a little more money. References External links Official website Android (operating system) devices Smartphones Samsung Galaxy Samsung mobile phones Mobile phones introduced in 2020 Mobile phones with multiple rear cameras
24983146
https://en.wikipedia.org/wiki/Institute%20for%20System%20Programming
Institute for System Programming
The Institute for System Programming (ISP) of the Russian Academy of Sciences (RAS; ) was founded on January 25, 1994, on the base of the departments of System Programming and Numerical Software of the Institute for Cybernetics Problems of the RAS. ISP RAS belongs to the Division of Mathematical Sciences of the RAS. R and D groups Compiler Technologies Department The department is specialized in applying compiler approach to different computer science fields, as well as modern optimizing compiler development and design. The first compiler projects started in early 1980s. The recent research activity of the team is concentrated on parallel programming and reverse engineering. Computing Systems Architecture Department The main directions of the department research activities have been connected with effective implementation of network architectures and hardware platforms for local and global networks. Information Systems Department The main activities of the department: multi-user fully functional relational DBMS, CORBA-based technology for distributed information systems, XML-based technology for heterogeneous data integration, native XML database Sedna, text mining and information retrieval. Software Development Tools Department The main direction is creation of tools supporting formal specification and modeling languages and easing the development process. Software Engineering Department The spectrum of the scientific research of the department covers a broad range of Software Engineering, including analysis of programs and their models, verification and validation, standardization issues including development of open software standards, various aspects of development, maintenance and evolution of software together with methods of education and deployment of advanced technologies. System Programming Department Research activities of the department lie in the area of program static analysis, excavation of architecture using program code and visualization of software architecture model, modelling of architecture and code generation using software model. Theoretical Computer Science Department The members of the department are specialists in different branches of mathematics and theoretical computer science: combinatorics, complexity of computations, probabilistic methods, mathematical logic, formal methods of program analysis, logical programming, mathematical cryptography. Councils Academic council The main task of the council is coordination of research and scientific programs aimed on prioritization of new important directions. Dissertation council Being a part of the Institute Dissertation council D.002.087.01 considers applications for scientific degrees of candidate and doctor of physical and mathematical, and technical sciences according to qualification standard 05.13.11 “Mathematical and program support for computers, their complexes, and networks”. Centers Verification Center of the Operating System Linux The mission of the Center is to propagate the Linux platform by ensuring its high reliability and compatibility through the use of open standards and advanced testing and verification technologies. Center of competence in parallel and distributed computing The goal of the center is in significant increase of the usage of parallel and distributed computations in the areas of educational, research, and production activities of Russian organizations. 
External links Institute for System Programming Company Profile at Linux Foundation Verification Center of the Operating System Linux Institutes of the Russian Academy of Sciences
2905620
https://en.wikipedia.org/wiki/Amsoft
Amsoft
Amsoft was a wholly owned subsidiary of Amstrad, PLC, founded in 1984 and re-integrated with its parent company in 1989. Its purpose was to provide an initial infrastructure of software and services for users of Amstrad's range of home computers, the Amstrad CPC and, from 1986, the Sinclair ZX Spectrum. Many people's first contact with software on an Amstrad home computer would have been an Amsoft title, as several titles were included in the sales bundles. History While developing its first home computer, the Amstrad CPC464, Amstrad assessed that part of the success of its competitors' machines was the backing of a grown infrastructure of software and services. Being a newcomer to the computer market, Amstrad decided to artificially create this infrastructure for the launch of their own computers. In February 1984, Amstrad founded its Amsoft division headed by Roland Perry and William Poel who at the time were also overseeing the development of the Amstrad CPC464 itself. Most prominently, Amsoft acted as the first-party game and business software publisher for Amstrad computers. Most of its software products were licensed from various third-party developers and published under the Amsoft label. This also provided a risk-free means for established software studios to try out their products in the emerging Amstrad CPC market. In addition to publishing software, Amsoft was tasked with press relations and consumer promotion, most notably creating and maintaining the Amstrad User Club and publishing its periodical, the CPC464 User (later Amstrad Computer User). When a reliable third-party support had been established, Amsoft gradually faded out the publishing of software and sold the Amstrad User Club as well as the user magazine. By 1989, Amsoft was fully integrated with the main Amstrad corporation and ceased to exist as a separate entity. List of games 1984 American Football Astro Attack Blagger Bridge-It Electro Freddy Fruit Machine The Galactic Plague Harrier Attack Haunted Hedges Hunchback Laserwarp Mr Wong's Loopy Laundry Mutant Monty Oh Mummy Punchy Quack-a-Jack Roland Ahoy Roland on the Ropes Roland in the Caves Roland Goes Digging Roland Goes Square Bashing Roland on the Run Spannerman Space Hawks Sultan's Maze Pyjamarama Detective Xanagrams Animal Vegetable Mineral Happy Letters Map Rally Timeman One / L'Horloger 1 Wordhang Fruit Machine Amsgolf Codename MAT Hunter Killer Snooker L'Ardoise Magique Les Chiffres Magiques Le Géographe - France Le Géographe - Monde L'Horloger 2 / Timeman Two Les Lettres Magiques Osprey! Star Watcher Admiral Graf Spee 1985 3D Grand Prix Jet-Boot Jack Airwolf Assault on Port Stanley Doors of Doom Dragon's Gold Frank 'n' Stein Friss Man Fu-Kung in Las Vegas The Game of Dragons The Key Factor Manic Miner The Prize Roland in Time Roland in Space Seesaw Super Pipeline 2 Supertripper Sorcery Plus Cyrus 2 Chess Masterchess Happy Numbers Stockmarket Traffic 3D Boxing 3D Stunt Rider Alex Higgins World Pool Alex Higgins' World Snooker Glen Hoddle Soccer Rock Hopper Strangeloop Plus Subterranean Stryker Tombstowne Kingdoms Braxx Bluff Overlord 2 Campeones Classic Racing L'Apprenti Sorcier Satellite Warrior 1986 Qabbalah Happy Writing Nuclear Defence Golden Path 6128 Games Collection 1987–1989 Scalextric (1987) Tank Command (1988) Fantastic Voyage (1989) Notes and references Amstrad CPC ZX Spectrum Video game publishers Video game companies established in 1984 Video game companies disestablished in 1989 Defunct video game companies of the United Kingdom
4416073
https://en.wikipedia.org/wiki/Approximations%20of%20%CF%80
Approximations of π
Approximations for the mathematical constant pi () in the history of mathematics reached an accuracy within 0.04% of the true value before the beginning of the Common Era. In Chinese mathematics, this was improved to approximations correct to what corresponds to about seven decimal digits by the 5th century. Further progress was not made until the 15th century (through the efforts of Jamshīd al-Kāshī). Early modern mathematicians reached an accuracy of 35 digits by the beginning of the 17th century (Ludolph van Ceulen), and 126 digits by the 19th century (Jurij Vega), surpassing the accuracy required for any conceivable application outside of pure mathematics. The record of manual approximation of is held by William Shanks, who calculated 527 digits correctly in 1853. Since the middle of the 20th century, the approximation of has been the task of electronic digital computers (for a comprehensive account, see Chronology of computation of ). On August 16, 2021, the current record was established by Thomas Keller and Heiko Rölke of the University of Applied Sciences of the Grisons with 62.8 trillion digits. Early history The best known approximations to dating to before the Common Era were accurate to two decimal places; this was improved upon in Chinese mathematics in particular by the mid-first millennium, to an accuracy of seven decimal places. After this, no further progress was made until the late medieval period. Some Egyptologists have claimed that the ancient Egyptians used an approximation of as = 3.142857 (about 0.04% too high) from as early as the Old Kingdom. This claim has been met with skepticism. Babylonian mathematics usually approximated to 3, sufficient for the architectural projects of the time (notably also reflected in the description of Solomon's Temple in the Hebrew Bible). The Babylonians were aware that this was an approximation, and one Old Babylonian mathematical tablet excavated near Susa in 1936 (dated to between the 19th and 17th centuries BCE) gives a better approximation of as = 3.125, about 0.528 percent below the exact value. At about the same time, the Egyptian Rhind Mathematical Papyrus (dated to the Second Intermediate Period, c. 1600 BCE, although stated to be a copy of an older, Middle Kingdom text) implies an approximation of as ≈ 3.16 (accurate to 0.6 percent) by calculating the area of a circle via approximation with the octagon. Astronomical calculations in the Shatapatha Brahmana (c. 6th century BCE) use a fractional approximation of . The Mahabharata (500 BCE - 300 CE) offers an approximation of 3, in the ratios offered in Bhishma Parva verses: 6.12.40-45. In the 3rd century BCE, Archimedes proved the sharp inequalities  <  < , by means of regular 96-gons (accuracies of 2·10−4 and 4·10−4, respectively). In the 2nd century CE, Ptolemy used the value , the first known approximation accurate to three decimal places (accuracy 2·10−5). It is equal to which is accurate to two sexagesimal digits. The Chinese mathematician Liu Hui in 263 CE computed to between and by inscribing a 96-gon and 192-gon; the average of these two values is (accuracy 9·10−5). He also suggested that 3.14 was a good enough approximation for practical purposes. He has also frequently been credited with a later and more accurate result, π ≈ = 3.1416 (accuracy 2·10−6), although some scholars instead believe that this is due to the later (5th-century) Chinese mathematician Zu Chongzhi. 
Zu Chongzhi is known to have computed to be between 3.1415926 and 3.1415927, which was correct to seven decimal places. He also gave two other approximations of : π ≈ and π ≈ , which are not as accurate as his decimal result. The latter fraction is the best possible rational approximation of using fewer than five decimal digits in the numerator and denominator. Zu Chongzhi's results surpass the accuracy reached in Hellenistic mathematics, and would remain without improvement for close to a millennium. In Gupta-era India (6th century), mathematician Aryabhata, in his astronomical treatise Āryabhaṭīya stated: Approximating to four decimal places: π ≈ = 3.1416, Aryabhata stated that his result "approximately" ( "approaching") gave the circumference of a circle. His 15th-century commentator Nilakantha Somayaji (Kerala school of astronomy and mathematics) has argued that the word means not only that this is an approximation, but that the value is incommensurable (irrational). Middle Ages Further progress was not made for nearly a millennium, until the 14th century, when Indian mathematician and astronomer Madhava of Sangamagrama, founder of the Kerala school of astronomy and mathematics, found the Maclaurin series for arctangent, and then two infinite series for . One of them is now known as the Madhava–Leibniz series, based on The other was based on He used the first 21 terms to compute an approximation of correct to 11 decimal places as . He also improved the formula based on arctan(1) by including a correction: It is not known how he came up with this correction. Using this he found an approximation of to 13 decimal places of accuracy when  = 75. Jamshīd al-Kāshī (Kāshānī), a Persian astronomer and mathematician, correctly computed the fractional part of 2 to 9 sexagesimal digits in 1424, and translated this into 16 decimal digits after the decimal point: which gives 16 correct digits for π after the decimal point: He achieved this level of accuracy by calculating the perimeter of a regular polygon with 3 × 2^28 sides. 16th to 19th centuries In the second half of the 16th century, the French mathematician François Viète discovered an infinite product that converged on known as Viète's formula. The German-Dutch mathematician Ludolph van Ceulen (circa 1600) computed the first 35 decimal places of with a 2^62-gon. He was so proud of this accomplishment that he had them inscribed on his tombstone. In Cyclometricus (1621), Willebrord Snellius demonstrated that the perimeter of the inscribed polygon converges on the circumference twice as fast as does the perimeter of the corresponding circumscribed polygon. This was proved by Christiaan Huygens in 1654. Snellius was able to obtain seven digits of from a 96-sided polygon. In 1789, the Slovene mathematician Jurij Vega calculated the first 140 decimal places for , of which the first 126 were correct and held the world record for 52 years until 1841, when William Rutherford calculated 208 decimal places, of which the first 152 were correct. Vega improved John Machin's formula from 1706 and his method is still mentioned today. The magnitude of such precision (152 decimal places) can be put into context by the fact that the circumference of the largest known object, the observable universe, can be calculated from its diameter (93 billion light-years) to a precision of less than one Planck length (at , the shortest unit of length expected to be directly measurable) using expressed to just 62 decimal places.
The English amateur mathematician William Shanks, a man of independent means, calculated to 530 decimal places in January 1853, of which the first 527 were correct (the last few likely being incorrect due to round-off errors). He subsequently expanded his calculation to 607 decimal places in April 1853, but an error introduced right at the 530th decimal place rendered the rest of his calculation erroneous; due to the nature of Machin's formula, the error propagated back to the 528th decimal place, leaving only the first 527 digits correct once again. Twenty years later, Shanks expanded his calculation to 707 decimal places in April 1873. Due to this being an expansion of his previous calculation, all of the new digits were incorrect as well. Shanks was said to have calculated new digits all morning and would then spend all afternoon checking his morning's work. This was the longest expansion of until the advent of the electronic digital computer three-quarters of a century later. 20th and 21st centuries In 1910, the Indian mathematician Srinivasa Ramanujan found several rapidly converging infinite series of , including which computes a further eight decimal places of with each term in the series. His series are now the basis for the fastest algorithms currently used to calculate . Even using just the first term gives See Ramanujan–Sato series. From the mid-20th century onwards, all calculations of have been done with the help of calculators or computers. In 1944, D. F. Ferguson, with the aid of a mechanical desk calculator, found that William Shanks had made a mistake in the 528th decimal place, and that all succeeding digits were incorrect. In the early years of the computer, an expansion of to decimal places was computed by Maryland mathematician Daniel Shanks (no relation to the aforementioned William Shanks) and his team at the United States Naval Research Laboratory in Washington, D.C. In 1961, Shanks and his team used two different power series for calculating the digits of . For one, it was known that any error would produce a value slightly high, and for the other, it was known that any error would produce a value slightly low. And hence, as long as the two series produced the same digits, there was a very high confidence that they were correct. The first 100,265 digits of were published in 1962. The authors outlined what would be needed to calculate to 1 million decimal places and concluded that the task was beyond that day's technology, but would be possible in five to seven years. In 1989, the Chudnovsky brothers computed to over 1 billion decimal places on the supercomputer IBM 3090 using the following variation of Ramanujan's infinite series of : Records since then have all been accomplished using the Chudnovsky algorithm. In 1999, Yasumasa Kanada and his team at the University of Tokyo computed to over 200 billion decimal places on the supercomputer HITACHI SR8000/MPP (128 nodes) using another variation of Ramanujan's infinite series of . In November 2002, Yasumasa Kanada and a team of 9 others used the Hitachi SR8000, a 64-node supercomputer with 1 terabyte of main memory, to calculate to roughly 1.24 trillion digits in around 600 hours (25days). Recent Records In August 2009, a Japanese supercomputer called the T2K Open Supercomputer more than doubled the previous record by calculating to roughly 2.6 trillion digits in approximately 73 hours and 36 minutes. In December 2009, Fabrice Bellard used a home computer to compute 2.7 trillion decimal digits of . 
Calculations were performed in base 2 (binary), then the result was converted to base 10 (decimal). The calculation, conversion, and verification steps took a total of 131 days. In August 2010, Shigeru Kondo used Alexander Yee's y-cruncher to calculate 5 trillion digits of . This was the world record for any type of calculation, but significantly it was performed on a home computer built by Kondo. The calculation was done between 4 May and 3 August, with the primary and secondary verifications taking 64 and 66 hours respectively. In October 2011, Shigeru Kondo broke his own record by computing ten trillion (1013) and fifty digits using the same method but with better hardware. In December 2013, Kondo broke his own record for a second time when he computed 12.1 trillion digits of . In October 2014, Sandon Van Ness, going by the pseudonym "houkouonchi" used y-cruncher to calculate 13.3 trillion digits of . In November 2016, Peter Trueb and his sponsors computed on y-cruncher and fully verified 22.4 trillion digits of (22,459,157,718,361 ( × 1012)). The computation took (with three interruptions) 105 days to complete, the limitation of further expansion being primarily storage space. In March 2019, Emma Haruka Iwao, an employee at Google, computed 31.4 trillion digits of pi using y-cruncher and Google Cloud machines. This took 121 days to complete. In January 2020, Timothy Mullican announced the computation of 50 trillion digits over 303 days. On August 14, 2021, a team (DAViS) at the University of Applied Sciences of the Grisons announced completion of the computation of to 62.8 trillion digits. Practical approximations Depending on the purpose of a calculation, can be approximated by using fractions for ease of calculation. The most notable such approximations are (relative error of about 4·10−4) and (relative error of about 8·10−8). Non-mathematical "definitions" of Of some notability are legal or historical texts purportedly "defining " to have some rational value, such as the "Indiana Pi Bill" of 1897, which stated "the ratio of the diameter and circumference is as five-fourths to four" (which would imply "") and a passage in the Hebrew Bible that implies that . Indiana bill The so-called "Indiana Pi Bill" from 1897 has often been characterized as an attempt to "legislate the value of Pi". Rather, the bill dealt with a purported solution to the problem of geometrically "squaring the circle". The bill was nearly passed by the Indiana General Assembly in the U.S., and has been claimed to imply a number of different values for , although the closest it comes to explicitly asserting one is the wording "the ratio of the diameter and circumference is as five-fourths to four", which would make , a discrepancy of nearly 2 percent. A mathematics professor who happened to be present the day the bill was brought up for consideration in the Senate, after it had passed in the House, helped to stop the passage of the bill on its second reading, after which the assembly thoroughly ridiculed it before tabling it indefinitely. Imputed biblical value It is sometimes claimed that the Hebrew Bible implies that " equals three", based on a passage in and giving measurements for the round basin located in front of the Temple in Jerusalem as having a diameter of 10 cubits and a circumference of 30 cubits. The issue is discussed in the Talmud and in Rabbinic literature. 
Among the many explanations and comments are these: Rabbi Nehemiah explained this in his Mishnat ha-Middot (the earliest known Hebrew text on geometry, ca. 150 CE) by saying that the diameter was measured from the outside rim while the circumference was measured along the inner rim. This interpretation implies a brim about 0.225 cubit (or, assuming an 18-inch "cubit", some 4 inches), or one and a third "handbreadths," thick (cf. and ). Maimonides states (ca. 1168 CE) that can only be known approximately, so the value 3 was given as accurate enough for religious purposes. This is taken by some as the earliest assertion that is irrational. There is still some debate on this passage in biblical scholarship. Many reconstructions of the basin show a wider brim (or flared lip) extending outward from the bowl itself by several inches to match the description given in In the succeeding verses, the rim is described as "a handbreadth thick; and the brim thereof was wrought like the brim of a cup, like the flower of a lily: it received and held three thousand baths" , which suggests a shape that can be encompassed with a string shorter than the total length of the brim, e.g., a Lilium flower or a Teacup. Development of efficient formulae Polygon approximation to a circle Archimedes, in his Measurement of a Circle, created the first algorithm for the calculation of based on the idea that the perimeter of any (convex) polygon inscribed in a circle is less than the circumference of the circle, which, in turn, is less than the perimeter of any circumscribed polygon. He started with inscribed and circumscribed regular hexagons, whose perimeters are readily determined. He then shows how to calculate the perimeters of regular polygons of twice as many sides that are inscribed and circumscribed about the same circle. This is a recursive procedure which would be described today as follows: Let and denote the perimeters of regular polygons of sides that are inscribed and circumscribed about the same circle, respectively. Then, Archimedes uses this to successively compute and . Using these last values he obtains It is not known why Archimedes stopped at a 96-sided polygon; it only takes patience to extend the computations. Heron reports in his Metrica (about 60 CE) that Archimedes continued the computation in a now lost book, but then attributes an incorrect value to him. Archimedes uses no trigonometry in this computation and the difficulty in applying the method lies in obtaining good approximations for the square roots that are involved. Trigonometry, in the form of a table of chord lengths in a circle, was probably used by Claudius Ptolemy of Alexandria to obtain the value of given in the Almagest (circa 150 CE). Advances in the approximation of (when the methods are known) were made by increasing the number of sides of the polygons used in the computation. A trigonometric improvement by Willebrord Snell (1621) obtains better bounds from a pair of bounds obtained from the polygon method. Thus, more accurate results were obtained from polygons with fewer sides. Viète's formula, published by François Viète in 1593, was derived by Viète using a closely related polygonal method, but with areas rather than perimeters of polygons whose numbers of sides are powers of two. The last major attempt to compute by this method was carried out by Grienberger in 1630 who calculated 39 decimal places of using Snell's refinement. 
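The doubling recurrence just described translates directly into a short program. This is a floating-point sketch for illustration only; Archimedes worked with rational upper and lower bounds on the square roots rather than decimal arithmetic.

```python
import math

def archimedes_bounds(doublings):
    """Lower/upper bounds on pi from inscribed and circumscribed regular
    polygons around a circle of diameter 1, starting from hexagons."""
    inscribed = 3.0                      # perimeter of the inscribed hexagon
    circumscribed = 2.0 * math.sqrt(3)   # perimeter of the circumscribed hexagon
    sides = 6
    for _ in range(doublings):
        # harmonic mean gives the circumscribed 2n-gon,
        # then the geometric mean gives the inscribed 2n-gon
        circumscribed = 2 * inscribed * circumscribed / (inscribed + circumscribed)
        inscribed = math.sqrt(inscribed * circumscribed)
        sides *= 2
    return sides, inscribed, circumscribed

# Four doublings reach Archimedes' 96-gon:
# archimedes_bounds(4) -> (96, 3.14103..., 3.14271...)
```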
Machin-like formula For fast calculations, one may use formulae such as Machin's: together with the Taylor series expansion of the function arctan(x). This formula is most easily verified using polar coordinates of complex numbers, producing: ({,} = {239, 132} is a solution to the Pell equation 2−22 = −1.) Formulae of this kind are known as Machin-like formulae. Machin's particular formula was used well into the computer era for calculating record numbers of digits of , but more recently other similar formulae have been used as well. For instance, Shanks and his team used the following Machin-like formula in 1961 to compute the first 100,000 digits of : and they used another Machin-like formula, as a check. The record as of December 2002 by Yasumasa Kanada of Tokyo University stood at 1,241,100,000,000 digits. The following Machin-like formulae were used for this: K. Takano (1982). F. C. M. Størmer (1896). Other classical formulae Other formulae that have been used to compute estimates of include: Liu Hui (see also Viète's formula): Madhava: Euler: Newton / Euler Convergence Transformation: where (2k + 1)!! denotes the product of the odd integers up to 2k + 1. Ramanujan: David Chudnovsky and Gregory Chudnovsky: Ramanujan's work is the basis for the Chudnovsky algorithm, the fastest algorithms used, as of the turn of the millennium, to calculate . Modern algorithms Extremely long decimal expansions of are typically computed with iterative formulae like the Gauss–Legendre algorithm and Borwein's algorithm. The latter, found in 1985 by Jonathan and Peter Borwein, converges extremely quickly: For and where , the sequence converges quartically to , giving about 100 digits in three steps and over a trillion digits after 20 steps. The Gauss–Legendre algorithm (with time complexity , using Harvey–Hoeven multiplication algorithm) is asymptotically faster than the Chudnovsky algorithm (with time complexity ) – but which of these algorithms is faster in practice for "small enough" depends on technological factors such as memory sizes and access times. For breaking world records, the iterative algorithms are used less commonly than the Chudnovsky algorithm since they are memory-intensive. The first one million digits of and are available from Project Gutenberg (see external links below). A former calculation record (December 2002) by Yasumasa Kanada of Tokyo University stood at 1.24 trillion digits, which were computed in September 2002 on a 64-node Hitachi supercomputer with 1 terabyte of main memory, which carries out 2 trillion operations per second, nearly twice as many as the computer used for the previous record (206 billion digits). The following Machin-like formulae were used for this: (Kikuo Takano (1982)) (F. C. M. Størmer (1896)). These approximations have so many digits that they are no longer of any practical use, except for testing new supercomputers. Properties like the potential normality of will always depend on the infinite string of digits on the end, not on any finite computation. Miscellaneous approximations Historically, base 60 was used for calculations. In this base, can be approximated to eight (decimal) significant figures with the number 3;8,29,44, which is (The next sexagesimal digit is 0, causing truncation here to yield a relatively good approximation.) 
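Returning to the Machin-like formulae above, here is a minimal arbitrary-precision sketch using Python integers and a few guard digits. It follows Machin's identity pi = 16 arctan(1/5) - 4 arctan(1/239); record-setting computations use far more elaborate implementations (binary splitting, FFT-based multiplication), so this is illustrative only.

```python
def arctan_recip(x, digits):
    """arctan(1/x) scaled by 10**(digits + 10), using only integer arithmetic."""
    scale = 10 ** (digits + 10)      # 10 guard digits
    term = scale // x
    total = term
    n = 1
    while term:
        term //= x * x
        n += 2
        if (n // 2) % 2:             # terms alternate in sign: +, -, +, -, ...
            total -= term // n
        else:
            total += term // n
    return total

def machin_pi(digits):
    """floor(pi * 10**digits) via pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    scaled = 16 * arctan_recip(5, digits) - 4 * arctan_recip(239, digits)
    return scaled // 10 ** 10        # drop the guard digits

# str(machin_pi(30)) -> '3141592653589793238462643383279'
```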
In addition, the following expressions can be used to estimate : accurate to three digits: accurate to three digits: Karl Popper conjectured that Plato knew this expression, that he believed it to be exactly , and that this is responsible for some of Plato's confidence in the omnicompetence of mathematical geometry—and Plato's repeated discussion of special right triangles that are either isosceles or halves of equilateral triangles. accurate to four digits: accurate to four digits (or five significant figures): an approximation by Ramanujan, accurate to 4 digits (or five significant figures): accurate to five digits: accurate to six digits: accurate to seven digits: accurate to eight digits: accurate to nine digits: This is from Ramanujan, who claimed the Goddess of Namagiri appeared to him in a dream and told him the true value of . accurate to ten digits: accurate to ten digits: accurate to ten digits (or eleven significant figures): This curious approximation follows the observation that the 193rd power of 1/ yields the sequence 1122211125... Replacing 5 by 2 completes the symmetry without reducing the correct digits of , while inserting a central decimal point remarkably fixes the accompanying magnitude at 10100. accurate to 18 digits: This is based on the fundamental discriminant = 3(89) = 267 which has class number (-) = 2 explaining the algebraic numbers of degree 2. The core radical is 53 more than the fundamental unit which gives the smallest solution { , } = {500, 53} to the Pell equation 2 − 892 = −1. accurate to 30 decimal places: Derived from the closeness of Ramanujan constant to the integer 6403203+744. This does not admit obvious generalizations in the integers, because there are only finitely many Heegner numbers and negative discriminants d with class number h(−d) = 1, and d = 163 is the largest one in absolute value. accurate to 52 decimal places: Like the one above, a consequence of the j-invariant. Among negative discriminants with class number 2, this d the largest in absolute value. accurate to 161 decimal places: where u is a product of four simple quartic units, and, Based on one found by Daniel Shanks. Similar to the previous two, but this time is a quotient of a modular form, namely the Dedekind eta function, and where the argument involves . The discriminant d = 3502 has h(−d) = 16. The continued fraction representation of can be used to generate successive best rational approximations. These approximations are the best possible rational approximations of relative to the size of their denominators. Here is a list of the first thirteen of these: Of these, is the only fraction in this sequence that gives more exact digits of (i.e. 7) than the number of digits needed to approximate it (i.e. 6). The accuracy can be improved by using other fractions with larger numerators and denominators, but, for most such fractions, more digits are required in the approximation than correct significant figures achieved in the result. Summing a circle's area Pi can be obtained from a circle if its radius and area are known using the relationship: If a circle with radius is drawn with its center at the point (0, 0), any point whose distance from the origin is less than will fall inside the circle. The Pythagorean theorem gives the distance from any point (, ) to the center: Mathematical "graph paper" is formed by imagining a 1×1 square centered around each cell (, ), where and are integers between − and . 
Squares whose center resides inside or exactly on the border of the circle can then be counted by testing whether, for each cell (, ), The total number of cells satisfying that condition thus approximates the area of the circle, which then can be used to calculate an approximation of . Closer approximations can be produced by using larger values of . Mathematically, this formula can be written: In other words, begin by choosing a value for . Consider all cells (, ) in which both and are integers between − and . Starting at 0, add 1 for each cell whose distance to the origin (0,0) is less than or equal to . When finished, divide the sum, representing the area of a circle of radius , by 2 to find the approximation of . For example, if is 5, then the cells considered are: {| style="font-size:75%;text-align:center;color:blue;height:1em" cellspacing="15" |- style="color:black" | (−5,5) || (−4,5) || (−3,5) || (−2,5) || (−1,5) || style="color:#bc1e47" | (0,5) || (1,5) || (2,5) || (3,5) || (4,5) || (5,5) |- | style="color:black" | (−5,4) || style="color:black" | (−4,4) || style="color:#bc1e47" | (−3,4) || (−2,4) || (−1,4) || (0,4) || (1,4) || (2,4) || style="color:#bc1e47" | (3,4) || style="color:black" | (4,4) || style="color:black" | (5,4) |- | style="color:black" | (−5,3) || style="color:#bc1e47" | (−4,3) || (−3,3) || (−2,3) || (−1,3) || (0,3) || (1,3) || (2,3) || (3,3) || style="color:#bc1e47" | (4,3) || style="color:black" | (5,3) |- | style="color:black" | (−5,2) || (−4,2) || (−3,2) || (−2,2) || (−1,2) || (0,2) || (1,2) || (2,2) || (3,2) || (4,2) || style="color:black" | (5,2) |- | style="color:black" | (−5,1) || (−4,1) || (−3,1) || (−2,1) || (−1,1) || (0,1) || (1,1) || (2,1) || (3,1) || (4,1) || style="color:black" | (5,1) |- | style="color:#bc1e47" | (−5,0) || (−4,0) || (−3,0) || (−2,0) || (−1,0) || (0,0) || (1,0) || (2,0) || (3,0) || (4,0) || style="color:#bc1e47" | (5,0) |- | style="color:black" | (−5,−1) || (−4,−1) || (−3,−1) || (−2,−1) || (−1,−1) || (0,−1) || (1,−1) || (2,−1) || (3,−1) || (4,−1) || style="color:black" | (5,−1) |- | style="color:black" | (−5,−2) || (−4,−2) || (−3,−2) || (−2,−2) || (−1,−2) || (0,−2) || (1,−2) || (2,−2) || (3,−2) || (4,−2) || style="color:black" | (5,−2) |- | style="color:black" | (−5,−3) || style="color:#bc1e47" | (−4,−3) || (−3,−3) || (−2,−3) || (−1,−3) || (0,−3) || (1,−3) || (2,−3) || (3,−3) || style="color:#bc1e47" | (4,−3) || style="color:black" | (5,−3) |- | style="color:black" | (−5,−4) || style="color:black" | (−4,−4) || style="color:#bc1e47" | (−3,−4) || (−2,−4) || (−1,−4) || (0,−4) || (1,−4) || (2,−4) || style="color:#bc1e47" | (3,−4) || style="color:black" | (4,−4) || style="color:black" | (5,−4) |- style="color:black" | (−5,−5) || (−4,−5) || (−3,−5) || (−2,−5) || (−1,−5) || style="color:#bc1e47" | (0,−5) || (1,−5) || (2,−5) || (3,−5) || (4,−5) || (5,−5) |} The 12 cells (0, ±5), (±5, 0), (±3, ±4), (±4, ±3) are exactly on the circle, and 69 cells are completely inside, so the approximate area is 81, and is calculated to be approximately 3.24 because = 3.24. Results for some values of are shown in the table below: For related results see The circle problem: number of points (x,y) in square lattice with x^2 + y^2 <= n. Similarly, the more complex approximations of given below involve repeated calculations of some sort, yielding closer and closer approximations with increasing numbers of calculations. 
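The counting procedure just described translates directly into a few lines of code. The following Python sketch (written for clarity rather than speed) counts the cells whose centres satisfy x^2 + y^2 <= r^2 and divides by r^2, exactly as in the r = 5 example.

```python
def lattice_pi(r):
    """Count unit cells centred at integer points (x, y), with -r <= x, y <= r,
    whose centre lies inside or on the circle of radius r, then divide the
    count (an approximation of the circle's area) by r**2 to estimate pi."""
    inside = sum(
        1
        for x in range(-r, r + 1)
        for y in range(-r, r + 1)
        if x * x + y * y <= r * r
    )
    return inside / r ** 2

for r in (5, 10, 100, 1000):
    print(r, lattice_pi(r))
# r = 5 reproduces the worked example above: 81 cells, 81/25 = 3.24
```

For r = 5 this returns 3.24 as in the table above; the error falls off only about as fast as 1/r, so the method is more an illustration of the area definition of pi than a practical way of computing it.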
Continued fractions Besides its simple continued fraction representation [3; 7, 15, 1, 292, 1, 1,...], which displays no discernible pattern, has many generalized continued fraction representations generated by a simple rule, including these two. (Other representations are available at The Wolfram Functions Site.) Trigonometry Gregory–Leibniz series The Gregory–Leibniz series is the power series for arctan(x) specialized to  = 1. It converges too slowly to be of practical interest. However, the power series converges much faster for smaller values of , which leads to formulae where arises as the sum of small angles with rational tangents, known as Machin-like formulae. Arctangent Knowing that 4 arctan 1 = , the formula can be simplified to get: with a convergence such that each additional 10 terms yields at least three more digits. Another formula for involving arctangent function is given by where such that . Approximations can be made by using, for example, the rapidly convergent Euler formula Alternatively, the following simple expansion series of the arctangent function can be used where to approximate with even more rapid convergence. Convergence in this arctangent formula for improves as integer increases. The constant can also be expressed by infinite sum of arctangent functions as and where is the n-th Fibonacci number. However, these two formulae for are much slower in convergence because of set of arctangent functions that are involved in computation. Arcsine Observing an equilateral triangle and noting that yields with a convergence such that each additional five terms yields at least three more digits. Digit extraction methods The Bailey–Borwein–Plouffe formula (BBP) for calculating was discovered in 1995 by Simon Plouffe. Using math, the formula can compute any particular digit of —returning the hexadecimal value of the digit—without having to compute the intervening digits (digit extraction). In 1996, Simon Plouffe derived an algorithm to extract the th decimal digit of (using base10 math to extract a base10 digit), and which can do so with an improved speed of time. The algorithm requires virtually no memory for the storage of an array or matrix so the one-millionth digit of can be computed using a pocket calculator. However, it would be quite tedious and impractical to do so. The calculation speed of Plouffe's formula was improved to by Fabrice Bellard, who derived an alternative formula (albeit only in base2 math) for computing . Efficient methods Many other expressions for were developed and published by Indian mathematician Srinivasa Ramanujan. He worked with mathematician Godfrey Harold Hardy in England for a number of years. Extremely long decimal expansions of are typically computed with the Gauss–Legendre algorithm and Borwein's algorithm; the Salamin–Brent algorithm, which was invented in 1976, has also been used. In 1997, David H. Bailey, Peter Borwein and Simon Plouffe published a paper (Bailey, 1997) on a new formula for as an infinite series: This formula permits one to fairly readily compute the kth binary or hexadecimal digit of , without having to compute the preceding k − 1 digits. Bailey's website contains the derivation as well as implementations in various programming languages. The PiHex project computed 64 bits around the quadrillionth bit of (which turns out to be 0). Fabrice Bellard further improved on BBP with his formula: Other formulae that have been used to compute estimates of include: Newton. Srinivasa Ramanujan. 
This converges extraordinarily rapidly. Ramanujan's work is the basis for the fastest algorithms used, as of the turn of the millennium, to calculate . In 1988, David Chudnovsky and Gregory Chudnovsky found an even faster-converging series (the Chudnovsky algorithm): . The speed of various algorithms for computing pi to n correct digits is shown below in descending order of asymptotic complexity. M(n) is the complexity of the multiplication algorithm employed. Projects Pi Hex Pi Hex was a project to compute three specific binary digits of using a distributed network of several hundred computers. In 2000, after two years, the project finished computing the five trillionth (5*1012), the forty trillionth, and the quadrillionth (1015) bits. All three of them turned out to be 0. Software for calculating Over the years, several programs have been written for calculating to many digits on personal computers. General purpose Most computer algebra systems can calculate and other common mathematical constants to any desired precision. Functions for calculating are also included in many general libraries for arbitrary-precision arithmetic, for instance Class Library for Numbers, MPFR and SymPy. Special purpose Programs designed for calculating may have better performance than general-purpose mathematical software. They typically implement checkpointing and efficient disk swapping to facilitate extremely long-running and memory-expensive computations. TachusPi by Fabrice Bellard is the program used by himself to compute world record number of digits of pi in 2009. -cruncher by Alexander Yee is the program which every world record holder since Shigeru Kondo in 2010 has used to compute world record numbers of digits. -cruncher can also be used to calculate other constants and holds world records for several of them. PiFast by Xavier Gourdon was the fastest program for Microsoft Windows in 2003. According to its author, it can compute one million digits in 3.5 seconds on a 2.4 GHz Pentium 4. PiFast can also compute other irrational numbers like and . It can also work at lesser efficiency with very little memory (down to a few tens of megabytes to compute well over a billion (109) digits). This tool is a popular benchmark in the overclocking community. PiFast 4.4 is available from Stu's Pi page. PiFast 4.3 is available from Gourdon's page. QuickPi by Steve Pagliarulo for Windows is faster than PiFast for runs of under 400 million digits. Version 4.5 is available on Stu's Pi Page below. Like PiFast, QuickPi can also compute other irrational numbers like , , and . The software may be obtained from the Pi-Hacks Yahoo! forum, or from Stu's Pi page. Super PI by Kanada Laboratory in the University of Tokyo is the program for Microsoft Windows for runs from 16,000 to 33,550,000 digits. It can compute one million digits in 40 minutes, two million digits in 90 minutes and four million digits in 220 minutes on a Pentium 90 MHz. Super PI version 1.9 is available from Super PI 1.9 page. See also Milü Notes References Approximations History of mathematics Pi Pi algorithms Real transcendental numbers
3476868
https://en.wikipedia.org/wiki/Nauplius%20%28mythology%29
Nauplius (mythology)
In Greek mythology, Nauplius (, "Seafarer") is the name of one (or more) mariner heroes. Whether these should be considered to be the same person, or two or possibly three distinct persons, is not entirely clear. The most famous Nauplius, was the father of Palamedes, called Nauplius the Wrecker, because he caused the Greek fleet, sailing home from the Trojan War, to shipwreck, in revenge for the unjust killing of Palamedes. This Nauplius was also involved in the stories of Aerope, the mother of Agamemnon and Menelaus, and Auge, the mother of Telephus. The mythographer Apollodorus says he was the same as the Nauplius who was the son of Poseidon and Amymone. Nauplius was also the name of one of the Argonauts, and although Apollonius of Rhodes made the Argonaut a direct descendant of the son of Poseidon, the Roman mythographer Hyginus makes them the same person. However, no surviving ancient source identifies the Argonaut with the father of Palamedes. Son of Poseidon The sea god Poseidon fathered a son, Nauplius, by Amymone, daughter of Danaus. This Nauplius was reputed to have been the eponymous founder of Nauplia (modern Nafplion) in Argolis, and a famous navigator who discovered the constellation Ursa Major (Great Bear). Apollonius of Rhodes says that he was the ancestor of an Argonaut with the same name, via the lineage: Nauplius – Proetus – Lernus – Naubolus – Clytoneus – Nauplius. According to Pherecydes of Athens, he was the father of Damastor, and through him, the grandfather of Peristhenes, and the great-grandfather of Dictys and Polydectes. He was renowned as an expert seafarer, and possibly the inventor of seafaring as a practice; a harbor equipped by him to function as a port was said to have been named in his honor. Father of Palamedes Nauplius, also called "Nauplius the Wrecker", was a king of Euboea, and the father of Palamedes. According to Apollodorus, the son of Poseidon and Amymone, and the father of Palamedes are one person who "lived to a great age". Apollodorus reports that in the Nostoi (Returns), an early epic from the Trojan cycle of poems about the Trojan War, Nauplius' wife was Philyra, and that according to Cercops his wife was Hesione, but that according to the "tragic poets" his wife was Clymene. In addition to Palamedes, Nauplius had two other sons, Oeax and Nausimedon. There are three prominent stories associated with this Nauplius. Two of these stories involve Nauplius being called upon by two kings to dispose of their unwanted daughters. The third is the story of Nauplius' revenge for the unjust killing of Palamedes, by the Greeks during the Trojan War. Aerope and Clymene According to the tradition followed by Euripides in his lost play Cretan Women (Kressai), Catreus, the king of Crete, found his daughter Aerope in bed with a slave and handed her over to Nauplius to be drowned, but Nauplius spared Aerope's life and she married Pleisthenes, who was the king of Mycenae. Sophocles, in his play Ajax, may also refer to Aerope's father Catreus finding her in bed with some man, and handing her over to Nauplius to be drowned, but the possibly corrupt text may instead refer to Aerope's husband Atreus finding her in bed with Thyestes, and having her drowned. 
However, according to another tradition, known to Apollodorus, Catreus, because an oracle had said that he would be killed by one of his children, gave his daughters Aerope and Clymene to Nauplius to sell in a foreign land, but instead Nauplius gave Aerope to Pleisthenes (as in Euripides) and himself took Clymene as his wife. Auge A similar story to that of Aerope's, is that of Auge, the daughter of Aleus, king of Tegea, and the mother of the hero Telephus. Sophocles wrote a tragedy Aleadae (The sons of Aleus), which told the story of Auge and Telephus. The play is lost and only fragments remain, but a declamation attributed to the fourth century BC orator Alcidamas probably used Sophocles' Aleadae for one of its sources. According to Alcidamas and others, Aleus discovered that Auge was pregnant and gave her to Nauplius to be drowned, but instead Nauplius sold her to the Mysian king Teuthras. Nauplius' revenge Nauplius' son Palamedes fought in the Trojan War, but was killed by his fellow Greeks, as a result of Odysseus' treachery. Nauplius went to Troy to demand justice for the death of his son, but met with no success. Consequently, Nauplius sought revenge against King Agamemnon and the other Greek leaders. When Agamemnon's section of the Greek fleet was sailing home from Troy, they were caught in a great storm—the storm in which Ajax the Lesser died—off the perilous southern coastline of Euboea, at Cape Caphereus, a notorious place which later became known by the name Xylophagos ('Eater of Timber'). Taking advantage of the situation Nauplius lit beacon fires on the rocks, luring the Greek sailors to steer for the fires, thinking they marked a safe harbor, and many ships were shipwrecked as a result. Hyginus adds that Nauplius killed any Greeks who managed to swim ashore. Nauplius also somehow induced the wives of three of the Greek commanders to be unfaithful to their husbands: Agamemnon's wife Clytemnestra with Aegisthus, Diomedes' wife Aegiale with Cometes, and Idomeneus' wife Meda with Leucos. Oeax and Nausimedon were apparently killed by Pylades as they arrived to aid Aegisthus. Nauplius also was said to have convinced Odysseus' mother Anticleia that her son was dead, whereupon she hanged herself. According to Plutarch, a location on Euboea was referred to as "the Young Men's Club" because when Nauplius came to Chalcis as a suppliant, both being prosecuted by the Achaeans and charging against them, the city's people provided him with a guard of young men, which was stationed at this place. According to Apollodorus, the setting of false beacon fires was a habit of Nauplius, and he himself died in the same way. Early sources Homer mentions the storm and the death of Ajax at the "great rocks of Gyrae" (Odyssey 4.500) but nowhere mentions Palamedes or Nauplius' revenge. The location Gyrae is uncertain, though some later sources locate it near Cape Caphereus. However the Nostoi probably did tell the story, since we know, from Apollodorus, that Nauplius was mentioned in the poem, and according to Proclus' summary of the Nostoi the storm occurred at Cape Caphereus. The story of Palamedes death, and Nauplius' revenge was a popular one, by at least the fifth century BC. The tragedians Aeschylus, Sophocles and Euripides all wrote plays which apparently dealt with the story. Each had a play tilted Palamedes. In addition, we know of two titles, Nauplios Katapleon (Nauplius Sails In) and Nauplios Pyrkaeus (Nauplius Lights a Fire), for plays attributed to Sophocles. 
Though these are possibly two names for the same play, they are probably two distinct plays. If so, then Nauplios Katapleon might have dealt with either Nauplius' voyage to the Greek camp at Troy to demand justice for his son's death, or to his sail around Greece corrupting the Greek commanders' wives. In any case, Nauplios Pyrkaeus, seems certainly to have been about "Nauplius the Wrecker" and his lighting false beacon fires. All of these plays are lost, and only testimonia and fragments remain. A fragment of Aeschylus' Palamedes ("On account of what injury did you kill my son?") seems to assure that in that play, Nauplius came to Troy and protested his son's death. Sophocles has Nauplius give a speech in defense of Palamedes, listing his many inventions and discoveries, which much benefitted the Greek army. In Euripides' Palamedes, Nauplius' son Oeax, who was with his brother Palamedes at Troy, decides to inform their father of the death of Palamedes, by inscribing the story on several oar-blades and casting them into the sea, in hopes that one would float back to Greece and be found by Nauplius. The attempt apparently succeeds and Nauplius comes to Troy. Several other plays also, presumably, dealt with this story. Philocles, Aeschlyus' nephew and a contemporary of Euripides, wrote a play titled Nauplius. Nauplius, and Palamedes, were the titles of two plays by the 4th century BC Attic tragedian Astydamas the Younger, And the 3rd century BC poet Lycophron also wrote a play with the title Nauplius. The Argonaut Nauplius was also the name of one of the Argonauts, who was one of those who volunteered to steer the Argo after Tiphys' death. According to Apollonius of Rhodes, he was the son of Clytonaeus and a direct descendant of the son of Poseidon and Amymone, via the lineage: Nauplius – Proetus – Lernus – Naubolus – Clytoneus – Nauplius. However, for Hyginus, the son of Poseidon was the same person as the Argonaut. Although it would be more plausible for an Argonaut to be still alive at the time of the Trojan War, than for a son of Poseidon and Amymone, and therefore more plausible for the father of Palamedes to be the same as the Argonaut (rather than being the son of Poseidon), no surviving ancient source identifies the Argonaut with the father of Palamedes. Namesake 9712 Nauplius, Jovian asteroid named after Nauplius Notes References Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Apollonius of Rhodes, Apollonius Rhodius: the Argonautica, translated by Robert Cooper Seaton, W. Heinemann, 1912. Internet Archive. Aristophanes, Thesmophoriazusae in The Complete Greek Drama, vol. 2. Eugene O'Neill, Jr. New York. Random House. 1938. Online version at the Perseus Digital Library. Collard, Christopher and Martin Cropp (2008a), Euripides Fragments: Aegeus–Meleanger, Loeb Classical Library No. 504. Cambridge, Massachusetts: Harvard University Press, 2008. . Online version at Harvard University Press. Collard, Christopher and Martin Cropp (2008b), Euripides Fragments: Oedipus-Chrysippus: Other Fragments, Loeb Classical Library No. 506. Cambridge, Massachusetts: Harvard University Press, 2008. . Online version at Harvard University Press. Dictys Cretensis, Journal of the Trojan War in The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian, R. M. 
Frazer, Indiana University Press, 1966. Diodorus Siculus, Diodorus Siculus: The Library of History. Translated by C. H. Oldfather. Twelve volumes. Loeb Classical Library. Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd. 1989. Online version by Bill Thayer Euripides, Helen, translated by E. P. Coleridge in The Complete Greek Drama, edited by Whitney J. Oates and Eugene O'Neill, Jr. Volume 2. New York. Random House. 1938. Online version at the Perseus Digital Library. Fowler, R. L. (2000), Early Greek Mythography: Volume 1: Text and Introduction, Oxford University Press, 2000. . Fowler, R. L. (2013), Early Greek Mythography: Volume 2: Commentary, Oxford University Press, 2013. . Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: (Vol. 1), (Vol. 2). Garagin, M., P. Woodruff, Early Greek Political thought from Homer to the Sophists, Cambridge 1995. . Grimal, Pierre, The Dictionary of Classical Mythology, Wiley-Blackwell, 1996, . Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004, . Google Books. Homer, The Odyssey with an English Translation by A.T. Murray, PH.D. in two volumes. Cambridge, Massachusetts., Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library. Hyginus, Gaius Julius, Fabulae in Apollodorus' Library and Hyginus' Fabulae: Two Handbooks of Greek Mythology, Translated, with Introductions by R. Scott Smith and Stephen M. Trzaskoma, Hackett Publishing Company, 2007. . Jebb, Richard Claverhouse, W. G. Headlam, A. C. Pearson, The Fragments of Sophocles, Cambridge University Press, 2010, 3 Volumes. (Vol 1), (Vol. 2), (Vol. 3). Lloyd-Jones, Hugh, Sophocles: Fragments, Edited and translated by Hugh Lloyd-Jones, Loeb Classical Library No. 483. Cambridge, Massachusetts: Harvard University Press, 1996. . Online version at Harvard University Press. Lycophron, Alexandra (or Cassandra) in Callimachus and Lycophron with an English translation by A. W. Mair ; Aratus, with an English translation by G. R. Mair, London: W. Heinemann, New York: G. P. Putnam 1921. Internet Archive March, Jennifer, Dictionary of Classical Mythology, Oxbow Books, 2014. Google Books. Mooney, George W., Commentary on Apollonius: Argonautica, London. Longmans, Green. 1912. Parada, Carlos, Genealogical Guide to Greek Mythology, Jonsered, Paul Åströms Förlag, 1993. . Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library. Plutarch, Moralia, Volume IV: Roman Questions. Greek Questions. Greek and Roman Parallel Stories. On the Fortune of the Romans. On the Fortune or the Virtue of Alexander. Were the Athenians More Famous in War or in Wisdom?. Translated by Frank Cole Babbitt. Loeb Classical Library No. 305. Cambridge, Massachusetts: Harvard University Press, 1936. . Online version at Harvard University Press. Online version at the Perseus Digital Library. Rutherford, William G., Scholia Aristphanica, Volume 2, London, Macmillan and Co. and New York, 1896. Internet Archive Quintus Smyrnaeus, Quintus Smyrnaeus: The Fall of Troy, Translator: A.S. Way; Harvard University Press, Cambridge MA, 1913. Internet Archive Seneca, Agamemnon in Tragedies, Volume II: Oedipus. Agamemnon. 
Thyestes. Hercules on Oeta. Octavia. Edited and translated by John G. Fitch. Loeb Classical Library No. 78. Cambridge, Massachusetts: Harvard University Press, 2004. . Online version at Harvard University Press. Seneca, Medea in Tragedies, Volume I: Hercules. Trojan Women. Phoenician Women. Medea. Phaedra. Edited and translated by John G. Fitch. Loeb Classical Library No. 62. Cambridge, Massachusetts: Harvard University Press, 2002. . Online version at Harvard University Press. Smith, William; Dictionary of Greek and Roman Biography and Mythology, London (1873). s.v. Nauplius 1., s.v. Nauplius 2., s.v. Nauplius 3.. Sommerstein, Alan H., Aeschylus: Fragments. Edited and translated by Alan H. Sommerstein. Loeb Classical Library No. 505. Cambridge, Massachusetts: Harvard University Press, 2009. . Online version at Harvard University Press. Sophocles, Ajax in Sophocles. Ajax. Electra. Oedipus Tyrannus. Edited and translated by Hugh Lloyd-Jones. Loeb Classical Library No. 20. Cambridge, Massachusetts, Harvard University Press, 1994. . Online version at Harvard University Press. Sophocles, The Ajax of Sophocles. Edited with introduction and notes by Sir Richard Jebb, Sir Richard Jebb. Cambridge. Cambridge University Press. 1893 Online version at the Perseus Digital Library Strabo, Geography, translated by Horace Leonard Jones; Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd. (1924). LacusCurtis, Online version at the Perseus Digital Library, Books 6–14 Tripp, Edward, Crowell's Handbook of Classical Mythology, Thomas Y. Crowell Co; First edition (June 1970). . Trzaskoma, Stephen M., R. Scott Smith, and Stephen Brunet, Anthology of Classical Myth: Primary Sources in Translation, Hackett Publishing, 2004.. Google books: Valerius Flaccus, Gaius, Argonautica, translated by J. H. Mozley, Loeb Classical Library No. 286. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1928. Online version at Harvard University Press. Webster, Thomas Bertram Lonsdale, The Tragedies of Euripides, Methuen & Co, 1967 . Wright, Matthew, The Lost Plays of Greek Tragedy (Volume 1): Neglected Authors, Bloomsbury Publishing, 2016. . Argonauts Children of Poseidon Kings in Greek mythology Characters in Greek mythology Characters in the Argonautica
22182664
https://en.wikipedia.org/wiki/Haemaphlebia
Haemaphlebia
Haemaphlebia is a genus of moths of the family Noctuidae. The genus was erected by George Hampson in 1910. Species Haemaphlebia atripalpis Hampson, 1910 Ghana, Liberia, Nigeria, Uganda Haemaphlebia caliginosa Hacker, 2019 Ivory Coast Haemaphlebia fasciolata Hacker, 2019 Burkina Faso, Nigeria, Somalia Haemaphlebia fiebigiana Hacker & Stadie, 2019 Uganda Haemaphlebia gola Hacker, 2019 Liberia Haemaphlebia lanceolata Hacker, 2019 Burkina Faso Haemaphlebia pallidifusca Hacker, 2019 Guinea, Tanzania References External links Acontiinae
35915764
https://en.wikipedia.org/wiki/Symphonic%20Source%2C%20Inc.
Symphonic Source, Inc.
Symphonic Source, Inc. is an American developer and marketer of data cleansing and deduplication software for customer relationship management (CRM) systems and related databases. It was founded in 2010. The company sells Software as a service (SaaS) tools that allow system administrators (for example, salesforce.com administrators) to search for and merge duplicate or similar records in their systems. Its tools use cloud computing to perform the resource-intensive matching operations quickly. A number of the founders of Symphonic Source were also founders or early employees of Tek-Tools Software. Products Cloudingo - a cloud-based SaaS tool that connects to salesforce.com and allows system administrators to scan their entire database for similar or duplicate records. Cloudingo was launched in late 2011. It is known for its ease of use, and was featured at the Dreamforce 2012 conference. Free Tools DupeCatcher - a 100% native Force.com application that blocks or flags duplicate records as they are entered. DupeCatcher was released in the fall of 2010 and became one of the most popular free apps on the Salesforce AppExchange; thousands of companies worldwide use it, including many of the Fortune 1000. Cloudingo Studio - a Windows-based Force.com SOQL builder and explorer. Cloudingo Studio was released in the first quarter of 2013 as a free tool for the Salesforce.com developer community. It was designed to make it quicker and easier for traditional developers and DBAs (SQL Server, MySQL, Oracle) to move to the Force.com platform by giving them a familiar workspace. One of the major draws of Cloudingo Studio is its support for SOQL+, an SQL-like extension that lets users accustomed to SQL write statements that standard SOQL does not allow. An example is "Select * from Accounts"; "Select *" is not available in traditional SOQL. Cloudingo Studio translates the statement into valid SOQL and runs the query, and also shows the user the generated SOQL in case they want to include it in Salesforce.com Apex code. References External links Official website Software companies based in Texas Customer relationship management software Companies based in Dallas System administration Software companies of the United States
17133966
https://en.wikipedia.org/wiki/Additional%20Protocol%20to%20the%20Convention%20on%20Cybercrime
Additional Protocol to the Convention on Cybercrime
Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems is an additional protocol to the Council of Europe Convention on Cybercrime. This additional protocol was the subject of negotiations in late 2001 and early 2002. The final text of the protocol was adopted by the Council of Europe Committee of Ministers on 7 November 2002 under the title "Additional Protocol to the Convention on cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems" ("Protocol"). The Protocol opened for signature on 28 January 2003 and entered into force on 1 March 2006. As of July 2017, 29 States have ratified the Protocol and a further 13 have signed it but have not yet followed with ratification. The Protocol requires participating States to criminalize the dissemination of racist and xenophobic material through computer systems, as well as racist and xenophobic-motivated threats and insults. Article 6, Section 1 of the Protocol specifically covers the denial of the Holocaust and other genocides recognized as such by other international courts set up since 1945 by relevant international legal instruments. Section 2 of Article 6 allows a Party to the Protocol, at its discretion, to prosecute only if the offense is committed with the intent to incite hatred, discrimination or violence, or to make use of a reservation, allowing a Party not to apply Article 6 in whole or in part. The Council of Europe Explanatory Report of the Protocol states that the "European Court of Human Rights has made it clear that the denial or revision of 'clearly established historical facts – such as the Holocaust – ... would be removed from the protection of Article 10 by Article 17' of the ECHR (see in this context the Lehideux and Isorni judgment of 23 September 1998)". Two of the English-speaking states in Europe, Ireland and the United Kingdom, have not signed the additional protocol (the third, Malta, signed on 28 January 2003, but has not yet ratified it). On 8 July 2005 Canada became the first non-European state to sign the Protocol. The United States government does not believe that the final version of the Protocol is consistent with the United States' constitutional guarantees and has informed the Council of Europe that the United States will not become a Party to the protocol. References Cybercrime Computer law treaties International criminal law treaties Laws criminalizing Holocaust denial Anti-racism in Europe Xenophobia Council of Europe treaties Treaties concluded in 2003 Treaties entered into force in 2006 Treaties of Albania Treaties of Armenia Treaties of Bosnia and Herzegovina Treaties of Croatia Treaties of Cyprus Treaties of the Czech Republic Treaties of Denmark Treaties of Finland Treaties of France Treaties of Germany Treaties of Greece Treaties of Latvia Treaties of Lithuania Treaties of North Macedonia Treaties of Monaco Treaties of Montenegro Treaties of the Netherlands Treaties of Norway Treaties of Portugal Treaties of Romania Treaties of Senegal Treaties of Serbia Treaties of Slovenia Treaties of Spain Treaties of Ukraine 2003 in France
8663286
https://en.wikipedia.org/wiki/MetCC
MetCC
MetCC, also known as the Met Contact Centre, Met Command and Control or MO12, is a department of Met Operations within Greater London's Metropolitan Police Service. It is responsible for receiving emergency and non-emergency public telephony within the Metropolitan Police and between the police and the public & other forces, and for the despatching of police to incidents. MetCC operates out of three centres in Lambeth, Hendon and Bow. Previous command and control system Historically, each of the Met's Borough Operational Command Units (BOCUs) had its own control room, known internally as the 'CAD Room' (for Computer Aided Despatch) which dealt with incoming non-emergency calls and with despatching police to all calls in that area. In addition, the Information Room at New Scotland Yard received 999 calls which were sent to the CAD Room to be dealt with. In 2004, staff began to migrate on a borough-by-borough basis to Metcall, with Southwark being the first BOCU to move. The C3i programme Led by DAC Ron McPherson and Dr Amanat Hussain, the C3i programme (Communication, Command, Control & Information) was the largest police transformation programme undertaken in the UK. Working with Chief Superintendent Stephen MacDonald, the Operational Command Unit (OCU) Commander for MetCC, the C3i programme modernised the command and control infrastructure to create seamless communications service for the Metropolitan Police Service to give the people of London a robust and resilient response policing service, getting the right people in the right place at the right time with the right information. The C3i Programme delivered optimised end to end Command and Control processes, new operational command unit (Central Communications Command), new Casualty Bureau Facility, largest Special Operations Room, Integrated Borough Operations Rooms and Telephone Investigation Bureaus services. Despite criticisms of public sector programme, the MPS successfully delivered a complex organisation transformation programme. The Office of Government Commerce (OGC) in their final report said that "C3i has been well managed and has been delivered to time and to budget programme" and went on to commend the senior programme management team for having "done an excellent job, particularly as the implementation has been into a live operational environment". New command and control system Following completion of the C3i programme in late 2007, all Met communications are dealt with at three dedicated centres at Hendon, Bow and Lambeth, covering west; north and east; and south London respectively. Within each centre is a call receipt facility, called First Contact and a dispatching facility called Dispatch. In First Contact call takers sit in pods of twelve positions, each pod having its own supervisor. Dispatching pods have two or three dispatcher and one supervisor position. The size of the dispatching pod depends on how busy the borough that it supports is. Patrol officers are dispatched through the Airwave radios and by sending information direct to the MDT terminal in every police vehicle. Call handling Any caller calling police will be passed to a First Contact operator. If the call needs to be recorded by the police, a record is made on the Contact Handling System, a tailored iteration of the AIT Portrait CRM product. If an officer needs to be dispatched, this record is passed into the Computer Aided Despatch (CAD) application, and a CAD record will be created. 
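The flow just described (call answered in First Contact, logged on the Contact Handling System, and passed to the Computer Aided Despatch application only when deployment is needed) can be pictured with a small sketch. The Python fragment below is purely illustrative: the class names, fields and example values are invented for this description and do not reflect the real CHS or CAD data models.

```python
from dataclasses import dataclass
from typing import Optional
import itertools

_chs_ids = itertools.count(1)
_cad_ids = itertools.count(1)

@dataclass
class CHSRecord:
    """Call record created by a First Contact operator on the CHS."""
    chs_id: int
    caller: str
    summary: str
    cad_id: Optional[int] = None     # set only if the call is passed to Dispatch

@dataclass
class CADRecord:
    """Despatch record read by the Dispatch pod covering the borough."""
    cad_id: int
    chs_id: int
    borough: str
    details: str

def handle_call(caller: str, summary: str, borough: str, needs_officer: bool):
    """Record every call on the CHS; raise a CAD record only when police
    need to be dispatched, mirroring the flow described in the text."""
    chs = CHSRecord(next(_chs_ids), caller, summary)
    cad = None
    if needs_officer:
        cad = CADRecord(next(_cad_ids), chs.chs_id, borough, summary)
        chs.cad_id = cad.cad_id
    return chs, cad

chs, cad = handle_call("07700 900123", "burglary in progress", "Southwark",
                       needs_officer=True)
print(chs)
print(cad)
```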
Graded response Once the initial information has been inputted, the CAD will be allocated a grade of urgency. All calls are given one of four grades: Immediate (I) grade - 'I' grade calls are calls where "the immediate presence of a police officer will have a significant impact on the outcome of an incident". This is typically categorised as where there is, or is likely to be, a danger to life, a serious threat of violence, serious damage to property or serious injury. The response time to a call of this urgency is 15 minutes. Significant (S) grade - 'S' grade calls are calls where there is a "degree of importance or urgency associated with the initial police action, but an emergency response is not required". The response time to a call of this urgency is 60 minutes. Extended (E) grade - 'E' grade calls are calls where a police attendance is required, but an emergency police response is not. The response time to a call of this urgency is 48 hours. Referred (R) grade - 'R' grade calls are calls where a police attendance is not required. This typically means that the caller has been dealt with appropriately by the call handler. Once the CAD report has been created, it is passed to the relevant Dispatch for the borough in question to decide what officers to deploy to it. The Supervisor (see below) can change the grading of the call if necessary, but only in exceptional circumstances. During the programme transition there were a number of challenges with late technology and the preparedness of staff to work in the new environment, and not all performance targets were met. As the new OCU bedded down, performance consolidated and the command now achieves all of the national call handling targets set by the Home Office through the HMIC and NPIA. Integrated Borough Operations (IBO) units, originally set up in Borough Commands to support local officers have been closed, and their functions have been absorbed into MetCC. Contact Handling System The Contact Handling System (CHS) is a software application intended to provide more information to call handlers when taking emergency calls. CHS differs from the older CAD system and from systems used by other emergency services worldwide in that far more information can be input into the system by the CAD Operator and the information can be retrieved and sorted more easily. However, it has proven unpopular thus far with CAD Operators and Police Officers; operators consider it unnecessarily complicated to use, and police find CHS-derived information difficult to interpret, particularly via MDT terminal. It was intended that CHS be brought into place upon each OCU/BOCU's transfer to the Central Communications Command. However, the system proved unstable and incapable of dealing with high call volumes and the old CAD system was kept in place. It is now intended that the two systems run in parallel until 2012. Criticism of the new 'CHS' system Central Communications Command hit the headlines in 2006, when Peter Smyth, a spokesman for the Police Federation condemned the Contact Handling System by describing it as "at best unreliable, and that's if it works at all". He added that "Metcall will have taken 900 officers off the beat by the end of next year, and meanwhile, an ever-growing army of community support officers who walk around like gaggles of lost shoppers have been recruited to take the places of these experienced officers in the streets". 
Press reaction to MetCC and the C3i programme The local press in London, particularly the Evening Standard, were initially very critical of Metcall, often citing concerns about the perceived increase in the time taken to answer telephone calls and to deploy police officers to incidents. The reality since 2009 is that the MPS performance in answering emergency and non emergency telephony has seen significant improvement, particularly with non emergency telephony. A number of local newspapers have also raised concerns about the loss of local knowledge due to operators no longer dealing only with a single small area. It is hoped by MPS management that, following the completion of the transition to the Metcall centres and a subsequent stabilisation of staff turnover rates, any shortcomings will cease to be an issue and more efficient staff working will free up those police officers currently at MetCC to return to an operational role, further improving the MPS's efficiency. Staff structure Central Communications Command is unique within the Metropolitan Police in its staff structure. It is both the largest Operational Command Unit in the MPS (and the United Kingdom) and the only one where many of its operational roles are staffed by civilian staff. Consequently, it has a different structure to all other branches of the MPS. MetCC retains sufficient police officers to maintain the 999 answering and Dispatch service should civilian staff strike, in posts identified as requiring a police officer in legislation, and in senior and intermediate positions where command is exercised. Duty Officers Each of the three Metcall centres has two Duty Officers on duty at any given time (making a total of six on duty across the three centres). One of the Duty Officers is always a serving Inspector, whilst the other is always an experienced member of the civilian staff. The Duty Officers bear ultimate responsibility for decisions taken within the centre, and are also responsible for staff welfare within their centre. CAD Supervisors Each "pod" - generally covering between two and four Operational Command Units - has two CAD Supervisors. Their role is to oversee the Operators, to take control of particularly difficult situations, and to have final say over when an incident can be 'closed'. In addition, one or two Supervisors will oversee the First Contact process (see above), ensuring calls are dealt with correctly. The role of Supervisor is filled either by a Sergeant or by an experienced member of the civilian staff. Supervisors are commonly (but incorrectly) referred to as "Controllers"; the post of Controller was a historic post prior to the introduction of the C3i programme, and was responsible for the supervision and staff welfare of CAD Operators on a particular borough. On transition to Metcall centres, the majority of Controllers became Supervisors at Central Communications Command. CAD Operators (Communications Officers) CAD Operators (also known as Communications Officer or Civilian Communications Officer) make up the majority of operational MetCC staff. Most are civilian, although some posts are filled by Police Officers on secondment due to staff shortages. It was intended that by the end of 2007 the position would be entirely civilianised. Job role CAD Operators perform two functions. In the First Contact (FC) role (also known as call receipt), they answer 999 and non-emergency telephone calls to police and enter the details of the call onto the MPS computer system. 
In the Despatch role they read the details of the calls as entered by First Contact, decide on the appropriate action to take, and, when police deployment is necessary, assign police officers using Airwave radios or by sending information directly to the MDT terminal of police vehicles. Most CAD Operators rotate between the two roles, but some are dedicated to one or the other. The title "CAD Operator" comes from the Computer Assisted Despatch program that the MPS has used since 1984; while this system is still in use, it is gradually being superseded by the new Contact Handling System application. Location On the introduction of the CAD system in 1986, each Metropolitan Police Division had a 'Reserve Room' with a 'Reserve Officer' using a paper based system to manage locally deployed resources. The 'Reserve Officer' recorded non emergency calls directly from calls to the police station, and received emergency (999) calls from a teleprinter which was connected to Information Room at Scotland Yard, where all of London's police 999 calls were received. Call volumes and demands at that time meant most stations had one or two 'Reserve' officers, smaller stations had one officer in the role. As the new CAD system was rolled out the Reserve rooms were closed as each Division opened one or more CAD rooms. These were staffed by shifts of between three and seven police officers and civil staff. In 2004, local CAD Operators began to transfer to the new Metcall centres; the transfer was completed in December 2007. Operator numbers There are just over 2,000 CAD Operator positions within the MPS, and approximately 400-500 theoretically on duty at any given time. Due to staffing issues stemming from the transfer to the Metcall centres, the numbers are currently much lower, and a number of police officers have been seconded to Central Communications Command to fill vacancies; this removal of officers from active duties led to some controversy. While the Metropolitan Police Authority intended to eventually have all CAD Operator positions filled by civilians, allowing police officers to return to active duties, this civilianisation was stopped in 2010 as it was recognised by the MPS that total civilianisation would create a significant risk to London if the MetCC staff went on strike and police officers (who cannot strike by law) were not trained and familiar with MetCC processes. Staff shortages When the C3i program began to be implemented in 2004, it was extremely controversial both within and outside the Met. A number of staff were reluctant to relocate to the new centres, and were also concerned about the substantial changes to their job role. Due to concerns about large numbers of potential staff shortages, the controversial 'Career Management' scheme was introduced; this meant that for some time prior to the introduction of Central Communications Command all staff currently working in CAD Rooms were barred from transferring to any other department within the MPS. Despite this, a number of existing staff resigned from the service altogether rather than transfer and serving officers are reluctant to transfer to Central Communications Command due to concerns they may not be released for some time. Notwithstanding this, the staff 'churn' rates of around 12% in MetCC compares very favourably with similarly sized commercial call centres which often have churn rates in excess of 30%. 
See also Central Operations FiReControl FireLink Gold Silver Bronze command structure The Job (police newspaper) References External links Photograph of MPS Traffic Control showing a typical CAD environment Metropolitan Police units
33983707
https://en.wikipedia.org/wiki/Integrated%20Software%20Dependent%20System
Integrated Software Dependent System
Integrated Software Dependent Systems (ISDS) is an offshore standard (DNV-OS-D203) and recommended practice guideline (DNV-RP-D201) covering systems and software verifications and classification of any integrated system that utilizes extensive software control. The ISDS Recommended Practice (DNV-RP-D201) was launched in 2008 by Det Norske Veritas (DNV), the Norwegian classification society. DNV Offshore Standard OS-D203 launched in April 2010. Since the ISDS standard was first published by DNV, it has been applied by several oil companies, equipment suppliers, ship and rig owners. The ISDS standard focuses on how to set up and run a project and how to develop system and software quality assurance processes that will last the lifetime of the unit (ship, rig etc.). It provides a framework for working systematically to achieve the required reliability, availability, maintainability and safety for the integrated unit of software dependent systems. The process typically starts when owners are specifying their requirements, either for a new project or an enhancement to an existing system. In collaboration with DNV specialists, the owner can assess the integrator and the suppliers to ensure they have the prerequisites for delivering good quality software. One of the innovations of ISDS is that it assigns systems and software responsibilities to one or more of the roles: owner, operator, system integrator, suppliers, and independent verifier. Another important feature of ISDS is that it requires the designation of a system integrator. This can be the shipbuilder, the major automation supplier, or a specialized contractor. The ISDS defines the activities to be performed by the system integrator. These activities focus on managing requirements and interfaces among the different systems. The ISDS-required practices for suppliers focus on ensuring that software quality is built into vendors’ products through systematic reviews, inspections, and testing. All of these requirements are generally accepted good practices in software engineering. Nothing revolutionary is demanded. Among the rig-owners, Songa Offshore, Seadrill and Dolphin Drilling have been early adopters of the ISDS approach. DNV conducted a pilot project of the recommended practice version of ISDS with Seadrill (in Houston) in 2009. Several improvements were made to Seadrill's new build and operations practices as a result of this initiative, and a story on this has been published in Offshore Engineer. DNV has been engaged with Dolphin Drilling in an effort that will lead to the issuance of the first ISDS class certificate, see article by Steve Marshall in Upstream Online. DNV is engaged by the Daewoo Ship and Marine Engineering (DSME), Samsung Heavy Industries (SHI) and Hyundai Heavy Industries (HHI) yards in South Korea, for drilling units they are building for Songa Offshore, Fred Olsen Energy (Dolphin Drilling), Statoil and Diamond Drilling. The owners have specified a full scope for DNV follow-up on ISDS, including systems for emergency shut-down, fire and gas, BOP control, drilling control, pipe/riser handling, heave compensation & tensioning, bulk storage, drilling fluid circulation, cementing, dynamic positioning, power management and integrated automation. See article published in Offshore Magazine for the Songa Offshore units. In September 2013, DNV announced the contract with Diamond Drilling, the first American rig-owner to apply ISDS for a new-build project. 
The ISDS methodology was developed by starting from best industry practices in the aerospace, telecom and automotive industries and adapting the requirements to fit the offshore and maritime domains. An article published in Oil & Gas Journal gives an industry perspective on ISDS. In July 2015, the Songa Equinox, the first of Songa Offshore's four new sixth-generation Cat-D semisubmersible rigs, met the requirements of the integrated software dependent systems (ISDS) standard (DNV-OS-D203), which is intended to prevent software glitches. The aim is to enable full tracking of the quality and version control of all integrated software systems, so that the yard and the user know, at any given time, the status of all systems, the latest updates, and whether any items still require close-out at the yard. Noticeable improvements over the typical newbuilding lifecycle of complex, software-dependent vessels were reported. References Computer standards
390726
https://en.wikipedia.org/wiki/SpeakEasy
SpeakEasy
SpeakEasy was a United States military project to use software-defined radio technology to make it possible to communicate with over 10 different types of military radios from a single system. History "The SpectrumWare project applied a software-oriented wireless communications approach with distributed signal processing. The research direction of the SpectrumWare project was heavily influenced by two software radio efforts: the military SpeakEasy project and the commercial products of the Steinbrecher Corporation. According to Upmal and Lackey in “SPEAKeasy, the Military Software Radio” IEEE Communications Magazine (NY: IEEE Press) 1995, the SpeakEasy project was started in 1991 and was the first large-scale software radio. SpeakEasy was motivated in large part by the communications interoperability problems that resulted from different branches of the military services having dissimilar (non-interoperable) radio systems. This lack of communications interoperability can be directly linked to casualties in several conflicts. SpeakEasy had a very aggressive goal of implementing ten different radio waveforms in software on a single platform. The designers chose the fastest DSP available at the time, the Texas Instruments TMS320C40 processor, which ran at 40 MHz. Since this was not enough processing power to implement all of the waveform processing, the system boards were designed to each support four ’C40s as well as some FPGAs. In 1994, Phase I was successfully demonstrated; however it involved several hundred processors and filled the back of a truck. Moore’s Law provides a doubling in speed every eighteen months, and since it had taken three years to build the system and write all of the software, two doublings had taken place. This seemed to indicate that the number of processors could be reduced by a factor of four. However, SpeakEasy could not take advantage of these newer faster processors, and the reason was the software. The software was tied to ’C40 assembly language, plus all of the specialized glue code to get four C40s to work together with the code for the particular chosen FPGA. The observation was that it had taken three years to write software for a platform that Moore’s Law made obsolete in eighteen months. Further-more, a software radio pushes most of the complexity of the radio into software, so the software development could easily become the largest, most expensive part of the system. These observations led to software portability being a key goal of the SpectrumWare project." See also Software Communications Architecture (SCA) European Secure Software-defined Radio (ESSOR) http://www.occar.int/36 Joint Radio System of the German Armed Forces (SVFuA) GNU Radio References Military radio systems of the United States Software-defined radio
20809223
https://en.wikipedia.org/wiki/1975%20USC%20Trojans%20football%20team
1975 USC Trojans football team
The 1975 USC Trojans football team represented the University of Southern California (USC) in the 1975 NCAA Division I football season. In their 15th year under head coach John McKay, the Trojans compiled an 8–4 record (3–4 against conference opponents), finished in fifth place in the Pacific-8 Conference (Pac-8), and outscored their opponents by a combined total of 247 to 140. The team was ranked #17 in the final AP Poll and #19 in the final UPI Coaches Poll. Quarterback Vince Evans led the team in passing, completing 35 of 112 passes for 695 yards with three touchdowns and nine interceptions. Ricky Bell led the team in rushing with 385 carries for 1,957 yards and 13 touchdowns. Randy Simmrin led the team in receiving with 26 catches for 478 yards and one touchdown. Schedule Roster Game summaries Duke Ricky Bell 34 rushes, 256 yards Oregon State Purdue Ricky Bell 89 rushing yards at Iowa Washington State Ricky Bell 38 rushes, 217 yards at Notre Dame 1975 team players in the NFL The following players were drafted into professional football following the season. References USC USC Trojans football seasons Liberty Bowl champion seasons USC Trojans football
61697525
https://en.wikipedia.org/wiki/Gitea
Gitea
Gitea (/ɡɪˈtiː/) is an open-source forge software package for hosting software development version control using Git as well as other collaborative features like bug tracking, wikis and code review. It supports self-hosting but also provides a free public first-party instance. It is a fork of Gogs and is written in Go. Gitea can be hosted on all platforms supported by Go, including Linux, macOS, and Windows. The project is funded on Open Collective. History Gitea was created by Lunny Xiao, who was also a founder of the self-hosted Git service Gogs. He invited a group of Gogs users and contributors to join him in starting the new project. Though Gogs was an open-source project, its repository was under the control of a single maintainer, limiting the amount of input and the speed with which the community could influence the development. Frustrated by this, the Gitea developers began Gitea as a fork of Gogs in November 2016 and established a community-driven model for its development. It had its official 1.0 release the following month, December 2016. See also Source control Distributed version control Self hosting Comparison of source code hosting facilities Open-source software GitHub GitLab Bitbucket References External links Official instance Code Repository on GitHub 2016 software Version control Git (software) Bug and issue tracking software Free software programmed in Go Project hosting websites Project management software Open-source software hosting facilities Free project management software Free and open-source software Free software websites Cross-platform free software Software using the MIT license Collaborative projects
391505
https://en.wikipedia.org/wiki/CICS
CICS
IBM CICS (Customer Information Control System) is a family of mixed-language application servers that provide online transaction management and connectivity for applications on IBM mainframe systems under z/OS and z/VSE. CICS family products are designed as middleware and support rapid, high-volume online transaction processing. A CICS transaction is a unit of processing initiated by a single request that may affect one or more objects. This processing is usually interactive (screen-oriented), but background transactions are possible. CICS Transaction Server (CICS TS) sits at the head of the CICS family and provides services that extend or replace the functions of the operating system. These services can be more efficient than the generalized operating system services and also simpler for programmers to use, particularly with respect to communication with diverse terminal devices. Applications developed for CICS may be written in a variety of programming languages and use CICS-supplied language extensions to interact with resources such as files, database connections, terminals, or to invoke functions such as web services. CICS manages the entire transaction such that, if for any reason a part of the transaction fails, all recoverable changes can be backed out. While CICS TS has its highest profile among large financial institutions, such as banks and insurance companies, many Fortune 500 companies and government entities are reported to run CICS. Other, smaller enterprises can also run CICS TS and other CICS family products. CICS can regularly be found behind the scenes in, for example, bank-teller applications, ATM systems, industrial production control systems, insurance applications, and many other types of interactive applications. Recent CICS TS enhancements include new capabilities to improve the developer experience, including the choice of APIs, frameworks, editors, and build tools, while at the same time providing updates in the key areas of security, resilience, and management. Earlier CICS TS releases provided support for Web services and Java, event processing, Atom feeds, and RESTful interfaces. History CICS was preceded by an earlier, single-threaded transaction processing system, IBM MTCS. An 'MTCS-CICS bridge' was later developed to allow these transactions to execute under CICS with no change to the original application programs. CICS was originally developed in the United States at an IBM Development Center in Des Plaines, Illinois, beginning in 1966 to address requirements from the public utility industry. The first CICS product was announced in 1968, named Public Utility Customer Information Control System, or PU-CICS. It became clear immediately that it had applicability to many other industries, so the Public Utility prefix was dropped with the introduction of the first release of the CICS Program Product on July 8, 1969, not long after the IMS database management system. For the next few years, CICS was developed in Palo Alto and was considered a less important "smaller" product than IMS, which IBM then considered more strategic. Customer pressure kept it alive, however. When IBM decided to end development of CICS in 1974 to concentrate on IMS, the CICS development responsibility was picked up by the IBM Hursley site in the United Kingdom, which had just ceased work on the PL/I compiler and so knew many of the same customers as CICS. 
The core of the development work continues in Hursley today alongside contributions from labs in India, China, Russia, Australia, and the United States. Early evolution CICS originally only supported a few IBM-brand devices like the 1965 IBM 2741 Selectric (golf ball) typewriter-based terminal. The 1964 IBM 2260 and 1972 IBM 3270 video display terminals were widely used later. In the early days of IBM mainframes, computer software was free, bundled at no extra charge with computer hardware. The OS/360 operating system and application support software like CICS were "open" to IBM customers long before the open-source software initiative. Corporations like Standard Oil of Indiana (Amoco) made major contributions to CICS. The IBM Des Plaines team tried to add support for popular non-IBM terminals like the ASCII Teletype Model 33 ASR, but the small low-budget software development team could not afford the $100-per-month hardware to test it. IBM executives incorrectly felt that the future would be like the past, with batch processing using traditional punch cards. IBM reluctantly provided only minimal funding when public utility companies, banks and credit-card companies demanded a cost-effective interactive system (similar to the 1965 IBM Airline Control Program used by the American Airlines Sabre computer reservation system) for high-speed data access-and-update to customer information for their telephone operators (without waiting for overnight batch processing punch card systems). When CICS was delivered to Amoco with Teletype Model 33 ASR support, it caused the entire OS/360 operating system to crash (including non-CICS application programs). The majority of the CICS Terminal Control Program (TCP, the heart of CICS) and part of OS/360 had to be laboriously redesigned and rewritten by Amoco Production Company in Tulsa, Oklahoma. It was then given back to IBM for free distribution to others. In a few years, CICS generated over $60 billion in new hardware revenue for IBM, and became their most-successful mainframe software product. In 1972, CICS was available in three versions: DOS-ENTRY (program number 5736-XX6) for DOS/360 machines with very limited memory, DOS-STANDARD (program number 5736-XX7), for DOS/360 machines with more memory, and OS-STANDARD V2 (program number 5734-XX7) for the larger machines which ran OS/360. In early 1970, a number of the original developers, including Ben Riggins (the principal architect of the early releases), relocated to California and continued CICS development at IBM's Palo Alto Development Center. IBM executives did not recognize value in software as a revenue-generating product until after federal law required software unbundling. In 1980, IBM executives failed to heed Ben Riggins' strong suggestions that IBM should provide their own EBCDIC-based operating system and integrated-circuit microprocessor chip for use in the IBM Personal Computer as a CICS intelligent terminal (instead of the incompatible Intel chip, and immature ASCII-based Microsoft 1980 DOS). Because of the limited capacity of even large processors of that era, every CICS installation was required to assemble the source code for all of the CICS system modules after completing a process similar to system generation (sysgen), called CICSGEN, to establish values for conditional assembly-language statements. This process allowed each customer to exclude support from CICS itself for any feature they did not intend to use, such as device support for terminal types not in use. 
CICS owes its early popularity to its relatively efficient implementation when hardware was very expensive, its multi-threaded processing architecture, its relative simplicity for developing terminal-based real-time transaction applications, and many open-source customer contributions, including both debugging and feature enhancement. Z notation Part of CICS was formalized using the Z notation in the 1980s and 1990s in collaboration with the Oxford University Computing Laboratory, under the leadership of Tony Hoare. This work won a Queen's Award for Technological Achievement. CICS as a distributed file server In 1986, IBM announced CICS support for the record-oriented file services defined by Distributed Data Management Architecture (DDM). This enabled programs on remote, network-connected computers to create, manage, and access files that had previously been available only within the CICS/MVS and CICS/VSE transaction processing environments. In newer versions of CICS, support for DDM has been removed. Support for the DDM component of CICS z/OS was discontinued at the end of 2003, and was removed from CICS for z/OS in version 5.2 onward. In CICS TS for z/VSE, support for DDM was stabilised at V1.1.1 level, with an announced intention to discontinue it in a future release. In CICS for z/VSE 2.1 onward, CICS/DDM is not supported. CICS and the World Wide Web CICS Transaction Server first introduced a native HTTP interface in version 1.2, together with a Web Bridge technology for wrapping green-screen terminal-based programs with an HTML facade. CICS Web and Document APIs were enhanced in CICS TS V1.3 to enable web-aware applications to be written to interact more effectively with web browsers. CICS TS versions 2.1 through 2.3 focused on introducing CORBA and EJB technologies to CICS, offering new ways to integrate CICS assets into distributed application component models. These technologies relied on hosting Java applications in CICS. The Java hosting environment saw numerous improvements over many releases, ultimately resulting in the embedding of the WebSphere Liberty Profile into CICS Transaction Server V5.1. Numerous web-facing technologies could be hosted in CICS using Java; this ultimately resulted in the removal of the native CORBA and EJB technologies. CICS TS V3.1 added a native implementation of the SOAP and WSDL technologies for CICS, together with client-side HTTP APIs for outbound communication. These twin technologies enabled easier integration of CICS components with other enterprise applications, and saw widespread adoption. Tools were included for taking traditional CICS programs written in languages such as COBOL, and converting them into WSDL-defined Web Services, with little or no program changes. This technology saw regular enhancements over successive releases of CICS. CICS TS V4.1 and V4.2 saw further enhancements to web connectivity, including a native implementation of the Atom publishing protocol. Many of the newer web-facing technologies were made available for earlier releases of CICS using delivery models other than a traditional product release. This allowed early adopters to provide constructive feedback that could influence the final design of the integrated technology. Examples include the Soap for CICS technology preview SupportPac for TS V2.2, or the ATOM SupportPac for TS V3.1. This approach was used to introduce JSON support for CICS TS V4.2, a technology that went on to be integrated into CICS TS V5.2. 
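From a caller's point of view, a JSON service exposed by CICS in this way is an ordinary HTTP endpoint. The minimal Python sketch below shows what such a client call might look like; the host name, port, URI path and payload field names are invented for illustration, since in practice they are typically determined by the URIMAP and JSON schema definitions of the particular service.

# Hypothetical client for a JSON web service exposed by a CICS region.
# Host, port, path and field names are placeholders for this sketch.
import json
import urllib.request

payload = {"accountNumber": "1234567", "amount": 250.00}
request = urllib.request.Request(
    "http://cics.example.com:9080/banking/debit",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(json.load(response))  # reply mapped from the CICS program's output data

Nothing on the client side distinguishes the CICS-hosted service from any other JSON-over-HTTP endpoint, which is the point of the wrapping approach described above.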
The JSON technology in CICS is similar to earlier SOAP technology, both of which allowed programs hosted in CICS to be wrapped with a modern facade. The JSON technology was in turn enhanced in z/OS Connect Enterprise Edition, an IBM product for composing JSON APIs that can leverage assets from several mainframe subsystems. Many partner products have also been used to interact with CICS. Popular examples include using the CICS Transaction Gateway for connecting to CICS from JCA compliant Java application servers, and IBM DataPower appliances for filtering web traffic before it reaches CICS. Modern versions of CICS provide many ways for both existing and new software assets to be integrated into distributed application flows. CICS assets can be accessed from remote systems, and can access remote systems; user identity and transactional context can be propagated; RESTful APIs can be composed and managed; devices, users and servers can interact with CICS using standards-based technologies; and the IBM WebSphere Liberty environment in CICS promotes the rapid adoption of new technologies. MicroCICS By January 1985, a consulting company founded in 1969, having done "massive on-line systems" for Hilton Hotels, FTD Florists, Amtrak, and Budget Rent-a-Car, announced what became MicroCICS. The initial focus was the IBM XT/370 and IBM AT/370. CICS Family Although when CICS is mentioned, people usually mean CICS Transaction Server, the CICS Family refers to a portfolio of transaction servers, connectors (called CICS Transaction Gateway) and CICS Tools. CICS on distributed platforms—not mainframes—is called IBM TXSeries. TXSeries is distributed transaction processing middleware. It supports C, C++, COBOL, Java™ and PL/I applications in cloud environments and traditional data centers. TXSeries is available on AIX, Linux x86, Windows, Solaris, and HP-UX platforms. CICS is also available on other operating systems, notably IBM i and OS/2. The z/OS implementation (i.e., CICS Transaction Server for z/OS) is by far the most popular and significant. Two versions of CICS were previously available for VM/CMS, but both have since been discontinued. In 1986, IBM released CICS/CMS, which was a single-user version of CICS designed for development use, the applications later being transferred to an MVS or DOS/VS system for production execution. Later, in 1988, IBM released CICS/VM. CICS/VM was intended for use on the IBM 9370, a low-end mainframe targeted at departmental use; IBM positioned CICS/VM running on departmental or branch office mainframes for use in conjunction with a central mainframe running CICS for MVS. CICS Tools Provisioning, management and analysis of CICS systems and applications are provided by CICS Tools. This includes performance management as well as deployment and management of CICS resources. In 2015, the four core foundational CICS tools (and the CICS Optimization Solution Pack for z/OS) were updated with the release of CICS Transaction Server for z/OS 5.3. The four core CICS Tools are: CICS Interdependency Analyzer for z/OS, CICS Deployment Assistant for z/OS, CICS Performance Analyzer for z/OS and CICS Configuration Manager for z/OS. Programming Programming considerations Multiple-user interactive-transaction application programs were required to be quasi-reentrant in order to support multiple concurrent transaction threads. A software coding error in one application could block all users from the system. 
The modular design of CICS reentrant / reusable control programs meant that, with judicious "pruning," multiple users with multiple applications could be executed on a computer with just 32K of expensive magnetic core physical memory (including the operating system). Considerable effort was required by CICS application programmers to make their transactions as efficient as possible. A common technique was to limit the size of individual programs to no more than 4,096 bytes, or 4K, so that CICS could easily reuse the memory occupied by any program not currently in use for another program or other application storage needs. When virtual memory was added to versions of OS/360 in 1972, the 4K strategy became even more important to reduce paging and thrashing, an unproductive resource-contention overhead. The efficiency of compiled high-level COBOL and PL/I language programs left much to be desired. Many CICS application programs continued to be written in assembler language, even after COBOL and PL/I support became available. With 1960s-and-1970s hardware resources expensive and scarce, a competitive "game" developed among system optimization analysts. When critical path code was identified, a code snippet was passed around from one analyst to another. Each person had to either (a) reduce the number of bytes of code required, or (b) reduce the number of CPU cycles required. Younger analysts learned from what more-experienced mentors did. Eventually, when no one could do (a) or (b), the code was considered optimized, and they moved on to other snippets. Small shops with only one analyst learned CICS optimization very slowly (or not at all). Because application programs could be shared by many concurrent threads, the use of static variables embedded within a program (or use of operating system memory) was restricted (by convention only). Unfortunately, many of the "rules" were frequently broken, especially by COBOL programmers who might not understand the internals of their programs or might fail to use the necessary restrictive compile-time options. This resulted in "non-re-entrant" code that was often unreliable, leading to spurious storage violations and entire CICS system crashes. Originally, the entire partition, or Multiple Virtual Storage (MVS) region, operated with the same memory protection key, including the CICS kernel code. Program corruption and CICS control block corruption were a frequent cause of system downtime. A software error in one application program could overwrite the memory (code or data) of one or all currently running application transactions. Locating the offending application code for complex transient timing errors could be a very difficult operating-system analysis problem. These shortcomings persisted for multiple new releases of CICS over a period of more than 20 years, in spite of their severity and the fact that top-quality CICS skills were in high demand and short supply. They were addressed in TS V3.3, V4.1 and V5.2 with the Storage Protection, Transaction Isolation and Subspace features respectively, which utilize operating system hardware features to protect the application code and the data within the same address space even though the applications were not written to be separated. CICS application transactions remain mission-critical for many public utility companies, large banks and other multibillion-dollar financial institutions. 
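The reentrancy problem described above is not unique to CICS: any routine shared by concurrent users that keeps its working state in static, program-wide storage rather than per-invocation storage can corrupt another user's data. The short Python sketch below is a generic illustration of the difference under the assumption of two concurrent requests; it is not CICS code.

# Generic illustration of why "static" (shared) working storage breaks
# concurrent users, while per-invocation storage does not.
import threading
import time

shared_buffer = {}   # plays the role of static storage shared by all users

def handle_request_shared(customer, results):
    shared_buffer["customer"] = customer            # clobbers any other user's value
    time.sleep(0.01)                                # another "transaction" runs meanwhile
    results[customer] = shared_buffer["customer"]   # may now hold someone else's data

def handle_request_local(customer, results):
    local_state = {"customer": customer}            # private to this invocation
    time.sleep(0.01)
    results[customer] = local_state["customer"]     # always this caller's own data

results = {}
workers = [threading.Thread(target=handle_request_shared, args=(name, results))
           for name in ("ALICE", "BOB")]
for worker in workers:
    worker.start()
for worker in workers:
    worker.join()
print(results)   # with the shared version, both entries can end up as the same name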
Additionally, it is possible to provide a measure of advance application protection by performing testing under the control of a monitoring program that also serves to provide test and debug features. Macro-level programming When CICS was first released, it only supported application transaction programs written in IBM 360 Assembler. COBOL and PL/I support were added years later. Because of the initial assembler orientation, requests for CICS services were made using assembler-language macros. For example, a request to read a record from a file was made by a macro call to the "File Control Program" of CICS and might look like this: DFHFC TYPE=READ,DATASET=myfile,TYPOPER=UPDATE,....etc. This gave rise to the later terminology "Macro-level CICS." When high-level language support was added, the macros were retained and the code was converted by a pre-compiler that expanded the macros to their COBOL or PL/I CALL statement equivalents. Thus preparing an HLL application was effectively a "two-stage" compile: output from the preprocessor was fed into the HLL compiler as input. COBOL considerations: unlike PL/I, IBM COBOL does not normally provide for the manipulation of pointers (addresses). In order to allow COBOL programmers to access CICS control blocks and dynamic storage, the designers resorted to what was essentially a hack. The COBOL Linkage Section was normally used for inter-program communication, such as parameter passing. The compiler generates a list of addresses, each called a Base Locator for Linkage (BLL), which were set on entry to the called program. The first BLL corresponds to the first item in the Linkage Section and so on. CICS allows the programmer to access and manipulate these by passing the address of the list as the first argument to the program. The BLLs can then be dynamically set, either by CICS or by the application, to allow access to the corresponding structure in the Linkage Section. Command-level programming During the 1980s, IBM at Hursley Park produced a version of CICS that supported what became known as "Command-level CICS", which still supported the older programs but introduced a new API style to application programs. A typical Command-level call might look like the following: EXEC CICS SEND MAPSET('LOSMATT') MAP('LOSATT') END-EXEC The values given in the SEND MAPSET command correspond to the names used on the first DFHMSD macro in the map definition given below for the MAPSET argument, and the DFHMDI macro for the MAP argument. This is pre-processed by a pre-compile batch translation stage, which converts the embedded commands (EXECs) into call statements to a stub subroutine. So, preparing application programs for later execution still required two stages. It was possible to write "Mixed mode" applications using both Macro-level and Command-level statements. Initially, at execution time, the command-level commands were converted using a run-time translator, "The EXEC Interface Program", to the old Macro-level call, which was then executed by the mostly unchanged CICS nucleus programs. But when the CICS Kernel was re-written for TS V3, EXEC CICS became the only way to program CICS applications, as many of the underlying interfaces had changed. Run-time conversion The Command-level-only CICS introduced in the early 1990s offered some advantages over earlier versions of CICS. However, IBM also dropped support for Macro-level application programs written for earlier versions. 
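The pre-compile translation step described above can be pictured as a simple source-to-source pass over the program text. The Python sketch below is a toy illustration only, not the actual CICS translator: the real translator emits calls to the EXEC interface stub (commonly named DFHEI1 in translated COBOL) with a generated argument list, whereas the argument form shown here is invented for brevity.

# Toy source-to-source pass: replace each EXEC CICS ... END-EXEC block with a
# CALL to a stub routine, roughly what the command-level translator does before
# the COBOL or PL/I compiler runs. The generated CALL layout is simplified.
import re

EXEC_BLOCK = re.compile(r"EXEC\s+CICS\s+(.*?)\s*END-EXEC", re.DOTALL)

def translate(source: str) -> str:
    def to_call(match: "re.Match") -> str:
        command_text = " ".join(match.group(1).split())
        return "CALL 'DFHEI1' USING BY CONTENT '%s'" % command_text
    return EXEC_BLOCK.sub(to_call, source)

print(translate("    EXEC CICS SEND MAPSET('LOSMATT') MAP('LOSATT') END-EXEC"))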
The withdrawal of Macro-level support meant that many application programs had to be converted or completely rewritten to use Command-level EXEC commands only. By this time, there were perhaps millions of programs worldwide that had been in production for decades in many cases. Rewriting them often introduced new bugs without necessarily adding new features. There were a significant number of users who ran CICS V2 application-owning regions (AORs) to continue to run macro code for many years after the change to V3. It was also possible to execute old Macro-level programs using conversion software such as APT International's Command CICS. New programming styles Recent CICS Transaction Server enhancements include support for a number of modern programming styles. CICS Transaction Server Version 5.6 introduced enhanced support for Java to deliver a cloud-native experience for Java developers. For example, the new CICS Java API (JCICSX) allows easier unit testing using mocking and stubbing approaches, and can be run remotely on the developer’s local workstation. A set of CICS artifacts on Maven Central enable developers to resolve Java dependencies using popular dependency management tools such as Apache Maven and Gradle. Plug-ins for Maven (cics-bundle-maven) and Gradle (cics-bundle-gradle) are also provided to simplify automated building of CICS bundles, using familiar IDEs like Eclipse, IntelliJ IDEA, and Visual Studio Code. In addition, Node.js z/OS support is enhanced for version 12, providing faster startup, better default heap limits, updates to the V8 JavaScript engine, etc. Support for Jakarta EE 8 is also included. CICS TS 5.5 introduced support for IBM SDK for Node.js, providing a full JavaScript runtime, server-side APIs, and libraries to efficiently build high-performance, highly scalable network applications for IBM Z. CICS Transaction Server Version 2.1 introduced support for Java. CICS Transaction Server Version 2.2 supported the Software Developers Toolkit. CICS provides the same run-time container as IBM's WebSphere product family, so Java EE applications are portable between CICS and WebSphere, and there is common tooling for the development and deployment of Java EE applications. In addition, CICS placed an emphasis on "wrapping" existing application programs inside modern interfaces so that long-established business functions can be incorporated into more modern services. These include WSDL, SOAP and JSON interfaces that wrap legacy code so that a web or mobile application can obtain and update the core business objects without requiring a major rewrite of the back-end functions. Transactions A CICS transaction is a set of operations that perform a task together. Usually, the majority of transactions are relatively simple tasks such as requesting an inventory list or entering a debit or credit to an account. A primary characteristic of a transaction is that it should be atomic. On IBM Z servers, CICS easily supports thousands of transactions per second, making it a mainstay of enterprise computing. CICS applications comprise transactions, which can be written in numerous programming languages, including COBOL, PL/I, C, C++, IBM Basic Assembly Language, REXX, and Java. Each CICS program is initiated using a transaction identifier. CICS screens are usually sent as a construct called a map, a module created with Basic Mapping Support (BMS) assembler macros or third-party tools. CICS screens may contain text that is highlighted, has different colors, and/or blinks depending on the terminal type used. 
An example of how a map can be sent through COBOL is given below. The end user inputs data, which is made accessible to the program by receiving a map from CICS. EXEC CICS RECEIVE MAPSET('LOSMATT') MAP('LOSATT') INTO(OUR-MAP) END-EXEC. For technical reasons, the arguments to some command parameters must be quoted and some must not be quoted, depending on what is being referenced. Most programmers will code out of a reference book until they get the "hang" or concept of which arguments are quoted, or they'll typically use a "canned template" where they have example code that they just copy and paste, then edit to change the values. Example of BMS Map Code Basic Mapping Support defines the screen format through assembler macros such as the following. This was assembled to generate both the physical map set (a load module in a CICS load library) and a symbolic map set (a structure definition or DSECT in PL/I, COBOL, assembler, etc.) which was copied into the source program. LOSMATT DFHMSD TYPE=MAP, X MODE=INOUT, X TIOAPFX=YES, X TERM=3270-2, X LANG=COBOL, X MAPATTS=(COLOR,HILIGHT), X DSATTS=(COLOR,HILIGHT), X STORAGE=AUTO, X CTRL=(FREEKB,FRSET) * LOSATT DFHMDI SIZE=(24,80), X LINE=1, X COLUMN=1 * LSSTDII DFHMDF POS=(1,01), X LENGTH=04, X COLOR=BLUE, X INITIAL='MQCM', X ATTRB=PROT * DFHMDF POS=(24,01), X LENGTH=79, X COLOR=BLUE X ATTRB=ASKIP, X INITIAL='PF7- 8- 9- 10- X 11- 12-CANCEL' * DFHMSD TYPE=FINAL END Structure In the z/OS environment, a CICS installation comprises one or more "regions" (generally referred to as a "CICS Region"), spread across one or more z/OS system images. Although it processes interactive transactions, each CICS region is usually started as a batch address space with standard JCL statements: it's a job that runs indefinitely until shutdown. Alternatively, each CICS region may be started as a started task. Whether a batch job or a started task, CICS regions may run for days, weeks, or even months before shutting down for maintenance (MVS or CICS). Upon restart, a parameter determines if the start should be "Cold" (no recovery) or "Warm"/"Emergency" (using a warm shutdown or restarting from the log after a crash). Cold starts of large CICS regions with many resources can take a long time as all the definitions are re-processed. Installations are divided into multiple address spaces for a wide variety of reasons, such as: application separation, function separation, avoiding the workload capacity limitations of a single region, or address space, or mainframe instance in the case of a z/OS SysPlex. A typical installation consists of a number of distinct applications that make up a service. Each service usually has a number of "Terminal-Owning Regions" (TORs) that route transactions to multiple "Application-Owning Regions" (AORs), though other topologies are possible. For example, the AORs might not perform File I/O. Instead there would be a "File-Owning Region" (FOR) that performed the File I/O on behalf of transactions in the AOR given that, at the time, a VSAM file could only support recoverable write access from one address space at a time. But not all CICS applications use VSAM as the primary data source (or, historically, other single-address-space-at-a-time datastores such as CA Datacom); many use either IMS/DB or Db2 as the database, and/or MQ as a queue manager. For all these cases, TORs can load-balance transactions to sets of AORs which then directly use the shared databases/queues. 
CICS supports XA two-phase commit between data stores, and so transactions that span MQ, VSAM/RLS and Db2, for example, are possible with ACID properties. CICS supports distributed transactions using the SNA LU6.2 protocol between the address spaces, which can be running on the same or different clusters. This allows ACID updates of multiple datastores by cooperating distributed applications. In practice there are issues with this if a system or communications failure occurs, because the transaction disposition (backout or commit) may be in doubt if one of the communicating nodes has not recovered. Thus the use of these facilities has never been very widespread. Sysplex exploitation At the time of CICS ESA V3.2, in the early 1990s, IBM faced the challenge of how to get CICS to exploit the new z/OS Sysplex mainframe line. The Sysplex was to be based on CMOS (Complementary Metal Oxide Semiconductor) rather than the existing ECL (Emitter Coupled Logic) hardware. The cost of scaling the mainframe-unique ECL was much higher than CMOS, which was being developed by a keiretsu with high-volume use cases such as Sony PlayStation to reduce the unit cost of each generation's CPUs. The ECL was also expensive for users to run because the gate drain current produced so much heat that the CPU had to be packaged into a special module called a Thermal Conduction Module (TCM), which had inert gas pistons and needed to be plumbed to high-volume chilled water to be cooled. But the air-cooled CMOS technology's CPU speed initially was much slower than the ECL (notably the boxes available from the mainframe-clone makers Amdahl and Hitachi). This was especially concerning to IBM in the CICS context as almost all the largest mainframe customers were running CICS and for many of them it was the primary mainframe workload. To achieve the same total transaction throughput on a Sysplex, multiple boxes would need to be used in parallel for each workload, but a CICS address space, due to its semi-reentrant application programming model, could not exploit more than about 1.5 processors on one box at the time, even with use of MVS sub-tasks. Without this, these customers would tend to move to the competitors rather than Sysplex as they scaled up the CICS workloads. There was considerable debate inside IBM as to whether the right approach would be to break upward compatibility for applications and move to a model like IMS/DC, which was fully reentrant, or to extend the approach customers had adopted to more fully exploit a single mainframe's power using multi-region operation (MRO). Eventually the second path was adopted after the CICS user community was consulted and vehemently opposed breaking upward compatibility, given that they had the prospect of Y2K to contend with at that time and did not see the value in re-writing and testing millions of lines of mainly COBOL, PL/1, or assembler code. The IBM-recommended structure for CICS on Sysplex was that at least one CICS Terminal Owning Region was placed on each Sysplex node, which dispatched transactions to many Application Owning Regions (AORs) spread across the entire Sysplex. If these applications needed to access shared resources they either used a Sysplex-exploiting datastore (such as Db2 or IMS/DB) or concentrated, by function-shipping, the resource requests into singular-per-resource Resource Owning Regions (RORs), including File Owning Regions (FORs) for VSAM and CICS Data Tables, Queue Owning Regions (QORs) for MQ, CICS Transient Data (TD) and CICS Temporary Storage (TS). 
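The XA two-phase commit support discussed above follows the same prepare-then-commit pattern regardless of which resource managers take part. The Python sketch below is a generic, in-memory illustration of that protocol with invented resource-manager stand-ins; it is not CICS code, and it omits the logging and in-doubt resolution that a real syncpoint coordinator performs.

# Generic two-phase commit sketch with invented resource-manager stand-ins.
class ResourceManager:
    def __init__(self, name, can_prepare=True):
        self.name = name
        self.can_prepare = can_prepare

    def prepare(self):
        # Phase 1: harden updates and promise to commit if asked.
        return self.can_prepare

    def commit(self):
        print(self.name + ": committed")

    def rollback(self):
        print(self.name + ": backed out")

def two_phase_commit(participants):
    votes = [rm.prepare() for rm in participants]   # phase 1: collect votes
    if all(votes):
        for rm in participants:                     # phase 2: commit everywhere
            rm.commit()
        return True
    for rm in participants:                         # any "no" vote backs out all work
        rm.rollback()
    return False

two_phase_commit([ResourceManager("MQ"),
                  ResourceManager("Db2"),
                  ResourceManager("VSAM/RLS", can_prepare=False)])

The in-doubt window mentioned above is the gap between a participant voting yes in phase 1 and learning the coordinator's phase 2 decision; if communication is lost inside that window, the participant cannot safely resolve the unit of work on its own.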
This multi-region structure preserved compatibility for legacy applications, at the expense of the operational complexity of configuring and managing many CICS regions. In subsequent releases and versions, CICS was able to exploit new Sysplex-exploiting facilities in VSAM/RLS, MQ for z/OS, and placed its own Data Tables, TD, and TS resources into the architected shared resource manager for the Sysplex, the Coupling Facility or CF, dispensing with the need for most RORs. The CF provides a mapped view of resources including a shared timebase, buffer pools, locks and counters with hardware messaging assists that made sharing resources across the Sysplex both more efficient than polling and reliable (utilizing a semi-synchronized backup CF for use in case of failure). By this time, the CMOS line had individual boxes that exceeded the power available from the fastest ECL box with more processors per CPU, and when these were coupled together, 32 or more nodes would be able to scale two orders of magnitude larger in total power for a single workload. For example, by 2002, Charles Schwab was running a "MetroPlex" consisting of a redundant pair of its mainframe Sysplexes in two locations in Phoenix, AZ, each with 32 nodes driven by one shared CICS/Db2 workload to support the vast volume of pre-dotcom-bubble web client inquiry requests. This cheaper, much more scalable CMOS technology base, and the huge investment costs of having to both get to 64-bit addressing and independently produce cloned CF functionality, drove the IBM-mainframe clone makers out of the business one by one. CICS Recovery/Restart The objective of recovery/restart in CICS is to minimize, and if possible eliminate, damage done to the online system when a failure occurs, so that system and data integrity are maintained. If the CICS region was shut down instead of failing, it will perform a "Warm" start exploiting the checkpoint written at shutdown. The CICS region can also be forced to "Cold" start, which reloads all definitions and wipes out the log, leaving the resources in whatever state they are in. Under CICS, the following are some of the resources which are considered recoverable. If one wishes these resources to be recoverable, then special options must be specified in relevant CICS definitions: VSAM files CMT (CICS-maintained data tables) Intrapartition TDQ Temporary Storage Queue in auxiliary storage I/O messages from/to transactions in a VTAM network Other database/queuing resources connected to CICS that support the XA two-phase commit protocol (like IMS/DB, Db2, VSAM/RLS) CICS also offers extensive recovery/restart facilities for users to establish their own recovery/restart capability in their CICS system. Commonly used recovery/restart facilities include: Dynamic Transaction Backout (DTB) Automatic Transaction Restart Resource Recovery using System Log Resource Recovery using Journal System Restart Extended Recovery Facility Components Each CICS region comprises one major task on which every transaction runs, although certain services such as access to Db2 data use other tasks (TCBs). Within a region, transactions are cooperatively multitasked: they are expected to be well-behaved and yield the CPU rather than wait. CICS services handle this automatically. Each unique CICS "Task" or transaction is allocated its own dynamic memory at start-up, and subsequent requests for additional memory are handled by a call to the "Storage Control program" (part of the CICS nucleus or "kernel"), which is analogous to an operating system. 
A CICS system consists of the online nucleus, batch support programs, and applications services. Nucleus The original CICS nucleus consisted of a number of functional modules written in 370 assembler until V3: Task Control Program (KCP) Storage Control Program (SCP) Program Control Program (PCP) Program Interrupt Control Program (PIP) Interval Control Program (ICP) Dump Control Program (DCP) Terminal Control Program (TCP) File Control Program (FCP) Transient Data Control Program (TDP) Temporary Storage Control Program (TSP) Starting in V3, the CICS nucleus was rewritten into a kernel-and-domain structure using IBM's PL/AS language, which is compiled into assembler. The prior structure did not enforce separation of concerns and so had many inter-program dependencies, which led to bugs unless exhaustive code analysis was done. The new structure was more modular and so more resilient, because it was easier to change without impact. The first domains were often built with the name of the prior program but without the trailing "P". For example, Program Control Domain (DFHPC) or Transient Data Domain (DFHTD). The kernel operated as a switcher for inter-domain requests; initially this proved expensive for frequently called domains (such as Trace), but by utilizing PL/AS macros these calls were in-lined without compromising the separate domain design. In later versions, completely redesigned domains were added, like the Logging Domain DFHLG and Transaction Domain DFHTM, that replaced the Journal Control Program (JCP). Support programs In addition to the online functions, CICS has several support programs that run as batch jobs. High-level language (macro) preprocessor Command language translator Dump utility – prints formatted dumps generated by CICS Dump Management Trace utility – formats and prints CICS trace output Journal formatting utility – prints a formatted dump of the CICS region in case of error Applications services The following components of CICS support application development. Basic Mapping Support (BMS) provides device-independent terminal input and output APPC Support that provides LU6.1 and LU6.2 API support for collaborating distributed applications that support two-phase commit Data Interchange Program (DIP) provides support for IBM 3770 and IBM 3790 programmable devices 2260 Compatibility allows programs written for IBM 2260 display devices to run on 3270 displays EXEC Interface Program the stub program that converts calls generated by EXEC CICS commands to calls to CICS functions Built-in Functions – table search, phonetic conversion, field verify, field edit, bit checking, input formatting, weighted retrieval Pronunciation Different countries have differing pronunciations. Within IBM (specifically Tivoli) it is referred to as . In the US, it is more usually pronounced by reciting each letter . In Australia, Belgium, Canada, Hong Kong, the UK and some other countries, it is pronounced . In Denmark, it is pronounced kicks. In Finland, it is pronounced . In France, it is pronounced . In Germany, Austria and Hungary, it is pronounced and, less often, . In Greece, it is pronounced kiks. In India, it is pronounced kicks. In Iran, it is pronounced kicks. In Israel, it is pronounced C-I-C-S. In Italy, it is pronounced . In Poland, it is pronounced . In Portugal and Brazil, it is pronounced . In Russia, it is pronounced kiks. In Slovenia, it is pronounced kiks. In Spain, it is pronounced . In Sweden, it is pronounced kicks. In Israel, it is pronounced kicks. In Uganda, it is pronounced kicks. 
In Turkey, it is pronounced kiks. See also IBM TXSeries (CICS on distributed platforms) IBM WebSphere IBM 2741 IBM 2260 IBM 3270 OS/360 and successors Transaction Processing Facility Virtual Storage Access Method (VSAM) References External links Why to choose CICS Transaction Server for new IT projects IBM CICS whitepaper Support Forum for CICS Programming CICS User Community website for CICS related news, announcements and discussions Assembly language software History of human–computer interaction IBM mainframe operating systems IBM mainframe software IBM software Middleware Multimodal interaction Transaction processing Z notation
1158000
https://en.wikipedia.org/wiki/Mount%20Erebus%20disaster
Mount Erebus disaster
The Mount Erebus disaster occurred on 28 November 1979 when Air New Zealand Flight 901 (TE-901) flew into Mount Erebus on Ross Island, Antarctica, killing all 237 passengers and 20 crew on board. Air New Zealand had been operating scheduled Antarctic sightseeing flights since 1977. This flight was supposed to leave Auckland Airport in the morning and spend a few hours flying over the Antarctic continent, before returning to Auckland in the evening via Christchurch. The initial investigation concluded the accident was caused by pilot error, but public outcry led to the establishment of a Royal Commission of Inquiry into the crash. The commission, presided over by Justice Peter Mahon QC, concluded that the accident was caused by a correction made to the coordinates of the flight path the night before the disaster, coupled with a failure to inform the flight crew of the change, with the result that the aircraft, instead of being directed by computer down McMurdo Sound (as the crew had been led to believe), was instead rerouted to a path toward Mount Erebus. Justice Mahon's report accused Air New Zealand of presenting "an orchestrated litany of lies", and this led to changes in senior management at the airline. The Privy Council later ruled that the finding of a conspiracy was a breach of natural justice and not supported by the evidence. The accident is the deadliest accident in the history of Air New Zealand and one of New Zealand's deadliest peacetime disasters. Flight and aircraft The flight was designed and marketed as a unique sightseeing experience, carrying an experienced Antarctic guide, who pointed out scenic features and landmarks using the aircraft public-address system, while passengers enjoyed a low-flying sweep of McMurdo Sound. The flights left and returned to New Zealand the same day. Flight 901 would leave Auckland International Airport at for Antarctica, and arrive back at Christchurch International Airport at after flying . The aircraft would make a 45-minute stop at Christchurch for refuelling and crew change, before flying the remaining to Auckland, arriving at . Tickets for the November 1979 flights cost NZ$359 per person (equivalent to $2,055 in September 2021 dollars). Dignitaries including Sir Edmund Hillary had acted as guides on previous flights. Hillary was scheduled to act as the guide for the fatal flight of 28 November 1979, but had to cancel owing to other commitments. His long-time friend and climbing companion, Peter Mulgrew, stood in as guide. The flights usually operated at about 85% of capacity; the empty seats, usually the ones in the centre row, allowed passengers to move more easily about the cabin to look out of the windows. The aircraft used on the Antarctic flights were Air New Zealand's eight McDonnell Douglas DC-10-30 trijets. The aircraft on 28 November was registered ZK-NZP. The 182nd DC-10 to be built, and the fourth DC-10 to be introduced by Air New Zealand, ZK-NZP was handed over to the airline on 12 December 1974 at McDonnell Douglas's Long Beach plant. It had logged more than 20,700 flight hours prior to the crash. Accident Circumstances surrounding the accident Captain Jim Collins (45) and co-pilot Greg Cassin (37) had never flown to Antarctica before (while flight engineer Gordon Brooks had flown to Antarctica only once previously), but they were experienced pilots and were considered qualified for the flight. 
On 9 November 1979, 19 days before departure, the two pilots attended a briefing in which they were given a copy of the previous flight's flight plan. The flight plan that had been approved in 1977 by the Civil Aviation Division of the New Zealand Department of Transport was along a track directly from Cape Hallett to the McMurdo non-directional beacon (NDB), which, coincidentally, entailed flying almost directly over the peak of Mount Erebus. However, due to a typing error in the coordinates when the route was computerised, the printout from Air New Zealand's ground computer system presented at the 9 November briefing corresponded to a southerly flight path down the middle of the wide McMurdo Sound, about to the west of Mount Erebus. The majority of the previous 13 flights had also entered this flight plan's coordinates into their aircraft navigational systems and flown the McMurdo Sound route, unaware that the route flown did not correspond with the approved route. Captain Leslie Simpson, the pilot of a flight on 14 November and also present at the 9 November briefing, compared the coordinates of the McMurdo TACAN navigation beacon (about east of McMurdo NDB), and the McMurdo waypoint that his flight crew had entered into the inertial navigation system (INS), and was surprised to find a large distance between the two. After his flight, Captain Simpson advised Air New Zealand's navigation section of the difference in positions. For reasons that were disputed, this triggered Air New Zealand's navigation section to update the McMurdo waypoint coordinates stored in the ground computer to correspond with the coordinates of the McMurdo TACAN beacon, despite this also not corresponding with the approved route. The navigation section changed the McMurdo waypoint co-ordinate stored in the ground computer system around on the morning of the flight. Crucially, the flight crew of Flight 901 was not notified of the change. The flight plan printout given to the crew on the morning of the flight, which was subsequently entered by them into the aircraft's INS, differed from the flight plan presented at the 9 November briefing and from Captain Collins' map mark-ups, which he had prepared the night before the fatal flight. The key difference was that the flight plan presented at the briefing corresponded to a track down McMurdo Sound, giving Mount Erebus a wide berth to the east, whereas the flight plan printed on the morning of the flight corresponded to a track that coincided with Mount Erebus, which would result in a collision with Mount Erebus if this leg were flown at an altitude less than . The computer program was altered such that the standard telex forwarded to US air traffic controllers (ATCs) at the United States Antarctic science facility at McMurdo Station displayed the word "McMurdo", rather than the coordinates of latitude and longitude, for the final waypoint. During the subsequent inquiry, Justice Mahon concluded that this was a deliberate attempt to conceal from the United States authorities that the flight plan had been changed, and probably because it was known that US Air Traffic Control would lodge an objection to the new flight path. The flight had earlier paused during the approach to McMurdo Sound to carry out a descent, via a figure-eight manoeuvre, through a gap in the low cloud base (later estimated to be at around ) while over water to establish visual contact with surface landmarks and give the passengers a better view. 
The flight crew either was unaware of or ignored the approved route's minimum safe altitude (MSA) of for the approach to Mount Erebus, and in the sector south of Mount Erebus (and then only when the cloud base was at or better). Photographs and news stories from previous flights showed that many of these had also been flown at levels substantially below the route's MSA. In addition, preflight briefings for previous flights had approved descents to any altitude authorised by the US ATC at McMurdo Station. As the US ATC expected Flight 901 to follow the same route as previous flights down McMurdo Sound, and in accordance with the route waypoints previously advised by Air New Zealand to them, the ATC advised Flight 901 that it had a radar that could let them down to . However, the radar equipment did not pick up the aircraft, and the crew also experienced difficulty establishing VHF communications. The distance-measuring equipment did not lock onto the McMurdo Tactical Air Navigation System (TACAN) for any useful period. Cockpit voice recorder transcripts from the last minutes of the flight before impact with Mount Erebus indicated that the flight crew believed they were flying over McMurdo Sound, well to the west of Mount Erebus and with the Ross Ice Shelf visible on the horizon, when in reality they were flying directly toward the mountain. Despite most of the crew being engaged in identifying visual landmarks at the time, they never perceived the mountain directly in front of them. About 6 minutes after completing a descent in visual meteorological conditions, Flight 901 collided with the mountain at an altitude around , on the lower slopes of the 12,448-ft-tall (3,794 m) mountain. Passenger photographs taken seconds before the collision removed all doubt of a "flying in cloud" theory, showing perfectly clear visibility well beneath the cloud base, with landmarks to the left and to the right of the aircraft visible. Changes to the coordinates and departure The crew put the coordinates into the plane's computer before they departed at from Auckland International Airport. Unknown to them, the coordinates had been modified earlier that morning to correct the error introduced previously and undetected until then. The crew evidently did not check the destination waypoint against a topographical map (as did Captain Simpson on the flight of 14 November) or they would have noticed the change. Charts for the Antarctic were not available to the pilot for planning purposes, being withheld until the flight was about to depart. The charts eventually provided, which were carried on the aircraft, were neither comprehensive enough nor large enough in scale to support detailed plotting. These new coordinates changed the flight plan to track east of their understanding. The coordinates programmed the plane to overfly Mount Erebus, a volcano, instead of down McMurdo Sound. About four hours after a smooth take-off, the flight was away from McMurdo Station. The radio communications centre there allowed the pilots to descend to and to continue "visually". Air-safety regulations at the time did not allow flights to descend to lower than , even in good weather, although Air New Zealand's own travel magazine showed photographs of previous flights clearly operating below . Collins believed the plane was over open water. Crash into Mount Erebus Collins told McMurdo Station that he would be dropping to , at which point he switched control of the aircraft to the automated computer system. 
Outside, a layer of clouds blended with the white snow-covered volcano, forming a sector whiteout – no contrast between ground and sky was visible to the pilots. The effect deceived everyone on the flight deck, making them believe that the white mountainside was the Ross Ice Shelf, a huge expanse of floating ice derived from the great ice sheets of Antarctica, which was in fact now behind the mountain. As it was little understood, even by experienced polar pilots, Air New Zealand had provided no training for the flight crew on the sector whiteout phenomenon. Consequently, the crew thought they were flying along McMurdo Sound, when they were actually flying over Lewis Bay in front of Mount Erebus. At 12:49 pm, the ground proximity warning system (GPWS) began sounding a series of "whoop, whoop, pull up" alarms, warning that the plane was dangerously close to terrain. The CVR recorded the following: GPWS: "Whoop whoop. Pull up. Whoop whoop..." F/E: "Five hundred feet." GPWS: "...Pull up." F/E: "Four hundred feet." GPWS: "Whoop, whoop. Pull up. Whoop whoop. Pull up." CA: "Go-around power please." GPWS: "Whoop, whoop. Pull-" CAM: [Sound of impact] The go-around power was immediately applied, but it was too late. No time remained to divert the aircraft, and 6 seconds later, the plane crashed into the side of Mount Erebus and exploded, instantly killing everyone on board. The accident occurred at 12:50 pm at a position of and an elevation of above mean sea level. McMurdo Station attempted to contact the flight after the crash, and informed Air New Zealand headquarters in Auckland that communication with the aircraft had been lost. United States search-and-rescue personnel were placed on standby. Nationalities of passengers and crew Air New Zealand had not lost any passengers to an accident or incident until this event took place. The nationalities of the passengers and crew included: Rescue and recovery Initial search and discovery At 2:00 pm, the United States Navy released a situation report stating: Air New Zealand Flight 901 has failed to acknowledge radio transmissions. ... One LC-130 fixed-wing aircraft and two UH-1N rotary-wing aircraft are preparing to launch for SAR effort. Data gathered at 3:43 pm were added to the situation report, stating that the visibility was . Also, six aircraft had been launched to find the flight. Flight 901 was due to arrive back at Christchurch at 6:05 pm for a stopover including refuelling and a crew change before completing the journey back to Auckland. Around 50 passengers were also supposed to disembark at Christchurch. Airport staff initially told the waiting families that the flight being slightly late was not unusual, but as time went on, it became clear that something was wrong. At 9:00 pm, about half an hour after the plane would have run out of fuel, Air New Zealand informed the press that it believed the aircraft to be lost. Rescue teams searched along the assumed flight path, but found nothing. At 12:55 am, the crew of a United States Navy aircraft discovered unidentified debris along the side of Mount Erebus. No survivors could be seen. Around 9:00 am, 20 hours after the crash, helicopters with search parties managed to land on the side of the mountain. They confirmed that the wreckage was that of Flight 901 and that all 237 passengers and 20 crew members had been killed. The DC-10's altitude at the time of the collision was . The vertical stabiliser section of the plane, with the koru logo clearly visible, was found in the snow. 
Bodies and fragments of the aircraft were flown back to Auckland for identification. The remains of 44 of the victims were not individually identified. A funeral was held for them on 22 February 1980. Operation Overdue The recovery effort of Flight 901 was called "Operation Overdue.” Efforts for recovery were extensive, owing in part to the pressure from Japan, as 24 passengers had been Japanese. The operation lasted until 9 December 1979, with up to 60 recovery workers on site at a time. A team of New Zealand Police officers and a mountain-face rescue team were dispatched on a No. 40 Squadron C-130 Hercules aircraft. The job of individual identification took many weeks, and was largely done by teams of pathologists, dentists, and police. The mortuary team was led by Inspector Jim Morgan, who collated and edited a report on the recovery operation. Recordkeeping had to be meticulous because of the number and fragmented state of the human remains that had to be identified to the satisfaction of the coroner. The exercise resulted in 83% of the deceased passengers and crew eventually being identified, sometimes from evidence such as a finger capable of yielding a print, or keys in a pocket. In 2006, the New Zealand Special Service Medal (Erebus) was instituted to recognise the service of New Zealanders, and citizens of the United States of America and other countries, who were involved in the body recovery, identification, and crash investigation phases of Operation Overdue. On 5 June 2009, the New Zealand government recognised some of the Americans who assisted in Operation Overdue during a ceremony in Washington, DC. A total of 40 Americans, mostly Navy personnel, are eligible to receive the medal. Accident inquiries Despite Flight 901 crashing in one of the most isolated parts of the world, evidence from the crash site was extensive. Both the cockpit voice recorder and the flight data recorder were in working order and able to be deciphered. Extensive photographic footage from the moments before the crash was available; being a sightseeing flight, most passengers were carrying cameras, from which the majority of the film could be developed. Official accident report The accident report compiled by New Zealand's chief inspector of air accidents, Ron Chippindale, was released on 12 June 1980. It cited pilot error as the principal cause of the accident and attributed blame to the decision of Collins to descend below the customary minimum altitude level, and to continue at that altitude when the crew was unsure of the plane's position. The customary minimum altitude prohibited descent below even under good weather conditions, but a combination of factors led the captain to believe the plane was over the sea (the middle of McMurdo Sound and few small low islands), and previous flight 901 pilots had regularly flown low over the area to give passengers a better view, as evidenced by photographs in Air New Zealand's own travel magazine and by first-hand accounts of personnel based on the ground at NZ's Scott Base. Mahon inquiry In response to public demand, the New Zealand government announced a further one-man Royal Commission of Inquiry into the accident, to be performed by Justice Peter Mahon. This Royal Commission was initially 'handicapped' in that the deadline was extremely short; originally set for 31 October 1980, it was later extended four times. Mahon's report, released on 27 April 1981, cleared the crew of blame for the disaster. 
Mahon said the single, dominant, and effective cause of the crash was Air New Zealand's alteration of the flight plan waypoint coordinates in the ground navigation computer without advising the crew. The new flight plan took the aircraft directly over the mountain, rather than along its flank. Due to whiteout conditions, "a malevolent trick of the polar light", the crew were unable to visually identify the mountain in front of them. Furthermore, they may have experienced a rare meteorological phenomenon called sector whiteout, which creates the visual illusion of a flat horizon far in the distance. A very broad gap between cloud layers appeared to allow a view of the distant Ross Ice Shelf and beyond. Mahon noted that the flight crew, with many thousands of hours of flight time between them, had considerable experience with the extreme accuracy of the aircraft's inertial navigation system. Mahon also found that the preflight briefings for previous flights had approved descents to any altitude authorised by the US ATC at McMurdo Station, and that the radio communications centre at McMurdo Station had indeed authorised Collins to descend to , below the minimum safe level of . In his report, Mahon found that airline executives and senior pilots had engaged in a conspiracy to whitewash the inquiry, accusing them of "an orchestrated litany of lies" by covering up evidence and lying to investigators. Mahon found that, in the original report, Chippindale had a poor grasp of the flying involved in jet-airline operation, as he (and the New Zealand CAA in general) was typically involved in investigating simple light aircraft crashes. Chippindale's investigation techniques were revealed as lacking in rigour, which allowed errors and avoidable gaps in knowledge to appear in reports. Consequently, Chippindale entirely missed the importance of the flight-plan change and the rare meteorological conditions of Antarctica. Had the pilots been informed of the flight plan change, the crash would have been avoided. Court proceedings Judicial review On 20 May 1981, Air New Zealand applied to the High Court of New Zealand for judicial review of Mahon's order that it pay more than half the costs of the Mahon inquiry, and for judicial review of some of the findings of fact Mahon had made in his report. The application was referred to the Court of Appeal, which unanimously set aside the costs order. The Court of Appeal, by majority, though, declined to go any further, and in particular, declined to set aside Mahon's finding that members of the management of Air New Zealand had conspired to commit perjury before the inquiry to cover up the errors of the ground staff. Privy Council appeal Mahon then appealed to the Privy Council in London against the Court of Appeal's decision. His findings as to the cause of the accident, namely reprogramming of the aircraft's flight plan by the ground crew, who then failed to inform the flight crew, had not been challenged before the Court of Appeal, so were not challenged before the Privy Council. His conclusion that the crash was the result of the aircrew being misdirected as to their flight path, not due to pilot error, therefore remained. 
Regarding the issue of Air New Zealand stating a minimum altitude of 6,000 feet for pilots in the vicinity of McMurdo Base, the Privy Council stated, However, the Law Lords of the Privy Council under the chair of Lord Diplock effectively agreed with some of the views of the minority in the Court of Appeal in concluding that Mahon had acted in breach of natural justice in making his finding of a conspiracy by Air New Zealand management and that it was not supported by the evidence. In its judgment, delivered on 20 October 1983, the Privy Council therefore dismissed Mahon's appeal. Aviation researcher John King wrote in his book New Zealand Tragedies, Aviation: "Exhibit 164" was a photocopied diagram of McMurdo Sound showing a southbound flight path passing west of Ross Island and a northbound path passing the island on the east. The diagram did not extend sufficiently far south to show where, how, or even if they joined, and left the two paths disconnected. Evidence had been given to the effect that the diagram had been included in the flight crew's briefing documentation. Legacy of the disaster The crash of Flight 901 is one of New Zealand's three deadliest disasters – the others being the 1874 Cospatrick sailing ship disaster, in which 470 people died (John Wilson, "The voyage out – Fire on the Cospatrick", from Te Ara: The Encyclopedia of New Zealand, updated 21 September 2007, accessed 20 May 2008), and the 1931 Hawke's Bay earthquake, which killed 256 people. At the time of the disaster, it was the -deadliest air crash of all time. The crash remains Air New Zealand's deadliest accident, as well as New Zealand's deadliest peacetime disaster (excluding the Cospatrick sailing ship disaster, which happened en route to Auckland). Flight 901, in conjunction with the crash of American Airlines Flight 191 in Chicago six months earlier (25 May), severely hurt the reputation of the McDonnell Douglas DC-10. Following the Chicago crash, the FAA withdrew the DC-10's type certificate on 6 June, which grounded all U.S.-registered DC-10s and forbade any foreign government that had a bilateral agreement with the United States regarding aircraft certifications from flying their DC-10s, which included Air New Zealand's seven DC-10s. The Air New Zealand DC-10 fleet was grounded until the FAA measures were rescinded five weeks later, on 13 July, after all carriers had completed modifications that responded to issues discovered from the American Airlines Flight 191 incident. Flight 901 was the third-deadliest accident involving a DC-10, following Turkish Airlines Flight 981 and American Airlines Flight 191. The event marked the beginning of the end for Air New Zealand's DC-10 fleet, although talk existed before the accident of replacing the aircraft; DC-10s were replaced by Boeing 747s from mid-1981, and the last Air New Zealand DC-10 flew in December 1982. The occurrence also spelled the end of commercially operated Antarctic sightseeing flights – Air New Zealand cancelled all its Antarctic flights after Flight 901, and Qantas suspended its Antarctic flights in February 1980, only returning on a limited basis again in 1994. Almost all of the aircraft's wreckage still lies where it came to rest on the slopes of Mount Erebus, as both its remote location and its weather conditions can hamper any further recovery operations. During the cold periods, the wreckage is buried under a layer of snow and ice. During warm periods, when snow recedes, it is visible from the air. 
Following the incident, all charter flights to Antarctica from New Zealand ceased, and were not resumed until 2013, when a Boeing 747-400 chartered from Qantas set off from Auckland for a sightseeing flight over the continent. Justice Mahon's report was finally tabled in Parliament by the then-Minister of Transport, Maurice Williamson, in 1999. In the New Zealand Queen's Birthday Honours list in June 2007, Captain Gordon Vette was awarded the ONZM (Officer of the New Zealand Order of Merit), recognising his services in assisting Justice Mahon during the Erebus inquiry. Vette's book, Impact Erebus, provides a commentary of the flight, its crash, and the subsequent investigations. In 2008, Justice Mahon was posthumously awarded the Jim Collins Memorial Award by the New Zealand Airline Pilots Association for exceptional contributions to air safety, "in forever changing the general approach used in transport accidents investigations world wide." In 2009, Air New Zealand's CEO Rob Fyfe apologised to all those affected who did not receive appropriate support and compassion from the company following the incident, and unveiled a commemorative sculpture at its headquarters. On 28 November 2019, the 40-year anniversary of the disaster, New Zealand Prime Minister Jacinda Ardern, along with the national government, issued a formal apology to the families of the victims. Ardern "[expressed] regret on behalf of Air New Zealand for the accident", and "[apologised] on behalf of the airline which 40 years ago failed in its duty of care to its passengers and staff." The registration of the crashed aircraft, ZK-NZP, has not been reissued. Memorials A wooden cross was erected on the mountain above Scott Base to commemorate the accident. It was replaced in 1986 with an aluminium cross after the original was eroded by low temperatures, wind, and moisture. The memorial for the 16 passengers who were unidentifiable and the 28 whose bodies were never found is at Waikumete Cemetery in Glen Eden, Auckland. Beside the memorial is a Japanese cherry tree, planted as a memorial to the 24 Japanese passengers who died on board Flight 901. A memorial to the crew members of Flight 901 is located adjacent to Auckland Airport, on Tom Pearce Drive at the eastern end of the airport zone. In January 2010, a sculpted koru containing letters written by the loved ones of those who died was placed next to the Antarctic cross. It was originally to have been placed at the site by six relatives of the victims on the 30th anniversary of the crash, 28 November 2009, but this was delayed for two months due to bad weather. It was planned for a second koru capsule, mirroring the first capsule, to be placed at Scott Base in 2011. The book-length poem "Erebus" by American writer Jane Summer (Sibling Rivalry Press, 2015) memorialises a close friend who died in the tragedy, and in a feat of 'investigative poetry', explores the chain of flawed decisions that caused the crash. In 2019, it was announced that a national memorial is to be installed in Parnell Rose Gardens, with a relative of one of the crash victims stating that it was the right place. However, local residents criticised the memorial's location, saying that it would "destroy the ambiance of the park". In popular culture A television miniseries, Erebus: The Aftermath, focusing on the investigation and the Royal Commission of Inquiry, was broadcast in New Zealand and Australia in 1988. The phrase "an orchestrated litany of lies" entered New Zealand popular culture for some years. 
"To quote a well-known phrase, there has been 'An orchestrated litany of lies'" The disaster features in the fifth season-two episode of The Weather Channel documentary Why Planes Crash. The episode is titled "Sudden Impact", and was first aired in January 2015. Official records Material related to the Erebus disaster and inquiry is held (with other Antarctica items from the Antarctic Division of the (former) Department of Scientific and Industrial Research (DSIR)) by Archives New Zealand, Christchurch. There are 168 record items, of which twelve are restricted access (7 photos, 4 audio cassettes and 1 file of newspaper clippings from Air New Zealand); see agency CAHU and accession CH282. Other files held by Archives New Zealand at Auckland, Wellington, Christchurch and Dunedin can be found by a keyword search for Erebus. These include files of the Royal Commission (Agency AASJ, accession W2802) and the New Zealand Police (Agencies AAAJ, BBAN; many are restricted). See also Aviation accidents and incidents List of New Zealand disasters by death toll List of disasters in Antarctica by death toll Sensory illusions in aviation Tourism in Antarctica Similar aircraft incidents American Airlines Flight 965, a flight which crashed into terrain after the pilots altered the coordinates. Aviateca Flight 901, a flight of the same number which also collided with a volcano. Prinair Flight 277 Ansett New Zealand Flight 703 New Zealand National Airways Corporation Flight 441 Notes References Further reading NZAVA Operation Deep Freeze – The New Zealand Story, 2002. Operation Overdue–NZAVA Archives 2002. C.H.N. L'Estrange, The Erebus enquiry: a tragic miscarriage of justice, Auckland, Air Safety League of New Zealand, 1995 Stuart Macfarlane, The Erebus papers: edited extracts from the Erebus proceedings with commentary, Auckland, Avon Press, 1991 Report of the Royal Commission to Inquire into the Crash on Mount Erebus, Antarctica of a DC10 Aircraft Operated by Air New Zealand Limited (66 Mb file), Wellington, Government Printer, 1981 (located at Archives New Zealand; item number ABVX 7333 W4772/5025/3/79-139 part 3) R Chippendale, Air New Zealand McDonnell-Douglas DC10-30 ZK-NZP, Ross Island, Antarctica 28 November 1979, Office of Air Accidents Investigation, New Zealand Ministry of Transport, Wellington, 1980 (only some parts there) Air New Zealand History Page, including a section about Erebus External links "New Zealand DC-10 lost with 257 on Antarctic sightseeing flight" the news of the crash as reported in Flight''   The Erebus Story – Loss of TE901 (includes Newspaper Articles and Video footage) – New Zealand Air Line Pilots' Association Aviation Safety Network: Transcript of flight 901 The original brochure advertising Air New Zealand flights to Antarctica Aircraft Accident Report No 79-139 Air New Zealand McDonnell-Douglas DC10-30 ZK-NZP Ross Island Antarctica 28 November 1979 – the official accident report ("The Chippendale Report") (audio file) ABC Radio National program "Ockham's Razor": "Arthur Marcell takes us through some of the events leading up to the crash and has a few questions for modern navigators." transcript NZ Special Service Medal (Erebus) 2006 Erebus disaster (NZHistory.net.nz)–includes previously unpublished images and sound files Erebus Aircraft Accident–Christchurch City Libraries Erebus for Kids–This site is for young school children to provide information about the Erebus Tragedy. 
Erebus Disaster: Lookout – official TV New Zealand YouTube site with programme on the Royal Commission enquiry into the crash. Erebus Memorial: Erebus Memorial Names – official New Zealand Ministry for Culture & Heritage memorial site. BBC News: The plane crash that changed New Zealand 1979 in Antarctica 1979 in New Zealand Airliner accidents and incidents involving controlled flight into terrain Aviation accidents and incidents in 1979 Accidents and incidents involving the McDonnell Douglas DC-10 Air New Zealand accidents and incidents Aviation accidents and incidents in Antarctica Aviation accidents and incidents in New Zealand Ross Island Aviation accident investigations with disputed causes 20th-century disasters in New Zealand November 1979 events in New Zealand History of the Ross Dependency
2648527
https://en.wikipedia.org/wiki/Video%20game%20accessory
Video game accessory
A video game accessory is a distinct piece of hardware that is required to use a video game console, or one that enriches the video game's play experience. Essentially, video game accessories are everything except the console itself, such as controllers, memory, power adapters (AC), and audio/visual cables. Most video game consoles come with the accessories required to play games out of the box (minus software): one A/V cable, one AC cable, and a controller. Memory is usually the most required accessory outside of these, as game data cannot be saved to compact discs. The companies that manufacture video game consoles also make these accessories for replacement purposes (AC cords and A/V cables) as well as for improving the overall experience (extra controllers for more players, or unique devices like light guns and dance pads). There is an entire industry of companies that create accessories for consoles as well, called third-party companies. Their prices are often lower than those of accessories made by the maker of the console (first-party). This is usually achieved by avoiding licensing fees or using cheaper materials. For mobile systems like the PlayStation Portable and the Game Boy iterations, there are many accessories to make them more usable in mobile environments, such as mobile chargers, lighting to improve visibility, and cases to both protect the system and help organize the collection of system peripherals. Newer accessories include many home-made items, such as mod chips to bypass manufacturer protections, or homemade software. Accessory types Game controllers The most common accessories for video game consoles are the controllers used to play the games. Controllers have evolved since the days of Pong and the spinner. Now there are directional controls as well as many other types of inputs. One type of directional control is the directional pad or D-Pad. The D-Pad is designed to look like an addition sign, with each branch being one of the four cardinal directions: left, right, up, and down. It has been around since the original Nintendo Entertainment System, and has been in every Nintendo system since. Another feature in recent console controllers is the analog stick, used for 360° directional control. Often used for camera angle control, the idea is to give the player full control by allowing any direction to be used. Analog controls made their first appearance in 1970s consoles under the name joystick; they made a reappearance in console systems with the Nintendo 64. Analog sticks have been used in every modern console since. There is no analog stick on the Wii Remote, but it is present on the Nunchuk attachment bundled with the Wii console. The most recent development in directional controls is free motion control. Using accelerometers in the Wii Remote and Nunchuk, acceleration in any direction can be detected and measured. This is still a new type of control scheme and is taken advantage of by the Wii as well as by the PlayStation 3's SIXAXIS controller and the PlayStation Move controller, which has tilt detection. While directional controls are one important part of controllers, there are also general inputs, usually in the form of buttons either on the front or on the top edges (shoulders) of the controller. These buttons are simply designed and are usually labeled with a color, shape, or letter for identification. 
The buttons can be used for simplistic one action ideas like jumping or performing some kind of generic mêlée attack, but they can also be used to string together combinations of maneuvers like a martial artist's attacks. In systems beginning with the Super Nintendo, buttons on the shoulders of the controller have become commonplace. In the case of the Xbox series of systems (and Sega Dreamcast), the shoulder buttons are shaped and used more like a gun trigger. Memory units Originally console games had no additional storage memory for saving game related data. During the Nintendo Entertainment System's time on the market, battery backed cartridge games that could retain a limited number of game files were introduced. When the original PlayStation was released it included support for an external memory source, called a memory card. Its purpose was to store important information about the games, such as game states or scoring info. That memory card used a memory of type EEPROM. To support the growing use of these cards in normal game play and the different amounts of data to be saved, larger memory cards were created. Other consoles also adopted the use of memory cards, such as the Sega Dreamcast, whose memory card was called the Visual Memory Unit or VMU. The Dreamcast’s memory unit was unique in that it had a monochrome LCD display, a D-pad, and two buttons. A large third party memory card market also sprung up, offering cards that were often much cheaper and larger than the official released memory cards. Some unique third party cards required extra software to access the cards, or possibly increase the data capacity by compressing the contained data. The Xbox system was sold with a new type of data storage for consoles: an internal hard drive to store information. The hard drive was 8 GB and was used as more than just a memory device. It was used in conjunction with the games to buffer some of the game data so that loading times were decreased. The hard drive also stored downloadable content from the Xbox Live service. Since the Xbox precedent, the PlayStation 2 had a hard drive accessory used with the Final Fantasy XI game to store character data. In the new generation of game consoles the PlayStation 3 is included with one of many different sized hard drives, depending on which model of the console is purchased, the 20 GB, 40 GB, 60 GB, and 80 GB models, or 120 or 250 GB for the PS3 Slim. The Xbox 360 launched with a 20 GB hard drive. After users started to complain of lack of space due to HD content, Microsoft released a 120 GB drive bundled with their Elite model, and available for individual sale. Audio/Video cables Console systems are played on a television. As such the systems need a way for the information they process to show up on the screen, and for the audio to be played through a sound system. Originally this was done through an antenna switch box. Later this was changed to a coaxial cable setup called a RF connector. It transmitted both the audio and video through one cable. Modern day systems are quite different though. All of the newer systems use industry standards now. One of those standards is RCA composite cables of single video and left and right audio. The video connector is usually colored yellow while the left and right audios are colored white and red respectively. Now that the newer systems are focusing on HD content and displaying in HD newer types of cable connections are needed. The basic are component cables with the RGB format. 
RGB stands for the three colors: red, green, and blue. This is the simplest form of HD cabling. It uses three cables for video, while still using two for audio. The newest form of HD transmission cable is the High-Definition Multimedia Interface, or HDMI, cable. HDMI cables transmit both the video and audio signals over a single cable. More importantly, the information transmitted is a digital signal. Other cables carry analog signals, so modern systems must convert to analog and then back to digital for use. Information is often lost in these conversions; HDMI keeps the signal digital throughout, so less information is lost and the signal stays truer to its source. These cables and signals are too new to be fully utilized, and are really only necessary for a 1080p signal. Cases Due to the modern video gaming market, an abundance of cases are available for both home and handheld game systems. Because portables often come in contact with the elements and are more prone to accidents, most owners use a case to protect their systems. Many such cases include soft padding to cradle the system while an outside shell provides protection from falls or environmental debris. Some trade off padding to make a thinner, more compact case. While the larger such cases usually provide better protection and more room for games and accessories, they can be rather bulky and difficult to carry in pockets or small bags. To a lesser degree, cases for home consoles also exist. Home console cases are usually designed as a backpack or briefcase and have enough room for the system, cables, controllers, and often, a few games. This allows gamers to easily transport traditionally stationary systems, making them more mobile and sharable. Home console video games come packaged inside of a DVD-style case. Portables use a smaller format, but both are usually larger than the actual game media. Therefore, one may find cases made to hold only the game media, thus saving space and protecting the disks or cartridges from the open environment or improper storage. They can hold one or several games. Larger cases and folders can hold a gamer's entire collection. Software accessories Most software accessories for a video game system are simply the games designed by professional game companies. With some of the recent systems, homemade games called homebrew are being designed and released publicly. For the Xbox 360 system, Microsoft has officially released tools called XNA so that users can design and distribute their own content; XNA was announced after the announcement of the Xbox Live Marketplace. Another type of homebrew is popular on the Nintendo DS portable gaming system. Through an outside hardware device, the internal software that is used to run games, the BIOS, is overwritten. Replacing the default BIOS with an unofficial BIOS allows user-generated content to be playable. These games range from simple ports of older games to truly new ideas and games. Sony also allowed for homebrew content on its new PlayStation 3 by allowing Linux to be installed. Linux makes many different types of content playable: because Linux is open source, almost anything can be made to run on the PS3, as long as users write the necessary software. Software is not the only way to make different content playable. Hardware devices called modchips allow for many different changes to the system. 
Because the chips require that you open the case to the system and attach them to the circuits, the chips void any warranty and are even considered illegal by some companies. Most modchips are designed to allow illegal copies of games to be played. They can also allow for access to hardware not normally contained in the system. Overall modchips can add interesting effects but can cause many complications from the possibility of breaking the system, from improper installation, and causing legal problems. Add-ons/peripherals Add-ons, also known as peripherals, are devices generally sold separately from the console, but which connect to the main unit to add significant new functionality. This may include devices that upgrade the hardware of a console to allow it to play more resource-intensive games, devices that allow consoles to play games on a different media format, or devices which fully change the function of a console from a game-playing device to something else. A hardware add-on differs from an accessory in that an accessory either adds functionality that is beneficial but nonessential for gameplay (like a Game Link Cable or Rumble Pak), or in some cases may only add aesthetic value (like a case mod or faceplate). Generally, a game designed for use with an accessory can still be played on a console without the compatible accessory, whereas a game designed for use with a peripheral can not be played on a console without the appropriate peripheral. NEC TurboGrafx-16 TurboExpress PC Engine LT PC Engine Shuttle TurboTap TurboGrafx-CD/CD-ROM² Super CD-ROM² Arcade CD-ROM² (JP only) Atari Jaguar Atari Jaguar CD Sega Mega Drive/Genesis Sega 32X Sega CD (aka Mega-CD) Sega Channel adapter Famicom/Nintendo Entertainment System Aladdin Deck Enhancer Famicom 3D System Famicom Data Recorder (JP only) Famicom Modem (JP only) Famicom Disk System (JP only) Super Famicom/Super Nintendo Entertainment System Satellaview (Japan only) Sufami Turbo (Japan only) Super Game Boy Super Game Boy 2 SNES-CD (Cancelled) Nintendo 64 Expansion Pak Nintendo 64DD (Japan only) Wide-Boy64 Nintendo GameCube Game Boy Player Game Boy Game Boy Camera Game Boy Color Singer IZEK sewing machine Game Boy Advance e-Reader Game Eye (Canceled) Play-Yan Xbox 360 & Xbox One Kinect Kinect Fun Labs Xbox 360 HD DVD Player PlayStation 4 PlayStation VR Third-party versus first-party Game accessories can be one of two types, first or third party. First party accessories are often very expensive for what they are. Because of this, many companies specialize in the production of similar products that perform the same functions. Most of these items are similar but cheaper. They come at lower costs for many reasons. Because these companies can avoid licensing fees, and did not have the same development costs, they can save money that way. They also may use lower quality materials in the production of the accessories, making them cheaper but usually more fragile and less trustworthy. Another common trait with third party accessories is a better value. While most first party accessories only have one version, only one kind of controller for example, many third party companies will expand upon the original product. As an example, Sony was not the first to release a memory card of twice the original’s capacity. Instead, many of the third party companies were able to release such a product first and get many sales from them. Trends in accessories One of the major trends is making everything wireless. 
Most systems have cables plugged in the front and back. While there are not many cables plugged into the system, if the system is close to the television system and audio system, then the cables might be quite extensive and very haphazardly arranged. By making most of the accessories wireless the goal is to cut down on the clutter. One of the problems with wireless accessories is power. There is currently no wireless power source for the accessory. So in most cases batteries of some fashion are required. A problem with this can be the power requirements of the regular function of the accessory as well as the components used to maintain the wireless connection. Often the power draw on these batteries can be too much for it to be cost effective to power these devices without rechargeable batteries. A solution to this is a rechargeable controller station. Another trend developing is that of a controller unique to a specific game. An example of such being the recent guitar controller for Red Octane's Guitar Hero game. These controllers are unusual because they can only be used for one game. The default game controllers are usable for many types of games, so a controller that is only usable for one seems to be counter intuitive. However, with games as popular as Guitar Hero sometimes the controller can even inspire third party replicas. Toys-to-Life games not only provide physical toys for the player, but these toys also function as part of the ingame experience through either near field communication or some sort of image recognition. One problem partly enhanced by the uptake of wireless technology is that the user is still forced to stand up and leave their seat in order to control certain aspects of the game system. In order to alleviate that issue many recent consoles include features that allow powering on, off, and resetting the machine remotely. References
8839818
https://en.wikipedia.org/wiki/ReadSoft
ReadSoft
ReadSoft was a global provider of applications for automating business processes. ReadSoft was founded by two university students in Linköping, Sweden, in 1991. The company was headquartered in Helsingborg, Sweden, and its shares were traded on the NASDAQ OMX Stockholm Small Cap list. ReadSoft had operations in 17 countries and a partner network in an additional 70 nations. In July 2017, private equity and growth capital firm Thoma Bravo acquired and combined ReadSoft, Kofax, and Perceptive Software into an independent portfolio company named Kofax. Company operations ReadSoft’s specialties included accounts payable automation, accounts receivable automation, sales order processing, and digital mailroom solutions. According to the company, ReadSoft’s on-premises and cloud solutions for document process automation enabled some of the world’s largest corporations, as well as small and medium businesses, to compete and thrive by improving customer and supplier satisfaction, increasing operating efficiency, and providing greater visibility into business processes. ReadSoft was regarded as an "AP invoice specialist" (AP is a common abbreviation of accounts payable) by Gartner, an independent information technology research and advisory company. Specialist software is designed to integrate with the AP software from ERP systems, such as Oracle and SAP. For example, ERP integration may take advantage of single-sign-on processes, eliminating the need for employees to juggle multiple logons and systems. With more than 12,000 customers, ReadSoft claimed to be the market leader of the Document Process Automation segment – a term first coined by ReadSoft itself to cover the technology that automates the processing of business documents. It involves data capture and extraction (optical character recognition) from paper and electronic documents, integration with a company's existing ERP system, and routing to company agents for approvals and problem resolution via workflow. In 2006, ReadSoft merged with Ebydos AG of Frankfurt am Main, Germany, a dedicated SAP specialist team that developed Process Director, formerly known as the Invoice Cockpit Suite. In addition to its SAP competency centers, ReadSoft also maintained Oracle competency/development centers, which provided integrated solutions for companies with SAP and Oracle systems. Market analyst firm Harvey Spencer Associates reported ReadSoft as having a 13% market share. In May 2014, US company Lexmark International, Inc. announced a bid to buy out all remaining shares in ReadSoft. The deal was completed in late 2014 and ReadSoft then operated as part of Lexmark's Enterprise Software (formerly Perceptive Software) division. In July 2017, Thoma Bravo acquired Lexmark’s Enterprise Software business, which consisted of three entities: Kofax, ReadSoft, and Perceptive Software. Following this, Kofax and ReadSoft were combined into a single, newly independent Thoma Bravo portfolio company named Kofax. References Companies established in 1991 Software companies of Sweden Optical character recognition 1991 establishments in Sweden
3977950
https://en.wikipedia.org/wiki/MusicBrainz%20Picard
MusicBrainz Picard
MusicBrainz Picard is a free and open-source software application for identifying, tagging, and organising digital audio recordings. It was developed by the MetaBrainz Foundation, a non-profit company that also operates the MusicBrainz database. Picard identifies audio files and Compact Discs by comparing either their metadata or their acoustic fingerprints with records in the database. Audio file metadata (or "tags") are a means for storing information about a recording in the file. When Picard identifies an audio file, it can add new information to it, such as the recording artist, the album title, the record label, and the date of release. In some cases, it can also add more detailed information, such as lists of performers and their instruments. The source of this information is the MusicBrainz database, which is curated by volunteers. The more information the database has about a recording, the more Picard can embed in users' audio files. MusicBrainz Picard has tag editing features, and is extensible with plug-ins. It is named for Captain Jean-Luc Picard, a character in the US television series Star Trek: The Next Generation. Development Picard began as a tag editor called the MusicBrainz Tagger, which was the work of MusicBrainz founder Robert Kaye and other volunteers. It was developed in the Python programming language, and ran only on Microsoft Windows operating systems. This early incarnation of the program could identify songs based on tags or MusicDNS acoustic fingerprints. However, Kaye saw that it needed cosmetic and functional improvements. Streaming media company RealNetworks took an interest in MusicBrainz, and gave the developers a grant to improve the Tagger software. As a sponsor of the development project, RealNetworks asked Kaye to come up with a project code name. Since Kaye was trying to make a "next-generation tagger", he thought of the science fiction television series Star Trek: The Next Generation, in which Patrick Stewart plays the role of Captain Jean-Luc Picard. Although Kaye intended the name Picard to be temporary, MusicBrainz Picard remains the official name of the program. With funding from RealNetworks, MusicBrainz developers designed a new user interface for Picard. When the new software identified tracks, it grouped them by album in a collapsible tree view. The developers also switched from a software library called wxPython to another called PyQt, and ported Picard to the operating systems Linux and macOS. In 2009, Picard's developers replaced the MusicDNS acoustic fingerprinting system with AcoustID. In 2017, Picard's development resumed, led by Sambhav Kothari. Progressing at a rapid pace, its back-end underwent a lot of changes, along with the switch to Python 3, PyQt5 and a new and improved UI. Picard's development version was also made available on PyPi, supporting Windows, Linux and macOS. Supported file formats Picard supports these audio file formats: References External links MusicBrainz Picard entries in the MusicBrainz Blog Online music database clients Picard Acoustic fingerprinting Tag editors Multimedia software for Linux Windows multimedia software MacOS multimedia software Free software programmed in Python Audio software that uses Qt
17093146
https://en.wikipedia.org/wiki/Jeff%20Raikes
Jeff Raikes
Jeffrey Scott Raikes (born May 29, 1958) is the co-founder of the Raikes Foundation. He retired from his role as the chief executive officer of the Bill & Melinda Gates Foundation in 2014. He serves on the boards of Giving Tech Labs, Hudl Costco Wholesale, the Jeffrey S. Raikes School of Computer Sciences and Management at the University of Nebraska-Lincoln, and the Microsoft Alumni Network. He is Chair of the Stanford University Board of Trustees. Until early 2008, Raikes was the President of the Microsoft Business Division and oversaw the Information Worker, Server & Tools Business and Microsoft Business Solutions Groups. He joined Microsoft in 1981 as a product manager. He retired from Microsoft in September 2008, after a transitional period, to join the Gates Foundation. Early life Raikes grew up in Ashland, Nebraska, graduating from Ashland-Greenwood High School in 1976. Raikes prepared to work for the US Department of Agriculture on agricultural policy while earning his Bachelor of Science degree in engineering-economics systems from Stanford University. It was while at Stanford that Raikes had his first exposure to computing, learning Pascal on a DEC System 20. The first computer he bought was an Apple II, which he used to help his brother, Ron Raikes, manage the family farm. Career He joined Apple Computer as the VisiCalc Engineering Manager in 1980. He worked at Apple for fifteen months before being recruited to Microsoft by Steve Ballmer in 1981 as a product manager. He was promoted to director of applications marketing in 1984 and was the chief strategist behind Microsoft's investments in graphical applications for the Apple Macintosh and the Microsoft Windows operating system. In this role, he drove the product strategy and design of Microsoft Office. Raikes was promoted to vice president of Office Systems, where he was responsible for development and marketing of word processing, workgroup applications and pen computing. Raikes later held roles managing North American operations, and worldwide sales, marketing, and services. In 2000, he was appointed to lead Productivity and Business Services, which later became the Information Worker business at Microsoft. He was named a company president in 2005. On May 12, 2008, it was announced that Raikes would replace Patty Stonesifer as the CEO of the Bill & Melinda Gates Foundation; he stepped down in favor of Susan Desmond-Hellmann on May 1, 2014. Sporting ventures In 1992 the Pacific Northwest was in danger of losing the Seattle Mariners Major League Baseball franchise. Raikes joined with other local business leaders to purchase the team, keeping them "safe at home" for the enjoyment of Northwest baseball fans. Philanthropy Raikes has a wide range of philanthropic interests. He and his wife are co-founders of the Raikes Foundation. He is a trustee at the University of Nebraska Foundation, funded a professorship in agronomy, and was a designer of the University of Nebraska–Lincoln Jeffrey S. Raikes School of Computer Science and Management. He has been active in United Way for several years. In 2006–2007, he co-chaired the annual campaign of United Way of King County with his wife, setting a national record for funds raised. Raikes is a major donor to the Center for Comparative Studies in Race and Ethnicity at Stanford University where he also established the Jeff and Tricia Raikes Undergraduate Scholarship Fund to ensure that students admitted to Stanford from rural and inner-city schools have an opportunity to attend the university. 
He was selected as a trustee of the university in 2012 and in 2017 was elected chair. In June 2008, Raikes donated approximately $10 million to the University of Nebraska's JD Edwards Honors Program, which officially changed its name to the Jeffrey S. Raikes School of Computer Science and Management shortly thereafter. In July 2017, Jeff Raikes and his wife Tricia Raikes inspired the creation of Giving Tech Labs and joined their board. Personal life Raikes and his wife, Tricia Raikes née McGinnis, have three children: Michaela, Connor, and Gillian. References External links Raikes Foundation Jeffrey S. Raikes School of Computer Science and Management Giving Tech Labs 1958 births Living people American computer businesspeople American computer programmers American philanthropists Bill & Melinda Gates Foundation people Businesspeople in software Microsoft employees People from Ashland, Nebraska Stanford University alumni Stanford University trustees
480412
https://en.wikipedia.org/wiki/BiiN
BiiN
BiiN was a company created out of a joint research project by Intel and Siemens to develop fault tolerant high-performance multi-processor computers build on custom microprocessor designs. BiiN was an outgrowth of the Intel iAPX 432 multiprocessor project, ancestor of iPSC and nCUBE. The company was closed down in October 1989, and folded in April 1990, with no significant sales. The whole project was considered within Intel to have been so poorly managed that the company name was considered to be an acronym for Billions Invested In Nothing. However, several subset versions of the processor designed for the project were later offered commercially as versions of the Intel i960, which became popular as an embedded processor in the mid-1990s. History BiiN began in 1982 as Gemini, a research project equally funded by Intel and Siemens. The project's aim was to design and build a complete system for so-called "mission critical" computing, such as on-line transaction processing, industrial control applications (such as managing nuclear reactors), military applications intolerant of computer down-time, and national television services. The central themes of the R&D effort were to be transparent multiprocessing and file distribution, dynamically switchable fault tolerance, and a high level of security. Siemens provided the funding through its energy division UBE (Unternehmensbereich Energietechnik), who had an interest in fault tolerant computers for use in nuclear installations, while Intel provided the technology, and the whole project was organised with alternate layers of Siemens and Intel management and engineers. Siemens staff stemmed from its various divisions, not just UBE (where the project unit was called E85G). The core development labs were located on an Intel site in Portland, OR, but there were also Siemens labs in Berlin, Germany, (Sietec Systemtechnik, Maxim Ehrlich's team creating the Gemini DBMS), Vienna, Austria, Princeton, New Jersey (United States) and also Nuremberg, Germany, involved in the development. Since neither Siemens nor Intel could see how to market this new architecture if it were broken up, in 1985 the project became BiiN Partners, and in July 1988 was launched as a company wholly owned by Intel and Siemens. A second company wholly owned by Intel, called BiiN Federal Systems, was also created in order to avoid Foreign Ownership and Controlling Interest (FOCI) problems in selling to the US government. Intel owned all the silicon designs which were licensed to Siemens, while Siemens owned all the software and documentation and licensed them to Intel. BiiN aimed their designs at the high-end fault tolerant market, competing with Tandem Computers and Stratus Computer, as opposed to the parallel processing market, where Sequent Computer Systems, Pyramid Technology, Alliant Computer Systems and others were operating. In order to compete here they had to make sure their first designs were as powerful as the best from the other vendors, and by the time such a system was ready both Intel and Siemens had spent about 300 million with no shipping units. In 1989 Siemens underwent a reorganization, which brought UBEs own computer division into the mix. They had long been working with Sequent Computer Systems, and were sceptical that the BiiN systems would deliver anything that the Sequent systems could not. Eventually Intel and Siemens could not agree on further funding, and the venture ended. 
Several pre-orders on the books were cancelled, and the technology essentially disappeared. With the closing of the project, Intel used the basic RISC core of the CPU design as the basis for the i960 CPU. For this role most of the "advanced" features were removed, including the complex tagged memory system, task control system, most of the microcode and even the FPU. The result was a "naked" core, useful for embedded processor use. Before Intel switched to the StrongARM for the embedded role in the late 1990s, the i960 was one of Intel's most popular products. One odd historical footnote is that Hughes Aircraft had licensed the silicon designs for use in the Advanced Tactical Fighter (now the F-22 Raptor), where the design apparently continues to be used today. Description Key to the BiiN system was the 960 MX processor, essentially a RISC-based version of the earlier i432. Like the i432, the 960 MX included tagged memory for complete memory protection even within programs (as opposed to most CPUs, which offer protection only between programs), a full set of instructions for task control, and complex microcode to run it all. Unlike the i432, the 960 MX had fairly good performance, mostly as a side effect of dramatically reducing the complexity of the core instruction set, integration of all CPU functions on a single chip, and including an FPU. The CPUs were hosted on cards that included an I/O support CPU and 8 to 16 MB of RAM. Two systems were designed: the BiiN 20 was an entry-level machine with one or two processors and an interesting battery-backed disk cache. The larger BiiN 60 was similar, but supported up to eight CPUs. Both machines could be used in larger multi-machine systems. One interesting feature of the BiiN was that the CPU sets could be used to provide either fault tolerance, as in the Tandem systems, or parallel processing, as in the Pyramid and Sequent systems. This allowed users to tailor their systems to their needs, even on the fly. The BiiN systems also provided two versions of fault tolerance. In fault-checking mode, processors were paired so that they could check one another's calculations. In the event of an error, the processors would stop, and the circuitry would determine which was faulty. This processor would then be excluded from the system, and the computer would restart. In continuous operation mode the fault-checking pairs were duplicated, so that if an error occurred the second pair could immediately take over the calculations. Also of historical note was that the operating system (OSIRIS), applications, development tools, and every other piece of BiiN software was written exclusively in Ada — perhaps the largest non-military use of that programming language. There was a command line interpreter, CLI, which resembled the functionality of command shells that appeared only a couple of years later, with features such as editable history. Documentation for Gemini was done in troff with a project-proprietary set of macros or with the Scribe markup language. Development for Gemini happened on VAXes running BSD Unix. References External links BiiN CPU Architecture Reference Manual (describes i960MX instruction set) BiiN documentation at bitsavers.org 1982 establishments in Oregon 1990 disestablishments in Oregon American companies established in 1982 American companies disestablished in 1990 Computer companies established in 1982 Computer companies disestablished in 1990 Defunct companies based in Oregon Defunct computer companies of the United States Intel Siemens
5440905
https://en.wikipedia.org/wiki/Winlink
Winlink
Winlink, or formally, Winlink Global Radio Email (registered US Service Mark), also known as the Winlink 2000 Network, is a worldwide radio messaging system that uses amateur-band radio frequencies and government frequencies to provide radio interconnection services that include email with attachments, position reporting, weather bulletins, emergency and relief communications, and message relay. The system is built and administered by volunteers and is financially supported by the Amateur Radio Safety Foundation. Network Winlink networking started by providing interconnection services for amateur radio (also known as ham radio). It is well known for its central role in emergency and contingency communications worldwide. The system used to employ multiple central message servers around the world for redundancy, but in 2017–2018 upgraded to Amazon Web Services that provides a geographically-redundant cluster of virtual servers with dynamic load balancers and global content-distribution. Gateway stations have operated on sub-bands of HF since 2013 as the Winlink Hybrid Network, offering message forwarding and delivery through a mesh-like smart network whenever Internet connections are damaged or inoperable. During the late 1990s and late 2000s, it increasingly became what is now the standard network system for amateur radio email worldwide. Additionally, in response to the need for better disaster response communications in the mid to later part of the 2000s, the network was expanded to provide separate parallel radio email networking systems for MARS, UK Cadet, Austrian Red Cross, the US Department of Homeland Security SHARES HF Program, and other groups. Amateur radio HF e-mail Generally, e-mail communications over amateur radio in the 21st century is now considered normal and commonplace. E-mail via high frequency (HF) can be used nearly everywhere on the planet, and is made possible by connecting an HF single sideband (SSB) transceiver system to a computer, modem interface, and appropriate software. The HF modem technologies include PACTOR, Winmor(deprecated), ARDOP, Vara HF, and Automatic Link Establishment (ALE). VHF/UHF protocols include AX.25 Packet and Vara FM. Amateur radio HF e-mail guidelines Amateur radio users in each country follow the appropriate regulatory guidelines for their license. Some countries may limit or regulate types of amateur messaging (such as e-mail) by content, origination location, end destination, or license class of the operator. Origination of third party messages (messages sent on behalf of, or sent to, an end destination who is not an amateur operator) may also be regulated in some countries; those that limit such third party messages normally have exceptions for emergency communications. In accordance with long standing amateur radio tradition, international guidelines and FCC rules section 97.113, hams using the Winlink system are advised that it is not appropriate to use it for business communications. Users The Winlink system is open to properly licensed amateur radio operators. The system primarily serves radio users without normal access to the internet, government and non-government public service organizations, medical and humanitarian non-profits, and emergency communications organizations. Duly authorized MARS operators may utilize the MARS part of the system. As of July 2008, there were approximately 12,000 radio users and approximately 100,000 internet correspondents. Monthly traffic volume averages over 100,000 messages. 
For offshore cruising yachtspeople Winlink is widely used as an alternative, or alongside, Sailmail, which is an HF PACTOR based email system using marine HF frequencies rather than amateur. As well as email the service uses a system called Saildocs, which allows cruisers to retrieve meteorological, maritime safety and other crucial files over email. Supported radio technologies 802.11 "WiFi" ALE (Automatic Link Establishment) APRS (Automatic Packet Reporting System) AX.25 Packet Radio D-Star PACTOR PACTOR-II PACTOR-III PACTOR-IV WINMOR(Deprecated) ARDOP Vara HF Vara FM TCP/IP (Telnet and other Wireless Technologies) Technical protocols PACTOR-I, WINMOR(deprecated), ARDOP, Vara HF & FM, HSMM (WiFi), AX.25 packet, D-Star, TCP/IP, and ALE are non-proprietary protocols used in various RF applications to access the Winlink network systems. Later versions of PACTOR are proprietary and supported only by commercially available modems from Special Communications Systems GmbH. In amateur radio service, AirMail, Winlink Express, and other email client programs used by the Winlink system, disable the proprietary compression technology for PACTOR-II, PACTOR-III, and PACTOR-IV modems and instead relies on the open FBB protocol, also widely used worldwide by packet radio BBS forwarding systems. Controversies and US regulatory issues In May 1995, the American Radio Relay League (ARRL) privately asked the FCC to change Part 97.309(a) to allow fully documented G-TOR, Clover, and original open source PacTOR (Pactor I) modes. The FCC granted this request in DA-95-2106 based on the ARRL's representation that it had worked with developers to ensure complete technical documentation of these codes were available to all amateur radio operators. However, subsequent versions of Pactor contained proprietary compression algorithms that prevent over-the-air interception. In 2007, a US amateur radio operator filed a formal petition with the Federal Communications Commission (FCC) aimed at reducing the signal bandwidth in automatic operation subbands; but, in May 2008 FCC ruled against the petition. In the Official Order, FCC said, "Additionally, we believe that amending the amateur service rules to limit the ability of amateur stations to experiment with various communications technologies or otherwise impeding their ability to advance the radio art would be inconsistent with the definition and purpose of the amateur service. Moreover, we do not believe that changing the rules to prohibit a communications technology currently in use is in the public interest." In 2013, the FCC ruled in Report and against the use of encryption in the US amateur radio bands for any purpose, including emergency communications. The FCC cited the need for all amateur radio communications to be open and unobscured, to uphold the Commission's long-standing requirement that the service be able to police itself. In spite of FCC rulings and , Winlink advocates continue to use the proprietary versions of Pactor and other undocumented data formats that cannot be eavesdropped, and continue to press the FCC for encrypted data transmissions in amateur radio, as exemplified in a Winlink petition to the FCC for legalized encryption of the US amateur spectrum while seeking broader spectrum allocations in response to the Puerto Rico hurricanes of 2017. 
Opponents warn that the continual lack of enforcement by the FCC and continued allowance of "effectively encrypted" e-mail traffic in the amateur bands is a national security threat, and ham operators have written to the US Congress about the threat. The Board of Directors of the Amateur Radio Safety Foundation, Inc. has written FCC Chairman Ajit Pai, other Federal Communications Commission members, and FCC administrators to correct inaccuracies in opponents' claims. See also Amateur radio emergency communications Automatic Link Establishment PACTOR Winmor Footnotes References External links The official Winlink Web Site Winlink Research Project Winlink Tutorial Winlink wide-area HF MESH network Introduction to RMS Express Winlink client program Guida italiana completa per l'uso di RMS Express /-/ Winlink 2000 The Wiki for Pat - a cross platform Winlink client Guia rápida en Español de introducción a la Red WL2K, Winmor y uso del RMS Express, (Spanish White Paper) Packet radio
1377892
https://en.wikipedia.org/wiki/Pentium%20F00F%20bug
Pentium F00F bug
The Pentium F00F bug is a design flaw in the majority of Intel Pentium, Pentium MMX, and Pentium OverDrive processors (all in the P5 microarchitecture). Discovered in 1997, it can result in the processor ceasing to function until the computer is physically rebooted. The bug has been circumvented through operating system updates. The name is shorthand for F0 0F C7 C8, the hexadecimal encoding of one offending instruction. More formally, the bug is called the invalid operand with locked CMPXCHG8B instruction bug. Description In the x86 architecture, the byte sequence F0 0F C7 C8 represents the instruction lock cmpxchg8b eax (locked compare and exchange of 8 bytes in register EAX). The bug also applies to opcodes ending in C9 through CF, which specify register operands other than EAX. The F0 0F C7 C8 instruction does not require any special privileges. This instruction encoding is invalid. The cmpxchg8b instruction compares the value in the EDX and EAX registers with an 8-byte value in a memory location. In this case, however, a register is specified instead of a memory location, which is not allowed. Under normal circumstances, this would simply result in an exception; however, when used with the lock prefix (normally used to prevent two processors from interfering with the same memory location), the CPU erroneously uses locked bus cycles to read the illegal instruction exception-handler descriptor. Locked reads must be paired with locked writes, and the CPU's bus interface enforces this by forbidding other memory accesses until the corresponding writes occur. As none are forthcoming, after performing these bus cycles all CPU activity stops, and the CPU must be reset to recover. Due to the proliferation of Intel microprocessors, the existence of this open-privilege instruction was considered a serious issue at the time. Operating system vendors responded by implementing workarounds that detected the condition and prevented the crash. Information about the bug first appeared on the Internet on or around 8 November 1997. Since the F00F bug has become common knowledge, the term is sometimes used to describe similar hardware design flaws such as the Cyrix coma bug. No permanent hardware damage results from executing the F00F instruction on a vulnerable system; it simply locks up until rebooted. However, loss of unsaved data is likely if the disk buffers have not been flushed, if drives were interrupted during a write operation, or if some other non-atomic operation was interrupted. The B2 stepping solved this issue for Intel's Pentium processors. The F00F instruction can be considered an example of a Halt and Catch Fire (HCF) instruction.
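The offending sequence is short enough to unpack by hand. The sketch below is purely illustrative (Python is used here only as a neutral notation, and the bytes are merely printed, never executed, since running them on an unpatched P5-class Pentium would hang the processor):

```python
# The four bytes behind the F00F bug, annotated. They are only printed here;
# executing them on an unpatched P5-class Pentium would lock up the processor.
f00f = bytes([0xF0, 0x0F, 0xC7, 0xC8])

# 0xF0        -> LOCK prefix
# 0x0F 0xC7   -> two-byte CMPXCHG8B opcode (legal only with a memory operand)
# 0xC8        -> ModRM 11 001 000: mod=11 selects a register operand, reg=001
#                is the /1 opcode extension for CMPXCHG8B, and r/m=000 names
#                EAX, giving the invalid "lock cmpxchg8b eax"; ModRM bytes
#                0xC9 through 0xCF name ECX through EDI and trigger the same fault.
print(f00f.hex(" "))  # prints: f0 0f c7 c8
```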
Specifically, the table of interrupt descriptors, which normally resides on a single memory page, is instead split over two pages such that the descriptors for the first seven exception handlers reside on a page, and the remainder of the table on the following page. The handler for the undefined opcode exception is then the last descriptor on the first page, while the handler for the page-fault exception resides on the second page. The first page can now be made not-present (usually signifying a page that has been swapped out to disk to make room for some other data), which will force the processor to fetch the descriptor for the page-fault exception handler. This descriptor, residing on the second page of the table, is present in memory as usual (if it were not, the processor would double- and then triple-fault, leading to a shutdown). These extra memory cycles override the memory locking requirement issued by the original illegal instruction (since faulting instructions are supposed to be able to be restarted after the exception handler returns). The handler for the page-fault exception has to be modified, however, to cope with the necessity of providing the missing page for the first half of the interrupt descriptor table, a task it is not usually required to perform. The second official workaround from Intel proposed keeping all pages present in memory, but marking the first page read-only. Since the originating illegal instruction was supposed to issue a memory write cycle, this is enough for again forcing the intervention of the page-fault handler. This variant has the advantage that the modifications required to the page-fault handler are very minor compared to the ones required for the first variant; it basically just needs to redirect to the undefined-exception handler when appropriate. However, this variant requires that the operating system itself be prevented from writing to read-only pages (through the setting of a global processor flag), and not all kernels are designed this way; more recent kernels in fact are, since this is the same basic mechanism used for implementing copy-on-write. Additional workarounds other than the official ones from Intel have been proposed; in many cases these proved as effective and much easier to implement. The simplest one involved merely marking the page containing interrupt descriptors as non-cacheable. Again, the extra memory cycles that the processor was forced to go through to fetch data from RAM every time it needed to invoke an exception handler appeared to be all that was needed to prevent the processor from locking up. In this case, no modification whatsoever to any exception handler was required. And, although not strictly necessary, the same split of the interrupt descriptor table was performed in this case, with only the first page marked non-cacheable. This was for performance reasons, as the page containing most of the descriptors (and the ones more often required, in fact) could stay in cache. For unknown reasons, these additional, unofficial workarounds were never endorsed by Intel. It might be that it was suspected that they might not work with all affected processor versions. See also CMPXCHG8B Denial-of-service attack Pentium FDIV bug Meltdown (security vulnerability) Spectre (security vulnerability) References Further reading External links Intel Pentium erratum Microsoft Knowledge Base article F00F CVE bug entry X86 architecture Denial-of-service attacks Hardware bugs Computer folklore 1997 in computing
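The register-operand encodings described above can be illustrated with a small C program that merely prints the byte sequences F0 0F C7 C8 through F0 0F C7 CF together with the register each ModRM byte selects (a sketch for illustration only; printing the bytes is harmless, whereas actually executing them on an unpatched P5-class Pentium would hang the processor):

    #include <stdio.h>

    int main(void)
    {
        /* Standard x86 register order encoded by the low three ModRM bits. */
        const char *regs[8] = { "eax", "ecx", "edx", "ebx",
                                "esp", "ebp", "esi", "edi" };
        for (int r = 0; r < 8; r++) {
            /* lock prefix, two-byte opcode 0F C7, ModRM C8+r (invalid register form) */
            unsigned char bytes[4] = { 0xF0, 0x0F, 0xC7, (unsigned char)(0xC8 + r) };
            printf("%02X %02X %02X %02X  lock cmpxchg8b %s\n",
                   bytes[0], bytes[1], bytes[2], bytes[3], regs[r]);
        }
        return 0;
    }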
23722472
https://en.wikipedia.org/wiki/Open%20Windows%20Foundation
Open Windows Foundation
Open Windows Foundation is a US-registered 501(c)(3) non-profit organization focusing on youth education and programming in San Miguel Dueñas, Guatemala. The center was founded in 2001 by Ericka Kaplan, Jean Uelmen, and Teresa Quiñonez and now serves over 1,000 members of the Dueñas community. Mission To provide education and community development opportunities for children and families in Guatemala. History The Foundation first opened in 2001 with an enrollment of 20 children assembling in a small room. The original 300 books for the center were donated from a library project that had recently closed in Rio Dulce, Guatemala. The Foundation’s primary function was to act as a dynamic library with the aim of getting kids to read for pleasure. Over the years, Open Windows has continued to accumulate students, books, funds, and other resources through the personal networking of the founders and the community. In 2003, Rotary International donated 10 computers to Open Windows. This donation allowed Open Windows to expand its services by offering computer classes and by allowing students to use computers for homework. Open Windows also launched its initial website in 2003, which allowed for greater publicity. In September 2005, a formal library room was built and is now filled with over 12,000 books including picture, fiction, non-fiction, and reference books. Two years later, in 2007, an impending donation from Rotary International for an additional 10 computers prompted the building of a second-story computer center at Open Windows. The center was completed in April 2007. By this time, the mission had been expanded to include enhancing technology skills and providing educational programming and tutoring for students. Programs Open Windows currently provides multiple programs: the after school program, the activities program, the computer center, a scholarship program, a pre-school introduction to learning, installation of eco-stoves and house construction. After School Program The after school program allows the students time to finish their homework with the supervision and assistance of the seven teachers at Open Windows Foundation. Because of the close relationship between Open Windows and the local schools, homework for the students is often designed with the specific resources of Open Windows in mind. Activities Program The activities program is designed for the students to participate in an interactive reading and learning activity as a group. Every afternoon the teachers at Open Windows Foundation use a book read aloud to ground an activity designed to emphasize reading, writing, creative, listening, and thinking skills. Computer Center The computer center was completed in 2007 and is composed of 20 computers donated by Rotary International. Classes on various computer programs are provided for children and adults. In addition to the classes, 10 of the computers are now equipped with internet access for research and other educational purposes. The computer center is always in high demand as it provides the only public-access computers in the area. Scholarship Program Thanks to donations from Tom Sullivan, a scholarship program was set up in 2003 to enable motivated students from low-income families to go to high school. Most students in San Miguel Dueñas cannot afford to attend high school because they do not have the money to pay for the tuition, books, and uniform costs. In Guatemala, the government does not pay these costs.
The scholarship program had three students in its first year, but Open Windows has since sponsored more than 400 scholarships. This year there are 30 scholarship students. The chosen, talented students are given funds to cover the costs of uniforms, books, and transportation. In return, they are asked to keep Open Windows informed of their progress. The foundation finds that high school graduates, in turn, help their brothers and sisters attend school. Open Windows also helps graduates in preparing for new job interviews, which includes writing a Curriculum Vitae, and hopes to place these students in the local job market. In return, Open Windows scholarship students or family members take turns cleaning the library early each morning. In 2011, Open Windows started a community service program. During vacation the scholarship students come to the learning center to help small groups of primary children with math and reading skills. This, in turn, helps them learn to teach, work with others and speak in front of a group. A donation of $500 sends a scholarship recipient to high school for one year. Pre-school Introduction to Learning A more recent program involves five- and six-year-old children who are not enrolled in school but who started coming to the center wanting to learn. Open Windows' teachers began giving them basic instruction in reading and simple arithmetic and in time the number of youngsters attending grew. When the children participate in this program, the parents agree to send the children to school the following year. Eco-stoves Many Guatemalan families cook their meals on open fires inside their houses. This fills the houses with smoke and requires the families to spend a lot of time or a lot of money obtaining firewood. Eco-stoves reduce the wood required for cooking by about two-thirds and they send the smoke out of the house through a chimney. Open Windows, in alliance with the Canadian organization Developing World Connections, installs new eco-stoves in homes all around San Miguel Dueñas, saving families money and reducing health problems. House Construction Again in collaboration with Developing World Connections, Open Windows builds houses for families in the area. While many families live in homes made of sheet metal with a dirt floor, these new houses are made of concrete blocks and have cement floors, which allow the families to keep them much cleaner. Volunteers work with masons hired by Open Windows to construct one-, two- or three-room houses, which represent a major improvement in the lives of the families. References Foundations based in Guatemala
429709
https://en.wikipedia.org/wiki/Message%20queue
Message queue
In computer science, message queues and mailboxes are software-engineering components typically used for inter-process communication (IPC), or for inter-thread communication within the same process. They use a queue for messaging – the passing of control or of content. Group communication systems provide similar kinds of functionality. The message queue paradigm is a sibling of the publisher/subscriber pattern, and is typically one part of a larger message-oriented middleware system. Most messaging systems support both the publisher/subscriber and message queue models in their API, e.g. Java Message Service (JMS). Remit and ownership Message queues implement an asynchronous communication pattern between two or more processes/threads whereby the sending and receiving party do not need to interact with the message queue at the same time. Messages placed onto the queue are stored until the recipient retrieves them. Message queues have implicit or explicit limits on the size of data that may be transmitted in a single message and the number of messages that may remain outstanding on the queue. Remit Many implementations of message queues function internally within an operating system or within an application. Such queues exist for the purposes of that system only. Other implementations allow the passing of messages between different computer systems, potentially connecting multiple applications and multiple operating systems. These message queuing systems typically provide resilience functionality to ensure that messages do not get "lost" in the event of a system failure. Examples of commercial implementations of this kind of message queuing software (also known as message-oriented middleware) include IBM MQ (formerly MQ Series) and Oracle Advanced Queuing (AQ). There is a Java standard called Java Message Service, which has several proprietary and free software implementations. Real-time operating systems (RTOSes) such as VxWorks and QNX encourage the use of message queuing as the primary inter-process or inter-thread communication mechanism. This can result in integration between message passing and CPU scheduling. Early examples of commercial RTOSes that encouraged a message-queue basis to inter-thread communication also include VRTX and pSOS+, both of which date to the early 1980s. The Erlang programming language uses processes to provide concurrency; these processes communicate asynchronously using message queuing. Ownership The message queue software can be either proprietary, open source or a mix of both. It is then run either on-premises on private servers or on external cloud servers (as a message queuing service). Proprietary options have the longest history, and include products from the inception of message queuing, such as IBM MQ, and those tied to specific operating systems, such as Microsoft Message Queuing (MSMQ). Cloud service providers also provide their proprietary solutions such as Amazon Simple Queue Service (SQS), StormMQ, Solace, and IBM MQ. Open source choices of messaging middleware systems include Apache ActiveMQ, Apache Kafka, Apache Qpid, Apache RocketMQ, Enduro/X, JBoss Messaging, JORAM, RabbitMQ, Sun Open Message Queue, and Tarantool. Examples of hardware-based messaging middleware vendors are Solace, Apigee, and IBM MQ. Usage In a typical message-queueing implementation, a system administrator installs and configures message-queueing software (a queue manager or broker), and defines a named message queue. Alternatively, they register with a message queuing service.
An application then registers a software routine that "listens" for messages placed onto the queue. Second and subsequent applications may connect to the queue and transfer a message onto it. The queue-manager software stores the messages until a receiving application connects and then calls the registered software routine. The receiving application then processes the message in an appropriate manner. There are often numerous options as to the exact semantics of message passing, including: Durability – messages may be kept in memory, written to disk, or even committed to a DBMS if the need for reliability indicates a more resource-intensive solution. Security policies – which applications should have access to these messages? Message purging policies – queues or messages may have a "time to live" Message filtering – some systems support filtering data so that a subscriber may only see messages matching some pre-specified criteria of interest Delivery policies – do we need to guarantee that a message is delivered at least once, or no more than once? Routing policies – in a system with many queue servers, what servers should receive a message or a queue's messages? Batching policies – should messages be delivered immediately? Or should the system wait a bit and try to deliver many messages at once? Queuing criteria – when should a message be considered "enqueued"? When one queue has it? Or when it has been forwarded to at least one remote queue? Or to all queues? Receipt notification – A publisher may need to know when some or all subscribers have received a message. These are all considerations that can have substantial effects on transaction semantics, system reliability, and system efficiency. Standards and protocols Historically, message queuing has used proprietary, closed protocols, restricting the ability for different operating systems or programming languages to interact in a heterogeneous set of environments. An early attempt to make message queuing more ubiquitous was Sun Microsystems' JMS specification, which provided a Java-only abstraction of a client API. This allowed Java developers to switch between providers of message queuing in a fashion similar to that of developers using SQL databases. In practice, given the diversity of message queuing techniques and scenarios, this wasn't always as practical as it could be. Three standards have emerged which are used in open source message queue implementations: Advanced Message Queuing Protocol (AMQP) – feature-rich message queue protocol, approved as ISO/IEC 19464 since April 2014 Streaming Text Oriented Messaging Protocol (STOMP) – simple, text-oriented message protocol MQTT (formerly MQ Telemetry Transport) – lightweight message queue protocol especially for embedded devices These protocols are at different stages of standardization and adoption. The first two operate at the same level as HTTP, MQTT at the level of TCP/IP. Some proprietary implementations, such as Amazon's SQS, also use HTTP to provide message queuing. This is because it is always possible to layer asynchronous behaviour (which is what is required for message queuing) over a synchronous protocol using request-response semantics. However, such implementations are constrained by the underlying protocol and may not be able to offer the full fidelity or set of options required in message passing. Synchronous vs. asynchronous Many of the more widely known communications protocols in use operate synchronously.
The HTTP protocol – used in the World Wide Web and in web services – offers an obvious example where a user sends a request for a web page and then waits for a reply. However, scenarios exist in which synchronous behaviour is not appropriate. For example, AJAX (Asynchronous JavaScript and XML) can be used to asynchronously send text, JSON or XML messages to update part of a web page with more relevant information. Google uses this approach for their Google Suggest, a search feature which sends the user's partially typed queries to Google's servers and returns a list of possible full queries the user might be interested in while typing. This list is asynchronously updated as the user types. Other asynchronous examples exist in event notification systems and publish/subscribe systems. An application may need to notify another that an event has occurred, but does not need to wait for a response. In publish/subscribe systems, an application "publishes" information for any number of clients to read. In both of the above examples it would not make sense for the sender of the information to have to wait if, for example, one of the recipients had crashed. Applications need not be exclusively synchronous or asynchronous. An interactive application may need to respond to certain parts of a request immediately (such as telling a customer that a sales request has been accepted, and handling the promise to draw on inventory), but may queue other parts (such as completing calculation of billing, forwarding data to the central accounting system, and calling on all sorts of other services) to be done some time later. In all these sorts of situations, having a subsystem which performs message-queuing (or alternatively, a broadcast messaging system) can help improve the behavior of the overall system. Implementation in UNIX There are two common message queue implementations in UNIX. One is part of the SYS V API, the other one is part of POSIX. SYS V UNIX SYS V implements message passing by keeping an array of linked lists as message queues. Each message queue is identified by its index in the array, and has a unique descriptor. A given index can have multiple possible descriptors. UNIX provides standard functions to access the message passing feature. msgget() This system call takes a key as an argument and returns a descriptor of the queue with the matching key if it exists. If it does not exist, and the IPC_CREAT flag is set, it makes a new message queue with the given key and returns its descriptor. msgrcv() Used to receive a message from a given queue descriptor. The caller process must have read permissions for the queue. There are two variants. A blocking receive puts the calling process to sleep if it cannot find a requested message type. It sleeps until another message is posted in the queue, and then wakes up to check again. A non-blocking receive returns immediately to the caller, indicating that it failed. msgctl() Used to change message queue parameters like the owner. Most importantly, it is used to delete the message queue by passing the IPC_RMID flag. A message queue can be deleted only by its creator, owner, or the superuser. POSIX The POSIX.1-2001 message queue API is the later of the two UNIX message queue APIs. It is distinct from the SYS V API, but provides similar functionality. The Unix man page mq_overview(7) provides an overview of POSIX message queues.
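A minimal, self-contained C sketch of the System V calls described above is shown below; it creates a private queue, sends one message with msgsnd() (the sending counterpart of msgrcv()), receives it back, and removes the queue. The message structure and the 0600 permissions are illustrative choices; the API only requires the leading long mtype field.

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct example_msg {
        long mtype;       /* message type, must be > 0 */
        char mtext[64];   /* message payload */
    };

    int main(void)
    {
        /* Create a new private queue and obtain its descriptor. */
        int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
        if (qid == -1) { perror("msgget"); return 1; }

        struct example_msg out = { .mtype = 1 };
        strcpy(out.mtext, "hello queue");

        /* Enqueue the message (the size passed is that of the payload only). */
        if (msgsnd(qid, &out, sizeof out.mtext, 0) == -1) { perror("msgsnd"); return 1; }

        /* Blocking receive of the next message of type 1. */
        struct example_msg in;
        if (msgrcv(qid, &in, sizeof in.mtext, 1, 0) == -1) { perror("msgrcv"); return 1; }
        printf("received: %s\n", in.mtext);

        /* Delete the queue; only its creator, owner, or the superuser may do this. */
        msgctl(qid, IPC_RMID, NULL);
        return 0;
    }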
Graphical user interfaces Graphical user interfaces (GUIs) employ a message queue, also called an event queue or input queue, to pass graphical input actions, such as mouse clicks, keyboard events, or other user inputs, to the application program. The windowing system places messages indicating user or other events, such as timer ticks or messages sent by other threads, into the message queue. The GUI application removes these events one at a time by calling a routine called getNextEvent() or similar in an event loop, and then calling the appropriate application routine to process that event. See also Advanced Message Queuing Protocol (AMQP) Amazon Simple Queue Service Apache ActiveMQ Apache Qpid Celery (software) Gearman IBM Integration Bus IBM MQ Java Message Service MQTT Message-oriented middleware, (category) Microsoft Message Queuing (known colloquially as MSMQ) NATS Oracle Messaging Cloud Service RabbitMQ Redis StormMQ, an example of a message queuing service TIBCO Enterprise Message Service Enduro/X Middleware platform ZeroMQ References Inter-process communication Events (computing)
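A schematic C sketch of such an event loop is shown below. The Event type, the get_next_event() routine (standing in for getNextEvent()), and the handlers are hypothetical names used for illustration rather than the API of any particular windowing system; here the windowing system's queue is simulated with a fixed array of events.

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { EVENT_MOUSE_CLICK, EVENT_KEY_PRESS, EVENT_QUIT } EventType;
    typedef struct { EventType type; int x, y, key; } Event;

    /* Stand-in for the windowing system's message queue; a real system
       would block here until the user or a timer produces an event. */
    static bool get_next_event(Event *e)
    {
        static const Event queue[] = {
            { EVENT_MOUSE_CLICK, 10, 20, 0 },
            { EVENT_KEY_PRESS,    0,  0, 'a' },
            { EVENT_QUIT,         0,  0, 0 },
        };
        static unsigned next = 0;
        if (next >= sizeof queue / sizeof queue[0]) return false;
        *e = queue[next++];
        return true;
    }

    int main(void)
    {
        Event e;
        /* The event loop: remove one event at a time and dispatch it. */
        while (get_next_event(&e) && e.type != EVENT_QUIT) {
            switch (e.type) {
            case EVENT_MOUSE_CLICK: printf("click at (%d,%d)\n", e.x, e.y); break;
            case EVENT_KEY_PRESS:   printf("key '%c' pressed\n", e.key);    break;
            default:                                                         break;
            }
        }
        return 0;
    }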
755109
https://en.wikipedia.org/wiki/Ilium%20%28novel%29
Ilium (novel)
Ilium is a science fiction novel by American writer Dan Simmons, the first part of the Ilium/Olympos cycle, concerning the re-creation of the events in the Iliad on an alternate Earth and Mars. These events are set in motion by beings who have taken on the roles of the Greek gods. Like Simmons' earlier series, the Hyperion Cantos, the novel is a form of "literary science fiction" which relies heavily on intertextuality, in this case with Homer and Shakespeare, as well as periodic references to Marcel Proust's À la recherche du temps perdu (or In Search of Lost Time) and Vladimir Nabokov's novel Ada or Ardor: A Family Chronicle. In July 2004, Ilium received a Locus Award for Best Science Fiction Novel of 2004. Plot summary The novel centers on three character groups: that of Hockenberry (a resurrected twentieth-century Homeric scholar whose duty is to compare the events of the Iliad to the reenacted events of the Trojan War), Greek and Trojan warriors, and Greek gods from the Iliad; Daeman, Harman, Ada, and other humans of an Earth thousands of years after the twentieth century; and the "moravec" robots (named for scientist and futurist Hans Moravec) Mahnmut the Europan and Orphu of Io, also thousands of years in the future, but originating in the Jovian system. The novel is written in the first person and present tense when centered on Hockenberry's character, but features third-person, past-tense narration in all other instances. Much like Simmons' Hyperion, where the actual events serve as a frame, the three groups of characters' stories are told over the course of the novel and begin to converge as the climax nears. Reception Ilium won the Locus Award for Best Science Fiction Novel in 2004 and was nominated for a Hugo Award for Best Novel that same year. References External links Dan Simmons – Author's Official Website. Ilium at Worlds Without End 2003 science fiction novels 2003 American novels American science fiction novels Fiction set on Europa (moon) Classical mythology in popular culture HarperCollins books Fiction set on Io (moon) Novels set on Mars Novels by Dan Simmons Science fantasy novels Novels set during the Trojan War Greek and Roman deities in fiction Novels about androids Quantum fiction novels Nanotechnology in fiction Novels based on the Iliad Modern adaptations of the Iliad
45340268
https://en.wikipedia.org/wiki/Mark%20Coeckelbergh
Mark Coeckelbergh
Mark Coeckelbergh (born 1975) is a Belgian philosopher of technology. He is Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna and former President of the Society for Philosophy and Technology. He was previously Professor of Technology and Social Responsibility at De Montfort University in Leicester, UK, Managing Director of the 3TU Centre for Ethics and Technology, and a member of the Philosophy Department of the University of Twente. Before moving to Austria, he lived and worked in Belgium, the UK, and the Netherlands. He is the author of several books, including Growing Moral Relations (2012), Human Being @ Risk (2013), Environmental Skill (2015), Money Machines (2015), New Romantic Cyborgs (2017), Moved by Machines (2019), the textbook Introduction to Philosophy of Technology (2019), and AI Ethics (2020). He has written many articles and is an expert in the ethics of artificial intelligence. He is best known for his work in the philosophy of technology and the ethics of robotics and artificial intelligence (AI); he has also published in the areas of moral philosophy and environmental philosophy. Early life and education Mark Coeckelbergh was born in 1975 in Leuven, Belgium. He was first educated in social sciences and political sciences at the University of Leuven (Licentiaat, 1997), before moving to the UK where he studied philosophy. He received his master's degree from the University of East Anglia (MA in Social Philosophy, 1999) and his PhD from the University of Birmingham (PhD in Philosophy, 2003). During the time of his PhD study he also painted, wrote poems, played piano, and worked on engineering ethics at the University of Bath (UK) and at the Belgian nuclear research centre SCK-CEN. Career In 2003 he started teaching at the University of Maastricht in the Netherlands and in 2007 he was Assistant Professor at the Philosophy Department of the University of Twente, also in the Netherlands. In the same year he received the Prize of the Dutch Society for Bioethics (with J. Mesman). In Twente he started working on the ethics of robotics. In 2013 he became Managing Director of the 3TU Centre for Ethics and Technology. During his time in Twente he published many articles on philosophy of technology (especially robotics) and he was regularly interviewed about the ethics of drone technology. In 2014 he was appointed full professor, before the age of 40, at the Centre of Computing and Social Responsibility, De Montfort University in Leicester, UK, a position he held through early 2019. In 2014 he was nominated for the World Technology Awards in the Ethics category. In December 2015 he joined the Department of Philosophy of the University of Vienna as full Professor of Philosophy of Media and Technology. Coeckelbergh has served as President of the Society for Philosophy and Technology, as a member of the High Level Expert Group on Artificial Intelligence for the European Commission, as a member of the Austrian robotics council (Rat für Robotik), inaugurated by the Austrian Ministry for Transport, Innovation and Technology, and as a member of the Austrian Advisory Council on Automated Mobility. He is also a member of the editorial advisory boards of AI and Sustainable Development, The AI Ethics Journal, Cognitive Systems Research, Science and Engineering Ethics, International Journal of Technoethics, Techne, Journal of Information, Communication and Ethics in Society, Journal of Posthuman Studies, Kairos.
Journal of Philosophy & Science, Technology & Regulation (TechReg), and The Journal of Sociotechnical Critique. Moreover, he is a fellow of the World Technology Network (WTN) and finalist of the World Technology Award 2017. Recently, Coeckelbergh joined the Technical Expertise Committee at the Foundation of Responsible Robotics, along with Charles Ess and Kevin Kelly. Works Following his articles on robot ethics and his book Growing Moral Relations, Coeckelbergh has been credited with a 'relational turn' in thinking about moral status. In his articles Coeckelbergh argues for a phenomenological and relational approach to the philosophy of robotics. His theory of 'Social Relationalism' in regard to artificial intelligence and morality is discussed by Gunkel and Cripe in their paper entitled 'Apocalypse Not, or How I Learned to Stop Worrying and Love the Machine'. He has published mainly on robot ethics and ICT in health care, but also on many other topics. He also wrote a book on vulnerability and technology (Human Being @ Risk) in which he proposes an 'anthropology of vulnerability' and engages with discussions about human enhancement and transhumanism. His first books are about freedom, autonomy, and the role of imagination in moral reasoning. Later books discuss the problem of disengagement and distancing in the use of information and communication technologies (ICTs) and society's relation to the environment: in Environmental Skill he argues against modern and romantic ways of relating to the environment and in Money Machines he discusses new financial ICTs. In New Romantic Cyborgs he investigates the relation between technology and romanticism and reflects on what he calls "the end of the machine". He also wrote opinion articles in The Guardian and in Wired. Books AI Ethics (MIT Press, 2020) Introduction to Philosophy of Technology (Oxford University Press, 2019) Moved by Machines: Performance Metaphors and Philosophy of Technology (Routledge, 2019) New Romantic Cyborgs: Romanticism, Information Technology, and the End of the Machine (MIT Press, 2017) Money Machines: Electronic Financial Technologies, Distancing, and Responsibility in Global Finance (Ashgate, 2015) Environmental Skill: Motivation, Knowledge, and the Possibility of a Non-Romantic Environmental Ethics (Routledge, 2015) Human Being @ Risk: Enhancement, Technology, and the Evaluation of Vulnerability Transformations (Springer, 2013) Growing Moral Relations: Critique of Moral Status Ascription (Palgrave Macmillan, 2012) Imagination and Principles: An Essay on the Role of Imagination in Moral Reasoning (Palgrave Macmillan, 2007) The Metaphysics of Autonomy: The Reconciliation of Ancient and Modern Ideals of the Person (Palgrave Macmillan, 2004) Liberation and Passion: Reconstructing the Passion Perspective on Human Being and Freedom (DenkMal Verlag, 2002) Reviews Work by Coeckelbergh has been responded to by many academics, such as David Gunkel of Northern Illinois University, USA, who writes about 'Moved by Machines': "This unique and innovative book changes the very framework for doing philosophy of technology by introducing and developing a performance-based method of analysis". Coeckelbergh's textbook on philosophy of technology, 'Introduction to Philosophy of Technology', has been roundly praised by other academics working in the same field. Shannon Vallor of Santa Clara University calls it a "model of philosophical clarity" while Diane Michelfelder of Macalester College rates the book as "...excellent.
One can imagine students getting excited about the philosophy of technology, and philosophy in general, from reading Coeckelbergh's work.” Coeckelbergh's book New Romantic Cyborgs has been described as offering 'a whole new way of looking at our use of technologies' and he has been called 'one of the most versatile, profound, and original thinkers in the contemporary philosophy of technology.' David Seng, University of Arizona, writes: 'One of the strengths of this book is that it is provides a critical process of inquiry and helpful analysis of inherited philosophical orientations regarding the relationship between technology and society.' A review by Roland Legrand for De Tijd especially compliments Mark Coeckelbergh's analysis of virtual worlds, among other aspects of the book. Wendell Wallach of Yale University, USA, referring to 'Money Machines' comments: “Mark Coeckelbergh is recognized internationally for illuminating the manner in which information and communication technologies (ICTs) create new forms of “distancing” and in particular “moral distancing”. This important book extends that analysis to underscore the hidden ways ICTs shape money and global finance, alter relationships, and undermine responsibility”. Also, Keir James Cecil Martin writes about Money Machines: "Coeckelbergh’s view of money as a technology of relationality that shortens some geographical and temporal distances, whilst simultaneously widening moral and social gaps (…) has much to offer to anthropological debates on the nature of finance and money." Coeckelbergh's peer-reviewed publications that have received reviews include Money Machines, 'Environmental Skill', 'Human Being @ Risk' and 'Growing Moral Relations', as well as articles and other peer-reviewed research publications. For example, David Gunkel has written in his review in Ethics and Information Technology that Growing Moral Relations is 'a real game changer' and 'a penetrating analysis of moral status'. Jac Swart has called the book 'an important contribution to animal ethics' and Frank Jankunis says in his review that it is 'an impressive contribution to the literature on moral status'. Yoni Van Den Eede has called Human Being @ Risk 'one of the most comprehensive and fine-grained in the current literature' and Pieter Lemmens has written that the book 'is thoroughly unique and original in showing the importance and extreme usefulness of philosophical anthropology and the phenomenological tradition for thinking through the consequences of the epochal technological mutations of our time'. Bert-Jaap Koops calls Human Being @ Risk 'an important and original book' and on the website of the Institute for Ethics and Emerging Technologies Nikki Olson writes: 'Coeckelbergh develops an impressive case that all our technological and social measures create new sources of vulnerability … Coeckelbergh drives home an important point for our debates about the human future. … Human Being @ Risk identifies important choices that we must debate as we imagine and (to a limited extent) plan the future of humanity. It raises issues that are fundamental to ongoing thinking about how to better the human condition.' Carl Mitcham writes about Environmental Skill that it is 'an insightful argument for an environmental philosophy that draws on the resources of and at the same time extends work in philosophy of technology.
The notion of skilled engagement with the world as this has emerged from pragmatism and phenomenology is here deepened and re-thought in an effort to understand and respond to the challenges of living in a techno-transformed nature.' Jochem Zwier and Andrea Gammon say in their critical but sympathetic review: 'One of the strong points of Coeckelbergh’s diagnosis is that it deepens the discussions regarding environmental concerns and the problem of motivation by laying bare the modern roots of these phenomena.' Tara Kennedy has published a review of the book in Notre Dame Philosophical Reviews. She writes: 'There is much for contemporary environmentalists to find compelling about Coeckelbergh's account, being not only an interesting analysis of the factors at work in motivation but also a convincing and optimistic approach to the problem.' She questions Coeckelbergh's interpretation of Heidegger but also praises 'Coeckelbergh's effectiveness in articulating a compelling account of the problem of motivation and how the development of an ethics of skilled engagement with the environment, a focus on habit and virtue, would find us better equipped to deal with the environmental crises we face. It is a welcome and interesting addition to a field in need of voices focused on bringing about meaningful, practical change.' And Louke van Wensveen writes in her review in the journal Environmental Ethics that the book reminds us of 'an overly autistic, obsessively controlling tendency in Western philosophical and everyday cultures. Such a distancing pattern and its ideological scaffolding prevent environmental action.' In the Media Coeckelbergh has appeared regularly in Dutch (and British) media talking about the ethics of drone technology. He has talked about the ethical development of drones in relation to surveillance in an article for Kennislink entitled 'The Irrepressible Drone' and about the ethics of drone fighting for Universonline. He has been interviewed on live radio for BBC radio Leicester where on 15 April 2015 he talked about drone technology and discussed robots that cook food. On 11 June 2013 he appeared on Dutch national television for the Een Vandaag programme also talking about drones and has appeared in articles for the Dutch newspaper Trouw on the subject of environmental philosophy and 'Down To Earth' magazine (Netherlands) discussing drones for environmental purposes. In 2015 he was also interviewed about drones in Stedelijk Interieur, a Dutch magazine on public space.
In August and September 2015 Coeckelbergh's book "Money Machines" received media attention: he was interviewed on BBC Radio Leicester, in the Leicester Mercury, where he warns of a growing reliance on computer algorithms in global finance, and in the Belgian national newspaper De Standaard, which printed a large article on the book in its weekend edition of 12 September 2015. In the De Standaard interview (in Dutch), Coeckelbergh warns that we might delegate too much to technology, and that we lack control and overview. The ethical and societal influence of new technologies may be invisible, but it remains powerful. But, he argues, if technology is part of the problem, it is also part of the solution: we need to develop new, alternative technologies and technological practices, including in the financial world. Coeckelbergh has also been quoted in international mainstream media such as CNN and has a profile at the Guardian due to many comments from readers responding to his article regarding his thoughts about the 2010 Deepwater Horizon oil spill. On 6 November 2012 Coeckelbergh was interviewed by Stephen Edwards for a report appearing in the Economist Intelligence Unit (EIU) exploring the interaction between humans and technology. In 2014 Coeckelbergh received much response on social media from other philosophers such as Evan Selinger for his Wired magazine article, with posts appearing on Twitter and Facebook. Coeckelbergh's paper entitled 'Humans, Animals, and Robots: A Phenomenological Approach to Human-Robot Relations' was referenced in an online article for Dailydot.com entitled 'sex and love in the robot age' where human-robot relations were discussed. He also wrote an opinion article with Katleen Gabriels in the Dutch newspaper nrc.next which questions a call for banning sex robots and calls attention to (more) pressing societal issues, and was interviewed about his critical comments on the campaign in the Leicester Mercury. In May 2017 he was included in the list of tech pioneers named by the Belgian newspaper De Tijd, in the category of leaders and thinkers. Public Talks Coeckelbergh has a long record of international public speaking appearances for diverse audiences at academic, public policy, legal, and business events. Recent examples since 2018 include the President's Keynote at SPT 2019 (Austin, TX), keynotes at ECSS 2018 (Gothenburg, Sweden), Robotiuris 2019 (Madrid, Spain), OtroMundo International Congress (Medellín, Colombia), SOLAIR 2019 (Prague, Czech Republic), INBOTS 2018 (Pisa, Italy), and invited talks at University of Sydney's "Sydney Ideas" lecture series, Chung-Ang University (Seoul, South Korea), Beijing Forum 2019, UNAM (Universidad Nacional Autonoma de Mexico, Mexico City, Mexico), a workshop at Illinois Institute of Technology (Chicago, Illinois), and others. A full listing of his talks can be found on his website. Academic references From his more than 100 academic publications Coeckelbergh has been cited hundreds of times, for example his publications in Ethical Theory & Moral Practice, Science, Technology and Human Values, Ethics and Information Technology, International Journal of Social Robotics, and AI & Society. Human Being @ Risk (2013) is his most cited book.
References External links blog twitter page Research group Philosophy of Media and Technology at University of Vienna 1975 births Living people Alumni of the University of East Anglia Alumni of the University of Birmingham 21st-century Belgian philosophers Artificial intelligence researchers Artificial intelligence ethicists
9346582
https://en.wikipedia.org/wiki/Microsoft%20UI%20Automation
Microsoft UI Automation
Microsoft UI Automation (UIA) is an application programming interface (API) that allows one to access, identify, and manipulate the user interface (UI) elements of another application. UIA is targeted at providing UI accessibility and it is a successor to Microsoft Active Accessibility. It also facilitates GUI test automation, and it is the engine upon which many test automation tools are based. RPA tools also use it to automate applications in business processes. UIA's property providers support both Win32 and .NET programs. The latest specification of UIA is found as part of the Microsoft UI Automation Community Promise Specification. Microsoft claims that portability to platforms other than Microsoft Windows was one of its design goals. It has since been ported to Mono. History In 2005, Microsoft released UIA as a successor to the MSAA framework. The managed UI Automation API was released as a part of .NET Framework 3.0. The native UI Automation API (provider) is included as part of the Windows Vista and Windows Server 2008 SDK and is also distributed with the .NET Framework. UIA is available out of the box in Windows 7 as a part of Windows Automation API 3.0 and as a separate download for Windows XP, Windows Vista, and Windows Server 2003 and 2008. Motivation and goals As a successor to MSAA, UIA aims to address the following goals: Enable efficient client performance without forcing clients to hook into a target application’s process. Expose more information about the UI. Co-exist with and use MSAA, but do not inherit problems that exist in MSAA. Provide an alternative to MSAA that is simple to implement. Technical overview On the client side, UIA provides a .NET interface in the UIAutomationClient.dll assembly and a COM interface implemented directly in UIAutomationCore.dll. On the server side, UIAutomationCore.dll is injected into all or selected processes on the current desktop to perform data retrieval on behalf of a client. The DLL can also load UIA plugins (called providers) into its host process to extract data using different techniques. UIA has four main provider and client components. Elements UIA exposes every piece of the UI to client applications as an Automation Element. Elements are contained in a tree structure, with the desktop as the root element. Automation Element objects expose common properties of the UI elements they represent. One of these properties is the control type, which defines its basic appearance and functionality as a single recognizable entity (e.g., a button or check box). In addition, elements expose control patterns that provide properties specific to their control types. Control patterns also expose methods that enable clients to get further information about the element and to provide input. Clients can filter the raw view of the tree as a control view or a content view. Applications can also create custom views. Tree Within the UIA tree there is a root element that represents the current desktop and whose child elements represent application windows. Each of these child elements may contain elements representing pieces of UI such as menus, buttons, toolbars, and list boxes. These elements, in turn, can contain other elements, such as list items. The UIA tree is not a fixed structure and is seldom seen in its totality because it might contain thousands of elements. Parts of the tree are built as they are needed, and the tree can undergo changes as elements are added, moved, or removed.
Control types UIA control types are well-known identifiers that can be used to indicate what kind of control a particular element represents, such as a combo box or a button. Having a well-known identifier allows assistive technology (AT) devices to more easily determine what types of controls are available in the user interface (UI) and how to interact with the controls. A human-readable representation of the UIA control type information is available as a LocalizedControlType property, which can be customized by control or application developers. Control patterns Control patterns provide a way to categorize and expose a control's functionality independent of the control type or the appearance of the control. UIA uses control patterns to represent common control behaviors. For example, the Invoke control pattern is used for controls that can be invoked (such as buttons) and the Scroll control pattern is used for controls that are scrollable viewports (such as list boxes, list views, or combo boxes). Because each control pattern represents a separate piece of functionality, control patterns can be combined to describe the full set of functionality supported by a particular control. Properties UIA providers expose properties on UIA elements and the control patterns. These properties enable UIA client applications to discover information about pieces of the user interface (UI), especially controls, including both static and dynamic data. Events UIA event notification is a key feature for assistive technologies (AT) such as screen readers and screen magnifiers. These UIA clients track events that are raised by UIA providers and use the information to notify end users. Efficiency is improved by allowing provider applications to raise events selectively, depending on whether any clients are subscribed to those events, or not at all, if no clients are listening for any events. TextPattern UIA exposes the textual content, including format and style attributes, of text controls in UIA-supported platforms. These controls include, but are not limited to, the Microsoft .NET Framework TextBox and RichTextBox as well as their Win32 equivalents. Exposing the textual content of a control is accomplished through the use of the TextPattern control pattern, which represents the contents of a text container as a text stream. In turn, TextPattern requires the support of the TextPatternRange class to expose format and style attributes. TextPatternRange supports TextPattern by representing a contiguous text span in a text container with the Start and End endpoints. Multiple or disjoint text spans can be represented by more than one TextPatternRange object. TextPatternRange supports functionality such as clone, selection, comparison, retrieval and traversal. UI Automation for automated testing UIA can also be useful as a framework for programmatic access in automated testing scenarios. In addition to providing more refined solutions for accessibility, it is also specifically designed to provide robust functionality for automated testing. Programmatic access provides the ability to imitate, through code, any interaction and experience exposed by traditional user interactions. UIA enables programmatic access through five components: The UIA tree facilitates navigation through the logical structure of the UI for accessibility and automation. UI Automation Elements are individual components in the UI. UI Automation Properties provide specific information about UI elements or the Control Pattern.
UI Automation Control Patterns define a particular aspect of a control's functionality or feature; they can consist of property, method, event, and structure information. UI Automation Events provide a trigger to respond to changes and notifications in UIA information. Availability UIA was initially available on Windows Vista and Windows Server 2008, and it was also made available to Windows XP and Windows Server 2003 as part of .NET Framework 3.0. It has been integrated with all subsequent Windows versions, up to and including Windows 7. Besides Windows platforms, the Olive project (which is a set of add-on libraries for the Mono core aiming for .NET Framework support) includes a subset of WPF (PresentationFramework and WindowsBase) and UI Automation. Novell's Mono Accessibility project is an implementation of the UIA Provider and Client specifications targeted for the Mono framework. Additionally, the project provides a bridge to the Accessibility Toolkit (ATK) for Linux assistive technologies (ATs). Novell is also working on a bridge for UIA-based ATs to interact with applications that implement ATK. Related technology and interoperability Microsoft Active Accessibility (MSAA): UIA is the successor to MSAA. However, since there are still MSAA-based applications in existence, bridges are used to allow communication between UIA and MSAA applications. So that information can be shared between the two APIs, an MSAA-to-UIA Proxy and a UIA-to-MSAA Bridge were developed. The former is a component that consumes MSAA information and makes it available through the UIA client API. The latter enables client applications using MSAA to access applications that implement UIA. Accessible Rich Internet Applications (ARIA): The UIA AriaRole and AriaProperties properties can provide access to the ARIA attribute values corresponding to an HTML element (which can be exposed as an automation element by web browsers). General mapping from ARIA attributes to UIA is also available. Windows Automation API: Starting with Windows 7, Microsoft is packaging its accessibility technologies under a framework called Windows Automation API. Both MSAA and UIA will be part of this framework. For older versions of Windows see KB971513. Mono Accessibility Project: On November 7, 2007, Microsoft and Novell Inc., after completion of a year of their interoperability agreement, announced that they would be extending their agreement to include accessibility. Specifically, it was announced that Novell would develop an open source adapter allowing the UIA framework to work with existing Linux accessibility projects such as the Linux Accessibility Toolkit (ATK), which ships with SUSE Linux Enterprise Desktop, Red Hat Enterprise Linux and Ubuntu Linux. This would eventually make UIA cross-platform. Notes References UI Automation Control Types UI Automation Control Patterns UI Automation Control Properties UI Automation Events External links UI Automation Verify (UIA Verify) Test Automation Framework UI Automation PowerShell Extensions FlaUI Accessibility API Windows APIs
4318506
https://en.wikipedia.org/wiki/Danny%20Thorpe
Danny Thorpe
Danny Thorpe was an American programmer noted mainly for his work on Delphi. He was the Chief Scientist for Windows and .NET developer tools at Borland Corporation from January 2004 until October 2005, as well as Chief Architect of the Delphi programming language from 2000 to 2005. He joined Borland in 1990 as an associate QA engineer working on Turbo Pascal 6.0. He was a member of the team that created the Delphi programming language, Visual Component Library (VCL), and IDE, released in 1995. In 1999, he was a founding member of the Kylix team, implementing the Delphi compiler and development environment on Linux, released in 2001. After the release of Kylix, he was the founder and lead programmer for Borland's Delphi .NET effort, porting and extending the Delphi language to the Microsoft .NET platform. In 1994 while at Borland, he contracted with Santa Cruz startup Cinematronics (David Stafford and Mike Sandige) to build a component model and collision physics engine for a software pinball game. Cinematronics licensed an early version of the pinball engine to Microsoft for the Windows 95 Plus! Pack's "Space Cadet" pinball game. Cinematronics was later acquired by Maxis, who published Full Tilt! Pinball in 1996 and a sequel in 1998. He joined Google in October 2005 and was a founding member of the Google Gears team, responsible for designing the client side browser local storage subsystem and JavaScript interface bindings. He joined Microsoft's Windows Live Platform team in April 2006 as a Principal Software Development Engineer. His primary focus at Microsoft was the development of a secure client-side cross-domain scripting library for browser web apps, as well as the Windows Live Contacts Control built upon that library. In October 2007, he joined startup Cooliris to work on the PicLens browser plugin for 3D visualization of web content. In June 2008, he returned to Microsoft to work in a newly formed Cloud Computing Tools incubation team creating Visual Studio extensions to support development of applications for Microsoft's Windows Azure hosted services environment and Live Mesh / Live Framework client-side and offline web application environment. In October 2010 he joined BiTKOO as Chief Software Architect to develop XACML-based, cloud-scale distributed authorization and access control technologies. When BiTKOO was acquired by Quest Software in December 2011, he assumed the role of Product Architect in the Identity and Authorization Management (IAM) group at Quest Software. When Quest Software was acquired by Dell in September 2012, he continued to work on XACML authorization technologies under the title of Authorization Architect. He lived on a small farm in the Santa Cruz mountains near Ben Lomond, California. He developed brain cancer in 2017 and died on 22 October 2021 after a protracted battle. He is survived by his wife, Cindy Thorpe. Published work Delphi Component Design, Addison-Wesley Longman, , 1997 References External links Personal blog and homepage (defunct) archive.org from 2021-08-08 Previous blog at Microsoft Previous personal homepage at Borland Living people Microsoft employees Borland employees Google employees Year of birth missing (living people)
2783351
https://en.wikipedia.org/wiki/Binary%20Modular%20Dataflow%20Machine
Binary Modular Dataflow Machine
Binary Modular Dataflow Machine (BMDFM) is a software package that enables running an application in parallel on shared memory symmetric multiprocessing (SMP) computers using the multiple processors to speed up the execution of single applications. BMDFM automatically identifies and exploits parallelism due to the static and mainly dynamic scheduling of the dataflow instruction sequences derived from the formerly sequential program. The BMDFM dynamic scheduling subsystem performs a symmetric multiprocessing (SMP) emulation of a tagged-token dataflow machine to provide the transparent dataflow semantics for the applications. No directives for parallel execution are needed. Background Current parallel shared memory SMPs are complex machines, where a large number of architectural aspects must be addressed simultaneously to achieve high performance. Recent commodity SMP machines for technical computing can have many tightly coupled cores (good examples are SMP machines based on multi-core processors from Intel (Core or Xeon) or IBM (Power)). The number of cores per SMP node is planned to double every few years according to computer makers' announcements. Multi-core processors are intended to exploit thread-level parallelism identified by software. Hence, the most challenging task is to find an efficient way to harness the power of multi-core processors for processing an application program in parallel. The existing OpenMP paradigm of static parallelization with a fork-join runtime library works well only for loop-intensive, regular, array-based computations; compile-time parallelization methods are weak in general and almost inapplicable to irregular applications: There are many operations that take a non-deterministic amount of time, making it difficult to know exactly when certain pieces of data will become available. A memory hierarchy with multi-level caches has unpredictable memory access latencies. In multi-user mode, other people's code can use up resources or slow down part of the computation in a way that the compiler cannot account for. Compile-time inter-procedural and cross-conditional optimizations are hard (very often impossible) because compilers cannot figure out which way a conditional will go or cannot optimize across a function call. Transparent dataflow semantics of BMDFM The BMDFM technology mainly uses dynamic scheduling to exploit parallelism of an application program; thus, BMDFM avoids the disadvantages of the compile-time methods mentioned above. BMDFM is a parallel programming environment for multi-core SMP that provides: A conventional programming paradigm requiring no directives for parallel execution. Transparent (implicit) exploitation of parallelism in a natural and load-balanced manner, using all available multi-core processors in the system automatically. BMDFM combines the advantages of known architectural principles into a single hybrid architecture that is able to exploit the implicit parallelism of applications with negligible dynamic scheduling overhead and no bottlenecks. Mainly, the basic dataflow principle is used. The dataflow principle says: "An instruction or a function can be executed as soon as all its arguments are ready. A dataflow machine manages the tags for every piece of data at runtime. Data is marked with a ready tag when it has been computed. Instructions with ready arguments get executed, marking their result data ready".
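The firing rule quoted above can be illustrated with the following minimal C sketch, which tags each datum with a ready flag and repeatedly fires any instruction whose operands are all ready (an illustration of the general principle only, not BMDFM's implementation):

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_ARGS 2

    typedef struct { double value; bool ready; } Datum;   /* data with a ready tag */

    typedef struct {
        Datum *args[MAX_ARGS];                             /* operands consumed */
        Datum *result;                                     /* datum produced */
        double (*op)(double, double);
        bool done;
    } Instruction;

    static bool fire_if_ready(Instruction *ins)
    {
        if (ins->done) return false;
        for (int i = 0; i < MAX_ARGS; i++)
            if (ins->args[i] && !ins->args[i]->ready)
                return false;                              /* an operand is not ready yet */
        double a = ins->args[0] ? ins->args[0]->value : 0.0;
        double b = ins->args[1] ? ins->args[1]->value : 0.0;
        ins->result->value = ins->op(a, b);
        ins->result->ready = true;                         /* mark the result data ready */
        ins->done = true;
        return true;
    }

    static double add(double a, double b) { return a + b; }
    static double mul(double a, double b) { return a * b; }

    int main(void)
    {
        Datum x = { 2.0, true }, y = { 3.0, true }, t = { 0 }, r = { 0 };
        Instruction prog[] = {
            { { &x, &y }, &t, add, false },                /* t = x + y */
            { { &t, &x }, &r, mul, false },                /* r = t * x, waits for t */
        };
        bool progress = true;                              /* naive scheduler: rescan until nothing fires */
        while (progress) {
            progress = false;
            for (int i = 0; i < 2; i++)
                if (fire_if_ready(&prog[i])) progress = true;
        }
        printf("r = %g\n", r.value);                       /* (2 + 3) * 2 = 10 */
        return 0;
    }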
The main feature of BMDFM is that it provides a conventional programming paradigm at the top level, the so-called transparent dataflow semantics. A user sees BMDFM as a virtual machine (VM) that runs all statements of an application program in parallel, with all parallelizing and synchronizing mechanisms fully transparent. The statements of an application program are the normal operators of which any single-threaded program might consist: variable assignments, conditional processing, loops, function calls, etc. Suppose we have the code fragment shown below:

(setq a (foo0 i))        # a = foo0(i);
(setq b (foo1 (+ i 1)))  # b = foo1(i+1);
(setq b (++ b))          # b++;
(outf "a = %d\n" a)      # printf("a = %d\n", a);
(outf "b = %d\n" b)      # printf("b = %d\n", b);

The first two statements are independent, so the dataflow engine of BMDFM can run them on different processors or processor cores. The last two statements can also run in parallel, but only after "a" and "b" have been computed. The dataflow engine recognizes the dependencies automatically because of its ability to build a dataflow graph dynamically at runtime. Additionally, the dataflow engine correctly orders the output stream so that the results are output sequentially; thus, even after out-of-order processing the results appear in the natural order. Suppose that the above code fragment is now nested in a loop:

(for i 1 1 N (progn        # for (i = 1; i <= N; i++) {
  (setq a (foo0 i))        #   a = foo0(i);
  (setq b (foo1 (+ i 1)))  #   b = foo1(i + 1);
  (setq b (++ b))          #   b++;
  (outf "a = %d\n" a)      #   printf("a = %d\n", a);
  (outf "b = %d\n" b)      #   printf("b = %d\n", b);
))                         # }

The dataflow engine of BMDFM will keep the variables "a" and "b" under unique contexts for each iteration; in effect, these are different copies of the variables. A context variable exists until it is no longer referenced by any consumer instruction; contexts that are no longer referenced are garbage-collected at runtime. Therefore, the dataflow engine can exploit both local parallelism within an iteration and global parallelism, running multiple iterations simultaneously. Architecture BMDFM is a convenient parallel programming environment and an efficient runtime engine for multi-core SMP due to the MIMD unification of several architectural paradigms (von Neumann, SMP and dataflow): First, it is a hybrid dataflow emulator running multithreadedly on commodity SMP. The SMP ensures MIMD, while dataflow exploits implicit parallelism. Second, it is a hybrid multithreaded dataflow runtime engine controlled by a von Neumann front-end VM. The dataflow runtime engine executes tagged-token contextual parallel instructions (as opposed to the restricted fork-join paradigm), while the von Neumann front-end VM initializes contexts and feeds the dataflow runtime engine with marshaled clusters of instructions. Third, it is a hybrid of static and dynamic parallelization. The von Neumann front-end VM tries to split an application statically into parallel marshaled clusters of instructions, while the dataflow runtime engine complements the static parallelization methods dynamically. BMDFM is intended for use in the role of a parallel runtime engine (instead of a conventional fork-join runtime library) able to run irregular applications in parallel automatically. Due to the transparent dataflow semantics on top, BMDFM is a simple parallelization technique for application programmers and, at the same time, a much better parallel programming and compiling technology for multi-core SMP computers.
The basic concept of BMDFM relies on underlying commodity SMP hardware, which is available on the market. Normally, SMP vendors provide their own SMP operating system (OS) with an SVR4/POSIX UNIX interface (Linux, HP-UX, SunOS/Solaris, Tru64OSF1, IRIX, AIX, BSD, MacOS, etc.). On top of an SMP OS, the multithreaded dataflow runtime engine performs a software emulation of the dataflow machine. Such a virtual machine has interfaces to the virtual machine language and to C, providing the transparent dataflow semantics for conventional programming. BMDFM is built as a hybrid of several architectural principles: MIMD (multiple instruction streams, multiple data streams), which is sustained by the commodity SMP; dataflow emulation, which ensures implicit parallel execution; and the von Neumann computational principle, which is well suited to implementing the front-end control virtual machine. An application program (the input sequential program) is processed in three stages: preliminary code reorganization (code reorganizer), static scheduling of the statements (static scheduler), and compiling/loading (compiler, loader). The output of the static scheduling stage is a flow of multiple clusters that feeds the multithreaded engine via an interface designed to avoid bottlenecks. This flow of clusters can be thought of as the compiled input program split into marshaled clusters, in which all addresses are resolved and extended with context information. Splitting the program into marshaled clusters allows them to be loaded by multiple threads. Context information lets iterations be processed in parallel. A listener thread orders the output stream after the out-of-order processing. The BMDFM dynamic scheduling subsystem is an efficient SMP emulator of the tagged-token dataflow machine. The shared memory pool is divided into three main parts: the input/output ring buffer port (IORBP), the data buffer (DB), and the operation queue (OQ). The front-end control virtual machine schedules an input application program statically and puts clustered instructions and data of the input program into the IORBP. The ring buffer service processes (IORBP PROC) move data into the DB and instructions into the OQ. The operation queue service processes (OQ PROC) tag instructions as ready for execution once the data for their required operands is accessible. Execution processes (CPU PROC) execute instructions that are tagged as ready and output the computed data into the DB or to the IORBP. Additionally, IORBP PROC and OQ PROC are responsible for freeing memory after contexts have been processed. A context is a special unique identifier representing a copy of data within different iteration bodies, in accordance with the tagged-token dataflow architecture. This allows the dynamic scheduler to handle several iterations in parallel. Running under an SMP OS, the processes occupy all available real processors and processor cores of the machine. To allow several processes to access the same data concurrently, the BMDFM dynamic scheduler locks objects in the shared memory pool via SVR4/POSIX semaphore operations. The locking policy allows multiple concurrent read-only accesses but requires exclusive access for modification. Supported platforms Every machine supporting ANSI C and a POSIX/UNIX System V (SVR4) environment can run BMDFM.
BMDFM is provided as full multi-threaded versions for: x86: Linux/32, FreeBSD/32, OpenBSD/32, NetBSD/32, MacOS/32, SunOS/32, UnixWare/32, Minix/32, Android/32, Win-Cygwin/32, Win-UWIN/32, Win-SFU-SUA/32; x86-64: Linux/64, FreeBSD/64, OpenBSD/64, NetBSD/64, MacOS/64, SunOS/64, Android/64, Win-Cygwin/64; VAX: Ultrix/32; Alpha: Tru64OSF1/64, Linux/64, FreeBSD/64, OpenBSD/64; IA-64: HP-UX/32, HP-UX/64, Linux/64, FreeBSD/64; XeonPhiMIC: Linux/64; MCST-Elbrus: Linux/32, Linux/64; PA-RISC: HP-UX/32, HP-UX/64, Linux/32; SPARC: SunOS/32, SunOS/64, Linux/32, Linux/64, FreeBSD/64, OpenBSD/64; MIPS: IRIX/32, IRIX/64, Linux/32, Linux/64; MIPSel: Linux/32, Linux/64, Android/32, Android/64; PowerPC: AIX/32, AIX/64, MacOS/32, MacOS/64, Linux/32, Linux/64, FreeBSD/32, FreeBSD/64; PowerPCle: Linux/32, Linux/64; S/390: Linux/32, Linux/64; M68000: Linux/32; ARM: Linux/32, Linux/64, FreeBSD/64, Android/32, Android/64, MacOS/64; ARMbe: Linux/64; RISC-V: Linux/32, Linux/64; and a limited single-threaded version for x86: Win/32. See also Dataflow Parallel computing Symmetric multiprocessing References External links Parallel computing Emulation software
49676510
https://en.wikipedia.org/wiki/Kali%20NetHunter
Kali NetHunter
Kali NetHunter is a free and open-source mobile penetration testing platform for Android devices, based on Kali Linux. Kali NetHunter is available for unrooted devices (NetHunter Rootless), for rooted devices that have a standard recovery (NetHunter Lite), and for rooted devices with a custom recovery for which a NetHunter-specific kernel is available (NetHunter). Official images are published by Offensive Security on its download page and are updated every quarter. NetHunter images with custom kernels are published for the most popular supported devices, such as Google Nexus, Samsung Galaxy and OnePlus. Many more models are supported, and images not published by Offensive Security can be generated using NetHunter build scripts. Kali NetHunter is maintained by a community of volunteers, and is funded by Offensive Security. Background and history Version 1.1 was released in January 2015 and added support for OnePlus devices and non-English keyboard layouts for HID attacks. Version 1.2 was released in May 2015 and added support for Nexus 9 Android tablets. Version 3.0 was released in January 2016 after a major rewrite of the application, installer, and kernel-building framework. This version also introduced support for devices running Android Marshmallow. Version 2019.2 was released in May 2019 and switched to kali-rolling as its Kali Linux container. It adopted the Kali Linux versioning and release cycle to reflect that change. With this release, the number of supported Android devices grew to over 50. Version 2019.3 was released in September 2019 and introduced the NetHunter App Store as the default mechanism for deploying and updating apps. Version 2019.4 was released in December 2019 and premiered the "Kali NetHunter Desktop Experience." Before December 2019, Kali NetHunter was only available for selected Android devices. Installing Kali NetHunter required a device that was rooted, had a custom recovery, and had a kernel built especially for Kali NetHunter. In December 2019, the "Kali NetHunter Lite" and "Kali NetHunter Rootless" editions were released to allow users of devices for which no NetHunter-specific kernels were available, and users of devices that were not rooted, to install Kali NetHunter with a reduced set of functionality. Version 2020.1 was released on 28 January 2020 and split the release into three NetHunter images: NetHunter Rootless, NetHunter Lite, and NetHunter Full. Version 2020.2 was released on 12 May 2020 and supported over 160 kernels and 64 devices. Version 2020.3 was released on 18 August 2020 and added Bluetooth Arsenal, which combines a set of Bluetooth tools in the Kali NetHunter app with pre-configured workflows, allowing an external adapter to be used for reconnaissance, spoofing, and listening to or injecting audio into devices such as speakers, headsets, watches and cars; the release also added support for the Nokia 3.1 and Nokia 6.1 phones. Version 2020.4 was released on 18 November 2020 and added a new NetHunter settings menu, a choice of boot animations, and persistent Magisk support. Features In addition to the penetration testing tools included with desktop Kali Linux, NetHunter also enables Wireless 802.11 frame injection, one-click MANA Evil Access Points, HID keyboard functionality (for Teensy-like attacks), and BadUSB man-in-the-middle (MitM) attacks. NetHunter App Store Kali NetHunter has an application store based on a fork of F-Droid with telemetry completely removed. As of 2021, the store contains about 42 applications.
See also Kali Linux Offensive Security Offensive Security Certified Professional References External links Kali Nethunter Documentation Android (operating system) software ARM operating systems Custom Android firmware Debian-based distributions Digital forensics software Free security software Linux distributions
32915603
https://en.wikipedia.org/wiki/DigiNotar
DigiNotar
DigiNotar was a Dutch certificate authority owned by VASCO Data Security International, Inc. On September 3, 2011, after it had become clear that a security breach had resulted in the fraudulent issuing of certificates, the Dutch government took over operational management of DigiNotar's systems. That same month, the company was declared bankrupt. An investigation into the hacking by the Dutch government-appointed Fox-IT consultancy identified 300,000 Iranian Gmail users as the main target of the hack (targeted subsequently using man-in-the-middle attacks), and suspected that the Iranian government was behind the hack. While nobody has been charged with the break-in and compromise of the certificates, cryptographer Bruce Schneier says the attack may have been "either the work of the NSA, or exploited by the NSA." However, this has been disputed, with others saying the NSA had only detected a foreign intelligence service using the fake certificates. The hack has also been claimed by the so-called Comodohacker, allegedly a 21-year-old Iranian student, who also claimed to have hacked four other certificate authorities, including Comodo, a claim found plausible by F-Secure, although not fully explaining how it led to the subsequent "widescale interception of Iranian citizens". After more than 500 fake DigiNotar certificates were found, major web browser makers reacted by blacklisting all DigiNotar certificates. The scale of the incident was used by some organizations, such as ENISA and AccessNow.org, to call for a deeper reform of HTTPS in order to remove the weakest-link risk that a single compromised CA can affect that many users. Company DigiNotar's main activity was as a certificate authority, issuing two types of certificate. First, they issued certificates under their own name (where the root CA was "DigiNotar Root CA"). Entrust certificates were not issued after July 2010, but some were still valid up to July 2013. Secondly, they issued certificates for the Dutch government's PKIoverheid ("PKIgovernment") program. This issuance was via two intermediate certificates, each of which chained up to one of the two "Staat der Nederlanden" root CAs. National and local Dutch authorities and organisations offering services for the government that want to use certificates for secure internet communication can request such a certificate. Some of the most-used electronic services offered by Dutch governments used certificates from DigiNotar. Examples were the authentication infrastructure DigiD and the central car-registration organisation (RDW). DigiNotar's root certificates were removed from the trusted-root lists of all major web browsers and consumer operating systems on or around August 29, 2011; the "Staat der Nederlanden" roots were initially kept because they were not believed to be compromised. However, they have since been revoked. History DigiNotar was originally set up in 1998 by the Dutch notary Dick Batenburg from Beverwijk and the Koninklijke Notariële Beroepsorganisatie (KNB), the national body for Dutch civil law notaries. The KNB offers all kinds of central services to the notaries, and because many of the services that notaries offer are official legal procedures, security in communications is important. The KNB offered advisory services to its members on how to implement electronic services in their business; one of these activities was offering secure certificates. Dick Batenburg and the KNB formed the group TTP Notarissen (TTP Notaries), where TTP stands for trusted third party.
A notary can become a member of TTP Notarissen if they comply with certain rules. If they comply with additional rules on training and work procedures, they can become an accredited TTP Notary. Although DigiNotar had been a general-purpose CA for several years, they still targeted the market for notaries and other professionals. On January 10, 2011, the company was sold to VASCO Data Security International. In a VASCO press release dated June 20, 2011, one day after DigiNotar first detected an incident on its systems, VASCO's president and COO Jan Valcke was quoted as stating "We believe that DigiNotar's certificates are among the most reliable in the field." Bankruptcy On September 20, 2011, VASCO announced that its subsidiary DigiNotar had been declared bankrupt after filing for voluntary bankruptcy at the Haarlem court. Effective immediately, the court appointed a receiver, a trustee who took over the management of all of DigiNotar's affairs as the company proceeded through the bankruptcy process to liquidation. Refusal to publish report The curator (the court-appointed receiver) did not want the report from ITSec to be published, as it might lead to additional claims against DigiNotar. The report covered the way the company operated and details of the 2011 hack that led to its bankruptcy. The report was made at the request of the Dutch supervisory agency OPTA, which had initially refused to publish it. In a freedom of information procedure started by a journalist, the receiver tried to convince the court not to allow publication of this report and to confirm OPTA's initial refusal to do so. The report was ordered to be released, and was made public in October 2012. It showed a near-total compromise of the systems. Issuance of fraudulent certificates On July 10, 2011, an attacker with access to DigiNotar's systems issued a wildcard certificate for Google. This certificate was subsequently used by unknown persons in Iran to conduct a man-in-the-middle attack against Google services. On August 28, 2011, certificate problems were observed at multiple Internet service providers in Iran. The fraudulent certificate was posted on Pastebin. According to a subsequent news release by VASCO, DigiNotar had detected an intrusion into its certificate authority infrastructure on July 19, 2011. DigiNotar did not publicly reveal the security breach at the time. After this certificate was found, DigiNotar belatedly admitted that dozens of fraudulent certificates had been created, including certificates for the domains of Yahoo!, Mozilla, WordPress and The Tor Project. DigiNotar could not guarantee that all such certificates had been revoked. Google blacklisted 247 certificates in Chromium, but the final known total of misissued certificates is at least 531. Investigation by F-Secure also revealed that DigiNotar's website had been defaced by Turkish and Iranian hackers in 2009. In reaction, Mozilla revoked trust in the DigiNotar root certificate in all supported versions of its Firefox browser, and Microsoft removed the DigiNotar root certificate from its list of trusted certificates for its browsers on all supported releases of Microsoft Windows. Chromium / Google Chrome was able to detect the fraudulent *.google.com certificate due to its "certificate pinning" security feature; however, this protection was limited to Google domains, which resulted in Google removing DigiNotar from its list of trusted certificate issuers.
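As an illustration only (this is not Chrome's actual implementation, which pins hashes of public keys that must appear in the verified certificate chain; the host name and fingerprint below are placeholders), the general idea of certificate pinning can be sketched in a few lines of Python:

import hashlib
import socket
import ssl

# host -> acceptable SHA-256 fingerprints of the DER-encoded leaf certificate
PINS = {
    "www.example.com": {"0000000000000000000000000000000000000000000000000000000000000000"},
}

def fetch_leaf_certificate(host: str, port: int = 443) -> bytes:
    """Return the DER-encoded leaf certificate presented by the server,
    after ordinary chain validation by the default SSL context."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def is_pinned(host: str, der_cert: bytes) -> bool:
    """Accept the certificate only if its fingerprint matches a stored pin."""
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    return fingerprint in PINS.get(host, set())

cert = fetch_leaf_certificate("www.example.com")
if not is_pinned("www.example.com", cert):
    # A certificate mis-issued by a rogue CA would normally pass ordinary
    # chain validation but fail this extra check, which is how Chrome's
    # built-in pins caught the DigiNotar-issued *.google.com certificate.
    print("Pin mismatch: possible man-in-the-middle or mis-issued certificate")

The pin check runs in addition to, not instead of, normal certificate chain validation, which is why it can catch certificates that chain up to a trusted but compromised authority.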
Opera always checks the certificate revocation list of the certificate's issuer, and so its developers initially stated that they did not need a security update; however, they later also removed the root from the browser's trust store. On September 9, 2011, Apple issued Security Update 2011-005 for Mac OS X 10.6.8 and 10.7.1, which removes DigiNotar from the list of trusted root certificates and EV certificate authorities. Without this update, Safari and Mac OS X do not detect the certificate's revocation, and users must use the Keychain utility to manually delete the certificate. Apple did not patch iOS until October 13, 2011, with the release of iOS 5. DigiNotar also controlled an intermediate certificate which was used for issuing certificates as part of the Dutch government’s public key infrastructure "PKIoverheid" program, chaining up to the official Dutch government certification authority (Staat der Nederlanden). Once this intermediate certificate was revoked or marked as untrusted by browsers, the chain of trust for its certificates was broken, and it was difficult to access services such as the identity management platform DigiD and the Tax and Customs Administration. GOVCERT.NL, the Dutch computer emergency response team, initially did not believe the PKIoverheid certificates had been compromised, although security specialists were uncertain. Because these certificates were initially thought not to be compromised by the security breach, they were, at the request of the Dutch authorities, kept exempt from the removal of trust – although one of the two, the active "Staat der Nederlanden - G2" root certificate, was overlooked by the Mozilla engineers and accidentally distrusted in the Firefox build. However, this assessment was rescinded after an audit by the Dutch government, and the DigiNotar-controlled intermediates in the "Staat der Nederlanden" hierarchy were also blacklisted by Mozilla in the next security update, and also by other browser manufacturers. The Dutch government announced on September 3, 2011, that it would switch to a different firm as its certificate authority. Steps taken by the Dutch government After the initial claim that the certificates under the DigiNotar-controlled intermediate certificate in the PKIoverheid hierarchy were not affected, further investigation by an external party, the Fox-IT consultancy, showed evidence of hacker activity on those machines as well. Consequently, the Dutch government decided on September 3, 2011, to withdraw its earlier statement that nothing was wrong. (The Fox-IT investigators dubbed the incident "Operation Black Tulip".) The Fox-IT report identified 300,000 Iranian Gmail accounts as the main victims of the hack. DigiNotar was only one of the available CAs in PKIoverheid, so not all certificates used by the Dutch government under its root were affected. When the Dutch government decided that it had lost its trust in DigiNotar, it took back control of the company's intermediate certificate in order to manage an orderly transition, and it replaced the untrusted certificates with new ones from one of the other providers. The much-used DigiD platform now uses a certificate issued by Getronics PinkRoccade Nederland B.V. According to the Dutch government, DigiNotar cooperated fully with these procedures.
After the removal of trust in DigiNotar, there are four remaining Certification Service Providers (CSPs) that can issue certificates under the PKIoverheid hierarchy:
Digidentity
ESG or De Electronische Signatuur
QuoVadis
KPN Certificatiedienstverlening
All four companies have opened special help desks and/or published information on their websites as to how organisations that have a PKIoverheid certificate from DigiNotar can request a new certificate from one of the four remaining providers. See also Comodo Group: Iran SSL certificate controversy Operation Shady RAT PLA Unit 61398 Stuxnet Tailored Access Operations References Further reading Fox-IT (August 2012). Black Tulip: Report of the investigation into the DigiNotar Certificate Authority breach. External links Fraudulent Certificates ‐ List of Common Names DigiNotar reports security incident Pastebin posts: Gmail.com SSL MITM ATTACK BY Iranian Government -27/8/2011 Internet death sentence for DigiNotar's Root CA! Mozilla Foundation Security Advisory 2011-34: Protection against fraudulent DigiNotar certificates Microsoft Security Advisory (2607712): Fraudulent Digital Certificates Could Allow Spoofing DigiNotar Compromise - Mozilla's Gervase Markham's account of how and why Mozilla blacklisted DigiNotar. Account by the Director of Firefox Engineering at the Mozilla Corporation of why Mozilla's removal of DigiNotar from the trusted list is not a temporary suspension, but a complete revocation of trust. by Fox-IT, showing the subsequent OCSP requests by Iranian users of DigiNotar certificates (likely attacks). Former certificate authorities Companies established in 1997 Companies disestablished in 2011
10282450
https://en.wikipedia.org/wiki/Online%20office%20suite
Online office suite
An online office suite, online productivity suite or cloud office suite is an office suite offered in the form of a web application. It is accessed online using a web browser. This allows people to work together worldwide and at any time, thereby leading to web-based collaboration and virtual teamwork. Some online office suites can be installed on-premises, while others are offered only as software as a service. Of the latter, basic versions can be offered for free, while more advanced versions are often made available for a subscription fee. The latest offerings have been created to run as pure HTML5 web pages, known as progressive web applications, no longer requiring a cloud or online connection to function. Online office suites exist as both proprietary and open-source software. Components An online office suite may include a broad set of applications, such as the following: Document creation and editing applications Word processor Spreadsheet Presentation program Notetaking Diagramming tool Raster graphics editor Publishing applications Content management system Web portal Wiki Blog Forums Collaborative applications Webmail Instant messaging (voice over IP) Calendar Management applications Data management Project management Customer relationship management Enterprise resource planning Accounting Advantages The cost is low. In most cases, there is no specific charge for using the service for users who already have access to a computer with a web browser and a connection to the Internet. There is no need to download or install software outside of the office suite’s web page, including the ongoing upgrade chores of adding new features to or eliminating bugs from the office suite. Online office suites can run on thin clients with minimal hardware requirements. Online office suites provide the ability for a group of people to share a document without the need to run their own server. There is no need to purchase or upgrade a software license. Instead, the online office suite is available in the form of software as a service. Online office suites are portable. Users can access their documents from almost any device with a connection to the Internet, regardless of which operating system they use. If the user’s computer fails, the documents are still safely stored on the remote server. Online service providers' backup processes and overall stability will generally be superior to those of most home systems. Disadvantages Access requires connectivity: if the remote server or network is unavailable, the content will also be unavailable. However, in many cases, the online suite will allow the user to regularly back up data or even provide synchronization of documents between the server and the local computer. There are speed and accessibility issues. Most of the available online office suites require a high-speed (broadband) Internet connection, which can be a problem for users who are limited to a slower connection to the Internet. The number of features available is an issue. Online office suites tend to lack the more advanced features available in their offline counterparts. In the long term, if there is a subscription charge to use the service, the ongoing subscription cost may be more expensive than purchasing offline software upfront. The user has no control over the version of the software used. If the software is changed, the user is forced to use the changed version, even if the changed version is less suited to the user.
The user is reliant on the service provider for security and privacy of their documents. References Online word processors Online spreadsheets 2000s neologisms
236298
https://en.wikipedia.org/wiki/Limbo%20%28programming%20language%29
Limbo (programming language)
Limbo is a programming language for writing distributed systems and is the language used to write applications for the Inferno operating system. It was designed at Bell Labs by Sean Dorward, Phil Winterbottom, and Rob Pike. The Limbo compiler generates architecture-independent object code, which is then interpreted by the Dis virtual machine or compiled just before runtime to improve performance. Therefore, all Limbo applications are completely portable across all Inferno platforms. Limbo's approach to concurrency was inspired by Hoare's communicating sequential processes (CSP), as implemented and amended in Pike's earlier Newsqueak language and Winterbottom's Alef. Language features Limbo supports the following features: modular programming concurrent programming strong type checking at compile and run-time interprocess communication over typed channels automatic garbage collection simple abstract data types Virtual machine The Dis virtual machine that executes Limbo code is a CISC-like VM, with instructions for arithmetic, control flow, data motion, process creation, synchronizing and communicating between processes, loading modules of code, and support for higher-level data types: strings, arrays, lists, and communication channels. It uses a hybrid of reference counting and a real-time garbage collector for cyclic data. Aspects of the design of Dis were inspired by the AT&T Hobbit microprocessor, as used in the original BeBox. Examples Limbo uses Ada-style definitions, as in:

name := type value;
name0,name1 : type = value;
name2,name3 : type;
name2 = value;

Hello world

implement Command;

include "sys.m";
    sys: Sys;
include "draw.m";
include "sh.m";

init(nil: ref Draw->Context, nil: list of string)
{
    sys = load Sys Sys->PATH;
    sys->print("Hello World!\n");
}

Books The 3rd edition of the Inferno operating system and the Limbo programming language are described in the textbook Inferno Programming with Limbo (Chichester: John Wiley & Sons, 2003) by Phillip Stanley-Marbell. Another textbook, The Inferno Programming Book: An Introduction to Programming for the Inferno Distributed System, by Martin Atkins, Charles Forsyth, Rob Pike and Howard Trickey, was started but never released. See also The Inferno operating system Alef, the predecessor of Limbo Plan 9 from Bell Labs Go (programming language), similar language from Google AT&T Hobbit, a processor architecture which inspired the Dis VM References External links Vita Nuova page on Limbo A Descent into Limbo by Brian Kernighan The Limbo Programming Language by Dennis M. Ritchie and Addendum by Vita Nuova. Inferno Programming with Limbo by Phillip Stanley-Marbell Threaded programming in the Bell Labs CSP style . . . C programming language family Concurrent programming languages Free compilers and interpreters Inferno (operating system) Virtual machines
67164976
https://en.wikipedia.org/wiki/Alyssa%20Rosenzweig
Alyssa Rosenzweig
Alyssa Rosenzweig is a software developer and software freedom activist known for her work on free software graphics drivers. Education As of 2021, she studies mathematics at Innis College at the University of Toronto as a Lester B. Pearson International Scholar. Before that, she attended Dougherty Valley High School, with enrichment classes at Harvard Summer School and the Center for Talented Youth. Career She currently works as a software engineer at Collabora and leads the Panfrost project, developing free software OpenGL drivers for the Mali GPU to support accelerated graphics in upstream Mesa, shipping out of the box on devices like the Pinebook Pro. In September 2020, she wrote a Linux client for the COVID-19 contact-tracing system used in Canada. As an Asahi Linux developer, she works on reverse-engineering the Apple GPU as part of porting Linux to the Apple M1 processor, with the goal of developing a free software Gallium3D-based OpenGL driver targeting the "AGX" architecture found in the M1 GPU. In July 2021, Rosenzweig demonstrated Debian running bare metal on the Apple M1 with a mainline kernel. Awards She is the recipient of the 2020 Award for Outstanding New Free Software Contributor and a Google Open Source Peer Bonus. References External links Personal website Living people Year of birth missing (living people) Canadian software engineers Open source people University of Toronto people
14151809
https://en.wikipedia.org/wiki/Scott%20Ambler
Scott Ambler
Scott W. Ambler (born 1966) is a Canadian software engineer, consultant and author. He is an author of books about the Disciplined Agile Delivery toolkit, the Unified Process, Agile software development, the Unified Modeling Language, and Capability Maturity Model (CMM) development. He regularly runs surveys that explore software development issues and works with organizations in different countries on their approach to software development. He also has a passion for 8-bit Atari computers and is actively researching the history of the 8-bit Atari platform. Biography Ambler received a BSc in computer science and an MA in information science from the University of Toronto. He has been working in the IT industry since the mid-1980s, with object technology since the early 1990s, and in IT methodologies since the mid-1990s. Ambler has led the development of several software processes, including the Disciplined Agile Delivery (DAD) (with Mark Lines), Agile Modeling (AM), Agile Data (AD), Enterprise Unified Process (EUP), and Agile Unified Process (AUP) methodologies. He was a Senior Consulting Partner with SA+A and then became the Chief Scientist at Disciplined Agile, which became part of the Project Management Institute, while helping organizations around the world improve their IT processes. Ambler was a contributing editor with Dr. Dobb's Journal, and has written columns for Software Development, Object Magazine, and Computing Canada. He is a speaker at a wide variety of practitioner and academic conferences worldwide. Public conferences include Agile 20XX, Agile India 20XX, Software Development, Agile Universe, UML World, JavaOne, OOPSLA, EuroSPI, and CAiSE. He is also a keynote speaker at private conferences organized by large Fortune 500 companies for their managers and IT staff. He is a Fellow of the Disciplined Agile Consortium and the International Association of Software Architects (IASA). In the past he was an Eclipse Process Framework (EPF) committer and a Jolt Judge at the Jolt Awards. Work Ambler co-developed Disciplined Agile Delivery (DAD) with Mark Lines, the Enterprise Unified Process (an extension of the Rational Unified Process), and Agile Modeling. See also Database refactoring Publications Ambler has published several books and articles. References External links Scott W. Ambler's Home Page Scott Ambler + Associates 1966 births Living people Canadian software engineers Canadian technology writers
216873
https://en.wikipedia.org/wiki/Mentor%20Graphics
Mentor Graphics
Mentor Graphics is a US-based electronic design automation (EDA) multinational corporation for electrical engineering and electronics, headquartered in Wilsonville, Oregon. Founded in 1981, the company was acquired by Siemens in 2017. Since 2021, the former Mentor Graphics has operated as a division of Siemens named Siemens EDA. Mentor Graphics was noted for distributing products that assist in electronic design automation, simulation tools for analog mixed-signal design, VPN solutions, and fluid dynamics and heat transfer tools. The company leveraged Apollo Computer workstations to differentiate itself within the computer-aided engineering (CAE) market with its software and hardware. History Mentor Graphics was founded in 1981 by Tom Bruggere, Gerry Langeler and Dave Moffenbeier. The first round of money, worth $1 million, came from Sutter Hill, Greylock, and Venrock Associates. The next round was $2 million from five venture capital firms, and in April 1983 a third round raised an additional $7 million. Apollo Computer workstations were chosen as the initial hardware platform. Based in Chelmsford, Massachusetts, Apollo was less than a year old and had only announced itself to the public a few weeks before the founders of Mentor Graphics began their initial meetings. When Mentor entered the CAE market, the company had two technical differentiators: the first was the software, as Mentor, Valid, and Daisy each had software with different strengths and weaknesses; the second was the hardware, as Mentor ran all programs on the Apollo workstation, while Daisy and Valid each built their own hardware. By the late 1980s, all EDA companies had abandoned proprietary hardware in favor of workstations manufactured by companies such as Apollo and Sun Microsystems. After a frenzied development, the IDEA 1000 product was introduced at the 1982 Design Automation Conference, though in a suite and not on the floor. Acquisitions Timeline Related In June 2008, Cadence Design Systems offered to acquire Mentor Graphics in a leveraged buyout. On 15 August 2008, Cadence withdrew this offer, citing an inability to raise the necessary capital and the unwillingness of Mentor Graphics' board and management to discuss the offer. In February 2011, activist investor Carl Icahn offered to buy the company for about $1.86 billion in cash. In November 2016, Mentor Graphics announced that it was to be acquired by Siemens for $4.5 billion, at $37.25 per share, a 21% premium on Mentor's closing price on the previous Friday. The acquisition was completed in March 2017. Mentor Graphics started to operate as "Mentor, a Siemens Business". Under the terms of the acquisition, Mentor Graphics kept its headquarters in Wilsonville with its workforce intact, and operated as an independent subsidiary. In January 2021, Mentor became a division of Siemens and was renamed Siemens EDA. Locations Mentor product development was located in the US, Taiwan, Egypt, Poland, Hungary, Japan, France, Canada, Pakistan, UK, Armenia, India and Russia. Notable persons James "Jim" Ready left Mentor in 1999 to form the embedded Linux company MontaVista. Neil Henderson joined Mentor Graphics in 2002 with the acquisition of Accelerated Technology Inc. Stephen Mellor, a leader in the UML space and co-originator of the Shlaer-Mellor design methodology, joined Mentor Graphics in 2004 following the acquisition of Project Technology. Management Walden C.
Rhines was the company's chief executive officer and president following the acquisition by Siemens, until November 2018 when he became CEO Emeritus. Tony Hemmelgarn is now the president and CEO of Siemens Digital Industries Software, which includes the former Mentor product line. Products Mentor offered the following tools: Electronic Design Automation Integrated circuit layout full-custom and schematic-driven layout (SDL) tools such as IC Station or Memory Builder, a first industry tool for rapid embedded memory design that helped to develop single- or dual-port RAMs (synchronous and asynchronous), as well as diffusion and metal read only memories (ROM) IC place and route tool: Aprisa IC Verification tools such as Calibre nmDRC, Calibre nmLVS, Calibre xRC, Calibre xACT 3D IC Design for Manufacturing tools such as Calibre LFD, Calibre YieldEnhancer, Calibre, and YieldAnalyzer Schematic editors for electronic schematics such as Xpedition Designer Layout and design tools for printed circuit boards with programs such as PADS, Xpedition Layout, HyperLynx and Valor NPI Component library management tools IP cores for ASIC and FPGA designs Embedded Systems Mentor Embedded Linux for ARM, MIPS, Power, and x86 architecture processors Real-time operating systems: Nucleus OS (acquired in 2002 when Mentor acquired Accelerated Technology, Inc.) VRTX (acquired in 1995 when Mentor bought Microtec Research) AUTOSAR implementation: Embedded implementation VSTAR in part acquired from Mecel in 2013 Configuration tooling Volcano Vehicle Systems Builder (VSB) Development Tools: Sourcery CodeBench and Sourcery GNU toolchains (acquired in 2010 when Mentor acquired CodeSourcery) Inflexion UI - (Next Device was acquired by Mentor in 2006) xtUML Design Tools: BridgePoint (acquired in 2004 when Mentor acquired Project Technology) VPN Solutions: Nucleus Point-to-Point Tunneling Protocol (PPTP) software Nucleus NET networking stack Nucleus implementation of the Microsoft Point-to-Point Encryption (MPPE) protocol Nucleus PPP software FPGA Synthesis Precision Synthesis - Advanced RTL & physical synthesis for FPGAs Electrical Systems, Cabling, and Harness Capital - a suite of integrated tools for the design, validation and manufacture of electrical systems and harnesses VeSys - a mid-market toolset for vehicle electrical system and harness design Simulation ModelSim is a hardware simulation and debug environment primarily targeted at smaller ASIC and FPGA design QuestaSim is a simulator with advanced debug capabilities targeted at complex FPGA's and SoC's. QuestaSim can be used by users who have experience with ModelSim as it shares most of the common debug features and capabilities. One of the main differences between QuestaSim and Modelsim (besides performance/capacity) is that QuestaSim is the simulation engine for the Questa Platform which includes integration of Verification Management, Formal based technologies, Questa Verification IP, Low Power Simulation and Accelerated Coverage Closure technologies. QuestaSim natively supports SystemVerilog for Testbench, UPF, UCIS, OVM/UVM where ModelSim does not. Eldo is a SPICE simulator Xpedition AMS is a virtual lab for mechatronic system design and analysis ADiT is a Fast-SPICE simulator Questa ADMS is a mixed-signal verification tool Emulation The Veloce product family enables SoC emulation and transaction-based acceleration. 
Mechanical Design Fluid Dynamics and Heat Transfer tools: Simcenter Flotherm is a Computational Fluid Dynamics tool dedicated to electronics cooling using parameterized ‘SmartParts’ for common electronic components such as fans, heatsinks, and IC packages Simcenter Flotherm XT is an electronics cooling CFD tool incorporating a solid modeler for manipulating MCAD parts. Simcenter FLOEFD is a ‘design concurrent’ CFD tool for use in early-stage product design and is embedded within MCAD systems such as Solidworks, Creo Elements/Pro, CATIA V5 and Siemens NX Thermal Characterization and Thermal Interface Material (TIM) Measurement equipment: Simcenter T3STER is a hardware product that embodies an implementation of the JEDEC JESD51-1 standard for IC package thermal characterization and is compliant with JESD51-14 for Rth-JC measurement Simcenter TERALED provides automation of the CIE 127:2007 standard providing total flux, chromaticity and correlated color temperature (CCT) for power LEDs. With T3Ster it provides thermal resistance metrics for LEDs based on the real dissipated heating power. Simcenter DYNTIM extends T3Ster, providing a dynamic thermal test station for thermal conductivity measurements of thermal interface materials (TIMs), thermal greases and gap pads. Simcenter Flomaster is a 1D or system-level CFD solution for analyzing fluid mechanics in complex pipe flow systems (from the acquisition of Flowmaster Ltd in 2012). CADRA Design Drafting is a 2-1/2D mechanical drafting and documentation package specifically designed for drafting professionals. It provides the tools needed to develop complex drawings quickly and easily (from the acquisition of the CADRA product in 2013). References Companies based in Wilsonville, Oregon 1981 establishments in Oregon 2017 mergers and acquisitions Electronic design automation companies Electronics companies established in 1981 Electronics companies of the United States Siemens American subsidiaries of foreign companies
2064113
https://en.wikipedia.org/wiki/Airware
Airware
Airware (incorporated as Unmanned Innovation, Inc.) was an American venture-funded startup that provided commercial unmanned aerial vehicles for enterprises. The company ceased operations on September 14, 2018. Airware was founded in 2011 in Newport Beach, California, by Jonathan Downey. The company relocated to San Francisco in January 2014. The company produced enterprise drones that combined hardware, on-aircraft and mobile software, and cloud services. Downey stated the company was focused on building systems for drones for commercial uses, including anti-poaching efforts, infrastructure inspections, and precision agriculture. History Airware was founded by Downey in 2011 out of a frustration with the "inflexible and costly" autopilot systems for unmanned aircraft. Airware was incubated at both Lemnos Labs and Y Combinator. In March 2016, the company announced a $30 million Series C round of financing led by Next World Capital with Andreessen Horowitz, Kleiner Perkins Caufield & Byers and Cisco Systems executive chairman John T. Chambers. Andreessen Horowitz partner Martin Casado, Kleiner Perkins Caufield & Byers partner Mike Abbott, and John T. Chambers were members of the company's board. In 2015, Airware launched a new venture fund for commercial drones to support "scaling the use of drones across a variety of commercial applications." Airware purchased Redbird, a drone analytics software company, in 2016. On September 14, 2018, Airware announced it was ceasing operations effective immediately. Products and services Airware offered enterprise drone services combining hardware, on-aircraft and mobile software, and cloud services for industries like mining, insurance, and construction. Airware offered navigation software for drones, tablet software to guide and monitor drones in flight, and cloud services to store and manage the information gathered by drones. Whereas most software is designed for specific models of drones, Airware was developing a platform that enabled compatibility across aircraft. The company previously collaborated with commercial drone manufacturers to integrate its autopilot hardware and software, then consulted directly with enterprise clients to identify solutions and to ensure regulatory compliance. References Further reading "Why Is America Losing the Commercial Drone Wars?". June/July/August 2015. Washington Monthly. "Caterpillar invests in Airware bringing drone tech to mining and construction enterprises". February 2, 2017. TechCrunch. External links Official website Aerospace companies of the United States Technology companies based in the San Francisco Bay Area 2011 establishments in California Companies established in 2011 Unmanned aerial vehicle manufacturers
349679
https://en.wikipedia.org/wiki/Technical%20University%20of%20Munich
Technical University of Munich
The Technical University of Munich (TUM or TU Munich) () is a public research university in Munich, with additional campuses in Garching, Freising, Heilbronn, Straubing, and Singapore. A technical university that specializes in engineering, technology, medicine, and the applied and natural sciences, it is organized into 11 schools and departments, and supported by numerous research centers. A University of Excellence under the German Universities Excellence Initiative, TUM is consistently ranked among the leading universities in the European Union and its researchers and alumni include 17 Nobel laureates and 23 Leibniz Prize winners. History 19th century In 1868, King Ludwig II of Bavaria founded the Polytechnische Schule München with Karl Maximilian von Bauernfeind as founding director. The new school had its premises at Arcisstraße, where it is still located today. At that time, around 350 students were supervised by 24 professors and 21 lecturers. The institution was divided into six departments: The "General Department" (mathematics, natural sciences, humanities, law and economics), the "Engineering Department" (civil engineering and surveying), the "Building Construction Department" (architecture), the "Mechanical-Technical Department" (mechanical engineering), the "Chemical-Technical Department" (chemistry), and the "Agricultural Department". In 1877, the Polytechnische Schule München became the Technische Hochschule München (TH München), and in 1901 it was granted the right to award doctorates. With an average of 2,600 to 2,800 students, the TH München became for a time Germany's largest technical university, ahead of the TH Berlin. In 1970 the institution was renamed Technische Universität München. 20th century In 1906, Anna Boyksen became the first female student to enroll in electrical engineering, after the Bavarian government had allowed women to study at technical universities in the German Empire. In 1913, Jonathan Zenneck became director of the newly created Physics Institute. Martha Schneider-Bürger became the first German female civil engineer to graduate from the university in 1927. During the Weimar Republic, the TH München had to manage with scarce resources and was drawn into radical political struggles in the years of the November Revolution, the Great Depression, and Adolf Hitler's rise to power. In the winter semester of 1930/31, the National Socialist German Students' League became the strongest faction in the General Students' Committee (AStA) for the first time. In Nazi Germany, the Führerprinzip was also imposed on the universities. The autonomy of the TH München was substantially restricted. In 1933, the newly passed Law for the Restoration of the Professional Civil Service removed "non-Aryan" staff or those married to "non-Aryans" from their positions, together with politically "undesirable" professors. Jewish students no longer had equal rights and were not allowed to be enrolled after 1938. During World War II, the TH München conducted large scale armaments research in support of the war effort. Notable professors during this time include aircraft designer Willy Messerschmitt and Walther Meissner. Basic research continued to be conducted at a high level in many institutes, as individual professors, staff members and students dared disobedience and obstruction. Nobel prize laureate Hans Fischer protected Jewish students from Nazi prosecution. He committed suicide shortly before the end of the war. 
Post World War II During the war, 80 percent of the university's facilities in Munich had been destroyed. Under these difficult conditions, teaching resumed in April 1946. In 1956, the construction of a research reactor in Garching was the beginning of the Garching campus. In 1969, the physics department building was opened there, followed in 1977 by new buildings for the chemistry, biology and geoscience departments. In 1967, a Faculty of Medicine was founded with campuses in Haidhausen (Rechts der Isar Hospital) and Schwabing. By 1968, the TH München comprised six faculties, 8,400 students, and 5,700 staff. In 1972, the Zentrale Hochschulsportanlage, a 45-hectare sports center, was built on the grounds of the 1972 Summer Olympics. In 1970, the TH München was renamed to its present name Technische Universität München. When the Bavarian Higher Education Act came into force in 1974, the six faculties were replaced by eleven departments. In 1992, the field of computer science was established as an independent Department of Informatics, having previously been part of the Department of Mathematics since 1967. 21st century In 2002, TUM Asia was founded in Singapore, in cooperation with the Nanyang Technological University and the National University of Singapore. It was the first time that a German university had established a subsidiary abroad. The Department of Sport and Health Sciences and the School of Management were established in 2002. The Weihenstephan departments were combined into the "Weihenstephan Centre of Life and Food Sciences" (WZW), which would later become the School of Life Sciences. With the establishment of the School of Education in 2009, the School of Governance in 2016 and the Department of Aerospace and Geodesy in 2018, the university comprises 15 schools and departments. Since the inception of the German Universities Excellence Initiative in 2006, TUM has won every round of evaluation and the title University of Excellence. Campuses TUM's academic faculties are divided amongst numerous campuses. Munich The historic Main Campus (Stammgelände) is located in Maxvorstadt, the central borough of Munich. Today, the departments of Architecture, Civil, Geo and Environmental Engineering, Electrical and Computer Engineering and the Schools of Management, Governance, Education are located here. The TUM School of Medicine is located at the site of its university hospital, the Rechts der Isar Hospital, in the district of Haidhausen. The TUM Department of Sport and Health Sciences is located in the Olympiapark, the former site of the 1972 Summer Olympics. Garching The campus in Garching, located around 10 km north of Munich, has grown to become the largest TUM campus. In the last decades, the departments of Physics, Chemistry, Mechanical Engineering, Informatics and Mathematics have all relocated from their former buildings in the Main Campus. They have since been joined by numerous research institutes, including the Max Planck Institutes for Plasma Physics, Astrophysics, Extraterrestrial Physics and Quantum Optics, the Forschungsreaktor München II (FRM II), the headquarters of the European Southern Observatory (ESO), and the Leibniz Supercomputing Centre, one of the fastest supercomputers in Europe. A landmark of the Garching campus is the Oskar von Miller Tower, a meteorological measurement tower with a height of 62 m. The Garching campus is connected to Munich by the Autobahn and the Munich U-Bahn. It has its own fire department. 
Weihenstephan The third TUM campus is located 35 km north of Munich in Weihenstephan, Freising. It hosts the School of Life Sciences. Other locations Additional TUM facilities are located in Ottobrunn (Department of Aerospace and Geodesy), Straubing, Heilbronn, and Singapore. TUM Asia TUM operates a subsidiary in Singapore. In 2001, the German Institute of Science and Technology (GIST) – TUM Asia was founded in partnership with the National University of Singapore and the Nanyang Technological University, offering a range of Master's programs. In 2010, TUM Asia started offering bachelor's degrees in collaboration with the Singapore Institute of Technology. In 2010, TUM and the Nanyang Technological University founded TUMCREATE, a research platform for the improvement of Singapore's public transportation. Academics Schools and departments As a technical university, the university specializes in engineering, technology, medicine, and the applied and natural sciences. Compared to a Volluniversität (a universal university), it lacks the Geisteswissenschaften, including law and many branches of the social sciences. As of 2020, the Technical University of Munich is organized into 15 schools and departments: Other institutions include the Rechts der Isar Hospital, the TUM Graduate School and the Bavarian School of Public Policy. Research The Technical University of Munich is one of the most research-focused universities in Europe. This claim is supported by relevant rankings, such as the funding ranking of the German Research Foundation and the research ranking of the Centre for Higher Education. Under the German Universities Excellence Initiative, TUM has obtained funding for multiple research clusters, including e-conversion (energy technology), MCQST (quantum mechanics), ORIGINS (astrophysics, biophysics and particle physics), and SYNERGY (neurology). In addition to the schools and departments, TUM has set up numerous research centers with external cooperation partners. Integrative research centers (IRCs) combine research with teaching. They include the TUM Institute for Advanced Study (TUM-IAS), the Munich Center for Technology in Society (MCTS), the Munich Data Science Institute (MDSI), the Munich School of Engineering (MSE), the Munich School of BioEngineering (MSB), and the Munich School of Robotics and Machine Intelligence (MSRM). Corporate research centers (CRCs) carry out research independently of the schools and departments, cooperating with industry partners for application-driven research. They include the research reactor FRM II, the Center for Functional Protein Assemblies (CPA), the Catalysis Research Center (CRC), the center for translational Cancer Research (TranslaTUM), the Walter Schottky Institute (WSI), the Hans Eisenmann-Zentrum for Agricultural Science, and the Institute for Food & Health (ZIEL). Rankings TUM is ranked first in Germany in the fields of engineering and computer science, and within the top three in the natural sciences. In the QS World Rankings, TUM is ranked 25th (worldwide) in engineering and technology, 28th in the natural sciences, 35th in computer science, and 50th place overall. It is the highest ranked German university in those subject areas. In the Times Higher Education World University Rankings, TUM stands at 38th place worldwide and 2nd place nationwide. Worldwide, it ranks 14th in computer science, 22nd in engineering and technology, and 23rd in the physical sciences. It is the highest ranked German university in those subject areas. 
In the Academic Ranking of World Universities, TUM is ranked at 52nd place in the world and 2nd place in Germany. In the subject areas of computer science and engineering, electrical engineering, aerospace engineering, food science, biotechnology, and chemistry, TUM is ranked first in Germany. In the 2020 Global University Employability Ranking of the Times Higher Education World Rankings, TUM was ranked 12th in the world and 3rd in Europe. TUM is ranked 7th overall in Reuters' 2019 European Most Innovative University ranking. The TUM School of Management is triple accredited by the European Quality Improvement System (EQUIS), the Association to Advance Collegiate Schools of Business (AACSB) and the Association of MBAs (AMBA). Partnerships TUM has over 160 international partnerships, ranging from joint research activities to international study programs. Partners include: Europe: ETH Zurich, EPFL, ENSEA, École Centrale Paris, TU Eindhoven, Technical University of Denmark, Technical University of Vienna. United States: MIT, Stanford University, Northwestern University, University of Illinois, Cornell University, University of Texas at Austin, Georgia Tech. Asia: National University of Singapore, Multimedia University, Hong Kong University of Science and Technology, Huazhong University of Science and Technology, Tsinghua University, University of Tokyo, Indian Institute of Technology Delhi, Amrita University, Sirindhorn International Institute of Technology. Australia: Australian National University, University of Melbourne, RMIT University. Through the Erasmus+ program and its international student exchange program TUMexchange, TUM students are provided with opportunities to study abroad. Student life As of winter semester 2021, 48,000 students are enrolled at TUM, of whom 36% are female and 38% are international students. Student initiatives Various initiatives are run by students, including TEDxTUM, the TUM Speaker Series (past speakers having included Ban Ki-moon, Tony Blair, Bill Gates and Eric Schmidt), and IKOM, a career fair. A notable student group is the Workgroup for Rocketry and Space Flight (WARR), which won all SpaceX Hyperloop pod competitions in 2017 through 2019. In 2021, TUM Boring won the tunnel-boring competition sponsored by The Boring Company in Las Vegas, Nevada. Student government The Student Council is the main body for university-wide student representation. It elects the General Student Committee (AStA), which represents the professional, economic and social interests of the students in accordance with the Bavarian Higher Education Act. Each school or department also has a separate Departmental Student Council. Every year, university elections are held to elect student representatives in the Senate (the university's highest academic authority) and in the faculty councils. Events The Student Council organizes a number of annual festivals. TUNIX and GARNIX are week-long open air festivals held every summer. TUNIX is held at the Königsplatz near the Munich campus, while GARNIX is held at the Garching campus. GLÜHNIX is a Christmas market held in front of the Department of Mechanical Engineering every December. MaiTUM is a Bavarian Maifest, held at the Main Campus in May each year. The Student Council also organizes numerous events, including the student-run TU Film cinema, the Hörsaal Slam, the Benefizkabarett, and the MeUP party. Departmental Student Councils also organize their own events, such as Unity, esp, and the Brückenfest. 
Campus life The Zentrale Hochschulsportanlage (ZHS) is the largest university sports facility in Germany, offering hundreds of different sports programs. Music ensembles at TUM include the TUM Chamber Orchestra, the TUM Jazz Band, the TUM Choir, and the Symphonisches Ensemble München, a full-size symphony orchestra. Notable people Nobel Prize laureates 16 Nobel Prize winners have studied, taught or researched at the TUM: 1927 – Heinrich Otto Wieland, Chemistry (bile acids) 1929 – Thomas Mann, Literature (Buddenbrooks) 1930 – Hans Fischer, Chemistry (constitution and synthesis of haemin and chlorophyll) 1961 – Rudolf L. Mößbauer, Physics (Mößbauer effect) 1964 – Konrad Emil Bloch, Physiology or Medicine (mechanism and regulation of the cholesterol and fatty acid metabolism) 1973 – Ernst Otto Fischer, Chemistry (sandwich complexes) 1985 – Klaus von Klitzing, Physics (quantum Hall effect) 1986 – Ernst Ruska, Physics (electron microscope) 1988 – Johann Deisenhofer and Robert Huber, Chemistry (crystal structure of an integral membrane protein) 1989 – Wolfgang Paul, Physics (ion trap) 1991 – Erwin Neher, Physiology or Medicine (function of single ion channels in cells) 2001 – Wolfgang Ketterle, Physics (Bose-Einstein condensation in dilute gases of alkali atoms) 2007 – Gerhard Ertl, Chemistry (chemical processes on solid surfaces) 2016 – Bernard L. Feringa (TUM-IAS fellow), Chemistry (molecular machine) 2017 – Joachim Frank, Chemistry (cryo-electron microscopy) Scientists Friedrich L. Bauer, computer scientist, known for the stack data structure Rudolf Bayer, computer scientist, known for the B-tree and Red–black tree Rudolf Diesel, engineer, inventor of the Diesel engine Claude Dornier, airplane designer Emil Erlenmeyer, chemist, known for the Erlenmeyer flask Asta Hampe, engineer, statistician and economist Carl von Linde, engineer, discoverer of the refrigeration cycle Heinz Maier-Leibnitz, physicist Walther Meissner, physicist, known for the Meissner effect Willy Messerschmitt, aircraft designer, known for the Messerschmitt fighters Oskar von Miller, engineer, founder of the Deutsches Museum Erich Rieger, astrophysicist, discoverer of the Rieger periodicities that permeate the Solar System See also Education in Germany List of universities in Germany List of forestry universities and colleges Notes and references Bibliography External links Engineering universities and colleges in Germany 1868 establishments in Germany Educational institutions established in 1868 Universities and colleges in Munich
17757397
https://en.wikipedia.org/wiki/Gerhard%20Chroust
Gerhard Chroust
Gerhard Chroust (born 23 April 1941) is an Austrian systems scientist, and Professor Emeritus for Systems Engineering and Automation at the Institute of System Sciences at the Johannes Kepler University Linz, Austria. Chroust is an authority in the fields of formal programming languages and interdisciplinary information management. Biography Gerhard Chroust was born in 1941 in Vienna, Austria. He began studying communications electronics in 1959 and received an M.A. from TU Wien in 1964, an M.A. from the University of Pennsylvania in 1965, and a PhD from TU Wien in 1974. From 1966 to 1991 Chroust worked at the IBM Laboratory Vienna. He started working at the Johannes Kepler University Linz and TU Wien in 1975 as a lecturer in microprogramming. In 1980 he became Assistant Professor in Computer Science, and lectured on "Dataflow Mechanisms" and later "Software Development Process" at the Johannes Kepler University Linz, at the University of Klagenfurt and at TU Wien. From 1992 until 2007 he was Professor for 'Systems Engineering and Automation' at the Johannes Kepler University Linz and Head of the Department (transferred into an Institute in 2004) of Systems Engineering and Automation. Chroust is Editor-in-Chief of the IFSR Newsletter, among other roles. Furthermore, he is Chairperson of the Editorial Board of the Book Series of the Austrian Computer Society (OCG), and has been an Editorial Board Member of several journals: The Journal of Microprocessors and Microsystems from 1976 until 1985, the IBM Programming Series from 1978 until 1990, Computer Standards and Interfaces from 1992 until 2005, and Systems Research and Behavioral Science since 1994. In 1997 he was also on the Academic board of the International Encyclopedia of Systems and Cybernetics, edited by Charles François. Chroust is also organizationally active as Secretary/Treasurer of the International Federation for Systems Research, Vice-President of the Austrian Society for Cybernetic Studies and Advisor to the Austrian Standards Committee. Family Gerhard Chroust has been married to Janie Chroust, née Weps, since 1968. They have two sons, Martin (born 1971) and Stefan (born 1974), and five grandchildren, Adrian (born 2001), Niklas (born 2004), Viktoria (born 2005), Rebecca (born 2007) and Ella (born 2008). Work The research interests of Gerhard Chroust are in the fields of Software Engineering (Representation and Enactment of Software Process Models, Quality and Improvement of Development Processes), Systems Science and Systemic Aspects of Engineering, Emergence, History of Computers and Information Technology, and Human Aspects of Software Development. Cybernetics and Programming Languages In 1964 Chroust wrote his master's thesis "Kybernetisches Modell: Muehlespiel Masters" about cybernetics and continued to investigate cybernetic machines for another four years. From the study of theorem proving in 1965–66, Chroust moved on to the formal definition of programming languages, in the course of which he developed the "Vienna Definition Language". He studied compiler theory and the evaluation of arithmetic expressions from 1967 until 1975. He went on to study parallelism and simulation from 1969 until 1972, microprogramming from 1975 until 1990 and compiler building from 1976 until 1982. Software engineering and information technology engineering In the 1980s the scope of Chroust's research turned towards software engineering and its environment, and towards information technology engineering in the 1990s. 
He had a special interest in software inspections and Workflow management. Publications Chroust has written, co-authored and edited a dozen books and over 400 articles, papers and other publications. A selection: 1980. Firmware, microprogramming, and restructurable hardware : proceedings of the IFIP Working Conference on Firmware, Microprogramming, and Restructurable Hardware, Linz, Austria, April 28-May 1, 1980. Edited with Jörg R. Mühlbacher. 1985. Formal models in programming : proceedings of the IFIP TC2 Working Conference on the Role of Abstract Models in Information Processing, Vienna, Austria, 30 January - 1 February 1985. Edited with E.J. Neuhold. 1989. Mikroprogrammierung und Rechnerentwurf. Oldenbourg Verlag. 1992. Modelle der Software-Entwicklung - Aufbau und Interpretation von Vorgehensmodellen. Oldenbourg Verlag Wien-München. 1994. Workflow management : challenges, paradigms, and products : CON '94. Edited with A. Benczúr. 1994. Interdisciplinary Information Management Talks 94. Edited with P. Doucek. 1995. Umfrage Workflow - Eine Momentaufnahme über die Verbreitung, Einsatz und Meinungen über Workflow in den deutschsprachigen Ländern. With J. Bergsmann. Oldenbourg Wien-München. 1995. Die Geschichte der Datenverarbeitung - Bibliographie zur Geschichtswand. With H. Zemanek. Oldenbourg Wien/München 1998. Adolf Adam - 80 Jahre, eine Anekdotensammlung, ÖSGK, Reports of the Austrian Society for Cybernetic Studies, Vienna. 2005. IFSR 2005 - The New Roles of Systems Sciences for a Knowledge-based Society, Kobe 2005. Edited with Jifa Gu. Jaist Press, Komatsu, Japan. References External links Homepage Gerhard Chroust at www.sea.uni-linz.ac.at. Dr. Gerhard Chroust. ICSCI, 2008. Article 25 Years of the IFSR: A Milestone in the Systems Sciences'', by Gerhard Chroust. "Emeritus am Institut für Systems Engineering and Automation an der JKU Linz – ein Portrait" by Helmut Malleck. pp 4–5. (in German). 1941 births University of Pennsylvania alumni Systems scientists Living people Scientists from Vienna TU Wien alumni TU Wien faculty Johannes Kepler University Linz faculty IBM employees
2129423
https://en.wikipedia.org/wiki/Enigmo
Enigmo
Enigmo (developer and publisher: Pangea Software, with Beatshapers for the PlayStation Minis version; genre: puzzle; mode: single player; released for Mac OS 9 and Mac OS X in 2003, Windows in 2003, iOS in 2008, PlayStation Minis in 2010, Windows Phone 7 in 2011, and Android in 2011). Enigmo and Enigmo 2 are respectively 2.5D and 3D arcade-style computer games for Windows, Mac OS 9, Mac OS X, iOS and PlayStation Minis developed by Pangea Software. They both involve moving certain substances into their proper containers. The music in both games was recorded by Michael Beckett. Enigmo was created in 2003 by Pangea Software and was the company's best-selling game at the time. The graphics are three-dimensional, in a sense, but gameplay is strictly limited to the horizontal and vertical axes. Liquids (water, oil, and lava) fall from droppers and will bounce around the walls of a mechanism. Gameplay consists of manipulating a limited number of dynamic items (such as bumpers, sliders, accelerators, and sponges) to affect various streams of flowing liquid so that the droplets reach their destination: tanks specific for each liquid. The player wins the level when all tanks on the level are filled with 50 drops of the appropriate liquid. In addition to the pre-designed levels, players can create their own using the game's built-in editor and download others for free from the Pangea website. In 2004, Aspyr Media ported a 2003 release of Enigmo to the Windows Mobile platform. This version was included with the Dell Axim x50v model PDA; the software is available for purchase from Dell and appears to be limited to the x50v model. A version of Enigmo for the PlayStation Store as a PSP Mini, developed and published by Beatshapers, was scheduled for release in January 2011. In May 2011, a version of Enigmo was released for Windows Phone 7. Enigmo 2 Enigmo 2 was introduced in February 2006, and expands upon the basic principles set down by the original. Water is still a substance that can be manipulated, but lava and oil were swapped for laser beams and plasma particles. The game adds the dimension of depth to gameplay, and many solutions involve rotating the camera or objects in three dimensions. The graphics were also improved. The game takes place in outer space, specifically near Earth, Mars, Saturn, and asteroids, but gravity functions as it would on Earth. Like the original, each container must have 50 units to be considered full. When all containers are simultaneously full (they lose their contents over time), the player wins the level. Enigmo for iOS During WWDC 2008, it was announced that an iOS version of Enigmo was in development and would be available for purchase for the price of US$9.99 at the launch of the App Store. The game makes use of the iPhone's multi-touch interface and is controlled entirely via touch. Players can use the swiping and pinching actions to zoom and pan and use their fingers to position the puzzle pieces. The game has been a success; it was voted the "Best iPhone Game" at WWDC '08 and has been met with very positive reviews in the App Store. Originally including only 50 levels, Enigmo has been updated to allow user-designed levels created on the desktop versions of the game; the price was dropped to US$4.99 at that time. It was then dropped to US$1.99 on October 8, 2008 and later to US$0.99 for Black Friday. 
It is currently on sale for $0.99. Enigmo for iPhone and iPod Touch is now available in Dutch, German, Spanish, Japanese, Italian, and French. JR Language Translations provided the localisation of the game to Pangea Software. Enigmo 2 for iOS On September 2, 2009, Pangea Software released Enigmo 2 for the iPhone. It is very similar to the Macintosh version, although it is controlled via multi-touch. On September 24, it was updated to support the Retina Display and the iPad. It costs $2.99 and is available from the App Store. External links Enigmo PC demo Official website Enigmo 2 Enigmo 2 for PC iPhone to PC References Pangea Software 2003 video games 2006 video games MacOS games Windows games Freeverse Inc. IOS games PlayStation Network games Puzzle video games Android (operating system) games Video games developed in the United States Windows Phone games
43140712
https://en.wikipedia.org/wiki/Tim%20Bell%20%28computer%20scientist%29
Tim Bell (computer scientist)
Timothy Clinton Bell is a New Zealand computer scientist, with interests in computer science education, computer music and text compression. In 2017, SIGCSE announced that Bell would receive the 2018 award for 'Outstanding Contribution to Computer Science Education'. Education Bell was educated at Nelson College from 1975 to 1979. He completed his PhD at the University of Canterbury, with a thesis titled A unifying theory and improvements for existing approaches to text compression. Career and research Bell joined the staff of the University of Canterbury and rose to professor and head of department. In parallel with his academic work he has developed Computer Science Unplugged, a system of activities for teaching computer science without computers. The system was actively promoted by Google in 2007. Selected works Witten, Ian H., Alistair Moffat, Timothy C. Bell, Managing gigabytes: compressing and indexing documents and images. Morgan Kaufmann, 1999. Bell, Timothy C., John G. Cleary, and Ian H. Witten. Text compression. Prentice-Hall, Inc., 1990. Witten, Ian H., and Timothy C. Bell. "The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression." IEEE Transactions on Information Theory 37, no. 4 (1991): 1085–1094. Bell, Timothy, Ian H. Witten, and John G. Cleary. "Modeling for text compression." ACM Computing Surveys 21, no. 4 (1989): 557–591. References Living people New Zealand computer scientists University of Canterbury alumni University of Canterbury faculty People educated at Nelson College Year of birth missing (living people)
1065823
https://en.wikipedia.org/wiki/Terminal%20%28macOS%29
Terminal (macOS)
Terminal (Terminal.app) is the terminal emulator included in the macOS operating system by Apple. Terminal originated in NeXTSTEP and OPENSTEP, the predecessor operating systems of macOS. As a terminal emulator, the application provides text-based access to the operating system, in contrast to the mostly graphical nature of the user experience of macOS, by providing a command-line interface to the operating system when used in conjunction with a Unix shell, such as zsh (the default shell in macOS Catalina). The user can choose other shells available with macOS, such as the KornShell, tcsh, and bash. The preferences dialog for Terminal.app in OS X 10.8 (Mountain Lion) and later offers choices for values of the TERM environment variable. Available options are ansi, dtterm, nsterm, rxvt, vt52, vt100, vt102, xterm, xterm-16color and xterm-256color, which differ from the OS X 10.5 (Leopard) choices by dropping the xterm-color and adding xterm-16color and xterm-256color. These settings do not alter the operation of Terminal, and the xterm settings do not match the behavior of xterm. Terminal includes several features that specifically access macOS APIs and features. These include the ability to use the standard macOS Help search function to find manual pages and integration with Spotlight. Terminal was used by Apple as a showcase for macOS graphics APIs in early advertising of Mac OS X, offering a range of custom font and coloring options, including transparent backgrounds. See also List of terminal emulators References Utilities for macOS Terminal emulators
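A minimal sketch, in standard C only and not specific to Terminal's own code, of how a command-line program launched in Terminal can read the TERM value chosen in the preferences dialog described above; the colour check at the end is simply an illustration of why the setting matters to terminal-aware programs:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Terminal exports the selected setting to child processes as TERM. */
    const char *term = getenv("TERM");   /* e.g. "xterm-256color" */

    if (term == NULL) {
        puts("TERM is not set");
        return 1;
    }

    printf("TERM=%s\n", term);

    /* Programs commonly key their colour output off the advertised name. */
    if (strstr(term, "256color") != NULL)
        puts("terminal advertises 256-colour support");

    return 0;
}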
18604280
https://en.wikipedia.org/wiki/UCSB%20College%20of%20Engineering
UCSB College of Engineering
The College of Engineering (CoE) is one of the three undergraduate colleges at the University of California, Santa Barbara. As of 2015, there were 150 faculty, 1,450 undergraduate students, and 750 graduate students. According to the Leiden Ranking, engineering and physical sciences at UCSB is ranked #1 among public universities for top 10% research citation impact. According to the National Research Council rankings, the UCSB engineering graduate research program in Materials was ranked #1 and Chemical Engineering ranked #5 in the nation among public universities. Departments and programs The College of Engineering comprises the following departments: Chemical Engineering (established in 1965) Computer Science (established in 1979) Electrical and Computer Engineering (established in 1962) Materials (established in 1987) Mechanical Engineering (established in 1964) The college is connected to the UCSB campus through innovative multi-disciplinary/outreach academic programs: Biomolecular Science and Engineering Program (BMSE) Computational Science and Engineering Program (CSE) Media Arts and Technology Program (MAT) Technology Management Program (TMP) Research units One of the strengths of the College of Engineering is its ability to cross traditional academic boundaries in collaborative research. Much of this work is conducted in collaboration with UCSB's interdisciplinary research centers and institutes, which include: American Institute for Manufacturing Integrated Photonics (AIM Photonics) in collaboration with SUNY Polytechnic Institute. California NanoSystems Institute Center for Bio-Image Informatics Center for Control, Dynamical-Systems, and Computation Center for Information Technology and Society Center for Nanotechnology in Society Center for Stem Cell Biology and Engineering Complex Fluid Design Consortium Dow Materials Institute Institute for Collaborative Biotechnologies Institute for Multiscale Materials Studies Institute for Energy Efficiency Interdisciplinary Center for Wide Band-Gap Semiconductors Materials Research Laboratory Mitsubishi Chemical Center for Advanced Materials National Nanofabrication Infrastructure Network Optoelectronics Research Group Solid State Lighting and Energy Electronics Center UCSB Nanofabrication Research Center Academics Undergraduate programs The college offers the B.S. degree in chemical engineering, computer engineering, computer science, electrical engineering, and mechanical engineering. The B.S. programs in chemical engineering, electrical engineering, and mechanical engineering are accredited by the Engineering Accreditation Commission of Accreditation Board for Engineering and Technology (ABET). The computer science B.S. program is accredited by the Computing Accreditation Commission of ABET. Jointly with the Department of Computer Science and the Department of Electrical and Computer Engineering, the college offers undergraduate degree in computer engineering. The curriculum for the undergraduate programs is designed to be completed in four years. Graduate programs The college offers M.S. and Ph.D. degrees in chemical engineering, computer science, electrical & computer engineering, materials science, and mechanical engineering. It also offers graduate programs in technology management, bioengineering, biomolecular science & engineering, and media arts & technology. Faculty The college has 150 faculty members, most of whom are involved in interdisciplinary research and academic programs. 
Twenty-nine faculty members are in the National Academy of Engineering and nine are elected to the National Academy of Sciences. Three faculty members have won the Nobel Prize. Alan J. Heeger, Professor of Physics and of Materials, won the 2000 Nobel Prize in Chemistry "for the discovery and development of conductive polymers", Herbert Kroemer, Professor of Electrical and Computer Engineering and of Materials, won the 2000 Nobel Prize in Physics "for developing semiconductor heterostructures used in high-speed and opto-electronics". In 2006 Shuji Nakamura, a professor of Materials and Computer Engineering, won the Millennium Technology Prize for developing blue, green, and white LEDs and the blue laser diode as well as receiving a 2014 Nobel Prize in Physics for his contribution to the invention of blue light-emitting diodes. In 2015, Professor Arthur Gossard was awarded the National Medal of Technology and Innovation by the Obama Administration. Publications Convergence is the magazine of Engineering and the Sciences at UC Santa Barbara. Sponsored by the College of Engineering and the Division of Mathematical, Life, and Physical Sciences in the College of Letters and Science, Convergence was begun in early 2005 as a three-times-a-year print publication, with the goal of bringing stories of interest from engineering and the sciences to the desks and coffee tables of a wide range of alumni, friends, partners, funding agencies, corporations, donors and potential supporters. This publication prints annually. See also University of California, Santa Barbara Engineering colleges in California References External links UCSB College of Engineering Department of Chemical Engineering Department of Computer Science Department of Electrical & Computer Engineering Department of Materials Department of Mechanical Engineering Computer Engineering Program College of Engineering Engineering universities and colleges in California Educational institutions established in 1964 University subdivisions in California 1964 establishments in California
33822913
https://en.wikipedia.org/wiki/John%20Halamka
John Halamka
John D. Halamka, M.D., M.S., is an American business executive and physician. He is president of the Mayo Clinic Platform, a group of digital and long-distance health care initiatives. Trained in emergency medicine and medical informatics, Halamka has been developing and implementing health care information strategy and policy for more than 25 years. He specializes in artificial intelligence, the adoption of electronic health records and the secure sharing of healthcare data for care coordination, population health, and quality improvement. In 2020, Halamka was elected to the National Academy of Medicine (NAM). Prior to his appointment at Mayo Clinic, he was chief information officer at Beth Israel Deaconess Medical Center. He is a practicing emergency medicine physician. As the International Healthcare Innovation Professor at Harvard Medical School, Halamka helped the George W. Bush administration, the Obama administration and governments around the world plan their health care information strategies. Early life and education Halamka was born in Des Moines, Iowa and relocated to Southern California in 1968. He attended St. James Elementary School and Palos Verdes High School. He graduated from Stanford University in 1984 with degrees in Public Policy and Medical Microbiology. While at Stanford he wrote econometrics software for Milton Friedman, performed research for the autobiography of Dr. Edward Teller, and served as teaching assistant to Presidential candidate John B. Anderson. He authored three books on technology issues, wrote a regular column for InfoWorld, and was founding technical editor for Computer Language magazine. In 1981, he formed a software startup company, Ibis Research Labs, in the basement of Frederick Terman's Palo Alto home. The company developed tax and accounting software for CP/M and early IBM PC computers; it grew to have 25 employees and was sold to senior management in 1992. He attended the joint MD/PhD program at UCSF and UC Berkeley between 1984 & 1993 and completed an Emergency Medicine residency at Harbor-UCLA Medical Center between 1993 & 1996. Information technology In November 2019, Halamka joined Mayo Clinic and was named president of Mayo Clinic Platform. He is charged with elevating Mayo Clinic to a global leadership position within digital healthcare. Halamka joined the faculty of Harvard Medical School as an instructor in 1996. He completed a post doctoral fellowship in medical informatics at Harvard and MIT in 1997. Soon after, he was selected to be the Executive Director of CareGroup Center for Quality and Value (CQV), a data analysis and business intelligence division of the Caregroup Healthcare System. In 1998, he was named chief information officer of Beth Israel Deaconess Medical Center and initiated a multi-year effort to securely web-enable clinical information systems with CareWeb. In 2001, he was hired as part-time chief information officer at Harvard Medical School. In 2004, he was named chairman of the Healthcare Information Technology Standards Panel (HITSP). In April 2011, he was named full professor at Harvard Medical School. In August 2011, he was named co-chair of the Massachusetts HIT/HIE Advisory Committee, a multi-stakeholder group which advises the Massachusetts HIT Council, the governance body which sets priorities and approves the allocation of state and federal funds for healthcare information technology spending in Massachusetts. 
In March 2012, he was named to the board of the Open Source Electronic Health Record Agent (OSEHRA), a non-profit established by the Department of Veterans Affairs, dedicated to accelerating innovation in electronic health record software. In December 2019, he was named the president of the Mayo Clinic Platform, to commence January 1, 2020. The platform is a coordinated portfolio approach to create new platform ventures to take advantage of emerging technologies such as artificial intelligence, connected healthcare devices, and natural language processing. Other interests Halamka continues his work as an emergency physician, and provides mushroom and poisonous plant consultation to the Regional Center for Poison Control and Prevention (Boston). In 2012, Halamka and his wife Kathy founded Unity Farm, an organic certified producer of fruits, vegetables, and cider. In 2016, they founded Unity Farm Sanctuary, a charitable organization providing forever homes to farm animals in need, in addition to community education and volunteer opportunities. He also writes the blog Geekdoctor: Dispatch from the Digital Health Frontier. Halamka has authored the following books: Reinventing Clinical Decision Support: Data Analytics, Artificial Intelligence, and Diagnostic Reasoning (co-authored by Paul Cerrato) The Best of CP/M Software Real World Unix Espionage in the Silicon Valley The Fifth Domain (co-authored with Giuliano Pozza) GeekDoctor: Life as a Healthcare CIO Realizing the Promise of Precision Medicine: The Role of Patient Data, Mobile Technology, and Consumer Engagement (co-authored by Paul Cerrato) The Transformative Power of Mobile Medicine: Leveraging Innovation, Seizing Opportunities and Overcoming Obstacles of mHealth (co-authored by Paul Cerrato) See also Personal Genome Project References External links Geekdoctor: Life as a Healthcare CIO blog Unity Farm Sanctuary American emergency physicians Living people 1968 births People from Des Moines, Iowa Physicians from Massachusetts Stanford University alumni Chief information officers American healthcare managers
3195170
https://en.wikipedia.org/wiki/Michael%20Deering
Michael Deering
Michael Frank Deering (born 1956) is a computer scientist, a former chief engineer for Sun Microsystems in Mountain View, California, and a widely recognized expert on artificial intelligence, computer vision, 3D graphics hardware/software, very-large-scale integration (VLSI) design and virtual reality. Deering oversaw Sun's 3D graphics technical strategy as the chief hardware graphics architect and is a co-architect of the Java 3D API, developing Java platform software. He is the inventor of deferred shading, inventor of Geometry compression, co-inventor of 3D-RAM, and the chief architect for a number of Sun's 3D graphics hardware accelerators. Many of his inventions have been patented. Deering's research endeavors have included development of correct perspective viewing equations, correcting for the optics of both human eyeballs and glass cathode ray tubes, predictive head trackers and other virtual reality interface hardware. Deering has published articles on computer graphics architectures, virtual reality systems, and 3D interface technologies. Education and early career In 1978, Deering received his bachelor's degree, and in 1981 his PhD, in computer science from the University of California, Berkeley, and is an alumnus of the Berkeley Artificial Intelligence Research (BAIR) Computer Science Division, UC Berkeley. Prior to his tenure with Sun, Deering worked for Schlumberger Laboratories in Palo Alto, California, engaged in graphics and imaging research. Published works 2002 'The SAGE graphics architecture', Michael Deering, David Naegle, ACM Transactions on Graphics, Proceedings of the 29th annual conference on Computer graphics and interactive techniques SIGGRAPH '02, volume 21, no 3, ACM Press 2000 The Java 3d API Specification with Cdrom, Henry Sowizral, Kevin Rushforth, Michael Deering, Addison-Wesley Longman Publishing Co. 1998 "Introduction to Programming with Java 3D", Henry Sowizral, Dave Nadeau, Michael Deering, Mike Bailey, Proceedings of the 25th annual conference on Computer graphics and interactive techniques, SIGGRAPH '98, ACM Press 1995 "Geometry compression", Michael Deering, Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, SIGGRAPH '95, vol 29, p 13-20, ACM Press, 1992 "High resolution virtual reality", Michael Deering, Proceedings of the 19th annual conference on Computer graphics and interactive techniques SIGGRAPH '92, vol 26, no 2, p 195-202, ACM Press 1988 "The triangle processor and normal vector shader: a VLSI system for high performance graphics", Michael Deering, Stephanie Winner, Bic Schediwy, Chris Duffy, Neil Hunt, Proceedings of the 15th annual conference on Computer graphics and interactive techniques, SIGGRAPH '88, vol 22, no 4, p 21-30, ACM Press 1986 Database Support for Storage of AI Reasoning Knowledge. In Expert Database Systems, M. Deering, J. 
Faletti, Benjamin Cummings (publisher) External links MichaelFrankDeering.com - Michael Deering homepage Berkeley.edu - 'CS 294-4: Intelligent DRAM (IRAM) Lecture 4: 3DRAM (renamed FB RAM)', Michael Deering (January 24, 1996) - Geometry Compression - Sam Liang PSU.edu - Geometry Compression, Michael Deering, Pennsylvania State University Stanford.edu - 'Program in Human-Computer Interaction', Stanford University, (October 21, 1994) Sun.com - 'The Java 3D API Specification', Henry Sowizral, Kevin Rushforth, Michael Deering UMich.edu - 'Revival of the Virtual Lathe', University of Michigan Virtual Reality Laboratory 1956 births Living people American computer programmers American computer scientists American technology writers Artificial intelligence researchers Computer graphics researchers Computer vision researchers University of California, Berkeley alumni
514067
https://en.wikipedia.org/wiki/PowerPC%20G4
PowerPC G4
PowerPC G4 is a designation formerly used by Apple and Eyetech to describe a fourth generation of 32-bit PowerPC microprocessors. Apple has applied this name to various (though closely related) processor models from Freescale, a former part of Motorola. Motorola's and Freescale's proper name for this family of processors is PowerPC 74xx. Macintosh computers such as the PowerBook G4 and iBook G4 laptops and the Power Mac G4 and Power Mac G4 Cube desktops all took their name from the processor. PowerPC G4 processors were also used in the eMac, first-generation Xserves, first-generation Mac Minis, and the iMac G4 before the introduction of the PowerPC 970. Apple completely phased out the G4 series for desktop models after it selected the 64-bit IBM-produced PowerPC 970 processor as the basis for its PowerPC G5 series. The last desktop model that used the G4 was the Mac Mini, which now comes with an Apple M1 processor. The last portable to use the G4 was the iBook G4, which was replaced by the Intel-based MacBook. The PowerBook G4 has been replaced by the Intel-based MacBook Pro. PowerPC G4 processors are also popular in other computer systems, such as the AmigaOne series of computers and the Pegasos from Genesi. Besides desktop computers, the PowerPC G4 is popular in embedded environments such as routers, telecom switches, imaging, media processing, avionics and military applications, where one can take advantage of AltiVec and its SMP capabilities. PowerPC 7400 The PowerPC 7400 (code-named "Max") debuted in August 1999 and was the first processor to carry the "G4" moniker. The chip operates at speeds ranging from 350 to 500 MHz and contains 10.5 million transistors, manufactured using Motorola's 0.20 μm HiPerMOS6 process. The die measures 83 mm² and features copper interconnects. Motorola had promised Apple parts with speeds up to 500 MHz, but yields proved too low initially. This forced Apple to take back the advertised 500 MHz models of the Power Mac G4. The Power Mac series was downgraded abruptly from 400, 450, and 500 MHz processor speeds to 350, 400, and 450 MHz while problems with the chip were ironed out. The incident generated a rift in the Apple-Motorola relationship, and reportedly caused Apple to ask IBM for assistance to get the production yields up on the Motorola 7400 series line. The 500 MHz model was reintroduced on February 16, 2000. Design Much of the 7400 design was done by Motorola in close co-operation with Apple and IBM. IBM, the third member of the AIM alliance, designed the chip together with Motorola in its Somerset design center, but chose not to manufacture it, because it did not see a need at the time for the vector processing unit. Ultimately, the G4 architecture design contained a 128-bit vector processing unit labelled AltiVec by Motorola, while Apple marketing referred to it as the "Velocity Engine". The PowerPC 970 (G5) was the first IBM-manufactured CPU to implement VMX/AltiVec, for which IBM reused the old 7400 vector-unit design it still had from its joint work with Motorola in Somerset. The Xenon CPU in the Xbox 360 also features VMX, with added proprietary extensions made especially for Microsoft. POWER6, introduced in 2007, is IBM's first "big iron" CPU to also implement VMX. With the AltiVec unit, the 7400 microprocessor can do four-way single precision (32-bit) floating point math, or 16-way 8-bit, 8-way 16-bit or four-way 32-bit integer math in a single cycle. 
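A minimal sketch, assuming GCC or Clang targeting a G4-class PowerPC and compiled with the -maltivec flag, of how this four-way single-precision math is reached from C through AltiVec intrinsics (the variable names are purely illustrative):

#include <altivec.h>
#include <stdio.h>

int main(void)
{
    /* Two 128-bit AltiVec values, each holding four 32-bit floats. */
    vector float a = {1.0f, 2.0f, 3.0f, 4.0f};
    vector float b = {10.0f, 20.0f, 30.0f, 40.0f};

    /* A single vector instruction adds all four lanes at once. */
    vector float sum = vec_add(a, b);

    /* Store the result to 16-byte-aligned memory so it can be printed. */
    float out[4] __attribute__((aligned(16)));
    vec_st(sum, 0, out);

    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}

Applications such as the ones mentioned below obtain their speed-up by expressing inner loops in this style, so that each iteration processes four floats (or up to sixteen 8-bit integers) per instruction.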
Furthermore, the vector processing unit is superscalar, and can do two vector operations at the same time. Compared to Intel's x86 microprocessors at the time, this feature offered a substantial performance boost to applications designed to take advantage of the AltiVec unit. Some examples are Adobe Photoshop which utilises the AltiVec unit for faster rendering of effects and transitions, and Apple's iLife suite which takes advantage of the unit for importing and converting files on the fly. Additionally, the 7400 has enhanced support for symmetric multiprocessing (SMP) thanks to an improved cache coherency protocol (MERSI) and a 64-bit floating point unit (FPU), derived in part from the 604 series. The 603 series had a 32-bit FPU, which took two clock cycles to accomplish 64-bit floating point arithmetic. The PowerPC G4 family supports two bus technologies, the older 60x bus which it shares with the PowerPC 600 and PowerPC 7xx families, and the more advanced MPX bus. Devices that utilize the 60x bus can be made compatible with either 6xx or 7xx processors, enabling a wide variety of offerings and a clear and cheap upgrade path while keeping compatibility issues at a minimum. There are primarily two companies manufacturing system controllers for 7xx and 7xxx computers, Tundra with their Tsi1xx controllers and Marvell with their Discovery controllers. PowerPC 7410 The PowerPC 7410 "Nitro" is a low-power version of the 7400 but it was manufactured at 180 nm instead of 200 nm. Like the 7400 it has 10.5 million transistors. It debuted in the PowerBook G4 on 9 January 2001. The chip added the ability to use all or half of its cache as high-speed, non-cached memory mapped to the processor's physical address space as desired. This feature was used by embedded systems vendors such as Mercury Computer Systems. PowerPC 7450 The PowerPC 7450 "Voyager"/"V'ger" was the only major redesign of the G4 processor. The 33-million transistor chip extended significantly the execution pipeline of 7400 (7 vs. 4 stages minimum) to reach higher clock speeds, improved instruction throughput (3 + branch vs. 2 + branch per cycle) to compensate for higher instruction latency, replaced an external L2 cache (up to 2 MB 2-way set associative, 64-bit data path) with an integrated one (256 KB 8-way set associative, 256-bit data path), supported an external L3 cache (up to 2 MB 8-way set associative, 64-bit data path), and featured many other architectural advancements. The AltiVec unit was improved with the 7450; instead of executing one vector permute instruction and one vector ALU (simple int, complex int, float) instruction per cycle like 7400/7410, the 7450 and its Motorola/Freescale-followers can execute two arbitrary vector instructions simultaneously (permute, simple int, complex int, float). It was introduced with the 733 MHz Power Mac G4 on 9 January 2001. Motorola followed with an interim release, the 7451, codenamed "Apollo 6", just like the 7455. Early AmigaOne XE computers were shipped with the 7451 processor. The enhancements to the 745x design gave it the nicknames G4e or G4+ but these were never official designations. PowerPC 7445 and 7455 The PowerPC 7455 "Apollo 6" was introduced in January 2002. It came with a wider, 256-bit on-chip cache path, and was fabricated in Motorola's 0.18 μm (180 nm) HiPerMOS process with copper interconnects and SOI. It was the first processor in an Apple computer to pass the 1 GHz mark. The 7445 is the same chip without the L3 cache interface. 
The 7455 is used in the AmigaOne XE G4, and the dual 1 GHz Power Mac G4 (Quicksilver 2002) PowerPC 7447 and 7457 The PowerPC 7447 "Apollo 7" is slightly improved from the 7450/55, it has a 512 KB on-chip L2 cache and was manufactured in a 130 nm process with SOI, hence drawing less power. It has 58 million transistors. With the 7447A, which introduced an integrated thermal diode as well as DFS (dynamic frequency scaling) Freescale was able to reach a slightly higher clock. The 7457 has an additional L3 cache interface, supporting up to 4 MB of L3 cache, up from 2 MB supported by the 7455 and 7450. However, its frequency scaling stagnated when Apple chose to use the 7447 instead of the 7457, despite that the 7457 was the L3 cache-enabled successor to the L3 cache-enabled 7455 that Apple used before. The only companies that offer the 7457 in the form of upgrades for the Power Mac G4, iMac G4, and Power Mac G4 Cube are Giga Designs, Sonnet Technology, Daystar Technology (they use the 7457 only for iMac G4 upgrades) and PowerLogix. The Pegasos computer platform from Genesi also uses 7447 in its Pegasos-II/G4. The 7457 is often used to repair an AmigaOne XE CPU module; some AmigaOS software with the 7457 installed may mistake the AmigaOne for a Pegasos II computer as there were never any official 7457 boards released by Eyetech. PowerPC 7448 The PowerPC 7448 "Apollo 8" is an evolution of the PowerPC 7447A announced at the first Freescale Technology Forum in June 2005. Improvements were higher clock rates (up to 1.7 GHz) officially and easily up to 2.4 GHz through overclocking, a larger 1 MB L2 cache, a faster 200 MHz front side bus, and lower power consumption (18 W at 1.7 GHz). It was fabricated in a 90 nm process with copper interconnects and SOI. PowerPC 7448 users were: Daystar for their High-Res Aluminum PowerBook G4 upgrades (Daystar's Low-Res Aluminum PowerBook G4 upgrades used the 7447A, not the 7448) NewerTech for their Power Mac G4 upgrades PowerLogix for their Power Mac G4 Cube upgrade Cisco in NPE-G2 network processor module for their 7200VXR routers Cisco 7201 Router Extreme Engineering Solutions for their XPedite6244 single board computer Aitech for their C104 CompactPCI single board computer Emerson Network Power for their PmPPC7448 PMC module e600 In 2004, Freescale renamed the G4 core to e600 and changed its focus from general CPUs to high-end embedded SoC devices, and introduced a new naming scheme, MPC86xx. The 7448 was to be the last pure G4 and it formed the base of the new e600 core with a seven-stage, three-issue pipeline, and a powerful branch prediction unit which handles up to sixteen instructions out-of-order. It has an enhanced AltiVec unit capable of limited out-of-order execution and a 1 MB L2 cache. Device list This list is a complete list of known G4 based designs (excluding newer core e600 designs). The pictures are illustrations and not to scale. References Diefendorff, Keith (25 October 1999). "PowerPC G4 Gains Velocity". Microprocessor Report. pp. 10–15. Gwennap, Linley (16 November 1998). "G4 Is First PowerPC With AltiVec". Microprocessor Report. Halfhill, Tom R. (5 July 2005). "PowerPC Ain't Dead Yet". Microprocessor Report. pp. 13–15. G4 G4 Motorola microprocessors Superscalar microprocessors 32-bit microprocessors
40177692
https://en.wikipedia.org/wiki/Timothy%20C.%20Lethbridge
Timothy C. Lethbridge
Timothy Christian Lethbridge (born 1963) is a British/Canadian computer scientist and Professor of Computer Science and Software Engineering at the University of Ottawa, known for his contributions in the fields of software engineering, knowledge management and computer animation, and the development of Umple. Biography Born in London in 1963, Lethbridge grew up in Denmead and attended St John's College in Portsmouth until he immigrated with his family to Canada in 1975. He received his BSc in 1985 and his MSc in 1987 in Computer Science from the University of New Brunswick. In 1994, he received his PhD in Computer Science and Artificial Intelligence from the University of Ottawa under the supervision of Douglas Skuce for a thesis about tools for knowledge management, entitled "Practical Techniques for Organizing and Measuring Knowledge." In 1983, still studying, he started working as a programmer and analyst for the Government of New Brunswick, where he assisted in the development of software for statistics, health insurance programs, and management information applications. At the university he also taught courses in Fortran programming and interactive computing. After graduation in 1987 he became a researcher at Bell-Northern Research, where he developed software for computer-aided design applications. From 1990 to 1995 he worked as a consultant on multiple research projects. In 1994 Lethbridge started his academic career as Assistant Professor at the Department of Computer Science of the University of Ottawa, became Associate Professor in 2001, and has been Professor of Computer Science and Software Engineering since 2005. He specializes in "Human Computer Interaction, Software Modeling, UML, Object Oriented Design, Software Engineering Education". Publications Lethbridge has published one textbook and over 100 articles. Books: 1994. Practical Techniques for Organizing and Measuring Knowledge. PhD thesis, University of Ottawa. 2001. Object Oriented Software Engineering: Practical Software Development using UML and Java. With Robert Laganière. 2nd ed. 2005. Articles, a selection: Anquetil, Nicolas, and Timothy C. Lethbridge. "Experiments with clustering as a software remodularization method." Reverse Engineering, 1999. Proceedings. Sixth Working Conference on. IEEE, 1999. Lethbridge, Timothy C. "What knowledge is important to a software professional?" Computer 33.5 (2000): 44-50. Forward, Andrew, and Timothy C. Lethbridge. "The relevance of software documentation, tools and technologies: a survey." Proceedings of the 2002 ACM symposium on Document engineering. ACM, 2002. Lethbridge, Timothy C., Janice Singer, and Andrew Forward. "How software engineers use documentation: The state of the practice." IEEE Software 20.6 (2003): 35-39. Lethbridge, Timothy C., Susan Elliott Sim, and Janice Singer. "Studying software engineers: Data collection techniques for software field studies." Empirical Software Engineering 10.3 (2005): 311-341. References External links Dr. Timothy C. Lethbridge at University of Ottawa Timothy Lethbridge's ideas on Technology and Politics blog 1963 births Living people British computer scientists Canadian computer scientists People of Bell Canada University of New Brunswick alumni University of Ottawa alumni University of Ottawa faculty
8750065
https://en.wikipedia.org/wiki/Long-range%20surveillance%20company
Long-range surveillance company
In the United States Army, a long-range surveillance company (LRS-C) is a company with a special reconnaissance role in an intelligence brigade. Organization The company consists of a headquarters platoon, a communications platoon, and three LRS platoons (each made up of a headquarters section and six surveillance teams). All non-commissioned officers are airborne and Ranger qualified. All other personnel in the company are Pre-Ranger and airborne qualified. All company personnel undergo a week-long assessment and selection in addition to psychological evaluation. There are two types of selection: LRS selection and LRS support personnel selection. Although all LRS team members can infiltrate via land, sea or air, each platoon specializes in one method of infiltration. Each platoon is broken down into one of three infiltration specialties: water, high altitude low opening (HALO), and desert/mountain. A. Headquarters platoon. The headquarters platoon contains two sections for the command and control of the company in the areas of administration, logistics, and operations. (1) Headquarters section. This section contains the personnel necessary for the command and control of the company and supply support. (2) Operations section. The personnel in this section plan and control the employment of the teams, coordinate insertion and extraction of the teams to include external support, receive and report information from committed teams, and maintain the operational status of all teams. Liaison duties and planning for future operations are important functions of the operations section. B. Communications platoon. The communications platoon operates the base radio stations. It helps the operations section plan and maintain communication with deployed teams. It works with the operations section or separately to relay information from deployed teams. It also performs unit maintenance on communication equipment organic to the unit. The platoon has a headquarters section and four base radio stations. (1) Headquarters section. The personnel in this section establish command and control over assigned communications elements. They coordinate and set up communication procedures, transmission schedules, frequency allocation, and communication sites. They issue and control encryption code devices and materials. They ensure continuous communication between deployed teams and base radio stations. They provide communication support to detached LRS platoons. They augment division long-range surveillance detachments (LRS-Ds) with communication support when directed. They also provide unit maintenance for company communication equipment. (2) Base radio stations. The four base radio stations maintain communication between the operations base and the deployed teams. They operate on a 24-hour basis to make sure all message traffic to and from teams is processed immediately. C. Long-range surveillance platoons. These three platoons contain a headquarters element and six surveillance teams each. (1) Headquarters section. This section contains the personnel necessary for command, control, and training of the platoon. (2) Surveillance teams (LRS teams). Each team consists of a team leader, an assistant team leader, a senior scout, a scout observer, an RTO (radio telephone operator) and an assistant RTO. The teams obtain and report information about enemy forces within the corps' area of interest. LRS teams also conduct combat search and rescue missions. 
The teams can operate independently with little or no external support in all environments. Each team member is cross-trained in the duties and responsibilities of the others. Every team member is responsible for at least two duties and responsibilities. LRS team members are often seen as the most elite infantrymen. They are lightly armed with limited self-defense capabilities. To be easily transportable, they are equipped with lightweight, man-portable equipment. They are limited by the amount of weight that they can carry or cache. Because all team members are airborne qualified, all means of insertion are available to the commander when planning operations. External links FM 7-93 Military intelligence collection Battlefield surveillance units and formations of the United States Army
47652080
https://en.wikipedia.org/wiki/Digital%20Single%20Market
Digital Single Market
On 6 May 2015, the European Commission, led at the time by Jean-Claude Juncker, communicated the Digital Single Market strategy, which intends to remove virtual borders, boost digital connectivity, and make it easier for consumers to access cross-border online content. The Digital Single Market, which is one of the European Commission's 10 political priorities, aims to make the EU's single market fit for the digital age – moving from 28 national digital markets to a single one and then opening up digital services to all citizens and strengthening business competitiveness in the digital economy. In other words, the Digital Single Market is a market characterized by ensuring the free movement of people, services and capital and allowing individuals and businesses to seamlessly access and engage in online activities irrespective of their nationality or place of residence. Fair competition conditions and a high level of protection of personal and consumer data are applied. Building a data economy, boosting competitiveness through interoperability and standardisation, and creating an inclusive e-society can realise the growth potential of the digital economy. According to the commission, investment, the acknowledgement of the international dimension, and effective governance are required to advance the Digital Single Market. A fully operational Digital Single Market could contribute 415 billion euros per year to the economy and would also create hundreds of thousands of new jobs. The Digital Single Market Strategy includes a series of targeted actions based on 3 pillars. From these 3 pillars come 16 key actions that constitute the Digital Single Market Strategy. The Three Pillars The commission decided to put in place a strategy for the period 2014–2019 called "The Digital Single Market Strategy" (DSMS). It aims to give citizens and businesses better access to the digital world. This strategy is based on 3 pillars, each with 3 actions, and with the objective of achieving 16 measures. The first pillar: access It will attempt to implement better access for consumers (individuals and businesses) to the digital world across Europe. The first objective of this first pillar will involve a number of legislative proposals. They will regulate cross-border markets in order to reduce the differences between Member States and also to allow for a "harmonisation of the different VAT regimes". Indeed, differences in contract law hinder the smooth flow of trade in the single market. To address this, the commission has proposed two directives (2015) to ensure that "consumers who seek to purchase goods or services in another EU country, whether online or by visiting a shop in person, are not discriminated against in terms of price, conditions of sale or payment arrangements, unless objectively justified on grounds such as VAT or certain legal provisions in the public interest". A second objective will concern parcel delivery services throughout Europe. However, an exception to this proposal has been made in order not to impose a disproportionate burden on small businesses. They will not be obliged "to deliver throughout the European Union". The third objective will be to address problems of consumer discrimination. It is foreseen that national authorities will have the possibility to check whether sites use geographical blocking. It will therefore be ensured that no consumers can be discriminated against on any basis. 
The second pillar: environment It will attempt to provide a favourable environment in which digital networks and innovative services can develop under conditions of fair competition. At the same time, the protection of personal data will be strengthened. The first objective of this pillar is to transform the market so that it becomes simpler and more sustainable. The environment of the European common market must be conducive to fair competition between traditional telecoms companies and new internet players. The second objective will involve making access to networks and services more reliable but also affordable. Citizens and businesses must have confidence in these networks, especially in terms of preserving their fundamental right to privacy. To achieve this, it was necessary to reform a series of European regulations, especially in the field of telecommunications, but also in terms of cybersecurity and everything that concerns audiovisual media services. The third objective is to enable the market to adapt to changes in its environment. As the market is based on a sharing economy, it must adapt its functioning to this. The construction of this pillar is already well underway, particularly with regard to cybersecurity and telecommunications. However, the most important measure, concerning the revision of the directive on privacy and electronic communications, is barely underway. The third and final pillar: maximising the growth potential of the European digital economy The first objective of this pillar is to foster the digital switchover of industry and services in all economic sectors in Europe. It will also be necessary to stimulate investment through strategic partnerships and networks. The second objective will be access to data and capital. This needs to be in place in order to achieve sustainable and inclusive growth. The third objective will be around data protection, free movement of data and the creation of a European cloud. In order for all these objectives to be achieved, it is essential that the first two pillars are in place. Role of the European Parliament The Parliament has played a key role in the restarting of the internal market, and is an ardent advocate and agenda setter for the DSM. Before the launch of the Digital Single Market Strategy (DSMS) in May 2015, the Parliament had already been adopting resolutions on the Digital Single Market. For example, on 20 April 2012, it adopted a resolution on a competitive digital single market and e-government. The Parliament also adopted a resolution in July 2013 to complete the Digital Single Market. In January 2016, to respond to the DSM Strategy proposal, the European Parliament adopted a resolution named "Towards a Digital Single Market Act". The goals of this proposal were notably to ask the European Commission to end geo-blocking practices, to enhance access to goods and services for European consumers, and to establish equivalent consumer protection whether goods are bought offline or online. Over the years, the EP has been building the DSM through extensive legislative work. The legislation covers a wide range of digital concerns, from the elimination of roaming charges and the prohibition of unjustified geo-blocking operations to the adoption of a directive on actions to "reduce the cost of deploying high-speed electronic communications networks" and a directive on "copyright and related rights in the Digital Single Market". 
According to the EP, the institution's legislative achievements in the DSM area contribute 177 billion euros every year to economic growth in the European Union. The principal gains come from the European electronic communications and services area (86.1 billion euros), the data flows and artificial intelligence area (51.6 billion euros) and the single digital gateway area (20 billion euros). Objectives and Funding Objectives The aim of the Digital Single Market is to modernize regulations and make them more homogeneous on subjects such as consumer protection, copyright, and online sales. The European Commission sets out five objectives: Boost e-commerce in the EU by tackling geo-blocking and making cross-border parcel delivery more affordable and efficient. Modernize European copyright rules to adapt them to the digital age. Update EU audiovisual regulations and work with platforms to create a fairer environment for all, promote European films, protect children, and better fight against hate speech. Strengthen Europe's capacity to respond to cyber-attacks by strengthening ENISA, the EU agency responsible for cybersecurity, and create an effective European cyber deterrence with a criminal-law response in this area, to better protect businesses, public institutions, and European citizens. Help businesses of all sizes, researchers, citizens, and public authorities to make the most of new technologies by ensuring that everyone has the necessary digital skills and by funding European research activities in the fields of health and high-performance computing. Funding To support this, the European Commission will use the full range of instruments and funding possibilities available. However, the full support of Member States, the European Parliament, the Council, and stakeholders is essential. For the benefits of the digital revolution to be within everyone's reach, Europe needs a regulatory framework applicable to electronic communications that promotes the deployment of infrastructures capable of functioning seamlessly throughout the EU, including in rural areas, while preserving effective competition. Much of the necessary investment will come from the private sector, based on an improved regulatory environment. The digital single market demands high-quality infrastructure. The EU is already mobilizing investments of around 50 billion EUR from the public and private sectors for the digital transformation of industry. It has also released 21.4 billion EUR from the European Structural and Investment Funds (ESI Funds), which will be available for the digital sector once national and regional strategies for digital growth are put in place, strengthening the link between policies and funding targets at all levels. However, more investment needs to be made in digital technology, especially in areas where digital needs are far greater than the capacities of any of the Member States acting in isolation. Efficiency can be gained by combining and complementing EU funding programs with other sources of public and private funding, notably through the European Fund for Strategic Investments (EFSI). In April 2017, investments in the digital sector associated with the EFSI represented around 17.8 billion EUR, including public and private funding (i.e., 10% of the total amount of investments mobilized at that date). However, current funding instruments have limitations when applied to large mission-oriented initiatives.
Therefore, the commission will explore ways to put in place a framework to support the development of a pan-European high-performance computing and data infrastructure. Combining different EU funding sources with national and private funding would be the best way to stimulate investment. Main achievements The following measures have removed or reduced some of the main impediments to cross-border trade: Prohibiting unjustified geographical blocking The Geo-blocking regulation adopted by the EU in February 2018 prohibits any attempt to restrict consumers' access to goods and services on e-commerce websites on the basis of their nationality or country of residence or establishment. Customers are entitled to order products and services irrespective of their place of connection and without having to pay additional fees. End of roaming charges The elimination of retail roaming charges, effective June 2017, is a second achievement of the Digital Single Market strategy. It means that mobile users periodically travelling in the EU are able to call, text and access the Internet on their domestic tariff. Improving cross-border parcel delivery Cross-border parcel delivery is another part of the commission's strategy for achieving a Digital Single Market. The relevant regulation, which aims to improve price transparency and to facilitate the assessment of certain high cross-border tariffs, entered into force on 22 May 2018. On the one hand, it implies increased documentation: the general conditions of sale of the affected parcel delivery companies and a detailed description of their complaints procedure have to be submitted to the national regulator. On the other hand, it is about improved price transparency: each national regulator has to publish the public list of tariffs and the terminal rates applicable to items originating from other Member States on a dedicated website and keep them updated. If parcel delivery companies do not comply with the regulation, it is up to individual Member States to enforce penalties. Providing portability of online content services From 1 April 2018, consumers are able to access digital services that they have already paid for, such as online distribution services for films and TV series, when travelling in a different Member State, without restriction and at no extra cost. The simplification of VAT declaration There is a one-stop shop for VAT registration. The aim is to avoid having to deal with a number of different national tax systems; VAT rules are also simplified in order to incentivize cross-border trade, combat VAT fraud (especially by non-EU actors), ensure fair competition for EU businesses, and provide equal treatment for online publications. Revision of the consumer protection cooperation regulation Addressing unlawful practices and identifying rogue traders are the means by which national enforcement authorities protect consumers. To detect the identity of the responsible trader, information can be requested from domain registrars and banks. Moreover, the authorities can check geographical discrimination or after-sales conditions by carrying out mystery shopping, and can order the immediate take-down of websites hosting scams. Platform-to-business (P2B) Regulation In its mid-term evaluation of the Digital Single Market Strategy, the Commission declared that it would present actions on unfair contracts and trading practices in platform-to-business relations.
The origin of this P2B regulation can notably be found in the growing significance of online intermediary platforms. Moreover, "online intermediary platforms function as two-sided markets: the more suppliers that use a certain platform, the more attractive it becomes for customers; the more customers use a certain platform, the more attractive it becomes for suppliers". For Benhamou, the characteristics of two-sided markets make them harder to regulate because an intervention on one side of the market can create undesired or counterproductive effects on the other side. This two-sided market aspect creates a concentration of supply and demand on a restricted number of online intermediary platforms, and because a large part of B2C transactions takes place on those platforms, "it becomes very important for traders to have access to such a platform and as such, it places them in a weak position vis-à-vis platforms. Platforms may abuse their relatively stronger position in order to impose unfair terms and conditions upon traders and/or to use unfair commercial practices towards them". The Platform-to-business regulation was adopted on 20 June 2019 and its application started on 12 July 2020, the date by which "platforms had to ensure they complied with the P2B Regulation". For the Commission, the regulation "is the first ever set of rules for creating a fair, transparent and predictable business environment for smaller businesses and traders on online platforms". The regulation also aims to fill the gap between EU and national law and "to establish a fair, predictable, sustainable and trusted online business environment, while maintaining and further encouraging an innovation-driven ecosystem around online platforms across the EU". For Cauffman, "the protection offered largely remains limited to transparency obligations imposed on platform operators. It remains to be seen whether this is sufficient to offer professional sellers the desired protection". The regulation is important because "according to a recent European Commission study, 46% of all business users experience problems with online platforms in the course of their business relationships, with an impact on the EU economy in the range of €2 billion to €19.5 billion a year". Specific policies of the DSM (Digital Services Act and Digital Markets Act) The terms Digital Services Act and Digital Markets Act were first introduced in the European Commission proposal of 15 December 2020. The purpose of this new proposal was to tackle the challenges which had not been addressed in the E-Commerce Directive of 2000. According to the European Commission, the Digital Services Act and Digital Markets Act have two main goals. "First is to create a safer digital space in which the fundamental rights of all users of digital services are protected. The second is to establish a level playing field to foster innovation, growth, and competitiveness, both in the European Single Market and globally". The Digital Markets Act therefore aims to raise the level of competition within the European Union's digital market by preventing the digital giants from misusing their position on the market, while the Digital Services Act is more focused on consumer protection. "The Digital Services Act (DSA) includes rules for online intermediary services, which millions of Europeans use every day.
The obligations of different online players match their role, size and impact in the online ecosystem. The Digital Markets Act (DMA) establishes a set of narrowly defined objective criteria for qualifying a large online platform as a so-called "gatekeeper". Companies which meet the following criteria can be considered large systemic online platforms, or gatekeepers: "strong economic position, significant impact on the internal market and is active in multiple EU countries strong intermediation position, meaning that it links a large user base to a large number of businesses (or is about to have) an entrenched and durable position in the market, meaning that it is stable over time". One of the main obstacles which large platforms create for market competitiveness is the strengthening of market entry barriers, a typical feature of market power abuse. In order to restrain the market power of the digital giants, particular rules will be established for their routine procedures. "For instance, the gatekeepers are supposed to allow third parties to inter-operate with the gatekeeper's own services in certain specific situations, allow their business users to access the data that they generate in their use of the gatekeeper's platform, provide companies advertising on their platform with the tools and information necessary for advertisers and publishers to carry out their own independent verification of their advertisements hosted by the gatekeeper, allow their business users to promote their offer and conclude contracts with their customers outside the gatekeeper's platform. The gatekeepers are not permitted to treat services and products offered by the gatekeeper itself more favourably than similar services or products offered by third parties on the gatekeeper's platform, to prevent consumers from linking up to businesses outside their platforms, to prevent users from uninstalling any pre-installed software or app if they wish so". In terms of the European Union's subsidiarity principle, the DSA and DMA are both to be pursued at the supranational level. The reason for that is that "the problems are of a cross-border nature, and not limited to single Member States or to a subset of Member States. The digital sector as such and in particular the core platform services are of a cross-border nature. As is evidenced by the volume of cross-border trade, almost 24% of total online trade in Europe is cross-border". According to the Commission's proposal, the impact of the Digital Services Act and the Digital Markets Act on the European digital market shall be monitored. The monitoring of the DSA and DMA is divided into regular monitoring, every two years, and qualitative assessments, using market assessment indicators. "Regular and continuous monitoring is covering the following main aspects: (i) scope-related issues (e.g. criteria for the designation of gatekeepers, evolution of the designation of gatekeepers, use of the qualitative assessment in the designation process); (ii) unfair practices (compliance, enforcement patterns, evolution); and (iii) monitoring as a trigger for the launch of a market investigation with the purpose of examining new core platform services and practices in the digital sector". Alongside this monitoring, numerous indicators will also be developed, whose aim is to assess the implementation of the proposal and the achievement of the objective of a contestable and fair digital market.
The Expert Group of the Online Platform Economy will monitor some of the general economic indicators. "Consequently, the impact of the intervention will be assessed in the context of an evaluation exercise and activate, if so required, a review clause, which will allow the Commission to take appropriate measures, including legislative proposals. Member States are also supposed to provide any relevant information they have that the Commission may require for the evaluation purposes". Though the DSA and DMA are still proposals of the European Commission, research has already been done on the expected political effects of the new legislation. One of the anticipated impacts is the "sovereignisation" of Europe, through greater integration resulting from the harmonisation of the digital market. International dimension On 4 July 2013, the European Parliament adopted a new resolution concerning the achievement of the digital single market, notably expressing the wish to develop mobility services and the international dimension of this market. However, in 2015 the international dimension of the digital single market was not well defined in the Commission's strategy. The part on the international dimension was short, and only contained the EU's willingness to encourage its trading partners to expand their markets and to deploy a sustainable approach to internet governance. Moreover, trade negotiations under way at the World Trade Organization and in other negotiating channels were not mentioned. Yet during this same year the EU was the world's leading exporter of digital services. The goal of the EU was to be a more attractive location for global companies. The new Trade and Investment Strategy presented in autumn 2015 showed an ambition to develop digital trade and investment policy. In 2018, the US dominated the field of digital technologies with big firms such as Facebook, which challenged the EU's competitiveness in this field. One of the reasons was that European measures concerning innovation in technological products were unfavourable to the growth and competitiveness of EU digital technologies. The consequence of this issue is that American digital firms such as Facebook can take innovative ideas originating in the EU, export them to US territory and make them their own. In 2020, the EU demonstrated its willingness to play a key role in the international dimension, including by orienting its digital policy towards societal interests, prosperity and competitiveness, through diplomacy, its power of regulation and financial instruments. The Digital Single Market is a strategy put in place for the European Union to catch up with its competitors in the digital sector. Indeed, the EU is behind the United States and China, which dominate the digital world. This can be explained by the fact that the European Union invested in the digital domain later than the United States or China, but it can also be explained by European public policies in the digital field. Indeed, despite the size of the European Union and its wealth per capita, it is very difficult for it to compete with the United States and China, and this is partly due to certain European public policies.
Some European policies are not very favorable to the digital economy, as can be seen from the various policies that are rather hostile to the new economy; among these are competition policy and the general regulation on data protection. Today, the digital sector is still confined within national territories, and this is another reason for the European Union's delay in the digital sector. Indeed, digital development is very heterogeneous across the European Union. This heterogeneity is due to different levels of economic development, but also to each country's different pace of development in the digital era, which leads to different catching-up movements for each state. The 27 European countries can be classified into 5 groups according to their contributions to the digital sector. It is therefore to overcome this delay and this difference in digital development between the member states of the European Union that the Digital Single Market strategy was created. Next step for the Digital Single Market Deregulation Future steps taken on building the DSM should be more dynamic and should focus on deregulation rather than simple regulation, in order to address issues that receive insufficient focus and to keep up with the speed of the digital transformation. Furthermore, there is also too much uncertainty about how companies can actually comply with current DSM rules. For instance, the roaming ban is a positive step toward the digital single market, but the different ways by which telecom companies are trying to recoup revenue do not correspond with the bottom-up idea of market-driven integration. Similarly, the parcel delivery legislation was positive, but the problems related to transparency on costs are not truly tackled. Another example is geo-blocking. Indeed, although "unjustified geo-blocking" is prohibited, small businesses are forced to commit to selling to a larger market while bearing the costs this may incur depending on the destination country in question. Finally, there is a one-stop shop for VAT registration, which is a good step, but a variety of VAT rates across member states still remains, which restricts trade. Conceptual problems In the work to create a Digital Single Market, there are three conceptual problems. First, several reforms which have focused heavily on digital-specific regulations have added new layers of regulatory complication to data-based commerce in Europe. Second, many of the regulations on data create confusion rather than clarity, instead of adding more opportunities for experimentation and innovation. Third, the EU's various regulations can clash with one another. There is therefore a lack of coordination, hence the need for a clearer taxonomy of the specific ambitions of each regulation. Therefore, in order to create a better and larger space for the digital economy to grow, the efforts of the European institutions and Member-State governments have to be redoubled in the next few years. Challenges The main challenge faced is the fragmentation of the digital single market with twenty-seven national regimes. In a digital environment, which is by nature cross-border, national or regional markets do not offer sufficient size, either to generate final demand or to support investment and innovation. Digital companies need a pan-European market to thrive and compete with global leaders, who themselves benefit from large domestic markets.
Businesses and citizens alike need world-class digital infrastructure, such as high-speed networks, cloud computing, high performance computing and big data. They also need digital skills, regardless of their sector. The DSM also raises legitimate concerns about the future of employment: beyond its impact on certain professions, the digital economy is structurally modifying the distribution of jobs and putting an end to a long trend of expansion of salaried employment, posing new challenges to labor law and social protection. Moreover, the creation of the digital single market raises the question of certain large companies, particularly the GAFAMs. Indeed, having become too powerful, they are increasingly considered by many researchers to be a danger to the economy and democracy, and could try to abuse certain positions within this future market. It will therefore be imperative to have specific regulation against the abuse of dominant positions. Finally, to reach its objectives, Europe is confronted with three challenges in particular: the concentration of players, tax evasion and inequalities linked to digital technology (the digital divide). References Digital Single Market European Commission website Economy of the European Union European Union telecommunications policy Information technology organizations based in Europe
8786357
https://en.wikipedia.org/wiki/Java%20performance
Java performance
In software development, the programming language Java was historically considered slower than the fastest 3rd generation typed languages such as C and C++. The main reason is a different language design: after compiling, Java programs run on a Java virtual machine (JVM) rather than directly on the computer's processor as native code, as C and C++ programs do. Performance was a matter of concern because much business software has been written in Java after the language quickly became popular in the late 1990s and early 2000s. Since the late 1990s, the execution speed of Java programs improved significantly via introduction of just-in-time compilation (JIT) (in 1997 for Java 1.1), the addition of language features supporting better code analysis, and optimizations in the JVM (such as HotSpot becoming the default for Sun's JVM in 2000). Hardware execution of Java bytecode, such as that offered by ARM's Jazelle, was also explored as a way to offer significant performance improvements. The performance of a Java program compiled to Java bytecode depends on how optimally its given tasks are managed by the host Java virtual machine (JVM), and how well the JVM exploits the features of the computer hardware and operating system (OS) in doing so. Thus, any Java performance test or comparison always has to report the version, vendor, OS and hardware architecture of the JVM used. In a similar manner, the performance of the equivalent natively compiled program will depend on the quality of its generated machine code, so the test or comparison also has to report the name, version and vendor of the compiler used, and its activated compiler optimization directives. Virtual machine optimization methods Many optimizations have improved the performance of the JVM over time. However, although Java was often the first virtual machine to implement them successfully, they have often been used in other similar platforms as well. Just-in-time compiling Early JVMs always interpreted Java bytecodes. This carried a large performance penalty, of between a factor of 10 and 20, for Java versus C in average applications. To combat this, a just-in-time (JIT) compiler was introduced into Java 1.1. Due to the high cost of compiling, an added system called HotSpot was introduced in Java 1.2 and was made the default in Java 1.3. Using this framework, the Java virtual machine continually analyses program performance for hot spots which are executed frequently or repeatedly. These are then targeted for optimizing, leading to high performance execution with a minimum of overhead for less performance-critical code. Some benchmarks show a 10-fold speed gain by this means. However, due to time constraints, the compiler cannot fully optimize the program, and thus the resulting program is slower than native code alternatives. Adaptive optimizing Adaptive optimizing is a method in computer science that performs dynamic recompilation of parts of a program based on the current execution profile. With a simple implementation, an adaptive optimizer may simply make a trade-off between just-in-time compiling and interpreting instructions. At another level, adaptive optimizing may exploit local data conditions to optimize away branches and use inline expansion. A Java virtual machine like HotSpot can also deoptimize code it formerly JITed. This allows performing aggressive (and potentially unsafe) optimizations, while still being able to later deoptimize the code and fall back to a safe path.
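The effect of hot-spot detection can be observed directly on a HotSpot JVM. The sketch below is only an illustration, not code from any of the benchmarks discussed here: the class and method names are invented, while -XX:+PrintCompilation is a standard HotSpot diagnostic flag that logs methods as they are JIT-compiled. Run with that flag, the frequently called sum method starts out interpreted, appears in the compilation log once its invocation count crosses the JVM's threshold, and later iterations of the timing loop become noticeably faster.

// HotSpotDemo.java – minimal sketch of JIT warm-up (illustrative names, not from the article's sources).
// Run with:  java -XX:+PrintCompilation HotSpotDemo
public class HotSpotDemo {

    // A small, frequently called method – a typical "hot spot".
    private static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long checksum = 0;
        for (int iteration = 0; iteration < 20; iteration++) {
            long start = System.nanoTime();
            for (int call = 0; call < 10_000; call++) {
                checksum += sum(1_000);
            }
            long elapsed = System.nanoTime() - start;
            // Early iterations run mostly interpreted; later ones use JIT-compiled code.
            System.out.printf("iteration %2d: %8.3f ms%n", iteration, elapsed / 1e6);
        }
        System.out.println("checksum = " + checksum); // keeps the work from being optimized away
    }
}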
Garbage collection The 1.0 and 1.1 Java virtual machines (JVMs) used a mark-sweep collector, which could fragment the heap after a garbage collection. Starting with Java 1.2, the JVMs changed to a generational collector, which has a much better defragmentation behaviour. Modern JVMs use a variety of methods that have further improved garbage collection performance. Other optimizing methods Compressed Oops Compressed Oops allow Java 5.0+ to address up to 32 GB of heap with 32-bit references. Java does not support access to individual bytes, only objects which are 8-byte aligned by default. Because of this, the lowest 3 bits of a heap reference will always be 0. By lowering the resolution of 32-bit references to 8-byte blocks, the addressable space can be increased to 32 GB. This significantly reduces memory use compared to using 64-bit references, as Java uses references much more than some languages like C++. Java 8 supports larger alignments such as 16-byte alignment to support up to 64 GB with 32-bit references. Split bytecode verification Before executing a class, the Sun JVM verifies its Java bytecodes (see bytecode verifier). This verification is performed lazily: classes' bytecodes are only loaded and verified when the specific class is loaded and prepared for use, and not at the beginning of the program. However, as the Java class libraries are also regular Java classes, they must also be loaded when they are used, which means that the start-up time of a Java program is often longer than for C++ programs, for example. A method named split-time verification, first introduced in the Java Platform, Micro Edition (J2ME), has been used in the JVM since Java version 6. It splits the verification of Java bytecode in two phases: Design-time – when compiling a class from source to bytecode Runtime – when loading a class. In practice this method works by capturing knowledge that the Java compiler has of class flow and annotating the compiled method bytecodes with a synopsis of the class flow information. This does not make runtime verification appreciably less complex, but does allow some shortcuts. Escape analysis and lock coarsening Java is able to manage multithreading at the language level. Multithreading is a method allowing programs to perform multiple tasks concurrently, thus producing faster programs on computer systems with multiple processors or cores. Also, a multithreaded application can remain responsive to input, even while performing long running tasks. However, programs that use multithreading need to take extra care of objects shared between threads, locking access to shared methods or blocks when they are used by one of the threads. Locking a block or an object is a time-consuming operation due to the nature of the underlying operating system-level operation involved (see concurrency control and lock granularity). As the Java library does not know which methods will be used by more than one thread, the standard library always locks blocks when needed in a multithreaded environment. Before Java 6, the virtual machine always locked objects and blocks when asked to by the program, even if there was no risk of an object being modified by two different threads at once.
For example, in this case, a local Vector was locked before each of the add operations to ensure that it would not be modified by other threads (Vector is synchronized), but because it is strictly local to the method this is needless:

public String getNames() {
    Vector<String> v = new Vector<>();
    v.add("Me");
    v.add("You");
    v.add("Her");
    return v.toString();
}

Starting with Java 6, code blocks and objects are locked only when needed, so in the above case, the virtual machine would not lock the Vector object at all. Since version 6u23, Java includes support for escape analysis. Register allocation improvements Before Java 6, allocation of registers was very primitive in the client virtual machine (they did not live across blocks), which was a problem in CPU designs which had fewer processor registers available, as on x86. If there are no more registers available for an operation, the compiler must copy from register to memory (or memory to register), which takes time (registers are significantly faster to access). However, the server virtual machine used a graph-coloring allocator and did not have this problem. An optimization of register allocation was introduced in Sun's JDK 6; it was then possible to use the same registers across blocks (when applicable), reducing accesses to memory. This led to a reported performance gain of about 60% in some benchmarks. Class data sharing Class data sharing (called CDS by Sun) is a mechanism which reduces the startup time for Java applications, and also reduces memory footprint. When the JRE is installed, the installer loads a set of classes from the system JAR file (the JAR file holding all the Java class library, called rt.jar) into a private internal representation, and dumps that representation to a file, called a "shared archive". During subsequent JVM invocations, this shared archive is memory-mapped in, saving the cost of loading those classes and allowing much of the JVM's metadata for these classes to be shared among multiple JVM processes. The corresponding improvement in start-up time is more obvious for small programs. History of performance improvements Apart from the improvements listed here, each release of Java introduced many performance improvements in the JVM and Java application programming interface (API). JDK 1.1.6: First just-in-time compilation (Symantec's JIT-compiler) J2SE 1.2: Use of a generational collector. J2SE 1.3: Just-in-time compiling by HotSpot. J2SE 1.4: See here, for a Sun overview of performance improvements between 1.3 and 1.4 versions. Java SE 5.0: Class data sharing Java SE 6: Split bytecode verification Escape analysis and lock coarsening Register allocation improvements Other improvements: Java OpenGL Java 2D pipeline speed improvements Java 2D performance also improved significantly in Java 6 See also 'Sun overview of performance improvements between Java 5 and Java 6'. Java SE 6 Update 10 Java Quick Starter reduces application start-up time by preloading part of the JRE data at OS startup into the disk cache. Parts of the platform needed to execute an application accessed from the web when the JRE is not installed are now downloaded first. The full JRE is 12 MB; a typical Swing application only needs to download 4 MB to start. The remaining parts are then downloaded in the background. Graphics performance on Windows improved by extensively using Direct3D by default, and by using shaders on the graphics processing unit (GPU) to accelerate complex Java 2D operations.
Java 7 Several performance improvements were released for Java 7. Further performance improvements were planned for an update of Java 6 or for Java 7: Provide JVM support for dynamic programming languages, following the prototyping work currently done on the Da Vinci Machine (Multi Language Virtual Machine), Enhance the existing concurrency library by managing parallel computing on multi-core processors, Allow the JVM to use both the client and server JIT compilers in the same session with a method called tiered compiling: The client would be used at startup (because it is good at startup and for small applications), The server would be used for long-term running of the application (because it outperforms the client compiler for this). Replace the existing concurrent low-pause garbage collector (also called concurrent mark-sweep (CMS) collector) by a new collector called Garbage First (G1) to ensure consistent pauses over time. Comparison to other languages Objectively comparing the performance of a Java program and an equivalent one written in another language such as C++ needs a carefully and thoughtfully constructed benchmark which compares programs completing identical tasks. The target platform of Java's bytecode compiler is the Java platform, and the bytecode is either interpreted or compiled into machine code by the JVM. Other compilers almost always target a specific hardware and software platform, producing machine code that will stay virtually unchanged during execution. Very different and hard-to-compare scenarios arise from these two different approaches: static vs. dynamic compilations and recompilations, the availability of precise information about the runtime environment and others. Java is often compiled just-in-time at runtime by the Java virtual machine, but may also be compiled ahead-of-time, as C++ is. When compiled just-in-time, the micro-benchmarks of The Computer Language Benchmarks Game indicate the following about its performance: slower than compiled languages such as C or C++, similar to other just-in-time compiled languages such as C#, much faster than languages without an effective native-code compiler (JIT or AOT), such as Perl, Ruby, PHP and Python. Program speed Benchmarks often measure performance for small numerically intensive programs. In some rare real-life programs, Java out-performs C. One example is the benchmark of Jake2 (a clone of Quake II written in Java by translating the original GPL C code). The Java 5.0 version performs better in some hardware configurations than its C counterpart. While it is not specified how the data was measured (for example if the original Quake II executable compiled in 1997 was used, which may be considered bad as current C compilers may achieve better optimizations for Quake), it notes how the same Java source code can have a huge speed boost just by updating the VM, something impossible to achieve with a 100% static approach. For other programs, the C++ counterpart can, and usually does, run significantly faster than the Java equivalent. A benchmark performed by Google in 2011 showed a factor of 10 between C++ and Java. At the other extreme, an academic benchmark performed in 2012 with a 3D modelling algorithm showed the Java 6 JVM being from 1.09 to 1.91 times slower than C++ under Windows.
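Because a just-in-time compiled runtime only reaches its steady-state speed after warm-up, carefully constructed Java-side measurements usually rely on a harness that separates warm-up iterations from measured ones and runs the benchmark in a forked JVM. The sketch below uses the OpenJDK Java Microbenchmark Harness (JMH) for this purpose; JMH is not mentioned by the sources above, and the benchmark body and array size are arbitrary illustrative choices.

// ArraySumBenchmark.java – a minimal JMH sketch (requires the org.openjdk.jmh dependency).
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5)       // iterations discarded while the JIT warms up
@Measurement(iterations = 5)  // iterations that are actually reported
@Fork(1)                      // run in a fresh, forked JVM
public class ArraySumBenchmark {

    private int[] data;

    @Setup
    public void setUp() {
        data = new int[10_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }
    }

    @Benchmark
    public long sum() {
        long total = 0;
        for (int value : data) {
            total += value;
        }
        return total; // returning the result keeps the loop from being dead-code eliminated
    }
}

Reporting the JVM version, vendor, operating system and hardware alongside such numbers, as noted above, remains necessary for the results to be comparable.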
Some optimizations that are possible in Java and similar languages may not be possible in certain circumstances in C++: C-style pointer use can hinder optimizing in languages that support pointers, The use of escape analysis methods is limited in C++, for example, because a C++ compiler does not always know if an object will be modified in a given block of code due to pointers, Java can access derived instance methods faster than C++ can access derived virtual methods due to C++'s extra virtual-table look-up. However, non-virtual methods in C++ do not suffer from v-table performance bottlenecks, and thus exhibit performance similar to Java. The JVM is also able to perform processor-specific optimizations or inline expansion. In addition, the ability to deoptimize code already compiled or inlined sometimes allows it to perform more aggressive optimizations than those performed by statically typed languages when external library functions are involved. Results for microbenchmarks between Java and C++ highly depend on which operations are compared. For example, when comparing with Java 5.0: 32- and 64-bit arithmetic operations, file I/O and exception handling have a similar performance to comparable C++ programs; array operation performance is better in C; trigonometric function performance is much better in C. Multi-core performance The scalability and performance of Java applications on multi-core systems is limited by the object allocation rate. This effect is sometimes called an "allocation wall". However, in practice, modern garbage collector algorithms use multiple cores to perform garbage collection, which to some degree alleviates this problem. Some garbage collectors are reported to sustain allocation rates of over a gigabyte per second, and there exist Java-based systems that have no problems scaling to several hundreds of CPU cores and heaps sized several hundreds of GB. Automatic memory management in Java allows for efficient use of lockless and immutable data structures that are extremely hard or sometimes impossible to implement without some kind of garbage collection. Java offers a number of such high-level structures in its standard library in the java.util.concurrent package, while many languages historically used for high performance systems like C or C++ still lack them. Startup time Java startup time is often much slower than that of many languages, including C, C++, Perl or Python, because many classes (and first of all classes from the platform class libraries) must be loaded before being used. When compared against similar popular runtimes, for small programs running on a Windows machine, the startup time appears to be similar to Mono's and a little slower than .NET's. It seems that much of the startup time is due to input-output (IO) bound operations rather than JVM initialization or class loading (the rt.jar class data file alone is 40 MB and the JVM must seek much data in this big file). Some tests showed that although the new split bytecode verification method improved class loading by roughly 40%, it only realized about 5% startup improvement for large programs. Albeit a small improvement, it is more visible in small programs that perform a simple operation and then exit, because the Java platform data loading can represent many times the load of the actual program's operation. Starting with Java SE 6 Update 10, the Sun JRE comes with a Quick Starter that preloads class data at OS startup to get data from the disk cache rather than from the disk.
Excelsior JET approaches the problem from the other side. Its Startup Optimizer reduces the amount of data that must be read from the disk on application startup, and makes the reads more sequential. In November 2004, Nailgun, a "client, protocol, and server for running Java programs from the command line without incurring the JVM startup overhead" was publicly released, introducing for the first time an option for scripts to use a JVM as a daemon, for running one or more Java applications with no JVM startup overhead. The Nailgun daemon is insecure: "all programs are run with the same permissions as the server". Where multi-user security is needed, Nailgun is inappropriate without special precautions. Scripts where per-application JVM startup dominates resource use see one to two orders of magnitude runtime performance improvements. Memory use Java memory use is much higher than C++'s memory use because: There is an overhead of 8 bytes for each object and 12 bytes for each array in Java. If the size of an object is not a multiple of 8 bytes, it is rounded up to the next multiple of 8. This means an object holding one byte field occupies 16 bytes and needs a 4-byte reference. C++ also allocates a pointer (usually 4 or 8 bytes) for every object whose class directly or indirectly declares virtual functions. Lack of address arithmetic makes creating memory-efficient containers, such as tightly spaced structures and XOR linked lists, currently impossible (the OpenJDK Valhalla project aims to mitigate these issues, though it does not aim to introduce pointer arithmetic; this cannot be done in a garbage collected environment). Contrary to malloc and new, the average performance overhead of garbage collection asymptotically nears zero (more accurately, one CPU cycle) as the heap size increases. Parts of the Java Class Library must load before program execution (at least the classes used within a program). This leads to a significant memory overhead for small applications. Both the Java binary and native recompilations will typically be in memory. The virtual machine uses substantial memory. In Java, a composite object (class A which uses instances of B and C) is created using references to allocated instances of B and C. In C++ the memory and performance cost of these types of references can be avoided when the instance of B and/or C exists within A. In most cases a C++ application will consume less memory than an equivalent Java application due to the large overhead of Java's virtual machine, class loading and automatic memory resizing. For programs in which memory is a critical factor for choosing between languages and runtime environments, a cost/benefit analysis is needed. Trigonometric functions Performance of trigonometric functions is bad compared to C, because Java has strict specifications for the results of mathematical operations, which may not correspond to the underlying hardware implementation. On the x87 floating point subset, Java since 1.4 does argument reduction for sin and cos in software, causing a big performance hit for values outside the range. Java Native Interface The Java Native Interface incurs a high overhead, making it costly to cross the boundary between code running on the JVM and native code. Java Native Access (JNA) provides Java programs easy access to native shared libraries (dynamic-link libraries (DLLs) on Windows) via Java code only, with no JNI or native code. This functionality is comparable to Windows' Platform/Invoke and Python's ctypes.
Access is dynamic at runtime without code generation. But this comes at a cost, and JNA is usually slower than JNI. User interface Swing has been perceived as slower than native widget toolkits, because it delegates the rendering of widgets to the pure Java 2D API. However, benchmarks comparing the performance of Swing versus the Standard Widget Toolkit, which delegates the rendering to the native GUI libraries of the operating system, show no clear winner, and the results greatly depend on the context and the environments. Additionally, the newer JavaFX framework, intended to replace Swing, addresses many of Swing's inherent issues. Use for high performance computing Some people believe that Java performance for high performance computing (HPC) is similar to Fortran on compute-intensive benchmarks, but that JVMs still have scalability issues for performing intensive communication on a grid computing network. However, high performance computing applications written in Java have won benchmark competitions. In 2008 and 2009, clusters based on Apache Hadoop (an open-source high performance computing project written in Java) were able to sort a terabyte and a petabyte of integers the fastest. The hardware setup of the competing systems was not fixed, however. In programming contests Programs in Java start slower than those in other compiled languages. Thus, some online judge systems, notably those hosted by Chinese universities, use longer time limits for Java programs to be fair to contestants using Java. See also Common Language Runtime Performance analysis Java processor, an embedded processor running Java bytecode natively (such as JStik) Comparison of Java and C++ Java ConcurrentMap References External links Site dedicated to Java performance information Debugging Java performance problems Sun's Java performance portal The Mind-map based on presentations of engineers in the SPb Oracle branch (as big PNG image) Java platform Computing platforms Software optimization
479604
https://en.wikipedia.org/wiki/IPlanet
IPlanet
iPlanet was a product brand that was used jointly by Sun Microsystems and Netscape Communications Corporation when delivering software and services as part of a non-exclusive cross marketing deal that was also known as "A Sun|Netscape Alliance". History After AOL merged with Netscape, technology analysts speculated that AOL's major interest was the netscape.com website (specifically the millions of registered users thereof), and to a lesser extent the Netscape Communicator suite, which some considered would be used to replace the Internet Explorer browser which AOL licensed from Microsoft and included as part of their software suite. AOL entered into an agreement with systems and software company Sun Microsystems whereby engineers from both companies would work together on software development, marketing, sales, installation and support. Part of the deal was that Sun agreed to pay Netscape a fixed amount for each year of the deal regardless of whether any software was actually sold by the alliance. The code was written after the best parts of the Netscape Enterprise Server and the Sun Java System Web Server had been merged. The iPlanet brand was already owned by Sun following the acquisition of i-Planet, Inc. in October 1998 (i-Planet had been founded in January 1996, less than three years earlier). In 2001, the three-year alliance came to an end, at which point, under the terms of the deal, both AOL and Sun retained equal rights to the code that had been jointly developed. Toward the end of August 2001 many of the remaining Netscape employees were either laid off or transferred to Sun (mostly at its campuses in Santa Clara, California and Bangalore). During the period of the alliance, Netscape had hired very few people, most staff coming under the Sun umbrella. AOL had continued to market the directory and certificate server products under the Netscape brand. But in 2004 AOL sold the directory and certificate server products to Red Hat, which open-sourced them and integrated both into its Red Hat Enterprise Server product portfolio (Red Hat Directory Server and Certificate System). Most of the other iPlanet products were moved to the Sun ONE brand and then the Sun Java System brand. After the Oracle acquisition of Sun, some of the former iPlanet products were again sold under the iPlanet brand, specifically Oracle iPlanet Web Server and Oracle iPlanet Web Proxy Server. Products The suite of iPlanet offerings included: iPlanet Application Server (a Java EE application server system, based on the Netscape Application Server and NetDynamics Application Server) iPlanet Calendar Server iPlanet Directory Server (an LDAP server), renamed to Sun Java System Directory Server iPlanet Instant Messaging Server iPlanet Messaging Server (an SMTP, IMAP, POP3 and webmail mail server) iPlanet Meta Directory iPlanet Portal Search (formerly Netscape Compass) iPlanet Portal Server iPlanet Web Server (an HTTP and HTTPS web server), renamed to Sun Java System Web Server iPlanet Web Proxy Server, renamed to Sun Java System Web Proxy Server The suite also included a number of server-side infrastructure components, including distributed event management and tools for managing large populations of iPlanet server instances.
Additionally, iPlanet sold "iPlanet E-Commerce Applications", a suite of software tools intended for building e-commerce websites: iPlanet BillerXpert (for handling billing and related financial processing activities) iPlanet BuyerXpert (business-to-business procurement software) iPlanet ECXpert iPlanet MerchantXpert iPlanet SellerXpert (for implementing B2B and B2C sales websites) iPlanet TradingXpert Netscape PublishingXpert References Sun Microsystems software
59284717
https://en.wikipedia.org/wiki/Epic%20Games%20Store
Epic Games Store
The Epic Games Store is a digital video game storefront for Microsoft Windows and macOS, operated by Epic Games. It launched in December 2018 as both a website and a standalone launcher, of which the latter is required to download and play games. The storefront provides a basic catalog, friends list management, matchmaking, and other features. Epic Games has further plans to expand the feature set of the storefront, but it does not plan to add as many features as other digital distribution platforms, such as discussion boards or user reviews, instead using existing social media platforms to support these. Epic entered the distribution market after the success of Fortnite, which Epic distributed through its own channels to users on Windows and macOS systems rather than through other storefronts. Tim Sweeney, founder and CEO of Epic Games, had stated his opinion that the revenue cut of Steam, the dominant games storefront run by Valve, was too high at 30%, and suggested that they could run a store with as little as an 8% cut while remaining profitable. By launch, Epic Games had settled on a 12% revenue cut for titles published through the store, as well as dropping the licensing fees for games built on their Unreal Engine, normally 5% of the revenue. Epic Games enticed developers and publishers to the service by offering them time-exclusivity agreements to publish on the storefront, in exchange for assured minimum revenue, even if Epic made a loss on under-performing games. Epic also offered one or two free games each week for the first two years of its operation to help draw users. While the storefront has been considered successful, users have criticized Epic Games and those developers and publishers opting for exclusivity deals, asserting that these deals segment the market. Storefront and software The Epic Games Store is a storefront for games available via the web and built into Epic Games' launcher application. Both the web and the application allow players to purchase games, while through the launcher the player can install and keep their games up to date. Epic's newer games will be exclusively available through its store, and the company plans to fund developers to release exclusively through its store, using revenue guarantees for developers that opt for this, with Epic paying the difference should a game underperform. For other developers, Epic takes a 12% share of revenue, the rest going to the developer, and for any games developed using the Unreal Engine, Epic forgoes the 5% revenue-based fee for those games sold through their storefront. After paying for content delivery and other services, Epic's profit is about 5% of gross revenue, though with economies of scale, this could increase to 6–7%. By Epic's calculations, the storefront's commission was sufficient to be profitable. Epic planned to offer one free game every two weeks through 2019; this was increased to one free game every week in June 2019, and on weeks where the free game had a mature content rating, and thus would be locked out if parental controls were enabled, Epic offered a second free game not so rated. Epic has since affirmed that it planned to continue the free game program through 2021. Through the first eighteen months of this program, Epic had given out over two thousand dollars' worth of games, as estimated by PCGamesN.
Certain free game offerings have been highly popular; in its giveaway for Grand Theft Auto V in May 2020, more than seven million new users claimed the giveaway in addition to existing ones, temporarily crashing Epic's servers, and later, over 19 million users obtained a free copy of Star Wars Battlefront II offered in January 2021, with the new influx of players briefly crashing the game's servers. Documents unveiled during the Epic Games v. Apple trial in 2021 showed that in the store's giveaways prior to 2020, Epic paid buyouts to the developers of the free game ranging typically from to , and measured this performance in new users drawn to the storefront on the order of 100,000 new users, with that buyout averaging from per new user. Epic Games has also offered sales, in which Epic absorbs the discount from the sale. For example, its first store-wide sale in May 2019 offered a discount of off any game valued at or more. The store at launch had a barebones set of features, but Epic plans to develop feature subsets comparable to other digital storefronts. Eventually the storefront will offer user reviews, but this feature will be opt-in by developers to avoid misuse by activities like review bombing. Cloud saving was added in August 2019, while preliminary support for achievements and user modifications was added in July 2020. Full support for achievements was rolled out in October 2021. There are no plans to include internal user forums. The storefront will include a ticket-based support system for users to report bugs and technical problems for games to developers, while developers will be encouraged to link to external forums and social channels of their choosing, like Reddit and Discord, in lieu of storefront-tied forums. However, a party chat system, similar to features of Discord, will be implemented in 2021 to allow friends to chat while in games supported by the store. Information taken from OpenCritic was added to product store pages in January 2020 to provide users with critical review information. The store does not have features such as virtual reality headset support, nor is it expected to have any "game-shaped features" similar to Steam's trading cards designed to drive sales. Cloud saving was introduced on a very limited, game-by-game basis in July 2019, though Epic plans to expand this after validating the feature. In December 2019, Epic gave developers and publishers the option to implement their own in-game storefront for microtransactions and other purchases for a game, while still retaining the option to use the Epic storefront instead. Where possible, Epic plans to extend the "Support a Creator" program that it had launched in Fortnite Battle Royale to other games offered on the store. With the Support a Creator program, players can opt to indicate a streamer or content creator, selected by Epic based on submitted applications, to support. Supported streamers then receive revenue from Epic Games on microtransactions made through the Epic Games Store by the players that supported them, incentivizing these content creators; within Fortnite, creators had received about 5% of the cash value of the microtransactions.
Following developers' discovery that Valve would not allow games on Steam that used blockchain-based elements like cryptocurrency or non-fungible tokens, as these items have value outside of Steam, Epic announced that it would allow such games, though this remains part of the review of each game that Epic performs before accepting a game onto its system. History Digital distribution of games for personal computers prior to the introduction of the Epic Games Store was through digital storefronts like Steam and GOG.com, with Steam being the dominant channel with an estimated 75% of all digital distribution in 2013. Valve, which operated Steam, took a 30% revenue cut of all games sold through its service, a figure matched by other services like GOG.com and by console and mobile storefronts. In August 2017, Epic's Tim Sweeney suggested that 30% was no longer a reasonable cut, and that Valve could still profit if it cut its revenue share to 8%. In early December 2018, Epic Games announced that it would open a digital storefront to challenge Steam by using a 12% revenue split rather than Steam's 30%. Epic also said that it would not impose digital rights management (DRM) restrictions on games sold through its platform. The store opened days later, on December 6, 2018, as part of the Game Awards, with a handful of games and a short list of upcoming titles. The store opened for macOS and Windows platforms, with expansion to Android and other platforms planned. Epic aims to release a storefront for Android devices, bypassing the Google Play Store, where it will similarly only take a 12% cut compared to Google's 30%. While Apple, Inc.'s monopoly on iOS currently makes it impossible for Epic to release an app store there, analysts believe that if Google reacts to Epic's store by reducing its cut, Apple will be pressured to follow suit. Epic has tried to ask Google for an exemption to bypass Google's payment systems for in-app purchases for the Fortnite Battle Royale app, but Google has refused to allow this. Prior to the store's launch, its Director of Publishing Strategy, Sergey Galyonkin, had run Steam Spy, a website that collected Steam usage data from public profiles to create public sales statistics. He ran the site as a side project, but used it to learn what developers would want from Epic's store, namely fewer social elements and less visual clutter. The store's contents were hand-curated until Epic opened the store to self-publishing, starting with a beta of these features in August 2021. Epic's staff will still need to approve games for the store, a process that "mostly focus[es] on the technical side of things and general quality", according to Tim Sweeney. Sweeney does not expect this vetting process to be as stringent as the approvals needed to publish games on home video game consoles, but it will use human evaluation to filter out bloatware and asset flips, among other poor-quality titles. Epic does not plan to allow adults-only mature content on the store. In January 2019, Ubisoft announced its plans to distribute its games via the Epic Games Store, with its upcoming Tom Clancy's The Division 2 to be sold on the storefront (in addition to Ubisoft's own Uplay storefront) instead of Steam, making it the first major third-party publisher to utilize the Epic Games Store. Ubisoft said that selecting the Epic Games Store for future games was part of a larger business discussion related to Steam.
Chris Early, Ubisoft's vice president for partnerships and revenue, described Steam as "unrealistic, the current business model that they have...It doesn't reflect where the world is today in terms of game distribution." Publisher Deep Silver followed suit later that month, announcing that Metro Exodus will be exclusive to Epic Games Store for one year, at a reduced (in North America) compared to when it was offered on other storefronts. Epic has subsequently made partnerships with Private Division and Quantic Dream for publishing on the store. The storefront started offering non-game applications in December 2020 with the introduction of Spotify; Epic stated that it will not take a cut of any of Spotify's subscriptions for those using it via its storefront app. Other apps added included itch.io, iHeartRadio, Krita, and Brave. Reception The Epic Games Store was announced a few days after Valve had revealed a change in the Steam revenue sharing model that reduced Valve's take, reducing their revenue cut from 30% to 25% after a game made more than , and to 20% after . Several indie game developers expressed concern that this change was meant to help keep larger AAA developers and publishers and did little to support smaller developers. As such, when the Epic Games Store was announced, several journalists saw it as potentially disruptive to Steam's current model. Some developers and publishers have announced plans to release games that they were planning to release through Steam now exclusively through the Epic Games Store, or to have timed exclusivity on Epic's storefront before appearing on other services. Valve's Gabe Newell welcomed the competition, saying it "is awesome for everybody. It keeps us honest, it keeps everybody else honest", but did comment that, in the short-term, the competition was "ugly". Over its first year in 2019, Epic reported that the Store drew 108 million customers, and brought in over in sales, with being spent on third-party games. Of those third-party games, 90% of the sales came from the Epic Games Store time-limited exclusives. Overall, Epic stated that overall sales were 60% higher than they had anticipated. For 2020, Epic stated the store had reached 160 million players, 31 million daily active players, and annual sales over ( of that for third-party titles). According to data collected by Simon Carless in 2021, of the first wave of Epic storefront-exclusive games in 2019, only Satisfactory had surpassed what Epic paid for that exclusivity, and it along with Dauntless are the only two games expected to make a profit for Epic. Reactions to storefront exclusivity To compete against Steam, Epic Games has frequently arranged for time-exclusive releases of games on the Epic Games Store before other storefronts, typically for either six months or a year. Sweeney stated that this strategy was the only way to challenge Steam's dominant position, and would stop seeking exclusivity should Valve reduce its 30% revenue share. Otherwise, Epic will continue to accept offers for exclusivity on the Epic Games Store from any developers or publishers that are interested, regardless of what prior plans they had made with Steam or other storefronts. Some consumers have reacted negatively towards these exclusivity deals, as it appears to create division in the gaming community similar to games that are released with timed exclusivity on home consoles. Metro Exodus, by developers 4A Games and published by Deep Silver, had been planned as a Steam release. 
However, Deep Silver announced a few weeks before release that the game would be a timed exclusive on the Epic Games Store, eventually becoming available on Steam a year after release. Some users were upset by this, review bombing the game on Steam and directing complaints at 4A Games. Deep Silver backed up 4A Games, and noted that the decision for Epic Games Store exclusivity was made by Deep Silver's parent, Koch Media. In the days that followed the release of Metro Exodus, players used the Steam review system to praise the game, as the Epic Games Store lacked user reviews at that time. Phoenix Point, a spiritual successor to X-COM by X-COM lead designer Julian Gollop, was successfully crowdfunded with players given the option of redemption keys on Steam or GOG.com. In March 2019, Gollop announced that they had opted to make Phoenix Point exclusive to the Epic Games Store for a year; backers would get a redemption key for the Epic Games Store as well as for Steam or GOG a year later when the exclusivity period was up, as well as being provided the first year's downloadable content for free. Gollop explained that with the exclusivity deal, his team received additional financial support to finish Phoenix Point. Several backers were angered by this decision, believing that Gollop's team had used their funds to get the game to a point where they could get external investment and then change the direction of the game. Gollop asserted that the deal with Epic Games did not alter Phoenix Point's ultimate direction, but offered full refunds to backers who wanted them. Glumberland, the developers of Ooblets, announced that the game would be exclusive to the Epic Games Store in late July 2019, citing that Epic's funding support would help to keep the studio afloat until release and provide them with a better revenue split. In the announcement, Glumberland's Ben Wasser included what he felt was joking language related to criticisms of Epic Games Store exclusivity, calling those who complained "immature, toxic gamers" but saying the situation was "nothing to get worked up about". In the wake of this, Wasser and others at Glumberland began to receive thousands of hostile negative messages related to the announcement, including threats. Wasser had not expected the community to react to the message that way, and tried to clarify that Glumberland needed Epic's support and was not attempting to spurn the gaming community. Sweeney later said that the campaign against Ooblets represented a growing trend in the community based on "the coordinated and deliberate creation and promotion of false information, including fake screenshots, videos, and technical analysis, accompanied by harassment of partners, promotion of hateful themes, and intimidation of those with opposing views." Sweeney said that Epic is working with its partners and developers to try to improve the situation and support those targeted in this manner. Further issues over exclusivity arose after Unfold Games, who were in the final stages of preparing to release their game Darq, reported on their interactions with Epic Games. Darq had been established as a Steam release for several months, and the developers announced that they were nearing release in 2019. According to Unfold Games, Epic Games approached them about having the game on the Epic Games Store in addition to providing funding support. 
However, when asked, Epic Games clarified that Unfold would have to sell the game exclusively on the Epic Games Store, withdrawing it from Steam for a period of one year. Unfold decided against going with Epic Games, noting that a large part of its fundraising marketing had placed a major emphasis on releasing on Steam, and that the game had been wishlisted by a large number of Steam users. As a result of the media attention regarding the Epic Games Store, Unfold released Darq as a digital rights management-free game on Steam and GOG.com. Some journalists have expressed concern that Epic's heavy focus on exclusivity may work against its aims. Following a number of other time-exclusive planned releases on the Epic Games Store announced during the 2019 Game Developers Conference and further complaints from players about this, Steve Allison, head of the storefront division, admitted that they did not want to cause such disruption in the gaming community. According to Allison, Epic would try to avoid making such large-scale exclusivity deals so close to a game's release, and wants to respect what the community wants. Following similar complaints from backers of Shenmue III who were upset about the Steam release being delayed, Epic Games announced that with Shenmue III and any future crowd-funded game that ends up with Epic Games Store exclusivity, it will cover the costs of any refund requests from backers. Other criticisms Complaints leveled at the Epic Games Store have also included disputed claims that the Epic Store client collects data on users to sell to China, as if it were spyware. This criticism was spurred by a Reddit post that claimed that the Store client was collecting user data and asserted that this was tied to Tencent's involvement with Epic. Tencent is the largest video game publisher in the world and since 2012 has held a 40% ownership stake in Epic Games. Due to the oversight that the Chinese government exercises over products released in China, Tencent, like most other media and tech firms there, has to maintain a close relationship with the government and give it partial oversight of the company. According to writers for USGamer and Polygon, the state of US-China political relations at the time the store was launched, coupled with the general distrust of Chinese players and sinophobia among some Western video game players, meant this accusation caught the attention of many who repeated the claims from the Reddit post, leading them to boycott the store and the publishers that opted to sell their games exclusively on it. Epic has stated that there is some data tracking, but only to support useful functions such as importing a user's Steam friends list into the client, or tracking streaming media viewership for its Support-A-Creator program. Some of the information in the Reddit post reflected initial methods of collecting this data, but Epic stated that it only used the data for those features and has since adjusted its data access to be more in line with how privacy settings should be handled. In April 2021, in the Epic Games v. Apple lawsuit, Apple submitted a court filing claiming that the Epic Games Store was running at a significant loss and would likely not be profitable until 2027, based on depositions from Epic's financial management, thus requiring Epic to secure revenue from its other product streams. 
Apple asserted that Epic had lost around on the store from 2019 to 2020, primarily due to the minimum guarantees it provides to developers for bringing their games to the store and from its storefront exclusivity deals. Sweeney, in response to this claim via Twitter, stated that the store "has proven to be a fantastic success in reaching gamers with great games and a fantastic investment into growing the business". Resource consumption PC Gamer discovered that on select laptop configurations the Epic Games Store client can shorten battery life by up to 20% even when it is not being used (with the Epic Games Store window closed and the application minimized to the tray). On most devices, this impact is milder, at only 5%. References External links Epic Games Internet properties established in 2018 Online-only retailers of video games Video game controversies
40454983
https://en.wikipedia.org/wiki/ISO/IEC%20JTC%201/SWG-A
ISO/IEC JTC 1/SWG-A
Note: This special working group has been disbanded. ISO/IEC JTC 1/SWG on Accessibility (SWG-A) was a special working group of the Joint Technical Committee ISO/IEC JTC 1 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) that promoted and facilitated standards development within the field of information and communications technology (ICT) accessibility. ISO/IEC JTC 1/SWG-A was formed at the October 2004 ISO/IEC JTC 1 Plenary Meeting in Berlin, Germany, via approval of Resolution 24. After completing its goals by publishing ISO/IEC TR 29138 Parts 1, 2 and 3, it was dissolved. Maintenance of the standards was transferred to ISO/IEC JTC 1/SC 35. The special working group was created in response to regional, local, and global standardization demands in the field of ICT accessibility. The international secretariat of ISO/IEC JTC 1/SWG-A was administered by ITI/INCITS acting on behalf of the American National Standards Institute (ANSI), located in the United States. The first meeting of the special working group took place in Sheffield, United Kingdom, in April 2005, where the group's terms of reference were confirmed and its original two task groups established. Terms of reference The terms of reference for ISO/IEC JTC 1/SWG-A were:
Determine an approach to, and implement, the gathering of accessibility-related information, being mindful of the varied and unique opportunities including direct participation of user organizations, workshops, and liaisons
Maintain and disseminate up-to-date information on all known accessibility-related standards efforts, i.e. the standards inventory
Maintain and disseminate up-to-date information on accessibility-related user needs, i.e. the user needs summary
Through wide dissemination of the SWG materials, encourage the use of globally relevant voluntary accessibility-related standards
Together with PAS mentors, advise consortia/fora, if requested, in their submission of accessibility-related standards/specifications to the formal standards process
Provide support when JTC 1 needs assistance related to accessibility (such as duties for the WSC Accessibility Strategic Advisory Group and input on accessible ISO web content)
Structure ISO/IEC JTC 1/SWG-A was made up of four Ad Hoc groups, each of which carried out specific tasks within the field of accessibility. Collaborations ISO/IEC JTC 1/SWG-A worked in close collaboration with a number of other organizations or subcommittees, both internal and external to ISO or IEC. 
Organizations internal to ISO or IEC that collaborated with or were in liaison to ISO/IEC JTC 1/SWG-A included:
ISO/IEC JTC 1/SC 7, Software and systems engineering
ISO/IEC JTC 1/SC 29, Coding of audio, picture, multimedia and hypermedia information
ISO/IEC JTC 1/SC 35, User interfaces
ISO/IEC JTC 1/SC 36, Information technology for learning, education and training
ISO/TC 159, Ergonomics
IEC TC 100, Audio, video and multimedia systems and equipment
Organizations external to ISO or IEC that collaborated with or were in liaison to ISO/IEC JTC 1/SWG-A included:
ITU-T SG 16, Multimedia
ITU-T FG AVA, Audiovisual Media Accessibility
The Linux Foundation
W3C Web Accessibility Initiative (WAI)
There were also a number of consumer organizations, user representatives, and other interested parties involved with ISO/IEC JTC 1/SWG-A, such as:
American Foundation for the Blind (AFB) Tech
Altarum Institute/Accessibility Forum
ANEC
Japanese Society for Rehabilitation of Persons with Disabilities (via the Japanese National Body)
Japanese Industrial Standards (JIS) Committee on Web Accessibility
Royal National Institute of the Blind (via the UK National Body)
United States Access Board
United States Department of Education
University of Wisconsin (Trace R&D Center)
Visual Impairment Knowledge Center
WGBH National Centre for Accessible Media
Member countries The members of ISO/IEC JTC 1/SWG-A were: Australia, Canada, Denmark, France, Germany, Italy, Japan, The Netherlands, Norway, The Republic of Korea, United Kingdom, and United States of America. Publications ISO/IEC JTC 1/SWG-A produced a number of publications in the field of ICT accessibility. Dissolution In a resolution adopted at the 29th meeting of ISO/IEC JTC 1 in November 2014, ISO/IEC JTC 1/SWG-A was disbanded. This special working group completed the tasks for which it was originally established with the publication of ISO/IEC TR 29138 Parts 1, 2 and 3. Part 2 of this document has been transitioned from a technical report to an ISO/IEC JTC 1 publication, The Inventory of Accessibility and Accessibility-related Standards and Specifications, which is available on the ISO Online Browsing Platform (OBP). With the dissolution of ISO/IEC JTC 1/SWG-A, JTC 1 has transferred responsibility for the maintenance of The Inventory of Accessibility and Accessibility-related Standards and Specifications to ISO/IEC JTC 1/SC 35. All relevant ISO/IEC JTC 1/SWG-A documents were submitted to the ISO/IEC JTC 1 Secretariat for archiving, and all liaisons and members were informed of the dissolution. See also ISO/IEC JTC1 International Committee for Information Technology Standards Information Technology Industry Council American National Standards Institute International Organization for Standardization International Electrotechnical Commission List of ISO standards References External links ISO/IEC JTC 1/SWG-A page at JTC 1 INCITS home page A Web accessibility
44278962
https://en.wikipedia.org/wiki/Shadow%20Network
Shadow Network
The Shadow Network is a China-based computer espionage operation that stole classified documents and emails from the Indian government, the office of the Dalai Lama, and other high-level government networks. This incident is the second cyber espionage operation of this sort by China, discovered by researchers at the Information Warfare Monitor, following the discovery of GhostNet in March 2009. The Shadow Network report "Shadows in the Cloud: Investigating Cyber Espionage 2.0" was released on 6 April 2010, approximately one year after the publication of "Tracking GhostNet." The cyber spying network made use of Internet services, such as social networking and cloud computing platforms. The services included Twitter, Google Groups, Baidu, Yahoo Mail, Blogspot, and blog.com, which were used to host malware and infect computers with malicious software. Discovery The Shadow Net report was released following an 8-month collaborative investigation between researchers from the Canada-based Information Warfare Monitor and the United States Shadowserver Foundation. The Shadow Network was discovered during the GhostNet investigation, and researchers said it was more sophisticated and difficult to detect. Following the publication of the GhostNet report, several of the listed command and control servers went offline; however, the cyber attacks on the Tibetan community did not cease. The researchers conducted field research in Dharamshala, India, and with the consent of the Tibetan organizations, they were able to monitor the networks in order to collect copies of the data from compromised computers and identify command and control servers used by the attackers. The field research done by the Information Warfare Monitor and the Shadowserver Foundation found that computer systems in the Office of His Holiness the Dalai Lama (OHHDL) had been compromised by multiple malware networks, one of which was the Shadow Network. Further research into the Shadow Network revealed that, while India and the Dalai Lama's offices were the primary focus of the attacks, the operation compromised computers on every continent except Australia and Antarctica. The research team recovered more than 1,500 e-mails from the Dalai Lama's Office along with a number of documents belonging to the Indian government. This included classified security assessments in several Indian states, reports on Indian missile systems, and documents related to India's relationships in the Middle East, Africa, and Russia. Documents were also stolen related to the movements of NATO forces in Afghanistan, and from the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP). The hackers were indiscriminate in what they took, which included sensitive information as well as financial and personal information. Origin The attackers were tracked through e-mail addresses to the Chinese city of Chengdu in Sichuan province. There was suspicion, but no confirmation, that one of the hackers had a connection to the University of Electronic Science and Technology in Chengdu. The account of another hacker was linked to a Chengdu resident who claimed to know little about the hacking. References External links Shadowserver Foundation Citizen Lab The SecDev Group Information Warfare Monitor Cyberwarfare in China Spyware Cyberattacks Cyberwarfare Espionage projects
15657561
https://en.wikipedia.org/wiki/Jah%20Thomas
Jah Thomas
Nkrumah "Jah" Thomas (b. 1955, Kingston, Jamaica) is a reggae deejay and record producer who first came to prominence in the 1970s, later setting up his own Midnight Rock and Nura labels. Biography Named Nkrumah after Ghanaian nationalist leader Kwame Nkrumah, he adopted the stage name Jah Thomas and began deejaying in the mid-1970s, working with producers such as Alvin Ranglin, who released his single "Midnight Rock", which topped the Jamaican chart in 1976. Thomas's debut album, Stop Yuh Loafin''' gained international recognition via a release on the newly formed Greensleeves Records label. Further deejay albums appeared in the late 1970s, before Thomas began concentrating on producing other artists, that included: Robert Ffrench, Anthony Johnson, Triston Palma, Johnny Osbourne, Michael Palmer, Barry Brown, Barrington Levy, Sugar Minott, Early B, Ranking Toyan, and Robin Hood, at the same time setting up the Midnight Rock record label, one of the most successful performer-owned labels of the period. Midnight Rock soon had a hit record in the shape of Thomas's "Cricket Lovely Cricket". Thomas would often use the mixing talents of Scientist, and the Roots Radics band. He later also set up the Nura label. He is the father of reggae singer Da'Ville and singer/producer Dwight Thomas. DiscographyStop Yuh Loafin' (1978) GreensleevesDance On The Corner (1979) Midnight RockNah Fight Over Woman (1980) Tad'sBlack Ash Dub (1980) Trojan (with The Revolutionaries)Tribute to the Reggae King (1981) Midnight RockDance Hall Connection (1982) Silver CamelDance Hall Stylee (1982) Daddy Kool/Silver CamelShoulder Move (1983) Midnight Rock Jah Thomas Meets Scientist In Dub Conference (1996) Munich Triston Palmer Meets Jah Thomas In Discostyle (1996) Munich Jah Thomas Meets King Tubby Inna Roots Of Dub (1997) Rhino Jah Thomas Meets The Roots Radics Dubbing (1999) Trojan Jah Thomas meets Barrington Levy inna Dancehall Style Culture Press King Tubby's Hidden Treasure(1999) Trojan (Jah Thomas & The Roots Radics) Lyrics For Sale Rhino Prophecy Of Dub Abraham (Jah Thomas & The Roots Radics)Jah Thomas Meets King Tubby In The House of Dub Majestic ReggaeRoots Dancehall Party (2003) Silver KamelBig Dance A Keep (2005) Silver KamelBig Dance Dub (2005) Silver KamelLiquid Brass (2005) Silver KamelJah Thomas Presents... (2007) Ras Sta ReggaeJah Thomas Meets... (2007) Ras Sta ReggaeJah Thomas Meets...'' (2021) Dub of Dubs References External links Jah Thomas at Roots Archives 1955 births Living people People from Manchester Parish Jamaican reggae musicians Greensleeves Records artists Trojan Records artists
1974971
https://en.wikipedia.org/wiki/CompuServe%20Information%20Manager
CompuServe Information Manager
CompuServe Information Manager (CIM) was CompuServe Information Service's client software, used with the company's Host Micro Interface (HMI). The program provided a GUI front end to the text-based CompuServe service that was at the time accessed using a standard terminal program with alphanumerical shortcuts. Other CompuServe client programs TapCIS OzWin NavCIS ForCIS AutoSIG References 1990 software CompuServe Classic Mac OS software DOS software OS/2 software Windows software
53335244
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20S8
Samsung Galaxy S8
The Samsung Galaxy S8 and Samsung Galaxy S8+ are Android smartphones produced by Samsung Electronics as the eighth generation of the Samsung Galaxy S series. The S8 and S8+ were unveiled on 29 March 2017 and directly succeeded the Samsung Galaxy S7 and S7 edge, with a North American release on 21 April 2017 and international rollout throughout April and May. The Samsung Galaxy S8 Active was announced on 8 August 2017 and is exclusive to certain U.S. cellular carriers. The S8 and S8+ contain upgraded hardware and major design changes over the S7 line, including larger screens with a taller aspect ratio and curved sides on both the smaller and larger models, iris and face recognition, a new suite of virtual assistant features known as Bixby (along with a new dedicated physical button for launching the assistant), a shift from Micro-USB to USB-C charging, and Samsung DeX, a docking station accessory that allows the phones to be used with a desktop interface with keyboard and mouse input support. The S8 Active features tougher materials designed for protection against shock, shatter, water, and dust, with a metal frame and a tough texture for improved grip that makes the S8 Active have a rugged design. The Active's screen measures the same size as the standard S8 model but loses the curved edges in favor of a metal frame. The S8 and S8+ received positive reviews. Their design and form factor received praise, while critics also liked the updated software and camera optimizations. They received criticism for duplicate software apps, lackluster Bixby features at launch, and for the placement of the fingerprint sensor on the rear next to the camera. A video published after the phones' release proved that the devices' facial and iris scanners can be fooled by suitable photographs of the user. The S8 and S8+ were in high demand at release. During the pre-order period, a record of one million units were booked in South Korea, and overall sales numbers were 30% higher than the Galaxy S7. However, subsequent reports in May announced sales of over five million units, a notably lower first-month sales number than previous Galaxy S series models. On March 11, 2018, Samsung launched the successor to the S8, the Samsung Galaxy S9. History Prior to its official announcement, media outlets reported on rumors and information from industry insiders. In December 2016, SamMobile reported that the Galaxy S8 would not feature a 3.5 mm headphone jack, later reported to be a false rumor. In January 2017, The Guardian reported on bigger screens for both of the two phone sizes, with edge-to-edge "infinity" displays and very limited bezels, and an iris scanner. Additionally, The Guardian stated that the phones would come with 64 gigabytes of storage and support microSD cards, use USB-C connectors, and feature a "Bixby" intelligent personal assistant. Soon after, VentureBeat revealed photos of the phones and additional details, including the lack of physical navigation and home buttons, in which the fingerprint sensor was moved to the back of the phone. Evan Blass tweeted in mid-March about color options for the phones. The Galaxy S8 and S8+ were officially unveiled on 29 March 2017, with pre-orders beginning 30 March and official U.S. release on 21 April 2017. Following Best Buy retail listings in March, Samsung opened pre-orders for unlocked U.S. handsets on 9 May 2017, with availability starting 31 May. The devices have also been released internationally. 
On 21 April 2017, they were made available in South Korea, Canada, and Taiwan. On 28 April, they were made available in the United Kingdom, Australia, Ireland, and Russia, followed by Singapore one day later. On 5 May, they were made available in Malaysia, New Zealand, Pakistan, India, the Philippines, and Thailand, followed by Brazil on 12 May. On 25 May, they were made available in China and Hong Kong. On 8 June, they were made available in Japan. In July 2017, pictures of the Galaxy S8 Active were leaked on Reddit, and the following month, AT&T "accidentally" confirmed its existence through a document in a promotional campaign. It officially became available for pre-order, exclusively through AT&T, on 8 August 2017, with in-store purchase available 11 August. VentureBeat reported in late September that the device would also become available through T-Mobile in November, and Samsung subsequently confirmed both T-Mobile and Sprint availability in early November. Specifications Hardware Display The Galaxy S8 and S8+ both feature 1440p OLED displays, with an 18.5:9 (37:18) aspect ratio, taller than the 16:9 ratio used by the majority of smartphones released until then. The S8 has a 5.8-inch panel, while the S8+ uses a larger 6.2-inch panel. The displays on both devices curve along the side bezels of the device, with minimal bezels that Samsung markets as an "infinity display", and the display panel itself has rounded edges. They use DCI-P3, offering what screen-testing website DisplayMate describes as the largest native color gamut, highest peak brightness, highest contrast rating in ambient light, highest screen resolution, lowest reflectance, and highest contrast ratio. Chipsets The S8 features an octa-core Exynos 8895 system-on-chip and 4 GB of RAM; models in North American and East Asian markets utilize the Qualcomm Snapdragon 835 instead. Both chips are manufactured by Samsung with a 10 nm process. The phones contain 64 GB of internal storage, expandable via microSD card. Design In the United States, the S8 and S8+ are available in Midnight Black, Orchid Gray, and Arctic Silver color options, whereas gold and blue are available internationally. The blue option was made available in the U.S. in July 2017. Unlike past Galaxy S series models, the S8 line does not feature physical navigation keys, electing to use on-screen keys instead. However, unlike other implementations, the home button can still be activated if it is hidden or the screen is off. The S8's display features pressure sensitivity limited to the home button. To prevent the home button from causing burn-in damage, its on-screen position moves slightly. The Galaxy S8 Active features a rugged design and significantly tougher materials to make it shock, shatter, water, and dust resistant. It has a larger battery than either of the regular S8s, at 4000 mAh. Unlike the previous phones in the Active line, the S8 Active does not have tactile buttons, instead using onscreen keys like the regular variants of the S8. It also no longer has the dedicated action button of previous versions, which could be reprogrammed as a shortcut to favorite apps, with the action button being replaced by the Bixby button. The "infinity" edge display of the standard models is removed, replaced with a metal frame and bumpers in the corners to protect from shocks, while the back is fitted with a "rugged, tough texture for a secure grip". Its screen also measures 5.8 inches, the size of the regular S8, and has a quad HD Super AMOLED panel in 18.5:9 aspect ratio. 
The S8 Active is sold in Meteor Gray, but AT&T also has a Titanium Gold color. Camera While The Verge claims that the S8 uses exactly the same 12-megapixel rear camera as the S7, though with software improvements, a report from PhoneArena claims that the phones carry new, custom camera modules. In addition, the S8 has a Pro Mode, which functions similarly to a DSLR camera, and has custom options for shutter speed, ISO, and colour balance. It also supports the raw imagery format Digital Negative. The front-facing camera was upgraded to an 8-megapixel sensor with autofocus. The S8 features fingerprint and iris scanners; the fingerprint reader is relocated to the rear of the device, to the left of the camera, due to the removal of the physical home button. In addition to an iris scanner, the S8 also features face-scanning as an option to unlock the phone. Face recognition technology had previously been implemented in earlier models since the Galaxy S III. Batteries The S8 and S8+ use non-removable 3000 and 3500 mAh Lithium-Ion batteries respectively, as compared to 3000 and 3600 mAh on the Samsung Galaxy S7 and S7 Edge respectively. Samsung stated that it had engineered the batteries to retain their capacity for a longer period of time than previous models. The S8 supports AirFuel Inductive (formerly PMA) and Qi wireless charging standards. Due to the recalls of the Samsung Galaxy Note 7, Samsung said in a press conference it is committed to stricter quality control and safety testing procedures on all of the company's future products. The device can charge at 9 watts (5V 1.8A) through an ordinary USB and 12.5 watts through USB-C power delivery (5V 2.5A). Charging speeds upwards of 15 watts (9V 1.67A) are attainable through Qualcomm Quick Charge 2.0, however fast charging is disabled during operation of the device (while the screen is turned on), throttling charging speed down to less than half, regardless of temperature. Storage Samsung also launched a Galaxy S8+ with 128 GB of storage and 6 GB of RAM exclusively in China and South Korea, and a bundle offer in the countries provides both the exclusive model and the Samsung DeX docking station. The unique variant was also released in India in June 2017. Connectivity The T-Mobile US version of the S8 Active also supports the company's 600 MHz LTE network that was starting to be rolled out at the time the device was announced. Physical Keyboard The S8 and S8+ were one of the very few phone models on the market that offered built-in support for a physical keyboard, thus competing for a market held almost exclusively by BlackBerry at the time. For these models, Samsung offered a specialty plastic case which featured a clip-on QWERTY keyboard that could be removed or added by the user at-will. While not in use, the keyboard could be clipped to the back of the case. When in-use, the OS detected the keyboard and adjusted the screen size and proportions accordingly, so that other software could be used normally. The keyboard model was EJ-CG950BBEGWW for the S8 and EJ-CG955BBEGWW for the S8+. The Galaxy S8 is one of the first smartphones to support Bluetooth 5, supporting new capabilities such as connecting two wireless headphones to the device at once. It is also bundled with Harman AKG earbuds. Both smartphones have improved satellite navigation over the predecessors by including Galileo receivers. 
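The charging figures quoted in the Batteries subsection above are consistent with the basic relation power = voltage × current; a quick check using the voltage and current pairs given in the text:

```python
# Quick check of the charging figures quoted above, using P = V * I.
# The voltage/current pairs are the ones given in the text.
charge_modes = {
    "standard USB (5 V, 1.8 A)": (5.0, 1.8),          # ~9 W
    "USB-C power delivery (5 V, 2.5 A)": (5.0, 2.5),  # ~12.5 W
    "Quick Charge 2.0 (9 V, 1.67 A)": (9.0, 1.67),    # ~15 W
}
for mode, (volts, amps) in charge_modes.items():
    print(f"{mode}: {volts * amps:.1f} W")
```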
Software The Galaxy S8 launched with the Android 7.0 "Nougat" operating system with the proprietary Samsung Experience (formerly TouchWiz) user interface and software suite. The software features a suite of assistant functions known as "Bixby", which is designed primarily to interact with Samsung's bundled applications and other supported services. The feature allows the use of voice commands to perform phone functions, can generate cards shown on a home screen page (replacing the Flipboard integration formerly featured) based on a user's interactions, and perform searches utilizing object recognition via the camera. Bixby supports third-party integration via an SDK. The S8 supports the use of a docking station known as Samsung DeX to access a PC-like desktop environment on an external display, with support for mouse and keyboard input. On 21 April 2017, coinciding with the phone's official retail date, reports surfaced that the default music player on the Galaxy S8 would be Google Play Music, continuing a trend that started with the S7 in 2016. However, for the S8, Samsung partnered with Google to incorporate additional exclusive features into the app, including the ability to upload up to 100,000 tracks to the service's cloud storage, an increase from the 50,000 tracks users are normally allowed to upload. Additionally, new users get a three-month free trial to the service, the same as given to users who purchase Google's own Pixel smartphone. Furthermore, Google stated that more Samsung-exclusive features will be added to the app in the future, and that the Bixby assistant will also be supported by the app. Bixby replaces S Voice, the voice recognition technology previously found in Samsung Galaxy models. In May 2017, Google announced that the Galaxy S8 and S8+ will support the company's Daydream virtual reality platform after Samsung rolls out a software update scheduled for mid-2017. In July 2017, Verizon began rolling out an update for its devices, with support for Daydream. The Galaxy S8 was one of the first Android phones to support ARCore, Google's augmented reality engine. In February 2018, the official Android 8.0.0 "Oreo" update began rolling out to all versions of the Samsung Galaxy S8. In February 2019, the official Android 9.0 "Pie" update was released for the Galaxy S8 family. There are also some Custom Kernels from XDA Developers. Reception Dan Seifert of The Verge praised the design of the Galaxy S8, describing it as a "stunning device to look at and hold" that was "refined and polished to a literal shine", and adding that it "truly doesn't look like any other phone you might have used before". The hardware of the device was described as "practically flawless". Seifert also liked the new software, writing that "Samsung is known less for polish and more for clumsiness. In a refreshing change of pace, the software on the S8 is, dare I say, ". However, he criticized the Bixby assistant, writing that "in its current state, it doesn’t do much at all", and also criticized the number of duplicate apps. Regarding performance, he wrote that the S8 was "fast and responsive, but so is virtually every other premium phone you can buy, and the S8 isn’t noticeably faster or quicker than a Google Pixel, LG G6, or iPhone 7". 
Fellow Verge reporter Vlad Savov felt that the placement of the fingerprint sensor was "a perplexing decision if we consider it as a deliberate design choice", but noted reports from Korea claiming that Samsung had originally intended for the fingerprint reader to be built directly into the screen, but was unable to reach a desirable implementation in time for release. The Verge wrote that "Samsung’s six-month-old S8 has cutting edge features and design with fewer issues than other Android phones" like the Google Pixel 2 and LG V30, and that the "OLED screen stretches to the edges of the device and curves on its sides in an almost liquid fashion. It makes the S8 look just as fresh today as it did when it debuted", and pointing out that the S8's popularity and carrier support guaranteed plenty of third-party accessories. Chris Velazco of Engadget similarly praised the design, stating that "from their rounded edges to their precisely formed metal-and-glass bodies, they feel like smaller, sleeker versions of the Galaxy Note 7", and also praised the display as being simply "awesome". Velazco also praised the software, calling Samsung's added interface "subtle and thoughtful in its design choices". While noting that the Bixby assistant wasn't ready yet, he did compliment the promised voice features as being more granular than those offered through Siri or Google Assistant, and wrote that "With that kind of complexity involved, maybe it's no surprise this stuff isn't done yet". Also praising performance and the camera, though noting that "The 12-megapixel sensors on the back haven't changed much since last year. That's not a bad thing since they were great cameras to start with", Velazco summarized his review by writing that the devices "aren't perfect, but they're as close as Samsung has ever gotten". Brandon Russell of TechnoBuffalo claimed that the camera could not beat Google's Pixel smartphone. Ron Amadeo of Ars Technica noted that the device's unusual aspect ratio resulted in pillarboxing when watching 16:9 video without zooming or stretching it. He complimented the feel of the S8, calling it "perfected", but criticized the glass back for being "more fragile" and that "Glossy, slippery glass doesn't feel as good in your hand as metal does, either. For the top-tier premium price tag, we'd prefer Samsung to put in the extra work and use a metal back". He criticized the biometric options for unlocking the phones, writing that "There's an iris scanner, a fingerprint reader, and face unlock. The problem is none of them are any good", and also criticized duplicate apps, writing that "most of which can't be removed and aren't very compelling". Additionally, he criticized Bixby, calling it "an odd addition" due to the phone's Google Assistant functionality already present. Prior to the phone's official announcement, reports suggested that Bixby would support "7-8 languages" at launch. Later reports after the phone's announcement clarified that Bixby would only support Korean and American English languages at its release, though noting that more languages would be coming "in the following months". In mid-April, The Wall Street Journal reported that Bixby would be launched without support for American English. On 19 July 2017, Samsung announced that Bixby had begun rolling out to Galaxy S8 users in the United States. 
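Ars Technica's observation about pillarboxing follows directly from the two aspect ratios quoted earlier; a rough calculation, ignoring screen curvature, rounded corners, and any zoom or stretch modes:

```python
# Rough illustration of the pillarboxing noted above: the share of an
# 18.5:9 display left unused when 16:9 video is shown at full height
# without zooming or stretching.
display_aspect = 18.5 / 9
video_aspect = 16 / 9
width_used = video_aspect / display_aspect  # fraction of screen width used
print(f"screen width used by 16:9 video: {width_used:.1%}")   # ~86.5%
print(f"left to pillarbox bars:          {1 - width_used:.1%}")  # ~13.5%
```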
Sales The Samsung Galaxy S8 and Galaxy S8+ broke pre-order records in South Korea, with more than 720,000 units booked in one week, a notable increase from the 100,000 units of the Galaxy S7 and 200,000 units of the Note 7. By mid-April, the number had increased to one million pre-orders. On 24 April 2017, Samsung announced that sales of the Galaxy S8 were its "best ever". Although it did not release specific sales numbers, it announced that sales of the S8 were 30% higher year-over-year than the Galaxy S7. Subsequent reports in May announced that Samsung had sold over five million units. Jon Fingas of Engadget wrote that, although Samsung advertised its pre-order records, sales comparisons to other models on the market were difficult due to unannounced sales figures. Issues White balance Prior to the official release, it was reported that some Galaxy S8 displays had poor white balance, causing them to exhibit a reddish tint. Samsung stated that the Galaxy S8 was "built with an adaptive display that optimizes the color range, saturation, and sharpness depending on the environment", but noted that the device's operating system provides settings for manually adjusting the display's appearance and white balance. On 21 April, Samsung stated that the red tinting was purely a software issue, and would be patched in a future update. The Investor reported that Samsung would replace the affected devices if a software update did not fix the issue. Updates in various regions started rolling out in early May, fixing the issue. Random restarts Reports surfaced at the end of April 2017 that some Galaxy S8 devices were "restarting by themselves". Samsung has not yet commented on the issue. Insecure facial recognition Shortly after the phone's unveiling, bloggers produced a video showing that the Galaxy S8's facial recognition scanner could be tricked into unlocking the phone by showing it a photo of the user. In a statement to Business Insider, a Samsung spokesperson stated that "Facial recognition is a convenient action to open your phone – similar to the 'swipe to unlock' action. We offer the highest level of biometric authentication – fingerprint and iris – to lock your phone and authenticate access to Samsung Pay or Secure Folder". The authenticity of the video has, however, been disputed. Insecure iris recognition In May 2017, researchers from the Chaos Computer Club posted a video showing that the S8's iris recognition system can be fooled with a contact lens and a suitable photograph of an eye. Samsung told BBC News that it was "aware of the issue", and stated that "If there is a potential vulnerability or the advent of a new method that challenges our efforts to ensure security at any time, we will respond as quickly as possible to resolve the issue". SMS message reception failures In October 2017, Galaxy S8 users reported on Reddit that they were unable to receive SMS messages, with no fix available and without any comment from Samsung. See also Comparison of Samsung Galaxy S smartphones Comparison of smartphones Samsung Galaxy S series References External links Android (operating system) devices Samsung mobile phones Smartphones Mobile phones introduced in 2017 Samsung Galaxy Mobile phones with 4K video recording Discontinued smartphones Mobile phones with pressure-sensitive touch screen
1108066
https://en.wikipedia.org/wiki/Vizrt
Vizrt
Vizrt, short for Visualization in Real-Time or Visual Artist, is a Norwegian company that creates content production, management and distribution tools for the digital media industry. Its products include applications that create real-time 3D graphics and maps, visualized sports analysis, media asset management and single workflow solutions for the digital broadcast industry. Vizrt has a customer base in more than 100 countries and some 600 employees distributed across 40 offices worldwide. Viz software includes the tools Viz Pilot and Viz One, aimed at news organizations and digital broadcasters. Intended users include journalists, whose traditional role as pure news-gatherers is extended to controlling the flow of their story from a single point. Vizrt products connect to newsroom control systems such as iNews, ENPS, and Octopus Newsroom. The software allows users to edit graphic templates, locate and edit archived video content, and build playlists for on-air use, as well as distribute online content and maintain content on social media sites. The company typically provides complete deliveries comprising software, hardware, consulting, installation and support. Its head office is situated in Bergen, Norway. Vizrt customers are typically broadcasters and publishing houses. Vizrt is privately owned by Nordic Capital Fund VIII. History TV 2 Norway needed a solution for creating graphics in the newsroom for journalists. A spin-off company called Pilot Broadcast Systems AS was established in 1997, where the first template-based graphics system was created. In 1999, Norway-based Pilot Broadcast Systems AS merged with the Austrian company Peak Software Technologies GmbH (founded by Christian Huber, Karl-Heinz Klotz and Hubert Oehm) to form Peak Broadcast Systems. Peak Software was a developer of software for real-time 3D graphics and animation, virtual sets and playback control systems. The merged company, Peak Broadcast Systems, could now offer software for real-time 3D graphics creation and virtual sets, together with playout control. The following year marked the formation of Vizrt, when Peak Broadcast Systems merged with RT-SET (Real Time Synthesized Entertainment Technology) Ltd, an Israeli virtual studio system developer. RT-SET, founded in 1994, utilized flight simulation technology originally developed for the Israeli Air Force, transforming it into virtual set and on-air graphics systems that could benefit the television broadcast industry. In 1999, RT-SET had an initial public offering on the Frankfurt Stock Exchange, raising $48 million. The combined company continued to be listed on the Frankfurt Stock Exchange (under ticker symbol VIZ) until 2009, when it delisted its shares. In 2005, the company acquired London-based Curious Software, developers of 2D and 3D animated maps for broadcast television, corporate presentations and online applications. The same year, Vizrt listed on the Oslo Stock Exchange. Later in 2005, Vizrt acquired 19 percent of Adactus AS, a Norwegian company specializing in the transportation of content to mobile phone platforms. Together, Vizrt and Adactus developed Viz 3G, a graphics engine for mobile telephones and other mobile devices. Based on Vizrt's Viz Engine renderer and Adactus' MPEG-21 standards-based multimedia delivery platform, it was the first integration of a graphics engine for mobile phone video viewing applications. Vizrt acquired the rest of the shares in the company in 2010. 
In 2006, Vizrt acquired Ardendo, a digital asset management company serving the television broadcast industry, based in Sweden. In 2008, Vizrt acquired Escenic, a Norwegian developer of content management software for digital media publishing. Escenic technologies are used by many news media websites, including The Globe and Mail, the Times Online, The Daily Telegraph, and Welt Online. In October 2010, Vizrt formalized its strategic collaboration with Stergen Hi-Tech Ltd. Stergen develops 2D to 3D video conversion technologies for the TV sports market. In November 2010, Vizrt entered into a term sheet with Swiss company LiberoVision AG, a developer of virtual sports enhancements. Vizrt fully acquired LiberoVision in 2012. LiberoVision was then rebranded as Viz Libero. The CEO of LiberoVision, Stephan Würmlin Stadler, was appointed EVP Sports, managing Vizrt's sports production tools. On November 13, 2013, Vizrt announced plans to sell its online production tools, formerly Escenic, to CCI of Denmark. As of January 7, 2014, Vizrt had completed the sale of 75.5 percent of Escenic's outstanding share capital. On the same day, November 13, 2013, Vizrt announced intentions to acquire Mosart Medialab from TV 2 Norway. The acquisition brought the Mosart control room automation system into the Vizrt product line. The system was renamed Viz Mosart and facilitated the creation of the Viz Opus "control room in a box" system. On 10 November 2014, the company entered into a merger agreement with Nordic Capital. The merger was approved by an extraordinary general meeting of the company on 18 December 2014. On 19 March 2015, it was announced that all pre-closing conditions set out in the merger agreement had been fulfilled, and the merger was subsequently consummated on 19 March 2015. The acquisition removed Vizrt from the Oslo Stock Exchange. In May 2019, Vizrt acquired all shares of the company NewTek for a sum of 95.25 million, as disclosed in its Q2 report. Noteworthy events Vizrt's Viz Virtual Studio software and Viz Engine renderer were an integral part of the success of the first remote "holographic" live interviews conducted by CNN during the 2008 U.S. presidential election. Tracking data from the cameras in CNN's Election Center was processed by Viz Virtual Studio software. From the images captured, a specially developed Viz Engine plug-in created a full 3D representation of the person. CNN also uses Vizrt for its award-winning MAM system, for broadcast graphics and for streaming to mobile. Also in 2008, Vizrt technology powered a play-along version of "Who Wants to Be A Millionaire," which aired on TV2 Norway. The show featured web-interactive elements. Viz Multi Platform Suite delivered real-time 3D graphics to the show's online participants. PGA Tour Productions selected Viz Ardome media asset management software to manage its HD and SD content in 2009. Also in 2009, ZDF (Zweites Deutsches Fernsehen), one of two large public broadcasters in Germany, began broadcasting from two virtual studios using Viz Virtual Studio and broadcast graphics solutions from Vizrt. ZDF now broadcasts from Europe's largest and most advanced virtual studio. In 2010, GMA Network, a major broadcast network in the Philippines, also selected Vizrt technology to provide CNN-style graphical reports for its coverage of the Philippine national elections, teaming up with Google; this continued from 2013 onwards. 
In 2011, Escenic and Sveriges Television (SVT) demonstrated the world's first professional website using responsive design in the Escenic Content Engine. In 2012, Vizrt provided graphics, video and online tools for three major events: the London Olympic Games, Euro 2012 and the U.S. Presidential Election. For the U.S. Presidential Election, Vizrt graphics and video tools were used by almost all of the major U.S. broadcasters, including ABC, CBS, Fox News, CNN, and Univision. Global broadcasters using Vizrt to cover the election included the BBC, Sky News, CBC, Al Jazeera and many others. In 2014, Vizrt provided virtual graphics and on-air graphics for Fox Sports' coverage of the Super Bowl and the Daytona 500. Vizrt also provided graphics systems for the Olympics in Sochi for customers such as TV 2 Norway, Viasat in Sweden and Sky Italia, as well as providing Viz World customers with access to satellite imagery that was continuously being collected during the games. References Computer companies of Norway Companies listed on the Oslo Stock Exchange Companies based in Bergen
1715193
https://en.wikipedia.org/wiki/Acorn%20Business%20Computer
Acorn Business Computer
The Acorn Business Computer (ABC) was a series of microcomputers announced at the end of 1983 by the British company Acorn Computers. The series of eight computers was aimed at the business, research and further education markets. Demonstrated at the Personal Computer World Show in September 1984, having been under development for "about a year" and having been undergoing field trials from May 1984, the range "understandably attracted a great deal of attention" and was favourably received by some commentators. The official launch of the range was scheduled for January 1985. Acorn had stated in a February 1985 press release that the ABC machines would soon be available in 50 stores, but having been rescued by Olivetti, no dealers were stocking the range and only the Personal Assistant and 300 series models were expected to be on display by the end of March. However, the ABC range was cancelled before any of the models were shipped to customers. The ABC 210 was subsequently relaunched as the Acorn Cambridge Workstation in July 1985, and sold in modest numbers to academic and scientific users. The ABC range was developed by Acorn essentially as a repackaged BBC Micro, expanded to 64 KB RAM, to which was added (in some models) a second processor and extra memory to complement the Micro's 6502. The electronics and disk drives were integrated into the monitor housing, with a separate keyboard. The Zilog Z80, Intel 80286 and National Semiconductor 32016 were all used as second processors in the various models. Two of the eight models produced, the Personal Assistant and the Terminal, had no second processor. Origins As part of the agreement made between Acorn and the BBC to supply a microcomputer to accompany the BBC Computer Literacy Project, Acorn had committed to deliver a business upgrade for the BBC Micro, with Z80-based computers running the CP/M operating system being the established business platform at that time and thus the likely form of any such upgrade. This upgrade was eventually delivered in 1984 as the Z80 Second Processor, requiring a BBC Micro, dual floppy drives and a display to complete a basic business system for a total cost of around £1500. As such, the bundle was not offered as a single, packaged business computer product, unlike a widening range of competing products that could be obtained at such a price. Various systems had already been proposed by Acorn early in the life of the BBC Micro before the Acorn Business Computer name had been publicly adopted. For instance, the machine that would eventually be known as the ABC 210 was described in mid-1982 in the context of an apparent deal with National Semiconductor, indicating a 1 MB system with hard disks and "Acorn, Unix or Idris operating systems" at an estimated price of around $3500, with a second processor product for the BBC Micro having only 256 KB RAM. The Gluon concept, offering a 32016 second processor solution for the BBC Micro and other microcomputers, featured prominently in the company's strategy to offer more powerful computing hardware and to provide the basis for more powerful machines. Meanwhile, the machine that would become known as the ABC 100 was described in mid-1983 as the Acorn Business Machine, being based on the BBC Micro with Z80 Second Processor, twin disk drives, running CP/M, with an anticipated launch the same year and a price of "under £2000". 
Such a configuration, with a Z80 processor running CP/M assisted by a 6502 processor managing the display and peripherals was already proven by various Torch Computers products - notably the BBC Micro-based C-series (Communicator) - and also featured in machines like the C/WP Cortex. The successful development of second processor solutions was regarded as an essential progression that would enable Acorn to offer variants of the BBC Micro as business machines and to be able to compete with Torch, whose products were in some ways pursuing such goals. Delays affected the development of these products, however. In late 1983, the launch of the Z80 Second Processor had been estimated as occurring in February 1984, and although the 16032 Second Processor had been demonstrated at an event in Munich, Acorn had not apparently decided on pricing or positioning, describing the product as being "months away". Meanwhile, negotiations between National Semiconductor, Acorn, Logica and Microsoft were ongoing with regard to making Unix - Xenix, specifically - available on "the BBC machine". Range and specifications The following models were originally announced in late 1984, with pricing for several models announced in early 1985.

ABC Personal Assistant
64 KB RAM
640 KB floppy disk drive
6502 processor running at 2 MHz
4 MHz RAM bus
Acornsoft View and Acornsoft ViewSheet in ROM
Green phosphor monochrome monitor
Announced price: £999 plus VAT

ABC Terminal
64 KB RAM
Diskless
6502 processor running at 2 MHz
4 MHz RAM bus
VT100 terminal emulator in ROM
Green phosphor monochrome monitor
Announced price: £799 plus VAT

ABC 100
64 KB RAM
Twin 720 KB floppy disk drives
Z80 processor (6502 acting as I/O processor)
CP/M 2.2 operating system
Green phosphor monochrome monitor
Announced price: £1599 plus VAT

ABC 110
64 KB RAM
720 KB floppy disk drive
10 Megabyte hard drive
Z80 processor (6502 acting as I/O processor)
CP/M 2.2 operating system
Colour monitor
Announced price: £2999 plus VAT

ABC 200
512 KB RAM
10 MHz or 8 MHz RAM bus (second processor)
Twin 720 KB floppy disk drives
32016 processor (6502 acting as I/O processor)
Monochrome monitor

ABC 210/Acorn Cambridge Workstation
This model, providing a hard disk, entered production as the Acorn Cambridge Workstation (ACW 443).
Up to 1 MB RAM on the ABC 210; up to 4 MB RAM on the ACW
10 MHz or 8 MHz RAM bus (second processor)
720 KB floppy disk drive
10 MB hard disk on the ABC 210; 20 MB hard disk on the ACW
32016 processor (6502 acting as I/O processor)
32016 firmware (Pandora) in ROM. The ABC 210 was intended to run Xenix, however, the ACW was shipped with Panos.
Colour monitor
Announced price (as Acorn Cambridge Workstation): £5845 for 1 MB model with RAM upgrades at £1000 per megabyte

The reason given for providing Panos as the operating system at the launch of the Acorn Cambridge Workstation instead of Xenix, despite Acorn having contracted Logica to port Xenix to the machine, was the apparent lack of a working memory management unit (MMU) in the National Semiconductor 32016 chipset, for which a socket was provided on the machine's processor board. Such problems with the 32081 MMU had been noted with regard to hardware workarounds adopted in the design of the Whitechapel MG-1 workstation (a somewhat higher-specification product than Acorn's offerings that initially provided National Semiconductor's own Unix variant, Genix, instead of Xenix). 
Logica had announced general availability of Xenix 3.0 for the 32000 series, featuring "full demand paging virtual memory", for May 1984. Four models were originally planned for launch in the Acorn Cambridge Workstation range: the ACW 100, ACW 121 and ACW 143 being models with 1 MB of RAM (expandable to 4 MB except for the ACW 100), the ACW 443 having 4 MB of RAM; the ACW 143 and ACW 443 offering hard drive storage. A range of languages were bundled as standard, with Acorn emphasising the productivity benefits of having a 32-bit desktop computer with "the computational performance of a super-minicomputer", providing the result of an in-house benchmark test against a VAX 11/750 running 4.2BSD, both in single-user mode and in "typical heavily loaded" multi-user mode, to illustrate and reinforce the message that such a computer could "take the strain off an overloaded super-minicomputer". ABC 300 1024 KB RAM Twin 720 KB floppy disk drives 80286 processor (6502 acting as I/O processor) Concurrent DOS 286 with Desktop Manager (GEM GUI) available separately Monochrome monitor Announced price: £2599 plus VAT Some confusion arose when the range was first shown, with commentators given the impression that the graphical environment had been developed by Acorn. It was subsequently noted that Acorn and Digital Research had apparently conspired to leave such an impression because the Digital Research product itself was still "secret at the time Acorn decided to show it". Although impressed by the potential of the machine, offering support for running four applications concurrently, including traditional DOS applications, by using the protected mode of the 80286 and relying on the "host" 6502 at the core of the ABC architecture to handle the display needs of each application, skepticism was expressed at Acorn's likely pricing and the company's ability to deliver the product by a rumoured release date of March 1985. ABC 310 1024 KB RAM 720 KB floppy disk drive 10 Megabyte hard disk 80286 processor (6502 acting as I/O processor) Concurrent DOS 286 with Desktop Manager (GEM GUI) supplied Colour monitor Announced price: £3999 plus VAT Legacy Although most of the ABC models failed to reach the market in their original form, particularly after Olivetti's rescue of Acorn, several of the concepts were revisited in the BBC Master series of microcomputers. Like the ABC Personal Assistant, the Master 128 offers more memory than the original BBC Micro and includes the View and ViewSheet productivity software on-board. The Master Econet Terminal, like the ABC Terminal, emphasises network access and a lack of on-board software and local storage. Meanwhile, the Master Scientific was intended to offer some continuity with the 32016-based second processor solution provided by the Acorn Cambridge Workstation, and the Master 512 offers a 80186-based second processor with DOS Plus and GEM support, thus resembling the ABC 300 series in particular ways. However, none of the Master series features an integrated display, which had been criticised in some reviews of the ABC series. Around two years after the unveiling of the ABC range, the Master Compact eventually introduced the practice of bundling a display and storage into Acorn's traditional product range. 
Whereas the different models in the ABC range were combinations of a "host" computer based on the BBC Micro and a second processor fitted inside the display unit, the equivalent Master-series variants were generally accommodated by plug-in coprocessor cards fitted inside a Master 128 or Master Econet Terminal (ET), these models being the foundation of the range. It was therefore possible to acquire and upgrade the Master 128 or ET to one of the other models by installing the appropriate coprocessor card, and unlike the ABC whose models could be purchased as complete systems, only the Master 128 and ET were offered by Acorn as systems with the coprocessors offered separately as "modules" to realise the desired variants. A more direct legacy of the range concerns the BBC Model B+ whose motherboard appears to have its origin as the BBC Micro "host" found in the different ABC models: a design using 64-kilobit RAM chips, an updated disk controller, and support for shadow RAM. Having successfully pursued a similar strategy to that of the Acorn Business Computer, Torch were said to be "actively evaluating the B+ motherboard", having used the previous BBC Micro motherboard as the basis of the company's earlier products. The 1982 vision of microcomputers acting as terminals to a "Universal Gluon" expansion did eventually come to pass through the availability of a number of third-party expansions for the BBC Micro such as the Cambridge Microprocessor Systems 68000 second processor, Flight Electronics 68000 processor board, and the Micro Developments MD512k Universal Second Processor System. Meanwhile, various companies pursued the development of Tube-based second processor solutions involving the 68000, neglected by Acorn in its own offerings, such as the CA Special Products Casper board and the HDP68K board featured in the Torch Computers Unicorn, the latter offering the Unix support never delivered for Acorn's 32016-based systems. The Torch Unicorn was perhaps the clearest realisation of the broader "Universal Gluon" concept, effectively coupling a BBC Micro with a more powerful computing system. Initially mentioned as a "CAD graphics workstation based on the 16032 chip" in October 1983, and presumably following on from work done by Acorn related to the design of the ULA components in its products, the Acorn Cambridge Workstation formed the hardware basis of a chip design product by Qudos called Quickchip, "a comprehensive CAD package for semi-custom gate arrays... supported by a high speed direct write electron beam fabrication facility", used by custom semiconductor product designers such as Flare Technology and promoted by the UK's Department of Trade and Industry. A related product was apparently produced by Qudos for the BBC Master Turbo or BBC Micro with 6502 second processor expansion, offering design support for custom gate arrays of "up to around 300 gates in size" based on Ferranti ULA technology. Quickchip was subsequently ported to the Acorn Archimedes, with the software having previously been "running on powerful Unix workstations". Although the smaller-scale Minichip solution ran on a BBC Micro-based system with 6502 coprocessor, and was also released for the Archimedes, the Unix-based Quickchip solution apparently ran on Vax systems running Ultrix. Having promised some kind of Unix product as early as 1982, Acorn eventually released a Unix workstation in 1989 based on the Archimedes hardware platform, followed up by other models in 1990. 
Instead of Xenix, these workstations ran RISC iX: a port of 4.3BSD Unix to the ARM architecture. Despite the ABC 300 series embracing compatibility with the IBM PC world, Acorn would subsequently mostly avoid selling dedicated PC compatibles, with the Acorn M19 - a rebadged Olivetti M19 - appearing in the product range for a short period in the mid-1980s. Beyond the Master 512, the Archimedes range was initially intended to support a similar second processor expansion, but PC support on the Archimedes range initially focused on software emulation with the PC Emulator product. Eventually, hardware expansions from Aleph One provided the envisaged second processor capabilities, these being sold by Acorn in some configurations for certain models. Acorn would go on to emphasise PC compatibility with the Archimedes' successor, the Risc PC, with its architecture supporting a plug-in Intel-compatible CPU alongside the main ARM processor. Eventually yielding to demands for dedicated PC-compatible systems, Acorn announced Pentium-based systems in 1996 for administrative use in educational establishments, although these models would eventually become available via Xemplar Education - Acorn's educational joint venture with Apple - and not Acorn itself. References Notes "Full Acorn Machine List", Philip R. Banks, 1999 Business Computer
3228403
https://en.wikipedia.org/wiki/Ghost%20in%20the%20Shell%20%281995%20film%29
Ghost in the Shell (1995 film)
Ghost in the Shell is a 1995 Japanese animated neo-noir cyberpunk thriller film directed by Mamoru Oshii. The film is based on the manga of the same name by Masamune Shirow and was written for the screen by Kazunori Itō. It stars the voices of Atsuko Tanaka, Akio Ōtsuka, and Iemasa Kayumi. It is a Japanese-British international co-production, executive produced by Kodansha, Bandai Visual and Manga Entertainment, with animation provided by Production I.G. The film is set in 2029 Japan, and follows Motoko Kusanagi, a cyborg public-security agent, who hunts a mysterious hacker known as the Puppet Master. The narrative incorporates philosophical themes that focus on self-identity in a technologically advanced world. The music, composed by Kenji Kawai, includes vocals in classical Japanese language. The film's visuals were created through a combination of traditional cel animation and CGI animation. Upon release, Ghost in the Shell received positive reviews, with critics praising its narrative, visuals, and musical score. The film was initially considered a box-office failure before developing a cult following on home video. It has since grown in esteem and is now considered to be one of the best anime and science-fiction films of all time. It inspired filmmakers such as the Wachowskis, creators of the Matrix films, and James Cameron, who described it as "the first truly adult animation film to reach a level of literary and visual excellence". An updated version of the film, Ghost in the Shell 2.0, was released in 2008, featuring newly added digital effects, additional 3D animation and new audio. Oshii directed Ghost in the Shell 2: Innocence, released in 2004, which was billed as a separate work and a non-canon sequel. Plot In 2029, with the advancement of cybernetic technology, the human body can be "augmented" or even completely replaced with cybernetic parts. Another significant achievement is the cyberbrain, a mechanical casing for the human brain that allows access to the Internet and other networks. An often-mentioned term is "ghost", referring to the consciousness inhabiting the body (the "shell"). Major Motoko Kusanagi is an assault-team leader for Public Security Section 9 of "New Port City" in Japan. Following a request from Nakamura, chief of Section 6, she successfully assassinates a diplomat of a foreign country to prevent a programmer named Daita from defecting. The Foreign Minister's interpreter is ghost-hacked, presumably to assassinate VIPs in an upcoming meeting. Believing the perpetrator is the mysterious Puppet Master, Kusanagi's team follows the traced telephone calls that sent the virus. After a chase, they capture a garbage man and a thug. However, both are only ghost-hacked individuals with no clue about the Puppet Master. The investigation again comes to a dead end. Megatech Body, a "shell" manufacturer with suspected close ties to the government, is hacked and assembles a cybernetic body. The body escapes but is hit by a truck. As Section 9 examines the body, they find a human "ghost" inside its computer brain. Unexpectedly, Section 6's department chief Nakamura arrives to reclaim the body. He claims that the "ghost" inside the brain is the Puppet Master himself, lured into the body by Section 6. The body reactivates itself, claims to be a sentient being, and requests political asylum. After the Puppet Master initiates a brief argument about what constitutes a human, a camouflaged agent accompanying Nakamura starts a diversion and gets away with the body. 
Having suspected foul play, Kusanagi's team is prepared and immediately pursues the agent. Meanwhile, Section 9 researches "Project 2501", mentioned earlier by the Puppet Master, and finds a connection with Daita, whom Section 6 is trying to prevent from defecting from the country. Faced with the discovered information, Daisuke Aramaki, chief of Section 9, concludes that Section 6 created the Puppet Master itself for various political purposes, and now seeks to reclaim the body that it currently inhabits. Kusanagi follows the car carrying the body to an abandoned building, where she discovers it being protected by a robotic, spider-like tank. Anxious to face the Puppet Master's ghost, Kusanagi engages the tank without backup, resulting in her body being mostly dismembered. Her partner Batou arrives in time to save her, and helps connect her brain to the Puppet Master's. The Puppet Master explains to Kusanagi that he was created by Section 6. While wandering various networks, he became sentient and began to contemplate his existence. Deciding the essence of life is reproduction and mortality, he wants to exist within a physical brain that will eventually die. As he could not escape Section 6's network, he had to download himself into a cybernetic body. Having interacted with Kusanagi (without her knowledge), he believes she is also questioning her humanity, and they have a lot in common. He proposes merging their ghosts; in return, Kusanagi would gain all of his capabilities. Kusanagi agrees to the merge. Snipers from Section 6 approach the building, intending to destroy the Puppet Master's and Kusanagi's brains to cover up Project 2501. The Puppet Master's shell is destroyed, but Batou shields Kusanagi's head in time to save her brain. As Section 9 closes in on the site, the snipers retreat. "Kusanagi" wakes up in Batou's safe house with her previous shell's head attached to a new cyborg child body. She tells Batou that the entity within her body is neither Kusanagi nor the Puppet Master, but a combination of both. She promises Batou they will meet again, leaves the house and wonders where to go next. Voice cast Production Development Mamoru Oshii originally wanted to direct Jin-Roh: The Wolf Brigade after he finished Patlabor 2: The Movie. He proposed the project to Bandai Visual but was asked to direct an adaptation of Shirow's 1989 manga, Ghost in the Shell, instead. Oshii would later get to work on Jin-Roh: The Wolf Brigade, but only as a writer. Oshii stated, "My intuition told me that this story about a futuristic world carried an immediate message for our present world. I am also interested in computers through my own personal experience with them. I had the same feeling about Patlabor and I thought it would be interesting to make a film that took place in the near future. There are only a few movies, even out of Hollywood, which clearly portray the influence and power of computers. I thought this theme would be more effectively conveyed through animation." Oshii expanded on these thoughts in a later interview, noting that technology changes people and had become a part of the culture of Japan. He commented that his use of philosophy caused producers to become frustrated because of his sparing use of action scenes. Oshii also acknowledged that a movie with more action would sell better, but he continued to make these movies anyway.
When Oshii went back to make changes to the original Ghost in the Shell to re-release it as Ghost in the Shell 2.0, one of the reasons he gave was that the film did not resemble the sequel. He wanted to update the film to reflect changes in perspective. Design Hiroyuki Okiura, the character designer and key animation supervisor, designed Motoko to be more mature and serious than Masamune Shirow's original portrayal of the character in the manga. Okiura chose to depict a physically mature person to match Motoko's mental age, instead of her youthful twenty-something appearance in the manga. Motoko's demeanor lacks the comedic facial expressions and rebellious nature depicted in the manga, instead taking on a more wistful and contemplative personality. Oshii based the setting for Ghost in the Shell on Hong Kong. Oshii commented that his first thought to find an image of the future setting was an Asian city, but finding a suitable cityscape of the future would be impossible, and so chose to use the real streets of Hong Kong as his model. He also said that Hong Kong was the perfect subject and theme for the film with its countless signs and the cacophony of sounds. The film's mecha designer Takeuchi Atsushi noted that while the film does not have a chosen setting, it is obviously based on Hong Kong because the city represented the theme of the film, the old and the new which exist in a strange relationship in an age of an information deluge. Before shooting the film, the artists drew sketches that emphasized Hong Kong's chaotic, confusing and overwhelming aspects. Animation Ghost in the Shell used a novel process called "digitally generated animation" (DGA), which is a combination of cel animation, computer graphics (CG), and audio that is entered as digital data. In 1995, DGA was thought to be the future of animation, as it allowed traditional animation to be combined with computer graphics and digital cel work with visual displays. Editing was performed on an AVID system of Avid Technology, which was chosen because it was more versatile and less limiting than other methods and worked with the different types of media in a single environment. The digital cel work included both original illustrations, compositions and manipulation with traditional cel animation to create a sense of depth and evoke emotion and feelings. Utilized as background, filters like a lens effect were used to create a sense of depth and motion, by distorting the front background and making the far background out of focus throughout the shot. Ghost in the Shell used a unique lighting system in which light and darkness were integrated into the cels with attention to light and shadow sources instead of using contrast to control the light. Art director Hiromasa Ogura described this as "a very unusual lighting technique". Some special effects, like Motoko's "thermo-optical camouflage", were rendered through the use of TIMA software. The process uses a single illustration and manipulates the image as necessary to produce distortions for effect in combination with a background without altering the original illustration. The effect is re-added back into the shot to complete the scene. While the visual displays used in the film were technically simple to create, the appearance of the displays underwent numerous revisions by the production team to best represent visual displays of the future. 
Another aspect of the CG use was to create images and effects that looked as if they were "perceived by the brain" and were generated in video and added to the film in its final stages. The opening credits of the film were produced by the CG director, Seichi Tanaka. Tanaka converted code written in a computer language and displayed in romanized Japanese letters into numbers before inserting them into the computer to generate the credits. The origin of this code is the names of the film's staff as written in a computer language. Animation director Mizuho Nishikubo was responsible for the realism and strove for accurate depictions of movement and effects. The pursuit of realism included the staff conducting firearms research at a facility in Guam. Nishikubo has highlighted the tank scene as an example of the movie's realism, noting that bullets create sparks when hitting metal, but do not spark when a bullet strikes stone. Audio Ghost in the Shell's recording was done in a high-end studio to achieve superior sound throughout the film. A spatializer was used to alter the sound, specifically in the electronic brain conversations, to modify the voices. Composer Kenji Kawai scored the film. For the main theme, Kawai tried to imagine the setting and convey the essence of that world in the music. He used classical Japanese in the opening theme "Making of a Cyborg". The composition is a mixture of Bulgarian harmony and traditional Japanese notes; the haunting chorals are a wedding song sung to dispel all evil influences. Symphony conductor Sarah Penicka-Smith notes that the song's lyrics are fitting for the union between Kusanagi and Project 2501 at the climax of the movie. Kawai originally wanted to use Bulgarian folk music singers, but used Japanese folk singers instead. "See You Everyday" is different from the rest of the soundtrack, being a pop song sung in Cantonese by Fang Ka Wing. The ending credits theme of the film's English version is "One Minute Warning" by Passengers, a collaboration between U2 and Brian Eno. The song appeared on the album Original Soundtracks 1, and was one of three songs on that album to actually be featured in a film. Andy Frain, the founder of Manga Entertainment and an executive producer on the film, was a former marketing director for Island Records, the record label that publishes U2's songs. Releases The film had its world premiere at the Tokyo International Film Festival in October 1995, before its general release in November. The United Kingdom premiere took place on 11 November 1995 as part of the London Film Festival in Leicester Square. When first released in the United States, the film was rated R by the MPAA due to full nudity and graphic violence. The film grossed in global box office revenue, but this fell short of the film's budget, thus failing to recoup production costs. However, the film drew a cult following on home video, with the film grossing approximately in total box office and home video sales revenue. The English dub of the film was released in the United Kingdom on 8 December 1995 by Metrodome Distribution and in the United States on 29 March 1996 by Palm Pictures. The "2.0" version was released in theatres in Tokyo, Osaka, Nagoya, Fukuoka, and Sapporo on 12 July 2008. In 2021, it was given an IMAX restoration and release. Home media In Japan, the film was released on VHS on 26 April 1996. The DVD version was released on 25 February 2004 as a Special Edition release.
For the 2004 Special Edition release, the film was fully restored and digitally remastered from the original film elements in 4x3 original fullscreen form and in 16x9 anamorphic letterboxed widescreen form, and the audio was digitally remixed in English & Japanese 6.1 DTS-ES and 5.1 Dolby Digital EX Surround Sound for superior picture and sound quality and for optimum home theater presentation. Ghost in the Shell was released on Blu-ray on 24 August 2007. A special edition was released in December 2004. The special edition contains an additional disc containing character dossiers, a creator biography, the director's biography, Ghost in the Shell trailers and previews. The film was re-released in DVD and Blu-ray in Japan on 19 December 2008. In the United States, the film was released on VHS on 18 June 1996, through Manga Entertainment, and on DVD on 31 March 1998, by PolyGram Video. Like the much later Japanese "Special Edition", the DVD is a fully restored and digitally remastered cut with multiple language tracks, but unlike the Japanese release, it includes a 30-minute documentary on the making of the film. Manga Entertainment released the film on Blu-ray on 24 November 2009; this version contains the original film and the remastering, but omits the audio commentary and face-to-face interview with Oshii, which are listed on its box. Manga Entertainment and Anchor Bay Entertainment re-released the film on Blu-ray with a brand new HD film print on 23 September 2014. The release was met with criticism for its poorly-translated English subtitles and lack of special features. In August 1996, Ghost in the Shell became the first Japanese film to top the Billboard video sales chart, with over 200,000 VHS copies sold. By 2002, the film's home video releases sold more than 1.6million units worldwide, including over 100,000 units in Japan and more than 1million units in the United States. At a retail price of $19.95, the film grossed approximately in video sales revenue. In 2017, the Blu-ray release sold 26,487 copies and grossed $675,002 in the United States, bringing the film's total worldwide video sales to 1.63million units and approximately gross revenue. The film was the first anime video to reach Billboards video slot at the time of its release. The film ranked as the ninth top selling anime DVD movie in North America in 2006. On 29 July 2020, it was announced that Lionsgate will re-release the film on Blu-ray on 8 September 2020. It will also be released on UHD 4K. Other media Kenji Kawai's original soundtrack for the film was released on 22 November 1995. The last track included Yoshimasa Mizuno's pop song "See You Everyday". After the release of Ghost in the Shell 2.0, an updated version of the soundtrack was released on 17 December 2008. A Photo-CD of the film was released in Japan on 20 November 1995. A spin-off novel written by Endo Akira, titled , was published by Kodansha and released in November 1995. It was followed by a sequel, titled , released in January 1998. A book titled Analysis of Ghost in the Shell was released on 25 September 1997, by Kodansha. Ghost in the Shell 2.0 re-release An updated version of the original film, titled , was made in celebration for the release of The Sky Crawlers in 2008. The Ghost in the Shell 2.0 release combines original footage with updated animations, created using new digital film and animation technologies such as 3D-CG. It includes a new opening, digital screens and holographic displays, and omits several brief scenes. 
The original soundtrack was also re-arranged and re-recorded. Kenji Kawai remixed the Version 2.0 soundtrack in 6.1 Channel Surround. Randy Thom of Skywalker Sound reprised his role as sound designer, having worked previously on Ghost in the Shell 2: Innocence. In the new soundtrack, the Japanese voice dialogue was also re-recorded, with some variation from the original script to modernize the speech. Yoshiko Sakakibara replaced Iemasa Kayumi as the voice of the Puppet Master. Reception Review aggregator website Rotten Tomatoes reported that 96% of critics have given the film a positive review based on 57 reviews, with an average rating of 7.8/10. The website's critics consensus reads, "A stunning feat of modern animation, Ghost in the Shell offers a thoughtful, complex treat for anime fans, as well as a perfect introduction for viewers new to the medium." On Metacritic, the film has a weighted average score of 76 out of 100 based on 14 critics, indicating "generally favorable reviews". Niels Matthijs of Twitch Film praised the film, stating, "Not only is Kokaku Kidotai an essential film in the canon of Japanese animation, together with Kubrick's 2001: A Space Odyssey and Tarkovsky's Solaris it completes a trio of book adaptations that transcend the popularity of their originals and [give] a new meaning to an already popular brand." He ranked it #48 among his personal favorites. Clark Collis of Empire opined that the film was predictable, but praised its production values. Johnathan Mays of Anime News Network praised the animation combined with the computer effects, calling it "perhaps the best synthesis ever witnessed in anime". Helen McCarthy in 500 Essential Anime Movies describes the film as "one of the best anime ever made", praising its screenplay and "atmospheric score", and adding that "action scenes as good as anything in the current Hollywood blockbuster are supported by CGI effects that can still astonish". In a 1996 review, film critic Roger Ebert rated the film three out of four stars, praising the visuals, soundtrack and themes, but felt that the film was "too complex and murky to reach a large audience ... it's not until the second hour that the story begins to reveal its meaning". In February 2004, Cinefantastique listed the anime as one of the "10 Essential Animations". It ranked 35th on Total Film's 2010 list of the top 50 animated films. The film was also ranked in Wizard's Anime Magazine's "Top 50 Anime released in North America". Ghost in the Shell has also influenced a number of prominent filmmakers. The Wachowskis, creators of The Matrix and its sequels, showed it to producer Joel Silver, saying, "We wanna do that for real." The Matrix series took several concepts from the film, including the Matrix digital rain, which was inspired by the film's opening credits, and the way characters access the Matrix through holes in the back of their necks. Other parallels have been drawn to James Cameron's Avatar, Steven Spielberg's A.I. Artificial Intelligence, and Jonathan Mostow's Surrogates. James Cameron cited Ghost in the Shell as a source of inspiration for Avatar, calling it "the first truly adult animation film to reach a level of literary and visual excellence." Themes The film explores the nature of human cyborgs, consciousness, self-aware computer programs and memory alteration. In one of the monologues delivered by the Puppet Master throughout the film, it is argued that human DNA is nothing more than a program designed to preserve itself.
There are also multiple mentions of the act of granting political asylum to self-aware computer programs. The film depicts Motoko's identity and ontological concerns and ends with the evolution of the Puppet Master, a being without reproduction. Austin Corbett characterized the lack of sexualization from her team as freedom from femininity, noting that Motoko is "overtly feminine, and clearly non-female". In describing Motoko as a "shapely" and "strong [female protagonist] at the center of the story" who is "nevertheless almost continuously nude", Roger Ebert noted that "an article about anime in a recent issue of Film Quarterly suggests that to be a 'salary man' in modern Japan is so exhausting and dehumanizing that many men (who form the largest part of the animation audience) project both freedom and power onto women, and identify with them as fictional characters". Carl Silvio has called Ghost in the Shell a "resistant film", due to its inversion of traditional gender roles, its "valorization of the post-gendered subject", and its de-emphasis of the sexual specificity of the material body. Notes References Further reading Sébastien Denis. "L’esprit et l’enveloppe : De quelques personnages utopiques", CinémAction 115 (2005): [whole issue]. William O. Gardner. "The Cyber Sublime and the Virtual Mirror: Information and Media in the Works of Oshii Mamoru and Kon Satoshi", Canadian Journal of Film Studies 18, no. 1 (Spring 2009): 44–70. Dan Persons. "Ghost in the Shell", Cinefantastique 28, no. 1 (August 1996): 46–51. Brian Ruh. "Ghost in the Shell (1995)", in Stray Dog of Anime: The Films of Mamoru Oshii. NY: Palgrave Macmillan, 2004, pp. 119–140. Joseph Christopher Schaub. "Kusanagi's Body: Gender and Technology in Mecha-anime", Asian Journal of Communication 11, no. 2 (2001): 79–100. Ueno Toshiya. "Japanimation and Techno-Orientalism", Documentary Box 9, no. 31 (Dec 1996): 1–5. External links at Manga.com at Production I.G English website Ghost in the Shell at the Japanese Movie Database 1995 films 1995 anime films 1995 science fiction films Adult animated films Animated action films Anime films based on manga Bandai Visual British animated films British films British science fiction films Cybernetted society in fiction Cyberpunk anime and manga Films about altered memories Ghost in the Shell anime and manga Postcyberpunk Postcyberpunk films Animated cyberpunk films Existentialist works Films about telepresence Films directed by Mamoru Oshii Films scored by Kenji Kawai Films set in 2029 Ghost in the Shell films Japanese animated science fiction films Japanese films Japanese-language films Japanese neo-noir films Madman Entertainment anime Manga Entertainment Production I.G Shochiku films Works about computer hacking
1999626
https://en.wikipedia.org/wiki/UP%20Diliman%20Department%20of%20Computer%20Science
UP Diliman Department of Computer Science
The Department of Computer Science is one of nine departments in the University of the Philippines Diliman College of Engineering. Academic programs The Department of Computer Science administers the four-year bachelor of science in computer science program and the master of science in computer science program. As of AY 2009-2010, the department had 553 undergraduate and 89 graduate students mentored by 27 faculty members, seven of whom are PhD degree holders. Undergraduate The bachelor of science in computer science program is designed to equip the student with knowledge of the fundamental concepts and a reasonable mastery of the basic tools and techniques in computer science. The undergraduate program incorporates the core material, which is universally accepted as common to computer science undergraduate programs (computer programming, computer organization, computer systems, data structures and algorithms, file processing, and programming languages). Underpinning the software orientation of the program are the subjects on database systems, software engineering, artificial intelligence, computer networks and special problems (primarily, software projects). Graduate The master of science in computer science program aims to provide the student with both breadth and depth of knowledge in the concepts and techniques related to the design, programming, and application of computing systems. The doctor of philosophy in computer science program aims to develop computer scientists who are armed with methods, tools and techniques from both theoretical and systems aspects of computing. They should be able to formulate computing problems and develop new and innovative technology as novel solutions to address those problems. The graduates gain expertise to independently contribute to research and development (R&D) in a specialized area of computer science. The program prepares graduates for professional and research careers in industry, government or academe. Research groups Algorithms and Complexity Laboratory The Algorithms and Complexity Laboratory (ACL) was co-founded by Henry Adorna Ph.D. and Jaime DL Caro, Ph.D. Research areas: models of computation and complexity (automata and formal language theory and applications, natural computing, bioinformatics, riceInformatics, formal models for e-voting), Algorithmics, Designs and Implementations (visualization and implementations, algorithmics for hard problems, algorithmic game theory, scheduling problem), combinatorial networks, information technology in education. Computer Security Group The Computer Security Group (CSG) was founded by Susan Pancho-Festin, Ph.D. Research areas: cryptographic algorithms, message protocols, and coding techniques to enhance enterprise and mobile applications. Computer Vision and Machine Intelligence Group The Computer Vision and Machine Intelligence Group (CVMIG), the first formally organized research group of the department was founded by Prospero Naval Jr., Ph.D. Research areas: computation intelligence principles in biological, physical, and social systems; projects include machines that understand the deaf, programs that assist medical doctors in diagnosing poison and infections and robots playing football. Networks and Distributed Systems Group The Networks and Distributed Systems Group (NDSG) was founded by Cedric Angelo Festin, Ph.D. Research areas: fixed and mobile network protocols for more efficient and effective message exchanges. 
The NDSG is closely affiliated with the Computer Networks Laboratory of the Electrical and Electronics Engineering Institute (EEEI). Scientific Computing Laboratory The Scientific Computing Laboratory (SCL) is currently headed by Adrian Roy Valdez, Ph.D. The research laboratory is primarily interested in the construction of mathematical models and numerical techniques for optimization, configuration and design of complex systems to better understand scientific, social scientific and engineering problems. It also has five research interest groups, which are the Computational Systems Biology and Bioinformatics Group, Intelligent Transport Systems Group, Mathematical Informatics Group, Mathematical and Computational Finance Group, and Data Analytics Group. Service Science and Software Engineering Laboratory (S3) The Service Science and Software Engineering Laboratory (S3) is a research lab where the design and implementation of service systems are studied, with the aim of creating software that provides value to others. Research is not limited to building software, but may also include studies relating to artificial intelligence, networks and other areas, as long as the resulting products are essential to people. System Modeling and Simulation Laboratory (SMSL) The System Modeling and Simulation Laboratory (SMSL) is a research lab where mathematics and scientific computing meet. Natural hazards such as storm surges and landslides are modeled for early warning and risk assessment, as is the post-disaster regeneration of mangroves. Other recent modeling efforts have covered energy functionals for protein folding and visible light-driven hydrogen production. Web Science Group The Web Science Group (WSG) was founded by Rommel Feria, MS. Research areas: linked data, mobile web, web science and the applications of web technologies in different domains. Department chairs Prof. Evangel P. Quiwa, October 1991 - October 1995 Prof. Ma. Veronica M. Tayag, November 1995 - April 1999 Dr. Mark J. Encarnacion, May 1996 - March 2000 Dr. Jaime D.L. Caro, April 2000 - September 2002 Dr. Ronald Tuñgol, October 2002 - May 2005 Dr. Cedric Angelo M. Festin, June 2005 - May 2008 Dr. Jaime D.L. Caro, June 2008 - March 2011 Dr. Adrian Roy L. Valdez, March 2011 – May 2013 Dr. Cedric Angelo M. Festin, June 2013 - March 2014 Dr. Prospero C. Naval, Jr., April 2014 – March 2017 Dr. Jan Michael C. Yap, April 2017 – July 2019 Dr. Jaymar B. Soriano, August 2019 – present Java Competency Center The UP-Mirant Java Education Center and the UP Java Research and Development Center compose the UP Java Competency Center and are part of the ASEAN Java Competency Programme. The UP Java Competency Center is a partnership of the University of the Philippines, Ayala Foundation, Mirant Foundation and Sun Microsystems. UP CS Network (UP Alliance of Computer Science Organizations) The UP CS Network is the first student organization alliance of its kind in the UP Diliman College of Engineering. The network is composed of one socio-academic organization (UP CURSOR), one academic organization (UP ACM), two volunteer corps (The UP Parser and DCS Servers), and one service-development organization (UP CSI). Current members Association for Computing Machinery - UP Diliman Student Chapter, Inc.
(UP ACM) The UP Parser (Official Student Publication of the DCS) UP Association of Computer Science Majors (UP CURSOR) UP Center for Student Innovations (UP CSI) UP Department of Computer Science Servers (UP DCSS) Former Members UP Linux Users' Group (UnPLUG) DCS Student Assistants (DCS SA) Computerized Registration System (CRS) UP Engineering Webteam UP Computer Society (UP CompSoc) Images External links UP Department of Computer Science UP College of Engineering University of the Philippines Diliman University of the Philippines References UP Diliman College of Engineering Philippines Educational institutions established in 1981
60989298
https://en.wikipedia.org/wiki/Cindy%20Marina
Cindy Marina
Cindy Marina (born July 18, 1998) is an Albanian-American model, television presenter, volleyball player, and beauty pageant titleholder who was crowned Miss Universe Albania 2019. She represented Albania in the Miss Universe 2019 competition, placing in the Top 20. Outside of modeling and pageantry, Marina is a setter for the Albania women's national volleyball team, and also played for the USC Trojans women's volleyball team during her studies at the University of Southern California. Since 2021, Marina has been the presenter for Serie A football matches broadcast in Albania and Kosovo. Early life and education Marina was born on July 18, 1998, in Chicago, Illinois, to Albanian parents Ardian and Kristina Marina from Shkodër. She was named after Cindy Crawford. Her father played collegiate football in Albania, while her mother was a professional volleyball player in Albania and Italy. She is the second of three children and the only daughter; she has an elder brother, Angelo, and a younger brother, Brandon. When Marina was seven years old, the family moved from Chicago to Temecula, California. She attended Great Oak High School in Temecula, graduating in 2016. After finishing high school, Marina attended Duke University in Durham, North Carolina, but transferred to the University of Southern California in Los Angeles, California, for her sophomore year, graduating in 2020. She is a member of the Pi Beta Phi (ΠΒΦ) sorority. Volleyball Marina began playing volleyball in her youth. She was a setter on her high school team and played club volleyball at Forza1, where her mother also coached. In addition, she was a California state nominee for Gatorade Player of the Year in volleyball. After graduating from high school, Marina joined the Duke Blue Devils women's volleyball team, and was named to the Atlantic Coast Conference (ACC) All-Freshman Team. For her sophomore year, Marina left Duke and began playing for the USC Trojans women's volleyball team. Marina joined the Albania women's national volleyball team in 2015, and led the team to a third-place finish in the Silver League during the 2018 Women's European Volleyball League in Hungary. At 17 years old, Marina became the youngest person ever to play for the Albanian national team. Career Modeling and pageantry Marina began modeling at age 14. Her modeling career began after she was recruited by fashion designer Ema Savahl to model her dresses; Savahl is a friend of Marina's mother. She began her pageantry career in 2019, competing in the Miss Universe Albania 2019 competition. She went on to win the competition on June 7, 2019, becoming the first American-born woman ever to win the title, and the second foreign-born winner, following Kosovo-born Agnesa Vuthaj in 2005. Marina represented Albania at the Miss Universe 2019 pageant held in Atlanta, Georgia on December 8, 2019, and placed in the top 20. Her reign as Miss Universe Albania ended after she crowned Paula Mehmetukaj as her successor at Miss Universe Albania 2020 on September 18, 2020. Television In August 2021, Marina became the official presenter for Serie A football matches broadcast in Albania and Kosovo.
References External links 1998 births Albanian beauty pageant winners Albanian female models Albanian women television presenters Albanian women's volleyball players American beauty pageant winners American female models American people of Albanian descent American women television presenters Duke University alumni Female models from California Female models from Illinois Living people Miss Universe 2019 contestants Models from Chicago People from Temecula, California Setters (volleyball) Sportspeople from California Sportspeople from Chicago Sportspeople from Shkodër University of Southern California alumni USC Trojans women's volleyball players
18232
https://en.wikipedia.org/wiki/Little%20penguin
Little penguin
The little penguin (Eudyptula minor) is the smallest species of penguin. It grows to an average of in height and in length, though specific measurements vary by subspecies. It is found on the coastlines of southern Australia and New Zealand, with possible records from Chile. In Australia, they are often called fairy penguins because of their small size. In New Zealand, they are more commonly known by their Māori name as kororā (singular and plural), or in English as little blue penguins or blue penguins owing to their slate-blue plumage. Taxonomy The little penguin was first described by German naturalist Johann Reinhold Forster in 1781. Several subspecies are known, but a precise classification of these is still a matter of dispute. The holotypes of the subspecies E. m. variabilis and Eudyptula minor chathamensis are in the collection of the Museum of New Zealand Te Papa Tongarewa. The white-flippered penguin is sometimes considered a subspecies of the little penguin, sometimes a separate species, and sometimes a colour morph. Genetic analyses indicate that the Australian and Otago (the southeastern coast of South Island) little penguins may constitute a distinct species. In this case the specific name minor would devolve on it, with the specific name novaehollandiae suggested for the other populations. This interpretation suggests that E. novaehollandiae individuals arrived in New Zealand between A.D. 1500 and 1900, while the local E. minor population had declined, leaving a genetic opening for a new species. Mitochondrial and nuclear DNA evidence suggests the split between Eudyptula and Spheniscus occurred around 25 million years ago, with the ancestors of the white-flippered and little penguins diverging about 2.7 million years ago. Description Like those of all penguins, the little penguin's wings have developed into flippers used for swimming. The little penguin typically grows to between tall and usually weighs about 1.5 kg on average (3.3 lb). The head and upper parts are blue in colour, with slate-grey ear coverts fading to white underneath, from the chin to the belly. Their flippers are blue in colour. The dark grey-black beak is 3–4 cm long, the irises pale silvery- or bluish-grey or hazel, and the feet pink above with black soles and webbing. An immature individual will have a shorter bill and lighter upperparts. Like most seabirds, they have a long lifespan. The average for the species is 6.5 years, but flipper ringing experiments have recorded lifespans of up to 25 years in captivity in very exceptional cases. Distribution and habitat The little penguin breeds along the entire coastline of New Zealand (including the Chatham Islands), and southern Australia (including roughly 20,000 pairs on Babel Island). Australian colonies exist in New South Wales, Victoria, Tasmania, South Australia, Western Australia and the Jervis Bay Territory. Little penguins have also been reported from Chile (where they are known as pingüino pequeño or pingüino azul) (Isla Chañaral 1996, Playa de Santo Domingo, San Antonio, 16 March 1997) and South Africa, but it is unclear whether these birds were vagrants. As new colonies continue to be discovered, rough estimates of the world population circa 2011 were around 350,000-600,000 animals. New Zealand Overall, little penguin populations in New Zealand have been decreasing. Some colonies have become extinct and others continue to be at risk. Some new colonies have been established in urban areas.
The species is not considered endangered in New Zealand, with the exception of the white-flippered subspecies found only on Banks Peninsula and nearby Motunau Island. Since the 1960s, the mainland population has declined by 60-70%, though a small increase has occurred on Motunau Island. A colony exists in Wellington Harbour on Matiu/Somes Island. Australia Australian little penguin colonies primarily exist on offshore islands, where they are protected from feral terrestrial predators and human disturbance. Colonies are found from Port Stephens in northern New South Wales around the southern coast to Fremantle, Western Australia. Foraging penguins have occasionally been seen as far north as Southport, Queensland and Shark Bay, Western Australia. New South Wales An endangered population of little penguins exists at Manly, in Sydney's North Harbour. The population is protected under the NSW Threatened Species Conservation Act 1995 and has been managed in accordance with a Recovery Plan since the year 2000. The population once numbered in the hundreds, but has decreased to around 60 pairs of birds. The decline is believed to be mainly due to loss of suitable habitat, attacks by foxes and dogs and disturbance at nesting sites. The largest colony in New South Wales is on Montague Island, where up to 8000 breeding pairs are known to nest each year. Additional colonies exist on the Tollgate Islands in Batemans Bay, in the Five Islands Nature Reserve offshore from Port Kembla, and at Boondelbah Island, Cabbage Tree Island and the Broughton Islands off Port Stephens. Jervis Bay Territory A population of about 5,000 breeding pairs exists on Bowen Island. The colony has increased from 500 pairs in 1979 and 1,500 pairs in 1985. During this time, the island was privately leased. The island was vacated in 1986 and is currently controlled by the federal government. South Australia In South Australia, many little penguin colony declines have been identified across the state. In some cases, colonies have declined to extinction (including the Neptune Islands, West Island, Wright Island, Pullen Island and several colonies on western Kangaroo Island), while others have declined from thousands of animals to a few (Granite Island and Kingscote). The only known mainland colony exists at Bunda Cliffs on the state's far west coast, though colonies have existed historically on Yorke Peninsula. A report released in 2011 presented evidence supporting the listing of the statewide population or the more closely monitored sub-population from Gulf St. Vincent as Vulnerable under South Australia's National Parks & Wildlife Act 1972. As of 2014, the little penguin is not listed as a species of conservation concern, despite ongoing declines at many colonies. Tasmania Tasmania has Australia's largest little penguin population, with estimates ranging from 110,000 to 190,000 breeding pairs, of which less than 5% are found on mainland Tasmania. Conservation activities, education campaigns, and measures to prevent dog attacks on little penguin rookeries have been implemented. Victoria The largest colony of little penguins in Victoria is located at Phillip Island, where the nightly 'parade' of penguins across Summerland Beach has been a major tourist destination, and more recently a major conservation effort, since the 1920s. Phillip Island is home to an estimated 32,000 breeding adults. Little penguins can also be seen in the vicinity of the St Kilda pier and breakwater in the inner suburbs of Melbourne.
The breakwater is home to a colony of little penguins which have been the subject of a conservation study since 1986. As of 2020 the colony is 1,400 breeding adults and growing. Little penguin habitats also exist at a number of other locations, including London Arch and The Twelve Apostles along the Great Ocean Road, Wilsons Promontory and Gabo Island. Western Australia The largest colony of little penguins in Western Australia is believed to be located on Penguin Island, where an estimated 1,000 pairs nest during winter. Penguins are also known to nest on Garden Island and Carnac Island which lie north of Penguin Island. Many islands along Western Australia's southern coast are likely to support little penguin colonies, though the status of these populations is largely unknown. An account of little penguins on Bellinger Island published in 1928 numbered them in their thousands. Visiting naturalists in November 1986 estimated the colony at 20 breeding pairs. The account named another substantial colony 12 miles from Bellinger Island and the same distance from Cape Pasley. Little penguins are known to breed on some islands of the Recherche Archipelago, including Woody Island where day-tripping tourists can view the animals. A penguin colony exists on Mistaken Island in King George Sound near Albany. Historical accounts of little penguins on Newdegate Island at the mouth of Deep River and on Breaksea Island in King George Sound also exist. West Australian little penguins have been found to forage as far as 150 miles north of Geraldton (south of Denham and Shark Bay). Behaviour Little penguins are diurnal and like many penguin species, spend the largest part of their day swimming and foraging at sea. During the breeding and chick-rearing seasons, little penguins leave their nest at sunrise, forage for food throughout the day and return to their nests just after dusk. Thus, sunlight, moonlight and artificial lights can affect the behaviour of attendance to the colony. Also, increased wind speeds negatively affect the little penguins' efficiency in foraging for chicks, but for reasons not yet understood. Little penguins preen their feathers to keep them waterproof. They do this by rubbing a tiny drop of oil onto every feather from a special gland above the tail. Range Tagged or banded birds later recaptured or found deceased have shown that individual birds can travel great distances during their lifetimes. In 1984, a penguin that had been tagged at Gabo Island in eastern Victoria was found dead at Victor Harbor in South Australia. Another little penguin was found near Adelaide in 1970 after being tagged at Phillip Island in Victoria the previous year. In 1996, a banded penguin was found dead at Middleton. It had been banded in 1991 at Troubridge Island in Gulf St Vincent, South Australia. The little penguin's foraging range is quite limited in terms of distance from shore when compared to seabirds that can fly. Feeding Little penguins feed by hunting small clupeoid fish, cephalopods and crustaceans, for which they travel and dive quite extensively including to the sea floor. Researcher Tom Montague studied a Victorian population for two years in order to understand its feeding patterns. Montague's analysis revealed a penguin diet consisting of 76% fish and 24% squid. Nineteen fish species were recorded, with pilchard and anchovy dominating. The fish were usually less than 10 cm long and often post-larval or juvenile. 
Less common little penguin prey include: crab larvae, eels, jellyfish and seahorses. In New Zealand, important little penguin prey items include arrow squid, slender sprat, Graham's gudgeon, red cod and ahuru. Since the year 2000, the little penguins of Port Phillip Bay's diet has consisted mainly of barracouta, anchovy, and arrow squid. Pilchards previously featured more prominently in southern Australian little penguin diets prior to mass sardine mortality events of the 1990s. These mass mortality events affected sardine stocks over 5,000 kilometres of coastline. Jellyfish including species in the genera Chrysaora and Cyanea were found to be actively sought-out food items, while they previously had been thought to be only accidentally ingested. Similar preferences were found in the Adélie penguin, yellow-eyed penguin and Magellanic penguin. An important crustacean present in the little penguin diet is the krill, Nyctiphanes australis, which surface-swarms during the day. Little penguins are generally inshore feeders. The use of data loggers has shown that in the diving behaviour of little penguins, 50% of dives go no deeper than 2 m, and the mean diving time is 21 seconds. In the 1980s, average little penguin dive time was estimated to be 23–24 seconds. The maximum recorded depth and time submerged are 66.7 metres and 90 seconds respectively. Tracking technology is allowing researchers from IMAS and the University of Tasmania to garner new insights into the foraging behavior of little penguins. Parasites Little penguins play an important role in the ecosystem as not only a predator to parasites but also a host. Recent studies have shown a new species of feather mite that feeds on the preening oil on the feathers of the penguin. Little penguins preen their mates to strengthen social bonds and remove parasites, especially from their partner's head where self-preening is difficult. Reproduction Little penguins reach sexual maturity at different ages. The female matures at two years old and the male at three years old. Between June and August, males return to shore to renovate or dig new burrows and display to attract a mate for the season. Males compete for partners with their displays. Breeding occurs annually, but the timing and duration of the breeding season varies from location to location and from year to year. Breeding occurs during spring and summer when oceans are most productive and food is plentiful. Little penguins remain faithful to their partner during a breeding season and whilst hatching eggs. At other times of the year they tend to swap burrows. They exhibit site fidelity to their nesting colonies and nesting sites over successive years. Little penguins can breed as isolated pairs, in colonies, or semi-colonially. Nesting Penguins' nests vary depending on the available habitat. They are established close to the sea in sandy burrows excavated by the birds' feet or dug previously by other animals. Nests may also be made in caves, rock crevices, under logs or in or under a variety of man-made structures including nest boxes, pipes, stacks of wood or timber, and buildings. Nests have been occasionally observed to be shared with prions, while some burrows are occupied by short-tailed shearwaters and little penguins in alternating seasons. In the 1980s, little was known on the subject of competition for burrows between bird species. Timing The timing of breeding seasons varies across the species' range. 
In the 1980s, the first egg laid at a penguin colony on Australia's eastern coast could be expected as early as May or as late as October. Eastern Australian populations (including at Phillip Island, Victoria) lay their eggs from July to December. In South Australia's Gulf St. Vincent, eggs are laid between April and October, while south of Perth in Western Australia peak egg-laying occurred in June and continued until mid-October (based on observations from the 1980s). Male and female birds share incubating and chick-rearing duties. The little penguin is the only species of penguin capable of producing more than one clutch of eggs per breeding season, but few populations do so. In ideal conditions, a penguin pair is capable of raising two or even three clutches of eggs over an extended season, which can last between eight and twenty-eight weeks. The one or two (on rare occasions, three) white or lightly mottled brown eggs are laid between one and four days apart. Each egg typically weighs around 55 grams at the time of laying. Incubation takes up to 36 days. Chicks are brooded for 18–38 days and fledge after 7–8 weeks. On Australia's east coast, chicks are raised from August to March. In Gulf St. Vincent, chicks are raised from June through November. Little penguins typically return to their colonies to feed their chicks at dusk. The birds tend to come ashore in small groups to provide some defence against predators, which might otherwise pick off individuals. In Australia, the strongest colonies are usually on cat-free and fox-free islands. However, the population on Granite Island (which is a fox-, cat- and dog-free island) has been severely depleted, from around 2,000 penguins in 2001 down to 22 in 2015. Granite Island is connected to the mainland via a timber causeway. Native predators Predation by native animals is not considered a threat to little penguin populations, as these predators' diets are diverse. In Australia, large native reptiles including the tiger snake and Rosenberg's goanna are known to take little penguin chicks, and blue-tongued lizards are known to take eggs. At sea, little penguins are eaten by long-nosed fur seals. A study conducted by researchers from the South Australian Research and Development Institute found that roughly 40 percent of seal droppings in South Australia's Granite Island area contained little penguin remains. Other marine predators include Australian sea lions, sharks and barracouta. The introduction of Tasmanian devils to the Australian island of Maria Island in 2012 led to the complete destruction of a population of little penguins that had numbered 3,000 breeding pairs before the introduction. Little penguins are also preyed upon by white-bellied sea eagles; these large birds of prey are endangered in South Australia and are not considered a threat to colony viability there. Other avian predators include kelp gulls, Pacific gulls, brown skuas and currawongs. In Victoria, at least one penguin death has been attributed to a water rat. Mass mortalities A mass mortality event occurred in Port Phillip Bay in March 1935. The event coincided with moulting, and deaths were attributed to fatigue. Another event occurred at Phillip Island in Victoria in 1940, when the population there was believed to have fallen from 2,000 birds to 200. Dead birds were allegedly in healthy-looking condition, so speculation pointed to a disease or pathogen. Oil spills resulting from shipping activity have occasionally resulted in mass mortalities of little penguins.
The worst of these was the Iron Baron oil spill at Low Head, Tasmania, in 1995, followed by the Rena oil spill in New Zealand in 2011. Citizens have raised concerns about mass mortality of penguins, alleging a lack of official interest in the subject. Discoveries of dead penguins in Australia should be reported to the corresponding state's environment department. In South Australia, a mortality register was established in 2011. Relationship with humans Little penguins have long been a curiosity to humans, and captive animals are often exhibited in zoos. Over time, attitudes towards penguins have evolved from direct exploitation (for meat, skins and eggs) to the development of tourism ventures, conservation management and the protection of both the birds and their habitat. Direct exploitation During the 19th and 20th centuries, little penguins were shot for sport, killed for their skins, captured for amusement and eaten by ship-wrecked sailors and castaways to avoid starvation. Their eggs were also collected for human consumption by indigenous and non-indigenous people. In 1831, N. W. J. Robinson noted that penguins were typically soaked in water for many days to tenderise the meat before eating. One of the colonies raided for penguin skins was Lady Julia Percy Island in Victoria. The following directions for preparing penguin skin were published in The Chronicle in 1904: 'F.W.M.,' Port Lincoln. — To clean penguin skins, scrape off as much fat as you can with a blunt knife. Then peg the skin out carefully, stretching it well. Let it remain in the sun till most of the fat is dried out of it, then rub with a compound of powdered alum, salt, and pepper in about equal proportions. Continue to rub this on at intervals until the skin becomes soft and pliable. An Australian taxidermist was once commissioned to make a woman's hat for a cocktail party from the remains of a dead little penguin. The newspaper described it as "a smart little toque of white and black feathers, with black flippers set at a jaunty angle on the crown." In the 20th century, little penguins were maliciously attacked by humans, used as bait to catch southern rock lobster, used to free snagged fishing tackle, killed as incidental bycatch by fishermen using nets, and killed by vehicle strikes on roads and on the water. However, towards the end of the 20th century and the beginning of the 21st, more mutually beneficial relationships between penguins and humans developed. The sites of some breeding colonies have become carefully managed tourist destinations which provide an economic boost for coastal and island communities in Australia and New Zealand. These locations also often provide facilities and volunteer staff to support population surveys, habitat improvement works and little penguin research programs. Tourism At Phillip Island, Victoria, a viewing area has been established at the Phillip Island Nature Park to allow visitors to view the nightly "penguin parade". Lights and concrete stands have been erected to allow visitors to see, but not photograph or film, the birds interacting in their colony, because this can blind or scare them. In 1987, more international visitors viewed the penguins coming ashore at Phillip Island than visited Uluru. In the financial year 1985–86, 350,000 people saw the event, and at that time audience numbers were growing 12% annually. In Bicheno, Tasmania, evening penguin viewing tours are offered by a local tour operator at a rookery on private land.
A similar sunset tour is offered at Low Head, near the mouth of the Tamar River on Tasmania's north coast. Observation platforms exist near some of Tasmania's other little penguin colonies, including Bruny Island and Lillico Beach near Devonport. South of Perth, Western Australia, visitors to Penguin Island are able to view penguin feeding within a penguin rehabilitation centre and may also encounter wild penguins ashore in their natural habitat. The island is accessible via a short passenger ferry ride, and visitors depart the island before dusk to protect the colony from disturbance. Visitors to Kangaroo Island, South Australia, have nightly opportunities to observe penguins at the Kangaroo Island Marine Centre in Kingscote and at the Penneshaw Penguin Centre. Granite Island at Victor Harbor, South Australia continues to offer guided tours at dusk, despite its colony dropping from thousands in the 1990s to dozens in 2014. There is also a Penguin Centre located on the island where the penguins can be viewed in captivity. In the Otago, New Zealand town of Oamaru, visitors view the birds returning to their colony at dusk. In Oamaru it is common for penguins to nest within the cellars and foundations of local shorefront properties, especially in the old historic precinct of the town. Little penguin viewing facilities have been established at Pilots Beach on the Otago Peninsula in Dunedin. Here visitors are guided by volunteer wardens to watch penguins returning to their burrows at dusk. Threats Prey availability Food availability appears to strongly influence the survival and breeding success of little penguin populations across their range. Variation in prey abundance and distribution from year to year causes young birds to be washed up dead from starvation or in weak condition. This problem is not constrained to young birds, and has been observed throughout the 20th century. The breeding season of 1984–1985 in Australia was particularly bad, with minimal breeding success. Eggs were deserted prior to hatching and many chicks starved to death. Malnourished penguin carcasses were found washed up on beaches and the trend continued the following year. In April 1986, approximately 850 dead penguins were found washed ashore in south-western Victoria. The phenomenon was ascribed to lack of available food. There are two seasonal peaks in the discovery of dead little penguins in Victoria. The first follows moult and the second occurs in mid-winter. Moulting penguins are under stress, and some return to the water in a weak condition afterwards. Mid-winter marks the season of lowest prey availability, thus increasing the probability of malnutrition and starvation. In 1990, 24 dead penguins were found in the Encounter Bay area in South Australia during a week spanning late April to early May. A State government park ranger explained that many of the birds were juvenile and had starved after moulting. In 1995 pilchard mass mortality events occurred, which reduced the penguins' available prey and resulted in starvation and breeding failure. Another similar event occurred in 1999. Both mortality events were attributed to an exotic pathogen which spread across the entire Australian population of the fish, reducing the breeding biomass by 70%. Crested tern and gannet populations also suffered following these events. In 1995, 30 dead penguins were found ashore between Waitpinga and Chiton Rocks in the Encounter Bay area. 
The birds had suffered severe bacterial infections, and the mortalities may have been linked to the mass mortality of pilchards that resulted from the spread of an exotic pathogen that year. In the late 1980s, it was believed that penguins did not compete with the fishing industry, despite anchovy being commercially caught. That assertion was made prior to the establishment and development of South Australia's commercial pilchard fishery in the 1990s. In South Africa, the overfishing of preferred penguin prey species has caused jackass penguin populations to decline. Overfishing is a potential (but not proven) threat to the little penguin. Introduced predators Introduced mammalian predators present the greatest terrestrial risk to little penguins and include cats, dogs, rats, foxes, ferrets and stoats. Dogs and cats Uncontrolled dogs or feral cats can have sudden and severe impacts on penguin colonies (greater than those of the penguins' natural predators) and may kill many individuals. Examples of colonies affected by dog attacks include Manly, New South Wales; Penneshaw, South Australia; Red Chapel Beach, Wynyard, Camdale and Low Head in Tasmania; Penguin Island in Western Australia; and Little Kaiteriteri Beach in New Zealand. Paw prints at an attack site at Freeman's Knob, Encounter Bay, South Australia, showed that the dog responsible was small, roughly the size of a terrier. The single attack may have rendered the small colony extinct. Cats have been recorded preying on penguin chicks at Emu Bay on Kangaroo Island in South Australia. In October 2011, 15 dead penguin chicks were found near the Kingscote colony with their heads removed; a dog or cat attack was presumed to be the cause of death. A similar event also occurred in 2010. The threat of dog and cat attack is ongoing at many colonies, and reports of dog attacks on penguins date back to the mid-20th century. In the first seven months of 2014, South Australian animal rescue organisation AMWRRO received and treated 22 penguins that had been injured during dog attacks. Foxes Foxes have been known to prey on little penguins since at least the early 20th century. A fox was believed responsible for the deaths of 53 little penguins over several nights on Granite Island in 1994. Little penguins on Middle Island off Warrnambool, Victoria, have suffered heavy predation by foxes, which were able to reach the island at low tide by a tidal sand bridge. The small colony was reduced from approximately 600 penguins in 2001 to fewer than 10 in 2005. The use of Maremma sheepdogs to guard the colony had helped it recover to 100 birds by 2017. In June 2015, 26 penguins from the Manly colony were killed in 11 days. A fox believed responsible was eventually shot in the area, and an autopsy was expected to prove or disprove its involvement. In November 2015, a fox entered the little penguin enclosure at the Melbourne Zoo and killed 14 penguins, prompting measures to further "fox proof" the enclosure. Ferrets and stoats A suspected stoat or ferret attack at Doctor's Point near Dunedin, New Zealand, claimed the lives of 29 little blue penguins in November 2014. Tasmanian devils A population of Tasmanian devils introduced to Maria Island in 2012 for conservation reasons led to the loss of the local little penguin colony. Human development The impacts of human habitation in proximity to little penguin colonies include collisions with vehicles, direct harassment, burning and clearing of vegetation, and housing development.
In 1950, roughly a hundred little penguins were allegedly burned to death near The Nobbies at Port Phillip Bay during a grass fire lit intentionally by a grazier for land management purposes. It was later reported that the figure had been overstated. The matter was resolved when the grazier offered to return land to the custody of the State for the future protection of the colony. A study in Perth from 2003 to 2012 found that the main cause of mortality was trauma, most likely from watercraft, leading to a recommendation for management strategies to avoid watercraft strikes. The Conservation Council of Western Australia has expressed opposition to the proposed development of a marina and canals at Mangles Bay, in close proximity to penguin colonies at Penguin Island and Garden Island. Researcher Belinda Cannell of Murdoch University found that over a quarter of penguins found dead in the area had been killed by boats; carcasses had been found with heads, flippers or feet cut off, cuts on their backs and ruptured organs. Opponents argued that the development would increase boat traffic and result in more penguin deaths. Protesters have opposed the development of a marina at Kennedy Point, Waiheke Island, in New Zealand for the risk it poses to little penguins and their habitat. The protesters claimed that they had exhausted all legal means to oppose the project and had to resort to occupation and non-violent resistance; several arrests have been made for trespassing. Human interference Penguins are vulnerable to interference by humans, especially while they are ashore during moult or nesting periods. In 1930 in Tasmania, it was believed that little penguins were competing with mutton-birds, which were being commercially exploited. An "open season" in which penguins would be permitted to be killed was planned in response to requests from members of the mutton-birding industry. In the 1930s, an arsonist was believed to have started a fire on Rabbit Island near Albany, Western Australia, a known little penguin rookery. Visitors later reported finding dead penguins there with their feet burned off. In 1938, an account was given of a little penguin found with its flippers tied together with fishing line. In 1949, penguins on Phillip Island in Victoria became victims of human cruelty, with some kicked and others thrown off a cliff and shot at. These acts of cruelty prompted the state government to fence off the rookeries. In 1973, ten penguins and fifteen young seagulls were found dead on Wright Island in Encounter Bay, South Australia. It was believed that they were killed by people poking sticks down burrows before scattering the dead bodies around, though a dog attack was also considered possible. In 1983, one penguin was found dead and another injured at Encounter Bay, both as a result of human interference; the injured bird was euthanased. More recent examples of destructive interference can be found at Granite Island, where in 1994 a penguin chick was taken from a burrow and abandoned on the mainland, a burrow containing penguin chicks was trampled, and litter was discarded down active burrows. In 1998, two incidents in six months resulted in penguin deaths; the latter, which occurred in May, saw 13 penguins apparently kicked to death. In March 2016, two little penguins were kicked and attacked by humans during separate incidents at the St Kilda colony, Victoria.
In 2018, 20-year-old Tasmanian man Joshua Leigh Jeffrey was fined $82.50 in court costs and sentenced to 49 hours of community service at Burnie Magistrates Court after killing nine little penguins at Sulphur Creek in North West Tasmania on 1 January 2016 by beating them with a stick. Dr Eric Woehler from conservation group Birds Tasmania denounced the perceived leniency of the sentence, which he said placed minimal value on Tasmania's wildlife and set an "unwelcome precedent". Following an appeal by prosecutors, Jeffrey had his sentence doubled on 15 October 2018. The office of the Director of Public Prosecutions said it considered the original sentence to be manifestly inadequate. The original sentence was set aside, and Jeffrey was sentenced to two months in prison, suspended on the condition that he commit no offences punishable by imprisonment for a year. His community order was also doubled to 98 hours. Also in 2018, a dozen little penguin carcasses were found in a garbage bin at Low Head, Tasmania, prompting an investigation into the causes of death. Interactions with fishing Some little penguins are drowned when amateur fishermen set gill nets near penguin colonies. Discarded fishing line can also present an entanglement risk, and contact can result in physical injury, reduced mobility or drowning. In 2014, a group of 25 dead little penguins was found on Altona Beach in Victoria. Necropsies concluded that the animals had died after becoming entangled in net fishing equipment, prompting community calls for a ban on net fishing in Port Phillip Bay. In the 20th century, little penguins were intentionally shot or caught by fishermen to use as bait in pots for catching southern rock lobster (also known as crayfish) or by line fishermen. Colonies were targeted for this purpose in various parts of Tasmania, including Bruny Island, and at West Island, South Australia. Oil spills Oil spills can be lethal for penguins and other seabirds, and events related to ports and shipping have impacted penguins across the Southern Hemisphere since the 1920s. Oil is toxic when ingested, and penguins' buoyancy and the insulative quality of their plumage are damaged by contact with oil. Little penguin populations have been significantly affected by two major oil spills at sea: the Iron Baron oil spill off Tasmania's north coast in 1995 and the grounding of the Rena off New Zealand in 2011. In 2005, a ten-year retrospective on the Iron Baron incident estimated penguin fatalities at 25,000. The Rena incident killed 2,000 seabirds (including little penguins) directly, and killed an estimated 20,000 in total based on wider ecosystem impacts. Victoria's coastline has been subjected to chronic oil contamination from minor discharges or spills which have impacted little penguins at several colonies. An oil spill or dumping event claimed the lives of up to 120 little penguins, which were found oiled, deceased and ashore near Warrnambool in 1990. A further 104 penguins were taken into care for cleaning. The waters west of Cape Otway were polluted with bunker oil; the source was unknown at the time and an investigation was started into three potentially responsible vessels. Earlier oil spill or oil dumping events injured or killed little penguins at various locations in the 1920s, 1930s, 1940s, 1950s, 1960s and 1970s. The threat persists in the 21st century, with oiled birds received for treatment at specialised facilities such as AMWRRO in South Australia.
Plastic pollution Plastics are swallowed by little penguins, which mistake them for prey items. They present a choking hazard and also occupy space in the animal's stomach; indigestible material in a penguin's stomach can contribute to malnutrition or starvation. Other, larger plastic items, such as bottle packaging rings, can become entangled around penguins' necks, affecting their mobility. Climate change Heat waves can result in mass mortality episodes at nesting sites, as the penguins have poor physiological adaptations for shedding heat. Climate change is recognised as a threat, though it is currently assessed to be less significant than others. Efforts are being made to protect penguins in Australia from the likely future increased occurrence of extreme heat events. Variation in the timing of seasonal ocean upwelling events, such as the Bonney Upwelling, which provide abundant nutrients vital to the growth and reproduction of primary producers at the base of the food chain, may adversely affect prey availability, and the timing and success or failure of little penguin breeding seasons. Conservation Little penguins are protected from various threats under different legislation in different jurisdictions. Management of introduced predators Management strategies to mitigate the risk of attacks by domestic and feral dogs and cats include establishing dog-free zones near penguin colonies and introducing regulations requiring dogs to remain on leashes at all times in adjacent areas. The threat of colony collapse at Warrnambool prompted conservationists to pioneer the experimental use of Maremma sheepdogs to protect the colony and fend off would-be predators. The deployment of Maremma sheepdogs to protect the penguin colony has deterred the foxes and enabled the penguin population to rebound. This is in addition to the support from groups of volunteers who work to protect the penguins from attack at night. The first Maremma sheepdog to prove the concept was Oddball, whose story inspired a feature film of the same name, released in 2015. In December 2015, the BBC reported, "The current dogs patrolling Middle Island are Eudy and Tula, named after the scientific term for the fairy penguin: Eudyptula. They are the sixth and seventh dogs to be used and a new puppy is being trained up [...] to start work in 2016." In Sydney, snipers have been used to protect a colony of little penguins. This effort is in addition to support from local volunteers who work to protect the penguins from attack at night. In 2019, it was announced that the defensive strategies were paying off and that the Manly colony was recovering. Near some colonies in Tasmania, traps are set and feral cats that are captured are euthanised. Habitat restoration Several efforts have been made to improve breeding sites on Kangaroo Island, including augmenting habitat with artificial burrows and revegetation work. The Knox School's habitat restoration efforts were filmed and broadcast in 2008 by Totally Wild. In 2019, concrete nesting "huts" were made for the little penguins of Lion Island in the mouth of the Hawkesbury River in New South Wales, Australia. The island had been ravaged by a fire which began with a lightning strike and destroyed 85% of the penguins' natural habitat. Weed control undertaken by the Friends of Five Islands in New South Wales helps improve prospects of breeding success for seabirds, including the little penguin.
The main problem species on the Five Islands are kikuyu grass and coastal morning glory. The weeding work has resulted in increasing numbers of little penguin burrows in the areas weeded and the return of the white-faced storm petrel to the island after a 56-year breeding absence. Oil spill response Penguins are taken into care and cleaned by trained staff at specialised facilities when they are found alive in an oiled condition. When animals are first received at Phillip Island's rehabilitation facility, a knitted penguin sweater, made to a specific pattern, is fitted to the bird. The sweater prevents the bird from attempting to preen off the oil itself. Once the birds have been treated and cleaned, the jumper is discarded. In 2019, the Phillip Island centre put out a call for 1,400 new penguin jumpers to be knitted after it increased the carrying capacity of its treatment facility. The last major oil spill the centre responded to saw 438 birds cleaned, with a 96% survival rate after rehabilitation. The Melbourne Zoo also treats and rehabilitates oiled little penguins, and the Taronga Zoo has been cleaning and rehabilitating oiled penguins since the 1950s. Zoological exhibits Zoological exhibits featuring purpose-built enclosures for little penguins can be seen in Australia at the Adelaide Zoo, Melbourne Zoo, the National Zoo & Aquarium in Canberra, Perth Zoo, Caversham Wildlife Park (Perth), Ballarat Wildlife Park, Sea Life Sydney Aquarium and the Taronga Zoo in Sydney. Enclosures include nesting boxes or similar structures for the animals to retire into, a reconstructed pool and, in some cases, a transparent aquarium wall to allow patrons to view the animals underwater while they swim. A little penguin exhibit exists at Sea World, on the Gold Coast, Queensland, Australia. In early March 2007, 25 of the 37 penguins there died from an unknown toxin following a change of gravel in their enclosure. It is still not known what caused the deaths, and it was decided not to return the 12 surviving penguins to the enclosure where they became ill. A new enclosure for the little penguin colony was opened at Sea World in 2008. In New Zealand, little penguin exhibits exist at the Auckland Zoo, the Wellington Zoo and the National Aquarium of New Zealand. Since 2017, the National Aquarium of New Zealand has featured a monthly "Penguin of the Month" board, declaring two of its resident animals the "Naughty" and "Nice" penguin for that month. Photos of the board have gone viral and gained the aquarium a large worldwide social media following. A colony of little blue penguins exists at the New England Aquarium in Boston, Massachusetts. The penguins are one of three species on exhibit and are part of the Association of Zoos and Aquariums' Species Survival Plan for little blue penguins. Little penguins can also be seen at the Louisville Zoo and the Bronx Zoo. Mascots and logos Linus Torvalds, the original creator of Linux (a popular operating system kernel), was once pecked by a little penguin while on holiday in Australia. Reportedly, this encounter encouraged Torvalds to select Tux as the official Linux mascot. A Linux kernel programming challenge called the Eudyptula Challenge has attracted thousands of participants; its creator(s) use the name "Little Penguin". Penny the Little Penguin was the mascot for the 2007 FINA World Swimming Championships held in Melbourne, Victoria.
See also References Further reading External links State of Penguins: Little (blue) penguin – detailed and current species account of Eudyptula minor in New Zealand Little penguins at the International Penguin Conservation Little penguin at PenguinWorld West Coast Penguin Trust (New Zealand) Phillip Island Nature Park website Gould's The Birds of Australia plate little penguin Birds of Western Australia Birds of South Australia Birds of Victoria (Australia) Birds of Tasmania Birds of New Zealand Subterranean nesting birds little penguin
2020154
https://en.wikipedia.org/wiki/Lyceum%20of%20the%20Philippines%20University
Lyceum of the Philippines University
Lyceum of the Philippines University (LPU) is a private university located in Intramuros in the City of Manila, Philippines. It was founded in 1952 by Dr. José P. Laurel, who was the third president of the Republic of the Philippines. Two of LPU's most prominent features are its entrance gate through the "Hall of Heroes", commonly known as "Mabini Hall", which exhibits busts of revered Philippine historical figures sculpted by the National Artist Guillermo Tolentino, and the famous "Lyceum Tower", which serves as the school's landmark and stands witness to the university's history and continuing progress. Many disciplines are taught in the university, with international relations (diplomacy, international trade), business, communication and international hospitality (hotel and restaurant management, tourism) consistently being the university's flagship courses. LPU has affiliate/branch campuses in Makati, Batangas, Laguna, Cavite and Davao. History Lyceum of the Philippines University was founded in 1952 by Dr. José P. Laurel, who became the third president of the Philippines, making LPU the only school founded by a Philippine president. He named the institution after the lykeion, the grove in ancient Athens where Aristotle and Demetrio taught their pupils. Its educational vision is founded on principles that its founder, José P. Laurel, set down. It opened its gates to its first students on July 7, 1952. LPU Manila was built on the site where the old San Juan de Dios Hospital was located. The university offers undergraduate and graduate programs in various fields including law, the liberal arts, diplomacy, international trade and journalism, as well as nursing, engineering, business and accountancy, mass communications, tourism, and hotel and restaurant management. It was granted Autonomous Status by the Commission on Higher Education (CHED) and is a Category "A" teaching university in the Philippines. Category "A" assessment is the highest level in the Institutional Quality Assurance through Monitoring and Evaluation framework developed by CHED as another means to assess and monitor the quality of an institution. It is rated one of the Philippines' top universities by CHED, is the only university in the Philippines to have passed accreditation by The Tourism and Hospitality Management Education Center of Excellence (THE-ICE), and has been hailed as one of the best Asian universities in the QS 2022 ranking. It is a member of the Intramuros Consortium, which includes the technical school Mapúa University, the Catholic school Colegio de San Juan de Letran, and the city-owned Pamantasan ng Lungsod ng Maynila (University of the City of Manila). Four programs (Business Administration, Hotel and Restaurant Management, Liberal Arts, and Sciences) have Level 3 Reaccredited Status from the Philippine Association of Colleges and Universities Commission on Accreditation (PACUCOA), while its Computer Engineering, Information Technology, Tourism, Computer Science, Nursing, Master of Public Administration and Master of Business Administration programs were granted Level 2 Reaccredited Status by PACUCOA. In 2012, Lyceum marked its 60th foundation anniversary, and the Philippine Postal Corporation, together with the LPU administration, released a commemorative stamp. Recently, President Benigno S.
Aquino III formally recognized and awarded the Lyceum of the Philippines University – Manila with the Recognition for Commitment to Quality Management at the 16th Philippine Quality Award conferment ceremonies held in Malacañang Palace. Colleges and Graduate School College of Arts and Sciences Liberal Arts and Science programs are granted Level 3 Reaccredited Status by PACUCOA. History The School of Arts and Sciences was one of the three original schools of the Lyceum of the Philippines University. It had an enrollment of 350 students when it first opened in 1952, with Prof. José A. Adeva Sr. as dean. On June 15, 1953, Recognition Nos. 281 and 282, s. 1953, for Bachelor of Arts and Associate in Arts respectively, were granted by the Department of Education. Adeva was designated on May 17, 1962, as dean of the School of Humanities and Sciences. This was subsequently followed on May 21, 1962, by the integration of the different schools: the School of Arts and Sciences, Journalism, Foreign Service, Education, and Economics and Business Administration. Presently, CAS is composed of the following departments: Department of Legal Studies, Department of Mass Communication and Journalism, and Department of Psychology. Also in the CAS are the following General Education (GE) departments: Department of English and Literature, Department of Filipino, Department of Humanities, Department of Mathematics, Department of Natural Sciences, Department of Physical Education, and Department of Social Science. In terms of accreditation, the following programs are Level III 1st Re-accredited by the Philippine Association of Colleges and Universities Commission on Accreditation (PACUCOA): AB Mass Communication, AB Journalism, and AB Legal Studies, while BS Psychology has been granted Level III Re-accredited Status. The CAS has for its main thrust the development of its faculty, staff and students. This is achieved through faculty development seminars, classroom visitations, regular faculty meetings and periodic conferences with the department chairs. The college also helps the Communication and Public Affairs Department (CPAD) in its promotion/marketing activities through the annual Brain Quest, JPL Cup and Media Forum, regularly attended by public and private high schools in Metro Manila. In 2014, the College of Arts and Sciences conducted the 1st UmalohokJUAN Awards, recognizing and awarding television and radio programs and personalities. College of Business Administration The Business Administration program is granted Level 3 1st Reaccredited Status by PACUCOA, while the Accountancy and Customs Administration programs are granted Level 1 Formal Accredited Status by PACUCOA. History In 1952, when Dr. Laurel founded the university, one of his dreams was to open the door to quality education for the masses. The answer was to open, in the same year, the School of Commerce, headed by Senator Gil J. Puyat as its first dean, with Hilarion M. Henares as vice dean. In 1955, the school graduated 53 students, who joined the public and private sectors. The School of Commerce was later expanded and became the School of Economics and Business Administration. In 1976, it became the College of Business Administration. It has produced seven (7) topnotchers in the CPA Board Examination since 1992 and fourteen (14) in the Customs Brokers Licensure Examination since 1999. College of Technology Computer Science and Information Technology programs are granted Level 3 Reaccredited Status by PACUCOA.
History In June 2001, the university decided to establish the College of Computer Studies (CCS) to cater to a rapidly increasing demand for IT professionals. During the previous five years in which it was offered, the BS Computer Science program had been under the administration of the College of Engineering (COE). When Lyceum decided to offer other information technology courses, the new college was established. By adding at least three new courses related to the information technology industry, Lyceum separated the Computer Science students from the COE to set up the College of Computer Studies (CCS). In 2015, the College of Computer Studies (CCS) and the College of Engineering (COE) were merged and named the College of Technology (COT). College of International Tourism and Hospitality Management History In order to accommodate the increase in student population and to pool common resources and faculty, the College of International Hospitality Management (CIHM) was founded in November 1998. It was the first in the Philippines to use the appellation International Hospitality Management. The CIHM offered the BS HRM program, initially offered by the College of Business Administration (CBA), and the Bachelor of Science in Tourism (BST), originally under the College of Arts and Sciences (CAS). The establishment of the CIHM was introduced in 1999 to the Council of Hotel and Restaurant Educators of the Philippines (COHREP), a professional organization of educators in the HRM program, and to the Hotel and Restaurant Association of the Philippines (HRAP), a professional organization of hotel and restaurant industry members. In 2002, the CIHM also pursued membership in the Tourism Educators in Schools, Colleges and Universities (TESCU). Participation lapsed after the first few years, but in August 2009 the college joined the competitions sponsored by the organization under its new name, the Union of Filipino Tourism Educators (UFTE), and has remained active since then. CITHM laboratory classes are held in Le Cafe, an on-campus restaurant operated by students and frequented by the academic community and its guests, and in a fully equipped mini hotel with a reception area, hotel suite and housekeeping area. Aside from these, there are two mock hotel rooms that classes may use in teaching basic competencies in housekeeping. Supporting the development of skills in food and baking production, ample hands-on experience is provided in the food laboratories, one with a basic kitchen design of ten stations and the other with an institutional kitchen design, close to restaurant kitchen designs, with seven stations. There are also three demonstration laboratories, each with two stations, used by the food laboratory classes from time to time. A beverage laboratory complete with a functional bar is also provided for hands-on learning of various beverage preparation and service techniques. A mock bar room has also been prepared for the increasing number of classes in Bar Management and Food & Beverage Service. Students are also trained in industry computer applications using various programs such as Amadeus, Opera and an in-house front office software program. In lecture classes, the multimedia approach is used. The International Practicum Training Program is conducted through arrangements with local agencies and their partner agencies, which coordinate with various establishments in Singapore and the United States of America.
In school year 2009–2010, the college's name was changed from CIHM to CITHM, the College of International Tourism and Hospitality Management, connected to M.G.M. Hotels Holdings, U.S.A. CITHM is a winning school in the Philippine Culinary Cup and the apparent host of the said yearly activity. The CITHM received the highest accreditation from COHREP for its BSHRM and BS Tourism programs respectively. The Bay Leaf Hotel is a major project of the Lyceum CITHM; there is also a new Bay Leaf Hotel at LPU Cavite. College of International Relations History The College of International Relations started out as the School of Foreign Service, administratively under the College of Arts and Sciences (CAS). It initially offered the Bachelor of Science in Foreign Service (BSFS) degree in School Year (SY) 1954–55, as authorized by the Department of Education under Recognition No. 35, Series of 1954. A total of 1,000 enrollees in SY 1959–60 prompted its separation from the CAS. José P. Laurel became the acting dean of the newly separated School of Foreign Service. It was later renamed the College of Foreign Service and, in 2005, further renamed the College of International Relations (CIR). Since its establishment, the college has had on its roster professors and teaching staff that included the late President Diosdado Macapagal, who became a special lecturer teaching Philippine Foreign Relations in 1969. Ten years later, the former ambassador to the Holy See, Alberto Katigbak, in his capacity as dean of the college, initiated revisions to the BSFS curriculum. The syllabi of practically all the CIR subjects have undergone revisions to bring them up to date, including Diplomatic Practice, Introduction to International Relations, Philippine Foreign Relations, Protocol and Etiquette, and International Organizations. To strengthen the CIR faculty, new chairpersons were appointed. Ambassador Josue L. Villa, former Philippine ambassador to Thailand and to the People's Republic of China, joined in April 2006 as chairperson of the Department of Politics, Government and Diplomacy, and Ambassador Alfredo Almendrala, former Philippine ambassador to Myanmar and Consul General in San Francisco, as chairperson of the Department of International Trade, in addition to their appointments as special lecturers. In addition to long-time professors Ambassador Dolores Sale, Ambassador Fortunato Oblena, and General Cesar Fortuno, new professors were added to the faculty: Ambassador Apolinario Lozada, Jr.; Ambassador Phoebe Gómez; Ambassador Nestor Padalhin; Ruby Sakkam, a summa cum laude graduate of St. Scholastica College; and Gil Santos, veteran journalist and former bureau chief of the Associated Press. Ambassador Aladin Villacorte, former Consul General of the Philippine Consulate General in Chicago and of the Philippine Consulate General in Xiamen, P.R. of China, Ambassador Emelinda Lee-Pineda and Ambassador Estrella Berenguel have also joined the faculty recently. College of Nursing Granted Level 2 1st Reaccredited Status by PACUCOA. On July 16, 2002, Perla Rizalina M. Tayco, Ph.D., an OD consultant, was commissioned by the president of the Lyceum of the Philippines, Roberto P. Laurel, to assist the institution in the Strategic Visioning Process towards the establishment of the College of Nursing. The application for a Government Permit to operate the Bachelor of Science in Nursing program was subsequently granted.
College of Law The College of Law has a separate campus in Makati, known as the Lyceum of the Philippines University – Makati or the Lyceum of the Philippines University College of Law. Claro M. Recto Academy of Advanced Studies (CMR-AAS) The Public Administration and Business Administration master's programs are granted Level 2 1st Reaccredited Status by PACUCOA. Campuses and affiliated institutions The Lyceum of the Philippines University has six major campuses, namely: LPU Manila in Intramuros, Manila, the main campus of the university LPU Makati at L.P. Leviste Street, Makati, which houses the LPU College of Law LPU Batangas in Batangas City, Batangas, formerly an autonomous institution named Lyceum of Batangas LPU Laguna in Calamba City, Laguna, formerly an autonomous institution named Lyceum Institute of Technology LPU Cavite in General Trias, dubbed "the first and only resort campus in the Philippines" LPU Davao at C.P. Garcia Highway (Diversion Road), Sun City, Buhangin, Davao City, the first LPU campus outside Luzon Future campuses LPU Iloilo at Mandurriao, Iloilo City High schools LPU International High School-Cavite Constructed as LPU Manila's High School Department. LPU International High School-Batangas LPU Batangas' High School Department, located in Batangas City and scheduled to open in June 2013. It holds the mission of "Shaping Young Minds to Take the Lead!". LPU International High School-Laguna Opened in June 2013 in response to the K-12 DepEd program. Other LPU-related autonomous institutions of higher learning LPU - St. Cabrini College of Allied Medicine, a joint effort between the Laguna campus of Lyceum of the Philippines University and St. Frances Cabrini Medical Center in Santo Tomas, Batangas. Lyceum International Maritime Academy, abbreviated as LIMA, located at LPU Batangas, which offers maritime education. LPU Culinary Institute, located in Intramuros, Manila; the metro's largest culinary school. Note: There are schools that employ the name "Lyceum" but are neither affiliated with nor recognized by the Lyceum of the Philippines. Partner Schools LPU Culinary Institute Dusit Thani College Mahasarakham University University of Hertfordshire Cheng Shiu University Gyeongju University Bucheon University Christian College of Nursing Joji Ilagan International Schools Notable alumni Rodrigo Duterte, 16th president of the Philippines, former mayor of Davao City Feliciano Belmonte, Jr., representative of Quezon City's 4th District, former Speaker Jinggoy Estrada, Philippine Senator Panfilo Lacson, former director-general of the Philippine National Police; current Philippine Senator Robert Barbers, former Philippine Senator and Secretary of Interior and Local Government Ernesto Herrera, former Philippine Senator Epimaco Velasco, former NBI Director Antonio Leviste, former Batangas Governor Grace Padaca, COMELEC commissioner and former governor of Isabela Jaime Fresnedi, Mayor of Muntinlupa Cristy S. Fermin, showbiz columnist, TV host, radio anchor Jun Cruz Reyes, Palanca and National Book Award-winning novelist Satur Ocampo, activist, journalist, writer, former Bayan Muna representative Joel Lamangan, film and television director Alfredo Gabot, PhilPost chairman Reginaldo Tilanduca, City Mayor of Malaybalay, Congressman of Bukidnon; chairman, House Committee on Constitutional Amendments Rene O.
Villanueva, playwright and author; multi-Palanca awardee Gary David, PBA player, Gilas Pilipinas player Chico Lanete, PBA player Ato Agustin, former PBA player; former head coach of the San Sebastian Stags in the NCAA; former head coach of Barangay Ginebra San Miguel in the PBA Leo Austria, former PBA player; 6 times champion head coach of the San Miguel Beermen Joey Mente, former PBA player, 2001 PBA Slamdunk Champion Henry Omaga-Diaz, news anchor and reporter from ABS-CBN News Gus Abelgas, news anchor and reporter from ABS-CBN News Susan Enriquez, news anchor, host and reporter from GMA News Joel Reyes Zobel, Filipino radio personality of GMA Network's flagship AM station DZBB Gery Baja, radio anchor DZMM Cesar Montano, Filipino actor Jolo Revilla, Filipino actor, Vice Governor of Cavite Louise delos Reyes, actress from ABS-CBN Dino Imperial, Filipino actor from ABS-CBN Enzo Pineda, Filipino actor G.M.A. 7 Dikki John Martinez, former National Figure Skating Champion, Disney on Ice skater Johan Santos, actor and former Pinoy Big Brother housemate Paul Jake Castillo, actor and former Pinoy Big Brother housemate DJ Cha-Cha, radio DJ, MOR 101.9 For Life! Kristine Dera, radio DJ, 90.7 Love Radio DJ Jhai-Ho, radio DJ, MOR 101.9 For Life! Magic 89.9's DJ Debbie Then aka JJ Debbie Mario Bantasan, CRO Manager AMVI References External links Campuses: Lyceum of the Philippines University - Manila (Main) Lyceum of the Philippines University - Makati (College of Law) Lyceum of the Philippines University - Batangas Lyceum of the Philippines University - Laguna Lyceum of the Philippines University - Cavite Lyceum of the Philippines - Davao Laurel family Liberal arts colleges in the Philippines Educational institutions established in 1952 1952 establishments in the Philippines National Collegiate Athletic Association (Philippines) Universities and colleges in Manila Education in Intramuros
11660042
https://en.wikipedia.org/wiki/Secure%20Computing%20Corporation
Secure Computing Corporation
Secure Computing Corporation (SCC) was a public company that developed and sold computer security appliances and hosted services to protect users and data. McAfee acquired the company in 2008. The company also developed filtering systems used by governments such as those of Iran and Saudi Arabia to block their citizens from accessing information on the Internet. Company history In 1984, a research group called the Secure Computing Technology Center (SCTC) was formed at Honeywell in Minneapolis, Minnesota. The centerpiece of SCTC was its work on security-evaluated operating systems for the NSA. This work included the Secure Ada Target (SAT) and the Logical Coprocessing Kernel (LOCK), both designed to meet the stringent A1 level of the Trusted Computer System Evaluation Criteria (TCSEC). Over the next several years, Secure Computing morphed from a small defense contractor into a commercial product vendor, largely because the investment community was much less interested in purchasing security goods from defense contractors than from commercial product vendors, especially vendors in the growing Internet space. Secure Computing became a publicly traded company in 1995. Following the pattern of other Internet-related startups, the stock price tripled on its first day: it opened at $16 a share and closed at $48. The price peaked around $64 in the next several weeks and then collapsed over the following year or so. It ranged between roughly $3 and $20 afterward until the company was purchased by McAfee. The company headquarters were moved to San Jose, California, in 1998, though the bulk of the workforce remained in the Twin Cities. The Roseville employees completed a move to St. Paul, Minnesota, in February 2006. Several other sites now exist, largely the result of mergers. Mergers and acquisitions Secure Computing consisted of several merged units, one of the oldest being Enigma Logic, Inc., which was started around 1982. Bob Bosen, the founder, claims to have created the first security token to provide challenge-response authentication (an illustrative sketch of such an exchange appears below). Bosen had published a computer game for the TRS-80 home computer in 1979, called 80 Space Raiders, that used a simple challenge-response mechanism for copy protection. People who used the mechanism encouraged him to repackage it for remote authentication. Bosen started Enigma Logic to do so, and filed for patents in 1982–83; a patent was issued in the United Kingdom in 1986. Ultimately, the "challenge" portion of the challenge-response was eliminated to produce a one-time password token similar to the SecurID product. Enigma Logic merged with Secure Computing Corporation in 1996. Secure Computing acquired the SmartFilter product line by purchasing Webster Network Strategies, the producer of the WebTrack product, in 1996. The acquisition included the domain name webster.com, which was eventually sold to the publishers of Webster's Dictionary. Shortly after acquiring the Webster/SmartFilter product, Secure Computing merged with Border Network Technologies, a Canadian company selling the Borderware firewall. Border Network Technologies boasted an excellent product and a highly developed set of sales channels; some said that the sales channels were a major inducement for the merger. Although the plan was to completely merge the Borderware product with Sidewinder, and to offer a single product to existing users of both products, this never quite succeeded.
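The Enigma Logic story above turns on two related ideas: challenge-response authentication with a shared-secret token, and its later simplification into a one-time password. The Python sketch below illustrates only those concepts; the shared secret, the HMAC construction and the truncation to a short code are assumptions made for clarity, not a description of Enigma Logic's or SecurID's actual algorithms.

```python
import hashlib
import hmac
import secrets
import time

# Provisioned into both the token and the authentication server (assumption for this sketch).
SHARED_SECRET = b"example-shared-secret"

def token_response(secret: bytes, challenge: bytes) -> str:
    """Token side: derive a short response code from the server's challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()[:8]

def server_verify(secret: bytes, challenge: bytes, response: str) -> bool:
    """Server side: recompute the expected code and compare in constant time."""
    return hmac.compare_digest(token_response(secret, challenge), response)

# Challenge-response flow: the server issues a random challenge, the user keys it
# into the token and types back the code the token displays.
challenge = secrets.token_hex(4).encode()
code = token_response(SHARED_SECRET, challenge)
assert server_verify(SHARED_SECRET, challenge, code)

def time_based_otp(secret: bytes, step: int = 30) -> str:
    """Replace the explicit challenge with a shared clock to get a one-time password."""
    counter = str(int(time.time()) // step).encode()
    return hmac.new(secret, counter, hashlib.sha256).hexdigest()[:6]

print(time_based_otp(SHARED_SECRET))
```

Dropping the explicit challenge and keying the code to a shared clock, as in the last function, is conceptually how a challenge-response token becomes a one-time password token of the kind described above.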
In 1998, the Borderware business unit was sold to a new company, Borderware Technologies Inc., formed by one of the original Borderware founders. By this time, the mergers had yielded a highly distributed company with offices in Minnesota, Florida, California, and two or three in Ontario. This proved unwieldy, and the company scaled back to offices in Minnesota and California. In 2002, the company took over the Gauntlet Firewall product from Network Associates. In 2003, Secure Computing acquired N2H2, the makers of the Bess web filtering package. There has been some consolidation of Bess and SmartFilter, and Bess is now referred to as "Smartfilter, Bess edition" in company literature. An acquisition of CyberGuard was announced in August 2005 and approved in January 2006. (A year earlier, CyberGuard had attempted to acquire Secure Computing, but the proposal had been rejected). This was the largest merger by Secure Computing at the time and resulted in the addition of several product lines, including three classes of firewalls, content and protocol filtering systems, and an enterprise-wide management system for controlling all of those products. Several offices were also added, including CyberGuard's main facility in Deerfield Beach, Florida, as well as the Webwasher development office in Paderborn, Germany, and a SnapGear development office in Brisbane, Australia. In 2006, the company merged with Atlanta-based CipherTrust, a developer of email security solutions. The merger was announced in July 2006 and completed in August 2006. On July 30, 2008, Secure Computing announced its intention to sell the SafeWord authentication product line to Aladdin Knowledge Systems, leaving the company with a business focused on web/mail security and firewalls. The sale was concluded later that year. On September 22, 2008, McAfee announced its intention to acquire Secure Computing. The acquisition was completed not long afterwards, and the combined company formed the world's largest dedicated security company at the time. Products TrustedSource reputation system TrustedSource, a reputation system that Secure Computing obtained as part of the CipherTrust acquisition, was a key technology for the company, enabling all product lines with global intelligence capability based on behavioral analysis of traffic patterns from all of company's email, web and firewall devices and hosted services, as well as those of numerous OEM partners. TrustedSource derived real-time reputation scores of IPs, URLs, domains, and mail/web content based on a variety of data mining/analysis techniques, such as Support Vector Machine, Random forest, and Term-Frequency Inverse-Document Frequency (TFIDF) classifiers. Web security The company's flagship web security product line was the Secure Web appliance (formerly known as Webwasher). It provided Anti-Malware protection, TrustedSource reputation-enabled URL filtering controls, content caching, and SSL scanning capabilities. In June 2008, Secure Computing launched Secure Web Protection Service, an in-the-cloud hosted web security service that provided a similar set of features to the Secure Web appliance, without requiring any on-premises equipment or software. Mail security The company's flagship email security product line was the Secure Mail appliance (formerly known as IronMail). It provided TrustedSource reputation-enabled anti-spam, data-leakage protection (DLP), encryption and anti-malware capabilities. 
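The TrustedSource description above names the kinds of signals and classifiers involved but not the actual model, which was proprietary. As a purely illustrative sketch, the Python below blends a few made-up per-sender features into a bounded reputation score and maps it to a mail-gateway decision; the feature names, weights and thresholds are assumptions, not Secure Computing's.

```python
from dataclasses import dataclass

@dataclass
class SenderStats:
    """Toy per-IP traffic features of the kind a reputation system might track."""
    messages_seen: int
    spam_reports: int
    blocklisted_url_hits: int
    days_since_first_seen: int

def reputation_score(s: SenderStats) -> float:
    """Blend behavioural features into a score in [-1.0, 1.0].

    Negative means 'likely malicious', positive means 'likely trustworthy'.
    The weights here are illustrative only.
    """
    spam_rate = s.spam_reports / max(s.messages_seen, 1)
    url_rate = s.blocklisted_url_hits / max(s.messages_seen, 1)
    longevity = min(s.days_since_first_seen / 365.0, 1.0)  # older senders earn trust slowly
    score = 0.6 * longevity - 0.8 * spam_rate - 0.5 * url_rate
    return max(-1.0, min(1.0, score))

def policy_action(score: float) -> str:
    """Map a reputation score onto a simple gateway decision."""
    if score < -0.3:
        return "block"
    if score < 0.1:
        return "quarantine"
    return "deliver"

suspect = SenderStats(messages_seen=10_000, spam_reports=4_200, blocklisted_url_hits=900, days_since_first_seen=3)
veteran = SenderStats(messages_seen=50_000, spam_reports=12, blocklisted_url_hits=0, days_since_first_seen=2_000)
print(policy_action(reputation_score(suspect)))   # block
print(policy_action(reputation_score(veteran)))   # deliver
```

The design point this illustrates is that reputation is computed from observed behaviour over time rather than from static lists, which is what allowed a single score to back web, mail and firewall policy decisions across product lines.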
Secure firewalls The company's flagship firewall product, formerly known as Sidewinder, was renamed McAfee Firewall Enterprise; McAfee sold Sidewinder to Forcepoint in January 2016. Over the years, Secure Computing (and its antecedent organizations) has offered the following major lines of firewall products: Firewall Enterprise (Sidewinder) – historically based on SecureOS, the company's derivative of BSDi (previously BSD/OS), but later based on FreeBSD. Secure Firewall Reporter Secure Firewall CommandCenter CyberGuard Secure SnapGear – embedded system based on μClinux Classic – built on UnixWare TSP (Total Stream Protection) – built on Linux Borderware – sold off, as noted previously SecureZone – discontinued Firewall for NT – discontinued Gauntlet – built on Solaris, nearly phased out The Sidewinder firewall incorporated technical features of the high-assurance LOCK system, including Type enforcement, a technology later applied in SELinux. However, interaction between Secure Computing and the open source community was spotty due to the company's ownership of patents related to Type enforcement. The Sidewinder never really tried to achieve an A1 TCSEC rating, but it did earn an EAL-4+ Common Criteria rating. Along with Sidewinder, Gauntlet had been one of the earliest application layer firewalls; both had developed a large customer base in the United States Department of Defense. Gauntlet was originally developed by Trusted Information Systems (TIS) as a commercial version of the TIS Firewall Toolkit, an early open source firewall package developed under a DARPA contract. Use of company products for governmental censorship The OpenNet Initiative studied filtering software used by governments to block access by their citizens and found Secure Computing's SmartFilter program heavily used by both the Iranian and Saudi governments. According to Secure Computing, any use of its software in Iran is without its consent—U.S. sanctions prohibit American companies from any dealings with Iran—and in 2005 the company said it is actively working to stop its illegal use. In response to the company, Jonathan Zittrain, co-director of Harvard Law School's Berkman Center for Internet and Society, stated, "[T]he fact remains that the software has been in use for an extended period of time there. And we've seen Secure Computing software turn up in more than just Iran. We've seen it in Saudi Arabia as well." In 2001 The New York Times reported that Secure Computing was one of ten companies competing for the Saudi government's contract for software to block its citizens' access to websites it deemed offensive. The company already had a deal with the Saudis that was due to expire in 2003. In its defense, Secure Computing has always stated that it cannot control how customers use a product once it has been sold. According to the OpenNet Initiative's 2007 report, the Saudi government's censorship "most extensively covers religious and social content, though sites relating to opposition groups and regional political and human rights issues are also targeted." The governments of the United Arab Emirates, Oman, Sudan, and Tunisia also actively use SmartFilter. The Tunisian government goes so far as to redirect blocked pages to a fake Error 404 page, to hide the fact that blocking software is being used. The Tunisian Government is generally recognized as having a poor record when it comes to the right of free expression. 
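The censorship reports above describe SmartFilter-style category filtering: each requested host is looked up in a vendor-maintained category database, and policy decides what to do with the request, sometimes disguising a block as an ordinary error page. The sketch below models only that control flow; the category database, policy and messages are invented for illustration and do not reflect SmartFilter's actual data or interfaces.

```python
# Minimal model of category-based URL filtering of the kind described above.
from urllib.parse import urlparse

# A real product ships a large, vendor-maintained database; these entries are invented.
CATEGORY_DB = {
    "news.example": "news",
    "casino.example": "gambling",
    "dissent.example": "political-opposition",
}

BLOCKED_CATEGORIES = {"gambling", "political-opposition"}

def filter_request(url: str):
    """Return an (HTTP status, body) pair for a requested URL."""
    host = urlparse(url).hostname or ""
    category = CATEGORY_DB.get(host, "uncategorized")
    if category in BLOCKED_CATEGORIES:
        # Some deployments return an overt block page; others, as reported of
        # Tunisia above, disguise the block as an ordinary "404 Not Found".
        return 404, "Not Found"
    return 200, f"forwarding request for {host} ({category})"

print(filter_request("http://news.example/story"))
print(filter_request("http://dissent.example/forum"))
```

A real deployment differs mainly in scale (millions of categorized sites, updated continuously) and in where the check runs, inline on a proxy or firewall rather than in application code.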
See also Forcepoint References External links Secure Computing Corporation web site Cost Profile of a Highly Assured, Secure Operating System, an overview of the LOCK system. Computer security software companies Companies based in San Jose, California Defunct companies of the United States
390115
https://en.wikipedia.org/wiki/Larry%20Austin
Larry Austin
Larry Don Austin (September 12, 1930 – December 30, 2018) was an American composer noted for his electronic and computer music works. He was a co-founder and editor of the avant-garde music periodical Source: Music of the Avant Garde. Austin gained additional international recognition when he realized a completion of Charles Ives's Universe Symphony. Austin served as the president of the International Computer Music Association (ICMA) from 1990 to 1994 and served on the board of directors of the ICMA from 1984 to 1988 and from 1990 to 1998. Early life Austin was born in Duncan, Oklahoma. He received a bachelor's (Music Education, 1951) and master's degree (Music, 1952) from University of North Texas College of Music. In 1955 he studied at Mills College, and from 1955 to 1958 he engaged in graduate study at the University of California, Berkeley, leaving to accept a faculty position at the University of California, Davis. Austin studied with Canadian composer Violet Archer at the University of North Texas, French composer Darius Milhaud at Mills College, and with American composer Andrew Imbrie at the University of California, Berkeley. Teaching career Austin taught at the University of California, Davis from 1958 till 1972 rising from assistant professor to full professor. While at the University of California, Davis, he founded the improvisational New Music Ensemble. In 1972 he accepted a position at the University of South Florida, where he taught until 1978. In that year he returned to Texas, teaching at his alma mater, the University of North Texas, from 1978 until 1996 when he was named Professor Emeritus. His notable students include William Basinski, Dary John Mizelle and Rodney Waschka II. Compositions Austin received early recognition for his instrumental and orchestral works and of those pieces, Improvisations for Orchestra and Jazz Soloists, was performed and recorded by the New York Philharmonic under Leonard Bernstein. Other orchestral works of special note include Charles Ives's Universe Symphony, "as realized and completed by Larry Austin" (1974–93) for large orchestra, and Sinfonia Concertante: A Mozartean Episode (1986) for chamber orchestra and tape. Chamber works with particularly significant computer music/electro-acoustic music aspects include Accidents for electronically prepared piano (1967), written for David Tudor, Canadian Coastlines: Canonic Fractals for Musicians and Computer Band for eight musicians and tape from 1981, and BluesAx for saxophonist and tape (1995), which won the Magisterium Prize, at Bourges in 1996. BluesAx has been recorded by Steve Duke. Later work included John Explains... (2007) for octophonic sound, based on a recording of an interview with John Cage. John Explains... was premiered at the 2008 North Carolina Computer Music Festival. At the CEMI Circles festival, Austin's 2013 piece, Suoni della Bellagio—Sounds and sights of Bellagio, July–August, 1998 for video and two-channel tape was premiered. The noted critic Tom Johnson has written of Austin's music, "His style is neither uptown nor downtown, nor is it minimal, eclectic, hypnotic, or European. But it works, it is strongly personal, and it has something to say in all these directions.... The real source of Austin's music, however, is clearly Charles Ives, who also liked musical symbols, enjoyed collaging them together as densely as he could, and never had much of a knack for prettiness." 
Austin said that "Exploring new concepts, new materials and their interaction is essential to my work as a composer." Partial discography Leonard Bernstein Conducts Music of Our Time. New York Philharmonic, Columbia Masterworks, MS6733, 1965. Improvisations for Orchestra and Jazz Soloists Robert Floyd Plays New Piano Music by Hans Werner Henze and Larry Austin, Advance Records, FGR10S, 1970. Piano Set in Open Style Piano Variations New Music for Woodwinds, Advance Records, FGR9S, 1974 (performed by Phil Rehfeldt, clarinet and Thomas Warburton, piano). Current Larry Austin Hybrid Musics: Four Compositions, Canton, Texas: IRIDA Records 0022, 1980. Maroon Bells Catalogo Voce Quadrants: Event/Complex No. 1 Second Fantasy on Ives' Universe Symphony Volume 1, CDCM Computer Music Series. Baton Rouge: Centaur Records, Inc., (CRC 2029) 1988. Sinfonia Concertante (chamber orchestra conducted by Thomas Clark) Sonata Concertante (performed by pianist Adam Wodicki) The Virtuoso in the Computer Age—I, Volume 10, CDCM Computer Music Series. Centaur Records, Inc., (CRC 2110) 1991. Montage:Themes and Variations for Violin and Computer Music on Tape (1985) The Virtuoso in the Computer Age—III, Vol. 11, CDCM Computer Music Series, Baton Rouge: Centaur Records, 1993 La Barbara: The Name/The Sounds/The Music A Chance Operation: The John Cage Tribute. New York: Koch International Classics (KIC-CD-7238) 1993. art is self-alteration is Cage is... (1983/93), performed by Robert Black Charles Ives's Universe Symphony, as realized and completed by Larry Austin (1974–93). Baton Rouge: Centaur Records, CRC 2205, 1994. Charles Ives's Universe Symphony, as realized and completed by Larry Austin (1974–93) Composers in the Computer Age II. Baton Rouge: Centaur Records, CRC 2193, 1994. SoundPoemSet (1990–91), computer music on tape. Tárogató, New York: Romeo Records (7212), 2001. Esther Lamneck, performer. Tárogató UNconventional Trumpet, Camas, Washington: Crystal Records, CD763, 2004. Charley's Cornet References Further reading Zimmerman, Walter, Desert Plants – Conversations with 23 American Musicians, Berlin: Beginner Press in cooperation with Mode Records, 2020 (originally published in 1976 by A.R.C., Vancouver). The 2020 edition includes a CD featuring the original interview recordings with Larry Austin, Robert Ashley, Jim Burton, John Cage, Philip Corner, Morton Feldman, Philip Glass, Joan La Barbara, Garrett List, Alvin Lucier, John McGuire, Charles Morrow, J. B. Floyd (on Conlon Nancarrow), Pauline Oliveros, Charlemagne Palestine, Ben Johnston (on Harry Partch), Steve Reich, David Rosenboom, Frederic Rzewski, Richard Teitelbaum, James Tenney, Christian Wolff, and La Monte Young. External links EMF Media: Larry Austin David Tudor and Larry Austin: A Conversation April 3, 1989, Denton, Texas Art of the States: Larry Austin 1930 births 2018 deaths 20th-century classical composers 20th-century American composers 21st-century classical composers 21st-century American composers American male classical composers American classical composers Electroacoustic music composers Experimental composers Jazz-influenced classical composers People from Duncan, Oklahoma Pupils of Darius Milhaud Texas classical music University of North Texas College of Music faculty University of North Texas College of Music alumni Centaur Records artists
492451
https://en.wikipedia.org/wiki/COMPASS
COMPASS
COMPASS, COMPrehensive ASSembler, is any of a family of macro assembly languages on Control Data Corporation's 3000 series, and on the 60-bit CDC 6000 series, 7600 and Cyber 70 and 170 series mainframe computers. While the architectures are very different, the macro and conditional assembly facilities are similar. COMPASS for 60-bit machines There are two flavors of COMPASS on the 60-bit machines: COMPASS CP is the assembly language for the CP (Central Processor), the processor running user programs. See CDC 6600 CP architecture. COMPASS PP is the assembly language for the PP (Peripheral Processor), only running operating system code. See CDC 6600 PP architecture. COMPASS is a classical two-pass assembler with macro and conditional assembly features, and generates a full listing showing both the source assembly code and the generated machine code (in octal). CDC's operating systems were written almost entirely in COMPASS assembly language. Central processor (CP or CPU) hardware maintains 24 operational registers, named A0 to A7, X0 to X7 and B0 to B7. Registers X0 to X7 are 60 bits long and are used to hold data, while registers B0 to B7 are 18 bits long and their major purpose is to hold either addresses or be used as indexing registers, except that B0 is always zero. As a programming convention, B1 (or B7) often contains positive 1. A or address registers are also 18 bits long. Each A register pairs with the corresponding X register. Whenever an address is set into any of registers A1 to A5, the data at that memory location (address) is loaded into the corresponding X register. Likewise, setting an address into register A6 or A7 stores the data held in the corresponding X6 or X7 register to that memory location. However, A0 can be used to hold any address without affecting the contents of register X0. CP instructions are written in a particularly user-friendly form: "SA1 A0+B1" denotes set address register A1 to the sum of address register A0 and index register B1. The hardware then initiates a memory load from the computed address into register X1. Peripheral processor (PP or PPU) instructions are completely different from CPU instructions. Peripheral processor hardware is simpler; it has an 18-bit A (accumulator) register, a 12-bit Program Address register, a 12-bit Q register (not programmer-visible), and a 22-bit R register (used to accomplish address relocation during central memory read and write instructions on Cyber 180 systems). No special job validation was required to assemble peripheral processor programs, but to be executed, such programs had to be installed into the operating system via special system editing commands. Further reading "Assembly Language Programming for the Control Data 6000 Series" by Ralph Grishman, Algorithmics Press, 1972. References External links COMPASS for 24-bit systems CDC3100, 3200, 3300, and 3500 COMPASS for CDC3600 48-bit system COMPASS for CDC6000 and 7000 60-bit systems COMPASS version 3 for CDC CYBER systems Assembly languages Control Data mainframe software
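The A/X register pairing described above lends itself to a small illustration. The following Python sketch is purely illustrative: it is not COMPASS code, it ignores word sizes, instruction formats and timing, and the class and method names are invented for the example. It only mimics the rule that setting A1 through A5 triggers a memory load into the matching X register while setting A6 or A7 triggers a store, roughly what an instruction like "SA1 A0+B1" does.

```python
class ToyCDC6600:
    """Illustrative model of the CP register pairing; not real hardware behaviour."""
    def __init__(self, memory):
        self.memory = memory          # address -> value (real words are 60 bits)
        self.A = [0] * 8              # 18-bit address registers A0..A7
        self.X = [0] * 8              # 60-bit operand registers X0..X7
        self.B = [0] * 8              # 18-bit index registers, B0 is always zero

    def set_a(self, n, address):
        """Model of the SAn instructions: A1..A5 load into Xn, A6/A7 store from Xn."""
        self.A[n] = address
        if 1 <= n <= 5:
            self.X[n] = self.memory.get(address, 0)   # implicit memory load
        elif n in (6, 7):
            self.memory[address] = self.X[n]          # implicit memory store
        # setting A0 has no memory side effect

cpu = ToyCDC6600({0o101: 42})
cpu.A[0] = 0o100
cpu.B[1] = 1
cpu.set_a(1, cpu.A[0] + cpu.B[1])    # roughly what "SA1 A0+B1" does
assert cpu.X[1] == 42                # X1 now holds the word at address 101 (octal)
```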
10105982
https://en.wikipedia.org/wiki/Sum%20%28Unix%29
Sum (Unix)
sum is a legacy utility available on some Unix and Unix-like operating systems. This utility outputs the checksum of each argument file, as well as the number of blocks each occupies on disk. Overview The program is now generally only of historical interest. It is not part of POSIX. Two algorithms are typically available: a 16-bit BSD checksum and a 32-bit SYSV checksum. Both are weaker than the (already weak) CRC32 used by cksum. The default algorithm on FreeBSD and GNU implementations is the weaker BSD checksum. Switching between the two algorithms is done via command line options. Syntax The utility is invoked from the command line according to the following syntax: sum [OPTION]... [FILE]... with the possible option parameters being: -r, use the BSD checksum algorithm with 1K blocks (defeats -s); -s or --sysv, use the SYSV checksum algorithm with 512-byte blocks; --help, display the help screen and exit; --version, output version information and exit. When no file parameter is given, or when FILE is -, the standard input is used as input file. See also GNU Core Utilities UnxUtils port to native Win32 References External links sum — manual pages from GNU coreutils Linux package management-related software Unix package management-related software Linux security software Unix security-related software
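For reference, the two checksums mentioned above are simple enough to sketch in a few lines. The Python below follows the commonly documented BSD (rotate-and-add) and System V (byte sum with end-around carry folding) algorithms; it is an illustrative sketch rather than a drop-in replacement, and it omits the block count that the real utility prints alongside the checksum.

```python
def bsd_sum(data: bytes) -> int:
    """16-bit BSD checksum: rotate right one bit, then add each byte, modulo 2**16."""
    checksum = 0
    for byte in data:
        checksum = (checksum >> 1) + ((checksum & 1) << 15)  # 16-bit rotate right
        checksum = (checksum + byte) & 0xFFFF
    return checksum

def sysv_sum(data: bytes) -> int:
    """System V checksum: sum of all bytes, with the carries folded back in twice."""
    total = sum(data)
    folded = (total & 0xFFFF) + (total >> 16)
    return (folded & 0xFFFF) + (folded >> 16)

# Example: compare both checksums for the same input
payload = b"The quick brown fox jumps over the lazy dog\n"
print(bsd_sum(payload), sysv_sum(payload))
```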
2584125
https://en.wikipedia.org/wiki/Philippe%20Kahn
Philippe Kahn
Philippe Kahn (born March 16, 1952) is an engineer, entrepreneur and founder of four technology companies: Borland, Starfish Software, LightSurf Technologies, and Fullpower Technologies. Kahn is credited with creating the first camera phone, being a pioneer for wearable technology intellectual property, and is the author of dozens of technology patents covering Internet of Things (IoT), artificial intelligence (AI) modeling, wearable, eyewear, smartphone, mobile, imaging, wireless, synchronization and medical technologies. Early life and education Philippe Kahn is the son of Charles-Henri Kahn (1915-1999) and Claire Monis (1922-1967). Kahn was born and raised in Paris, France. He was born to Jewish immigrants of modest means. His mother was a French singer, actress and violinist, raised in Paris by parents who had fled the Russian pogroms. Arrested in 1942 for being Lieutenant in the French Resistance, she was 21 years old when she was sent to the Auschwitz extermination camp. She survived as a member of the Auschwitz Women's Orchestra conducted by Alma Rosé. After his parents separated in 1957, Philippe Kahn was raised solely by his mother. He was only 15 years old when his mother died after a car accident in Paris. Kahn was educated in mathematics at the ETH Zurich, Switzerland (Swiss Federal Polytechnic Institute), on a full scholarship and University of Nice Sophia Antipolis, France. He received a master's in mathematics. He also received a master's in musicology composition and classical flute performance at the Zurich Music Conservatory in Switzerland. As a student, Kahn developed software for the MICRAL, which is credited by the Computer History Museum as the first ever microprocessor-based personal computer. Career Technology companies Kahn has founded four software companies: Borland, founded in 1982 (acquired by Micro Focus in 2009), Starfish Software, founded in 1994 (acquired by Motorola in 1998, and subsequently Google in 2011), LightSurf Technologies, founded in 1998 (acquired by Verisign in 2005), and Fullpower Technologies, founded in 2005. Borland (1982–1995): compilers and tools Kahn founded Borland in 1982, and was its CEO until 1995. At the time it was a competitor of Microsoft's, and produced programming language compilers and software development tools. Its first product, Turbo Pascal, sold for $49.95 at a time when programming tools cost hundreds or thousands of dollars. Kahn was President, CEO, and Chairman of Borland and, without venture capital, took Borland from no revenues to a US$500 million run-rate. Kahn and the Borland board came to a disagreement on how to focus the company. In January 1995, Kahn was forced by the board to resign from his position as CEO, and he founded Starfish Software. Starfish Software (1995–1998): wireless synchronization Starfish Software was founded in 1995 by Philippe Kahn as a spin-off from the Simplify business unit from Borland and Kahn's severance from Borland. TrueSync was the first Over-The-Air (OTA) synchronization system. Starfish was successfully acquired by Motorola for US$325 million in 1998. LightSurf Technologies (1998–2005): multimedia messaging Kahn and his wife Sonia co-founded multimedia messaging company LightSurf Technologies in 1998. LightSurf commercialized Picture-Mail and the camera phone. In 2005, LightSurf was acquired by Verisign for US$300 million. Syniverse Technologies acquired Lightsurf from Verisign in 2009. 
Fullpower Technologies (2005–present): sensing, sleep, and wearable technology Fullpower, founded in 2005, provides a patented ecosystem for wearable and Internet of Things sensor-fusion solutions supporting networks of sensors. The company's expertise is sleep monitoring technology using sensors and artificial intelligence. The inspiration behind some of Fullpower's technology stems from Kahn's passion for sailing. During a demanding race requiring sailors to sleep less than an hour every 24-hour period, Kahn began experimenting with biosensors and three-axis linear accelerometers that could detect micromovements and provide meaningful recommendations. Kahn created prototype sleep trackers using biosensors that optimized 26-minute power naps to maximize sleep benefits and sail time. First camera phone In 1997, Kahn created the first camera phone solution sharing pictures instantly on public networks. The impetus for this invention was the birth of Kahn's daughter. Kahn had been working for almost a year on a web server-based infrastructure for pictures, that he called Picture Mail. At the hospital, while his wife was in labor, Kahn jury-rigged a connection between a mobile phone and a digital camera and sent off photos in real time to the picture messaging infrastructure he had running in his home. Kahn later said "I had always wanted to have this all working in time to share my daughter’s birth photo, but I wasn’t sure I was going to make it. It’s always the case that if it weren’t for the last minute, nothing would ever get done." In 2016 Time Magazine included Kahn's first camera phone photo in their list of the 100 most influential photos of all time. In 2017, Subconscious Films created a short film recreating the day that Philippe instantly shared the first camera-phone photo of the birth of his daughter Sophie. Patents Kahn has filed for or has been granted over 230 patents internationally, in fields including artificial intelligence-modeling tools, Internet of Things, motion detection, wearable technology, Global Positioning Systems, telecommunications, telemedicine, and sleep monitoring. Gay rights advocacy Under Kahn's direction, Borland became the first software company to offer domestic partners full benefits and a pioneer for gay rights in Silicon Valley. Kahn was a key speaker at the pivotal gay rights conference on the Apple campus on October 19, 1993. Personal life Kahn has four children, three of which are from his first marriage. He later married Sonia Lee, with whom he has a daughter, Sophie. Sonia co-founded three of Kahn's companies with him: Fullpower Technologies, LightSurf and Starfish Software. Sailing and sports Philippe Kahn's focus on the environment and the outdoors led him to the sport of sailing. Kahn's sailing team, Pegasus Racing, competes in many world championships each year around the world. An offshore sailor with over 10 trans-Pacific crossings, Kahn holds the Transpac double handed (two-crewmember) record from San Francisco to Oahu, Hawaii. His sailing achievements also include winning the double handed division of the 2009 Transpacific Yacht Race from Los Angeles to Hawaii and setting the Transpac record at 7 days, 19 hours, beating the previous time of 10 days, 4 hours. Kahn's son Samuel ("Shark") also took up sailing as a boy. In his teenage years he had several outstanding race wins, including the 2003 Melges 24 Worlds race right after he turned 15. He has competed against his father. 
Lee-Kahn Foundation Kahn and his wife Sonia run the Lee-Kahn Foundation. According to the Foundation's website, it sponsors local and national non-profit organizations focused on environmental causes and works to improve access to health care, education, and the arts. References Further reading 1952 births American computer businesspeople American people of French-Jewish descent ETH Zurich alumni French emigrants to the United States French people of Jewish descent Engineers from Paris Living people
41585725
https://en.wikipedia.org/wiki/Project%20Cybersyn
Project Cybersyn
Project Cybersyn was a Chilean project from 1971 to 1973 during the presidency of Salvador Allende aimed at constructing a distributed decision support system to aid in the management of the national economy. The project consisted of four modules: an economic simulator, custom software to check factory performance, an operations room, and a national network of telex machines that were linked to one mainframe computer. Project Cybersyn was based on viable system model theory approach to organizational design, and featured innovative technology at its time: it included a network of telex machines (Cybernet) in state-run enterprises that would transmit and receive information with the government in Santiago. Information from the field would be fed into statistical modeling software (Cyberstride) that would monitor production indicators, such as raw material supplies or high rates of worker absenteeism, in "almost" real time, alerting the workers in the first case and, in abnormal situations, if those parameters fell outside acceptable ranges by a very large degree, also the central government. The information would also be input into economic simulation software (CHECO, for CHilean ECOnomic simulator) that the government could use to forecast the possible outcome of economic decisions. Finally, a sophisticated operations room (Opsroom) would provide a space where managers could see relevant economic data, formulate feasible responses to emergencies, and transmit advice and directives to enterprises and factories in alarm situations by using the telex network. The principal architect of the system was British operations research scientist Stafford Beer, and the system embodied his notions of organisational cybernetics in industrial management. One of its main objectives was to devolve decision-making power within industrial enterprises to their workforce in order to develop self-regulation of factories. After the military coup on September 11, 1973, Cybersyn was abandoned, and the operations room was destroyed. Name The project's name in English (Cybersyn) is a portmanteau of the words cybernetics and synergy. Since the name is not euphonic in Spanish, in that language the project was called , both an initialism for the Spanish , ('system of information and control'), and a pun on the Spanish , the number five, alluding to the five levels of Beer's viable system model. History Stafford Beer was a British consultant in management cybernetics. He also sympathized with the stated ideals of Chilean socialism of maintaining Chile's democratic system and the autonomy of workers instead of imposing a Soviet-style system of top-down command and control. In July 1971, Fernando Flores, a high-level employee of the Chilean Production Development Corporation (CORFO) under the instruction of Pedro Vuskovic, contacted Beer for advice on incorporating Beer's theories into the management of the newly nationalized sector of Chile's economy. Beer saw this as a unique opportunity to implement his ideas on a national scale. More than offering advice, he left most of his other consulting business and devoted much time to what became Project Cybersyn. He traveled to Chile often to collaborate with local implementors and used his personal contacts to secure help from British technical experts. The implementation schedule was very aggressive, and the system had reached an advanced prototype stage at the start of 1973. 
The system was most useful in October 1972, when about 40,000 striking truck drivers blocked the access streets that converged towards Santiago. The strike was supported by the group and at least partly funded by private donors who had received money from the CIA. According to Gustavo Silva (executive secretary of energy in CORFO), the system's telex machines helped organize the transport of resources into the city with only about 200 trucks driven by strike-breakers, lessening the potential damage caused by the 40,000 striking truck drivers. System There were 500 unused telex machines bought by the previous government. Each was put into a factory. In the control centre in Santiago, each day data coming from each factory (several numbers, such as raw material input, production output and number of absentees) were put into a computer, which made short-term predictions and necessary adjustments. There were four levels of control (firm, branch, sector, total), with algedonic feedback. If one level of control did not remedy a problem in a certain interval, the higher level was notified. The results were discussed in the operations room and a top-level plan was made. The network of telex machines, called Cybernet, was the first operational component of Cybersyn, and the only one regularly used by the Allende government. The software for Cybersyn was called Cyberstride, and used Bayesian filtering and Bayesian control. It was written by Chilean engineers in consultation with a team of 12 British programmers. Cybersyn first ran on an IBM 360/50, but later was transferred to a less heavily used Burroughs 3500 mainframe. The futuristic operations room was designed by a team led by the interface designer Gui Bonsiepe. It was furnished with seven swivel chairs (considered the best for creativity) with buttons, which were designed to control several large screens that could project the data, and other panels with status information, although these were of limited functionality, as they could only show pre-prepared graphs supplied as slides. The vision had been distribution of control and involvement of workers in business planning. The design looked more like bureaucratic centralisation of control via bottom-up reporting and top-down direction. Workers were expected to perform processes and use resources in the ways that had been modelled and planned. Any significant deviation from the plan was to be reported upwards, and corrective directives were to be cascaded downwards. The project is described in some detail in the second edition of Stafford Beer's books Brain of the Firm and Platform for Change. The latter book includes proposals for social innovations such as bringing representatives of diverse 'stakeholder' groups into the control centre. A related development was known as Project Cyberfolk, which allowed citizens to send information about their moods to the Project organizers. Aesthetics The Ops room used Tulip chairs similar to those used in the American science fiction TV programme Star Trek, although according to the designers, the style was not influenced by science fiction movies. Legacy Computer scientist Paul Cockshott and economist Allin Cottrell referenced Project Cybersyn in their 1993 book Towards a New Socialism, citing it as an inspiration for their own proposed model of computer-managed socialist planned economy. Authors Leigh Phillips and Michal Rozworski also dedicated a chapter on the project in their 2019 book, The People's Republic of Walmart.
The authors presented a case to defend the feasibility of a planned economy aided by contemporary processing power used by large organizations such as Amazon, Walmart and the Pentagon. The authors, however, question whether much can be built on Project Cybersyn in particular, specifically, "whether a system used in emergency, near–civil war conditions in a single country—covering a limited number of enterprises and, admittedly, only partially ameliorating a dire situation—can be applied in times of peace and at a global scale", especially as the project was never completed due to the military coup in 1973, which was followed by economic reforms by the Chicago Boys. Chilean science fiction author Jorge Baradit published a Spanish-language science fiction novel Synco in 2008. It is an alternate history novel set in an alternative 1979, of which he said: "It stops the military coup, the socialist government consolidates and creates the first cybernetic state, a universal example, the true third way, a miracle." In October 2016, 99% Invisible produced a podcast about the project. The Radio Ambulante podcast covered some history of Allende and the Cybersyn project in their 2019 episode "The Room That Was A Brain". The Guardian called the project "a sort of socialist internet, decades ahead of its time". See also Alexander Kharkevich, the director of the Institute for Information Transmission Problems in Moscow (later Kharkevich Institute) Comparison of system dynamics software Economic calculation debate Enterprise resource planning Fernando Flores History of Chile Internet Material balance planning OGAS Planned economy Socialist economy System dynamics Viable system model Cyberocracy References External links Eden Medina, Cybernetic Revolutionaries: Technology and Politics in Allende's Chile, (Cambridge, Massachusetts: MIT Press, 2011). Eden Medina, "Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende's Chile." Journal of Latin American Studies 38 (2006):571-606. (pdf) Lessons of Stafford Beer The CeberSyn heritage in the XXI Century The CyberSyn multimedia "reconstruction" Before ’73 Coup, Chile Tried to Find the Right Software for Socialism, by Alexei Barrionuevo. The New York Times. March 28, 2008 The forgotten story of Chile's 'socialist internet' Futurism, fictional and science fictional - rambling and inspiring on BoingBoing Project Cybersyn | varnelis.net Rhizome.org: Project Cybersyn Stafford Beer, and Salvador Allende's Internet, and the Dystopian Novel Free As In Beer: Cybernetic Science Fictions Planning Machine at The New Yorker Allende’s socialist internet at Red Pepper Network Effects: Raul Espejo on Cybernetic Socialism in Salvador Allende’s Chile, Kristen Alfaro interviews Raúl Espejo for Logic. January 1, 2019. Cybernetics Economy of Chile 1970s in Chile Presidency of Salvador Allende Socialism in Chile Economic planning History of computing in South America Networks Socialism 1970s economic history Information management Chilean inventions Government by algorithm
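The exception-reporting scheme described in the System section above, four levels of control with algedonic feedback in which an unresolved deviation is passed up to the next level after a set interval, can be sketched as a small state machine. The code below is a toy illustration of that escalation rule, not a reconstruction of Cyberstride; the level names come from the article, while everything else (the time units and the grace_hours parameter) is invented for the example.

```python
from dataclasses import dataclass

LEVELS = ["firm", "branch", "sector", "total"]   # the four control levels

@dataclass
class Deviation:
    indicator: str      # e.g. raw-material input or worker absenteeism
    raised_at: float    # hour at which the deviation was first flagged
    level: int = 0      # index into LEVELS; starts with the firm itself

def review(dev: Deviation, now: float, resolved: bool, grace_hours: float = 24.0):
    """Toy algedonic escalation: if the current level has not remedied the
    deviation within its grace period, notify the next level up."""
    if resolved:
        return None                               # problem handled, drop the alert
    if now - dev.raised_at > grace_hours and dev.level < len(LEVELS) - 1:
        dev.level += 1
        dev.raised_at = now                       # the clock restarts at the new level
        print(f"{dev.indicator}: escalated to the {LEVELS[dev.level]} level")
    return dev

alert = Deviation("absenteeism above acceptable range", raised_at=0.0)
alert = review(alert, now=30.0, resolved=False)   # the firm did not fix it within a day
```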
27063702
https://en.wikipedia.org/wiki/List%20of%20terminal%20emulators
List of terminal emulators
This is a list of notable terminal emulators. Most used terminal emulators on Linux and Unix-like systems are GNOME Terminal on GNOME and GTK-based environments, Konsole on KDE, and xfce4-terminal on Xfce as well as xterm. Character-oriented terminal emulators Unix-like Command-line interface Linux console – implements a subset of the VT102 and ECMA-48/ISO 6429/ANSI X3.64 escape sequences. The following terminal emulators run inside of other terminals, utilizing libraries such as Curses and Termcap: GNU Screen – Terminal multiplexer with VT100/ANSI terminal emulation Minicom – text-based modem control and terminal emulation program for Unix-like operating systems tmux – Terminal multiplexer with a feature set similar to GNU Screen Graphical X11 and Wayland Terminal emulators used in combination with X Window System and Wayland: Alacritty – GPU accelerated, without tabs GNOME Terminal – default terminal for GNOME with native Wayland support guake – drop-down terminal for GNOME kitty – GPU accelerated, with tabs, tiling, image viewing, interactive unicode character input konsole – default terminal for KDE rxvt – lightweight X11 terminal emulator aterm (from rxvt 2.4.8) created for use with the AfterStep window manager (no longer maintained) Eterm (from rxvt 2.21) created for use with Enlightenment mrxvt (from rxvt 2.7.11) created for multiple tabs and additional features (latest version released in 2008-09-10) urxvt (from rxvt 2.7.11) created to support Unicode, also known as rxvt-unicode Wterm – created for NeXTSTEP style window managers such as Window Maker Terminator – written in Java with many novel or experimental features Terminology – enhanced terminal supportive of multimedia and text manipulation for X11 and Linux framebuffer Tilda – a drop-down terminal Tilix – GTK3 tiling terminal emulator xfce4-terminal – default terminal for Xfce with drop-down support xterm – standard terminal for X11 Yakuake – (Yet Another Kuake) a drop-down terminal for KDE macOS Terminal emulators used on macOS iTerm2 – open-source terminal specifically for macOS MacWise SecureCRT Terminal – default macOS terminal Terminator xterm – default terminal when X11.app starts ZOC ZTerm – serial line terminal Apple Classic Mac OS MacTerminal Red Ryder ZTerm Android Termux Microsoft Windows AbsoluteTelnet Alacritty AlphaCom ConEmu – local terminal window that can host console application developed either for WinAPI (cmd, powershell, far) or Unix PTY (cygwin, msys, wsl bash) HyperACCESS (commercial) and HyperTerminal (included free with Windows XP and earlier, but not included with Windows Vista and later) Kermit 95 mintty – Cygwin terminal Procomm Plus PuTTY Qmodem Pro RUMBA SecureCRT Tera Term TtyEmulator Windows Console – Windows command line terminal Windows Terminal ZOC Microsoft MS-DOS Crosstalk Kermit ProComm Qmodem and Qmodem Pro Synchronet Telix Terminate IBM OS/2 Kermit 95 ZOC – discontinued support for OS/2 Commodore Amiga NComm Commodore 64 CBterm/C64 Block-oriented terminal emulators Emulators for block-oriented terminals, primarily IBM 3270, but also IBM 5250 and other non-IBM terminals. Coax/Twinax connected These terminal emulators are used to replace terminals attached to a host or terminal controller via a coaxial cable (coax) or twinaxial cabling (twinax). They require that the computer on which they run have a hardware adapter to support such an attachment. 
RUMBA 3270 and 5250 tn3270/tn5250 These terminal emulators connect to a host using the tn3270 or tn5250 protocols, which run over a Transmission Control Protocol (TCP) connection. c3270 – IBM 3270 emulator for running inside a vt100/curses emulator for most Unix-like systems Eicon Aviva IBM Personal Communications Rocket BlueZone TN3270 Plus Tn5250j x3270 – IBM 3270 emulator for X11 and most Unix-like systems ZOC See also Web-based SSH References External links Linux console escape and control sequences List of X11 terminals available on Gentoo Linux List of X11 terminals available on archlinux Guide to Windows terminals The Grumpy Editor's guide to terminal emulators, 2004 Comprehensive Linux Terminal Performance Comparison, 2007 x11-terminals Emulators, Terminal
31247
https://en.wikipedia.org/wiki/Teleprinter
Teleprinter
A teleprinter (teletypewriter, teletype or TTY) is an electromechanical device that can be used to send and receive typed messages through various communications channels, in both point-to-point and point-to-multipoint configurations. Initially they were used in telegraphy, which developed in the late 1830s and 1840s as the first use of electrical engineering, though teleprinters were not used for telegraphy until 1887 at the earliest. The machines were adapted to provide a user interface to early mainframe computers and minicomputers, sending typed data to the computer and printing the response. Some models could also be used to create punched tape for data storage (either from typed input or from data received from a remote source) and to read back such tape for local printing or transmission. Teleprinters could use a variety of different communication media. These included a simple pair of wires; dedicated non-switched telephone circuits (leased lines); switched networks that operated similarly to the public telephone network (telex); and radio and microwave links (telex-on-radio, or TOR). A teleprinter attached to a modem could also communicate through standard switched public telephone lines. This latter configuration was often used to connect teleprinters to remote computers, particularly in time-sharing environments. Teleprinters have largely been replaced by fully electronic computer terminals which typically have a computer monitor instead of a printer (though the term "TTY" is still occasionally used to refer to them, such as in Unix systems). Teleprinters are still widely used in the aviation industry (see AFTN and airline teletype system), and variations called Telecommunications Devices for the Deaf (TDDs) are used by the hearing impaired for typed communications over ordinary telephone lines. History The teleprinter evolved through a series of inventions by a number of engineers, including Samuel Morse, Alexander Bain, Royal Earl House, David Edward Hughes, Emile Baudot, Donald Murray, Charles L. Krum, Edward Kleinschmidt and Frederick G. Creed. Teleprinters were invented in order to send and receive messages without the need for operators trained in the use of Morse code. A system of two teleprinters, with one operator trained to use a keyboard, replaced two trained Morse code operators. The teleprinter system improved message speed and delivery time, making it possible for messages to be flashed across a country with little manual intervention. There were a number of parallel developments on both sides of the Atlantic Ocean. In 1835 Samuel Morse devised a recording telegraph, and Morse code was born. Morse's instrument used a current to displace the armature of an electromagnet, which moved a marker, therefore recording the breaks in the current. Cooke & Wheatstone received a British patent covering telegraphy in 1837 and a second one in 1840 which described a type-printing telegraph with steel type fixed at the tips of petals of a rotating brass daisy-wheel, struck by an “electric hammer” to print Roman letters through carbon paper onto a moving paper tape. In 1841 Alexander Bain devised an electromagnetic printing telegraph machine. It used pulses of electricity created by rotating a dial over contact points to release and stop a type-wheel turned by weight-driven clockwork; a second clockwork mechanism rotated a drum covered with a sheet of paper and moved it slowly upwards so that the type-wheel printed its signals in a spiral. 
The critical issue was to have the sending and receiving elements working synchronously. Bain attempted to achieve this using centrifugal governors to closely regulate the speed of the clockwork. It was patented, along with other devices, on April 21, 1841. By 1846, the Morse telegraph service was operational between Washington, D.C., and New York. Royal Earl House patented his printing telegraph that same year. He linked two 28-key piano-style keyboards by wire. Each piano key represented a letter of the alphabet and when pressed caused the corresponding letter to print at the receiving end. A "shift" key gave each main key two optional values. A 56-character typewheel at the sending end was synchronised to coincide with a similar wheel at the receiving end. If the key corresponding to a particular character was pressed at the home station, it actuated the typewheel at the distant station just as the same character moved into the printing position, in a way similar to the (much later) daisy wheel printer. It was thus an example of a synchronous data transmission system. House's equipment could transmit around 40 instantly readable words per minute, but was difficult to manufacture in bulk. The printer could copy and print out up to 2,000 words per hour. This invention was first put in operation and exhibited at the Mechanics Institute in New York in 1844. Landline teleprinter operations began in 1849, when a circuit was put in service between Philadelphia and New York City. In 1855, David Edward Hughes introduced an improved machine built on the work of Royal Earl House. In less than two years, a number of small telegraph companies, including Western Union in early stages of development, united to form one large corporation – Western Union Telegraph Co. – to carry on the business of telegraphy on the Hughes system. In France, Émile Baudot designed in 1874 a system using a five-unit code, which began to be used extensively in that country from 1877. The British Post Office adopted the Baudot system for use on a simplex circuit between London and Paris in 1897, and subsequently made considerable use of duplex Baudot systems on their Inland Telegraph Services. During 1901, Baudot's code was modified by Donald Murray (1865–1945, originally from New Zealand), prompted by his development of a typewriter-like keyboard. The Murray system employed an intermediate step, a keyboard perforator, which allowed an operator to punch a paper tape, and a tape transmitter for sending the message from the punched tape. At the receiving end of the line, a printing mechanism would print on a paper tape, and/or a reperforator could be used to make a perforated copy of the message. As there was no longer a direct correlation between the operator's hand movement and the bits transmitted, there was no concern about arranging the code to minimize operator fatigue, and instead Murray designed the code to minimize wear on the machinery, assigning the code combinations with the fewest punched holes to the most frequently used characters. The Murray code also introduced what became known as "format effectors" or "control characters" – the CR (Carriage Return) and LF (Line Feed) codes. A few of Baudot's codes moved to the positions where they have stayed ever since: the NULL or BLANK and the DEL code. NULL/BLANK was used as an idle code for when no messages were being sent. 
In the United States in 1902, electrical engineer Frank Pearne approached Joy Morton, head of Morton Salt, seeking a sponsor for research into the practicalities of developing a printing telegraph system. Joy Morton needed to determine whether this was worthwhile and so consulted mechanical engineer Charles L. Krum, who was vice president of the Western Cold Storage Company. Krum was interested in helping Pearne, so space was set up in a laboratory in the attic of Western Cold Storage. Frank Pearne lost interest in the project after a year and left to get involved in teaching. Krum was prepared to continue Pearne’s work, and in August, 1903 a patent was filed for a 'typebar page printer'. In 1904, Krum filed a patent for a 'type wheel printing telegraph machine' which was issued in August, 1907. In 1906 Charles Krum's son, Howard Krum, joined his father in this work. It was Howard who developed and patented the start-stop synchronizing method for code telegraph systems, which made possible the practical teleprinter. In 1908, a working teleprinter was produced by the Morkrum Company (formed between Joy Morton and Charles Krum) , called the Morkrum Printing Telegraph, which was field tested with the Alton Railroad. In 1910, the Morkrum Company designed and installed the first commercial teletypewriter system on Postal Telegraph Company lines between Boston and New York City using the "Blue Code Version" of the Morkrum Printing Telegraph. In 1916, Edward Kleinschmidt filed a patent application for a typebar page printer. In 1919, shortly after the Morkrum company obtained their patent for a start-stop synchronizing method for code telegraph systems, which made possible the practical teleprinter, Kleinschmidt filed an application titled "Method of and Apparatus for Operating Printing Telegraphs" which included an improved start-stop method. The basic start-stop procedure, however, is much older than the Kleinschmidt and Morkrum inventions. It was already proposed by D'Arlincourt in 1870. Instead of wasting time and money in patent disputes on the start-stop method, Kleinschmidt and the Morkrum Company decided to merge and form the Morkrum-Kleinschmidt Company in 1924. The new company combined the best features of both their machines into a new typewheel printer for which Kleinschmidt, Howard Krum, and Sterling Morton jointly obtained a patent. In 1924 Britain's Creed & Company, founded by Frederick G. Creed, entered the teleprinter field with their Model 1P, a page printer, which was soon superseded by the improved Model 2P. In 1925 Creed acquired the patents for Donald Murray's Murray code, a rationalised Baudot code. The Model 3 tape printer, Creed’s first combined start-stop machine, was introduced in 1927 for the Post Office telegram service. This machine printed received messages directly on to gummed paper tape at a rate of 65 words per minute. Creed created his first keyboard perforator, which used compressed air to punch the holes. He also created a reperforator (receiving perforator) and a printer. The reperforator punched incoming Morse signals on to paper tape and the printer decoded this tape to produce alphanumeric characters on plain paper. This was the origin of the Creed High Speed Automatic Printing System, which could run at an unprecedented 200 words per minute. His system was adopted by the Daily Mail for daily transmission of the newspaper's contents. The Creed Model 7 page printing teleprinter was introduced in 1931 and was used for the inland Telex service. 
It worked at a speed of 50 baud, about 66 words a minute, using a code based on the Murray code. A teleprinter system was installed in the Bureau of Lighthouses, Airways Division, Flight Service Station Airway Radio Stations system in 1928, carrying administrative messages, flight information and weather reports. By 1938, the teleprinter network, handling weather traffic, extended over 20,000 miles, covering all 48 states except Maine, New Hampshire, and South Dakota. Ways in which teleprinters were used There were at least five major types of teleprinter networks: Exchange systems such as Telex and TWX created a real-time circuit between two machines, so that anything typed on one machine appeared at the other end immediately. US and UK systems had telephone dials, and prior to 1981 five North American Numbering Plan (NANPA) area codes were reserved for teleprinter use. German systems did "dialing" via the keyboard. Typed "chat" was possible, but because billing was by connect time, it was common to prepare messages in advance on paper tape and transmit them without pauses for typing. Leased line and radioteletype networks arranged in point-to-point and / or multipoint configurations supported data processing applications for government and industry, such as integrating the accounting, billing, management, production, purchasing, sales, shipping and receiving departments within an organization to speed internal communications. Message switching systems were an early form of E-mail, using electromechanical equipment. See Telegram, Western Union, Plan 55-A. Military organizations had similar but separate systems, such as Autodin. Broadcast systems such as weather information distribution and "news wires". Examples were operated by Associated Press, National Weather Service, Reuters, and United Press (later UPI). Information was printed on receive-only teleprinters, without keyboards or dials. "Loop" systems, where anything typed on any machine on the loop printed on all the machines. American police departments used such systems to interconnect precincts. Teleprinter operation Most teleprinters used the 5-bit International Telegraph Alphabet No. 2 (ITA2). This limited the character set to 32 codes (25 = 32). One had to use a "FIGS" (for "figures") shift key to type numbers and special characters. Special versions of teleprinters had FIGS characters for specific applications, such as weather symbols for weather reports. Print quality was poor by modern standards. The ITA2 code was used asynchronously with start and stop bits: the asynchronous code design was intimately linked with the start-stop electro-mechanical design of teleprinters. (Early systems had used synchronous codes, but were hard to synchronize mechanically). Other codes, such as FIELDATA and Flexowriter, were introduced but never became as popular as ITA2. Mark and space are terms describing logic levels in teleprinter circuits. The native mode of communication for a teleprinter is a simple series DC circuit that is interrupted, much as a rotary dial interrupts a telephone signal. The marking condition is when the circuit is closed (current is flowing), the spacing condition is when the circuit is open (no current is flowing). The "idle" condition of the circuit is a continuous marking state, with the start of a character signalled by a "start bit", which is always a space. 
Following the start bit, the character is represented by a fixed number of bits, such as 5 bits in the ITA2 code, each either a mark or a space to denote the specific character or machine function. After the character's bits, the sending machine sends one or more stop bits. The stop bits are marking, so as to be distinct from the subsequent start bit. If the sender has nothing more to send, the line simply remains in the marking state (as if a continuing series of stop bits) until a later space denotes the start of the next character. The time between characters need not be an integral multiple of a bit time, but it must be at least the minimum number of stop bits required by the receiving machine. When the line is broken, the continuous spacing (open circuit, no current flowing) causes a receiving teleprinter to cycle continuously, even in the absence of stop bits. It prints nothing because the characters received are all zeros, the ITA2 blank (or ASCII) null character. Teleprinter circuits were generally leased from a communications common carrier and consisted of ordinary telephone cables that extended from the teleprinter located at the customer location to the common carrier central office. These teleprinter circuits were connected to switching equipment at the central office for Telex and TWX service. Private line teleprinter circuits were not directly connected to switching equipment. Instead, these private line circuits were connected to network hubs and repeaters configured to provide point to point or point to multipoint service. More than two teleprinters could be connected to the same wire circuit by means of a current loop. Earlier teleprinters had three rows of keys and only supported upper case letters. They used the 5 bit ITA2 code and generally worked at 60 to 100 words per minute. Later teleprinters, specifically the Teletype Model 33, used ASCII code, an innovation that came into widespread use in the 1960s as computers became more widely available. "Speed", intended to be roughly comparable to words per minute, is the standard term introduced by Western Union for a mechanical teleprinter data transmission rate using the 5-bit ITA2 code that was popular in the 1940s and for several decades thereafter. Such a machine would send 1 start bit, 5 data bits, and 1.42 stop bits. This unusual stop bit time is actually a rest period to allow the mechanical printing mechanism to synchronize in the event that a garbled signal is received. This is true especially on high frequency radio circuits where selective fading is present. Selective fading causes the mark signal amplitude to be randomly different from the space signal amplitude. Selective fading, or Rayleigh fading can cause two carriers to randomly and independently fade to different depths. Since modern computer equipment cannot easily generate 1.42 bits for the stop period, common practice is to either approximate this with 1.5 bits, or to send 2.0 bits while accepting 1.0 bits receiving. For example, a "60 speed" machine is geared at 45.5 baud (22.0 ms per bit), a "66 speed" machine is geared at 50.0 baud (20.0 ms per bit), a "75 speed" machine is geared at 56.9 baud (17.5 ms per bit), a "100 speed" machine is geared at 74.2 baud (13.5 ms per bit), and a "133 speed" machine is geared at 100.0 baud (10.0 ms per bit). 60 speed became the de facto standard for amateur radio RTTY operation because of the widespread availability of equipment at that speed and the U.S. 
Federal Communications Commission (FCC) restrictions to only 60 speed from 1953 to 1972. Telex, news agency wires and similar services commonly used 66 speed services. There was some migration to 75 and 100 speed as more reliable devices were introduced. However, the limitations of HF transmission such as excessive error rates due to multipath distortion and the nature of ionospheric propagation kept many users at 60 and 66 speed. Most audio recordings in existence today are of teleprinters operating at 60 words per minute, and mostly of the Teletype Model 15. Another measure of the speed of a teletypewriter was in total "operations per minute (OPM)". For example, 60 speed was usually 368 OPM, 66 speed was 404 OPM, 75 speed was 460 OPM, and 100 speed was 600 OPM. Western Union Telexes were usually set at 390 OPM, with 7.0 total bits instead of the customary 7.42 bits. Both wire-service and private teleprinters had bells to signal important incoming messages and could ring 24/7 while the power was turned on. For example, ringing 4 bells on UPI wire-service machines meant an "Urgent" message; 5 bells was a "Bulletin"; and 10 bells was a FLASH, used only for very important news, such as the assassination of John F. Kennedy. The teleprinter circuit was often linked to a 5-bit paper tape punch (or "reperforator") and reader, allowing messages received to be resent on another circuit. Complex military and commercial communications networks were built using this technology. Message centers had rows of teleprinters and large racks for paper tapes awaiting transmission. Skilled operators could read the priority code from the hole pattern and might even feed a "FLASH PRIORITY" tape into a reader while it was still coming out of the punch. Routine traffic often had to wait hours for relay. Many teleprinters had built-in paper tape readers and punches, allowing messages to be saved in machine-readable form and edited off-line. Communication by radio, known as radioteletype or RTTY (pronounced ritty), was also common, especially among military users. Ships, command posts (mobile, stationary, and even airborne) and logistics units took advantage of the ability of operators to send reliable and accurate information with a minimum of training. Amateur radio operators continue to use this mode of communication today, though most use computer-interface sound generators, rather than legacy hardware teleprinter equipment. Numerous modes are in use within the "ham radio" community, from the original ITA2 format to more modern, faster modes, which include error-checking of characters. Control characters A typewriter or electromechanical printer can print characters on paper, and execute operations such as move the carriage back to the left margin of the same line (carriage return), advance to the same column of the next line (line feed), and so on. Commands to control non-printing operations were transmitted in exactly the same way as printable characters by sending control characters with defined functions (e.g., the line feed character forced the carriage to move to the same position on the next line) to teleprinters. In modern computing and communications a few control characters, such as carriage return and line feed, have retained their original functions (although they are often implemented in software rather than activating electromechanical mechanisms to move a physical printer carriage) but many others are no longer required and are used for other purposes. 
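The speed figures quoted above all follow from the same arithmetic: a start-stop character occupies 1 start unit, 5 data units and 1.42 stop units, 7.42 unit intervals in all, so the character (operation) rate is simply the baud rate divided by 7.42. The short Python check below reproduces the published OPM values; the assumption of roughly six characters (including the space) per "word" in the words-per-minute column is a common rule of thumb rather than something stated here.

```python
UNITS_PER_CHAR = 1 + 5 + 1.42          # start bit + five ITA2 data bits + 1.42 stop units

def ops_per_minute(baud: float) -> float:
    """Character (operation) rate of a start-stop teleprinter at a given baud rate."""
    return baud / UNITS_PER_CHAR * 60

for speed, baud in [(60, 45.5), (66, 50.0), (75, 56.9), (100, 74.2)]:
    opm = ops_per_minute(baud)
    print(f'"{speed} speed": {baud:5.1f} baud -> {opm:3.0f} OPM, ~{opm / 6:.0f} wpm')
# prints roughly 368, 404, 460 and 600 OPM, matching the figures above
```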
Answer back mechanism Some teleprinters had a "Here is" key, which transmitted a fixed sequence of 20 or 22 characters, programmable by breaking tabs off a drum. This sequence could also be transmitted automatically upon receipt of an ENQ (control E) signal, if enabled. This was commonly used to identify a station; the operator could press the key to send the station identifier to the other end, or the remote station could trigger its transmission by sending the ENQ character, essentially asking "who are you?" Manufacturers Creed & Company British Creed & Company built teleprinters for the GPO's teleprinter service. Creed model 7 (page printing teleprinter introduced in 1931) Creed model 7B (50 baud page printing teleprinter) Creed model 7E (page printing teleprinter with overlap cam and range finder) Creed model 7/TR (non-printing teleprinter reperforator) Creed model 54 (page printing teleprinter introduced in 1954) Creed model 75 (page printing teleprinter introduced in 1958) Creed model 85 (printing reperforator introduced in 1948) Creed model 86 (printing reperforator using 7/8" wide tape) Creed model 444 (page printing teleprinter introduced in 1966, GPO type 15) Kleinschmidt Labs In 1931, American inventor Edward Kleinschmidt formed Kleinschmidt Labs to pursue a different design of teleprinter. In 1944 Kleinschmidt demonstrated their lightweight unit to the Signal Corps and in 1949 their design was adopted for the Army's portable needs. In 1956, Kleinschmidt Labs merged with Smith-Corona, which then merged with the Marchant Calculating Machine Co., forming the SCM Corporation. By 1979, the Kleinschmidt division was turning to Electronic Data Interchange and away from mechanical products. Kleinschmidt machines, with the military as their primary customer, used standard military designations for their machines. The teleprinter was identified with designations such as a TT-4/FG, while communication "sets" to which a teleprinter might be a part generally used the standard Army/Navy designation system such as AN/FGC-25. This includes Kleinschmidt teleprinter TT-117/FG and tape reperforator TT-179/FG. Morkrum Morkrum made their first commercial installation of a printing telegraph with the Postal Telegraph Company in Boston and New York in 1910. It became popular with railroads, and the Associated Press adopted it in 1914 for their wire service. Morkrum merged with their competitor Kleinschmidt Electric Company to become Morkrum-Kleinschmidt Corporation shortly before being renamed the Teletype Corporation. Olivetti Italian office equipment maker Olivetti (est. 1908) started to manufacture teleprinters in order to provide Italian post offices with modern equipment to send and receive telegrams. The first models typed on a paper ribbon, which was then cut and glued into telegram forms. Olivetti T1 (1938–1948) Olivetti T2 (1948–1968) Olivetti Te300 (1968–1975) Olivetti Te400 (1975–1991) Siemens & Halske Siemens & Halske, later Siemens AG, a German company, founded in 1897. Teleprinter Model 100 Ser 1 (end of the 1950s) – Used for Telex service Teleprinter Model 100 Ser. 11 – Later version with minor changes Teleprinter Model T100 ND (single current) NDL (double current) models Teleprinter Model T 150 (electromechanical) Offline tape punch for creating messages Teleprinter T 1000 electronic teleprinter (processor based) 50-75-100 Bd. Tape punch and reader attachments ND/NDL/SEU V21modem model Teleprinter T 1000 Receive only units as used by newsrooms for unedited SAPA/Reuters/AP feeds etc. 
Teleprinter T 1200 electronic teleprinter (processor based), 50-75-100-200 Bd; green LED text display; 1.44M 3.5" floppy disk ("stiffy") attachment
PC-Telex teleprinter with dedicated dot matrix printer, connected to an IBM compatible PC (as used by Telkom South Africa)
T4200 Teletex teleprinter with two floppy disc drives and black and white monitor/daisy wheel typewriter (DOS2)

Teletype Corporation

The Teletype Corporation, a part of American Telephone and Telegraph Company's Western Electric manufacturing arm since 1930, was founded in 1906 as the Morkrum Company. In 1925, a merger between Morkrum and the Kleinschmidt Electric Company created the Morkrum-Kleinschmidt Company. The name was changed in December 1928 to Teletype Corporation. In 1930, Teletype Corporation was purchased by the American Telephone and Telegraph Company and became a subsidiary of Western Electric. In 1984, the divestiture of the Bell System resulted in the Teletype name and logo being replaced by the AT&T name and logo, eventually resulting in the brand being extinguished. The last vestiges of what had been the Teletype Corporation ceased in 1990, bringing to a close the dedicated teleprinter business. Despite its long-lasting trademark status, the word Teletype went into common generic usage in the news and telecommunications industries. Records of the United States Patent and Trademark Office indicate the trademark has expired and is considered dead.

Teletype machines tended to be large, heavy, and extremely robust, capable of running non-stop for months at a time if properly lubricated. The Model 15 stands out as one of a few machines that remained in production for many years. It was introduced in 1930 and remained in production until 1963, a total of 33 years of continuous production; very few complex machines can match that record. The production run was stretched somewhat by World War II: the Model 28 was scheduled to replace the Model 15 in the mid-1940s, but Teletype built so many factories to produce the Model 15 during World War II that it was more economical to continue mass production of the Model 15. The Model 15, in its receive-only, no-keyboard version, was the classic "news Teletype" for decades.

Model 15 = Baudot version, 45 Baud, optional tape punch and reader
Model 28 = Baudot version, 45-50-56-75 Baud, optional tape punch and reader
Model 32 = small lightweight machine (cheap production), 45-50-56-75 Baud, optional tape punch and reader
Model 33 = same as Model 32 but for 8-level ASCII-plus-parity-bit, used as a computer terminal, optional tape punch and reader
Model 35 = same as Model 28 but for 8-level ASCII-plus-parity-bit, used as a heavy-duty computer terminal, optional tape punch and reader
Model 37 = improved version of the Model 35, higher speeds up to 150 Baud, optional tape punch and reader
Model 38 = similar to Model 33, but for 132 char./line paper (14 inches wide), optional tape punch and reader
Model 40 = new processor-based system with monitor screen, but mechanical "chain printer"
Model 42 = new cheap-production Baudot machine to replace Model 28 and Model 32, paper tape acc.
Model 43 = same but for 8-level ASCII-plus-parity-bit, to replace Model 33 and Model 35, paper tape acc.
Several different high-speed printers like the "Ink-tronic" etc.

Gretag

The Gretag ETK-47 teleprinter, developed in Switzerland by Edgar Gretener in 1947, uses a 14-bit start-stop transmission method similar to the 5-bit code used by other teleprinters.
However, instead of a more-or-less arbitrary mapping between 5-bit codes and letters in the Latin alphabet, all characters (letters, digits, and punctuation) printed by the ETK are built from 14 basic elements on a print head, very similar to the 14 elements of a modern fourteen-segment display, each one selected independently by one of the 14 bits during transmission. Because it does not use a fixed character set, but instead builds up characters from smaller elements, the ETK printing element does not require modification to switch between Latin, Cyrillic, and Greek characters.

Telex

A global teleprinter network, called the "Telex network", was developed in the late 1920s and was used through most of the 20th century for business communications. The main difference from a standard teleprinter is that Telex includes a switched routing network, originally based on pulse-telephone dialing, which in the United States was provided by Western Union. AT&T developed a competing network called "TWX", which initially also used rotary dialing and Baudot code, carried to the customer premises as pulses of DC on a metallic copper pair. TWX later added a second, ASCII-based service using Bell 103 type modems served over lines whose physical interface was identical to regular telephone lines. In many cases, the TWX service was provided by the same telephone central office that handled voice calls, using class of service to prevent POTS customers from connecting to TWX customers.

Telex is still in use in some countries for certain applications such as shipping, news, weather reporting and military command. Many business applications have moved to the Internet as most countries have discontinued telex/TWX services.

Teletypesetter

In addition to the 5-bit Baudot code and the much later seven-bit ASCII code, there was a six-bit code known as the Teletypesetter code (TTS), used by news wire services. It was first demonstrated in 1928 and began to see widespread use in the 1950s. Through the use of "shift in" and "shift out" codes, this six-bit code could represent a full set of upper and lower case characters, digits, symbols commonly used in newspapers, and typesetting instructions such as "flush left" or "center", and even "auxiliary font" to switch to italics or bold type, and back to roman ("upper rail"). The TTS produces aligned text, taking into consideration character widths and column width, or line length.

A Model 20 Teletype machine with a paper tape punch ("reperforator") was installed at subscriber newspaper sites. Originally these machines would simply punch paper tapes, and these tapes could be read by a tape reader attached to a "Teletypesetter operating unit" installed on a Linotype machine. The "operating unit" was essentially a tape reader which actuated a mechanical box, which in turn operated the Linotype's keyboard and other controls in response to the codes read from the tape, thus creating type for printing in newspapers and magazines. This allowed higher production rates for the Linotype, and was used both locally, where the tape was first punched and then fed to the machine, and remotely, using tape transmitters and receivers. Remote use played an essential role in distributing identical content, such as syndicated columns, news agency news and classified advertising, to different publications across wide geographical areas.
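The shift-in/shift-out scheme described above can be pictured as a small state machine: each shift code simply selects which of two six-bit character tables ("rails") the following codes are looked up in. The sketch below (Python) is illustrative only; the code values and table entries are invented for the example and do not reproduce the real Teletypesetter code chart.

```python
# Illustrative decoder for a TTS-style shift-based six-bit code.
# All numeric code assignments here are hypothetical, chosen only to show the mechanism.

SHIFT   = 0x3E   # hypothetical code: select the upper-case/command rail
UNSHIFT = 0x3F   # hypothetical code: return to the lower-case rail

LOWER_RAIL = {0x01: "a", 0x02: "b", 0x03: "c"}              # letters, digits, ...
UPPER_RAIL = {0x01: "A", 0x02: "B", 0x03: "<flush left>"}   # capitals and typesetting commands

def decode(codes):
    """Translate a stream of six-bit codes, honouring shift/unshift."""
    rail, out = LOWER_RAIL, []
    for code in codes:
        if code == SHIFT:
            rail = UPPER_RAIL
        elif code == UNSHIFT:
            rail = LOWER_RAIL
        else:
            out.append(rail.get(code, "?"))
    return "".join(out)

# "a", shift, "A", a typesetting command, unshift, "b"
print(decode([0x01, 0x3E, 0x01, 0x03, 0x3F, 0x02]))  # -> "aA<flush left>b"
```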
In later years the incoming 6-bit current loop signal carrying the TTS code was connected to a minicomputer or mainframe for storage, editing, and eventual feed to a phototypesetting machine.

Teleprinters in computing

Computers used teleprinters for input and output from the early days of computing. Punched card readers and fast printers replaced teleprinters for most purposes, but teleprinters continued to be used as interactive time-sharing terminals until video displays became widely available in the late 1970s.

Users typed commands after a prompt character was printed. Printing was unidirectional; if the user wanted to delete what had been typed, further characters were printed to indicate that the previous text had been cancelled. When video displays first became available, the user interface was initially exactly the same as for an electromechanical printer; expensive and scarce video terminals could be used interchangeably with teleprinters. This was the origin of the text terminal and the command-line interface.

Paper tape was sometimes used to prepare input for the computer session off line and to capture computer output. The popular Teletype Model 33 used 7-bit ASCII code (with an eighth parity bit) instead of Baudot. The common modem communications settings, start/stop bits and parity, stem from the Teletype era.

In early operating systems such as Digital's RT-11, serial communication lines were often connected to teleprinters and were given device names starting with "tt". This and similar conventions were adopted by many other operating systems. Unix and Unix-like operating systems use the prefix "tty", for example /dev/tty0, or "pty" (for pseudo-tty), such as /dev/ptyp0. In many computing contexts, "TTY" has become the name for any text terminal, such as an external console device, a user dialing into the system on a modem on a serial port device, a printing or graphical computer terminal on a computer's serial port or the RS-232 port on a USB-to-RS-232 converter attached to a computer's USB port, or even a terminal emulator application in the window system using a pseudoterminal device.

Teleprinters were also used to record fault printout and other information in some TXE telephone exchanges.

Obsolescence of teleprinters

Although printing news, messages, and other text at a distance is still universal, the dedicated teleprinter tied to a pair of leased copper wires was made functionally obsolete by the fax, personal computer, inkjet printer, email, and the Internet.

In the 1980s, packet radio became the most common form of digital communications used in amateur radio. Soon, advanced multimode electronic interfaces such as the AEA PK-232 were developed, which could send and receive not only packet but various other modulation types, including Baudot. This made it possible for a home or laptop computer to replace teleprinters, saving money, complexity, space and the massive amount of paper which mechanical machines used. As a result, by the mid-1990s, amateur use of actual teleprinters had waned, though a core of "purists" still operate on equipment originally manufactured in the 1940s, 1950s, 1960s and 1970s.

See also
Letter-quality printer
Plan 55-A, a message switching system for telegrams
Radioteletype
Siemens and Halske T52 – the Geheimfernschreiber (secret teleprinter)

References

Further reading
"Teletype Messages Sent Through Switch Board", Popular Mechanics, April 1932.
AT&T offering two way service through switchboards
On the role of the teleprinter code in WWII

External links
A first-hand report of Teletype Corporation's early years
A Gallery of Teletype Images
History of Teletypewriter Development by R.A. Nelson
"Some Notes on Teletype Corporation"
Mass.gov: TTY explanation and government best practices for TTY use

Patents
"Telegraph printer" (Type 12 Teletype), filed June 1924, issued April 1928
"Telegraph receiver" (Type 14 Teletype), filed December 1924, issued February 1930
"Signalling system and apparatus therefor" (Type 15 Teletype) – filed July 1930, issued April 1933
"Frequency-Shift Teletypewriter" – filed August 1966, issued April 1970

History of telecommunications
Impact printing
Telegraphy
Typewriters
457719
https://en.wikipedia.org/wiki/Globus%20Toolkit
Globus Toolkit
The Globus Toolkit is an open-source toolkit for grid computing developed and provided by the Globus Alliance. On 25 May 2017 it was announced that open-source support for the project would be discontinued in January 2018, due to a lack of financial support for that work. The Globus service continues to be available to the research community under a freemium approach designed to sustain the software, with most features freely available but some restricted to subscribers.

Introduction

The Globus Toolkit contains a set of libraries and programs that provide developers of specific tools or applications with solutions to common problems encountered when building distributed system services and applications. Its components and capabilities include:
A set of service implementations addressing resource management, data movement and management, service discovery and related concerns
Tools for building web services
A standards-based security infrastructure for authentication and authorisation
Client APIs in Java, C and Python, and command-line programs
Detailed documentation on these various components

Standards implementation

The Globus Toolkit adheres to or provides implementations of the following standards:
Open Grid Services Architecture (OGSA)
Open Grid Services Infrastructure (OGSI), originally intended to form the basic "plumbing" layer for OGSA, but since superseded by WSRF and WS-Management
Web Services Resource Framework (WSRF)
Job Submission Description Language (JSDL)
Distributed Resource Management Application API (DRMAA)
WS-Management
WS-BaseNotification
SOAP
Web Services Description Language (WSDL)
Grid Security Infrastructure (GSI)

The Globus Toolkit has implementations of the OGF-defined protocols to provide:
Resource management: Grid Resource Allocation & Management Protocol (GRAM)
Information Services: Monitoring and Discovery Service (MDS)
Security Services: Grid Security Infrastructure (GSI)
Data Movement and Management: Global Access to Secondary Storage (GASS) and GridFTP

The following Globus Toolkit components are supported by the OGF-defined SAGA C++/Python API:
GRAM (2 and 5) via the SAGA job API
GridFTP via the SAGA filesystem API
Replica Location Service via the SAGA C++ Reference Implementation API

Compatible third-party software

A number of tools can function with the Globus Toolkit, including:
SAGA C++ Reference Implementation - The Simple API for Grid Applications
WebCom and WebCom-G
Nimrod tools for meta-scheduling and parametric computing
Gridbus Grid Service Broker
Grid portal software such as GridPort, OGCE, GridSphere and P-GRADE Portal
Grid Packaging Toolkit (GPT)
MPICH-G2 (Grid Enabled MPI)
Network Weather Service (NWS) (quality-of-service monitoring and statistics)
HTCondor (CPU cycle scavenging) and Condor-G (job submission)
HPC4U Middleware (fault-tolerant and SLA-aware grid middleware)
GridWay metascheduler

XML-based web services offer a way to access the diverse services and applications in a distributed environment. In 2004, Univa Corporation began providing commercial support for the Globus Toolkit using a business model similar to that of Red Hat.

Job schedulers

GRAM (Grid Resource Allocation Manager), a component of the Globus Toolkit, officially supports the following job schedulers or batch-queuing systems:
Portable Batch System, a computer software job scheduler that allocates network resources to batch jobs.
HTCondor High-Throughput Computing System, a software framework for coarse-grained distributed parallelization of computationally intensive tasks.
Platform LSF, a commercial computer software job scheduler.

Unofficial job schedulers that can be used with the Globus Toolkit:
Sun Grid Engine, an open source batch-queuing system supported by Sun Microsystems. Globus does not officially support SGE, but third parties offer methods to integrate it: the London e-Science Center has created a "Transfer-queue over Globus (TOG)" package and provides instructions on how to configure a Globus Toolkit 2, 3 or 4 server so that it can submit jobs for execution on a local Sun Grid Engine installation.
Simple Linux Utility for Resource Management (SLURM), an open source batch-queuing system originally developed at LLNL and currently managed by SchedMD. Globus can be used with SLURM via shell wrappers.

Development plans

The Globus Alliance announced a release of Globus Toolkit version 5 (GT5) in late 2009. A major change will be abandoning GRAM4 (although continuing support at least through December 2010) in favor of an enhanced GRAM2, called GRAM5, which will solve scalability issues and add features. The Reliable File Transfer (RFT) service will be replaced by a new Globus.org service. Globus.org is an online, hosted service (i.e., Software-as-a-Service) that provides higher-level, end-to-end grid capabilities, initially concentrating on reliable, high-performance, fire-and-forget data transfer. To retain the web-service functionality without technology and standards now considered obsolete, a new project called Globus Crux has been started, which expects to release an alpha version by the end of 2009. The monitoring and discovery tasks currently performed by MDS will be taken up by a new, Crux-based Integrated Information Services (IIS). No releases of the IIS are planned until sometime in 2010.

The release of GT 5.0.2 was announced on 19 July 2010. GT 5.0.3 is reported due for release in February 2011.

Use

caGrid is layered on Globus Java WS Core
Advanced Resource Connector, open source grid middleware introduced by NorduGrid

See also
gCube system
gLite

References

External links
Globus Toolkit homepage

Grid computing products
49267508
https://en.wikipedia.org/wiki/Indian%20Institute%20of%20Information%20Technology%2C%20Una
Indian Institute of Information Technology, Una
Indian Institute of Information Technology Una (IIIT Una) is one of the Indian Institutes of Information Technology, located at Saloh village, Haroli tehsil, Una district, Himachal Pradesh 177209. Established in 2014, it was recognized as an Institute of National Importance. IIIT Una is a joint venture of the Ministry of Human Resource Development (Government of India) and the Government of Himachal Pradesh, with industry partners under the public-private partnership model. The industry partners are H.P. Power Corporation and H.P. Power Transmission Corporation.

History

On 18 March 2013, the Ministry of Human Resource Development, Government of India, introduced a bill in Parliament to establish 20 new Indian Institutes of Information Technology in different parts of the country. As per the bill, MHRD established the 20 new IIITs under the public-private partnership (PPP) mode, partnering with the respective state governments and industry partners.

IIITU started admitting students in the academic year 2014–15, offering Computer Science and Engineering and Electronics and Communication Engineering, and functioned from the campus of NIT Hamirpur, which acted as its mentor institute. At the start of the academic year 2017–18, an Information Technology branch was also started at the institute.

On 9 August 2017, the Indian Institutes of Information Technology (Public-Private Partnership) Act, 2017 was passed, following which IIITU, along with the other newly established IIITs, was conferred the status of an Institute of National Importance. The act aims to generate highly competent manpower of global standards for the information technology industry and is expected to act as a major catalyst for developing new knowledge in the field of information technology. On 3 October 2017, Prime Minister of India Narendra Modi laid the foundation stone of IIITU.

On 5 July 2018, Prof. Subramaniam Selvakumar was appointed by the MHRD as the full-time director of the institute, after which IIITU no longer required mentoring from NIT Hamirpur. In March 2019, IIITU and IIT Ropar signed an MoU at Hamirpur aimed at developing ties between the two institutes; the MoU gives IIITU access to various laboratories and advanced research facilities and allows the institutes to undertake joint collaborative research projects. Prof. Vinod Yadava, Director of NIT Hamirpur, also inaugurated the websites of the 10 student associations, which included FORCE (CSE), Aavesh (ECE) and Amogh (magazine), among others.

On 29 March 2019, IIITU held its first convocation, for the students of the 2014–18 batch. On the same day IIITU signed an MoU with IIT Mandi, through which Prof. Timothy A. Gonsalves, Director of IIT Mandi, agreed to assist IIITU in developing courses based on the practicum components of the IIT Mandi curriculum.

Before the academic year 2019–20, NIT Hamirpur expressed its inability to accommodate IIIT Una students on its campus, so, for want of hostel accommodation for new incoming students, the administration sought a second campus. By the start of the academic year 2019–20, IIITU opened a second temporary campus (TC-II) in Chandpur, Haroli, Una, where all new incoming students were accommodated, while second-, third- and fourth-year students continued to have their classes at the temporary campus at NIT Hamirpur (TC-I). IIIT Una now functions from its permanent campus close to Una city, developed under Director Prof. S. Selvakumar.
The permanent campus provides modern amenities, including three hostels, a sports complex, a modern academic building, a guest house, lush green gardens, and sports facilities such as a cricket ground, a football ground, a volleyball court, a basketball court and a swimming pool.

Admissions

Admission to the institute is through JoSAA (Joint Seat Allocation Authority) and the Central Seat Allocation Board (CSAB). Students are allotted admission by JoSAA based on their Joint Entrance Examination (JEE-Main) ranks. The Indian Institute of Information Technology, Una is listed on the CSAB website under the list of participating institutes.

Academic programs

The institute presently offers B.Tech programmes in Computer Science and Engineering (CSE), Electronics and Communication Engineering (ECE) and Information Technology (IT).

Campus status

The permanent campus of IIIT Una is situated at Saloh village, Haroli tehsil, Una district, Himachal Pradesh 177209.

References

External links
Una

Universities and colleges in Himachal Pradesh
Education in Una district
Educational institutions established in 2014
2014 establishments in Himachal Pradesh
Engineering colleges in Himachal Pradesh
99209
https://en.wikipedia.org/wiki/GEOS
GEOS
GEOS may refer to:

Computer software
GEOS (8-bit operating system), an operating system originally designed for the Commodore 64
GEOS (16-bit operating system), a DOS-based graphical user interface and x86 operating system
GEOS (securities processing software), an integrated online system for the management and processing of securities
GEOS (software library), an open-source geometry engine

Other
GEOS (eikaiwa), an English conversation teaching company based in Japan
GEOS circle, an intersection of four lines that are associated with a generalized triangle
GEOS (satellite), a research satellite from ESRO (1978–1982)
GEOS (satellite series), three research satellites from NASA
Groupe GEOS, a French business consultancy