id | url | title | text |
---|---|---|---|
6008500 | https://en.wikipedia.org/wiki/CPU%20time | CPU time | CPU time (or process time) is the amount of time for which a central processing unit (CPU) was used for processing instructions of a computer program or operating system, as opposed to elapsed time, which includes for example, waiting for input/output (I/O) operations or entering low-power (idle) mode. The CPU time is measured in clock ticks or seconds. Often, it is useful to measure CPU time as a percentage of the CPU's capacity, which is called the CPU usage. CPU time and CPU usage have two main uses.
The CPU time is used to quantify the overall empirical efficiency of two functionally identical algorithms. For example, any sorting algorithm takes an unsorted list and returns a sorted list, doing so in a deterministic number of steps for a given input list. However, bubble sort and merge sort have different running time complexities, such that merge sort tends to complete in fewer steps. Without any knowledge of the workings of either algorithm, a greater CPU time for bubble sort shows that it is less efficient than merge sort for particular input data.
This type of measurement is especially useful when comparing like algorithms that are not trivial in complexity. In this case the wall time (the actual duration elapsed) is irrelevant: the computer may execute the program slower or faster depending on real-world variables such as the CPU's temperature, as well as on operating system variables such as the process's priority.
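To make this concrete, the following C sketch (not from the original article; the array size of 20,000 and the use of the C library's qsort — typically a quicksort rather than a merge sort, but likewise an O(n log n) comparison sort — are illustrative assumptions) measures the CPU time of two functionally identical sorts on the same input using the standard clock() function.
// compare_sorts.c -- illustrative only; compile with: cc compare_sorts.c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 20000  /* assumed input size, large enough to show a difference */

static void bubble_sort(int *a, size_t n) {
    for (size_t i = 0; i + 1 < n; ++i)
        for (size_t j = 0; j + 1 < n - i; ++j)
            if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
}

static int cmp_int(const void *x, const void *y) {
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

static void libc_sort(int *a, size_t n) { qsort(a, n, sizeof *a, cmp_int); }

/* Measure only the CPU time consumed by the sort, not elapsed real time. */
static double cpu_seconds(void (*sort)(int *, size_t), int *a, size_t n) {
    clock_t start = clock();
    sort(a, n);
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    int *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b);
    if (!a || !b) return 1;
    srand(42);
    for (size_t i = 0; i < N; ++i) a[i] = b[i] = rand();  /* identical inputs */

    printf("bubble sort:       %.3f s of CPU time\n", cpu_seconds(bubble_sort, a, N));
    printf("library quicksort: %.3f s of CPU time\n", cpu_seconds(libc_sort, b, N));
    free(a);
    free(b);
    return 0;
}
On typical hardware the quadratic bubble sort consumes far more CPU time than the library sort, which is exactly the kind of empirical comparison described above.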
The CPU usage is used to quantify how the processor is shared between computer programs. High CPU usage by a single program may indicate that it is highly demanding of processing power or that it is malfunctioning; for example, it may have entered an infinite loop. CPU time allows measurement of the processing power a single program requires, excluding interference such as time spent waiting for input or suspended while other programs run.
In contrast, elapsed real time (or simply real time, or wall-clock time) is the time taken from the start of a computer program until the end as measured by an ordinary clock. Elapsed real time includes I/O time, any multitasking delays, and all other types of waits incurred by the program.
Subdivision
CPU time or CPU usage can be reported either for each thread, for each process, or for the entire system. Moreover, depending on what exactly the CPU was doing, the reported values can be subdivided into the following categories (a Linux-specific sketch for reading the system-wide values follows the list):
User time is the amount of time the CPU was busy executing code in user space.
System time is the amount of time the CPU was busy executing code in kernel space. If this value is reported for a thread or process, then it represents the amount of time the kernel was doing work on behalf of the executing context, for example, after a thread issued a system call.
Idle time (for the whole system only) is the amount of time the CPU was not busy, or, otherwise, the amount of time it executed the System Idle process. Idle time actually measures unused CPU capacity.
Steal time (for the whole system only), on virtualized hardware, is the amount of time the operating system wanted to execute but was not allowed to by the hypervisor. This can happen if the physical hardware runs multiple guest operating systems and the hypervisor chooses to allocate a CPU time slot to another one.
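On Linux, the system-wide totals for these categories can be read from the first line of /proc/stat, which reports them in clock ticks (USER_HZ). The following minimal C program is a Linux-specific illustration and is not part of the original text.
// read_proc_stat.c -- Linux-specific sketch
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/stat", "r");
    if (!f) { perror("/proc/stat"); return 1; }

    unsigned long long user, nice, sys, idle, iowait, irq, softirq, steal;
    /* Aggregate line: "cpu  user nice system idle iowait irq softirq steal ..." */
    if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
               &user, &nice, &sys, &idle, &iowait, &irq, &softirq, &steal) == 8)
        printf("user=%llu system=%llu idle=%llu steal=%llu (clock ticks)\n",
               user, sys, idle, steal);
    fclose(f);
    return 0;
}
Taking two samples and dividing each category's difference by the total elapsed ticks yields the CPU usage percentage per category.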
Unix commands for CPU time
Unix command top
The Unix command top provides CPU time, priority, elapsed real time, and other information for all processes and updates it in real time.
Unix command time
The Unix command time prints CPU time and elapsed real time for a Unix process.
% gcc nextPrimeNumber.c -o nextPrimeNumber
% time ./nextPrimeNumber 30000007
Prime number greater than 30000007 is 30000023
0.327u 0.010s 0:01.15 28.6% 0+0k 0+0io 0pf+0w
This process took a total of 0.337 seconds of CPU time, out of which 0.327 seconds was spent in user space, and the final 0.010 seconds in kernel mode on behalf of the process. Elapsed real time was 1.15 seconds.
The following is the source code of the application nextPrimeNumber which was used in the above example.
// nextPrimeNumber.c
#include <stdio.h>
#include <stdlib.h>
// Trial division: returns 1 if n is prime, 0 otherwise.
int isPrimeNumber(unsigned long int n) {
    for (unsigned long int i = 2; i <= (n >> 1); ++i)
        if (n % i == 0) return 0;
    return 1;
}
int main(int argc, char *argv[]) {
    if (argc < 2) return 1;  // expects one numeric argument
    unsigned long int argument = strtoul(argv[1], NULL, 10), n = argument;
    while (!isPrimeNumber(++n));  // search upward from the argument
    printf("Prime number greater than %lu is %lu\n", argument, n);
    return 0;
}
POSIX functions clock() and getrusage()
The POSIX functions clock() and getrusage() can be used to get the CPU time consumed by any process in a POSIX environment. If the process is multithreaded, the CPU time is the sum for all threads.
On Linux, starting with kernel 2.6.26, getrusage() also accepts the flag RUSAGE_THREAD, which returns resource usage statistics for the calling thread only, as in the sketch below.
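A minimal sketch of both calls, assuming a Linux/glibc environment (RUSAGE_THREAD is Linux-specific and requires _GNU_SOURCE); the busy loop exists only so that there is some CPU time to report.
// cpu_time_posix.c -- illustrative sketch
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <sys/resource.h>

static double tv_seconds(struct timeval tv) {
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void) {
    /* Burn some CPU time so there is something to report. */
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; ++i) x += i;

    /* clock(): total CPU time of the process, in CLOCKS_PER_SEC units. */
    printf("clock():  %.3f s\n", (double)clock() / CLOCKS_PER_SEC);

    /* getrusage(): user and system time, reported separately, for the process ... */
    struct rusage self, thread;
    getrusage(RUSAGE_SELF, &self);
    printf("process:  user %.3f s, system %.3f s\n",
           tv_seconds(self.ru_utime), tv_seconds(self.ru_stime));

    /* ... or, since Linux 2.6.26, for the calling thread only. */
    getrusage(RUSAGE_THREAD, &thread);
    printf("thread:   user %.3f s, system %.3f s\n",
           tv_seconds(thread.ru_utime), tv_seconds(thread.ru_stime));
    return 0;
}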
Total CPU time
On multi-processor machines, a computer program can use two or more CPUs for processing using parallel processing scheduling. In such situations, the notion of total CPU time is used, which is the sum of CPU time consumed by all of the CPUs utilized by the computer program.
CPU time and elapsed real time
Elapsed real time is always greater than or equal to the CPU time for computer programs which use only one CPU for processing. If no wait is involved for I/O or other resources, elapsed real time and CPU time are very similar.
CPU time and elapsed real time for parallel processing technology
If a program uses parallel processing, its total CPU time can exceed its elapsed real time. (Total CPU time)/(Number of CPUs) would be the same as the elapsed real time if the workload is evenly distributed across the CPUs and no wait is involved for I/O or other resources.
Example: A software application executed on a hexa-core processor creates three Unix processes to fulfill the user requirement. Each of these three processes creates two threads, for a total of six working threads. Computation is distributed evenly across the six independent threads. If no wait for resources is involved, the total CPU time is expected to be about six times the elapsed real time, as the sketch below illustrates.
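This expectation can be checked directly with the POSIX clocks: CLOCK_PROCESS_CPUTIME_ID accumulates CPU time across all threads of a process, while CLOCK_MONOTONIC measures elapsed real time. The six-thread busy loop below is an illustrative assumption; compile with -pthread.
// cpu_vs_wall.c -- illustrative sketch
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>
#include <pthread.h>

#define THREADS 6

static void *busy(void *arg) {
    (void)arg;
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 200000000UL; ++i) x += i;  /* pure CPU work */
    return NULL;
}

static double seconds(clockid_t id) {
    struct timespec ts;
    clock_gettime(id, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    double wall0 = seconds(CLOCK_MONOTONIC);
    double cpu0  = seconds(CLOCK_PROCESS_CPUTIME_ID);

    pthread_t t[THREADS];
    for (int i = 0; i < THREADS; ++i) pthread_create(&t[i], NULL, busy, NULL);
    for (int i = 0; i < THREADS; ++i) pthread_join(t[i], NULL);

    double wall = seconds(CLOCK_MONOTONIC) - wall0;
    double cpu  = seconds(CLOCK_PROCESS_CPUTIME_ID) - cpu0;
    /* With six otherwise idle cores, cpu is expected to be roughly 6 * wall. */
    printf("elapsed real time: %.2f s, total CPU time: %.2f s (ratio %.1f)\n",
           wall, cpu, cpu / wall);
    return 0;
}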
See also
Elapsed real time
CPU
Process (computing)
System time
top
mpstat
Load (computing)
References
External links
Computer performance |
2824310 | https://en.wikipedia.org/wiki/Openfiler | Openfiler | Openfiler is an operating system that provides file-based network-attached storage (NAS) and block-based storage area network (SAN) capabilities. It was created by Xinit Systems and is based on the CentOS Linux distribution. It is free software licensed under the GNU GPLv2.
History
The Openfiler codebase was started at Xinit Systems in 2001. The company created a project and donated the codebase to it in October 2003.
The first public release of Openfiler was made in May 2004. The latest release was published in 2011.
Although there has been no formal announcement, there has been no evidence of active development of Openfiler since 2015. DistroWatch lists Openfiler as discontinued. The official website states that paid support is still available.
Criticism
Though some users have run Openfiler for years with few problems, in a 2013 article on the Spiceworks website the author recommended against using Openfiler, citing a lack of features, a lack of support, and the risk of data loss.
See also
Celerra, a commercial proprietary NAS solution; development was discontinued in early 2012
NetApp filer, a commercial proprietary filer
FreeNAS, a FreeBSD-based free and open-source NAS solution
OpenMediaVault, a Debian-based out-of-the-box Linux NAS solution developed by a former FreeNAS developer (previously named CoreNAS)
Gluster
NAS4Free, network-attached storage (NAS) server software
NexentaStor, an enterprise-level NAS software solution (Debian/OpenSolaris-based)
References
Further reading
External links
Computer storage devices
Free file transfer software
Software appliances
Network-attached storage |
1678664 | https://en.wikipedia.org/wiki/CiteULike | CiteULike | CiteULike was a web service which allowed users to save and share citations to academic papers. Based on the principle of social bookmarking, the site worked to promote and to develop the sharing of scientific references amongst researchers. In the same way that it is possible to catalog web pages (with Furl and delicious) or photographs (with Flickr), scientists could share citation information using CiteULike. Richard Cameron developed CiteULike in November 2004 and in 2006 Oversity Ltd. was established to develop and support CiteULike. In February 2019, CiteULike announced that it would be ceasing operations as of March 30, 2019.
When browsing issues of research journals, small scripts stored in bookmarks (bookmarklets) allowed one to import articles from repositories like PubMed, and CiteULike supported many more. The system then attempted to determine the article metadata (title, authors, journal name, etc.) automatically. Users could organize their libraries with freely chosen tags, and this produced a folksonomy of academic interests.
Basic principles
Initially, one added a reference to CiteULike directly from within a web browser, without needing a separate program. For common online databases like PubMed, author names, title, and other details were imported automatically. One could manually add tags for grouping of references. The web site could be used to search public references by all users or only one's own references. References could later be exported via BibTeX or EndNote to be used on local computers.
Creation of entries and definition of keywords
CiteULike provided bookmarklets to quickly add references from the web pages of the most common sites. These small scripts read the citation information from the web page and imported it into the CiteULike database for the currently logged-in user.
Sites supported for semi-automatic import included Amazon.com, arXiv.org, JSTOR, PLoS, PubMed, SpringerLink, and ScienceDirect. It was also possible although more time-consuming to add entries manually.
Entries could be tagged for easier retrieval and organisation. More frequent tags were displayed in a proportionally larger font. Tags could be clicked to call up articles containing this tag.
Sharing and exporting entries
New entries were added as public by default, which made them accessible to everyone. Entries could be added as private and were then only available to the specific user. Users of CiteULike thus automatically shared all their public entries with other users. The tags assigned to public entries contributed to the site-wide tag network. All public references could also be searched and filtered by tag.
In addition, the site provided groups that users could join themselves or by invitation. Groups were typically labs, institutions, professions, or research areas.
Online CiteULike entries could be downloaded to a local computer by means of export functions. One export format was BibTeX, the referencing system used in TeX and LaTeX. The BibTeX output could also be imported directly into Overleaf. The RIS file format was also available for commercial bibliography programs such as EndNote or Reference Manager, and it allowed import into the free Zotero bibliography extension for Firefox. Export was possible for individual entries or the entire library.
CiteULike gave access to personal or shared bibliographies directly from the web. It allowed one to see what other people had posted publicly, which tags they had added, and how they had commented and rated a paper. It was also possible to browse the public libraries of people with similar interests to discover interesting papers. Groups allowed individual users to collaborate with other users to build a library of references. The data were backed up daily from the central server.
Software
CiteULike was written mainly in Tcl, with user-contributed plugins in Python, Perl, Ruby and Tcl; some additional modules were written in Java, and data were stored using PostgreSQL. There was no API, but plugins could be contributed using Subversion. The software behind the service was closed source, but the dataset collected by the users was in the public domain.
About the site
The site stemmed from personal scientific requirements. The initial author found existing bibliography software cumbersome.
CiteULike was created in November 2004 and further developed in December 2006, running until March 2019. The site was based in the UK. The service was free and was run independently of any particular publisher with a liberal privacy policy.
See also
Reference management software
Comparison of reference management software
Social media
References
External links
CiteULike
Journal list
Inside Higher Ed "Keeping Citations Straight, Finding New Ones"
Interview with Kevin Emamy about Citeulike
Library 2.0
Social bookmarking websites
Reference management software
Social information processing
Internet properties established in 2004
Social cataloging applications
Internet properties disestablished in 2019 |
55804317 | https://en.wikipedia.org/wiki/2017%E2%80%9318%20Little%20Rock%20Trojans%20men%27s%20basketball%20team | 2017–18 Little Rock Trojans men's basketball team | The 2017–18 Little Rock Trojans men's basketball team represented the University of Arkansas at Little Rock during the 2017–18 NCAA Division I men's basketball season. The Trojans, led by second-year head coach Wes Flanigan, played their home games at the Jack Stephens Center in Little Rock, Arkansas as members of the Sun Belt Conference. They finished the season 7–25, 4–14 in Sun Belt play to finish in last place. They lost in the first round of the Sun Belt Tournament to Appalachian State.
On March 9, 2018, the school fired head coach Wes Flanigan after just two seasons where he compiled a record of 22–42. On March 29, the school hired former NBA player Darrell Walker who had spent the last two seasons as head coach of Division II Clark Atlanta University.
Previous season
The Trojans finished the 2016–17 season 15–17, 6–12 in Sun Belt play to finish in tenth place. They lost in the first round of the Sun Belt Tournament to Louisiana–Lafayette.
Roster
Schedule and results
The schedule was divided into exhibition games, the non-conference regular season, the Sun Belt Conference regular season, and the Sun Belt Tournament. (The game-by-game results table is not reproduced here.)
References
Little Rock Trojans men's basketball seasons
Little Rock |
9791362 | https://en.wikipedia.org/wiki/Sumatra%20PDF | Sumatra PDF | Sumatra PDF is a free and open-source document viewer that supports many document formats including: Portable Document Format (PDF), Microsoft Compiled HTML Help (CHM), DjVu, EPUB, FictionBook (FB2), MOBI, PRC, Open XML Paper Specification (OpenXPS, OXPS, XPS), and Comic Book Archive file (CB7, CBR, CBT, CBZ). If Ghostscript is installed, it supports PostScript files. It is developed exclusively for Microsoft Windows.
Features
Sumatra has a minimalist design, with its simplicity attained at the cost of extensive features. For rendering PDFs, it uses the MuPDF library.
Sumatra was designed for portable use, as it consists of one file with no external dependencies, making it usable from an external USB drive, needing no installation. This classifies it as a portable application to read PDF, XPS, DjVu, CHM, eBooks (ePub and Mobi) and Comic Book (CBZ and CBR) formats.
As is characteristic of many portable applications, Sumatra uses little disk space. In 2009, Sumatra 1.0 had a 1.21 MB setup file, compared to Adobe Reader 9.5's 32 MB. In January 2017, the latest version of SumatraPDF, 3.1.2, was a single 6.1 MB executable file; in comparison, Adobe Reader XI used 320 MB of disk space.
The PDF format's use restrictions were implemented in Sumatra 0.6, preventing users from printing or copying from documents that the document author restricts, a form of Digital Rights Management. Kowalczyk stated "I decided that [Sumatra] will honor PDF creator's wishes". Other open-source readers like Okular and Evince make this optional, and Debian patches software to remove these restrictions, in accord with its principles of interoperability and re-use.
Through version 1.1, printing was achieved by rasterizing each PDF page to a bitmap. This resulted in very large spool files and slow printing.
Since version 0.9.1, hyperlinks embedded in PDF documents have been supported.
Sumatra is multilingual, with 69 community-contributed translations.
Sumatra supports SyncTeX, a bidirectional method to synchronize TeX source and PDF output produced by pdfTeX or XeTeX.
Since version 0.9.4, Sumatra supports the JPEG 2000 format.
Development
Sumatra PDF is written mainly by two contributors: Krzysztof Kowalczyk and Simon Bünzli. The source code is developed in two programming languages, mostly in C++, with some components in C. The source code is provided with support for Microsoft Visual Studio.
As it was first designed when Windows XP was the current version of Windows, Sumatra initially had some incompatibilities with earlier versions of Windows. Support for Windows 95, 98 and ME has since been removed.
Initially, Kowalczyk did not release a 64-bit version of Sumatra, indicating that while it might offer slightly more speed and available memory, he believed at that time that it would greatly add to user confusion and that the benefits would not outweigh the potential costs. Nonetheless, some users requested 64-bit builds of Sumatra, and other developers compiled unofficial 64-bit builds which loaded documents faster than the 32-bit builds, though the official builds' developer requested that unofficial builds not bear the 'Sumatra' name. In October 2015, an official 64-bit version of Sumatra was released.
The Sumatra source code was originally hosted on Google Code. Due to US export legal restrictions, it was unavailable "in countries on the United States Office of Foreign Assets Control sanction list, including Cuba, Iran, North Korea, Sudan and Syria." The source code is currently hosted on GitHub.
History
The first version of Sumatra PDF, designated version 0.1, was based on Xpdf 0.2 and was released on 1 June 2006. It switched to Poppler from version 0.2. In version 0.4, it changed to MuPDF for more speed and better support for the Windows platform. Poppler remained as alternative engine for a time, and from version 0.6 to 0.8 it was automatically used to render pages that MuPDF failed to load. Poppler was removed in version 0.9, released on 10 August 2008.
In July 2009, Sumatra PDF changed its license from GNU GPLv2 to GNU GPLv3 to match the same license change on MuPDF.
Version 1.0 was released on 17 November 2009, after more than three years of cumulative development. Version 2.0 was released on 2 April 2012, over two years after the release of version 1.0.
In 2007, the first unofficial translations were released by Lars Wohlfahrt before Sumatra PDF got official multi-language support.
In October 2015, version 3.1 introduced a 64-bit version, in addition to their original 32-bit version.
Name and artwork
The author has indicated that the choice of the name "Sumatra" is not a tribute to the Sumatra island or coffee, stating that there is no particular reasoning behind the name.
The graphics design of Sumatra is a tribute to the cover of the Watchmen graphic novel by Alan Moore and Dave Gibbons.
Critical reception
Sumatra has attracted acclaim for its speed and simplicity, for being portable, its keyboard shortcuts, and its open-source development.
At one time the Free Software Foundation Europe recommended Sumatra PDF, but then removed its recommendation in February 2014, due to the presence of the non-freely licensed unrar code in Sumatra. Foundation representative Heiki Ojasild explained, "while they continue to make use of the non-free library, SumatraPDF cannot be recognised as Free Software". Unrar was eventually replaced with a free alternative in version 3.0, making it 100% free software.
See also
List of PDF software
List of portable software
References
External links
EPUB readers
Free PDF readers
Free software programmed in C++
Portable software
Software using the GPL license
Windows-only free software |
413102 | https://en.wikipedia.org/wiki/Folding%40home | Folding@home | Folding@home (FAH or F@h) is a distributed computing project aimed at helping scientists develop new therapeutics for a variety of diseases by means of simulating protein dynamics. This includes the process of protein folding and the movements of proteins, and is reliant on simulations run on volunteers' personal computers. Folding@home is currently based at Washington University in St. Louis and led by Greg Bowman, a former student of Vijay Pande.
The project utilizes graphics processing units (GPUs), central processing units (CPUs), and ARM processors like those on the Raspberry Pi for distributed computing and scientific research. The project uses statistical simulation methodology that is a paradigm shift from traditional computing methods. As part of the client–server model network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation. Volunteers can track their contributions on the Folding@home website, which makes volunteers' participation competitive and encourages long-term involvement.
Folding@home is one of the world's fastest computing systems. With heightened interest in the project as a result of the COVID-19 pandemic, the system achieved a speed of approximately 1.22 exaflops by late March 2020 and reached 2.43 exaflops by April 12, 2020, making it the world's first exaflop computing system. This level of performance from its large-scale computing network has allowed researchers to run computationally costly atomic-level simulations of protein folding thousands of times longer than formerly achieved. Since its launch on October 1, 2000, Folding@home has been involved in the production of 226 scientific research papers. Results from the project's simulations agree well with experiments.
Background
Proteins are an essential component to many biological functions and participate in virtually all processes within biological cells. They often act as enzymes, performing biochemical reactions including cell signaling, molecular transportation, and cellular regulation. As structural elements, some proteins act as a type of skeleton for cells, and as antibodies, while other proteins participate in the immune system. Before a protein can take on these roles, it must fold into a functional three-dimensional structure, a process that often occurs spontaneously and is dependent on interactions within its amino acid sequence and interactions of the amino acids with their surroundings. Protein folding is driven by the search to find the most energetically favorable conformation of the protein, i.e., its native state. Thus, understanding protein folding is critical to understanding what a protein does and how it works, and is considered a holy grail of computational biology. Despite folding occurring within a crowded cellular environment, it typically proceeds smoothly. However, due to a protein's chemical properties or other factors, proteins may misfold, that is, fold down the wrong pathway and end up misshapen. Unless cellular mechanisms can destroy or refold misfolded proteins, they can subsequently aggregate and cause a variety of debilitating diseases. Laboratory experiments studying these processes can be limited in scope and atomic detail, leading scientists to use physics-based computing models that, when complementing experiments, seek to provide a more complete picture of protein folding, misfolding, and aggregation.
Due to the complexity of proteins' conformation or configuration space (the set of possible shapes a protein can take), and limits in computing power, all-atom molecular dynamics simulations have been severely limited in the timescales that they can study. While most proteins typically fold in the order of milliseconds, before 2010, simulations could only reach nanosecond to microsecond timescales. General-purpose supercomputers have been used to simulate protein folding, but such systems are intrinsically costly and typically shared among many research groups. Further, because the computations in kinetic models occur serially, strong scaling of traditional molecular simulations to these architectures is exceptionally difficult. Moreover, as protein folding is a stochastic process (i.e., random) and can statistically vary over time, it is challenging computationally to use long simulations for comprehensive views of the folding process.
Protein folding does not occur in one step. Instead, proteins spend most of their folding time, nearly 96% in some cases, waiting in various intermediate conformational states, each a local thermodynamic free energy minimum in the protein's energy landscape. Through a process known as adaptive sampling, these conformations are used by Folding@home as starting points for a set of simulation trajectories. As the simulations discover more conformations, the trajectories are restarted from them, and a Markov state model (MSM) is gradually created from this cyclic process. MSMs are discrete-time master equation models which describe a biomolecule's conformational and energy landscape as a set of distinct structures and the short transitions between them. The adaptive sampling Markov state model method significantly increases the efficiency of simulation as it avoids computation inside the local energy minimum itself, and is amenable to distributed computing (including on GPUGRID) as it allows for the statistical aggregation of short, independent simulation trajectories. The amount of time it takes to construct a Markov state model is inversely proportional to the number of parallel simulations run, i.e., the number of processors available. In other words, it achieves linear parallelization, leading to an approximately four orders of magnitude reduction in overall serial calculation time. A completed MSM may contain tens of thousands of sample states from the protein's phase space (all the conformations a protein can take on) and the transitions between them. The model illustrates folding events and pathways (i.e., routes) and researchers can later use kinetic clustering to view a coarse-grained representation of the otherwise highly detailed model. They can use these MSMs to reveal how proteins misfold and to quantitatively compare simulations with experiments.
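As a rough illustration of the bookkeeping at the heart of a Markov state model — this is only a sketch, not Folding@home's MSMBuilder, and the discrete state labels and toy trajectories are invented — the following C program counts transitions between conformational states at a fixed lag time across independent trajectories and row-normalizes the counts into transition probabilities:
// msm_sketch.c -- illustrative sketch only
#include <stdio.h>

#define NSTATES 3
#define LAG     1

/* Two short trajectories over discrete conformational states 0..2. */
static const int traj1[] = {0, 0, 1, 1, 2, 2, 2};
static const int traj2[] = {0, 1, 1, 2, 1, 2, 2};

static void count_transitions(const int *traj, int len,
                              double counts[NSTATES][NSTATES]) {
    for (int t = 0; t + LAG < len; ++t)
        counts[traj[t]][traj[t + LAG]] += 1.0;
}

int main(void) {
    double counts[NSTATES][NSTATES] = {{0}};
    count_transitions(traj1, sizeof traj1 / sizeof traj1[0], counts);
    count_transitions(traj2, sizeof traj2 / sizeof traj2[0], counts);

    /* Row-normalize the counts into a transition probability matrix T,
     * where T[i][j] is the probability of moving from state i to j per lag. */
    for (int i = 0; i < NSTATES; ++i) {
        double row = 0.0;
        for (int j = 0; j < NSTATES; ++j) row += counts[i][j];
        for (int j = 0; j < NSTATES; ++j)
            printf("%.2f%c", row > 0 ? counts[i][j] / row : 0.0,
                   j == NSTATES - 1 ? '\n' : ' ');
    }
    return 0;
}
Because each trajectory contributes its transition counts independently, many short simulations run in parallel on volunteer machines can be aggregated into a single model, which is what makes the approach amenable to distributed computing.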
Between 2000 and 2010, the lengths of the proteins Folding@home has studied have increased by a factor of four, while its timescales for protein folding simulations have increased by six orders of magnitude. In 2002, Folding@home used Markov state models to complete approximately a million CPU days of simulations over the span of several months, and in 2011, MSMs parallelized another simulation that required an aggregate 10 million CPU hours of computing. In January 2010, Folding@home used MSMs to simulate the dynamics of the slow-folding 32-residue NTL9 protein out to 1.52 milliseconds, a timescale consistent with experimental folding rate predictions but a thousand times longer than formerly achieved. The model consisted of many individual trajectories, each two orders of magnitude shorter, and provided an unprecedented level of detail into the protein's energy landscape. In 2010, Folding@home researcher Gregory Bowman was awarded the Thomas Kuhn Paradigm Shift Award from the American Chemical Society for the development of the open-source MSMBuilder software and for attaining quantitative agreement between theory and experiment. For his work, Pande was awarded the 2012 Michael and Kate Bárány Award for Young Investigators for "developing field-defining and field-changing computational methods to produce leading theoretical models for protein and RNA folding", and the 2006 Irving Sigal Young Investigator Award for his simulation results which "have stimulated a re-examination of the meaning of both ensemble and single-molecule measurements, making Pande's efforts pioneering contributions to simulation methodology."
Examples of application in biomedical research
Protein misfolding can result in a variety of diseases including Alzheimer's disease, cancer, Creutzfeldt–Jakob disease, cystic fibrosis, Huntington's disease, sickle-cell anemia, and type II diabetes. Cellular infection by viruses such as HIV and influenza also involve folding events on cell membranes. Once protein misfolding is better understood, therapies can be developed that augment cells' natural ability to regulate protein folding. Such therapies include the use of engineered molecules to alter the production of a given protein, help destroy a misfolded protein, or assist in the folding process. The combination of computational molecular modeling and experimental analysis has the possibility to fundamentally shape the future of molecular medicine and the rational design of therapeutics, such as expediting and lowering the costs of drug discovery. The goal of the first five years of Folding@home was to make advances in understanding folding, while the current goal is to understand misfolding and related disease, especially Alzheimer's.
The simulations run on Folding@home are used in conjunction with laboratory experiments, but researchers can use them to study how folding in vitro differs from folding in native cellular environments. This is advantageous in studying aspects of folding, misfolding, and their relationships to disease that are difficult to observe experimentally. For example, in 2011, Folding@home simulated protein folding inside a ribosomal exit tunnel, to help scientists better understand how natural confinement and crowding might influence the folding process. Furthermore, scientists typically employ chemical denaturants to unfold proteins from their stable native state. It is not generally known how the denaturant affects the protein's refolding, and it is difficult to experimentally determine if these denatured states contain residual structures which may influence folding behavior. In 2010, Folding@home used GPUs to simulate the unfolded states of Protein L, and predicted its collapse rate in strong agreement with experimental results.
The large data sets from the project are freely available for other researchers to use upon request and some can be accessed from the Folding@home website. The Pande lab has collaborated with other molecular dynamics systems such as the Blue Gene supercomputer, and they share Folding@home's key software with other researchers, so that the algorithms which benefited Folding@home may aid other scientific areas. In 2011, they released the open-source Copernicus software, which is based on Folding@home's MSM and other parallelizing methods and aims to improve the efficiency and scaling of molecular simulations on large computer clusters or supercomputers. Summaries of all scientific findings from Folding@home are posted on the Folding@home website after publication.
Alzheimer's disease
Alzheimer's disease is an incurable neurodegenerative disease which most often affects the elderly and accounts for more than half of all cases of dementia. Its exact cause remains unknown, but the disease is identified as a protein misfolding disease. Alzheimer's is associated with toxic aggregations of the amyloid beta (Aβ) peptide, caused by Aβ misfolding and clumping together with other Aβ peptides. These Aβ aggregates then grow into significantly larger senile plaques, a pathological marker of Alzheimer's disease. Due to the heterogeneous nature of these aggregates, experimental methods such as X-ray crystallography and nuclear magnetic resonance (NMR) have had difficulty characterizing their structures. Moreover, atomic simulations of Aβ aggregation are highly demanding computationally due to their size and complexity.
Preventing Aβ aggregation is a promising method to developing therapeutic drugs for Alzheimer's disease, according to Naeem and Fazili in a literature review article. In 2008, Folding@home simulated the dynamics of Aβ aggregation in atomic detail over timescales of the order of tens of seconds. Prior studies were only able to simulate about 10 microseconds. Folding@home was able to simulate Aβ folding for six orders of magnitude longer than formerly possible. Researchers used the results of this study to identify a beta hairpin that was a major source of molecular interactions within the structure. The study helped prepare the Pande lab for future aggregation studies and for further research to find a small peptide which may stabilize the aggregation process.
In December 2008, Folding@home found several small drug candidates which appear to inhibit the toxicity of Aβ aggregates. In 2010, in close cooperation with the Center for Protein Folding Machinery, these drug leads began to be tested on biological tissue. In 2011, Folding@home completed simulations of several mutations of Aβ that appear to stabilize the aggregate formation, which could aid in the development of therapeutic drug therapies for the disease and greatly assist with experimental nuclear magnetic resonance spectroscopy studies of Aβ oligomers. Later that year, Folding@home began simulations of various Aβ fragments to determine how various natural enzymes affect the structure and folding of Aβ.
Huntington's disease
Huntington's disease is a neurodegenerative genetic disorder that is associated with protein misfolding and aggregation. Excessive repeats of the glutamine amino acid at the N-terminus of the huntingtin protein cause aggregation, and although the behavior of the repeats is not completely understood, it does lead to the cognitive decline associated with the disease. As with other aggregates, there is difficulty in experimentally determining its structure. Scientists are using Folding@home to study the structure of the huntingtin protein aggregate and to predict how it forms, assisting with rational drug design methods to stop the aggregate formation. The N17 fragment of the huntingtin protein accelerates this aggregation, and while there have been several mechanisms proposed, its exact role in this process remains largely unknown. Folding@home has simulated this and other fragments to clarify their roles in the disease. Since 2008, its drug design methods for Alzheimer's disease have been applied to Huntington's.
Cancer
More than half of all known cancers involve mutations of p53, a tumor suppressor protein present in every cell which regulates the cell cycle and signals for cell death in the event of damage to DNA. Specific mutations in p53 can disrupt these functions, allowing an abnormal cell to continue growing unchecked, resulting in the development of tumors. Analysis of these mutations helps explain the root causes of p53-related cancers. In 2004, Folding@home was used to perform the first molecular dynamics study of the refolding of p53's protein dimer in an all-atom simulation of water. The simulation's results agreed with experimental observations and gave insights into the refolding of the dimer that were formerly unobtainable. This was the first peer reviewed publication on cancer from a distributed computing project. The following year, Folding@home powered a new method to identify the amino acids crucial for the stability of a given protein, which was then used to study mutations of p53. The method was reasonably successful in identifying cancer-promoting mutations and determined the effects of specific mutations which could not otherwise be measured experimentally.
Folding@home is also used to study protein chaperones, heat shock proteins which play essential roles in cell survival by assisting with the folding of other proteins in the crowded and chemically stressful environment within a cell. Rapidly growing cancer cells rely on specific chaperones, and some chaperones play key roles in chemotherapy resistance. Inhibitions to these specific chaperones are seen as potential modes of action for efficient chemotherapy drugs or for reducing the spread of cancer. Using Folding@home and working closely with the Center for Protein Folding Machinery, the Pande lab hopes to find a drug which inhibits those chaperones involved in cancerous cells. Researchers are also using Folding@home to study other molecules related to cancer, such as the enzyme Src kinase, and some forms of the engrailed homeodomain: a large protein which may be involved in many diseases, including cancer. In 2011, Folding@home began simulations of the dynamics of the small knottin protein EETI, which can identify carcinomas in imaging scans by binding to surface receptors of cancer cells.
Interleukin 2 (IL-2) is a protein that helps T cells of the immune system attack pathogens and tumors. However, its use as a cancer treatment is restricted due to serious side effects such as pulmonary edema. IL-2 binds to these pulmonary cells differently than it does to T cells, so IL-2 research involves understanding the differences between these binding mechanisms. In 2012, Folding@home assisted with the discovery of a mutant form of IL-2 which is three hundred times more effective in its immune system role but carries fewer side effects. In experiments, this altered form significantly outperformed natural IL-2 in impeding tumor growth. Pharmaceutical companies have expressed interest in the mutant molecule, and the National Institutes of Health are testing it against a large variety of tumor models to try to accelerate its development as a therapeutic.
Osteogenesis imperfecta
Osteogenesis imperfecta, known as brittle bone disease, is an incurable genetic bone disorder which can be lethal. Those with the disease are unable to make functional connective bone tissue. This is most commonly due to a mutation in Type-I collagen, which fulfills a variety of structural roles and is the most abundant protein in mammals. The mutation causes a deformation in collagen's triple helix structure, which if not naturally destroyed, leads to abnormal and weakened bone tissue. In 2005, Folding@home tested a new quantum mechanical method that improved upon prior simulation methods, and which may be useful for future computing studies of collagen. Although researchers have used Folding@home to study collagen folding and misfolding, the interest stands as a pilot project compared to Alzheimer's and Huntington's research.
Viruses
Folding@home is assisting in research towards preventing some viruses, such as influenza and HIV, from recognizing and entering biological cells. In 2011, Folding@home began simulations of the dynamics of the enzyme RNase H, a key component of HIV, to try to design drugs to deactivate it. Folding@home has also been used to study membrane fusion, an essential event for viral infection and a wide range of biological functions. This fusion involves conformational changes of viral fusion proteins and protein docking, but the exact molecular mechanisms behind fusion remain largely unknown. Fusion events may consist of over a half million atoms interacting for hundreds of microseconds. This complexity limits typical computer simulations to about ten thousand atoms over tens of nanoseconds: a difference of several orders of magnitude. The development of models to predict the mechanisms of membrane fusion will assist in the scientific understanding of how to target the process with antiviral drugs. In 2006, scientists applied Markov state models and the Folding@home network to discover two pathways for fusion and gain other mechanistic insights.
Following detailed simulations from Folding@home of small cells known as vesicles, in 2007, the Pande lab introduced a new computing method to measure the topology of its structural changes during fusion. In 2009, researchers used Folding@home to study mutations of influenza hemagglutinin, a protein that attaches a virus to its host cell and assists with viral entry. Mutations to hemagglutinin affect how well the protein binds to a host's cell surface receptor molecules, which determines how infective the virus strain is to the host organism. Knowledge of the effects of hemagglutinin mutations assists in the development of antiviral drugs. As of 2012, Folding@home continues to simulate the folding and interactions of hemagglutinin, complementing experimental studies at the University of Virginia.
In March 2020, Folding@home launched a program to assist researchers around the world who are working on finding a cure and learning more about the coronavirus pandemic. The initial wave of projects simulate potentially druggable protein targets from SARS-CoV-2 virus, and the related SARS-CoV virus, about which there is significantly more data available.
Drug design
Drugs function by binding to specific locations on target molecules and causing some desired change, such as disabling a target or causing a conformational change. Ideally, a drug should act very specifically, and bind only to its target without interfering with other biological functions. However, it is difficult to precisely determine where and how tightly two molecules will bind. Due to limits in computing power, current in silico methods usually must trade speed for accuracy; e.g., use rapid protein docking methods instead of computationally costly free energy calculations. Folding@home's computing performance allows researchers to use both methods, and evaluate their efficiency and reliability. Computer-assisted drug design has the potential to expedite and lower the costs of drug discovery. In 2010, Folding@home used MSMs and free energy calculations to predict the native state of the villin protein to within 1.8 angstrom (Å) root mean square deviation (RMSD) from the crystalline structure experimentally determined through X-ray crystallography. This accuracy has implications to future protein structure prediction methods, including for intrinsically unstructured proteins. Scientists have used Folding@home to research drug resistance by studying vancomycin, an antibiotic drug of last resort, and beta-lactamase, a protein that can break down antibiotics like penicillin.
Chemical activity occurs along a protein's active site. Traditional drug design methods involve tightly binding to this site and blocking its activity, under the assumption that the target protein exists in one rigid structure. However, this approach works for approximately only 15% of all proteins. Proteins contain allosteric sites which, when bound to by small molecules, can alter a protein's conformation and ultimately affect the protein's activity. These sites are attractive drug targets, but locating them is very computationally costly. In 2012, Folding@home and MSMs were used to identify allosteric sites in three medically relevant proteins: beta-lactamase, interleukin-2, and RNase H.
Approximately half of all known antibiotics interfere with the workings of a bacteria's ribosome, a large and complex biochemical machine that performs protein biosynthesis by translating messenger RNA into proteins. Macrolide antibiotics clog the ribosome's exit tunnel, preventing synthesis of essential bacterial proteins. In 2007, the Pande lab received a grant to study and design new antibiotics. In 2008, they used Folding@home to study the interior of this tunnel and how specific molecules may affect it. The full structure of the ribosome was determined only as of 2011, and Folding@home has also simulated ribosomal proteins, as many of their functions remain largely unknown.
Potential applications in biomedical research
Many more diseases promoted by protein misfolding could benefit from Folding@home, either to discern the misfolded protein structure or the misfolding kinetics, or to assist in drug design in the future. The often fatal prion diseases are among the most significant.
Prion diseases
A prion (PrP) is a transmembrane cellular protein found widely in eukaryotic cells. In mammals, it is most abundant in the central nervous system. Although its function is unknown, its high conservation among species indicates an important role in cellular function. The conformational change from the normal prion protein (PrPc, for cellular) to the disease-causing isoform PrPSc (for the prototypical prion disease, scrapie) causes a host of diseases collectively known as transmissible spongiform encephalopathies (TSEs), including bovine spongiform encephalopathy (BSE) in cattle, Creutzfeldt-Jakob disease (CJD) and fatal insomnia in humans, and chronic wasting disease (CWD) in the deer family. The conformational change is widely accepted as the result of protein misfolding. What distinguishes TSEs from other protein misfolding diseases is their transmissible nature. The 'seeding' of the infectious PrPSc, whether arising spontaneously, inherited, or acquired via exposure to contaminated tissues, can cause a chain reaction in which normal PrPc is transformed into fibril aggregates or amyloid-like plaques consisting of PrPSc.
The molecular structure of PrPSc has not been fully characterized due to its aggregated nature, and not much is known about the mechanism of the protein misfolding or its kinetics. Using the known structure of PrPc and the results of the in vitro and in vivo studies described below, Folding@home could be valuable in elucidating how PrPSc is formed and how the infectious proteins arrange themselves to form fibrils and amyloid-like plaques, bypassing the requirement to purify PrPSc or dissolve the aggregates.
PrPc has been enzymatically dissociated from the membrane and purified, and its structure has been studied using structure characterization techniques such as NMR spectroscopy and X-ray crystallography. Post-translational murine PrPc has 231 amino acids (aa). The molecule consists of a long, unstructured amino-terminal region spanning up to aa residue 121 and a structured carboxy-terminal domain. This globular domain harbors two short sheet-forming anti-parallel β-strands (aa 128 to 130 and aa 160 to 162 in murine PrPc) and three α-helices (helix I: aa 143 to 153; helix II: aa 171 to 192; helix III: aa 199 to 226 in murine PrPc). Helices II and III are anti-parallel oriented and connected by a short loop. Their structural stability is supported by a disulfide bridge, which is parallel to both sheet-forming β-strands. These α-helices and the β-sheet form the rigid core of the globular domain of PrPc.
The disease-causing PrPSc is proteinase K-resistant and insoluble. Attempts to purify it from the brains of infected animals invariably yield heterogeneous mixtures and aggregated states that are not amenable to characterization by NMR spectroscopy or X-ray crystallography. However, it is a general consensus that PrPSc contains a higher percentage of tightly stacked β-sheets than the normal PrPc, which renders the protein insoluble and resistant to proteinase. Using techniques of cryoelectron microscopy and structural modeling based on similar common protein structures, it has been discovered that PrPSc contains β-sheets in the region from aa 81–95 to aa 171, while the carboxy-terminal structure is supposedly preserved, retaining the disulfide-linked α-helical conformation of the normal PrPc. These β-sheets form a parallel left-handed beta-helix. Three PrPSc molecules are believed to form a primary unit and therefore build the basis for the so-called scrapie-associated fibrils. The catalytic activity depends on the size of the particle: PrPSc particles which consist of only 14–28 PrPc molecules exhibit the highest rate of infectivity and conversion.
Despite the difficulty of purifying and characterizing PrPSc, the potential 'hot spots' of protein misfolding leading to the pathogenic PrPSc can be deduced from the known molecular structure of PrPc and from studies using transgenic mice and N-terminal deletion, and Folding@home could be of great value in confirming these. Studies have found that both the primary and secondary structure of the prion protein can be of significance to the conversion.
There are more than twenty mutations of the prion protein gene (PRNP) that are known to be associated with, or are directly linked to, the hereditary form of human TSEs [56], indicating that single amino acids at certain positions, likely within the carboxy domain of PrPc, can affect the susceptibility to TSEs.
The post-translational amino-terminal region of PrPc consists of residues 23–120, which make up nearly half of the amino acid sequence of full-length matured PrPc. There are two sections in the amino-terminal region that may influence conversion. First, residues 52–90 contain an octapeptide repeat region (five repeats) that likely influences the initial binding; the second section, aa 108–124, influences the actual conversion. The highly hydrophobic AGAAAAGA sequence is located between aa residues 113 and 120 and is described as a putative aggregation site, although this sequence requires its flanking parts to form fibrillar aggregates.
In the carboxy globular domain, among the three helices, studies show that helix II has a significantly higher propensity for β-strand conformation. Due to the high conformational flexibility seen between residues 114–125 (part of the unstructured N-terminal chain) and the high β-strand propensity of helix II, only moderate changes in the environmental conditions or interactions might be sufficient to induce misfolding of PrPc and subsequent fibril formation.
Other studies of NMR structures of PrPc showed that these residues (~108–189) contain most of the folded domain including both β-strands, the first two α-helices, and the loop/turn regions connecting them, but not the helix III. Small changes within the loop/turn structures of PrPc itself could be important in the conversion as well. In another study, Riek et al. showed that the two small regions of β-strand upstream of the loop regions act as a nucleation site for the conformational conversion of the loop/turn and α-helical structures in PrPc to β-sheet.
The energy threshold for the conversion is not necessarily high. The folding stability, i.e., the free energy of a globular protein in its environment, is in the range of one or two hydrogen bonds, which allows the transition to an isoform without the requirement of a high transition energy.
From the perspective of the interactions among PrPc molecules, hydrophobic interactions play a crucial role in the formation of β-sheets, a hallmark of PrPSc, as the sheets bring fragments of polypeptide chains into close proximity. Indeed, Kuznetsov and Rackovsky showed that disease-promoting mutations in human PrPc had a statistically significant tendency towards increasing local hydrophobicity.
In vitro experiments have shown that the kinetics of misfolding have an initial lag phase followed by a rapid growth phase of fibril formation. It is likely that PrPc goes through some intermediate states, such as being at least partially unfolded or degraded, before finally ending up as part of an amyloid fibril.
Patterns of participation
Like other distributed computing projects, Folding@home is an online citizen science project. In these projects non-specialists contribute computer processing power or help to analyze data produced by professional scientists. Participants receive little or no obvious reward.
Research has been carried out into the motivations of citizen scientists, and most of these studies have found that participants take part for altruistic reasons; that is, they want to help scientists and make a contribution to the advancement of their research. Many participants in citizen science have an underlying interest in the topic of the research and gravitate towards projects that are in disciplines of interest to them. Folding@home is no different in that respect. Research carried out recently on over 400 active participants revealed that they wanted to help make a contribution to research and that many had friends or relatives affected by the diseases that the Folding@home scientists investigate.
Folding@home attracts participants who are computer hardware enthusiasts. These groups bring considerable expertise to the project and are able to build computers with advanced processing power. Other distributed computing projects attract these types of participants, and such projects are often used to benchmark the performance of modified computers. This aspect of the hobby is accommodated through the competitive nature of the project: individuals and teams can compete to see who can contribute the most processing to the project.
This latest research on Folding@home involving interview and ethnographic observation of online groups showed that teams of hardware enthusiasts can sometimes work together, sharing best practice with regard to maximizing processing output. Such teams can become communities of practice, with a shared language and online culture. This pattern of participation has been observed in other distributed computing projects.
Another key observation of Folding@home participants is that many are male. This has also been observed in other distributed projects. Furthermore, many participants work in computer and technology-based jobs and careers.
Not all Folding@home participants are hardware enthusiasts. Many participants run the project software on unmodified machines and do take part competitively. Over 100,000 participants are involved in Folding@home. It is difficult to ascertain what proportion of participants are hardware enthusiasts, although, according to the project managers, the contribution of the enthusiast community is substantially larger in terms of processing power.
Performance
Supercomputer FLOPS performance is assessed by running the legacy LINPACK benchmark. This short-term testing has difficulty in accurately reflecting sustained performance on real-world tasks because LINPACK more efficiently maps to supercomputer hardware. Computing systems vary in architecture and design, so direct comparison is difficult. Despite this, FLOPS remain the primary speed metric used in supercomputing. In contrast, Folding@home determines its FLOPS using wall-clock time by measuring how much time its work units take to complete.
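A minimal sketch of that wall-clock accounting (the numbers and structure here are invented for illustration and are not the project's actual statistics code): each client's rate is the known floating-point operation count of its work unit divided by the measured completion time, and the per-client rates are summed into the project total.
// flops_estimate.c -- illustrative sketch only
#include <stddef.h>
#include <stdio.h>

struct work_unit_report {
    double flop_count;     /* floating-point operations in the work unit */
    double wall_seconds;   /* wall-clock time the volunteer's machine needed */
};

int main(void) {
    struct work_unit_report reports[] = {
        {4.2e15, 36000.0},   /* illustrative values only */
        {1.1e15,  7200.0},
        {8.5e14,  3600.0},
    };
    double total = 0.0;
    for (size_t i = 0; i < sizeof reports / sizeof reports[0]; ++i)
        total += reports[i].flop_count / reports[i].wall_seconds;  /* FLOPS per client */
    printf("estimated sustained performance: %.3g FLOPS\n", total);
    return 0;
}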
On September 16, 2007, due in large part to the participation of PlayStation 3 consoles, the Folding@home project officially attained a sustained performance level higher than one native petaFLOPS, becoming the first computing system of any kind to do so. Top500's fastest supercomputer at the time was BlueGene/L, at 0.280 petaFLOPS. The following year, on May 7, 2008, the project attained a sustained performance level higher than two native petaFLOPS, followed by the three and four native petaFLOPS milestones in August 2008 and September 28, 2008 respectively. On February 18, 2009, Folding@home achieved five native petaFLOPS, and was the first computing project to meet these five levels. In comparison, November 2008's fastest supercomputer was IBM's Roadrunner at 1.105 petaFLOPS. On November 10, 2011, Folding@home's performance exceeded six native petaFLOPS with the equivalent of nearly eight x86 petaFLOPS. In mid-May 2013, Folding@home attained over seven native petaFLOPS, with the equivalent of 14.87 x86 petaFLOPS. It then reached eight native petaFLOPS on June 21, followed by nine on September 9 of that year, with 17.9 x86 petaFLOPS. On May 11, 2016 Folding@home announced that it was moving towards reaching the 100 x86 petaFLOPS mark.
Further use grew from increased awareness of and participation in the project during the coronavirus pandemic in 2020. On March 20, 2020, Folding@home announced via Twitter that it was running with over 470 native petaFLOPS, the equivalent of 958 x86 petaFLOPS. By March 25 it reached 768 petaFLOPS, or 1.5 x86 exaFLOPS, making it the first exaFLOP computing system. On November 20, 2020, Folding@home reported only 0.2 x86 exaFLOPS due to a calculation error.
Points
Similarly to other distributed computing projects, Folding@home quantitatively assesses user computing contributions to the project through a credit system. All units from a given protein project have uniform base credit, which is determined by benchmarking one or more work units from that project on an official reference machine before the project is released. Each user receives these base points for completing every work unit, though through the use of a passkey they can receive added bonus points for reliably and rapidly completing units which are more demanding computationally or have a greater scientific priority. Users may also receive credit for their work by clients on multiple machines. This point system attempts to align awarded credit with the value of the scientific results.
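As a purely hypothetical sketch of how a benchmark-plus-bonus scheme of this kind might be computed — the formula, constants, and passkey handling below are illustrative assumptions, not Folding@home's published algorithm — faster returns before the deadline could scale the benchmarked base credit upward (compile with -lm):
// credit_sketch.c -- hypothetical illustration only
#include <stdio.h>
#include <math.h>

/* Base credit is fixed per project by benchmarking on a reference machine. */
static double award(double base_credit, double deadline_days,
                    double elapsed_days, int has_passkey) {
    if (!has_passkey || elapsed_days >= deadline_days)
        return base_credit;                       /* base points only */
    double speedup = deadline_days / elapsed_days;
    return base_credit * sqrt(speedup);           /* faster return, larger bonus */
}

int main(void) {
    printf("slow return: %.0f points\n", award(1000.0, 10.0, 9.0, 1));
    printf("fast return: %.0f points\n", award(1000.0, 10.0, 1.0, 1));
    printf("no passkey:  %.0f points\n", award(1000.0, 10.0, 1.0, 0));
    return 0;
}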
Users can register their contributions under a team, which combines the points of all its members. A user can start their own team, or they can join an existing team. In some cases, a team may have its own community-driven sources of help or recruitment such as an Internet forum. The points can foster friendly competition between individuals and teams to compute the most for the project, which can benefit the folding community and accelerate scientific research. Individual and team statistics are posted on the Folding@home website.
If a user does not form a new team, or does not join an existing team, that user automatically becomes part of a "Default" team. This "Default" team has a team number of "0". Statistics are accumulated for this "Default" team as well as for specially named teams.
Software
Folding@home software at the user's end involves three primary components: work units, cores, and a client.
Work units
A work unit is the protein data that the client is asked to process. Work units are a fraction of the simulation between the states in a Markov model. After the work unit has been downloaded and completely processed by a volunteer's computer, it is returned to Folding@home servers, which then award the volunteer the credit points. This cycle repeats automatically. All work units have associated deadlines, and if a deadline is exceeded, the user may not get credit and the unit will be automatically reissued to another participant. Because protein folding occurs serially and many work units are generated from their predecessors, this allows the overall simulation process to proceed normally if a work unit is not returned after a reasonable period of time. Due to these deadlines, the minimum system requirement for Folding@home is a Pentium 3 450 MHz CPU with Streaming SIMD Extensions (SSE). However, work units for high-performance clients have a much shorter deadline than those for the uniprocessor client, as a major part of the scientific benefit is dependent on rapidly completing simulations.
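The deadline-and-reissue behavior described above can be pictured with a small server-side bookkeeping sketch. The class names, deadline lengths, and data model below are assumptions made for illustration, not the project's actual implementation.

```python
from datetime import datetime, timedelta

class WorkUnit:
    """Hypothetical record of one outstanding work unit."""
    def __init__(self, unit_id, deadline_days):
        self.unit_id = unit_id
        self.deadline = datetime.utcnow() + timedelta(days=deadline_days)
        self.assigned_to = None
        self.completed = False

    def assign(self, volunteer, deadline_days=3):
        self.assigned_to = volunteer
        self.deadline = datetime.utcnow() + timedelta(days=deadline_days)

    def is_expired(self, now=None):
        return not self.completed and (now or datetime.utcnow()) > self.deadline

def reissue_expired(units, next_volunteer):
    # Hand any overdue, unfinished unit to another participant with a fresh deadline.
    for wu in units:
        if wu.is_expired():
            wu.assign(next_volunteer)
```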
Before public release, work units go through several quality assurance steps to keep problematic ones from becoming fully available. These testing stages include internal, beta, and advanced, before a final full release across Folding@home. Folding@home's work units are normally processed only once, except in the rare event that errors occur during processing. If this occurs for three different users, the unit is automatically pulled from distribution. The Folding@home support forum can be used to differentiate between issues arising from problematic hardware and bad work units.
Cores
Specialized molecular dynamics programs, referred to as "FahCores" and often abbreviated "cores", perform the calculations on the work unit as a background process. A large majority of Folding@home's cores are based on GROMACS, one of the fastest and most popular molecular dynamics software packages, which largely consists of manually optimized assembly language code and hardware optimizations. Although GROMACS is open-source software and there is a cooperative effort between the Pande lab and GROMACS developers, Folding@home uses a closed-source license to help ensure data validity. Less active cores include ProtoMol and SHARPEN. Folding@home has used AMBER, CPMD, Desmond, and TINKER, but these have since been retired and are no longer in active service. Some of these cores perform explicit solvation calculations in which the surrounding solvent (usually water) is modeled atom-by-atom; while others perform implicit solvation methods, where the solvent is treated as a mathematical continuum. The core is separate from the client to enable the scientific methods to be updated automatically without requiring a client update. The cores periodically create calculation checkpoints so that if they are interrupted they can resume work from that point upon startup.
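The checkpoint-and-resume pattern mentioned above can be sketched as follows; the file name, state layout, checkpoint interval, and the stub simulation step are illustrative assumptions rather than details of any actual FahCore.

```python
import os
import pickle

CHECKPOINT = "core_checkpoint.pkl"   # hypothetical checkpoint file

def load_state():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "value": 0.0}

def save_state(state):
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(state, f)

def advance_one_step(state):
    # Stand-in for one molecular dynamics step; the real work happens in the core.
    state["value"] += 1.0

state = load_state()                  # resume from the last checkpoint, if any
for step in range(state["step"], 100_000):
    advance_one_step(state)
    if step % 1_000 == 0:             # periodic checkpoint
        state["step"] = step
        save_state(state)
```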
Client
A Folding@home participant installs a client program on their personal computer. The user interacts with the client, which manages the other software components in the background. Through the client, the user may pause the folding process, open an event log, check the work progress, or view personal statistics. The computer clients run continuously in the background at a very low priority, using idle processing power so that normal computer use is unaffected. The maximum CPU use can be adjusted via client settings. The client connects to a Folding@home server and retrieves a work unit and may also download the appropriate core for the client's settings, operating system, and the underlying hardware architecture. After processing, the work unit is returned to the Folding@home servers. Computer clients are tailored to uniprocessor and multi-core processor systems, and graphics processing units. The diversity and power of each hardware architecture provides Folding@home with the ability to efficiently complete many types of simulations in a timely manner (in a few weeks or months rather than years), which is of significant scientific value. Together, these clients allow researchers to study biomedical questions formerly considered impractical to tackle computationally.
Professional software developers are responsible for most of Folding@home's code, both for the client and the server side. The development team includes programmers from Nvidia, ATI, Sony, and Cauldron Development. Clients can be downloaded only from the official Folding@home website or its commercial partners, and will only interact with Folding@home computer files. They will upload and download data with Folding@home's data servers (over port 8080, with 80 as an alternate), and the communication is verified using 2048-bit digital signatures. While the client's graphical user interface (GUI) is open-source, the client is proprietary software, with security and scientific integrity cited as the reasons.
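Verifying a downloaded payload against a 2048-bit signature might look roughly like the sketch below, which assumes an RSA scheme and uses the third-party Python cryptography package; Folding@home's actual key handling, padding, and hash choices are not documented here and may differ.

```python
# Hedged sketch: verify data against an RSA signature using the "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_download(public_key_pem: bytes, payload: bytes, signature: bytes) -> bool:
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```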
However, this rationale for using proprietary software is disputed: while the license may be legally enforceable after the fact, it does not practically prevent modification (also known as patching) of the executable binary files. Likewise, binary-only distribution does not prevent malicious modification of executable binary code, whether through a man-in-the-middle attack while the software is being downloaded via the internet, or through redistribution by a third party of binaries that have previously been modified, either in their binary state (i.e. patched) or by decompiling and recompiling them after modification. Such modifications are possible unless the binary files, and the transport channel, are signed and the recipient person or system is able to verify the digital signature, in which case unwarranted modifications should usually be detectable. Either way, because in the case of Folding@home both the input data and the output results processed by the client software are digitally signed, the integrity of the work can be verified independently of the integrity of the client software itself.
Folding@home uses the Cosm software libraries for networking. Folding@home was launched on October 1, 2000, and was the first distributed computing project aimed at bio-molecular systems. Its first client was a screensaver, which would run while the computer was not otherwise in use. In 2004, the Pande lab collaborated with David P. Anderson to test a supplemental client on the open-source BOINC framework. This client was released to closed beta in April 2005; however, the method became unworkable and was shelved in June 2006.
Graphics processing units
The specialized hardware of graphics processing units (GPUs) is designed to accelerate the rendering of 3-D graphics applications such as video games and can significantly outperform CPUs for some types of calculations. GPUs are one of the most powerful and rapidly growing computing platforms, and many scientists and researchers are pursuing general-purpose computing on graphics processing units (GPGPU). However, GPU hardware is difficult to use for non-graphics tasks and usually requires significant algorithm restructuring and an advanced understanding of the underlying architecture. Such customization is challenging, especially for researchers with limited software development resources. Folding@home uses the open-source OpenMM library, which uses a bridge design pattern with two application programming interface (API) levels to interface molecular simulation software to an underlying hardware architecture. With the addition of hardware optimizations, OpenMM-based GPU simulations need no significant modification but achieve performance nearly equal to hand-tuned GPU code, and greatly outperform CPU implementations.
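OpenMM itself is scriptable from Python, and a minimal standalone simulation looks roughly like the sketch below. The input file, force field, platform choice, and step count are illustrative assumptions and do not reflect Folding@home's production cores.

```python
from openmm import LangevinIntegrator, Platform, app, unit

pdb = app.PDBFile("protein.pdb")                      # hypothetical input structure
forcefield = app.ForceField("amber14-all.xml", "amber14/tip3pfb.xml")
system = forcefield.createSystem(pdb.topology,
                                 nonbondedMethod=app.PME,
                                 nonbondedCutoff=1.0 * unit.nanometer,
                                 constraints=app.HBonds)
integrator = LangevinIntegrator(300 * unit.kelvin, 1.0 / unit.picosecond,
                                0.002 * unit.picoseconds)
platform = Platform.getPlatformByName("OpenCL")       # GPU back end, if available
simulation = app.Simulation(pdb.topology, system, integrator, platform)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()
simulation.step(10_000)                               # run 10,000 MD steps
```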
Before 2010, the computing reliability of GPGPU consumer-grade hardware was largely unknown, and circumstantial evidence related to the lack of built-in error detection and correction in GPU memory raised reliability concerns. In the first large-scale test of GPU scientific accuracy, a 2010 study of over 20,000 hosts on the Folding@home network detected soft errors in the memory subsystems of two-thirds of the tested GPUs. These errors strongly correlated to board architecture, though the study concluded that reliable GPU computing was very feasible as long as attention is paid to the hardware traits, such as software-side error detection.
The first generation of Folding@home's GPU client (GPU1) was released to the public on October 2, 2006, delivering a 20–30 times speedup for some calculations over its CPU-based GROMACS counterparts. It was the first time GPUs had been used for either distributed computing or major molecular dynamics calculations. GPU1 gave researchers significant knowledge and experience with the development of GPGPU software, but in response to scientific inaccuracies with DirectX, on April 10, 2008 it was succeeded by GPU2, the second generation of the client. Following the introduction of GPU2, GPU1 was officially retired on June 6. Compared to GPU1, GPU2 was more scientifically reliable and productive, ran on ATI and CUDA-enabled Nvidia GPUs, and supported more advanced algorithms, larger proteins, and real-time visualization of the protein simulation. Following this, the third generation of Folding@home's GPU client (GPU3) was released on May 25, 2010. While backward compatible with GPU2, GPU3 was more stable, efficient, and flexible in its scientific abilities, and used OpenMM on top of an OpenCL framework. Although these GPU3 clients did not natively support the operating systems Linux and macOS, Linux users with Nvidia graphics cards were able to run them through the Wine software application. GPUs remain Folding@home's most powerful platform in FLOPS. As of November 2012, GPU clients account for 87% of the entire project's x86 FLOPS throughput.
Native support for Nvidia and AMD graphics cards under Linux was introduced with FahCore 17, which uses OpenCL rather than CUDA.
PlayStation 3
From March 2007 until November 2012, Folding@home took advantage of the computing power of PlayStation 3s. At the time of its inception, its main streaming Cell processor delivered a 20 times speed increase over PCs for some calculations, processing power which could not be found on other systems such as the Xbox 360. The PS3's high speed and efficiency introduced other opportunities for worthwhile optimizations according to Amdahl's law, and significantly changed the tradeoff between computing efficiency and overall accuracy, allowing the use of more complex molecular models at little added computing cost. This allowed Folding@home to run biomedical calculations that would have been otherwise infeasible computationally.
The PS3 client was developed in a collaborative effort between Sony and the Pande lab and was first released as a standalone client on March 23, 2007. Its release made Folding@home the first distributed computing project to use PS3s. On September 18 of the following year, the PS3 client became a channel of Life with PlayStation on its launch. In the types of calculations it can perform, at the time of its introduction, the client fit in between a CPU's flexibility and a GPU's speed. However, unlike clients running on personal computers, users were unable to perform other activities on their PS3 while running Folding@home. The PS3's uniform console environment made technical support easier and made Folding@home more user friendly. The PS3 also had the ability to stream data quickly to its GPU, which was used for real-time atomic-level visualizing of the current protein dynamics.
On November 6, 2012, Sony ended support for the Folding@home PS3 client and other services available under Life with PlayStation. Over its lifetime of five years and seven months, more than 15 million users contributed over 100 million hours of computing to Folding@home, greatly assisting the project with disease research. Following discussions with the Pande lab, Sony decided to terminate the application. Pande considered the PlayStation 3 client a "game changer" for the project.
Multi-core processing client
Folding@home can use the parallel computing abilities of modern multi-core processors. The ability to use several CPU cores simultaneously allows completing the full simulation far faster. Working together, these CPU cores complete single work units proportionately faster than the standard uniprocessor client. This method is scientifically valuable because it enables much longer simulation trajectories to be performed in the same amount of time, and reduces the traditional difficulties of scaling a large simulation to many separate processors. A 2007 publication in the Journal of Molecular Biology relied on multi-core processing to simulate the folding of part of the villin protein approximately 10 times longer than was possible with a single-processor client, in agreement with experimental folding rates.
In November 2006, first-generation symmetric multiprocessing (SMP) clients were publicly released for open beta testing, referred to as SMP1. These clients used Message Passing Interface (MPI) communication protocols for parallel processing, as at that time the GROMACS cores were not designed to be used with multiple threads. This was the first time a distributed computing project had used MPI. Although the clients performed well in Unix-based operating systems such as Linux and macOS, they were troublesome under Windows. On January 24, 2010, SMP2, the second generation of the SMP clients and the successor to SMP1, was released as an open beta and replaced the complex MPI with a more reliable thread-based implementation.
SMP2 supports a trial of a special category of bigadv work units, designed to simulate proteins that are unusually large and computationally intensive and have a great scientific priority. These units originally required a minimum of eight CPU cores, which was raised to sixteen on February 7, 2012. Along with these added hardware requirements over standard SMP2 work units, they require more system resources such as random-access memory (RAM) and Internet bandwidth. In return, users who run these are rewarded with a 20% increase over SMP2's bonus point system. The bigadv category allows Folding@home to run especially demanding simulations for long times that had formerly required the use of supercomputing clusters and could not be performed anywhere else on Folding@home. Many users with hardware able to run bigadv units later had their hardware setup deemed ineligible for bigadv work units when CPU core minimums were increased, leaving them only able to run the normal SMP work units. This frustrated many users who had invested significant amounts of money in hardware for the program, only to have it become obsolete for bigadv purposes shortly afterward. As a result, Pande announced in January 2014 that the bigadv program would end on January 31, 2015.
V7
The V7 client is the seventh and latest generation of the Folding@home client software, and is a full rewrite and unification of the prior clients for Windows, macOS, and Linux operating systems. It was released on March 22, 2012. Like its predecessors, V7 can run Folding@home in the background at a very low priority, allowing other applications to use CPU resources as they need. It is designed to make the installation, start-up, and operation more user-friendly for novices, and offer greater scientific flexibility to researchers than prior clients. V7 uses Trac for managing its bug tickets so that users can see its development process and provide feedback.
V7 consists of four integrated elements. The user typically interacts with V7's open-source GUI, named FAHControl. This has Novice, Advanced, and Expert user interface modes, and has the ability to monitor, configure, and control many remote folding clients from one computer. FAHControl directs FAHClient, a back-end application that in turn manages each FAHSlot (or slot). Each slot acts as replacement for the formerly distinct Folding@home v6 uniprocessor, SMP, or GPU computer clients, as it can download, process, and upload work units independently. The FAHViewer function, modeled after the PS3's viewer, displays a real-time 3-D rendering, if available, of the protein currently being processed.
Google Chrome
In 2014, a client for the Google Chrome and Chromium web browsers was released, allowing users to run Folding@home in their web browser. The client used Google's Native Client (NaCl) feature on Chromium-based web browsers to run the Folding@home code at near-native speed in a sandbox on the user's machine. Due to the phasing out of NaCL and changes at Folding@home, the web client was permanently shut down in June 2019.
Android
In July 2015, a client for Android mobile phones was released on Google Play for devices running Android 4.4 KitKat or newer.
On February 16, 2018 the Android client, which was offered in cooperation with Sony, was removed from Google Play. Plans were announced to offer an open source alternative in the future.
Comparison to other molecular simulators
Rosetta@home is a distributed computing project aimed at protein structure prediction and is one of the most accurate tertiary structure predictors. The conformational states from Rosetta's software can be used to initialize a Markov state model as starting points for Folding@home simulations. Conversely, structure prediction algorithms can be improved from thermodynamic and kinetic models and the sampling aspects of protein folding simulations. As Rosetta only tries to predict the final folded state, and not how folding proceeds, Rosetta@home and Folding@home are complementary and address very different molecular questions.
Anton is a special-purpose supercomputer built for molecular dynamics simulations. In October 2011, Anton and Folding@home were the two most powerful molecular dynamics systems. Anton is unique in its ability to produce single ultra-long computationally costly molecular trajectories, such as one in 2010 which reached the millisecond range. These long trajectories may be especially helpful for some types of biochemical problems. However, Anton does not use Markov state models (MSM) for analysis. In 2011, the Pande lab constructed a MSM from two 100-µs Anton simulations and found alternative folding pathways that were not visible through Anton's traditional analysis. They concluded that there was little difference between MSMs constructed from a limited number of long trajectories or one assembled from many shorter trajectories. In June 2011 Folding@home added sampling of an Anton simulation in an effort to better determine how its methods compare to Anton's. However, unlike Folding@home's shorter trajectories, which are more amenable to distributed computing and other parallelizing methods, longer trajectories do not require adaptive sampling to sufficiently sample the protein's phase space. Due to this, it is possible that a combination of Anton's and Folding@home's simulation methods would provide a more thorough sampling of this space.
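At its simplest, constructing a Markov state model from trajectory data amounts to counting transitions between discretized states at a fixed lag time and row-normalizing the counts, as in the toy NumPy sketch below; real MSM construction (state clustering, lag-time selection, statistical estimation) is considerably more involved.

```python
import numpy as np

def transition_matrix(trajectories, n_states, lag=1):
    """Count state-to-state transitions at the given lag and row-normalize."""
    counts = np.zeros((n_states, n_states))
    for traj in trajectories:                 # each traj is a sequence of state indices
        for i, j in zip(traj[:-lag], traj[lag:]):
            counts[i, j] += 1
    counts += 1e-12                           # avoid division by zero for unvisited states
    return counts / counts.sum(axis=1, keepdims=True)

# Two short hypothetical trajectories over three states.
T = transition_matrix([[0, 0, 1, 2, 2, 1], [1, 2, 2, 2, 0]], n_states=3)
print(T)
```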
See also
BOINC
DreamLab, for the use on Smartphones
Foldit
List of distributed computing projects
Comparison of software for molecular mechanics modeling
Molecular modeling on GPUs
SETI@home
Storage@home
Molecule editor
Volunteer computing
World Community Grid
References
Sources
External links
Bioinformatics
Computational biology
Computational chemistry
Computer-related introductions in 2000
Cross-platform software
Data mining and machine learning software
Distributed computing projects
Hidden Markov models
Mathematical and theoretical biology
Medical technology
Medical research organizations
Molecular dynamics software
Molecular modelling
Molecular modelling software
PlayStation 3 software
Proprietary cross-platform software
Protein folds
Protein structure
Simulation software
Science software for Linux
Science software for MacOS
Science software for Windows
Stanford University
Washington University in St. Louis |
66771654 | https://en.wikipedia.org/wiki/Sniffer%20%28protocol%20analyzer%29 | Sniffer (protocol analyzer) | The Sniffer was a computer network packet and protocol analyzer developed and first sold in 1986 by Network General Corporation of Mountain View, CA. By 1994 the Sniffer had become the market leader in high-end protocol analyzers. According to SEC 10-K filings and corporate annual reports, between 1986 and March 1997 about $933M worth of Sniffers and related products and services had been sold as tools for network managers and developers.
The Sniffer was the antecedent of several generations of network protocol analyzers, of which the current most popular is Wireshark.
Sniffer history
The Sniffer was the first product of Network General Corporation, founded on May 13, 1986 by Harry Saal and Len Shustek to develop and market network protocol analyzers. The inspiration was an internal test tool that had been developed within Nestar Systems, a personal computer networking company founded in October 1978 by Saal and Shustek along with Jim Hinds and Nick Fortis. In 1982 engineers John Rowlands and Chris Reed at Nestar’s UK subsidiary Zynar Ltd developed an ARCNET promiscuous packet receiver and analyzer called TART (“Transmit and Receive Totaliser”) for use as an internal engineering test tool. It used custom hardware, and software for an IBM PC written in a combination of BASIC and 8086 assembly code. When Nestar was acquired by Digital Switch Corporation (now DSC Communications) of Plano, Texas in 1986, Saal and Shustek received the rights to TART.
At Network General, Saal and Shustek initially sold TART as the “R-4903 ARCNET Line Analyzer (‘The Sniffer’)”. They then reengineered TART for IBM’s Token Ring network hardware, created a different user interface with software written in C, and began selling it as The Sniffer™ in December of 1986. The company had four employees at the end of that year.
In April 1987 the company released an Ethernet version of the Sniffer, and in October, versions for ARCNET, StarLAN, and IBM PC Network Broadband. Protocol interpreters were written for about 100 network protocols at various levels of the protocol stack, and customers were given the ability to write their own interpreters. The product line gradually expanded to include the Distributed Sniffer System for multiple remote network segments, the Expert Sniffer for advanced problem diagnosis, and the Watchdog for simple network monitoring.
Corporate history
Between inception and the end of 1988, Network General sold $13.7M worth of Sniffers and associated services. Financing was initially provided only by the founders until an investment of $2M by TA Associates in December 1987. On February 6, 1989 the company, which had 29 employees at the time, raised $22M with a public stock offering of 1,900,000 shares on NASDAQ as NETG. On August 3, 1989, they sold an additional 1,270,000 shares in a secondary offering, and on April 7, 1992 an additional 2,715,000 shares in a third offering.
In December 1989, Network General bought Legend Software, a one-person company in New Jersey that had been founded by Dan Hansen. Their product was a network monitor called LAN Patrol, which was enhanced, rebranded, and sold by Network General as WatchDog.
By 1995 Network General had sold Sniffer-related products totaling $631M at an average gross margin of 77%. It had almost 1000 employees and was selling about 1000 Sniffers a month.
In December 1997 Network General merged with McAfee Associates (MCAF) to form Network Associates, in a stock swap deal valued at $1.3B. Weeks later, Network Associates bought Pretty Good Privacy, Inc. (“PGP”), the encryption company founded in 1991 by Phil Zimmermann, for $35M in cash. Saal and Shustek left the company shortly thereafter.
In 2002, much of the PGP product line was sold to the newly formed PGP Corporation for an undisclosed amount. It was subsequently acquired by Symantec in 2010.
In mid-2004, Network Associates sold off the Sniffer technology business to investors led by Silver Lake Partners and Texas Pacific Group for $275M in cash, creating a new Network General Corporation. That same year, Network Associates readopted its founder’s name and became McAfee Inc. In September 2007, the new Network General was acquired by NetScout Systems for $205M. Netscout marketed "Sniffer Portable" in 2013, and in 2018 they divested their handheld network test tool business, including the Sniffer, to StoneCalibre.
Intellectual property rights
Network General, prior to the merger with McAfee, had filed no patents on the Sniffer. The source code and some of the hardware designs were protected by trade secrets. Most of that was eventually acquired by NetScout in the 2007 acquisition.
Network General Corporation applied for a trademark to “Sniffer” used in the context of “analyzing and testing digital traffic operations in local area networks” on May 30, 1989. It was accepted and registered on May 28, 1991. Network General protected its use with, for example, a full-page ad in the Wall Street Journal on October 4, 1995. As of 2021 that trademark registration is still active, and is now owned by NetScout Systems Inc. Network General also owned the trademarks to “Expert Sniffer”, “TeleSniffer”, and “Distributed Sniffer System”, all of which have expired.
The original Nestar ARCNET Sniffer
The ARCNET Sniffer developed as an internal test tool by Zynar used the IBM PC ARCNET Network Interface Card developed by Nestar for the PLAN networking systems. That board used the COM9026 integrated ARCNET controller from Standard Microsystems Corporation, which had been developed in collaboration with Datapoint.
There was no promiscuous mode in the SMC chip that would allow all packets to be received regardless of the destination address. So to create the Sniffer, a daughterboard was developed that intercepted the receive data line to the chip and manipulated the data so that every packet looked like a broadcast and was received by the chip.
Since the ability to receive all packets was viewed as a violation of network privacy, the circuitry implementing it was kept secret, and the daughterboard was potted in black epoxy to discourage reverse-engineering.
The source code of the original TART/Sniffer BASIC and assembler program is available on GitHub.
The 1986 Network General Sniffer
The Sniffer was a promiscuous mode packet receiver, which means it received a copy of all network packets without regard to what computer they were addressed to. The packets were filtered, analyzed using what is now sometimes called Deep Packet Inspection, and stored for later examination.
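The core idea of promiscuous capture, receiving every frame on the wire and decoding its headers, can be illustrated with a minimal Linux-only Python sketch using a raw socket. This is a modern stand-in rather than the Sniffer's own implementation, and it requires root privileges.

```python
import socket
import struct

ETH_P_ALL = 0x0003          # capture every protocol, not just IP
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))

def mac(addr: bytes) -> str:
    return ":".join(f"{b:02x}" for b in addr)

for _ in range(5):          # capture five frames, then stop
    frame, _ = sock.recvfrom(65535)
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])   # Ethernet header
    print(f"{mac(src)} -> {mac(dst)}  ethertype=0x{ethertype:04x}  len={len(frame)}")
```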
The Sniffer was implemented above Microsoft’s MS-DOS operating system, and used a 40-line, 80-character text-only display. The first version, the PA-400 protocol analyzer for Token-Ring networks, was released on a Compaq Portable II “luggable” computer that had an Intel 80286 processor, 640 KB of RAM, a 20 MB internal hard disk, a 5 ¼” floppy disk drive, and a 9” monochrome CRT screen. The retail price of the Sniffer in unit quantities was $19,995.
The two major modes of operation were as follows (a short sketch of the capture mode appears after the list):
“capture”, in which
packets are captured, stored, counted, and summarized
filters control which packets are captured
triggers control when capture should stop, perhaps because a sought-after network error condition had occurred
“display”, in which
packets are analyzed and interpreted
filters control which packets are displayed
options control which aspects of the packets are displayed
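A hedged, pseudocode-style sketch of the capture mode is shown below: a filter predicate decides which packets are stored, and a trigger predicate decides when capture stops. The function names and buffer size are illustrative assumptions.

```python
def capture(packet_source, keep, stop_when, buffer_size=10_000):
    """Store packets passing the filter until the trigger fires or the buffer fills."""
    buffer = []
    for packet in packet_source:
        if keep(packet):                      # capture filter
            buffer.append(packet)
            if len(buffer) >= buffer_size:
                break
        if stop_when(packet):                 # trigger, e.g. a sought-after error frame
            break
    return buffer

# Hypothetical usage: keep frames longer than 64 bytes, stop on a CRC-error flag.
# frames = capture(source, keep=lambda p: len(p) > 64, stop_when=lambda p: p.crc_error)
```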
Navigation of the extensive menu system on the character-mode display was through a variation of Miller Columns that were originally created by Mark S Miller at Datapoint Corporation for their file browser. As the Sniffer manual described, “The screen shows you three panels, arranged from left to right. Immediately to the left of your current (highlighted) position is the node you just came from. Above and below you in the center panel are alternative nodes that are also reachable from the node to your left… To your right are nodes reachable from the node you're now on.”
Pressing F10 initiated capture and a real-time display of activity.
When capture ended, packets were analyzed and displayed in one or more of the now-standard three synchronized vertical windows: multiple packet summary, single packet decoded detail, and raw numerical packet data. Highlighting linked the selected items in each window.
In the multiple-packet summary, the default display was of information at the highest level of the protocol stack present in that packet. Other displays could be requested using the “display options” menu.
The translation of data at a particular level of the network protocol stack into user-friendly text was the job of a “protocol interpreter”, or PI. Network General provided over 100 PIs for commonly used protocols of the day.
Decoding higher protocol levels often required the interpreter to maintain state information about connections so that subsequent packets could be properly interpreted. That was implemented with a combination of locally cached data within the protocol interpreter, and the ability to look back at earlier packets stored in the capture buffer.
Sniffer customers could write their own protocol interpreters to decode new or rare protocols not supported by Network General. Interpreters were written in C and linked with the rest of the Sniffer modules to create a new executable program. The procedure for creating new PIs was documented in April 1987 as part of Sniffer version 1.20.
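The real protocol interpreters were C modules linked into the Sniffer executable, but the general idea, decoding one protocol layer and keeping per-connection state so later packets can be interpreted in context, can be sketched in Python; the field selection and state model below are simplified assumptions.

```python
import struct

connection_state = {}        # (src, dst) -> number of packets seen so far on that flow

def interpret_ipv4(packet: bytes) -> str:
    """Decode a few IPv4 header fields and annotate with simple per-flow state."""
    _version_ihl, _tos, total_len = struct.unpack("!BBH", packet[:4])
    ttl, proto = packet[8], packet[9]
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    key = (src, dst)
    connection_state[key] = connection_state.get(key, 0) + 1
    return (f"IPv4 {src} -> {dst} proto={proto} ttl={ttl} len={total_len} "
            f"(packet #{connection_state[key]} on this flow)")
```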
In addition to supporting many network protocols, there were versions of the Sniffer that collected data from the major local area networks in use in the 1980s and early 1990s:
IBM Token-Ring
Token Bus
Ethernet (thick, thin, twisted pair)
Datapoint ARCnet
Starlan
AppleTalk
Corvus Omninet
FDDI
ISDN
Frame Relay
Synchronous Data Link Control (SDLC)
Asynchronous Transfer Mode (ATM)
X.25
IBM PC Network (Sytek)
Competitors
Even in the early years, the Sniffer had competition, at least for some aspects of the product. Several were, like the Sniffer, ready-to-use packaged instruments:
Excelan's 1984 Nutcracker, and its 1986 LANalyzer
Communications Machinery Corporation's DRN-1700 LanScan Ethernet Monitor
Hewlett-Packard's HP-4972A LAN Protocol Analyzer
Digital Equipment Corporation's LAN Traffic Monitor
Tektronix's TMA802 Media Analyzer
There were also several software-only packet monitors and decoders, often running on Unix, and often with only a command-line user interface:
tcpdump, using the Berkeley Packet Filter and other capture mechanisms provided by the operating system
LANWatch, originally from FTP Software
See also
Comparison of packet analyzers
Wireshark
tcpdump
References
External links
"The Ancient History of Computers and Network Sniffers" (Sharkfest 2021 keynote talk) -
Network analyzers |
53723 | https://en.wikipedia.org/wiki/Digital%20media | Digital media | Digital media means any communication media that operate with the use of any of various encoded machine-readable data formats. Digital media can be created, viewed, distributed, modified, listened to, and preserved on a digital electronics device. Digital can be defined as any data represented by a series of digits, while media refers to methods of broadcasting or communicating this information. Together, digital media refers to mediums of digitized information broadcast through a screen and/or a speaker. This also includes text, audio, video, and graphics that are transmitted over the internet for viewing or listening.
Digital media platforms such as YouTube, Vimeo, and Twitch accounted for 27.9 billion hours of viewership in 2020.
Digital media
Examples of digital media include software, digital images, digital video, video games, web pages and websites, social media, digital data and databases, digital audio such as MP3, electronic documents and electronic books. Digital media often contrasts with print media, such as printed books, newspapers and magazines, and other traditional or analog media, such as photographic film, audio tapes or video tapes.
Digital media has had a significantly broad and complex impact on society and culture. Combined with the Internet and personal computing, digital media has caused disruptive innovation in publishing, journalism, public relations, entertainment, education, commerce and politics. Digital media has also posed new challenges to copyright and intellectual property laws, fostering an open content movement in which content creators voluntarily give up some or all of their legal rights to their work. The ubiquity of digital media and its effects on society suggest that we are at the start of a new era in industrial history, called the Information Age, perhaps leading to a paperless society in which all media are produced and consumed on computers. However, challenges to a digital transition remain, including outdated copyright laws, censorship, the digital divide, and the spectre of a digital dark age, in which older media becomes inaccessible to new or upgraded information systems.
History
Machine-readable codes and information were first conceptualized by Charles Babbage in the early 1800s. Babbage imagined that these codes would provide instructions for his Difference Engine and Analytical Engine, machines that Babbage had designed to solve the problem of error in calculations.
Between 1842 and 1843, Ada Lovelace, a mathematician, wrote the first instructions for calculating numbers on Babbage's engines. Lovelace's instructions are now believed to be the first computer program.
Although the machines were designed to perform analysis tasks, Lovelace anticipated the possible social impact of computers and programming, writing: "For, in so distributing and combining the truths and the formulae of analysis that they may become most easily and rapidly amenable to the mechanical combinations of the engine, the relations and the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated […] there are in all extensions of human power, or additions to human knowledge, various collateral influences, besides the main and primary object attained." Other early machine-readable media include instructions for pianolas and weaving machines.
It is estimated that in the year 1986 less than 1% of the world's media storage capacity was digital, and in 2007 it was already 94%. The year 2002 is assumed to be the year when humankind was able to store more information in digital than in analog media (the "beginning of the digital age").
Digital computers
Though they used machine-readable media, Babbage's engines, player pianos, jacquard looms and many other early calculating machines were themselves analog computers, with physical, mechanical parts. The first truly digital media came into existence with the rise of digital computers. Digital computers use binary code and Boolean logic to store and process information, allowing one machine in one configuration to perform many different tasks. The first modern, programmable, digital computers, the Manchester Mark 1 and the EDSAC, were independently invented between 1948 and 1949. Though different in many ways from modern computers, these machines had digital software controlling their logical operations. They were encoded in binary, a system of ones and zeroes that are combined to make hundreds of characters. The 1s and 0s of binary are the "digits" of digital media.
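As a small illustration of that encoding, the bytes of a short text string can be displayed as the ones and zeroes a digital computer actually stores (Python is used here purely for illustration).

```python
message = "Hi"
bits = " ".join(f"{byte:08b}" for byte in message.encode("utf-8"))
print(bits)   # 01001000 01101001
```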
"As We May Think"
While digital media did not come into common use until the late 20th century, the conceptual foundation of digital media is traced to the work of scientist and engineer Vannevar Bush and his celebrated essay "As We May Think," published in The Atlantic Monthly in 1945. Bush envisioned a system of devices that could be used to help scientists, doctors, historians and others, store, analyze and communicate information. Calling this then-imaginary device a "memex", Bush wrote:
The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English long bow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex. First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.
Bush hoped that the creation of this memex would be the work of scientists after World War II. Though the essay predated digital computers by several years, "As We May Think," anticipated the potential social and intellectual benefits of digital media and provided the conceptual framework for digital scholarship, the World Wide Web, wikis and even social media. It was recognized as a significant work even at the time of its publication.
Impact
The digital revolution
Since the 1960s, computing power and storage capacity have increased exponentially, largely as a result of MOSFET scaling which enables MOS transistor counts to increase at a rapid pace predicted by Moore's law. Personal computers and smartphones put the ability to access, modify, store and share digital media in the hands of billions of people. Many electronic devices, from digital cameras to drones have the ability to create, transmit and view digital media. Combined with the World Wide Web and the Internet, digital media has transformed 21st century society in a way that is frequently compared to the cultural, economic and social impact of the printing press. The change has been so rapid and so widespread that it has launched an economic transition from an industrial economy to an information-based economy, creating a new period in human history known as the Information Age or the digital revolution.
The transition has created some uncertainty about definitions. Digital media, new media, multimedia, and similar terms all have a relationship to both the engineering innovations and cultural impact of digital media. The blending of digital media with other media, and with cultural and social factors, is sometimes known as new media or "the new media." Similarly, digital media seems to demand a new set of communications skills, called transliteracy, media literacy, or digital literacy. These skills include not only the ability to read and write—traditional literacy—but the ability to navigate the Internet, evaluate sources, and create digital content. The idea that we are moving toward a fully digital, paperless society is accompanied by the fear that we may soon—or currently—be facing a digital dark age, in which older media are no longer accessible on modern devices or using modern methods of scholarship. Digital media has a significant, wide-ranging and complex effect on society and culture.
A senior engineer at Motorola named Martin Cooper made the first mobile phone call on April 3, 1973. He decided the first call should be to a rival telecommunications company, saying "I'm speaking via a mobile phone". However, the first commercial mobile phone was released in 1983 by Motorola. In the early 1990s Nokia rose to prominence, with its Nokia 1011 being the first mass-produced mobile phone. The Nokia Communicator 9000 became the first smartphone, equipped with a 24 MHz Intel CPU and 8 MB of RAM. Smartphone use has increased greatly over the years; as of 2019 the countries with the most users were China with over 850 million, India with over 350 million, and, in third place, the United States with about 260 million. Android and iOS dominate the smartphone market: a study by Gartner found that in 2016 about 88% of smartphones worldwide ran Android, while iOS had a market share of about 12%. About 85% of mobile market revenue came from mobile games.
The impact of the digital revolution can also be assessed by exploring the number of worldwide mobile smart device users. This can be split into two categories: smartphone users and smart tablet users. There are currently 2.32 billion smartphone users across the world, a figure expected to exceed 2.87 billion by 2020. Smart tablet users reached a total of 1 billion in 2015, 15% of the world's population.
These statistics evidence the impact of digital media communications today. Also relevant is the fact that the number of smart device users is rising rapidly while the number of functional uses increases daily. A smartphone or tablet can be used for hundreds of daily needs. There are currently over 1 million apps on the Apple App Store. These are all opportunities for digital marketing efforts. A smartphone user is exposed to digital advertising every time they open their Apple or Android device. This further evidences the digital revolution and its impact. In total, 13 billion dollars has been paid out to app developers over the years. This growth has fueled the development of millions of software applications, most of which are able to generate income via in-app advertising. Gross revenue for 2020 is projected to be about $189 million.
Disruption in industry
Compared with print media, the mass media, and other analog technologies, digital media are easy to copy, store, share and modify. This quality of digital media has led to significant changes in many industries, especially journalism, publishing, education, entertainment, and the music business. The overall effect of these changes is so far-reaching that it is difficult to quantify. For example, in movie-making, the transition from analog film cameras to digital cameras is nearly complete. The transition has economic benefits to Hollywood, making distribution easier and making it possible to add high-quality digital effects to films. At the same time, it has affected the analog special effects, stunt, and animation industries in Hollywood. It has imposed painful costs on small movie theaters, some of which did not or will not survive the transition to digital. The effect of digital media on other media industries is similarly sweeping and complex.
Between 2000 and 2015, print newspaper advertising revenue fell from $60 billion to nearly $20 billion. Even Sunday, one of the most popular days for papers, has seen a 9% circulation decrease, the lowest level since 1945.
In journalism, digital media and citizen journalism have led to the loss of thousands of jobs in print media and the bankruptcy of many major newspapers. But the rise of digital journalism has also created thousands of new jobs and specializations. E-books and self-publishing are changing the book industry, and digital textbooks and other media-inclusive curricula are changing primary and secondary education.
In academia, digital media has led to a new form of scholarship, also called digital scholarship, making open access and open science possible thanks to the low cost of distribution. New fields of study have grown, such as digital humanities and digital history. It has changed the way libraries are used and their role in society. Every major media, communications and academic endeavor is facing a period of transition and uncertainty related to digital media.
Often the magazine or publisher has a digital edition, which refers to an electronically formatted version identical to the print version. This offers a large cost benefit to the publisher, as half of traditional publishers' costs come from production, including raw materials, technical processing, and distribution.
Since 2004, there has been a decrease in newspaper industry employment, with only about 40,000 people currently working in the workforce. According to Alliance for Audited Media and publisher data, during the 2008 recession print sales for certain magazines fell by more than 10%, with advertising sales reaching only about 75% of their previous level. However, in 2018, 35% of major newspapers' advertising revenue came from digital ads.
In contrast, mobile versions of newspapers and magazines came in second with a huge growth of 135%. The New York Times has noted a 47% year-over-year rise in its digital subscriptions. 43% of adults often get news from news websites or social media, compared with 49% for television. Pew Research also asked respondents if they got news from a streaming device on their TV – 9% of U.S. adults said that they do so often.
Individual as content creator
Digital media has also allowed individuals to be much more active in content creation. Anyone with access to computers and the Internet can participate in social media and contribute their own writing, art, videos, photography and commentary to the Internet, as well as conduct business online. The dramatic reduction in the costs required to create and share content has led to a democratization of content creation as well as the creation of new types of content, like blogs, memes, and video essays. Some of these activities have also been labelled citizen journalism. This spike in user-created content is due to the development of the internet as well as the way in which users interact with media today. The release of technologies such as mobile devices allows for easier and quicker access to all things media. Many media creation tools that were once available to only a few are now free and easy to use. The cost of devices that can access the Internet is steadily falling, and personal ownership of multiple digital devices is now becoming the standard. These elements have significantly affected political participation. Digital media is seen by many scholars as having played a role in the Arab Spring, and crackdowns on the use of digital and social media by embattled governments are increasingly common. Many governments restrict access to digital media in some way, either to prevent obscenity or in a broader form of political censorship.
Over the years YouTube has grown to become a website with user-generated media. This content is oftentimes not mediated by any company or agency, leading to a wide array of personalities and opinions online. YouTube and other platforms have also shown their monetary gains, with the top 10 YouTube performers each generating over 10 million dollars per year. Many of these YouTube channels have a multi-camera setup similar to what is seen on TV, and many of these creators also start their own digital companies as their personalities grow. Personal devices have also seen an increase over the years: over 1.5 billion tablet users exist in the world right now, a number that is expected to grow slowly, and about 20% of people in the world regularly watched content using tablets in 2018.
User-generated content raises issues of privacy, credibility, civility and compensation for cultural, intellectual and artistic contributions. The spread of digital media, and the wide range of literacy and communications skills necessary to use it effectively, have deepened the digital divide between those who have access to digital media and those who don't.
The rising of digital media has made the consumer's audio collection more precise and personalized. It is no longer necessary to purchase an entire album if the consumer is ultimately interested in only a few audio files.
Web-only news
The rise of streaming services has led to a decrease in cable TV subscriptions to about 59%, while streaming services are growing at around 29%, and 9% still use a digital antenna. TV remote controls now incorporate dedicated buttons for streaming platforms. Users spend an average of 1:55 with digital video each day, and only 1:44 on social networks. 6 out of 10 people report viewing their television shows and news via a streaming service. Platforms such as Netflix have gained traction due to their affordability, accessibility, and original content. Companies such as Netflix have even bought previously cancelled shows such as Designated Survivor, Lucifer, and Arrested Development. As the internet becomes more and more prevalent, more companies are beginning to distribute content through internet-only means. With the loss of viewers there is a loss of revenue, though not as severe as might be expected.
Copyright challenges
Digital media are numerical networks of interactive systems that link databases and allow users to navigate from one bit of content or webpage to another. Because of this ease, digital media poses several challenges to current copyright and intellectual property laws. The effortlessness of creating, modifying, and sharing digital media can make copyright enforcement challenging. Many copyright laws are widely seen as outdated. For example, under current copyright law, common Internet memes are generally illegal to share in many countries. Legal rights can be unclear for many common Internet activities. These include posting pictures from someone else's social media account, writing fanfiction, or covering and/or using popular songs in content such as YouTube videos. Over the last decade, the concepts of fair use and copyright have been applied to all different types of online media.
Copyright challenges are spreading to all parts of digital media. Content creators on platforms such as YouTube must follow guidelines set by copyright and IP laws as well as the platform's copyright guidelines. If these guidelines are not followed, creators may lose monetization of their content, have their content deleted, or face criticism; in some instances they can be sued. Often the situation occurs when creators accidentally use audio tracks or background scenes that are under copyright. To avoid or resolve some of these issues, content creators can voluntarily adopt open or copyleft licenses, or they can release their work to the public domain. By doing this, creators are giving up some legal rights associated with their content. Fair use is a doctrine of US copyright law that allows limited use of copyrighted materials without the need to obtain permission. Four factors determine fair use. The first factor is purpose: what the content is being used for (commercial, nonprofit, or educational), and whether the use adds a new aspect or meaning to the content. The second factor is the nature of the copyrighted content: if the content is non-fiction, its use is more likely to fall under fair use than if the content is a work of fiction. The third factor is how much of the copyrighted content is used: small pieces are more likely to be considered fair than large chunks or the real "meat" of the content. The last factor is whether the use of the copyrighted content will earn money or affect the value of the content.
Wikipedia uses some of the most common open licenses, Creative Commons licenses and the GNU Free Documentation License. Open licenses are part of a broader open content movement which pushes for the reduction or removal of copyright restrictions from software, data and other digital media. To facilitate the collection and consumption of such licensing information and availability status, tools have been developed like the Creative Commons Search engine, used mostly for web images, and Unpaywall, used for scholarly communication.
Additional software has been developed to restrict access to digital media. Digital rights management (DRM) is used to lock material and limit its use to specific cases. For example, DRM allows movie producers to rent a movie at a lower price instead of selling it at a higher one, by restricting the length of the rental license rather than only selling the movie at full price. Additionally, DRM can prevent unauthorized modification or sharing of media.
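A rental-style restriction of the kind described above can be pictured, in heavily simplified form, as a license object that only authorizes playback during a limited window. The class and field names are hypothetical, and real DRM systems pair such policies with encryption and hardware-backed key storage.

```python
from datetime import datetime, timedelta

class RentalLicense:
    """Hypothetical time-limited playback license."""
    def __init__(self, title, rental_days):
        self.title = title
        self.expires = datetime.utcnow() + timedelta(days=rental_days)

    def can_play(self):
        return datetime.utcnow() < self.expires

movie = RentalLicense("Example Film", rental_days=2)
print("playback allowed" if movie.can_play() else "rental expired")
```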
Digital media copyright protection technologies fall under the umbrella of intellectual property protection technology, because a series of computer technologies protect the digital content being created and transmitted. The Digital Millennium Copyright Act (DMCA) provides safe harbor to intermediaries that host user content, such as YouTube, shielding them from liability for copyright infringement so long as they meet all required conditions. The most notable of these is the "notice and take down" policy, which requires online intermediaries to remove and/or disable access to the content in question when there are court orders and/or allegations of illegal use of content on their site. As a result, YouTube has developed, and continues to develop, policies and standards that go far beyond what the DMCA requires. YouTube has also created an algorithm that continuously scans its site to make sure all content follows its policies.
One digital media platform that has been known to have copyright concerns is the short video sharing app TikTok. TikTok is a social media app that allows its users to share short videos up to one minute in length, using a variety of visual effects and audio. According to Loyola University Chicago School of Law, somewhere in the realm of 50% of the music used on the platform is unlicensed. TikTok does have several music licensing agreements with various artists and labels, creating a library of music that is fair and legal to use. However, this does not cover all bases for its users, and a user could still commit a copyright violation, for example by accidentally having music playing on a stereo in the background, or by recording a laptop screen while a song plays.
Online magazines or digital magazines are one of the first and largest targets for copyright issues. According to the Audit Bureau of Circulations report from March 2011, the definition of this medium is that a digital magazine involves the distribution of magazine content by electronic means; it may be a replica. This definition can be considered outdated now that PDF replicas of print magazines are no longer common practice. These days digital magazines refer to magazines specifically created for interactive digital platforms such as the internet, mobile phones, private networks, the iPad or other devices. The barriers to digital magazine distribution are thus decreasing. However, these platforms are also broadening the scope of where digital magazines can be published, with smartphones being a prime example. Thanks to improvements in tablets and other personal electronic devices, digital magazines have become much more readable and enticing through the use of graphic art. Over time, the evolution of online magazines has focused on becoming more of a social media and entertainment platform.
Online piracy has become one of the larger issues regarding digital media copyright. The piracy of digital media, such as film and television, directly impacts the copyright party (the owner of the copyright) and can affect the "health" of the digital media industry. Piracy directly breaks the laws and morals of copyright. Along with piracy, digital media has contributed to the ability to spread false information, or fake news. Due to the widespread use of digital media, fake news is able to receive more notoriety, which enhances the negative effects fake news creates. As a result, people's health and well-being can be directly affected.
See also
Electronic media
Media psychology
Virtual artifact
Digital continuity
Content creation
References
Further reading
Ramón Reichert, Annika Richterich, Pablo Abend, Mathias Fuchs, Karin Wenz (eds.), Digital Culture & Society.
Information science
Library science
Mass media
Mass media technology
Articles containing video clips |
7989258 | https://en.wikipedia.org/wiki/Backup%20%28disambiguation%29 | Backup (disambiguation) | Backup is the computing function of making copies of data to enable recovery from data loss.
Backup may also refer to:
Information technology
Backup (backup software), Apple Mac software
Backup and Restore, Windows software
Backup software, the software that performs this function
List of backup software, specific software packages, loosely categorized
Backup site, part of a disaster recovery plan
Electrical power facilities
Battery backup, the use of batteries to continue operation of electrical devices in the absence of utility electric power
Backup battery, similar to above
Backup generator, the use of a generator to achieve a similar purpose
Music
Backup band, a band who plays music in support of a lead musician
Backing vocalist, one who sings in harmony with a "lead vocalist"
"Back Up" (Danity Kane song), 2006
"Back Up" (Pitbull song), 2004
"Back Up" (Dej Loaf song), feat. Big Sean, 2015
"Back Up" (Snoop Dogg song), 2015
Motor vehicle
Backup camera, a camera on the rear of a vehicle, used while moving backwards
Back-up collision, a type of car accident
Backing up, driving a vehicle in reverse gear
Others
Back-up goaltender, an ice hockey team's second-string goaltender
Backup (TV series), a BBC series about a police Operational Support Unit
Backup, a 2009 video game by Gregory Weir
Backup, in sports, a substitute player for a player in the starting lineup
See also |
74954 | https://en.wikipedia.org/wiki/Or | Or | Or or OR may refer to:
Arts and entertainment
Film and television
"O.R.", a 1974 episode of M*A*S*H
Or (My Treasure), a 2004 movie from Israel (Or means "light" in Hebrew)
Music
Or (album), a 2002 album by Golden Boy with Miss Kittin
O*R, the original title of Olivia Rodrigo's album Sour, 2021
"Or", a song by Israeli singer Chen Aharoni in Kdam Eurovision 2011
Or Records, a record label
Organized Rhyme, a Canadian hip-hop group featuring Tom Green
Businesses and organizations
Or (political party), an Israeli political party
OR Books, a New York-based independent publishing house founded by John Oakes and Colin Robinson
Owasco River Railway (reporting mark OR)
Arke (formerly Arkefly), a Dutch charter airline headquartered in Schiphol-Rijk, Netherlands (IATA airline designator OR)
Education
Old Radleian, a former pupil of Radley College in England
Old Roedenian, a former pupil of Roedean School in England
Old Roedenian, a former pupil of Roedean School (South Africa)
Old Rugbeian, a former pupil of Rugby School in England
Old Ruymian, a former pupil of Chatham House Grammar School, now the Chatham and Clarendon Grammar School
Language and linguistics
Or (digraph) in the Uzbek alphabet
Or (letter) (or forfeda) in the Ogham alphabet
Odia language (ISO 639-1 code or), an Indo-Aryan language
Or, an English grammatical conjunction
-or, an English agent noun suffix
Or, a digraph in Taiwanese's daighi tongiong pingim phonetic transcription system
People
Or Eitan (born 1981), Israeli basketball player
Or Goren (born 1956), Israeli basketball player
Or Sasson (born 1990), Israeli Olympic judoka
Or Tokayev (born 1979), Israeli Olympic rhythmic gymnast
Tomer Or (born 1978), Israeli Olympic fencer
Or (monk) (died c. 390), Egyptian Christian monk
Places
Odisha, state in India, formerly known as Orissa
Or (Crimea)
Or (river), a tributary of the Ural in Russia and Kazakhstan
Oregon, a state in the United States whose postal abbreviation is "OR"
Province of Oristano, Italy (vehicle code OR)
Science, technology, and mathematics
Computing
Bitwise OR, an operator in computer programming, typically notated as or or |
The short-circuit operator or, notated or, ||, or or else
Elvis operator, an operator in computer programming that returns its first operand if its value is considered true, and its second operand otherwise
Null coalescing operator, an operator in computer programming
Onion routing, anonymous networking technique (also Onion Router)
OR gate, an integrated circuit in electronics
Object-relational mapping
Mathematics and logic
Or (logic), logical disjunction
Exclusive or (XOR), a logical operation
Other uses in science and technology
Odds ratio, a measure of effect size in statistics
OR, a previous title of the Journal of the Operational Research Society
Operating room, in medicine
Operations research, or operational research, in British English
Operations readiness
Titles and ranks
Official receiver, a statutory office holder in England and Wales
Order of Roraima of Guyana, an award of the Republic of Guyana
Other ranks (Denmark), military personnel in all branches of the Danish military who are not officers under the NATO system of ranks and insignia
Other ranks (UK), personnel who are not commissioned officers, usually including non-commissioned officers (NCOs), in militaries of many Commonwealth countries
Other uses
Official Records of the American Civil War
Olympic record, a term for the best performances in Olympic Games
Or (heraldry), a gold or yellow tincture (from the French word for "gold")
Own Recognizance, the basis for releasing someone awaiting trial without bail
See also
'0r' (zero r), meaning "no roods", in old measurements of land area
And (disambiguation)
OAR (disambiguation)
Ore (disambiguation)
Either/Or (disambiguation) |
24951 | https://en.wikipedia.org/wiki/PlayStation%203 | PlayStation 3 | The PlayStation 3 (PS3) is a home video game console developed by Sony Computer Entertainment. The successor to PlayStation 2, it is part of the PlayStation brand of consoles. It was first released on November 11, 2006, in Japan, November 17, 2006, in North America, and March 23, 2007, in Europe and Australia. The PlayStation 3 competed primarily against Microsoft's Xbox 360 and Nintendo's Wii as part of the seventh generation of video game consoles.
The console was first officially announced at E3 2005, and was released at the end of 2006. It was the first console to use Blu-ray Disc technology as its primary storage medium. The console was the first PlayStation to integrate social gaming services, including the PlayStation Network, as well as the first to be controllable from a handheld console, through its remote connectivity with PlayStation Portable and PlayStation Vita. In September 2009, the Slim model of the PlayStation 3 was released. It no longer provided the hardware ability to run PS2 games. It was lighter and thinner than the original version, and featured a redesigned logo and marketing design, as well as a minor start-up change in software. A Super Slim variation was then released in late 2012, further refining and redesigning the console.
During its early years, the system received a mixed reception, due to its high price ($599 for a 60-gigabyte model, $499 for a 20 GB model), a complex processor architecture, and a lack of quality games, but it was praised for its Blu-ray capabilities and "untapped potential". Reception grew more positive over time. The system had a slow start in the market but managed to recover, particularly after the introduction of the Slim model. Its successor, the PlayStation 4, was released in November 2013. On September 29, 2015, Sony confirmed that sales of the PlayStation 3 were to be discontinued in New Zealand, but the system remained in production in other markets. Shipments of new units to Europe and Australia ended in March 2016, followed by North America in October 2016. Heading into 2017, Japan was the last territory where new units were still being produced, until May 29, 2017, when Sony confirmed that the PlayStation 3 had been discontinued in Japan.
History
The PlayStation 3 began development in 2001 when Ken Kutaragi, then the President of Sony Computer Entertainment, announced that Sony, Toshiba, and IBM would collaborate on developing the Cell microprocessor. At the time, Shuhei Yoshida led a group of programmers within this hardware team to explore next-generation game creation. By early 2005, focus within Sony shifted towards developing PS3 launch titles. Sony officially unveiled PlayStation 3 to the public on May 16, 2005, at E3 2005, along with a boomerang-shaped prototype design of the Sixaxis controller. A functional version of the system was not present there, nor at the Tokyo Game Show in September 2005, although demonstrations (such as Metal Gear Solid 4: Guns of the Patriots) were held at both events on software development kits and comparable personal computer hardware. Video footage based on the predicted PlayStation 3 specifications was also shown (notably a Final Fantasy VII tech demo).
The initial prototype shown in May 2005 featured two HDMI ports, three Ethernet ports and six USB ports; however, when the system was shown again a year later at E3 2006, these were reduced to one HDMI port, one Ethernet port and four USB ports, presumably to cut costs. Two hardware configurations were also announced for the console: a 20 GB model and a 60 GB model, priced at US$499 (€499) and US$599 (€599), respectively. The 60 GB model was to be the only configuration to feature an HDMI port, Wi-Fi internet, flash card readers and a chrome trim with the logo in silver. Both models were announced for a simultaneous worldwide release: November 11, 2006, for Japan and November 17, 2006, for North America and Europe.
On September 6, 2006, Sony announced that PAL region PlayStation 3 launch would be delayed until March 2007, because of a shortage of materials used in the Blu-ray drive. At the Tokyo Game Show on September 22, 2006, Sony announced that it would include an HDMI port on the 20 GB system, but a chrome trim, flash card readers, silver logo and Wi-Fi would not be included. Also, the launch price of the Japanese 20 GB model was reduced by over 20%, and the 60 GB model was announced for an open pricing scheme in Japan. During the event, Sony showed 27 playable PS3 games running on final hardware.
Launch
The PlayStation 3 was first released in Japan on November 11, 2006, at 07:00. According to Media Create, 81,639 PS3 systems were sold within 24 hours of its introduction in Japan. There were reports that many of the initial systems were obtained by businessmen who paid mainly Chinese nationals to buy the system without any software to resell on eBay, and, as a result of this, there were more hardware units sold than there were games. Ridge Racer 7 was the highest selling game on launch day. Soon after its release in Japan, the PS3 was released in North America on November 17, 2006. Reports of violence surrounded the release of the PS3. A customer was shot, campers were robbed at gunpoint, customers were shot in a drive-by shooting with BB guns, and 60 campers fought over 10 systems. The PS3 was released on the same day in Hong Kong and Taiwan as well.
The console was originally planned for a global release through November, but at the start of September the release in Europe and the rest of the world was delayed until March. Because the delay came at relatively short notice, some companies had already taken deposits for pre-orders; Sony informed those customers that they were eligible for full refunds or could keep their pre-orders. On January 24, 2007, Sony announced that PlayStation 3 would go on sale on March 23, 2007, in Europe, Australia, the Middle East, Africa and New Zealand.
On March 7, 2007, the 60 GB PlayStation 3 launched in Singapore with a price of S$799. In the United Arab Emirates, the system retailed for 2499 dirhams on March 23, slightly less than the price in Europe. Sony also hosted a large launch party with singer Shakira performing at the Dubai Autodrome.
The PS3 sold 600,000 units in the first two days of its release in Europe. It became the fastest-selling home system in the United Kingdom with 165,000 units sold in two days, and became the second-fastest-selling system in the UK overall, the fastest being the PlayStation Portable. Some British retailers claim that the PS3 was subjected to as many as 20,000 pre-order cancellations, while others cited a "huge demand" for the system. System sales for the following week were down 82%, selling 30,000 units, with a 60% drop in sales of the two most popular titles, MotorStorm and Resistance: Fall of Man. Its UK launch price of £425 was higher than its Japanese and American prices, with value-added tax cited as a reason by a staff member. The continental Europe price was €599, while in Ireland it was €629.
Over 27,000 units were sold in Australia over the course of the first ten days of sales, and nine of the week's top ten best-selling games across all systems, including handhelds, were PS3 titles; overall, software and hardware sales netted Sony A$33 million. One analyst called it "a spike in retail spending not previously witnessed at the launch of any other system in Australia". In New Zealand, over 4,800 units were sold in the first week generating "over NZ$6.8 million dollars in hardware and software retail sales."
On April 27, 2007, it launched in India, with the 60GB model retailing for ₹39,990 (US$1000 at the conversion rate at the time). In Mexico, the 20GB model launched with a price of 10,495 pesos, or US$974 at the time. The console was launched in South Korea on June 16, 2007, as a single version equipped with an 80 GB hard drive and IPTV.
Slim model
Following speculation that Sony was working on a 'slim' model, Sony officially announced the PS3 CECH-2000 model on August 18, 2009, at the Sony Gamescom press conference. New features included a slimmer form factor, decreased power consumption, and a quieter cooling system. It was released in major territories by September 2009. At the same time, a new logo was introduced for the console to replace the previous "Spider-Man" wordmarks (named due to their use of the same font as the logos of Sony's then-current Spider-Man films), with a new "PS3" wordmark evoking the design of the PlayStation 2 wordmark replacing the capitalized PlayStation 3 lettering.
Super Slim model
In September 2012 at the Tokyo Game Show, Sony announced that a new, slimmer PS3 redesign (CECH-4000) was due for release in late 2012 and that it would be available with either a 250 GB or 500 GB hard drive. Three versions of the Super Slim model were revealed: one with a 500 GB hard drive, a second with a 250 GB hard drive which was not available in PAL regions, and a third with 12 GB of flash storage that was available in PAL regions and in Canada. The storage of the 12 GB model is upgradable with an official standalone 250 GB hard drive. A vertical stand was also released for the model. In the United Kingdom, the 500 GB model was released on September 28, 2012; and the 12 GB model was released on October 12, 2012. In the United States, the PS3 Super Slim was first released as a bundled console. The 250 GB model was bundled with the Game of the Year edition of Uncharted 3: Drake's Deception and released on September 25, 2012; and the 500 GB model was bundled with Assassin's Creed III and released on October 30, 2012. In Japan, the black colored Super Slim model was released on October 4, 2012; and the white colored Super Slim model was released on November 22, 2012. The Super Slim model is 20 percent smaller and 25 percent lighter than the Slim model and features a manual sliding disc cover instead of the motorized slot-loading disc cover of the Slim model. The white colored Super Slim model was released in the United States on January 27, 2013, as part of the Instant Game Collection Bundle. The Garnet Red and Azurite Blue colored models were launched in Japan on February 28, 2013. The Garnet Red version was released in North America on March 12, 2013, as part of the God of War: Ascension bundle with 500 GB storage and contained God of War: Ascension as well as the God of War Saga. The Azurite Blue model was released on October 8, 2013, as a GameStop exclusive with 250 GB storage.
Games
PlayStation 3 launched in North America with 14 titles, with another three being released before the end of 2006. After the first week of sales it was confirmed that Resistance: Fall of Man from Insomniac Games was the top-selling launch game in North America. The game was heavily praised by numerous video game websites, including GameSpot and IGN, both of whom awarded it their PlayStation 3 Game of the Year award for 2006. Some titles missed the launch window and were delayed until early 2007, such as The Elder Scrolls IV: Oblivion, F.E.A.R. and Sonic the Hedgehog. During the Japanese launch, Ridge Racer 7 was the top-selling game, while Mobile Suit Gundam: Crossfire also fared well in sales, both of which were offerings from Namco Bandai Games. PlayStation 3 launched in Europe with 24 titles, including ones that were not offered in North American and Japanese launches, such as Formula One Championship Edition, MotorStorm and Virtua Fighter 5. Resistance: Fall of Man and MotorStorm were the most successful titles of 2007, and both games subsequently received sequels in the form of Resistance 2 and MotorStorm: Pacific Rift.
At E3 2007, Sony was able to show a number of their upcoming video games for PlayStation 3, including Heavenly Sword, Lair, Ratchet & Clank Future: Tools of Destruction, Warhawk and Uncharted: Drake's Fortune; all of which were released in the third and fourth quarters of 2007. It also showed off a number of titles that were set for release in 2008 and 2009; most notably Killzone 2, Infamous, Gran Turismo 5 Prologue, LittleBigPlanet and SOCOM: U.S. Navy SEALs Confrontation. A number of third-party exclusives were also shown, including the highly anticipated Metal Gear Solid 4: Guns of the Patriots, alongside other high-profile third-party titles such as Grand Theft Auto IV, Call of Duty 4: Modern Warfare, Assassin's Creed, Devil May Cry 4 and Resident Evil 5. Two other important titles for PlayStation 3, Final Fantasy XIII and Final Fantasy Versus XIII, were shown at TGS 2007 in order to appease the Japanese market.
Sony has since launched a budget range of PlayStation 3 titles, known as the Greatest Hits range in North America, the Platinum range in Europe and Australia and The Best range in Japan. Titles available in the budget range include Resistance: Fall of Man, MotorStorm, Uncharted: Drake's Fortune, Rainbow Six: Vegas, Call of Duty 3, Assassin's Creed and Ninja Gaiden Sigma. As of October 2009, Metal Gear Solid 4: Guns of the Patriots, Ratchet & Clank Future: Tools of Destruction, Devil May Cry 4, Army of Two, Battlefield: Bad Company and Midnight Club: Los Angeles had also joined the list.
In total, 595 million games have been sold for the PlayStation 3. The best-selling PS3 games are Grand Theft Auto V, Gran Turismo 5, The Last of Us, Uncharted 3: Drake's Deception and Uncharted 2: Among Thieves.
The last game released on the PlayStation 3 was Shakedown: Hawaii, on August 20, 2020.
Stereoscopic 3D
In December 2008, the CTO of Blitz Games announced that the company would bring stereoscopic 3D gaming and movie viewing to Xbox 360 and PlayStation 3 with its own technology. This was first demonstrated publicly on PS3 using Sony's own technology in January 2009 at the Consumer Electronics Show. Journalists were shown Wipeout HD and Gran Turismo 5 Prologue in 3D as a demonstration of how the technology might work if implemented in the future. Firmware update 3.30 officially allowed PS3 titles to be played in 3D, requiring a compatible display for use. System software update 3.50 prepared the console for 3D films. While a game itself must be programmed to take advantage of the 3D technology, titles may be patched to add the functionality retroactively. Titles with such patches include Wipeout HD, Pain, and Super Stardust HD.
Hardware
PlayStation 3 is convex on its left side, with the PlayStation logo upright, when vertical (the top side is convex when horizontal) and has a glossy black finish. PlayStation designer Teiyu Goto stated that the Spider-Man-font-inspired logo "was one of the first elements SCEI president Ken Kutaragi decided on and the logo may have been the motivating force behind the shape of PS3".
On March 22, 2007, SCE and Stanford University released the Folding@home software for PlayStation 3. This program allows PS3 owners to lend the computing power of their consoles to help study the process of protein folding for disease research.
Use in supercomputing
PS3's hardware has also been used to build supercomputers for high-performance computing. Fixstars Solutions sells a version of Yellow Dog Linux for PlayStation 3 (originally sold by Terra Soft Solutions). RapidMind produced a stream programming package for PS3 but was acquired by Intel in 2009. Also, on January 3, 2007, Dr. Frank Mueller, Associate Professor of Computer Science at NCSU, clustered 8 PS3s. Mueller commented that the 256 MB of system RAM was a limitation for this particular application and said he was considering attempting to retrofit more RAM. Software included: Fedora Core 5 Linux ppc64, MPICH2, OpenMP v 2.5, GNU Compiler Collection and CellSDK 1.1. As a more cost-effective alternative to conventional supercomputers, the U.S. military has purchased clusters of PS3 units for research purposes. Retail PS3 Slim units cannot be used for supercomputing, because the PS3 Slim lacks the ability to boot into a third-party OS.
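The software stack listed above follows the standard message-passing cluster pattern: MPICH2 distributes work between consoles over Ethernet, while OpenMP exploits parallelism within each Cell chip. The program below is a minimal illustrative sketch only, not code from Mueller's cluster; the workload (summing a range of integers) and all names are placeholders, and it should compile with any MPI implementation, including MPICH2.

/* cluster_sum.c - hypothetical example: each PS3 node computes a partial sum
 * of 0..N-1 and rank 0 combines the results. Build with mpicc, launch with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this node's id within the cluster */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of nodes */

    const long N = 100000000L;
    long chunk = N / size;
    long lo = rank * chunk;
    long hi = (rank == size - 1) ? N : lo + chunk;

    double partial = 0.0, total = 0.0;
    for (long i = lo; i < hi; i++)          /* each node works on its own slice */
        partial += (double)i;

    /* Combine the partial sums on rank 0. */
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %.0f (computed across %d nodes)\n", total, size);

    MPI_Finalize();
    return 0;
}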
In December 2008, a group of security researchers used a cluster of 200 PlayStation 3 consoles to compute an MD5 hash collision and forge a rogue certificate authority certificate, undermining SSL certificate authentication.
In November 2010 the Air Force Research Laboratory (AFRL) created a powerful supercomputer by connecting together 1,760 Sony PS3s which include 168 separate graphical processing units and 84 coordinating servers in a parallel array capable of performing 500 trillion floating-point operations per second (500 TFLOPS). As built the Condor Cluster was the 33rd largest supercomputer in the world and would be used to analyze high definition satellite imagery.
Technical specifications
PlayStation 3 features a slot-loading 2× speed Blu-ray Disc drive for games, Blu-ray movies, DVDs, and CDs. It was originally available with hard drives of 20 and 60 GB (20 GB model was not available in PAL regions) but various sizes up to 500 GB have been made available since then (see: model comparison). All PS3 models have user-upgradeable 2.5" SATA hard drives.
PlayStation 3 uses the Cell microprocessor, designed by Sony, Toshiba and IBM, as its CPU, which is made up of one 3.2 GHz PowerPC-based "Power Processing Element" (PPE) and eight Synergistic Processing Elements (SPEs).
To increase manufacturing yields and reduce costs, the chip is fabricated with 8 SPEs; after manufacture, every chip is tested and one SPE is disconnected using laser trimming (a defective one, if present), leaving 7 SPEs. This means that chips that would otherwise be discarded can be used, reducing costs and waste. Only six of the seven SPEs are accessible to developers, as the seventh SPE is reserved by the console's operating system. Graphics processing is handled by the Nvidia RSX 'Reality Synthesizer', which can produce resolutions from 480i/576i SD up to 1080p HD. PlayStation 3 has 256 MB of XDR DRAM main memory and 256 MB of GDDR3 video memory for the RSX.
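Under Linux installed through the OtherOS feature described later, the six available SPEs are driven from the PPE through IBM's SPE Runtime Management Library (libspe2) supplied with the Cell SDK; retail games instead use Sony's proprietary toolchain, which is not publicly documented. The fragment below is a minimal sketch of the Linux-side pattern under those assumptions: the embedded SPU program image hello_spu is hypothetical, and real code would normally run one such context per thread to keep several SPEs busy at once.

/* ppe_spawn.c - illustrative sketch: run a (hypothetical) SPU program on one SPE via libspe2. */
#include <libspe2.h>
#include <stdio.h>

extern spe_program_handle_t hello_spu;   /* hypothetical SPU ELF image embedded at link time */

int main(void)
{
    spe_context_ptr_t spe = spe_context_create(0, NULL);    /* claim one SPE context */
    if (spe == NULL) { perror("spe_context_create"); return 1; }

    if (spe_program_load(spe, &hello_spu) != 0) {            /* load the SPU program into it */
        perror("spe_program_load");
        return 1;
    }

    unsigned int entry = SPE_DEFAULT_ENTRY;
    if (spe_context_run(spe, &entry, 0, NULL, NULL, NULL) < 0)   /* blocks until the SPU program stops */
        perror("spe_context_run");

    spe_context_destroy(spe);                                /* release the SPE */
    return 0;
}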
The system has Bluetooth 2.0 (with support for up to seven Bluetooth devices), Gigabit Ethernet, USB 2.0 and HDMI 1.4 built in. Wi-Fi networking is also built-in on all but the 20 GB models, while a flash card reader (compatible with Memory Stick, SD/MMC and CompactFlash/Microdrive media) is built-in on 60 GB and CECHExx 80 GB models.
Models
PlayStation 3 has been produced in various models: the original, the Slim, and the Super Slim. Successive models have added or removed various features, reduced the console's initial purchase price and weight, and increased storage capacity (with exceptions).
Controllers and accessories
Numerous accessories for the console have been developed. These accessories include the wireless Sixaxis and DualShock 3 controllers, the Logitech Driving Force GT, the Logitech Cordless Precision Controller, the BD Remote, the PlayStation Eye camera, and the PlayTV DVB-T tuner/digital video recorder accessory.
At Sony's E3 press conference in 2006, the then-standard wireless Sixaxis controller was announced. The controller was based on the same basic design as the PlayStation 2's DualShock 2 controller but was wireless, lacked vibration capabilities, had a built-in accelerometer (that could detect motion in three directional and three rotational axes; six in total, hence the name Sixaxis) and had a few cosmetic tweaks.
At its press conference at the 2007 Tokyo Game Show, Sony announced DualShock 3 (trademarked DUALSHOCK 3), a PlayStation 3 controller with the same function and design as Sixaxis, but with vibration capability included. Hands-on accounts describe the controller as being noticeably heavier than the standard Sixaxis controller and capable of vibration forces comparable to DualShock 2. It was released in Japan on November 11, 2007; in North America on April 5, 2008; in Australia on April 24, 2008; in New Zealand on May 9, 2008; in mainland Europe on July 2, 2008, and in the United Kingdom and Ireland on July 4, 2008.
During E3 2009, Sony unveiled plans to release a motion controller later to be named PlayStation Move at GDC 2010. It was released on September 15, 2010, in Europe; September 19, 2010, in North America and October 21, 2010, in Japan.
On October 13, 2010, Sony announced an official surround sound system for PS3 through the official PlayStation YouTube channel.
The PlayStation 3 can also use the DualShock 4 controller, initially only via USB cable; firmware update 4.60 enabled wireless connection.
Statistics regarding reliability
According to Ars Technica, the number of PlayStation 3 consoles that have experienced failure is well within the normal failure rates in the consumer electronics industry; a 2009 study by SquareTrade, a warranty provider, found a two-year failure rate of 10% for PlayStation 3s.
In September 2009, BBC's Watchdog television program aired a report investigating the issue, calling it the "yellow light of death" (YLOD). Among the consoles that experienced the failure, they found that it usually occurred 18–24 months after purchase, while the standard Sony warranty covers one year after purchase. After this time period, PlayStation 3 owners can pay Sony a fixed fee for a refurbished console.
Sony claimed that, according to its statistics of returned consoles, approximately 0.5% of consoles were reported as showing the YLOD. In response to the televised report, Sony issued a document criticizing the program's accuracy and conclusions, specifically the suggestion that the faults were evidence of a manufacturing defect. The document also complained that the report had been inappropriate in tone and might damage Sony's brand name.
Software
System software
Sony has included the ability for the operating system, referred to as System Software, to be updated. The updates can be acquired in several ways:
If PlayStation 3 has an active Internet connection, updates may be downloaded directly from the PlayStation Network to PlayStation 3 and subsequently installed. Systems with active Internet will automatically check online for software updates each time the console is started.
Using an external PC, a user may download the update from the official PlayStation website, transfer it to portable storage media and install it on the system.
Some game discs come with system software updates on the disc. This may be due to the game requiring the update in order to run. If so, the software may be installed from the disc.
The original PlayStation 3 also included the ability to install other operating systems, such as Linux. This was not included in the newer slim models and was removed from all older PlayStation 3 consoles with the release of firmware update 3.21 in April 2010. The functionality is now only available to users of original consoles who choose not to update their system software beyond version 3.15 or who have installed third-party, modified and unofficial versions of the firmware instead.
Graphical user interface
The standard PlayStation 3 version of the XrossMediaBar (pronounced "Cross Media Bar" and abbreviated XMB) includes nine categories of options. These are: Users, Settings, Photo, Music, Video, TV/Video Services, Game, Network, PlayStation Network and Friends (similar to the PlayStation Portable media bar). The TV/Video Services category is for services like Netflix and for PlayTV or torne if installed; the first category in this section is "My Channels", which lets users download various streaming services, including Sony's own streaming services Crackle and PlayStation Vue. By default, the What's New section of PlayStation Network is displayed when the system starts up. PS3 includes the ability to store various master and secondary user profiles; manage and explore photos with or without a musical slide show; play music and copy audio CD tracks to an attached data storage device; play movies and video files from the hard disk drive, an optical disc (Blu-ray Disc or DVD-Video), or an optional USB mass storage device or flash card; use a USB keyboard and mouse; and browse the web with a browser that supports file downloads. Additionally, UPnP media will appear in the respective audio/video/photo categories if a compatible media server or DLNA server is detected on the local network. The Friends menu allows mail with emoticon and attached picture features and video chat, which requires an optional PlayStation Eye or EyeToy webcam. The Network menu allows online shopping through the PlayStation Store and connectivity to PlayStation Portable via Remote Play.
Digital rights management
The PlayStation 3 console protects certain types of data and uses digital rights management to limit the data's use. Purchased games and content from the PlayStation Network store are governed by PlayStation's Network Digital Rights Management (NDRM). The NDRM allows users to access the data from up to two different PlayStation 3 consoles that have been activated using a user's PlayStation Network ID. PlayStation 3 also limits the transfer of copy protected videos downloaded from its store to other machines and states that copy protected video "may not restore correctly" following certain actions after making a backup, such as downloading a new copy protected movie.
Photo management
Photo Gallery
Photo Gallery is an optional application to view, create and group photos from PS3, which is installed separately from the system software at 105 MB. It was introduced in system software version 2.60 and provides a range of tools for sorting through and displaying the system's pictures. The key feature of this application is that it can organize photos into groups according to various criteria. Notable categorizations are colors, ages, or facial expressions of the people in the photos. Slideshows can be viewed with the application, along with music and playlists. The software was updated with the release of system software version 3.40 allowing users to upload and browse photos on Facebook and Picasa.
PlayMemories Studio
PlayMemories is an optional stereoscopic 3D (and also standard) photo viewing application, which is installed from the PlayStation Store at 956 MB. The application is dedicated specifically to 3D photos and features the ability to zoom into 3D environments and change the angle and perspective of panoramas. It requires system software 3.40 or higher; 3D photos; a 3D HDTV, and an HDMI cable for the 3D images to be viewed properly.
Video services
Video editor and uploader
A new application was released as part of system software version 3.40 which allows users to edit videos on PlayStation 3 and upload them to the Internet. The software features basic video editing tools including the ability to cut videos and add music and captions. Videos can then be rendered and uploaded to video sharing websites such as Facebook and YouTube.
Video on demand
In addition to the video service provided by the Sony Entertainment Network, the PlayStation 3 console has access to a variety of third-party video services, dependent on region:
Since June 2009, VidZone has offered a free music video streaming service in Europe, Australia and New Zealand. In October 2009, Sony Computer Entertainment and Netflix announced that the Netflix streaming service would also be available on PlayStation 3 in the United States. A paid Netflix subscription was required for the service. The service became available in November 2009. Initially users had to use a free Blu-ray disc to access the service; however, in October 2010 the requirement to use a disc to gain access was removed.
In April 2010, support for MLB.tv was added, allowing MLB.tv subscribers to watch regular season games live in HD and access new interactive features designed exclusively for PSN.
In November 2010, access to the video and social networking site MUBI was enabled for European, New Zealand, and Australian users; the service integrates elements of social networking with rental or subscription video streaming, allowing users to watch and discuss films with other users. Also in November 2010 the video rental service VUDU, NHL GameCenter Live, and subscription service Hulu Plus launched on PlayStation 3 in the United States.
In August 2011, Sony, in partnership with DirecTV, added NFL Sunday Ticket. Then in October 2011, Best Buy launched an app for its CinemaNow service. In April 2012, Amazon.com launched an Amazon Video app, accessible to Amazon Prime subscribers (in the US).
Upon reviewing the PlayStation and Netflix collaboration, Pocket-Lint said "We've used the Netflix app on Xbox too and, as good as it is, we think the PS3 version might have the edge here." and stated that having Netflix and LoveFilm on PlayStation is "mind-blowingly good."
In July 2013, YuppTV OTT player launched its branded application on the PS3 computer entertainment system in the United States.
Audio capabilities
The PlayStation 3 has the ability to play standard audio CDs, a feature that was notably removed from its successors. PlayStation 3 added the ability to rip audio CDs and store them on the system's hard disk; the system has transcoders for ripping to either MP3, AAC, or Sony's own ATRAC (ATRAC3plus) formats. Early models were also able to play back Super Audio CDs; however, this support was dropped in the third generation revision of the console from late 2007. All models do, however, retain Direct Stream Digital playback ability.
ATRAC formatted tracks from Walkman digital audio players can be natively played on the PlayStation 3 by connecting the player to the system's USB port. The PlayStation 3 did not feature the Sony CONNECT Music Store.
OtherOS support
PlayStation 3 initially shipped with the ability to install an alternative operating system alongside the main system software; Linux and other Unix-based operating systems were available. The hardware allowed access to six of the seven Synergistic Processing Elements of the Cell microprocessor, but not the RSX 'Reality Synthesizer' graphics chip.
The 'OtherOS' functionality was not present in the updated PS3 Slim models, and the feature was subsequently removed from previous versions of the PS3 as part of firmware update version 3.21, released on April 1, 2010; Sony cited security concerns as the rationale. The firmware update 3.21 was mandatory for access to the PlayStation Network. The removal caused some controversy, as the update removed officially advertised features from already sold products, and gave rise to several class action lawsuits aimed at making Sony return the feature or provide compensation.
On December 8, 2011, U.S. District Judge Richard Seeborg dismissed the last remaining count of the class action lawsuit (other claims in the suit had previously been dismissed), stating: "As a legal matter, ... plaintiffs have failed to allege facts or articulate a theory on which Sony may be held liable."
The U.S. Court of Appeals for the Ninth Circuit later partially reversed the dismissal and sent the case back to the district court.
Leap year bug
On March 1, 2010 (UTC), many of the original "fat" PlayStation 3 models worldwide were experiencing errors related to their internal system clock. The error had many symptoms. Initially, the main problem seemed to be the inability to connect to the PlayStation Network. However, the root cause of the problem was unrelated to the PlayStation Network, since even users who had never been online also had problems playing installed offline games (which queried the system timer as part of startup) and using system themes. At the same time, many users noted that the console's clock had gone back to December 31, 1999. The event was nicknamed the ApocalyPS3, a play on the word apocalypse and PS3, the abbreviation for the PlayStation 3 console.
The error code displayed was typically 8001050F and affected users were unable to sign in, play games, use dynamic themes and view/sync trophies. The problem only resided within the first- through third-generation original PS3 units while the newer "Slim" models were unaffected because of different internal hardware for the clock.
Sony confirmed that there was an error and stated that it was narrowing down the issue and continuing to work to restore service. By March 2, 2010 (UTC), owners of original PS3 models could connect to PSN successfully and the clock no longer showed December 31, 1999. Sony stated that the affected models incorrectly identified 2010 as a leap year, because of a bug in the BCD method of storing the date. However, for some users, the hardware's operating system clock (mainly updated from the internet and not associated with the internal clock) needed to be updated manually or by re-syncing it via the internet.
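Sony did not publish the faulty code, but the explanation most often given in community analyses is that the clock hardware stores the year in binary-coded decimal (BCD) while the leap-year check read the value as plain binary: 2010 is stored as the BCD byte 0x10, which equals 16 in binary, and 16 is divisible by 4, so 2010 was flagged as a leap year. The sketch below only illustrates that hypothesis; the function and variable names are invented and do not reflect the actual firmware.

/* leapyear_bcd.c - hypothetical sketch of a BCD year being misread as binary. */
#include <stdio.h>

static unsigned char year_bcd = 0x10;   /* last two digits of 2010, BCD-encoded */

static int is_leap_buggy(unsigned char y)
{
    /* Bug: treats the BCD byte as a binary number. 0x10 == 16, and 16 % 4 == 0. */
    return (y % 4) == 0;
}

static int is_leap_fixed(unsigned char y)
{
    /* Fix: decode the BCD byte first (high nibble = tens, low nibble = ones). */
    int year = 2000 + ((y >> 4) * 10) + (y & 0x0F);
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(void)
{
    printf("buggy check says 2010 is leap: %d\n", is_leap_buggy(year_bcd));  /* prints 1 */
    printf("fixed check says 2010 is leap: %d\n", is_leap_fixed(year_bcd));  /* prints 0 */
    return 0;
}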
On June 29, 2010, Sony released PS3 system software update 3.40, which improved the functionality of the internal clock to properly account for leap years.
Features
PlayStation Portable connectivity
PlayStation Portable can connect with PlayStation 3 in many ways, including in-game connectivity. For example, Formula One Championship Edition, a racing game, was shown at E3 2006 using a PSP as a real-time rear-view mirror. In addition, users are able to download original PlayStation format games from the PlayStation Store, transfer and play them on PSP as well as PS3 itself. It is also possible to use the Remote Play feature to play these and some PlayStation Network games, remotely on PSP over a network or internet connection.
Sony has also demonstrated PSP playing back video content from PlayStation 3 hard disk across an ad hoc wireless network. This feature is referred to as Remote Play located under the browser icon on both PlayStation 3 and PlayStation Portable. Remote play has since expanded to allow remote access to PS3 via PSP from any wireless access point in the world.
PlayStation Network
PlayStation Network is the unified online multiplayer gaming and digital media delivery service provided by Sony Computer Entertainment for PlayStation 3 and PlayStation Portable, announced during the 2006 PlayStation Business Briefing meeting in Tokyo. The service is always connected, free, and includes multiplayer support. The network enables online gaming, the PlayStation Store, PlayStation Home and other services. PlayStation Network uses real currency and PlayStation Network Cards as seen with the PlayStation Store and PlayStation Home.
PlayStation Plus
PlayStation Plus (commonly abbreviated PS+ and occasionally referred to as PSN Plus) is a premium PlayStation Network subscription service that was officially unveiled at E3 2010 by Jack Tretton, President and CEO of SCEA. Rumors of such service had been in speculation since Kaz Hirai's announcement at TGS 2009 of a possible paid service for PSN but with the current PSN service still available. Launched alongside PS3 firmware 3.40 and PSP firmware 6.30 on June 29, 2010, the paid-for subscription service provides users with enhanced services on the PlayStation Network, on top of the current PSN service which is still available with all of its features. These enhancements include the ability to have demos and game updates download automatically to PlayStation 3. Subscribers also get early or exclusive access to some betas, game demos, premium downloadable content and other PlayStation Store items. North American users also get a free subscription to Qore. Users may choose to purchase either a one-year or a three-month subscription to PlayStation Plus.
PlayStation Store
The PlayStation Store is an online virtual market available to users of Sony's PlayStation 3 (PS3) and PlayStation Portable (PSP) game consoles via the PlayStation Network. The Store offers a range of downloadable content both for purchase and available free of charge. Available content includes full games, add-on content, playable demos, themes and game and movie trailers. The service is accessible through an icon on the XMB on PS3 and PSP. The PS3 store can also be accessed on PSP via a Remote Play connection to PS3. The PSP store is also available via the PC application, Media Go. In total, there have been over 600 million downloads from the PlayStation Store worldwide.
The PlayStation Store is updated with new content each Tuesday in North America, and each Wednesday in PAL regions. In May 2010 this was changed from Thursdays to allow PSP games to be released digitally, closer to the time they are released on UMD.
On March 29, 2021, Sony announced that it would shut down the PS3 version of the Store on July 2, though previous purchases on the store would remain downloadable. However, on April 19, following fan feedback, Sony reversed its decision and confirmed that the PS3 store would remain operational.
What's New
What's New was announced at Gamescom 2009 and was released on September 1, 2009, with PlayStation 3 system software 3.0. The feature was to replace the existing [Information Board], which displayed news from the PlayStation website associated with the user's region. The concept was developed further into a major PlayStation Network feature, which interacts with the [Status Indicator] to display a ticker of all content, excluding recently played content (currently in North America and Japan only).
The system displays the What's New screen by default instead of the [Games] menu (or [Video] menu, if a movie was inserted) when starting up. What's New has four sections: "Our Pick", "Recently Played", latest information and new content available in PlayStation Store. There are four kinds of content that the What's New screen displays and links to within these sections. "Recently Played" displays the user's recently played games and online services only, whereas the other sections can contain website links, links to play videos and access to selected sections of the PlayStation Store.
The PlayStation Store icons in the [Game] and [Video] section act similarly to the What's New screen, except that they only display and link to games and videos in the PlayStation Store, respectively.
PlayStation Home
PlayStation Home was a virtual 3D social networking service for the PlayStation Network. Home allowed users to create a custom avatar, which could be groomed realistically. Users could edit and decorate their personal apartments, avatars, or club houses with free, premium, or won content. Users could shop for new items or win prizes from PS3 games, or Home activities. Users could interact and connect with friends and customize content in a virtual world. Home also acted as a meeting place for users that wanted to play multiplayer video games with others.
A closed beta began in Europe in May 2007 and expanded to other territories soon after. Home was delayed and expanded several times before its initial release. The open beta test started on December 11, 2008. It remained a perpetual beta until its closure on March 31, 2015. Home was available directly from the PlayStation 3 XrossMediaBar. Membership was free but required a PSN account.
Home featured places to meet and interact, dedicated game spaces, developer spaces, company spaces, and events. The service underwent a weekly maintenance and frequent updates. At the time of its closure in March 2015, Home had been downloaded by over 41 million users.
Life with PlayStation
Life with PlayStation, released on September 18, 2008 to succeed Folding@home, was retired on November 6, 2012. Life with PlayStation used virtual globe data to display news and information by city. Along with Folding@home functionality, the application provided access to three other information "channels", the first being the Live Channel, which offered news headlines and weather provided by Google News, The Weather Channel, the University of Wisconsin–Madison Space Science and Engineering Center, among other sources. The second channel was the World Heritage channel, which offered information about historic sites. The third channel was the United Village channel; United Village was designed to share information about communities and cultures worldwide. An update allowed video and photo viewing in the application. The fourth channel was the U.S.-exclusive PlayStation Network Game Trailers Channel for direct streaming of game trailers.
Outage
On April 20, 2011, Sony shut down the PlayStation Network and Qriocity for a prolonged interval, revealing on April 23 that this was due to "an external intrusion on our system". Sony later revealed that the personal information of 77 million users might have been taken, including: names; addresses; countries; email addresses; birthdates; PSN/Qriocity logins, passwords and handles/PSN online IDs. It also stated that it was possible that users' profile data, including purchase history and billing address, and PlayStation Network/Qriocity password security answers may have been obtained. There was no evidence that any credit card data had been taken, but the possibility could not be ruled out, and Sony advised customers that their credit card data may have been obtained. Additionally, the credit card numbers were encrypted and Sony never collected the three digit CVC or CSC number from the back of the credit cards which is required for authenticating some transactions. In response to the incident, Sony announced a "Welcome Back" program, 30 days free membership of PlayStation Plus for all PSN members, two free downloadable PS3 games, and a free one-year enrollment in an identity theft protection program.
Sales and production costs
Although its PlayStation predecessors had been very dominant against the competition and were hugely profitable for Sony, PlayStation 3 had an inauspicious start, and Sony chairman and CEO Sir Howard Stringer initially could not convince investors of a turnaround in its fortunes. The PS3 lacked the unique gameplay of the more affordable Wii which became that generation's most successful console in terms of units sold. Furthermore, PS3 had to compete directly with Xbox 360 which had a market head start, and as a result the platform no longer had exclusive titles that the PS2 enjoyed such as the Grand Theft Auto and Final Fantasy series (regarding cross-platform games, Xbox 360 versions were generally considered superior in 2006, although by 2008 the PS3 versions had reached parity or surpassed), and it took longer than expected for PS3 to enjoy strong sales and close the gap with Xbox 360. Sony also continued to lose money on each PS3 sold through 2010, although the redesigned "slim" PS3 cut these losses.
PlayStation 3's initial production cost is estimated by iSuppli to have been US$805.85 for the 20 GB model and US$840.35 for the 60 GB model. However, they were priced at US$499 and US$599, respectively, meaning that units may have been sold at an estimated loss of $306 or $241 depending on model, if the cost estimates were correct, and thus may have contributed to Sony's games division posting an operating loss of ¥232.3 billion (US$1.97 billion) in the fiscal year ending March 2007. In April 2007, soon after these results were published, Ken Kutaragi, President of Sony Computer Entertainment, announced plans to retire. Various news agencies, including The Times and The Wall Street Journal reported that this was due to poor sales, while SCEI maintains that Kutaragi had been planning his retirement for six months prior to the announcement.
In January 2008, Kaz Hirai, CEO of Sony Computer Entertainment, suggested that the console may start making a profit by early 2009, stating that, "the next fiscal year starts in April and if we can try to achieve that in the next fiscal year that would be a great thing" and that "[profitability] is not a definite commitment, but that is what I would like to try to shoot for". However, market analysts Nikko Citigroup have predicted that PlayStation 3 could be profitable by August 2008. In a July 2008 interview, Hirai stated that his objective is for PlayStation 3 to sell 150 million units by its ninth year, surpassing PlayStation 2's sales of 140 million in its nine years on the market. In January 2009 Sony announced that their gaming division was profitable in Q3 2008.
After the system's launch, production costs were reduced significantly as a result of phasing out the Emotion Engine chip and falling hardware costs. The cost of manufacturing Cell microprocessors had fallen dramatically as a result of moving to the 65 nm production process, and Blu-ray Disc diodes had become cheaper to manufacture. As of January 2008, each unit cost around $400 to manufacture; by August 2009, Sony had reduced costs by a total of 70%, meaning it only cost Sony around $240 per unit.
Critical reception
Early PlayStation 3 reviews after launch were critical of its high price and lack of quality games. Game developers regarded the architecture as difficult to program for. PS3 was, however, commended for its hardware including its Blu-ray home theater capabilities and graphics potential.
Critical and commercial reception to PS3 improved over time, after a series of price revisions, Blu-ray's victory over HD DVD, and the release of several well received titles. Ars Technica's original launch review gave PS3 only a 6/10, but a second review of the console in June 2008 rated it a 9/10. In September 2009, IGN named PlayStation 3 the 15th-best gaming console of all time, behind both of its competitors: Wii (10th) and Xbox 360 (6th). However, PS3 has won IGN's "Console Showdown"—based on which console offers the best selection of games released during each year—in three of the four years since it began (2008, 2009 and 2011, with Xbox winning in 2010). IGN judged PlayStation 3 to have the best game line-up of 2008, based on their review scores in comparison to those of Wii and Xbox 360. In a comparison piece by PC Magazine's Will Greenwald in June 2012, PS3 was selected as an overall better console compared to Xbox 360.
Pocket-Lint said of the console "The PS3 has always been a brilliant games console," and that "For now, this is just about the best media device for the money."
Original model
PS3 was given the number-eight spot on PC World magazine's list of "The Top 21 Tech Screwups of 2006", where it was criticized for being "Late, Expensive and Incompatible". GamesRadar ranked PS3 as the top item in a feature on game-related PR disasters, asking how Sony managed to "take one of the most anticipated game systems of all time and—within the space of a year—turn it into a hate object reviled by the entire internet", but added that despite its problems the system has "untapped potential". Business Week summed up the general opinion by stating that it was "more impressed with what the PlayStation 3 could do than with what it currently does".
Developers also found the machine difficult to program for. In 2007, Gabe Newell of Valve said "The PS3 is a total disaster on so many levels, I think it's really clear that Sony lost track of what customers and what developers wanted". He continued "I'd say, even at this late date, they should just cancel it and do a do over. Just say, 'This was a horrible disaster and we're sorry and we're going to stop selling this and stop trying to convince people to develop for it'". Doug Lombardi VP of Marketing for Valve has since stated that Valve is interested in developing for the console and is looking to hire talented PS3 programmers for future projects. He later restated Valve's position, "Until we have the ability to get a PS3 team together, until we find the people who want to come to Valve or who are at Valve who want to work on that, I don't really see us moving to that platform". At Sony's E3 2010 press conference, Newell made a live appearance to recant his previous statements, citing Sony's move to make the system more developer-friendly, and to announce that Valve would be developing Portal 2 for the system. He also claimed that the inclusion of Steamworks (Valve's system to automatically update their software independently) would help to make the PS3 version of Portal 2 the best console version on the market.
Activision Blizzard CEO Bobby Kotick has criticized PS3's high development costs and inferior attach rate and return to that of Xbox 360 and Wii. He believes these factors are pushing developers away from working on the console. In an interview with The Times Kotick stated "I'm getting concerned about Sony; the PlayStation 3 is losing a bit of momentum and they don't make it easy for me to support the platform." He continued, "It's expensive to develop for the console, and the Wii and the Xbox are just selling better. Games generate a better return on invested capital (ROIC) on the Xbox than on the PlayStation." Kotick also claimed that Activision Blizzard may stop supporting the system if the situation is not addressed. "[Sony has] to cut the [PS3's retail] price, because if they don't, the attach rates are likely to slow. If we are being realistic, we might have to stop supporting Sony." Kotick received heavy criticism for the statement, notably from developer BioWare who questioned the wisdom of the threatened move, and referred to the statement as "silly."
Despite the initial negative press, several websites have given the system very good reviews mostly regarding its hardware. CNET United Kingdom praised the system saying, "the PS3 is a versatile and impressive piece of home-entertainment equipment that lives up to the hype [...] the PS3 is well worth its hefty price tag." CNET awarded it a score of 8.8 out of 10 and voted it as its number one "must-have" gadget, praising its robust graphical capabilities and stylish exterior design while criticizing its limited selection of available games. In addition, both Home Theater Magazine and Ultimate AV have given the system's Blu-ray playback very favorable reviews, stating that the quality of playback exceeds that of many current standalone Blu-ray Disc players.
In an interview, Kazuo Hirai, chairman of Sony Computer Entertainment argued for the choice of a complex architecture. Hexus Gaming reviewed the PAL version and summed the review up by saying, "as the PlayStation 3 matures and developers start really pushing it, we'll see the PlayStation 3 emerge as the console of choice for gaming." At GDC 2007, Shiny Entertainment founder Dave Perry stated, "I think that Sony has made the best machine. It's the best piece of hardware, without question".
Slim model and rebranding
The PlayStation 3 Slim received extremely positive reviews as well as a boost in sales; less than 24 hours after its announcement, PS3 Slim took the number-one bestseller spot on Amazon.com in the video games section for fifteen consecutive days. It regained the number-one position again one day later. PS3 Slim also received praise from PC World giving it a 90 out of 100 praising its new repackaging and the new value it brings at a lower price as well as praising its quietness and the reduction in its power consumption. This is in stark contrast to the original PS3's launch in which it was given position number-eight on their "The Top 21 Tech Screwups of 2006" list.
CNET awarded PS3 Slim four out of five stars praising its Blu-ray capabilities, 120 GB hard drive, free online gaming service and more affordable pricing point, but complained about the lack of backward compatibility for PlayStation 2 games. TechRadar gave PS3 Slim four and a half stars out of five praising its new smaller size and summed up its review stating "Over all, the PS3 Slim is a phenomenal piece of kit. It's amazing that something so small can do so much". However, they criticized the exterior design and the build quality in relation to the original model.
Eurogamer called it "a product where the cost-cutting has—by and large—been tastefully done" and said "It's nothing short of a massive win for Sony."
Super Slim model
The Super Slim model of PS3 has received positive reviews. Gaming website Spong praised the new Super Slim's quietness, stating "The most noticeable noise comes when the drive seeks a new area of the disc, such as when starting to load a game, and this occurs infrequently." They added that the fans are quieter than those of Slim, and went on to praise the new smaller, lighter size.
Criticism was directed at the new disc loader, with the review stating: "The cover can be moved by hand if you wish, there's also an eject button to do the work for you, but there is no software eject from the triangle button menus in the Xross Media Bar (XMB) interface. In addition, you have to close the cover by hand, which can be a bit fiddly if it's upright, and the PS3 won't start reading a disc unless you do [close the cover]." They also said there is no real drop in retail price.
Tech media website CNET gave the new Super Slim 4 out of 5 stars ("Excellent"), saying "The Super Slim PlayStation 3 shrinks a powerful gaming machine into an even tinier package while maintaining the same features as its predecessors: a great gaming library and a strong array of streaming services [...]", whilst also criticising the "cheap" design and disc-loader, stating: "Sometimes [the cover] doesn't catch and you feel like you're using one of those old credit card imprinter machines. In short, it feels cheap. You don't realize how convenient autoloading disc trays are until they're gone. Whether it was to cut costs or save space, this move is ultimately a step back." Criticism was also directed at the price, as the cheapest Super Slim model was still more expensive than the cheapest Slim model, and the smaller size and bigger hard drive shouldn't be considered an upgrade when the hard drive on a Slim model is easily removed and replaced. They did praise that the hard drive of the Super Slim model is "the easiest yet. Simply sliding off the side panel reveals the drive bay, which can quickly be unscrewed." They also stated that whilst the Super Slim model is not in any way an upgrade, it could be an indicator as to what's to come. "It may not be revolutionary, but the Super Slim PS3 is the same impressive machine in a much smaller package. There doesn't seem to be any reason for existing PS3 owners to upgrade, but for the prospective PS3 buyer, the Super Slim is probably the way to go if you can deal with not having a slot-loading disc drive."
Pocket-Lint gave the Super Slim a very positive review, saying "It's much more affordable, brilliant gaming, second-to-none video and media player." They think it is "A blinding good console and one that will serve you for years to come with second-hand games and even new releases. Without doubt, if you don't have a PS3, this is the time to buy." They gave the Super Slim 4-and-a-half stars out of 5.
Technology magazine T3 gave the Super Slim model a positive review, stating the console is almost "nostalgic" in the design similarities to the original "fat" model, "While we don't know whether it will play PS3 games or Blu-ray discs any differently yet, the look and feel of the new PS3 Slim is an obvious homage to the original PS3, minus the considerable excess weight. Immediately we would be concerned about the durability of the top loading tray that feels like it could be yanked straight out off the console, but ultimately it all feels like Sony's nostalgic way of signing off the current-generation console in anticipation for the PS4."
Notes
References
External links
Official websites
Asia
Australia
Canada
New Zealand
United Kingdom
United States
Auxiliary sites by Sony
Hardware press images
User's guide
Directories
2000s toys
2010s toys
2006 in video gaming
2015 disestablishments in New Zealand
2017 disestablishments in Japan
Backward-compatible video game consoles
Blu-ray Disc
Cell BE architecture
Discontinued products
Computer-related introductions in 2006
Home video game consoles
PlayStation (brand)
Products introduced in 2006
Products and services discontinued in 2017
Regionless game consoles
Seventh-generation video game consoles
Sony consoles |
8973605 | https://en.wikipedia.org/wiki/ThalesRaytheonSystems | ThalesRaytheonSystems | Thales-Raytheon Systems Company LLC (ThalesRaytheonSystems or TRS) is an aerospace and defence company co-headquartered in Massy, Paris, France and Fullerton, California, United States. It is operated as a 50:50 joint venture between Raytheon and Thales Group.
ThalesRaytheon was formed in June 2001 for the purpose of combining the two firms' product lines in radar, Command, Control, Communications, Computers and Intelligence (C4I), and aerospace control systems. In addition to the supply of various defense electronics systems, the company provides maintenance, repair, logistic support, technical assistance, software support, overhaul, and system life extension services to end users.
History
On 16 December 2000, French electronics and defense specialist Thales S.A. and American defense contractor Raytheon Company announced the creation of ThalesRaytheon Systems. Prior to this, the two companies had already cooperated on 17 individual programs of various sizes; they both possessed extensive military electronics product lines that generated between $500 million and $700 million in annual revenue by 2000. Upon its formation, ThalesRaytheon Systems held roughly 40 percent of the global market for air defense command and control centers, air defense radars and battlefield surveillance systems.
In March 2002, Thales Raytheon Systems received a contract valued at roughly $99 million from the French defence procurement agency (DGA) to deliver the first version of the mobile command and control centre (C3M); this system formed the deployable component of France’s Air Command and Control System.
During 2008, American defense conglomerate Lockheed Martin entered into a commercial relationship with ThalesRaytheonSystems to deliver NATO's Theater Missile Defence capability. During July 2018, Lockheed Martin and ThalesRaytheonSystems signed a teaming agreement to deepen their cooperation and to jointly pursue the provision of a territorial Ballistic Missile Defence (BMD) command and control capability to NATO member states.
In February 2011, ThalesRaytheonSystems, was awarded a contract by NATO ACCS Management Agency (NACMA) to implement various enhancements to the Air Command and Control System (ACCS) as a part of the Active Layered Theatre Ballistic Missile Defence (ALTBMD). The improvements included sensor and weapon system configuration, management and coverage, air and missile track processing, dissemination, classification, display and alerting modifications. Four years later, the firm was awarded a follow-on contract covering further functionality improvements by NATO.
During June 2016, ThalesRaytheonSystems was restructured at the direction of its parent companies to focus solely on NATO agencies and NATO member nations for the delivery of the Air Command and Control System, Theatre Missile Defense, and Ballistic Missile Defense. The joint venture's former business in ground-based radars and non-ACCS-related air command and control systems transitioned back to the parent companies.
In July 2019, Italian aerospace conglomerate Leonardo S.p.A. signed a memorandum of understanding with ThalesRaytheonSystems to cooperate on air command and control systems (ACCS) for NATO customers.
References
Avionics companies
Defence companies of France
Raytheon Company
Thales Group joint ventures
Defense companies of the United States |
8595920 | https://en.wikipedia.org/wiki/Key%20wrap | Key wrap | In cryptography, key wrap constructions are a class of symmetric encryption algorithms designed to encapsulate (encrypt) cryptographic key material. Key wrap algorithms are intended for applications such as protecting keys while in untrusted storage or transmitting keys over untrusted communications networks. The constructions are typically built from standard primitives such as block ciphers and cryptographic hash functions.
Key Wrap may be considered as a form of key encapsulation algorithm, although it should not be confused with the more commonly known asymmetric (public-key) key encapsulation algorithms (e.g., PSEC-KEM). Key Wrap algorithms can be used in a similar application: to securely transport a session key by encrypting it under a long-term encryption key.
Background
In the late 1990s, the National Institute of Standards and Technology (NIST) posed the "Key Wrap" problem: to develop secure and efficient cipher-based key encryption algorithms. The resulting algorithms would be formally evaluated by NIST, and eventually approved for use in NIST-certified cryptographic modules. NIST did not precisely define the security goals of the resulting algorithm, and left further refinement to the algorithm developers. Based on the resulting algorithms, the design requirements appear to be (1) confidentiality, (2) integrity protection (authentication), (3) efficiency, (4) use of standard (approved) underlying primitives such as the Advanced Encryption Standard (AES) and the Secure Hash Algorithm (SHA-1), and (5) consideration of additional circumstances (e.g., resilience to operator error, low-quality random number generators). Goals (3) and (5) are particularly important, given that many widely deployed authenticated encryption algorithms (e.g., AES-CCM) are already sufficient to accomplish the remaining goals.
Several constructions have been proposed. These include:
AES Key Wrap Specification (November 2001)
Implemented by the WebCrypto subtle API.
American Standards Committee ANSX9.102, which defines four algorithms:
AESKW (a variant of the AES Key Wrap Specification)
TDKW (similar to AESKW, built from Triple DES rather than AES).
AKW1 (TDES, two rounds of CBC)
AKW2 (TDES, CBC then CBC-MAC)
Each of the proposed algorithms can be considered as a form of authenticated encryption algorithm providing confidentiality for highly entropic messages such as cryptographic keys. The AES Key Wrap Specification, AESKW, TDKW, and AKW1 are intended to maintain confidentiality under adaptive chosen ciphertext attacks, while the AKW2 algorithm is designed to be secure only under known-plaintext (or weaker) attacks. (The stated goal of AKW2 is for use in legacy systems and computationally limited devices where use of the other algorithms would be impractical.) AESKW, TDKW and AKW2 also provide the ability to authenticate a cleartext "header", an associated block of data that is not encrypted.
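The following minimal sketch illustrates this confidentiality-plus-integrity behaviour using the AES Key Wrap construction as implemented by the pyca/cryptography Python package; the choice of library and the key sizes are assumptions made for illustration only, not part of the standards themselves. Tampering with the wrapped blob causes unwrapping to fail.

    import os
    from cryptography.hazmat.primitives import keywrap

    kek = os.urandom(32)          # long-term key-encryption key (AES-256)
    session_key = os.urandom(16)  # high-entropy key material to be protected

    # Wrap (encrypt and integrity-protect) the session key under the KEK.
    wrapped = keywrap.aes_key_wrap(kek, session_key)

    # Unwrapping with the same KEK recovers the original key material.
    assert keywrap.aes_key_unwrap(kek, wrapped) == session_key

    # Any modification of the wrapped blob is detected when unwrapping.
    tampered = bytes([wrapped[0] ^ 0x01]) + wrapped[1:]
    try:
        keywrap.aes_key_unwrap(kek, tampered)
    except keywrap.InvalidUnwrap:
        print("tampering detected")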
Rogaway and Shrimpton evaluated the design of the ANSX9.102 algorithms with respect to the stated security goals. Among their general findings, they noted the lack of clearly stated design goals for the algorithms, and the absence of security proofs for all constructions.
In their paper, Rogaway and Shrimpton proposed a provable key-wrapping algorithm (SIV, the Synthetic Initialization Vector mode) that encrypts and authenticates an arbitrary string, and additionally authenticates, but does not encrypt, associated data which can be bound into the wrapped key. This has been standardized as a new AES mode.
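As a sketch of how an SIV-style scheme can bind associated data into a wrapped key, the following example assumes a recent release of the pyca/cryptography package that exposes an AESSIV class (older releases do not include it); the header strings and key size are illustrative assumptions only.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESSIV

    key = AESSIV.generate_key(512)   # AES-SIV key material (two 256-bit subkeys)
    siv = AESSIV(key)

    payload = os.urandom(32)                         # key material to wrap
    header = [b"key-id: example", b"purpose: demo"]  # authenticated, not encrypted

    wrapped = siv.encrypt(payload, header)
    # Decryption fails if either the wrapped blob or the header is altered.
    assert siv.decrypt(wrapped, header) == payload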
See also
Authenticated encryption
Deterministic encryption
Key management
Offline private key protocol
Further reading
P. Rogaway, T. Shrimpton. A Provable-Security Treatment of the Key-Wrap Problem.
NIST, AES Key Wrap Specification (November 2001)
NIST Special Publication 800-38F, Recommendation for Block Cipher Modes of Operation: Methods for Key Wrapping (December 2012)
American Standards Committee, Request for Review of Key Wrap Algorithms
References
Cryptographic algorithms |
193105 | https://en.wikipedia.org/wiki/PowerPC%207xx | PowerPC 7xx | The PowerPC 7xx is a family of third generation 32-bit PowerPC microprocessors designed and manufactured by IBM and Motorola (now Freescale Semiconductor). This family is called the PowerPC G3 by its well-known customer Apple Inc., which introduced it on November 10, 1997. The term "PowerPC G3" is often, and incorrectly, taken to refer to a single microprocessor, when in fact a number of microprocessors from different vendors have been used. Such designations were applied to Macintosh computers such as the PowerBook G3, the multicolored iMacs, iBooks and several desktops, including both the Beige and Blue and White Power Macintosh G3s. The low power requirements and small size made the processors ideal for laptops, and the name lived out its last days at Apple in the iBook.
The 7xx family is also widely used in embedded devices like printers, routers, storage devices, spacecraft, and video game consoles. The 7xx family had its shortcomings, namely a lack of SMP support, no SIMD capabilities, and a relatively weak FPU. Motorola's 74xx range of processors picked up where the 7xx left off.
Processors
PowerPC 740/750
The PowerPC 740 and 750 (codename Arthur) were introduced in late 1997 as an evolutionary replacement for the PowerPC 603e. Enhancements included a faster 60x system bus (66 MHz), larger L1 caches (32 KB instruction and 32 KB data), a second integer unit, an enhanced floating point unit, and higher core frequency. The 750 had support for an optional 256, 512 or 1024 KB external unified L2 cache. The cache controller and cache tags are on-die. The cache was accessed via a dedicated 64-bit bus.
The 740 and 750 added dynamic branch prediction and a 64-entry branch target instruction cache (BTIC). Dynamic branch prediction uses the recorded outcome of a branch stored in a 512-entry by 2-bit branch history table (BHT) to predict its outcome. The BTIC caches the first two instructions at a branch target.
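The behaviour of such a 2-bit scheme can be illustrated with a small simulation. The sketch below is a generic saturating-counter predictor written for explanation only; the 512-entry table size follows the description above, but the indexing scheme and initial counter values are simplifying assumptions rather than details of the actual 740/750 hardware.

    # Illustrative 2-bit saturating-counter branch predictor (not the actual 750 logic).
    BHT_ENTRIES = 512

    class TwoBitPredictor:
        def __init__(self):
            # Each entry holds a counter 0-3; 0/1 predict not-taken, 2/3 predict taken.
            self.bht = [1] * BHT_ENTRIES

        def _index(self, branch_address):
            return (branch_address >> 2) % BHT_ENTRIES  # simplified table index

        def predict(self, branch_address):
            return self.bht[self._index(branch_address)] >= 2

        def update(self, branch_address, taken):
            i = self._index(branch_address)
            self.bht[i] = min(3, self.bht[i] + 1) if taken else max(0, self.bht[i] - 1)

    # A loop branch taken nine times and then falling through is mispredicted only twice.
    predictor = TwoBitPredictor()
    mispredictions = 0
    for taken in [True] * 9 + [False]:
        if predictor.predict(0x1000) != taken:
            mispredictions += 1
        predictor.update(0x1000, taken)
    print(mispredictions)  # prints 2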
The 740/750 models had 6.35 million transistors and were initially manufactured by IBM and Motorola in an aluminium based fabrication process. The die measured 67 mm2 at 0.26 μm and it reached speeds of up to 366 MHz while consuming 7.3 W.
In 1999, IBM fabricated versions in a 0.20 μm process with copper interconnects, which increased the frequency up to 500 MHz and decreased power consumption to 6 W and the die size to 40 mm2.
The 740 slightly outperformed the Pentium II while consuming far less power and with a smaller die. The off-die L2 cache of the 750 increased performance by approximately 30% in most situations. The design was so successful that it quickly surpassed the PowerPC 604e in integer performance, causing a planned 604 successor to be scrapped.
The PowerPC 740 is completely pin compatible with the older 603, allowing upgrades to the PowerBook 1400, 2400, and even a prototype PowerBook 500/G3. The 750 with its L2 cache bus required more pins and thus a different package, a 360-pin ball grid array (BGA).
The PowerPC 750 was used in many computers from Apple, including the original iMac.
RAD750
The RAD750 is a radiation-hardened processor, based on the PowerPC 750. It is intended for use in high radiation environments such as experienced on board satellites and other spacecraft. The RAD750 was released for purchase in 2001. The Mars Science Laboratory (Curiosity), Mars Reconnaissance Orbiter, and Mars 2020 (Perseverance) spacecraft have a RAD750 on board.
The processor has 10.4 million transistors, is manufactured by BAE Systems using either 250 or 150 nm process and has a die area of 130 mm². It operates at 110 to 200 MHz. The CPU itself can withstand 200,000 to 1,000,000 Rads and temperature ranges between −55 and 125 °C.
The RAD750 packaging and logic functions have a price tag in excess of US$200,000; the high price is mainly due to radiation-hardening revisions to the PowerPC 750 architecture and manufacturing, stringent quality control requirements, and extended testing of each processor chip manufactured.
PowerPC 745/755
Motorola revised the 740/750 design in 1998 and shrank the die size to 51 mm2 thanks to a newer aluminium-based fabrication process at 0.22 μm. Speeds increased to up to 600 MHz. The 755 was used in some iBook models. After this model, Motorola chose not to keep developing the 750 processors in favour of their PowerPC 7400 processor and other cores.
PowerPC 750CX
IBM continued to develop the PowerPC 750 line and introduced the PowerPC 750CX (code-named Sidewinder) in 2000. It has a 256 KiB on-die L2 cache; this increased performance while reducing power consumption and complexity. At 400 MHz, it drew under 4 W. The 750CX had 20 million transistors including its L2 cache. It had a die size of 43 mm2 through a 0.18 μm copper process. The 750CX was only used in one iMac and iBook revision.
PowerPC 750CXe
The 750CXe (codename Anaconda), introduced in 2001, was a minor revision of the 750CX which increased its frequency up to 700 MHz and its memory bus from 100 MHz to 133 MHz. The 750CXe also featured improved floating-point performance over the 750CX. Several iBook models and the last G3-based iMac used this processor.
A cost reduced version of 750CXe, called 750CXr, is available at lower frequencies.
Gekko
Gekko is the custom central processor for the Nintendo GameCube game console. It is based on a PowerPC 750CXe and adds about 50 new instructions as well as a modified FPU capable of some SIMD functionality. It has 256 KiB of on die L2 cache, operates at 485 MHz with a 162 MHz memory bus, is fabricated by IBM on a 180 nm process. The die is 43 mm2 large.
PowerPC 750FX
The 750FX (code-named Sahara) came in 2002 and increased frequency up to 900 MHz, the bus speed to 166 MHz and the on-die L2 cache to 512 KiB. It also featured a number of improvements to the memory subsystem: an enhanced and faster (200 MHz) 60x bus controller, a wider L2 cache bus, and the ability to lock parts of the L2 cache. It is manufactured using a 0.13 μm copper based fabrication with Low-K dielectric and Silicon on insulator technology. 750FX has 39 million transistors, a die size of 35 mm2 and consumes less than 4 W at 800 MHz at typical loads. It was the last G3 type processor used by Apple (employed on the iBook G3).
A low powered version of 750FX is available called 750FL.
The 750FX powers NASA's Orion Multi-Purpose Crew Vehicle. Orion uses a Honeywell International Inc. flight computer originally built for Boeing's 787 jet airliner.
PowerPC 750GX
750GX (codenamed Gobi), revealed in 2004, was a 7xx processor from IBM. It has an on-die 1 MB L2 cache, a top frequency of 1.1 GHz, and support for bus speeds up to 200 MHz among other enhancements compared to 750FX. It is manufactured using a 0.13 μm process with copper interconnects, low-K dielectric, and silicon on insulator technology. The 750GX has 44 million transistors, a die size of 52 mm2 and consumes less than 9 W at 1 GHz at typical loads.
A low-power version of the 750GX is available, called the 750GL.
PowerPC 750VX
The 750VX (codenamed "Mojave") is a rumored, never-confirmed and ultimately canceled version of the 7xx line. It would have been the most powerful and most fully featured version to date, with up to 4 MB of off-die L3 cache, a 400 MHz DDR front side bus and the same implementation of AltiVec used in the PowerPC 970. It was expected to clock as high as 1.8 GHz (starting at 1.5 GHz) and was reported to have additional pipeline stages and advanced power management features. It was reported to be finished and ready for production in December 2003, but this timing was too late for it to attract significant orders, as Apple's iBook line had switched to G4s in October of the same year, and it quickly fell off the roadmap. It was never released and has not been heard of since.
There were follow-up chips planned, such as the 750VXe, which would have surpassed 2 GHz.
PowerPC 750CL
The 750CL is an evolved 750CXe, with speeds ranging from 400 MHz to 1 GHz and a system bus of up to 240 MHz; L2 cache prefetch features and graphics-related instructions have been added to improve performance. As the added graphics-related functions closely match the ones found in the Gekko processor, it is very likely that the 750CL is a shrink of the same processor for general-purpose use. The 750CL is manufactured using a 90 nm copper-based fabrication with Low-K dielectric and Silicon on insulator technology and features 20 million transistors on a 16 mm2 die. It draws up to 2.7 W at 600 MHz and 9.8 W at 1 GHz.
Broadway
The CPU in the Wii is virtually identical to the 750CL, but it runs at 729 MHz, a frequency not supported by the stock 750CL. It measures only 4.2 x 4.5 mm (18.9 mm2), less than half the size of the "Gekko" microprocessor (43 mm2) incorporated in the GameCube at its first release.
Espresso
The CPU in the Wii U is believed to be an evolution of the Broadway architecture. Its largely unconfirmed characteristics are a triple-core design running at 1.24 GHz, manufactured on a 45 nm process.
Future
IBM has ceased to publish a roadmap for the 750 family, in favor of marketing itself as a custom processor vendor. Given IBM's resources, the 750 core will be produced with new features as long as there is a willing buyer. In particular, IBM has no public plans to produce an ordinary 750-based microprocessor in a process smaller than 90 nm, effectively phasing it out as a commodity chip competitive in such markets as networking equipment. However, IBM did make the Espresso processor for Nintendo, which is a 750-based design with improvements such as multiprocessor support (the part is a triple core), a new 45 nm fabrication process and eDRAM instead of regular L2 cache; it is unknown whether further changes were made to the design.
In 2015 Rochester Electronics started providing legacy support for the devices.
Freescale has discontinued all 750 designs in favor of designs based on the PowerPC e500 core (PowerQUICC III).
Device list
This is a complete list of known 750-based designs.
See also
iMac G3, the first model of the iMac line of personal computers made by Apple Computer, Inc.
iBook G3, the first two models of the iBook line of personal computers made by Apple, later replaced by the white MacBook (non-Pro); it was the last mass-produced personal computer to use the G3 (discontinued October 2003).
PowerBook G3, a line of laptop Macintosh computers made by Apple Computer between 1997 and 2000.
Power Macintosh G3, commonly called "beige G3s" or "platinum G3s" for the color of their cases, was a series of personal computers designed, manufactured, and sold by Apple Computer, Inc. from November 1997 to January 1999
Power Macintosh G3 (Blue & White), a series (commonly known as the "Blue and White G3", or sometimes just the "B&W G3", to distinguish it from the original Power Macintosh G3) of personal computers designed, manufactured and sold by Apple Computer Inc. as part of their Power Macintosh line
Nintendo GameCube, a sixth-generation game console sold by Nintendo.
Nintendo Wii, a seventh-generation game console sold by Nintendo.
Nintendo Wii U, an eighth-generation game console sold by Nintendo.
Notes
References
Gwennap, Linley (17 February 1997). "Arthur Revitalizes PowerPC Line". Microprocessor Report. pp. 10–13.
7xx
G3
IBM microprocessors
Motorola microprocessors
Freescale microprocessors
Superscalar microprocessors
32-bit microprocessors |
34173929 | https://en.wikipedia.org/wiki/National%20Dong%20Hwa%20University | National Dong Hwa University | National Dong Hwa University (NDHU) is a national research university located in Hualien, Taiwan. Established in 1994, NDHU is widely considered a high-potential research university and the most prestigious university in Eastern Taiwan by Liberty Times, THE, QS, and US News. The university offers a wide range of disciplines, including the sciences, engineering, computer science, environmental studies, law, arts, design, humanities, social sciences, education sciences, marine science, music, and business.
NDHU is renowned for its liberal atmosphere and rigorous academics. It is organized into eight colleges, 38 academic departments, and 56 graduate institutes, which enroll about 10,000 undergraduate and graduate students, including over 1,000 international students pursuing degrees or joining exchange programs. The NDHU Library holds more than two million volumes and is the eighth-largest academic library in Taiwan. The University's main campus is located in Shoufeng, in the northern half of Hualien County. Encompassing an area of , the main campus houses almost all colleges and research institutes except the College of Marine Science, which was jointly founded with the National Museum of Marine Biology and Aquarium.
In 2021, NDHU was ranked among the top 10% of universities in Taiwan by THE, QS, and U.S. News, and was rated as leading in Taiwan in Computer Science, Electrical & Electronic Engineering, and Hospitality & Tourism Management by THE and ARWU.
History
Foundation
National Dong Hwa University was established in 1994 in Shoufeng. As the first university established after the full democratization of Taiwan, NDHU adopted "Freedom, Democracy, Creativity, Excellence" as its founding spirit to reflect the notable timing of its establishment. The establishment of NDHU attracted many notable Taiwanese scholars who had served as professors, department chairs, or vice-chancellors at universities in the United States to work at NDHU, such as Yang Mu, Distinguished Professor of Comparative Literature at the University of Washington; Mu Tzung-Tsann, vice-chancellor of California State University, Los Angeles; Cheng Chih-Ming, Professor of Economics at Georgia Institute of Technology; Zheng Qing Mao, Professor of East Asia Studies at the University of California, Berkeley; Kuo Syhyen, Professor of Electrical & Computer Engineering at the University of Arizona; and Chiao Chien, Professor of Anthropology at Indiana University Bloomington.
NDHU was the first university in Taiwan to offer master's degrees in Environmental Policy, Recreation & Tourism Management, Natural Resources Management, Ethnic Relations & Cultures, Indigenous Arts, Indigenous Development, Creative Writing, and Global Logistics Management. In 2000, NDHU established the first College of Indigenous Studies in Taiwan, which is commonly regarded as the leading institution for Indigenous Studies in Asia.
In 2005, NDHU entered into an academic partnership with the National Museum of Marine Biology and Aquarium (NMMBA), the most notable institution dedicated to public education and research in marine biology in Taiwan, to jointly establish the College of Marine Sciences and the Graduate Institute of Marine Biology in Kenting National Park, Checheng, Pingtung.
National Hualien University of Education (1947-2008)
The National Hualien University of Education (commonly known as Hua-Shih) was established in 1947 in Hualien City, Hualien, as the Taiwan Provincial Hualien Normal School (TPHNS). In 1949, Hua-Shih established the Affiliated Primary School of Taiwan Provincial Hualien Normal School to provide a training ground for the School's students.
Following the rapidly rising demand for teachers to serve compulsory education, the school was renamed Taiwan Provincial Hualien Junior Teachers' College in 1964 and National Hualien Teachers' College in 1987, and was granted university status in 2005. Hua-Shih was the first institution in Taiwan to offer an MEd and PhD in Multicultural Education, and one of four institutions in Taiwan to offer an MS and PhD in Science Education.
Prior to the merger with NDHU, Hua-Shih had earned a reputation as one of the leading and most prestigious institutions for teacher education in Taiwan. It has cultivated many notable alumni, many of whom are educators at Taiwan's leading institutions, such as Tsai Ping-kun, principal of Taipei Municipal Jianguo High School and Taichung Municipal Taichung First Senior High School, and Tang Chih-Min, the founding principal of the Affiliated Senior High School of National Chengchi University.
National Dong Hwa University
In 2008, with 2.5 billion in support from the Ministry of Education, National Dong Hwa University merged with the National Hualien University of Education, creating the university with the fifth-widest range of disciplines in Taiwan, and renamed the newly integrated College of Education the Hua-Shih College of Education in memory of the sixty years of dedication to education by the National Hualien University of Education.
Meanwhile, NDHU established the College of The Arts, the first art school in Eastern Taiwan, and the College of Environmental Studies, the first college in Taiwan dedicated to sustainability and environmental management through an interdisciplinary approach.
On its 2020 anniversary day, NDHU signed the first "Commitment to Sustainable Development Goals" among universities in Taiwan, announcing its determination to work toward a sustainable world.
The fully integrated NDHU consolidated its reputation in many disciplines and its prestige in Eastern Taiwan, as reflected in its rankings and other accolades.
Campuses
National Dong Hwa University's three campuses are known for their natural settings.
Shoufeng
NDHU's main campus, the Shoufeng campus, is situated in the rural town of Shoufeng, Hualien. It is located on the Papaya Creek alluvial plain of the East Rift Valley, ringed by the Central Mountain Range and the Coastal Mountain Range, about south of Taroko National Park, south of Hualien City, and north of the Tropic of Cancer.
NDHU's campus is Taiwan's largest flat-land university campus, and was designed in a postmodern style by Charles Moore, dean of the Yale School of Architecture.
The Shoufeng campus is renowned for its distinctive architecture and natural setting. The academic area is at the center of the campus, containing the library, art museum, concert hall, arts workshop, and academic and research buildings. The NDHU Concert Hall within the College of The Arts is the largest performing arts hall in Eastern Taiwan.
Two dormitory communities are adjacent to the academic area, one on the east and the other on the west. Athletic facilities surround the academic area, including a stadium, swimming pools, an athletic field, a baseball field, basketball courts, tennis courts, volleyball courts, kayaking facilities, and facilities for Project Adventure.
In 2016, NDHU launched a solar university project, constructing its first batch of rooftop photovoltaic solar panels. As of 2021, solar energy covers 40% of NDHU's annual electricity usage, the best performance achieved by any university in Taiwan.
Meilun
The Meilun campus was the main campus of the National Hualien University of Education before it merged with NDHU. The campus is located in Hualien City's Meilun district near Qixingtan Beach on the Pacific Ocean. In 2020, NDHU collaborated with Lee Yuen-Cheng, founder of the Boston International Experimental Education Institution and an alumnus of NDHU, to establish the first international school within the campus, the Hualien International School.
Pingtung
NDHU's College of Marine Sciences is based at the National Museum of Marine Biology and Aquarium (NMMBA) in Checheng Township, Pingtung County, standing in Kenting National Park and facing the Taiwan Strait to the west. The National Museum of Marine Biology and Aquarium is a well-known marine museum for academic research and marine education in Taiwan.
Academic organization
NDHU serves over 10,000 students and confers undergraduate, master's and doctoral degrees in a comprehensive range of fields. The university's 39 departments and 56 graduate institutes are organized into 8 colleges, alongside over 70 research centers.
It comprises the College of Humanities and Social Sciences (CHASS), the College of Science and Engineering (CSAE), the College of Management (CMGT), the Hua-Shih College of Education (HSCE), the College of The Arts (ARTS), the College of Indigenous Studies (CIS), the College of Environmental Studies (CES), and the College of Marine Sciences (CMS).
College of Humanities and Social Sciences
NDHU's College of Humanities and Social Sciences (CHASS) was founded by Yang Mu, Professor Emeritus at the University of Washington and founding dean at the Hong Kong University of Science and Technology. CHASS supports interdisciplinary academic training and research in fields including law, economics, history, sociology, psychology, Taiwan and regional studies, public administration, creative writing, Sinophone literature, Chinese classical literature, and English literature.
NDHU's M.F.A. in Creative Writing was the first, and is the most recognized, MFA program in the Chinese-speaking world, and has cultivated many outstanding emerging literary talents in Taiwan.
The College offers four PhD programs, a Ph.D. in Teaching Chinese as a Second Language (TCASL), a Ph.D. in Asia-Pacific Regional Studies (APRS), a Ph.D. in Economics, and a Ph.D. in Chinese Literature, and has over 10 research centers. The European Union Research Centre (EURC) is funded by the European Union (EU) and is one of the seven university centers allied with the European Union Center in Taiwan (EUTW) to facilitate academic exchange between Eastern Taiwan and the European Union.
College of Science and Engineering
The College of Science and Engineering (CSAE) was founded by Hsia Yu-Ping, Chair Professor at the California Institute of Technology and Yale University, and is one of the two largest of the eight colleges within NDHU.
NDHU CSAE has 8 departments: Applied Mathematics (AM), Physics (PHYS), Life Science (LS), Chemistry (CHEM), Electrical Engineering (EE), Computer Science and Information Engineering (CSIE), Materials Science and Engineering (MSE), and Opto-Electronic Engineering (OEE). The College provides more than 40 degree programs at the bachelor's, master's and doctoral levels.
In 2021, the Times Higher Education World University Rankings by Subject ranked NDHU No.5 in Computer Science, No.8 in Engineering, and No.8 in Physical Sciences in Taiwan, the best position ever achieved by any university in Eastern Taiwan.
The ARWU Global Ranking of Academic Subjects ranked NDHU No.4 in Electrical & Electronic Engineering in Taiwan, behind only National Taiwan University (NTU), National Yang Ming Chiao Tung University (NYCU), and National Tsing Hua University (NTHU).
College of Management
The NDHU College of Management was founded as the Graduate Institute of Business Administration in 1994. It has six academic departments and a graduate institute: Business Administration, International Business, Finance, Accounting, Information Management, Logistics Management, and Tourism, Recreation, and Leisure Studies (TRLS). The College offers undergraduate, MBA, MIM, EMBA, MSc, and PhD programs, as well as dual degrees with overseas partner universities.
In 2021, the ARWU Global Ranking of Academic Subjects ranked the NDHU College of Management 101st–150th in the world for Hospitality & Tourism Management, holding the same status as Indiana University Bloomington, the University of Illinois at Urbana-Champaign, and the University of Ottawa.
Hua-Shih College of Education
Hua-Shih College of Education (Hua-Shih) traces its roots back to the Taiwan Provincial Hualien Normal School, established in 1947 as one of nine schools dedicated exclusively to early childhood and primary education in contemporary Taiwan. Hua-Shih was the first institution in Taiwan to offer an MEd and PhD in Multicultural Education, the most recognized such program in Taiwan. The College became one of the 8 colleges at NDHU in 2008. Nowadays, Hua-Shih offers over 20 programs, including the BEd, MEd, MSc, and PhD, in Curriculum and Instruction, Early Childhood Education, Educational Administration, Special Education, Physical Education and Kinesiology, Multicultural Education, and Science Education.
College of The Arts
The College of The Arts (ARTS) is the first art school to be established in Eastern Taiwan and is organized into three departments: Music, Arts and Design, and Arts and Creative Industry.
The College offers seven programs (B.M., B.A., B.F.A., M.M., M.A., and M.F.A.) in Creative Design, Studio Art, Indigenous Art, Visual Art Education, Creative Industry Management, Music Performance, and Music Education.
College of Indigenous Studies
The College of Indigenous Studies (CIS) was the first of its kind in Taiwan and is the most renowned institution for Indigenous Studies in Asia. Established in 2001, NDHU CIS traces its roots back to the Graduate Institute of Ethnic Relations and Cultures, founded in 1995 by Chiao Chien, Professor of Anthropology at Indiana University Bloomington and founding chair of Anthropology at the Chinese University of Hong Kong.
The College was the first institution in Taiwan to grant degrees in Ethnic Relations and Cultures (ERC), Indigenous Arts, Indigenous Development, Indigenous Affairs, Indigenous Social Work (ISW), and Indigenous Language and Communication (ILC), and offers over 10 programs (B.A., B.S.S., B.S.W., B.I.A., M.S.S., M.S.W., and Ph.D.) in these disciplines.
College of Environmental Studies
The College of Environmental Studies (CES) was established in 2009 by merging five graduate institutes, Natural Resources and Management, Environmental Policy, Ecological and Environmental Education, Earth Science, and Biological Resources and Technology, into a school of environmental studies.
NDHU CES emphasizes an interdisciplinary, collaborative approach to solving environmental issues through its single department, the Department of Natural Resources and Environmental Studies (NRES).
The College offers four programs, a BSc, MSc and PhD in Natural Resources and Environmental Studies (NRES) and an MSc in Humanity and Environmental Science (HES) jointly with the College of Humanities and Social Sciences (CHASS) and the College of Indigenous Studies (CIS), and has five research centers: the Center for Interdisciplinary Research on Ecology and Sustainability (CIRES), the Center for Disaster Prevention Research (CDPR), the Environmental Education Center (EEC), the Campus Center for the Environment (CCE), and the Eastern Taiwan Earthquake Research Center (ETERC).
College of Marine Sciences
The College of Marine Sciences is a graduate school at NDHU that was founded in 2005 as an academic collaboration with the National Museum of Marine Biology and Aquarium (NMMBA) in Kenting National Park, the first academic collaboration between a university and a museum in Taiwan. The College established the Graduate Institute of Marine Biology (GIMB), which offers three programs: an MSc in Marine Biotechnology, an MSc in Marine Biodiversity and Evolutionary Biology, and a PhD in Marine Biology.
Reputation and rankings
Research
National Dong Hwa University is considered one of the top research impact institutions in Taiwan. NDHU is ranked the 7th-greatest research impact university in Taiwan on the CNCI Index, a research impact evaluation undertaken by the Ministry of Education (MOE).
NDHU holds the largest research impact in Computer Science (No.1 in citations in Taiwan, 149th in the world) and a strong impact in Engineering (No.3 in citations in Taiwan) in the Times Higher Education World University Rankings.
The University Ranking by Academic Performance ranked NDHU No.1 in Taiwan for research impact in Information & Computer Science. The Academic Ranking of World Universities ranked NDHU No.5 in Taiwan for Electrical & Electronic Engineering (401st–500th in the world) and No.5 in Taiwan for Hospitality & Tourism Management (101st–150th in the world).
NDHU's master's program in Ethnic Relations and Cultures is selected as a Fulbright Program, which is considered a premier program in Ethnic & Culture Studies in Asia by the Institute of International Education and is funded by the U.S. government.
Admission
Admission to the undergraduate program Rift Valley Interdisciplinary Shuyuen (縱谷跨域書院) is among the most selective in Taiwan. For the Fall 2021 intake, NDHU accepted students at an admission rate of just 7.1%.
Partnerships
NDHU has academic partnerships in teaching and research with more than 440 universities across the Americas, Asia, Oceania, Europe, the Middle East and Africa, including the University of Edinburgh in Edinburgh, the University of California, San Diego in San Diego, Purdue University in West Lafayette, Freie Universitaet Berlin in Berlin, the University of Mannheim in Mannheim, the University of New South Wales in Sydney, the Tokyo Institute of Technology in Tokyo, Peking University in Beijing, and Fudan University in Shanghai.
NDHU runs two overseas Chinese Language Centers, at Howard University and Oakland University in the United States, with support from the Ministry of Education (MOE) and the Ministry of Foreign Affairs (MOFA), to bring high-quality Mandarin education to the partner universities. NDHU is also one of nine universities selected for the "Taiwan-Europe Connectivity Scholarship" by MOFA to expand academic cooperation and Mandarin education with European universities, and one of five universities selected, together with National Taiwan University and National Taiwan Normal University, for the "Africa Elite Talent Cultivation Program" by the MOE to expand Taiwan's academic impact in Africa.
NDHU has further established academic partnerships with research institutes, including Academia Sinica, the National Museum of Marine Biology and Aquarium, the National Center for Research on Earthquake Engineering, the Central Geological Survey, the Central Weather Bureau, the European Union Centre in Taiwan, the Defence Institute of Advanced Technology, and the Polish Academy of Sciences.
Rankings
NDHU is commonly regarded as a top-10% university in Taiwan by the Times Higher Education World University Rankings (THE), the QS World University Rankings (QS), and the U.S. News Global University Rankings (U.S. News).
University Rankings
Top 5 High Potential University in Taiwan by THE Young University Ranking
In 2021, NDHU was ranked as a top-5 high-potential university in Taiwan and 251st–300th in the world by the Times Higher Education Young University Rankings.
Top 10 Universities in Taiwan by THE World University Ranking
In 2020, NDHU was ranked No.10 in Taiwan by the Times Higher Education World University Rankings. In terms of research citations and international outlook, NDHU was ranked No.9 and No.7 in Taiwan, respectively, by the 2021 Times Higher Education World University Rankings.
Top 7 National University in Taiwan
NDHU was ranked the No.7 national university in the "Top 10 National Universities" evaluation conducted among the presidents of higher education institutions in Taiwan.
Field Rankings
Top 4 in Electrical & Electronic Engineering in Taiwan by ARWU
NDHU's Electrical & Electronic Engineering (E&EE) program is ranked No.4 in Taiwan and 401st–500th in the world by the ARWU Global Ranking of Academic Subjects, holding equal status with Rice University, the University of Missouri, and the University of Illinois at Chicago.
Top 5 in Computer Science in Taiwan by THE
NDHU's Computer Science program is ranked No.5 in Taiwan and 301st–400th in the world by the THE World University Rankings 2022 by Subject, holding equal status with McMaster University, the University of Florida, and Case Western Reserve University. In terms of research citations, NDHU is ranked No.1 as the most influential university in Computer Science in Taiwan.
Top 7 Engineering Schools in Taiwan by THE
NDHU's Engineering programs are ranked No.7 in Taiwan and 600th–800th in the world by the THE World University Rankings 2022 by Subject.
Top 4 in Hospitality & Tourism Management in Taiwan by ARWU
NDHU's Hospitality & Tourism Management program is ranked No.4 in Taiwan and 101st–150th in the world by the ARWU Global Ranking of Academic Subjects, holding equal status with the University of Illinois Urbana-Champaign, Indiana University Bloomington, and the University of Ottawa.
Best 6 Universities in Humanity, Social Science, Law, Business in Taiwan by GVM
NDHU was selected as one of the "Best 6 Universities in Humanities, Social Science, Law, and Business (文法商)" by Global Views Monthly (GVM; 遠見雜誌), the best position achieved by any university in Eastern Taiwan.
Other Accolades
Top 10 Premier Mandarin Center in Taiwan
In 2021, NDHU was selected as a Top 10 Premier Mandarin Training Center by the Ministry of Foreign Affairs (MOFA) and the Ministry of Education (MOE) to jointly promote international cooperation projects in Mandarin education in the United States and Europe. NDHU further established Mandarin Language Centers within Howard University and Oakland University, selecting qualified Mandarin teachers to promote Mandarin education at the partner universities.
Asia Leadership and Management Team of the Year by THE Awards Asia
In 2019, NDHU received the "Asia Leadership and Management Team of the Year" award at the THE Awards Asia 2019, an award category that also included the National University of Singapore and the National University of Malaysia.
Alumni
T.H. Tung (Honorary Doctorate Degree), incumbent chairperson of Pegatron, co-founder of ASUS Computer Inc. and its former vice chairman.
Yoga Lin, Taiwanese pop singer
Gan Yao-ming, Taiwanese novelist and essayist
Fu Kun-chi, Legislator and former Magistrate of Hualien County.
Tsai Ping-kun, Deputy Mayor of Taipei, former Deputy Mayor of Taichung, Deputy Minister of Culture in Taiwan, and Principal of Taipei Municipal Jianguo High School and Taichung Municipal Taichung First Senior High School.
Tang Chih-Min, Founding Principal of Affiliated Senior High School of National Chengchi University, Distinguished Professor of Graduate Institute of Administration and Policy at National Chengchi University, Director of Department of Education at Taipei City Government.
Wu Wu-Hsiung, 13th Principal of Taipei Municipal Jianguo High School, the 1st high school in Taiwan.
Lin Da-kuei, Vice President at KPMG Cybersecurity in Taiwan.
Lin Li-yu, Founder of Chatime, a leading international teahouse chain.
Wang Ching-chi, President at Château Hotels & Resorts.
Chen Sao-liang, President at Taiwan International Ports Corporation.
Liu Wei-zhi, Director of the Board at Advantech, the No.1 worldwide industrial PC and industrial IoT & platform solution provider.
Wang Chia-ching, Vice President at Google Inc. in Taiwan.
Lin Allen, Assistant Vice President at Foxconn.
Wang Cheng-pang, a member of Chinese Taipei's silver medal-winning men's archery team at the 2004 Summer Olympics.
See also
List of universities in Taiwan
EUTW university alliance
Notes
References
External links
National Dong Hwa University
Universities and colleges in Taiwan
Universities and colleges in Hualien County
Universities and colleges in Pingtung County
Educational institutions established in 1994
Hualien City
1994 establishments in Taiwan |
32736 | https://en.wikipedia.org/wiki/OpenVMS | OpenVMS | OpenVMS, often referred to as just VMS, is a multi-user, multiprocessing virtual memory-based operating system designed to support time-sharing, batch processing, transaction processing and workstation applications. It was first announced by Digital Equipment Corporation (DEC) as VAX/VMS (Virtual Address eXtension/Virtual Memory System) alongside the VAX-11/780 minicomputer in 1977. OpenVMS has subsequently been ported to run on DEC Alpha systems, the Itanium-based HPE Integrity Servers, and select x86-64 hardware and hypervisors. Since 2014, OpenVMS is developed and supported by a company named VMS Software Inc. (VSI).
OpenVMS offers high availability through clustering and the ability to distribute the system over multiple physical machines. This allows clustered applications and data to remain continuously available while operating system software and hardware maintenance and upgrades are performed, or when a whole data center is destroyed. VMS cluster uptimes of 17 years have been reported. Customers using OpenVMS include banks and financial services, hospitals and healthcare, telecommunications operators, network information services, and industrial manufacturers. During the 1990s and 2000s, there were approximately half a million VMS systems in operation worldwide.
History
Origin and name changes
In April 1975, Digital Equipment Corporation embarked on a hardware project, code named Star, to design a 32-bit virtual address extension to its PDP-11 computer line. A companion software project, code named Starlet, was started in June 1975 to develop a totally new operating system, based on RSX-11M, for the Star family of processors. These two projects were tightly integrated from the beginning. Gordon Bell was the VP lead on the VAX hardware and its architecture. Roger Gourd was the project lead for the Starlet program, with software engineers Dave Cutler (who would later lead development of Microsoft's Windows NT), Dick Hustvedt, and Peter Lipman acting as the technical project leaders, each having responsibility for a different area of the operating system. The Star and Starlet projects culminated in the VAX-11/780 computer and the VAX/VMS operating system. The Starlet name survived in VMS as a name of several of the main system libraries, including STARLET.OLB and STARLET.MLB.
In September 1984, Digital created a dedicated distribution of VMS named MicroVMS for the MicroVAX and VAXstation, which had significantly less memory and disk space than larger VAX systems of the time. MicroVMS split up VAX/VMS into multiple kits, which a customer could use to install a subset of VAX/VMS tailored to their specific requirements. MicroVMS also differed by various simplifications to the setup and management of the operating system, and came with a condensed documentation set. MicroVMS kits were released on TK50 tapes and RX50 floppy disks, corresponding to VAX/VMS V4.0 to V4.7. MicroVMS was merged back into VAX/VMS in the V5.0 release, by which time the ability to customize a VAX/VMS installation had advanced to a point where MicroVMS became redundant.
Beginning in 1989, a short lived distribution of VMS named Desktop-VMS was sold with VAXstation systems. It consisted of a single CD-ROM containing a bundle of VMS, DECwindows, DECnet, VAXcluster support, and a setup process designed for non-technical users. Desktop-VMS could either be run directly from the CD, or could be installed onto a hard drive. Desktop-VMS had its own versioning scheme beginning with V1.0, which corresponded to the V5.x releases of VMS.
With the V5.0 release in April 1988, DEC began to refer to VAX/VMS as simply VMS in its documentation. In July 1992, DEC renamed VAX/VMS to OpenVMS as an indication for its support of "open systems" industry standards such as POSIX and Unix compatibility, and to drop the VAX connection since the port to Alpha was underway. The OpenVMS name was first used with the OpenVMS AXP V1.0 release in November 1992. DEC began using the OpenVMS VAX name with the V6.0 release in June 1993.
Port to DEC Alpha
During the 1980s, DEC planned to replace the VAX platform and the VMS operating system with the PRISM architecture and the MICA operating system. When these projects were cancelled in 1988, a team was set up to design new VAX/VMS systems of comparable performance to RISC-based Unix systems. After a number of failed attempts to design a faster VAX-compatible processor, the group demonstrated the feasibility of porting VMS and its applications to a RISC architecture based on PRISM. This led to the creation of the Alpha architecture. The project to port VMS to Alpha began in 1989, and first booted on a prototype Alpha EV3-based Alpha Demonstration Unit in early 1991. Prior to the availability of Alpha hardware, the Alpha port was developed and booted on an emulator named Mannequin, which implemented many of the Alpha instructions in custom microcode on a VAX 8800 system.
The main challenge in porting VMS to a new architecture was that VMS and the VAX were designed together, meaning that VMS was dependent on certain details of the VAX architecture. Furthermore, a significant amount of the VMS kernel, layered products, and customer-developed applications were implemented in VAX MACRO assembly code. Some of the changes needed to decouple VMS from the VAX architecture included:
The creation of the MACRO-32 compiler, which treated VAX MACRO as a high-level language, and compiled it to Alpha object code.
The creation of a VAX to Alpha binary translator, known as the VAX Environment Software Translator (VEST), which was capable of translating VAX executables when it was not possible to recompile the code for Alpha.
The emulation of certain low-level details of the VAX architecture in PALcode, such as interrupt handling and atomic queue instructions. This decreased the amount of VAX-dependent code which had to be rewritten for Alpha.
The conversion of the VMS compilers, many of which had their own bespoke VAX code generators, to use a common compiler backend named GEM.
The VMS port to Alpha resulted in the creation of two separate source code libraries (based on a source code management tool known as the VMS Development Environment, or VDE) for VAX and for Alpha. The Alpha code library was based on a snapshot of the VAX/VMS code base circa V5.4-2. 1992 saw the release of the first version of OpenVMS for Alpha AXP systems, designated OpenVMS AXP V1.0. In 1994, with the release of OpenVMS V6.1, feature (and version number) parity between the VAX and Alpha variants was achieved; this was the so-called Functional Equivalence release. The decision to use the 1.x version numbering stream for the pre-production quality releases of OpenVMS AXP caused confusion for some customers, and was not repeated in the subsequent ports of OpenVMS to new platforms.
When VMS was ported to Alpha, it was initially left as a 32-bit only operating system. This was done to ensure backwards compatibility with software written for the 32-bit VAX. 64-bit addressing was first added for Alpha in the V7.0 release. In order to allow 64-bit code to interoperate with older 32-bit code, OpenVMS does not create a distinction between 32-bit and 64-bit executables, but instead allows for both 32-bit and 64-bit pointers to be used within the same code. This is known as mixed pointer support. The 64-bit OpenVMS Alpha releases support a maximum virtual address space size of 8TiB (a 43-bit address space), which is the maximum supported by the Alpha 21064 and Alpha 21164.
One of the more noteworthy Alpha-only features of OpenVMS was OpenVMS Galaxy - which allowed the partitioning of a single SMP server to run multiple instances of OpenVMS. Galaxy supported dynamic resource allocation to running partitions, and the ability to share memory between partitions.
Port to Intel Itanium
In 2001, prior to its acquisition by Hewlett-Packard, Compaq announced the port of OpenVMS to the Intel Itanium architecture. The Itanium port was the result of Compaq's decision to discontinue future development of the Alpha architecture in favour of adopting the then-new Itanium architecture. The porting began in late 2001, and the first boot took place on 31 January 2003. The first boot consisted of booting a minimal system configuration on a HP i2000 workstation, logging in as the SYSTEM user, and running the DIRECTORY command. The Itanium port of OpenVMS supports specific models and configurations of HPE Integrity Servers. The Itanium releases were originally named HP OpenVMS Industry Standard 64 for Integrity Servers, although the names OpenVMS I64 or OpenVMS for Integrity Servers are more commonly used.
The Itanium port was accomplished using source code maintained in common within the OpenVMS Alpha source code library, with the addition of conditional code and additional modules where changes specific to Itanium were required. Whereas the VAX and Alpha architectures were specifically designed to support the low-level needs of OpenVMS, Itanium was not. This required certain architectural dependencies of OpenVMS to be replaced, or emulated in software. Some of the changes included:
The Extensible Firmware Interface (EFI) is used to boot OpenVMS on Integrity hardware, taking over the role of the System Reference Manual (SRM) firmware on Alpha. Support for ACPI was also added to OpenVMS, since this is used to discover and manage hardware devices on the Integrity platform.
For Itanium, the functionality which was implemented using PALcode for Alpha was moved into a component of the OpenVMS kernel named the Software Interrupt Services (SWIS).
The Itanium port adopted a new calling standard based on Intel's Itanium calling convention, with extensions to support the OpenVMS Common Language Environment. Furthermore, it replaced the OpenVMS-specific executable formats used on the VAX and Alpha with the standard Executable and Linking Format (ELF) and DWARF formats.
IEEE 754 was adopted as the default floating point format, replacing the VAX floating point format that was the default on both the VAX and Alpha architectures. For backwards compatibility, it is possible to compile code on Itanium to use the VAX floating point format, but it relies on software emulation.
The operating system was modified to support the 50-bit physical addressing available on Itanium, allowing 1PiB of memory to be addressed. The Itanium port otherwise retained the mixed 32-bit/64-bit pointer architecture which was introduced in OpenVMS Alpha V7.0.
As with the VAX to Alpha port, a binary translator for Alpha to Itanium was made available, allowing user mode OpenVMS Alpha software to be ported to Itanium in situations where it was not possible to recompile the source code. This translator is known as the Alpha Environment Software Translator (AEST), and it also supported translating VAX executables which had already been translated with VEST.
Two pre-production releases, OpenVMS I64 V8.0 and V8.1, were made available on June 30, 2003 and December 18, 2003. These releases were intended for HP organizations and third-party vendors involved with porting software packages to OpenVMS I64. The first production release, V8.2, was released in February 2005. V8.2 was also released for Alpha, and subsequent V8.x releases of OpenVMS have maintained feature parity between the Alpha and Itanium architectures.
Port to x86-64
When VMS Software Inc. (VSI) announced that they had secured the rights to develop the OpenVMS operating system from HP, they also announced their intention to port OpenVMS to the x86-64 architecture. The porting effort ran concurrently with the establishment of the company, as well as the development of VSI's own Itanium and Alpha releases of OpenVMS V8.4-x.
The x86-64 port is targeted for specific servers from HPE and Dell, as well as certain virtual machine hypervisors. Initial support was targeted for KVM and VirtualBox. Support for VMware was announced in 2020, and Hyper-V has been described as a future target. In 2021, the x86-64 port was demonstrated running on an Intel Atom-based single-board computer.
The x86-64 port is built from the same source code library as the Alpha and Itanium architectures, using conditional compilation to manage the architecture-specific code needed to support the x86-64 platform. As with the Alpha and Itanium ports, the x86-64 port made some changes to simplify porting and supporting OpenVMS on the new platform:
VSI adopted the open source LLVM compiler backend, replacing the proprietary GEM backend used in the Alpha and Itanium ports. A translator was developed to map the GEM IR to LLVM IR, allowing the existing compiler frontends to be reused. In addition, the open source Clang compiler was adopted as the officially supported C++ compiler for OpenVMS under x86-64.
On x86-64, OpenVMS makes more extensive use of UEFI and ACPI to detect and initialize hardware on boot. As part of this, VMS is now booted from a memory disk, instead of the traditional VMS boot mechanism – which relied on boot drivers containing a basic implementation of the filesystem, and which was tied to specific hardware devices. The changes to the boot process necessitated the creation of a Dump Kernel – this is a secondary kernel which is loaded in the background at boot time, and is invoked in case OpenVMS needs to write a crash dump to disk.
OpenVMS assumes the presence of four hardware-provided privilege levels to provide isolation between user applications, and various parts of the operating system. While x86-64 nominally provides four privilege levels, they are only equivalent to two of the privilege levels on the VAX, Alpha and Itanium. In the x86-64 port, the Software Interrupt Services (SWIS) module of the kernel is extended to emulate the missing privilege levels.
As with the Itanium port, the calling standard for x86-64 is an extension of the platform's standard calling convention, specifically the System V AMD64 ABI. Certain characteristics of the x86-64 architecture created challenges for defining a suitable calling standard. For example, due to the small number of general purpose registers for x86-64, the MACRO-32 compiler has to store the contents of the emulated VAX registers in an in-memory "pseudo registers" structure instead of using the processor's hardware registers as is done on Alpha and Itanium.
The first boot was announced on 14 May 2019. This involved booting OpenVMS on VirtualBox, and successfully running the DIRECTORY command. Later in 2019, the first "real boot" was announced - this consisted of the operating system booting in a completely standard manner, a user logging into the system, and running the DIRECTORY command. In May 2020, the V9.0 Early Adopter's Kit release was made available to a small number of customers. This consisted of the OpenVMS operating system running in a VirtualBox VM with certain limitations - most significantly, few layered products were available, and code could only be compiled for x86-64 using cross compilers running on Itanium-based OpenVMS systems. Following the V9.0 release, VSI released a series of updates on a monthly or bimonthly basis which added additional functionality and hypervisor support. These were designated V9.0-A through V9.0-H. In June 2021, VSI released the V9.1 Field Test, which is available to VSI's customers and partners. V9.1 shipped as an ISO image which can be installed onto a variety of hypervisors, and onto HPE ProLiant DL380 servers starting with the V9.1-A release.
Architecture
The OpenVMS operating system has a layered architecture, consisting of a privileged Executive, a Command Language Interpreter which runs at an intermediate level of privilege, and utilities and run-time libraries (RTLs) which run in an unprivileged mode, but can potentially run at a higher level of privilege if authorized to do so. Unprivileged code typically invokes the functionality of the Executive through system services (equivalent to system calls in other operating systems).
OpenVMS' layers and mechanisms are built around certain features of the VAX architecture, including:
The availability of four processor access modes (named Kernel, Executive, Supervisor and User, in order of decreasing privilege). Each mode has its own stack, and each memory page can have memory protections specified per-mode.
A virtual address space which is partitioned between process-private space sections, and system space sections which are common to all processes.
32 interrupt priority levels which are used for synchronization.
Hardware support for delivering asynchronous system traps to processes.
These VAX architecture mechanisms are implemented on Alpha, Itanium and x86-64 by either mapping to corresponding hardware mechanisms on those architectures, or through emulation (via PALcode on Alpha, or in software on Itanium and x86-64).
Executive and Kernel
The OpenVMS Executive comprises the privileged code and data structures which reside in the system space. The Executive is further subdivided between the Kernel, which consists of the code which runs at the kernel access mode, and the less-privileged code outside of the Kernel which runs at the executive access mode.
The components of the Executive which run at executive access mode include the Record Management Services, and certain system services such as image activation. The main distinction between the kernel and executive access modes is that most of the operating system's core data structures can be read from executive mode, but require kernel mode to be written to. Code running at executive mode can switch to kernel mode at will, meaning that the barrier between the kernel and executive modes is intended as a safeguard against accidental corruption as opposed to a security mechanism.
The Kernel comprises the operating system's core data structures (e.g. page tables, the I/O database and scheduling data), and the routines which operate on these structures. The Kernel is typically described as having three major subsystems: I/O, process and time management, and memory management. In addition, other functionality such as logical name management, synchronization and system service dispatch is implemented inside the Kernel.
Extension mechanisms
OpenVMS allows user mode code with suitable privileges to switch to executive or kernel mode using the $CMEXEC and $CMKRNL system services, respectively. This allows code outside of system space to have direct access to the Executive's routines and system services. In addition to allowing third-party extensions to the operating system, Privileged Images are used by core operating system utilities to manipulate operating system data structures through undocumented interfaces.
OpenVMS also allows Shareable Images (i.e. shared libraries) to be granted privilege, allowing the creation of user-written system services, which are privileged routines that can be linked into a non-privileged program. User-written system services are invoked using the same mechanism as standard system services, which prevents the unprivileged program from gaining the privileges of the code in the Privileged Shareable Image. Despite what the name may suggest, user-written system services are also used to implement infrequently-used operating system functionality such as volume mounting.
OpenVMS provides a device driver interface, which allows support for new I/O devices to be added to the operating system.
File system
The typical user and application interface into the file system is the Record Management Services (RMS), although applications can interface directly with the underlying file system through the QIO system services. RMS supports multiple record-oriented file access methods and record formats (including fixed length, variable length, and a stream format where the file is treated as a stream of bytes, similar to Unix). RMS also supports remote file access via DECnet, and optional support for journaling.
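To illustrate the difference between record-oriented and stream access, the following is a minimal Python sketch that reads length-prefixed variable-length records from a byte stream. The little-endian 16-bit length prefix and even-byte padding are simplifying assumptions for illustration, not a precise description of RMS's on-disk layout.

```python
import io
import struct

def read_variable_records(stream):
    """Yield records from a stream of length-prefixed variable-length records.

    Assumed layout (illustrative only, not the exact RMS format): each record
    is a little-endian 16-bit byte count followed by that many bytes of data,
    padded to an even byte boundary.
    """
    while True:
        header = stream.read(2)
        if len(header) < 2:
            return                      # end of stream
        (length,) = struct.unpack("<H", header)
        data = stream.read(length)
        if length % 2:                  # skip the pad byte to stay even-aligned
            stream.read(1)
        yield data

# A stream-format file, by contrast, is just an undifferentiated sequence of
# bytes that the application splits up however it likes (as on Unix).
raw = struct.pack("<H", 5) + b"HELLO" + b"\x00" + struct.pack("<H", 2) + b"OK"
for record in read_variable_records(io.BytesIO(raw)):
    print(record)
```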
The file systems supported by VMS are referred to as the Files-11 On-Disk Structures (ODS), which provide disk quotas, access control lists and file versioning. The most significant structure levels are ODS-2, which is the original VMS file system, and ODS-5, which extends ODS-2 with support for Unicode file names, case sensitivity, hard links and symbolic links. VMS is also capable of accessing files on ISO 9660 CD-ROMs and magnetic tape with ANSI tape labels.
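Files-11's file versioning keeps multiple numbered versions of the same file (for example LOGIN.COM;1, LOGIN.COM;2, and so on). The sketch below is a simplified Python illustration of that naming convention only, not an implementation of the on-disk structures, and it ignores ODS-5 case-sensitivity options.

```python
def next_version_name(existing_names, filename):
    """Return the name a new version of `filename` would receive, following
    the Files-11 convention of a trailing ;n version number (simplified)."""
    highest = 0
    for name in existing_names:
        base, _, version = name.partition(";")
        if base.upper() == filename.upper() and version.isdigit():
            highest = max(highest, int(version))
    return f"{filename.upper()};{highest + 1}"

directory = ["LOGIN.COM;1", "LOGIN.COM;2", "NOTES.TXT;7"]
print(next_version_name(directory, "login.com"))  # LOGIN.COM;3
print(next_version_name(directory, "new.dat"))    # NEW.DAT;1
```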
Alongside the OpenVMS Alpha V7.0 release in 1995, DEC released a log-structured file system named Spiralog which was intended as a potential successor to Files-11. Spiralog shipped as an optional product, and was discontinued at the release of OpenVMS Alpha 7.2. Spiralog's discontinuation was due to a variety of problems, including issues with handling full volumes. The developers of Spiralog began work on a new file system in 1996, which was put on hold and later resumed by VSI in 2016 as the VMS Advanced File System (VAFS, not to be confused with DEC's AdvFS for Tru64). VAFS no longer appears on recent roadmaps, and instead VSI have discussed porting the open source GFS2 file system to OpenVMS. One of the major motivations for replacing the Files-11 structures is that they are limited to 2TiB volumes.
Command Language Interpreter
An OpenVMS Command Language Interpreter (CLI) implements a command line interface for OpenVMS, and is responsible for executing individual commands as well as command procedures (equivalent to shell scripts or batch files). The standard CLI for OpenVMS is the DIGITAL Command Language, although other options are available as well.
Unlike Unix shells, which typically run in their own isolated process and behave like any other user mode program, an OpenVMS CLI is an optional component of a process, which exists alongside any executable image that the process may run. Whereas a Unix shell will typically run executables by creating a separate process using fork-exec, an OpenVMS CLI will typically load the executable image into the same process, transfer control to the image, and ensure that control is transferred back to the CLI once the image has exited and that the process is returned to its original state. A CLI is mapped into a process's private address space through execution of the LOGINOUT image, which can either be executed manually, or automatically by certain system services for process creation.
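The Unix model that this contrasts with can be sketched in a few lines of Python: the shell-like parent forks a child process, the child replaces itself with the new executable image, and the parent simply waits. This is a generic illustration of fork-exec, not OpenVMS or DCL behaviour.

```python
import os
import sys

def run_in_child(program, args):
    """Run a program the Unix way: fork a new process, exec the image in the
    child, and wait for it in the parent (shell-style fork-exec)."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image entirely with the new program.
        os.execvp(program, [program] + args)
        # execvp only returns on failure.
        sys.exit(127)
    # Parent: the "shell" keeps its own process and just waits for the child.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    print("exit code:", run_in_child("ls", ["-l"]))
```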
Because the CLI is loaded into the same address space as user code, and is responsible for invoking image activation and image rundown, it is mapped into the process address space at supervisor access mode, in order to prevent accidental or malicious manipulation of the CLI's code and data structures by user mode code.
Features
Clustering
OpenVMS supports clustering (first called VAXcluster and later VMScluster), where multiple systems run their own instance of the operating system, but share disk storage, processing, a distributed lock manager, a common management and security domain, job queues and print queues, providing a single system image abstraction. The systems are connected either by proprietary specialized hardware (Cluster Interconnect) or an industry-standard Ethernet LAN. OpenVMS supports up to 96 nodes in a single cluster, and allows mixed-architecture clusters, where VAX and Alpha systems, or Alpha and Itanium systems can co-exist in a single cluster. VMS clusters allow the creation of applications which can withstand planned or unplanned outages of part of the cluster.
Networking
The DECnet protocol suite is tightly integrated into VMS, allowing remote logins, as well as transparent access to files, printers and other resources on VMS systems over a network. Modern versions of VMS support both the traditional Phase IV DECnet protocol, as well as the OSI-compatible Phase V (also known as DECnet-Plus). Support for TCP/IP is provided by the optional TCP/IP Services for OpenVMS layered product (originally known as the VMS/ULTRIX Connection, then as the ULTRIX Communications Extensions or UCX). TCP/IP Services is based on a port of the BSD network stack to OpenVMS, along with support for common protocols such as SSH, DHCP, FTP and SMTP. Because the official TCP/IP stack was not introduced until 1988, and because early versions had a limited feature set, multiple third-party TCP/IP stacks were created for VMS.
DEC sold a software package named PATHWORKS (originally known as the Personal Computer Systems Architecture or PCSA) which allowed personal computers running MS-DOS, Microsoft Windows or OS/2, or the Apple Macintosh to serve as a terminal for VMS systems, or to use VMS systems as a file or print server. PATHWORKS was based on LAN Manager and supported either DECnet or TCP/IP as a transport protocol. PATHWORKS was later renamed to Advanced Server for OpenVMS, and was eventually replaced with a VMS port of Samba at the time of the Itanium port.
DEC provided the Local Area Transport (LAT) protocol which allowed remote terminals and printers to be attached to a VMS system through a terminal server such as one of the DECserver family.
Programming
DEC (and its successor companies) provided a wide variety of programming languages for VMS. Officially supported languages on VMS, either current or historical, include:
VAX MACRO
BLISS
C
DCL
Fortran
Pascal
COBOL
BASIC
C++
Java
Common Lisp
APL
Ada
PL/I
DIBOL
CORAL
OPS5
RPG II
MUMPS
MACRO-11
DECTPU
VAX SCAN
Among OpenVMS's notable features is the Common Language Environment, a strictly defined standard that specifies calling conventions for functions and routines, including use of stacks, registers, etc., independent of programming language. Because of this, it is possible to call a routine written in one language (for example, Fortran) from another (for example, COBOL), without needing to know the implementation details of the target language. OpenVMS itself is implemented in a variety of different languages and the common language environment and calling standard supports freely mixing these languages. DEC created a tool named the Structure Definition Language (SDL), which allowed data type definitions to be generated for different languages from a common definition.
Development Tools
DEC provided a collection of software development tools in a layered product named DECset (originally named VAXset). This consisted of the Language-Sensitive Editor (LSE), a version control system (the Code Management System or CMS), a build tool (the Module Management System or MMS), a static analyzer (the Source Code Analyzer or SCA), a profiler (the Performance and Coverage Analyzer or PCA) as well as a test manager (the Digital Test Manager or DTM). In addition, a number of text editors are included in the operating system, including EDT, EVE and TECO.
The OpenVMS Debugger supports all DEC compilers and many third-party languages. It allows breakpoints, watchpoints and interactive runtime program debugging using either a command line or graphical user interface. A pair of lower-level debuggers, named DELTA and XDELTA, can be used to debug privileged code in addition to normal application code.
In 2019, VSI released an officially-supported Integrated Development Environment for VMS based on Visual Studio Code. This allows VMS applications to be developed and debugged remotely from a Microsoft Windows, macOS or Linux workstation.
Database management
DEC created a number of optional database products for VMS, some of which were marketed as the VAX Information Architecture family. These products included:
Rdb – A relational database system which originally used the proprietary Relational Data Operator (RDO) query interface, but later gained SQL support.
DBMS – A database management system which uses the CODASYL network model and Data Manipulation Language (DML).
Digital Standard MUMPS (DSM) – an integrated programming language and key-value database.
Common Data Dictionary (CDD) – a central database schema repository, which allowed schemas to be shared between different applications, and data definitions to be generated for different programming languages.
DATATRIEVE – a query and reporting tool which could access data from RMS files as well as Rdb and DBMS databases.
Application Control Management System (ACMS) – A transaction processing monitor, which allows applications to be created using a high-level Task Description Language (TDL). Individual steps of a transaction can be implemented using DCL commands, or Common Language Environment procedures. User interfaces can be implemented using TDMS, DECforms or Digital's ALL-IN-1 office automation product.
RALLY, DECadmire – Fourth-generation programming languages (4GLs) for generating database-backed applications. DECadmire featured integration with ACMS, and later provided support for generating Visual Basic client-server applications for Windows PCs.
In 1994, DEC sold Rdb, DBMS and CDD to Oracle, where they remain under active development. In 1995, DEC sold DSM to InterSystems, who renamed it Open M, and eventually replaced it with their Caché product.
Examples of third-party database management systems for OpenVMS include MariaDB, Mimer SQL and System 1032.
User interfaces
VMS was originally designed to be used and managed interactively using DEC's text-based video terminals such as the VT100, or hardcopy terminals such as the DECwriter series. Since the introduction of the VAXstation line in 1984, VMS has optionally supported graphical user interfaces for use with workstations or X terminals such as the VT1000 series.
Text-based user interfaces
The DIGITAL Command Language (DCL), has served as the primary command language interpreter (CLI) of OpenVMS since the first release. Other official CLIs available for VMS include the RSX-11 MCR (VAX only), and various Unix shells. DEC provided tools for creating text-based user interface applications – the Form Management System (FMS) and Terminal Data Management System (TDMS), later succeeded by DECforms. A lower level interface named Screen Management Services (SMG$), comparable to Unix curses, also exists.
Graphical user interfaces
Over the years, VMS has gone through a number of different GUI toolkits and interfaces:
The original graphical user interface for VMS was a proprietary windowing system known as the VMS Workstation Software (VWS), which was first released for the VAXstation I in 1984. It exposed an API called the User Interface Services (UIS). It ran on a limited selection of VAX hardware.
In 1989, DEC replaced VWS with a new X11-based windowing system named DECwindows. It was first included in VAX/VMS V5.1. Early versions of DECwindows featured an interface built on top of a proprietary toolkit named the X User Interface (XUI). A layered product named UISX was provided to allow VWS/UIS applications to run on top of DECwindows. Parts of XUI were subsequently used by the Open Software Foundation as the foundation of the Motif toolkit.
In 1991, DEC replaced XUI with the Motif toolkit, creating DECwindows Motif. As a result, the Motif Window Manager became the default DECwindows interface in OpenVMS V6.0, although the XUI window manager remained as an option.
In 1996, as part of OpenVMS V7.1, DEC released the New Desktop interface for DECwindows Motif, based on the Common Desktop Environment (CDE). On Alpha and Itanium systems, it is still possible to select the older MWM-based UI (referred to as the "DECwindows Desktop") at login time. The New Desktop was never ported to the VAX releases of OpenVMS.
Versions of VMS running on DEC Alpha workstations in the 1990s supported OpenGL and Accelerated Graphics Port (AGP) graphics adapters. VMS also provides support for older graphics standards such as GKS and PHIGS. Modern versions of DECwindows are based on X.Org Server.
Security
OpenVMS provides various security features and mechanisms, including security identifiers, resource identifiers, subsystem identifiers, ACLs, intrusion detection, and detailed security auditing and alarms. Specific versions were evaluated at Trusted Computer System Evaluation Criteria Class C2 and, with the SEVMS security-enhanced release, at Class B1. OpenVMS also holds an ITSEC E3 rating (see NCSC and Common Criteria). Passwords are hashed using the Purdy Polynomial.
Vulnerabilities
Early versions of VMS included a number of privileged user accounts (including SYSTEM, FIELD, SYSTEST and DECNET) with default passwords which were often left unchanged by system managers. A number of computer worms for VMS including the WANK worm and the Father Christmas worm exploited these default passwords to gain access to nodes on DECnet networks. This issue was also described by Clifford Stoll in The Cuckoo's Egg as a means by which Markus Hess gained unauthorized access to VAX/VMS systems. In V5.0, the default passwords were removed, and it became mandatory to provide passwords for these accounts during system setup.
A 33-year-old vulnerability in VMS on VAX and Alpha was discovered in 2017 and assigned a CVE ID. On the affected platforms, this vulnerability allowed an attacker with access to the DCL command line to carry out a privilege escalation attack. The vulnerability relies on exploiting a buffer overflow bug in the DCL command processing code, the ability for a user to interrupt a running image (program executable) and return to the DCL prompt, and the fact that DCL retains the privileges of the interrupted image. The buffer overflow bug allowed shellcode to be executed with the privileges of an interrupted image. This could be used in conjunction with an image installed with higher privileges than the attacker's account to bypass system security.
Cross platform compatibility
VAX/VMS originally included an RSX-11M compatibility layer named the RSX Application Migration Executive (RSX AME) which allowed user mode RSX-11M software to be run unmodified on top of VMS. This relied on the PDP-11 compatibility mode implemented in the VAX-11 processors. The RSX AME played an important role on early versions of VAX/VMS, which re-used certain RSX-11M user-space utilities before native VAX versions had been developed. This was discontinued in VAX/VMS V3.0 when all compatibility mode utilities were replaced with native implementations. In VAX/VMS V4.0, RSX AME was removed from the base system, and replaced with an optional layered product named VAX-11 RSX, which relied on software emulation to run PDP-11 code on newer VAX processors. A VAX port of the RTEM compatibility layer for RT-11 applications was also available from DEC.
Various official Unix and POSIX compatibility layers were created for VMS. The first of these was DEC/Shell, a layered product consisting of ports of the Version 7 Unix Bourne shell and several other Unix utilities to VAX/VMS. In 1992, DEC released the POSIX for OpenVMS layered product, which included a shell based on the Korn Shell. POSIX for OpenVMS was later replaced by the open source GNV (GNU's not VMS) project, which was first included in OpenVMS media in 2002. Amongst other GNU tools, GNV includes a port of the Bash shell to VMS. Examples of third-party Unix compatibility layers for VMS include Eunice.
DEC licensed SoftPC (and later SoftWindows), and sold it as a layered product for both the VAX and Alpha architectures, allowing Windows and DOS applications to run on top of VMS.
Hobbyist programs
In 1997 OpenVMS and a number of layered products were made available free of charge for hobbyist, non-commercial use as part of the OpenVMS Hobbyist Program. Since then, several companies producing OpenVMS software have made their products available under the same terms, such as Process Software. Prior to the x86-64 port, the age and cost of hardware capable of running OpenVMS made emulators such as SIMH a common choice for hobbyist installations.
In March 2020, HPE announced the end of the OpenVMS Hobbyist Program. This was followed by VSI's announcement of the Community License Program (CLP) in April 2020, which was intended as a replacement for the HPE Hobbyist Program. The CLP was launched in July 2020, and provides licenses for VSI OpenVMS releases on Alpha and Integrity systems. OpenVMS x86-64 licenses will be made available when a stable version is released for this architecture. OpenVMS for VAX is not covered by the CLP, since there are no VSI releases of OpenVMS VAX, and the old versions are still owned by HPE.
Open source applications
There are a number of community projects to port open source software to VMS, including VMS-Ports and GNV (GNU's Not VMS). Some of the open source applications which have been ported to OpenVMS, both by community groups and the developers of VMS, include:
Samba
Apache HTTP Server
Apache Tomcat
Info-Zip
GNU Privacy Guard
Perl
Python
Ruby
Lua
PHP
git
Subversion
MariaDB
Apache ActiveMQ
OpenSSL
Redis
ZeroMQ
SWIG
Wget
cURL
OpenJDK
Apache Axis
Scala
Gearman
Memcached
Firefox
Xpdf
Erlang
RabbitMQ
OpenSSH
Influence
During the 1980s, the MICA operating system for the PRISM architecture was intended to be the eventual successor to VMS. MICA was designed to maintain backwards compatibility with VMS applications while also supporting Ultrix applications on top of the same kernel. MICA was ultimately cancelled along with the rest of the PRISM platform, leading Dave Cutler to leave DEC for Microsoft. At Microsoft, Cutler led the creation of the Windows NT operating system, which was heavily inspired by the architecture of MICA. As a result, VMS is considered an ancestor of Windows NT, together with RSX-11, VAXELN and MICA, and many similarities exist between VMS and NT. This lineage is made clear in Cutler's foreword to "Inside Windows NT" by Helen Custer.
A now-defunct project named FreeVMS attempted to develop an open source operating system following VMS conventions. FreeVMS was built on top of the L4 microkernel and supported the x86-64 architecture. Earlier work investigating the implementation of VMS using a microkernel-based architecture had been undertaken as a prototyping exercise by DEC employees, with assistance from Carnegie Mellon University, using the Mach 3.0 microkernel ported to VAXstation 3100 hardware and adopting a multiserver architectural model.
An unofficial derivative of VAX/VMS named MOS VP was created in the Soviet Union during the 1980s for the SM 1700 line of VAX clone hardware. The main difference between MOS VP and the official DEC releases was the translation of commands, messages and documentation into Russian, and support for the Cyrillic script using KOI-8 encoding. Similarly modified derivatives of MicroVMS known as MicroMOS VP or MOS-32M were also created.
Release history
See also
Comparison of operating systems
Terry Shannon
Event flag
References
Further reading
Getting Started with OpenVMS, Michael D. Duffy
Introduction to OpenVMS, 5th Edition, Lesley Ogilvie Rice
OpenVMS Alpha Internals and Data Structures: Memory Management, Ruth Goldenberg
OpenVMS Alpha Internals and Data Structures: Scheduling and Process Control: Version 7.0, Ruth Goldenberg, Saro Saravanan, Denise Dumas
VAX/VMS Internals and Data Structures: Version 5.2 ("IDSM"), Ruth Goldenberg, Saro Saravanan, Denise Dumas
Writing Real Programs in DCL, second edition, Stephen Hoffman, Paul Anagnostopoulos
Writing OpenVMS Alpha Device Drivers in C, Margie Sherlock, Leonard Szubowicz
OpenVMS Performance Management, Joginder Sethi
Getting Started with OpenVMS System Management, 2nd Edition, David Donald Miller, Stephen Hoffman, Lawrence Baldwin
The OpenVMS User's Guide, Second Edition, Patrick Holmay
Using DECwindows Motif for OpenVMS, Margie Sherlock
The hitchhiker's guide to VMS: an unsupported-undocumented-can-go-away-at-any-time feature of VMS, Bruce Ellis
External links
VMS Software: Current Roadmap and Future Releases
VMS Software: Documentation
Hoffmanlabs.org HP OpenVMS FAQ
comp.os.vms Usenet group, archives on Google Groups
OpenVMS
OpenVMS software
1977 software
Cluster computing
High-availability cluster computing
Fault-tolerant computer systems
Digital Equipment Corporation
DEC operating systems
HP software
Compaq software
Parallel computing
X86-64 operating systems
Proprietary operating systems
Time-sharing operating systems
Computer Consoles Inc.
Computer Consoles Inc. or CCI was a telephony and computer company located in Rochester, New York, United States, which did business first as a private and ultimately as a public company from 1968 to 1990. CCI provided worldwide telephone companies with directory assistance equipment and other systems to automate various operator and telephony services, and later sold a line of 68k-based Unix computers and the Power 6/32 Unix supermini.
History
Computer Consoles Inc. (CCI, incorporated May 20, 1968) was founded by three Xerox employees, Edward H. Nutter, Alfred J. Moretti, and Jeffrey Tai, to develop one of the earliest versions of a smart computer terminal, principally for the telephony market. Raymond J. Hasenauer (Manufacturing), Eiji Miki (Electronic design), Walter Ponivas (Documentation) and James M. Steinke (Mechanical design) joined the company at its inception. Due to the state of the art in electronics at the time, this smart terminal was the size of an average sized office desk.
Automating Operator Services
Due to the success of the smart computer terminal, and the expertise the company gained in understanding Operator Services, the company started development programs to offer networked computer systems that provided contract managed access time, specified as a guaranteed number of seconds to paint the operator's first screen of information, to various telephony databases such as directory assistance and intercept messages. The largest such system was designed and installed for British Telecom to provide initially Directory Assistance throughout Great Britain and Ireland. These systems combined Digital Equipment Corporation PDP-11 computers with custom hardware and software developed by CCI.
Automatic Voice Response
To provide higher levels of automation to operator services, CCI introduced in the early 1980s various Automatic Voice Response (AVR) systems tightly integrated with its popular Directory Assistance systems. AVR provided voice response of the customer requested data, almost universally starting the prompt with a variant of the phrase, "The number is". Early systems were based on very small vocabulary synthesised speech chips, follow-on systems utilized 8-bit PCM, and later ADPCM voice playback using audio authored either by CCI or the local phone company.
Digital Switching
To provide even higher levels of automation, CCI started a very aggressive program in the early 1980s to develop a PCM digital telephone switching system targeted for automated, user defined call scenarios. Initial installations handled intercept and calling card calls by capturing multi-frequency and DTMF audio band signaling via the DSP based multi-frequency receiver board. Later systems added speaker independent speech recognition via a quad digital audio processor board to initially automate collect calls.
PERPOS, Perpetual Processing Operating System
To provide better control over transaction processing, significant improvements in fault tolerance, and richer support for networking, CCI developed PERPOS, a Unix derivative that provided integrated support for real-time transaction processing, load balancing, and fault tolerant features such as hot and cold standby.
Power 5 and Power 6 computers
PERPOS was developed for a line of Motorola 68000-based computers called the Power 5 series, which CCI developed. They were a line of multi-processor, fault-tolerant computers, code-named after the Great Lakes. The Power 5 line also included single-processor 68000-based computers, code-named after the Finger Lakes, running a regular Unix port called PERPOS-S, which was originally a Version 7-derived kernel with a System III-derived userland; the kernel was later modified to provide System III compatibility.
Later, Computer Consoles opened a development center in Irvine, California, United States, which developed a proprietary minicomputer, competitive with the Digital Equipment Corporation VAX, called the Power 6/32, code-named "Tahoe" after Lake Tahoe. It ran an internally developed BSD port, and the Computer Systems Research Group at the University of California, Berkeley also ported 4.3BSD to it, producing the release known as "4.3-Tahoe". Unisys corporation remarketed the Power 6 as the U7000 series. Harris Corporation also sold the Power 6 as the HCX-7 and HCX-9. A companion 68010-based machine, the Power 5/32, also ran the internally developed BSD port; it was code-named "Walden" after Walden Pond.
Targeted as a competitor to the Unix/VAX platform, it succeeded for solutions where processing power was paramount. Universities requiring time-shared compilation engines for their students were particularly keen. The machine suffered when applied to general purpose database application environments, not least because the I/O subsystem over-relied on the central processing power (much as the VAX did) and thus used relatively dumb I/O processors. The Power 6 running either version of Unix also suffered from the inefficient memory management inherent in BSD 4.3. The core of this was the use of a 512-byte page rather than a 4K-byte page. Leffler et al. suggest this was done due to concerns about VAX support of 4K dynamic paging. The Power 6 had no such problems, but no operating system to support it.
The final issue with the Power 6/32 running Unix was the lack of support for symmetric multiprocessing: all system calls had to run on the "Master" processor, forcing a dual-processor machine to reschedule a process from the "slave" processor for every system call. The net result was that database benchmarks often ran faster on a single processor than on a dual.
Office automation
Due to the success the firm had in network-based data management, CCI partnered with, and ultimately acquired, a small company in Reston, Virginia, called RLG Corporation (named after founder Richard L Gauthier), to develop a terminal-based integrated office automation system. RLG had prior experience developing this kind of system for the United States Department of Transportation. The office suite, called OfficePower, provided an integrated set of functions such as word processing, spreadsheet, email, and database access via a compact desktop smart terminal backed by a mini or super mini-computer. Although the system software was ported to various Unix variants, most installations were hosted on CCI's Power 5 and Power 6 machines running CCI's Unix ports.
One installation was at the US Naval Surface Weapons Center in Dahlgren, Virginia; it consisted of two VAXes running 4.2BSD and a number of Power 5/20 machines running PERPOS-S. The VAXes were connected to each other by an ethernet, but, at the time, it wasn't cost-effective to provide ethernet adapters on all the Power 5/20 machines. The Power 5/20s were using 3Com's UNET as their TCP/IP implementation; it included an encapsulation scheme for sending IP datagrams over serial lines. Rick Adams implemented this encapsulation scheme as a line discipline for 4.2BSD; this was the origin of SLIP.
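The SLIP framing that grew out of this setup is simple enough to sketch in a few lines. The constants below are the ones later documented in RFC 1055; the function itself is an illustrative Python rendering of the byte-stuffing scheme, not Rick Adams's original line-discipline code.

```python
# SLIP byte-stuffing constants as later documented in RFC 1055.
END = 0xC0      # marks the end of a frame
ESC = 0xDB      # escape character
ESC_END = 0xDC  # ESC ESC_END encodes a literal END byte in the payload
ESC_ESC = 0xDD  # ESC ESC_ESC encodes a literal ESC byte in the payload

def slip_encode(datagram: bytes) -> bytes:
    """Wrap one IP datagram for transmission over a serial line."""
    out = bytearray([END])              # leading END flushes any line noise
    for byte in datagram:
        if byte == END:
            out += bytes([ESC, ESC_END])
        elif byte == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(byte)
    out.append(END)
    return bytes(out)

print(slip_encode(bytes([0x45, 0xC0, 0xDB, 0x01])).hex())
```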
After the takeover of CCI by Standard Telephones and Cables (STC) (see below), OfficePower was developed as the primary office system for International Computers Limited (ICL), owned by STC, with ports for the ICL DRS range and later servers with Power 6/32, Motorola 68030, Intel x86 and Sun SPARC architectures. It continued to be used widely by ICL customers into the late 1990s.
CCI (Europe) Inc
CCI (Europe) Inc was the wholly owned European Sales, Marketing and Support operation based in West London and established with Richard Levy (Altergo, Wang) as European Vice President, with responsibilities for all business aspects outside of North America. Richard Levy recruited industry professionals to target specific market sectors and distribution channels for the European and International markets for the entire CCI range of computer and telephony products.
CCI (Europe) maintained close co-operation with Rochester, NY for manufacturing, stock and shipping, and with Irvine, CA for planning and management. Liaising closely with the Israeli R&D operation for international systems translation, CCI Europe established a solid base in major European accounts and international third-party distribution channels such as ICL and BT, and became an integral part of the parent company.
CCI Israel, Inc.
CCI Israel, Inc. was a separately incorporated Delaware corporation; however, it was closely affiliated with the Rochester, NY, Irvine, CA and Reston, VA operations of Computer Consoles, Inc (CCI). It was first established to manage a telephony project for the Israeli national telephone company, Bezeq. The initial Israeli project was based on products developed in the Rochester-based group.
In Israel, development and installation was managed by CCI-Israel's managing director, Jacob "Jack" Mark. Mr. Mark was earlier affiliated with the original Bell Labs team to which the core development of the Unix operating system is attributed. The small Ramat Gan-based office later grew to support the efforts of the U.S.-based CCI offices, eventually becoming a major research and development center for machine-level/operating systems products, telephony products, and office automation products (particularly for British and foreign-language "OfficePower").
CCI Israel also undertook local development projects for major clients - notably Motorola and Israel Aircraft Industries. In the mid-1980s CCI-Israel introduced the U.S. company's 5/32 and 6/32 micro- and mini-computers to the local Israeli market. CCI-Israel - through seminars and training groups - was also instrumental in developing and popularizing the Unix operating system and the C programming language in Israel. CCI-Israel was also responsible for establishing the first Unix "User Group" in that country.
Accomplishments
CCI actively participated in various telecom and public standards bodies such as ANSI, and in the development of Unix and the C programming language. It was a pioneer in the design and deployment of real-time transaction processing computer systems, true fault-tolerant computing systems, distributed database access and distributed file system access. CCI was one of the earliest commercial entities connected to the Internet as cci.com.
CCI deployed the largest multi-processor, shared file-system, Unix-based (PERPOS) system of the era at British Telecom in the late 1980s. The design concepts of the system were years ahead of their time. The company was also a pioneer in the design and deployment of voice response and speech recognition in the public telephone networks to automate traditional operator-based services.
CCI controlled over 90% of the world market for equipment to automate telephony Directory services at the time of acquisition by STC.
Acquisition by Standard Telephones and Cables
STC acquired CCI effective January 1, 1989. At this time CCI was organized as two major business units: one in Rochester ("CCI - Rochester"), which manufactured telecommunications equipment, and a Computer Products Division in Irvine ("CCI - Irvine"), which manufactured computer hardware. Office systems software was produced at Reston, Virginia. In reality there was a third operation, a financing group that held the commercial leases for equipment typically sold to telephone companies. At the time of the acquisition the lease base was rumored to be valued at over US$700 million.
Also at the time of the acquisition, CCI was involved in a dispute with General Telephone and Electronics ("GTE") over GTE's failure to supply CCI with certain "computer chips" for a new generation of computers being developed by CCI (the "GTE litigation").
After completion of the acquisition, CCI - Rochester became a subsidiary of an STC operating unit known as STC Telecom. Shortly thereafter, the Computer Products Division at Irvine and Office Products Centre at Reston were sold to another STC operating unit, ICL, for net book value of the assets. CCI - Rochester was kept under the jurisdiction of STC Telecom, which was also in the telecommunications business.
Acquisition by Northern Telecom Ltd.
STC Telecom was acquired by Northern Telecom effective March 1991 and became part of the company's European operations. Effective January 1, 1992, CCI was transferred to the Northern Telecom U.S. entity, and was eventually merged into this business unit. At that time, CCI was dissolved and Northern Telecom assumed its assets and liabilities.
Notable Historic Uses
Pixar Computer Animation Group employed a Power 6/32 machine to render the "Glass Man" sequence in Steven Spielberg's Young Sherlock Holmes movie (1985).
References
Computer companies established in 1968
Computer companies disestablished in 1992
Manufacturing companies based in Rochester, New York
1968 establishments in New York (state)
1992 disestablishments in New York (state)
AsteroidOS
AsteroidOS is an open source operating system designed for smartwatches. It is available as a firmware replacement for some Android Wear devices. The motto for the AsteroidOS project is "Hack your wrist."
Wareable.com reviewed version 1.0 and gave it 3.5 out of 5 stars.
Software Architecture
AsteroidOS is built like an embedded Linux distribution with OpenEmbedded. It works on top of the Linux kernel and the systemd service manager. AsteroidOS also includes various mobile Linux middlewares originally developed for Mer and Nemo Mobile such as lipstick and MCE.
The user interface is completely written with the Qt5 framework. Applications are coded in QML with graphic components coming from Qt Quick and QML-Asteroid. An SDK with a cross-compilation toolchain integrated to Qt Creator can be generated from OpenEmbedded for easier development.
Asteroid-launcher is a Wayland compositor and customizable home screen managing applications, watchfaces, notifications and quick settings. Asteroid-launcher runs on top of the libhybris compatibility layer to make use of Bionic GPU drivers.
AsteroidOS offers Bluetooth Low Energy synchronization capabilities with the asteroid-btsyncd daemon running on top of BlueZ5. A reference client named AsteroidOS Sync is available for Android users.
Shipped Applications
As of the 1.0 release, the following applications are shipped and pre-installed by default in AsteroidOS:
Agenda: Provides simple event scheduling capabilities
Alarm Clock: Makes the watch vibrate at a specific time of day
Calculator: Allows basic calculations
Music: Controls a synchronized device's music player
Settings: Configures time, date, language, Bluetooth, brightness, wallpapers, watchfaces and USB
Stopwatch: Measures an elapsed time
Timer: Counts down a specified time interval
Weather: Provides weather forecast for five days
See also
Wear OS
Sailfish OS
OpenEmbedded
Hybris (software)
Qt (software)
References
Smartwatches
Wearable computers
Free software operating systems
Mobile operating systems
List of Windows Mobile Professional games
This is a list of games released for the Windows Mobile Professional operating system (formerly known as Pocket PC).
0-9
Constructo Combat - Concrete Software, Inc. (2006)
Lawn Darts - Concrete Software, Inc. (2007)
"4Pinball" - Limelight Software Limited
A
Aces Texas Hold'em - No Limit - Concrete Software, Inc. (2004)
Aces Texas Hold'em - Limit - Concrete Software, Inc. (2004)
Aces Omaha - Concrete Software, Inc. (2005)
Aces Blackjack - Concrete Software, Inc. (2006)
Aces Tournament Timer - Concrete Software, Inc. (2006)
Add-Venture - Qsoftz (2006)
Atomic Battle Dragons - Isotope 244 (2006)
Age of Empires Gold edition - Microsoft, ZIO Interactive
Age of Empires III - Microsoft, Glu Mobile
B
Baccarat - Midas Interactive Entertainment (2003)
Bass Guitar Hero - www.iPocketPC.net (2009)
Batty - Applian Technologies (1999)
Bejeweled 2 - Astraware (2006) - Also known as Diamond Mine 2
Bingo - Midas Interactive Entertainment (2003)
Blackjack - Midas Interactive Entertainment (2003)
Blaster - Fognog (1999)
Break My Bricks - www.iPocketPC.net (2009)
Burning Sand 2 - (2009)
Blade of Betrayal - HPT Interactive (2003)
C
Call of Duty 2 - Mforma (2006)
Call of Duty 2 Pocket PC Edition - Aspyr (2007)
Caribbean Poker - Midas Interactive Entertainment (2003)
Craps - Midas Interactive Entertainment (2003)
Cubis - Astraware (2003)
D
Diamond Mine - Astraware (2002)
Domination - Smart-Thinker
Dopewars - Jennifer Glover (2000)
Dragon Bane II - Mythological Software (2003)
Drum Kit Ace - Momentum Games (2006)
Dragon Bird
F
Fade - Fade Team
Fish Tycoon - Last Day of Work (2004)
Fruit Bomb - Momentum Games (2004)
G
Glyph - Astraware (2006)
Gold Mine - Momentum Games (2004)
Guitar Hero III Mobile - Glu Mobile (2009)
H
Hoyle Puzzle & Board Games 2005 - VU Games (2004)
"Harry Putter's Crazy Golf" - Limelight Software
I
Insaniquarium - Astraware (2003)
Intelli Cube - Midas Interactive Entertainment (2003)
Interstellar Flames - XEN Games
J
Jawbreaker - Oopdreams Software, Inc. (2003)
K
K-Rally - Infinite Dreams Inc.
"JIGaSAWrus" - Limelight Software
L
Lemonade Inc. (aka Lemonade Tycoon) - Hexacto Games (2002)
Leo's Flight Simulator
Leo's Space Combat Simulator
M
Madden NFL 2005 - Mobile Digital Media (2005)
Metalion - ZIO Interactive (2001)
Metalion 2 - ZIO Interactive (2003)
Microsoft Entertainment Pak 2004 - Microsoft Game Studios (2004)
Monopoly - Infogrames (2002)
Multi Machine - Midas Interactive Entertainment (2003)
My Little Tank - Astraware (2005)
Mystery of the Pharaoh - Midas Interactive Entertainment (2003)
"Marble Worlds" 2 - Limelight Software
Q
Quake - Pulse Interactive, Inc (2004)
Quake III Arena - noctemware (2005)
The Quest (2006)
P
PBA Bowling - Concrete Software, Inc. (2008)
Plant Tycoon - Last Day of Work (2004)
Pocket Humanity - Alexis Laferriere (2005)
Pocket UFO - SMK Software (2006)
Pocket Mini Golf - Momentum Games (2003)
Pocket Mini Golf 2 - Momentum Games (2005)
Pop Drop - Momentum Games (2005)
PT CatchFish - PlayfulTurtle.com (2007)
PT PuzzleChase - PlayfulTurtle.com (2008)
R
Reversi - Midas Interactive Entertainment (2003)
Roulette - Midas Interactive Entertainment (2003)
RocketElite - Digital Concepts
Royal 21 - Fury Ultd. (2009)
"Red Sector 2112" - Limelight Software
S
Seven Seas - Astraware (2003)
Scrabble - Handmark (2006)
Shadow of Legend - SmartCell Technology, LLC (2007–2008; no longer available) - 2D fantasy MMORPG.
SimCity 2000 - ZIO Interactive (1999) - A port of the popular SimCity 2000 game.
Sink My Ships - www.iPocketPC.net (2009) - A clone of Battleship.
Slot Machine - Midas Interactive Entertainment (2003)
Snails - Futech Ltd.
Sokoban - XComsoft Ltd
Solitaire
Spb Brain Evolution - SPB Software (2007)
Spb AirIslands - SPB Software (2006)
StarPop - Astraware (2006)
Strategic Assault - XEN Games (1998)
Sudoku - Diladele (2006)
Super Elemental
T
TextTwist - Astraware
Tic Tac Toe - www.iPocketPC.net (2009)
Tomb Raider - Eidos Interactive (2002)
Tower of Hanoi - Midas Interactive Entertainment (2003)
Tony Hawk's Pro Skater 2 - Activision
Tradewinds - Astraware
Turjah - Jimmy Software
Turjah 2 - Jimmy Software (2002)
Tripeaks Solitaire - Diladele (2006)
U
Ultima Underworld - ZIO Interactive
UNO - Concrete Software, Inc. (2007)
UNO Free Fall - Concrete Software, Inc. (2007)
V
Video Poker - Midas Interactive Entertainment (1999)
W
Warfare Incorporated - Spiffcode
Worms World Party - Team17 (2001)
"WordPlay - Limelight Software
X
Xwords - Open Source word game
X Ranger
Z
Zuma - PopCap
References
External links
GameFAQs List of Pocket PC Games
Video game lists by platform
Personal digital assistant software
Whitelisting
A whitelist (or, less commonly, a passlist or allowlist) is a mechanism which explicitly allows some identified entities to access a particular privilege, service, mobility, or recognition; i.e., it is a list of things allowed when everything is denied by default. It is the opposite of a blacklist, which is a list of things denied when everything is allowed by default.
Email whitelists
Spam filters often include the ability to "whitelist" certain sender IP addresses, email addresses or domain names to protect their email from being rejected or sent to a junk mail folder. These can be manually maintained by the user or system administrator - but can also refer to externally maintained whitelist services.
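As a rough illustration (not the behaviour of any particular spam filter), a whitelist check of this kind amounts to short-circuiting the filtering pipeline whenever the sender matches an allowed address, domain, or IP address. The addresses below are reserved example values.

```python
ALLOWED_ADDRESSES = {"alice@example.com"}
ALLOWED_DOMAINS = {"example.org"}
ALLOWED_IPS = {"192.0.2.25"}

def is_whitelisted(sender_address: str, sending_ip: str) -> bool:
    """Return True if the message should bypass the spam filter."""
    domain = sender_address.rsplit("@", 1)[-1].lower()
    return (
        sender_address.lower() in ALLOWED_ADDRESSES
        or domain in ALLOWED_DOMAINS
        or sending_ip in ALLOWED_IPS
    )

def filter_message(sender_address, sending_ip, spam_score):
    if is_whitelisted(sender_address, sending_ip):
        return "inbox"                      # whitelist wins, skip scoring
    return "junk" if spam_score > 5.0 else "inbox"

print(filter_message("bob@example.org", "198.51.100.7", 9.2))  # inbox
```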
Non-commercial whitelists
Non-commercial whitelists are operated by various non-profit organisations, ISPs, and others interested in blocking spam. Rather than paying fees, the sender must pass a series of tests; for example, their email server must not be an open relay and have a static IP address. The operator of the whitelist may remove a server from the list if complaints are received.
Commercial whitelists
Commercial whitelists are a system by which an Internet service provider allows someone to bypass spam filters when sending email messages to its subscribers, in return for a pre-paid fee, either an annual or a per-message fee. A sender can then be more confident that their messages have reached recipients without being blocked, or having links or images stripped out of them, by spam filters. The purpose of commercial whitelists is to allow companies to reliably reach their customers by email.
Advertising whitelists
Many websites rely on ads as a source of revenue, but the use of ad blockers is increasingly common. Websites that detect an adblocker in use often ask for it to be disabled - or their site to be "added to the whitelist" - a standard feature of most adblockers.
Network whitelists
Network whitelisting can occur at different layers of the OSI model.
LAN whitelists
LAN whitelists are enforced at layer 2 of the OSI model. Another use for whitelists is in local area network (LAN) security. Many network administrators set up MAC address whitelists, or a MAC address filter, to control which devices are allowed on their networks. This is used when encryption is not a practical solution or in tandem with encryption. However, it is sometimes ineffective because a MAC address can be faked.
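Conceptually, a MAC address filter reduces to a membership test against a list of known adapters, which is also why a spoofed address defeats it. The sketch below uses hypothetical addresses and is not any vendor's API; real access points keep the list in their configuration.

```python
# Hypothetical allow list of known adapters.
ALLOWED_MACS = {
    "00:1a:2b:3c:4d:5e",
    "00:1a:2b:3c:4d:5f",
}

def may_associate(client_mac: str) -> bool:
    """Layer-2 filter: admit a client only if its MAC address is on the list.
    Offers no protection against a client that fakes an allowed address."""
    return client_mac.lower() in ALLOWED_MACS

print(may_associate("00:1A:2B:3C:4D:5E"))  # True
print(may_associate("66:77:88:99:aa:bb"))  # False
```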
Firewall whitelists
Some firewalls can be configured to allow data traffic only from, or to, certain IP addresses or address ranges. A firewall generally works at layers 3 and 4 of the OSI model. Layer 3 is the Network Layer, where IP works, and Layer 4 is the Transport Layer, where TCP and UDP function.
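A layer-3 allow rule of this kind can be sketched with Python's standard ipaddress module. The address ranges below are documentation-reserved examples, not a recommended rule set, and the function is a conceptual illustration rather than a firewall implementation.

```python
import ipaddress

# Documentation-reserved example ranges standing in for a real rule set.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("2001:db8::/32"),
]

def permit_packet(source_ip: str) -> bool:
    """Allow traffic only when the source address falls inside an explicitly
    whitelisted network; everything else is dropped by default."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(permit_packet("192.0.2.17"))    # True
print(permit_packet("203.0.113.5"))   # False
```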
Application whitelists
The application layer is layer 7 in the Open Systems Interconnection (OSI) seven-layer model and in the TCP/IP protocol suite. Whitelisting is commonly enforced by applications at this level.
One approach in combating viruses and malware is to whitelist software which is considered safe to run, blocking all others. This is particularly attractive in a corporate environment, where there are typically already restrictions on what software is approved.
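One common way to express such a policy is to allow only executables whose cryptographic hash appears on an approved list. The following is a generic Python sketch of that idea, not a description of how any particular product implements it; the hash value shown is a placeholder.

```python
import hashlib
from pathlib import Path

# Hashes of executables approved to run (placeholder example value).
APPROVED_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path: Path) -> bool:
    """Default-deny policy: run a program only if its hash is approved."""
    return sha256_of(path) in APPROVED_SHA256

# False unless this script's own hash happens to be on the approved list.
print(may_execute(Path(__file__)))
```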
Leading providers of application whitelisting technology include Bit9, Velox, McAfee, Lumension, Airlock Digital and SMAC.
On Microsoft Windows, recent versions include AppLocker, which allows administrators to control which executable files are denied or allowed to execute. With AppLocker, administrators are able to create rules based on file names, publishers or file location that will allow certain files to execute. Rules can apply to individuals or groups. Policies are used to group users into different enforcement levels. For example, some users can be added to a report-only policy that will allow administrators to understand the impact before moving that user to a higher enforcement level.
Linux systems typically have AppArmor and SELinux features available, which can be used to effectively block all applications which are not explicitly whitelisted; commercial products are also available.
HP-UX introduced a feature called "HP-UX Whitelisting" in the 11i v3 release.
Controversy
In 2018, a journal commentary on a report on predatory publishing was released, making claims that 'white' and 'black' are racially charged terms that need to be avoided in usages such as 'whitelist' and 'blacklist'. The commentary gained mainstream attention in the summer of 2020 following the George Floyd protests in the United States, in which a black man was murdered by a police officer, sparking protests against police brutality.
The premise of the commentary is that 'black' and 'white' have negative and positive connotations respectively. It states that since the first recorded usage of 'blacklist' was during "the time of mass enslavement and forced deportation of Africans to work in European-held colonies in the Americas," the word is therefore related to race. There is no mention of 'whitelist' and its origin or relation to race.
This issue is most widely disputed in computing industries where 'whitelist' and 'blacklist' are prevalent (e.g. IP whitelisting). Despite the commentary nature of the journal, some companies and individuals in other industries have taken to replacing 'whitelist' and 'blacklist' with new alternatives such as 'allow list' and 'deny list'.
Those who oppose these changes question the term's attribution to race, citing the same etymology quote that the 2018 journal uses. The quote suggests that the term 'blacklist' arose from 'black book' almost 100 years prior. 'Black book' does not appear to have any etymology or sources that support ties to race, instead coming from the 1400s, referring "to a list of people who had committed crimes or fallen out of favor with leaders", and popularized by King Henry VIII's literal use of a book bound in black. Others also note the prevalence of positive and negative connotations of 'white' and 'black' in the Bible, predating attributions to skin tone and slavery. It was not until the 1960s Black Power movement that "Black" became a widespread word to refer to one's race as a person of color in America (as an alternative to African-American), lending itself to the argument that the negative connotations behind 'black' and 'blacklist' both predate attribution to race.
See also
Blacklisting
Blacklist (computing)
DNSWL, whitelisting based on DNS
Walled garden (technology), a whitelist that a device's owner cannot control
References
Spamming
Antivirus software
Malware
Social privilege
Social status
Databases
Blacklisting
Wire (software)
Wire is an encrypted communication and collaboration app created by Wire Swiss. It is available for iOS, Android, Windows, macOS, Linux, and web browsers such as Firefox. Wire offers a collaboration suite featuring messenger, voice calls, video calls, conference calls, file-sharing, and external collaboration, all protected by secure end-to-end encryption. Wire offers three solutions built on its security technology: Wire Pro, which offers Wire's collaboration features for businesses; Wire Enterprise, which includes Wire Pro capabilities with added features for large-scale or regulated organizations; and Wire Red, the on-demand crisis collaboration suite. They also offer Wire Personal, which is a secure messaging app for personal use.
History
Skype's co-founder Janus Friis helped create Wire and many Wire employees previously worked for Skype. Wire Swiss GmbH launched the Wire app on 3 December 2014. In August 2015, the company added group calling to their app. From its launch until March 2016, Wire's messages were only encrypted between the client and the company's server. In March 2016, the company added end-to-end encryption for its messaging traffic, as well as a video calling feature. Wire Swiss GmbH released the source code of the Wire client applications in July 2016. In 2018, Wire launched its collaboration solution featuring end-to-end encrypted chat, conferencing, video calls and file-sharing on desktop and mobile for businesses.
Features
Wire offers end-to-end encrypted messaging, file-sharing, video and voice calls, and guest rooms for external communication.
The app allows group calls with up to 25 participants and video conferences with up to 12 people. A stereo feature places participants in "virtual space" so that users can differentiate voice directionality. The application adapts to varying network conditions.
The application supports the exchange of animated GIFs up to 5MB through a media integration with Giphy. The iOS and Android versions also include a sketch feature that allows users to draw a sketch into a conversation or over a photo.
Wire is available on mobile, desktop and web. The web service is called Wire for Web. Wire activity is synced on iOS, Android and web apps. The desktop version supports screen sharing.
Wire’s technology can be deployed in the cloud, in a private cloud, or on-premises.
One of the latest features rolled out by Wire is a secure external collaboration capability called 'guest room'. Wire’s secure guest rooms feature extends end-to-end encryption to conversations with external parties without requiring them to register, or even download anything.
Wire also includes a function for ephemeral messaging in 1:1 and group conversations.
Technical
Wire provides end-to-end encryption for all features. Wire's instant messages are encrypted with Proteus, a protocol that Wire Swiss developed based on the Signal Protocol. Wire's voice calls are encrypted with DTLS and SRTP. In addition to this, client-server communication is protected by Transport Layer Security.
Wire is currently working to develop Messaging Layer Security (MLS), a new protocol designed to facilitate more secure enterprise messaging platforms, under The Internet Engineering Task Force (IETF). In 2016, during the IETF meeting in Berlin, Wire proposed a standard protected by modern security properties that could be used by companies large and small. During an interview with Dark Reading, Raphael Robert, Head of Security at Wire, mentioned that MLS should be ready to integrate into messaging platforms by 2021.
Wire's source code is distributed under the GPLv3, but the readme file states that a number of additional restrictions specified by the Wire Terms of Use take precedence. Among other things, users who have compiled their own applications may not change the way those applications connect to and interact with the company's centralised servers.
Security
Wire implemented a security by design approach, with security and privacy as core values. Wire is 100% open source with its source code available on GitHub, independently audited, and ISO, CCPA, GDPR, SOX-compliant.
In December 2016, Wire's whitepapers were reviewed by a security researcher at the University of Waterloo. The researcher praised Wire for its open approach to security, but identified serious issues that still need addressing. These included a man-in-the-middle attack on voice and video communications, possible audio and video leakage depending on unspecified codec parameters, the fact that all user passwords are uploaded to Wire's servers, significant attack surface for code replacement in the desktop client, and the fact that the server was not open-sourced, at the time when that article was written. The researcher described the security of Wire as weak in comparison to Signal, but also depicted its problems as surmountable. Wire's developers announced the addition of end-to-end authentication to Wire's calls on 14 March 2017, and started open-sourcing Wire's server code on 7 April 2017. In March 2017, the review was updated with the conclusion that "the remaining issues with Wire are relatively minor and also affect many of its competitors." However, one major issue that remained was detailed as "the Wire client authenticates with a central server in order to provide user presence information. (Wire does not attempt to hide metadata, other than the central server promising not to log very much information.) The Wire whitepapers spend an unusual amount of space discussing the engineering details of this part of the protocol. However, the method of authentication is the same as it is on the web: the Wire client sends the unencrypted, unhashed password to the central server over TLS, the server hashes the plaintext password with scrypt, and the hash is compared to the hash stored by the server. This process leaks the user's password to the central server; the server operators (or anyone who compromises the server) could log all of the plaintext passwords as users authenticate."
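The password-handling flow criticised in that review can be illustrated with a minimal, hypothetical server-side sketch in Python; the function names, record layout, and scrypt parameters below are assumptions chosen for the example and are not taken from Wire's actual implementation.

```python
# Hypothetical sketch of the flow described above: the client sends the
# plaintext password over TLS, and the server hashes it with scrypt and
# compares the result to the stored hash.  Names and parameters are
# illustrative only, not Wire's real code.
import hashlib, hmac, os

def make_record(password: str) -> dict:
    """Create a stored credential: a random salt plus the scrypt hash."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return {"salt": salt, "hash": digest}

def verify(password_received_over_tls: str, record: dict) -> bool:
    """The server sees the plaintext at this point (TLS only protects the
    transport), hashes it with the stored salt, and compares the hashes."""
    candidate = hashlib.scrypt(password_received_over_tls.encode(),
                               salt=record["salt"], n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, record["hash"])

record = make_record("correct horse battery staple")
assert verify("correct horse battery staple", record)
```

The reviewer's point is visible in the verification step: because hashing happens on the server, a compromised or malicious server could log the plaintext password at authentication time.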
On 9 February 2017, Kudelski Security and X41 D-Sec published a joint review of Wire’s encrypted messaging protocol implementation. Non-critical issues were found that had the potential of leading to a degraded security level. The review found that "invalid public keys could be transmitted and processed without raising an error." The report also recommended that other security improvements be implemented to address thread-unsafety risks and sensitive data in memory. Wire's developers have said that "the issues that were discovered during the review have been fixed and deployed on iOS and Android. Deployment is ongoing for Wire for Web and desktop apps."
In 2017, Wire published an article going over the implementation of its end-to-end encryption in a multi-device scenario in response to anonymous accounts on social media publishing misleading information about the app and its security.
In May 2017, Motherboard published an article saying that the Wire servers "keep a list of all the users a customer contacted until they delete their account". Wire Swiss confirmed that the statement was accurate, saying that they keep the data in order to "help with syncing conversations across multiple devices", and that they might change their approach in the future.
Awards
In July 2019, Wire won Capterra's Best Ease of Use award in the team communication software category for its B2B solution. Later that year in October, Wire was recognized by Cybersecurity Breakthrough Awards as the first-ever Secure Communications Solution of the Year awardee. In February 2020, Wire won the Cybersecurity Excellence Awards in the following categories: fastest-growing cybersecurity company, best start-up (EU), open-source security, encryption, and zero-trust security. Simultaneously, Cyber Defense Magazine announced Wire as the Best Messaging Security in an RSA 2020 special edition for the Cyber Defense Awards.
Privacy policy changes
In late 2019, Wire's holding company moved from Luxembourg to the US, which, according to ThinkPrivacy and other critics, made it unclear how much jurisdiction the United States would have over Wire data. Some considered this especially problematic because Wire stores unencrypted metadata for every user. At the time of the transfer, Wire also changed its privacy policy from "sharing user data when required by law" to "sharing user data when necessary", which critics saw as a vaguer standard that could also cover increasing profits, co-operation with law enforcement, or any other reason.
The Wire Group Holding moved back to Germany as of 2020.
Business model
Wire Swiss GmbH receives financial backing from a firm called Iconical.
In July 2017, Wire Swiss announced the beta version of an end-to-end encrypted team messaging platform. In October 2017, Wire officially released the team messaging platform as a subscription-based communication solution for businesses and in 2019, announced that Ernst & Young chose Wire to develop a self-hosted, secure collaboration and communication platform.
See also
Comparison of instant messaging clients
Comparison of VoIP software
List of video telecommunication services and product brands
Gartner - Market Guide for Workstream Collaboration
The Forrester New Wave™: Secure Communications
References
Cross-platform software
IOS software
Free and open-source Android software
Free instant messaging clients
Instant messaging clients
Formerly proprietary software
Free security software
Free VoIP software
Cryptographic software
Secure communication
Internet privacy software
Swiss brands |
19390010 | https://en.wikipedia.org/wiki/RawTherapee | RawTherapee | RawTherapee is application software for processing photographs in raw image formats, as created by many digital cameras. It comprises a subset of image editing operations specifically aimed at non-destructive post-production of raw photos and is primarily focused on improving a photographer's workflow by facilitating the handling of large numbers of images. It is notable for the advanced control it gives the user over the demosaicing and developing process. It is cross-platform, with versions for Microsoft Windows, macOS and Linux.
RawTherapee was originally written by Gábor Horváth of Budapest, Hungary, and was re-licensed as free and open-source software under the GNU General Public License Version 3 in January 2010. It is written in C++, using a GTK+ front-end and a patched version of dcraw for reading raw files. The name "Therapee" was originally an acronym derived from "The Experimental Raw Photo Editor".
Features
RawTherapee follows the concept of non-destructive editing, similar to some other raw conversion software. Adjustments made by the user are immediately reflected in the preview image, but they are not physically applied to the opened image; instead, the parameters are saved to a separate sidecar file and applied during the export process.
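As a rough, hypothetical illustration of this non-destructive pattern (not RawTherapee's actual C++ code, which stores parameters in its own PP3 sidecar format), a minimal Python sketch might look like the following; the parameter name and JSON layout are invented for the example.

```python
# Conceptual sketch of non-destructive editing: the raw file is never
# modified; edits live in a sidecar file and are applied only at export.
# The "exposure_compensation" parameter and the JSON layout are invented.
import json
from pathlib import Path

def save_adjustments(raw_path: str, params: dict) -> None:
    """Write edit parameters to a sidecar next to the untouched raw file."""
    Path(raw_path + ".sidecar.json").write_text(json.dumps(params, indent=2))

def load_adjustments(raw_path: str) -> dict:
    """Read the parameters back; the raw data itself stays untouched."""
    sidecar = Path(raw_path + ".sidecar.json")
    return json.loads(sidecar.read_text()) if sidecar.exists() else {}

def export(pixels: list, raw_path: str) -> list:
    """Apply the stored adjustments only when producing the output image."""
    exposure = load_adjustments(raw_path).get("exposure_compensation", 0.0)
    return [min(1.0, p * 2 ** exposure) for p in pixels]

save_adjustments("IMG_0001.raw", {"exposure_compensation": 1.0})
print(export([0.1, 0.2, 0.4], "IMG_0001.raw"))  # -> [0.2, 0.4, 0.8]
```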
All the internal processing is done in a high precision 32-bit floating point engine.
Input file formats
RawTherapee supports most raw formats, including Pentax Pixel Shift, Canon Dual-Pixel, and those from Foveon and X-Trans sensors. It also supports common non-raw image formats like JPEG, PNG and TIFF as well as high dynamic range, 16/24/32-bit raw DNG images.
RawTherapee uses a patched version of dcraw code to read and parse raw formats, with additional tweaks and constraints to parameters such as white levels and the raw crop area based on in-house measurements. Thus, RawTherapee supports all the formats supported by dcraw.
User interface
RawTherapee provides the user with a file browser, a queue, a panel for batch image adjustments, a 1:1 preview of the embedded JPEG image in the case of raw files, and an image editing tab.
The file browser shows photo thumbnails along with a caption of the shooting information metadata. The browser includes 5-star rating, flagging, and an Exif-based filter. It can be used to apply a profile, or parts of a profile, to a whole selection of photos in one operation.
A toolbox alongside the file browser allows for batch image adjustments.
The Queue tab allows the user to put photo exports on hold until they have finished adjusting the photos in the Editor, so that the CPU remains fully available while a photo is being tweaked rather than simultaneously processing earlier photos, which could make the interface sluggish. Alternatively, it can be used to process photos alongside tweaking new ones if the CPU is capable of handling the workload.
The Editor tab is where the user tweaks photos. While the image is opened for editing, the user is provided with a preview window with pan and zoom capabilities. A color histogram is also present, offering linear and logarithmic scales and separate R, G, B and L channels. All adjustments are reflected in the history queue, and the user can revert any of the changes at any time. There is also the possibility of taking multiple snapshots of the history queue, allowing various versions of the image to be shown. These snapshots are not written to the sidecar file and are therefore lost once the photo has been closed; however, work is underway on migrating the PP3 sidecar system to XMP, which already supports storing snapshots.
Adjustment tools and processing
Bayer demosaicing algorithms: AMaZE, IGV, LMMSE, EAHD, HPHD, VNG4, DCB, AHD, fast or mono, as well as none.
Raw files from X-Trans sensors have the 3-pass, 1-pass and fast demosaicing methods at their disposal.
Processing profiles support via sidecar files with the ability to fully and partially load, save and copy profiles between images
Processing parameters can be generated dynamically based on image metadata using the Dynamic Profile Builder.
Exposure control and curves in the L*a*b* and RGB color spaces
CIECAM02 mode
Advanced highlight reconstruction algorithms and shadow/highlight controls
Tone mapping using edge-preserving decomposition
Pre-crop vignetting correction and post-crop vignetting for artistic effect
Graduated filter
Various methods of sharpening
Various methods of noise reduction
Detail recovery
Removal of purple fringing
Manual and automatic pre- and post-demosaic chromatic aberration correction
Advanced wavelet processing
Retinex processing
White balance (presets, color temperature, spot white balance and auto white balance)
Channel mixer
Black-and-white conversion
Color boost and vibrance (saturation control with the option of preserving natural skin tones)
Hue, saturation and value adjustments using curves
Various methods of color toning
Lockable color picker
Wide gamut preview support on Microsoft Windows and Linux (the macOS preview is limited to sRGB)
Soft-proofing support
Color-managed workflow
ICC color profiles (input, working and output)
DCP color profiles (input)
Adobe Lens Correction Profiles (LCP)
Cropping, resizing, post-resize sharpening
Rotation with visual straightening tool
Distortion correction
Perspective adjustment
Dark frame subtraction
Flat field removal (hue shifts, dust removal, vignetting correction)
Hot and dead pixel filters
Metadata (Exif and IPTC) editor
A processing queue to free up the CPU during editing where instant feedback is important and to make maximal use of it afterwards
Output formats
The output format can be selected from:
TIFF (8-bit, 16-bit, 16-bit float, 32-bit float)
JPEG (8-bit)
PNG (8-bit and 16-bit)
See also
Darktable
Rawstudio
UFRaw
References
External links
Digital photography
Formerly proprietary software
Free graphics software
Free photo software
Free software programmed in C++
Graphics software that uses GTK
Photo software for Linux
Raw image processing software |
549159 | https://en.wikipedia.org/wiki/CRYPTREC | CRYPTREC | CRYPTREC is the Cryptography Research and Evaluation Committees set up by the Japanese Government to evaluate and recommend cryptographic techniques for government and industrial use. It is comparable in many respects to the European Union's NESSIE project and to the Advanced Encryption Standard process run by National Institute of Standards and Technology in the U.S.
Comparison with NESSIE
There is some overlap, and some conflict, between the NESSIE selections and the CRYPTREC draft recommendations. Both efforts include some of the best cryptographers in the world; therefore, conflicts in their selections and recommendations should be examined with care. For instance, CRYPTREC recommends several 64-bit block ciphers while NESSIE selected none, but CRYPTREC was obliged by its terms of reference to take into account existing standards and practices, while NESSIE was not. Similar differences in terms of reference account for CRYPTREC recommending at least one stream cipher, RC4, while the NESSIE report specifically said that it was notable that they had not selected any of those considered. RC4 is widely used in the SSL/TLS protocols; nevertheless, CRYPTREC recommended that it only be used with 128-bit keys. Essentially the same consideration led to CRYPTREC's inclusion of 160-bit message digest algorithms, despite their suggestion that they be avoided in new system designs. Also, CRYPTREC was unusually careful to examine variants and modifications of the techniques, or at least to discuss its care in doing so; this resulted in particularly detailed recommendations regarding them.
Background and sponsors
CRYPTREC includes members from Japanese academia, industry, and government. It was started in May 2000 by combining efforts from several agencies who were investigating methods and techniques for implementing 'e-Government' in Japan. Presently, it is sponsored by
the Ministry of Economy, Trade and Industry,
the Ministry of Public Management, Home Affairs and Post and Telecommunications,
the Telecommunications Advancement Organization, and
the Information-Technology Promotion Agency.
Responsibilities
It is also the organization that provides technical evaluation and recommendations concerning regulations that implement Japanese laws. Examples include the Electronic Signatures and Certification Services (Law 102 of FY2000, taking effect from April 2001), the Basic Law on the Formulation of an Advanced Information and Telecommunications Network Society of 2000 (Law 144 of FY2000), and the Public Individual Certification Law of December 2002. Furthermore, CRYPTREC has responsibilities with regard to the Japanese contribution to the ISO/IEC JTC 1/SC27 standardization effort.
Selection
In the first release in 2003, many Japanese ciphers were selected for the "e-Government Recommended Ciphers List": CIPHERUNICORN-E (NEC), Hierocrypt-L1 (Toshiba), and MISTY1 (Mitsubishi Electric) as 64 bit block ciphers, Camellia (Nippon Telegraph and Telephone, Mitsubishi Electric), CIPHERUNICORN-A (NEC), Hierocrypt-3 (Toshiba), and SC2000 (Fujitsu) as 128 bit block ciphers, and finally MUGI and MULTI-S01 (Hitachi) as stream ciphers.
In the revised release of 2013, the list was divided into three: "e-Government Recommended Ciphers List", "Candidate Recommended Ciphers List", and "Monitored Ciphers List". Most of the Japanese ciphers listed in the previous list (except for Camellia) have moved from the "Recommended Ciphers List" to the "Candidate Recommended Ciphers List". There were several new proposals, such as CLEFIA (Sony) as a 128 bit block cipher as well as KCipher-2 (KDDI) and Enocoro-128v2 (Hitachi) as stream ciphers. However, only KCipher-2 has been listed on the "e-Government Recommended Ciphers List". The reason why most Japanese ciphers have not been selected as "Recommended Ciphers" is not that these ciphers are necessarily unsafe, but that these ciphers are not widely used in commercial products, open-source projects, governmental systems, or international standards. There is the possibility that ciphers listed on "Candidate Recommended Ciphers List" will be moved to the "e-Government Recommended Ciphers List" when they are utilized more widely.
In addition, 128-bit RC4 and SHA-1 are listed on the "Monitored Ciphers List". These are considered unsafe and are permitted only for compatibility with legacy systems.
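As a small, self-contained illustration of primitives that appear on the recommended list below (HMAC as the message authentication code, with SHA-256 as the hash function), the following Python sketch computes and verifies a tag; the key handling and message framing are simplified assumptions for the example and are not prescribed by CRYPTREC.

```python
# Example of a CRYPTREC-recommended MAC: HMAC with SHA-256 (see the
# "Message authentication codes" and "Hash functions" entries below).
# Key generation and message handling are simplified for illustration.
import hashlib, hmac, os

key = os.urandom(32)                     # 256-bit secret key
message = b"payload to authenticate"

tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

assert verify(key, message, tag)
```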
CRYPTREC Ciphers List
e-Government Recommended Ciphers List
Public key ciphers
Signature
DSA: NIST FIPS 186-2
ECDSA: Certicom
RSA-PSS: PKCS#1, RSA Laboratories
RSASSA-PKCS1-v1_5: PKCS#1, RSA Laboratories
Confidentiality
RSA-OAEP: PKCS#1, RSA Laboratories
Key exchange
DH: NIST SP 800-56A Revision 1
ECDH: NIST SP 800-56A Revision 1
Symmetric key ciphers
64-bit block ciphers
3-key Triple DES: NIST SP 800-67 Revision 1
128-bit block ciphers
AES: NIST FIPS PUB 197
Camellia: Nippon Telegraph and Telephone, Mitsubishi Electric
Stream ciphers
KCipher-2: KDDI
Hash functions
SHA-256: NIST FIPS PUB 180-4
SHA-384: NIST FIPS PUB 180-4
SHA-512: NIST FIPS PUB 180-4
Modes of operation
Encryption modes
CBC: NIST SP 800-38A
CFB: NIST SP 800-38A
CTR: NIST SP 800-38A
OFB: NIST SP 800-38A
Authenticated encryption modes
CCM: NIST SP 800-38C
GCM: NIST SP 800-38D
Message authentication codes
CMAC: NIST SP 800-38B
HMAC: NIST FIPS PUB 198-1
Entity authentication
ISO/IEC 9798-2: ISO/IEC 9798-2:2008
ISO/IEC 9798-3: ISO/IEC 9798-3:1998, ISO/IEC 9798-3:1998/Amd 1:2010
Candidate Recommended Ciphers List
Public key ciphers
Signature
N/A
Confidentiality
N/A
Key exchange
PSEC-KEM: Nippon Telegraph and Telephone
Symmetric key ciphers
64-bit block ciphers
CIPHERUNICORN-E: NEC
Hierocrypt-L1: Toshiba
MISTY1: Mitsubishi Electric
128-bit block ciphers
CIPHERUNICORN-A: NEC
CLEFIA: Sony
Hierocrypt-3: Toshiba
SC2000: Fujitsu
Stream ciphers
MUGI: Hitachi
Enocoro-128v2: Hitachi
MULTI-S01: Hitachi
Hash functions
N/A
Modes of operation
N/A
Authenticated encryption modes
N/A
Message authentication codes
PC-MAC-AES: NEC
Entity authentication
ISO/IEC 9798-4: ISO/IEC 9798-4:1999
Monitored Ciphers List
Public key ciphers
Signature
N/A
Confidentiality
RSAES-PKCS1-v1_5: PKCS#1, RSA Laboratories
Key exchange
N/A
Symmetric key ciphers
64-bit block ciphers
N/A
128-bit block ciphers
N/A
Stream ciphers
128-bit RC4: RSA Laboratories
Hash functions
RIPEMD-160: Hans Dobbertin, Antoon Bosselaers, Bart Preneel
SHA-1: NIST FIPS PUB 180-4
Modes of operation
N/A
Authenticated encryption modes
N/A
Message authentication codes
CBC-MAC: ISO/IEC 9797-1:2011
Entity authentication
N/A
Notes
References
External links
Cryptography standards
Government research
Standards of Japan |
38678572 | https://en.wikipedia.org/wiki/Ganymede%20%28software%29 | Ganymede (software) | Ganymede is an open source network directory management framework, designed to allow administrator teams to collaboratively manage subsets of an organization's directory services, such as NIS, DNS, Active Directory / LDAP, DHCP, and RADIUS, among others. First announced and released at the 1998 USENIX LISA conference, Ganymede has been under public development and use since then.
Ganymede uses a central server which supports clients connecting via Java RMI. The Ganymede server maintains a transactional object graph database of network information such as user objects, group objects, system objects, network objects, etc. Users and administrators run Ganymede clients (GUI or XML based) to create, modify, or delete objects in the database. Whenever a user commits a transaction, the Ganymede server schedules a number of background threads to write out updated network source files and run whatever system scripts are required to propagate the new data into the managed network directory services. If multiple users are working concurrently, the scheduler makes sure that the entire network environment is updated with transactionally consistent directory images as builds finish and new ones are issued.
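The commit-then-rebuild flow described above can be sketched in a generic way as follows; this is a conceptual Python illustration rather than Ganymede's actual Java/RMI API, and every class, method, and service name here is invented for the example.

```python
# Conceptual sketch of the pattern described above: committing a transaction
# schedules background builds that write out directory source files and then
# propagate them.  All names are invented; Ganymede's real server is Java/RMI.
import queue
import threading

class BuildScheduler:
    """Runs directory rebuilds on a background thread after commits."""
    def __init__(self):
        self.pending = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def schedule(self, service: str) -> None:
        self.pending.put(service)

    def _worker(self) -> None:
        while True:
            service = self.pending.get()
            # In the real pattern this step writes updated source files and
            # runs an external script to push them out; here we only log it
            # so that the sketch stays runnable.
            print(f"rebuilding {service} from a consistent snapshot")
            self.pending.task_done()

scheduler = BuildScheduler()

def commit_transaction(changed_objects) -> None:
    """After a successful commit, schedule one rebuild per affected service."""
    for service in sorted({obj["service"] for obj in changed_objects}):
        scheduler.schedule(service)

commit_transaction([{"service": "dns"}, {"service": "nis"}])
scheduler.pending.join()   # wait for the background builds to finish
```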
The Ganymede server is meant to be programmed by the adopter, who can define arbitrary object data types along with custom logic to interact with the user through the GUI and to maintain consistency within and between objects. Adopters can also create custom tasks which can be executed at specified times by the internal Ganymede scheduler. Such custom tasks can make changes in the server's object database and/or can run external scripts to update external services.
Ganymede has an elaborate XML data format which can be used to import and export the server's object database schema and object data. Importing XML will typically result in the creation, modification, or deletion of database objects, and will trigger one or more network directory service rebuilds just as using the GUI client would do.
Above all, Ganymede is designed around administration teams. Administrators are members of 'Owner Groups', which own objects. Any object that is changed by a user or an automated task can result in change report email being sent to administrators in the appropriate Owner Group, making it possible for admins to keep up to date with changes that others in their groups are making. Owner Groups can be granted authority over arbitrary subsets of the object database, making it easy to slice up the network directory space in any fashion that may be desired.
As a programmable framework, Ganymede must be programmed for a specific set of directory management tasks. Fundamental Generic Networking in Germany has used it as the basis of their Doctor DNS project, which is being used to manage DNS for the Kaiserslautern University of Technology.
References
External links
Directory services
Cross-platform free software
DNS software
Identity management
Identity management systems
Free software
Free network management software
Free software programmed in Java (programming language)
Software using the GPL license
1998 software |
5955775 | https://en.wikipedia.org/wiki/Goodiepal | Goodiepal | Goodiepal or Gæoudjiparl van den Dobbelsteen, whose real name is Parl Kristian Bjørn Vester, is a Danish/Faroese experimental electronic musician, performance artist, composer and lecturer, as well as a self-described horologist. His work discusses the future of computer music, his own compositional practices and resonance computing, and in the past his own idea of Radical Computer Music. His tours have included 150 universities internationally.
In 2014, Goodiepal sold Kommunal Klon Komputer 2, a DIY velomobile that he used for personal transport, to the National Gallery of Denmark, where it is now on display.
Biography
Early life
Goodiepal went to a Steiner School, where electronics were prohibited. This led to Goodiepal's participation in secret demo groups releasing demos on floppy discs for Commodore and Amiga computers, as well as experiments with UNIX and general computer coding. Goodiepal departed from the Steiner school after eight years, due to experiments with explosives, and embarked on various activities with hacker groups in computer software programming and distribution. He began seriously making and performing music in the winter of 1986.
Teaching and lecturing
In 2002, Goodiepal gave his first lectures in the USA, among other places at CalArts, Brown University and University of Iowa, on the subjects of Radical Computer Music and, similarly, radical software. In 2004 Goodiepal was hired as professor of History and Aesthetics of Electronic Music at DIEM (Danish Institute of Electro-acoustic Music), The Royal Danish Academy of Music in Aarhus, Denmark, and served as head of the electronic music department. Goodiepal also eventually taught Composition at DIEM. In 2008, Goodiepal left The Royal Danish Academy of Music. Upon his resignation, he produced the declaration Five steps in a Gentleman's War on the stupidity of modern computer music and media based art, released as a supplement to the audio piece The Official Mort Aux Vaches Ekstra Extra Walkthrough and the series of images called Snappidaggs explaining the methodology behind Goodiepal's concept of Radical Computer Music. In between releases, projects, and installations, Goodiepal lectures extensively all over the world.
Since 2016 Goodiepal has been performing in a performance group, under the name GP&PLS, which includes a diverse set of members including his partner Nynne Roberta Pedersen, Danish actress Rosalinde Mynster, poet Lars Skinnebach, and the American artist Jeffrey Alan Scudder. Their intent as a group is to create protest music, while helping and donating to refugees during the ongoing European migrant crisis.
Selected works
Mort Aux Vaches Ekstra Extra
In 2002, Goodiepal created a compositional musical language, based around 'musical bricks'. This language was then developed into the Mort Aux Vaches Ekstra Extra compositional game scenario.
Mort Aux Vaches Ekstra Extra was performed as a lecture at Gallery Andersen Contemporary, Copenhagen, Denmark in 2007 and the 5th Berlin Biennale for Contemporary Art, 2008. The game scenario of the lecture is an exercise in the creation of musical scores to challenge the mindset of 'other' intelligences, considering issues such as utopia, time, notation techniques, language, artificial intelligence, 'unscannability' (being undetected by technology), and the role of the composer.
Circus Pentium
Goodiepal went on to participate in the installation Circus Pentium by Danish artist Henrik Plenge Jakobsen opening at Statens Museum for Kunst, Copenhagen, Denmark, also in 2004. The work comprised a circus installation with Goodiepal playing the lute alongside other actors and was also shown at Art Basel, Switzerland, and Stedelijk Museum in Amsterdam, Netherlands.
EuroBOT
Goodiepal has consistently kept up performances of a game piece titled 'EuroBOT', as well as classes on 'EuroBOT mythology'. In practice, EuroBOT is akin to a board game or a tabletop role-playing game, with polygonal shapes and figures representing planets. Performances of EuroBOT are characterized by Goodiepal whistling the musical score. In 2008, he performed EuroBOT on the Danish talkshow Den 11. time, which he also composed the opening theme for.
The Autonomous Music School
In 2007, Goodiepal opened an autonomous school on the first floor of The Blue House in London, designed by FAT Architecture, for people interested in Radical Computer Music and other of his arts. The school, open from 9am to 10:10am every weekday, was free of charge.
EMEGO 211
In 2016, Goodiepal released an album on Editions Mego entitled EMEGO 211, using the Wikimedia Commons as the device of dissemination, which also thereby released the album into the public domain.
Books
Goodiepal & ALKU. El Camino del Hardcore - Rejsen Til Nordens Indre. 2012.
Radical Computer Music & Fantastisk Medie Manipulation. Co-published with Pork Salad Press. 2009.
Audio releases
Goodiepal - Morx Aux Vaches, CD, Morx Aux Vaches, 2005
Goodiepal/Misaki Kawai - 24 Advertisements - CD, Pork Salad press, 2012
Goddiepla - Battlefleet Gothic/Live at Roskilde 200, 2xLP, Fonal, 2014
Goodiepal - Morendo Morendo/My Paris Is Called Colchester, 3xLP, Editions Mego, 2014
Goodiepal - Untitled, LP, EMEGO, 2016
GP&PLS - Pro-monarkist Extratone, LP, Goodiepal, 2017
Goodiepal - The Goodiepal Equation Original Soundtrack With Appendix, 4xLP, Fonal, 2018
Jonathan Meese/Family Fodder/Kommissar Hjuler/Mama Baer/GP&PLS - Die Aberkennung als Beleuchtungstraeger, LP, Psych.KG, 2019
Kommissar Hjuler/Goodiepal and GP&PLS - Jemand vergraebt Erdaushub, LP, Psych.KG, 2019
Kommissar Hjuler/Mama Baer/GP&PLS - The European Impro Facism, LP, Psych.KG, 2019
Kommissar Hjuler/Mama Baer/GP&PLS - Osteskære SIF / Hujan Puting, MC, Bawah Tanah Rekod, 2020
Anny Franny - S.V.H.O.N.
Honors
Narc Beacon was awarded an honourable mention at the Ars Electronica Festival in Linz, Austria, in 2002.
In 2005, Goodiepal was commissioned by the Nordic Council of Ministers to create a sound piece representing Norway, Denmark, Sweden, Iceland, and Greenland at the World Expo 2005 in Aichi, Japan. The result was a generative computer music program playing Goodiepal music through countless speakers in three exhibition spaces in the exhibition's Nordic pavilion.
Mort Aux Vaches Ekstra Extra was presented at the 5th Berlin Biennale for Contemporary Art, 2008.
On 27 October 2008, Goodiepal received a first class merit certificate at StoryTellerScotland. He is now officially allowed to call himself a Master Storyteller of the highest accord in the UK.
In November 2010, Goodiepal received the Danish Heinrich Prize at Den Grå Hal, Christiania, for his war on the Royal Academy of Music.
References
External links
Goodiepal on Discogs
Danish electronic musicians
Living people
1974 births
Experimental composers
Danish performance artists
Danish activists |
3748675 | https://en.wikipedia.org/wiki/Qualcomm%20Atheros | Qualcomm Atheros | Qualcomm Atheros is a developer of semiconductor chips for network communications, particularly wireless chipsets. Founded under the name T-Span Systems in 1998 by experts in signal processing and VLSI design from Stanford University, the University of California, Berkeley and private industry. The company was renamed Atheros Communications in 2000 and it completed an initial public offering in February 2004 trading on NASDAQ under the symbol ATHR.
On January 5, 2011, it was announced that Qualcomm had agreed to a takeover of the company for a valuation of US$3.7 billion. When the acquisition was completed on May 24, 2011, Atheros became a subsidiary of Qualcomm operating under the name Qualcomm Atheros.
Qualcomm Atheros chipsets for the IEEE 802.11 standard of wireless networking are used by over 30 different wireless device manufacturers.
History
T-Span Systems was co-founded in 1998 by Teresa Meng, professor of engineering at Stanford University and John L. Hennessy, provost at the time and then president of Stanford University through 2016.
The company's first office was a converted house on Encina Avenue, Palo Alto, adjacent to a car wash and Town & Country Village.
In September 1999, the company moved to an office at 3145 Porter Drive, Building A, Palo Alto.
In 2000, T-Span Systems was renamed Atheros Communications and the company moved to a larger office at 529 Almanor Avenue, Sunnyvale. Atheros publicly demonstrated its inaugural chipset, the world's first WLAN implemented in CMOS technology and the first high-speed 802.11a 5 GHz technology.
In 2002, Atheros announced a dual-band wireless product, the AR5001X 802.11a/b.
In 2002, Craig H. Barratt joined Atheros as vice president and in March 2003 became CEO.
In 2003, the company shipped its 10-millionth wireless chip.
In 2004, Atheros unveiled a number of products, including the first video chipset for mainstream HDTV-quality wireless connectivity.
In 2005, Atheros introduced the industry's first MIMO-enabled WLAN chip, as well as the ROCm family for mobile handsets and portable consumer electronics.
In 2006, Atheros announced its XSPAN product line, which featured a single-chip, triple-radio for 802.11n. In this same year, they began to collaborate with Qualcomm on a product for CDMA and WCDMA-enabled handsets.
In 2008, Atheros announced the Align 1-stream 802.11n product line for PCs and networking equipment.
In 2010, Atheros shipped its 500-millionth WLAN chipset and 100-millionth Align 1-stream chipset. They released the first HomePlug AV chipset with a 500 Mbit/s PHY rate.
IPO
On February 12, 2004, Atheros completed its initial public offering on the NASDAQ exchange trading under the symbol ATHR. Shares opened at $14 per share with 9 million offered. Prices on the first day ranged up to $18.45 and closed at $17.60 per share. At the time, Atheros had approximately 170 employees.
Acquisition by Qualcomm
In January 2011, Qualcomm agreed to acquire Atheros at $45 per share in cash. The agreement was subject to shareholder and regulatory approvals. In May 2011, Qualcomm completed its acquisition of Atheros Communications for a total of US$3.7 billion. Atheros became a subsidiary of Qualcomm under the name Qualcomm Atheros.
After the acquisition, the division unveiled the WCN3660 Combo Chip, which integrated dual-band Wi-Fi, Bluetooth, and FM into Qualcomm Snapdragon mobile processors. Qualcomm Atheros launched the Skifta media shifting application for Android and released the first HomePlug Green PHY at the end of the year.
In 2012, Qualcomm Atheros announced a Wi-Fi display product at the Consumer Electronics Show 2012, along with a new chip for HomePlug AV power line networking. At Mobile World Congress 2012, Qualcomm Atheros demonstrated a suite of 802.11ac enabled products. This included the WCN3680, a mobile 802.11ac combo chip targeting smartphones and tablets. In June 2012 at Computex, Qualcomm Atheros added new 802.11ac products.
Products
WLAN – Qualcomm Atheros offers wireless connectivity products, including their Align 1-stream 802.11n chips, and the XSPAN 2-stream with SST2 and 3-stream with SST3 chips for 802.11n. The Align 1 also supports WLAN for mobile with up to 150Mbit/s PHY rates for smartphones and portable consumer electronics. Qualcomm Atheros also offers legacy WLAN designs for 802.11a/g.
PAS/PHS – In March 2005, Atheros introduced the AR1900, the first single-chip for personal handyphone system (PHS), which was widely deployed in China, Japan and Taiwan at the time. PHS, or personal access system (PAS) as it is known in China, was a digital TDMA-TDD technology operating at 1.9 GHz providing high-quality voice, advanced data services, and long battery life.
Power line communication (PLC) – Qualcomm Atheros is a member of the HomePlug Powerline Alliance. Its AMP brand of powerline chips supports the IEEE 1901 global powerline standard, which supports high-definition multimedia and real-time gaming at a 500 Mbit/s PHY rate. Low-powered chips, such as those built for HomePlug Green PHY, are targeted toward smart grid and smart home applications.
Ethernet – Qualcomm Atheros offers the ETHOS line of Ethernet interfaces, as well as the low-energy EDGE line, which supports the IEEE 802.3az-2010 Energy Efficient standard.
Hybrid Networking – Qualcomm Atheros' hybrid networking technology, Hy-Fi, integrates WLAN, PLC, and Ethernet technologies. The technology, which complies with the IEEE 1905.1 standard for hybrid home networking, is capable of detecting the optimal path for data to be transferred at any given moment.
Location Technology – In 2012, Qualcomm Atheros announced its IZat location technology. The technology uses multiple sources, such as satellites and WLAN networks, to pinpoint the location of the user.
Bluetooth – Qualcomm Atheros offers Bluetooth chips for a variety of platforms. The company also offers integrated combo WLAN and Bluetooth chips.
PON – Qualcomm Atheros passive optical network (PON) technologies incorporate standards such as IEEE 802.3ah, multiple-channel, software-based, digital signal processing for the G.711 and G.729 ITU standards for VoIP, and TR-156 Broadband Forum PON standard.
Acquisitions
CodeTelligence – SDIO software/firmware developer, acquired in 2005.
ZyDAS Technology – a USB Wireless LAN company headquartered in Hsinchu, Taiwan, acquired in 2006.
Attansic Technology – a Fast and Gigabit Ethernet chip maker headquartered in Taiwan, acquired in early 2007.
u-Nav Microelectronics – a GPS chipmaker headquartered in Irvine, CA, acquired in 2007.
Intellon Corporation – a public company with powerline communication (PLC) for home networking, networked entertainment, broadband-over-powerline (BPL) access, Ethernet-over-Coax (EoC), and smart grid management applications. They were acquired in late 2009.
Opulan Technology Corp – EPON broadband access technology developer in Shanghai, China, acquired in August 2010.
Bigfoot Networks – an Austin, Texas-based company acquired in September 2011, with application-aware networking technologies that are being marketed under the trademarked brand-name of StreamBoost.
Ubicom – a company known for their processor and software designed to optimize network data, acquired in February 2012.
DesignArt – small cell chip company that combined several radio technologies on a single chip, used to provide wireless backhaul to smaller base stations. Acquired in August 2012.
Wilocity - a fabless semiconductor company focusing on IEEE 802.11ad (60 GHz) was purchased by Qualcomm in July 2014.
Free and open-source software support
Support for Atheros devices on Linux and FreeBSD once relied on the hobbyist project MadWifi, originally created by Sam Leffler and later supported by Greg Chesson. MadWifi later evolved into ath5k. In July 2008, Atheros released an open-source Linux driver called ath9k for their 802.11n devices. Atheros also released some source from their binary HAL under ISC license to add support for their abg chips. Atheros has since been actively contributing towards the ath9k driver in Linux. Atheros has also been providing documentation and assistance to the FreeBSD community to enable updated support for 802.11n chipsets in FreeBSD-9.0 and up.
The flexibility and openness of ath9k make it a prime candidate for experiments around improving Wi-Fi. It is the first subject of an FQ-CoDel-based radio fairness improvement experiment by Make-Wifi-Fast. The driver has also been modified by radio hobbyists to broadcast in licensed frequency bands.
The article comparison of open-source wireless drivers lists free and open-source software drivers available for all Qualcomm Atheros IEEE 802.11 chipsets. The most recent generations of Atheros wireless cards (802.11ac and 802.11ax) require non-free binary blob firmware to work, whereas earlier generations generally do not require non-free firmware.
Atheros was featured in OpenBSD's songs that relate to the ongoing efforts of freeing non-free devices.
References
External links
Atheros Madwifi support in Linux – historical
Community-driven FreeBSD atheros chipset support, pre-802.11n and 802.11n chipsets
Open source utility package for management INT6400, AR7400 and AR7420 chipsets, known as HomePlug technology
1998 establishments in California
Electronics companies established in 1998
Fabless semiconductor companies
Manufacturing companies based in San Jose, California
Qualcomm
Semiconductor companies of the United States
Technology companies based in the San Francisco Bay Area
Companies formerly listed on the Nasdaq
2004 initial public offerings
2011 mergers and acquisitions |
62607489 | https://en.wikipedia.org/wiki/Oh%20Shit%21 | Oh Shit! | Oh Shit! is a Pac-Man clone released in 1985 for the MSX by The ByteBusters (Aackosoft's in-house development team) and published by Dutch publisher Aackosoft under the Classics range of games; a range that consists of clones of arcade games, i.e Scentipede being a clone of Atari's Centipede. Oh Shit!'s level and art design is identical to that of Pac-Man.
Oh Shit! was later republished several times with differing names and cover art: it was renamed Oh No! for the game's UK release, the original name being considered 'too obscene', and the name was shortened to Shit! for its release by Premium III Software Distribution. The European re-release Shit! notably uses cover art from the 1985 horror novel The Howling III: Echoes, possibly without permission. Oh Shit! features digitized speech; when the player loses a life, the eponymous phrase "Oh Shit!" is said. For the renamed releases, Oh No! and Shit!, the speech is changed accordingly.
Releases
The 1985 MSX release was published by Aackosoft, but later releases of the MSX version were published by different publishers: the European version of Oh Shit! was later published by Eaglesoft (an alternate label of Aackosoft), and Oh Shit! was published by Compulogical in Spain. The UK release, Oh No!, was also published by Eaglesoft. The European re-release, Shit!, was developed by Eurosoft and published by Premium III Software Distribution, notably using cover art from the 1985 horror novel The Howling III: Echoes, possibly without permission. The original MSX version of Oh Shit! was made for compatibility with MSX 32K computers, and later re-releases offer MSX 64K compatibility. Unlike other Aackosoft titles in the Classics range, Oh Shit! is incompatible with MSX 16K computers.
Aackosoft went bankrupt in 1988. According to Dutch gaming magazine MSX-DOS Computer Magazine, its intellectual property was then transferred to a company called Methodic Solutions, and all previous MSX Aackosoft titles, including Shit!, were re-published by Premium III Software Distribution and developed by Eurosoft (a former label of Aackosoft) in the same year, both separately and as part of the 1988 compilation 30 MSX Hits, which included Oh Shit! in its lineup.
The 1988 30 MSX Hits compilation release of Oh Shit! offers MSX2 compatibility. All MSX releases of Oh Shit!, Shit! and Oh No! are cassette releases, except for the 30 MSX Hits release, which had both cassette and floppy disk releases.
Version Differences
Oh Shit! introduces the game's ghosts on the title screen using digitized speech stating "This is Joey, Paul, Willy and Frankie"; however, the UK version Oh No! says "This is Joey, this is Paul, this is Willy, this is Frankie". "This is" has the same enunciation all four times it is said. Unlike Oh Shit!, where "Oh Shit!" is said every time the player dies, in Oh No!, "Oh No!" is only said after the player has lost all their lives and gets a game over.
Gameplay
Oh Shit!'s gameplay is identical to that of Pac-Man, down to the level design. This was noted as a positive by reviewers who deemed it a faithful reproduction of the arcade original. The ghosts in Oh Shit! are named Joey, Paul, Willy, and Frankie.
Development
Oh Shit! was coded by Steve Course. The speech generation code was written by Ronald van der Putten of The ByteBusters, who also performed Oh Shit!'s speech.
MSX Computing states in their review that they received two copies of the game, both the UK Oh No! version and the European Oh Shit! version, noting that the European version's name was "deigned unsuitable for the UK". The MSX UK Oh No! version cost £2.99 in 1986. The MSX version of Oh Shit! originally cost ƒ29.50 (Dutch guilders) in 1985, and was reduced to ƒ14.95 in 1987. In 1988, the cassette release of 30 MSX Hits was ƒ49.90, and the floppy disk release was ƒ79.90.
Reception
Oh Shit! was generally positively received by reviewers, who considered it to be a faithful reproduction of Pac-Man, and several reviewers praised the addition of digitized speech. Oh Shit! was predominantly reviewed in Dutch gaming magazines, as Oh Shit! was developed & originally published in the Netherlands.
Dutch gaming magazine MSX Gids gave the MSX version of Oh Shit! an overall score of 4.5 out of five, rating graphics, game quality, and price five stars, but giving sound three stars. MSX Gids criticises Oh Shit!'s sound effects, stating that "The speech, which gets boring quickly, has been added at the expense of the original wokka-wokka sounds. Too bad."
Dutch gaming magazine MSX Computer Magazine reviewed the MSX version of Oh Shit! alongside other Aackosoft titles based upon arcade titles, Boom (Galaxian), Scentipede (Centipede), and Hopper (Frogger). MSX Computer Magazine praises Oh Shit!'s gameplay, calling Oh Shit! a "perfect reproduction of the original arcade game", and praising the inclusion of the 'coffee break' cutscenes from the original Pac-Man that play as intermissions between levels. MSX Computer Magazine further notes Oh Shit!'s similarity to Pac-Man, stating that the levels are "identical to the arcade original", but expresses that Oh Shit! differentiates itself through the addition of speech. MSX Computer Magazine criticises Oh Shit!'s incompatibility with MSX 16Ks.
MSX Computer Magazine, now named MSX-DOS Computer Magazine, reviewed the MSX version of Shit! alongside other arcade clones, particularly comparing it to another Pac-Man clone, Maze Master, stating that they prefer the original Pac-Man or Shit! over Maze Master. MSX-DOS expresses that they mourned Aackosoft's bankruptcy, stating that "Shit! used to be a favorite of mine, Pac-Man fan that I am, and with the loss of Aackosoft a good program was withdrawn from rotation", praising the game's re-publishing by Premium III Software. MSX-DOS criticises the shortening of the game's speech of "Oh Shit!" to just "Shit!", but still expresses that "Despite that, Shit! still always remains a sublime Pac-Man, too bad about the change of voice acting."
MSX-DOS Computer Magazine reviewed the MSX version of Oh Shit! in 1988 as part of the compilation release 30 MSX Hits, expressing that "Oh Shit! is a good Pac-Man-clone with a great name". MSX-DOS Computer Magazine notes 30 MSX Hits' MSX2 compatibility, further expressing that not all MSX games offer this compatibility, stating "So you thought that any MSX program could be used on any MSX computer? As long as you don't try MSX2 software on MSX1 hardware? Well, everyone thought that, in the past. Before the MSX standard was well defined, game programmers sometimes did not adhere to that standard. There has been a lot of trouble with non-running games in the past." Oh Shit!'s MSX2 compatibility was also noted by MSX Club Magazine in their review of 30 MSX Hits in 1988.
British gaming magazine MSX Computing gave the UK MSX version, Oh No!, an overall score of two out of three stars, noting its similarity to Pac-Man, stating that "Pac-man fans will love this game as it is based very much along the same lines." MSX Computing praises Oh No!'s digitized speech, expressing that "The speech is a really novel and fun feature and does much to enhance the game" and further noting Oh No! as "far superior" to similar games due to its speech capability. MSX Computing praises Oh No!'s gameplay, calling it "addictive" and "an easy game to play", further recommending it due to its low price of £2.99 in 1986.
Dutch gaming magazine MSX Club Magazine reviewed the MSX version of Oh Shit! in 1986, giving it an overall score of 9/10, beginning their description of Oh Shit!'s gameplay by stating "You already know how to play it: it's Pac-Man." MSX Club called Oh Shit!'s graphics "not graphically amazing, but this doesn't hinder gameplay", and criticised Oh Shit!'s sound effects, stating that "Beyond the typical irritating Pac-Man sounds there's also speech present", and calls the death message of "Oh Shit!" "terrible shouting". MSX Club notes a difficulty curve in Oh Shit! as the game progresses, and praises the addition of cutscenes.
Notes
References
External links
1984 video games
MSX games
Video games developed in the Netherlands
Pac-Man clones
Video games about ghosts
Video game clones
MSX2 games
Censored video games
Video games about food and drink |
25164992 | https://en.wikipedia.org/wiki/Revenue%20model | Revenue model | A revenue model is a framework for generating financial income. It identifies which revenue source to pursue, what value to offer, how to price the value, and who pays for the value. It is a key component of a company's business model. It primarily identifies what product or service will be created in order to generate revenues and the ways in which the product or service will be sold.
Without a clear and well-defined revenue model, that is, a clear plan of how to generate revenues, a new business is more likely to struggle due to costs it cannot sustain. By having a clear revenue model, a business can focus on a target audience, fund development plans for a product or service, establish marketing plans, begin a line of credit and raise capital.
Types
The type of revenue model that is available to a firm depends, in large part, on the activities the firm performs, and how it charges for those. Various models by which to generate revenue include the following.
Production model
In the production model, the business that creates the product or service sells it to customers who value and thus pay for it. An example would be a company that produces paper and then sells it either directly to the public or to other businesses, which pay for the paper, thus generating revenue for the paper company.
Manufacturing model
Manufacturing is the production of merchandise using labour, materials, and equipment, resulting in finished goods. Revenue is generated by selling the finished goods. They may be sold to other manufacturers for the production of more complex products (such as aircraft, household appliances or automobiles), or sold to wholesalers, who in turn sell them to retailers, who then sell them to end users and consumers. Manufacturers may market directly to consumers, but generally do not, for the benefits of specialization.
Construction model
Construction is the process of constructing a building or infrastructure. Construction differs from manufacturing in that manufacturing typically involves mass production of similar items without a designated purchaser, while construction typically takes place on location for a known client, but may be done speculatively for sale on the real estate market.
Rental or leasing model
Renting is an agreement where a payment is made for the temporary use of a good, service or property owned by another. A gross lease is when the tenant pays a flat rental amount and the landlord pays for all property charges regularly incurred by the ownership. Things that can be rented or leased include land, buildings, vehicles, tools, equipment, furniture, etc.
Advertising model
The advertising model is often used by Media businesses which use their platforms where content is provided to the customer as an advertising space. Possible examples are newspapers and magazines which generate revenue through the various adverts encountered in their issues. Internet businesses which often provide services will also have advertising spaces on their platforms. Examples include Google and Taobao. Mobile applications also use this specific revenue model to generate revenues. By incorporating some ad space, many popular apps such as Twitter and Instagram have strengthened their mobile revenue potential after previously having no real revenue stream.
Sponsored ranking model
The sponsored ranking model is a variant of the advertising model. It is mainly used by search engine platforms like Google and by specialized product- and IT-services platforms, where users are offered free search functionality in return for sponsored results placed in front of other search results. The sponsor typically pays per click, per view, or through a subscription model.
Commission model
The commission model is similar to the markup model as it is used when a business charges a fee for a transaction that it mediates between two parties. Brokerage companies or auction companies often use it as they provide a service as intermediaries and generate revenue through commissions on the sales of either stock or products.
E-commerce model
This revenue model is the online implementation of any of the other revenue models.
Fee-for-service model
In the fee-for-service model, unlike in the subscription model, the business only charges customers for the amount of service or product they use. Many phone companies provide pay-as-you-go services whereby the customer only pays for the number of minutes he actually uses.
Licensing model
With the licensing model, the business that owns a particular content retains copyright while selling licenses to third parties. Software publishers sell licenses to use their programs rather than straight-out sell copies of the program. Media companies also obtain their revenues in this manner, as do patent holders of particular technologies.
Software licensing model
Rather than selling units of software, software publishers generally sell the right to use their software through a limited license which defines what the purchaser can and cannot do with it.
Shareware model
In the shareware model, users are encouraged to make and share copies of a software product, which helps distribute it. Payment may be left entirely up to the goodwill of the customer (donationware), or be optional with an occasional reminder (nagware), or the software may be designed to stop working after a trial period unless the user pays a license fee (trialware or demoware), or be crippled so that key features don't work. Or it may be a free feature-limited "lite" version (freemium), with a more advanced version available for a fee.
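As a rough illustration of the trialware variant mentioned above, a minimal sketch might record the program's first run and refuse to start once the trial has elapsed; the 30-day period, state-file name, and messages are arbitrary choices for the example.

```python
# Minimal trialware sketch: record the first run, then stop working once a
# trial period has elapsed.  The 30-day limit, the state-file name, and the
# messages are arbitrary choices for illustration.
import json, sys, time
from pathlib import Path

TRIAL_DAYS = 30
STATE_FILE = Path("example_app_trial.json")

def trial_expired() -> bool:
    """Store the first-run timestamp on disk and check the elapsed time."""
    if STATE_FILE.exists():
        first_run = json.loads(STATE_FILE.read_text())["first_run"]
    else:
        first_run = time.time()
        STATE_FILE.write_text(json.dumps({"first_run": first_run}))
    return (time.time() - first_run) > TRIAL_DAYS * 86400

if trial_expired():
    sys.exit("Trial period over - please purchase a license key.")
print("Trial active: all features enabled.")
```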
Donationware
Donationware is a licensing model that supplies fully operational unrestricted software to the user and requests an optional donation be paid to the programmer or a third-party beneficiary (usually a non-profit). The amount of the donation may also be stipulated by the author, or it may be left to the discretion of the user, based on individual perceptions of the software's value. Since donationware comes fully operational (i.e. not crippleware) with payment optional, it is a type of freeware.
Nagware
Nagware is a type of shareware that persistently reminds (nags) the user to register it by paying a fee. It usually does this by popping up a message when the user starts the program, or intermittently while the user is using the application. These messages can appear as windows obscuring part of the screen, or as message boxes that can quickly be closed. Some nagware keeps the message up for a certain time period, forcing the user to wait to continue to use the program.
Crippleware model
In software, crippleware means that "vital features of the program such as printing or the ability to save files are disabled until the user purchases a registration key". This allows users to take a close look at the features of a program without being able to use it to generate output.
Freemium models
Freemium works by offering a product or service free of charge (typically digital offerings such as software, content, games, web services or other) while charging a premium for advanced features, functionality, or related products and services. For example, a fully functional feature-limited ("lite") version may be given away for free, with advanced features disabled until a license fee is paid. The word "freemium" is a portmanteau combining the two aspects of the business model: "free" and "premium". It has become a highly popular model, with notable success.
Markup model
In the markup model, unlike with previous models, the business buys a product or service and increases its price before reselling it to customers. This model characterises wholesalers and retailers, who buy products from manufacturers, mark up their prices, and resell them to end customers.
Wholesale
Wholesaling, jobbing, or distributing is the sale of goods or merchandise to retailers; to industrial, commercial, institutional, or other professional business users; or to other wholesalers and related subordinated services. In general, it is the sale of goods to anyone other than the end-consumer. Wholesaling can be implemented online via electronic transactions.
Retail
Retail is the process of selling consumer goods or services to customers through multiple channels of distribution to earn a profit. Demand is identified and then satisfied through a supply chain. Attempts are made to increase demand through advertising.
Brick and mortar retail
Conventional retail or brick and mortar retail is selling products from a physical sales outlet or store.
Mail order
The mail order revenue model and distribution method entails sending goods by mail delivery. The buyer places an order for the desired products with the merchant through some remote method such as by telephone call or web site. Then, the products are delivered to the customer, typically to a home address, but occasionally the orders are delivered to a nearby retail location for the customer to pick up. Some merchants also allow the goods to be shipped directly to a third party consumer, which is an effective way for someone to buy a gift for an out-of-town recipient.
E-tail
E-tail is on-line retail. Retail is the process of selling consumer goods and/or services directly to end-consumers to earn a profit. Demand is created through promotion, and by satisfying consumers' wants and needs effectively (which generates word-of-mouth-advertising).
In the 21st century, an increasing amount of retailing is e-tailing, done online using electronic payment and delivery via a courier or postal mail. Via e-tail, the customer can shop and order through the internet and the merchandise is dropped at the customer's doorstep. This format is ideal for customers who do not want to travel to retail stores and are interested in home shopping.
The online retailer may handle the merchandise directly, or use the drop shipping technique in which they accept the payment for the product but the customer receives the product directly from the manufacturer or a wholesaler.
Subscription model
In the subscription model, the business provides a product or service to a customer who in return pays a pre-determined fee to the business at contracted intervals. The customer is required to pay the fee until the contract with the business is terminated or expires, even if they are not using the product or service while still adhering to the contract. Possible examples are flat-rate cellular services, magazines and newspapers.
Revenue streams
A revenue stream is an amount of money that a business receives from a particular source. A revenue model describes how a business generates revenue streams from its products and services; revenue streams are therefore a key aspect of the revenue model, generated through the use of the revenue model components listed in the section above. Businesses continually seek new ways of generating revenue, and thus new revenue streams. Finding a new revenue stream has gradually taken on a distinct and specialized meaning in certain contexts, referring to a new, novel, undiscovered, potentially lucrative, innovative and creative means of generating income or exploiting a potential. This applies in particular to new technology and internet businesses, which find highly innovative ways of generating revenue, often in ways that previously seemed impossible. As a result, technology-based businesses constantly update their revenue models in order to remain competitive.
Advertising can be seen as a component of the revenue model; however, when the business is advertising its own products, this results in a cost for the business, the exact opposite of revenue. On the other hand, advertising can lead to an increase in sales, and thus revenue, over a period of time. For the majority of businesses that add value to a product or service purchased by a customer, advertising is often a component of the business plan. Expenditure for this component is forecast because it can generate greater revenues over time.
Component of a business model
A revenue model is part of a business model. A business model shows the framework for an entire business and allows investors and bankers, as well as the entrepreneur, to have a quick way of evaluating that business. Business models can be viewed in many different ways, but they are generally composed of the following six elements:
Offer significant value to customers
Fund the business
Acquire high value customers
Deliver products or services with high margins
Provide for customer satisfaction
Maintain market position
The revenue model is a key component of the business model as it is an essential factor for delivering products or services with high margins and funding the business. Less than 50% of the investment required to set up a business will be used in revenue-producing areas. The revenue model cannot, however, be viewed as identical to the business model, since it does not influence all six elements; rather, it should be viewed as an inner component of it.
Having a well-structured business model is necessary for the success of any business adding value to a product or service for customers. This consequently includes having a clear and tailored revenue model, which ensures the business's financial health. It provides the owners of the business with a necessary understanding of cash flows, as well as of how the business will generate revenue and maximize profitability. In addition to the business model, financial targets have to be forecast when creating an initial business plan, whereby expected revenues and profits are presented and calculated through the revenue models applied by the business.
References
Business models
Revenue
Revenue models |
26004545 | https://en.wikipedia.org/wiki/Motorola%20A910 | Motorola A910 | The Motorola A910 is a clamshell mobile phone from Motorola, which uses MontaVista Linux as the operating system.
Motorola started selling this phone in the first quarter of 2006. Running a combined Linux-Java operating system and offering Wi-Fi connectivity, the Motorola A910 extends its predecessors with user-friendly features ranging from text messaging to email management. It is also the only clamshell phone from Motorola with Wi-Fi, as well as the only non-touchscreen Motorola with Wi-Fi in Europe.
Features
The phone is supplied with a number of applications including a POP and IMAP email client, the Opera web browser, a calendar and a viewer for PDF and Microsoft Office files. The calendar and address book can be synchronized with a Microsoft Exchange or SyncML server. The phone has a 1.3 megapixel camera with a self-portrait viewfinder on the external display and photo lighting, recording still and video images. RealPlayer is included to play audio files and streamed audio and video. The phone has 48 megabytes of internal flash memory for storing user data and a slot for a microSD card, which supports an additional 2 GB of storage. Both Bluetooth and USB are provided for communication with another computer. Character entry is made via the keypad interface.
Linux enthusiasts
This phone is popular with Linux enthusiasts. It is able to establish an Ethernet connection between the phone and another computer over USB, Bluetooth or Wi-Fi. One can then telnet to the phone and be presented with a bash prompt. From the prompt one can, for example, mount NFS shares on the phone. The underlying operating system, Motorola EZX, is Linux-based, and its kernel is open source. With the source code hosted on opensource.motorola.com, it is possible to recompile and replace the kernel for this operating system. However, Motorola did not publish a software development kit for native applications. Instead, they expect third-party programs to be written in Java ME. The OpenEZX website is dedicated to providing free open-source software for this phone and others using the same OS.
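A minimal sketch of the telnet-and-mount workflow described above, written in Python for illustration, is shown below. The phone address, the passwordless root login and the NFS export path are assumptions for the example rather than documented values, and the legacy telnetlib module used here was removed from the Python standard library in version 3.13:

# Sketch only: open a telnet session to the phone and mount an NFS share.
# Assumptions (not from the source): the phone answers at 192.168.0.202,
# accepts a root login without a password, and the host exports /srv/share.
import telnetlib

PHONE = "192.168.0.202"                 # hypothetical address of the A910 on the USB/Wi-Fi link
NFS_EXPORT = "192.168.0.1:/srv/share"   # hypothetical NFS export on the host computer

tn = telnetlib.Telnet(PHONE, 23, timeout=10)
tn.read_until(b"login: ")
tn.write(b"root\n")                     # assumed passwordless root login
tn.read_until(b"# ")                    # wait for the shell prompt
tn.write(b"mkdir -p /mnt/nfs\n")
tn.read_until(b"# ")
tn.write(("mount -t nfs -o nolock %s /mnt/nfs\n" % NFS_EXPORT).encode())
tn.read_until(b"# ")
tn.write(b"ls /mnt/nfs\n")              # verify the share is visible from the phone
print(tn.read_until(b"# ").decode(errors="replace"))
tn.close()

The same commands could equally be typed by hand at the phone's prompt; the point is only that a standard telnet client and ordinary mount commands are all that is needed.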
See also
Motorola
List of Motorola products
List of mobile phones running Linux
OpenEZX
References
External links
Motorola A910 official website
OpenEZX Wiki - A910 Hardware details
Motorola Open source - Makes the Linux source code and drivers available in compliance with GPL
Motorolafans - fansite with many applications for Motorola Linux Phones
Smartphones
A910
Information appliances |
893433 | https://en.wikipedia.org/wiki/Lightweight%20Extensible%20Authentication%20Protocol | Lightweight Extensible Authentication Protocol | The Lightweight Extensible Authentication Protocol (LEAP) is a proprietary wireless LAN authentication method developed by Cisco Systems. Important features of LEAP are dynamic WEP keys and mutual authentication (between a wireless client and a RADIUS server). LEAP allows for clients to re-authenticate frequently; upon each successful authentication, the clients acquire a new WEP key (with the hope that the WEP keys don't live long enough to be cracked). LEAP may be configured to use TKIP instead of dynamic WEP.
Some third-party vendors also support LEAP through the Cisco Compatible Extensions Program.
An unofficial description of the protocol is available.
Security considerations
Cisco LEAP, similar to WEP, has had well-known security weaknesses since 2003 involving offline password cracking. LEAP uses a modified version of MS-CHAP, an authentication protocol in which user credentials are not strongly protected. Stronger authentication protocols employ a salt to strengthen the credentials against eavesdropping during the authentication process. Cisco's response to the weaknesses of LEAP suggests that network administrators either force users to have stronger, more complicated passwords or move to another authentication protocol also developed by Cisco, EAP-FAST, to ensure security. Automated tools like ASLEAP demonstrate the ease of gaining unauthorized access to networks protected by LEAP implementations.
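The style of offline attack exploited by such tools can be illustrated with a deliberately simplified sketch. The scheme below is not the actual LEAP/MS-CHAP algorithm or wire format; it only shows why a captured challenge/response pair derived from an unsalted password is enough for offline dictionary guessing, with the wordlist and captured values invented for the example:

# Simplified illustration of an offline dictionary attack against an
# unsalted challenge-response exchange. This is NOT the real MS-CHAP/LEAP
# algorithm; it only demonstrates the general weakness.
import hashlib

def response(password: str, challenge: bytes) -> bytes:
    # Toy scheme: response = SHA-256(password || challenge), with no salt
    return hashlib.sha256(password.encode() + challenge).digest()

# Values the attacker is assumed to have sniffed from the wireless exchange
captured_challenge = bytes.fromhex("00112233445566778899aabbccddeeff")
captured_response = response("Summer2003", captured_challenge)  # victim's password, for the demo

# Offline guessing: no further interaction with the network is needed
wordlist = ["password", "letmein", "Summer2003", "correcthorse"]
for guess in wordlist:
    if response(guess, captured_challenge) == captured_response:
        print("Password recovered offline:", guess)
        break

Because nothing user-specific beyond the password enters the computation, the same guesses can be reused against every captured exchange, which is the weakness a per-user salt is intended to remove.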
References
Cisco protocols
Wireless networking |
34901333 | https://en.wikipedia.org/wiki/Trojan%20horse%20defense | Trojan horse defense | The Trojan horse defense is a technologically based take on the classic SODDI defense, believed to have surfaced in the UK in 2003. The defense typically involves defendant denial of responsibility for (i) the presence of cyber contraband on the defendant's computer system; or (ii) commission of a cybercrime via the defendant's computer, on the basis that malware (such as a Trojan horse, virus, worm, Internet bot or other program), or some other perpetrator using such malware, was responsible for the commission of the offence in question.
A modified use of the defense involves a defendant charged with a non-cyber crime admitting that whilst technically speaking the defendant may be responsible for the commission of the offence, he or she lacked the necessary criminal intent or knowledge on account of malware involvement.
The phrase itself is not an established legal term, originating from early texts by digital evidence specialists referring specifically to trojans because many early successful Trojan horse defenses were based on the operation of alleged Trojan horses. Due to the increasing use of Trojan programs by hackers, and increased publicity regarding the defense, its use is likely to become more widespread.
Legal basis of the defense
Excluding offences of strict liability, criminal law generally requires the prosecution to establish every element of the actus reus and the mens rea of an offence together with the "absence of a valid defence". Guilt must be proved, and any defense disproved, beyond a reasonable doubt.
In a trojan horse defense the defendant claims he did not commit the actus reus. In addition (or, where the defendant cannot deny that they committed the actus reus of the offence, then in the alternative) the defendant contends lack of the requisite mens rea as he "did not even know about the crime being committed".
With notable exceptions, the defendant should typically introduce some credible evidence that (a) malware was installed on the defendant's computer; (b) by someone other than the defendant; (c) without the defendant's knowledge. Unlike the real-world SODDI defense, the apparent anonymity of the perpetrator works to the advantage of the defendant.
Prosecution rebuttal of the defense
Where a defense has been put forward as discussed above, the prosecution are essentially in the position of having to "disprove a negative" by showing that malware was not responsible. This has proved controversial, with suggestions that "should a defendant choose to rely on this defense, the burden of proof (should) be on that defendant". If evidence suggests that malware was present and responsible, then the prosecution will need to rebut the claimed absence of the defendant's requisite mens rea.
Much will depend on the outcome of the forensic investigative process, together with expert witness evidence relating to the facts. Digital evidence such as the following may assist the prosecution in potentially negating the legal or factual foundation of the defense by casting doubt on the contended absence of actus reus and/or mens rea:-
Absence of evidence of malware or backdoors on the defendant's computer.
Where malware absence is attributed by the defense to a self wiping trojan, evidence of anti-virus/firewall software on the computer helps cast doubt on the defense (such software can result in trojan detection rates of up to 98%) as does evidence of the absence of wiping tools, as "it is practically impossible that there would be no digital traces of [...] the use of wiping tools".
Evidence showing that any located malware was not responsible, or was installed after the date/s of the offences.
Incriminating activity logs obtained from a network packet capture.
In cyber-contraband cases, the absence of evidence of automation - e.g. close proximity of load times, or contraband time/date stamps showing machine-like regularity (a simple test for such regularity is sketched after this list). The volume of contraband is also relevant.
Corroborating digital-evidence showing defendant intent/knowledge (e.g. chat logs).
Such properly obtained, processed and handled digital evidence may prove more effective when combined with corroborating non-digital evidence, for example (i) evidence that the defendant has enough knowledge about computers to protect them; and (ii) relevant physical evidence from the crime scene that is related to the crime.
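The timestamp-regularity point above can be illustrated with a short, purely hypothetical sketch: given file creation times recovered from an image, near-constant gaps between them are more consistent with an automated process than with human browsing. The timestamps and the 10% variability threshold are invented for the example and would not, on their own, establish anything in court:

# Sketch: flag suspiciously regular gaps between recovered file timestamps,
# which may point to automated (malware-driven) activity rather than a human.
# The timestamp values and the threshold are invented for illustration only.
from datetime import datetime
from statistics import mean, pstdev

timestamps = [
    datetime(2003, 5, 18, 2, 14, 0),
    datetime(2003, 5, 18, 2, 14, 30),
    datetime(2003, 5, 18, 2, 15, 0),
    datetime(2003, 5, 18, 2, 15, 30),
]

gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
if len(gaps) >= 2 and mean(gaps) > 0 and pstdev(gaps) / mean(gaps) < 0.1:
    print("Near-constant intervals of %.0f s: consistent with automation" % mean(gaps))
else:
    print("Irregular intervals: more consistent with manual activity")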
The role of computer forensics
Whilst there is currently "no established standard method for conducting a computer forensic examination", the employment of digital forensics good practice and methodologies in the investigation by computer forensics experts can be crucial in establishing defendant innocence or guilt. This should include implementation of the key principles for handling and obtaining computer based electronic evidence - see for example the (ACPO) Good Practice Guide for Computer-Based Electronic Evidence. Some practical steps should potentially include the following:-
Making a copy of the computer system in question as early as possible to prevent contamination (unlike in the case of Julie Amero, where the investigator worked directly off Amero's hard drive rather than creating a forensic image of the drive); a minimal integrity-hashing sketch of this step follows the list.
Mounting the image as a second disk onto another machine for experts to run a standard anti-virus program.
Correct handling of volatile data to ensure evidence is acquired without altering the original.
If a Trojan is found, it is necessary to examine the totality of the circumstances and the quantity of incriminating materials.
Including a "network forensic approach" e.g. by way of legally obtained packet capture information.
Cases involving the Trojan Horse Defense
There are different cases where the Trojan horse defense has been used, sometimes successfully. Some key cases include the following:-
Regina v Aaron Caffrey (2003):
The first heavily publicised case involving the successful use of the defense, Caffrey was arrested on suspicion of having launched a Denial of Service attack against the computer systems of the Port of Houston, causing the Port's webserver to freeze and resulting in huge damage being suffered on account of the Port's network connections being rendered unavailable, thereby preventing the provision of information to "ship masters, mooring companies, and support companies responsible for the support of ships sailing and leaving the port". Caffrey was charged with an unauthorised modification offence under section 3 of the Computer Misuse Act 1990 (section 3 has since been amended by the Police and Justice Act 2006, creating an offence of temporary impairment).
The prosecution and defense agreed that the attack originated from Caffrey's computer. Whilst Caffrey admitted to being a "member of a hacker group", Caffrey's defense claimed that, without Caffrey's knowledge, attackers breached his system and installed "an unspecified Trojan...to gain control of his PC and launch the assault", which also enabled the attackers to plant evidence on Caffrey's computer.
No evidence of any trojan, backdoor services or log alterations was found on Caffrey's computer. However, evidence of the Denial of Service script itself was found, with logs showing the attack program had been run. Incriminating chat logs were also recovered. Caffrey himself testified that a Trojan horse "armed with a wiping tool" could have deleted all traces of itself after the attack. Despite expert testimony that no such trojans existed, the jury acquitted Caffrey.
The case also raises issues regarding digital forensics best practice as evidence may have been destroyed when the power to Caffrey's computer was terminated by investigators.
Julian Green (2003):
A United Kingdom-based case, Julian Green was arrested after 172 indecent pictures of children were found on Green's hard drive. The defense argued that Green had no knowledge of the images on his computer and that someone else could have planted the pictures. Green's computer forensics consultant identified 11 Trojan horses on Green's computer, which in the consultant's expert witness testimony, were capable of putting the pornography on Green's computer without Green's knowledge or permission. The jury acquitted Green of all charges after the prosecution offered no evidence at Exeter Crown Court, due to their failure to prove that Green downloaded the images onto the computer.
The case also raises issues related to the evidential chain of custody, as the possibility of evidence having been planted on Green's computer could not be excluded.
Karl Schofield (2003): Karl Schofield was also acquitted by using the Trojan horse defense. He was accused of creating 14 indecent images of children on his computer, but forensic testimony was given by a defense expert witness that a Trojan horse had been found on Schofield's computer and that the program was responsible for the images found on the computer. Prosecutors accepted the expert witness testimony and dismissed the charges, concluding they could not establish beyond a reasonable doubt that Schofield was responsible for downloading the images.
Eugene Pitts (2003): A US-based case involving an Alabama accountant who was found innocent of nine counts of tax evasion and filing fraudulent personal and business state income tax returns with the Alabama state revenue department. The prosecution claimed he knowingly underreported more than $630,000 in income over a three-year period and was facing a fine of $900,000 and up to 33 years in prison. Pitts had apparently been accused of under-reporting taxes in preceding years. Pitts argued that a computer virus was responsible for modifying his electronic files, resulting in the under-reporting of his firm's income, and that he was unaware of the virus until investigators alerted him. State prosecutors noted that the alleged virus did not affect the tax returns of customers, which were prepared on the same machine. The jury acquitted Pitts of all charges.
The future of the defense
Increased publicity, increased use
As the defense gains more publicity, its use by defendants may increase. This may lead to criminals planting Trojans on their own computers and later seeking to rely on the defense. Equally, innocent defendants incriminated by malware need to be protected. Cyberextortionists are already exploiting the public's fears by "shaking down" victims, extorting payment from them under the threat that, failing payment, the cyber-criminals will plant cyber-contraband on their computers.
As with many criminal offences, it is difficult to prevent the problematic matters that arise during the course of the investigation. For example, in the case of Julian Green, before his acquittal he spent one night in the cells, nine days in prison, three months in a bail hostel, and lost custody of his daughter and possession of his house. In the later case of Karl Schofield, he was attacked by vigilantes following reports of his arrest, lost his employment, and the case took two years to come to trial.
Appropriate digital forensic techniques and methodologies must be developed and employed which can put the "forensic analyst in a much stronger position to be able to prove or disprove a backdoor claim". Where applied early on in the investigation process, this could potentially avoid a reputationally damaging trial for an innocent defendant.
Juries
For a layman juror, the sheer volume and complexity of expert testimony relating to computer technology, such as a Trojan horse, could make it difficult to separate fact from fallacy. It is possible that some defendants are being acquitted because jurors are not technologically knowledgeable. One suggested method of addressing this would be to educate juries and prosecutors in the intricacies of information security.
Mobile Technology
The increasing dominance of smart device technology (combined with consumers' typically lax habits regarding smart device security) may lead to future cases in which the defense is invoked in the context of such devices.
Government Trojans
Where the use of Government Trojans results in contraband on, or commission of a cybercrime via, a defendant's computer, there is a risk that through a gag order (for example a US National security letter) the defendant could be prevented from disclosing his defense, on national security grounds. The balancing of such apparent national security interests against principles of civil liberties, is a nettle which, should the use of government trojans continue, may need to be grasped by Legislatures in the near future.
See also
SODDI Defense
Trojan horse
Blackmail
Botnet
DoSnet
Hacker (computer security)
References
Legal defenses
Criminal defenses |
122780 | https://en.wikipedia.org/wiki/Avilla%2C%20Missouri | Avilla, Missouri | Avilla is a rural village in Jasper County, Missouri, United States. The population was 103 at the 2020 census. It is part of the Joplin, Missouri Metropolitan Statistical Area. Avilla is the fourth-oldest settlement in Jasper County today, founded in 1856. It was platted and laid out for public use July 23, 1858, by Andrew L. Love and David S. Holman.
Geography
Avilla is located at (37.193821, −94.128991), ten miles east of Carthage, Missouri, on MO Route 96 (formerly Historic U.S. Route 66) and four miles west of the Lawrence County line. The village is surrounded by pasture and farmland, small forested areas and branching spring-fed streams. White Oak Creek is the nearest to the south and east, and Dry Fork and Deer Creek lie to the north. Spring River, which is eventually fed by these headwater streams, runs past about three miles to the south.
According to the United States Census Bureau, the village has a total area of , all land.
History
1831–1861
Founders of Avilla
Pioneers who came to this region in the 1830s and 1840s saw a "beautiful prairie land, interspersed with timbered belts along winding streams". Settlement of the grasslands presented more challenges than other types of terrain, and for this reason northeastern Jasper County developed more slowly than the rest of the county. Split-log homes were built near wooded locations and rock and sod were also used in early constructions. Although families were many miles apart, they still called each other neighbor. Some of the earliest settlers near present-day Avilla were John K. Gibson in 1831 (just across the Lawrence County line), James Blackwell in 1835 and John Fishburn on White Oak Creek in 1836. Nelson Knight was the first settler on the prairie north of Avilla, building a cabin and farm in 1837, and Jasper County itself was established in 1841. Thomas Buck came all the way from Indiana in a wagon drawn by a team of horses in the 1840s and built a farm just east of the future town site. The first schoolhouse in the Avilla area was a one-room, dirt-floor log cabin also founded in the 1840s, called White Oak School, located about southeast near White Oak Creek. Arriving with his family in 1853, Dr. Jaquillian M. Stemmons was the first physician to practice medicine in the Avilla area, doing so from his farm. The town of Avilla was founded in 1856 and platted and laid out for public use July 23, 1858, by Andrew L. Love and David S. Holman. Mr. Love was the first Justice of the Peace, and Mr. Holman was the first merchant and postmaster, establishing the post office in 1860. A Dr Young later came just before the Civil War and established a medical office within the town limits.
Indigenous People
This had been the hunting grounds of the Osage Indians who were known to have camped at nearby Spring River, about to the south. Their lands to the east had been previously purchased by the government in 1808 (Treaty of Fort Clark) and other tribes had been moved to this location as well, and then later all were moved again to the Osage Nation areas elsewhere. Notwithstanding, a few that possibly returned or had simply refused to leave could still be seen trading in Avilla and the nearby towns throughout the Antebellum Period.
Avilla at the Beginning of the Civil War
By 1861 there were several small river mill settlements, some mining camps and about nine or ten towns (seven platted) in Jasper County, Missouri. Avilla was newly formed and "bustling" with over one hundred citizens (compare with the county seat of Carthage that had an estimated population between four and five hundred at that time). As in all of the border state towns, families in Avilla were split over the question of Missouri secession, and some were slave owners. Dr. Jaquillian M. Stemmons actually owned eight inherited slaves himself; however, he and the other town leaders were Unconditional Unionists and remained aligned with newly elected president Abraham Lincoln. Dr. J.M. Stemmons never bought or sold slaves and was known to have retained his family inherited slaves simply for their very own safety. He supported the abolition of slavery in the United States. Avilla's political alignment was in sharp contrast to neighboring Sarcoxie to the south, where the first regional Confederate flag was raised. The rebel "Stars & Bars" also flew over Carthage to the west in 1861, following an early Confederate victory at the Battle of Carthage on July 5. At a distance of only two counties away, Arkansas had already become the ninth state to secede, and on October 28, 1861, Governor Claiborne Fox Jackson met with the Missouri General Assembly in Neosho and declared Missouri as the twelfth state to join the Confederate States of America. In spite of being engulfed by the Confederacy, the United States flag continued to fly over Avilla, boldly hoisted to the top of a flagpole in the town center park and guarded by the townsmen. Schoolhouses were closed and many families evacuated their women and small children to safer areas in other states.
1861–1865
Confederate guerrillas attack
Dr. Jaquillian M. Stemmons, an early settler, town leader and staunch Union man, organized a company of local men and neighbors in Avilla for the protection of their own homes from roaming bands of bushwhackers. In 1861 this town militia, also known as the "Avilla Home Guard", was one of the first in the area and consisted predominantly of the oldest citizens, as most of the younger men were leaving to join regular military forces. This action was strongly opposed by local secessionists, and it was even rumored that a price had been placed on the doctor's head. By March 1862, the town militia had been tipped off about an impending assault and General James G. Blunt at Fort Scott, Kansas, had pledged reinforcements, but they had not yet arrived. After nightfall on March 8, 1862, a group of over a hundred pro-Confederate guerrillas believed to have been led by William T. Anderson attacked northeast of Avilla, routed perimeter sentries and engaged defenders at Dr. Stemmon’s home. Defending were about twenty-five town militiamen and some men from Carthage who were there attending a meeting about the organization of a county-wide patrol. A US Cavalry officer named Captain Tanner was also there recruiting men for the Union Army. The rebels surrounded the two-story log home and were met with heavy gunfire, but the doctor and three of his sons, Bud, Pole and Jimmie were trapped inside with many of the men. After numerous attempts to penetrate the defense, amidst flying buckshot and bullets, the attackers managed to ignite the cabin and it eventually burned to the ground. Dr. Stemmons and Lathan Duncan, an Avilla militiaman, were killed, several others shot and burned, and two were taken prisoner (the number of guerrilla casualties was not recorded). After the house was lost to flames, the heavily out-numbered militia withdrew and scattered in the darkness. They re-formed near the north edge of Avilla and braced for another onslaught, but it did not occur. The guerrilla force instead ended the attack and rode east toward Springfield, where the two elderly prisoners were later "given stern warnings to leave the state" and released.
Dr. J.M. Stemmons had been considered an "influential area figure against secession", and this was thought to be a chief motive for the attack and his murder. Nevertheless, the "defiant and heroic actions" of Dr. Stemmons, Mr. Duncan and the town militia's "bold resistance" undoubtedly repelled further violence and probably prevented the burning of Avilla on that or ensuing dates. Names that are known of the courageous militiamen and allies defending on that night also included: Miles Overton, George Moose, Jap Moody, Ben Key, Cavalry Chapman, Robert Seymour, Orange Clark, Humphrey Robinson, Tom Driver, James S. Carter, Reuben Fishburn, William Club (seriously wounded), Nelson Knight (taken prisoner), Rabe Paul and Coleman Paul, Isaac Schooler, "Dutch" Brown (taken prisoner), Nip Walker, Peter Baker, Renard Napper and Cpt. Tanner from Fort Scott (The Captain was a Union Army Recruiting Officer and reportedly continued to fight after taking a shotgun blast to the face).
Humphrey Robinson (1812–1864) was later abducted by bushwhackers from his Lawrence County, Missouri, farm in 1864, and never seen again. Many at the time believed he was recognized by his captors as one who stood against them in the defense of Avilla and was killed in retaliation. A marker to his memory was laid beside his widow at Gray's Point Cemetery, near Miller, Missouri.
A Gruesome Warning to Bushwhackers
The rebel attack on the Stemmons home was intended to terrorize and defuse resistance but essentially had the opposite effect, infuriating the townsmen and altering the defensive efforts to offensive as everyone in Avilla took up arms. The Union Army gained possession of Missouri in 1862, but the terrain encompassing Avilla remained plagued with bushwhackers and occasionally small bands of Confederate regulars or guerrilla raiders on horseback. The town militia inherently became the earliest county militia for a period, headquartered in Avilla (this was before the formation of the Missouri County Militias in 1864). The patrol areas were then extended within eastern Jasper and western Lawrence Counties. Patrols of mounted militiamen were augmented by a few Union soldiers of the US Cavalry and continued to protect the town and countryside in several local skirmishes. Many bushwhackers were tracked down and shot, and within a short time the rest of them grew to fear the deadly Avilla "pioneer marksmen". In one account a rebel’s skeleton was found just south of town with a bullet hole in the skull; his name was never identified. He had apparently been killed during a previous skirmish with militiamen, but his remains were not found until they were in an advanced state of decomposition. The skull was then hung from the "Death Tree" in Avilla, suspended from a tree limb for over a year near the road at the Dunlap apple orchard "as a warning to all other bushwhackers".
Union Army Garrison at Avilla
By 1863, the Enrolled Missouri Militia was stationed at the Union Army garrison in Avilla; these new soldiers were under the command of Major Morgan. Tents were erected and storehouses, barns and homes were converted to temporary Army barracks & headquarters which housed hundreds of soldiers at various times, and a number of refugees. The town became known to the Missouri Militiamen informally as "Camp Avilla'", and by 1864 many of the original town militiamen continued to assist the new Missouri Militia functioning as patrol leaders in the newly organized Jasper County Militia. Avilla supported anti-guerrilla operations in the region while under Lieutenant-Colonel T.T. Crittenden of the Missouri State Militia's 7th Cavalry, and facilitated as a way station when needed in the transportation of Confederate Prisoners of War. Being situated in open grassland Avilla was able to maintain a formidable and effective defense and became a sanctuary for refugees of nearby burned-out towns such as Carthage, but the county remained dangerous until the end and even for some time after the war.
1865–1900
Boom town during the Reconstruction Era
Avilla fended off and avoided destruction during the Civil War, and was an overnight boom town during the Reconstruction Era at war's end. Merchandise and construction materials were hauled by wagon train from Sedalia, Missouri, the closest railroad shipping point to Avilla. Much of Jasper County lay in ruins, and local merchants and businessmen grew wealthy during the rebuilding of Carthage, Sarcoxie, and other nearby war-damaged communities. Many old time residents later claimed that Avilla had actually been the largest operating town in Jasper County after hostilities ended, for a short time. Commerce even came from as far away as Kansas, by farmers traveling to Avilla to buy seed, building supplies and provisions. Captain Thomas Jefferson Stemmons, a Union commander and son of the late Dr. J.M. Stemmons, returned home to Avilla and started a mercantile with partner D. B. Rives, which was the first new business established after the Civil War. The first hotel was called The Avilla House and was erected two years later in 1868 by Justice Hall. Through the 1870s and 1880s there were two general stores (dry goods & clothing), two grocery stores, one or more doctor's offices, one "notion" (sewing) store, two boot & shoe stores, one livery & feed stable, three churches, a drug store, a Grand Army of the Republic post (GAR) and two "secret societies": the Freemasons Lodge and the Independent Order of Odd Fellows Lodge (IOOF) and houses sprang up everywhere. Located at what is now named "School Street", the first Avilla school in town was built in the 1880s and taught grades 1–12 (called lower, upper and high school). Sources disagree, but some documents cite the town's population at over five hundred during these years, not including families on the farms encircling just beyond the town limits. Despite the initial spurt after the Civil War, growth of the town was stunted because the railroad was not built through Avilla. Farmland was the primary natural resource and without industry the population never increased after this time, and regular stage lines were eventually discontinued.
Avilla Zouaves in the Spanish–American War
After the battleship USS Maine exploded at Havana, Cuba in February 1898, a "war fever" against Spain swept through America with the cry: "Remember the Maine!" On March 4, 1898, a highly charged war rally was held at the Avilla Methodist Church. Three attorneys gave rousing speeches to the packed house, fueled with patriotic songs from a ladies choir. As a result, fifty-three young men immediately volunteered for military service in a new company that would be known as the Avilla Zouaves. These units were characterized by colorful uniforms and/or precision drilling patterned after the French Zouaves, and were very stylish in the 19th century military. The Avilla youth would be designated in the US Army as Company G, 5th Missouri Infantry Regiment. Two months later, the new Avilla unit was escorted by flag-bearing GAR members to the Carthage Train Depot, and with music from the Light Guard Band were ceremoniously sent off for battle in the Spanish–American War. The fighting in Cuba was over quickly and the 5th Missouri Infantry was mustered out on November 9, 1898, before the Avilla Zouaves saw any action. Although they did not fight in the war, this event illustrates the raw patriotic spirit of Avilla, Missouri, still present in 1898.
1900–1970
The Bank of Avilla and Robbery of 1932
The Bank of Avilla was established September 18, 1914, and the building was completed in 1915. Samuel Salyer was the first majority stockholder and cashier (the title "cashier" was applied to bank officers & managers). Mr. Ivy E. Russell became majority stockholder and cashier in 1919, remaining for its duration, with the stock ledger ending in 1944. Handling farm and business loans, the small bank remained profitable even through the Great Depression of the 1930s, though records are incomplete. This in itself is quite remarkable as almost half of the banks in America had either closed or merged in the 1930s. The productive farms surrounding the town had established Avilla as a valuable agricultural and livestock raising community.
The Bank of Avilla was the target of a successful armed robbery on May 18, 1932, by members of the notorious "Irish O'Malley Gang", which also resulted in the kidnapping of the cashier. The O'Malley Gang were typical Depression-era outlaws who had merged with another group of thugs known as the "Ozark Mountain Boys". On that Wednesday in 1932, the bank cashier Mr. Ivy E. Russell was robbed at gunpoint inside the bank by two men. He was then kidnapped and driven toward Carthage, Mo, where he was tossed out of the car and left by the roadside. One of the culprits was a "sawed-off shotgun wielding gangster" named Jack Miller, who drove the getaway car. It is not known if the undisclosed amount was ever recovered, and records do not show if or how bank customers were reimbursed (notes and deposits were not insured at this time). After a lengthy spree of bank hold-ups, store robberies and murders throughout the Midwest, all of the O'Malley Gang were eventually captured. Some gang members were killed or found dead, and one was sentenced to a seventy-five year prison term for the Avilla bank robbery. Additional facts about the crime can be pieced together through various computerized data sources, some of which include: archived editions of the Miami Daily News Record (Oklahoma) dated May 19, 1932, and from The Joplin Globe dated Oct 8, 1939. In 1938 Frank Layton and Jack Miller were pulled over by police in Arkansas, and were charged with violating the 1934 National Firearms Act (because of Jack's sawed-off shotgun). This in turn became part of a famous landmark Second Amendment case known as "The Miller Case" and United States v. Miller. Jack Miller himself was murdered one month prior to the Supreme Court's decision. Jack's bullet-riddled body turned up on the bank of Spencer Creek in Rogers County, Oklahoma.
In spite of having been robbed and kidnapped, Mr. Ivy E. Russell continued to operate the Bank of Avilla for at least twelve more years. A great crime wave of robberies and violence swept across the Midwest in 1932. Following the Avilla caper, Mr. Russell increased security measures by keeping a large caliber firearm behind the teller window, and additional measures that remain a secret to this day. It is known that Bonnie & Clyde of the infamous "Barrow Gang" were near the area in Joplin, Missouri, mere months after the Bank of Avilla robbery by the O'Malley Gang. The Barrow Gang were undoubtedly "casing-out" banks to rob as well. Local lore has it that Clyde Barrow entered the Bank of Avilla, looked Mr. Russell in the eye, and then saw his .45 holstered while he stood behind the teller window. Clyde allegedly tipped his hat, said "'Afternoon", then turned around and promptly left. Though this is local legend, it is safe to assume that many other Depression-era hoodlums passed through town as well. The bank always remained open during normal hours, with Ivy E. Russell as the cashier. The Bank of Avilla was never robbed again.
As more roads were paved and transportation in the vicinity significantly improved, the need for a local bank diminished. At some point around 1944 its assets were transferred to the Bank of Carthage, and the bank building was vacated for a few years. The property was then leased by the government April 1, 1952, to house the old US Post Office which was in need of a new location by that time. The historic building has remained a post office ever since.
Avilla "Gets Kicks" on Route 66
The trail that went through the center of Avilla east & west was known as "Old Carthage Road", and it was paved and became part of U.S. Route 66 in the late 1920s. This kept business flowing as the little town became one of the stops on "The Mother Road", the main highway through the heart of America in those years. Population growth had already peaked before the 20th century, but the town continued to make modern improvements such as a volunteer fire department, a hardware store & lumber yard (owned by Raymond Ziler, burned in 1971), a barbershop, a beauty salon, a tavern (Florida Melugin's, or "Old Flo's"), The Avilla Cafe (Jack & Nadine "Sours" Couch), several auto service stations (in town and in close proximity) and repair shops for farm equipment and automobiles, a farm implement sales business (Chapman-Follmers), a seed mill, a Boy Scout meeting hall (Scoutmaster Joseph A. Norris Sr.) and, in later years, even an arena grounds constructed for the Avilla Rodeo (Avilla Saddle Club) west of town. A larger school building was also built, and the old one-room school houses which were still operating and spread out in that part of Jasper County were consolidated and centralized in Avilla. The original country schoolhouse teachers were brought together to form the elementary/middle school Avilla R-13 School District. The Avilla school became the only one in the district. Because the school spans grades kindergarten through eighth, high school level students thereafter were sent to neighboring Carthage, Sarcoxie, Jasper, Miller or Golden City, Missouri, for continued studies.
1970 – present
Living ghost town
Avilla had actually started to decline in the 1940s after World War II, when greater numbers of people (especially young adults) from the already small community began moving to larger industrial cities for employment opportunities. The final turning point was in the 1960s, when US Route 66 was bypassed by I-44 (the Interstate Highway System). The lost commerce due to the diverted traffic caused many of the remaining businesses to fail or to be relocated in the 1970s. In 1971 a large fire broke out at the Avilla lumberyard, destroying several buildings including most of the lumber company, the Boy Scout meeting hall and some private residences. The lumberyard was later rebuilt, but by the late 1970s deteriorating town shops had been sold and resold, and finally deserted. The only trades that survived were the ones that could be sustained by the dwindling local population and area farming operations. Most of the earliest buildings are now gone, replaced by noticeable empty spaces and vacant lots. US Route 66 was redesignated MO Route 96 in 1985, but by then Avilla was already a small, quiet rural community not unlike what is witnessed there today. A few abandoned structures remain within the present village as silent reminders of the town's heyday.
Avilla is considered one of the living "ghost towns of old Route 66". It was never completely abandoned and retains its village status today. Many antique country homes and farmhouses can be seen dotted about the Avilla countryside, and long-established family traditions in livestock raising and agriculture continue in the area. The rural community, with local 4-H clubs and the Harvest Community Church, is currently restoring the Avilla Saddle Club Arena. A few examples of period architecture can still be viewed, such as the iconic 19th century Avilla (United) Methodist Church, which was the first church established in Avilla, located in the northeast part of town. A Civil War era mercantile edifice (Stemmons & Rives) also endures near the old park at the west village entrance. Although it has been threatened with closure because of government cut-backs, 21st century visitors and residents can enjoy the nostalgic and well-preserved 1915 bank building, complete with the old-time teller windows, vault and vintage postal equipment, as it continues to fly Old Glory and serve as Avilla's US Post Office.
Notable people
Homer L. Hall, an award-winning American journalist, educator and widely published author of teaching and student textbooks, including the critically acclaimed High School Journalism. Inducted into the Missouri Interscholastic Press Association Hall of Fame and the National Scholastic Journalism Hall of Fame, he grew up on a farm near Avilla and graduated in the class of 1956 at Carthage High School.
James M. Craven, a state representative of Missouri from Avilla. Craven was born in Indiana in 1831 and became a livestock dealer in California and Oregon in the 1850s. He moved to Avilla and went into politics in 1866.
Walter Stemmons, a professor of journalism and university publications editor-in-chief at the University of Connecticut. Stemmons was born in Avilla in 1884.
Etymology
It is verified that Avilla, Missouri, was named on or before 1858 because of the plat documentation, but who selected the name and why remains a mystery. The town was founded by Andrew L. Love and David S. Holman, merchant-landowners who basically wanted to sell goods and property at the edge of the frontier in the mid-1850s, but little information was recorded by them, and even less about them. An authentic explanation for the name may exist in an old document or letter, but it has yet to be discovered or made public, and the knowledge may have died with the founders. In the United States, the list of similarly named places includes: a small town in Noble County, Indiana, a township in Comanche County, Kansas, and an unincorporated community in Saline County, Arkansas.
A myth that Avilla, Missouri, was named after Avilla, Indiana, was started in 1930 when it was published in the M.A. thesis of Robert Lee Meyers, University of Missouri-Columbia, "Place Names In The Southwest Counties Of Missouri". Although early settlers did come to Avilla, Missouri, from Indiana, the Indiana town was documented to have been named seventeen years after the Missouri town, by Judge Edwin Randal in 1875. Until then, Avilla, Indiana, was known as “Hill Town”.
Demographics
2010 census
As of the census of 2010, there were 125 people, 44 households, and 35 families residing in the village. The population density was . There were 54 housing units at an average density of . The racial makeup of the village was 90.4% White, 2.4% Native American, and 7.2% from two or more races. Hispanic or Latino of any race were 4.8% of the population.
There were 44 households, of which 43.2% had children under the age of 18 living with them, 63.6% were married couples living together, 11.4% had a female householder with no husband present, 4.5% had a male householder with no wife present, and 20.5% were non-families. 18.2% of all households were made up of individuals, and 4.5% had someone living alone who was 65 years of age or older. The average household size was 2.84 and the average family size was 3.00.
The median age in the village was 38.5 years. 28.8% of residents were under the age of 18; 8% were between the ages of 18 and 24; 20.8% were from 25 to 44; 28.8% were from 45 to 64; and 13.6% were 65 years of age or older. The gender makeup of the village was 50.4% male and 49.6% female.
2000 census
As of the census of 2000, there were 137 people, 53 households, and 41 families residing in the town. The population density was 680.9 people per square mile (264.5/km2). There were 56 housing units at an average density of 278.3 per square mile (108.1/km2). The racial makeup of the town was 98.54% White, and 1.46% from two or more races. Hispanic or Latino of any race were 0.73% of the population.
There were 53 households, out of which 32.1% had children under the age of 18 living with them, 62.3% were married couples living together, 9.4% had a female householder with no husband present, and 22.6% were non-families. 17.0% of all households were made up of individuals, and 3.8% had someone living alone who was 65 years of age or older. The average household size was 2.58 and the average family size was 2.88.
In the town the population was spread out, with 27.7% under the age of 18, 7.3% from 18 to 24, 27.0% from 25 to 44, 28.5% from 45 to 64, and 9.5% who were 65 years of age or older. The median age was 36 years. For every 100 females, there were 117.5 males. For every 100 females age 18 and over, there were 102.0 males.
The median income for a household in the town was $21,750, and the median income for a family was $25,000. Males had a median income of $27,500 versus $13,750 for females. The per capita income for the town was $11,673. There were 13.3% of families and 18.9% of the population living below the poverty line, including 33.3% of under eighteens and 16.7% of those over 64.
Legends and folklore
The legend of the “Avilla Phantom Bushwhacker” or the "Death Tree Legend", also known as "Rotten Johnny Reb", is an enduring Avilla ghost story describing various hauntings involving the ghost of a dead Confederate bushwhacker whose remains were never properly buried, and an old tree charged with dark or evil energy after his skull was hung on it. According to one version of the legend, the phantom is not only searching for his head, but also seeking vengeance on the town's citizens.
References
Further reading
Missouri's Wicked Route 66: Gangsters and Outlaws on the Mother Road By Lisa Livingston-Martin
A history of Jasper County, Missouri, and its people, Volume 1 By Joel Thomas Livingston
Jasper County: The first two hundred years by Marvin L VanGilder
Irish O'Malley & the Ozark Mountain Boys by R. D. Morgan, New Forums Press
Civil War Ghosts of Southwest Missouri, by Lisa Livingston-Martin
Encyclopedia of the History of Missouri Vol. III, edited by Howard L. Conard, 1901, ppg 418
The War of Rebellion: A compilation of the Official Records of the Union and Confederate Armies, Series 2 Volume 5, Government Printing Office 1899
Haunted Carthage, Missouri (Haunted America) by Lisa Livingston-Martin
External links
1883 History of Jasper County Missouri, (McDonald Township)
Southwest Missouri's Role In The Spanish–American War By Todd Wilkinson, published in The Ozarks Mountaineer March/April 2008
The biographical record of Jasper County, Missouri By Malcolm G. McGregor
Bank of Avilla Minute Book 1914–1924
M.A. Thesis “Place Names In The Southwest Counties Of Missouri" by Robert Lee Meyers, University of Missouri-Columbia, 1930
Villages in Jasper County, Missouri
Joplin, Missouri, metropolitan area
Reportedly haunted locations in Missouri
Villages in Missouri
1856 establishments in Missouri
Missouri in the American Civil War |
43045000 | https://en.wikipedia.org/wiki/Building%20performance%20simulation | Building performance simulation | Building performance simulation (BPS) is the replication of aspects of building performance using a computer-based, mathematical model created on the basis of fundamental physical principles and sound engineering practice. The objective of building performance simulation is the quantification of aspects of building performance which are relevant to the design, construction, operation and control of buildings. Building performance simulation has various sub-domains; most prominent are thermal simulation, lighting simulation, acoustical simulation and air flow simulation. Most building performance simulation is based on the use of bespoke simulation software. Building performance simulation itself is a field within the wider realm of scientific computing.
Introduction
From a physical point of view, a building is a very complex system, influenced by a wide range of parameters. A simulation model is an abstraction of the real building which allows practitioners to consider the relevant influences in a high level of detail and to analyze key performance indicators without cost-intensive measurements. BPS is a technology of considerable potential that provides the ability to quantify and compare the relative cost and performance attributes of a proposed design in a realistic manner and at relatively low effort and cost. Energy demand, indoor environmental quality (incl. thermal and visual comfort, indoor air quality and moisture phenomena), HVAC and renewable system performance, urban level modeling, building automation, and operational optimization are important aspects of BPS.
Over the last six decades, numerous BPS computer programs have been developed. The most comprehensive listing of BPS software can be found in the BEST directory. Some of them only cover certain parts of BPS (e.g. climate analysis, thermal comfort, energy calculations, plant modeling, daylight simulation etc.). The core tools in the field of BPS are multi-domain, dynamic, whole-building simulation tools, which provide users with key indicators such as heating and cooling load, energy demand, temperature trends, humidity, thermal and visual comfort indicators, air pollutants, ecological impact and costs.
A typical building simulation model has inputs for local weather; building geometry; building envelope characteristics; internal heat gains from lighting, occupants and equipment loads; heating, ventilation, and cooling (HVAC) system specifications; operation schedules and control strategies. The ease of input and the accessibility of output data vary widely between BPS tools. Advanced whole-building simulation tools are able to consider almost all of the following, albeit with different approaches; a minimal single-zone calculation sketch follows the two lists below.
Necessary input data for a whole-building simulation:
Climate: ambient air temperature, relative humidity, direct and diffuse solar radiation, wind speed and direction
Site: location and orientation of the building, shading by topography and surrounding buildings, ground properties
Geometry: building shape and zone geometry
Envelope: materials and constructions, windows and shading, thermal bridges, infiltration and openings
Internal gains: lights, equipment and occupants including schedules for operation/occupancy
Ventilation system: transport and conditioning (heating, cooling, humidification) of air
Room units: local units for heating, cooling and ventilation
Plant: Central units for transformation, storage and delivery of energy to the building
Controls: for window opening, shading devices, ventilation systems, room units, plant components
Some examples for key performance indicators:
Temperature trends: in zones, on surfaces, in construction layers, for hot or cold water supply or in double glass facades
Comfort indicators: like PMV and PPD, radiant temperature asymmetry, CO2-concentration, relative humidity
Heat balances: for zones, the whole building or single plant components
Load profiles: for heating and cooling demand, electricity profile for equipment and lighting
Energy demand: for heating, cooling, ventilation, light, equipment, auxiliary systems (e.g. pumps, fans, elevators)
Daylight availability: in certain zone areas, at different time points with variable outside conditions
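As an illustration of how such inputs and outputs might be organized, the sketch below (Python) groups a few of the items listed above into simple data structures. The field names and units are assumptions for illustration only; real tools such as EnergyPlus or IDA ICE define their own, far more detailed input formats.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Climate:
    """Hourly weather series driving the simulation (units assumed)."""
    air_temperature_c: List[float]
    relative_humidity_pct: List[float]
    direct_solar_w_m2: List[float]
    diffuse_solar_w_m2: List[float]
    wind_speed_m_s: List[float]

@dataclass
class Envelope:
    """Strongly simplified envelope description."""
    wall_u_value_w_m2k: float
    window_u_value_w_m2k: float
    window_g_value: float          # solar heat gain coefficient
    infiltration_ach: float        # air changes per hour

@dataclass
class Zone:
    name: str
    floor_area_m2: float
    volume_m3: float
    internal_gains_w: float        # lights + equipment + occupants

@dataclass
class BuildingModel:
    climate: Climate
    envelope: Envelope
    zones: List[Zone] = field(default_factory=list)

# Key performance indicators would typically be reported per zone, e.g.
# {"zone_1": {"heating_demand_kwh": ..., "ppd_pct": ...}}
SimulationResults = Dict[str, Dict[str, float]]
```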
Other use of BPS software
System sizing: for HVAC components like air handling units, heat exchanger, boiler, chiller, water storage tanks, heat pumps and renewable energy systems.
Optimizing control strategies: Controller setup for shading, window opening, heating, cooling and ventilation for increased operation performance.
History
The history of BPS is approximately as long as that of computers. The very early developments in this direction started in the late 1950s and early 1960s in the United States and Sweden. During this period, several methods were introduced for analyzing single system components (e.g. a gas boiler) using steady-state calculations. The very first reported simulation tool for buildings was BRIS, introduced in 1963 by the Royal Institute of Technology in Stockholm. By the late 1960s, several models with hourly resolution had been developed, focusing on energy assessments and heating/cooling load calculations. This effort resulted in more powerful simulation engines released in the early 1970s, among them BLAST, DOE-2, ESP-r, HVACSIM+ and TRNSYS. In the United States, the 1970s energy crisis intensified these efforts, as reducing the energy consumption of buildings became an urgent domestic policy interest. The energy crisis also initiated the development of U.S. building energy standards, beginning with ASHRAE 90-75.
The development of building simulation represents a combined effort between academia, governmental institutions, industry, and professional organizations. Over the past decades the building simulation discipline has matured into a field that offers unique expertise, methods and tools for building performance evaluation. Several review papers and state of the art analysis were carried out during that time giving an overview about the development.
In the 1980s, a group of leading building simulation specialists began discussing the future direction of BPS. There was a consensus that most of the tools developed until then were too rigid in structure to accommodate the improvements and flexibility that would be called for in the future. Around this time, the very first equation-based building simulation environment, ENET, was developed, which provided the foundation of SPARK. In 1989, Sahlin and Sowell presented a Neutral Model Format (NMF) for building simulation models, which is used today in the commercial software IDA ICE. Four years later, Klein introduced the Engineering Equation Solver (EES), and in 1997, Mattsson and Elmqvist reported on an international effort to design Modelica.
BPS still presents challenges relating to problem representation, support for performance appraisal, enabling operational application, and delivering user education, training, and accreditation. Clarke (2015) describes a future vision of BPS and identifies the following as the most important tasks to be addressed by the global BPS community:
Better concept promotion
Standardization of input data and accessibility of model libraries
Standard performance assessment procedures
Better embedding of BPS in practice
Operational support and fault diagnosis with BPS
Education, training, and user accreditation
Accuracy
In the context of building simulation models, error refers to the discrepancy between simulation results and the actual measured performance of the building. There are normally occurring uncertainties in building design and building assessment, which generally stem from approximations in model inputs, such as occupancy behavior. Calibration refers to the process of "tuning" or adjusting assumed simulation model inputs to match observed data from the utilities or Building Management System (BMS).
The number of publications dealing with accuracy in building modeling and simulation increased significantly over the past decade. Many papers report large gaps between simulation results and measurements, while other studies show that they can match very well. The reliability of results from BPS depends on many different things, e.g. on the quality of input data, the competence of the simulation engineers and on the applied methods in the simulation engine. An overview about possible causes for the widely discussed performance gap from design stage to operation is given by de Wilde (2014) and a progress report by the Zero Carbon Hub (2013). Both conclude the factors mentioned above as the main uncertainties in BPS.
ASHRAE Standard 140-2017 "Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ANSI Approved)" provides a method to validate the technical capability and range of applicability of computer programs to calculate thermal performance. ASHRAE Guideline 14-2014 provides performance index criteria for model calibration. The performance indices used are normalized mean bias error (NMBE), coefficient of variation (CV) of the root mean square error (RMSE), and R2 (coefficient of determination). ASHRAE recommends an R2 greater than 0.75 for calibrated models. The criteria for NMBE and CV(RMSE) depend on whether measured data are available at a monthly or hourly timescale.
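A minimal sketch of how these calibration indices can be computed from paired measured and simulated values is shown below (Python). It uses one common formulation in which errors are normalized by the mean of the measured data and degrees-of-freedom corrections are omitted; the sample numbers are invented.

```python
import math

def calibration_indices(measured, simulated):
    """Return (NMBE %, CV(RMSE) %, R^2) for paired measured/simulated series.

    Simplified formulation: errors are normalized by the mean of the
    measured data, and the (n - p) degrees-of-freedom correction used in
    some guidelines is omitted.
    """
    n = len(measured)
    mean_m = sum(measured) / n
    errors = [m - s for m, s in zip(measured, simulated)]

    nmbe = 100.0 * sum(errors) / (n * mean_m)
    cv_rmse = 100.0 * math.sqrt(sum(e * e for e in errors) / n) / mean_m

    ss_res = sum(e * e for e in errors)
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    r2 = 1.0 - ss_res / ss_tot
    return nmbe, cv_rmse, r2

# Invented monthly energy use (kWh) for a twelve-month calibration period:
measured = [120, 110, 95, 80, 60, 55, 50, 52, 65, 85, 100, 115]
simulated = [118, 112, 98, 78, 63, 54, 52, 50, 68, 83, 104, 117]
nmbe, cv_rmse, r2 = calibration_indices(measured, simulated)
# The model would then be compared against the criteria above,
# e.g. R^2 > 0.75 and the monthly or hourly NMBE / CV(RMSE) limits.
```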
Technological aspects
Given the complexity of building energy and mass flows, it is generally not possible to find an analytical solution, so the simulation software employs other techniques, such as response function methods, or numerical methods based on finite differences or finite volumes, as an approximation. Most of today's whole-building simulation programs formulate models using imperative programming languages. These languages assign values to variables, declare the sequence of execution of these assignments and change the state of the program, as is done for example in C/C++, Fortran or MATLAB/Simulink. In such programs, model equations are tightly connected to the solution methods, often by making the solution procedure part of the actual model equations. The use of imperative programming languages limits the applicability and extensibility of models. Simulation engines that use symbolic differential algebraic equations (DAEs) with general-purpose solvers offer more flexibility, increasing model reuse, transparency and accuracy. Since some of these engines have been developed for more than 20 years (e.g. IDA ICE) and due to the key advantages of equation-based modeling, these simulation engines can be considered state-of-the-art technology.
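To illustrate the kind of numerical approximation mentioned above, the sketch below (Python) advances a single zone's air temperature with an explicit finite-difference step, using a lumped thermal capacitance and an overall heat-loss coefficient. This is a didactic single-node model, not how any particular whole-building engine is implemented, and all parameter values are invented.

```python
def simulate_zone_temperature(t_outdoor, gains_w, ua_w_per_k=250.0,
                              capacitance_j_per_k=5.0e7, dt_s=3600.0,
                              t_initial_c=20.0):
    """Explicit finite-difference integration of a single-node zone model.

    The underlying balance is  C * dT/dt = UA * (T_out - T_zone) + Q_gains,
    discretized with a forward (Euler) step of length dt_s seconds.
    """
    t_zone = t_initial_c
    temperatures = []
    for t_out, q_gains in zip(t_outdoor, gains_w):
        dT = (ua_w_per_k * (t_out - t_zone) + q_gains) * dt_s / capacitance_j_per_k
        t_zone += dT
        temperatures.append(t_zone)
    return temperatures

# Invented six-hour example: hourly outdoor temperature (deg C) and
# internal gains (W) from lights, occupants and equipment.
outdoor = [0.0, -1.0, -2.0, -1.0, 1.0, 3.0]
gains = [500, 500, 800, 1200, 1200, 800]
zone_temperatures = simulate_zone_temperature(outdoor, gains)
```

Whole-building engines solve far larger coupled systems of this kind, often with implicit or variable-timestep solvers for numerical stability.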
Applications
Building simulation models may be developed for both new and existing buildings. Major use categories of building performance simulation include:
Architectural Design: quantitatively compare design or retrofit options in order to inform a more energy-efficient building design
HVAC Design: calculate thermal loads for sizing of mechanical equipment and help design and test system control strategies
Building Performance Rating: demonstrate performance-based compliance with energy codes, green certification, and financial incentives
Building Stock Analysis: support development of energy codes and standards and plan large scale energy efficiency programs
CFD in buildings: simulation of boundary conditions like surface heat fluxes and surface temperatures for a following CFD study of the situation
Software tools
There are hundreds of software tools available for simulating the performance of buildings and building subsystems, which range in capability from whole-building simulations to model input calibration to building auditing. Among whole-building simulation software tools, it is important to draw a distinction between the simulation engine, which dynamically solves equations rooted in thermodynamics and building science, and the modeler application (interface).
In general, BPS software can be classified into
Applications with integrated simulation engine (e.g. EnergyPlus, ESP-r, TAS, IES-VE, IDA ICE)
Software that docks to a certain engine (e.g. Designbuilder, eQuest, RIUSKA, Sefaira)
Plugins for other software enabling certain performance analysis (e.g. DIVA for Rhino, Honeybee, Autodesk Green Building Studio)
In practice, some tools do not fit these sharp classification criteria; for example, ESP-r can also be used as a modeler application for EnergyPlus, and there are other applications using the IDA simulation environment, which makes "IDA" the engine and "ICE" the modeler. Most modeler applications support the user with a graphical user interface to make data input easier. The modeler creates an input file for the simulation engine to solve. The engine returns output data to the modeler application or another visualization tool, which in turn presents the results to the user. For some software packages, the calculation engine and the interface may be the same product. The table below gives an overview of commonly used simulation engines and modeler applications for BPS.
BPS in practice
Since the 1990s, building performance simulation has undergone the transition from a method used mainly for research to a design tool for mainstream industrial projects. However, its utilization in different countries still varies greatly. Building certification programs like LEED (USA), BREEAM (UK) or DGNB (Germany) have proven to be a good driving force for broader application of BPS. National building standards that allow BPS-based analysis also help increase industrial adoption, such as in the United States (ASHRAE 90.1), Sweden (BBR), Switzerland (SIA) and the United Kingdom (NCM).
The Swedish building regulations are unique in that computed energy use has to be verified by measurements within the first two years of building operation. Since this requirement was introduced in 2007, experience shows that modelers prefer highly detailed simulation models to reliably achieve the required level of accuracy. Furthermore, this has fostered a simulation culture in which design predictions are close to the actual performance. This in turn has led to offers of formal energy guarantees based on simulated predictions, highlighting the general business potential of BPS.
Performance-based compliance
In a performance-based approach, compliance with building codes or standards is based on the predicted energy use from a building simulation, rather than a prescriptive approach, which requires adherence to stipulated technologies or design features. Performance-based compliance provides greater flexibility in the building design, as it allows designers to forgo some prescriptive requirements if the impact on building performance can be offset by exceeding other prescriptive requirements. The certifying agency provides details on model inputs, software specifications, and performance requirements.
The following is a list of U.S. based energy codes and standards that reference building simulations to demonstrate compliance:
ASHRAE 90.1
International Energy Conservation Code (IECC)
Leadership in Energy and Environmental Design (LEED)
Green Globes
California Title 24
EnergyStar Multifamily High rise Program
Passive House Institute US (PHIUS)
Living Building Challenge
Professional associations and certifications
Professional associations
International Building Performance Simulation Association (IBPSA)
American Society of Heating, Refrigerating, and Air-conditioning Engineers (ASHRAE)
Certifications
BEMP - Building Energy Modeling Professional, administered by ASHRAE
BESA - Certified Building Energy Simulation Analyst, administered by AEE
See also
Energy modeling
Computer simulation
References
External links
Bldg-sim mailing list for building simulation professionals: http://lists.onebuilding.org/listinfo.cgi/bldg-sim-onebuilding.org
Simulation modeling instruction and discussion: http://energy-models.com/forum
Architecture
Building engineering
Energy conservation
Low-energy building |
3362737 | https://en.wikipedia.org/wiki/Jon%20Hare | Jon Hare | Jon "Jops" Hare (born 20 January 1966, Ilford, Essex, England) is an English computer game designer, video game artist, musician and one of many founder members of the early UK games industry as co-founder and director, along with Chris Yates, of Sensible Software, one of the most successful European games development companies of the late 1980s and 1990s.
Hare has the unique distinction of a number-one football game in each of four consecutive decades: MicroProse Soccer (1988), Sensible World of Soccer (1994), Sensible Soccer mobile (2004) and Sociable Soccer (2019), the last developed by Tower Studios, of which he has been co-founder and CEO since 2004.
Visiting Professor of Games at Anglia Ruskin University, Cambridge, since 2017, Hare has also been a voting member of BAFTA since 2004 and frequently chairs its Games Awards juries.
Biography
Gaming career
1980s
Following a year of working as a consultant games artist on various ZX81, ZX Spectrum and Commodore 64 games in 1985, Hare co-founded Sensible Software with school friend Chris Yates in 1986, working as co-designer and lead artist on all of Sensible's 8-bit era games, including Parallax, Wizball, MicroProse Soccer and SEUCK.
1990s
As Sensible Software moved into the 16-bit era in the 1990s Hare took a more active role in overseeing the business activities of the company while continuing his role as the lead designer, creative director, and predominant lead artist and musical composer of games such as Wizkid and the Sensible Soccer series, the Cannon Fodder series and Mega Lo Mania, some of the most popular software franchises of the mid-1990s. Hare and Yates sold Sensible Software to Codemasters in May 1999.
2000s
Since the sale of Sensible Software to Codemasters in 1999, Hare has worked as a consultant designer on many games, including numerous strategy, action and sports titles such as Real World Golf and Sensible Soccer 2006. Hare is also a co-founder and owner of games company Tower Studios; founded in 2004 with two former Bitmap Brothers, it has developed a number of successful titles, including mobile phone versions of Cannon Fodder and Sensible Soccer.
Hare has been a voting member of BAFTA across all media since 2004 and works periodically for BAFTA as both a juror and a mentor.
In 2006, Hare contributed a weekly politics feature to UK video game radio show One Life Left.
Hare then became a director of development at Nikitova Games, a games developer with offices in Chicago, Los Angeles and London, and development studios in Kyiv, Ukraine. They worked on several projects for Nintendo DS and Wii, such as Showtime Championship Boxing and the as-yet-unreleased CCTV.
In July 2009, Hare joined Jagex (makers of browser-based MMORPG RuneScape and casual gaming website FunOrb) as their Head of Publishing.
2010s & 2020s
In January 2010, Hare announced the launch of a new independent online games publisher, Me-Stars, a games network for browser and iPhone platforms. All Me-Stars games were to feature in-game Me-Stars to pick up, win and redeem, as well as interactive high scores and friends lists appearing inside each game during gameplay, depicted by the animated photo-realistic heads found in Me-Motes Messenger (released January 2010). However, the Me-Stars games network was never launched; instead, the effort became a relaunch of Tower Studios as a publisher, on many mobile and online downloadable platforms, of classic licensed games from the 1980s and 1990s as well as new designs from Hare and other game developers. Tower's most successful title as a publisher to date has been Speedball 2 Evolution, a remake of The Bitmap Brothers' classic game, which topped the iPad and Mac charts across Europe in 2011 and was followed by Speedball 2 HD on PC in 2013.
In 2014, Hare announced the imminent release of Word Explorer, his first original game in 20 years, developed in collaboration with the award-winning Polish development team Vivid Games and published through a number of publishing partners, including Mastertronic and Big Fish.
Hare had a six-year tenure as visiting lecturer in "Professional Practice in Games Development" at the University of Westminster, London, from 2011 to 2016, and has given numerous national and international lectures on games design, business and his career at universities across Europe, from Cambridge to Istanbul, Copenhagen and Stockholm. Following this grounding in higher education, in 2014 he launched, in collaboration with Professor Carsten Maple, a network of UK games industry courses and games companies known as B.U.G.S. (Business and University Games Syndicate). The launch event of BUGS at BAFTA featured numerous talks from games industry bodies and endorsements from Ian Livingstone and Ed Vaizey MP, then government minister for culture. The function of BUGS is to vet and host links to the completed and published games of students from all BUGS universities (approximately 30% of all UK games students) and to make these games accessible to all games industry companies signed to BUGS (approximately 35% of all UK games companies by employee numbers), helping companies identify the top games students in the UK and giving students industry-oriented objectives during their studies.
Since 2017, Hare has been Visiting Professor of Games at Anglia Ruskin University in Cambridge.
In 2013, Sensible Software 1986–1999, a biographical book about Sensible Software featuring extensive interviews with Hare and numerous other Sensible members and games industry personalities, plus over 100 pages of artwork reproductions of much of Hare's earlier work as a game artist, was launched by independent book publisher ROM. Written by games journalist Gary Penn, it was the first and, to date, only book about the computer games industry to feature in the BAFTA library and archive in London.
In 2016, via exhibitions at the London Science Museum, Gamescom and numerous other European games events, Hare demonstrated the continuing development of a new football game, Sociable Soccer, developed in partnership with Finnish development team Combo Breaker for numerous PC, console, mobile and VR platforms, despite a cancelled Kickstarter for the game the previous year. Following a brief debut on Steam Early Access in 2017, Sociable Soccer went on to become one of the early titles on the Apple Arcade service in 2019, with annual updates for the same platform following in 2020 and 2021 and PC and console versions of the title announced for 2022.
Musicianship
Hare has been a prolific songwriter since 1982 and has featured in a number of bands over the years as a singer and guitarist, including Essex outfits Hamsterfish, Dark Globe and Touchstone, all of which also featured Chris Yates on lead guitar. Dark Globe was particularly important in the formation of the creative relationship between Hare and Yates prior to the founding of Sensible Software, and rehearsed in the house of Richard Ashrowan, one of Hare's closest friends since childhood. From 1990 onwards, Hare was also a frequent musical collaborator with Richard Joseph, another close friend, with whom he co-wrote and arranged all of Sensible Software's best-known musical tracks, including the soundtrack for Cannon Fodder, the GBA version of which was nominated for a BAFTA in 2000 and remains the only small-format soundtrack to be recognized by BAFTA. In 1995 Hare and Joseph embarked upon an epic 32-track soundtrack for the multimedia product Sex 'n' Drugs 'n' Rock 'n' Roll, signed to Warner Interactive; however, in 1998 Warner bowed out of the games market and their magnum opus was only ever released as a limited-edition audio CD. Since 2000 Hare has also written for and performed with a number of outfits, including the Little Big Band featuring Jack Monck and Sid 80s featuring Ben Daglish.
Hare is also known for writing the music for a number of Sensible Software's games, including Cannon Fodder, Sensible Soccer, Sensible Golf and the never released Sex 'n' Drugs 'n' Rock 'n' Roll, which featured over 30 tracks written and arranged by Hare and his frequent musical collaborator, Richard Joseph.
Games designed or co-designed
References
External links
Hare's profile on MobyGames
1966 births
Amiga people
British rock singers
British rock guitarists
British songwriters
British video game designers
Commodore 64 music
Living people
People from Ilford
Sensible Software
Video game artists
Video game composers |
53143191 | https://en.wikipedia.org/wiki/Unlicense | Unlicense | The Unlicense is a public domain equivalent license with a focus on an anti-copyright message. It was first published on January 1 (Public Domain Day), 2010. The Unlicense offers a public domain waiver text with a fall-back public-domain-like license, inspired by permissive licenses but without an attribution clause. In 2015, GitHub reported that approximately 102,000 of their 5.1 million licensed projects (2% of licensed projects on GitHub.com) use the Unlicense.
History
In a post published on January 1 (Public Domain Day), 2010, Arto Bendiken outlined his reasons for preferring public domain software, namely: the nuisance of dealing with licensing terms (for instance license incompatibility), the threat inherent in copyright law, and the impracticability of copyright law.
On January 23, 2010, Bendiken followed up on his initial post. In this post, he explained that the Unlicense is based on the copyright waiver of SQLite with the no-warranty statement from the MIT License. He then walked through the license, commenting on each part.
In a post published in December 2010, Bendiken further clarified what it means to "license" and "unlicense" software.
On January 1, 2011, Bendiken reviewed the progress and adoption of the Unlicense. He admits that it is "difficult to give estimates of current Unlicense adoption" but suggests there are "many hundreds of projects using the Unlicense".
In January 2012, when discussed on OSI's license-review mailing list, the Unlicense was brushed off as a crayon license. A request for legacy approval was filed in March 2020, which led to a formal approval in June 2020.
License terms
The license terms of the Unlicense are as follows:
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org/>
Reception
The Free Software Foundation states that "Both public domain works and the lax license provided by the Unlicense are compatible with the GNU GPL." However, for dedicating software to the public domain it recommends CC0 over the Unlicense, stating that CC0 "is more thorough and mature than the Unlicense".
The Fedora Project recommends CC0 over the Unlicense because the former is "a more comprehensive legal text".
Google does not allow contributions to projects under public domain equivalent licenses like the Unlicense (and CC0), while allowing contributions to 0BSD licensed and US government PD projects.
In December 2010, Mike Linksvayer, the vice president of Creative Commons at the time, wrote in an identi.ca conversation "I like the movement" in speaking of the Unlicense effort.
The Unlicense has been criticized, for instance by the OSI, for being possibly inconsistent and non-standard, and for making it difficult for some projects to accept Unlicensed code as third-party contributions; leaving too much room for interpretation; and possibly being incoherent in some legal systems.
Notable projects that use the Unlicense include youtube-dl, Second Reality, and the Gloom source code.
See also
CC0
WTFPL
Comparison of free and open-source software licenses
References
External links
Official mailing list
Software licenses
Public-domain software |
700265 | https://en.wikipedia.org/wiki/Video%20game%20development | Video game development | Video game development is the process of developing a video game. The effort is undertaken by a developer, ranging from a single person to an international team dispersed across the globe. Development of traditional commercial PC and console games is normally funded by a publisher, and can take several years to reach completion. Indie games usually take less time and money and can be produced by individuals and smaller developers. The independent game industry has been on the rise, facilitated by the growth of accessible game development software such as Unity platform and Unreal Engine and new online distribution systems such as Steam and Uplay, as well as the mobile game market for Android and iOS devices.
The first video games, developed in the 1960s, were not usually commercialised. They required mainframe computers to run and were not available to the general public. Commercial game development began in the '70s with the advent of first-generation video game consoles and early home computers like the Apple I. At that time, owing to the low costs and low capabilities of computers, a lone programmer could develop a full and complete game. However, in the late '80s and '90s, ever-increasing computer processing power and heightened expectations from gamers made it difficult for a single person to produce a mainstream console or PC game. The average cost of producing a triple-A video game slowly rose, from US$1–4 million in 2000, to over $5 million in 2006, then to over $20 million by 2010.
Mainstream commercial PC and console games are generally developed in phases: first, in pre-production, pitches, prototypes, and game design documents are written; if the idea is approved and the developer receives funding, then full-scale development begins. The development of a complete game usually involves a team of 20–100 individuals with various responsibilities, including designers, artists, programmers, and testers.
Overview
Games are produced through the software development process. Games are developed as a creative outlet and to generate profit. Game making is considered both art and science. Development is normally funded by a publisher. Well-made games bring profit more readily. However, it is important to estimate a game's financial requirements, such as the development costs of individual features. Failing to understand the cost implications of a game's expected features may result in exceeding the allocated budget. In fact, the majority of commercial games do not produce profit. Most developers cannot afford to change their development schedule mid-way, and need to estimate their capabilities with available resources before production.
The game industry requires innovation, as publishers cannot profit from the constant release of repetitive sequels and imitations. Every year new independent development companies open, and some manage to develop hit titles. Similarly, many developers close down because they cannot find a publishing contract or because their production is not profitable. It is difficult to start a new company due to the high initial investment required. Nevertheless, growth of the casual and mobile game market has allowed developers with smaller teams to enter the market. Once companies become financially stable, they may expand to develop larger games. Most developers start small and gradually expand their business. A developer receiving profit from a successful title may store up capital to expand and re-factor their company, as well as tolerate more failed deadlines.
An average development budget for a multiplatform game is US$18–28 million, with high-profile games often exceeding $40 million.
In the early era of home computers and video game consoles in the early 1980s, a single programmer could handle almost all the tasks of developing a game — programming, graphical design, sound effects, etc. It could take as little as six weeks to develop a game. However, the high user expectations and requirements of modern commercial games far exceed the capabilities of a single developer and require the splitting of responsibilities. A team of over a hundred people can be employed full-time for a single project.
Game development, production, or design is a process that starts from an idea or concept. Often the idea is based on a modification of an existing game concept. The game idea may fall within one or several genres. Designers often experiment with different combinations of genres. A game designer generally writes an initial game proposal document that describes the basic concept, gameplay, feature list, setting and story, target audience, requirements and schedule, and finally staff and budget estimates. Different companies have different formal procedures and philosophies regarding game design and development. There is no standardized development method; however, commonalities exist.
A game developer may range from a single individual to a large multinational company. There are both independent and publisher-owned studios. Independent developers rely on financial support from a game publisher. They usually have to develop a game from concept to prototype without external funding. The formal game proposal is then submitted to publishers, who may finance the game development from several months to years. The publisher would retain exclusive rights to distribute and market the game and would often own the intellectual property rights for the game franchise. Publisher's company may also own the developer's company, or it may have internal development studio(s). Generally the publisher is the one who owns the game's intellectual property rights.
All but the smallest developer companies work on several titles at once. This is necessary because of the time taken between shipping a game and receiving royalty payments, which may be between 6 and 18 months. Small companies may structure contracts, ask for advances on royalties, use shareware distribution, employ part-time workers and use other methods to meet payroll demands.
Console manufacturers, such as Microsoft, Nintendo, or Sony, have a standard set of technical requirements that a game must conform to in order to be approved. Additionally, the game concept must be approved by the manufacturer, who may refuse to approve certain titles.
Most modern PC or console games take from three to five years to complete, whereas a mobile game can be developed in a few months. The length of development is influenced by a number of factors, such as genre, scale, development platform and number of assets.
Some games can take much longer than the average time frame to complete. An infamous example is 3D Realms' Duke Nukem Forever, announced to be in production in April 1997 and released fourteen years later in June 2011. Planning for Maxis' game Spore began in late 1999; the game was released nine years later in September 2008. The game Prey was briefly profiled in a 1997 issue of PC Gamer, but was not released until 2006, and only then in highly altered form. Finally, Team Fortress 2 was in development from 1998 until its 2007 release, and emerged from a convoluted development process involving "probably three or four different games", according to Gabe Newell.
Game revenue from retail is divided among the parties along the distribution chain: developer, publisher, retailer, manufacturer and console royalty holder. Many developers fail to profit from this and go bankrupt. Many developers seek alternative economic models through Internet marketing and distribution channels to improve returns, as through a mobile distribution channel a developer's share can be up to 70% of total revenue, and through an online distribution channel owned by the developer almost 100%.
History
The history of game making begins with the development of the first video games, although which video game is the first depends on the definition of video game. The first games created had little entertainment value, and their development focus was separate from user experience—in fact, these games required mainframe computers to play them. OXO, written by Alexander S. Douglas in 1952, was the first computer game to use a digital display. In 1958, a game called Tennis for Two, which displayed its output on an oscilloscope, was made by Willy Higinbotham, a physicist working at the Brookhaven National Laboratory. In 1961, a mainframe computer game called Spacewar! was developed by a group of Massachusetts Institute of Technology students led by Steve Russell.
True commercial design and development of games began in the 1970s, when arcade video games and first-generation consoles were marketed. In 1971, Computer Space was the first commercially sold, coin-operated video game. It used a black-and-white television for its display, and the computer system was made of 74 series TTL chips. In 1972, the first home console system was released called Magnavox Odyssey, developed by Ralph H. Baer. That same year, Atari released Pong, an arcade game that increased video game popularity. The commercial success of Pong led other companies to develop Pong clones, spawning the video game industry.
Programmers worked within the big companies to produce games for these devices. The industry did not see huge innovation in game design and a large number of consoles had very similar games. Many of these early games were often Pong clones. Some games were different, however, such as Gun Fight, which was significant for several reasons: an early 1975 on-foot, multi-directional shooter, which depicted game characters, game violence, and human-to-human combat. Tomohiro Nishikado's original version was based on discrete logic, which Dave Nutting adapted using the Intel 8080, making it the first video game to use a microprocessor. Console manufacturers soon started to produce consoles that were able to play independently developed games, and ran on microprocessors, marking the beginning of second-generation consoles, beginning with the release of the Fairchild Channel F in 1976.
The flood of Pong clones led to the video game crash of 1977, which eventually came to an end with the mainstream success of Taito's 1978 arcade shooter game Space Invaders, marking the beginning of the golden age of arcade video games and inspiring dozens of manufacturers to enter the market. Its creator Nishikado not only designed and programmed the game, but also did the artwork, engineered the arcade hardware, and put together a microcomputer from scratch. It was soon ported to the Atari 2600, becoming the first "killer app" and quadrupling the console's sales. At the same time, home computers appeared on the market, allowing individual programmers and hobbyists to develop games. This allowed hardware manufacturers and software manufacturers to act separately. A very large number of games could be produced by an individual, as games were easy to make because graphical and memory limitations did not allow for much content. Larger companies also emerged, focusing selected teams on a single title. The developers of many early home video games, such as Zork, Baseball, Air Warrior, and Adventure, later turned their work into products of the early video game industry.
The industry expanded significantly at the time, with the arcade video game sector alone (representing the largest share of the gaming industry) generating higher revenues than both pop music and Hollywood films combined. The home video game industry, however, suffered major losses following the video game crash of 1983. In 1984 Jon Freeman warned in Computer Gaming World:
Chris Crawford and Don Daglow in 1987 similarly advised prospective designers to write games as a hobby first, and to not quit their existing jobs early. The home video game industry was revitalized soon after by the widespread success of the Nintendo Entertainment System.
Compute!'s Gazette in 1986 stated that although individuals developed most early video games, "It's impossible for one person to have the multiple talents necessary to create a good game". By 1987 a video game required 12 months to develop and another six to plan marketing. Projects remained usually solo efforts, with single developers delivering finished games to their publishers. With the ever-increasing processing and graphical capabilities of arcade, console and computer products, along with an increase in user expectations, game design moved beyond the scope of a single developer to produce a marketable game. The Gazette stated, "The process of writing a game involves coming up with an original, entertaining concept, having the skill to bring it to fruition through good, efficient programming, and also being a fairly respectable artist". This sparked the beginning of team-based development. In broad terms, during the 1980s, pre-production involved sketches and test routines of the only developer. In the 1990s, pre-production consisted mostly of game art previews. In the early 2000s, pre-production usually produced a playable demo.
In 2000 a 12- to 36-month development project was funded by a publisher for US$1M–3M. Additionally, $250k–1.5M were spent on marketing and sales development. In 2001, over 3,000 games were released for PC; of the roughly 100 games that turned a profit, only about 50 made a significant profit. In the early 2000s it became increasingly common to use middleware game engines, such as the Quake engine or Unreal engine.
In the early 2000s, mobile games also started to gain popularity. However, mobile games distributed by mobile operators remained a marginal form of gaming until the Apple App Store was launched in 2008.
In 2005, a mainstream console video game cost from US$3M to $6M to develop. Some games cost as much as $20M to develop. In 2006 the profit from a console game sold at retail was divided among the parties of the distribution chain as follows: developer (13%), publisher (32%), retail (32%), manufacturer (5%), console royalty (18%). In 2008 a developer would retain around 17% of the retail price and around 85% if sold online.
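As a rough worked example of the 2006 retail split quoted above, applied to a hypothetical US$50 retail price (the price is an assumption, not from the source):

```python
retail_price = 50.00          # hypothetical retail price in US$
shares = {                    # 2006 retail split quoted above
    "developer": 0.13,
    "publisher": 0.32,
    "retail": 0.32,
    "manufacturer": 0.05,
    "console royalty": 0.18,
}
for party, share in shares.items():
    print(f"{party:16s} ${share * retail_price:5.2f}")
# Developer share: $6.50 of a $50 retail sale, versus roughly $42.50
# if the same title were sold online at the ~85% share quoted for 2008.
```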
Since the third generation of consoles, the home video game industry has constantly increased and expanded. The industry revenue has increased at least five-fold since the 1990s. In 2007, the software portion of video game revenue was $9.5 billion, exceeding that of the movie industry.
The Apple App Store, introduced in 2008, was the first mobile application store operated directly by the mobile platform holder. It significantly changed consumer behaviour in favour of downloading mobile content and quickly broadened the market for mobile games.
In 2009 the games market's annual value was estimated at $7–30 billion, depending on which sales figures are included. This is on par with the box office market for films. A publisher would typically fund an independent developer for $500k–$5M for the development of a title. In 2012, the total value had already reached $66.3 billion and by then the video game markets were no longer dominated by console games. According to Newzoo, the share of MMOs was 19.8%, PC/Mac 9.8%, tablets 3.2%, smartphones 10.6%, handhelds 9.8%, consoles only 36.7% and online casual games 10.2%. The fastest-growing market segments were mobile games, with an average annual growth rate of 19% for smartphones and 48% for tablets.
In the past several years, many developers opened and many closed down. Each year a number of developers are acquired by larger companies or merge with existing companies. For example, in 2007 Blizzard Entertainment's parent company, Vivendi Games merged with Activision. In 2008 Electronic Arts nearly acquired Take-Two Interactive. In 2009 Midway Games was acquired by Time-Warner and Eidos Interactive merged with Square Enix.
Roles
Producer
Development is overseen by internal and external producers. The producer working for the developer is known as the internal producer and manages the development team, schedules, reports progress, hires and assigns staff, and so on. The producer working for the publisher is known as the external producer and oversees developer progress and budget. Producer's responsibilities include PR, contract negotiation, liaising between the staff and stakeholders, schedule and budget maintenance, quality assurance, beta test management, and localization. This role may also be referred to as project manager, project lead, or director.
Publisher
A video game publisher is a company that publishes video games that they have either developed internally or have had developed by an external video game developer. As with book publishers or publishers of DVD movies, video game publishers are responsible for their product's manufacturing and marketing, including market research and all aspects of advertising.
They usually finance the development, sometimes by paying a video game developer (the publisher calls this external development) and sometimes by paying an internal staff of developers called a studio. Consequently, they also typically own the IP of the game. Large video game publishers also distribute the games they publish, while some smaller publishers instead hire distribution companies (or larger video game publishers) to distribute the games they publish.
Other functions usually performed by the publisher include deciding on and paying for any license that the game may utilize; paying for localization; layout, printing, and possibly the writing of the user manual; and the creation of graphic design elements such as the box design.
Large publishers may also attempt to boost efficiency across all internal and external development teams by providing services such as sound design and code packages for commonly needed functionality.
Because the publisher usually finances development, it usually tries to manage development risk with a staff of producers or project managers to monitor the progress of the developer, critique ongoing development, and assist as necessary. Most video games created by an external video game developer are paid for with periodic advances on royalties. These advances are paid when the developer reaches certain stages of development, called milestones.
Independent video game developers create games without a publisher and may choose to digitally distribute their games.
Development team
Development teams can range in size from small groups making casual games to large studios housing hundreds of employees and producing several large titles. Companies divide the subtasks of a game's development. Individual job titles may vary; however, the roles are the same within the industry. The development team consists of several members. Some members of the team may handle more than one role; similarly, more than one task may be handled by the same member. Team size can vary from 3 to 100 or more members, depending on the game's scope. The most represented are artists, followed by programmers, then designers, and finally audio specialists, with one to three producers in management. Many teams also include a dedicated writer with expertise in video game writing. These positions are employed full-time. Other positions, such as testers, may be employed only part-time. Use of contractors for art, programming, and writing is standard within the industry. Salaries for these positions vary depending on both the experience and the location of the employee.
A development team includes these roles or disciplines:
Designer
A game designer is a person who designs gameplay, conceiving and designing the rules and structure of a game. Development teams usually have a lead designer who coordinates the work of other designers. They are the main visionary of the game. One of the roles of a designer is being a writer, often employed part-time to conceive game's narrative, dialogue, commentary, cutscene narrative, journals, video game packaging content, hint system, etc. In larger projects, there are often separate designers for various parts of the game, such as, game mechanics, user interface, characters, dialogue, graphics, etc.
Artist
A game artist is a visual artist who creates video game art. The art production is usually overseen by an art director or art lead, making sure their vision is followed. The art director manages the art team, scheduling and coordinating within the development team.
The artist's job may be 2D oriented or 3D oriented. 2D artists may produce concept art, sprites, textures, environmental backdrops or terrain images, and user interface. 3D artists may produce models or meshes, animation, 3D environment, and cinematics. Artists sometimes occupy both roles.
Programmer
A game programmer is a software engineer who primarily develops video games or related software (such as game development tools). The game's codebase development is handled by programmers. There are usually one to several lead programmers, who implement the game's starting codebase and overview future development and programmer allocation on individual modules. An entry-level programmer can make, on average, around $70,000 annually and an experienced programmer can make, on average, around $125,000 annually.
Individual programming disciplines roles include:
Physics – the programming of the game engine, including simulating physics, collision, object movement, etc.;
AI – producing computer agents using game AI techniques, such as scripting, planning, rule-based decisions, etc.
Graphics – the managing of graphical content utilization and memory considerations; the production of graphics engine, integration of models, textures to work along the physics engine.
Sound – integration of music, speech, effect sounds into the proper locations and times.
Gameplay – implementation of various games rules and features (sometimes called a generalist);
Scripting – development and maintenance of high-level command system for various in-game tasks, such as AI, level editor triggers, etc.
UI – production of user interface elements, like option menus, HUDs, help and feedback systems, etc.
Input processing – processing and compatibility correlation of various input devices, such as keyboard, mouse, gamepad, etc.
Network communications – the managing of data inputs and outputs for local and internet gameplay.
Game tools – the production of tools to accompany the development of the game, especially for designers and scripters.
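Many of the disciplines above meet in a game's main loop. The following is a minimal, engine-agnostic sketch (in Python, with all names invented) of a fixed-timestep update/render loop of the kind that gameplay, physics, AI and rendering code is commonly organized around; production engines are far more elaborate.

```python
import time

TIMESTEP = 1.0 / 60.0      # fixed simulation step (60 Hz), a common choice

class Game:
    """Placeholder game object; real engines split these responsibilities
    across input, physics, AI, rendering and audio subsystems."""
    running = True         # would be set to False in response to a quit event

    def process_input(self):
        pass               # poll keyboard / gamepad, translate into commands

    def update(self, dt):
        pass               # advance physics, AI and gameplay rules by dt seconds

    def render(self):
        pass               # draw the current state of the world

def main_loop(game):
    previous = time.perf_counter()
    accumulator = 0.0
    while game.running:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now

        game.process_input()
        # consume elapsed time in fixed steps so the simulation stays deterministic
        while accumulator >= TIMESTEP:
            game.update(TIMESTEP)
            accumulator -= TIMESTEP
        game.render()
```

A fixed simulation step keeps physics and gameplay behaviour consistent even when rendering speed varies, which is one common reason this structure is used.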
Level designer
A level designer is a person who creates levels, challenges or missions for video games using a specific set of programs. These programs may be commonly available commercial 3D or 2D design programs, or specially designed and tailored level editors made for a specific game.
Level designers work with both incomplete and complete versions of the game. Game programmers usually produce level editors and design tools for the designers to use. This eliminates the need for designers to access or modify game code. Level editors may involve custom high-level scripting languages for interactive environments or AIs. As opposed to the level editing tools sometimes available to the community, level designers often work with placeholders and prototypes aiming for consistency and clear layout before required artwork is completed.
Sound engineer
Sound engineers are technical professionals responsible for sound effects and sound positioning. They are sometimes involved in creating haptic feedback, as was the case with the Returnal game sound team at PlayStation Studios Creative Arts' in London.
They sometimes oversee voice acting and other sound asset creation. Composers who create a game's musical score also comprise a game's sound team, though often this work is outsourced.
Tester
The quality assurance is carried out by game testers. A game tester analyzes video games to document software defects as part of a quality control. Testing is a highly technical field requiring computing expertise, and analytic competence.
The testers ensure that the game falls within the proposed design: it both works and is entertaining. This involves testing of all features, compatibility, localization, etc. Although necessary throughout the whole development process, testing is expensive and is often actively utilized only towards the completion of the project.
Development process
Game development is a software development process, as a video game is software with art, audio, and gameplay. Formal software development methods are often overlooked. Games with poor development methodology are likely to run over budget and time estimates, as well as contain a large number of bugs. Planning is important for individual and group projects alike.
Overall game development is not suited for typical software life cycle methods, such as the waterfall model.
One method employed for game development is agile development. It is based on iterative prototyping, a subset of software prototyping. Agile development depends on feedback and refinement of game's iterations with gradually increasing feature set. This method is effective because most projects do not start with a clear requirement outline. A popular method of agile software development is Scrum.
Another successful method is Personal Software Process (PSP) requiring additional training for staff to increase awareness of project's planning. This method is more expensive and requires commitment of team members. PSP can be extended to Team Software Process, where the whole team is self-directing.
Game development usually involves an overlap of these methods. For example, asset creation may be done via waterfall model, because requirements and specification are clear, but gameplay design might be done using iterative prototyping.
Development of a commercial game usually includes the following stages:
Pre-production
Pre-production or design phase is a planning phase of the project focused on idea and concept development and production of initial design documents. The goal of concept development is to produce clear and easy to understand documentation, which describes all the tasks, schedules and estimates for the development team. The suite of documents produced in this phase is called production plan. This phase is usually not funded by a publisher, however good publishers may require developers to produce plans during pre-production.
The concept documentation can be separated into three stages or documents—high concept, pitch and concept; however, there is no industry standard naming convention, for example, both Bethke (2003) and Bates (2004) refer to pitch document as "game proposal", yet Moore, Novak (2010) refers to concept document as "game proposal".
The late stage of pre-production may also be referred to as proof of concept, or technical review when more detailed game documents are produced.
Publishers have started to expect broader game proposals even featuring playable prototypes.
High concept
High concept is a brief description of a game. The high concept is the one- or two-sentence response to the question, "What is your game about?".
Pitch
A pitch, concept document, proposal document, or game proposal is a short summary document intended to present the game's selling points and detail why the game would be profitable to develop.
Verbal pitches may be made to management within the developer company, and then presented to publishers. A written document may need to be shown to publishers before funding is approved. A game proposal may undergo one to several green-light meetings with publisher executives who determine if the game is to be developed. The presentation of the project is often given by the game designers. Demos may be created for the pitch; however, they may be unnecessary for established developers with good track records.
If the developer acts as its own publisher, or both companies are subsidiaries of a single company, then only the upper management needs to give approval.
Concept
Concept document, game proposal, or game plan is a more detailed document than the pitch document. This includes all the information produced about the game. This includes the high concept, game's genre, gameplay description, features, setting, story, target audience, hardware platforms, estimated schedule, marketing analysis, team requirements, and risk analysis.
Before an approved design is completed, a skeleton crew of programmers and artists usually begins work. Programmers may develop quick-and-dirty prototypes showcasing one or more features that stakeholders would like to see incorporated in the final product. Artists may develop concept art and asset sketches as a springboard for developing real game assets. Producers may work part-time on the game at this point, scaling up to full-time commitment as development progresses. The game producer's work during pre-production is related to planning the schedule and budget and estimating tasks with the team. The producer aims to create a solid production plan so that no delays are experienced at the start of production.
Game design document
Before a full-scale production can begin, the development team produces the first version of a game design document incorporating all or most of the material from the initial pitch. The design document describes the game's concept and major gameplay elements in detail. It may also include preliminary sketches of various aspects of the game. The design document is sometimes accompanied by functional prototypes of some sections of the game. The design document remains a living document throughout the development—often changed weekly or even daily.
Compiling a list of game's needs is called "requirement capture".
Prototype
Writing prototypes of gameplay ideas and features is an important activity that allows programmers and game designers to experiment with different algorithms and usability scenarios for a game. A great deal of prototyping may take place during pre-production, before the design document is complete, and may in fact help determine what features the design specifies. Prototyping at this stage is often done manually (paper prototyping), not digitally, as this is often easier and faster for testing and making changes before wasting time and resources on what could be a canceled idea or project. Prototyping may also take place during active development to test new ideas as the game emerges.
Prototypes are often meant only to act as a proof of concept or to test ideas, by adding, modifying or removing some of the features. Most algorithms and features debuted in a prototype may be ported to the game once they have been completed.
Often prototypes need to be developed quickly with very little time for up-front design (around 15 to 20 minutes of testing). Therefore, usually very prolific programmers are called upon to quickly code these testbed tools. RAD tools may be used to aid in the quick development of these programs. If the prototype is in a physical form, programmers and designers alike will make the game with paper, dice, and other easy-to-access tools in order to build the prototype faster.
A successful development model is iterative prototyping, where the design is refined based on current progress. There are various technologies available for video game development.
Production
Production is the main stage of development, when assets and source code for the game are produced.
Mainstream production is usually defined as the period of time when the project is fully staffed. Programmers write new source code, artists develop game assets, such as, sprites or 3D models. Sound engineers develop sound effects and composers develop music for the game. Level designers create levels, and writers write dialogue for cutscenes and NPCs. Game designers continue to develop the game's design throughout production.
Design
Game design is an essential and collaborative process of designing the content and rules of a game, requiring artistic and technical competence as well as writing skills. Creativity and an open mind are vital for the completion of a successful video game.
During development, the game designer implements and modifies the game design to reflect the current vision of the game. Features and levels are often removed or added. The art treatment may evolve and the backstory may change. A new platform may be targeted as well as a new demographic. All these changes need to be documented and disseminated to the rest of the team. Most changes occur as updates to the design document.
Programming
The programming of the game is handled by one or more game programmers. They develop prototypes to test ideas, many of which may never make it into the final game. The programmers incorporate new features demanded by the game design and fix any bugs introduced during the development process. Even if an off-the-shelf game engine is used, a great deal of programming is required to customize almost every game.
Level creation
From a time standpoint, the game's first level takes the longest to develop. As level designers and artists use the tools for level building, they request features and changes to the in-house tools that allow for quicker and higher quality development. Newly introduced features may cause old levels to become obsolete, so the levels developed early on may be repeatedly developed and discarded. Because of the dynamic environment of game development, the design of early levels may also change over time. It is not uncommon to spend upwards of twelve months on one level of a game developed over the course of three years. Later levels can be developed much more quickly as the feature set is more complete and the game vision is clearer and more stable.
Art production
During development, artists make art assets according to specifications given by the designers. Early in production, concept artists make concept art to guide the artistic direction of the game, rough art is made for prototypes, and the designers work with artists to design the visual style and visual language of the game. As production goes on, more final art is made, and existing art is edited based on player feedback.
Audio production
Game audio may be separated into three categories—sound effects, music, and voice-over.
Sound effect production is the production of sounds by either tweaking a sample to a desired effect or replicating it with real objects. Sound effects include UI sound design, which effectively conveys information both for visible UI elements and as an auditory display. It provides sonic feedback for in-game interfaces, as well as contributing to the overall game aesthetic. Sound effects are important and impact the game's delivery.
Music may be synthesized or performed live.
There are four main ways in which music is presented in a game.
Music may be ambient, especially for slow periods of the game, where the music aims to reinforce the aesthetic mood and game setting.
Music may be triggered by in-game events. For example, in such games as Pac-Man or Mario, the player picking up power-ups triggers a corresponding musical cue.
Action music, used for chase, battle, or hunting sequences, is a fast-paced, hard-changing score.
Menu music, similar to credits music, creates aural impact while relatively little action is taking place.
A game title with 20 hours of single-player gameplay may feature around one hour of music.
Testing
Quality assurance of a video game product plays a significant role throughout the development cycle of a game, though it comes more significantly into play as the game nears completion. Unlike other software products or productivity applications, video games are fundamentally meant to entertain, and thus the testing of video games is focused more on the end-user experience than on the accuracy of the software code's performance, which leads to differences in how game software is developed.
Because game development is focused on the presentation and gameplay as seen by the player, there is often little rigor in maintaining and testing backend code in the early stages of development, since such code may be readily discarded if the gameplay changes. Some automated testing may be used to assure the core game engine operates as expected, but most game testing comes via game testers, who enter the testing process once a playable prototype is available. This may be one level or a subset of the game software that can be used to any reasonable extent. The use of testers may be lightweight at the early stages of development, but the testers' role becomes more predominant as the game nears completion, becoming a full-time role alongside development. Early testing is considered a key part of game design; the most common issue raised in several published post-mortems by game developers was the failure to start the testing process early.
As the code matures and the gameplay features solidify, development typically includes more rigorous test controls, such as regression testing, to make sure new updates to the code base do not break working parts of the game. Games are complex software systems, and changes in one code area may unexpectedly cause a seemingly unrelated part of the game to fail. Testers are tasked with repeatedly playing through updated versions of games in these later stages to look for any issues or bugs not otherwise found by automated testing. Because this can be a monotonous task of playing the same game over and over, the process can lead to games frequently being released with uncaught bugs or glitches.
There are other factors inherent to video games that can make testing difficult. These include the use of randomized gameplay systems, which require more testing for both game balance and bug tracking than more linear games; the balance of cost and time to devote to testing as part of the development budget; and assuring that the game still remains fun and entertaining to play as changes are made to it.
Despite the dangers of overlooking regression testing, some game developers and publishers fail to test the full feature suite of the game and ship a game with bugs. This can result in customer dissatisfaction and failure to meet sales goals. When this does happen, most developers and publishers quickly release patches that fix the bugs and make the game fully playable again. More recently, certain publishing models have been designed specifically to accommodate the fact that first releases of games may be bug-ridden but will be fixed post-release. The early access model invites players to pay into a game before its planned release and help to provide feedback and bug reports. Mobile games and games with live services are also anticipated to be updated on a frequent basis, offsetting pre-release testing with live feedback and bug reports.
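As an illustration of the kind of automated regression check described above, the following is a minimal sketch in Python. The game-logic function being tested (a hypothetical apply_damage routine) and its expected values are invented for the example and do not refer to any particular game or engine; a real project would import the function from its own codebase.

```python
import unittest

# Hypothetical game-logic function under test; in a real project this would
# be imported from the game's codebase rather than defined alongside the tests.
def apply_damage(health, damage, armor=0):
    """Return remaining health after damage, reduced by armor, never below zero."""
    effective = max(damage - armor, 0)
    return max(health - effective, 0)

class DamageRegressionTests(unittest.TestCase):
    """Regression tests: they encode behavior that later code changes must not break."""

    def test_basic_damage(self):
        self.assertEqual(apply_damage(100, 30), 70)

    def test_armor_reduces_damage(self):
        self.assertEqual(apply_damage(100, 30, armor=10), 80)

    def test_health_never_negative(self):
        self.assertEqual(apply_damage(10, 50), 0)

if __name__ == "__main__":
    unittest.main()
```

Running such a suite after every code change flags regressions automatically, leaving human testers free to concentrate on the gameplay issues that automation cannot catch.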
Milestones
Commercial game development projects may be required to meet milestones set by the publisher. Milestones mark major events during game development and are used to track the game's progress. Such milestones may be, for example, first playable, alpha, or beta game versions. Project milestones depend on the developer's schedule.
Milestones are usually based on multiple short descriptions of functionality; examples may be "Player roaming around in game environment" or "Physics working, collisions, vehicle" (numerous descriptions are possible). These milestones are usually how the developer gets paid, sometimes as an advance against royalty. Anywhere from three to twenty milestones may be listed, depending on the developer and publisher. The milestone list is usually a collaborative agreement between the publisher and developer. The developer usually advocates for keeping the milestone descriptions as simple as possible, though depending on the specific publisher, the milestone agreements may get very detailed for a specific game. When working with a good publisher, the "spirit of the law" is usually adhered to regarding milestone completion; in other words, if the milestone is 90% complete, it is usually paid with the understanding that it will be 100% complete by the next due milestone. It is a collaborative agreement between publisher and developer, and usually (but not always) the developer is constrained by heavy monthly development expenses that need to be met. Sometimes milestones are "swapped": the developer or publisher may mutually agree to amend the agreement and rearrange milestone goals depending on changing requirements and available development resources. Milestone agreements are usually included as part of the legal development contracts, with a payment arrangement after each milestone. Some very established developers may simply have a milestone agreement based on the amount of time the game is in development (monthly or quarterly) rather than on specific game functionality, but this is less common than detailed functionality milestone lists.
There is no industry standard for defining milestones, and they vary depending on publisher, year, or project. Some common milestones for a two-year development cycle are as follows:
First playable
The first playable is the game version containing representative gameplay and assets; it is the first version with functional major gameplay elements. It is often based on the prototype created in pre-production. Alpha and first playable are sometimes used to refer to a single milestone; however, large projects require a first playable before the feature-complete alpha. The first playable occurs 12 to 18 months before code release. It is sometimes referred to as the "Pre-Alpha" stage.
Alpha
Alpha is the stage when key gameplay functionality is implemented and assets are partially finished. A game in alpha is feature complete, that is, the game is playable and contains all the major features. These features may be further revised based on testing and feedback. Additional small, new features may be added; similarly, planned but unimplemented features may be dropped. Programmers focus mainly on finishing the codebase rather than implementing additions.
Code freeze
Code freeze is the stage when new code is no longer added to the game and only bugs are being corrected. Code freeze occurs three to four months before code release.
Beta
Beta is the feature- and asset-complete version of the game, when only bugs are being fixed. This version contains no bugs that prevent the game from being shippable. No changes are made to the game's features, assets, or code. Beta occurs two to three months before code release.
Code release
Code release is the stage when many bugs are fixed and the game is ready to be shipped or submitted for console manufacturer review. This version is tested against the QA test plan. The first code release candidate is usually ready three to four weeks before code release.
Gold master
The gold master is the game's final build, which is used as the master for production of the game.
Release schedules and "crunch time"
In most AAA game development, games are announced a year or more in advance and given a planned release date or approximate window so that publishers can promote and market the game, establish orders with retailers, and entice consumers to pre-order it. Delaying the release of a video game can have a negative financial impact on publishers and developers, and extensive delays may lead to project cancellation and employee layoffs. To assure a game makes a set release date, publishers and developers may require their employees to work overtime to complete the game, which is considered common in the industry. This overtime is often referred to as "crunch time" or "crunch mode". In 2004 and afterwards, the culture of crunch time in the industry came under scrutiny, leading many publishers and developers to reduce the expectation of overtime work and improve schedule management, though crunch time can still occur.
Post-production
After the game goes gold and ships, some developers will give team members comp time (perhaps up to a week or two) to compensate for the overtime put in to complete the game, though this compensation is not standard.
Maintenance
Once a game ships, the maintenance phase for the video game begins.
Games developed for video game consoles have had almost no maintenance period in the past. The shipped game would forever house as many bugs and features as when released. This was common for consoles since all consoles had identical or nearly identical hardware; making incompatibility, the cause of many bugs, a non-issue. In this case, maintenance would only occur in the case of a port, sequel, or enhanced remake that reuses a large portion of the engine and assets.
In recent times the popularity of online console games has grown, and online-capable video game consoles and online services, such as Xbox Live for the Xbox, have developed. Developers can maintain their software through downloadable patches. These changes would not have been possible in the past without the widespread availability of the Internet.
PC development is different. Game developers try to account for the majority of configurations and hardware. However, the number of possible configurations of hardware and software inevitably leads to the discovery of game-breaking circumstances that the programmers and testers did not account for.
Programmers wait for a period to get as many bug reports as possible. Once the developer thinks they have obtained enough feedback, the programmers start working on a patch. The patch may take weeks or months to develop, but it is intended to fix most documented bugs and problems with the game that were overlooked past code release, or, in rare cases, to fix unintended problems caused by previous patches. Occasionally a patch may include extra features or content or may even alter gameplay.
In the case of a massively multiplayer online game (MMOG), such as a MMORPG or MMORTS, the shipment of the game is the starting phase of maintenance. Such online games are in continuous maintenance as the gameworld is continuously changed and iterated and new features are added. The maintenance staff for a popular MMOG can number in the dozens, sometimes including members of the original programming team.
Outsourcing
Several development disciplines, such as audio, dialogue, or motion capture, occur for relatively short periods of time. Efficient employment of these roles requires either a large development house with multiple titles in simultaneous production or outsourcing to third-party vendors. Employing personnel for these tasks full-time is expensive, so a majority of developers outsource a portion of the work. Outsourcing plans are conceived during the pre-production stage, where the time and finances required for outsourced work are estimated.
The cost of music ranges based on the length of the composition, the method of performance (live or synthesized), and the composer's experience. In 2003, a minute of high-quality synthesized music cost between US$600 and US$1,500. A title with 20 hours of gameplay and 60 minutes of music may have cost US$50,000–60,000 for its musical score.
Voice acting is well-suited for outsourcing as it requires a set of specialized skills. Only large publishers employ in-house voice actors.
Sound effects can also be outsourced.
Programming is generally outsourced less than other disciplines, such as art or music. However, outsourcing for extra programming work or savings in salaries has become more common in recent years.
Marketing
Game production has distribution methods similar to those of the music and film industries.
The publisher's marketing team targets the game for a specific market and then advertises it. The team advises the developer on target demographics and market trends, as well as suggests specific features. The game is then advertised and the game's high concept is incorporated into the promotional material, ranging from magazine ads to TV spots. Communication between developer and marketing is important.
The length and content of a game demo depend on the purpose of the demo and its target audience. A demo may range from a few seconds (such as clips or screenshots) to hours of gameplay. The demo is usually intended for journalists, buyers, trade shows, the general public, or internal employees (who, for example, may need to familiarize themselves with the game to promote it). Demos are produced with public relations, marketing, and sales in mind, maximizing presentation effectiveness.
Trade show demo
As a game nears completion, the publisher will want to showcase a demo of the title at trade shows. Many games have a "Trade Show demo" scheduled.
The major annual trade shows include the Electronic Entertainment Expo (E3) and Penny Arcade Expo (PAX). E3 is the largest show in North America and is hosted primarily for marketing and business deals. New games and platforms are announced at E3, and it receives broad press coverage. Thousands of products are on display and press demonstration schedules are kept. In recent years E3 has become a more closed-door event and many advertisers have withdrawn, reducing E3's budget. PAX, created by the authors of the Penny Arcade blog and webcomic, is a mature and playful event with a player-centred philosophy.
Localization
A game created in one language may also be published in other countries which speak a different language. For those regions, the developers may want to translate the game to make it more accessible. For example, some games created for the PlayStation Vita, such as Soul Sacrifice, were initially published in Japanese. Non-native speakers of the game's original language may have to wait for the translation of the game into their language. Most modern big-budget games, however, take localization into account during the development process, and the games are released in several different languages simultaneously.
Localization is the process of translating the language assets in a game into other languages. Localizing games increases their accessibility and helps them expand effectively into international markets. Game localization is generally thought of as language translation, yet a "full localization" of a game is a complex project. The different levels of translation range from zero translation, in which nothing is translated and the product is shipped raw; to basic translation, in which only some text and subtitles are translated or added; to full translation, in which new voice-overs and changes to game material are added.
Essential elements of localizing a game range from translating the game's language to adjusting in-game assets for different cultures in order to reach more potential consumers in other geographies (globalization for short). Translation falls within the scope of localization, which is a substantially broader endeavor encompassing the different levels of translation as well as the globalization of the game itself. However, developers are divided on whether globalization falls under localization or not.
Moreover, in order to fit into local markets, game production companies often change or redesign the graphic designs or the packaging of the game for marketing purposes. For example, the popular game Assassin's Creed has two different packaging designs for the European and US markets. By localizing the graphics and packaging designs, companies can build better connections with, and attract more attention from, consumers in various regions.
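As a small illustration of what a "basic translation" level of localization can look like in practice, the following Python sketch swaps user-facing menu strings via a lookup table while leaving all other assets untouched. The string keys, phrases, and fallback behavior are invented for the example and are not tied to any particular game or localization toolchain.

```python
# Illustrative string table for a "basic translation" level of localization:
# only user-facing text is swapped; audio and other assets are left as-is.
STRINGS = {
    "en": {"new_game": "New Game", "options": "Options", "quit": "Quit"},
    "ja": {"new_game": "ニューゲーム", "options": "オプション", "quit": "終了"},
}

def translate(key, language="en"):
    """Look up a UI string, falling back to English if the translation is missing."""
    return STRINGS.get(language, {}).get(key, STRINGS["en"][key])

print(translate("new_game", "ja"))  # ニューゲーム
print(translate("options", "fr"))   # no French table, so falls back to "Options"
```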
Development costs
The costs of developing a video game vary widely depending on several factors, including team size, game genre and scope, and other factors such as intellectual property licensing costs. Most video game consoles also require development licensing costs, which include game development kits for building and testing software. Game budgets also typically include costs for marketing and promotion, which can be of the same order as the development budget.
Prior to the 1990s, game development budgets, when reported, typically were on the average of , with known outliers, such as the that Atari had paid to license the rights for E.T. the Extra-Terrestrial in addition to development costs. The adoption of technologies such as 3D hardware rendering and CD-ROM integration by the mid-1990s, enabling games with more visual fidelity compared to prior titles, caused developers and publishers to put more money into game budgets to flesh out narratives through cutscenes and full-motion video, creating the start of the AAA video game industry. Some of the most expensive titles to develop around this time, approaching costs typical of major motion picture production budgets, included Final Fantasy VII in 1997 with an estimated budget of , and Shenmue in 1999 with an estimated budget of . Final Fantasy VII, with its marketing budget, had a total estimated cost of .
Raph Koster, a video game designer and economist, evaluated published development budgets (less any marketing) for over 250 games in 2017 and reported that since the mid-1990s, there has been a type of Moore's law in game budgets, with the average budget doubling about every five years after accounting for inflation. Koster reported average budgets were around by 2017, and could reach over by the early 2020s. Koster asserts these trends are partially tied to the technological Moore's law that gave developers more computational power to work into their games, but also to expectations for content from players in newer games and the number of players games are expected to draw. Shawn Layden, former CEO of Sony Interactive Entertainment, affirmed that the costs for each generation of PlayStation consoles nearly doubled, with PlayStation 4 games having average budgets of and anticipating that PlayStation 5 games could reach .
The rising budgets of AAA games in the early 2000s led publishers to become risk-averse, sticking to titles that were most likely to be high-selling games to recoup their costs. As a result of this risk aversion, the selection of AAA games in the mid-2000s became rather similar, which gave indie games, with their more experimental and unique gameplay concepts, the opportunity to expand around that time.
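Koster's doubling observation can be expressed as simple exponential growth. The sketch below illustrates the arithmetic with a deliberately hypothetical starting budget; the dollar figure is not taken from Koster's data or from this article.

```python
def projected_budget(initial_budget, years, doubling_period=5.0):
    """Project a budget that doubles every `doubling_period` years (inflation-adjusted)."""
    return initial_budget * 2 ** (years / doubling_period)

# Hypothetical example: a $10 million budget doubling every five years
# grows roughly 4x over ten years and 16x over twenty.
for years in (5, 10, 20):
    print(years, round(projected_budget(10_000_000, years)))
```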
Indie development
Independent games or indie games are produced by individuals and small teams with no large-scale developer or publisher affiliations. Indie developers generally rely on Internet distribution schemes. Many hobbyist indie developers create mods of existing games. Indie developers are credited for creative game ideas (for example, Darwinia, Weird Worlds, World of Goo). The economic viability of indie development is questionable; however, in recent years internet delivery platforms such as Xbox Live Arcade and Steam have improved indie games' chances of success. Some indie games have become very successful, such as Braid, World of Goo, and Minecraft. In recent years many communities have emerged in support of indie games, such as the popular indie game marketplace Itch.io, indie game YouTube channels, and a large indie community on Steam. It is common for indie game developers to release games for free and generate revenue through other means, such as microtransactions (in-game transactions), in-game advertisements, and crowd-funding services like Patreon and Kickstarter.
Game industry
The video game industry (formally referred to as interactive entertainment) is the economic sector involved with the development, marketing and sale of video games. The industry sports several unique approaches.
Locales
United States
In the United States, in the early history of video game development, the prominent locale for game development was the corridor from San Francisco to Silicon Valley in California. Most new developers in the US open near such "hot beds".
At present, many large publishers still operate there, such as: Activision Blizzard, Capcom Entertainment, Disney Interactive, Eidos Interactive, Electronic Arts, Foundation 9, LucasArts Entertainment, Namco Bandai Games, Sega of America, Sony Computer Entertainment America, THQ. However, due to the nature of game development, many publishers are present in other regions, such as Big Fish Games (Washington), GarageGames (Oregon), Majesco Entertainment (New Jersey), Microsoft Corporation (Washington), Nintendo of America (Washington), Take-Two Interactive (New York), SouthPeak Games (Virginia).
Education
Many universities and design schools are offering classes specifically focused on game development. Some have built strategic alliances with major game development companies. These alliances ensure that students have access to the latest technologies and are provided the opportunity to find jobs within the gaming industry once qualified. Many innovative ideas are presented at conferences, such as Independent Games Festival (IGF) or Game Developers Conference (GDC).
Indie game development may motivate students who produce a game for their final project or thesis, and some go on to open their own game companies.
Stability
Video game industry employment is fairly volatile, similar to other artistic industries including television, music, etc. Scores of game development studios crop up, work on one game, and then quickly go under. This may be one reason why game developers tend to congregate geographically; if their current studio goes under, developers can flock to an adjacent one or start another from the ground up.
In an industry where only the top 20% of products make a profit, it's easy to understand this fluctuation. Numerous games may start development and are cancelled, or perhaps even completed but never published. Experienced game developers may work for years and yet never ship a title: such is the nature of the business.
See also
International Game Developers Association
List of video gaming topics
Open source video games
Software development process
Video game controversy
References
https://www.academia.edu/6639017/Challenges_in_video_game_localization_An_integrated_perspective
http://www.erudit.org/revue/meta/2012/v57/n2/1013949ar.html
Chandler, Heather M. The Game Localization Handbook. Charles River Media, 2004.
http://bytelevel.com/global/game_globalization.html (Q&A with the author)
http://www.jostrans.org/issue06/art_ohagan.php
External links
GameDev.net, a resource for game development
DevMaster.net, game development site
Gamasutra.com, articles on game development
Wikis
Game Development Wiki at Gamedev.net (discontinued and archived) |
4062207 | https://en.wikipedia.org/wiki/J.%20K.%20Greye%20Software | J. K. Greye Software | J.K. Greye Software was a British software company set up by J.K. Greye in early 1981 and joined six months later by Malcolm Evans, after the two met at a Bath Classical Guitar & Lute Society meeting in 1981. They produced computer games for the Sinclair ZX81 and ZX Spectrum home computers.
They struck gold with the revolutionary 3D Monster Maze, the first 3D game for a home computer, which John Greye suggested they produce after seeing a basic 3D Maze that Evans had programmed in Z80 Assembler. In the spring of 1982, Evans split up the company and founded his own company, New Generation Software (a name taken from an advertising slogan by J.K. Greye), which continued to produce games for the ZX Spectrum.
J.K. Greye set up J.K. Greye Enterprises Ltd, a separate company which split off around February–March 1983, to produce games for the Sinclair ZX Spectrum.
List of games
This softography is a merged list covering both J.K. Greye Software and J.K. Greye Enterprises Ltd.
Sinclair ZX81
10 Games for 1K ZX81 (1981)
1K Breakout (1981)
Catacombs (1981)
3D Defender (1981) (later re-released by N.G.S.)
3D Monster Maze (1981) (later re-released by N.G.S.)
Pyramid (1981)
Starfighter (1981)
ZX81 Artist (1981) (these last three released on one "gamestape")
Sinclair ZX Spectrum
Ufo (1982)
Minefield (1982)
Invasion (1982)
Kamikaze (1982)
The Arcadian (1982)
3D Vortex (1983)
4-Star (1984)
References
Software companies of the United Kingdom
Software companies established in 1981
Software companies disestablished in 1984
Defunct video game companies of the United Kingdom
1981 establishments in the United Kingdom
Companies established in 1981 |
3184080 | https://en.wikipedia.org/wiki/Vital%20Product%20Data | Vital Product Data | Vital Product Data (VPD) is a collection of configuration and informational data associated with a particular set of hardware or software. VPD stores information such as part numbers, serial numbers, and engineering change levels. Not all devices attached to a system will provide VPD, but it is often available from PCI and SCSI devices. Parallel ATA and USB devices also provide similar data, but do not refer to it as VPD.
VPD data is typically burned onto EEPROMs associated with various hardware components, or can be queried through attached I2C buses. It is used by firmware (for example, OpenFirmware) to determine the nature of the system hardware, and to shield the operation of the firmware from minor changes and variations of hardware implementations within a given machine model number.
AIX
In IBM's AIX operating system, VPD also refers to a subset of database tables in the Object Data Manager (ODM) obtained from either the Customized VPD object class or platform-specific areas; therefore, the terms VPD and ODM are sometimes used interchangeably.
The lscfg command can be used in AIX to get the VPD.
lscfg [-v] [-p] [-s] [-l Name]
Other Unix-like systems
Package dmidecode provides commands vpddecode, biosdecode, and dmidecode, which can display hardware Vital Product Data. This package is available for many Unix-like operating systems.
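The following is a minimal Python sketch of how VPD might be queried programmatically by wrapping the commands mentioned above (lscfg on AIX, vpddecode from the dmidecode package on other Unix-like systems). The helper name, the fallback order, and the choice of lscfg flags are illustrative assumptions, not part of any standard interface.

```python
import shutil
import subprocess

def read_vpd():
    """Return raw VPD text from whichever platform tool is available.

    Tries the AIX lscfg command first, then vpddecode from the dmidecode
    package. Returns None if neither tool is present. Illustrative only.
    """
    if shutil.which("lscfg"):
        # -v requests the VPD fields, -p includes platform-specific data
        result = subprocess.run(["lscfg", "-vp"], capture_output=True, text=True)
        return result.stdout
    if shutil.which("vpddecode"):
        # vpddecode typically requires root privileges to read the BIOS area
        result = subprocess.run(["vpddecode"], capture_output=True, text=True)
        return result.stdout
    return None

if __name__ == "__main__":
    vpd = read_vpd()
    print(vpd if vpd else "No VPD tool found on this system")
```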
See also
Organizationally unique identifier
World Wide Name
References
Further reading
PCI System Architecture (PC System Architecture Series), 10 Jun 1999, MindShare Inc., Tom Shanley, Don Anderson,
Identifiers
IBM operating systems |
1350425 | https://en.wikipedia.org/wiki/AN/URC-117%20Ground%20Wave%20Emergency%20Network | AN/URC-117 Ground Wave Emergency Network | The Ground Wave Emergency Network (GWEN) was a command and control communications system intended for use by the United States government to facilitate military communications before, during and after a nuclear war. Specifically, the GWEN network was intended to survive the effects of an electromagnetic pulse from a high-altitude nuclear explosion and ensure that the United States President or their survivors could issue a launch order to Strategic Air Command bombers by radio.
AN/URC-117 was the system's Joint Electronics Type Designation System identifier, which signified various radio components installed in different locations. Each GWEN Relay Node site featured a longwave transmitting tower, generally between tall, and emitting an RF output of between 2,000 and 3,000 watts. Of 240 planned GWEN towers, only 58 were built. In 1994, a defense appropriations bill banned the funding of new GWEN tower construction, and a few months later, the GWEN program was cancelled by the US Air Force. The United States Coast Guard later outfitted a number of former GWEN sites to house the Nationwide Differential GPS system.
History
GWEN was part of the Strategic Modernization Program designed to upgrade the nation's strategic communication system, thereby strengthening the value of nuclear deterrence. The GWEN communication system, established in the late 1980s, was designed to transmit critical Emergency Action Messages (EAM) to United States nuclear forces. EMP can produce a sudden power surge over a widespread area that could overload unprotected electronic equipment and render it inoperable. In addition, EMP could interfere with radio transmissions that use the ionosphere for propagation. It was thought that GWEN would use a ground-hugging wave for propagation and so be unaffected by the EMP. The network was conceived as an array of approximately 240 radio transceivers distributed across the continental USA which operated in the Low frequency (LF) radio band.
Analysis showed that low-frequency (150-190 kilohertz) radio transmissions were largely unaffected by high-altitude EMP, and the Air Force Weapons Laboratory (Kirtland Air Force Base) tested a small-scale "groundwave" transmission system in 1978-1982. Based on the groundwave concept's promise, USAF Headquarters issued a draft Program Management Directive (PMD) for a "Proliferated Groundwave Communications System (PGCS)" on 25 August 1981. The name of this proposed network system was changed from PGCS to Groundwave Emergency Network in February 1982. The Air Force placed a tentative initial operating capability for GWEN by January 1992.
When doubts arose regarding the threat of electromagnetic pulse to permanently shut down communications, only 58 of the originally planned 240 GWEN towers were built. In 1994 a defense appropriations bill banned new towers from being built, and shortly after, the GWEN program was cancelled by the Air Force.
Operations
Command and control messages originating at various military installations were transmitted on the 225 to 400 MHz band and received by a network of unmanned relay stations, called "Relay Nodes", dispersed throughout the contiguous 48 states. The Relay Nodes would re-transmit these command and control messages to each other, and to Strategic Air Command operating locations and launch control centers using low frequencies in the 150-175 kHz range in order to take advantage of ground-hugging radio propagation similar to commercial AM radio stations.
The distance between Relay Nodes was approximately 150–200 miles, determined by the ground wave transmission range. During initial operations, the Relay Nodes would receive and relay brief test messages every 20 minutes. The system had built-in redundancy, using packet switching techniques for reconstruction of connectivity if system damage occurred.
Problems
Early in its lifetime, electrical interference problems caused by GWEN system operation began to surface. Since the stations were using LF, the chosen frequency was within 1 kHz of the operating frequency of nearby electrical carrier current systems. With GWEN handling constant voice, teletype, and other data traffic, it caused interference to the power companies' diagnostic two-kilohertz side carrier tone. When the side carrier tone disappeared due to interference from GWEN, the power grid would interpret that as a system fault.
Site layout
The overall area of a GWEN Relay Node was approximately , approximately × 700 feet. It was surrounded on the perimeter by locked, chain-link fences topped with barbed wire. Typical site features included:
A main longwave transmitting tower (generally between tall)
A radial network of underground wires forming a large ground plane to serve as a reflecting surface for radio waves
Three electronic equipment shelters; two located near the perimeter of the site, and one at the base of the tower containing an antenna-tuning unit (ATU)
UHF and LF receive antennas mounted on either a 10 ft. mast, 30 ft. light pole, or 60–150 ft. tower.
A diesel backup generator, with a two-chambered fuel tank having a capacity of
The main GWEN antenna operated intermittently in the LF band at 150 to 175 kilohertz (kHz) (below the bottom of the AM broadcast band at 530 kHz). The peak broadcasting power was from 2,000 to 3,000 watts. The UHF antenna operated at 20 watts, between 225 and 400 megahertz (MHz).
GWEN site locations
A 1998 Department of Transportation environmental impact survey that proposed repurposing a number of existing GWEN sites for use by the Nationwide Differential Global Positioning System listed the locations of 29 GWEN sites:
Appleton, Washington
Austin, Nevada
Bakersfield, California
Billings, Montana
Bobo, Mississippi
Clark, South Dakota
Edinburg, North Dakota
Fenner, California
Flagstaff, Arizona
Gettysburg, Pennsylvania
Goodland, Kansas
Grady, Alabama
Great Falls, Montana
Hackleburg, Alabama
Hagerstown, Maryland
Hawk Run, Pennsylvania
Kirtland AFB, New Mexico
Klamath Falls, Oregon
Macon, Georgia
Medford, Wisconsin
Medora, North Dakota
Onondaga, Michigan
Penobscot, Maine
Pueblo, Colorado
Ronan, Montana
Savannah Beach, Georgia
Spokane, Washington
Summerfield, Texas
Whitney, Nebraska
Termination
Some of the initial towers had prompted groups of citizens in Massachusetts, Oregon, Pennsylvania, and California to organize to fight construction of GWEN towers in their areas. The groups believed that the presence of a GWEN node would increase the community's "strategic worth" in the eyes of the Soviet Union and thus invite attack. Responding to these groups, the Air Force repeatedly downplayed the importance of the towers, stating they were not worth that kind of attention by the Soviet Union.
Amid controversy and world geopolitical changes, GWEN's value diminished greatly in the post-Cold War environment, in addition to its existence being rendered moot by the sustained effectiveness of predecessor and follow-on systems (Survivable Low Frequency Communication System and Minimum Essential Emergency Communication Network respectively). As early as 1990, legislative measures were enacted to terminate the program.
In 1994, new construction of GWEN towers was banned after a defense appropriations bill eliminated any funding for the towers for one year. A few months later, the United States Air Force announced that it would terminate the construction contract to build the remaining 25 towers, except for monies used to dismantle the system.
Gallery
See also
Command and Control (military)
Differential Global Positioning System
Post Attack Command and Control System (PACCS)
Minimum Essential Emergency Communications Network (MEECN)
Emergency Rocket Communications System (ERCS)
Survivable Low Frequency Communications System (SLFCS)
References
General
Closure of the Ground Wave Emergency Network (GWEN) Relay Nodes, USAF EAIP 1998.
External links
FAS: Federation of American Scientists
USAF Installations
Military communications
United States nuclear command and control
Nuclear warfare
1992 establishments in the United States |
67535031 | https://en.wikipedia.org/wiki/Cart.com | Cart.com | Cart.com is an American technology company with a focus on e-commerce software and services. The company's offerings include its proprietary e-commerce platform and multi-channel management software, fulfillment services, marketing services, customer service, and e-commerce analytics software. The company aims to eliminate pain points merchants encounter when using Amazon or Shopify to sell online. The company is headquartered in Austin, TX.
History
Cart.com was founded in October 2020 by Jim Jacobsen, the co-founder and former CEO of direct-to-consumer brand RTIC Outdoors and co-founder of alliantgroup, and Omair Tariq, former executive at direct-to-consumer custom blinds provider blinds.com, which was acquired by The Home Depot. Other founding team members included Remington Tonar and Henry Hanley.
In January 2021, Cart.com announced the acquisition of AmeriCommerce, a Texas-based e-commerce software company. In February 2021, Cart.com announced the acquisition of Cheap Cheap Moving Boxes, a direct-to-consumer box company, for its warehouse and logistics capabilities.
In April 2021, the company announced the completion of its Series A funding round to bring the total funding to $45M. The round was led by Houston-based Mercury Fund and Florida-based Arsenal Growth.
In July 2021, Cart.com announced the acquisition of The DuMont Project, a Los Angeles-based e-commerce marketing agency, and the acquisition of Sauceda Industries, an Austin-based third-party logistics provider.
In August 2021, Cart.com announced the closing of a $98M Series B funding round led by Oak HC/FT with participation from PayPal Ventures, Clearco, and G9 Ventures.
In December 2021, the company announced that it moved its global headquarters from Houston, TX to Austin, TX, and had been recognized with venture firm Capital Factory's Startup of the Year award.
In January 2022, Cart.com announced the acquisition of FB Flurry, a software-enabled third-party logistics provider with facilities across the nation, to bring the company's total distribution center footprint to over 2M square feet. That same month, the company also announced the acquisition of SellerActive, a multi-channel management software that connects e-commerce channels.
In February 2022, Cart.com announced the closing of a $240M funding round, disclosing that it grew revenues by over 400% in the past year. Legacy Knight Capital Partners led the round, which included Citi Ventures, Visa, and Fortune retail brands. J.P. Morgan and TriplePoint Capital participated via venture debt financing, bringing Cart.com's total raised to $380M.
As of Q1 2022, the company has over 850 employees.
References
E-commerce websites
American websites |
13679173 | https://en.wikipedia.org/wiki/Electrochemical%20Society | Electrochemical Society | The Electrochemical Society is a learned society (professional association) based in the United States that supports scientific inquiry in the field of electrochemistry and solid-state science and related technology. The Society membership comprises more than 8,000 scientists and engineers in over 85 countries at all degree levels and in all fields of electrochemistry, solid state science and related technologies. Additional support is provided by institutional members including corporations and laboratories.
ECS is a 501(c)(3) non-profit organization.
Scientists around the world rely on the Society as a leading source of scientific research through its peer-reviewed journals, international meetings, and the ECS Digital Library on IOPscience. The Society publishes numerous journals including the Journal of The Electrochemical Society (the oldest peer-reviewed journal in its field), Journal of Solid State Science and Technology, ECS Meeting Abstracts, ECS Transactions, and ECS Interface. The Society sponsors the ECS Monographs Series. These distinguished monographs, published by John Wiley & Sons, are the leading textbooks in their fields.
The ECS Digital Library on IOPscience encompasses over 160,000 journal and magazine articles and meeting abstracts. The Society supports open access through Free the Science, the Society’s initiative to make research freely available to world readers and free for authors to publish.
The Society has 13 topic interest area divisions as well as regional sections in Asia, Europe, Latin America, the Middle East, North America, and Southern Asia; over 100 ECS student chapters are located in major universities in all of these regions as well as Eastern Europe and South Africa. Student members benefit from exposure to experts in their fields, sharing research, volunteer activities, and career development.
ECS administers numerous international awards and supports STEM educational and outreach efforts.
History
The Electrochemical Society was founded in 1902 in Philadelphia, PA. At the beginning, ECS was called the American Electrochemical Society. It was, even then, a melting pot of scientific and technological disciplines, and of their adherents, who participated from around the globe.
The 19th century saw many applications of electricity to chemical processes and chemical understanding. Bridging the gap between electrical engineering and chemistry led innovative young people in industrial and academic circles to search for a new forum to discuss developments in the burgeoning field of electrochemistry.
The original constitution of the Society called for holding meetings and publishing papers presented there and the ensuing discussions. In 1902 the Society ushered in a new publication, Transactions of the American Electrochemical Society. In 1907 the first “local” section was formed at the University of Wisconsin. That same year, the American Electrochemical Society Bulletin was launched; it became the Journal of The Electrochemical Society in 1948.
In the 1920s, topical interest area divisions began to be founded, including the High Temperature Materials Division and the Electrodeposition Division. In 1930, the international nature of the Society was officially recognized by dropping “American” from the name. A new category of membership was started in 1941 to permit industrial companies to support the Society’s mission. ECS began fulfilling the need for critical textbooks with the publication of its second monograph, the Corrosion Handbook, by H. H. Uhlig in 1948.
Throughout the latter half of the 20th century, the Society continued to grow in size and importance, expanding the number of its publications, and the significance of the technical research unveiled at its meetings.
Over time, the Society’s members and publications’ authors have included many distinguished scientists and engineers. The Society’s original charter members included:
E. G. Acheson, who commercialized carborundum, an artificial graphite;
H. H. Dow, the founder of Dow Chemical Company;
C. M. Hall, the inventor of the Hall process for the manufacture of aluminum;
Edward Weston, the founder of Weston Instruments.
Thomas A. Edison joined the Society in 1903 and enjoyed membership for 28 years. In 1965, Moore’s law forever changed the world of technology. That seminal prediction developed its roots within the Society. ECS has included numerous Nobel laureates among its members, most recently the three co-winners of the 2019 Nobel Prize in Chemistry. John B. Goodenough, M. Stanley Whittingham, and Akira Yoshino shared the prize “for the development of lithium-ion batteries.”
For a more complete history of ECS, please consult The Electrochemical Society: The First Hundred Years, 1902 – 2002.
ECS alignment groups
The Society's alignment groups include divisions, sections, and student chapters that represent the wide range of interests of the electrochemical and solid state science and technology community.
Divisions
Battery (BATT)
Topical Interest Area (TIA): Batteries and Energy Storage (established 1947)
High-Temperature Energy, Materials, & Processes (H-TEMP)
TIA: Fuel Cells, Electrolyzers, and Energy Conversion (established 1921)
Corrosion (CORR)
TIA: Corrosion Science and Technology (established 1942)
Industrial Electrochemistry and Electrochemical Engineering (IE&EE)
TIA: Electrochemical Engineering (established 1943)
Dielectric Science and Technology (DS&T)
TIA: Dielectric Science and Materials (established 1945)
Luminescence and Display Materials (LDM)
TIA: Luminescence and Display Materials, Devices, and Processing (established 1982)
Electrodeposition (ELDP)
TIA: Electrochemical/Electroless Deposition (established 1921)
Nanocarbons (NANO)
TIA: Carbon Nanostructures and Devices (established 1993)
Electronics and Photonics (EPD)
TIA: Electronic Materials and Processing and Electronic and Photonic Devices and Systems (established 1931)
Organic and Biological Electrochemistry (OBE)
TIA: Organic and Bioelectrochemistry (established 1940)
Energy Technology (ETD)
TIA: Fuel Cells, Electrolyzers, and Energy Conversion (established 1983)
Physical and Analytical Electrochemistry (PAE)
TIA: Physical and Analytical Electrochemistry, Electrocatalysis, and Photoelectrochemistry (established 1936)
Sensor (SENS)
TIA: Sensors (established 1988)
Sections
ECS sections introduce and support activities in electrochemistry and solid state science in Asia, Europe, Latin America, the Middle East, and North America. Involvement in a section provides networking opportunities for those both new to the field or advanced in their careers. For those not able to attend ECS biannual meetings, sections bring technical news and activities within reach. Sections participate in ECS affairs, work to build Society membership, and help create awareness for the science.
ECS Arizona Section
ECS Brazil Section
ECS Canada Section
ECS Chicago Section
ECS Cleveland Section
ECS Detroit Section
ECS Europe Section
ECS Georgia Section
ECS India Section
ECS Israel Section
ECS Japan Section
ECS Korea Section
ECS Mexico Section
ECS National Capital Section
ECS New England Section
ECS Pacific Northwest Section
ECS Pittsburgh Section
ECS San Francisco Section
ECS Singapore Section
ECS Taiwan Section
ECS Texas Section
ECS Twin Cities Section
Student Chapters
More than 100 ECS student chapters are located at major universities in Asia, Europe, Latin America, the Middle East, North America, South Africa, and Southern Asia.
Meetings
The Society has hosted scientific technical meetings since 1902 including its biannual meetings in the spring and fall of each year. The ECS biannual meetings bring together the most active researchers in academia, government, and industry—both professionals and students—to engage, discuss, and innovate in the areas of electrochemistry and solid state science and related technology. They are a premier destination—in person or online—for industry professionals to experience five days of learning, technical presentations, business development, and networking. ECS also sponsors meetings for other renowned scientific organizations including the Storage X International Symposium Series, the International Meeting on Chemical Sensors, and the International Symposium on Solid Oxide Fuel Cells.
Publications
ECS publishes peer-reviewed technical journals, proceedings, monographs, conference abstracts, and a quarterly news magazine.
Journals
Since 1902, the Society has published numerous journals now available through ECS’s publishing partner, IOPscience.
Journal of The Electrochemical Society (JES)
ECS Journal of Solid State Science and Technology (JSS)
ECS Transactions (ECST)
The Electrochemical Society Interface
ECS Meeting Abstracts
Journal History
A number of ECS journals which have ceased publication are now preserved as an archive. These archived publications are available through the ECS Digital Library.
Bulletin of the American Electrochemical Society
Bulletin of the Electrochemical Society
ECS Electrochemistry Letters (EEL)
ECS Solid State Letters (SSL)
Electrochemical and Solid-State Letters (ESL)
ECS Proceedings Volumes (ECS PVs)
Electrochemical Technology
Transactions of the American Electrochemical Society
ECS Books & Monographs
Electrochemical Society Monograph Series
ECS Monographs provide authoritative, detailed accounts on specific topics in electrochemistry and solid-state science and technology. Since the 1940s, ECS and noted publishers have cooperated to publish leading titles in these fields. John Wiley & Sons is the Society’s publishing partner on the series today.
Journal of The Electrochemical Society
JES is the flagship journal of The Electrochemical Society. Published continuously from 1902 to the present, JES is one of the most highly-cited journals in electrochemistry and solid state science and technology.
ECS Journal of Solid State Science and Technology
JSS is a peer-reviewed journal covering fundamental and applied areas of solid state science and technology, including experimental and theoretical aspects of the chemistry and physics of materials and devices.
ECS Interface
The Electrochemical Society Interface is an authoritative yet accessible publication for those in the field of solid state and electrochemical science and technology. Published quarterly, this four-color magazine contains technical articles about the latest developments in the field, and presents news and information about and for Society members.
ECS Meeting Abstracts
ECS Meeting Abstracts contain extended abstracts of the technical papers presented at the ECS biannual meetings and ECS-sponsored meetings. This publication offers a first look into current research in the field. ECS Meeting Abstracts are freely available to all visitors to the ECS Digital Library.
ECS Transactions
ECST is the official conference proceedings publication of The Electrochemical Society. This publication features full-text content of proceedings from ECS meetings and ECS-sponsored meetings. ECST is a high-quality venue for authors and an excellent resource for researchers. The papers appearing in ECST are reviewed to ensure that submissions meet generally-accepted scientific standards.
Free the Science
Free the Science is ECS’s initiative to make research freely available to all readers, while remaining free for authors to publish.
Educational activities and programs
Awards
The society recognizes members for outstanding technical achievement in electrochemical and solid state science and technology at various career levels, and recognizes exceptional service to the Society, through the ECS Honors & Awards Program—the international awards, medals, and prizes administered by the Society.
Starting in 1919, Honorary Membership was bestowed for outstanding contributions to the Society. ECS's most prestigious award, the Edward Goodrich Acheson Award, established in 1928, is presented in even-numbered years for "conspicuous contribution to the advancement of the objectives, purposes, and activities of the society".
Supporting students and early career scientists has been a long-held goal of The Electrochemical Society. The Norman Hackerman Young Author Award—established in 1928—is one of the first awards created by the Society. It is given for the best paper published in the Journal of The Electrochemical Society that year by a young author or co-authors. Recipients must be under 31 years of age. Among the significant talent recognized at an early age by this award is Nobel laureate, M. Stanley Whittingham, who received it in 1970.
The Olin Palladium Award (formerly the Palladium Medal Award), established in 1950, is presented in odd-numbered years to recognize "distinguished contributions to the field of electrochemical or corrosion science."
ECS honors members with the designation Fellow of The Electrochemical Society for having made significant accomplishments to the fields of electrochemistry and solid state science and technology, and to the Society.
Fellowships and grants
Through competitive fellowship stipends, ECS supports students and young professionals as they pursue new ideas and forge connections with professionals both within and outside the field. These include the Biannual Meeting Travel Grants Program supported by ECS divisions and sections to help students, postdocs, and early career researchers attend ECS biannual meetings.
ECS Summer Fellowships and the Colin Garfield Fink Fellowship support young researchers' work during the months of June through August.
The ECS Toyota Young Investigator Fellowship encourages young professors and scholars to pursue research in green energy technology that may promote the development of next-generation vehicles capable of utilizing fuel cells.
Notable members
Notable members of The Electrochemical Society include numerous Nobel Prize laureates, among them the three co-winners of the 2019 Nobel Prize in Chemistry.
Thomas Edison: Edison became a member on April 4, 1903. Early members recall attending a meeting at Edison's home in the Society's early days. Most recognized for inventing the phonograph, the motion picture camera, and the electric light bulb, Edison also made monumental contributions to electrochemistry.
John B. Goodenough, M. Stanley Whittingham, and Akira Yoshino, all long time ECS members, shared the 2019 Nobel Prize in Chemistry “for the development of lithium-ion batteries”.
Isamu Akasaki, Hiroshi Amano, and Shuji Nakamura shared the 2014 Nobel Prize in Physics for “the invention of efficient blue light-emitting diodes, which has enabled bright and energy-saving white light sources”.
Jack Kilby’s invention of the integrated circuit earned him half of the 2000 Nobel Prize in Physics "for basic work on information and communication technology".
Steven Chu and William D. Phillips were co-recipients of the 1997 Nobel Prize in Physics “for the development of methods to cool and trap atoms with laser light”.
Richard Smalley shared the 1996 Nobel Prize in Chemistry “for the discovery of fullerenes”.
Rudolph A. Marcus won the 1992 Nobel Prize in Chemistry “for his contributions to the theory of electron transfer reactions in chemical systems".
Jean-Marie Lehn, an early innovator in the field of supramolecular chemistry, shared the 1987 Nobel Prize in Chemistry “for the development and use of molecules with structure-specific interactions of high selectivity”.
Gerd Binnig shared the 1986 Nobel Prize in Physics “for the design of the scanning tunneling microscope”.
Charles W. Tobias: A pioneer in the field of electrochemical engineering, Tobias made a long-lasting and far-reaching impact on the field of electrochemical science by forming the Chemical Engineering Department at UC Berkeley in 1947. He served as ECS president from 1970-1971.
Gordon E. Moore: The co-founder of Intel was known for his 1965 prediction that guided the delivery of ever more powerful and lower-cost semiconductor chips. This prediction later became known as Moore's law.
Norman Hackerman: The internationally known expert in metal corrosion served as ECS president in 1957-1958. Hackerman is most recognized for developing the electrochemistry of oxidation.
Carl Wagner: Often referred to as the father of solid state chemistry, Wagner's work on oxidation rate theory, counter diffusion of ions, and defect chemistry considerably advanced knowledge of how reactions proceed at the atomic level in the solid state. Wagner was the first recipient of the ECS Palladium Award in 1951.
Irving Langmuir: received the 1932 Nobel Prize in Chemistry “for his discoveries and investigations in surface chemistry”.
Edward Goodrich Acheson: The inventor of the Acheson process was a manufacturer of carborundum and graphite. The ECS Acheson Award was named in his honor in 1931.
Theodore William Richards: Richards, whose research helped confirm the existence of isotopes, received the 1914 Nobel Prize in Chemistry, “in recognition of his accurate determinations of the atomic weight of a large number of chemical elements”.
Willis R. Whitney: ECS president from 1911-1912, Whitney is most recognized among his many achievements for founding the research laboratory of the General Electric Company.
Leo Baekeland: Baekeland, who served as ECS president in 1909, is most famous for inventing Bakelite in 1907. His entrepreneurial genius and inventive nature made Baekeland one of the most important players in chemical technology.
Herbert H. Dow: Among his most significant achievements, Dow founded the Dow Chemical Company in 1897. Dow Chemical funded the creation of the ECS Industrial Electrochemistry and Electrochemical Engineering Division H. H. Dow Memorial Student Achievement Award in his honor in 1990.
Edward Weston: Noted for his achievements in electroplating, Weston developed an electrochemical cell – named the Weston cell – that served as a standard for voltage.
Charles Martin Hall: Hall, best known for inventing an inexpensive process to produce aluminum, was one of the founders of Alcoa.
Lawrence Addicks (1878-1964) served as president of The Electrochemical Society from 1915 to 1916.
References
External links
Electrochemistry
Chemistry societies
Scientific organizations established in 1902
1902 establishments in Pennsylvania
Scientific societies based in the United States |
23831574 | https://en.wikipedia.org/wiki/Calibre%20%28software%29 | Calibre (software) | Calibre (stylised calibre) is a cross-platform open-source suite of e-book software. Calibre supports organizing existing e-books into virtual libraries, displaying, editing, creating and converting e-books, as well as syncing e-books with a variety of e-readers. Editing books is supported for EPUB and AZW3 formats. Books in other formats like MOBI must first be converted to those formats if they are to be edited.
History
On 31 October 2006, when Sony introduced its PRS-500 e-reader, Kovid Goyal started developing libprs500, aiming mainly to enable use of the PRS-500 formats on Linux. With support from the MobileRead forums, Goyal reverse-engineered the proprietary Broad Band eBook (BBeB) file format. In 2008, the program, for which a graphical user interface was developed, was renamed "calibre", displayed in all lowercase.
Features
Calibre supports many file formats and reading devices. Most e-book formats can be edited, for example, by changing the font, font size, margins, and metadata, and by adding an auto-generated table of contents. Conversion and editing are easily applied to appropriately licensed digital books, but commercially purchased e-books may need to have digital rights management (DRM) restrictions removed. Calibre does not natively support DRM removal, but may allow DRM removal after installing plug-ins with such a function.
Calibre allows users to sort and group e-books by metadata fields. Metadata can be pulled from many different sources, e.g., ISBNdb.com; online booksellers; and providers of free e-books and periodicals in the US and elsewhere, such as the Internet Archive, Munsey's Magazine, and Project Gutenberg; and social networking sites for readers, such as Goodreads and LibraryThing. It is possible to search the Calibre library by various fields, such as author, title, or keyword; however, full-text search has not yet been implemented.
E-books can be imported into the Calibre library, either by sideloading files manually or by wirelessly syncing an e-book reading device with the cloud storage service in which the Calibre library is backed up, or with the computer on which Calibre resides. Also, online content can be harvested and converted to e-books. This conversion is facilitated by so-called recipes, short programs written in a Python-based domain-specific language. E-books can then be exported to all supported reading devices via USB, Calibre's integrated mail server, or wirelessly. Mailing e-books enables, for example, sending personal documents to the Amazon Kindle family of e-readers and tablet computers.
This can be accomplished via a web browser, if the host computer is running and the device and host computer share the same network; in this case, pushing harvested content from content sources is supported on a regular interval (called 'subscription'). Also, if the Calibre library on the host computer is stored in a cloud service, such as Box.net, Google Drive, or Dropbox, then either the cloud service or a third-party app, such as Calibre Cloud or CalibreBox, can remotely access the library.
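As an illustration of the recipe mechanism described above, a minimal recipe might look like the following sketch, built on the BasicNewsRecipe class that Calibre's recipe system provides; the publication name and feed URL are hypothetical placeholders, and real recipes usually add site-specific cleanup rules.

from calibre.web.feeds.news import BasicNewsRecipe

class ExampleNews(BasicNewsRecipe):
    # Metadata for the generated periodical e-book
    title = 'Example News'              # hypothetical publication name
    oldest_article = 7                  # only fetch articles up to a week old
    max_articles_per_feed = 25
    no_stylesheets = True               # drop site CSS so conversion stays clean

    # (section name, RSS feed URL) pairs; the URL is a placeholder
    feeds = [
        ('Front Page', 'https://example.com/rss/frontpage.xml'),
    ]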
Since version 1.15, released in December 2013, Calibre also contains an application to create and edit e-books directly, similar to the more full-featured editor tools of the Sigil application, but without the latter's WYSIWYG editing mode.
Associated apps
Calibre Cloud (free) and Calibre Cloud Pro (paid), apps by Intrepid Logic that let one "access your Calibre e-book library from anywhere in the world. Place your calibre library in your Dropbox, Box, or Google Drive folder, and be able to view, search, and download from your library anywhere". As Jane Litte at Dear Author and John Jeremy at Teleread observe: This tool can be used to "create [one's] own Cloud of eBooks" and thereby read and allow downloads and emails from one's Calibre library via the Calibre folder in Box.net, Dropbox, or Google Drive. Because the Calibre-generated local wireless feed (OPDS) can only be accessed on devices sharing the same network as the Calibre library, this feature of the Calibre Cloud apps is particularly useful when away from one's home network, because it allows one to download and read the contents of one's Calibre library via the Calibre folder in Box, Dropbox, or Google Drive.
Calibre Companion (paid), an app by MultiPie, Ltd., recommended by calibre's developers, "brings complete integration with calibre on your desktop, giving you total control over book management on your device." John Jermey at Teleread notes this app can manage Calibre/device libraries as if one's mobile device were plugged into the computer; however, unlike Calibre Cloud, Calibre Companion requires users to be at a computer and use the Calibre-generated local wireless feed (OPDS).
Calibre Library (paid), an app by Tony Maro that allows one to "Connect wirelessly to your Calibre e-book library or other Stanza source. Browse and download your e-books on the go." This app's operations and benefits are similar to those offered by Calibre Cloud.
Calibre Sync (free), an app by Seng Jea Lee that "seamlessly connects to your Calibre Library and shows up as a connected device on Calibre. If Auto-Connect option is enabled, your device will attempt to connect to the Calibre Library when it is within the home Wi-Fi network. This allows Calibre to automatically update your device with the latest newspaper or magazines you have scheduled for download!" As with Calibre Companion, this app requires the device to be on the same network as the Calibre library.
CalibreBox (free and paid), an app by Eric Hoffmann that, like Calibre Cloud, accesses Calibre libraries from cloud storage. Unlike Calibre Cloud, it is limited to Dropbox, but CalibreBox supports more than one library at a time, and flexible sorting and filtering. Custom column support for the book detail view, sorting, and filtering by custom columns, and adding more than two libraries are restricted to paid users. The app is built on the design principles of Google's Material Design and is under active development.
Calibre-go (free), an app by Litlcode Studios that lets you access your Calibre e-book library from cloud storage and browse, sort, search, and read books on your mobile device. Calibre-go supports multiple libraries across multiple accounts simultaneously.
Calibre Sync (paid), an Android app by BIL Studio that lets you access Calibre libraries from cloud storage (Dropbox, OneDrive, Box, and pCloud), or from an SD card. Calibre Sync supports multiple libraries across multiple accounts simultaneously, and also allows users to browse, sort, search, filter, and download books to read on devices.
See also
Adobe Digital Editions
Comparison of e-readers
List of free and open-source software packages
OverDrive Media Console
References
Further reading
External links
2008 software
Cross-platform free software
EPUB readers
Free application software
Free software programmed in C
Free software programmed in Python
Linux text-related software
Reference management software
Software that uses Qt |
41101523 | https://en.wikipedia.org/wiki/Elon%20Ganor | Elon Ganor | Elon A. Ganor (born 1950) is an Israeli entrepreneur known for his role as one of the world's first VoIP pioneers. He served as chairman and CEO of VocalTec Ltd (Nasdaq CALL), the company behind the creation of "Internet Phone", the world's first commercial software product that enabled voice communication over the internet, known initially as "Internet Telephony" and later as VoIP.
Biography
Elon Ganor was born in Geneva, Switzerland. He grew up in Tel Aviv, and graduated from Tel Aviv University Sackler Medical school in 1975.
Business career
After years of practicing medicine, Ganor entered the technology industry. His first company was Virovahl S.A., a Swiss-based biotechnology company founded in 1987 with a group of Swedish virologists. The company's laboratory was located in Gothenburg, Sweden. Virovahl S.A. developed the world's first HIV diagnostic test based on synthetic peptides.
Under his guidance as President of Virovahl, the test was licensed exclusively to Pharmacia AB of Uppsala, Sweden (later merged with Upjohn).
In 1990 Ganor joined forces with Alon Cohen and Lior Haramaty, who had formed VocalTec Ltd six months earlier in Israel. Cohen and Haramaty developed and manufactured a PC sound card (SpeechBoard TM) that was sold mainly to the local visually impaired community in Israel, bundled with unique text-to-speech software that enabled blind people to use a computer in the Hebrew language.
Since that market was limited, as VocalTec CEO and chairman, Ganor decided to shift the company's focus to software.
In 1993 VocalChat was born, software that enabled voice communication from one PC to another on a local and wide area network. The software was developed by a group of developers including Ofer Kahana (later the founder of Kagoor, which was sold to Juniper), Elad Sion (who served in Israel's elite 8200 intelligence unit and died young in a car accident), Ofer Shem Tov (a software developer in Israel) and others. The software was presented in Atlanta in May 1993 at the Network InterOp trade show. In 1994, support for Internet Protocol was added, and on February 10, 1995, "Internet Phone" was launched with a near full-page Wall Street Journal article by WSJ Boston Correspondent Bill Bulkeley under the header "Hello World! Audible chats On the Internet".
VocalTec Ltd became a Nasdaq traded company in February 1996, with Ganor as chairman and CEO.
In 1997 Ganor worked with Michael Spencer (at the time principal at Booz Allen Hamilton who led the Internet Strategy Group of the Communications, Media and Technology practice) to develop a new type of a VoIP exchange phone company. After meeting Tom Evslin from AT&T (who led at the time WorldNet AT&T ISP initiative), ITXC was founded, with Tom Evslin as its CEO and cofounder. VocalTec under Ganor invested the initial $500,000 and gave a credit of $1 million in VoIP Gateway equipment in exchange for 19.9% of the new company; AT&T followed with an additional investment.
ITXC became the world's largest VoIP carrier, reaching a market cap of about $8 billion as a Nasdaq company in 2000 (prior to the March 2000 dot-com crash).
In 2008 Ganor became the founder, investor and CEO (with Danny Frumkin, PhD and Adam Wasserstrom, PhD as co-founders) of Nucleix. Nucleix Ltd is a Biotechnology epigenetic company involved in the development of bio-markers and technologies for forensic medicine. The company developed a product for the authentication of DNA.
Art career
Ganor left VocalTec in 2006 to study art at Beit Berl College. He graduated in 2008, majoring in photography.
Among his works is "Wall Street", a series of staged photographs shot in New York and Israel expressing criticism of Wall Street practices (first exhibited in 2008, just before the Lehman Brothers collapse). Other series include "The Box" (exhibited in 2009 at the Volta show in Switzerland) and "Earl King" (exhibited in October 2010).
Ganor’s work can be found in many art collections, including the Tel Aviv Museum of Art, the Shpilman Institute for Photography collection, the Israel Museum in Jerusalem and private collections.
Awards and recognition
Ganor has been covered in Der Spiegel, Die Zeit, Wall Street Journal, BusinessWeek, Newsweek, Von Magazine, Computer Business, WebWeek, The Industry Standard, and Time.
He has appeared on CNN, and participated as a panelist at the World Economic Forum in Davos, Switzerland. He was also interviewed on the podcast Shaping Business Minds Through Art.
Public positions
Member of the Board of Governors, Ben-Gurion University of the Negev, Beer Sheba, Israel.
Member of the Board of Governors and lecturer, Tel Aviv-Yafo Academic college.
References
External links
Interview with Elon Ganor in philly.com
Interview with Elon Ganor in www.18m.co.il
A section about Elon Ganor and VocalTec in Information Superhighways Newsletter from 1995
Elon Ganor in The New York Times
Elon Ganor mentioned in Top Voices of IP Communications
IP Telephony Companies React to FCC Report to Congress
Living people
1950 births
Israeli artists
Israeli businesspeople
People from Geneva |
48661397 | https://en.wikipedia.org/wiki/Kdan%20Mobile | Kdan Mobile | Kdan Mobile Software is a privately owned application and software development company based in Tainan City, Taiwan. Kdan also has branches in Taipei, Huntingdon, Changsha, Beijing, and Tokyo. The company was founded in 2009 by Kenny Su, the company’s CEO.
The company recently completed its Series B round of fundraising, in which it raised 16 million USD total. Four global firms, Dattoz Partners (South Korea), WI Harper Group (U.S.), Taiwania Capital (Taiwan), and Golden Asia Fund Mitsubishi UFJ Capital (Japan), made up the Series B investment. Kdan previously raised 5 million USD in its Series A round in 2018, at which point it was also named a Top-10 Best Software Company by the Silicon Review.
History
Inception
Kdan Mobile got its start in 2009, when Kenny Su developed PDF Reader, an application that converts documents into PDF files. The application was well-received by the market, and inspired Su to start Kdan.
Initial Products and Growth
Kdan’s flagship applications include: PDF Reader series, NoteLedge, and Animation Desk series. They have since added Markup, Write-on Video, and NoteLedge. Their applications are supported on all operating systems and are available on smart devices and desktops.
In July 2015, Kdan launched a new series of apps (Creativity 365), integrated with its own Cloud services.
The company built upon its experience with PDF technology by successfully introducing DottedSign, an eSign service, in spring 2019. DottedSign is available on any device or operating system.
All projects and documents created or edited in the company’s software can be stored on the Kdan Cloud, their online storage service. The Kdan Cloud was released as a standalone application available for download on Apple and Android devices in early 2019.
Partnerships
Kdan has expressed an interest in working with strategic business partners to increase their presence in international markets. In 2014, Kdan partnered with Nokia so their NoteLedge application would come pre-installed in Nokia’s Lumia 1320 for the Taiwanese market. Kdan entered into similar partnerships with Samsung (2014), Adonit (2014), LKKER (2017), and Qihoo 360 (2018). In 2019, the company partnered with Sourcenext, an independent consumer software provider in Japan. The goal of the partnership is to provide Kdan's PDF Reader to Japanese users through App Pass by Softbank, a Japanese multinational conglomerate.
References
Taiwanese companies established in 2009
Electronics companies of Taiwan
Application software
Companies based in Tainan
Software companies of Taiwan
Software companies established in 2009
Taiwanese brands |
44193 | https://en.wikipedia.org/wiki/DocBook | DocBook | DocBook is a semantic markup language for technical documentation. It was originally intended for writing technical documents related to computer hardware and software, but it can be used for any other sort of documentation.
As a semantic language, DocBook enables its users to create document content in a presentation-neutral form that captures the logical structure of the content; that content can then be published in a variety of formats, including HTML, XHTML, EPUB, PDF, man pages, Web help and HTML Help, without requiring users to make any changes to the source. In other words, when a document is written in DocBook format it becomes easily portable into other formats, rather than needing to be rewritten.
Design
DocBook is an XML language. In its current version (5.x), DocBook's language is formally defined by a RELAX NG schema with integrated Schematron rules. (There are also W3C XML Schema+Schematron and Document Type Definition (DTD) versions of the schema available, but these are considered non-standard.)
As a semantic language, DocBook documents do not describe what their contents "look like", but rather the meaning of those contents. For example, rather than explaining how the abstract for an article might be visually formatted, DocBook simply says that a particular section is an abstract. It is up to an external processing tool or application to decide where on a page the abstract should go and what it should look like or whether or not it should be included in the final output at all.
DocBook provides a vast number of semantic element tags. They are divided into three broad categories, namely structural, block-level, and inline.
Structural tags specify broad characteristics of their contents. The book element, for example, specifies that its child elements represent the parts of a book. This includes a title, chapters, glossaries, appendices, and so on. DocBook's structural tags include, but are not limited to:
set: Titled collection of one or more books or articles, can be nested with other sets
book: Titled collection of chapters, articles, and/or parts, with optional glossaries, appendices, etc.
part: Titled collection of one or more chapters—can be nested with other parts, and may have special introductory text
article: Titled, unnumbered collection of block-level elements
chapter: Titled, numbered collection of block-level elements—chapters don't require explicit numbers, a chapter number is the number of previous chapter elements in the XML document plus 1
appendix: Contains text that represents an appendix
dedication: Text represents the dedication of the contained structural element
Structural elements can contain other structural elements. Structural elements are the only permitted top-level elements in a DocBook document.
Block-level tags are elements like paragraph, lists, etc. Not all these elements can directly contain text. Sequential block-level elements render one "after" another. After, in this case, can differ depending on the language. In most Western languages, "after" means below: text paragraphs are printed down the page. Other languages' writing systems can have different directionality; for example, in Japanese, paragraphs are often printed in downward columns, with the columns running from right to left, so "after" in that case would be to the left. DocBook semantics are entirely neutral to these kinds of language-based concepts.
Inline-level tags are elements like emphasis, hyperlinks, etc. They wrap text within a block-level element. These elements do not cause the text to break when rendered in a paragraph format, but typically they cause the document processor to apply some kind of distinct typographical treatment to the enclosed text, by changing the font, size, or similar attributes. (The DocBook specification does say that it expects different typographical treatment, but it does not offer specific requirements as to what this treatment may be.) That is, a DocBook processor doesn't have to transform an emphasis tag into italics. A reader-based DocBook processor could increase the size of the words, or, a text-based processor could use bold instead of italics.
Sample document
<?xml version="1.0" encoding="UTF-8"?>
<book xml:id="simple_book" xmlns="http://docbook.org/ns/docbook" version="5.0">
<title>Very simple book</title>
<chapter xml:id="chapter_1">
<title>Chapter 1</title>
<para>Hello world!</para>
<para>I hope that your day is proceeding <emphasis>splendidly</emphasis>!</para>
</chapter>
<chapter xml:id="chapter_2">
<title>Chapter 2</title>
<para>Hello again, world!</para>
</chapter>
</book>
Semantically, this document is a "book", with a "title", that contains two "chapters" each with their own "titles". Those "chapters" contain "paragraphs" that have text in them. The markup is fairly readable in English.
In more detail, the root element of the document is book. All DocBook elements are in an XML Namespace, so the root element has an xmlns attribute to set the current namespace. Also, the root element of a DocBook document must have a version that specifies the version of the format that the document is built on.
(XML documents can include elements from multiple namespaces at once. For simplicity, the example does not illustrate this.)
A book element must contain a title, or an info element containing a title. This must be before any child structural elements. Following the title are the structural children, in this case, two chapter elements. Each of these must have a title. They contain para block elements, which can contain free text and other inline elements like the emphasis in the second paragraph of the first chapter.
Schemas and validation
Rules are formally defined in the DocBook XML schema. Appropriate programming tools can validate an XML document (DocBook or otherwise), against its corresponding schema, to determine if (and where) the document fails to conform to that schema. XML editing tools can also use schema information to avoid creating non-conforming documents in the first place.
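As a sketch of what such validation can look like outside a dedicated editor, the following Python fragment checks a DocBook file against the RELAX NG schema with the lxml library; the file names are placeholders, and the Schematron rules embedded in the schema would need a separate Schematron-aware step.

from lxml import etree

# Load the DocBook 5 RELAX NG schema (file name is a placeholder)
relaxng = etree.RelaxNG(etree.parse('docbook.rng'))

# Parse the document to be checked (file name is a placeholder)
doc = etree.parse('mybook.xml')

if relaxng.validate(doc):
    print('Document conforms to the schema')
else:
    # error_log reports each violation with its line number and message
    for error in relaxng.error_log:
        print(error.line, error.message)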
Authoring and processing
Because DocBook is XML, documents can be created and edited with any text editor. A dedicated XML editor is likewise a functional DocBook editor. DocBook provides schema files for popular XML schema languages, so any XML editor that can provide content completion based on a schema can do so for DocBook. Many graphical or WYSIWYG XML editors come with the ability to edit DocBook like a word processor.
Because DocBook conforms to a well-defined XML schema, documents can be validated and processed using any tool or programming language that includes XML support.
History
DocBook began in 1991 in discussion groups on Usenet and eventually became a joint project of HAL Computer Systems and O'Reilly & Associates; it later spawned its own maintenance organization (the Davenport Group) before moving in 1998 to the SGML Open consortium, which subsequently became OASIS. DocBook is currently maintained by the DocBook Technical Committee at OASIS.
DocBook is available in both SGML and XML forms, as a DTD. RELAX NG and W3C XML Schema forms of the XML version are available. Starting with DocBook 5, the RELAX NG version is the "normative" form from which the other formats are generated.
DocBook originally started out as an SGML application, but an equivalent XML application was developed and has now replaced the SGML one for most uses. (Starting with version 4 of the SGML DTD, the XML DTD continued with this version numbering scheme.) Initially, a key group of software companies used DocBook since their representatives were involved in its initial design. Eventually, however, DocBook was adopted by the open source community where it has become a standard for creating documentation for many projects, including FreeBSD, KDE, GNOME desktop documentation, the GTK+ API references, the Linux kernel documentation (which, as of July 2016, is transitioning to Sphinx/reStructuredText), and the work of the Linux Documentation Project.
Pre DocBook v5.0
Until DocBook 5, DocBook was defined normatively by a Document Type Definition (DTD). Because DocBook was built originally as an application of SGML, the DTD was the only available schema language. DocBook 4.x formats can be SGML or XML, but the XML version does not have its own namespace.
DocBook 4.x formats had to live within the restrictions of being defined by a DTD. The most significant restriction was that an element name uniquely defines its possible contents. That is, an element named info must contain the same information no matter where it is in the DocBook file. As such, there are many kinds of info elements in DocBook 4.x: bookinfo, chapterinfo, etc. Each has a slightly different content model, but they do share some of their content model. Additionally, they repeat context information. The book's info element is identifiable as such because it is a direct child of the book; it does not need to be named specially for a human reader. However, because the format was defined by a DTD, it did have to be named as such. The root element does not have or need a version, as the version is built into the DTD declaration at the top of a pre-DocBook 5 document.
DocBook 4.x documents are not compatible with DocBook 5, but can be converted into DocBook 5 documents via an XSLT stylesheet. One (db4-upgrade.xsl) is provided as part of the distribution of the DocBook 5 schema and specification package.
Output formats
DocBook files are used to prepare output files in a wide variety of formats. Nearly always, this is accomplished using DocBook XSL stylesheets. These are XSLT stylesheets that transform DocBook documents into a number of formats (HTML, XSL-FO for later conversion into PDF, etc.). These stylesheets can be sophisticated enough to generate tables of contents, glossaries, and indexes. They can oversee the selection of particular designated portions of a master document to produce different versions of the same document (such as a "tutorial" or a "quick-reference guide", where each of these consist of a subset of the material). Users can write their own customized stylesheets or even a full-fledged program to process the DocBook into an appropriate output format as their needs dictate.
Norman Walsh and the DocBook Project development team maintain the key application for producing output from DocBook source documents: A set of XSLT stylesheets (as well as a legacy set of DSSSL stylesheets) that can generate high-quality HTML and print (FO/PDF) output, as well as output in other formats, including RTF, man pages and HTML Help.
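A minimal sketch of such a transformation, using Python and lxml as one possible XSLT 1.0 processor, is shown below; the paths to the DocBook source and to the stylesheet from the DocBook XSL distribution are placeholders, and processors differ in how completely they support the stylesheets' extension functions.

from lxml import etree

# Parse the DocBook source and the DocBook XSL HTML stylesheet
# (both file paths are placeholders)
doc = etree.parse('mybook.xml')
transform = etree.XSLT(etree.parse('docbook-xsl/html/docbook.xsl'))

# Apply the transformation and write the resulting HTML document
result = transform(doc)
with open('mybook.html', 'wb') as out:
    out.write(etree.tostring(result, pretty_print=True))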
Web help is a chunked HTML output format in the DocBook XSL stylesheets that was introduced in version 1.76.1. The documentation for web help also provides an example of web help and is part of the DocBook XSL distribution.
The major features are its fully CSS-based page layout, search of the help content, and a table of contents in collapsible-tree form. Search has stemming, match highlighting, explicit page-scoring, and the standard multilingual tokenizer. The search and TOC are in a pane that appears as a frameset, but is actually implemented with div tags and cookies (so that it is progressive).
Simplified DocBook
DocBook offers a large number of features that may be overwhelming to a new user. For those who want the convenience of DocBook without a steep learning curve, Simplified DocBook was designed. It is a small subset of DocBook designed for single documents such as articles or white papers (i.e., "books" are not supported). The Simplified DocBook DTD is currently at version 1.1.
Criticism
Ingo Schwarze, the author of OpenBSD's mandoc, considers DocBook inferior to the semantic mdoc macro language for man pages. In an attempt to write a DocBook-to-mdoc converter (previous converters like docbook-to-man do not cover semantic elements), he finds the semantic parts "bloated, redundant, and incomplete at the same time" compared to elements covered in mdoc. Moreover, Schwarze finds the DocBook specification not specific enough about the use of tags, the language non-portable across versions, rough in details and overall inconsistent.
See also
List of document markup languages
Comparison of document markup languages
DocBook XSL
Darwin Information Typing Architecture
LinuxDoc
LaTeX
Mumasy
SilkPage
References
Further reading
Norman Walsh is the principal author of the book DocBook: The Definitive Guide, the official documentation of DocBook. This book is available online under the GFDL, and also as a print publication.
External links
DocBook.org - Collection of DocBook information, including a 4.x and 5.0 version of DocBook: The Definitive Guide and all versions of the DocBook schemas/DTDs.
DocBook Repository at OASIS - Normative home of DocBook schema/DTD.
DocBook XSL Project page at SourceForge.net, XSLT 1.0 Stylesheets for DocBook at GitHub
DocBook Demystification HOWTO
DocBook: The Definitive Guide, 1st edition, v. 2.0.6 - Fully bookmarked PDF of the Guide for DocBook 3.x and 4.x.
Writing with DocBook elements.
Markup languages
Document-centric XML-based standards
Technical communication
Technical communication tools
Software documentation
Open formats |
42698790 | https://en.wikipedia.org/wiki/HemoSpat | HemoSpat | HemoSpat is bloodstain pattern analysis software created by FORident Software in 2006. Using photos from a bloodshed incident at a crime scene, a bloodstain pattern analyst can use HemoSpat to calculate the area-of-origin of impact patterns. This information may be useful for determining position and posture of suspects and victims, sequencing of events, corroborating or refuting testimony, and for crime scene reconstruction.
The results of the analyses may be viewed in 2D within the software as top-down, side, and front views, or exported to several 3D formats for integration with point cloud or modelling software. The formats which HemoSpat exports include:
AutoCAD DXF
COLLADA
PLY
VRML
Wavefront OBJ
HemoSpat is capable of calculating impact pattern origins with only part of the pattern available, as well as impacts on non-orthogonal surfaces.
HemoSpat has also been used in research into what kind of information may be captured from cast-off patterns, methods of scene documentation, and in improving area-of-origin calculations.
References
External links
– official site
Blood
Forensic software
Software that uses Qt |
29282461 | https://en.wikipedia.org/wiki/Launchpad%20%28macOS%29 | Launchpad (macOS) | Launchpad is an application launcher for macOS introduced in Mac OS X Lion. Launchpad is designed to resemble the SpringBoard interface in iOS. The user starts an application by single-clicking its icon. Launchpad provides an alternative way to start applications in macOS, in addition to other options such as the Dock (toolbar launcher), Finder (file manager), Spotlight (desktop search) or Terminal (command-line interface).
Features
Launchpad is populated with icons corresponding to the applications found in the /Applications folder as well as in ~/Applications, that is, a folder named "Applications" in the user's home directory, and in any subfolders within those two folders. The user can add application icons to Launchpad. The user can also remove an application's icon, but the application itself might not be deleted if it was not originally downloaded from the Mac App Store. Apps can be arranged in named folders much like in iOS, and apps downloaded from the Mac App Store can be removed directly from Launchpad. In Mac OS X Lion, Launchpad had eight icons per row; this was changed in OS X Mountain Lion to seven icons per row. However, with appropriate permissions, users can change the number of icon rows and columns in Launchpad by adjusting hidden preference settings.
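As an illustrative sketch only, the widely reported (but unofficial) com.apple.dock preference keys springboard-rows and springboard-columns can be set from a script such as the following; these keys are not documented by Apple and their behaviour may change between macOS releases.

import subprocess

def set_launchpad_grid(rows, columns):
    """Set the Launchpad icon grid via unofficial Dock preference keys."""
    # springboard-rows / springboard-columns control the Launchpad grid size
    subprocess.run(['defaults', 'write', 'com.apple.dock',
                    'springboard-rows', '-int', str(rows)], check=True)
    subprocess.run(['defaults', 'write', 'com.apple.dock',
                    'springboard-columns', '-int', str(columns)], check=True)
    # Ask Launchpad to rebuild its layout, then restart the Dock
    subprocess.run(['defaults', 'write', 'com.apple.dock',
                    'ResetLaunchPad', '-bool', 'true'], check=True)
    subprocess.run(['killall', 'Dock'], check=True)

set_launchpad_grid(8, 10)   # example values: 8 rows by 10 columns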
Since Mac OS X Lion, the function key F4 is a keyboard shortcut to Launchpad. If enabled, Apple's gesture recognition software interprets a thumb-and-three-finger pinch on a touchpad as a command to open Launchpad.
The ability to search applications was added in OS X Mountain Lion.
In OS X Mavericks, Launchpad's background became a blurred version of the user's desktop background, and folders departed from the "linen" texture underlay, replaced with a darker translucent background (part of the move away from skeuomorphism).
In OS X Yosemite, folders in Launchpad now closely resemble those of iOS; rounded translucent squares with a 3x3 icon grid preview (of the contained applications) when closed, expanding into larger rectangular variants when opened. Furthermore, folders can now be paginated to accommodate more applications.
In macOS Big Sur, the Launchpad icon changed to a 3x3 grid with icons of different colors, resembling apps. However, the usage of Launchpad remained the same.
See also
At Ease
Comparison of desktop application launchers
Mac App Store
SpringBoard
References
External links
Mac Basics: Launchpad is the fast way to find and open your apps at Apple.com
WinLaunch—Launchpad alternative for Windows
2011 software
MacOS user interface |
9724762 | https://en.wikipedia.org/wiki/WebMethods%20Integration%20Server | WebMethods Integration Server | webMethods Integration Server is one of the core application servers in the webMethods platform.
It is a Java-based, multiplatform enterprise integration server. It supports the integration of diverse services, such as mapping data between formats and communication between systems. The Integration Server is also known as the core of the webMethods Enterprise Service Bus.
The Integration Server supports the Java and C/C++ programming languages for writing services, as well as a proprietary graphical language known as flow. It also supports graphical configuration of third-party system operations via the concept of "Adapter services".
The Integration Server exposes its administration, configuration and auditing facilities to the user via an HTML web interface.
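In addition to the administrative web interface, services deployed on the Integration Server are commonly invoked over HTTP. The following Python sketch illustrates one such call under stated assumptions: the conventional default port 5555, the /invoke URL pattern, and a hypothetical folder, service name and credentials.

import requests

# Hypothetical host, port, folder/service name and credentials
url = 'http://localhost:5555/invoke/my.folder/myService'

# Input pipeline values are passed as request parameters; services can also
# accept XML or JSON bodies depending on the configured content handler
response = requests.post(url,
                         data={'inputValue': '42'},
                         auth=('Administrator', 'manage'))  # placeholder credentials

print(response.status_code)
print(response.text)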
Product capabilities
Programming languages:
Java
webMethods Flow
webMethods DSP (Dynamic Server Pages)
C/C++
SQL (via graphical adapter services)
Protocols/Standards (core):
HTTP/HTTPS
FTP/FTPS (FTPS from 6.5 onwards)
webServices/SOAP & REST
JMS
XML
JSON
LDAP
SMTP
OAuth 2.0
Protocols/Standards (additional packages)
EDI/Flatfiles
XSLT
EDIINT
RosettaNet
Microsoft .net
SAP
Siebel
PeopleSoft
Remedy
Release history
The following is a list of the significant releases for the webMethods Integration Server.
webMethods Integration Server 10.7 – April 2021
webMethods Integration Server 10.5 – October 2019
webMethods Integration Server 10.4 – April 2019
webMethods Integration Server 10.3 – October 2018
webMethods Integration Server 10.2 – April 2018
webMethods Integration Server 10.1 – October 2017
webMethods Integration Server 10.0 – April 2017
webMethods Integration Server 9.12 – October 2016
webMethods Integration Server 9.10 – April 2016
webMethods Integration Server 9.9 – October 2015
webMethods Integration Server 9.8 – April 2015
webMethods Integration Server 9.7 – October 2014
webMethods Integration Server 9.6 – April 2014
webMethods Integration Server 9.5 – November 2013
webMethods Integration Server 9.0 – June 2013
webMethods Integration Server 8.2 – March 2010
webMethods Integration Server 8.0 – December 2009
webMethods Integration Server 7.1.2 – September 2008
webMethods Integration Server 7.1.1 – January 2008
webMethods Integration Server 7.1 – August 2007
webMethods Integration Server 6.5 SP3 – October 2007
webMethods Integration Server 6.5 SP2 – December 2006
webMethods Integration Server 6.5 SP1 – June 2006
webMethods Integration Server 6.1 SP2 – July 2006
webMethods Integration Server 6.1 SP1 – March 2006
webMethods Integration Server 6.1 FP2 – July 2004
webMethods Integration Server 6.1 – January 2004
webMethods Integration Server 6.0.1 – March 2003
webMethods Integration Server 4.6 – December 2001 (name change previously "webMethods B2B Server")
webMethods B2B Server 4.0.2 – September 2001
webMethods B2B Server 4.0.1 – May 2001
webMethods B2B Server 4.0 – March 2001 (name change, previously "webMethods B2B Integration Server")
webMethods B2B Integration Server 3.6 – October 2000
webMethods B2B Integration Server 3.5.1 – September 2000
webMethods B2B Integration Server 3.5 – August 2000
webMethods B2B Integration Server 3.1.2 – May 2000 (estimate)
webMethods B2B Integration Server 3.1 – March 2000 (estimate)
Environment
webMethods Integration Server is certified to run on the following platforms:
Solaris
HP-UX
AIX
Windows
Linux
AS/400
The application server itself runs on top of Java (1.4.X in version 6.X; 1.7.X in version 9.5; 1.8.X in version 9.10 of the Integration Server).
Installation
Installation of the product is via the webMethods Installer program.
See also
webMethods, Software AG – Company
webMethods Developer – The IDE for development functions (Deprecated. Replaced by webMethods Designer)
SAP Business Connector – A branded version of the Integration Server for SAP R/3 Users
References
External links
webMethods was acquired by Software AG
Software AG
Software AG TECHcommunity – Software AG's community website featuring webMethods Integration Server articles, tutorials and downloads
webMethods Integration Forum – Questions and answers on webMethods Integration
Service Profiler for webMethods Integration Server – 3rd party tool.
IwTest - Advanced Test Automation for webMethods (3rd party tool)
Patent US8650320 - Integration server supporting multiple receiving channels
Web services
Service-oriented architecture-related products
Java enterprise platform
2014 software |
23883203 | https://en.wikipedia.org/wiki/1967%20Oregon%20State%20Beavers%20football%20team | 1967 Oregon State Beavers football team | The 1967 Oregon State Beavers football team represented Oregon State University in the 1967 NCAA University Division football season. The Beavers ended this season with seven wins, two losses, and a tie, and outscored their opponents 187 to 137. Led by third-year head coach Dee Andros, Oregon State finished with a 7–2–1 record, 4–1–1 in the Athletic Association of Western Universities (informally the Pacific-8, a name it officially adopted the following year), and tied for runner-up for a second consecutive year.
In a four-week period, the Beavers became the only team to ever go undefeated against three top-two teams in one season since the inception of the AP Poll, earning the nickname "Giant Killers".
Schedule
The Beavers had a 7–2–1 record, 4–1–1 in the Athletic Association of Western Universities. Ranks are prior to kickoff.
Roster
QB Steve Preece, Jr.
OG Rocky Rasley, Jr.
Skip Vanderbundt, Sr. (defense)
Bobby Mayes, Jr. (offense)
C John Didion, Jr. (offense)
Game summaries
Before the season
Oregon State ended the 1966 season on a six-game winning streak. Nobody expected much out of the Beavers in 1967; even the Oregon State media guide said that the Beavers would be "rebuilding" in 1967. Quarterback Paul Brothers, who led Oregon State to the Rose Bowl as a sophomore after the 1964 season, and fullback Pete Pifer, the Beavers' all-time leading rusher, had graduated. The starting quarterback was junior Steve Preece, the wunderkind Andros had recruited from Boise, Idaho, shortly after he arrived at Oregon State. Another newcomer on offense was junior fullback Bill "Earthquake" Enyart, who previously backed up Pifer. Too talented to keep off the field, Enyart had earned All-Coast honors at linebacker in 1966. The 1967 team only boasted six seniors.
Stanford
Stanford made their first trip to Portland, Oregon, in 12 years. Oregon State and Stanford met for the first time in three years, when the Indians almost derailed the Beavers' Rose Bowl trip. Oregon State entered the game a one-point favorite. In the second quarter, the Beavers' Billy Main opened scoring by running in untouched from five yards out. Mike Haggard's extra point gave Oregon State a 7–0 lead. The lead lasted all of fifteen seconds, as Stanford's Nate Kirtman returned the kickoff 98 yards for a touchdown, knotting the score at seven. Six minutes later, a ten-yard punt return by Mark Waletich gave the Beavers the ball at the Indians' 39. Oregon State drove 34 yards to set up Mike Haggard's 22-yard field goal with less than three minutes left. On the ensuing kickoff, Stanford's Gene Washington mistakenly downed the ball at his own one-yard line. The Indians could only manage four yards and Stanford could only manage a 10-yard punt. The Beavers could only manage four yards themselves, but Haggard's 28-yard field goal gave Oregon State a 13–7 lead with just over a minute left in the first half. In the second half, Stanford could only muster a 44-yard field goal attempt, which fell short of the crossbar. The Beavers' Skip Vanderbundt killed off three drives with interceptions. His second was with less than five minutes left. His last was at the Oregon State 16 with 1:25 left, which effectively ended the game. The Beavers did not win another season opener for a decade, under second-year head coach Craig Fertig.
Arizona State
In the heat, Andros summed up the playing conditions by saying, "It's hot as hell." Arizona State caught the first break, recovering an Oregon State fumble at the Beaver 20. Oregon State's Mike Groff ended the threat by intercepting Ed Roseborough at the Beaver 15. Six plays later, Steve Preece dove into the end zone for a 7–0 lead. The Sun Devils responded by driving 63 yards for a touchdown of their own but missed the extra point. Late in the second quarter, Oregon State's Don Summers ran in from one yard and Mike Haggard's extra point split the uprights. The Beavers went into the locker room up 14–6.
In the first four minutes of the second half, Oregon State extended the lead on Preece's six-yard touchdown, but Haggard's extra point was blocked. In the third quarter, Arizona State benched Roseborough in favor of Rick Shaw. Shaw led the Sun Devils on a 47-yard drive and hit J. D. Hill for a two-point conversion to pull within six. Midway through the fourth quarter, Preece ran in for his third touchdown of the game. Arizona State countered with a late touchdown to make the score a more-respectable 27–21 but never seriously threatened after that. The Sun Devils went 8–1 the rest of the way, only losing by two to undefeated Wyoming, which finished #6 in the AP Poll, one spot in front of the Beavers.
Iowa
In the seventeen seasons from 1956–1972, Oregon State and Iowa played 12 times, more times than Oregon State played conference opponents California and UCLA in the same period. The Hawkeyes won the first five meetings, but the Beavers won the sixth 17–3 in 1966. In the first seven minutes, Oregon State built a 14–0 lead. Steve Preece scored the first, running untouched into the end zone from 35 yards out. Billy Main scored the second by dragging two defenders into the end zone. Main tacked on a second touchdown from 40 yards out later in the quarter. The Beavers had a chance to add another first-quarter touchdown but fumbled at the one. Oregon State got the ball back at their own six, and drove 94 yards on 11 plays. Bill Enyart's two-yard second quarter plunge put the Beavers up 28–0. Mike Haggard tacked on a 27-yard field goal for a 31–0 halftime lead. Iowa managed to outscore Oregon State 18–7 in the second half, although the Hawkeyes' final touchdown came with three seconds left. The nine-game winning streak remains the Beavers longest since 1962–1963.
Washington
By the time Oregon State arrived in Seattle, the season was already beginning to take its toll on the Beavers. Defensive starters, Harry Gunner and Mark Waletich, were sidelined. Gary Houser was unable to play tight end but still managed to perform his punting duties. Oregon State fumbled on its second play from scrimmage. Washington drove to the six before Charlie Olds came up with an interception in the end zone. The Beavers subsequently drove 80 yards in 14 plays for a touchdown, capped off by Steve Preece's one-yard plunge. Mike Haggard shanked the extra point, one of only three missed extra points in 1967, to keep the score 6–0. The Huskies responded by driving 72 yards to set up a 21-yard field goal to pull within three. Late in the half, a bad punt was nullified because the Beavers were called for clipping; instead of Oregon State getting the ball at their own 42, Washington took over at the Beaver 36. The Huskies drove 27 yards in five plays to set up a 26-yard field goal, which sent the teams into the locker rooms tied at six. Neither team threatened until the fourth quarter. The Beavers fumbled at their 35. From there, Washington's Carl Wojchiechowski ran in from 18-yards out with two minutes left to win 13–6.
Brigham Young
Brigham Young was 2–1. Their only loss was to undefeated Wyoming in Laramie. Wyoming finished the season #6 in the AP Poll. The Cougars had defeated New Mexico and West Michigan by a combined 55 points. On Oregon State's second play from scrimmage, Steve Preece was intercepted. Eight plays later, the Cougars were up 7–0. Later in the quarter, Don Whitley intercepted a Brigham Young pass and returned it to the Cougar two. On the next play, Bill Enyart plowed in for a touchdown, knotting the score at seven. The Beavers’ best drives in the second quarter ended on a fumble and a missed 50-yard field goal. Brigham Young scored on a 40-yard field goal with less than three minutes left to take a 10–7 lead. They tacked on a 68-yard touchdown pass to take a 10-point lead into halftime.
Early in the fourth quarter, the Cougars added another touchdown for a 24–7 lead. Brigham Young was driving again in the fourth quarter, but Mark Waletich came up with an interception at the Oregon State two yard-line. The Beavers drove 98 yards, including a 31-yard touchdown pass from Preece to Billy Main to pull within 11. The Beavers’ best drive after that point ended after an interception. After Oregon State's defense forced a punt, the Beavers’ next drive ended when Preece's pass bounced off a receiver's helmet and was intercepted by Bobby Smith. Smith returned the interception 27 yards for a touchdown to wrap up the Brigham Young victory. Oregon State committed 11 turnovers in the game, one fewer than the 12 the Beavers committed over the next five games.
Purdue
Entering the game, three undefeated teams topped the AP Poll: the Trojans, the Boilermakers, and the Bruins. All three were on the Beavers’ schedule over the next four weeks. The first was #2 Purdue, the defending Rose Bowl champions on a nine-game winning streak. In 1967, the Boilermakers started the season beating Texas A&M in Texas and #1 Notre Dame. The week before Oregon State came to West Lafayette, Purdue beat Ohio State 41–6 in Columbus. The win remains the largest by any team over the Buckeyes in Ohio Stadium since 1946.
Purdue's stars were Mike Phipps at quarterback, Jim Beirne at end and Leroy Keyes at cornerback, running back, and punt and kick returner. Keyes was the nation's leading scorer in 1967 and would finish third in the Heisman balloting in 1967 and second in 1968. In both years, he was an All-American at both cornerback and running back. He appeared on the cover of Sports Illustrated's 1968 college football preview. In 1987, he was voted the all-time greatest Purdue football player. In 2004, College Football News voted him the 86th best football player of all-time. Entering the game, Phipps led the nation in total offense. Phipps would finish second in the Heisman balloting in 1970. Beirne was an All-American in 1967 and actually broke the Purdue all-time record for receptions during the game. The Boilermaker faithful did not give the Beavers much of a chance, erecting tombstones with the Oregon State players’ names on them. The Beaver coaches made sure to drive the Oregon State bus past them the day before the game. Purdue entered the game 20-point favorites.
The day before the game, the voice of Oregon State football, Bob Blackburn was at a tuxedo-required event in Seattle. After the event ended, he realized that he would not have time to change before his flight, so he flew to Indiana and called the game in his tuxedo.
Oregon State's first drive went 82 yards, ending in a touchdown on an 18-yard touchdown pass from Steve Preece to a wide open Roger Cantlon for a 7–0 lead. The Boilermakers took less than two minutes to drive 62-yards for their own touchdown on Keyes’ 15-yard run to tie the game at seven with more than ten minutes left in the first quarter. At that point, the defenses took over, holding both offenses scoreless for more than 24 minutes. Late in the second quarter, Jess Lewis and Jon Sandstrom combined to recover a fumble at the Purdue 26. The Beavers drove 17 yards to set up Mike Haggard's 26-yard field goal with 46 seconds left in the half. Oregon State went into the locker room 30 minutes away from pulling off the upset.
In the third quarter, Keyes scored his second touchdown on a seven-yard run to give the Boilermakers the lead for the first time, 14–10. After the touchdown, the Beavers’ defense stiffened, not allowing Purdue past the Oregon State 40 for the rest of the game. Late in the third quarter, the Beavers pulled within one on Haggard's 32-yard field goal. With 6:35 left, Jess Lewis came up with his second fumble at the Boilermaker 30. Six of the next seven plays, Preece handed off to Bill Enyart, who capped the drive with a four-yard run with 3:54 left. However, the two-point conversion failed, leaving Oregon State in front by five. Haggard was instructed to kick the ball away from Keyes. He lofted the ball high in the air, and Purdue was unable to field the kick, which was recovered by the Beavers' Mel Easley on the Boilermaker 28. Oregon State only managed seven yards, but Haggard converted his third field goal, a 38-yarder with 1:06 left to put the Beavers on top 22–14. Purdue's last hope evaporated when Mike Groff intercepted the Boilermakers' first pass on their next drive to seal the victory. 2000 people turned out at the Corvallis Airport to welcome the team home. The Beavers' win remained Oregon State's only visit to West Lafayette until 2021. It remains Oregon State's sole win in West Lafayette.
Upon finding out that Blackburn had worn a tuxedo to the game, Dee Andros asked Blackburn to keep wearing the tuxedo, which he did for the rest of the season.
Washington State
After a first quarter Washington State punt, personal fouls on back-to-back plays gave Oregon State the ball at the Cougar 12. Five Bill Enyart carries later, the Beavers were up 7–0. Washington State's best drives of the first half ended in a missed 40-yard field goal and a fumble that Skip Vanderbundt recovered at the Oregon State 31. On the ensuing drive, a 28-yard Billy Main carry gave the Beavers the ball at the Cougar nine. Enyart carried three consecutive times for his second touchdown and a 14–0 Oregon State lead with 1:38 left. The Beavers held Washington State to three-and-out and got the ball back at their own 27. After a 15-yard screen pass from Steve Preece to Main, they hooked up again for a 58-yard touchdown and a 21–0 Oregon State lead at halftime.
On the Cougars' opening drive of the second half, they pulled within two touchdowns. Two minutes later, Washington State recovered a blocked Gary Houser punt at the Beaver four. However, the Cougars got no closer on four plays to end the threat. Oregon State tacked on two fourth-quarter touchdown runs by Don Summers and reserve quarterback Bobby Mayes to take a 35–7 lead. WSU threatened one last time, but the Beavers' Larry Rich made a touchdown-saving tackle at the Oregon State three to preserve the 28-point homecoming victory.
UCLA
Oregon State and UCLA met for the first time since 1958, the final year of the Pacific Coast Conference. The game pitted the Beavers against their former coach, Tommy Prothro, for the first time. The game was the third between Dee Andros and Prothro. Prothro had won the first two. The Bruins began the season by beating #3 Tennessee. They followed the win by beating Pittsburgh on the road by 32. Later in the season, they returned to Pennsylvania and beat Penn State by two. The Nittany Lions wound up tenth in the AP Poll; the loss to UCLA was their biggest loss of the year.
The star for UCLA was quarterback Gary Beban, who went on to win the 1967 Heisman and Maxwell Trophies. He and their three All-Conference linemen were the biggest reasons the Bruins were averaging 31 points an outing, averaging victories over their opponents by more than 15 points a game. There was no All-Conference selection for kicker in 1967, but UCLA's Zenon Andrusyshyn almost certainly would have been the All-Conference selection. The Bruins’ bye week was the previous week, so UCLA had two weeks to prepare for the Beavers. Oddsmakers initially made the Bruins a 13-point favorite but gamblers loaded up on upset-minded Oregon State. At kickoff, the spread was a mere seven points.
In the first quarter, the Beavers got the first break, when Oregon State's Jim Belcher came up with a fumbled punt at the UCLA 38. Two plays after Steve Preece scrambled for 35 yards, Bill Enyart bowled in from one yard out for a 7–0 lead. At the beginning of the second quarter, the Bruins stopped Enyart six inches from the end zone, but Enyart spun into the end zone. The referees ruled that Enyart's forward progress had been stopped and gave the ball to UCLA. On the ensuing drive, the Beavers' Bill Nelson jarred the ball loose from the Bruins' Rick Purdy. Oregon State's Jess Lewis recovered the fumble, but one of the officials had blown the play dead, while the ball was still in the air. The officials ultimately awarded the ball to UCLA, who drove 99 yards for a touchdown, knotting the score at seven. UCLA followed the touchdown with a 52-yard Andrusyshyn field goal. With less than two minutes left, the Bruins recovered a blocked Gary Houser punt at the Beaver 16. Oregon State's defense did not allow the Bruins a yard, but Andrushyn kicked a 33-yard field goal to give UCLA a 13–7 halftime lead.
Both teams' defenses dominated most of the third quarter, but the Beavers' Billy Main managed to scamper into the end zone from nine yards out. Mike Haggard's all-important extra point hit the left upright, which preserved a 13-all tie. In the fourth quarter, the Bruins put together a 71-yard drive aided by an inadvertent whistle, which nullified a UCLA fumble. The Bruins had to settle for a 26-yard field goal. UCLA threatened again later in the quarter, but Mark Waletich intercepted a Beban pass in the Oregon State end zone with less than two minutes left. The Beavers drove 69 yards in less than a minute but faced fourth-and-six at the Bruin 11. Andros opted to try a field goal. Haggard's 28-yard field goal split the uprights with 1:15 left. The Bruins, trying to avoid the first blemish on their record drove to the Oregon State 23. Andrushyn came on for a 40-yard field goal attempt, but Ron Boley batted down the kick with less than 10 seconds left to preserve the tie. The 16 points were the fewest that UCLA would score in 1967. Preece was voted AAWU Back of the Day. Defensive end Harry Gunner was voted AAWU Lineman of the Day. 1000 people turned out at the Corvallis Airport to welcome the team home. The game had barely ended when Dee Andros began being assailed by questions about Oregon State's chances against the #1 Trojans. He finally grew sick of it and said, "I'm tired of playing these number two ranked teams. Bring on number one."
Southern California
The #1 Trojans were a juggernaut. In the 1960s, USC would finish no worse than second in their conference, winning six conference championships, playing in five Rose Bowls and winning two national championships. The 1967 USC Trojans football team may have been the best Trojan team in the decade. The Sporting News ranked that USC team as the #9 team of the 20th century. Their non-conference schedule included #1 Notre Dame in Notre Dame; #3 Michigan State in East Lansing; and #4 Texas in the Coliseum. Southern California started off the non-conference slate with a 17–13 win over Texas. Then, they defeated Michigan State 21–17. In the Battle for the Jeweled Shillelagh, the Trojans defeated the Irish 24–7 at Notre Dame. The 17-point loss served as the largest margin of defeat the Irish would endure at Notre Dame between 1963 and 1976. When the Trojans rolled into Corvallis, they were winning their games by an average of more than 20 points against a very difficult schedule. The game marked the Trojans' first-ever trip to Corvallis. All previous Oregon State "home" games between the two teams had been held in Portland and Tacoma.
USC's two biggest stars were right tackle Ron Yary and halfback O. J. Simpson. Yary was the best lineman in the country and would win the Outland and Rockne Trophies at the end of the year. Simpson led the country with 1,050 rushing yards. He would go on to finish second on Heisman ballots in 1967 and would win the trophy in 1968. Both players would wind up as the first overall pick of the NFL draft after their respective senior seasons, and each would enter both the College Football and Pro Football Halls of Fame. On defense, the Trojans had three First Team All-Americans: Tim Rossovich at end, Adrian Young at linebacker, and Mike Battle in the secondary. The game was highly anticipated. California governor (and future President) Ronald Reagan and Oregon governor Tom McCall made the trip. Reagan had famously said he would handpick a box of oranges if Oregon State won. Tom McCall turned the boast into a bet when he offered to put up a freshly caught silver salmon against Ronald Reagan's handpicked box of oranges. The game was held on Veterans Day, so, along with the two governors, ten generals and admirals, including Lt. General Jimmy Doolittle; three Congressional Medal of Honor recipients; and the Air Force Academy Drum and Bugle Corps were on hand. Additionally, the 1942 Oregon State Rose Bowl team was celebrating its 25th anniversary and was in attendance. All told, 41,494 fans filled the 40,750-seat stadium. It was the most-attended single sporting event in the history of Oregon to that date. The weather, which became a topic of contention after the fact, was typical for a November in Oregon. From the eighth to the eleventh, only .83" of rain fell. At kickoff, the #1 Trojans were 11-point favorites over the #13 Beavers.
On the Trojans' first play from scrimmage, Simpson quickly showed he was worthy of Heisman consideration, rushing for 40 yards around left end. However, the Trojans were forced to settle for a 36-yard field goal attempt, which sailed wide right. The Trojans did not get any closer to the Beaver end zone for the rest of the game. By the end of the first quarter, Simpson had already rushed for 87 yards. Early in the second quarter, the Juice finally broke loose. He shook off a tackler at the Trojan 37 and steamed upfield with four blockers to lead him. He only had one man to beat, Mark Waletich. Simpson slowed down in an attempt to allow his blockers to make a play on Waletich. Waletich stayed in front of Simpson long enough and, out of nowhere, Jess Lewis closed on Simpson, eventually dragging O.J. down from behind at the Beaver 32. USC would get eight yards on the next three plays. Rather than attempt a 41-yard field goal, the Trojans went for it, but Ron Boley tackled the Trojans' quarterback, Steve Sogge, for no gain. Later in the second quarter, Oregon State's Skip Vanderbundt came up with a Southern California fumble at the Trojan 47. Over the next eight plays, the Beavers rushed for 34 yards, all on running plays by Bill Enyart, Steve Preece, and Billy Main. On fourth-and-three at the Trojan 13, Mike Haggard's 30-yarder split the uprights for a 3–0 lead. After holding USC to a three-and-out, Oregon State's Bob Mayes ran 25 yards on a reverse. However, Haggard's second attempt from 28 yards sailed wide right, and the half ended with a 3–0 Oregon State lead.
In the third quarter, Enyart took off from the Beaver 24, and was not caught until he reached the Trojans' 19. When he was tackled, Enyart fumbled. The fumble was recovered by USC's team captain, Adrian Young. As the game wore on, both defenses only seemed to get stronger. Early in the fourth quarter, USC faced a third-and-two at its own 23. Ron Boley dropped Steve Sogge for a loss. Later in the quarter, the Trojans had their best scoring opportunity of the second half, when they faced third-and-one at Oregon State's 42. Boley again tackled Sogge in the backfield for a two-yard loss. Oregon State's returner, Charlie Olds, received the ensuing punt at the Beaver nine and raced downfield. He was hit at the Trojan 35-yard line and fumbled. The ball bounced near Olds, but not near enough for him to recover it. Instead, Olds knocked the ball out of bounds. The referees called a penalty for illegal batting, which was a personal foul, penalized by an automatic change of possession. USC was unable to generate a first down on the drive. In the last 44 minutes of the game, the Trojans managed just three first downs and only crossed midfield twice. Perhaps the best boost for the defense was the punting of Gary Houser. USC did not start a drive beyond their own 35 after a Houser punt all game long. With three minutes left, Skip Vanderbundt forced a Simpson fumble. It proved to be Simpson's final carry. He ended with 188 yards rushing but, more importantly, no touchdowns. Fittingly, Jess Lewis came up with the fumble at the Trojan 35. Oregon State's offense was so enthused that they managed their only first downs of the second half, which enabled the Beavers to run out the clock. The Beavers faced fourth-and-one at the Trojan 38 with 34 seconds left. Enyart ran for one yard and a first down. Preece took a knee to end the game.
The Los Angeles Examiner wrote of Oregon State "Giant-killers? Heck, today they're the giants." That night, Andros received congratulatory calls from Governor McCall, Senator Mark Hatfield, and Oklahoma's former head coach (and Andros' old coach), Bud Wilkinson. Oregon Journal sports editor George Pasero reported that Trojan Athletic Director Jess Hill told the Los Angeles papers that an AAWU rule would be passed to require teams to use tarpaulins in the week prior to a game. Governor Reagan issued a statement that he would help to purchase the tarp and that he would have his $1 donation in the mail shortly. Upon hearing the quotes from Hill and Reagan, Andros told the Journal that he would consider using a tarpaulin, if the Los Angeles schools purchased "a couple of big fans and blow the smog out of the Coliseum." No AAWU rule was ever passed about the use of tarpaulins.
Enyart finished with 135 yards. Simpson had 188 yards, 81 of them on two carries. The other Trojans were held to 18 yards combined. The 3–0 loss was the last time the Trojans would be shut out until they went on probation in 1983. USC would not lose another regular season game until 1970; for the rest of the 1960s, their only blemishes would be ties against arch-rival Notre Dame in 1968 and 1969, and a loss to national champion Ohio State in the 1969 Rose Bowl. The 1967 Oregon State team remains the only college football team to go undefeated against three top-two teams in the same season. It is unclear whether Tom McCall ever received the box of oranges Ronald Reagan had promised to hand-pick. UCLA's 48–0 win over Washington the same day eliminated Oregon State from the Rose Bowl race. Conference rules did not permit more than one team to go to a bowl game at the time, so the Civil War would be Oregon State's last game of the year. On November 18, #1 UCLA and #2 USC battled for the Victory Bell in the Coliseum. UCLA was 7–0–1 and USC was 8–1. It has been dubbed the "Game of the Century" and the "signature game" in the rivalry. The Trojans' 21–20 win, keyed by Simpson's 64-yard fourth-quarter scamper, helped propel them to a national championship. The Beavers' 1967 win over USC would be their last over the Trojans until 2000.
Oregon
The final opponent on #8 Oregon State's schedule was 2–7 Oregon. The Webfoots were not another giant, but they were improving. Their two victories had both come in the previous four weeks. Oregon's strength was their defense behind defensive coordinator John Robinson, All-Conference nose guard George Dames, and All-Conference defensive back Jim Smith. This was the first Civil War game at Autzen Stadium, which opened two months earlier with natural grass. The attendance was 40,100, a Civil War record at the time.
Gary Houser's first punt was partially blocked and recovered by the Webfoots at the Beaver 31. On their first play from scrimmage, Oregon's Eric Olson threw a 20-yard pass. The Webfoots would only gain one yard on three plays and had to settle for a 27-yard field goal. Before halftime, Oregon State's Bill Enyart fumbled twice inside Oregon's 10-yard line. Charlie Olds ended the Webfoots' best drive of the second quarter by picking off an Eric Olson pass in the Beaver red zone.
Oregon State's first drive of the second half ended on a Beaver fumble at Oregon's 43. The Webfoots capitalized, quickly finding themselves with first-and-goal at the Beaver three. Oregon State's defense did not fold, stopping Oregon a foot short of the end zone on third-and-goal. However, the Webfoots dove in on their fourth attempt, increasing their lead to 10–0 with five minutes left in the third quarter. Oregon State was in dire straits after fumbling again at their own 45-yard line. Oregon drove 15 yards but missed a 47-yard field goal. Early in the fourth quarter, Oregon State finally hit their stride. Starting at their own 20, the running game began finding holes over and through the Duck defense. On one third-and-eight, Steve Preece found Don Summers for a 35-yard gain. On the next play, Roger Cantlon slipped and fell down but still managed to haul in a pass at Oregon's one-yard line. From there, Enyart plowed over the Duck defenders and into the end zone, cutting Oregon's lead to 10–7 with nine minutes left. Oregon State's defense responded by forcing Oregon to go three-and-out. The punt only carried to the Beavers’ 45. Nine plays later, Oregon State had the ball first-and-goal on the four-yard line. The Webfoots loaded up the middle to try to stop Enyart; however, Steve Preece threw them a curve, running around left end for a touchdown with two-and-a-half minutes left to take the lead, 14–10. After getting the ball back, Oregon netted only seven yards on its final four plays, turning the ball over to Oregon State. The Webfoots did not get the ball back.
Final standing
The Civil War victory propelled the Beavers to #7 in the final AP Poll, which was their best ever final ranking. It would take another 33 years for Oregon State to be ranked any higher. Oregon State's 7–2–1 record was its best between 1962 and 2000. It is all the more impressive because the Beavers were only favored to win three of the ten games they played.
NFL/AFL Draft
Three Beavers were selected in the 1968 NFL/AFL Draft, the second common draft, which lasted seventeen rounds (462 selections).
References
Further reading
John H. Eggers (ed.), Oregon State University: Football Facts, 1967, Corvallis, Oregon: Oregon State University Sports Information Department, 1967.
External links
WSU Libraries: Game video – Washington State at Oregon State – October 28, 1967
Oregon State
Oregon State Beavers football seasons
Oregon State Beavers football |
25317081 | https://en.wikipedia.org/wiki/Postage%20stamps%20and%20postal%20history%20of%20Poland | Postage stamps and postal history of Poland | Poczta Polska, the Polish postal service, was founded in 1558 and postal markings were first introduced in 1764. The three partitions of Poland in 1772, 1793 and 1795 saw the independent nation of Poland disappear. The postal services in the areas occupied by Prussia and Austria were absorbed into those countries' postal services. In 1772 the area occupied by Austria was made into the Kingdom of Galicia, a part of the Austrian Empire. This lasted till 1918. The Duchy of Warsaw was created briefly, between 1807 and 1813, by Napoleon I of France, from Polish lands ceded by the Kingdom of Prussia under the terms of the Treaties of Tilsit. In 1815, following Napoleon's defeat in 1813, the Congress of Vienna created Congress Poland out of the Duchy of Warsaw and also established the Free City of Kraków. Congress Poland was placed under the control of Russia and the postal service was given autonomy in 1815. In 1851 the postal service was put under the control of the Russian post office department regional office in St Petersburg. In 1855 control was restored for a while to the Congress Kingdom, but following the uprising of 1863 the postal service again came under Russian control from 1866, and this continued until World War I. In November 1918 the Second Polish Republic was created.
1958 was the 400th anniversary of the Polish postal service and was commemorated with an issue of seven stamps, a miniature sheet, a book "400 Lat Poczty Polskiej", a stamp exhibition in Warsaw and a number of commemorative postmarks.
History
The earliest record of a postal system in Poland dates from the year 1387 and concerns merchants who organised a private system and introduced horse riders to replace foot letter carriers. In 1530 a monthly postal service from Kraków to Rome was introduced by the Fugger bankers of Venice.
On 17 October 1558 Sigismund II Augustus appointed Prospero Provano, an Italian merchant living in Kraków, to organise a postal service in Poland. He was paid 1,500 thalers per annum by the royal treasury to run the postal service. He merged all the private postal services into a single postal service. Royal mail and mail from some monastic orders was carried free. All other mail was paid for.
Meanwhile, since 1516, the house of Thurn and Taxis had been running an international postal delivery service. The Polish King decided to transfer the Polish postal system to the Taxis family and did this on 11 July 1562. Christopher Taxis received the same annual salary as Provano. He ran the system as a commercial venture and because of his extravagance the postal system deteriorated. Sigismund II Augustus terminated the contract with the Taxis family.
On 9 January 1564 Peter Moffon was appointed postmaster general by the Polish King. Moffon, another Italian merchant living in Kraków, was given the postal contract for five years. On 15 June 1569 he was replaced by Sebastiano Montelupi. When King Sigismund Augustus died in 1572, Montelupi continued the service at his own expense for two years. The public postal service then ceased for a period of some 11 years, although a system reserved to royal use was rebuilt from 1574 onwards.
On 29 January 1583 Sebastiano Montelupi and his nephew (and adopted heir), Valerio Montelupi, were given a contract to run the postal service for five years. When giving the contract the King, Stefan Batory, introduced a uniform postal rate of 4 groszy per letter not exceeding 1 łut (about 12.66 grams) for any distance in Poland. This was the first uniform postal rate to be introduced in the world. Sebastiano Montelupi died in 1600 aged 84 and Valerio Montelupi continued to run the postal service till his death in 1613.
Free City of Kraków
In 1815 the Free City of Kraków had a population of 95,000, of whom 23,000 were actually in Kraków, the remainder in the surrounding area. Under the constitution that it had been given, the city was responsible for the post.
A central post office was already in existence, from the period when Kraków was part of the Duchy of Warsaw. On 1 June 1816 the Post Office of the Free City of Kraków took control of the existing central post office. Its staff consisted of a director, four secretarial staff, two postmen and a conductor. Two post stations were established in Krzeszowice and Cło. Post routes to and from each of the three Polish areas, Galicia, Congress Poland and Prussia were soon established.
Under the constitution the Free City had the exclusive right to private mail. The three powers could only transport official mail. However, on 1 December 1816 the Prussian Government set up a post office and a mail delivery from Kraków to Prussia. Despite protests from the Free City, the Prussians continued. On 16 May 1818 the Austrians followed suit, setting up a post office and a mail delivery to Galicia.
In the first full financial year, 1816/17, the Kraków post office had a profit of 18,887 złoty. By 1822/23, because of the competition, this had reduced to 2,802 złoty, despite the increase in population and increase in traffic.
The delivery of letters was undertaken by two postmen, who collected 4 groszy for each letter delivered or 8 groszy if it was a money letter. In 1825 these fees were reduced by half.
In the year 1833/4 the Kraków post office dealt with a total of 66,910 letters, an average of 185 per day. In December 1834 the senate of the Free City of Kraków received a notice from Congress Poland that they would be setting up a post office in Kraków. Protests brought no result. In August 1836 the Free City of Kraków came to an agreement with Congress Poland to cease operating their own post office and to rent their post office building to them from 1837. In return Kraków was to receive an annual fee of 12,000 złoty.
The Prussian post office used three different date stamps: (1) a two-line handstamp, (2) a two-ring cancel and (3) a single-ring cancel – all with KRAKAU and the date. The Austrian post office used a single-line handstamp with the text CRACAU. The Congress Poland post office used two datestamps, one with a single outer ring and one with a double outer ring, with the text KRAKÓW.
Postmarks of the Congress Kingdom
From 1815 the postmarks were in Polish. From 1860 the postmarks were in Russian and Polish. From 1871 the postmarks were in Russian only.
The different types of standard postmarks that were used are as follows:
One-line inscription in Polish – earliest known is 1821, latest is 1870
One-line inscription in Polish in rectangular frame – earliest 1829, latest 1870
Two-line inscription, with town name in Polish and date in Latin – earliest 1814, latest 1844
Two-line inscription in Russian and Polish – earliest 1860, latest 1870
Two-line inscription in Russian and Polish in rectangular frame – earliest 1860, latest 1870
Two-line inscription in Russian – earliest 1871, latest 1890
Three-line inscription in Russian in rectangular frame – earliest 1871, latest 1877
Circular postmark with inscription in Polish and two-line date in numerals – earliest 1829, latest 1868
Circular postmark with inscription in Russian and Polish and two-line date in numerals, earliest 1860, latest 1870
Circular postmark with inscription in Russian – latest 1917
Four concentric circles with a number in the centre, the number denoting the post office – earliest 1858, latest 1870
Other rare non-standard postmarks are also known.
First Polish stamp
The first Polish stamp was issued for the Congress Kingdom on 1 January 1860 (Gregorian calendar). Because 1 January was a Sunday the stamp was not actually available until the following day. The design was similar to the contemporary Russian stamps with the arms of the Congress Kingdom in the centre. The engraving was done by the Polish Bank engraver Henryk Mejer. The drawings he used were found in the archives at St Petersburg but the name of the artist remains unknown. The stamps were printed by the government printers in Warsaw on the orders of the Congress Kingdom postal service. The letterpress machine used was invented by Izrael Abraham Staffel (1814–1884) for printing in two colours. The machine was capable of printing 1,000 sheets per hour and it had a counting device which ensured an accurate count. Apart from these facts very little more is known about the machine.
The printing was done without consulting the Russian postal service. The regional office in St Petersburg only approved it afterwards, on 4 March 1860 (Gregorian calendar). These stamps could only be used within the Congress Kingdom and to Russia. Letters to other countries had to be sent unstamped and paid for in cash. It is believed that some three million of these stamps were printed. When the stamps were withdrawn from use on 1 April 1865 (Gregorian calendar), a total of 208,515 stamps were destroyed; Russian stamps had to be used from that day onwards.
In 1915 the Congress Kingdom was occupied by the Central Powers.
Austrian occupation 1915–1918
Austria occupied the southern part of Congress Poland; no special stamps were issued; Austrian stamps were made available. Austrian field post offices were set up which used postmarks with Polish town names.
The stamps that were made available were:
Most of the Austro-Hungarian Military Post general issues from 1915 to 1918 – 75 different stamps
Some of the Bosnia and Herzegovina issues from the period 1904 to 1916 – 41 different stamps
Some of the Austria issues from the period 1908 to 1916 – 42 different stamps
In addition one postcard from Bosnia and Herzegovina and four postcards of the Austro-Hungarian Military Post general issues were made available
Russian datestamps were replaced with Austrian datestamps. The postmarks were inscribed K. u. K. ETAPPENPOSTAMT at the top and the Polish town name at the bottom.
The list of post offices with these datestamps is as follows; the names in brackets are later changes of name.
German occupation 1915–1918
The area occupied by Germany was named "General Government Warsaw" (General-Gouvernement Warschau).
On 12 May 1915, five contemporary German stamps, overprinted "Russisch-Polen" by the Imperial Printing Works in Berlin, were first issued for use in the German-occupied area. On 1 August 1916, after the fall of Warsaw and the complete occupation of central Poland, a set of 11 stamps overprinted "Gen.-Gouv. Warschau" was issued. They remained in use until November 1918. These stamps only ensured delivery to the post office and not to the addressee.
In addition to stamps, postal stationery items were also overprinted and made available. One postcard and one reply postcard were issued overprinted "Russisch-Polen". Three different postcards and two different reply postcards were issued with the "Gen.-Gouv. Warschau" overprint.
Stamped to order postal stationery was also produced with the "Russisch-Polen" overprint. Items produced were: three different values of postcards (3pf, 5pf, 10pf); five different values of prepaid envelopes (3pf, 5pf, 10pf, 20pf, 40pf); and one 3pf newspaper wrapper. The postcards and envelopes were produced with and without an illustration.
Kingdom of Poland essays
In 1916, Germany and Austria declared a new Kingdom of Poland. Early in 1917 the Germans requested the Chief of the Civil Administration in Warsaw to arrange for the Warszawskie Towarzystwo Artystyczne (Warsaw Society of Artists) to organise a competition of designs, by Polish designers, for a series of definitive stamps for this planned Kingdom of Poland. One of the conditions of this competition was that the stamps be inscribed "KROLESTWO POLSKIE" (literally: Polish Kingdom, i.e. Kingdom of Poland). Monetary prizes were offered from 150 marks to 1000 marks. The closing date was 1 December 1917. A total of 32 artists submitted some 148 designs by the closing date. Essays of all of these 148 designs were printed on sheets in black, brown, green and blue. A booklet was also published on 11 January 1918 containing all these designs.
Thirteen of these designs were chosen and the Imperial Printing Works in Berlin engraved all of the designs. The 13 chosen designs were printed in the proposed colours on 5 sheets. These stamps were mounted in folders and circulated amongst the various German embassies and legations in existence at the time.
The artists include the following: M Bystydzieński, Henyk Oderfeld, Nikodem Romanus, Józef Tom, Apoloniusz Kędzierski, Ludwik Gardowski, Ludwik Sokołowski, Zygmunt Beniulis, Jan Ogorkiewicz, Edmund John, Edward Trojanowski and Mieczysław Neufeld.
When Poland became independent in 1918, two of the artists who had taken part in the 1917 competition, Edward Trojanowski and Edmund Bartłomiejczyk, were asked to modify their designs for use by the new Republic of Poland.
Local postal services 1915–18
The occupying forces did not provide any local delivery service; they left it to town councils to set up local delivery services. Some of these councils produced stamps to provide this service; others used cachets, which were stamped on the letters. Most of the stamps which were issued were produced without the permission of the occupying authorities. None of these were valid for use once Polish stamps had been issued in November 1918.
The list of local postal services which used stamps and/or handstamps for local deliveries is as follows:
Aleksandrów – handstamp
Będzin – handstamps
Chęciny – 8 stamps issued unofficially; considered as fantasy stamps
Czeladż – handstamp
Częstochowa – handstamps
Grodno – handstamp
Kalisz – 2 handstamps
Koło – handstamps
Lipno – handstamp
Luboml – 7 stamps ordered and printed, arrived too late to be used
Łódź – handstamps
Mława – handstamp
Otwock – 1 stamp and handstamps
Pabiannice – handstamp
Pruszków – handstamps
Przedbórz – 18 stamps recognized as being issued
Siedlce – handstamps
Sosnowiec – 7 different stamps issued and handstamps
Skierniewice – handstamp
Tomaszów Mazowiecki – handstamp
Warszawa – 10 different stamps and handstamps
Wilno – handstamps
Włocławek – handstamp
Wysokie Mazowieckie – handstamp
Zawiercie – 2 stamps
Żarki – 9 stamps issued
Żyrardów – handstamp
Polish Republic provisional issues
The generally accepted date of the independence of Poland is 11 November 1918. This date really only applies to General-Gouvernement Warschau. The area occupied by Austria was freed on 29 October 1918. The Wielkopolska area was held by the Germans till 27 December 1918 and the Pomorze area till 10 February 1919. Each area had a different philatelic history until stamps were issued to cover the whole country.
Former German occupied Russian area
Following the Armistice on 11 November 1918 the Polish authorities set to work to organise a postal service. Instructions were issued to temporarily use existing stamps and modify or replace cancelling machines with Polish place names. Many offices, on their own initiative, overprinted the "Russisch-Polen" and the "Gen.-Gouv. Warschau" stamps with "Poczta Polska", "Na Skarb Narodowy" etc.
Local overprints
Local overprints, on the "Russisch-Polen" and the "Gen.-Gouv. Warschau" stamps, are known from the following locations.
Aleksandrów Kujawski – 11 different stamps known used from 18 November 1918 to 3 January 1919.
Błonie – 11 stamps and two postcards were overprinted using a rubber stamp; all known copies are of philatelic origin.
Brzeziny – 14 stamps and two postcards were overprinted; all known copies are of philatelic origin.
Ciechocinek – 6 different stamps are known overprinted; the copies are dated between 13 and 22 December 1918. Only used copies are known. Only two to ten of each of these are known (per 2001 catalogue). These are the highest priced Polish stamps.
Grodzisk – 11 different stamps and two postcards were overprinted; all known copies are of philatelic origin.
Izbica – 4 different stamps are known overprinted.
Kalisz – 15 different stamps were overprinted by the post office in Kalisz. Four other different overprints using rubber handstamps supplied by a philatelic dealer were used on ten different stamps to produce 40 different items; these are considered philatelic.
Koło – 10 different stamps and one postcard are known to have been overprinted with two different handstamps.
Konin – Two different overprints were used on 10 different stamps to produce 20 items. A third overprint was used in January 1919; all known items with this overprint are philatelic.
Luzino – known from 2 covers dated 1922
Łęczyca – 14 different stamps and two postcards were overprinted.
Łowicz – 11 different stamps and one postcard were overprinted, all used items are postmarked February or March 1919 which is after definitive stamps had been supplied.
Łuków – Two stamps and one postcard are known to have been supplied from 12 December 1918.
Maków – 10 different stamps and one postcard are known used from 13 November 1918 to 5 January 1919.
Ostrołęka – 8 different stamps and two postcards were overprinted. They are known used from 12 November 1918.
Ostrów Mazowiecka – 9 different stamps and two postcards were overprinted. They are known used from 12 November 1918.
Otwock – all items believed to be philatelic.
Ozorków – 10 different stamps and one postcard were overprinted. These are believed to be philatelic.
Poddębice – 10 different stamps and two postcards were overprinted. These were in use during December 1918.
Płońsk – 11 stamps overprinted, these are found only used and are believed to be philatelic.
Pułtusk – 10 different stamps and two postcards were overprinted. These are known used from 13 November 1918 to early January 1919.
Sieradz – 10 different stamps and two postcards were overprinted. These were in use from 13 November 1918 till the end of January 1919.
Skierniewice – 11 different stamps and two postcards were overprinted in black, red and blue. These are considered as philatelic.
Włocławek – Two different overprints and 13 different stamps were used to produce 23 different items. Two postcards were overprinted producing 4 items. They were in use from 13 November 1918.
Zduńska Wola – Two different overprints and 8 different stamps were used to produce 17 different items. Two postcards were also overprinted. These were in use from 13 November 1918 till February 1919.
First provisional issue
The first stamps to be issued by the newly established Polish Ministry of Post and Telecommunications in Warsaw appeared on 17 November 1918. Unissued stamps, which had been produced for the Warsaw Local Post in 1916, were overprinted with the value in "fen" at the top and "Poczta Polska" at the bottom. This is the first known occasion in the world on which local stamps were used to produce state stamps. They were in use for only a few weeks. The most common postmarks to be found on these stamps are "Warschau", "Warszawa" and "Łódź"; stamps postmarked "Bendzin" and "Sosnowice" are also known.
The four stamps which were produced were:
5 fen on 2 gr brown – statue of Sigismund III Vasa
10 fen on 6 gr green – "Syrena" the coat of arms of Warsaw
25 fen on 10 gr carmine – White Eagle of Poland
50 fen on 20 gr blue – statue of John III Sobieski
The original stamps had been designed by Professor Edward Trojanowski and printed by lithography at the printing works of Jan Cotty in Warsaw. The overprinting was done at the "Kopytowski i Ska" printing works in Warsaw. Each of the four stamps is known with an inverted overprint.
The total number of stamps originally printed in 1916, per the invoice from the printers, was as follows
2 groszy – 874,152
6 groszy – 1,814,400
10 groszy – 290,952
20 groszy – 209,952
About 20,000 of each of these were not used for overprinting with "Poczta Polska" in 1918.
Second provisional issue
A large stock of the "Gen. Gouv. Warschau" stamps had been left behind by the Germans. These were collected up and overprinted with obliterating bars and "Poczta Polska". On 5 December 1918 eight different values (3, 5, 10, 15, 20, 30, 40 and 60 fen) were put on sale in post offices and shops. The supply of 5 fen stamps ran out after two days, so stamps with a face value of 2½ and 3 fen were surcharged "5". In addition to this a 25 fen stamp was issued by surcharging the 7½ value with "25". Due to the haste with which these stamps were produced there are many errors and varieties to be found. The overprinting was done by the "Kopytowski i Ska" private printing works in Warsaw.
On 7 December 1918 the Ministry of Post and Telegraph announced in the daily newspapers that German stamps and postcards without the overprint "Poczta Polska" would not be accepted for postage as from 16 December 1918.
Former Austrian occupied Congress Poland
On 5 November 1918 the Post and Telegraph Administration in Lublin issued the first instructions regarding the organising of the post in the Congress Poland area formerly occupied by the Austrians. All the post offices were instructed to follow existing rules and regulations and to use existing Austrian stamps in stock. Three post offices on their own initiative overprinted Austrian stamps, and the Lublin Post and Telegraph Administration supplied overprinted Austrian stamps in December 1918. There is no record of a public announcement or any instruction being given to post offices invalidating Austrian stamps.
Local overprints
Local overprints are known as follows; the status of all of these is unknown
Jędrzejów – overprint on some 13 different postage stamps, one postcard and one postage due stamp.
Olkusz – overprint on some 12 different postage stamps and one postage due stamp.
Zwierzyniec nad Wieprzem – three different stamps overprinted.
First Lublin provisional issue
Three values (10 hal, 20 hal and 45 hal) of the Austro-Hungarian K und K Military Post Imperial Welfare Fund stamps were overprinted in Lublin with POLSKA at the top, the Polish eagle in the centre and POCZTA at the bottom. A total of 64,000 copies of each was overprinted. The stamps were distributed to 41 post offices and shops. All the stamps were sold within ten days of being issued on 5 December 1918. Due to the haste of production, errors such as inverted overprints and double overprints are known.
Second Lublin provisional issue
Ten different values were produced using the Austro-Hungarian K und K Military Post Emperor Charles stamps and were issued on 19 December 1918. As with the previous issue, numerous varieties exist due to the haste of production. The stamps overprinted and quantities are as follows
3 on 3 hal – 13,000
3 on 15 hal – 164,600
10 on 30 hal – 136,600
25 on 40 hal – 109,900
45 on 60 hal – 91,800
45 on 80 hal – 50,000 with bars
45 on 80 hal – 154,900 with stars
50 on 60 hal – 75,000
50 hal – 22,000
90 hal – 108,800
Former Austrian Kingdom of Galicia
On the departure of the Austrian forces the area was administered by a governing commission which was called the Polish Liquidation Commission (Polska Komisja Likwidacyjna) which was formed on 28 October 1918, in Kraków, by a coalition of Polish political parties in Galicia.
Local overprints
A number of post offices are known to have overprinted or handstamped Austrian stamps. Many of these stamps did see proper postal use. There are also many that are only known off cover and/or on philatelic covers; these are considered as speculative issues. Details of the post offices with local overprints are as follows
Baranów – handstamp on 14 different values.
Bielsko – handstamp on two different postcards.
Bochnia – three stamps were overprinted but not used.
Czermin – handstamp on two different stamps.
Dziedzice – handstamp on two different stamps.
Klimkówka – handstamp on two different stamps.
Krosno – one postcard overprinted.
Mielec – 19 different stamps, three postcards and a letter card are known to have been overprinted or handstamped and used non-philatelically. There are in addition some 100 different stamps that are considered as speculative issues.
Oświęcim – handstamp on one stamp.
Skałat – 24 different stamps were handstamped.
Węgierska Górka – one cover known, handstamped, dated 3 February 1919, and two cut-offs with 5 stamps each.
All the stamps with local overprints from the following post offices are considered as speculative issues
Myślenice
Przemyśl
Rozwadów
Świątniki Górne
Tarnów
Krakow issue
All Austrian stamps, which were still in store in Kraków, were sent to two different printers in Kraków for overprinting on 2 January 1919. The two printing works were A Koziański and F Zieliński. The overprinting by Koziański was done by typography and that by Zieliński by lithography. In total 20 different postage stamps, 5 different newspaper stamps and 12 different postage due stamps were overprinted with POCZTA POLSKA in two lines with a diamond shape or an ornament between. The stamps were made available for sale from 10 January 1919.
The following stamps and quantities of postage stamps were produced and issued.
On 12 January 1919 the Post and Telegraph Director in Lwów issued instructions that unoverprinted Austrian stamps and postal stationery would not be valid for postage from 20 January 1919.
Polish Liquidation Commission issue
The Polish Liquidation Commission ordered these stamps, popularly known as the Polish Liquidation Commission issue, for use in Galicia. They were issued on 25 February 1919, but their sale was stopped while the Polish Liquidation Commission argued with the Ministry of Post and Telegraph in Warsaw over whether the Galician administration was competent to issue stamps. The argument was resolved in a matter of days and the stamps were allowed to be sold from March, with the proviso that they would only be valid for use till 31 May 1919. The stamp was designed by Jan Michalski and printed in the Zieliński printing works in Kraków. They were issued ungummed and imperforate.
The following stamps and quantities were produced and issued.
See also
Bojanowicz Collection
Fischer catalog
Kaluski Collection
Polonus Philatelic Society
Postage stamps and postal history of Żarki
Postage stamps and postal history of Danzig
References and sources
Notes
Sources
Bojanowicz, M A, The Kingdom of Poland. Poland No 1 and associated Postal History, 1979, The Royal Philatelic Society London
Kamienski, Miet, Postal Service between Poland and the Mediterranean, (page 31 to 41 in The Mediterranean Mails, 1993, Philatelic Specialists Society of Canada)
400 Lat Poczty Polskiej, 1958, Wydawnictwa Komunikacyine Warszawa, (in Polish)
Larking, R N W, Poland The postal issues during and after the Great War, Gibbons Stamp Monthly, published in 22 parts starting in October 1929 and finishing in August 1932.
Polskie Znaki Pocztowe, Vol I, 1960; Vol II, 1960; Vol III, 1962; Vol IV, 1966; Vol V, 1973, Biuro Wydawnicze Ruch, Warszawa, (in Polish), the joint work of Stanisław Adamski, Stanisław Babiński, Tadeusz Grodecki, Tadeusz Gryżewski, Tadeusz Hampel, Maxymilian Herwich, Antoni Łaszkiewicz (joint ed.), Józef Machowski, Stanisław Mikstein, Zbigniew Mikulski (joint ed.), Maciej Perzyński, Józef Tislowitz and Stanisław Żółkiewski.
Śnieżko, Aleksander, Poczta Miejska Miasta St. Warszawy 1915–1918, Agencja Wydawnicza Ruch, 1965 (in Polish)
Fischer, Andrzej, Katalog Polskich Znaków Pocztowych, 2001, Vol 2, Andrzej Fischer,
Ilustrowany Katalog Znaczkow Polskich, 1973, Agencja Wydawnicza Ruch, Warsaw (in Polish)
Melnichak, Michael E, The Typographic Overprints of the 1919 Krakow Issues of Poland, 1990, (self published)
External links
Poczta Polska Polish Post Office
Online shop
Poland Resource Page for the Stamp Collector
Polonus Philatelic Society
Society for Polish Philately in Great Britain
Stamp Encyclopaedia Poland
Books and publications about Polish stamps and postal history.
Postal system of Poland |
39677596 | https://en.wikipedia.org/wiki/Tresorit | Tresorit | Tresorit is an online cloud storage service based in Switzerland and Hungary that emphasizes enhanced security and data encryption for businesses and individuals. The Business version offers up to 1TB of storage space per user (the Solo version offers 2TB for one user) and extra security features such as DRM, granular access levels and other functions, which Tresorit says create a safer collaborative environment.
Tresorit's service is accessible through client desktop software, a web-based application and mobile apps. Currently, the software is available for Windows, macOS, Android, Windows Phone 8, iOS, and Linux.
As of 2021, Swiss Post owns a majority stake in the cloud storage service. Tresorit works as an independent entity under Swiss Post.
History
Tresorit was founded in 2011 by Hungarian programmers Istvan Lam, who remains CEO, Szilveszter Szebeni, who is currently CIO, and Gyorgy Szilagyi, who is the CPO of the company.
Tresorit officially launched its client-side encrypted cloud storage service after emerging from its stealth beta in April 2014.
In August 2015, Wuala (owned by LaCie and Seagate), a pioneer of secure cloud storage, announced it was closing its service after 7 years and recommended that its users choose Tresorit as a secure cloud alternative.
By the end of 2016, Tresorit launched a beta of the software development kit (SDK) ZeroKit. In January 2017, Apple's SDK project CareKit announced the option for mobile app developers using CareKit to integrate ZeroKit, enabling zero knowledge user authentication and encryption for medical and health apps.
Technology
Tresorit claims to encrypt files using client-side encryption with AES-256 before uploading them. Files are also secured by HMAC message authentication codes applied on SHA-512 hashes.
"Tresors" (German for safes) are encrypted counterparts of uploaded directories. Tresors automatically sync with the cloud as files are added or removed from them, similar to Box.com and Dropbox's desktop software. The main difference between Tresorit and its competition is that Tresorit applies AES-256 client-side encryption to files while they are still local and then uploads them to the cloud. The company claims that due to its end-to-end encryption, users can share protected files and folders with others and work together on them, keeping the documents synced and secure in every step of the process. There are additional layers of security, but the core privacy feature of the service is that the encryption key never leaves the user: Using Zero-Knowledge encryption protocols, Tresorit is not in possession of the users’ authentication data, so the content of files cannot be accessed from their servers nor delivered to authorities upon request.
Hacking contest
In 2013 and 2014, Tresorit hosted a hacking contest offering $10,000 to anyone who hacked their data encryption methods to gain access to their servers. After some months, the reward was increased to $25,000 and later to $50,000, challenging experts from institutions like Harvard, Stanford or MIT. The contest ran for 468 days and according to the company, nobody was able to break the encryption.
Reception
Tresorit has received a number of nominations and awards. Up-Cloud Rewards named it one of the top 5 Cloud security solutions for 2012. In early 2016, Forbes listed Tresorit's cofounder Istvan Lam among the European "30 under 30". In 2017, Tresorit was listed as a finalist in the Cybersecurity Excellence Awards, category Encryption.
See also
Comparison of file hosting services
Comparison of online backup services
Encryption
Remote backup service
References
File sharing services
Cloud storage
Cloud applications
Online backup services
Data synchronization
Email attachment replacements
File hosting
File hosting for macOS
File hosting for Windows
File hosting for Linux
MacOS software
Linux software
Cryptographic software
Office software
Companies' terms of service |
2905 | https://en.wikipedia.org/wiki/Artemis | Artemis | Artemis (; ) is the Greek goddess of the hunt, the wilderness, wild animals, the Moon, and chastity. The goddess Diana is her Roman equivalent.
Artemis is the daughter of Zeus and Leto, and the twin sister of Apollo. She was the patron and protector of young children and women, and was believed to both bring disease upon women and children and relieve them of it. Artemis was worshipped as one of the primary goddesses of childbirth and midwifery along with Eileithyia. Much like Athena and Hestia, Artemis preferred to remain a maiden and was sworn never to marry.
Artemis was one of the most widely venerated of the Ancient Greek deities, and her temple at Ephesus was one of the Seven Wonders of the Ancient World. Artemis' symbols included a bow and arrow, a quiver, and hunting knives, and the deer and the cypress were sacred to her. Diana, her Roman equivalent, was especially worshipped on the Aventine Hill in Rome, near Lake Nemi in the Alban Hills, and in Campania.
Etymology
The name Artemis (noun, feminine) is of unknown or uncertain etymology, although various sources have been proposed. R. S. P. Beekes suggested that the e/i interchange points to a Pre-Greek origin. Artemis was venerated in Lydia as Artimus. Georgios Babiniotis, while accepting that the etymology is unknown, also states that the name is already attested in Mycenaean Greek and is possibly of Pre-Greek origin.
The name may be related to Greek árktos "bear" (from PIE *h₂ŕ̥tḱos), supported by the bear cult the goddess had in Attica (Brauronia) and the Neolithic remains at the Arkoudiotissa Cave, as well as the story of Callisto, which was originally about Artemis (Arcadian epithet kallisto); this cult was a survival of very old totemic and shamanistic rituals and formed part of a larger bear cult found further afield in other Indo-European cultures (e.g., Gaulish Artio). It is believed that a precursor of Artemis was worshipped in Minoan Crete as the goddess of mountains and hunting, Britomartis. While a connection with Anatolian names has been suggested, the earliest attested forms of the name Artemis are the Mycenaean Greek a-te-mi-to /Artemitos/ (gen.) and a-ti-mi-te /Artimitei/ (dat.), written in Linear B at Pylos.
According to J. T. Jablonski, the name is also Phrygian and could be "compared with the royal appellation Artemas of Xenophon". Charles Anthon argued that the primitive root of the name is probably of Persian origin from *arta, *art, *arte, all meaning "great, excellent, holy", thus Artemis "becomes identical with the great mother of Nature, even as she was worshipped at Ephesus". Anton Goebel "suggests the root στρατ or ῥατ, "to shake", and makes Artemis mean the thrower of the dart or the shooter".
Ancient Greek writers, by way of folk etymology, and some modern scholars, have linked Artemis (Doric Artamis) to ἄρταμος, artamos, i.e. "butcher" or, like Plato did in Cratylus, to , artemḗs, i.e. "safe", "unharmed", "uninjured", "pure", "the stainless maiden".
Mythology
Birth
Various conflicting accounts are given in Classical Greek mythology regarding the birth of Artemis and Apollo, her twin brother. However, in terms of parentage, all accounts agree that she was the daughter of Zeus and Leto and that she was the twin sister of Apollo. In some sources, she is born at the same time as Apollo, in others, earlier or later.
According to Callimachus, Hera, angry with her husband Zeus for impregnating Leto, forbade her from giving birth on either terra firma (the mainland) or on an island, but the island of Delos disobeyed and allowed Leto to give birth there. According to the Homeric Hymn to Artemis, however, the island where she and her twin were born was Ortygia. In ancient Cretan history Leto was worshipped at Phaistos and, in Cretan mythology, Leto gave birth to Apollo and Artemis on the islands known today as Paximadia.
A scholium of Servius on Aeneid iii. 72 accounts for the island's archaic name Ortygia by asserting that Zeus transformed Leto into a quail (ortux) in order to prevent Hera from finding out about his infidelity, and Kenneth McLeish suggested further that in quail form Leto would have given birth with as few birth-pains as a mother quail suffers when it lays an egg.
The myths also differ as to whether Artemis was born first, or Apollo. Most stories depict Artemis as firstborn, becoming her mother's midwife upon the birth of her brother Apollo.
Childhood
The childhood of Artemis is not fully related in any surviving myth. A poem by Callimachus to the goddess "who amuses herself on mountains with archery" imagines a few vignettes of a young Artemis. While sitting on the knee of her father, she asks him to grant her ten wishes:
to always remain a virgin
to have many names to set her apart from her brother Phoebus (Apollo)
to have a bow and arrow made by the Cyclopes
to be the Phaesporia or Light Bringer
to have a short, knee-length tunic so she could hunt
to have 60 "daughters of Okeanos", all nine years of age, to be her choir
to have 20 Amnisides Nymphs as handmaidens to watch her hunting dogs and bow while she rested
to rule all the mountains
to be assigned any city, and only to visit when called by birthing mothers
to have the ability to help women in the pains of childbirth.
Artemis believed she had been chosen by the Fates to be a midwife, particularly as she had assisted her mother in the delivery of her twin brother Apollo. All of her companions remained virgins, and Artemis closely guarded her own chastity. Her symbols included the golden bow and arrow, the hunting dog, the stag, and the moon.
Callimachus then tells how Artemis spent her girlhood seeking out the things she would need to be a huntress, and how she obtained her bow and arrows from the isle of Lipara, where Hephaestus and the Cyclopes worked. While Oceanus' daughters were initially fearful, the young Artemis bravely approached and asked for a bow and arrows. He goes on to describe how she visited Pan, god of the forest, who gave her seven female and six male hounds. She then captured six golden-horned deer to pull her chariot. Artemis practiced archery first by shooting at trees and then at wild game.
Relations with men
The river god Alpheus was in love with Artemis, but as he realized he could do nothing to win her heart, he decided to capture her. When Artemis and her companions at Letrenoi go to Alpheus, she becomes suspicious of his motives and covers her face with mud so he does not recognize her. In another story, Alpheus tries to rape Artemis' attendant Arethusa. Artemis pities the girl and saves her, transforming her into a spring in the temple Artemis Alphaea in Letrini, where the goddess and her attendant drink.
Bouphagos, son of the Titan Iapetus, sees Artemis and thinks about raping her. Reading his sinful thoughts, Artemis strikes him down at Mount Pholoe.
Daphnis was a young boy, a son of Hermes, who was accepted by and became a follower of the goddess Artemis; Daphnis would often accompany her in hunting and entertain her with his singing of pastoral songs and playing of the panpipes.
Artemis herself also taught a man, Scamandrius, how to be a great archer, and he excelled in the use of bow and arrow under her guidance.
According to Antoninus Liberalis, Siproites was a Cretan who was metamorphosed into a woman by Artemis because, while hunting, he saw the goddess bathing.
Actaeon
Multiple versions of the Actaeon myth survive, though many are fragmentary. The details vary but at the core, they involve the great hunter Actaeon whom Artemis turns into a stag for a transgression, and who is then killed by hunting dogs. Usually, the dogs are his own, but no longer recognize their master. Occasionally they are said to be the hounds of Artemis.
According to Lamar Ronald Lacey's The Myth of Aktaion: Literary and Iconographic Studies, the standard modern text on the work, the most likely original version of the myth portrays Actaeon as the hunting companion of the goddess who, seeing her naked in her sacred spring, attempts to force himself on her. For this hubris, he is turned into a stag and devoured by his own hounds. However, in some surviving versions, Actaeon is a stranger who happens upon Artemis. According to the Latin version of the story told by the Roman Ovid, having accidentally seen Diana on Mount Cithaeron while she was bathing, he was changed by her into a stag, then pursued and killed by his 50 hounds. Various tellings also diverge in terms of the hunter's transgression: sometimes merely seeing the virgin goddess naked, sometimes boasting he is a better hunter than she, or even merely being a rival of Zeus for the affections of Semele.
Adonis
In some versions of the story of Adonis, Artemis sent a wild boar to kill him as punishment for boasting that he was a better hunter than her.
In other versions, Artemis killed Adonis for revenge. In later myths, Adonis is a favorite of Aphrodite, who was responsible for the death of Hippolytus, who had been a hunter of Artemis. Therefore, Artemis killed Adonis to avenge Hippolytus's death.
In yet another version, Adonis was not killed by Artemis, but by Ares as punishment for being with Aphrodite.
Orion
Orion was Artemis' hunting companion; after giving up on trying to find Oenopion, Orion met Artemis and her mother Leto, and joined the goddess in hunting. A great hunter himself, he bragged that he would kill every beast on earth. Gaia, the earth, was not too pleased to hear that, and sent a giant scorpion to sting him. Artemis then transferred him into the stars as the constellation Orion. In one version Orion died after pushing Leto out of the scorpion's way.
In another version, Orion tries to violate Opis, one of Artemis' followers from Hyperborea, and Artemis kills him. In a version by Aratus, Orion grabs Artemis' robe and she kills him in self-defense. Other writers have Artemis kill him for trying to rape her or one of her attendants.
Istrus wrote a version in which Artemis fell in love with Orion, apparently the only person she ever loved. She meant to marry him, and no talk from her brother Apollo would change her mind. Apollo then decided to trick Artemis, and while Orion was off swimming in the sea, he pointed at him (barely a spot on the horizon) and wagered that Artemis could not hit that small "dot". Artemis, ever eager to prove she was the better archer, shot Orion, killing him. She then placed him among the stars.
In Homer's Iliad, Eos seduces Orion, angering the gods, causing Artemis to kill him.
The Aloadae
The twin sons of Poseidon and Iphimedeia, Otos and Ephialtes, grew enormously at a young age. They were aggressive and skilled hunters who could not be killed except by each other. The growth of the Aloadae never stopped, and they boasted that as soon as they could reach heaven, they would kidnap Artemis and Hera and take them as wives. The gods were afraid of them, except for Artemis, who captured a fine deer which jumped out between them. In another version of the story, she changed herself into a doe and jumped between them. The Aloadae threw their spears and so mistakenly killed one another. In another version, Apollo sent the deer into the Aloadae's midst, causing their accidental killing of each other. In another version, they start piling up mountains to reach Mount Olympus in order to catch Hera and Artemis, but the gods spot them and attack. When the twins had retreated, the gods learn that Ares has been captured. The Aloadae, not sure about what to do with Ares, lock him up in a pot. Artemis then turns into a deer and causes them to kill each other.
Callisto
Callisto, the daughter of Lycaon, King of Arcadia,
was one of Artemis's hunting attendants, and, as companion of Artemis, took a vow of chastity.
According to Hesiod in his lost poem Astronomia, Zeus appeared to Callisto, and seduced her, resulting in her becoming pregnant. Though she was able to hide her pregnancy for a time, she was soon found out while bathing. Enraged, Artemis transformed Callisto into a bear, and in this form she gave birth to her son Arcas. Both of them were then captured by shepherds and given to Lycaon, and Callisto thus lost her child. Some time later, Callisto "thought fit to go into" a forbidden sanctuary of Zeus, and was hunted by the Arcadians, her son among them. When she was about to be killed, Zeus saved her by placing her in the heavens as a constellation of a bear.
In his De Astronomica, Hyginus, after recounting the version from Hesiod, presents several other alternative versions. The first, which he attributes to Amphis, says that Zeus seduced Callisto by disguising himself as Artemis during a hunting session, and that when Artemis found out that Callisto was pregnant, she replied saying that it was the goddess's fault, causing Artemis to transform her into a bear. This version also has both Callisto and Arcas placed in the heavens, as the constellations Ursa Major and Ursa Minor. Hyginus then presents another version in which, after Zeus lay with Callisto, it was Hera who transformed her into a bear. Artemis later, while hunting, kills the bear, and "later, on being recognized, [Callisto] was placed among the stars". Hyginus also gives another version, in which Hera tries to catch Zeus and Callisto in the act, causing Zeus to transform her into a bear. Hera, finding the bear, points it out to Artemis, who is hunting; Zeus, in panic, places Callisto in the heavens as a constellation.
Ovid gives a somewhat different version: Zeus seduced Callisto once again disguised as Artemis, but she seems to realise that it is not the real Artemis, and she thus does not blame Artemis when, during bathing, she is found out. Callisto is, rather than being transformed, simply ousted from the company of the huntresses, and she thus gives birth to Arcas as a human. Only later is she transformed into a bear, this time by Hera. When Arcas, fully grown, is out hunting, he nearly kills his mother, who is saved only by Zeus placing her in the heavens.
In the Bibliotheca, a version is presented in which Zeus raped Callisto, "having assumed the likeness, as some say, of Artemis, or, as others say, of Apollo". He then turned her into a bear himself so as to hide the event from Hera. Artemis then shot the bear, either upon the persuasion of Hera, or out of anger at Callisto for breaking her virginity. Once Callisto was dead, Zeus made her into a constellation, took the child, named him Arcas, and gave him to Maia, who raised him.
Pausanias, in his Description of Greece, presents another version, in which, after Zeus seduced Callisto, Hera turned her into a bear, which Artemis killed to please Hera. Hermes was then sent by Zeus to take Arcas, and Zeus himself placed Callisto in the heavens.
Iphigenia and the Taurian Artemis
Artemis punished Agamemnon after he killed a sacred stag in a sacred grove and boasted that he was a better hunter than the goddess. When the Greek fleet was preparing at Aulis to depart for Troy to commence the Trojan War, Artemis stilled the winds. The seer Calchas erroneously advised Agamemnon that the only way to appease Artemis was to sacrifice his daughter Iphigenia. In some versions of the myth, Artemis then snatched Iphigenia from the altar and substituted a deer; in others, Artemis allowed Iphigenia to be sacrificed. In versions where Iphigenia survived, a number of different myths have been told about what happened after Artemis took her; either she was brought to Tauris and served the goddess as her priestess there, or she became Artemis' immortal companion.
Niobe
A queen of Thebes and wife of Amphion, Niobe boasted of her superiority to Leto, having 14 children (Niobids), seven boys and seven girls, while Leto had only one of each. When Artemis and Apollo heard this impiety, they killed her children using poisoned arrows. Apollo killed Niobe's sons as they practiced athletics, and Artemis shot her daughters, who died instantly without a sound. According to some versions, two of the Niobids were spared, one boy and one girl.
Amphion, at the sight of his dead sons, killed himself. A devastated Niobe and her remaining children were turned to stone by Artemis as they wept. The gods themselves entombed them.
Chione
Chione was a princess of Phokis. She was beloved by two gods, Hermes and Apollo, and boasted that she was more beautiful than Artemis because she had made two gods fall in love with her at once. Artemis was furious and killed Chione with an arrow, or struck her mute by shooting off her tongue. However, some versions of this myth say Apollo and Hermes protected her from Artemis' wrath.
Atalanta, Oeneus and the Meleagrids
Artemis saved the infant Atalanta from dying of exposure after her father abandoned her. She sent a female bear to nurse the baby, who was then raised by hunters. In some stories, Artemis later sent a bear to injure Atalanta because others claimed Atalanta was a superior hunter.
Among other adventures, Atalanta participated in the Calydonian boar hunt, which Artemis had sent to destroy Calydon because King Oeneus had forgotten her at the harvest sacrifices. In the hunt, Atalanta drew the first blood and was awarded the prize of the boar's hide. She hung it in a sacred grove at Tegea as a dedication to Artemis.
Meleager was a hero of Aetolia. King Oeneus ordered him to gather heroes from all over Greece to hunt the Calydonian boar. After the death of Meleager, Artemis turned his grieving sisters, the Meleagrids, into guineafowl, which she favoured.
Aura
In Nonnus' Dionysiaca, Aura, the daughter of Lelantos and Periboia, was a companion of Artemis. When out hunting one day with Artemis, she asserts that the goddess's body is too womanly and doubts her virginity. Artemis asks Nemesis for help to avenge her dignity, leading to Aura being raped by Dionysus, after which she becomes a deranged killer. When she bore twin sons, she ate one, while the other, Iacchus, was saved by Artemis.
Polyphonte
Polyphonte was a young woman who fled home in pursuit of a free, virginal life with Artemis, as opposed to the conventional life of marriage and children favoured by Aphrodite. As a punishment, Aphrodite cursed her, causing her to have children by a bear. Her resulting offspring, Agrius and Oreius, were wild cannibals who incurred the hatred of Zeus. Ultimately the entire family was transformed into birds who became ill portents for mankind.
Trojan War
Artemis may have been represented as a supporter of Troy because her brother Apollo was the patron god of the city, and she herself was widely worshipped in western Anatolia in historical times. In the Iliad she comes to blows with Hera when the divine allies of the Greeks and Trojans engage each other in conflict. Hera strikes Artemis on the ears with her own quiver, causing the arrows to fall out. As Artemis flees, crying to Zeus, Leto gathers up the bow and arrows.
Artemis plays a significant role in the war. Like Leto and Apollo, Artemis took the side of the Trojans. At the beginning of the Greeks' journey to Troy, Artemis stilled the sea, stopping the journey until an oracle came saying they could win the goddess' heart by sacrificing Iphigenia, Agamemnon's daughter. Agamemnon once promised the goddess he would sacrifice the dearest thing to him, which was Iphigenia, but broke that promise. Other sources said he boasted about his hunting ability and provoked the goddess' anger. However, Artemis saved Iphigenia because of her bravery. In some versions of the myth, Artemis made Iphigenia her attendant or turned her into Hecate, goddess of night, witchcraft, and the underworld.
Aeneas was also helped by Artemis, Leto, and Apollo. Apollo found him wounded by Diomedes and carried him away from the battlefield; there, the three deities secretly healed him in a great chamber.
Worship
Artemis, the goddess of forests and hills, was worshipped throughout ancient Greece. Her best known cults were on the island of Delos (her birthplace), in Attica at Brauron and Mounikhia (near Piraeus), and in Sparta. She was often depicted in paintings and statues in a forest setting, carrying a bow and arrows and accompanied by a deer.
The ancient Spartans used to sacrifice to her as one of their patron goddesses before starting a new military campaign.
Athenian festivals in honor of Artemis included Elaphebolia, Mounikhia, Kharisteria, and Brauronia. The festival of Artemis Orthia was observed in Sparta.
Pre-pubescent and adolescent Athenian girls were sent to the sanctuary of Artemis at Brauron to serve the Goddess for one year. During this time, the girls were known as arktoi, or little she-bears. A myth explaining this servitude states that a bear had formed the habit of regularly visiting the town of Brauron, and the people there fed it, so that, over time, the bear became tame. A girl teased the bear, and, in some versions of the myth, it killed her, while, in other versions, it clawed out her eyes. Either way, the girl's brothers killed the bear, and Artemis was enraged. She demanded that young girls "act the bear" at her sanctuary in atonement for the bear's death.
Artemis was worshipped as one of the primary goddesses of childbirth and midwifery, along with Eileithyia. Dedications of clothing to her sanctuaries after a successful birth were common in the Classical era. Artemis could also be a deity to be feared by pregnant women, as deaths during childbirth were attributed to her. As childbirth and pregnancy were very common and important events, there were numerous other deities associated with them, many localized to a particular geographic area, including but not limited to Aphrodite, Hera and Hekate. According to Pseudo-Apollodorus, she assisted her mother in the delivery of her twin. Older sources, such as the Homeric Hymn to Delian Apollo (line 115), have the arrival of Eileithyia on Delos as the event that allows Leto to give birth to her children. Hesiod's presentation of the myth in the Theogony contradicts this: he states that Leto bore her children before Zeus' marriage to Hera, with no mention of any difficulty surrounding their birth.
During the Classical period in Athens, she was identified with Hekate. Artemis also assimilated Caryatis (Carya).
There was a women's cult at Cyzicus worshiping Artemis, which was called Dolon (Δόλων).
Epithets
As Aeginaea, she was worshipped in Sparta; the name means either huntress of chamois or wielder of the javelin.
Also in Sparta, Artemis Lygodesma was worshipped. This epithet means "willow-bound" from the Greek lygos (λυγός, willow) and desmos (δεσμός, bond). The willow tree appears in several ancient Greek myths and rituals. According to Pausanias (3.16.7), a statue of Artemis was found by the brothers Astrabacus and Alopecus under a bush of willows (λύγος), by which it was surrounded in such a manner that it stood upright.
Her cult as Artemis Orthia (Ὀρθία, "upright") was common to the four villages originally constituting Sparta: Limnai, in which the sanctuary was situated, Pitana, Kynosoura, and Mesoa.
In Athens she was worshipped under the epithet Aristo ("the best").
Also in Athens, she was worshipped as Aristoboule, "the best adviser".
As Artemis Isora, also known as Isoria or Issoria, she was worshipped in the temple at the Issorium, near the lounge of the Crotani (the body of troops named the Pitanatae), near Pitane, Sparta. Pausanias mentions that although the locals refer to her as Artemis Isora, "They surname her also Lady of the Lake, though she is not really Artemis but Britomartis of Crete".
She was worshipped at Naupactus as Aetole; in her temple in that town, there was a statue of white marble representing her throwing a javelin. This "Aetolian Artemis" would not have been introduced at Naupactus, anciently a place of Ozolian Locris, until it was awarded to the Aetolians by Philip II of Macedon. Strabo records another precinct of "Aetolian Artemis" at the head of the Adriatic. As Agoraea she was the protector of the agora.
As Agrotera, she was especially venerated as the patron goddess of hunters. In Athens Artemis was often associated with the local Aeginetan goddess, Aphaea. As Potnia Theron, she was the patron of wild animals; Homer used this title. As Kourotrophos, she was the nurse of youths. As Locheia, she was the goddess of childbirth and midwives.
She was sometimes known as Cynthia, from her birthplace on Mount Cynthus on Delos, or Amarynthia from a festival in her honor originally held at Amarynthus in Euboea.
She was sometimes identified by the name Phoebe, the feminine form of her brother Apollo's solar epithet Phoebus.
Alphaea, Alpheaea, or Alpheiusa was an epithet that Artemis derived from the river god Alpheius, who was said to have been in love with her. It was under this name that she was worshipped at Letrini in Elis, and in Ortygia. Artemis Alphaea was associated with the wearing of masks, largely because of the legend that while fleeing the advances of Alpheius, she and her nymphs escaped him by covering their faces.
As Artemis Anaitis, the 'Persian Artemis' was identified with Anahita. As Apanchomene, she was worshipped as a hanged goddess.
She was also worshiped as Artemis Tauropolos, variously interpreted as "worshipped at Tauris", "pulled by a yoke of bulls", or "hunting bull goddess". A statue of Artemis "Tauropolos" in her temple at Brauron in Attica was supposed to have been brought from the Taurians by Iphigenia. Tauropolia was also a festival of Artemis in Athens. There was a Tauropolion, a temple in a temenos sacred to Artemis Tauropolos, on the north Aegean island of Doliche (now Ikaria). There is a temple to Artemis Tauropolos (as well as a smaller temple to an unknown goddess to the south, on the beach) located on the eastern shore of Attica, in the modern town of Artemida (Loutsa). An aspect of the Taurian Artemis was also worshipped as Aricina.
At Castabala in Cilicia there was a sanctuary of Artemis Perasia. Strabo wrote that: "some tell us over and over the same story of Orestes and Tauropolos, asserting that she was called Perasian because she was brought from the other side."
Pausanias, in his Description of Greece, writes that near Pyrrhichus there was a sanctuary of Artemis called Astrateias, with an image of the goddess said to have been dedicated by the Amazons. He also wrote that at Pheneus there was a sanctuary of Artemis which, according to legend, was founded by Odysseus when he lost his mares; while traversing Greece in search of them, he found them on this site. For this the goddess was called Heurippa, meaning "horse finder".
One of the epithets of Artemis was Chitone. Ancient writers believed that the epithet derived either from the chiton the goddess wore as a huntress, or from the clothes in which newborn infants were dressed, which were sacred to her, or from the Attic village of Chitone.
The Syracusans had a dance sacred to Artemis Chitone. At Miletus there was a sanctuary of Artemis Chitone, one of the oldest sanctuaries in the city.
The epithet Leucophryne (Λευκοφρύνη) derived from the city of Leucophrys. At Magnesia on the Maeander there was a sanctuary dedicated to her. In addition, the sons of Themistocles dedicated a statue to her on the Acropolis of Athens, because Themistocles had once ruled Magnesia. Bathycles of Magnesia dedicated a statue of her at Amyclae.
Festivals
Artemis was born on the sixth day of the month, which was therefore sacred to her.
Festival of Artemis in Brauron, where girls aged between five and ten, dressed in saffron robes, played at being bears ("acted the bear") to appease the goddess after she sent a plague when her bear was killed.
Festival of Amarysia, a celebration to worship Artemis Amarysia in Attica. In 2007, a team of Swiss and Greek archaeologists found the ruins of a temple of Artemis Amarysia in Euboea, Greece.
Festival of Artemis Saronia, a festival to celebrate Artemis in Troezen, a town in Argolis. A king named Saron built a sanctuary for the goddess after she saved his life when he went hunting and was swept away by a wave, and he held a festival in her honor.
On the 16th day of Metageitnion (the second month of the Athenian calendar), people sacrificed to Artemis and Hecate at the deme of Erchia.
The Kharisteria festival, on the 6th day of Boidromion (the third month), celebrated the victory of the Battle of Marathon and is also known as the Athenian "Thanksgiving".
The 6th day of Elaphebolion (the ninth month) was the festival of Artemis the Deer Huntress, when she was offered cakes shaped like stags, made from dough, honey and sesame seeds.
The 6th or 16th day of Mounikhion (the tenth month) celebrated her as the goddess of nature and animals; a goat was sacrificed to her.
The 6th day of Thargelion (the eleventh month) was the goddess's birthday, while the seventh was Apollo's.
A festival for Artemis Diktynna (of the net) was held in Hypsous.
Laphria, a festival for Artemis in Patrai. The preparations begin with the placing of logs of wood, each 16 cubits long, around the altar; on the altar, within the circle, the driest wood is placed. Just before the festival, a smooth ascent to the altar is built by piling earth upon the altar steps. The festival begins with a splendid procession in honor of Artemis, in which the maiden officiating as priestess rides last, upon a chariot yoked to four deer, Artemis' traditional mode of transport (see below). The sacrifice, however, is not offered until the next day.
In Orchomenus, a sanctuary was built for Artemis Hymnia where her festival was celebrated every year.
Attributes
Virginity
An important aspect of Artemis' persona and worship was her virginity, which may seem contradictory given her role as a goddess associated with childbirth. It is likely that the idea of Artemis as a virgin goddess is related to her primary role as a huntress. Hunters traditionally abstained from sex prior to the hunt as a form of ritual purity and out of a belief that the scent would scare off potential prey. The ancient cultural context in which Artemis' worship emerged also held that virginity was a prerequisite to marriage, and that a married woman became subservient to her husband. In this light, Artemis' virginity is also related to her power and independence. Rather than a form of asexuality, it is an attribute that signals Artemis as her own master, with power equal to that of male gods. It is also possible that her virginity represents a concentration of fertility that can be spread among her followers, in the manner of earlier mother goddess figures. However, some later Greek writers did come to treat Artemis as inherently asexual and as an opposite to Aphrodite. Furthermore, some have described Artemis, along with the goddesses Hestia and Athena, as asexual; this is mainly supported by the Homeric Hymn 5, To Aphrodite, in which Aphrodite is described as having "no power" over the three goddesses.
As a mother goddess
Despite her virginity, both modern scholars and ancient commentaries have linked Artemis to the archetype of the mother goddess. Artemis was traditionally linked to fertility and was petitioned to assist women with childbirth. According to Herodotus, the Greek playwright Aeschylus identified Artemis with Persephone as a daughter of Demeter. Her worshipers in Arcadia also traditionally associated her with Demeter and Persephone. In Asia Minor, she was often conflated with local mother goddess figures, such as Cybele, and Anahita in Iran. However, the archetype of the mother goddess was not highly compatible with the Greek pantheon, and though the Greeks had adopted the worship of Cybele and other Anatolian mother goddesses as early as the 7th century BCE, she was not directly conflated with any Greek goddesses; instead, bits and pieces of her worship and aspects were absorbed variously by Artemis, Aphrodite, and others as Eastern influence spread.
As the Lady of Ephesus
At Ephesus in Ionia, Turkey, her temple became one of the Seven Wonders of the World. It was probably the best-known center of her worship except for Delos. There the Lady whom the Ionians associated with Artemis through interpretatio graeca was worshipped primarily as a mother goddess, akin to the Phrygian goddess Cybele, in an ancient sanctuary where her cult image depicted the "Lady of Ephesus" adorned with multiple large beads. Excavation at the site of the Artemision in 1987–88 identified a multitude of tear-shaped amber beads that had been hung on the original wooden statue (xoanon), and these were probably carried over into later sculpted copies. In the Acts of the Apostles, Ephesian metalsmiths, who felt threatened by Saint Paul's preaching of Christianity, jealously rioted in her defense, shouting "Great is Artemis of the Ephesians!" Of the 121 columns of her temple, only one composite, made up of fragments, still stands as a marker of the temple's location.
Symbols
Bow and arrow
According to one of the Homeric Hymns to Artemis, she had a golden bow and arrows, and her epithets included Khryselakatos ("she of the golden shaft") and Iokheira ("showered by arrows"). The arrows of Artemis could also bring sudden death and disease to girls and women. Artemis received her first bow and arrows from the Cyclopes, having asked her father for them. The bow of Artemis also served as the witness of Callisto's oath of virginity.
Chariots
Artemis' chariot was made of gold and was pulled by four golden horned deer. The bridles of her chariot were also made of gold.
Spears, nets, and lyre
Artemis is occasionally portrayed with a hunting spear. Her cult in Aetolia, as Artemis Aetole, showed her with a hunting spear. A description of Artemis' spear can be found in Ovid's Metamorphoses, while Artemis with a fishing spear is connected with her cult as a patron goddess of fishing. As a goddess of maiden dances and songs, Artemis is often portrayed with a lyre in ancient art.
Deer
Deer were the only animals held sacred to Artemis herself. On seeing a deer larger than a bull, with shining horns, she fell in love with these creatures and held them sacred. Deer were also the first animals she captured: she caught five golden-horned deer and harnessed them to her chariot. The third labour of Heracles, commanded by Eurystheus, consisted of catching the Cerynitian Hind alive. Heracles begged Artemis for forgiveness and promised to return it alive; Artemis forgave him but targeted Eurystheus for her wrath.
Hunting dog
Artemis got her hunting dogs from Pan in the forest of Arcadia. Pan gave Artemis two black-and-white dogs, three reddish ones, and one spotted one – these dogs were able to hunt even lions. Pan also gave Artemis seven bitches of the finest Arcadian race. However, Artemis only ever brought seven dogs hunting with her at any one time.
Bear
The sacrifice of a bear for Artemis started with the Brauron cult. Every year a girl between five and ten years of age was sent to Artemis' temple at Brauron. The Byzantine writer Suidas relays the legend under Arktos e Brauroniois: a bear was tamed by Artemis and introduced to the people of Athens. They touched it and played with it until one day a group of girls poked the bear until it attacked them. A brother of one of the girls killed the bear, so Artemis sent a plague in revenge. The Athenians consulted an oracle to understand how to end the plague. The oracle suggested that, in payment for the bear's blood, no Athenian virgin should be allowed to marry until she had served Artemis in her temple ('played the bear for the goddess').
Boar
The boar was one of the favorite animals of hunters, and also hard to tame; in honor of Artemis' skill, hunters sacrificed boars to her. Artemis sent the Calydonian boar to ravage the land of Oeneus, and Adonis was killed by a boar, in some versions sent by Artemis.
Guinea fowl
Artemis felt pity for the Meleagrids as they mourned their lost brother, Meleager, so she transformed them into guinea fowl, which she counted among her favorite animals.
Buzzard hawk
Hawks were the favored birds of many of the gods, Artemis included.
In art
The oldest representations of Artemis in Greek Archaic art portray her as Potnia Theron ("Queen of the Beasts"): a winged goddess holding a stag and lioness in her hands, or sometimes a lioness and a lion. This winged Artemis lingered in ex-votos as Artemis Orthia, with a sanctuary close by Sparta.
In Greek classical art she is usually portrayed as a maiden huntress, young, tall, and slim, clothed in a girl's short skirt, with hunting boots, a quiver, a bow and arrows. Often, she is shown in the shooting pose, and is accompanied by a hunting dog or stag. When portrayed as a moon goddess, Artemis wore a long robe and sometimes a veil covered her head. Her darker side is revealed in some vase paintings, where she is shown as the death-bringing goddess whose arrows felled young maidens and women, such as the daughters of Niobe.
Artemis was sometimes represented in Classical art wearing the crown of the crescent moon, as is also found in depictions of Luna and others.
On June 7, 2007, a Roman-era bronze sculpture of Artemis and the Stag was sold at Sotheby's auction house in New York state by the Albright-Knox Art Gallery for $25.5 million.
Legacy
In astronomy
105 Artemis, the Artemis (crater), the Artemis Chasma, the Artemis Corona, and the Artemis lunar program have all been named after the goddess.
Artemis is the acronym for "Architectures de bolometres pour des Telescopes a grand champ de vue dans le domaine sub-Millimetrique au Sol", a large bolometer camera in the submillimeter range that was installed in 2010 at the Atacama Pathfinder Experiment (APEX), located in the Atacama Desert in northern Chile.
In taxonomy
The taxonomic genus Artemia, the sole genus of the family Artemiidae, is named after Artemis. Artemia are aquatic crustaceans known as brine shrimp; the best-known species, Artemia salina (sold as Sea Monkeys), was first described by Carl Linnaeus in his Systema Naturae in 1758. Artemia live in salt lakes, and although they are almost never found in the open sea, they do appear along the Aegean coast near Ephesus, where the Temple of Artemis once stood.
In modern spaceflight
The Artemis program is an ongoing crewed spaceflight program carried out by NASA, U.S. commercial spaceflight companies, and international partners such as ESA, with the goal of landing "the first woman and the next man" on the lunar south pole region by 2024. NASA is calling this the Artemis program in honor of Apollo's twin sister in Greek mythology, the goddess of the Moon.
Genealogy
See also
Bendis
Dali (goddess)
Janus
List of lunar deities
Palermo Fragment
Regarding Tauropolos:
Bull (mythology)
Iphigenia in Tauris
Taurus (Mythology)
References
Bibliography
Aelian, On Animals, Volume III: Books 12-17, translated by A. F. Scholfield, Loeb Classical Library No. 449, Cambridge, Massachusetts, Harvard University Press, 1959. Online version at Harvard University Press.
Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library.
Aratus Solensis, Phaenomena translated by G. R. Mair. Loeb Classical Library Volume 129. London: William Heinemann, 1921. Online version at the Topos Text Project.
Athenaeus, The Learned Banqueters, Volume V: Books 10.420e-11. Edited and translated by S. Douglas Olson. Loeb Classical Library 274. Cambridge, MA: Harvard University Press, 2009.
Burkert, Walter, Greek Religion, Harvard University Press, 1985.
Callimachus. Hymns, translated by Alexander William Mair (1875–1928). London: William Heinemann; New York: G.P. Putnam's Sons. 1921. Internet Archive. Online version at the Topos Text Project.
Celoria, Francis, The Metamorphoses of Antoninus Liberalis: A Translation with a Commentary, Routledge, 1992.
Diodorus Siculus, Bibliotheca Historica, Vols. 1–2, edited by Immanuel Bekker, Ludwig Dindorf and Friedrich Vogel, in aedibus B. G. Teubneri, Leipzig, 1888–1890. Greek text available at the Perseus Digital Library.
Evelyn-White, Hugh, The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White. Homeric Hymns. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Google Books. Internet Archive.
Fontenrose, Joseph Eddy, Orion: The Myth of the Hunter and the Huntress, University of California Press, 1981.
Forbes Irving, P. M. C., Metamorphosis in Greek Myths, Clarendon Press, Oxford, 1990.
Freeman, Kathleen, Ancilla to the Pre-Socratic Philosophers: A Complete Translation of the Fragments in Diels, Fragmente Der Vorsokratiker, Harvard University Press, 1983.
Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996. Two volumes.
Graves, Robert, The Greek Myths, Penguin, 1960 (first published 1955).
Grimal, Pierre, The Dictionary of Classical Mythology, Wiley-Blackwell, 1996.
Hansen, William, Handbook of Classical Mythology, ABC-CLIO, 2004.
Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004. Google Books.
Homer, The Iliad with an English Translation by A.T. Murray, PhD in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Hesiod, Astronomia, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Internet Archive.
Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library.
Hyginus, Gaius Julius, De Astronomica, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText.
Hyginus, Gaius Julius, Fabulae, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText.
Kerényi, Karl, The Gods of the Greeks, Thames and Hudson, London, 1951.
Liddell, Henry George, Robert Scott, A Greek-English Lexicon, revised and augmented throughout by Sir Henry Stuart Jones with the assistance of Roderick McKenzie, Clarendon Press Oxford, 1940. Online version at the Perseus Digital Library.
Most, G.W., Hesiod, Theogony, Works and Days, Testimonia, Edited and translated by Glenn W. Most, Loeb Classical Library No. 57, Cambridge, Massachusetts, Harvard University Press, 2018. Online version at Harvard University Press.
Most, G.W., Hesiod: The Shield, Catalogue of Women, Other Fragments, Loeb Classical Library, No. 503, Cambridge, Massachusetts, Harvard University Press, 2007, 2018. Online version at Harvard University Press.
Nonnus, Dionysiaca; translated by Rouse, W H D, in three volumes. Loeb Classical Library No. 346, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1940. Internet Archive.
Ovid, Metamorphoses, Brookes More, Boston, Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library.
Ovid. Metamorphoses, Volume I: Books 1-8. Translated by Frank Justus Miller. Revised by G. P. Goold. Loeb Classical Library No. 42. Cambridge, Massachusetts: Harvard University Press, 1977, first published 1916. Online version at Harvard University Press.
Ovid, Ovid's Fasti: With an English translation by Sir James George Frazer, London: W. Heinemann LTD; Cambridge, Massachusetts, Harvard University Press, 1959. Internet Archive.
The Oxford Classical Dictionary, second edition, Hammond, N.G.L. and Howard Hayes Scullard (editors), Oxford University Press, 1992.
Papathomopoulos, Manolis, Antoninus Liberalis: Les Métamorphoses, Collection Budé, Paris, Les Belles Lettres, 1968.
Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library.
Pindar, The Odes of Pindar including the Principal Fragments with an Introduction and an English Translation by Sir John Sandys, Litt.D., FBA. Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1937. Greek text available at the Perseus Digital Library.
Strabo, The Geography of Strabo. Edition by H.L. Jones. Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Tripp, Edward, Crowell's Handbook of Classical Mythology, Thomas Y. Crowell Co., first edition, June 1970.
West, M. L., Greek Epic Fragments: From the Seventh to the Fifth Centuries BC, edited and translated by Martin L. West, Loeb Classical Library No. 497, Cambridge, Massachusetts, Harvard University Press, 2003. Online version at Harvard University Press.
External links
Theoi Project, Artemis, information on Artemis from original Greek and Roman sources, images from classical art.
A Dictionary of Greek and Roman Antiquities (1890) (eds. G. E. Marindin, William Smith, LLD, William Wayte)
Fischer-Hansen T., Poulsen B. (eds.) From Artemis to Diana: the goddess of man and beast. Collegium Hyperboreum and Museum Tusculanum Press, Copenhagen, 2009
Warburg Institute Iconographic Database: ca 1,150 images of Artemis
Animal goddesses
Childhood goddesses
Hunting goddesses
Lunar goddesses
Nature goddesses
Night goddesses
Greek Virgin goddesses
Mythological Greek archers
Children of Zeus
Divine twins
Deities in the Iliad
Metamorphoses characters
Characters in Greek mythology
Rape of Persephone
Dogs in religion
Women in Greek mythology |
21405928 | https://en.wikipedia.org/wiki/Chih-Wei%20Huang | Chih-Wei Huang | Chih-Wei Huang (黃志偉) is a developer and promoter of free software who lives in Taiwan. He is famous for his work in the VoIP and internationalization and localization fields in Greater China. The user name he usually uses is cwhuang.
Profile
Huang graduated from National Taiwan University (NTU) in 1993 with a bachelor's degree in physics, and earned a master's degree from the electrical engineering department of NTU in 2000.
He worked as a director at Top Technology Inc., as the CTO of Citron Network Inc., and as a project manager at Tecom Inc. Huang currently works as a senior researcher at the Core Technology Center of ASUSTeK Computer Inc. He is one of the founding members of the Software Liberty Association of Taiwan (SLAT), and served on the first and second SLAT Councils.
Free software development
Chih-Wei Huang is the founder and coordinator of the Chinese Linux Documentation Project (CLDP). He wrote the Linux Chinese HOWTO, and translated the HOWTO Index, Linux Meta-FAQ, Serial HOWTO, DNS HOWTO, Linux Information Sheet, Java-CGI HOWTO, IP Masquerade mini-HOWTO and so on. He developed the SGMLtools Chinese Kits to solve the Chinese processing issues of SGML.
He is also the second coordinator of the Chinese Linux Extensions (CLE). He had been a developer of CLE since v0.7 and became the coordinator with CLE v0.9. He pushed Chinese localization in KDE, GNOME and AbiWord. He worked alongside Yuan-Chung Cheng and Tung-Han Hsieh to push Arphic Technology to release four Chinese TrueType fonts to the free software community under the Arphic Public License. He also co-wrote a book about CLE.
As a core developer of the GNU Gatekeeper (from 2001 to 2003), he developed new features such as thread-safe runtime tables, neighbor and authentication modules, a full H.323 proxy and Citron's NAT technology. He wrote the first version of the English and Chinese manuals for GnuGK. He won first prize in the Open Source Contest Taiwan in 2003.
He serves as a committer to KDE and GNOME, where he helps translate .po files and fixes bugs related to Chinese. He is a contributor to pyDict, OpenH323, Asterisk, GStreamer and other projects. He works on combining the ASUS Eee PC with the power of the free software community and aims to provide a complete solution for Android on the x86 platform; the Eee PC, VirtualBox, and QEMU have been tested successfully.
Chih-Wei Huang and Yi Sun started the Android-x86 Open Source Project in 2009. The project aims to bring Android to the x86 platform.
Interviews
Here are some Chinese-language interviews with Chih-Wei Huang:
2003, Open Source Software Foundry: The experiences of free software development from GNU Gatekeeper
He talked about how to develop a VoIP business using an open-source model.
2004, Taiwan.CNET, A special report seeking the power of free software in Taiwan - An interview with Chih-Wei Huang
He talked about his free software concepts, his experiences of CLE development, and how to combine free software with a business model.
See also
GNU Gatekeeper Project
Android-x86 Open Source Project
References
External links
Chinese Linux Documentation Project
Chinese Linux Extensions Project
The GNU Gatekeeper
The open source movement in Taiwan (台灣的開放源碼運動)
Eee PC, Android, Linux and Open Source - Cwhuang's blog
Linux @ NTU
1970 births
Free software programmers
Taiwanese computer programmers
Living people
Scientists from Taipei |
3609534 | https://en.wikipedia.org/wiki/%C3%89cole%20pour%20l%27informatique%20et%20les%20techniques%20avanc%C3%A9es | École pour l'informatique et les techniques avancées | The École Pour l'Informatique et les Techniques Avancées, more commonly known as EPITA, is a private French grande école specialized in the field of computer science and software engineering created in 1984 by Patrice Dumoucel. It is a private engineering school, member of IONIS Education Group since 1994, accredited by the Commission des titres d'ingénieur (CTI) to deliver the French Diplôme d'Ingénieur, and based at Le Kremlin-Bicêtre south of Paris.
In June 2013, EPITA became a member of the Union of Independent Grandes Écoles, which includes 30 grandes écoles.
The school is part of IONIS Education Group.
Studies
French Stream
Preparatory class
The first two years of studies are preparatory years. During these two years, students study mathematics, physics and electronics as well as algorithmics and computer science.
Engineering class
The first year
The third year is the first year of engineering studies, in which students learn the fundamentals of information technology and software engineering. This year is also famous for its first month, during which students are asked to complete several projects, which generally leads them to code more than 15 hours per day. Third-year students are known to say that "sleeping is cheating" and usually remember this year as their most demanding at EPITA.
Majors
During the fourth and fifth years students have to choose one of the nine majors:
IMAGE, Traitement et synthèse d'image ("Image processing and synthesis")
SRS, Systèmes, Réseaux et Sécurité ("Systems, Networks and Security")
MTI, Multimédia et Technologies de l'Information ("Multimedia and Information Technology")
SCIA, Sciences Cognitives et Informatique Avancée ("Cognitive Science and Advanced Computer Science")
GISTRE, Génie Informatique des Systèmes Temps Réel et Embarqués ("Computer Engineering, Real-time Systems and Embedded System")
SIGL, Systèmes d’Information et Génie Logiciel ("Information Systems and Software Engineering")
TCOM, Télécommunications ("Telecommunication")
GITM, Global IT Management (Entirely taught in English)
RECHERCHE, (Majeure double compétence orientée vers la recherche académique)
International Stream
The Department of International Programs is currently offering 5 programs:
International Bachelor of Computer Science The program boasts a comprehensive curriculum, offering interdisciplinary courses in computer programming, algorithms and computer architecture. It is composed of 6 semesters over a period of 3 years, including an internship and French classes. Graduates of this program have the possibility of pursuing the school's Master's programs.
Master of Science in Computer Science The program provides a combination of the most important theoretical foundations of computing and their applications in current technological and professional fields. It includes courses common to all students as well as specific semesters depending on the student's choice of specialization. The 4 specializations are as follows:
Innovative Information Systems Management
Software Engineering
Computer Security
Data Science & Analytics
Master of Science in Artificial Intelligence Systems The program trains students to solve complex problems using AI techniques and tools through equipping them with a solid foundation of mathematics and programming skills. It also expands students’ interpersonal and commercial capacities so that they can adapt to the ever-evolving business environment.
Master of Science in Artificial Intelligence for Marketing Strategy A joint degree with EM Normandie Business School, the program equips students with AI skills so that they can apply the technology to enhancing an organization's marketing strategies and decision-making process. It is open to candidates holding a 4-year bachelor's degree (or higher) or a 3-year bachelor's degree with professional experience, regardless of their discipline.
Master of Computer Engineering Accredited by the Commission des titres d'ingénieur (CTI), the program prepares students to become computer engineers who can easily find a professional position anywhere in the world. Graduates of this program will be awarded the “diplôme d’ingénieur” and will possess both technical and soft skills. They will have a wide choice on the job market whether they choose a career in France or abroad.
Bibliography
De mémoire vive, Une histoire de l'aventure numérique, Philippe Dewost, Cédric Villani, Éditions Première Partie, 2022, 386 p.
References
External links
Official website
The Multimedia and Information Technology major
The Information Systems and Software Engineering major
The Systems, Network and Security major
The Research and Development laboratory
The Systems and Security laboratory
The Innovation laboratory
Educational institutions established in 1984
Education in Île-de-France
Education in Lyon
Education in Rennes
Education in Toulouse
Education in Strasbourg
Grandes écoles
1984 establishments in France |
1724987 | https://en.wikipedia.org/wiki/Larry%20McVoy | Larry McVoy | Larry McVoy (born 1962 in Concord, Massachusetts, United States) is the CEO of BitMover, the company that makes BitKeeper, a version control system that was used from February 2002 to early 2005 to manage the source code of the Linux kernel.
He earned BS and MS degrees in computer science in 1985 and 1987, respectively, from the University of Wisconsin–Madison. His work generally included performance enhancements to the various Unix operating systems developed by his employers. While at Sun, McVoy worked on a peer-to-peer SCM system named TeamWare that would form the basis of his later BitKeeper product.
Linux
McVoy started working with the Linux kernel around its 0.97 version (1992) and developed the LMbench kernel benchmark. LMbench was maintained until 2009 by Carl Staelin.
The BitKeeper source control system was integrated into the Linux kernel development process in 2002, but after McVoy decided to charge for the use of BitKeeper, the Linux development community created the git tool, which began serving as the source control system for the Linux kernel in 2005.
Sourceware Operating System
While working at Sun in the early 1990s, McVoy and a number of other high-profile Unix community members urged the company to open-source their flagship Unix product, SunOS, in cooperation with Novell, to compete with Microsoft's new Windows NT operating system. The proposal would have involved creating a copyleft version of SunOS at a time before Linux had reached its 1.0 version. McVoy predicted (accurately) that Linux would displace Unix if the companies didn't do so.
Bibliography
References
External links
McVoy's resume
The Sourceware Operating System Proposal
2002 interview with Larry McVoy (archived version)
(April 2005 Newsforge interview about Linux controversy)
1962 births
Living people
People from Concord, Massachusetts
University of Wisconsin–Madison College of Letters and Science alumni
Sun Microsystems people
American computer programmers
Unix people |
2404032 | https://en.wikipedia.org/wiki/Media%20Composer | Media Composer | Avid Media Composer is a film and video editing software application or non-linear editing system (NLE) developed by Avid Technology. Initially released in 1989 on Macintosh II as an offline editing system, the application has since evolved to allow for both offline and online editing, including uncompressed standard definition (SD), high definition (HD), 2K and 4K editing and finishing. Since the 1990s, Media Composer has been the dominant non-linear editing system in the film and television industry, first on Macintosh and later on Windows. Avid NewsCutter, aimed at newsrooms, and Avid Symphony, aimed at finishing, were Avid products derived from Media Composer that shared a similar interface, as were Avid Xpress Pro (discontinued in 2008) and its predecessor Avid Xpress DV, which were aimed at the lower end of the market.
There are four versions of Avid Media Composer: Media Composer | First (a freeware version), Media Composer, Media Composer | Ultimate, and Media Composer | Enterprise. Media Composer can be used as standalone software, or with specific external I/O devices added, either from Avid or from specific third parties.
History
According to Eric Peters, one of the company's founders, most prototypes of "the Avid" were built on Apollo workstations. At some point, Avid demo'd one of their products at SIGGRAPH. Says Peters: "Some Apple people saw that demo at the show and said, 'Nice demo. Wrong platform!' It turned out they were evangelists for the then new Mac II (with *six* slots!). When we got back to our office (actually a converted machine shop) after the show, there was a pile of FedEx packages on our doorstep. They were from Apple, and they contained two of their prototype Mac II machines (so early they didn't even have cases, just open chassis). Also there were four large multisync monitors. Each computer was loaded with full memory (probably 4 megs at the time), and a full complement of Apple software (pre-Claris). That afternoon, a consultant knocked on our door saying, 'Hi. I'm being paid by Apple to come here and port your applications from Apollo to Macintosh.' He worked for us for several weeks, and actually taught us how to program the Macs." At the time, Macs were not considered to be fast enough for video purposes. The Avid engineering team, however, managed to get 1,200 kBytes per second, which allowed them to do offline video on the Macs.
The Avid Film Composer was introduced in August 1992. Film Composer was the first non-linear digital editing system to capture and edit natively at 24fps. Steven Cohen was the first editor to use Film Composer for a major motion picture, on Lost in Yonkers (1993).
The system has been used by other top editors such as Walter Murch on The English Patient, the first digitally edited film to receive a Best Editing Oscar.
In 1994, the Academy of Motion Picture Arts and Sciences awarded the Avid Film Composer a plaque for Science & Technical Achievement. Six people were recognized in that effort: Bill Warner, Eric Peters, Joe Rice, Patrick O'Connor, Tom Ohanian, and Michael Phillips. For continued development, Avid received an Oscar representing the 1998 Scientific and Technical Award for the concept, design, and engineering of the Avid Film Composer system for motion picture editing.
Film Composer is no longer available, since all of its specific film editing features were implemented into the "regular" Media Composer.
In July 2009, American Cinema Editors (ACE) announced that the ACE Board of Directors had recognized Avid Media Composer software with the Board's first ACE Technical Excellence Award.
User Interface (UI)
The Avid Media Composer user interface has seen many changes and upgrades over the years. Early versions focused on creating somewhat of a digital representation of the film editing process. The idea of organizing clips using bins was a familiar concept, so it was easy for editors to migrate from the flatbed editing world into Avid's digital interface. Also familiar was the Source/Record window which was seen in KEM and Steenbeck systems.
Through the 1990s the interface saw practical upgrades, made in collaboration between its designers, who were also working editors, and professional editors working in Hollywood and at network television studios. The interface design remained decidedly plain and two-dimensional, focused more on clip management in the Timeline Window than on UI colors and buttons.
Crossing Y2K and into the early 2000s with Media Composer 10, 11, and 12, the user interface saw significant advancements in not only project organization but also skeuomorphic design. It gave users considerable power in defining their own preferences for button shapes and shading, color coding, workspace architecture, and other intricate customizations. In May 2003, when Avid Adrenaline introduced HD editing and reset the version numbering back to 1.0, work on improving the user interface continued.
With the release of Media Composer 5, the user interface saw a drastic change. After extensive testing, the skeuomorphic design and other visual elements were found to be causing a slight drain on graphics card performance, and it was decided to scale back the design in favor of a flatter approach. Users who upgraded to this version were initially upset at the loss of customizability but were satisfied with the noticeable reduction in interface lag.
By the releases of Media Composer 7, 8, and 2018, there was a consistent outcry from customers for an overhaul of the overall interface.
During 2018, Avid conducted extensive interviews, listening sessions, and ACA meetings with hundreds of users to absorb as many of their opinions as possible. Key outcomes from those sessions included the need for stronger organization abilities for bins (bin containers), tools and other interface elements that could snap to each other, a "paneled" interface that could mold itself to any screen size or configuration, and a means of toggling between the classic Avid Workspaces in a newer, more accessible way (the Workspace Toolbar). Another common complaint about the classic interface was its overall performance, which showed laggy timeline behavior in comparison to other non-linear editing systems (NLEs). While the Media Composer team worked on the new user interface, the engineering and architecture teams retooled the underlying code and video engine. In June 2019, Avid released Media Composer 2019.6.
Users saw consistent upgrades to the user interface throughout 2019, 2020, and 2021. As of late 2021, the majority of Media Composer users were on subscription licenses and using the modern user interface.
Hardware
Avid Mojo DX: a newer version of the Mojo with architecture offering faster processing and full 1920x1080 HD resolution in addition to standard definition video. This interface has SDI/HD-SDI inputs and outputs, HDMI outputs and stereo 1/4" TRS audio inputs and outputs.
Avid Nitris DX: a replacement of the Adrenaline hardware, a successor to the original Avid Nitris (used with Avid DS and Avid Symphony), with architecture offering faster processing and full 1920x1080 HD resolution (without extra cards) in addition to standard definition video. This interface also has a hardware DNxHD codec. Video connections include SDI, HD-SDI, Composite, S-Video and Component (SD or HD) inputs and outputs, it also has a HDMI output. Audio connections include XLR, AES, optical S/PDIF and ADAT inputs and outputs. It also has RCA inputs and 1/4" TRS outputs, plus LTC timecode I/O. Starting with Media Composer v5.5 an optional AVC-Intra codec module can be installed in the Nitris DX for native playback of this format. With Media Composer v6.0 is it now possible to have two DNxHD or AVC-Intra modules installed for dual stream stereoscopic capture and full resolution stereoscopic playback.
Hardware history
Media Composer as standalone software (with optional hardware) has only been available since June 2006 (version 2.5). Before that, Media Composer was only available as a turnkey system.
The 1990s
From 1991 until 1998, Media Composer 1000, 4000 and 8000 systems were Macintosh-only, and based on the NuVista videoboard by Truevision. The first-release Avids (US) supported 640x480 30i video, at resolutions and compression identified by the prefix "AVR". Single-field resolutions were AVR 1 through 9s; interlaced (finishing) resolutions were initially AVR 21–23, with the later improvements of AVR 24 through 27, and the later AVR 70 through 77. AVR12 was a two-field interlaced offline resolution. Additionally, Avid marketed the Media Composer 400 and 800 as offline-only editors. These systems exclusively used external fast SCSI drives (interfaced through a SCSI accelerator board) for media storage. Avid media was digitised as OMFI (Open Media Framework Interchange) format.
In the mid-nineties, versions 6 and 7 of Media Composer 1000, 8000 and 9000 were based on the Avid Broadcast Video Board (ABVB), supporting video resolutions up to AVR77. The video image was also improved to 720x480. 3D add-on boards (most notably the external Pinnacle Alladin and the internal Pinnacle Genie Pro board, connected through a special 100-pin bypass cable) and 16bit 48K 4-channel and 8-channel audio I/O (Avid/DigiDesign 442 and Avid/DigiDesign 888) were optional.
The 1998 introduction of the Avid Symphony marked the transition from ABVB to the Meridien hardware, allowing for uncompressed SD editing. This was also the first version of Media Composer XL available for the Windows operating system. Many users were concerned that Avid would abandon the Mac platform, which it ultimately did not do. Media Composer XL versions 8 through 12.0.5 (models MC Offline XL, MC 1000 XL, MC 9000XL) were built around Meridien hardware. Compression options were expressed as ratios for the first time in the evolution of the product. Even though the video board had changed, the audio I/O was still handled by the Avid/DigiDesign 888 (16bit 48K) hardware. At this time, 16x9 aspect ratios began to be supported.
The 2000s
Avid Media Composer Meridien was released through November, 2003.
In 2003, Avid Mojo and Avid Adrenaline formed the new DNA (Digital Non-linear Accelerator) hardware line. The launch of Avid Media Composer Adrenaline brought along a software version renumbering, as it was labeled Avid Media Composer Adrenaline 1.0. At this time, Avid began using MXF (Material Exchange Format) formatting for media files. Avid products maintain compatibility with OMFI files.
Adrenaline was the first Media Composer system to support 24bit audio. It also meant the end of Film Composer and Media Composer Offline, since the Avid Media Composer Adrenaline featured most of the film options and online resolutions and features. From this point onward, Avid systems have supported media storage using SCSI, PCI-e, SATA, IEEE 1394a & b, Ethernet and fiberoptic interfaces.
In 2006, Media Composer 2.5 was the first version to be offered 'software-only', giving the user the option of purchasing and using the software without the additional cost of the external accelerators. Software-only Avid setups could use third-party breakout boxes, usually interfaced via FireWire, to acquire video from SDI and analog sources.
In 2008, the Mojo DX and Nitris DX were introduced, replacing the Adrenaline. Both are capable of handling uncompressed HD video, with the Nitris DX offering greater processing speed and input/output flexibility.
Avid designed hardware
Avid systems used to ship with Avid-branded I/O boxes such as the Mojo, Adrenaline and Nitris, but in recent years Avid has ceased producing its own hardware and has instead collaborated with companies such as Blackmagic Design and AJA, releasing customised, Avid-branded I/O boxes such as the DNxIO, DNxIQ and DNxIV.
Third-party supported hardware
Starting with Media Composer 6, a new Open IO API allowed third-party companies to interface their hardware with Media Composer. AJA Video Systems, Blackmagic Design, Matrox, BlueFish and MOTU support this API. Avid's own DX hardware is still natively interfaced into the application, which currently allows some extra features that Open IO lacks (LTC timecode support, for example). It is expected that over time some of these missing APIs will be added.
AJA IO Express: starting with Media Composer 5.5, Avid introduced support for the AJA IO Express interface. This interface allows SD/HD input and output via SDI and HDMI. It also has analog video and audio outputs for monitoring. It connects to a computer via a PCIe or ExpressCard/34 interface.
Matrox MXO2 Mini: starting with Media Composer 5, Avid introduced support for the Matrox MXO2 Mini interface as a breakout box with no additional processing. While this interface does have input connections, only output is supported by Media Composer v5.x; starting with Media Composer v6.x, capture is also supported. The connections on the unit support analog video/audio and HDMI in both SD and HD formats. The device is connected by a cable to either a PCIe card or an ExpressCard/34 interface, so this unit can be used on a desktop or laptop system.
Avid Media Composer compatible hardware is manufactured by AJA Video Systems, Blackmagic Design, BlueFish, Matrox and MOTU.
Discontinued hardware
Avid Mojo: includes Composite and S-Video with two channels of RCA audio. There is an optional component video cable that can be added to this interface. This interface only supports SD video formats.
Avid Mojo SDI: includes Composite, S-Video, Component and SDI video, with 4 channels RCA, 4 channels AES and 2 channels optical S/PDIF audio. This interface only supports SD video formats.
Avid Adrenaline: rack-mountable interface which includes Composite, S-Video, Component and SDI video, 4 channels of XLR, 4 channels of AES, 2 channels of S/PDIF and 8 channels of ADAT audio. This interface also has an expansion slot for the DNxcel card, which adds HD-SDI input and output as well as DVI and HD component outputs. The DNxcel card uses Avid's DNxHD compression, which is available in 8-bit color formats up to 220 Mbit/s as well as a 10-bit color format at 220 Mbit/s. The DNxcel card also adds real-time SD down-convert and HD cross-convert.
Avid Mojo DX: rack-mountable interface with various I/O
Avid Nitris DX: rack-mountable interface with various I/O
Features
Key features
Animatte
3D Warp
Paint
Live Matte Key
Tracker / Stabiliser
Timewarps with motion estimation (FluidMotion)
SpectraMatte (high quality chroma keyer)
Color Correction toolset (with Natural Match)
Stereoscopic editing abilities (expanded in MC v6)
AMA - Avid Media Access, the ability to link to and edit with P2, XDCAM, R3D, QuickTime and AVCHD native material directly without capture or transcoding.
Mix and Match - put clips of any frame rate, compression, scan mode or video format on the same timeline
SmartTools - drag-and-drop style editing on the timeline; can be selectively limited to the types of actions the user wants available when clicking on the timeline.
RTAS - (RealTime AudioSuite), support for realtime track-based audio plug-ins on the timeline.
5.1 and 7.1 Surround Sound audio mixing, compatible with Pro Tools
PhraseFind - analyses clips and indexes all dialog phonetically allowing text search of spoken words. (reacquired as of 8.9.3)
ScriptSync (with Nexidia phonetic indexing and sync) (reacquired as of 8.9.3)
Color correction
Avid Symphony includes Advanced/Secondary/Relational Color Correction and Universal HD Mastering. Starting with version 7, Symphony became a paid option for Media Composer; with version 8, it was included with monthly and annual subscription licenses.
Software protection
The software used to be protected by means of a "blesser" floppy tied to the NuBus TrueVista board (meaning that if the board was replaced, a new "blesser" floppy came with the new board), and later with USB dongles. As of version 3.5 the dongle is optional, and existing users may choose to use software activation or keep using their dongles, while new licenses are sold exclusively with software activation. The software ships with installers for both Mac and Windows and can physically be installed on several computers, allowing the user to move the software license between systems or platforms depending on the licensing method.
Licensing options
With Media Composer 8, Avid introduced monthly and annual subscription licensing systems similar to Adobe Creative Cloud, allowing users to install and activate Avid without purchasing a perpetual license. Media Composer licenses must be confirmed either by Avid's internet servers every 30 days or by an on-site floating license server. Starting with version 8, updates and support for perpetual licenses also require annual support agreements; support is included with subscription licenses.
Installers
The installer used to include installers for:
EDL Manager
Avid Log Exchange (no longer in v8)
FilmScribe
MediaLog (no longer in v8)
Interplay Transfer
MetaSync Manager (no longer in v6)
MetaSync Publisher (no longer in v6)
MetaFuze (Windows only), a standalone application to convert files (R3D, DPX, TIFF) from film scanning, CGI systems or RED cameras into MXF media files. It is based on an import module taken from Avid DS.
Third-party software
Some boxed versions of Media Composer came with the following third party software:
Avid FX - 2D & 3D compositing and titling software (aka Boris RED)
Sorenson Squeeze - Compression software to create Windows Media, QuickTime, MPEG-1/2, MPEG-4 or Flash video (v8 monthly/annual subscription only)
SonicFire Pro 5 - music creation software (includes 2 CDs of music tracks)
Avid DVD by Sonic - DVD and Blu-ray authoring software (Windows only; no longer updated as of v8)
NewBlue Titler Pro - 2D and 3D video title software (v8 perpetual licenses bundled with v1, subscription licenses with v2)
Boris Continuum Complete - 2D and 3D graphics and effects (v8 monthly/annual subscription only)
Revisions and features
References
External links
Video editing software
Film and video technology
Video editing software for macOS
Video editing software for Windows |
55711279 | https://en.wikipedia.org/wiki/DataCore | DataCore | DataCore, also known as DataCore Software, is a developer of software-defined storage based in Fort Lauderdale, Florida, United States. The company is a pioneer in the development of SAN virtualization technology, and offers software-defined storage solutions for block, file, and object storage.
History
DataCore was founded in Fort Lauderdale in February 1998 by George Teixeira and Ziya Aral, co-workers at parallel computing company Encore Computer. The premise behind the company was to allow network operators to purchase commodity disk drives, external storage arrays or SAN disk drive arrays, and treat them all as virtual disks of networked, block-access storage. This storage was controlled using DataCore software.
They were joined by 10 other former Encore colleagues, and they all worked without pay until January 1999, when the company secured its first funding round, of $8 million.
In 2000, the company had a $35 million Series C funding round.
In 2006, seeing an exodus of venture funding, company employees mortgaged their homes to keep the business going, until 2008 when a US$30 million round of funding stabilized company finances.
In 2011, the company launched SANsymphony-V, an upgrade to its storage virtualization software offering faster performance.
In April 2014, the company released version 10 of its SANsymphony product.
In March 2015, DataCore partnered with Chinese technology vendor Huawei to run SANsymphony-V software on Huawei's FusionServer to create virtual storage networks.
In 2016, the company's SANsymphony-V software was reported to have set new price performance records based on testing done by Redwood City, California-based non-profit testing company Storage Performance Council using their SPC-1 storage performance benchmark. The results led to complaints from multiple vendors, who claimed that storing all the "test" data in cache made the results unfair. One of the three SPC-1 benchmark results was later withdrawn.
In March 2017, the company partnered with technology company Lenovo to develop its data center business by integrating DataCore's SANsymphony software-defined storage with Lenovo's servers. This was reportedly to compete with companies like Nutanix and SimpliVity (now part of Hewlett Packard Enterprise (HPE)) that were shipping whole hyper-converged stacks rather than just a software-defined storage component. In September 2017, in an attempt to compete with the in-memory database features of SQL Server, the company released its MaxParallel driver, which uses parallel I/O technology to accelerate database-related processing such as SQL Server workloads. The product was discontinued in August 2018.
In April 2018 DataCore announced that Dave Zabrowski, previously CEO of cloud-based financial services company Cloud Cruiser, was its new CEO, and former CEO George Teixeira was named Executive Chairman.
In October 2019, DataCore was awarded a patent for performing parallel I/O operations.
In February 2020, DataCore, together with AME Cloud Ventures and Insight Partners, invested $26 million in Palo Alto-based MayaData. In the same month, DataCore launched a global research and development center in Bangalore, India.
In January 2021, DataCore acquired Caringo, Inc., enabling the company to offer a complete storage solution portfolio including block, file, and object storage. DataCore announced the global availability of DataCore Swarm object storage software in April 2021 as a result of the acquisition. In November 2021, DataCore acquired MayaData, the original developer of cloud-native storage platform OpenEBS and Mayastor.
Products/technology
DataCore develops software to help companies manage their data storage infrastructure.
SANsymphony – Virtualizes block storage across a range of storage devices (SAN and HCI) and provides uniform data services across all of them.
vFilO – Simplifies shared access, control, and protection of distributed file systems.
Swarm – On-premises object storage platform that simplifies data access, delivery, and archiving.
References
External links
Official website
Computer storage companies
Storage Area Network companies
Companies based in Fort Lauderdale, Florida
Computer companies established in 1998
Storage software
Storage virtualization |
762226 | https://en.wikipedia.org/wiki/GFS2 | GFS2 | In computing, the Global File System 2 or GFS2 is a shared-disk file system for Linux computer clusters. GFS2 allows all members of a cluster to have direct concurrent access to the same shared block storage, in contrast to distributed file systems which distribute data throughout the cluster. GFS2 can also be used as a local file system on a single computer.
GFS2 has no disconnected operating-mode, and no client or server roles. All nodes in a GFS2 cluster function as peers. Using GFS2 in a cluster requires hardware to allow access to the shared storage, and a lock manager to control access to the storage.
The lock manager operates as a separate module: thus GFS2 can use the Distributed Lock Manager (DLM) for cluster configurations and the "nolock" lock manager for local filesystems. Older versions of GFS also support GULM, a server-based lock manager which implements redundancy via failover.
GFS and GFS2 are free software, distributed under the terms of the GNU General Public License.
History
Development of GFS began in 1995 and was originally developed by University of Minnesota professor Matthew O'Keefe and a group of students. It was originally written for SGI's IRIX operating system, but in 1998 it was ported to Linux since the open source code provided a more convenient development platform. In late 1999/early 2000 it made its way to Sistina Software, where it lived for a time as an open-source project. In 2001, Sistina made the choice to make GFS a proprietary product.
Developers forked OpenGFS from the last public release of GFS and then further enhanced it to include updates allowing it to work with OpenDLM. But OpenGFS and OpenDLM became defunct, since Red Hat purchased Sistina in December 2003 and released GFS and many cluster-infrastructure pieces under the GPL in late June 2004.
Red Hat subsequently financed further development geared towards bug-fixing and stabilization. A further development, GFS2 derives from GFS and was included along with its distributed lock manager (shared with GFS) in Linux 2.6.19. Red Hat Enterprise Linux 5.2 included GFS2 as a kernel module for evaluation purposes. With the 5.3 update, GFS2 became part of the kernel package.
GFS2 forms part of the Fedora, Red Hat Enterprise Linux and associated CentOS Linux distributions. Users can purchase commercial support to run GFS2 fully supported on top of Red Hat Enterprise Linux. As of Red Hat Enterprise Linux 8.3, GFS2 is supported in cloud computing environments in which shared storage devices are available.
The following list summarizes some version numbers and major features introduced:
v1.0 (1996) SGI IRIX only
v3.0 Linux port
v4 journaling
v5 Redundant Lock Manager
v6.1 (2005) Distributed Lock Manager
Linux 2.6.19 - GFS2 and DLM merged into Linux kernel
Red Hat Enterprise Linux 5.3 releases the first fully supported GFS2
Hardware
The design of GFS and of GFS2 targets SAN-like environments. Although it is possible to use them as a single node filesystem, the full feature-set requires a SAN. This can take the form of iSCSI, FibreChannel, AoE, or any other device which can be presented under Linux as a block device shared by a number of nodes, for example a DRBD device.
The DLM requires an IP-based network over which to communicate. This is normally just Ethernet, but again, there are many other possible solutions. Depending upon the choice of SAN, it may be possible to combine the DLM and storage traffic on a single network, but normal practice involves separate networks for the DLM and storage.
The GFS requires a fencing mechanism of some kind. This is a requirement of the cluster infrastructure, rather than GFS/GFS2 itself, but it is required for all multi-node clusters. The usual options include power switches and remote access controllers (e.g. DRAC, IPMI, or ILO). Virtual and hypervisor-based fencing mechanisms can also be used. Fencing is used to ensure that a node which the cluster believes to be failed cannot suddenly start working again while another node is recovering the journal for the failed node. It can also optionally restart the failed node automatically once the recovery is complete.
Differences from a local filesystem
Although the designers of GFS/GFS2 aimed to emulate a local filesystem closely, there are a number of differences to be aware of. Some of these are due to the existing filesystem interfaces not allowing the passing of information relating to the cluster. Some stem from the difficulty of implementing those features efficiently in a clustered manner. For example:
The flock() system call on GFS/GFS2 is not interruptible by signals.
The fcntl() F_GETLK system call returns the PID of any blocking lock. Since this is a cluster filesystem, that PID might refer to a process on any of the nodes which have the filesystem mounted. Since the purpose of this interface is to allow a signal to be sent to the blocking process, this is no longer possible (see the sketch after this list).
Leases are not supported with the lock_dlm (cluster) lock module, but they are supported when used as a local filesystem
dnotify will work on a "same node" basis, but its use with GFS/GFS2 is not recommended
inotify will also work on a "same node" basis, and is also not recommended (but it may become supported in the future)
splice is supported on GFS2 only
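To make the F_GETLK point concrete, the sketch below queries a conflicting POSIX lock in C. It is illustrative only: the file path is a hypothetical placeholder for a file on a GFS2 mount, and the key observation is that on a cluster mount the returned l_pid may belong to a process on another node, so it cannot safely be used as a target for kill().

```c
/* Illustrative sketch: querying a conflicting POSIX lock with F_GETLK.
 * The path below is a placeholder for a file on a GFS2 mount. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/gfs2/shared.dat", O_RDWR);  /* hypothetical path */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct flock fl;
    memset(&fl, 0, sizeof(fl));
    fl.l_type = F_WRLCK;      /* ask: could we take a write lock...  */
    fl.l_whence = SEEK_SET;   /* ...covering the whole file?         */
    fl.l_start = 0;
    fl.l_len = 0;

    if (fcntl(fd, F_GETLK, &fl) == -1) {
        perror("fcntl(F_GETLK)");
    } else if (fl.l_type == F_UNLCK) {
        printf("no conflicting lock\n");
    } else {
        /* On a cluster filesystem this PID may identify a process on
         * another node, so it must not be used with kill(). */
        printf("blocked by a lock held by pid %ld\n", (long)fl.l_pid);
    }

    close(fd);
    return 0;
}
```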
The other main difference, and one that is shared by all similar cluster filesystems, is that the cache control mechanism, known as glocks (pronounced Gee-locks) for GFS/GFS2, has an effect across the whole cluster. Each inode on the filesystem has two glocks associated with it. One (called the iopen glock) keeps track of which processes have the inode open. The other (the inode glock) controls the cache relating to that inode. A glock has four states, UN (unlocked), SH (shared – a read lock), DF (deferred – a read lock incompatible with SH) and EX (exclusive). Each of the four modes maps directly to a DLM lock mode.
When in EX mode, an inode is allowed to cache data and metadata (which might be "dirty", i.e. waiting for write back to the filesystem). In SH mode, the inode can cache data and metadata, but it must not be dirty. In DF mode, the inode is allowed to cache metadata only, and again it must not be dirty. The DF mode is used only for direct I/O. In UN mode, the inode must not cache any metadata.
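The direct mapping to DLM modes mentioned above can be summarised in a few lines. The sketch below is an illustration using simplified names rather than the kernel's own constants, and the exact correspondence should be confirmed against the lock_dlm module of the kernel version in use.

```c
/* Illustrative mapping of GFS2 glock states to DLM lock modes.
 * Names are simplified for illustration; consult the lock_dlm
 * module for the authoritative mapping. */
#include <stdio.h>

enum glock_state { GL_UN, GL_SH, GL_DF, GL_EX };

static const char *dlm_mode(enum glock_state s)
{
    switch (s) {
    case GL_UN: return "NL (null lock)";        /* no caching at all         */
    case GL_SH: return "PR (protected read)";   /* clean data and metadata   */
    case GL_DF: return "CW (concurrent write)"; /* metadata only, direct I/O */
    case GL_EX: return "EX (exclusive)";        /* may hold dirty data       */
    }
    return "?";
}

int main(void)
{
    enum glock_state states[] = { GL_UN, GL_SH, GL_DF, GL_EX };
    const char *names[] = { "UN", "SH", "DF", "EX" };
    int i;

    for (i = 0; i < 4; i++)
        printf("glock %s -> DLM %s\n", names[i], dlm_mode(states[i]));
    return 0;
}
```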
In order that operations which change an inode's data or metadata do not interfere with each other, an EX lock is used. This means that certain operations, such as create/unlink of files from the same directory and writes to the same file should be, in general, restricted to one node in the cluster. Of course, doing these operations from multiple nodes will work as expected, but due to the requirement to flush caches frequently, it will not be very efficient.
The single most frequently asked question about GFS/GFS2 performance is why the performance can be poor with email servers. The solution is to break up the mail spool into separate directories and to try to keep (so far as is possible) each node reading and writing to a private set of directories.
Journaling
GFS and GFS2 are both journaled file systems, and GFS2 supports a similar set of journaling modes to ext3. In data=writeback mode, only metadata is journaled. This is the only mode supported by GFS; however, journaling can be turned on for individual data files, but only when they are of zero size. Journaled files in GFS have a number of restrictions placed upon them, such as no support for the mmap or sendfile system calls; they also use a different on-disk format from regular files. There is also an "inherit-journal" attribute which, when set on a directory, causes all files (and sub-directories) created within that directory to have the journal (or inherit-journal, respectively) flag set. This can be used instead of the data=journal mount option which ext3 supports (and GFS/GFS2 does not).
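The journal flag mentioned above is the per-file attribute that the chattr utility toggles. The sketch below is illustrative only: it creates an empty file on a hypothetical GFS2 mount and requests journaled data through the generic inode-flags ioctl. Whether a particular GFS/GFS2 version honours FS_JOURNAL_DATA_FL through this interface, and the path used, are assumptions.

```c
/* Illustrative sketch: requesting journaled data on a new, zero-length
 * file via the generic inode-flags ioctl, as the chattr utility does.
 * Whether a given GFS/GFS2 version accepts FS_JOURNAL_DATA_FL this way
 * is an assumption; the path below is a placeholder. */
#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/gfs2/journaled.dat", O_CREAT | O_RDWR | O_EXCL, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    int flags = 0;
    if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == -1) {
        perror("FS_IOC_GETFLAGS");
        return 1;
    }

    flags |= FS_JOURNAL_DATA_FL;        /* equivalent of `chattr +j` */
    if (ioctl(fd, FS_IOC_SETFLAGS, &flags) == -1) {
        perror("FS_IOC_SETFLAGS");      /* e.g. refused on non-empty files */
        return 1;
    }

    printf("journaled-data flag set\n");
    close(fd);
    return 0;
}
```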
GFS2 also supports data=ordered mode which is similar to data=writeback except that dirty data is synced before each journal flush is completed. This ensures that blocks which have been added to an inode will have their content synced back to disk before the metadata is updated to record the new size and thus prevents uninitialised blocks appearing in a file under node failure conditions. The default journaling mode is data=ordered, to match ext3's default.
GFS2 does not yet support data=journal mode, but it does (unlike GFS) use the same on-disk format for both regular and journaled files, and it also supports the same journaled and inherit-journal attributes. GFS2 also relaxes the restrictions on when a file may have its journaled attribute changed to any time that the file is not open (also the same as ext3).
For performance reasons, each node in GFS and GFS2 has its own journal. In GFS the journals are disk extents, in GFS2 the journals are just regular files. The number of nodes which may mount the filesystem at any one time is limited by the number of available journals.
Features of GFS2 compared with GFS
GFS2 adds a number of new features which are not in GFS. Here is a summary of those features not already mentioned in the boxes to the right of this page:
The metadata filesystem (really a different root) – see Compatibility and the GFS2 meta filesystem below
GFS2 specific trace points have been available since kernel 2.6.32
The XFS-style quota interface has been available in GFS2 since kernel 2.6.33
Caching ACLs have been available in GFS2 since 2.6.33
GFS2 supports the generation of "discard" requests for thin provisioning/SCSI TRIM requests
GFS2 supports I/O barriers (on by default, assuming underlying device supports it. Configurable from kernel 2.6.33 and up)
FIEMAP ioctl (to query mappings of inodes on disk)
Splice (system call) support
mmap/splice support for journaled files (enabled by using the same on disk format as for regular files)
Far fewer tunables (making set-up less complicated)
Ordered write mode (as per ext3, GFS only has writeback mode)
Compatibility and the GFS2 meta filesystem
GFS2 was designed so that upgrading from GFS would be a simple procedure. To this end, most of the on-disk structure has remained the same as GFS, including the big-endian byte ordering. There are a few differences though:
GFS2 has a "meta filesystem" through which processes access system files
GFS2 uses the same on-disk format for journaled files as for regular files
GFS2 uses regular (system) files for journals, whereas GFS uses special extents
GFS2 has some additional system files
The layout of the inode is (very slightly) different
The layout of indirect blocks differs slightly
The journaling systems of GFS and GFS2 are not compatible with each other. Upgrading is possible by means of a conversion tool which is run with the filesystem off-line to update the metadata. Some spare blocks in the GFS journals are used to create the (very small) files required by GFS2 during the update process. Most of the data remains in place.
The GFS2 "meta filesystem" is not a filesystem in its own right, but an alternate root of the main filesystem. Although it behaves like a "normal" filesystem, its contents are the various system files used by GFS2, and normally users do not need to ever look at it. The GFS2 utilities mount and unmount the meta filesystem as required, behind the scenes.
See also
Comparison of file systems
GPFS, ZFS, VxFS
Lustre
GlusterFS
List of file systems
OCFS2
QFS
SAN file system
Fencing
Open-Sharedroot
Ceph (software)
References
External links
Red Hat Enterprise Linux 6 - Global File System 2
Red Hat Cluster Suite and GFS Documentation Page
GFS Project Page
OpenGFS Project Page (obsolete)
The GFS2 development git tree
The GFS2 utilities development git tree
Distributed file systems supported by the Linux kernel
Red Hat software
Shared disk file systems
University of Minnesota software
Virtualization-related software for Linux |
30644212 | https://en.wikipedia.org/wiki/IBM%20XL%20C/C%2B%2B%20Compilers | IBM XL C/C++ Compilers | XL C/C++ is the name of IBM's proprietary optimizing C/C++ compiler for IBM-supported environments.
Compiler
The IBM XL compilers are built from modularized components consisting of front ends (for different programming languages), a platform-agnostic high-level optimizer, and platform-specific low-level optimizers/code generators to target specific hardware and operating systems. The XL C/C++ compilers target POWER, BlueGene/Q, and IBM Z hardware architectures.
A common high level optimizer across the POWER and z/OS XL C/C++ compilers optimizes the source program using platform-agnostic optimizations such as interprocedural analysis, profile-directed feedback, and loop and vector optimizations.
A low-level optimizer on each platform performs function-level optimizations and generates optimized code for a specific operating system and hardware platforms.
The particular optimizations performed for any given compilation depend upon the optimization level chosen under option control (O2 to O5) along with any other optimization-related options, such as those for interprocedural analysis or loop optimizations.
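As an illustration of how these controls are typically combined, the sketch below pairs a small C source file with a hedged build recipe in its comments. The option names (-O3, -O5, -qhot, -qipa, -qpdf1/-qpdf2) reflect commonly documented XL C/C++ options, but exact spellings, defaults and supported levels vary by compiler version and platform, so the recipe is an assumption to be verified against the product documentation.

```c
/* saxpy.c -- a loop that benefits from the XL optimizer's loop and
 * interprocedural transformations.
 *
 * Possible build recipes (option names as commonly documented for
 * XL C/C++; verify against the installed compiler version):
 *
 *   xlc -O3 -qhot -qipa saxpy.c -o saxpy    # loop opts + interprocedural analysis
 *   xlc -O5 -qpdf1 saxpy.c -o saxpy         # instrumented build
 *   ./saxpy                                 # collect the profile
 *   xlc -O5 -qpdf2 saxpy.c -o saxpy         # profile-directed rebuild
 */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

static void saxpy(float a, const float *x, float *y, int n)
{
    int i;
    for (i = 0; i < n; i++)          /* candidate for unrolling/vectorization */
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    int i;

    if (!x || !y)
        return 1;

    for (i = 0; i < N; i++) {
        x[i] = (float)i;
        y[i] = 1.0f;
    }

    saxpy(2.0f, x, y, N);
    printf("y[42] = %f\n", y[42]);

    free(x);
    free(y);
    return 0;
}
```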
A 60-day installable evaluation version is available for download for XL C/C++ for AIX. In June 2016, IBM introduced XL C/C++ for Linux Community Edition, which is a no-charge and fully functional edition for unlimited production use.
The XL compilers on AIX have delivered leadership scores in the SPEC CPU2000 and CPU2006 benchmarks, in combination with specific IBM POWER system processor announcements, for example a SPEC CPU2006 Floating Point score of 71.5 in May 2010 and a score of 4051 in August 2006.
Current versions of XL C/C++ for AIX (16.1) and XL C/C++ for Linux (16.1.1), are based on open-source Clang front end (part of the Clang/LLVM open source project). They provide support for C11, C++03, C++11, and C++14.
A new monthly pricing option is offered in XL C/C++ for AIX 16.1 and XL Fortran for AIX 16.1 to provide more flexibility for cloud-based use cases. This pricing model is on a term or subscription basis, with Software Subscription and Support included.
With the launch of IBM Power10, the IBM XL C/C++ for AIX compiler has been modernized and re-branded to IBM Open XL C/C++ for AIX. IBM Open XL C/C++ for AIX 17.1.0 combines Clang/LLVM technology with IBM's industry-leading optimizations, which provides the following improved capabilities:
Greater application performance
Enhanced language standard support
Enhanced GCC compatibilities
Faster build speed
IBM Open XL compilers offer monthly licenses (per Virtual Processor Core) to facilitate the journey to hybrid cloud. Meanwhile, user-based licenses (i.e. Authorized user and Concurrent user licenses) are still available.
The z/OS XL C/C++ compiler exploits the latest IBM Z® systems, including the latest IBM z15™ servers. It enables the development of high-performing business applications and system programs on z/OS while maximizing hardware use and improving application performance. IBM z/OS XL C/C++ uses services provided by the z/OS Language Environment® and Runtime Library Extensions base elements. It supports embedded CICS® and SQL statements in C/C++ source, which simplifies the operation of C/C++ within CICS and Db2® environments. It works in concert with the IBM Application Delivery Foundation for z/OS.
IBM XL C/C++ V2.4.1 for z/OS® V2.4 web deliverable is the latest offering from the IBM XL C/C++ compiler family, which provides new C and C++ compilers that adopt the Clang infrastructure from the LLVM open source community for a portion of the compilers. z/OS XL C/C++ V2.4.1 is designed to aid in porting code from other platforms to z/OS and to give a more familiar view to developers who are accustomed to a UNIX environment. IBM XL C/C++ V2.4.1 for z/OS V2.4 provides support for the core C11 standard and most of the C++11 and C++14 standard features for easier application migration to IBM Z® servers.
Products
The XL C/C++ compiler family consists of the following products, with most recent version and release dates where known:
XL C/C++ for AIX (Version 16.1, December 2018)
XL C for AIX (Version 13.1.3, December 2015)
XL C/C++ for Linux on Power for little-endian distributions (Version 16.1.1, November 2018)
XL C/C++ for Linux on Power for big-endian distributions (Version 13.1, June 2014)
z/OS XL C/C++ (Version 2.4, Sep 2019)
z/OS XL C/C++ (Version 2.3, September 2017)
z/OS XL C/C++ (Version 2.2, September 2015)
XL C/C++ for z/VM (Version 1.3, December 2011)
XL C/C++ for Linux on z Systems (Version 1.1, January 2015)
XL C/C++ for Blue Gene/Q (Version 12.1, June 2012)
XL C/C++ Advanced Edition for Blue Gene (Version 9.0, September 2007, withdrawn August 2009)
The Open XL C/C++ compiler family consists of the following products, with most recent version and release dates where known:
Open XL C/C++ for AIX (Version 17.1.0, Sep 2021)
See also
IBM VisualAge – the predecessor product
List of compilers
References
External links
Product documentation: Open XL C/C++ for AIX 17.1.0
Product documentation: XL C/C++ for Linux 16.1.1
Product documentation: XL C/C++ for AIX 16.1
Product documentation: XL C for AIX 13.1.3
Product documentation: z/OS XL C/C++, V2.4
Product documentation: z/OS XL C/C++, V2.3
Product documentation: z/OS XL C/C++, V2.2
Product page: z/OS XL C/C++
Community: IBM C/C++ and Fortran compilers on Power® community
Community: IBM C/C++ compilers for IBM Z
C++ compilers
C (programming language) compilers
IBM software |
60847967 | https://en.wikipedia.org/wiki/ADARA%20Networks | ADARA Networks | Adara Networks (stylized as "ADARA Networks") is an American software company.
History
The company creates SDN (software-defined networking) infrastructure orchestration software and provides cloud computing. It has several dozen partners in its channel program. Adara's cloud software includes an SDI Visualizer for topological rendering and an SLA Manager for determining cost efficiency, and uses Sirius Routers. The company later developed its Horizon SDA Platform, which includes an Ecliptic SDN controller, an Axis vSwitch, a SoftSwitch, and a cloud computing engine.
In 2008 Adara developed a networking electronic medical records project for the US Congress. Adara has served on Industry Advisory Panels for the Congress as well.
The company has held contracts with the Department of Defense, and spent its first ten years or so working in the public sphere before opening up to private companies, including SMEs, in 2011.
In 2012 Adara created a full stack network for its cloud, and in 2013 its controller became open source. Then in 2016, Adara partnered with Calient Technologies to develop an integrated SD-WAN. The company's CEO is Eric Johnson.
In 2020, ADARA Networks was acknowledged as an industry leader in SDN.
References
External links
Official page
Twitter
Software companies of the United States
Companies based in California
1998 establishments in the United States
1998 establishments in California
Software companies established in 1998
Companies established in 1998 |
64136422 | https://en.wikipedia.org/wiki/2020%E2%80%9321%20USC%20Trojans%20women%27s%20basketball%20team | 2020–21 USC Trojans women's basketball team | The 2020–21 USC Trojans women's basketball team represented the University of Southern California during the 2020–21 NCAA Division I women's basketball season. The Trojans play their home games at the Galen Center and are members of the Pac-12 Conference. The squad was led by head coach Mark Trakh, who was in the 4th year of his 2nd stint (9th year overall). This year, the season was shortened to accommodate safety measures due to the COVID-19 pandemic. As such, no fans were permitted at any of the games.
USC finished the regular season 10–11 (8–10) and earned the 8th seed in the 2021 Pac-12 Conference Women's Basketball Tournament.
Previous season
The 2019–20 Women of Troy finished unranked with an overall record of 17–14. In Pac-12 play, their record was 8–10, and they finished in seventh place. Because of the pandemic, the previous season ended abruptly after the Pac-12 Tournament, at which the Women of Troy reached the quarterfinals.
Offseason changes
Departures
Incoming transfers
2020 Recruiting class
Current roster
Player recognition
Alyson Miura
Pac-12 Academic Honor Roll
Amaya Oliver
All Pac-12 Freshman Team Honorable Mention
India Otto
Pac-12 Academic Honor Roll
Alissa Pili
Pili was selected as Pac-12 Freshman of the Year for the previous season. She was also a member of the All Pac-12 Team and the Freshman All Pac-12 Team.
Pili was recognized with preseason awards as a member of the 2020–21 Pac-12 Women's Basketball Preseason Media All-Conference Team.
Just before the season started, Pili was added to the watchlists for both the Katrina McClain Power Forward of the Year Award and the 2021 Jersey Mike's Naismith Trophy.
Honored as Pac-12 Player of the Week on February 8, 2021.
All Pac-12 Honorable Mention
Endiya Rogers
The Pac-12 recognized Rogers with a preseason All-Conference Honorable Mention.
On February 1, 2021, Rogers was recognized as the Pac-12 Player of the Week.
All Pac-12 Team
Jordan Sanders
On December 28, 2020, Sanders was announced as the Pac-12 Player of the Week.
Two days later, she was recognized again, this time as a member of the NCAA's Starting Five for Week 5.
All Pac-12 Honorable Mention
Injuries
Tinner (knee) made her debut as a Trojan on December 13, 2020 against UCLA.
Jenkins (foot) debuted against Utah on January 8, 2021.
Campbell was out from January 8, 2021 until February 5, 2021.
Pili (ankle) returned to play against Washington State on January 15, 2021.
Aaron (ankle) made her return to the court against UCR on January 17, 2021. She had been out for nearly two years.
Jackson missed the Long Beach State game. She was then absent for the away games against Washington and Washington State.
Miura (knee) made her first appearance this season on February 5, 2021 against Washington.
Sanders was injured on January 31, 2021 against ASU. She returned to play on February 12, 2021 against Colorado.
Schedule
Regular Season
Pac-12 Women's Tournament
Rankings
2020–21 NCAA Division I women's basketball rankings
References
USC Trojans women's basketball seasons
USC
USC Trojans basketball, women
23631278 | https://en.wikipedia.org/wiki/Atomic%20authorization | Atomic authorization | Atomic authorization is the act of securing authorization rights independently from the intermediary applications to which they are granted and the parties to which they apply. More formally, in the field of computer security, to atomically authorize is to define policy that permits access to a specific resource, such that the authenticity of such policy may be independently verified without reliance on the application that enforces the policy or the individuals who use the application. Resources include access to individual data, computer programs, computer hardware, computer networks, and physical access.
Traditional vs. atomic authorization
In traditional (non-atomic) authorization, policy is defined and secured at an application level. That is, outside the context of the application, there is no mechanism to verify the legitimacy of traditional authorization policy. Atomic authorization requires a trusted third party to issue authorization policy with a cryptographic guarantee of integrity. Because it is secured independently of the applications that use it, atomic authorization policy is equivalent in strength to strong authentication policy.
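To make the distinction concrete, the sketch below (an illustration only, not taken from any cited system) models an atomically authorized policy as a self-describing record whose integrity tag is checked against key material held for the trusted issuer before the application enforces it. For simplicity it uses an HMAC rather than the public-key signature a real deployment would normally use, and the policy fields, key, and build step (linking against OpenSSL's libcrypto) are all assumptions.

```c
/* Illustrative sketch of verifying an "atomic" authorization policy
 * independently of the application that enforces it.  For simplicity
 * an HMAC stands in for the issuer's digital signature; a real system
 * would use asymmetric signatures and proper key management.
 * Build (assumed): cc policy.c -lcrypto */
#include <openssl/hmac.h>
#include <openssl/evp.h>
#include <openssl/crypto.h>
#include <stdio.h>
#include <string.h>

/* Key material for the trusted policy issuer (placeholder value). */
static const unsigned char issuer_key[] = "demo-issuer-key";

static int policy_is_authentic(const char *policy,
                               const unsigned char *tag, unsigned int tag_len)
{
    unsigned char expect[EVP_MAX_MD_SIZE];
    unsigned int expect_len = 0;

    HMAC(EVP_sha256(), issuer_key, (int)(sizeof issuer_key - 1),
         (const unsigned char *)policy, strlen(policy), expect, &expect_len);

    return tag_len == expect_len && CRYPTO_memcmp(tag, expect, tag_len) == 0;
}

int main(void)
{
    /* Policy issued by the trusted third party, not by the application. */
    const char *policy = "subject=alice;resource=/records/42;action=read";

    /* Tag as issued (recomputed locally here so the example is runnable). */
    unsigned char tag[EVP_MAX_MD_SIZE];
    unsigned int tag_len = 0;
    HMAC(EVP_sha256(), issuer_key, (int)(sizeof issuer_key - 1),
         (const unsigned char *)policy, strlen(policy), tag, &tag_len);

    if (policy_is_authentic(policy, tag, tag_len))
        printf("policy verified independently of the application: grant\n");
    else
        printf("policy rejected: deny\n");
    return 0;
}
```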
For an application using strong (N-factor) authentication, traditional authorization techniques pose a security vulnerability. The application must rely upon technologies like database queries or directory lookups, which are protected using single-factor authentication, for authorization information and management. Any application specific hardening of non-atomic authorization methods increases the complexity of identity management and issuing credentials, but does not further legitimize the authorization decisions that the application makes.
See also
Security engineering
Computer security
Authentication
Access control
References
External links
Computer access control |
6749285 | https://en.wikipedia.org/wiki/Ivan%20Trojan | Ivan Trojan | Ivan Trojan (born 30 June 1964) is a Czech actor, widely considered to be one of the greatest Czech actors of all time. With four Czech Lions for Best Actor in a Leading Role, he has also won two for his supporting roles in Seducer and One Hand Can't Clap, making him the most awarded performer at the Czech Lion Awards.
He is acclaimed for his performances in the films Loners (2000), Želary (2003), Václav (2007), The Karamazovs (2008), In the Shadow (2012) and Angel of the Lord 2 (2016), all gaining success at the box office and with critics. He is also known for his award-winning and lauded appearances at the Dejvice Theatre, the Vinohrady Theatre and the Summer Shakespeare Festival, including Stanley Kowalski in A Streetcar Named Desire, Demetrius in A Midsummer Night's Dream, Eugen Bazarov in Fathers and Sons and the Father in The Brothers Karamazov.
Career
Trojan was born in Prague. He graduated from the Faculty of Theatre of the Academy of Performing Arts in Prague in 1988 and Realistické divadlo Zdeňka Nejedlého (RDZN) in Prague-Smíchov. He holds a Master of Fine Arts degree from the Academy. In 1992 he moved to Divadlo na Vinohradech (DNV). In 1997 he decided to move to a newly established Dejvické divadlo (DD).
At the International TV Festival in Monte Carlo 2013, Ivan Trojan was awarded the prize of Golden Nymph for the Best Actor in the mini-series Burning Bush.
Personal life
He is the son of actor Ladislav Trojan and brother of producer and director Ondřej Trojan. He is married to actress Klára Pollertová-Trojanová, with whom he has four sons: František (born 1999), Josef (born 2001), Antonín (born 2009) and Václav (born 2012).
Theatre
Dejvice Theatre
Teremin (2005) .... Léon Theremin, nominated for Alfréd Radok Award
A Streetcar Named Desire (2003) .... Stanley
Sic (2003) .... Theo (by Melissa James Gibson)
Three Sisters (2002) .... Aleksander Ignayevitch Vershinin
Tales of Common Insanity (2001) .... Petr
Oblomov (2000) .... The Title Role - received Thalia Award, nominated for Alfréd Radok Award
The Brothers Karamazov (2000) .... Father Karamazov - Devil
The Incredible and Sad Tale of Innocent Eréndira and her Heartless Grandmother (1999) .... Red Indian
The Government Inspector (1998) .... Anton Antonovitch Skvoznik - Duchanovskij, hetman
Utišující metoda (1997) .... Professor Maillard (by Edgar Allan Poe), nominated for Thalia Award
Vinohrady Theatre
The Brothers Karamazov (1997) .... Ivan
A Flea in Her Ear (1996) .... Kamil Champsboisy (by Georges Feydeau)
Jacobowski and the Colonel (1995) .... Head of Policemen (by Franz Werfel)
Fathers and Sons (1995) .... Eugen Bazarov
Clown (August August August) (1994) .... August jr. (by Pavel Kohout)
Le baruffe chiozzotte (1994) .... Commissioner (by Carlo Goldoni)
Romeo and Juliet (1992) .... Romeo
A Midsummer Night's Dream (1990) .... Demetrius, Summer Shakespeare Festival
Merlin oder das wüste Land (1988) .... Parsifal (by Tankred Dorst, RDZN)
Other
Nesles Tower (1996) .... Night of Orgies (by Pierre Henri Cami), Divadlo Viola
Selected filmography
2000 – Četnické humoresky (Bedřich Jarý)
2000 – Loners (Ondřej), nominated for Czech Lion Award
2002 – The Brats (Marek Sir (father)), received Czech Lion Award
2002 – Seducer (Karel)
2003 – Želary (Richard)
2003 – One Hand Can't Clap (Zdenek), received Czech Lion Award for best actor, also co-writer of the screenplay
2005 – Wrong Side Up (Petr Hanek)
2005 – Angel of the Lord (Petronel)
2007 – Medvídek (Ivan)
2007 – Václav (Václav Vingl)
2008 – The Karamazovs (father)
2012 – In the Shadow
2012 – Burning Bush (Hořící keř) .... Major Jireš
2016 - Angel of the Lord 2 (Petronel)
Dubbing works
2016 - Finding Dory - Marlin (Albert Brooks)
2016 - Kung Fu Panda 3 - Master Monkey (Jackie Chan)
2014 - Touch - Martin Bohm (Kiefer Sutherland)
2013 - Luftslottet som sprängdes - Mikael Blomkvist (Michael Nyqvist)
2013 - Flickan som lekte med elden - Mikael Blomkvist (Michael Nyqvist)
2013 - The Girl With The Dragon Tattoo - Mikael Blomkvist (Michael Nyqvist)
2013 - Epic - Mandrake (Christoph Waltz)
2012 - The Killing - Troels Hartmann (Lars Mikkelsen)
2011 - Kung Fu Panda 2 - Master Monkey (Jackie Chan)
2010 - Megamind - Minion (David Cross)
2009 - Monsters Vs. Aliens - Dr. Cockroach (Hugh Laurie)
2008 - Kung Fu Panda - Master Monkey (Jackie Chan)
2008 - Madagascar: Escape 2 Africa - Makunga (Alec Baldwin)
2006 - Over the Hedge - RJ (Bruce Willis)
2004–2013 - 24 - Jack Bauer (Kiefer Sutherland)
2003 - Finding Nemo - Marlin (Albert Brooks)
2002 - Look Who's Talking - Mikey (Bruce Willis)
2001 - Look Who's Talking - Mikey (Bruce Willis)
2001 - Ally McBeal - Mark Albert (James Le Gros)
1997 - Friends - Pete Becker (Jon Favreau)
1992 - Back to the Future Part II - Biff Tannen (Thomas F. Wilson) (Cinema Dubbing)
1991 - Back to the Future - Biff Tannen (Thomas F. Wilson) (Cinema Dubbing)
External links
Ivan Trojan at CFN.cz
References
1964 births
Living people
Czech male stage actors
Czech male film actors
Male actors from Prague
Academy of Performing Arts in Prague alumni
20th-century Czech male actors
21st-century Czech male actors
Czech male voice actors
Czech Lion Awards winners |
405489 | https://en.wikipedia.org/wiki/Low-complexity%20art | Low-complexity art | Low-complexity art, first described by Jürgen Schmidhuber in 1997 and now established as a seminal topic within the larger field of computer science, is art that can be described by a short computer program (that is, a computer program of small Kolmogorov complexity).
Overview
Schmidhuber characterizes low-complexity art as the computer age equivalent of minimal art. He also describes an algorithmic theory of beauty and aesthetics based on the principles of algorithmic information theory and minimum description length. It explicitly addresses the subjectivity of the observer and postulates that among several input data classified as comparable by a given subjective observer, the most pleasing one has the shortest description, given the observer's previous knowledge and his or her particular method for encoding the data. For example, mathematicians enjoy simple proofs with a short description in their formal language (sometimes called mathematical beauty). Another example draws inspiration from 15th century proportion studies by Leonardo da Vinci and Albrecht Dürer: the proportions of a beautiful human face can be described by very few bits of information.
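As a loose, concrete proxy for "short description", the sketch below uses an off-the-shelf compressor (zlib) in place of the observer's subjective coding scheme and compares how many bytes are needed to encode a highly regular byte pattern versus pseudo-random noise. The choice of compressor, the buffer size, and the build step (-lz) are all assumptions of this illustration, not part of the cited theory.

```c
/* Crude illustration of "short description" as compressibility: a fixed
 * general-purpose compressor (zlib) stands in for the observer's coding
 * scheme, which a faithful implementation would instead learn and update.
 * Build (assumed): cc complexity.c -lz */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

#define N 65536

static unsigned long compressed_size(const unsigned char *buf, size_t n)
{
    uLongf out_len = compressBound(n);
    unsigned char *out = malloc(out_len);

    if (!out || compress2(out, &out_len, buf, n, Z_BEST_COMPRESSION) != Z_OK)
        out_len = 0;                     /* treat failure as "no estimate" */
    free(out);
    return (unsigned long)out_len;
}

int main(void)
{
    static unsigned char regular[N], noise[N];
    size_t i;

    for (i = 0; i < N; i++) {
        regular[i] = (unsigned char)(i % 16);        /* highly regular       */
        noise[i] = (unsigned char)(rand() & 0xff);   /* nearly incompressible */
    }

    printf("regular pattern: %lu -> %lu bytes\n",
           (unsigned long)N, compressed_size(regular, N));
    printf("pseudo-random:   %lu -> %lu bytes\n",
           (unsigned long)N, compressed_size(noise, N));
    return 0;
}
```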
Schmidhuber explicitly distinguishes between beauty and interestingness. He assumes that any observer continually tries to improve the predictability and compressibility of the observations by discovering regularities such as repetitions and symmetries and fractal self-similarity. When the observer's learning process (which may be a predictive neural network) leads to improved data compression the number of bits required to describe the data decreases. The temporary interestingness of the data corresponds to the number of saved bits, and thus (in the continuum limit) to the first derivative of subjectively perceived beauty. A reinforcement learning algorithm can be used to maximize the future expected data compression progress. It will motivate the learning observer to execute action sequences that cause additional interesting input data with yet unknown but learnable predictability or regularity. The principles can be implemented on artificial agents which then exhibit a form of artificial curiosity.
While low-complexity art does not require a priori restrictions of the description size, the basic ideas are related to the size-restricted intro categories of the demoscene, where very short computer programs are used to generate pleasing graphical and musical output. Very small (usually C) programs that create music have been written: the style of this music has come to be called "bytebeat".
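A minimal example of this idiom is the "bytebeat" sketch below: a few dozen characters of C describe an endless, structured audio stream, so the music's Kolmogorov complexity is bounded by the length of the formula. The expression used is one of the well-known simple bytebeat formulas; the output is raw 8-bit audio that would typically be piped into a player at roughly 8 kHz (the exact playback command is left as an assumption about the listener's system).

```c
/* A classic bytebeat: the whole "composition" is the one-line formula.
 * Emits raw unsigned 8-bit samples on stdout, intended for playback at
 * roughly 8 kHz (e.g. piped into a raw-audio player of your choice). */
#include <stdio.h>

int main(void)
{
    for (unsigned long t = 0;; t++) {
        unsigned char sample = t * ((t >> 12 | t >> 8) & 63 & t >> 4);
        putchar(sample);
    }
    return 0;
}
```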
The larger context
The larger context provided by the histories of both art and science suggests that low-complexity art will continue to be a topic of growing interest.
In respect to the trajectory of science and technology, for example, low-complexity art may represent another case in which the relatively new discipline of computer science is able to shed fresh light on a disparate subject — the classic example being those insights into the functioning of the genetic code garnered in no small part because of a familiarity with issues already raised in the practice of software engineering. We may thus expect the topic of low-complexity art to help foster a continued and fruitful interaction between the fields of computer science and aesthetics. Nor will the insights gained be purely qualitative; indeed, the formalizations on which low-complexity art is based are essentially quantitative.
In respect to art history, likewise, the potential relevance of low-complexity art extends far beyond the minimalistic Renaissance encoding of beauty already cited in its literature. The idea of an intimate relationship between mathematical structure and visual appeal is one of the recurring themes of Western art and is prominent during several of its periods of fluorescence including that of dynastic Egypt; Greece of the classic era; the Renaissance (as already noted); and on into the Geometric abstraction of the 20th century, especially as practiced by Georges Vantongerloo and Max Bill.
See also
Infinite compositions of analytic functions
References
External links
Schmidhuber's Papers on Low-Complexity Art & Theory of Subjective Beauty
Schmidhuber's Papers on Interestingness as the First Derivative of Subjective Beauty
Examples of Low-Complexity Art in a German TV show (May 2008)
Computer art
Computational complexity theory |
69248246 | https://en.wikipedia.org/wiki/Identity%20safety%20cues | Identity safety cues | Identity safety cues are aspects of an environment or setting that signal to members of stigmatized groups that the threat of discrimination is limited within that environment and/or that their social identities are welcomed and valued. Identity safety cues have been shown to reduce the negative impacts impact of social identity threats, which are when people experience situations where they feel devalued on the basis of a social identity (see Stereotype Threat). Such threats have been shown to undermine performance in academic and work-related contexts and make members of stigmatized groups feel as though they do not belong. Identity safety cues have been proposed as a way of alleviating the negative impact of stereotype threat or other social identity threats, reducing disparities in academic performance for members of stigmatized groups (see Achievement Gaps in the US), and reducing health disparities caused by identity related stressors.
Research has shown that identity safety cues targeted towards one specific group can lead individuals with other stigmatized identities to believe their identities will be respected and valued in that environment. Further, the implementation of identity safety cues in existing research did not cause members of non-stigmatized groups to feel threatened or uncomfortable. In fact, some work has suggested that the benefits of identity safety cues extend to members of non-stigmatized groups. For example, implementation of identity safety cues within a university context has been shown to increase student engagement and efficacy and to reduce the average number of student absences for all students, but especially those from stigmatized groups. Several types of identity safety cues have been identified.
Types of cues
Diversity philosophies and programing
There is evidence suggesting that when individuals or organizations communicate that they value diversity highly, concerns about identity threats are reduced. For example, Hall and colleagues tested the impact of communicating gender inclusive policies on self-reported belonging of women working at engineering firms. Across two studies, Hall and colleagues found that when women working at engineering firms were presented with information communicating gender inclusive policies, they reported increased belonging, fewer concerns about experiencing gender stereotyping in the workplace, and expected to have more pleasant conversations with male coworkers.
Within a classroom context, exposure to information stating that instructors or schools hold multicultural philosophies has been shown to increase student agency, self-confidence, and classroom engagement for students from stigmatized groups. Exposure to diversity philosophies and programming can have a lasting effect. In a recent study, Birnbaum and colleagues had first-year college students read a diversity statement that represented the schools’ diversity philosophy as either being in favor of multiculturalism or colorblindness. The students’ academic progress was tracked over the course of the next two years. Students from stigmatized groups who read the multicultural diversity statement had increased academic performance over the course of the two years compared to students who read the colorblind diversity statement. Similarly, a 2021 study found that when university students were presented with information about equity and non-discrimination policies in the classroom, students from stigmatized groups reported greater belonging within the classroom and reported fewer absences than students who were not presented with the same equity and non-discrimination policies. Further, students in this study also reported perceiving the instructor as behaving in a more inclusive manner and reported greater concerns about addressing social inequities when they were presented with information about equity and non-discrimination policies.
However, the evidence for the effectiveness of diversity philosophies and programming alone is mixed. For example, Valarie Purdie-Jones and colleagues ran a study comparing the effects of Black representation within the workplace and organizational claims of valuing diversity on Black professionals' sense of organizational trust and belonging. Black professionals who were presented with information showing that an organization had a higher number of Black employees reported feeling greater organizational trust and belonging. Similarly, organizational claims of valuing diversity led to an increased sense of organizational trust and belonging. However, the type of diversity philosophy communicated influenced how effective the philosophy was at increasing organizational trust and belonging. Black professionals who received information stating that the organization held a color-blind philosophy of diversity (i.e., the idea that differences are insignificant and should not be attended to; see Color Blindness) felt lower organizational trust and belonging than Black professionals who received information stating that the organization held a multicultural perspective (i.e., the idea that differences between social groups are meaningful as diverse perspectives offer unique insight and strengths; see Multiculturalism). Similarly, a 2015 study from Wilton and colleagues exposed participants to either a colorblind or multicultural diversity statement and then measured their expectations about anticipated bias and racial and gender diversity. Participants who were exposed to a colorblind diversity statement expected to experience increased levels of bias and expected less racial and gender diversity than participants who were exposed to a multicultural diversity statement.
Minority representation cues
One form of identity safety cues that has shown promise is invoking the real or imagined presence of other members of stigmatized groups as a way of suggesting that ones’ social identity will not be devalued and are safe. The majority of this work has been done amongst racial minorities and women in contexts where they represent a numerical minority (e.g., STEM contexts, in male dominated workplaces). For example, a 2007 study explored the impacts of signaling balanced versus unbalanced gender ratios in STEM on belonging for female and male STEM majors. In this study, women who watch a video that showed a much larger number of men at a STEM conference exhibited greater levels of cognitive and physiological vigilance, reported a lower sense of belonging, and less desire to participate in the conference. However, women who watched a video showing a roughly equal number of men and women at the same STEM conference exhibited less vigilance, reported a heightened sense of belonging, and a greater desire to participate in the conference. Men's vigilance, sense of belonging, and desire to participate in the conference were unaffected by watching either video.
Having a role model with a shared stigmatized identity (e.g., female students having a female professor role model) has also been shown to have similar positive effects. For example, a 2019 study explored the effects of having a Roma or non-Roma role model on Roma children in Slovakia. Presenting Roma children with a known role model from their ethnic group was shown to increase academic achievement and reduce stereotype threat, as opposed to presenting them with role models from different ethnic groups. Similarly, research has found that being exposed to a female role model can help to reduce the identity threat women experience after being exposed to information about the biases women face in STEM.
Even though the majority of this work has been done in STEM contexts, similar work has been done in the context of the workplace. For instance, a 2008 study found that Black professionals who were presented with information showing that an organization had a higher number of Black employees felt a greater sense of trust and belonging in that organization compared to Black professionals who were presented with information showing that an organization had a small number of Black employees. However, for these cues to be effective, they must reasonably reflect the actual percentage of individuals that hold a stigmatized identity within a given context. A 2020 study found that whenever racial and ethnic minorities perceive an organization as falsely inflating the percent of employees that hold a stigmatized identity, racial and ethnic minorities report increased concerns about belonging, performance, and expressing themselves.
Environmental features
Environmental cues are features of an environment that reduce identity threat by communicating inclusive norms and values. Typically, these cues include background objects (e.g., posters, items on a table) or counter-stereotypic imagery (e.g., a rainbow flag in a gym predominantly frequented by heterosexual men and women). Studies exploring the impact of environmental cues on belonging in STEM by members of marginalized groups have found strong evidence for the power of environmental cues to influence one's sense of belonging and concerns about discrimination. For example, a 2009 study found that changing objects in a computer science classroom from objects considered stereotypical of computer science (e.g., Star Trek poster, video games) to non-stereotypical objects (e.g., nature poster, phone books) increased female STEM majors’ sense of belonging and interest in computer science by reducing associations between computer science and masculine stereotypes. In a different domain, the presence of gender-inclusive bathrooms is associated with greater perceived fairness within the workplace, more positive perceptions of workplace climate for women and racial and ethnic minorities, and increased perceptions of the workplace as egalitarian.
The presence of environmental cues has also been associated with differences in academic outcomes as well. For example, a 2013 study randomly assigned male and female students to give a persuasive speech in a virtual-reality classroom that had a photograph of a male world leader, a female world leader, or no photograph. When the room featured either a photograph of a male world leader or no picture, male students gave speeches that were longer and rated as better than the female students’ speeches. However, the presence of female leader photographs increased female students’ speaking time and their speeches were rated as higher quality. In a similar study, American Indian high school students who were randomly assigned to see stereotypic American Indian imagery in a classroom (e.g., Chief Wahoo of the Cleveland Indians) were less likely to mention academic achievement when asked about where they imagined themselves in the future than American Indian students who saw no image or a counter-stereotypic poster of an American Indian woman in front of a microscope.
Identity safe information
Another form of identity safety cues that has shown promise is providing members of stigmatized groups with information that reduces the importance or relevance of negative stereotypes, conveys non-biased expectations, and/or conveys a positive climate for members of stigmatized groups. The majority of this work has been done in academic contexts in order to reduce the impact of stereotype threat. For example, a prominent 1999 study explored if stereotype threat among female students could be reduced by telling the class that prior administrations of the math exam they were about to take had revealed no gender differences in performance. When students were informed that they were taking a “gender fair” math exam, female students performed equally well to male students taking the same exam. However, when female students were told before that the exam had been shown to produce gender differences, female students performed worse than male students. Similarly, telling women that there are no differences in women's and men's leadership abilities has been shown to eliminate gender gaps in leadership aspirations.
However, other studies have found that merely providing identity safe information alone is sometimes not enough to reduce stereotype threat or identity threat. For example, in one study women were presented with a text explaining that stereotypes and not gender differences were responsible for academic performance gaps between men and women and were then asked to complete a math task. It was found that women who were presented with information about stereotype threat and gender differences in academic outcomes performed significantly worse at the math task.
More recently, information about expectations for discrimination (or lack thereof) have also been explored as an identity safety cue. For example, a 2020 study from Murrar and colleagues explored the impact of informing university students that most of their peers endorsed positive diversity related values, cared strongly about inclusion in university classrooms, and typically behaved in a non-discriminatory manner. Being presented with this information caused all students, regardless of their background, to evaluate classroom climate more positively and to report more positive attitudes toward members of stigmatized groups. Further, students from stigmatized groups reported greater sense of belonging and better self-reported physical health. In a similar study, Black women who were informed of the presence of a non-Black female ally reported an increased sense of belonging in the workplace.
Contexts
Education
Much of the research on identity safety cues came from early attempts to mitigate the detrimental effects of stereotype threat. For instance, one of the first studies to use what is now known as an identity safety cue explored the impact of telling female students that there were no gender differences in a math exam (i.e., presenting identity safe information). A large portion of current research on identity safety cues continues to explore ways to reduce educational disparities between members of stigmatized groups and members of non-stigmatized groups.
Workplace
Another major focus of identity safety cue research is on methods that can successfully increase the belonging and retention of members of stigmatized groups within the workplace. For example, a 2015 study explored the impact of different philosophies of intelligence on female employees' expectations of being stereotyped in the workplace and their organizational belonging. When a consulting company's mission statement or website conveyed the belief that intelligence and abilities are malleable, rather than fixed, women trusted the company more and expected to be stereotyped less. However, gender representation within the company did not affect women's trust in the company. Similarly, a 2019 study found that Latina women felt greater trust, belonging, and interest in a fictional STEM company when learning about a Latinx scientist employee rather than a White scientist (regardless of the scientist's gender). More recently, a 2021 study explored whether the presence of an employee's pronouns in an employee biography acted as an identity-safety cue for sexual and gender minorities. The inclusion of pronouns resulted in more positive organizational attitudes among gender and sexual minority participants and increased perceptions of coworker allyship, regardless of whether the organization made the disclosure of pronouns required or optional.
Healthcare
While the majority of research on identity safety cues has been done in either academic or workplace contexts, there has been a recent push to explore the effectiveness of these cues in healthcare contexts to see if they can help address disparities in health outcomes between members of stigmatized groups and members of non-stigmatized groups. For example, a recent study explored the impact of minority representation cues and communicating organizational diversity philosophies on Black and Latinx participants’ perceptions of a physician’s racial biases, cultural competence, and general expectations of a visit with that physician. While physicians’ diversity statements did not influence participants’ anticipated quality of the visit, being informed that the physician had a diverse clientele increased Black and Latinx participants’ anticipated comfort and their perceptions that they would receive better treatment. Similarly, in another recent study, researchers explored how minority representation cues and physicians’ diversity statements might influence sexual minorities’ perceptions of physician bias, cultural competence, anticipated comfort, expectations, and comfort disclosing their sexuality while visiting a physician. Both the diversity statement and minority representation cues reduced perceptions of physician bias, but only minority representation cues increased perceptions of the physician as culturally competent, increased anticipated comfort and quality, and led to greater comfort disclosing their sexuality. Related work has also been done with fathers in medical contexts. For instance, a 2019 study found that prenatal doctor's offices with environmental safety cues (e.g., pictures of fathers with babies) increased expectant fathers’ comfort with attending prenatal appointments and led to greater parenting confidence and comfort, increased intentions to learn about pregnancy, and greater intentions to engage in healthy habits to aid their partner (e.g., avoiding smoking and alcohol during their partner's pregnancy).
See also
Stereotype threat
Racial achievement gap in the United States
Health disparities
References
Wikipedia Student Program
Behavior |
68446311 | https://en.wikipedia.org/wiki/Unqork | Unqork | Unqork is a cloud computing and enterprise software company based in New York, NY that offers a no-code development platform-as-a-service (PaaS) for building enterprise applications. Unqork supports organizations in finance, insurance, healthcare, and government.
History
Unqork was founded in 2017 by CEO Gary Hoberman, who was formerly the CIO of the insurance company MetLife.
In February 2020, the company completed a series B funding round of $131M. In March, the New York City Department of Information Technology and Telecommunications used the Unqork platform to build a digital portal that would allow free meals to be delivered to New York City residents during the COVID-19 pandemic. In August, Crain's New York Business reported that the company was one of New York City's fastest growing startups. In October, Unqork announced that it had raised an additional $207 million in a Series C funding round led by accounts managed by BlackRock, Inc., bringing the company's valuation to more than $2 billion.
In May 2021, Chicago's Department of Housing, in partnership with the Resurrection Project, used the Unqork platform to distribute rent relief funds to Chicago residents. That same month, a Forrester report on low-code platforms dubbed Unqork "rookie of the year", though the company differentiates no-code from low-code. In a July report, research firm HFS Solutions reported that the company's customers included Goldman Sachs, Liberty Mutual, and the Cities of New York, Chicago and Washington DC.
Platform and services
Unqork's no-code platform allows users to design applications and other digital solutions through an entirely visual interface. Software is designed by configuring reusable visual components representing end-user-facing elements and application logic.
The company's software includes a marketplace featuring pre-built apps and third-party consultancy services, including integrations with software including SendGrid, Twilio, and DocuSign. Unqork 2021.5 adds new software development life cycle (SDLC) features, such as API auto-documentation, data model auto-documentation, and application rollback.
References
Software companies based in New York City
Business software companies
Software companies established in 2017 |
2302474 | https://en.wikipedia.org/wiki/MBus%20%28SPARC%29 | MBus (SPARC) | MBus is a computer bus designed and implemented by Sun Microsystems for communication between high speed computer system components, such as the central processing unit, motherboard and main memory. SBus is used in the same machines to connect add-on cards to the motherboard.
MBus was first used in Sun's first multiprocessor SPARC-based system, the SPARCserver 600MP series (launched in 1991), and later found use in the SPARCstation 10 and SPARCstation 20 workstations. The bus permits the integration of several microprocessors on a single motherboard, in a multiprocessing configuration with up to eight CPUs packaged in detachable MBus modules. In practice, the number of processors per MBus is limited to four. Single processor systems were also sold that use the MBus protocol internally, but with the CPUs permanently attached to the motherboard to lower manufacturing costs.
MBus specifies a 64-bit datapath and uses 36-bit physical addressing, giving an address space of 64 GB. The transfer rate is 80 MB/s sustained (320 MB/s peak) at 40 MHz, or 100 MB/s sustained (400 MB/s peak) at 50 MHz. Bus control is handled by an arbiter. Interrupt, reset, and timeout logic are also specified.
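The quoted figures follow directly from the bus width, the addressing width, and the clock rate. The short Python sketch below reproduces them as a back-of-the-envelope check; it is an illustration only and not part of the MBus specification.

# Back-of-the-envelope check of the MBus figures quoted above.
DATAPATH_BITS = 64          # 64-bit datapath
ADDRESS_BITS = 36           # 36-bit physical addressing

# 2^36 bytes of addressable memory, expressed in GB (2^30 bytes)
address_space_gb = 2 ** ADDRESS_BITS / 2 ** 30
print(f"Address space: {address_space_gb:.0f} GB")          # 64 GB

# Peak transfer rate: one 64-bit (8-byte) word per clock cycle,
# with MB taken as 10^6 bytes to match MHz.
for clock_mhz in (40, 50):
    peak_mb_s = (DATAPATH_BITS // 8) * clock_mhz
    print(f"{clock_mhz} MHz: peak {peak_mb_s} MB/s")         # 320 and 400 MB/s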
Related buses
Several related buses were also developed:
XBus
XBus is a packet-switched bus used in the SPARCserver 1000, SPARCcenter 2000 and Cray CS6400. This corresponds to the circuit-switched MBus, with identical electrical characteristics and physical form factor but an incompatible signalling protocol.
KBus
KBus is a high-speed interconnection system for linking multiple MBuses, used in Solbourne Computer Series 6 and Series 7 computer systems.
History
The MBus standard was cooperatively developed by Sun and Ross Technology and released in 1991.
Manufacturers who produced computer systems using the MBus included Sun, Ross Technology, Inc., Hyundai/Axil, Fujitsu, Solbourne Computer, Tatung, GCS, Auspex, ITRI, ICL, Cray, Amdahl, Themis, DTK and Kamstrup.
See also
List of device bandwidths
References
External links
The Rough Guide to MBus Modules, sunhelp.org
MBus Specification
Computer buses
Sun Microsystems hardware |
69621036 | https://en.wikipedia.org/wiki/Georgia%20Institute%20of%20Technology%20School%20of%20Cybersecurity%20and%20Privacy | Georgia Institute of Technology School of Cybersecurity and Privacy | The School of Cybersecurity and Privacy (SCP) is an academic unit located within the College of Computing at the Georgia Institute of Technology. This interdisciplinary unit draws its faculty from the College of Computing as well as the College of Engineering, the School of Public Policy, the Sam Nunn School of International Affairs, the Scheller College of Business, and the Georgia Tech Research Institute (GTRI). Faculty are engaged in both research and teaching activities related to computer security and privacy at the undergraduate and graduate levels. The school's unifying vision is to keep "cyberspace safer and more secure."
History
The School of Cybersecurity and Privacy was founded in 2020 and Richard DeMillo was appointed as the school's founding chair. The creation of the school represented an enlargement and continuation of the vision held by the Institute for Information Security & Privacy (IISP), the former organizing locus of cybersecurity research at Georgia Tech.
Degrees offered
The School of Cybersecurity and Privacy offers bachelor's degrees, master's degrees, and doctoral degrees in several fields. These degrees are technically granted by the School's parent organization, the Georgia Tech College of Computing, and often awarded in conjunction with other academic units within Georgia Tech.
Doctoral degrees
Ph.D. in Computer Science
Ph.D. in Electrical & Computer Engineering
Master's degrees
M.S. in Computer Science
M.S. in Cybersecurity
Bachelor's degrees
B.S. in Computer Science
Research
The faculty and students of the school lead and conduct a variety of research in areas including Cyber-physical systems, information security, Internet of Things (IoT), networking, and policy. Notable labs include the GTRI Cyber Technology and Information Security Laboratory (CIPHER) founded in 2010, and the Georgia Tech Information Security Center (GTISC) founded in 1998.
Location
The School of Cybersecurity and Privacy's administrative offices, as well as those of most of its faculty and graduate students, are located in the CODA Building.
See also
Georgia Institute of Technology College of Computing
References
Georgia Tech colleges and schools
Educational institutions established in 2020
2020 establishments in Georgia (U.S. state)
Computer security organizations
Information technology schools |
1804746 | https://en.wikipedia.org/wiki/Distributed%20version%20control | Distributed version control | In software development, distributed version control (also known as distributed revision control) is a form of version control in which the complete codebase, including its full history, is mirrored on every developer's computer. Compared to centralized version control, this enables automatic management of branching and merging, speeds up most operations (except pushing and pulling), improves the ability to work offline, and does not rely on a single location for backups. Git, the world's most popular version control system, is a distributed version control system.
In 2010, software development author Joel Spolsky described distributed version control systems as "possibly the biggest advance in software development technology in the [past] ten years".
Distributed vs. centralized
Distributed version control systems (DVCS) use a peer-to-peer approach to version control, as opposed to the client–server approach of centralized systems. Distributed revision control synchronizes repositories by transferring patches from peer to peer. There is no single central version of the codebase; instead, each user has a working copy and the full change history.
Advantages of DVCS (compared with centralized systems) include:
Allows users to work productively when not connected to a network.
Common operations (such as commits, viewing history, and reverting changes) are faster for DVCS, because there is no need to communicate with a central server. With DVCS, communication is necessary only when sharing changes among other peers.
Allows private work, so users can use their changes even for early drafts they do not want to publish.
Working copies effectively function as remote backups, which avoids relying on one physical machine as a single point of failure.
Allows various development models to be used, such as using development branches or a Commander/Lieutenant model.
Permits centralized control of the "release version" of the project
On FOSS software projects it is much easier to create a project fork from a project that is stalled because of leadership conflicts or design disagreements.
Disadvantages of DVCS (compared with centralized systems) include:
Initial checkout of a repository is slower as compared to checkout in a centralized version control system, because all branches and revision history are copied to the local machine by default.
The lack of the locking mechanisms that are part of most centralized VCS; these still play an important role for non-mergeable binary files such as graphic assets, or for overly complex single-file binary or XML packages (e.g. office documents, PowerBI files, SQL Server Data Tools BI packages).
Additional storage required for every user to have a complete copy of the complete codebase history.
Increased exposure of the code base since every participant has a locally vulnerable copy.
Some originally centralized systems now offer some distributed features. For example, Subversion is able to do many operations with no network access. Team Foundation Server and Visual Studio Team Services now host both centralized and distributed version control repositories by hosting Git.
Similarly, some distributed systems now offer features that mitigate the issues of checkout times and storage costs, such as the Virtual File System for Git developed by Microsoft to work with very large codebases, which exposes a virtual file system that downloads files to local storage only as they are needed.
Work model
The distributed model is generally better suited for large projects with partly independent developers, such as the Linux kernel project, because developers can work independently and submit their changes for merge (or rejection). The distributed model flexibly allows adopting custom source code contribution workflows. The integrator workflow is the most widely used. In the centralized model, developers must serialize their work, to avoid problems with different versions.
Central and branch repositories
Every project has a central repository that is considered the official repository, which is managed by the project maintainers. Developers clone this repository to create identical local copies of the code base. Source code changes in the central repository are periodically synchronized with the local repository.
The developer creates a new branch in their local repository and modifies source code on that branch. Once the development is done, the change needs to be integrated into the central repository.
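The clone, branch, and publish cycle described above maps onto a handful of ordinary Git commands. The sketch below drives them from Python purely as an illustration; the repository URL and branch name are placeholders, and it assumes Git is installed and reachable on the PATH.

# Illustrative clone -> branch -> commit -> push cycle against a central repository.
# The URL and branch name below are placeholders, not a real project.
import subprocess

CENTRAL_REPO = "https://example.com/project.git"   # hypothetical central repository
BRANCH = "feature/my-change"                       # hypothetical feature branch

def git(*args, cwd=None):
    """Run a git command and fail loudly if it returns a non-zero status."""
    subprocess.run(["git", *args], cwd=cwd, check=True)

git("clone", CENTRAL_REPO, "project")          # full local copy, including history
git("checkout", "-b", BRANCH, cwd="project")   # work happens on a local branch
# ... edit files in project/ ...
git("add", "-A", cwd="project")
git("commit", "-m", "Describe the change", cwd="project")
git("push", "origin", BRANCH, cwd="project")   # publish the branch for review and merge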
Pull requests
Contributions to a source code repository that uses a distributed version control system are commonly made by means of a pull request, also known as a merge request. The contributor requests that the project maintainer pull the source code change, hence the name "pull request". The maintainer has to merge the pull request if the contribution should become part of the source base.
The developer creates a pull request to notify maintainers of a new change; a comment thread is associated with each pull request. This allows for focused discussion of code changes. Submitted pull requests are visible to anyone with repository access. A pull request can be accepted or rejected by maintainers.
Once the pull request is reviewed and approved, it is merged into the repository. Depending on the established workflow, the code may need to be tested before being included into official release. Therefore, some projects contain a special branch for merging untested pull requests. Other projects run an automated test suite on every pull request, using a continuous integration tool such as Travis CI, and the reviewer checks that any new code has appropriate test coverage.
History
The first open-source DVCS systems included Arch, Monotone, and Darcs. However, open source DVCSs were never very popular until the release of Git and Mercurial.
BitKeeper was used in the development of the Linux kernel from 2002 to 2005. The development of Git, now the world's most popular version control system, was prompted by the decision of the company that made BitKeeper to rescind the free license that Linus Torvalds and some other Linux kernel developers had previously taken advantage of.
See also
References
External links
Essay on various revision control systems, especially the section "Centralized vs. Decentralized SCM"
Introduction to distributed version control systems - IBM Developer Works article
Version control
Free software projects
Free version control software
Distributed version control systems
59114 | https://en.wikipedia.org/wiki/Packet%20analyzer | Packet analyzer | A packet analyzer, also known as a packet sniffer, protocol analyzer, or network analyzer, is a computer program or a piece of computer hardware (such as a packet capture appliance) that can intercept and log traffic that passes over a computer network or part of a network. Packet capture is the process of intercepting and logging traffic. As data streams flow across the network, the analyzer captures each packet and, if needed, decodes the packet's raw data, showing the values of various fields in the packet, and analyzes its content according to the appropriate RFC or other specifications.
A packet analyzer used for intercepting traffic on wireless networks is known as a wireless analyzer or WiFi analyzer. While a packet analyzer can also be referred to as a network analyzer or protocol analyzer, these terms can also have other meanings. "Protocol analyzer" can technically refer to a broader, more general class of tools that includes packet analyzers/sniffers; however, the terms are frequently used interchangeably.
Capabilities
On wired shared medium networks, such as Ethernet, Token Ring, and FDDI, it may be possible to capture all traffic on the network from a single machine, depending on the network structure (hub or switch). On modern networks, traffic can be captured using a switch that supports port mirroring, which mirrors all packets passing through designated ports of the switch to another port. A network tap is an even more reliable solution than a monitoring port, since taps are less likely to drop packets during high traffic load.
On wireless LANs, traffic can be captured on one channel at a time, or by using multiple adapters, on several channels simultaneously.
On wired broadcast and wireless LANs, to capture unicast traffic between other machines, the network adapter capturing the traffic must be in promiscuous mode. On wireless LANs, even if the adapter is in promiscuous mode, packets not for the service set the adapter is configured for are usually ignored. To see those packets, the adapter must be in monitor mode. No special provisions are required to capture multicast traffic to a multicast group the packet analyzer is already monitoring, or broadcast traffic.
When traffic is captured, either the entire contents of packets or just the headers are recorded. Recording just headers reduces storage requirements, and avoids some privacy legal issues, yet often provides sufficient information to diagnose problems.
Captured information is decoded from raw digital form into a human-readable format that lets engineers review exchanged information. Protocol analyzers vary in their abilities to display and analyze data.
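As an illustration of capture and decoding done in software, the sketch below uses the third-party Scapy library (an assumption; any capture library would serve) to grab a handful of packets and print a one-line decode of each. Capturing typically requires root or administrator privileges.

# Minimal capture-and-decode sketch using Scapy (third-party library, `pip install scapy`).
# Capturing normally requires elevated privileges and a suitable network interface.
from scapy.all import sniff

def show(packet):
    # summary() gives a one-line, human-readable decode of the captured packet
    print(packet.summary())

# Capture five packets from the default interface and decode them as they arrive.
sniff(count=5, prn=show)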
Some protocol analyzers can also generate traffic. These can act as protocol testers. Such testers generate protocol-correct traffic for functional testing, and may also have the ability to deliberately introduce errors to test the device under test's ability to handle errors.
Protocol analyzers can also be hardware-based, either in probe format or, as is increasingly common, combined with a disk array. These devices record packets or packet headers to a disk array.
Uses
Packet analyzers can:
Analyze network problems
Detect network intrusion attempts
Detect network misuse by internal and external users
Document regulatory compliance by logging all perimeter and endpoint traffic
Gain information for effecting a network intrusion
Identify data collection and sharing of software such as operating systems (for strengthening privacy, control and security)
Aid in gathering information to isolate exploited systems
Monitor WAN bandwidth utilization
Monitor network usage (including internal and external users and systems)
Monitor data in transit
Monitor WAN and endpoint security status
Gather and report network statistics
Identify suspect content in network traffic
Troubleshoot performance problems by monitoring network data from an application
Serve as the primary data source for day-to-day network monitoring and management
Spy on other network users and collect sensitive information such as login details or users' cookies (depending on any content encryption methods that may be in use)
Reverse engineer proprietary protocols used over the network
Debug client/server communications
Debug network protocol implementations
Verify adds, moves, and changes
Verify internal control system effectiveness (firewalls, access control, Web filter, spam filter, proxy)
Packet capture can be used to fulfill a warrant from a law enforcement agency to wiretap all network traffic generated by an individual. Internet service providers and VoIP providers in the United States must comply with Communications Assistance for Law Enforcement Act regulations. Using packet capture and storage, telecommunications carriers can provide the legally required secure and separate access to targeted network traffic and can use the same device for internal security purposes. Collecting data from a carrier system without a warrant is illegal due to laws about interception. By using end-to-end encryption, communications can be kept confidential from telecommunication carriers and legal authorities.
Notable packet analyzers
Allegro Network Multimeter
Capsa Network Analyzer
Charles Web Debugging Proxy
Carnivore (software)
CommView
dSniff
EndaceProbe Analytics Platform by Endace
ettercap
Fiddler
Kismet
Lanmeter
Microsoft Network Monitor
NarusInsight
NetScout Systems nGenius Infinistream
ngrep, Network Grep
OmniPeek, Omnipliance by Savvius
SkyGrabber
The Sniffer
snoop
tcpdump
Observer Analyzer
Wireshark (formerly known as Ethereal)
Xplico Open source Network Forensic Analysis Tool
See also
Bus analyzer
Logic analyzer
Network detector
pcap
Signals intelligence
Traffic generation model
Notes
References
External links
Multi-Tap Network Packet Capture
WiFi Adapter for Packet analyzer
Network analyzers
Packets (information technology)
Wireless networking
Deep packet capture |
529373 | https://en.wikipedia.org/wiki/Steve%20Furber | Steve Furber | Stephen Byram Furber (born 21 March 1953) is a British computer scientist, mathematician and hardware engineer, currently the ICL Professor of Computer Engineering in the Department of Computer Science at the University of Manchester, UK. After completing his education at the University of Cambridge (BA, MMath, PhD), he spent the 1980s at Acorn Computers, where he was a principal designer of the BBC Micro and the ARM 32-bit RISC microprocessor. Over 100 billion copies of the ARM processor have been manufactured, powering much of the world's mobile computing and embedded systems.
In 1990, he moved to Manchester where he leads research into asynchronous systems, low-power electronics and neural engineering, where the Spiking Neural Network Architecture (SpiNNaker) project is delivering a computer incorporating a million ARM processors optimised for computational neuroscience.
Education
Furber was educated at Manchester Grammar School and represented the UK in the International Mathematical Olympiad in Hungary in 1970, winning a bronze medal. He went on to study the Mathematical Tripos as an undergraduate student of St John's College, Cambridge, receiving Bachelor of Arts (BA) and Master of Mathematics (MMath, Part III of the Mathematical Tripos) degrees. In 1978, he was appointed a Rolls-Royce research fellow in aerodynamics at Emmanuel College, Cambridge and was awarded a PhD in 1980 for research on the fluid dynamics of the Weis-Fogh principle supervised by John Ffowcs Williams.
Commercial career: Acorn, BBC Micro and ARM
During his PhD studies in the late 1970s, Furber worked on a voluntary basis for Hermann Hauser and Chris Curry within the fledgling Acorn Computers (originally the Cambridge Processor Unit) on a number of projects, notably a microprocessor-based fruit machine controller and the Proton, the initial prototype of what was to become the BBC Micro, in support of Acorn's tender for the BBC Computer Literacy Project.
In 1981, following the completion of his PhD and the award of the BBC contract to Acorn, he formally joined the company where he was a Hardware Designer and then Design Manager. He was involved in the final design and productionization of the BBC Micro and later, the Electron, and the ARM microprocessor. In August 1990 he moved to the University of Manchester to become the ICL Professor of Computer Engineering and established the AMULET microprocessor research group.
Research
Furber's main research interests are in Neural Networks, Networks on Chip and Microprocessors. In 2003, Furber was a member of the EPSRC research cluster in biologically-inspired novel computation. On 16 September 2004, he gave a speech on Hardware Implementations of Large-scale Neural Networks as part of the initiation activities of the Alan Turing Institute.
Furber's most recent project, SpiNNaker, is an attempt to build a new kind of computer that directly mimics the workings of the human brain. SpiNNaker is an artificial neural network realised in hardware, a massively parallel processing system designed eventually to incorporate a million ARM processors. The finished SpiNNaker will model 1 per cent of the human brain's capability, or around 1 billion neurons. The SpiNNaker project aims, amongst other things, to investigate:
How can massively parallel computing resources accelerate our understanding of brain function?
How can our growing understanding of brain function point the way to more efficient parallel, fault-tolerant computation?
Furber believes that "significant progress in either direction will represent a major scientific breakthrough". Furber's research interests include asynchronous systems, ultra-low-power processors for sensor networks, on-chip interconnect and globally asynchronous locally synchronous (GALS), and neural systems engineering.
His research has been funded by the Engineering and Physical Sciences Research Council (EPSRC), Royal Society and European Research Council.
Awards and honours
In February 1997, Furber was elected a Fellow of the British Computer Society. In 1998, he became a member of the European Working Group on Asynchronous Circuit Design (ACiD-WG). He was elected a Fellow of the Royal Society (FRS) in 2002 and was Specialist Adviser to the House of Lords Science and Technology Select Committee inquiry into microprocessor technology.
Furber was elected a Fellow of the Royal Academy of Engineering (FREng), the Institute of Electrical and Electronics Engineers (IEEE) in 2005 and a Fellow of the Institution of Engineering and Technology (FIET). He is a Chartered Engineer (CEng). In September 2007 he was awarded the Faraday Medal and in 2010 he gave the Pinkerton Lecture.
Furber was appointed Commander of the Order of the British Empire (CBE) in the 2008 New Year Honours and was elected as one of the three laureates of Millennium Technology Prize in 2010 (with Richard Friend and Michael Grätzel), for development of ARM processor. In 2012, Furber was made a Fellow of the Computer History Museum "for his work, with Sophie Wilson, on the BBC Micro computer and the ARM processor architecture."
In 2004 he was awarded a Royal Society Wolfson Research Merit Award. In 2014, he was made a Distinguished Fellow at the British Computer Society (DFBCS) recognising his contribution to the IT profession and industry. Furber's nomination for the Royal Society reads:
In 2009, Unsworth Academy (formerly called Castlebrook High School) in Manchester introduced a house system, with one of the four houses named after Furber. On 15 October 2010, Furber officially opened the Independent Learning Zone at Unsworth Academy.
Furber was played by actor Sam Philips in the BBC Four documentary drama Micro Men, first aired on 8 October 2009.
Personal life
Furber is married to Valerie Elliot, with whom he has two daughters, and plays 6-string and bass guitar.
Others
See List of pioneers in computer science
References
External links
Acorn recollections
BBC News Technology – Home computing pioneer honoured 29 December 2007
BBC News – Scientists to build 'brain box' 17 July 2006
BBC News Technology – The Tech Lab: Steve Furber
Lecture by Furber on the Future of Computer Technology
Steve Furber Video Interview – 17-08-2009
Steve Furber Talk @ Acorn World – 13-09-2009
1953 births
Living people
Acorn Computers
Arm Holdings people
English electrical engineers
British computer scientists
Computer designers
People associated with the Department of Computer Science, University of Manchester
Fellows of the Royal Society
Fellows of the British Computer Society
Fellows of the Royal Academy of Engineering
Fellow Members of the IEEE
Alumni of Emmanuel College, Cambridge
Alumni of St John's College, Cambridge
Academics of the University of Manchester
History of computing in the United Kingdom
Scientists from Manchester
People educated at Manchester Grammar School
Commanders of the Order of the British Empire
Fellows of the Institution of Engineering and Technology
International Mathematical Olympiad participants |
22025044 | https://en.wikipedia.org/wiki/Imagix%204D | Imagix 4D | Imagix 4D is a source code analysis tool from Imagix Corporation, used primarily for understanding, documenting, and evolving existing C, C++ and Java software.
Applied technologies include full semantic source analysis. Software visualization supports program comprehension. Static data flow analysis-based verifications detect problems in variable usage, task interactions and concurrency. Software metrics measure design quality and identify potential testing and maintenance issues.
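Imagix 4D itself is proprietary and analyzes C, C++ and Java, but the general idea behind a static check on variable usage can be sketched with Python's standard ast module. The snippet below flags names that are assigned but never read; it only illustrates the kind of problem such analyses detect and is unrelated to Imagix 4D's actual implementation.

# Toy static check: report variables that are assigned but never read.
# Purely illustrative of static data-flow analysis in general.
import ast

SOURCE = """
def f():
    used = 1
    unused = 2      # assigned but never read
    return used
"""

tree = ast.parse(SOURCE)
assigned, read = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)
        elif isinstance(node.ctx, ast.Load):
            read.add(node.id)

for name in sorted(assigned - read):
    print(f"variable '{name}' is assigned but never read")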
See also
Rational Rose
Rigi
Software visualization
List of tools for static code analysis
Sourcetrail
References
Use inside SEI's ARMIN Architecture Reconstruction and Mining Tool
Use inside Bosch's Model-Centric Software Architecture Reconstruction
External links
Imagix Corp. website
Code navigation tools
Static program analysis tools
Software metrics
Documentation generators |
51857198 | https://en.wikipedia.org/wiki/Big%20Moose%20Meyer | Big Moose Meyer | Donald Eugene "Big Moose" Meyer (April 22, 1910 – November 10, 2000) was an American professional basketball player. He played for the Kankakee Gallagher Trojans in the National Basketball League for nine games during the 1938–39 season and averaged 3.3 points per game.
He is not related to Little Moose Meyer, who played alongside him on the Trojans.
References
1910 births
2000 deaths
American men's basketball players
Basketball players from Illinois
Centers (basketball)
Forwards (basketball)
Kankakee Gallagher Trojans players
People from Kankakee County, Illinois |
748931 | https://en.wikipedia.org/wiki/Slide%20show | Slide show | A slide show is a presentation of a series of still images (slides) on a projection screen or electronic display device, typically in a prearranged sequence. The changes may be automatic and at regular intervals or they may be manually controlled by a presenter or the viewer. Slide shows originally consisted of a series of individual photographic slides projected onto a screen with a slide projector. When referring to the video or computer-based visual equivalent, in which the slides are not individual physical objects, the term is often written as one word, slideshow.
A slide show may be a presentation of images purely for their own visual interest or artistic value, sometimes unaccompanied by description or text, or it may be used to clarify or reinforce information, ideas, comments, solutions or suggestions which are presented verbally. Slide shows are sometimes still conducted by a presenter using an apparatus such as a carousel slide projector or an overhead projector, but now the use of an electronic video display device and a computer running presentation software is typical.
History
Slide shows had their beginnings in the 1600s, when hand-painted images on glass were first projected onto a wall with a "magic lantern". By the late 1700s, showmen were using magic lanterns to thrill audiences with seemingly supernatural apparitions in a popular form of entertainment called a phantasmagoria. Sunlight, candles and oil lamps were the only available light sources. The development of new, much brighter artificial light sources opened up a world of practical applications for image projection. In the 1800s, a series of hand-painted glass "lantern slides" was sometimes projected to illustrate story-telling or a lecture. Widespread and varied uses for amusement and education evolved throughout the century. By 1900, photographic images on glass had replaced hand-painted images, but the black-and-white photographs were sometimes hand-colored with transparent dyes. The production of lantern slides had become a considerable industry, with dimensions standardized at 3.25 inches high by 4 inches wide in the US and 3.25 inches square in the UK and much of Europe.
"Magic lantern shows" also served as a form of home entertainment and were especially popular with children. They continued to have a place among commercial public amusements even after the coming of projected "moving pictures". Between films, early movie theaters often featured "illustrated songs", which were community sing-alongs with the lyrics and illustrations provided by a series of projected lantern slides. Theaters also used their lanterns to project advertising slides and messages such as "Ladies, kindly remove your hats".
After 35 mm Kodachrome color film was introduced in 1936, a new standard 2×2 inch (5×5 cm) miniature lantern slide format was created to better suit the very small transparencies the film produced. In advertising, the antique "magic lantern" terminology was streamlined, so that the framed pieces of film were simply "slides" and the lantern used to project them was a "slide projector".
Home slide shows were a relatively common phenomenon in middle-class American homes during the 1950s and 1960s. If there was an enthusiast in the family, any visit from relatives or the arrival of a new batch of Kodachrome slides from the film processing service provided an excuse to bring out the entire collection of 35 mm slides, set up the slide projector and the screen, turn out the lights, then test the endurance of the assembled audience with a marathon of old vacation photos and pictures taken at weddings, birthdays and other family events, all accompanied by live commentary.
An image on 35 mm film mounted in a 2×2 inch (5×5 cm) metal, card or plastic frame is still by far the most common photographic slide format.
Uses
A well-organized slide show allows a presenter to fit visual images to an oral presentation. The old adage "A picture is worth a thousand words" holds true, in that a single image can save a presenter from speaking a paragraph of descriptive details. As with any public speaking or lecturing, a certain amount of talent, experience, and rehearsal is required to make a successful slide show presentation.
Presentation software is most commonly used in the business world, where millions of presentations are created daily. Another very important area where it is used is for instructional purposes, usually with the intention of creating a dynamic, audiovisual presentation. The relevant points to the entire presentation are put on slides, and accompany a spoken monologue.
Slide shows have artistic uses as well, such as serving as a screensaver, providing dynamic imagery for a museum presentation, or featuring in installation art. David Byrne, among others, has created PowerPoint art.
In art
Since the late 1960s, visual artists have used slide shows in museums and galleries as a device, either for presenting specific information about an action or research or as a phenomenological form in itself. According to the introduction of Slide Show, an exhibition organized at the Baltimore Museum of Art: “Through the simple technology of the slide projector and 35 mm color transparency, artists discovered a tool that enabled the transformation of space through the magnification of projected pictures, texts, and images.” Although some artists have not necessarily used 35 mm or color slides, and some, such as Robert Barry, have even abandoned images for texts, 35 mm color film slides are most commonly used. The images are sometimes accompanied by written text, either in the same slide or as an intertitle. Some artists, such as James Coleman and Robert Smithson, have used a voice-over with their slide presentations.
Slide shows have also been used by artists who use other media such as painting and sculpture to present their work publicly. In recent years there has been a growing use of the slide show by a younger generation of artists. The non-profit organization Slideluck Potshow holds slide show events globally, featuring works by amateur and professional artists, photographers, and gallerists. Participants in the event bring food, potluck style, and have a social dinner before the slide show begins.
Other known artists who have used slide shows in their work include Bas Jan Ader, Francis Alys, Jan Dibbets, Dan Graham, Rodney Graham, Nan Goldin, Louise Lawler, Ana Mendieta, Jonathan Monk, Dennis Oppenheim, Allan Sekula, Carey Young and Krzysztof Wodiczko.
Digital
Digital photo slideshows can be custom-made for clients from their photos, music, wedding invitations, birth announcements, or virtually any other scannable documents. Some producers call the resulting DVDs the new photomontage. Slideshows can be created not only on DVD, but also in HD video formats and as executable computer files. Photo slideshow software has made it easy to create electronic digital slideshows, eliminating the need for expensive color reversal film and requiring only a digital camera and computer.
Photo slideshow software often provides more options than simply showing the pictures. It is possible to add transitions, pan and zoom effects, video clips, background music, narration, captions, etc. By using computer software one therefore has the ability to enhance the presentation in a way that is not otherwise practical. The finished slideshow can then be burned to a DVD, for use as a gift or for archiving, and later viewed using an ordinary DVD player.
Web-based slideshow
A Web-based slideshow is a slide show which can be played (viewed or presented) using a web browser. Some web-based slide shows are generated from presentation software and may be difficult to change (usually unintentionally so). Others offer templates allowing the slide show to be easily edited and changed.
Compared to a fully fledged presentation program, web-based slide shows are usually limited in features.
A web-based slide show is typically generated by, or authored in, HTML, JavaScript and CSS code (files).
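As a toy illustration of how such a page can be produced, the Python script below writes a self-contained HTML file that cycles through a list of image files with a few lines of JavaScript. The image filenames are placeholders, and this is only a minimal sketch of the idea, not any particular product's output.

# Generate a minimal self-contained web slideshow (HTML plus a little JavaScript).
# The image filenames are placeholders for whatever pictures are at hand.
images = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]

page = f"""<!DOCTYPE html>
<html>
<body style="text-align:center;background:#000">
  <img id="slide" src="{images[0]}" style="max-width:90%">
  <script>
    const images = {images};
    let i = 0;
    // advance to the next image every three seconds, wrapping around
    setInterval(() => {{
      i = (i + 1) % images.length;
      document.getElementById("slide").src = images[i];
    }}, 3000);
  </script>
</body>
</html>"""

with open("slideshow.html", "w") as f:
    f.write(page)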
See also
Photo slideshow software
Presentation software
Diaporama
Multi-image
Slide-tape
Filmstrip
LCD projector
References
Photography
Presentation |
19618157 | https://en.wikipedia.org/wiki/AcetoneISO | AcetoneISO | AcetoneISO is a free and open-source virtual drive software to mount and manage image files. Its goals are to be simple, intuitive and stable. Written in Qt, this software is meant for all those people looking for a "Daemon Tools for Linux". However, AcetoneISO does not emulate any copy protection while mounting.
AcetoneISO also supports Direct Access Archive (*.daa) images because it uses the non-free and proprietary PowerISO Linux software as a backend while converting images to ISO.
In recent releases (as of 2010), AcetoneISO also gained native support for blanking CD/DVD optical discs and for burning ISO/CUE/TOC images to CD-R/RW and DVD-+R/RW (including DL), thanks to external open source tools such as cdrkit, cdrdao and growisofs.
Features
Automatically mount ISO, BIN, MDF, and NRG images without needing to enter an administrator password. Only single-track images are supported for the moment.
Burn ISO/TOC/CUE to CD-R/RW optical discs
Burn ISO images to DVD-+R/RW (including DL)
A native utility to blank CD-RW, DVD-RW, and DVD+RW discs
A nice display which shows current images mounted and possibility to click on it to quickly re-open mounted image
Convert to ISO from the following image types: bin mdf nrg img daa dmg cdi b5i bwi pdi
Extract the contents of an image to a folder: bin mdf nrg img daa dmg cdi b5i bwi pdi
Play a DVD Movie Image with Kaffeine / VLC / SMplayer with auto-cover download from Amazon
Generate an ISO from a Folder or CD/DVD
Check the MD5 sum of an image and/or generate it to a text file (see the hashing sketch after this list)
Calculate ShaSums of images in 128, 256, and 384 bit
Encrypt / Decrypt an image
Split / Merge image in X megabyte
Compress with high ratio an image in 7z format
Rip a PSX CD to *.bin to make it work with ePSXe/pSX emulators
Restore a lost CUE file of *.bin *.img
Convert Mac OS *.dmg to a mountable image
Mount an image in a specified folder from the user
Create a database of images to manage big collections
Extract the Boot Image file of a CD/DVD or ISO
Backup a CD-Audio to a *.bin image
Complete localization for English, Italian, French, Spanish and Polish
Quick and simple utility to rip a DVD to Xvid AVI
Quick and simple utility to convert a generic video (avi, mpeg, mov, wmv, asf) to Xvid AVI
Quick and simple utility to convert a FLV video to AVI
Utility to download videos from YouTube and Metacafe.
Extract audio from a video file
Extract a *.rar archive that has a password
Utility to convert any video for Sony PSP PlayStation Portable
Display History that shows all images you mount in time
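The MD5 verification feature mentioned above boils down to hashing the image file and comparing digests. The sketch below does the equivalent check with Python's standard hashlib; the filename and expected digest are placeholders, and this is only an illustration of the idea rather than AcetoneISO's own code.

# Equivalent of the MD5 check listed above, using only the standard library.
# "image.iso" and the expected digest are placeholders.
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "d41d8cd98f00b204e9800998ecf8427e"   # placeholder value
print("OK" if file_md5("image.iso") == expected else "MISMATCH")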
Limitations
Does not emulate copy protection during mounting, unlike Daemon Tools.
Cannot correctly mount a multi-session image; only the first track will be shown.
Converting a multi-session image to ISO will result in a loss of data; only the first track will be converted.
Image conversion to ISO is only possible on the x86 and x86-64 CPU architectures, due to PowerISO limitations.
Internationalization
AcetoneISO is currently translated to: English, Italian, Polish, Spanish, Romanian, Hungarian, German, Czech, and Russian.
See also
CDemu
Furius ISO Mount
List of ISO image software
References
Notes
"Featured Linux Download: Advanced CD/DVD management with AcetoneISO", Lifehacker, 2007-07-18. Retrieved on 2008-10-05
"Mount and Unmount ISO,MDF,NRG Images Using AcetoneISO (GUI Tool)", Ubuntu Geek, 2007-08-29. Retrieved on 2008-10-05
"Download of the day: AcetoneISO - extract, browse ISO and other CD/DVD formats under Linux", nixCraft: Insight into Linux admin work, 2008-01-14. Retrieved on 2008-10-05
"Featured Linux Download: AcetoneISO 2.0 Makes Disk Mounting Simple", Lifehacker, 2008-06-04. Retrieved on 2008-10-05
"Manipulating CD/DVD images with AcetoneISO2", Linux.com, 2008-10-28. Retrieved on 2009-01-05
External links
Free optical disc authoring software
Optical disc authoring software
Disk image emulators
Software that uses Qt |
8278664 | https://en.wikipedia.org/wiki/Indian%20Association%20for%20Research%20in%20Computing%20Science | Indian Association for Research in Computing Science | The Indian Association for Research in Computing Science (IARCS) provides leadership in computing within India. Its members are leading researchers in Computer Science drawn from major institutions from all over the country. Madhavan Mukund is the President of the association as of 2016.
IARCS aims to promote excellence in computing. It does so by facilitating interaction amongst its members, acting as a bridge between academia and industry, and working to raise the quality of computer science education within the country.
IARCS runs the longest-running conference in computer science in India, the International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), now in its 26th year. Since its inception in 1981, the conference (held in the month of December) has helped nurture an environment for the exchange of ideas among the research community in the country by attracting top scientists from around the world.
IARCS recognizes the impact of computing science in school education. To actively promote good practices, it has become involved in the International Olympiad in Informatics (IOI). IARCS is involved in all aspects of training and selection of the young talent to represent the country at this prestigious Olympiad. It also recognizes a role for itself in correcting the biases in the current curriculum for computer science in the country. The national program is called Indian Computing Olympiad.
IARCS has also initiated a new program to teach the teachers as a way of spreading knowledge down to the grassroots level.
External links
Official website
Computer science organizations |
83137 | https://en.wikipedia.org/wiki/Software-defined%20radio | Software-defined radio | Software-defined radio (SDR) is a radio communication system where components that have been traditionally implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded system. While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes which were once only theoretically possible.
A basic SDR system may consist of a personal computer equipped with a sound card, or other analog-to-digital converter, preceded by some form of RF front end. Significant amounts of signal processing are handed over to the general-purpose processor, rather than being done in special-purpose hardware (electronic circuits). Such a design produces a radio which can receive and transmit widely different radio protocols (sometimes referred to as waveforms) based solely on the software used.
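For a concrete sense of how little hardware-specific code such a setup needs, the sketch below reads a block of samples from a common RTL-SDR USB dongle via the third-party pyrtlsdr library. The library choice, frequency, and gain setting are assumptions; any front end with a driver and a sampling interface would do, after which everything else is software.

# Read a block of complex baseband samples from an RTL-SDR front end.
# Assumes the third-party pyrtlsdr package and an attached RTL-SDR dongle.
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.048e6        # samples per second
sdr.center_freq = 100e6          # tune to 100 MHz (example value)
sdr.gain = "auto"

samples = sdr.read_samples(256 * 1024)   # complex baseband samples
sdr.close()

print(f"captured {len(samples)} samples; all further processing happens in software")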
Software radios have significant utility for the military and cell phone services, both of which must serve a wide variety of changing radio protocols in real time. In the long term, software-defined radios are expected by proponents like the Wireless Innovation Forum to become the dominant technology in radio communications. SDRs, along with software defined antennas are the enablers of the cognitive radio.
A software-defined radio can be flexible enough to avoid the "limited spectrum" assumptions of designers of previous kinds of radios, in one or more ways including:
Spread spectrum and ultrawideband techniques allow several transmitters to transmit in the same place on the same frequency with very little interference, typically combined with one or more error detection and correction techniques to fix all the errors caused by that interference.
Software defined antennas adaptively "lock onto" a directional signal, so that receivers can better reject interference from other directions, allowing it to detect fainter transmissions.
Cognitive radio techniques: each radio measures the spectrum in use and communicates that information to other cooperating radios, so that transmitters can avoid mutual interference by selecting unused frequencies. Alternatively, each radio connects to a geolocation database to obtain information about the spectrum occupancy in its location and, flexibly, adjusts its operating frequency and/or transmit power not to cause interference to other wireless services.
Dynamic transmitter power adjustment, based on information communicated from the receivers, lowering transmit power to the minimum necessary, reducing the near–far problem and reducing interference to others, and extending battery life in portable equipment.
Wireless mesh network where every added radio increases total capacity and reduces the power required at any one node. Each node transmits using only enough power needed for the message to hop to the nearest node in that direction, reducing the near–far problem and reducing interference to others.
Operating principles
Superheterodyne receivers use a variable-frequency oscillator, mixer, and filter to tune the desired signal to a common intermediate frequency or baseband. Typically in SDR, this signal is then sampled by the analog-to-digital converter. However, in some applications it is not necessary to tune the signal to an intermediate frequency and the radio frequency signal is directly sampled by the analog-to-digital converter (after amplification).
Real analog-to-digital converters lack the dynamic range to pick up sub-microvolt, nanowatt-power radio signals. Therefore, a low-noise amplifier must precede the conversion step and this device introduces its own problems. For example, if spurious signals are present (which is typical), these compete with the desired signals within the amplifier's dynamic range. They may introduce distortion in the desired signals, or may block them completely. The standard solution is to put band-pass filters between the antenna and the amplifier, but these reduce the radio's flexibility. Real software radios often have two or three analog channel filters with different bandwidths that are switched in and out.
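Once the signal has been digitized, tuning and filtering become arithmetic on sample streams. The NumPy sketch below mixes a digitized signal down to baseband with a numerically generated local oscillator and applies a crude low-pass filter; all the numbers are made up, and it is only an illustration of the software side of the chain described above.

# Software tuning: mix a sampled signal to baseband and low-pass filter it.
# All values below are illustrative; NumPy is the only dependency.
import numpy as np

fs = 1_000_000            # sample rate of the ADC, Hz
f_signal = 250_000        # frequency of the wanted signal, Hz
t = np.arange(20_000) / fs

# Pretend this came from the ADC: wanted tone plus an unwanted one and noise.
adc = (np.cos(2 * np.pi * f_signal * t)
       + 0.5 * np.cos(2 * np.pi * 310_000 * t)
       + 0.1 * np.random.randn(t.size))

# Digital local oscillator: multiplying shifts the wanted tone to 0 Hz (baseband).
lo = np.exp(-2j * np.pi * f_signal * t)
baseband = adc * lo

# Crude low-pass filter (moving average) keeps the component near 0 Hz
# while attenuating the unwanted tone, which now sits at 60 kHz.
taps = np.ones(64) / 64
filtered = np.convolve(baseband, taps, mode="same")

print(f"output power: {np.mean(np.abs(filtered) ** 2):.3f}")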
History
The term "digital receiver" was coined in 1970 by a researcher at a United States Department of Defense laboratory. A laboratory called the Gold Room at TRW in California created a software baseband analysis tool called Midas, which had its operation defined in software.
The term "software radio" was coined in 1984 by a team at the Garland, Texas, Division of E-Systems Inc. (now Raytheon) to refer to a digital baseband receiver and published in their E-Team company newsletter. A 'Software Radio Proof-of-Concept' laboratory was developed by the E-Systems team that popularized Software Radio within various government agencies. This 1984 Software Radio was a digital baseband receiver that provided programmable interference cancellation and demodulation for broadband signals, typically with thousands of adaptive filter taps, using multiple array processors accessing shared memory.
While working under a US Department of Defense contract at RCA in 1982, Ulrich L. Rohde’s department developed the first SDR, which used the COSMAC (Complementary Symmetry Monolithic Array Computer) chip.
Rohde was the first to present on this topic with his highly classified February 1984 talk, “Digital HF Radio: A Sampling of Techniques” at the Third International Conference on HF Communication Systems and Techniques in London.
In 1991, Joe Mitola independently reinvented the term software radio for a plan to build a GSM base station that would combine Ferdensi's digital receiver with E-Systems Melpar's digitally controlled communications jammers for a true software-based transceiver. E-Systems Melpar sold the software radio idea to the US Air Force. Melpar built a prototype commanders' tactical terminal in 1990–1991 that employed Texas Instruments TMS320C30 processors and Harris digital receiver chip sets with digitally synthesized transmission. The Melpar prototype didn't last long because when E-Systems ECI Division manufactured the first limited production units, they decided to "throw out those useless C30 boards," replacing them with conventional RF filtering on transmit and receive, reverting to a digital baseband radio instead of the SpeakEasy like IF ADC/DACs of Mitola's prototype. The Air Force would not let Mitola publish the technical details of that prototype, nor would they let Diane Wasserman publish related software life cycle lessons learned because they regarded it as a "USAF competitive advantage." So instead, with USAF permission, in 1991 Mitola described the architecture principles without implementation details in a paper, "Software Radio: Survey, Critical Analysis and Future Directions" which became the first IEEE publication to employ the term in 1992. When Mitola presented the paper at the conference, Bob Prill of GEC Marconi began his presentation following Mitola with "Joe is absolutely right about the theory of a software radio and we are building one." Prill gave a GEC Marconi paper on PAVE PILLAR, a SpeakEasy precursor. SpeakEasy, the military software radio was formulated by Wayne Bonser, then of Rome Air Development Center (RADC), now Rome Labs; by Alan Margulies of MITRE Rome, NY; and then Lt Beth Kaspar, the original DARPA SpeakEasy project manager and by others at Rome including Don Upmal. Although Mitola's IEEE publications resulted in the largest global footprint for software radio, Mitola privately credits that DoD lab of the 1970s with its leaders Carl, Dave, and John with inventing the digital receiver technology on which he based software radio once it was possible to transmit via software.
A few months after the National Telesystems Conference 1992, in an E-Systems corporate program review, a vice-president of E-Systems Garland Division objected to Melpar's (Mitola's) use of the term "software radio" without credit to Garland. Alan Jackson, Melpar VP of marketing at that time, asked the Garland VP if their laboratory or devices included transmitters. The Garland VP said "No, of course not — ours is a software radio receiver". Al replied "Then it's a digital receiver but without a transmitter, it's not a software radio." Corporate leadership agreed with Al, so the publication stood. Many amateur radio operators and HF radio engineers had realized the value of digitizing HF at RF and of processing it with Texas Instruments TI C30 digital signal processors (DSPs) and their precursors during the 1980s and early 1990s. Radio engineers at Roke Manor in the UK and at an organization in Germany had recognized the benefits of ADC at the RF in parallel, so success has many fathers. Mitola's publication of software radio in the IEEE opened the concept to the broad community of radio engineers. His May 1995 special issue of the IEEE Communications Magazine with the cover "Software Radio" was regarded as a watershed event with thousands of academic citations. Mitola was introduced by Joao da Silva in 1997 at the First International Conference on Software Radio as "godfather" of software radio in no small part for his willingness to share such a valuable technology "in the public interest."
Perhaps the first software-based radio transceiver was designed and implemented by Peter Hoeher and Helmuth Lang at the German Aerospace Research Establishment (DLR, formerly DFVLR) in Oberpfaffenhofen, Germany, in 1988. Both transmitter and receiver of an adaptive digital satellite modem were implemented according to the principles of a software radio, and a flexible hardware periphery was proposed.
The term "software defined radio" was coined in 1995 by Stephen Blust, who published a request for information from Bell South Wireless at the first meeting of the Modular Multifunction Information Transfer Systems (MMITS) forum in 1996, organized by the USAF and DARPA around the commercialization of their SpeakEasy II program. Mitola objected to Blust's term, but finally accepted it as a pragmatic pathway towards the ideal software radio. Although the concept was first implemented with an IF ADC in the early 1990s, software-defined radios have their origins in the U.S. and European defense sectors of the late 1970s (for example, Walter Tuttlebee described a VLF radio that used an ADC and an 8085 microprocessor), about a year after the First International Conference in Brussels. One of the first public software radio initiatives was the U.S. DARPA-Air Force military project named SpeakEasy. The primary goal of the SpeakEasy project was to use programmable processing to emulate more than 10 existing military radios, operating in frequency bands between 2 and 2000 MHz. Another SpeakEasy design goal was to be able to easily incorporate new coding and modulation standards in the future, so that military communications can keep pace with advances in coding and modulation techniques.
In 1997, Blaupunkt introduced the term "DigiCeiver" for their new range of DSP-based tuners with Sharx in car radios such as the Modena & Lausanne RD 148.
SpeakEasy phase I
From 1990 to 1995, the goal of the SpeakEasy program was to demonstrate a radio for the U.S. Air Force tactical ground air control party that could operate from 2 MHz to 2 GHz, and thus could interoperate with ground force radios (frequency-agile VHF, FM, and SINCGARS), Air Force radios (VHF AM), Naval Radios (VHF AM and HF SSB teleprinters) and satellites (microwave QAM). Some particular goals were to provide a new signal format in two weeks from a standing start, and demonstrate a radio into which multiple contractors could plug parts and software.
The project was demonstrated at the TF-XXI Advanced Warfighting Exercise, and met all of these goals in a non-production radio. There was some discontent with the failure of these early software radios to adequately filter out-of-band emissions, with their support for only the simplest of the interoperable modes of the existing radios, and with their tendency to lose connectivity or crash unexpectedly. Its cryptographic processor could not change context fast enough to keep several radio conversations on the air at once. Its software architecture, though practical enough, bore no resemblance to any other. The SpeakEasy architecture was refined at the MMITS Forum between 1996 and 1999 and inspired the DoD integrated process team (IPT) for programmable modular communications systems (PMCS) to proceed with what became the Joint Tactical Radio System (JTRS).
The basic arrangement of the radio receiver used an antenna feeding an amplifier and down-converter (see Frequency mixer) feeding an automatic gain control, which fed an analog-to-digital converter on a computer VMEbus carrying many digital signal processors (Texas Instruments C40s). The transmitter had digital-to-analog converters on the PCI bus feeding an up-converter (mixer) that led to a power amplifier and antenna. The very wide frequency range was divided into a few sub-bands with different analog radio technologies feeding the same analog-to-digital converters. This has since become a standard design scheme for wideband software radios.
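The digital side of such a scheme can be illustrated with a brief sketch: once a sub-band has been digitized, individual channels are selected entirely in software by mixing, filtering and decimating the sample stream. The Python/NumPy example below is purely illustrative and is not SpeakEasy code; the sample rate, channel frequency, bandwidth and filter length are arbitrary assumptions.

import numpy as np
from scipy.signal import firwin, lfilter

def digital_downconvert(samples, fs, f_channel, bw, decim):
    # Mix the wanted channel down to 0 Hz with a complex local oscillator.
    n = np.arange(len(samples))
    lo = np.exp(-2j * np.pi * f_channel / fs * n)
    baseband = samples * lo
    # Low-pass filter to keep only the wanted channel...
    taps = firwin(129, bw / 2, fs=fs)
    filtered = lfilter(taps, 1.0, baseband)
    # ...then decimate, since the narrow channel no longer needs the full rate.
    return filtered[::decim]

# Example with made-up numbers: a 10 MS/s capture, wanted channel at 2.5 MHz.
fs = 10e6
t = np.arange(100_000) / fs
capture = np.cos(2 * np.pi * 2.5e6 * t)   # stand-in for wideband ADC samples
iq = digital_downconvert(capture, fs, f_channel=2.5e6, bw=25e3, decim=100)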
SpeakEasy phase II
The goal was to get a more quickly reconfigurable architecture, i.e., several conversations at once, in an open software architecture, with cross-channel connectivity (the radio can "bridge" different radio protocols). The secondary goals were to make it smaller, cheaper, and weigh less.
The project produced a demonstration radio only fifteen months into a three-year research project. This demonstration was so successful that further development was halted, and the radio went into production with only a 4 MHz to 400 MHz range.
The software architecture identified standard interfaces for different modules of the radio: "radio frequency control" to manage the analog parts of the radio, "modem control" managed resources for modulation and demodulation schemes (FM, AM, SSB, QAM, etc.), "waveform processing" modules actually performed the modem functions, "key processing" and "cryptographic processing" managed the cryptographic functions, a "multimedia" module did voice processing, a "human interface" provided local or remote controls, there was a "routing" module for network services, and a "control" module to keep it all straight.
The modules are said to communicate without a central operating system. Instead, they send messages over the PCI computer bus to each other with a layered protocol.
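The idea can be illustrated with a toy publish/subscribe sketch in Python. This is purely a hypothetical stand-in for the SpeakEasy software, with invented module and topic names; it shows only how named modules could exchange tagged messages without a central operating system.

from collections import defaultdict

class MessageBus:
    # Toy stand-in for the inter-module bus: routes tagged messages.
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.handlers[topic]:
            handler(payload)

bus = MessageBus()

# "modem control" asks a "waveform processing" module to load a mode.
bus.subscribe("waveform.load", lambda p: print("waveform: loading", p["mode"]))
# "human interface" listens for status reports.
bus.subscribe("status", lambda p: print("UI:", p["text"]))

bus.publish("waveform.load", {"mode": "FM", "channel_khz": 25})
bus.publish("status", {"text": "FM waveform ready"})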
As a military project, the radio strongly distinguished "red" (unsecured secret data) and "black" (cryptographically-secured data).
The project was the first known to use FPGAs (field programmable gate arrays) for digital processing of radio data. The time to reprogram these was an issue limiting application of the radio. Today, the time to write a program for an FPGA is still significant, but the time to download a stored FPGA program is around 20 milliseconds. This means an SDR could change transmission protocols and frequencies in one fiftieth of a second, probably not an intolerable interruption for that task.
2000s
The SpeakEasy SDR system of 1994 used a Texas Instruments TMS320C30 CMOS digital signal processor (DSP), along with several hundred integrated circuit chips, with the radio filling the back of a truck. By the late 2000s, the emergence of RF CMOS technology made it practical to scale down an entire SDR system onto a single mixed-signal system-on-a-chip, which Broadcom demonstrated with the BCM21551 processor in 2007. The Broadcom BCM21551 has practical commercial applications, such as use in 3G mobile phones.
Military usage
United States
The Joint Tactical Radio System (JTRS) was a program of the US military to produce radios that provide flexible and interoperable communications. Examples of radio terminals that require support include hand-held, vehicular, airborne and dismounted radios, as well as base-stations (fixed and maritime).
This goal is achieved through the use of SDR systems based on an internationally endorsed open Software Communications Architecture (SCA). This standard uses CORBA on POSIX operating systems to coordinate various software modules.
The program provides a flexible new approach to meeting diverse soldier communications needs through software-programmable radio technology. All functionality and expandability are built upon the SCA.
The flexibility of SDRs results in expensive complexity, an inability to optimize, and a slower ability to apply the latest technology, and is rarely a tactical user need (since all users must pick and stay with one and the same radio if they are to communicate).
The SCA, despite its military origin, is under evaluation by commercial radio vendors for applicability in their domains. The adoption of general-purpose SDR frameworks outside of military, intelligence, experimental and amateur uses, however, is inherently hampered by the fact that civilian users can more easily settle for a fixed architecture, optimized for a specific function and as such more economical in mass-market applications. Still, software-defined radio's inherent flexibility can yield substantial benefits in the longer run, once the fixed costs of implementing it have fallen enough to overtake the cost of iterated redesign of purpose-built systems. This explains the increasing commercial interest in the technology.
SCA-based infrastructure software and rapid development tools for SDR education and research are provided by the Open Source SCA Implementation Embedded (OSSIE) project. The Wireless Innovation Forum funded the SCA Reference Implementation (SCARI) project, an open-source implementation of the SCA specification, which can be downloaded for free.
Amateur and home use
A typical amateur software radio uses a direct conversion receiver. Unlike direct conversion receivers of the more distant past, the mixer technologies used are based on the quadrature sampling detector and the quadrature sampling exciter.
The receiver performance of this line of SDRs is directly related to the dynamic range of the analog-to-digital converters (ADCs) utilized. Radio frequency signals are down converted to the audio frequency band, which is sampled by a high performance audio frequency ADC. First generation SDRs used a 44 kHz PC sound card to provide ADC functionality. The newer software defined radios use embedded high performance ADCs that provide higher dynamic range and are more resistant to noise and RF interference.
A fast PC performs the digital signal processing (DSP) operations using software specific for the radio hardware. Several software radio implementations use the open source SDR library DttSP.
The SDR software performs all of the demodulation, filtering (both radio frequency and audio frequency), and signal enhancement (equalization and binaural presentation). Uses include every common amateur modulation: Morse code, single sideband modulation, frequency modulation, amplitude modulation, and a variety of digital modes such as radioteletype, slow-scan television, and packet radio. Amateurs also experiment with new modulation methods: for instance, the DREAM open-source project decodes the COFDM technique used by Digital Radio Mondiale.
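As an illustration of how little code the core demodulation steps require, the following sketch recovers audio from I/Q (quadrature) samples for AM and narrow-band FM. It is a generic Python/NumPy example, not code from DttSP or any particular SDR package, and the sample rate and deviation are assumed values.

import numpy as np

def demod_am(iq):
    # AM: the audio is the magnitude of the complex envelope.
    audio = np.abs(iq)
    return audio - np.mean(audio)          # strip the DC carrier component

def demod_nfm(iq, fs, deviation=5e3):
    # FM: the audio is the rate of change of phase between samples.
    phase_step = np.angle(iq[1:] * np.conj(iq[:-1]))
    return phase_step * fs / (2 * np.pi * deviation)   # scale to roughly +/-1

# Hypothetical test: a 1 kHz tone, FM-modulated at 48 kS/s with 5 kHz deviation.
fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1_000 * t)
iq = np.exp(1j * 2 * np.pi * 5e3 * np.cumsum(tone) / fs)   # synthetic I/Q
audio = demod_nfm(iq, fs)                                  # recovers the tone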
There is a broad range of hardware solutions for radio amateurs and home use. There are professional-grade transceiver solutions, e.g. the Zeus ZS-1 or the Flex Radio; home-brew solutions, e.g. the PicAStar transceiver or the SoftRock SDR kit; and starter or professional receiver solutions, e.g. the FiFi SDR for shortwave, or the Quadrus coherent multi-channel SDR receiver for short wave or VHF/UHF in direct digital mode of operation.
RTL-SDR
Eric Fry discovered that some common low-cost DVB-T USB dongles built around the Realtek RTL2832U controller and a tuner such as the Elonics E4000 or the Rafael Micro R820T can be used as a wide-band (3 MHz) SDR receiver. Experiments proved the capability of this setup to analyze the Perseids meteor shower using the Graves radar signals. This project is maintained at Osmocom.
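A capture from such a dongle can be scripted in a few lines. The sketch below assumes the third-party pyrtlsdr Python bindings and the rtl-sdr drivers are installed; the tuning, gain and sample-count values are arbitrary examples.

# Minimal RTL-SDR capture sketch (assumes the pyrtlsdr package).
from rtlsdr import RtlSdr
import numpy as np

sdr = RtlSdr()
sdr.sample_rate = 2.048e6      # within the dongle's usable range
sdr.center_freq = 100.0e6      # 100 MHz, e.g. the FM broadcast band
sdr.gain = 'auto'

samples = sdr.read_samples(256 * 1024)   # complex I/Q samples
sdr.close()

# Crude power estimate of the captured slice of spectrum.
print("mean power (dBFS):", 10 * np.log10(np.mean(np.abs(samples) ** 2)))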
USRP
More recently, GNU Radio, used primarily with the Universal Software Radio Peripheral (USRP), relies on a USB 2.0 interface, an FPGA, and a high-speed set of analog-to-digital and digital-to-analog converters, combined with reconfigurable free software. Its sampling and synthesis bandwidth (30–120 MHz) is a thousand times that of PC sound cards, which enables wideband operation.
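GNU Radio applications are structured as flowgraphs of connected signal-processing blocks. The following Python sketch builds a minimal flowgraph; to keep it self-contained it uses a simulated signal source rather than USRP hardware, which would instead be attached through GNU Radio's gr-uhd source block (an assumption about the user's installation, noted only in a comment), and the output file name is a placeholder.

# Minimal GNU Radio flowgraph sketch (assumes the gnuradio Python package).
from gnuradio import gr, blocks, analog

class TinyFlowgraph(gr.top_block):
    def __init__(self, samp_rate=1_000_000):
        gr.top_block.__init__(self, "tiny flowgraph")
        # Simulated complex source; with USRP hardware one would instead
        # instantiate a source block from the gr-uhd module here.
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 10_000, 1.0)
        # Throttle keeps a purely software source from running flat out.
        thr = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        # Write raw I/Q samples to a file (hypothetical output path).
        sink = blocks.file_sink(gr.sizeof_gr_complex, "capture.iq")
        self.connect(src, thr, sink)

if __name__ == "__main__":
    tb = TinyFlowgraph()
    tb.start()
    input("Flowgraph running; press Enter to stop...")
    tb.stop()
    tb.wait()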
HPSDR
The HPSDR (High Performance Software Defined Radio) project uses a 16-bit analog-to-digital converter that provides performance over the range 0 to comparable to that of a conventional analogue HF radio. The receiver will also operate in the VHF and UHF range using either mixer image or alias responses. Interface to a PC is provided by a USB 2.0 interface, although Ethernet could be used as well. The project is modular and comprises a backplane onto which other boards plug in. This allows experimentation with new techniques and devices without the need to replace the entire set of boards. An exciter provides of RF over the same range or into the VHF and UHF range using image or alias outputs.
WebSDR
WebSDR is a project initiated by Pieter-Tjerk de Boer providing access via browser to multiple SDR receivers worldwide covering the complete shortwave spectrum. Recently he has analyzed Chirp Transmitter signals using the coupled system of receivers.
Other applications
On account of its increasing accessibility, with lower cost hardware, more software tools and documentation, the applications of SDR have expanded past their primary and historic use cases. SDR is now being used in areas such as wildlife tracking, radio astronomy, medical imaging research, and art.
See also
List of software-defined radios
List of amateur radio software
Digital radio
Digital signal processing (DSP)
Radio Interface Layer (RIL)
Softmodem
Software defined mobile network (SDMN)
Software GNSS Receiver
White space (radio)
White space (database)
Bit banging
References
Further reading
Software defined radio : architectures, systems, and functions. Dillinger, Madani, Alonistioti. Wiley, 2003. 454 pages.
Cognitive Radio Technology. Bruce Fette. Elsevier Science & Technology Books, 2006. 656 pages.
Software Defined Radio for 3G, Burns. Artech House, 2002.
Software Radio: A Modern Approach to Radio Engineering, Jeffrey H. Reed. Prentice Hall PTR, 2002.
Signal Processing Techniques for Software Radio, Behrouz Farhang-Beroujeny. LuLu Press.
RF and Baseband Techniques for Software Defined Radio, Peter B. Kenington. Artech House, 2005,
The ABC's of Software Defined Radio, Martin Ewing, AA6E. The American Radio Relay League, Inc., 2012,
Software Defined Radio using MATLAB & Simulink and the RTL-SDR, R Stewart, K Barlee, D Atkinson, L Crockett, Strathclyde Academic Media, September 2015.
External links
The world's first web-based software-defined receiver at the university of Twente, the Netherlands
Software-defined receivers connected to the Internet
Using software-defined television tuners as multimode HF / VHF / UHF receivers
Free SDR textbook: Software Defined Radio using MATLAB & Simulink and the RTL-SDR
Welcome to the World of Software Defined Radio |
26314078 | https://en.wikipedia.org/wiki/RTMPDump | RTMPDump | RTMPDump is a free software project dedicated to develop a toolkit for RTMP streams. The package includes three programs, rtmpdump, rtmpsrv and rtmpsuck.
rtmpdump is used to connect to RTMP servers just like normal Flash video player clients, capture the stream from the network, and save it to a file. With it, commands may be constructed using connection and authentication information previously obtained from the RTMP server by rtmpsrv.
rtmpsrv is used to watch connections and streams.
rtmpsuck can also be used to capture streams, as well as to detect the parameters to be used with rtmpdump.
It has been reviewed as "an excellent utility for recording streams broadcasting TV and video on demand" and has been used in academic research on video streaming rate selection and a developmental media framework. The utility has been noted for its small size and its ability to decrypt both RTMPE (Encrypted RTMP) and RTMPS (Secure RTMP) Digital Rights Management technologies.
Adobe Systems Inc. asserted that rtmpdump, in a 2009 Digital Millennium Copyright Act Cease and Desist order issued against SourceForge, "can be used" to infringe copyrights, without claiming actual use. As of 2009, SourceForge had removed the project files, providing the message "The project specified has been flagged as deleted". From November 2009 onwards, the project has been hosted as a Git repository at MPlayer's website, MplayerHQ.hu.
On-demand streams
In negotiating a connection, an RTMP client sends and receives a data stream containing multiple elements, as a single command line. An on-demand stream typically includes the following elements:
For a Limelight server
-r rtmp://
-a: authentication elements (the alternative --app may be used instead)
Typically in the format -?as=&av=&te=&mp=&et=&fmta-token=
: A path address. For example, a1414/e3
: For example, as=adobe-hmac-sha256
: For example, av=1
: For example, te=connect
: Typically, two or more comma-separated URL addresses, for alternative bitrate streams (MPEG format, MP3 or MP4)
: Typically, a ten-character number (numerical)
: Typically, a 64-character authentication (auth) string [that is, an authentication token] (alphanumeric)
-y: playpath (URL address of the desired bitstream, one of those specified in mp above)
Typically, in the format mp3:/.mp3 or mp4:/.mp4
-o: Output filename
The foregoing are typically the only elements (or "switches") that are essential to a connection, if neither Tunnelling nor Encryption is in use by the server. Although other elements may be encountered in practice, they are normally non-essential.
Hence the following elements are typically sent by the client software application, as a single command line -
rtmpdump -r rtmp://xxxxxxxx.fcod.llnwd.net
-a path?as=data&av=data&te=data&mp=data&et=data&fmta-token=data
-y mp4:URL/filename.mp4 -o file_mp4.flv
The parts comprising the -a (or --app) element must be incorporated in it in the order shown above, as the sequence in which its parts are received by the RTMP server is critical.
The authentication strings (et= and fmta-token=) contain session information, so will change on each fresh connection made to the server (which in practice typically means they will expire if a new session is begun, not literally on every attempt to resume a connection), but the other elements will not usually vary from session to session.
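A small wrapper script can assemble such a command line while preserving the required ordering of the parts of the -a element. The Python sketch below is only illustrative: the host, path and parameter values are placeholders (real values come from the server, for example via rtmpsrv or rtmpsuck), and the flags used are the -r, -a, -y and -o switches documented above.

import subprocess

def build_rtmpdump_command(host, app_path, auth_params, playpath, outfile):
    # auth_params is an ordered list of (name, value) pairs, because the
    # server expects the -a parts in a fixed sequence (as, av, te, mp, et,
    # fmta-token for a Limelight-style on-demand stream).
    app = app_path + "?" + "&".join(f"{k}={v}" for k, v in auth_params)
    return [
        "rtmpdump",
        "-r", f"rtmp://{host}",
        "-a", app,
        "-y", playpath,
        "-o", outfile,
    ]

# Placeholder values only; real ones are obtained from the RTMP server.
cmd = build_rtmpdump_command(
    host="example.fcod.llnwd.net",
    app_path="a1414/e3",
    auth_params=[("as", "adobe-hmac-sha256"), ("av", "1"), ("te", "connect"),
                 ("mp", "mp4-url"), ("et", "1234567890"), ("fmta-token", "TOKEN")],
    playpath="mp4:path/filename.mp4",
    outfile="file_mp4.flv",
)
subprocess.run(cmd, check=True)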
For an Akamai server
The command line is typically as above, except that the -a (or --app) element contains the following parts instead -
: Typically, a 62-character authentication (auth) string [i.e. an authentication token] (alphanumeric)
aifp: For example, aifp=v001
: Typically, the URL address of the stream
Hence the following elements are typically sent by the client software application, as a single command line -
program.exe -r rtmp://xxxxxxx.edgefcs.net
-a ondemand?auth=data&aifp=data&slist=data
-y mp3:URL/filename -o file_mp3.flv
The parts comprising the -a (or --app) element must be incorporated in it in the order shown above, as the sequence in which its parts are received by the RTMP server is critical.
The authentication string (auth=) contains session information, so will change on each fresh connection made to the server (typically, if a new session is begun, e.g. the computer is restarted, not literally on every attempt to resume a connection). The other parts will not usually vary from session to session.
Note - The above describes the simplified form, whereby the stream is first saved to the user's hard disk, to be played back thereafter in a media player capable of playing an FLV encoded file (H.263 or H.264 encoding), such as GOM Player. If it's desired, instead, to play the stream directly from the RTMP server, thus giving immediate playback, additional elements will be needed in the command line including -
-f: This specifies the version of the Flash plugin installed on the user's computer. For example, -f "WIN 9,0,260,0" would indicate the user has the Windows version of Flash Player 9, release 260.
-W: The capital W command. This is the URL address of the SWF player used to play the stream, as indicated by the web page from which the stream is derived. For example, path/9player.swf?revision=18269_21576.
Live streams
The command line for an Akamai server is typically as for an Akamai on-demand stream. But the -a (or --app) element contains the following parts
: Typically, a 62-character authentication (auth) string [i.e. an authentication token] (alphanumeric)
aifp: For example, aifp=v001
: Typically, the URL address of the stream, in the format (e.g. Radio_7_Int@6463); or more than one URL if more than one bitrate is available [see note]
Note - If the string contains two or more alternative streams (i.e. offers a choice of streams at alternative bitrates), the element (--playpath or -y) specifies the one chosen by the user, as the identifier item.
Hence the following sequence is typically sent by the client software application, as a single command line -
rtmpdump.exe --live -r rtmp://xxxxxxx.live.edgefcs.net
-a live?auth=data&aifp=data&slist=data
--playpath {identifier}?auth=data&aifp=data&slist=data -o output.flv
All these items are mandatory, and must be included in the order shown above. The string following the ? (question mark) in both the -a and --playpath elements will typically be identical. The identifier item will typically be a sub-set of the slist data (if the latter offers a choice), otherwise they too will be identical. The -o element can specify an output filename chosen by the user.
Specifying the complete is unnecessary, as that element is constructed in memory by the client application. Typically, in memory it takes the following form:
-y xxxxx_x_@xxxx?auth=&aifp=v001&slist=xxxxx_x_@xxxx,xxxxx_x_@xxxx
References
External links
Adobe Flash
Free multimedia software |
2372281 | https://en.wikipedia.org/wiki/Air%20France%20Flight%20296Q | Air France Flight 296Q | Air France Flight 296Q was a chartered flight of a new Airbus A320-111 operated by Air France for Air Charter International. On 26 June 1988, the plane crashed while making a low pass over Mulhouse–Habsheim Airport (ICAO airport code LFGB) as part of the Habsheim Air Show. Most of the crash sequence, which occurred in front of several thousand spectators, was caught on video. The cause of the crash has been the source of major controversy.
This particular flight was the A320's first passenger flight (most of those on board were journalists and raffle winners). The low-speed flyover, with landing gear down, was supposed to take place at an altitude of 100 feet; instead, the plane performed the flyover at about 30 feet, skimmed the treetops of the forest at the end of the runway (which had not been shown on the airport map given to the pilots) and crashed. All the passengers survived the initial impact, but a woman and two children died from smoke inhalation before they could escape.
Official reports concluded that the pilots flew too low, too slow, failed to see the forest and accidentally flew into it. The captain, Michel Asseline, disputed the report and claimed an error in the fly-by-wire computer prevented him from applying thrust and pulling up. In the aftermath of the crash, there were allegations that investigators had tampered with evidence, specifically the aircraft's flight recorders ("black boxes").
This was the first fatal crash of an Airbus A320.
Aircraft
The accident aircraft, an Airbus A320-111, registration F-GFKC, serial number 9, first flew on 6 January 1988 and was delivered to Air France on 23 June, three days prior to its destruction. It was the third A320 delivered to Air France, the launch customer.
Flight deck crew
Captain Michel Asseline, 44, had been a pilot with Air France for almost twenty years and had the following endorsements: Caravelle; Boeing 707, 727, and 737; and Airbus A300 and A310. He was a highly distinguished pilot with 10,463 flight hours. A training captain since 1979, Asseline was appointed to head the company's A320 training subdivision at the end of 1987. As Air France's technical pilot, he had been heavily involved in test flying the A320 type and had carried out maneuvers beyond normal operational limitations. Asseline had total confidence in the aircraft's computer systems.
First Officer Pierre Mazières, 45, had been flying with the airline since 1969 and had been a training captain for six years. He was endorsed on the Caravelle, Boeing 707 and 737, and had qualified as an A320 captain three months before the accident. Mazières had 10,853 hours of flight time.
Flight plan
At the time of the incident, only three of the new aircraft type had been delivered to Air France, and the newest one (in service for two days) had been chosen for the flyover.
The aircraft was to fly from Charles de Gaulle Airport to Basel–Mulhouse Airport for a press conference. Then, sightseeing charter passengers would board and the aircraft would fly the short distance to the small Habsheim aerodrome. The captain would make a low-level fly-pass over Runway 02, climb up and turn back, and repeat the fly-pass over the same runway in the reciprocal direction (Runway 20). This would be followed by a sightseeing trip south to Mont Blanc before the passengers would be returned to Basel–Mulhouse Airport. Finally, the aircraft would return to Paris.
The pilots had each had a busy weekend and did not receive the flight plan until the morning of the flight. They received no verbal details about the flyover or the aerodrome itself.
Flyover
The flight plan was that as they approached the airfield, they would extend third-stage flap, lower the landing gear, and line up for level flight at . The captain would slow the aircraft to its minimum flying speed with maximum angle of attack, disable the "alpha floor" (the function that would otherwise automatically increase engine thrust when the angle of attack reached 15°) and rely on the first officer to adjust the engine thrust manually to maintain 100 feet. After the first pass, the first officer would then apply the takeoff/go-around switch (TOGA) power and climb steeply before turning back for the second pass. "I've done it twenty times!" Asseline assured his first officer. The flyover had been approved by Air France's Air Operations Directorate and Flight Safety Department, and air traffic control and Basel tower had been informed.
Habsheim aerodrome was too small to be listed in the aircraft's flight computer, thereby requiring a visual approach; both pilots were also unfamiliar with the airfield when they began their descent from only from the field. This distance was too short for them to stabilise the aircraft's altitude and speed for the flyover. Additionally, the captain was expecting from the flight plan to do the pass over runway 02 ( long, paved) and was preparing for that alignment. But as the aircraft approached the field, the flight deck crew noticed that the spectators were gathered beside runway 34R ( long, grass). This last-minute deviation in the approach further distracted the crew from stabilising the aircraft's altitude and they quickly dropped to .
From higher up, the forest at the end of 34R had looked like a different type of grass. But now that the aircraft was performing its flyover at only thirty feet, the crew noticed the aircraft was lower than the now-identified hazard that they were fast approaching. The cockpit voice recorder recorded the first officer's call:
First officer: "TOGA power! Go-around track!"
Followed by:
Cockpit area microphone (CAM): [Increase in engine speed]
CAM: [Noises of impact in the trees]
Captain: "Oh shit!"
END OF TAPE
The crew applied full power and Asseline attempted to climb. However, the elevators did not respond to the pilot's commands because the A320's computer system engaged its "alpha protection" mode (meant to prevent the aircraft from entering a stall). Less than five seconds later, the turbines began ingesting leaves and branches as the aircraft skimmed the tops of the trees. The combustion chambers clogged and the engines failed. The aircraft fell to the ground.
Traditionally, pilots respect the inherent dangers of flying at low speeds at low altitude, and normally, a pilot would not attempt to fly an aircraft so close to stalling with the engines at flight idle (minimum thrust setting in flight). In this instance, however, the pilots involved did not hesitate to fly the aircraft below its normal minimum flying speed because the purpose of the flyover was to demonstrate that the aircraft's computer systems would ensure that lift would always be available regardless of how the pilots handled the controls. Asseline's experience of flying the aircraft type at the outer limits of its flight performance envelope may have led to overconfidence and complacency.
Crash and evacuation
During the impact, the right wing was torn off, and the spilling fuel ignited immediately. Two fire trucks at the airshow set off and an ambulance followed. Local emergency services were informed by radio communication.
Inside the aircraft, many of the passengers were dazed from hitting their heads on the backs of the seats in front of them. Some passengers had difficulty unfastening their seatbelts because they were unfamiliar with the mechanism (which differs from the type used in car seatbelts). The purser went to announce instructions to the passengers but the public address system handset had been torn off. He then tried to open the left-side forward door, which was blocked by trees. The door opened partway, and the emergency escape slide began inflating while it was stuck partly inside the fuselage. The purser, a passenger, and a flight attendant (a guest from another airline) managed to push the door fully open. In the process, the purser and the passenger were thrown out of the fuselage with the slide landing on top of them. The flight attendant then began evacuating the passengers but they soon began to pile up at the bottom of the slide as their route was blocked by trees and branches. The egress of the passengers was temporarily halted while the purser and another flight attendant began clearing the branches. When the evacuation continued, the flight attendant stayed at the door, helping passengers, until she began suffering from smoke inhalation.
By this time, the fire had entered the right side of the fuselage through the damaged floor section between seat rows 10 and 15. A passenger tried to open the left-side overwing exit. It would not open, which was fortunate as there was by that time a fire on the left wing.
The panicking passengers now began pushing toward the front of the cabin. A flight attendant standing in the centre of the cabin at seat 12D was pushed into the aisle by a severely burnt passenger from 12F. Then, as she was helping another passenger whose clothes were on fire, she was carried forward by the surge of people rushing to escape. After the rush of people had left and the interior was fast becoming toxic, she stood at the front door and called back into the cabin. There was no reply and the thick black smoke made a visual check impossible, so she exited the fuselage. The evacuation from the rear door had been fast and smooth thanks to the instructions from the flight attendants at the rear of the aircraft.
The medical team from the airshow arrived and began examining the passengers. Ten minutes after the crash, the first of the fire trucks arrived. But because of the forest, only the smaller vehicles were able to reach the wreckage. Apart from the tail section, the aircraft was consumed by fire.
Of the 136 people on board, three did not escape. One was a disabled boy in seat 4F who was unable to move. Another was a girl in seat 8C, travelling alone, who was unable to remove her seatbelt. The third was a woman who had reached the front door and then returned to help the girl. All three were found dead lying in the aisle. Thirty-four passengers required hospitalisation for injuries and burns. Both pilots received minor head injuries and also suffered from smoke inhalation and shock.
Accident investigation
The official investigation was carried out by the Bureau of Enquiry and Analysis for Civil Aviation Safety (BEA), the French air accident investigation bureau, in conjunction with Air France and Airbus. Although the official investigation was written in French, the BEA released an English version on 29 November 1989. The translated version of the report can be found on the Aviation Accidents Database and at the Aviation Safety Network.
Flight recorders
The plane's flight recorders were found still attached in the unburnt tail section. The Cockpit Voice Recorder (CVR) continued to operate for about 1.5 seconds after the initial impact. The Digital Flight Data Recorder (DFDR) continued to operate for about one second, then recorded nonsensical data for another two seconds. Interruption of the power occurred forward of the tail section—most probably in the wheel-well area, which was heavily damaged.
The CVR was read during the night of 26 June at the BEA. The transcription was later clarified with the assistance of the pilots involved. The tape speed was set using the 400 Hz frequency of the aircraft's electrical supply and then synchronised with the air traffic control recordings, which included a time track.
The DFDR was read the same night by the Brétigny sur Orge Flight Test Centre:
12:43:44 - the aircraft begins its descent from , initially at a rate of per minute with 'Flaps 1'.
12:44:14 - the engine power is reduced to flight idle. Three seconds later, the undercarriage is extended. A further ten seconds later, 'Flaps 2' is selected.
12:44:45 - 'Flaps 3' is selected as the aircraft descends through at an airspeed of 177 knots.
12:45:06 - the aircraft descends through at an airspeed of 155 knots.
12:45:15 - the aircraft, now at , begins a deviation to the right (maximum bank angle: 30°) to line up with the grass strip 34R.
12:45:23 - the aircraft completes the deviation at a height of and an airspeed of 141 knots. During this manoeuvre, a fluctuation in the radio altimeter height corresponds to the aircraft passing over a patch of trees (whereas before and after this fluctuation, the readings of the radio altimeter and those of the barometric altimeter match perfectly). Three seconds later, the aircraft descends through at an airspeed of 132 knots. The Captain begins to flare the aircraft (he lifts the nose 4°) to level its flight. The aircraft levels off at .
12:45:30 - nose-up attitude increases to 7°.
12:45:35 - nose-up attitude is now 15° and speed is 122 knots. TOGA power is applied. Four seconds later, the aircraft begins striking the treetops.
Aircraft and engines
Investigators found that the aircraft had been airworthy, that its weight and centre-of-gravity had been within limits, and that there was no evidence of mechanical or electronic systems failure.
The flight deck crew believed that the engines had failed to respond to the application of full power. With the CFM56-5 engines, four seconds are required to go from 29% N1 (flight idle) to 67%. It then takes one second more to go from 67 to 83% N1. From the engine parameters recorded on the DFDR and spectral analysis of the engine sounds on the CVR, it was determined that five seconds after TOGA power was applied, the N1 speed of Nº1 engine was 83% while that of Nº2 engine was 84%. Spectral analysis of the engine sounds indicated that 0.6 seconds later, both engines had reached 91% (by this stage, they were starting to ingest vegetation). This response of the engines complied with their certification data.
Official report
The official report from BEA concluded that the probable cause of the accident was a combination of the following:
Very low flyover height, lower than surrounding obstacles;
Speed very slow and reducing to reach maximum possible angle of attack;
Engines speed at flight idle; and
Late application of go-around power.
Furthermore, the bureau concluded that if the descent below 100 feet was not deliberate, it may have resulted from a failure by the crew to take proper account of the visual and aural information available to them regarding the elevation "above ground level" (AGL) of the aircraft.
The report further recommended that:
Passengers should be banned from all demonstration flights
Flight crews should be provided with – and ensure – proper reconnaissance of airfields
Airline company procedures should be reviewed to ensure they comply with official regulations concerning altitude
Prosecutions
In 1996, Captain Asseline, First Officer Mazières, two Air France officials and the president of the flying club sponsoring the air show were all charged with involuntary manslaughter. In 1997, all five were found guilty. Asseline was initially sentenced to six months in prison along with twelve months of probation. Mazières was given a twelve month suspended sentence. The others were sentenced to probation. Asseline walked free from the court and said he would appeal to France's highest court, the Court of Cassation (). According to French law, Asseline was required to submit himself to the prison system before his case could be taken up by the Court of Cassation. In 1998, Asseline's appeal was rejected and his sentence was increased to ten months of imprisonment along with ten months of probation.
Alternative explanation
The television documentary series Mayday also reports claims in Season 9 Episode 3 that the plane's flight recorder might have been tampered with and indicated that four seconds had been cut from the tape; this was shown by playing back a control tower tape and comparing it to the remaining tape. Asseline argues that he attempted to apply thrust earlier than indicated in the flight recorder data. When he increased throttle to level off at 100 ft, the engines did not respond. Asseline claims that this indicated a problem with the aeroplane's fly-by-wire system rather than pilot error. After a few seconds, Asseline claims, he became worried that the plane's completely computerised throttle control had malfunctioned and responded by pulling the throttle all the way back then forward again. By that time the aircraft had touched the trees. Mayday also looks at the theory that it was the computer at fault, not the pilots. Because the aircraft's altitude had fallen below 100 ft, the plane's computer may have been programmed to believe it was landing and therefore prevent any drastic manoeuvres from either pilot. When the crew suddenly asked the plane for more power and lift, it may have simply ignored them.
It was also claimed by the Institute of Police Forensic Evidence and Criminology, based in Switzerland, that the flight data recorders may have been switched and were not the original ones in the airplane. Airbus made a detailed rebuttal of these claims in a document published in 1991, contending that the independent investigator employed by the filmmakers made an error when synchronising the recordings based on a misunderstanding of how the "Radio Transmit" parameter on the flight data recorder functioned.
Dramatization
The episode "Blaming the Pilot" of the TV series Survival in the Sky featured the accident.
The Discovery Channel Canada / National Geographic TV series Mayday featured the accident and subsequent investigation in a season 9 episode titled "Pilot vs. Plane" and included an interview with Captain Michel Asseline, survivors, and accident investigators.
The episode "Disastrous Descents" of the TV series Aircrash Confidential produced by WMR Productions and IMG Entertainment, featured the accident and included an interview with Captain Michel Asseline.
See also
List of accidents and incidents involving commercial aircraft
List of airshow accidents and incidents
Notes
Footnotes
External links
Commission of Inquiry into the accident on 26 June 1988 in Mulhouse–Habsheim (Archive)
1988 in France
296
Accidents and incidents involving the Airbus A320
Aviation accidents and incidents in 1988
Aviation accidents and incidents in France
Airliner accidents and incidents caused by pilot error
Aviation accidents and incidents at air shows
Aviation accident investigations with disputed causes
Airliner accidents and incidents involving controlled flight into terrain
Conspiracy theories in France
Conspiracy theories involving aviation incidents
June 1988 events in Europe
Filmed accidental deaths |
22678 | https://en.wikipedia.org/wiki/OSS | OSS | OSS or Oss may refer to:
Places
Oss, a city and municipality in the Netherlands
Osh Airport, IATA code OSS
People with the name
Oss (surname), a surname
Arts and entertainment
O.S.S. (film), a 1946 World War II spy film about Office of Strategic Services agents
O.S.S. (TV series), a British spy series which aired in 1957 in the UK and the US
Open Source Shakespeare, a non-commercial website with texts and statistics on Shakespeare's plays
Old Syriac Sinaiticus, a Bible manuscript
Organization of Super Spies, a fictional organization in the Spy Kids franchise
Education
ÖSS (Öğrenci Seçme Sınavı), a former university entrance exam in Turkey
Options Secondary School, Chula Vista, California
Otto Stern School for Integrated Doctoral Education, Frankfurt am Main, Germany
Outram Secondary School, Singapore
Organizations
Observatoire du Sahara et du Sahel, dedicated to fighting desertification and drought; based in Tunis, Tunisia
Office for Science and Society, Science Education from Montreal's McGill University
Office of Strategic Services, World War II forerunner of the Central Intelligence Agency
Office of the Supervising Scientist, an Australian Government body under the Supervising Scientist
Offshore Super Series, an offshore powerboat racing organization
Open Spaces Society, a UK registered charity championing public paths and open spaces
Operations Support Squadron, a United States Air Force support squadron
Optimized Systems Software, a former software company
Science and technology
Ohio Sky Survey
Optical SteadyShot, a lens-based image stabilization system by Sony
Optimal Stereo Sound, another name for the Jecklin Disk recording technique
Oriented spindle stop, a type of spindle motion used within some G-code cycles
Ovary Sparing Spay (OSS)
Overspeed Sensor System (OSS), part of the Train Protection & Warning System for railroad trains
Computer software and hardware
OpenSearchServer, search engine software
Open Sound System, a standard interface for making and capturing sound in Unix operating systems
Open-source software, software with its source code made freely available
Operations support systems, computers used by telecommunications service providers to administer and maintain network systems
Other uses
OSS Fighters, a Romania-based kickboxing promotion
Order of St. Sava, a Serbian decoration
Ossetic language code
See also
AAS (disambiguation)
Hoz (disambiguation)
OS (disambiguation) |
42646748 | https://en.wikipedia.org/wiki/WebWatcher | WebWatcher | WebWatcher is a proprietary computer and mobile device monitoring software developed by Awareness Technologies. WebWatcher is compatible with iOS, Android, Windows, Chrome OS and macOS operating systems. WebWatcher Mobile records text messages, call logs, web history, photos, and GPS. WebWatcher for PC and Mac features include email & Instant Message monitoring, keystroke logging, web content filtering and monitoring, and screenshot monitoring. Critics have referred to WebWatcher and other similar pieces of software as "stalkerware".
History
WebWatcher was initially developed in 2002 for counter-terrorism uses on Windows PCs. The software is now used predominantly by parents to monitor their children's online activities and by employers to monitor the activities of their workers. In 2010, a version of the software was released for BlackBerry and Android devices. In 2012, WebWatcher for Mac was released. WebWatcher for iOS was released in 2014.
Reception and criticism
WebWatcher received the PC Magazine editors' choice award in a 2011 review of Parental Control & Monitoring software. In the article, WebWatcher was described as "Heavy-handed", with the reviewer saying: "if you find it necessary to track a child who's engaging in risky activities, WebWatcher will record every detail and even send you instant notification when it encounters certain words." Also, About.com readers named WebWatcher the "Best Internet Safety Tool" as part of the site's 2011 Readers' Choice Awards.
Critics have noted that since the software runs stealth on a device, there is an opportunity for the software to be installed illegally. In 2020, TechCrunch and other media outlets described WebWatcher as "stalkerware". Shortly thereafter, Google removed advertisements for WebWatcher from its search results.
References
2002 software
Proprietary cross-platform software
Cross-platform software
Windows software
MacOS software
BlackBerry software
C++ software
Android (operating system) software
Content-control software
Stalkerware |
54027815 | https://en.wikipedia.org/wiki/WannaCry%20ransomware%20attack | WannaCry ransomware attack | The WannaCry ransomware attack was a worldwide cyberattack in May 2017 by the WannaCry ransomware cryptoworm, which targeted computers running the Microsoft Windows operating system by encrypting data and demanding ransom payments in the Bitcoin cryptocurrency. It propagated through EternalBlue, an exploit developed by the United States National Security Agency (NSA) for older Windows systems. EternalBlue was stolen and leaked by a group called The Shadow Brokers at least a year prior to the attack. While Microsoft had released patches previously to close the exploit, much of WannaCry's spread was from organizations that had not applied these, or were using older Windows systems that were past their end-of-life. These patches were imperative to organizations' cyber security but many were not implemented due to ignorance of their importance. Some have claimed a need for 24/7 operation, aversion to risking having formerly working applications breaking because of patch changes, lack of personnel or time to install them, or other reasons.
The attack began at 07:44 UTC on 12 May 2017 and was halted a few hours later at 15:03 UTC by the registration of a kill switch discovered by Marcus Hutchins. The kill switch prevented already infected computers from being encrypted or further spreading WannaCry. The attack was estimated to have affected more than 200,000 computers across 150 countries, with total damages ranging from hundreds of millions to billions of dollars. Security experts believed from preliminary evaluation of the worm that the attack originated from North Korea or agencies working for the country.
In December 2017, the United States and United Kingdom formally asserted that North Korea was behind the attack.
A new variant of WannaCry forced Taiwan Semiconductor Manufacturing Company (TSMC) to temporarily shut down several of its chip-fabrication factories in August 2018. The virus spread to 10,000 machines in TSMC's most advanced facilities.
Description
WannaCry is a ransomware cryptoworm, which targeted computers running the Microsoft Windows operating system by encrypting (locking) data and demanding ransom payments in the Bitcoin cryptocurrency. The worm is also known as WannaCrypt, Wana Decrypt0r 2.0, WanaCrypt0r 2.0, and Wanna Decryptor. It is considered a network worm because it also includes a transport mechanism to automatically spread itself. This transport code scans for vulnerable systems, then uses the EternalBlue exploit to gain access, and the DoublePulsar tool to install and execute a copy of itself. WannaCry versions 0, 1, and 2 were created using Microsoft Visual C++ 6.0.
EternalBlue is an exploit of Microsoft's implementation of their Server Message Block (SMB) protocol released by The Shadow Brokers. Much of the attention and comment around the event was occasioned by the fact that the U.S. National Security Agency (NSA) (from whom the exploit was likely stolen) had already discovered the vulnerability, but used it to create an exploit for its own offensive work, rather than report it to Microsoft. Microsoft eventually discovered the vulnerability, and on Tuesday, 14 March 2017, they issued security bulletin MS17-010, which detailed the flaw and announced that patches had been released for all Windows versions that were currently supported at that time, these being Windows Vista, Windows 7, Windows 8.1, Windows 10, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2016.
DoublePulsar is a backdoor tool, also released by The Shadow Brokers on 14 April 2017. Starting from 21 April 2017, security researchers reported that there were tens of thousands of computers with the DoublePulsar backdoor installed. By 25 April, reports estimated that the number of infected computers could be up to several hundred thousand, with numbers increasing every day. The WannaCry code can take advantage of any existing DoublePulsar infection, or installs it itself. On 9 May 2017, private cybersecurity company RiskSense released code on GitHub with the stated purpose of allowing legal white hat penetration testers to test the CVE-2017-0144 exploit on unpatched systems.
When executed, the WannaCry malware first checks the kill switch domain name; if it is not found, then the ransomware encrypts the computer's data, then attempts to exploit the SMB vulnerability to spread out to random computers on the Internet, and laterally to computers on the same network. As with other modern ransomware, the payload displays a message informing the user that their files have been encrypted, and demands a payment of around US$300 in bitcoin within three days, or US$600 within seven days, warning that "you have not so enough time." Three hardcoded bitcoin addresses, or wallets, are used to receive the payments of victims. As with all such wallets, their transactions and balances are publicly accessible even though the cryptocurrency wallet owners remain unknown.
Several organizations released detailed technical write-ups of the malware, including a senior security analyst at RiskSense, Microsoft, Cisco, Malwarebytes, Symantec and McAfee.
Attack
The attack began on Friday, 12 May 2017, with evidence pointing to an initial infection in Asia at 07:44 UTC. The initial infection was likely through an exposed vulnerable SMB port, rather than email phishing as initially assumed. Within a day the code was reported to have infected more than 230,000 computers in over 150 countries.
Organizations that had not installed Microsoft's security update from March were affected by the attack. Those still running unsupported versions of Microsoft Windows, such as Windows XP and Windows Server 2003 were at particularly high risk because no security patches had been released since April 2014 for Windows XP (with the exception of one emergency patch released in May 2014) and July 2015 for Windows Server 2003. A Kaspersky Lab study reported, however, that less than 0.1 percent of the affected computers were running Windows XP, and that 98 percent of the affected computers were running Windows 7. In a controlled testing environment, the cybersecurity firm Kryptos Logic found that it was unable to infect a Windows XP system with WannaCry using just the exploits, as the payload failed to load, or caused the operating system to crash rather than actually execute and encrypt files. However, when executed manually, WannaCry could still operate on Windows XP.
Defensive response
Experts quickly advised affected users against paying the ransom, as there were no reports of people getting their data back after payment and high revenues would encourage more such campaigns. As of 14 June 2017, after the attack had subsided, a total of 327 payments totaling US$130,634.77 (51.62396539 XBT) had been transferred.
The day after the initial attack in May, Microsoft released out-of-band security updates for end of life products Windows XP, Windows Server 2003 and Windows 8; these patches had been created in February of that year following a tip off about the vulnerability in January of that year. Organizations were advised to patch Windows and plug the vulnerability in order to protect themselves from the cyber attack. The head of Microsoft's Cyber Defense Operations Center, Adrienne Hall, said that "Due to the elevated risk for destructive cyber-attacks at this time, we made the decision to take this action because applying these updates provides further protection against potential attacks with characteristics similar to WannaCrypt [alternative name to WannaCry]".
Researcher Marcus Hutchins discovered the kill switch domain hardcoded in the malware. Registering a domain name for a DNS sinkhole stopped the attack spreading as a worm, because the ransomware only encrypted the computer's files if it was unable to connect to that domain, which all computers infected with WannaCry before the website's registration had been unable to do. While this did not help already infected systems, it severely slowed the spread of the initial infection and gave time for defensive measures to be deployed worldwide, particularly in North America and Asia, which had not been attacked to the same extent as elsewhere. On 14 May, a first variant of WannaCry appeared with a new and second kill-switch registered by Matt Suiche on the same day. This was followed by a second variant with the third and last kill-switch on 15 May, which was registered by Check Point threat intelligence analysts. A few days later, a new version of WannaCry was detected that lacked the kill switch altogether.
On 19 May, it was reported that hackers were trying to use a Mirai botnet variant to effect a distributed denial-of-service attack on WannaCry's kill-switch domain with the intention of knocking it offline. On 22 May, Hutchins protected the domain by switching to a cached version of the site, capable of dealing with much higher traffic loads than the live site.
Separately, researchers from University College London and Boston University reported that their PayBreak system could defeat WannaCry and several other families of ransomware by recovering the keys used to encrypt the user's data.
It was discovered that Windows encryption APIs used by WannaCry may not completely clear the prime numbers used to generate the payload's private keys from the memory, making it potentially possible to retrieve the required key if they had not yet been overwritten or cleared from resident memory. The key is kept in the memory if the WannaCry process has not been killed and the computer has not been rebooted after being infected. This behaviour was used by a French researcher to develop a tool known as WannaKey, which automates this process on Windows XP systems. This approach was iterated upon by a second tool known as Wanakiwi, which was tested to work on Windows 7 and Server 2008 R2 as well.
Within four days of the initial outbreak, new infections had slowed to a trickle due to these responses.
Attribution
Linguistic analysis of the ransom notes indicated the authors were likely fluent in Chinese and proficient in English, as the versions of the notes in those languages were probably human-written while the rest seemed to be machine-translated. According to an analysis by the FBI's Cyber Behavioral Analysis Center, the computer that created the ransomware language files had Hangul language fonts installed, as evidenced by the presence of the "\fcharset129" Rich Text Format tag. Metadata in the language files also indicated that the computers that created the ransomware were set to UTC+09:00, used in Korea.
A security researcher initially posted a tweet referencing code similarities between WannaCry and previous malware. The cybersecurity companies Kaspersky Lab and Symantec have both said the code has some similarities with that previously used by the Lazarus Group (believed to have carried out the cyberattack on Sony Pictures in 2014 and a Bangladesh bank heist in 2016—and linked to North Korea). This could also be either simple re-use of code by another group or an attempt to shift blame—as in a cyber false flag operation; but a leaked internal NSA memo is alleged to have also linked the creation of the worm to North Korea. Brad Smith, the president of Microsoft, said he believed North Korea was the originator of the WannaCry attack, and the UK's National Cyber Security Centre reached the same conclusion.
On 18 December 2017, the United States Government formally announced that it publicly considers North Korea to be the main culprit behind the WannaCry attack. Then-President Trump's Homeland Security Advisor, Tom Bossert, wrote an op-ed in The Wall Street Journal about this charge, saying "We do not make this allegation lightly. It is based on evidence." In a press conference the following day, Bossert said that the evidence indicates that Kim Jong-un had given the order to launch the malware attack. Bossert said that Canada, New Zealand and Japan agree with the United States' assessment of the evidence that links the attack to North Korea, while the United Kingdom's Foreign and Commonwealth Office says it also stands behind the United States' assertion.
North Korea, however, denied being responsible for the cyberattack.
On 6 September 2018, the US Department of Justice (DoJ) announced formal charges against Park Jin-hyok for involvement in the Sony Pictures hack of 2014. The DoJ contended that Park was a North Korean hacker working as part of a team of experts for the North Korean Reconnaissance General Bureau. The Department of Justice asserted this team also had been involved in the WannaCry attack, among other activities.
Impact
The ransomware campaign was unprecedented in scale according to Europol, which estimates that around 200,000 computers were infected across 150 countries. According to Kaspersky Lab, the four most affected countries were Russia, Ukraine, India and Taiwan.
One of the largest agencies struck by the attack was the National Health Service hospitals in England and Scotland, and up to 70,000 devices – including computers, MRI scanners, blood-storage refrigerators and theatre equipment – may have been affected. On 12 May, some NHS services had to turn away non-critical emergencies, and some ambulances were diverted. In 2016, thousands of computers in 42 separate NHS trusts in England were reported to be still running Windows XP. In 2018 a report by Members of Parliament concluded that all 200 NHS hospitals or other organizations checked in the wake of the WannaCry attack still failed cybersecurity checks. NHS hospitals in Wales and Northern Ireland were unaffected by the attack.
Nissan Motor Manufacturing UK in Tyne and Wear, England, halted production after the ransomware infected some of their systems. Renault also stopped production at several sites in an attempt to stop the spread of the ransomware. Spain's Telefónica, FedEx and Deutsche Bahn were hit, along with many other countries and companies worldwide.
The attack's impact is said to be relatively low compared to other potential attacks of the same type and could have been much worse had Hutchins not discovered that a kill switch had been built in by its creators or if it had been specifically targeted on highly critical infrastructure, like nuclear power plants, dams or railway systems.
According to cyber-risk-modeling firm Cyence, economic losses from the cyber attack could reach up to US$4 billion, with other groups estimating the losses to be in the hundreds of millions.
Affected organizations
The following is an alphabetical list of organisations confirmed to have been affected:
Reactions
A number of experts highlighted the NSA's non-disclosure of the underlying vulnerability, and their loss of control over the EternalBlue attack tool that exploited it. Edward Snowden said that if the NSA had "privately disclosed the flaw used to attack hospitals when they found it, not when they lost it, the attack may not have happened". British cybersecurity expert Graham Cluley also sees "some culpability on the part of the U.S. intelligence services". According to him and others "they could have done something ages ago to get this problem fixed, and they didn't do it". He also said that despite obvious uses for such tools to spy on people of interest, they have a duty to protect their countries' citizens. Others have also commented that this attack shows that the practice of intelligence agencies to stockpile exploits for offensive purposes rather than disclosing them for defensive purposes may be problematic. Microsoft president and chief legal officer Brad Smith wrote, "Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen." Russian President Vladimir Putin placed the responsibility of the attack on U.S. intelligence services, for having created EternalBlue.
On 17 May 2017, United States bipartisan lawmakers introduced the PATCH Act that aims to have exploits reviewed by an independent board to "balance the need to disclose vulnerabilities with other national security interests while increasing transparency and accountability to maintain public trust in the process".
On 15 June 2017, the United States Congress was to hold a hearing on the attack. Two subpanels of the House Science Committee were to hear the testimonies from various individuals working in the government and non-governmental sector about how the US can improve its protection mechanisms for its systems against similar attacks in the future.
Marcus Hutchins, a cybersecurity researcher, working in loose collaboration with UK's National Cyber Security Centre, researched the malware and discovered a "kill switch". Later globally dispersed security researchers collaborated online to develop open source tools that allow for decryption without payment under some circumstances. Snowden states that when "NSA-enabled ransomware eats the Internet, help comes from researchers, not spy agencies" and asks why this is the case.
Adam Segal, director of the digital and cyberspace policy program at the Council on Foreign Relations, stated that "the patching and updating systems are broken, basically, in the private sector and in government agencies". In addition, Segal said that governments' apparent inability to secure vulnerabilities "opens a lot of questions about backdoors and access to encryption that the government argues it needs from the private sector for security". Arne Schönbohm, president of Germany's Federal Office for Information Security (BSI), stated that "the current attacks show how vulnerable our digital society is. It's a wake-up call for companies to finally take IT security [seriously]".
United Kingdom
The attack also had political implications; in the United Kingdom, the impact on the National Health Service quickly became political, with claims that the effects were exacerbated by Government underfunding of the NHS; in particular, the NHS had ceased paying for the Custom Support arrangement under which Microsoft continued to provide updates for otherwise-unsupported software used within the organization, including Windows XP. Home Secretary Amber Rudd refused to say whether patient data had been backed up, and Shadow Health Secretary Jon Ashworth accused Health Secretary Jeremy Hunt of refusing to act on a critical note from Microsoft, the National Cyber Security Centre (NCSC) and the National Crime Agency that had been received two months previously.
Others argued that hardware and software vendors often fail to account for future security flaws, selling systems that, due to their technical design and market incentives, eventually won't be able to properly receive and apply patches.
The NHS denied that it was still using XP, claiming only 4.7% of devices within the organization ran Windows XP. The cost of the attack to the NHS was estimated as £92 million in disruption to services and IT upgrades.
After the attack, NHS Digital refused to finance the estimated £1 billion to meet the Cyber Essentials Plus standard, an information security certification organized by the UK NCSC, saying this would not constitute "value for money", and that it had invested over £60 million and planned "to spend a further £150 [million] over the next two years" to address key cyber security weaknesses.
See also
References
External links
Ransom:Win32/WannaCrypt at Microsoft Malware Protection Center
, a Twitterbot tracking the ransom payments
2017 in computing
Cyberattacks
Cybercrime
Hacking in the 2010s
May 2017 crimes
Ransomware
Computer security exploits
Windows malware
2010s internet outages |
1349 | https://en.wikipedia.org/wiki/Atanasoff%E2%80%93Berry%20computer | Atanasoff–Berry computer | The Atanasoff–Berry computer (ABC) was the first automatic electronic digital computer. Limited by the technology of the day and by problems with its execution, the device has remained somewhat obscure. The ABC's priority is debated among historians of computer technology because it was neither programmable nor Turing-complete. Conventionally, the ABC is considered the first electronic ALU (arithmetic logic unit), a component that is integrated into every modern processor's design.
Its unique contribution was to make computing faster by being the first to use vacuum tubes to do the arithmetic calculations. Prior to this, slower electro-mechanical methods were used by the Harvard Mark I and Konrad Zuse's machines. The first electronic, programmable, digital machine, the Colossus computer of 1943–1945, used similar tube-based technology to the ABC.
Overview
Conceived in 1937, the machine was built by Iowa State College mathematics and physics professor John Vincent Atanasoff with the help of graduate student Clifford Berry. It was designed only to solve systems of linear equations and was successfully tested in 1942. However, its intermediate result storage mechanism, a paper card writer/reader, was not perfected, and when John Vincent Atanasoff left Iowa State College for World War II assignments, work on the machine was discontinued. The ABC pioneered important elements of modern computing, including binary arithmetic and electronic switching elements, but its special-purpose nature and lack of a changeable, stored program distinguish it from modern computers. The computer was designated an IEEE Milestone in 1990.
Atanasoff and Berry's computer work was not widely known until it was rediscovered in the 1960s, amidst patent disputes over the first instance of an electronic computer. At that time ENIAC, which had been created by John Mauchly and J. Presper Eckert, was considered to be the first computer in the modern sense, but in 1973 a U.S. District Court invalidated the ENIAC patent and concluded that the ENIAC inventors had derived the subject matter of the electronic digital computer from Atanasoff. When, in the mid-1970s, the secrecy surrounding the British World War II development of the Colossus computers, which pre-dated ENIAC, was lifted and Colossus was described at a conference in Los Alamos, New Mexico, in June 1976, John Mauchly and Konrad Zuse were reported to have been astonished.
Design and construction
According to Atanasoff's account, several key principles of the Atanasoff–Berry computer were conceived in a sudden insight after a long nighttime drive to Rock Island, Illinois, during the winter of 1937–38. The ABC innovations included electronic computation, binary arithmetic, parallel processing, regenerative capacitor memory, and a separation of memory and computing functions. The mechanical and logic design was worked out by Atanasoff over the next year. A grant application to build a proof of concept prototype was submitted in March 1939 to the Agronomy department, which was also interested in speeding up computation for economic and research analysis. $5,000 of further funding to complete the machine came from the nonprofit Research Corporation of New York City.
The ABC was built by Atanasoff and Berry in the basement of the physics building at Iowa State College during 1939–1942. The initial funds were released in September, and the 11-tube prototype was first demonstrated in October 1939. A December demonstration prompted a grant for construction of the full-scale machine. The ABC was built and tested over the next two years. A January 15, 1941, story in the Des Moines Register announced the ABC as "an electrical computing machine" with more than 300 vacuum tubes that would "compute complicated algebraic equations" (but gave no precise technical description of the computer). The system weighed more than 700 pounds (320 kg). It contained approximately a mile (1.6 km) of wire, 280 dual-triode vacuum tubes, 31 thyratrons, and was about the size of a desk.
It was not programmable, which distinguishes it from more general machines of the same era, such as Konrad Zuse's 1941 Z3 and the Colossus computers of 1943–1945. Nor did it implement the stored-program architecture, first implemented in the Manchester Baby of 1948, required for fully general-purpose practical computing machines.
The machine was, however, the first to implement three critical ideas that are still part of every modern computer:
Using binary digits to represent all numbers and data.
Performing all calculations using electronics rather than wheels, ratchets, or mechanical switches.
Organizing a system in which computation and memory are separated.
The memory of the Atanasoff–Berry computer was a system called regenerative capacitor memory, which consisted of a pair of drums, each containing 1600 capacitors that rotated on a common shaft once per second. The capacitors on each drum were organized into 32 "bands" of 50 (30 active bands and two spares in case a capacitor failed), giving the machine a speed of 30 additions/subtractions per second. Data was represented as 50-bit binary fixed-point numbers. The electronics of the memory and arithmetic units could store and operate on 60 such numbers at a time (3000 bits).
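As a quick cross-check of these figures, the following sketch (plain Python written for this article, using only the numbers quoted above) tallies the drum capacity and the number of operands the machine could hold at once.

```python
# Back-of-the-envelope check of the ABC's regenerative capacitor memory,
# using only the figures quoted above.

CAPACITORS_PER_DRUM = 1600      # capacitors rotating on the common shaft
BANDS_PER_DRUM = 32             # "bands" of capacitors per drum
BITS_PER_BAND = 50              # one 50-bit fixed-point number per band
ACTIVE_BANDS = 30               # two bands were kept as spares
DRUMS = 2

# Sanity check: 32 bands of 50 capacitors account for every capacitor.
assert BANDS_PER_DRUM * BITS_PER_BAND == CAPACITORS_PER_DRUM

numbers_held = DRUMS * ACTIVE_BANDS            # 60 fifty-bit numbers
bits_held = numbers_held * BITS_PER_BAND       # 3000 bits in total

print(f"Operands held at once: {numbers_held}")   # 60
print(f"Total storage: {bits_held} bits")         # 3000
```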
The alternating current power-line frequency of 60 Hz was the primary clock rate for the lowest-level operations.
The arithmetic logic functions were fully electronic, implemented with vacuum tubes. The family of logic gates ranged from inverters to two- and three-input gates. The input and output levels and operating voltages were compatible between the different gates. Each gate consisted of one inverting vacuum-tube amplifier, preceded by a resistor divider input network that defined the logical function. The control logic functions, which only needed to operate once per drum rotation and therefore did not require electronic speed, were electromechanical, implemented with relays.
The ALU operated on only one bit of each number at a time; it kept the carry/borrow bit in a capacitor for use in the next AC cycle.
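A short sketch can make the bit-serial scheme concrete. The Python below is an illustrative model only, not a description of the actual tube circuitry: it adds two 50-bit numbers one bit per cycle, least significant bit first, carrying the overflow to the next cycle just as the ABC held the carry in a capacitor between AC cycles.

```python
# Illustrative model of bit-serial addition: one bit per cycle, with the
# carry held over to the next cycle (the ABC kept it in a capacitor).

WORD_BITS = 50  # the ABC's fixed-point word length

def bit_serial_add(a: int, b: int) -> int:
    """Add two non-negative integers one bit per 'cycle', LSB first."""
    result = 0
    carry = 0
    for i in range(WORD_BITS):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        total = bit_a + bit_b + carry
        result |= (total & 1) << i   # sum bit written back to the drum
        carry = total >> 1           # carry held for the next cycle
    return result & ((1 << WORD_BITS) - 1)  # wrap within the 50-bit word

assert bit_serial_add(123456789, 987654321) == 123456789 + 987654321
```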
Although the Atanasoff–Berry computer was an important step up from earlier calculating machines, it was not able to run entirely automatically through an entire problem. An operator was needed to operate the control switches to set up its functions, much like the electro-mechanical calculators and unit record equipment of the time. Selection of the operation to be performed, reading, writing, converting to or from binary to decimal, or reducing a set of equations was made by front-panel switches and, in some cases, jumpers.
There were two forms of input and output: primary user input and output and an intermediate results output and input. The intermediate results storage allowed operation on problems too large to be handled entirely within the electronic memory. (The largest problem that could be solved without the use of the intermediate output and input was two simultaneous equations, a trivial problem.)
Intermediate results were binary, written onto paper sheets by electrostatically modifying the resistance at 1500 locations to represent 30 of the 50-bit numbers (one equation). Each sheet could be written or read in one second. The reliability of the system was limited to about 1 error in 100,000 calculations by these units, primarily attributed to lack of control of the sheets' material characteristics. In retrospect, a solution could have been to add a parity bit to each number as written. This problem was not solved by the time Atanasoff left the university for war-related work.
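The parity-bit remedy suggested above is simple to illustrate. The fragment below is a hypothetical sketch (no such circuitry existed in the machine): an even-parity bit is computed when a word is written and checked when it is read back, which would have caught any single-bit error introduced by the storage sheets.

```python
# Hypothetical even-parity check for a stored word, the kind of remedy
# suggested above for the unreliable intermediate-storage sheets.

def with_parity(word: int):
    """Return the word together with an even-parity bit."""
    parity = bin(word).count("1") & 1
    return word, parity

def parity_ok(word: int, parity: int) -> bool:
    """Detect any single-bit error introduced while the word was stored."""
    return (bin(word).count("1") & 1) == parity

word, p = with_parity(0b1011001)
assert parity_ok(word, p)
assert not parity_ok(word ^ 0b0000100, p)  # a single flipped bit is caught
```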
Primary user input was decimal, via standard IBM 80-column punched cards, and output was decimal, via a front-panel display.
Function
The ABC was designed for a specific purpose: the solution of systems of simultaneous linear equations. It could handle systems with up to 29 equations, a difficult problem for the time. Problems of this scale were becoming common in physics, the department in which John Atanasoff worked. The machine could be fed two linear equations with up to 29 variables and a constant term and eliminate one of the variables. This process would be repeated manually for each of the equations, which would result in a system of equations with one fewer variable. Then the whole process would be repeated to eliminate another variable.
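The elimination step described above is the core operation of Gaussian elimination. The following sketch is purely illustrative and uses modern floating-point arithmetic rather than the ABC's 50-bit fixed-point words; it shows a single step in which two equations are combined so that one variable cancels.

```python
# One elimination step of the kind the ABC performed: combine two linear
# equations so that the chosen variable cancels out. Illustrative only --
# the machine used 50-bit fixed-point arithmetic, not Python floats.

def eliminate(eq1, eq2, var_index):
    """Each equation is a list of coefficients followed by the constant term.
    Returns a new equation in which coefficient `var_index` is zero."""
    factor = eq2[var_index] / eq1[var_index]
    return [b - factor * a for a, b in zip(eq1, eq2)]

# 2x + 3y = 8  and  4x - y = 2: eliminate x, leaving an equation in y only.
combined = eliminate([2, 3, 8], [4, -1, 2], var_index=0)
print(combined)   # [0.0, -7.0, -14.0], i.e. -7y = -14, so y = 2
```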
George W. Snedecor, the head of Iowa State's Statistics Department, was very likely the first user of an electronic digital computer to solve real-world mathematics problems. He submitted many of these problems to Atanasoff.
Patent dispute
On June 26, 1947, J. Presper Eckert and John Mauchly were the first to file for a patent on a digital computing device (ENIAC), much to the surprise of Atanasoff. The ABC had been examined by John Mauchly in June 1941, and Isaac Auerbach, a former student of Mauchly's, alleged that it influenced his later work on ENIAC, although Mauchly denied this. The ENIAC patent did not issue until 1964, and in 1967 Honeywell sued Sperry Rand in an attempt to break the ENIAC patents, arguing that the ABC constituted prior art. The United States District Court for the District of Minnesota released its judgement on October 19, 1973, finding in Honeywell v. Sperry Rand that the ENIAC patent was a derivative of John Atanasoff's invention.
Campbell-Kelly and Aspray conclude:
The case was legally resolved on October 19, 1973, when U.S. District Judge Earl R. Larson held the ENIAC patent invalid, ruling that the ENIAC derived many basic ideas from the Atanasoff–Berry computer. Judge Larson explicitly stated:
Herman Goldstine, one of the original developers of ENIAC, wrote:
Replica
The original ABC was eventually dismantled in 1948, when the university converted the basement to classrooms, and all of its pieces except for one memory drum were discarded.
In 1997, a team of researchers led by Dr. Delwyn Bluhm and John Gustafson from Ames Laboratory (located on the Iowa State University campus) finished building a working replica of the Atanasoff–Berry computer at a cost of $350,000. The replica ABC was on display in the first floor lobby of the Durham Center for Computation and Communication at Iowa State University and was subsequently exhibited at the Computer History Museum.
See also
History of computing hardware
List of vacuum-tube computers
Mikhail Kravchuk
References
Bibliography
External links
The Birth of the ABC
Reconstruction of the ABC, 1994-1997
John Gustafson, Reconstruction of the Atanasoff-Berry Computer
The ENIAC patent trial
Honeywell v. Sperry Rand Records, 1846-1973, Charles Babbage Institute, University of Minnesota.
The Atanasoff-Berry Computer In Operation (YouTube)
1940s computers
One-of-a-kind computers
Vacuum tube computers
Computer-related introductions in 1942
History of computing hardware
Iowa State University
Serial computers |
32624 | https://en.wikipedia.org/wiki/Video%20editing%20software | Video editing software | Video editing software is software used for performing the post-production video editing of digital video sequences on a non-linear editing system (NLE). It has replaced traditional flatbed celluloid film editing tools and analog video tape-to-tape online editing machines.
NLE software is typically based on a timeline interface where sections of moving image video recordings, known as clips, are laid out in sequence and played back. The NLE offers a range of tools for trimming, splicing, cutting and arranging clips across the timeline. Once a project is complete, the NLE system can then be used to export movies in a variety of formats, in a context that may range from broadcast tape formats to compressed file formats for the Internet, DVD and mobile devices. As digital NLE systems have advanced their toolset, their role has expanded and most consumer and professional NLE systems alike now include a host of features for color manipulation, titling and visual effects, as well as tools for editing and mixing audio synchronized with the video image sequence.
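A timeline of this kind can be modelled very simply. The sketch below is a hypothetical toy model rather than the data structure of any particular NLE; the class and field names are inventions for illustration. Clips are trimmed by in and out points and laid end to end, and the programme duration is the sum of the trimmed clip lengths.

```python
# Hypothetical toy model of an NLE timeline: clips are trimmed by in/out
# points and laid end to end. Not the data model of any real editor.

from dataclasses import dataclass

@dataclass
class Clip:
    source: str        # source media file
    in_point: float    # trim start, in seconds of the source
    out_point: float   # trim end, in seconds of the source

    @property
    def duration(self) -> float:
        return self.out_point - self.in_point

timeline = [
    Clip("interview.mov", in_point=12.0, out_point=47.5),
    Clip("broll.mov", in_point=3.0, out_point=18.0),
]

total = sum(clip.duration for clip in timeline)
print(f"Programme duration: {total:.1f} s")   # 50.5 s
```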
See also
Comparison of video editing software
Comparison of video converters
List of video editing software
Photo slideshow software
Video editing
References
Film and video technology |
8967007 | https://en.wikipedia.org/wiki/Software%20quality%20management | Software quality management | Software quality management (SQM) is a management process that aims to develop and manage the quality of software in such a way as to best ensure that the product meets the quality standards expected by the customer while also meeting any necessary regulatory and developer requirements. Software quality managers require software to be tested before it is released to the market, and they do this using a cyclical process-based quality assessment in order to reveal and fix bugs before release. Their job is not only to ensure their software is in good shape for the consumer but also to encourage a culture of quality throughout the enterprise.
Quality management activities
Software quality management activities are generally split up into three core components: quality assurance, quality planning, and quality control. Some like software engineer and author Ian Sommerville don't use the term "quality control" (as quality control is often viewed as more a manufacturing term than a software development term), rather, linking its associated concepts with the concept of quality assurance. However, the three core components otherwise remain the same.
Quality assurance
Software quality assurance sets up an organized and logical set of organizational processes and decides on the software development standards, based on industry best practices, that should be paired with those processes; by pairing such standards with its processes, a software developer stands a better chance of producing higher quality software. However, linking quality attributes such as "maintainability" and "reliability" to processes is more difficult in software development due to its creative design elements versus the mechanical processes of manufacturing. Additionally, "process standardization can sometimes stifle creativity, which leads to poorer rather than better quality software."
This stage can include:
encouraging documentation process standards, such as the creation of well-defined engineering documents using standard templates
mentoring how to conduct standard processes, such as quality reviews
performing in-process test data recording procedures
identifying standards, if any, that should be used in software development processes
Quality planning
Quality planning works at a more granular, project-based level, defining the quality attributes to be associated with the output of the project and how those attributes should be assessed. Additionally, any existing organizational standards may also be assigned to the project at this phase. Attributes such as "robustness," "accessibility," and "modularity" may be assigned to the software development project. While this may be a more formalized, integral process, those using a more agile method of quality management may place less emphasis on strict planning structures. The quality plan may also address intended market, critical release dates, quality goals, expected risks, and risk management policy.
Quality control
The quality control team tests and reviews software at its various stages to ensure quality assurance processes and standards at both the organizational and project level are being followed. (Some like Sommerville link these responsibilities to quality assurance rather than call it quality control.) These checks are optimally separate from the development team so as to lend more of an objective view of the product to be tested. However, project managers on the development side must also assist, helping to promote as part of this phase a "culture that provides support without blame when errors are discovered." In software development firms implementing a more agile quality approach, these activities may be less formal; however, a switch to agile methods from a more formal quality management structure may create problems if management procedures aren't appropriately adapted.
Activities include:
release testing of software, including proper documentation of the testing process
examination of software and associated documentation for non-conformance with standards
follow-up review of software to ensure any required changes detailed in previous testing are addressed
application of software measurement and metrics for assessment
Software quality and the software lifecycle
The measurement of software quality is different from manufacturing; tolerances aren't applicable (at least in the same way), and objective conclusions concerning whether software meets specifications are difficult if not impossible to achieve. However, software's quality and fit-for-purpose status can still be realized in various ways depending on the organization and the type of project. This is done through support of the entire software development lifecycle, meaning:
collecting requirements and defining the scope of an IT project, focused on verification if defined requirements will be testable;
designing the solution, focused on planning the test process, e.g., what types of tests will be performed and how they will be performed in the context of test environments and test data;
implementing a solution supported by test cases and scenarios, executing them, and registering defects, including the coordination of resolving the defects;
implementing change management, supported by verification of how planned changes can influence the quality of a created solution and eventual change of a test plan; and
closing the project, supported by the realization of tests focused on complex verification of the overall quality of the created solution.
Links to IT methods
Software quality management is a topic strongly linked with various project management, development, and IT operation methods, including:
Project management method PRINCE2 defines:
component "Quality in a project environment", which describes necessity of double-checked and objective control of created products. It proposes using 4 elements: quality management system, function of quality control, planning quality and quality controls.
"Quality Review Technique" which is focused on verification if created products fulfills defined quality criteria.
Project management method PMBOK 4th edition defines the knowledge area Project Quality Management and the following processes:
3.4.12 Plan Quality,
3.5.2. Perform Quality Assurance,
3.6.7. Perform Quality Control
Development method RUP defines the discipline of testing, which is engaged in all phases, starting from Inception and finishing at Transition.
Development method MSF defines a tester role and a stabilization phase, which focuses mainly on testing the solution.
Agile methods do not precisely define the tester's role or mechanisms related to software quality management. The methods define only such techniques as continuous integration and test-driven development. Nevertheless, publications on agile testing have begun to appear.
Operational method CMMI defines, among others, the process area PPQA "Process and Product Quality Assurance", which is required from CMMI level 2 onward.
Operational method COBIT defines, among others, the process P08 Manage Quality.
Operational method ITIL is defined, among other publications, by Continual Service Improvement.
V-Model – model, which defines the software development lifecycle and test process.
ISO 9000 – a family of standards related to quality management systems, designed to help organizations ensure that they meet the needs of customers and other stakeholders while meeting statutory and regulatory requirements related to the product.
Associations and organizations
The American Society for Quality (ASQ) is a professional organization that provides its members with certification, training, publications, conferences, and other services related to quality management, continual improvement, and product safety.
The International Software Testing Qualifications Board (ISTQB) is a non-profit, international association registered in Belgium. It manages the certification process for software testers and boasts more than 535,000 certificates issued in over 120 countries.
See also
Agile testing
Software assurance
Quality assurance
Software quality
Software quality control
Software quality assurance
Software quality analyst
References
Software quality |
16001916 | https://en.wikipedia.org/wiki/Zero-day%20%28computing%29 | Zero-day (computing) | A zero-day (also known as a 0-day) is a computer-software vulnerability either unknown to those who should be interested in its mitigation (including the vendor of the target software) or known and without a patch to correct it. Until the vulnerability is mitigated, hackers can exploit it to adversely affect programs, data, additional computers or a network. An exploit directed at a zero-day is called a zero-day exploit, or zero-day attack.
The term "zero-day" originally referred to the number of days since a new piece of software was released to the public, so "zero-day software" was obtained by hacking into a developer's computer before release. Eventually the term was applied to the vulnerabilities that allowed this hacking, and to the number of days that the vendor has had to fix them. Once the vendors learn of the vulnerability, they will usually create patches or advise workarounds to mitigate it.
The more recently that the vendor has become aware of the vulnerability, the more likely it is that no fix or mitigation has been developed. Once a fix is developed, the chance of the exploit succeeding decreases as more users apply the fix over time. For zero-day exploits, unless the vulnerability is inadvertently fixed, such as by an unrelated update that happens to fix the vulnerability, the probability that a user has applied a vendor-supplied patch that fixes the problem is zero, so the exploit would remain available. Zero-day attacks are a severe threat.
Attack vectors
Malware writers can exploit zero-day vulnerabilities through several different attack vectors. Sometimes, when users visit rogue websites, malicious code on the site can exploit vulnerabilities in Web browsers. Web browsers are a particular target for criminals because of their widespread distribution and usage. Cybercriminals, as well as international vendors of spyware such as Israel’s NSO Group, can also send malicious e-mail attachments via SMTP, which exploit vulnerabilities in the application opening the attachment. Exploits that take advantage of common file types are numerous and frequent, as evidenced by their increasing appearances in databases such as US-CERT. Criminals can engineer malware to take advantage of these file type exploits to compromise attacked systems or steal confidential data.
Window of vulnerability
The time from when a software exploit first becomes active to the time when the number of vulnerable systems shrinks to insignificance is known as the window of vulnerability. The timeline for each software vulnerability is defined by the following main events:
t0: The vulnerability is discovered (by anyone).
t1a: A security patch is published (e.g., by the software vendor).
t1b: An exploit becomes active.
t2: Most vulnerable systems have applied the patch.
Thus the formula for the length of the window of vulnerability is: t2 − t1b.
In this formulation, it is always true that t0 ≤ t1a, and t0 ≤ t1b. Note that t0 is not the same as day zero. For example, if a hacker is the first to discover (at t0) the vulnerability, the vendor might not learn of it until much later (on day zero).
For normal vulnerabilities, t1b > t1a. This implies that the software vendor was aware of the vulnerability and had time to publish a security patch (t1a) before any hacker could craft a workable exploit (t1b). For zero-day exploits, t1b ≤ t1a, such that the exploit becomes active before a patch is made available.
By not disclosing known vulnerabilities, a software vendor hopes to reach t2 before t1b is reached, thus avoiding any exploits. However, the vendor has no guarantees that hackers will not find vulnerabilities on their own. Furthermore, hackers can analyze the security patches themselves, and thereby discover the underlying vulnerabilities and automatically generate working exploits. These exploits can be used effectively up until time t2.
In practice, the length of the window of vulnerability varies between systems, vendors, and individual vulnerabilities. It is often measured in days, with one report from 2006 estimating the average as 28 days.
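A small worked example makes the arithmetic concrete. The dates below are invented for illustration only; the point is the quantity t2 − t1b and the zero-day condition t1b ≤ t1a.

```python
# Worked example of the window-of-vulnerability arithmetic defined above.
# All dates are invented for illustration.

from datetime import date

t0 = date(2023, 1, 1)    # vulnerability discovered (by anyone)
t1b = date(2023, 1, 10)  # an exploit becomes active
t1a = date(2023, 2, 1)   # the vendor publishes a security patch
t2 = date(2023, 3, 15)   # most vulnerable systems have applied the patch

window = t2 - t1b
print(f"Window of vulnerability: {window.days} days")   # 64 days

# For a zero-day exploit, the exploit precedes (or coincides with) the patch.
is_zero_day = t1b <= t1a
print(f"Zero-day exploit: {is_zero_day}")               # True
```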
Protection
Zero-day protection is the ability to provide protection against zero-day exploits. Since zero-day attacks are generally unknown to the public, it is often difficult to defend against them. Zero-day attacks are often effective against "secure" networks and can remain undetected even after they are launched. Thus, users of so-called secure systems must also exercise common sense and practice safe computing habits.
Many techniques exist to limit the effectiveness of zero-day memory corruption vulnerabilities such as buffer overflows. These protection mechanisms exist in contemporary operating systems such as macOS, Windows Vista and beyond (see also: Security and safety features new to Windows Vista), Solaris, Linux, Unix, and Unix-like environments; Windows XP Service Pack 2 includes limited protection against generic memory corruption vulnerabilities and previous versions include even less. Desktop and server protection software also exist to mitigate zero-day buffer overflow vulnerabilities. Typically, these technologies involve heuristic termination analysis in order to stop attacks before they cause any harm.
It has been suggested that a solution of this kind may be out of reach because it is algorithmically impossible in the general case to analyze any arbitrary code to determine if it is malicious, as such an analysis reduces to the halting problem over a linear bounded automaton, which is unsolvable. It is, however, unnecessary to address the general case (that is, to sort all programs into the categories of malicious or non-malicious) under most circumstances in order to eliminate a wide range of malicious behaviors. It suffices to recognize the safety of a limited set of programs (e.g., those that can access or modify only a given subset of machine resources) while rejecting both some safe and all unsafe programs. This does require the integrity of those safe programs to be maintained, which may prove difficult in the face of a kernel-level exploit.
The Zeroday Emergency Response Team (ZERT) was a group of software engineers who worked to release non-vendor patches for zero-day exploits.
Worms
Zero-day worms take advantage of a surprise attack while they are still unknown to computer security professionals. Recent history shows an increasing rate of worm propagation. Well designed worms can spread very fast with devastating consequences to the Internet and other systems.
Ethics
Differing ideologies exist relating to the collection and use of zero-day vulnerability information. Many computer security vendors perform research on zero-day vulnerabilities in order to better understand the nature of vulnerabilities and their exploitation by individuals, computer worms and viruses. Alternatively, some vendors purchase vulnerabilities to augment their research capacity. An example of such a program is TippingPoint's Zero Day Initiative. While selling and buying these vulnerabilities is not technically illegal in most parts of the world, there is a lot of controversy over the method of disclosure. A 2006 German decision to include Article 6 of the Convention on Cybercrime and the EU Framework Decision on Attacks against Information Systems may make selling or even manufacturing vulnerabilities illegal.
Most formal programs follow some form of Rain Forest Puppy's disclosure guidelines or the more recent OIS Guidelines for Security Vulnerability Reporting and Response. In general, these rules forbid the public disclosure of vulnerabilities without notification to the vendor and adequate time to produce a patch.
Viruses
A zero-day virus (also known as zero-day malware or next-generation malware) is a previously unknown computer virus or other malware for which specific antivirus software signatures are not yet available.
Traditionally, antivirus software relied upon signatures to identify malware. A virus signature is a unique pattern or code that can be used to detect and identify specific viruses. The antivirus scans file signatures and compares them to a database of known malicious codes. If they match, the file is flagged and treated as a threat. The major limitation of signature-based detection is that it is only capable of flagging already known malware, making it useless against zero-day attacks. Most modern antivirus software still uses signatures but also carry out other types of analysis.
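Signature matching of the simplest kind can be sketched in a few lines. The Python below is a deliberately naive illustration: it hashes a sample and looks the digest up in a set of known-bad signatures. Real engines use far richer signatures than a whole-file hash, and the entries shown are placeholders, not real malware signatures. The sketch also makes the limitation obvious: a genuinely new (zero-day) sample has no entry in the database and therefore cannot be flagged this way.

```python
# Deliberately naive illustration of signature-based scanning: hash the
# sample and look the digest up in a database of known-bad signatures.
# The entries below are placeholders, not real malware digests.

import hashlib

KNOWN_BAD_SHA256 = {
    "placeholder-digest-1",
    "placeholder-digest-2",
}

def is_flagged(contents: bytes) -> bool:
    """Return True if the sample matches a known signature."""
    return hashlib.sha256(contents).hexdigest() in KNOWN_BAD_SHA256

# A brand-new (zero-day) sample has no entry in the database yet, so a
# purely signature-based scanner cannot flag it.
print(is_flagged(b"some never-before-seen sample"))   # False
```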
Code analysis
In code analysis, the machine code of the file is analysed to see if there is anything that looks suspicious. Typically, malware has characteristic behaviour; code analysis attempts to detect if this is present in the code.
Although useful, code analysis has significant limitations. It is not always easy to determine what a section of code is intended to do, particularly if it is very complex and has been deliberately written with the intention of defeating analysis. Another limitation of code analysis is the time and resources available. In the competitive world of antivirus software, there is always a balance between the effectiveness of analysis and the time delay involved.
One approach to overcome the limitations of code analysis is for the antivirus software to run suspect sections of code in a safe sandbox and observe their behavior. This can be orders of magnitude faster than analyzing the same code, but must resist (and detect) attempts by the code to detect the sandbox.
Generic signatures
Generic signatures are signatures that are specific to certain behaviour rather than a specific item of malware. Most new malware is not totally novel, but is a variation on earlier malware, or contains code from one or more earlier examples of malware. Thus, the results of previous analysis can be used against new malware.
Competitiveness in the antivirus software industry
It is generally accepted in the antivirus industry that most vendors' signature-based protection is identically effective. If a signature is available for an item of malware, then every product (unless dysfunctional) should detect it. However, some vendors are significantly faster than others at becoming aware of new viruses and/or updating their customers' signature databases to detect them.
There is a wide range of effectiveness in terms of zero-day virus protection. The German computer magazine c't found that detection rates for zero-day viruses varied from 20% to 68%. It is primarily in the area of zero-day virus performance that manufacturers now compete.
U.S. government involvement
NSA's use of zero-day exploits (2017)
In mid-April 2017 the hackers known as The Shadow Brokers (TSB), who are allegedly linked to the Russian government, released files from the NSA (initially only alleged to be from the NSA, later confirmed through internal details and by American whistleblower Edward Snowden) which include a series of 'zero-day exploits' targeting Microsoft Windows software and a tool to penetrate the Society for Worldwide Interbank Financial Telecommunication (SWIFT)'s service provider. Ars Technica had reported Shadow Brokers' hacking claims in mid-January 2017, and in April the Shadow Brokers posted the exploits as proof.
Vulnerabilities Equities Process
The Vulnerabilities Equities Process, first revealed publicly in 2016, is a process used by the U.S. federal government to determine on a case-by-case basis how it should treat zero-day computer security vulnerabilities: whether to disclose them to the public to help improve general computer security or to keep them secret for offensive use against the government's adversaries. The process has been criticized for a number of deficiencies, including restriction by non-disclosure agreements, lack of risk ratings, special treatment for the NSA, and a less than full commitment to disclosure as the default option.
See also
Access control
Bug bounty program
Exploit-as-a-Service
Heuristic analysis
Market for zero-day exploits
Network Access Control
Network Access Protection
Network Admission Control
Software-defined protection
Targeted attacks
Vault 7
White hat (computer security)
Zero Days, a documentary about the 4 zero-days in stuxnet
References
Further reading
Examples of zero-day attacks
(Chronological order)
Warez
Types of malware
Computer viruses
Computer security |
67170242 | https://en.wikipedia.org/wiki/Sfdisk | Sfdisk | sfdisk is a Linux partition editor. In contrast to fdisk and cfdisk, sfdisk is not interactive. All three programs are written in C and are part of the util-linux package of Linux utility programs.
Since sfdisk is command-driven rather than menu-driven (it reads its input from stdin or from a file), it is generally used for partitioning drives from scripts, or by programs such as KDE Partition Manager.
The current sfdisk implementation utilizes the libfdisk library. sfdisk supports MBR (DOS), GPT, Sun and SGI disk labels but, since version 2.26, no longer provides any functionality for CHS (cylinder-head-sector) addressing.
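Because sfdisk reads a plain-text description of the partition table from standard input, it is straightforward to drive from another program. The Python sketch below is illustrative only: it shells out to sfdisk to save the current table with --dump and then applies a minimal one-partition script. The device path and the exact script lines are assumptions made for the example; consult the sfdisk manual page and test on a disposable disk image before running anything like this against real hardware.

```python
# Illustrative driver for sfdisk from another program. The device path and
# the script contents are assumptions for this example; check the sfdisk
# manual and use a disposable disk image before trying anything similar.

import subprocess

def backup_partition_table(device: str, path: str) -> None:
    """Save sfdisk's --dump output, which sfdisk can later read to restore."""
    dump = subprocess.run(
        ["sfdisk", "--dump", device],
        check=True, capture_output=True, text=True,
    ).stdout
    with open(path, "w") as f:
        f.write(dump)

def apply_single_partition_layout(device: str) -> None:
    """Feed a minimal script on stdin: a GPT label with one Linux partition
    filling the whole device (script syntax assumed from util-linux sfdisk)."""
    script = "label: gpt\n,,L\n"
    subprocess.run(["sfdisk", device], input=script, check=True, text=True)

# Example (destructive!) -- deliberately not executed here:
# backup_partition_table("/dev/sdX", "partition-table.backup")
# apply_single_partition_layout("/dev/sdX")
```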
See also
format
gpart
parted, GParted
diskpart
List of disk partitioning software
References
External links
Manual
Debian Package
sfdisk(8) - Linux man page
Free partitioning software
Linux file system-related software |
1065304 | https://en.wikipedia.org/wiki/Link%20encryption | Link encryption | Link encryption is an approach to communications security that encrypts and decrypts all network traffic at each network routing point (e.g. network switch, or node through which it passes) until arrival at its final destination. This repeated decryption and encryption is necessary to allow the routing information contained in each transmission to be read and employed further to direct the transmission toward its destination, before which it is re-encrypted. This contrasts with end-to-end encryption where internal information, but not the header/routing information, is encrypted by the sender at the point of origin and only decrypted by the intended recipient.
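The hop-by-hop behaviour can be made concrete with a toy sketch. The Python below uses a deliberately insecure XOR "cipher" purely to show the pattern and models no real protocol: each intermediate node decrypts the whole frame, reads the routing header, and re-encrypts the frame under the key of the next link.

```python
# Toy illustration of link encryption: every hop decrypts the whole frame
# (header and payload), reads the routing information, then re-encrypts it
# for the next link. The XOR "cipher" is deliberately insecure -- this is
# a sketch of the pattern, not of any real protocol.

from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

link_keys = [b"key-A", b"key-B", b"key-C"]   # one key per link on the path
frame = b"TO:host42|Hello"                   # routing header + payload

ciphertext = xor_crypt(frame, link_keys[0])  # sender encrypts for the first link
for hop, key in enumerate(link_keys[:-1]):
    plaintext = xor_crypt(ciphertext, key)           # node decrypts the frame...
    header = plaintext.split(b"|", 1)[0]             # ...to read the routing header
    print(f"node {hop + 1} routes using header {header!r}")
    ciphertext = xor_crypt(plaintext, link_keys[hop + 1])  # re-encrypt for next link

assert xor_crypt(ciphertext, link_keys[-1]) == frame  # destination recovers the frame
```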
Link encryption offers two main advantages:
encryption is automatic so there is less opportunity for human error.
if the communications link operates continuously and carries an unvarying level of traffic, link encryption defeats traffic analysis.
On the other hand, end-to-end encryption ensures only the intended recipient has access to the plaintext.
Link encryption can be used with end-to-end systems by superencrypting the messages.
Bulk encryption refers to encrypting a large number of circuits at once, after they have been multiplexed.
References
Cryptography |
11179505 | https://en.wikipedia.org/wiki/Qualys | Qualys | Qualys, Inc. provides cloud security, compliance and related services and is based in Foster City, California. Founded in 1999, Qualys was the first company to deliver vulnerability management solutions as applications through the web using a "software as a service" (SaaS) model, and as of 2013 Gartner Group for the fifth time gave Qualys a "Strong Positive" rating for these services. It has added cloud-based compliance and web application security offerings.
Qualys has over 10,300 customers in more than 130 countries, including a majority of the Forbes Global 100. The company has strategic partnerships with major managed services providers and consulting organizations including BT, Dell SecureWorks, Fujitsu, IBM, NTT, Symantec, Verizon, and Wipro. The company is also a founding member of the Cloud Security Alliance (CSA).
History
Qualys was founded in 1999 and launched QualysGuard in December 2000, making it one of the first entrants in the vulnerability management market. Angel investor Philippe Courtot, who had originally invested in Qualys at its founding in 1999, made a further major investment and became CEO in March 2001, also serving as chairman of the board. From the start, his goal was to build a cloud delivery platform that would allow the scanning of any network globally.
The QualysGuard Intranet Scanner was released in 2002 to automatically scan corporate LANs for vulnerabilities and search for an available patch. The following year, Qualys released FreeMap, a web-based tool for scanning, mapping and identifying possible security holes within networks connected to the Internet.
In 2005, Qualys extended its QualysGuard product line. In 2008, Qualys introduced QualysGuard Policy Compliance, which extended the platform's global scanning capabilities to collect IT compliance data across the organization and map this information into policies to document compliance for auditing purposes. Qualys also released a service for web application scanning named QualysGuard Web Application Scanning (WAS). In 2010, at the RSA Conference USA, Qualys announced QualysGuard Malware Detection Service, a new service designed to scan and identify malware on web sites. It also announced the Qualys SECURE Seal, which allows websites to show visitors that it has passed security scans.
In July 2010, Qualys announced Qualys BrowserCheck, a service for checking web browsers and plug-ins for security vulnerabilities. At RSA Conference 2011, Qualys launched a new open source web application firewall project, IronBee, led by Ivan Ristic, the creator of ModSecurity and a director of engineering at Qualys. At the RSA Conference in 2012, Qualys introduced updates to the QualysGuard Cloud Platform, which extended its capabilities to help customers improve the security of their IT systems and applications, further automate their compliance initiatives for IT-GRC, and provide online protection against cyber-attacks while reducing operational costs and increasing the efficiency of their security programs.
Qualys went public on NASDAQ with the symbol QLYS on September 28, 2012, raising net proceeds of $87.5 million. At the 2014 RSA Conference, Qualys announced general availability of its QualyGuard Web Application Firewall (WAF), which provides protection for websites running on Amazon EC2 and on-premises, as well as a new Continuous Monitoring solution and free Top 4 Security Controls service. In 2015, the company released 2.0 of its WAF software which included virtual patching and customizable event responses. That same year, Qualys also released an IT security and compliance solution called Cloud Agent Platform. In August 2015, the company released a free asset management service, Qualys AssetView, that allowed organizations to keep inventory of computers and their software.
In February 2016, Qualys released ThreatPROTECT, a vulnerability detection service for its cloud-based platform. The company received full certification for its Configuration Management Databases (CMDBs) App for ServiceNow, allowing the app to be integrated into ServiceNow's system in March 2016. In July 2016, it was announced that Qualys would integrate with Microsoft Azure Security Center. In August 2017, Qualys acquired the network analysis assets from Nevis Networks for an undisclosed sum. In December 2017, Qualys announced its acquisition of NetWatcher, a network security company.
In April 2018, Qualys acquired the software assets of Singapore-based 1Mobility, a startup which develops security technologies for mobile phones.
In June 2018, Qualys announced the addition of Asset Inventory to their Cloud Platform.
Awards
SC Magazine has awarded Qualys for its security software solutions every year from 2004–2014. SC Magazine also awarded Qualys "Best Security Company" in 2014.
In 2010, Inc. magazine ranked Qualys within 5000 fastest growing private companies in the USA based on 104% revenue growth from 2006 to 2009. In 2012 Silicon Valley/San Jose Business Journal has recognized Qualys as one of the largest private companies in Silicon Valley – ranking 26th in a list of 51.
In November 2014, Qualys received the 2014 Security Reader's Choice Awards for Best Risk and Policy Compliance Solution, Best Application Security Product, and Best Vulnerability.
In both 2014 and 2015, Qualys received the award for Best Application Security Product from the SANS Institute.
In 2017 Frost & Sullivan recognized Qualys with 2017 Global Vulnerability Management Market Leadership Award.
Qualys honored by Cyber Defense Magazine as a 2017 Cyber Security Leader.
Qualys received 2019 Gartner Peer Insights Customers' Choice Award for Vulnerability Assessment.
Qualys honored by SC Media as the winner for Best Vulnerability Management Solution in its 2020 Trust Awards.
References
External links
Qualys SSL Labs Vulnerability Scanner
Companies based in Foster City, California
Software companies based in the San Francisco Bay Area
Computer security software companies
Software companies established in 1999
Companies listed on the Nasdaq
2012 initial public offerings
Software companies of the United States
1999 establishments in California |
30872538 | https://en.wikipedia.org/wiki/Team17 | Team17 | Team17 Group plc is a British video game developer and publisher based in Wakefield, England. The venture was created in December 1990 through the merger of British publisher 17-Bit Software and Swedish developer Team 7. At the time, the two companies consisted of and were led by Michael Robinson, Martyn Brown and Debbie Bestwick, and Andreas Tadic, Rico Holmes and Peter Tuleby, respectively. Bestwick later became and presently serves as Team17's chief executive officer. After their first game, Full Contact (1991) for the Amiga, the studio followed up with multiple number-one releases on that platform and saw major success with Andy Davidson's Worms in 1995, the resulting franchise of which still remains as the company's primary development output, having developed over 20 entries in it.
Through a management buyout performed by Bestwick, both Robinson and Brown departed from Team17 in 2010, leaving Bestwick as the sole manager. In 2013, Team17 initiated a publishing venture focusing on indie games, which since occupies its own office in Nottingham. The first game to release of this venture was Light (2013). Following a large investment from Lloyds Development Capital in September 2016, Team17 sought corporate expansion through various actions, including the acquisition of Mouldy Toof Studios, the developer behind Team17-published The Escapists (2015), and the hiring of multiple new key staff. In May 2018, the company published their initial public offering and became a public company listed on the Alternative Investment Market, valued around . Team17 employs 140 people in its two offices.
History
Early history (1990–1995)
In 1990, Wakefield-based entrepreneur Michael Robinson was the manager of Microbyte, a United Kingdom-wide computer retail chain, and 17-Bit Software, a video game publisher. Robinson had created 17-Bit Software as part of Microbyte in 1987 specifically to seek young, independent video game developers whose games he could publish through this label and distribute through his Microbyte stores. One of those developers was Andreas Tadic (a nineteen-year-old hobbyist programmer from Olofström, Sweden), who at the time was developing HalfBright, a shoot 'em up for Amiga systems. According to Tadic, the game was "technically impressive, but shite-looking". Martyn Brown, a Microbyte employee, called up Tadic to introduce him to artist Rico Holmes; Tadic and Holmes subsequently became friends and, alongside another Swedish programmer, Peter Tuleby, founded a development team known as Team 7.
Team 7's first game was Miami Chase, a Miami Vice-inspired racing game that was published by Codemasters in 1990, as a budget title for Amiga systems, and received an 82% review score from British Amiga-centric magazine Amiga Power. Brown had followed the game's development closely, because of which he suggested to Robinson that they should not only publish but also develop games at 17-Bit Software, using Team 7 as their internal development team and himself as project manager. Robinson agreed to undergo the venture and moved Debbie Bestwick from her position as sales manager of Microbyte to commercial support for 17-Bit Software. Eventually, 17-Bit Software and Team 7 agreed to formally merge into one team, amalgamating the two teams' names as "Team17". Team17 was officially created on 7 December 1990.
Using Microbyte's experience in game retailing, Team17 was able to easily determine game genres that would sell well, while Team 7's expertise in game development enabled Team17 to also develop games in those genres. Their first game was 1991's Full Contact, a fighting game that, upon release, reached the top spot on British game sales charts. Further Team17 games followed Full Contact's success; by 1993, 90% of the studio's games, including Alien Breed (1991), Project-X (1992) and Superfrog (1993), reached the top spot on sales charts, while all Team17 products combined generated half of all Amiga game sales. At the 1993 Golden Joystick Awards, Team17 and Electronic Arts jointly received the "Software House of the Year" award.
Starting in 1992, Future Publishing-owned Amiga Power started criticising Team17's products more harshly than other gaming magazines. According to Stuart Campbell, deputy editor for the magazine at the time, Overdrive, Project-X, F17 Challenge and Superfrog were among the games that received negative reception from Amiga Power between 1992 and 1993. As a response to their reviews, Team17 began implementing derogatory Easter eggs into their games, which included the cheat code "AMIGAPOWER" unlocking a critical statement regarding the magazine's review policy in Alien Breed II: The Horror Continues (1993) and the easiest-difficulty bot opponents in Arcade Pool (1994) being named after Amiga Power staff. However, when the magazine awarded Team17's ATR: All Terrain Racing and Kingpin: Arcade Sports Bowling scores of 38% and 47%, respectively, in 1995, Team17 issued a lawsuit against the magazine, demanding the reviews to be retracted and the issue withdrawn from sale. The lawsuit was not successful for the studio, and it instead turned to not sending review copies of their games to Amiga Power and making other Future Publishing-owned magazines not lend their review copies to Amiga Power.
Worms (1994–2010)
In 1994, programmer Andy Davidson created Artillery, a game in the artillery game genre, for Amiga systems. He entered the game, under the title Wormage or Total Wormage, into a contest held by the Amiga Format magazine. The game failed to make an impact, wherefore Davidson instead opted to take it to the 1994 European Computer Trade Show (ECTS) in London, where he presented it to people at Team17's booth, where the game was signed for development as a commercial title. Bestwick stated they could not stop playing the game and as such realised that the game had potential, although that potential's dimensions were yet unknown. Following the deal struck between the two parties, Team17 promptly lost Davidson's contact details and were forced to call Amiga Format to retrieve them. Once they had retrieved his details, Team17 and Davidson started to jointly develop a commercial version of his game, though retitled Worms, a title that appeared more straightforward.
At the time, Team17 had the strong feeling that the games market for Amiga was dying, wherefore they decided to develop Worms across as many platforms as possible. However, the company had no publishing experience outside the Amiga market and needed to seek a third-party publisher; given the choice between Ocean Software and Virgin Interactive, they chose to go with Ocean Software. Worms was released in 1995 for Amiga and later ported to Sega Mega Drive, Super Nintendo Entertainment System, MS-DOS, PlayStation, among various other platforms. Out of the 60,000 total sales estimated by Ocean Software before the game's release, the game shipped millions of copies within its first year. Bestwick considered the game to have saved Team17. However, following the game's success, Team17 became obsessed with replicating it: Between 1995 and 2010, the studio released a total of sixteen new Worms games. With Team17 turning into a "single intellectual property company", many developers felt fatigue and "creative stagnation".
Restructuring and expansion (2010–present)
In August 2010, Team17 announced that they had turned away from third-party publishers in favour of releasing their games themselves via digital distribution. The company hired Paul Bray and Alan Perrie to act as finance and operations director, and head of global marketing, respectively. Later that year, Team17 underwent a large internal restructuring, which included the management buyout of co-founders Brown and Robinson, making Bestwick, as chief executive officer, the company's sole manager. Bestwick stated that this move had "placed the company in a secure position for the future". Brown announced his departure in February 2011, stating that he would join handheld game developer Double Eleven.
In December 2011, Team17 acquired Iguana Entertainment, a company founded by Jason Falcus and Darren Falcus in 2009. All Iguana staff, including its founders, were effectively absorbed into Team17's Wakefield offices. In 2013, Bestwick and Bray sparked the idea of returning Team17 to its roots by adding an indie game publishing component to the company. An incubation programme was run that tasked two studios to co-develop what would later become Beyond Eyes (2015) and Sheltered (2016). Light by Brighton-based Just a Pixel became the first game to be announced and released through Team17's new venture. The activity was broadened to mobile game publishing in March 2014, with Hay Ewe by Rocket Rainbow announced to have been slated for a release on iOS in the second quarter of that year. To accommodate the publishing label's growth, Team17 opened a separate publishing office in Nottingham in May 2014. Bestwick stated that she despised the term "publisher" and preferred "label", as "[t]he term 'publisher' represents a way of doing business that's completely at odds with the new world of digital distribution". Team17 won the "Publishing Hero" award at 2015's Develop Awards.
One of the label's most successful titles was The Escapists: The game, designed by Chris Davis, a former roofer and founder of Derby-based Mouldy Toof Studios, sold over a million copies within one year of release. On 1 September 2016, Lloyds Development Capital (LDC), the private equity division of Lloyds Banking Group, announced that they had invested into the development of Team17. In return, LDC was awarded a 33% stake in Team17. Using the investment, Team17 acquired Mouldy Toof Studios and The Escapists franchise for an undisclosed sum. In response to LDC's investment, Chris van der Kuyl of 4J Studios joined Team17 as non-executive chairman. As means of further corporate expansion, Team17 hired multiple new management staff by January 2017, including Justin Berenbaum as head of publishing and business development for Asia and the Americas, Matt Benson as business development manager and Ste Stanley as marketing and sales coordinator.
In March 2018, Team17 tasked stockbrokers from Berenberg and GCA Altium with preparing an initial public offering (IPO) valuing Team17 at . The company confirmed its intent to become a public company on 8 May 2018, announcing that a 50% stake in Team17 would be sold over the Alternative Investment Market (AIM), a sub-market of the London Stock Exchange. The flotation was expected to value Team17 between and 230 million. Bestwick and LDC would each sell half of their shareholdings in the process, with Bestwick expected to receive in windfall profit. Chris Bell, formerly chief executive of Ladbrokes Coral, was appointed chairman of Team17 to aid the IPO process. At this time, the company employed 120 people in the Wakefield development studio and another 20 in the Nottingham publishing offices. Team17 was expected to gain in gross profits based on 27,325,482 new shares and 37,849,200 existing shares. The shares became available for purchase via the AIM on 23 May 2018. Following the sale of shareholdings by Bestwick and LDC, they retained stakes of 22.2% and 16.6% in the company, respectively.
Through the first half of 2019, Team17's revenue rose significantly; 83% of its revenue was attributed to its publishing activities, of which 80% stemmed from games Team17 had co-developed internally. Notably successful were Hell Let Loose and My Time at Portia, the best-performing games for the company in that time frame. Team17 announced that, with this funding, it would be looking into acquiring more development studios. The company's headcount also increased from 154 to 182 in that period, prompting Team17 to move its headquarters to new offices within Wakefield in November 2019. The number of staff further increased to 200 by the end of the year. In September 2019, Martin Hellawell was appointed non-executive director of Team17. In January 2020, Team17 acquired Manchester-based developer Yippee Entertainment for , a combination of in cash and 114,000 consideration shares, worth . The company bought out Golf with Your Friends, which it had published, from developers Blacklight Interactive in January 2021, planning to release further downloadable content (DLC) for it. In July 2021, Team17 acquired TouchPress, the parent company of StoryToys, a developer of edutainment apps, for . In January 2022, Team17 acquired Astragon, a German game publisher focused on simulation video games, for £83 million.
Games developed
Games published
Cancelled games
Witchwood (circa 1994): An action-adventure game in the style of The Legend of Zelda or Al-Qadim: The Genie's Curse about a young hero's quest to destroy an evil witch.
Allegiance (circa 1995): A first-person shooter game with integrated multiplayer features, and was later transformed into a third-person shooter before being cancelled.
Rollcage (circa 1995): An off-road racing game with different types of rally vehicles and aggressive AI opponents, and is unrelated to the 1999 racing game also titled Rollcage.
P.I.G. (circa 1996): A 3D platformer in the style of Super Mario 64 that featured different outfits for the main character and various minigames.
References
External links
1990 establishments in England
2018 initial public offerings
British companies established in 1990
Companies based in Wakefield
Companies listed on the Alternative Investment Market
Software companies of England
Video game companies established in 1990
Video game companies of the United Kingdom
Video game development companies
Video game publishers |
59391 | https://en.wikipedia.org/wiki/Morphing | Morphing | Morphing is a special effect in motion pictures and animations that changes (or morphs) one image or shape into another through a seamless transition. Traditionally such a depiction would be achieved through dissolving techniques on film. Since the early 1990s, this has been replaced by computer software to create more realistic transitions. A comparable method is applied to audio recordings, for example by changing voices or vocal lines.
Early transformation techniques
Long before digital morphing, several techniques were used for similar image transformations. Some of those techniques are closer to a matched dissolve - a gradual change between two pictures without warping the shapes in the images - while others changed the shapes between the start and end phases of the transformation.
Tabula scalata
Known since at least the end of the 16th century, Tabula scalata is a type of painting with two images divided over a corrugated surface. Each image is only correctly visible from a certain angle. If the pictures are matched properly, a primitive type of morphing effect occurs when changing from one viewing angle to the other.
Mechanical transformations
Around 1790 French shadow play showman François Dominique Séraphin used a metal shadow figure with jointed parts to have the face of a young woman changing into that of a witch.
Some 19th century mechanical magic lantern slides produced changes to the appearance of figures. For instance a nose could grow to enormous size, simply by slowly sliding away a piece of glass with black paint that masked part of another glass plate with the picture.
Matched dissolves
In the first half of the 19th century "dissolving views" were a popular type of magic lantern show, mostly showing landscapes gradually dissolving from a day to night version or from summer to winter. Other uses are known, for instance Henry Langdon Childe showed groves transforming into cathedrals.
The 1910 short film Narren-grappen shows a dissolve transformation of the clothing of a female character.
Maurice Tourneur's 1915 film Alias Jimmy Valentine featured a subtle dissolve transformation of the main character from respected citizen Lee Randall into his criminal alter ego Jimmy Valentine.
The Peter Tchaikovsky Story, a 1959 episode of the Disneyland TV series, features a swan automaton transforming into a real ballet dancer.
In 1985, Godley & Creme created a "morph" effect using analogue cross-fades on parts of different faces in the video for "Cry".
Animation
In animation, the morphing effect was created long before the introduction of cinema. A phenakistiscope designed by its inventor Joseph Plateau was printed around 1835 and shows the head of a woman changing into a witch and then into a monster.
Émile Cohl's 1908 animated film Fantasmagorie featured much morphing of characters and objects drawn in simple outlines.
Digital morphing
In the early 1990s, computer techniques capable of more convincing results saw increasing use. These involved distorting one image at the same time that it faded into another through marking corresponding points and vectors on the "before" and "after" images used in the morph. For example, one would morph one face into another by marking key points on the first face, such as the contour of the nose or location of an eye, and mark where these same points existed on the second face. The computer would then distort the first face to have the shape of the second face at the same time that it faded the two faces. To compute the transformation of image coordinates required for the distortion, the algorithm of Beier and Neely can be used.
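A minimal Python sketch of this point-based approach (a crude inverse-distance-weighted warp plus a cross-dissolve, not the Beier–Neely field-warping algorithm itself; images are assumed to be greyscale NumPy arrays of equal size, point sets are float (x, y) arrays, and all names are illustrative):

```python
import numpy as np

def warp_to_points(img, src_pts, dst_pts, power=2.0, eps=1e-6):
    """Backward-warp a greyscale image so src_pts land (roughly) on dst_pts."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    disp = src_pts - dst_pts                    # per-point sampling offsets
    dx_acc = np.zeros_like(xs)
    dy_acc = np.zeros_like(ys)
    weight_sum = np.zeros_like(xs)
    for (px, py), (dx, dy) in zip(dst_pts, disp):
        # inverse-distance weight of this control point at every pixel
        w_pix = 1.0 / (((xs - px) ** 2 + (ys - py) ** 2 + eps) ** (power / 2))
        dx_acc += w_pix * dx
        dy_acc += w_pix * dy
        weight_sum += w_pix
    sample_x = np.clip(xs + dx_acc / weight_sum, 0, w - 1).astype(int)
    sample_y = np.clip(ys + dy_acc / weight_sum, 0, h - 1).astype(int)
    return img[sample_y, sample_x]

def morph(img_a, img_b, pts_a, pts_b, t):
    """Morph frame at blend factor t in [0, 1]."""
    pts_t = (1 - t) * pts_a + t * pts_b         # interpolate the marked key points
    warped_a = warp_to_points(img_a, pts_a, pts_t)
    warped_b = warp_to_points(img_b, pts_b, pts_t)
    return ((1 - t) * warped_a + t * warped_b).astype(np.uint8)  # cross-dissolve
```

Generating frames for a sequence of t values between 0 and 1 yields the full transition from the first image to the second.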
Early examples
In or before 1986, computer graphics company Omnibus created a digital animation for a Tide commercial with a Tide detergent bottle smoothly morphing into the shape of the United States. The effect was programmed by Bob Hoffman. Omnibus re-used the technique in the movie Flight of the Navigator (1986). It featured scenes with a computer generated spaceship that appeared to change shape. The plaster cast of a model of the spaceship was scanned and digitally modified with techniques that included a reflection mapping technique that was also developed by programmer Bob Hoffman.
The 1986 movie The Golden Child implemented early digital morphing effects from animal to human and back.
Willow (1988) featured a more detailed digital morphing sequence with a person changing into different animals. A similar process was used a year later in Indiana Jones and the Last Crusade to create Walter Donovan's gruesome demise. Both effects were created by Industrial Light & Magic, using software developed by Tom Brigham and Doug Smythe (AMPAS).
In 1991, morphing appeared notably in the Michael Jackson music video "Black or White" and in the movies Terminator 2: Judgment Day and Star Trek VI: The Undiscovered Country. The first application for personal computers to offer morphing was Gryphon Software Morph on the Macintosh. Other early morphing systems included ImageMaster, MorphPlus and CineMorph, all of which premiered for the Commodore Amiga in 1992. Other programs became widely available within a year, and for a time the effect became common to the point of cliché. For high-end use, Elastic Reality (based on MorphPlus) saw its first feature film use in In the Line of Fire (1993) and was used in Quantum Leap (work performed by the Post Group). At VisionArt, Ted Fay used Elastic Reality to morph Odo for Star Trek: Deep Space Nine. Morphing was also used in the Snoop Dogg music video "Who Am I? (What's My Name?)", in which Snoop Dogg and others morph into dogs. Elastic Reality was later purchased by Avid, having already become the de facto system of choice, used in many hundreds of films. The technology behind Elastic Reality earned two Academy Awards in 1996 for Scientific and Technical Achievement going to Garth Dickie and Perry Kivolowitz. The effect is technically called a "spatially warped cross-dissolve". The first social network designed for user-generated morph examples to be posted online was Galleries by Morpheus (morphing software).
In Taiwan, Aderans, a hair loss solutions provider, ran a TV commercial featuring a morphing sequence in which people with lush, thick hair morph into one another, reminiscent of the end sequence of the "Black or White" video.
Present use
Morphing algorithms continue to advance and programs can automatically morph images that correspond closely enough with relatively little instruction from the user. This has led to the use of morphing techniques to create convincing slow-motion effects where none existed in the original film or video footage by morphing between each individual frame using optical flow technology. Morphing has also appeared as a transition technique between one scene and another in television shows, even if the contents of the two images are entirely unrelated. The algorithm in this case attempts to find corresponding points between the images and distort one into the other as they crossfade.
While perhaps less obvious than in the past, morphing is used heavily today. Whereas the effect was initially a novelty, today, morphing effects are most often designed to be seamless and invisible to the eye.
A particular use for morphing effects is modern digital font design. Using morphing technology, called interpolation or multiple-master technology, a designer can create an intermediate between two styles, for example generating a semibold font as a compromise between a bold and a regular style, or extending a trend to create an ultra-light or ultra-bold. The technique is commonly used by font design studios.
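The same interpolation idea can be sketched for glyph outlines: if both masters store a glyph as the same ordered list of control points, an intermediate weight is just a point-by-point blend. The point data and the 0-to-1 weight axis below are illustrative and do not correspond to any particular font format:

```python
def interpolate_glyph(regular_pts, bold_pts, weight):
    """weight = 0.0 gives the regular outline, 1.0 the bold one."""
    if len(regular_pts) != len(bold_pts):
        raise ValueError("masters must have matching point structures")
    return [((1 - weight) * rx + weight * bx,
             (1 - weight) * ry + weight * by)
            for (rx, ry), (bx, by) in zip(regular_pts, bold_pts)]

# e.g. a semibold could sit roughly halfway along the weight axis:
semibold_outline = interpolate_glyph(
    [(10, 0), (10, 700), (60, 700)],      # toy "regular" control points
    [(10, 0), (10, 700), (110, 700)],     # toy "bold" control points
    0.5,
)
```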
Software
After Effects
Elastic Reality
FantaMorph
Gryphon Software Morph
Morpheus
MorphThing
Nuke
SilhouetteFX
FotoMorph
See also
Mathematical morphology
Morph target animation
Inbetweening
Beier–Neely morphing algorithm
Visual effects
References
External links
Tutorial on morphing using Adobe After Effects
Morph images on Mac OS X
Xmorph: A Walkthrough of Morphing
Morph2d
javamorph
Paul Salameh - Downloads/Programs
Image warping source code (page in Korean)
Morph
mukimuki.fr
The contourist package for numeric Python generates smoothly morphing triangulations of isosurfaces for arbitrary four-dimensional functions, which can be displayed using HTML5 as illustrated in this jsfiddle
Special effects
Computer graphics
Articles containing video clips
Applications of computer vision
Computer animation |
26023 | https://en.wikipedia.org/wiki/RS-232 | RS-232 | In telecommunications, RS-232 or Recommended Standard 232 is a standard originally introduced in 1960 for serial communication transmission of data. It formally defines signals connecting between a DTE (data terminal equipment) such as a computer terminal, and a DCE (data circuit-terminating equipment or data communication equipment), such as a modem. The standard defines the electrical characteristics and timing of signals, the meaning of signals, and the physical size and pinout of connectors. The current version of the standard is TIA-232-F Interface Between Data Terminal Equipment and Data Circuit-Terminating Equipment Employing Serial Binary Data Interchange, issued in 1997. The RS-232 standard had been commonly used in computer serial ports and is still widely used in industrial communication devices.
A serial port complying with the RS-232 standard was once a standard feature of many types of computers. Personal computers used them for connections not only to modems, but also to printers, computer mice, data storage, uninterruptible power supplies, and other peripheral devices.
Compared with later interfaces such as RS-422, RS-485 and Ethernet, RS-232 has lower transmission speed, shorter maximum cable length, larger voltage swing, larger standard connectors, no multipoint capability and limited multidrop capability. In modern personal computers, USB has displaced RS-232 from most of its peripheral interface roles. Few computers come equipped with RS-232 ports, so one must use either an external USB-to-RS-232 converter or an internal expansion card with one or more serial ports to connect to RS-232 peripherals. Nevertheless, thanks to their simplicity and past ubiquity, RS-232 interfaces are still used—particularly in industrial machines, networking equipment, and scientific instruments where a short-range, point-to-point, low-speed wired data connection is fully adequate.
Scope of the standard
The Electronic Industries Association (EIA) standard RS-232-C as of 1969 defines:
Electrical signal characteristics such as voltage levels, signaling rate, timing, and slew-rate of signals, voltage withstand level, short-circuit behavior, and maximum load capacitance.
Interface mechanical characteristics, pluggable connectors and pin identification.
Functions of each circuit in the interface connector.
Standard subsets of interface circuits for selected telecom applications.
The standard does not define such elements as the character encoding (i.e. ASCII, EBCDIC, or others), the framing of characters (start or stop bits, etc.), transmission order of bits, or error detection protocols. The character format and transmission bit rate are set by the serial port hardware, typically a UART, which may also contain circuits to convert the internal logic levels to RS-232 compatible signal levels. The standard does not define bit rates for transmission, except that it says it is intended for bit rates lower than 20,000 bits per second.
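As an illustration of how these choices are left to the host rather than the standard, a minimal sketch using the third-party pyserial package (assumed to be installed) shows the framing and bit rate being configured per port; the device name and settings are examples only:

```python
import serial  # third-party "pyserial" package

# Character framing and bit rate are chosen here, not by RS-232 itself.
port = serial.Serial(
    "/dev/ttyS0",              # example device name; differs per system
    baudrate=9600,             # transmission bit rate
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=1.0,
)
port.write(b"AT\r\n")          # e.g. a modem attention command
reply = port.read(64)          # read up to 64 bytes or until timeout
port.close()
```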
History
RS-232 was first introduced in 1960 by the Electronic Industries Association (EIA) as a Recommended Standard. The original DTEs were electromechanical teletypewriters, and the original DCEs were (usually) modems. When electronic terminals (smart and dumb) began to be used, they were often designed to be interchangeable with teletypewriters, and so supported RS-232.
Because the standard did not foresee the requirements of devices such as computers, printers, test instruments, POS terminals, and so on, designers implementing an RS-232 compatible interface on their equipment often interpreted the standard idiosyncratically. The resulting common problems were non-standard pin assignment of circuits on connectors, and incorrect or missing control signals. The lack of adherence to the standards produced a thriving industry of breakout boxes, patch boxes, test equipment, books, and other aids for the connection of disparate equipment. A common deviation from the standard was to drive the signals at a reduced voltage. Some manufacturers therefore built transmitters that supplied +5 V and −5 V and labeled them as "RS-232 compatible".
Later personal computers (and other devices) started to make use of the standard so that they could connect to existing equipment. For many years, an RS-232-compatible port was a standard feature for serial communications, such as modem connections, on many computers (with the computer acting as the DTE). It remained in widespread use into the late 1990s. In personal computer peripherals, it has largely been supplanted by other interface standards, such as USB. RS-232 is still used to connect older designs of peripherals, industrial equipment (such as PLCs), console ports, and special purpose equipment.
The standard has been renamed several times during its history as the sponsoring organization changed its name, and has been variously known as EIA RS-232, EIA 232, and, most recently as TIA 232. The standard continued to be revised and updated by the Electronic Industries Association and since 1988 by the Telecommunications Industry Association (TIA). Revision C was issued in a document dated August 1969. Revision D was issued in 1986. The current revision is TIA-232-F Interface Between Data Terminal Equipment and Data Circuit-Terminating Equipment Employing Serial Binary Data Interchange, issued in 1997. Changes since Revision C have been in timing and details intended to improve harmonization with the CCITT standard , but equipment built to the current standard will interoperate with older versions.
Related ITU-T standards include V.24 (circuit identification) and (signal voltage and timing characteristics).
In revision D of EIA-232, the D-subminiature connector was formally included as part of the standard (it was only referenced in the appendix of RS-232-C). The voltage range was extended to ±25 volts, and the circuit capacitance limit was expressly stated as 2500 pF. Revision E of EIA-232 introduced a new, smaller, standard D-shell 26-pin "Alt A" connector, and made other changes to improve compatibility with CCITT standards V.24, V.28 and ISO 2110.
Specification document revision history:
EIA RS-232 (May 1960) "Interface Between Data Terminal Equipment & Data"
EIA RS-232-A (October 1963)
EIA RS-232-B (October 1965)
EIA RS-232-C (August 1969) "Interface Between Data Terminal Equipment and Data Communication Equipment Employing Serial Binary Data Interchange"
EIA EIA-232-D (1986)
TIA TIA/EIA-232-E (1991) "Interface Between Data Terminal Equipment and Data Communications Equipment Employing Serial Binary Data Interchange"
TIA TIA/EIA-232-F (October 1997)
ANSI/TIA-232-F-1997 (R2002)
TIA TIA-232-F (R2012)
Limitations of the standard
Because RS-232 is used beyond the original purpose of interconnecting a terminal with a modem, successor standards have been developed to address the limitations. Issues with the RS-232 standard include:
The large voltage swings and requirement for positive and negative supplies increases power consumption of the interface and complicates power supply design. The voltage swing requirement also limits the upper speed of a compatible interface.
Single-ended signaling referred to a common signal ground limits the noise immunity and transmission distance.
Multi-drop connection among more than two devices is not defined. While multi-drop "work-arounds" have been devised, they have limitations in speed and compatibility.
The standard does not address the possibility of connecting a DTE directly to a DTE, or a DCE to a DCE. Null modem cables can be used to achieve these connections, but these are not defined by the standard, and some such cables use different connections than others.
The definitions of the two ends of the link are asymmetric. This makes the assignment of the role of a newly developed device problematic; the designer must decide on either a DTE-like or DCE-like interface and which connector pin assignments to use.
The handshaking and control lines of the interface are intended for the setup and takedown of a dial-up communication circuit; in particular, the use of handshake lines for flow control is not reliably implemented in many devices.
No method is specified for sending power to a device. While a small amount of current can be extracted from the DTR and RTS lines, this is only suitable for low-power devices such as mice.
The 25-pin D-sub connector recommended in the standard is large compared to current practice.
Role in modern personal computers
In the book PC 97 Hardware Design Guide, Microsoft deprecated support for the RS-232 compatible serial port of the original IBM PC design. Today, RS-232 has mostly been replaced in personal computers by USB for local communications. Advantages compared to RS-232 are that USB is faster, uses lower voltages, and has connectors that are simpler to connect and use. Disadvantages of USB compared to RS-232 are that USB is far less immune to electromagnetic interference (EMI) and that maximum cable length is much shorter (15 meters for RS-232 versus 3–5 meters for USB, depending on the USB version and use of active cables).
In fields such as laboratory automation or surveying, RS-232 devices continue to be used. Some types of programmable logic controllers, variable-frequency drives, servo drives, and computerized numerical control equipment are programmable via RS-232. Computer manufacturers have responded to this demand by re-introducing the DE-9M connector on their computers or by making adapters available.
RS-232 ports are also commonly used to communicate to headless systems such as servers, where no monitor or keyboard is installed, during boot when operating system is not running yet and therefore no network connection is possible. A computer with an RS-232 serial port can communicate with the serial port of an embedded system (such as a router) as an alternative to monitoring over Ethernet.
Physical interface
In RS-232, user data is sent as a time-series of bits. Both synchronous and asynchronous transmissions are supported by the standard. In addition to the data circuits, the standard defines a number of control circuits used to manage the connection between the DTE and DCE. Each data or control circuit only operates in one direction, that is, signaling from a DTE to the attached DCE or the reverse. Because transmit data and receive data are separate circuits, the interface can operate in a full duplex manner, supporting concurrent data flow in both directions. The standard does not define character framing within the data stream or character encoding.
Voltage levels
The RS-232 standard defines the voltage levels that correspond to logical one and logical zero levels for the data transmission and the control signal lines. Valid signals are either in the range of +3 to +15 volts or the range −3 to −15 volts with respect to the "Common Ground" (GND) pin; consequently, the range between −3 to +3 volts is not a valid RS-232 level. For data transmission lines (TxD, RxD, and their secondary channel equivalents), logic one is represented as a negative voltage and the signal condition is called "mark". Logic zero is signaled with a positive voltage and the signal condition is termed "space". Control signals have the opposite polarity: the asserted or active state is positive voltage and the de-asserted or inactive state is negative voltage. Examples of control lines include request to send (RTS), clear to send (CTS), data terminal ready (DTR), and data set ready (DSR).
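A small sketch of these decoding rules (voltages inside the ±3 V band are treated as undefined; the function names are illustrative):

```python
def decode_data_line(volts):
    """Data circuits: mark (logic 1) is negative, space (logic 0) is positive."""
    if volts <= -3:
        return 1            # "mark"
    if volts >= 3:
        return 0            # "space"
    return None             # -3 V to +3 V is not a valid RS-232 level

def decode_control_line(volts):
    """Control circuits use the opposite polarity: positive means asserted."""
    if volts >= 3:
        return True         # asserted / active
    if volts <= -3:
        return False        # de-asserted / inactive
    return None
```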
The standard specifies a maximum open-circuit voltage of 25 volts: signal levels of ±5 V, ±10 V, ±12 V, and ±15 V are all commonly seen depending on the voltages available to the line driver circuit. Some RS-232 driver chips have inbuilt circuitry to produce the required voltages from a 3 or 5 volt supply. RS-232 drivers and receivers must be able to withstand indefinite short circuits to the ground or to any voltage level up to ±25 volts. The slew rate, or how fast the signal changes between levels, is also controlled.
Because the voltage levels are higher than logic levels typically used by integrated circuits, special intervening driver circuits are required to translate logic levels. These also protect the device's internal circuitry from short circuits or transients that may appear on the RS-232 interface, and provide sufficient current to comply with the slew rate requirements for data transmission.
Because both ends of the RS-232 circuit depend on the ground pin being zero volts, problems will occur when connecting machinery and computers where the voltage between the ground pin on one end, and the ground pin on the other is not zero. This may also cause a hazardous ground loop. Use of a common ground limits RS-232 to applications with relatively short cables. If the two devices are far enough apart or on separate power systems, the local ground connections at either end of the cable will have differing voltages; this difference will reduce the noise margin of the signals. Balanced, differential serial connections such as RS-422 or RS-485 can tolerate larger ground voltage differences because of the differential signaling.
Unused interface signals terminated to the ground will have an undefined logic state. Where it is necessary to permanently set a control signal to a defined state, it must be connected to a voltage source that asserts the logic 1 or logic 0 levels, for example with a pullup resistor. Some devices provide test voltages on their interface connectors for this purpose.
Connectors
RS-232 devices may be classified as Data Terminal Equipment (DTE) or Data Circuit-terminating Equipment (DCE); this defines at each device which wires will be sending and receiving each signal. According to the standard, male connectors have DTE pin functions, and female connectors have DCE pin functions. Other devices may have any combination of connector gender and pin definitions. Many terminals were manufactured with female connectors but were sold with a cable with male connectors at each end; the terminal with its cable satisfied the recommendations in the standard.
The standard recommends the D-subminiature 25-pin connector up to revision C, and makes it mandatory as of revision D. Most devices only implement a few of the twenty signals specified in the standard, so connectors and cables with fewer pins are sufficient for most connections, more compact, and less expensive. Personal computer manufacturers replaced the DB-25M connector with the smaller DE-9M connector. This connector, with a different pinout (see Serial port pinouts), is prevalent for personal computers and associated devices.
Presence of a 25-pin D-sub connector does not necessarily indicate an RS-232-C compliant interface. For example, on the original IBM PC, a male D-sub was an RS-232-C DTE port (with a non-standard current loop interface on reserved pins), but the female D-sub connector on the same PC model was used for the parallel "Centronics" printer port. Some personal computers put non-standard voltages or signals on some pins of their serial ports.
Pinouts
The following table lists commonly used RS-232 signals and pin assignments:
Signal Ground is a common return for the other connections; it appears on two pins in the Yost standard but is the same signal. The DB-25 connector includes a second Protective Ground on pin 1, which is intended to be connected by each device to its own frame ground or similar. Connecting Protective Ground to Signal Ground is a common practice but not recommended.
Note that EIA/TIA 561 combines DSR and RI, and the Yost standard combines DSR and DCD.
Cables
The standard does not define a maximum cable length, but instead defines the maximum capacitance that a compliant drive circuit must tolerate. A widely used rule of thumb indicates that cables more than long will have too much capacitance, unless special cables are used. By using low-capacitance cables, communication can be maintained over larger distances up to about . For longer distances, other signal standards, such as RS-422, are better suited for higher speeds.
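A back-of-the-envelope way to apply this capacitance budget, assuming a per-metre cable capacitance taken from a datasheet (the figures below are illustrative) and the 2500 pF load limit cited for later revisions of the standard:

```python
def max_cable_length_m(cable_pf_per_m, budget_pf=2500, device_pf=0):
    """Longest run whose cable capacitance plus receiver load stays under budget."""
    return (budget_pf - device_pf) / cable_pf_per_m

print(max_cable_length_m(50))                   # ~50 m for a 50 pF/m cable
print(max_cable_length_m(100, device_pf=400))   # denser cable plus some receiver load
```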
Since the standard definitions are not always correctly applied, it is often necessary to consult documentation, test connections with a breakout box, or use trial and error to find a cable that works when interconnecting two devices. Connecting a fully standard-compliant DCE device and DTE device would use a cable that connects identical pin numbers in each connector (a so-called "straight cable"). "Gender changers" are available to solve gender mismatches between cables and connectors. Connecting devices with different types of connectors requires a cable that connects the corresponding pins according to the table below. Cables with 9 pins on one end and 25 on the other are common. Manufacturers of equipment with 8P8C connectors usually provide a cable with either a DB-25 or DE-9 connector (or sometimes interchangeable connectors so they can work with multiple devices). Poor-quality cables can cause false signals by crosstalk between data and control lines (such as Ring Indicator).
If a given cable will not allow a data connection, especially if a gender changer is in use, a null modem cable may be necessary. Gender changers and null modem cables are not mentioned in the standard, so there is no officially sanctioned design for them.
Data and control signals
The following table lists commonly used RS-232 signals (called "circuits" in the specifications) and their pin assignments on the recommended DB-25 connectors. (See Serial port pinouts for other commonly used connectors not defined by the standard.)
The signals are named from the standpoint of the DTE. The ground pin is a common return for the other connections, and establishes the "zero" voltage to which voltages on the other pins are referenced. The DB-25 connector includes a second "protective ground" on pin 1; this is connected internally to equipment frame ground, and should not be connected in the cable or connector to signal ground.
Ring Indicator
Ring Indicator (RI) is a signal sent from the DCE to the DTE device. It indicates to the terminal device that the phone line is ringing. In many computer serial ports, a hardware interrupt is generated when the RI signal changes state. Having support for this hardware interrupt means that a program or operating system can be informed of a change in state of the RI pin, without requiring the software to constantly "poll" the state of the pin. RI does not correspond to another signal that carries similar information the opposite way.
On an external modem the status of the Ring Indicator pin is often coupled to the "AA" (auto answer) light, which flashes if the RI signal has detected a ring. The asserted RI signal follows the ringing pattern closely, which can permit software to detect distinctive ring patterns.
The Ring Indicator signal is used by some older uninterruptible power supplies (UPSs) to signal a power failure state to the computer.
Certain personal computers can be configured for wake-on-ring, allowing a computer that is suspended to answer a phone call.
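A hedged sketch of reading the Ring Indicator line from host software with pyserial, which exposes the modem status lines as properties; many drivers deliver RI changes as an interrupt instead, avoiding a polling loop like this one, and the device name is an example:

```python
import time
import serial  # third-party "pyserial" package

with serial.Serial("/dev/ttyS0") as port:
    for _ in range(100):          # poll for roughly ten seconds
        if port.ri:               # True while the DCE asserts Ring Indicator
            print("incoming ring detected")
        time.sleep(0.1)
```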
RTS, CTS, and RTR
The Request to Send (RTS) and Clear to Send (CTS) signals were originally defined for use with half-duplex (one direction at a time) modems such as the Bell 202. These modems disable their transmitters when not required and must transmit a synchronization preamble to the receiver when they are re-enabled. The DTE asserts RTS to indicate a desire to transmit to the DCE, and in response the DCE asserts CTS to grant permission, once synchronization with the DCE at the far end is achieved. Such modems are no longer in common use. There is no corresponding signal that the DTE could use to temporarily halt incoming data from the DCE. Thus RS-232's use of the RTS and CTS signals, per the older versions of the standard, is asymmetric.
This scheme is also employed in present-day RS-232 to RS-485 converters. RS-485 is a multiple-access bus on which only one device can transmit at a time, a concept that is not provided for in RS-232. The RS-232 device asserts RTS to tell the converter to take control of the RS-485 bus so that the converter, and thus the RS-232 device, can send data onto the bus.
Modern communications environments use full-duplex (both directions simultaneously) modems. In that environment, DTEs have no reason to deassert RTS. However, due to the possibility of changing line quality, delays in processing of data, etc., there is a need for symmetric, bidirectional flow control.
A symmetric alternative providing flow control in both directions was developed and marketed in the late 1980s by various equipment manufacturers. It redefined the RTS signal to mean that the DTE is ready to receive data from the DCE. This scheme was eventually codified in version RS-232-E (actually TIA-232-E by that time) by defining a new signal, "RTR (Ready to Receive)", which is CCITT V.24 circuit 133. TIA-232-E and the corresponding international standards were updated to show that circuit 133, when implemented, shares the same pin as RTS (Request to Send), and that when 133 is in use, RTS is assumed by the DCE to be asserted at all times.
In this scheme, commonly called "RTS/CTS flow control" or "RTS/CTS handshaking" (though the technically correct name would be "RTR/CTS"), the DTE asserts RTS whenever it is ready to receive data from the DCE, and the DCE asserts CTS whenever it is ready to receive data from the DTE. Unlike the original use of RTS and CTS with half-duplex modems, these two signals operate independently from one another. This is an example of hardware flow control. However, "hardware flow control" in the description of the options available on an RS-232-equipped device does not always mean RTS/CTS handshaking.
Equipment using this protocol must be prepared to buffer some extra data, since the remote system may have begun transmitting just before the local system de-asserts RTR.
3-wire and 5-wire RS-232
A minimal "3-wire" RS-232 connection consisting only of transmit data, receive data, and ground, is commonly used when the full facilities of RS-232 are not required. Even a two-wire connection (data and ground) can be used if the data flow is one way (for example, a digital postal scale that periodically sends a weight reading, or a GPS receiver that periodically sends position, if no configuration via RS-232 is necessary). When only hardware flow control is required in addition to two-way data, the RTS and CTS lines are added in a 5-wire version.
Seldom-used features
The EIA-232 standard specifies connections for several features that are not used in most implementations. Their use requires 25-pin connectors and cables.
Signal rate selection
The DTE or DCE can specify use of a "high" or "low" signaling rate. The rates, as well as which device will select the rate, must be configured in both the DTE and DCE. The prearranged device selects the high rate by setting pin 23 to ON.
Loopback testing
Many DCE devices have a loopback capability used for testing. When enabled, signals are echoed back to the sender rather than being sent on to the receiver. If supported, the DTE can signal the local DCE (the one it is connected to) to enter loopback mode by setting pin 18 to ON, or the remote DCE (the one the local DCE is connected to) to enter loopback mode by setting pin 21 to ON. The latter tests the communications link, as well as both DCEs. When the DCE is in test mode, it signals the DTE by setting pin 25 to ON.
A commonly used version of loopback testing does not involve any special capability of either end. A hardware loopback is simply a wire connecting complementary pins together in the same connector (see loopback).
Loopback testing is often performed with a specialized DTE called a bit error rate tester (or BERT).
Timing signals
Some synchronous devices provide a clock signal to synchronize data transmission, especially at higher data rates. Two timing signals are provided by the DCE on pins 15 and 17. Pin 15 is the transmitter clock, or send timing (ST); the DTE puts the next bit on the data line (pin 2) when this clock transitions from OFF to ON (so it is stable during the ON to OFF transition when the DCE registers the bit). Pin 17 is the receiver clock, or receive timing (RT); the DTE reads the next bit from the data line (pin 3) when this clock transitions from ON to OFF.
Alternatively, the DTE can provide a clock signal, called transmitter timing (TT), on pin 24 for transmitted data. Data is changed when the clock transitions from OFF to ON, and read during the ON to OFF transition. TT can be used to overcome the issue where ST must traverse a cable of unknown length and delay, clock a bit out of the DTE after another unknown delay, and return it to the DCE over the same unknown cable delay. Since the relation between the transmitted bit and TT can be fixed in the DTE design, and since both signals traverse the same cable length, using TT eliminates the issue. TT may be generated by looping ST back with an appropriate phase change to align it with the transmitted data. ST loop back to TT lets the DTE use the DCE as the frequency reference, and correct the clock to data timing.
Synchronous clocking is required for such protocols as SDLC, HDLC, and X.25.
Secondary channel
A secondary data channel, identical in capability to the primary channel, can optionally be implemented by the DTE and DCE devices. Pin assignments are as follows:
Related standards
Other serial signaling standards may not interoperate with standard-compliant RS-232 ports. For example, using the TTL levels of near +5 and 0 V puts the mark level in the undefined area of the standard. Such levels are sometimes used with GPS receivers and depth finders.
A 20 mA current loop uses the absence of 20 mA current for high, and the presence of current in the loop for low; this signaling method is often used for long-distance and optically isolated links. Connection of a current-loop device to a compliant RS-232 port requires a level translator. Current-loop devices can supply voltages in excess of the withstand voltage limits of a compliant device. The original IBM PC serial port card implemented a 20 mA current-loop interface, which was never emulated by other suppliers of plug-compatible equipment.
Other serial interfaces similar to RS-232:
RS-422 – a high-speed system similar to RS-232 but with differential signaling
RS-423 – a high-speed system similar to RS-422 but with unbalanced signaling
RS-449 – a functional and mechanical interface that used RS-422 and RS-423 signals; never caught on like RS-232 and was withdrawn by the EIA
RS-485 – a descendant of RS-422 that can be used as a bus in multidrop configurations
MIL-STD-188 – a system like RS-232 but with better impedance and rise time control
EIA-530 – a high-speed system using RS-422 or RS-423 electrical properties in an EIA-232 pinout configuration, thus combining the best of both; supersedes RS-449
EIA/TIA-561 – defines RS-232 pinouts for eight-position, eight-contact (8P8C) modular connectors (which may be improperly called RJ45 connectors)
EIA/TIA-562 – low-voltage version of EIA/TIA-232
TIA-574 – standardizes the 9-pin D-subminiature connector pinout for use with EIA-232 electrical signalling, as originated on the IBM PC/AT
EIA/TIA-694 – similar to TIA/EIA-232-F but with support for higher data rates up to 512 kbit/s
Development tools
When developing or troubleshooting systems using RS-232, close examination of hardware signals can be important to find problems. This can be done using simple devices with LEDs that indicate the logic levels of data and control signals. "Y" cables may be used to allow using another serial port to monitor all traffic on one direction. A serial line analyzer is a device similar to a logic analyzer but specialized for RS-232's voltage levels, connectors, and, where used, clock signals; it collects, stores, and displays the data and control signals, allowing developers to view them in detail. Some simply display the signals as waveforms; more elaborate versions include the ability to decode characters in ASCII or other common codes and to interpret common protocols used over RS-232 such as SDLC, HDLC, DDCMP, and X.25. Serial line analyzers are available as standalone units, as software and interface cables for general-purpose logic analyzers and oscilloscopes, and as programs that run on common personal computers and devices.
See also
Asynchronous serial communication
Baud rate
Comparison of synchronous and asynchronous signalling
Synchronous serial communication
Universal asynchronous receiver/transmitter (UART)
References
Further reading
External links
Telecommunications equipment
Computer hardware standards
Networking standards
EIA standards
Computer-related introductions in 1960 |
61391314 | https://en.wikipedia.org/wiki/Druva | Druva | Druva Inc. is a privately-held software company based in Sunnyvale, California. The company provides SaaS-based data protection and management products. The company was founded in 2008, raised several rounds of funding, and grew to more than 800 employees.
History
In 2008, Jaspreet Singh (CEO), Milind Borate (CTO), and Ramani Kothandaraman, who met working together at Veritas Software, founded Druva in Pune, India. In Sanskrit, "druva" translates to "North Star". Initially, Druva focused on providing data management software to financial companies before shifting to general enterprise data management.
In 2010, the company received Series A funding. In 2011, the company added smartphone support for its inSync app and received Series B funding. The next year, the company moved its headquarters to Silicon Valley, and again shifted focus to cloud-based data management and protection. By 2013, the company had grown to 194 employees. The company raised Series C funding the same year.
In 2014, Druva released its Phoenix server backup product and received Series D funding.
By 2016, the company had grown to 400 employees, and set up a subsidiary in Japan and an office in Tokyo. Druva received more funding and FedRAMP authority to operate in 2017. In 2019, Druva grew to 750 employees and more than 4,000 customers, and opened an office in Singapore. The company also received additional late-stage funding, which brought its total amount invested to $328 million and its total valuation to more than $1 billion.
In 2018, Druva acquired Letterkenny-based CloudRanger, a backup and disaster recovery company. In 2019, Druva acquired CloudLanes to supplement its on-premises to cloud performance. The following year it acquired sfApex, a Texas-based backup and migration company focused on Salesforce data. In April 2021, Druva raised $147 million in its eighth funding round, valuing the company at about $2 billion.
Products
Druva creates and sells data protection and management products.
All of Druva's products operate on the same cloud-native backup platform, built on Amazon Web Services, which provides a centralized backup repository.
Druva is focused on storing data in backups and managing those backups for servers, Software-as-a-Service applications, and cloud-based software. For example, in 2018 it introduced features that restore computer systems compromised by ransomware and specialized technology for backups of SQL servers, Azure directories, and network-attached storage.
References
External links
Software companies based in the San Francisco Bay Area
Technology companies established in 2008
Companies based in Sunnyvale, California
Computer security companies
2008 establishments in Maharashtra |
18774171 | https://en.wikipedia.org/wiki/Arabeyes | Arabeyes | Arabeyes is a free and open-source project that is aimed at fully supporting the Arabic language in the Unix/Linux environment. It was established in early 2001 by a number of Arab Linux enthusiasts. They made the "world's first Arabic Linux live CD." The name is a play on the word "Arabise" (which, in this context, means to adapt software so that it is compatible with Arabic), and "eyes", a term for many people.
Project
The project runs a portal for sub-projects such as Arabic free-software Unicode fonts, a text editor, and the "ITL" (Islamic Tools and Libraries), which provides Hijri dates, Muslim prayer times and the Qibla. In October 2003, the project released a Linux distribution named Arabbix, the "world's first Arabic Linux live CD". They have worked on an Arabised version of OpenOffice. They maintain a Linux distribution called Hilali Linux. It includes a translation project to provide an Arabic interface to the KDE and GNOME window managers, and a Linux documentation project. In 2003, the group released the first open source word list for use in English to Arabic translation and dictionary projects; on release it was included in "virtually all online multilingual translation sites".
Located online at arabeyes.org, this network describes itself as a "meta project that is aimed at fully supporting the Arabic language in the Unix/Linux environment." It is designed to be a central location to standardize the Arabization process. Arabeyes relies on voluntary contributions by computer professionals and enthusiasts scattered across the globe.
At the 2004 Casablanca GNU/Linux Days conference held in Morocco, Arabeyes was awarded the Best Free/Open Project. The award was presented by Richard Stallman, founder of GNU Project and Free Software Foundation.
References
External links
Internet properties established in 2001
Arabic-language computing
Unix software |
634280 | https://en.wikipedia.org/wiki/S.M.A.R.T. | S.M.A.R.T. | S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology; often written as SMART) is a monitoring system included in computer hard disk drives (HDDs), solid-state drives (SSDs), and eMMC drives. Its primary function is to detect and report various indicators of drive reliability with the intent of anticipating imminent hardware failures.
When S.M.A.R.T. data indicates a possible imminent drive failure, software running on the host system may notify the user so preventive action can be taken to prevent data loss, and the failing drive can be replaced and data integrity maintained.
Background
Hard disk and other storage drives are subject to failures (see hard disk drive failure) which can be classified within two basic classes:
Predictable failures which result from slow processes such as mechanical wear and gradual degradation of storage surfaces. Monitoring can determine when such failures are becoming more likely.
Unpredictable failures which occur without warning due to anything from electronic components becoming defective to a sudden mechanical failure, including failures related to improper handling.
Mechanical failures account for about 60% of all drive failures.
While the eventual failure may be catastrophic, most mechanical failures result from gradual wear and there are usually certain indications that failure is imminent. These may include increased heat output, increased noise level, problems with reading and writing of data, or an increase in the number of damaged disk sectors.
PCTechGuide's page on S.M.A.R.T. (2003) comments that the technology has gone through three phases.
Accuracy
A field study at Google covering over 100,000 consumer-grade drives from December 2005 to August 2006 found correlations between certain S.M.A.R.T. information and annualized failure rates:
In the 60 days following the first uncorrectable error on a drive (S.M.A.R.T. attribute 0xC6 or 198) detected as a result of an offline scan, the drive was, on average, 39 times more likely to fail than a similar drive for which no such error occurred.
First errors in reallocations, offline reallocations (S.M.A.R.T. attributes 0xC4 and 0x05 or 196 and 5) and probational counts (S.M.A.R.T. attribute 0xC5 or 197) were also strongly correlated to higher probabilities of failure.
Conversely, little correlation was found for increased temperature and no correlation for usage level. However, the research showed that a large proportion (56%) of the failed drives failed without recording any count in the "four strong S.M.A.R.T. warnings" identified as scan errors, reallocation count, offline reallocation and probational count.
Further, 36% of failed drives did so without recording any S.M.A.R.T. error at all, except the temperature, meaning that S.M.A.R.T. data alone was of limited usefulness in anticipating failures.
History and predecessors
An early hard disk monitoring technology was introduced by IBM in 1992 in its IBM 9337 Disk Arrays for AS/400 servers using IBM 0662 SCSI-2 disk drives. Later it was named Predictive Failure Analysis (PFA) technology. It measured several key device health parameters and evaluated them within the drive firmware. Communications between the physical unit and the monitoring software were limited to a binary result: namely, either "device is OK" or "drive is likely to fail soon".
Later, another variant, which was named IntelliSafe, was created by computer manufacturer Compaq and disk drive manufacturers Seagate, Quantum, and Conner. The disk drives would measure the disk's "health parameters", and the values would be transferred to the operating system and user-space monitoring software. Each disk drive vendor was free to decide which parameters were to be included for monitoring, and what their thresholds should be. The unification was at the protocol level with the host.
Compaq submitted IntelliSafe to the Small Form Factor (SFF) committee for standardization in early 1995. It was supported by IBM, by Compaq's development partners Seagate, Quantum, and Conner, and by Western Digital, which did not have a failure prediction system at the time. The Committee chose IntelliSafe's approach, as it provided more flexibility. Compaq placed IntelliSafe into the public domain on 12 May 1995. The resulting jointly developed standard was named S.M.A.R.T..
That SFF standard described a communication protocol for an ATA host to use and control monitoring and analysis in a hard disk drive, but did not specify any particular metrics or analysis methods. Later, "S.M.A.R.T." came to be understood (though without any formal specification) to refer to a variety of specific metrics and methods and to apply to protocols unrelated to ATA for communicating the same kinds of things.
Provided information
The technical documentation for S.M.A.R.T. is in the AT Attachment (ATA) standard. First introduced in 1994, the ATA standard has gone through multiple revisions. Some parts of the original S.M.A.R.T. specification by the Small Form Factor (SFF) Committee were added to ATA-3, published in 1997. In 1998, ATA-4 dropped the requirement for drives to maintain an internal attribute table and instead required only that an "OK" or "NOT OK" value be returned; nevertheless, manufacturers have kept the capability to retrieve the attribute values. The most recent ATA standard, ATA-8, was published in 2004 and has undergone regular revisions, the latest being in 2011. Standardization of similar features on SCSI is scarcer and is not named as such in the standards, although vendors and consumers alike refer to these similar features as S.M.A.R.T. too.
The most basic information that S.M.A.R.T. provides is the S.M.A.R.T. status. It provides only two values: "threshold not exceeded" and "threshold exceeded". Often these are represented as "drive OK" or "drive fail" respectively. A "threshold exceeded" value is intended to indicate that there is a relatively high probability that the drive will not be able to honor its specification in the future: that is, the drive is "about to fail". The predicted failure may be catastrophic or may be something as subtle as the inability to write to certain sectors, or perhaps slower performance than the manufacturer's declared minimum.
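One common way for host software to read this pass/fail status is to shell out to the smartctl utility from smartmontools; a hedged sketch, assuming smartctl is installed (it usually needs elevated privileges, and the exact output wording varies by drive and version):

```python
import subprocess

def smart_status(device="/dev/sda"):
    """Return smartctl's overall-health report for the given device."""
    result = subprocess.run(
        ["smartctl", "-H", device],   # -H prints the overall health assessment
        capture_output=True, text=True,
    )
    return result.stdout

print(smart_status())   # look for a line such as "... overall-health ... PASSED"
```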
The S.M.A.R.T. status does not necessarily indicate the drive's past or present reliability. If a drive has already failed catastrophically, the S.M.A.R.T. status may be inaccessible. Alternatively, if a drive has experienced problems in the past, but the sensors no longer detect such problems, the S.M.A.R.T. status may, depending on the manufacturer's programming, suggest that the drive is now healthy.
The inability to read some sectors is not always an indication that a drive is about to fail. One way that unreadable sectors may be created, even when the drive is functioning within specification, is through a sudden power failure while the drive is writing. Also, even if the physical disk is damaged at one location, such that a certain sector is unreadable, the disk may be able to use spare space to replace the bad area, so that the sector can be overwritten.
More detail on the health of the drive may be obtained by examining the S.M.A.R.T. Attributes. S.M.A.R.T. Attributes were included in some drafts of the ATA standard, but were removed before the standard became final. The meaning and interpretation of the attributes varies between manufacturers, and are sometimes considered a trade secret for one manufacturer or another. Attributes are further discussed below.
Drives with S.M.A.R.T. may optionally maintain a number of 'logs'. The error log records information about the most recent errors that the drive has reported back to the host computer. Examining this log may help one to determine whether computer problems are disk-related or caused by something else (error log timestamps may "wrap" after 2³² ms ≈ 49.71 days).
A drive that implements S.M.A.R.T. may optionally implement a number of self-test or maintenance routines, and the results of the tests are kept in the self-test log. The self-test routines may be used to detect any unreadable sectors on the disk, so that they may be restored from back-up sources (for example, from other disks in a RAID). This helps to reduce the risk of incurring permanent loss of data.
Standards and implementation
Lack of common interpretation
Many motherboards display a warning message when a disk drive is approaching failure. Although an industry standard exists among most major hard drive manufacturers, issues remain due to attributes intentionally left undocumented to the public in order to differentiate models between manufacturers.
From a legal perspective, the term "S.M.A.R.T." refers only to a signaling method between internal disk drive electromechanical sensors and the host computer. Because of this the specifications of S.M.A.R.T. are entirely vendor specific and, while many of these attributes have been standardized between drive vendors, others remain vendor-specific. S.M.A.R.T. implementations still differ and in some cases may lack "common" or expected features such as a temperature sensor or only include a few select attributes while still allowing the manufacturer to advertise the product as "S.M.A.R.T. compatible."
Visibility to host systems
Depending on the type of interface being used, some S.M.A.R.T.-enabled motherboards and related software may not communicate with certain S.M.A.R.T.-capable drives. For example, few external drives connected via USB and FireWire correctly send S.M.A.R.T. data over those interfaces. With so many ways to connect a hard drive (SCSI, Fibre Channel, ATA, SATA, SAS, SSA, NVMe and so on), it is difficult to predict whether S.M.A.R.T. reports will function correctly in a given system.
Even with a hard drive and interface that implements the specification, the computer's operating system may not see the S.M.A.R.T. information because the drive and interface are encapsulated in a lower layer. For example, they may be part of a RAID subsystem in which the RAID controller sees the S.M.A.R.T.-capable drive, but the host computer sees only a logical volume generated by the RAID controller.
On the Windows platform, many programs designed to monitor and report S.M.A.R.T. information will function only under an administrator account.
The system BIOS and Windows (Windows Vista and later) may detect the S.M.A.R.T. status of hard disk drives and solid-state drives, and prompt the user if the S.M.A.R.T. status is bad.
Access
For a list of various programs that allow reading of S.M.A.R.T. Data, see Comparison of S.M.A.R.T. tools.
ATA S.M.A.R.T. attributes
Each drive manufacturer defines a set of attributes, and sets threshold values beyond which attributes should not pass under normal operation. Each attribute has a raw value, which can be a decimal or a hexadecimal value and whose meaning is entirely up to the drive manufacturer (but often corresponds to counts or a physical unit, such as degrees Celsius or seconds); a normalized value, which ranges from 1 to 253 (with 1 representing the worst case and 253 representing the best); and a worst value, which represents the lowest recorded normalized value. The initial default value of attributes is 100 but can vary between manufacturers.
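As an illustration of these per-attribute values, a small parser for the tabular output of smartctl -A (smartmontools assumed installed; the column layout assumed here matches common versions of the tool but is not guaranteed):

```python
import subprocess

def read_attributes(device="/dev/sda"):
    """Parse normalized/worst/threshold/raw values from 'smartctl -A' output."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    attrs = {}
    for line in out.splitlines():
        fields = line.split()
        # heuristically accept table rows that start with a numeric attribute ID
        if len(fields) >= 10 and fields[0].isdigit():
            attrs[fields[1]] = {
                "value": int(fields[3]),    # current normalized value
                "worst": int(fields[4]),    # worst normalized value recorded
                "thresh": int(fields[5]),   # failure threshold
                "raw": fields[9],           # raw value (unit is vendor-defined)
            }
    return attrs

for name, a in read_attributes().items():
    if a["thresh"] > 0 and a["value"] <= a["thresh"]:
        print(f"{name}: normalized value {a['value']} at or below threshold {a['thresh']}")
```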
Manufacturers that have implemented at least one S.M.A.R.T. attribute in various products include Samsung, Seagate, IBM (Hitachi), Fujitsu, Maxtor, Toshiba, Intel, sTec, Inc., Western Digital and ExcelStor Technology.
Known ATA S.M.A.R.T. attributes
The following chart lists some S.M.A.R.T. attributes and the typical meaning of their raw values. Normalized values are usually mapped so that higher values are better (exceptions include drive temperature, number of head load/unload cycles), but higher raw attribute values may be better or worse depending on the attribute and manufacturer. For example, the "Reallocated Sectors Count" attribute's normalized value decreases as the count of reallocated sectors increases. In this case, the attribute's raw value will often indicate the actual count of sectors that were reallocated, although vendors are in no way required to adhere to this convention.
As manufacturers do not necessarily agree on precise attribute definitions and measurement units, the following list of attributes is a general guide only.
Drives do not support all attribute codes (sometimes abbreviated as "ID", for "identifier", in tables). Some codes are specific to particular drive types (magnetic platter, flash, SSD). Drives may use different codes for the same parameter, e.g., see codes 193 and 225.
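As a rough illustration of how these attributes are commonly read on a Unix-like system, the sketch below shells out to the smartctl utility (part of the smartmontools package mentioned under External links) and parses its attribute table. The device path, the presence of smartmontools, and the stability of its tabular output format are assumptions made for illustration only; they are not part of any S.M.A.R.T. specification, and drives exposed through RAID controllers or USB bridges may not respond at all, as noted above.

```python
import subprocess

def read_smart_attributes(device="/dev/sda"):
    """Return a list of S.M.A.R.T. attribute dicts parsed from `smartctl -A`.

    Illustrative sketch only: assumes smartmontools is installed and that the
    drive exposes ATA-style attributes in the usual 10-column table
    (ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE).
    """
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=False,  # non-zero exit codes carry status bits
    ).stdout

    attributes = []
    in_table = False
    for line in out.splitlines():
        if line.startswith("ID#"):           # header row of the attribute table
            in_table = True
            continue
        if in_table and line.strip():
            fields = line.split()
            if not fields[0].isdigit():
                break                        # past the end of the attribute table
            attributes.append({
                "id": int(fields[0]),
                "name": fields[1],
                "value": int(fields[3]),     # normalized value (1-253)
                "worst": int(fields[4]),     # lowest normalized value recorded
                "threshold": int(fields[5]), # vendor-set threshold
                "raw": fields[9],            # raw value; meaning is vendor-specific
            })
    return attributes

if __name__ == "__main__":
    for attr in read_smart_attributes():
        flag = "!" if attr["value"] <= attr["threshold"] else " "
        print(f"{flag} {attr['id']:3d} {attr['name']:<28} "
              f"value={attr['value']} worst={attr['worst']} raw={attr['raw']}")
```

Because raw values are vendor-specific, a tool like this can only flag attributes whose normalized value has reached the vendor threshold; interpreting the raw numbers still requires manufacturer documentation.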
Threshold Exceeds Condition
Threshold Exceeds Condition (TEC) is an estimated date when a critical drive statistic attribute will reach its threshold value. When Drive Health software reports a "Nearest T.E.C.", it should be regarded as a "Failure date". Sometimes, no date is given and the drive can be expected to work without errors.
To predict the date, the drive tracks the rate at which the attribute changes. Note that TEC dates are only estimates; hard drives can and do fail much sooner or much later than the TEC date.
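A minimal sketch of the idea behind such an estimate is shown below: it fits a linear trend to two samples of a normalized attribute value and solves for the date at which the trend line crosses the vendor threshold. The sample data and the purely linear model are illustrative assumptions; real drive-health tools use their own, generally undisclosed, prediction methods.

```python
from datetime import datetime, timedelta

def estimate_tec(samples, threshold):
    """Estimate a Threshold Exceeds Condition date by linear extrapolation.

    `samples` is a list of (datetime, normalized_value) pairs, oldest first.
    Returns a datetime, or None if the attribute is not trending toward the
    threshold. Purely illustrative; not how any particular tool computes TEC.
    """
    if len(samples) < 2:
        return None
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    elapsed_days = (t1 - t0).total_seconds() / 86400.0
    if elapsed_days <= 0:
        return None
    rate = (v1 - v0) / elapsed_days          # change in normalized value per day
    if rate >= 0:                            # value steady or improving: no TEC
        return None
    days_left = (threshold - v1) / rate      # days until the trend crosses the threshold
    return t1 + timedelta(days=days_left)

# Example: a value falling from 100 to 94 over 60 days, against a threshold of 36
now = datetime(2024, 1, 1)
history = [(now - timedelta(days=60), 100), (now, 94)]
print(estimate_tec(history, threshold=36))   # roughly 580 days in the future
```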
Self-tests
S.M.A.R.T. drives may offer a number of self-tests:
Short
Checks the electrical and mechanical performance as well as the read performance of the disk. Electrical tests might include a test of buffer RAM, a read/write circuitry test, or a test of the read/write head elements. The mechanical test includes seek and servo tests on data tracks. The test also scans a small part of the drive's surface (the area is vendor-specific and there is a time limit on the test) and checks the list of pending sectors that may have read errors. It usually takes under two minutes.
Long/extended
A longer and more thorough version of the short self-test, scanning the entire disk surface with no time limit. This test usually takes several hours, depending on the read/write speed of the drive and its size.
Conveyance
Intended as a quick test to identify damage incurred during transport of the device from the drive manufacturer to the computer manufacturer. Only available on ATA drives, and it usually takes several minutes.
Selective
Some drives allow selective self-tests of just a part of the surface.
The self-test logs for SCSI and ATA drives are slightly different. It is possible for the long test to pass even if the short test fails.
The drive's self-test log can contain up to 21 read-only entries. When the log is filled, old entries are removed.
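For completeness, the short sketch below shows one common way such self-tests are started and their results read back, again by shelling out to smartctl. The device path and the presence of smartmontools are assumptions for illustration, and SCSI drives report their self-test logs in a slightly different format, as noted above.

```python
import subprocess

def run_short_self_test(device="/dev/sda"):
    """Ask the drive to start a short offline self-test; it runs in the background."""
    subprocess.run(["smartctl", "-t", "short", device], check=False)

def read_self_test_log(device="/dev/sda"):
    """Print the drive's self-test log (up to 21 entries on ATA drives)."""
    result = subprocess.run(
        ["smartctl", "-l", "selftest", device],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout)

if __name__ == "__main__":
    run_short_self_test()      # a short test typically completes in under two minutes
    # ...wait for the drive to finish the test before reading the log...
    read_self_test_log()
```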
See also
Comparison of S.M.A.R.T. tools
Data scrubbing
Disk utility
List of disk partitioning software
Predictive failure analysis
System monitor
References
External links
GSmartControl is a GUI for smartctl (part of smartmontools) by Alexander Shaduri
Hard Drive SMART Stats, a large-scale field report
Seagate SMART Attribute Specification
Normal SATA SMART Attribute Behavior (Seagate)
Large collection of S.M.A.R.T. reports
Computer storage technologies
Computer hardware standards |
6597967 | https://en.wikipedia.org/wiki/Caledonian%20Steam%20Packet%20Company | Caledonian Steam Packet Company | The Caledonian Steam Packet Company provided a scheduled shipping service, carrying freight and passengers, on the west coast of Scotland. Formed in 1889 to complement the services of the Caledonian Railway, the company expanded by taking over rival ferry companies. In 1973, they were merged with MacBraynes as Caledonian MacBrayne.
Formation
Rival railway companies, the Caledonian Railway (CR), the North British Railway (NBR) and the Glasgow and South Western Railway (GSWR) at first used the services of various early private operators of Clyde steamers. The CR failed to attract private ship owners to their new extension from Greenock to the fishing village of Gourock. They had purchased the harbour at Gourock, which had advantages of a faster line from Glasgow, bypassing the Glasgow and South Western Railway Prince's Pier at Greenock, and being closer to the Clyde resorts. The CR began operating steamers on its own account in 1889.
The Caledonian Steam Packet Company (CSP) was formed as a packet company in May 1889, with Captain James Williamson as secretary and manager. Nominally an independent company, they bought the ships needed to operate steamer services to and from Gourock. On withdrawal of the Wemyss Bay Steamboat Company in 1890, CSP took over services to Rothesay, Largs and Millport. In June 1890, they established a service to Arran from the Lanarkshire and Ayrshire Railway railhead at Ardrossan. In the years that followed, there was significant investment in piers and ships.
Amalgamations
After years of fierce competition between all the fleets, the CR and GSWR amalgamated with several other railways at the start of 1923 to form the London, Midland and Scottish Railway (LMS) and their fleets amalgamated into the Caledonian Steam Packet Company, their funnels being painted yellow with a black top. At the same time the NBR (and its shipping fleet) also amalgamated with other railways to create the London and North Eastern Railway (LNER), which built the in 1947.
In 1935, Williamson-Buchanan Steamers was taken over by the Caledonian Steam Packet Company.
In 1945, the Caledonian Steam Packet Company took responsibility for the Kyleakin to Kyle of Lochalsh ferry.
With nationalisation in 1948, the LMS and LNER fleets were amalgamated as Clyde Shipping Services, under the control of the British Transport Commission.
In 1957 a reorganisation restored the Caledonian Steam Packet Company name, and in 1965 a red lion was added to each side of the black-topped yellow funnels. The headquarters remained at Gourock pierhead.
At the end of December 1968 management of the Caledonian Steam Packet Company passed to the Scottish Transport Group, which gained control of David MacBrayne's the following June. The MacBrayne service from Gourock to Ardrishaig ended on 30 September 1969, leaving the Clyde services entirely to the Caledonian Steam Packet Company.
Merger with MacBraynes
On 1 January 1973 the Caledonian Steam Packet Co. acquired most of the ships and routes of David MacBrayne Ltd and commenced joint Clyde and West Highland operations under the new name of Caledonian MacBrayne, with a combined headquarters at Gourock.
List of ships operated by the company
Sources
References
Ferry companies of Scotland
Packet (sea transport)
Shipping companies of Scotland
Defunct companies of Scotland
Defunct shipping companies of the United Kingdom
Highlands and Islands of Scotland
Transport in Argyll and Bute
Transport in Highland (council area)
Transport in Inverclyde
Transport in the Outer Hebrides
Transport companies established in 1889
Transport companies disestablished in 1973
1889 establishments in Scotland
1973 disestablishments in Scotland
Steam Packet Company
Steam Packet Company
1973 mergers and acquisitions
British companies disestablished in 1973
British companies established in 1889 |
128387 | https://en.wikipedia.org/wiki/Sperry%20Corporation | Sperry Corporation | Sperry Corporation was a major American equipment and electronics company whose existence spanned more than seven decades of the 20th century. Through a series of mergers, it exists today as a part of Unisys while some other of its former divisions became part of Honeywell, Lockheed Martin, Raytheon Technologies, and Northrop Grumman.
The company is best known as the developer of the artificial horizon and a wide variety of other gyroscope-based aviation instruments like autopilots, bombsights, analog ballistics computers and gyro gunsights. In the post-WWII era the company branched out into electronics, both aviation related, and later, computers.
History
Early history
The company was founded in 1910 by Elmer Ambrose Sperry, as the Sperry Gyroscope Company, to manufacture navigation equipment—chiefly his own inventions the marine gyrostabilizer and the gyrocompass—at 40 Flatbush Avenue Extension in Downtown Brooklyn. During World War I the company diversified into aircraft components including bomb sights and fire control systems. In their early decades, Sperry Gyroscope and related companies were concentrated on Long Island, New York, especially in Nassau County. Over the years, it diversified to other locations.
In 1918, Lawrence Sperry split from his father to compete over aero-instruments with the Lawrence Sperry Aircraft Company, including the new automatic pilot. After the death of Lawrence on December 13, 1923, the two firms were brought together in 1924. Then in 1929 it was acquired by North American Aviation. The company again became independent in 1933 as the Sperry Corporation. The new corporation was a holding company for a number of smaller entities such as the original Sperry Gyroscope, Ford Instrument Company, Intercontinental Aviation, Inc., and others. The company made advanced aircraft navigation equipment for the market, including the Sperry Gyroscope and the Sperry Radio Direction Finder.
Sperry supported the work of a group of Stanford University inventors, led by Russell and Sigurd Varian, who had invented the klystron, and incorporated this technology and related inventions into their products.
The company prospered during World War II as military demand skyrocketed, ranking 19th among US corporations in the value of wartime production contracts. It specialized in high technology devices such as analog computer–controlled bomb sights, airborne radar systems, and automated take-off and landing systems. Sperry also was the creator of the Ball Turret Gun mounted under the Boeing B-17 Flying Fortress and the Consolidated B-24 Liberator, as commemorated by the film Memphis Belle and the poem The Death of the Ball Turret Gunner. Postwar, the company expanded its interests in electronics and computing, producing the company's first digital computer, SPEEDAC, in 1953.
During the 1950s, a large part of Sperry Gyroscope moved to Phoenix, Arizona and soon became the Sperry Flight Systems Company. This was to preserve parts of this defense company in the event of a nuclear war. The Gyroscope division remained headquartered in New York—in its massive Lake Success, Long Island, plant (which also served as the temporary United Nations headquarters from 1946 to 1952)—into the 1980s.
Sperry Rand
In 1955, Sperry acquired Remington Rand and renamed itself Sperry Rand. Acquiring then Eckert–Mauchly Computer Corporation and Engineering Research Associates along with Remington Rand, the company developed the successful UNIVAC computer series and signed a valuable cross-licensing deal with IBM. The company remained a major military contractor. From 1967 to 1973 the corporation was involved in an acrimonious antitrust lawsuit with Honeywell, Inc. (see Honeywell v. Sperry Rand).
In 1961, Sperry Rand was ranked 34th on the Fortune 500 list of largest companies in the United States.
In 1978, Sperry Rand decided to concentrate on its computing interests, and sold a number of divisions including Remington Rand Systems, Remington Rand Machines, Ford Instrument Company and Sperry Vickers. The company dropped "Rand" from its title and reverted to Sperry Corporation. At about the same time as the Rand acquisition, Sperry Gyroscope decided to open a facility that would almost exclusively produce its marine instruments. After considerable searching and evaluation, a plant was built in Charlottesville, Virginia, and in 1956, Sperry Piedmont Division began producing marine navigation products. It was later renamed Sperry Marine.
In the 1970s, Sperry Corporation was a traditional conglomerate headquartered in the Sperry Rand Building at 1290 Avenue of the Americas in Manhattan, selling typewriters (Sperry Remington); office equipment, electronic digital computers for business and the military (Sperry Univac); construction and farm equipment (Sperry New Holland); avionics, such as gyroscopes, radars, air route traffic control equipment (Sperry Vickers/Sperry Flight Systems); and consumer products such as electric razors (Sperry Remington). In addition, Sperry Systems Management (headquartered in the original Sperry Gyroscope building in Lake Success) performed work on a number of US government defense contracts. Sperry also managed the operation from 1961 to 1975 of the large Louisiana Army Ammunition Plant near Minden. In January 1972, Sperry took over the RCA line of electronic digital computers (architectural cousins to the IBM System/360). In 1983, Sperry sold Vickers to Libbey Owens Ford (later to be renamed TRINOVA Corporation and subsequently Aeroquip-Vickers).
Burroughs takeover
On September 16, 1986, after the success of a second hostile takeover bid engineered by Burroughs CEO and former U.S. Secretary of the Treasury, Michael Blumenthal, Sperry Corporation merged with Burroughs Corporation. The newly merged company was renamed Unisys Corporation—a portmanteau of "united", "information", and "systems". The takeover came about even after Sperry used a "poison pill" in the form of a major share price hike to dissuade the hostile bid, the result of which caused Burroughs to borrow much more funding than was anticipated to complete the bid.
Certain internal divisions of Sperry were sold off after the merger, such as Sperry New Holland (1986, to Ford Motor Company, who in 1991 sold the Ford-New Holland line to Fiat) and Sperry Marine (to Tenneco, in 1987, and is currently part of Northrop Grumman). Also sold—to Honeywell—was Sperry Flight Systems, while Sperry Defense Products Group was sold to Loral; those two units whose functions were originally at the heart of the venerable Sperry Gyroscope division. This group is now part of Lockheed Martin.
British Sperry
Sperry in Britain started with a factory in Pimlico, London, in 1913, manufacturing gyroscopic compasses for the Royal Navy. It became the Sperry Gyroscope Co Ltd in 1915. In 1923, Lawrence Sperry was killed in an air crash near Rye, Sussex. The company subsequently expanded to the Golden Mile, Brentford in 1931, Stonehouse, Gloucestershire in 1938, and Bracknell in 1957. By 1963, these sites employed some 3,500 people. The Brentford site closed in 1967, with the expansion of Bracknell. Stonehouse closed around 1969. By 1969, the Sperry Gyroscope division of Sperry Rand Corporation employed around 2,500.
The site of the Bracknell factory and development center (sold to British Aerospace in 1982) is commemorated by a 4.5-meter aluminum sculpture by Philip Bentham, Sperry's New Symbolic Gyroscope (1967).
In 1989, the Bracknell site was downsized and work was moved to the Sperry manufacturing site in Plymouth by then under the British Aerospace brand. State of the art, high technology MEMS gyroscopes (together with other avionics equipment) are still made on the site today, although the company is now owned by United Technologies Corporation and is part of UTC Aerospace Systems.
Sperry since 1997
The name Sperry lives on in the company Sperry Marine, headquartered in New Malden, England. This company, formed in 1997, from three well-known brand names in the marine industry—Sperry Marine, Decca, and C. Plath—is now part of Northrop Grumman Corporation. It is a worldwide supplier of navigation, communication, information and automation systems for commercial marine and naval markets.
Products
Aircraft
Missiles and rockets
Sperry MGM-29 Sergeant
In popular culture
The 1986 comedy Jumpin' Jack Flash features many Sperry computers in the bank where the protagonist (played by Whoopi Goldberg) works. Jim Belushi plays the role of a Sperry "repairman".
See also
Hendrik Wade Bode
Director (military)
Gun data computer
Fire-control system
Kerrison Predictor
MAPPER
Rangekeeper
Sperry Drilling Services
References
Notes
Bibliography
Further reading
External links
USStexasbb35.com, Mark 51 Gun director
dreadnoughtproject.org, Director Firing Handbook index from HMS Dreadnought project
Gunnery Pocket book maritime.org
Sperry Gyroscope Company Ltd in Stonehouse Glos UK
Sperry Corporation, UNIVAC Division Photograph Collection at Hagley Museum and Library
Sperry Gyroscope Company Division records at Hagley Museum and Library
Sperry Rand Corporation, Engineering Research Associates (ERA) Division records at Hagley Museum and Library
Sperry Rand Corporation. Remington Rand Division records: Advertising and Sales Promotion Department at Hagley Museum and Library
Sperry Rand Corporation, Univac Division records at Hagley Museum and Library
Sperry-UNIVAC records at Hagley Museum and Library
Albemarle County, Virginia
American companies established in 1910
American companies disestablished in 1986
Avionics companies
Companies formerly listed on the Tokyo Stock Exchange
Companies based in Nassau County, New York
Companies based in Phoenix, Arizona
Computer companies established in 1910
Computer companies disestablished in 1986
Defunct computer hardware companies
Defunct manufacturing companies based in Arizona
Defunct manufacturing companies based in New York (state)
Defunct manufacturing companies based in Virginia
Defunct computer companies of the United States
Electronics companies established in 1910
Electronics companies disestablished in 1986
Honeywell
Instrument-making corporations
Technology companies established in 1910
Technology companies disestablished in 1986
Town of North Hempstead, New York
Unisys |
39114038 | https://en.wikipedia.org/wiki/Irfan%20Essa | Irfan Essa | Irfan Aziz Essa is a professor in the School of Interactive Computing of the College of Computing, and adjunct professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology (Georgia Tech). He is an associate dean in Georgia Tech's College of Computing and the director of the new Interdisciplinary Research Center for Machine Learning at Georgia Tech (ML@GT).
Education
Essa obtained his undergraduate degree in engineering at the Illinois Institute of Technology in 1988. Following this, Essa attended the Massachusetts Institute of Technology, where he received his magister scientiae (Master of Science) in 1990 and his Ph.D. in 1995 at the MIT Media Lab. His doctoral research focused on the implementation of a system to detect emotions from changes in facial expressions, which was later featured in The New York Times. He then held a position as a research scientist at MIT from 1994 to 1996 before accepting a position at Georgia Tech.
Professional career
Essa's work focuses mainly in the areas of computer vision, computational photography, computer graphics and animation, robotics, computational perception, human-computer interaction, machine learning, computational journalism and artificial intelligence.
After departing MIT, Essa accepted a position as an assistant professor in the College of Computing at Georgia Tech. Today, he holds the position of a professor, and continues his research endeavors alongside his teaching career.
Essa has taught various courses over the years on digital video special effects, computer vision, computational journalism and computational photography. In the spring of 2013, Essa taught a free online course on computational photography, on the MOOC platform Coursera. He is affiliated with the GVU Center and RIM@GT, and is one of the faculty members of the Computational Perception Laboratory at Georgia Tech.
In addition to this, Essa has organized the Computational Journalism Symposium both in 2008 and 2013. He is credited, alongside his doctoral student Nick Diakopoulos, with coining the term computational journalism back in 2006, when they taught the first class on the subject.
Most recently, Essa has worked as a researcher / consultant with Google to develop a video stabilization algorithm alongside two of his doctoral students, Matthias Grundmann and Vivek Kwatra, which now runs on YouTube, and allows users to stabilize their uploaded videos in real-time.
Selected bibliography
Kwatra, Vivek, Arno Schödl, Irfan Essa, Greg Turk, and Aaron Bobick. "Graphcut textures: image and video synthesis using graph cuts." In ACM Transactions on Graphics, vol. 22, no. 3, pp. 277–286. ACM, 2003.
Kidd, Cory D., Robert Orr, Gregory D. Abowd, Christopher G. Atkeson, Irfan A. Essa, Blair MacIntyre, Elizabeth Mynatt, Thad E. Starner, and Wendy Newstetter. "The aware home: A living laboratory for ubiquitous computing research." In Cooperative buildings. Integrating information, organizations, and architecture, pp. 191–198. Springer Berlin Heidelberg, 1999.
Essa, Irfan A., and Alex Paul Pentland. "Coding, analysis, interpretation, and recognition of facial expressions." IEEE Transactions on Pattern Analysis and Machine Intelligence 19, no. 7 (1997): 757-763.
Schödl, Arno, Richard Szeliski, David H. Salesin, and Irfan Essa. "Video textures." In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 489–498. ACM Press/Addison-Wesley Publishing Co., 2000.
Kwatra, Vivek, Irfan Essa, Aaron Bobick, and Nipun Kwatra. "Texture optimization for example-based synthesis." In ACM Transactions on Graphics, vol. 24, no. 3, pp. 795–802. ACM, 2005.
Essa, Irfan Aziz, and Alex P. Pentland. "Facial expression recognition using a dynamic model and motion energy." In Computer Vision, 1995. Proceedings., Fifth International Conference on, pp. 360–367. IEEE, 1995.
Moore, Darnell J., Irfan A. Essa, and Monson H. Hayes III. "Exploiting human actions and object context for recognition tasks." In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, vol. 1, pp. 80–86. IEEE, 1999.
Basu, Sumit, Irfan Essa, and Alex Pentland. "Motion regularization for model-based head tracking." In Pattern Recognition, 1996., Proceedings of the 13th International Conference on, vol. 3, pp. 611–616. IEEE, 1996.
Haro, Antonio, Myron Flickner, and Irfan Essa. "Detecting and tracking eyes by using their physiological properties, dynamics, and appearance." In Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on, vol. 1, pp. 163–168. IEEE, 2000.
Mynatt, Elizabeth D., Irfan Essa, and Wendy Rogers. "Increasing the opportunities for aging in place." In Proceedings on the 2000 conference on Universal Usability, pp. 65–71. ACM, 2000.
Grundmann, Matthias, Vivek Kwatra, Mei Han, and Irfan Essa. "Efficient hierarchical graph-based video segmentation." In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2141–2148. IEEE, 2010.
Grundmann, Matthias, Vivek Kwatra, and Irfan Essa. "Auto-directed video stabilization with robust L1 optimal camera paths." In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 225–232. IEEE, 2011.
References
External links
Essa's Academic Home Page
Computational Perception Lab
Year of birth missing (living people)
Living people
Georgia Tech faculty
Computer vision researchers
MIT Media Lab people
Pakistani emigrants to the United States
American academics of Pakistani descent |
1515047 | https://en.wikipedia.org/wiki/Imagine%20Software | Imagine Software | Imagine Software was a British video games developer based in Liverpool which existed briefly in the early 1980s, initially producing software for the ZX Spectrum and VIC-20. The company rose quickly to prominence and was noted for its polished, high-budget approach to packaging and advertising (at a time when this was not commonplace in the British software industry), as well as its self-promotion and ambition.
Following Imagine's high-profile demise under mounting debts in 1984, the name was bought and used as a label by Ocean Software until the late 1980s.
History
Founding and early success
Imagine Software was founded in 1982 by former members of Bug-Byte including Mark Butler, David Lawson and Eugene Evans. Butler and Evans had previously worked at Microdigital, one of the first computer stores in the UK. The owner of Microdigital, Bruce Everiss, was invited to join the company to run its day-to-day operations and PR department. Imagine Software produced several very successful games, including Arcadia for the VIC-20 and ZX Spectrum, throughout 1982 and 1983, but some games shipped with serious, game-breaking bugs. The company grew in size through this period, at one point employing upwards of 80 people, a large number for its time, and splashed out large sums of money on company cars and the founding of a racing team to race in the Isle of Man TT race.
Financial troubles and demise
Rumours of Imagine's financial situation began to circulate in December 1983 following the revelations that an estimated £50,000 of its advertising bills had not been paid. The following year the debts mounted, with further advertising and tape duplication bills going unpaid, and Imagine was forced to sell the rights to its games to Beau Jolly in order to raise money. The company then achieved nationwide notoriety when it was filmed the following year by a BBC documentary crew while in the process of going spectacularly bust. Mark Butler also made an appearance on Thames Television's Daytime programme in 1984, talking about his experience of having been a millionaire who lost his money at a young age.
On 28 June 1984 a writ was issued against Imagine by VNU Business Press for money owed for advertising in Personal Computer Games magazine, and the company was wound up on 9 July 1984 at the High Court in London after it was unable to raise the £10,000 required to pay this debt (though by this time its total debts ran to hundreds of thousands of pounds).
Legacy
Former programmers went on to establish Psygnosis and Denton Designs. The company's back catalogue was owned by Beau Jolly, while rights to the Imagine label were acquired by Ocean Software, which primarily used it to publish home computer conversions of popular arcade games.
In other media
The Black Mirror interactive film Bandersnatch, released in 2018, alludes to Imagine Software and the failed work to produce Bandersnatch. The film starts on 9 July 1984, the date of Imagine's closure, and includes a shot of the cover of Crash reporting on the closure. Within the film, the fictional software company Tuckersoft, which had developed both Commodore 64 and ZX Spectrum games, places its financial future on the attempt to produce Bandersnatch, and in some scenarios falls into bankruptcy after the game fails to appear.
Megagames
Imagine had intended to develop six so-called "Megagames", the most well-known of which were Psyclapse and Bandersnatch. These games were designed to push the boundaries of the hardware of the time, even to the extent that they were intended to be released with a hardware add-on which would have increased the capabilities of the computer, as well as guarding against piracy. The games were advertised heavily and would have retailed at around £30 – an expensive price tag when the average price of a game at the time was £7.20 – but Imagine's collapse meant that they remained vaporware and never saw the light of day.
During the BBC documentary it was revealed that Psyclapse was little more than a paper sketch, though the name was later used for a sub-label of Psygnosis. Most of the concepts originally intended for Bandersnatch eventually appeared in another Psygnosis game, Brataccas, for the 16-bit Atari ST, Amiga and Macintosh computers.
Games
Arcadia, 1982
Wacky Waiters, 1982
Frantic, 1982
Catcha Snatcha, 1983
Schizoids, 1983
Ah Diddums, 1983
Molar Maul, 1983
Jumping Jack aka Leggit!, 1983
Zip Zap, 1983
Zzoom, 1983
Bewitched, 1983
Stonkers, 1983
Alchemist, 1983
Pedro, 1984
Cosmic Cruiser, 1984
BC Bill, 1984
References
External links
The Bubble Bursts - article from CRASH documenting the fall of Imagine Software
Imagine Software profile on MobyGames
Defunct companies based in Liverpool
Video game companies established in 1982
Video game companies disestablished in 1984
Defunct video game companies of the United Kingdom
Video game development companies
1982 establishments in England
1984 disestablishments in England
British companies disestablished in 1984
British companies established in 1982 |
475027 | https://en.wikipedia.org/wiki/Estate%20agent | Estate agent | An estate agent is a person or business that arranges the selling, renting, or management of properties and other buildings. An agent that specialises in renting is often called a letting or management agent. Estate agents are mainly engaged in the marketing of property available for sale, and a solicitor or licensed conveyancer is used to prepare the legal documents. In Scotland, however, many solicitors also act as estate agents, a practice that is rare in England and Wales.
"Estate agent" also remains the current title for a person responsible for managing a group of privately owned, wholly or mostly tenanted properties held under one ownership. Alternative titles are Factor, Steward, or Bailiff, depending on the era, region, and extent of the property concerned.
Origin
The term originally referred to a person responsible for managing a landed estate, while those engaged in the buying and selling of homes were "House Agents", and those selling land were "Land Agents". However, in the 20th century, "Estate Agent" started to be used as a generic term. An estate agent is roughly synonymous with the United States term real estate broker.
Estate agents need to be familiar with their local area, including factors that could increase or decrease property prices; for example, a planned new road or airport can blight nearby houses, while the closing of a quarry or the improvement of an area can enhance prices. In advising clients on an asking price, the agent must be aware of recent sale prices (or rental values) for comparable properties.
Regulation
The full legal term and definition of an estate agent within the UK can be found on the Office of Fair Trading (OFT) website. Enforcement of these regulations is also the responsibility of the OFT. (Annotation: The OFT was dissolved in 2014).
In the United Kingdom, residential estate agents are regulated by the Estate Agents Act 1979, the Property Misdescriptions Act 1991 (which was due to be repealed in October 2013), and the more recently enacted Consumers, Estate Agents and Redress Act 2007.
In September 2012, consumer protection regulations (CPRs) were introduced, which now regulate the residential sales process.
For residential property, there are also a few trade associations for estate agents, including INEA (the Independent Network of Estate Agents) and NAEA Propertymark (formerly known as the National Association of Estate Agents). NAEA Propertymark members can be disciplined for breaches of their code of conduct. The disciplinary process includes everything from cautions and warnings through to more severe penalties of up to £5,000 for each rule breached, and a maximum penalty of €5 million for breaches of the specific anti-money-laundering rules.
Some estate agents are members of the Royal Institution of Chartered Surveyors (RICS), the principal body for UK property professionals, dealing with residential, commercial, and agricultural property. Members, known as "Chartered Surveyors", are elected based on examination and are required to adhere to a code of conduct, which includes regulations about looking after their clients' money and holding professional indemnity insurance in case of error or negligence.
The Ombudsman for Estate Agents Scheme obtained OFT approval for its Code of Practice for Residential Sales in 2006 and, as of December 2018, claims to have over 15,897 sales offices and 14,746 letting offices registered with TPO.
There is a legal requirement to belong to an approved redress scheme in order to trade as an estate agent, and agents can be fined if they are not a member of one. The redress scheme requirement was brought in alongside, and to govern agents in reference to, the Home Information Pack (HIP).
Industry structure in the UK
A handful of national residential estate agents chains exist, with the majority being locally or regionally specialised companies.
Several multi-national commercial agencies exist, typically Anglo-American, pan-European, or global. These firms all seek to provide the full range of property advisory services, not just agency.
Only a handful of large firms trade in both commercial and residential property.
Fees
Estate agents' fees are charged to the seller of the property. Estate agents normally work on a 'no sale, no fee' basis: if the property does not sell, the seller pays the agent nothing. If the estate agent introduces a buyer who completes the purchase, the agent charges anything from 1% to 3.5% of the sale price, with the average in 2018 reported as 1.42% including VAT.
Alternative estate agency models, most notably online estate agents, generally offer a choice of fees, most of which are either paid on an up front basis, or paid later on completion. Fees range from around £300 to £800.
Lettings
Estate agents who handle lettings of commercial property normally charge a fee of 7 to 15% of the first year's rent, plus the whole of the first month's rent. If two agents are charging 10%, they will split the fee between them. Estate agents selling commercial property (known as investment agents) typically charge 1% of the sale price.
The fees charged by residential letting agents vary, depending on whether the agent manages the property or simply procures new tenants. Charges to prospective tenants can vary from zero to £300 in non-refundable fees usually described as application, administration or processing fees (or all three). There are no guidelines for letting agents on charges, except that they are forbidden by law to charge a fee for a list of properties. All charges to tenants are illegal in Scotland. Otherwise, they are free to charge as they please in England and Wales.
The first month's rent in advance plus a refundable bond (usually equal to one month's rent) is also generally required. Most residential lettings in the UK are governed by assured shorthold tenancy contracts. Assured shorthold tenancies (generally referred to simply as "shorthold") give less statutory protection than earlier, mostly obsolete, types of residential lettings. Shorthold tenancy agreements are standard contracts; the wording is generally available from legal stationers and on the internet for around £1.00, although most lettings agents will charge £30 to provide one.
It is important that tenant referencing checks are in place prior to a tenant moving into a property. The credit check can be run using credit history data from Equifax, Experian or Call Credit (the three main UK providers) using an in-house website system or a managed referencing service. A reputable agent will also ask for an employment reference and a previous landlord reference to attempt to verify that the tenant can afford the rental on the property and that there were no serious problems with the previous agent. It is also essential that proof of identity and proof of residency are also collected and filed.
Selling
Estate agents selling residential property generally charge between 0.5% (sole agency) and 3% (multiple agency) of the achieved sale price, plus VAT (value-added tax). Some agents may charge for additional marketing such as newspapers and websites; however, the advertising is generally included in the fee. All fees must be clearly agreed upon and noted in the agency agreement before marketing begins, so there is no confusion over additional charges.
In July 2016, Which? found the national average estate agents fees to be 1.3%, although fees vary widely.
Other than for the cheapest properties, estate agent fees are generally the second most expensive component of the cost of moving house in the United Kingdom after stamp duty.
High Street agents rarely charge upfront costs for selling, nor costs for aborting a sale and withdrawing a home from the market. So whilst other options are available for selling property, online agents often charge upfront fees with no guarantee of selling, and perhaps without the motivation that a no-sale, no-fee High Street agency will offer.
Other approaches
Since around 2005, online estate agents have provided an alternative to the traditional fee structure, claiming cheaper, fixed-fee selling packages. These online estate agents claim to give private property sellers the ability to market their property via the major property portals (the preferred medium used by traditional high street estate agents) for a fraction of the cost of the traditional estate agency. Online estate agents claim that they can advertise a property as effectively as traditional estate agents by using digital marketing techniques and by centralising their back-office operation in one location, rather than having physical offices in the town in which they are based. Online estate agents normally cover the whole of the UK, claiming to be able to reduce fees by removing the geographical boundaries that traditional estate agents generally have. Lastly, online estate agents often charge up front, whereas a traditional agent would normally charge nothing if the property is not sold.
In February 2010 the Office of Fair Trading (OFT) announced that a change in the legislation for estate agents has led to a shake up in the way homes are sold, allowing cheaper online agents to become more established than they could before.
Intermediary estate agents and/or property portals based in the United Kingdom have started to encourage UK and worldwide estate agents to collaborate by showing all their properties, thus allowing site visitors to see a vast array of UK and overseas properties on one website.
Research undertaken in 2007 found that the most effective way of selling property was via 'For Sale' signs: 28% of customers had seen the estate agent's 'For Sale' signs before researching the properties in more depth. Searching for houses via the internet came a close second (21%), with newspapers third (17%). The fourth most effective way, and the most traditional, was customers visiting an estate agent's office (15%). By 2010, 80% to 90% of properties were found via the internet, and agents saw fewer people walking into their offices. Boards are still very effective, but many agents are now cutting out paper advertising and moving to digital channels such as eMags and the web.
Other methods included auctions (11%), word of mouth (3%) and leaflets (2%).
Technology
Estate agents use estate agency software to manage their buying applicants, property viewings, marketing and property sales. Estate agents can use the software to prepare property particulars which are used to advertise the property either online or in print. They can also record the requirements of a buying applicant and automatically match them against their database of properties. Once a sale is agreed upon, they can manage the chain of linked property sales using the software.
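As a toy illustration of the kind of applicant-to-property matching described above, the sketch below compares a buying applicant's recorded requirements against a list of properties. The field names and matching criteria are invented for the example and do not reflect any particular estate agency package.

```python
from dataclasses import dataclass

@dataclass
class Property:
    address: str
    price: int        # asking price in GBP
    bedrooms: int
    area: str

@dataclass
class Applicant:
    name: str
    max_price: int
    min_bedrooms: int
    areas: set

def match_properties(applicant, properties):
    """Return the properties that satisfy the applicant's recorded requirements."""
    return [
        p for p in properties
        if p.price <= applicant.max_price
        and p.bedrooms >= applicant.min_bedrooms
        and p.area in applicant.areas
    ]

listings = [
    Property("1 High Street", 250_000, 3, "Brentford"),
    Property("2 Mill Lane", 450_000, 4, "Bracknell"),
]
buyer = Applicant("J. Smith", max_price=300_000, min_bedrooms=2, areas={"Brentford"})
print(match_properties(buyer, listings))   # matches only 1 High Street
```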
Estate agency software will also help with property marketing by automatically distributing the property details to property portals.
The latest technology has influenced the growth of online agents, and the property sector is becoming more reliant on the use of technology to appeal to the consumer market. One company currently doing this is Matterport, which has created a camera that produces digital 3D models, VR floorplans and ultra-HD photography. This has enabled digital marketers to influence online behaviour in the property market through the use of a web portal. By using secure websites, marketers can then monitor the level of user activity and gain invaluable information to help sellers and estate agents utilise their marketing and better appeal to the needs of their customers.
In recent years agents have started working together again through systems similar to the American MLS (multi-listing service). This is where a main agent takes on a property and sends its details via the MLS to other local (sub) agents. The sub-agents market the property and introduce applicants to the main agent. MLS can achieve more offers, sell a property quicker, and is offered by agents as a premium service.
In the US, property data is passed from the agent's software via the RETS data feed schema. In the UK, the INEA idx (information data exchange) data feed is being adopted by many software providers to receive sub-agent (MLS) property listings back.
In both cases technology via MLS and idx means that sub agents collaborating can populate many more properties into their websites by working together.
Working as an estate agent
In Britain, no formal qualifications are required to become an estate agent; however, local property knowledge and customer-service skills are considered worthwhile.
Estate agent speak
A humorous guide to estate agent language was published by the BBC in 2002.
See also
Buying agent
Real estate broker
Real estate business
Real estate development
Real estate bubble
References
External links
Housing in the United Kingdom
Real estate in the United Kingdom
Sales occupations |