https://en.wikipedia.org/wiki/Self-extracting%20archive
Self-extracting archive
A self-extracting archive (SFX or SEA) is a computer executable program which contains compressed data in an archive file combined with machine-executable program instructions to extract this information on a compatible operating system, without the need for a suitable extractor to be already installed on the target computer. The executable part of the file is known as a decompressor stub. Self-extracting files are used to share compressed files with a party that may not have the software needed to decompress an ordinary archive. Users can also use self-extracting archives to distribute their own software; for example, the WinRAR installation program is built with the graphical RAR self-extracting module Default.sfx.

Overview

A self-extracting archive incorporates an executable module, a stub used to extract the files from the compressed archive. Such a file does not require an external program to decompress its contents; it can perform the operation itself. However, file archivers like WinRAR can still treat self-extracting files like any other compressed file. Users who are unwilling to run a self-extracting file they received (for example, when it may contain a virus) can use a file archiver to view or decompress its contents without running the executable code.

On executing a self-extracting archive under an operating system which supports it, the archive contents are extracted and stored as files on the disk. Often the embedded self-extractor supports a number of command line arguments to control its behaviour, e.g. to specify the target location or to select only specific files for extraction. Non-self-extracting archives contain only the archived files and therefore must be extracted with a compatible program. Self-extracting archives cannot self-extract under a different operating system, but most often can still be opened with a suitable extractor, as this tool will disregard the executable part of the file and extract only the archive resource. In some cases this requires the self-extracting executable to be renamed with a file extension associated with the corresponding packer.

Self-extracting files usually have an .exe extension like other executable files. For example, an archive called somefiles.zip can be opened under any operating system by a suitable archive manager which supports both the file format and the compression algorithm used. It could alternatively be converted into somefiles.exe, which will self-extract on a machine running Microsoft Windows without the need for an archive manager. It will not self-extract under Linux, but it can still be opened with a suitable Linux archive manager.

There are several functionally equivalent but incompatible archive file formats, including ZIP, RAR, 7z and many others. Some programs can manage (create, extract, or modify) only one type of archive, whilst many others can handle multiple formats. There is additionally a distinction between the file format and the compression algorithm used: a single file format, such as 7z, can support multiple compression algorithms, including LZMA, LZMA2, PPMd and BZip2. For a decompression utility to correctly expand an archive of either the self-extracting or the standard variety, it must be able to handle both the file format and the algorithm used. The exact executable code placed at the beginning of a self-extracting archive may therefore need to vary depending on the options used to create the archive; the decompression routines for an LZMA 7z archive differ from those for an LZMA2 7z archive, for example.
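The reason an ordinary extractor can open a ZIP-based self-extracting archive is that ZIP readers locate the archive's central directory from the end of the file, so a prepended executable stub is simply ignored. A minimal sketch in Python, assuming a hypothetical ZIP-based SFX named somefiles.exe:

import zipfile

# A ZIP-based SFX is a valid ZIP file with an executable stub prepended.
# zipfile finds the end-of-central-directory record by scanning from the
# end of the file, so the stub at the front is never touched.
with zipfile.ZipFile("somefiles.exe") as sfx:
    sfx.printdir()                 # list the archived files
    sfx.extractall("extracted")    # extract without executing the stub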
Several programs can create self-extracting archives. For Windows there are WinZip, WinRAR, 7-Zip, WinUHA, KGB Archiver, Make SFX, the built-in IExpress wizard and many others, some experimental. For Macintosh there are StuffIt, The Unarchiver, and 7zX. There are also programs that create self-extracting archives on Unix as shell scripts which utilize programs like tar and gzip (which must be present on the destination system); a sketch of this stub-plus-payload approach follows below. Others (like 7-Zip or RAR) can create self-extracting archives as regular executables in ELF format. An early example of a self-extracting archive was the Unix shar archive, in which one or more text files were combined into a shell script that, when executed, recreated the original files.

Self-extracting archives can be used to archive any kind of data files as well as executable files. They must be distinguished from executable compression, where the executable file contains a single executable only and running the file does not result in the uncompressed file being stored on disk, but in its code being executed in memory after decompression.
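In the spirit of the shell-script technique described above, a minimal self-extracting archive can be sketched in Python: a small decompressor stub is concatenated with an encoded tarball, and running the generated script re-creates the original files. All file names here are hypothetical.

import base64
import io
import tarfile

# The stub is written out verbatim as the head of the generated script;
# everything after the marker line is the base64-encoded payload.
STUB = '''#!/usr/bin/env python3
import base64, io, sys, tarfile
with open(sys.argv[0], "rb") as f:
    payload = f.read().split(b"__ARCHIVE_BELOW__\\n", 1)[1]
with tarfile.open(fileobj=io.BytesIO(base64.b64decode(payload)), mode="r:gz") as tar:
    tar.extractall(".")   # re-create the archived files next to the script
'''

def make_sfx(paths, out="selfextract.py"):
    # Pack the inputs into an in-memory gzip-compressed tarball.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for p in paths:
            tar.add(p)
    # Stub + marker + payload: the "decompressor stub" and the archive
    # travel in a single file, as in a binary SFX executable.
    with open(out, "wb") as f:
        f.write(STUB.encode())
        f.write(b"__ARCHIVE_BELOW__\n")
        f.write(base64.b64encode(buf.getvalue()))

make_sfx(["somefiles"])   # hypothetical input; run selfextract.py to extract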
Advantages

Archiving files rather than sending them separately allows several related files to be combined into a single resource. It also has the benefit of reducing the size of files that are not already efficiently compressed. (Many compression algorithms cannot make already-compressed data any smaller; compression will therefore usually shrink a plain text document but hardly affect a JPEG picture or a word processor document, since most modern word processor file formats already involve some level of compression.) Self-extracting archives also extend the advantages of compressed archives to users who do not have the necessary programs installed on their computer to otherwise extract their contents, but who are running a compatible operating system. Even for users who do have archive-managing software, a self-extracting archive may still be slightly more convenient.

Self-extracting archives also allow their contents to be encrypted for security, provided the chosen underlying compression format and algorithm allow for it. In many cases, though, the file and directory names are not part of the encryption and can be seen by anyone, even without the key or password. Additionally, some encryption schemes rely on there being no known partial plaintexts, so if an attacker can guess part of the contents of the files from their names or context alone, they may be able to break the encryption on the entire archive with a reasonable amount of computing power and time. Care therefore needs to be taken, or a more suitable encryption algorithm used.

Disadvantages

A disadvantage of self-extracting archives is that running executables of unverified reliability, for example when sent as an email attachment or downloaded from the Internet, may be a security risk. An executable file described as a self-extracting archive may actually be a malicious program. One protection against this is to open it with an archive manager instead of executing it (losing the minor advantage of self-extraction); the archive manager will either report the file as not an archive or will show the underlying metadata of the executable file - a strong indication that the file is not actually a self-extracting archive.

Additionally, some systems for distributing files do not accept executable files, in order to prevent the transmission of malicious programs. These systems disallow self-extracting archive files unless they are cumbersomely renamed by the sender to, say, somefiles.exx, and later renamed back by the recipient. This technique is gradually becoming less effective, however, as an increasing number of security suites and antivirus packages scan file headers for the underlying format rather than relying on a correct file extension. Such security systems will not be fooled by an incorrect file extension and are particularly prevalent in the analysis of email attachments.

Self-extracting archives will only run under the operating system family and platform with which they are compatible, making it more difficult to extract their contents under other systems. Self-extracting archives that can themselves run on multiple targets (such as DOS and CP/M), rather than merely having contents usable under multiple systems, are very rare, because they require the embedded decompressor stub to be a fat binary. Also, since a self-extracting archive must include executable code to handle the extraction of the contained archive file, it is slightly larger than the original archive.

See also

Installer
Self-booting disk
Shar
Kolmogorov complexity, a theoretical lower bound on the size of a self-extracting archive

External links

http://www.winzip.com
http://www.7-zip.org
http://www.jackmccarthy.com/malware/WinRAR_Archive_Creation.htm (About SFX)
http://hem.bredband.net/magli143/exo/ for 6502/Z80/6809 executables
http://74.cz/make-sfx/
https://en.wikipedia.org/wiki/Software%20aging
Software aging
In software engineering, software aging is the tendency for software to fail or cause a system failure after running continuously for a certain time, or because of ongoing changes in the systems surrounding it. Software aging has several causes, including the inability of old software to adapt to changing needs or changing technology platforms, and the tendency of software patches to introduce further errors. As software gets older it becomes less well suited to its purpose and will eventually stop functioning as it should. Rebooting or reinstalling the software can act as a short-term fix. A proactive fault-management method for dealing with software aging is software rejuvenation, which can be classified as an environment-diversity technique usually implemented through software rejuvenation agents (SRA).

The phenomenon was first identified by David Parnas, in an essay that explored what to do about it: "Programs, like people, get old. We can't prevent aging, but we can understand its causes, take steps to limit its effects, temporarily reverse some of the damage it has caused, and prepare for the day when the software is no longer viable."

Interest in the software aging phenomenon has grown from both an academic and an industrial point of view, and recent research has focused on clarifying its causes and effects. Memory bloat and leaks, data corruption, and unreleased file locks are particular causes of software aging.

Proactive management of software aging

Software failures are a more likely cause of unplanned system outages than hardware failures, because software exhibits an increasing failure rate over time due to data corruption, numerical error accumulation, and unbounded resource consumption. In both widely used and specialized software, a common action to clear a problem is rebooting, because aging occurs due to the complexity of software, which is never free of errors. It is almost impossible to fully verify that a piece of software is bug-free; even high-profile software such as Windows and macOS receives continual updates to improve performance and fix bugs. Software development tends to be driven by the need to meet release deadlines rather than to ensure long-term reliability. Designing software that is immune to aging is difficult, and not all software ages at the same rate, since some users use a system more intensively than others.

Rejuvenation

Because aging inevitably leads to failures in software systems, software rejuvenation can be employed proactively to prevent crashes or degradation. This proactive technique was identified as a cost-effective solution during research on fault-tolerant software at AT&T Bell Laboratories in the 1990s. Software rejuvenation works by removing accumulated error conditions and freeing up system resources, for example by flushing operating system kernel tables, using garbage collection, and reinitializing internal data structures; the best-known rejuvenation method is simply to reboot the system. Rejuvenation techniques range from the simple to the complex. The method most individuals are familiar with is the hardware or software reboot. A more technical example is the rejuvenation method of the Apache web server, which implements one form of rejuvenation by killing and recreating worker processes after they have served a certain number of requests. Another technique is to restart virtual machines running in a cloud computing environment.
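A minimal sketch of this process-recycling pattern in Python (Unix-only, and not Apache's actual implementation; MAX_REQUESTS and handle_request are hypothetical placeholders):

import os
import sys

MAX_REQUESTS = 1000  # recycle a worker after this many requests (assumed policy)

def handle_request():
    ...  # real work goes here; leaked memory would accumulate across calls

def worker():
    # Serve a bounded number of requests, then exit: any leaked memory or
    # corrupted in-process state is discarded along with the process.
    for _ in range(MAX_REQUESTS):
        handle_request()
    sys.exit(0)

def supervisor():
    # Replace each retired worker with a fresh process, so the service
    # keeps running while per-process state is periodically reset.
    while True:
        pid = os.fork()
        if pid == 0:
            worker()
        os.waitpid(pid, 0)

if __name__ == "__main__":
    supervisor()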
The multinational telecommunications corporation AT&T has implemented software rejuvenation in the real-time system that collects billing data in the United States for most telephone exchanges. Systems which have employed software rejuvenation methods include:

Transaction processing systems
Web servers
Spacecraft systems

The IEEE International Symposium on Software Reliability Engineering (ISSRE) hosted the 5th annual International Workshop on Software Aging and Rejuvenation (WoSAR) in 2013. Topics included:

Design, implementation, and evaluation of rejuvenation mechanisms
Modeling, analysis, and implementation of rejuvenation scheduling
Software rejuvenation benchmarking

Memory leaks

In systems that use an operating system, user programs must request memory blocks in order to perform an operation. After the operation (e.g. a subroutine) is completed, the program is expected to free all the memory blocks allocated for it, making them available to other programs. In programming languages without a garbage collector (e.g. C and C++) it is up to the programmer to call the necessary memory-releasing functions and to account for all the unused data within the program. However, this does not always happen: due to software bugs, a program may consume more and more memory, eventually causing the system to run out of it. In low-memory conditions the system usually functions more slowly because of the performance bottleneck caused by intense swapping (thrashing); applications become unresponsive, and those that unexpectedly request large amounts of memory may crash. If the system runs out of both memory and swap space, even the operating system may crash, causing the whole system to reboot.

Programs written in languages with a garbage collector (e.g. Java) usually rely on this feature to avoid memory leaks, so the "aging" of such programs is at least partially dependent on the quality of the garbage collector built into the language's runtime environment. Sometimes critical components of the operating system itself can be a source of memory leaks and the main culprit behind system stability problems. In Microsoft Windows, for example, the memory use of Windows Explorer plug-ins and of long-lived processes such as services can impair the reliability of the system to the point of making it unusable, and a reboot may be needed to make the system work again. Software rejuvenation helps with memory leaks because it forces all the memory used by an application to be released; the application can be restarted, but it starts with a clean slate.

Implementation

Two methods for implementing rejuvenation are:

Time-based rejuvenation
Prediction-based rejuvenation

A sketch of the time-based variant follows.
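A minimal sketch of time-based rejuvenation in Python (the service name and the daily interval are assumed for illustration; a prediction-based variant would instead monitor resource usage, such as resident memory, and restart only when exhaustion is forecast):

import subprocess
import time

REJUVENATION_INTERVAL = 24 * 60 * 60  # restart once a day (assumed policy)

while True:
    # Start a fresh instance of the (hypothetical) service binary.
    service = subprocess.Popen(["./the-service"])
    time.sleep(REJUVENATION_INTERVAL)
    # Retire the old instance regardless of its observed state; the next
    # loop iteration replaces it with a clean one.
    service.terminate()
    service.wait()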
Memory bloating

Garbage collection is a form of automatic memory management whereby the system automatically recovers unused memory: for example, the .NET Framework manages the allocation and release of memory for software running under it. But automatically tracking these objects takes time, and it is not perfect. .NET-based web services manage several logical types of memory, such as the stack, the unmanaged heap, and the managed heap (free space). As physical memory fills up, the operating system writes rarely used parts of it to disk so that it can reallocate them to other applications, a process known as paging or swapping; when that memory is needed again, it must be reloaded from disk. If several applications all make large demands, the operating system can spend much of its time merely moving data between main memory and disk, a process known as disk thrashing. Since the garbage collector has to examine all of the allocations to decide which are in use, it may exacerbate this thrashing; as a result, extensive swapping can stretch garbage collection cycles from milliseconds to tens of seconds, with corresponding usability problems.

Further reading

R. Matias Jr. and P. J. Freitas Filho, "An experimental study on software aging and rejuvenation in web servers," Proceedings of the 30th Annual International Computer Software and Applications Conference (COMPSAC'06), vol. 01, pp. 189–196, 2006.
M. Grottke, R. Matias Jr., and K. S. Trivedi, "The fundamentals of software aging," Workshop on Software Aging and Rejuvenation (WoSAR/ISSRE), 2008.
R. Matias Jr., P. Barbetta, K. Trivedi, and P. Freitas Filho, "Accelerated degradation tests applied to software aging experiments," IEEE Transactions on Reliability 59(1): 102–114, 2010.
M. Grottke, L. Li, K. Vaidyanathan, and K. S. Trivedi, "Analysis of software aging in a web server," IEEE Transactions on Reliability, vol. 55, no. 3, pp. 411–420, 2006.
M. Grottke and K. Trivedi, "Fighting bugs: remove, retry, replicate, and rejuvenate," IEEE Computer 40(2): 107–109, 2007.
More papers appear in the Proceedings of the Workshop on Software Aging and Rejuvenation (WoSAR '08, '10, '11, '12, '13, '14) at IEEE Xplore.
https://en.wikipedia.org/wiki/Andrzej%20Grzegorczyk
Andrzej Grzegorczyk
Andrzej Grzegorczyk (22 August 1922 – 20 March 2014) was a Polish logician, mathematician, philosopher, and ethicist noted for his work in computability, mathematical logic, and the foundations of mathematics.

Historical family background

Andrzej Grzegorczyk's family background has its origins in the Polish intellectual, religious, patriotic, and nationalist traditions. He was the only child of a well-educated and wealthy Galician family: his father, Piotr Jan Grzegorczyk (1894–1968), was a Polonist and historian of Polish literature engaged in literary criticism, bibliographic studies, and chronicles of Polish cultural life, and his mother, Zofia Jadwiga née Zdziarska, was a medical doctor from a Polish landed-gentry family. This rich family heritage was fundamental in shaping both Grzegorczyk's intellectual formation and his later academic career. In particular, it laid the foundations of his philosophical system, a specific apologetics mixing Christian doctrine with certain elements of communist ideology; his ecumenical approach towards both the Roman Catholic Church in Poland and the Russian Orthodox Church; his manifestly friendly attitude towards the Eastern European countries and their national philosophers; and his difficulty assimilating in the Western European countries, which he visited mostly for short stays.

Education

During the 1939–1944 Nazi and Soviet occupation of Poland in the course of the Second World War, Grzegorczyk was a volunteer member of the Home Army, the dominant Polish resistance movement, formed in February 1942 from the Union of Armed Struggle and other Polish partisan groups loyal to the Polish Government-in-Exile as the armed wing of the Polish Underground State. As an insurgent of the Warsaw Uprising under the pseudonym Bułka, he served with the rank of Rifleman in the 2nd Platoon of the Company 'Harcerska' of the AK Battalion 'Gustaw' of the 'Róg' Group of the Group 'Północ' of the Home Army. In the explosion of a German remote-controlled demolition vehicle, a Borgward IV of Panzer-Abteilung (Funklenk) 302, on 13 August 1944 in Jan Kiliński street in the Warsaw Old Town, which killed more than 300 Polish partisans and civilians, he was injured in the legs and placed in a field hospital. Several days later he rejoined his unit, but because of his health he did not take part in further fighting, and he left Warsaw on 31 August 1944 as a civilian in the evacuation through the city canals from the Old Town to Śródmieście. He received his primary education at a private school of the Roman Catholic Educational Society 'Future', the same school attended by another Warsaw Uprising insurgent, the Polish politician Władysław Bartoszewski, who under the pseudonym Teofil served with the rank of Corporal in the Bureau of Information and Propaganda of the Headquarters of the Home Army and in 1940–1941 was imprisoned in the Auschwitz concentration camp. From 1938 Grzegorczyk attended the 6th Tadeusz Reytan Lyceum in Warsaw, continuing through the Polish underground secondary-education system and matriculating in May 1940, on the day of France's capitulation.
To avoid deportation to Nazi Germany for forced labour, he enrolled at a chemical high school and then, when the Nazi authorities permitted vocational secondary education in occupied Poland, at a three-year chemical school located on the grounds of the Warsaw University of Technology, where in 1942–1944 he was taught by professors of that university. At the same time he pursued clandestine higher education at two then-secret Polish universities, including physics at the University of Warsaw. At this stage Grzegorczyk was especially attracted to logic by a broadcast lecture on Stoic logic by the logician and mathematician Jan Łukasiewicz, who had served as Minister of Religious Denominations and Public Education of the Second Polish Republic in the 1919 government, was rector of the University of Warsaw in 1931–1932, and in May 1939 was elected a councillor of the Council of the City of Lwów as a representative of the Camp of National Unity, based in the Sanation movement. Practical exercises in mathematical logic were provided by Tadeusz Kotarbiński's student Henryk Hiż, who in the late 1930s had been a member of the anti-Sanation Democratic Club; as an insurgent of the Warsaw Uprising with the rank of Second Lieutenant he was imprisoned in the Nazi prisoner-of-war camp Oflag II-D Gross-Born (Grossborn-Westfalenhof), and after liberation by the 4th Infantry Division (Jan Kiliński) of the First Polish Army he emigrated to the US, where in 1948, as Henry Thadeus Hiz, he received a doctoral degree at Harvard University for the thesis An economical foundation for arithmetic, supervised by the famous American philosopher Willard Van Orman Quine, and later worked at the University of Pennsylvania in Philadelphia. Logic was also lectured by Jan Salamucha, a Roman Catholic priest and Christian philosopher. Psychology was taught by the Catholic priest and historian of philosophy Piotr Julian Chojnacki. Fine arts were lectured by the Polish art historian Michał Marian Walicki, who worked at the Department of Polish Architecture of the Warsaw University of Technology in 1929–1936 and at the Academy of Fine Arts in Warsaw in 1932–1939 and 1945–1949, was awarded the Officer's Cross of the Order of Polonia Restituta in 1947, and in 1949 was arrested and imprisoned until 1953 on the grounds of false accusations and fabricated evidence.
One of Grzegorczyk's underground teachers was the philosopher of science and culture Bogdan Suchodolski: a professor at the Chair of Pedagogy of the Jan Kazimierz University of Lwów from 1938 and at the University of Warsaw in 1946–1970; a member of the PAU from 1946; a member of the PAS from 1952, deputy scientific secretary in 1965–1970 and a member of its presidium in 1969–1980; chairman of the Committee on Education Studies of the PAS in 1953–1973; head of the Department of History of Science and Technology of the PAS in 1958–1974 and founding director of the Institute of Pedagogical Sciences at the Faculty of Education of the University of Warsaw in 1958–1968; from 1968 a member of the Front of National Unity (FJN) and in 1983 of its successor, the Patriotic Movement for National Rebirth (PRON), created to demonstrate unity and support for communist Poland's government and its governing Polish United Workers' Party (PZPR) in the aftermath of the 1981–1983 martial law, with a crucial role played by the centrist Alliance of Democrats and the pro-communist secular Roman Catholic organization PAX Association, which in 1949 had formed the publishing house Instytut Wydawniczy "Pax"; in 1985–1989 a member and Senior Marshal of communist Poland's parliament; and in 1982–1989 chairman of the National Council of Culture created by the principal author of martial law, Wojciech Jaruzelski.

Grzegorczyk was able to complete his originally underground education only after the Second World War ended. Already in 1945, at the Philosophical Faculty of the Jagiellonian University in Kraków, he finished his first degree, a magister in philosophy, with the thesis The Ontology of Properties, supervised by the renowned philosopher of physics Zygmunt Michał Zawirski, who had been the Faculty's dean in 1938–1939, had relocated to Kraków in 1937 after nine uninterrupted years as head of the Chair of Theory and Methodology of Sciences of the Adam Mickiewicz University in Poznań, and investigated problems on the borderline of philosophy and physics. In the thesis, with the help of Kotarbiński's reism, also propagated by Henryk Hiż, in the formal version due to Stanisław Leśniewski, he successfully presented an interpretation of Leśniewski's elementary ontology as a theory of higher logical types. Soon after graduation he received a postgraduate scholarship for the study of logic and the foundations of mathematics at the University of Warsaw, and on 26 May 1950, after examinations in philosophy and chemistry, he completed a doctoral degree in mathematics with the thesis On Topological Spaces in Topologies without Points, supervised by the famous mathematician Andrzej Stanisław Mostowski, who was awarded the Knight's Cross of the Order of Polonia Restituta in 1954, was elected a real member of the PAS in 1963, and after the war also supervised Helena Rasiowa's master's and doctoral theses in logic and the foundations of mathematics. All these teachers, underground and post-war alike, deeply influenced the young Grzegorczyk through their teaching style and general attitude. In later years this influence was reflected in his various life choices, his individual mental traits, his academic career both inside and outside Poland, and his style and methods of research and teaching.
More fundamentally for his creativity, this influence directly shaped his personal world view, which paradoxically mixed deeply opposed ideas and resulted in a borderline philosophical system based on discrimination against those features of a human mind which are either inborn or beyond personal choice.

Academic career

In 1946–1948 he was an assistant to Władysław Tatarkiewicz and, since Tatarkiewicz was then editor-in-chief of the first Polish philosophical journal, Przegląd Filozoficzny (Philosophical Review), he also served as secretary of its editorial board. In 1948 he submitted a paper on the semantics of descriptive language to the 10th International Congress of Philosophy in Amsterdam but, like most Polish philosophers invited by the organizers or submitting papers, he was not granted a passport and could not attend, although his abstract was included in the proceedings. This was an omen of a politically difficult time; logic then held the attention of many Polish philosophers, and the young Grzegorczyk chose the path of the majority. Soon after completing his doctorate he was employed by the Institute of Mathematics of the PAS, and after three years he qualified for a docent position on the basis of the book Some Classes of Recursive Functions, which at that time was the equivalent of the habilitation procedure. In 1950–1968 he held a secondary appointment at the Faculty of Mathematics and Mechanics of the University of Warsaw; in 1957 he was awarded the Stefan Banach Prize of the Polish Mathematical Society, and in 1961 he was appointed associate professor of mathematics. In 1963, along with Andrzej Mostowski and Czesław Ryll-Nardzewski, he participated in the famous logical conference The Theory of Models at the University of California, Berkeley, where he met Alfred Tarski and where both his individual results and those co-authored with Mostowski and Ryll-Nardzewski were the subject of several lectures. When the famous Dutch logician Evert Willem Beth died in 1964, the University of Amsterdam offered Grzegorczyk Beth's chair; he came to Amsterdam for a few months but returned to Warsaw, unable to find a place for himself in the Netherlands, then on the eve of the cultural revolution that would make Amsterdam the 'magic centre' of Europe. In 1967 he lectured for four months at the University of Amsterdam and was appointed a member and assessor of the Division of Logic, Methodology and Philosophy of Science and Technology of the International Union of History and Philosophy of Science, and in 1970 he lectured for two months at the Sapienza University of Rome. In 1960 he contributed a philosophical essay to the festschrift honouring Władysław Tatarkiewicz on his 70th birthday, to which his father contributed Tatarkiewicz's bibliography.
In the aftermath of the March 1968 Polish political crisis, as a result of his long-standing and well-documented involvement in oppositional actions, such as signing virtually every open letter against the restriction of freedoms under the communist regime, the inconvenient political circumstances motivated him to leave the University of Warsaw for the Institute of Mathematics of the PAS, where in the 1960s he had been appointed head of the Department of Foundations of Mathematics after Andrzej Mostowski left that position while remaining head of the Chair of Foundations of Mathematics at the University of Warsaw. In 1972 he was appointed full professor. In 1973 he organized the pioneering semester in mathematical logic at the Stefan Banach International Mathematical Center of the Institute of Mathematics of the PAS (the Banach Center, formed in January 1972 by agreement between the PAS and the Bulgarian, Czechoslovak, East German, Hungarian, Romanian, and Soviet national academies of sciences), which started the 1973–1992 cycle of three-to-four-month lecture programmes and allowed logicians and philosophers of the isolated Eastern Bloc to interact with their Western colleagues. In 1974, by mutual agreement, he left the Institute of Mathematics of the PAS for the Institute of Philosophy and Sociology of the PAS, to work in the Section of Logic, then headed by the model theorist Ryszard Wójcicki, founder of the series Trends in Logic and its related conferences alongside the institute's oldest journal, Studia Logica: An International Journal for Symbolic Logic; from 1982 he headed the Section of Ethics. At the same time he was well known as an active member of the Roman Catholic Church in Poland. In 1964, as one of the first Poles, he visited the French commune of Taizé, Saône-et-Loire, Burgundy, to associate with the ecumenical Christian monastic group known as the Taizé Community, a visit which initiated systematic contacts between Polish Catholic intellectuals and this ecumenical group. He was a member of the Warsaw branch of the Club of Catholic Intelligentsia, formed in 1956 after the Gomułka Thaw to stimulate independent thought and to inform Polish Catholics about Catholic philosophy in the countries outside the Eastern Bloc; he served as its deputy president in 1972–1973 and as a member of its board in 1973–1974 and 1976–1978. From the 1970s he practiced independent ecumenical activities, in particular a dialogue with the Russian Orthodox Church in Poland, for which he organized encounters between Polish Catholic and Russian Orthodox intellectuals at his own apartment, which in the late 1970s and early 1980s also hosted lectures of the Flying University organized by the Society of Scientific Courses (TKN). His model of Catholic–Orthodox ecumenism found a good realization in his collaboration with the Russian Orthodox priest Alexander V. Men of Moscow, an eminent chaplain of independent Russian intellectuals, who was murdered in 1990. He was a dedicated supporter of Russia and Ukraine; he felt that the Soviet Union was his second home, actively collaborated with many Soviet and Russian scholars, and claimed that Russians and Poles have a similar outlook on the world and much in common culturally.
Among the anti-communist oppositionists in Polish academia, he was well known as a supporter and propagator of the philosophy of nonviolence, a struggle that renounces any kind of violence; as spiritual masters he followed the teachings of the leader of Indian nationalism Mahatma Gandhi, of the civil-rights leader Martin Luther King Jr., winner of the 1964 Nobel Peace Prize, and of both Henryk Hiż and his mentor Tadeusz Kotarbiński, who in 1958–1968 was a deputy chairman of the All-Poland Committee of the Front of National Unity of communist Poland under first Aleksander Zawadzki and then Edward Ochab. Censorship in the Polish People's Republic strictly blocked public information about Grzegorczyk; like the Polish mathematician Stanisław Hartman of the University of Wrocław, who received the Stefan Banach Prize in 1953, by 1977 his name had been placed on the list of names subject to strengthened censorship control. Every attempt to popularize his name in the mass media (daily press, radio, TV, socio-political magazines) was immediately reported to the directors of the central office of state censorship, directly controlled by the authorities of the communist party then governing Poland; the censorship rules made an exception solely for publications in the specialist press, academic journals, and university lecture notes. However, compared to the famous case of Hartman, Grzegorczyk had never been involved in real political activity, such as direct support of anti-governmental student protesters or membership in oppositionist organizations or in a political party. Hence he came through the communist period of Polish statehood almost entirely smoothly and avoided the various unpleasant consequences directed by the state authorities and services against their ideological enemies in Polish academia, which usually included regular persecution and repression through sudden and often brutal interrogations, dismissal from university teaching posts, bans on public lecturing, forced political emigration, or internment during the 1981–1983 martial law in Poland. Like Hartman, for his oppositionist activity he was "exiled" from his alma mater to the Institute of Mathematics of the PAS. However, he always gave top priority to his personal intellectual development and made professional advancements both inside and outside the communist fatherland; in particular, in 1979 he was elected a member of the International Institute of Philosophy (IIP) in Paris, France, of which he did not hide his critical views. After the pioneering Revolution of 1989 in Poland he was celebrated as an intellectual authority of the post-communist Republic of Poland. He was appointed a corresponding member of the PAU in Kraków after its finally successful restoration in 1989, and an active member of the Section of Philosophy of the Scientific Society of the KUL. In 1990 he took early retirement and intensified his organizational activity for Polish academic philosophy as director of the research grant The Hundred Years of the Lwów–Warsaw School, within which he organized a large conference in Warsaw and Lviv on the occasion of the centenary of Kazimierz Twardowski's appointment to a chair at the former Jan Kazimierz University of Lwów, together with a translation of Twardowski's book into Ukrainian. In 1995 he was elected chairman of the editorial board of the restored Przegląd Filozoficzny.
In 1997, with the citation 'for his outstanding merits to Polish science', President Aleksander Kwaśniewski awarded him the Knight's Cross of the Order of Polonia Restituta. In 1999–2003 he was both the first-ever honorary member and the chairman of the Committee of Philosophical Sciences of the PAS, the Polish state body responsible for coordinating and giving opinions on philosophical activities and for analysing philosophical publications and teaching programmes in Poland. In 2011–2014 he was a member of the Committee of Ethics in Science of the PAS, whose purpose is to diagnose the ethical consciousness of the Polish scientific community and to recommend improvements. In 2010 he received an honorary doctorate from Blaise Pascal University, Clermont-Ferrand, France, and in 2013 an honorary doctorate from the Jagiellonian University in Kraków. Finally, with the citation 'for his outstanding achievements in scientific and didactic work, for his merits to the development of science and his activity for the democratic transformations in Poland', President Bronisław Komorowski posthumously decorated him in 2014 with the Officer's Cross of the Order of Polonia Restituta.

Legacy

Mathematical logic and mathematics

Both inside and outside Poland he was best known for continuing the interwar intellectual traditions of the Lwów–Warsaw school of logic; with the substantial help of Grzegorczyk and of another of Andrzej Mostowski's students, Helena Rasiowa, as well as, independently, of Alfred Tarski's student Wanda Szmielew, Warsaw rejoined the map of world foundational studies. He shared the view of Tarski and Mostowski that logical investigations should respect the deductive sciences, in particular mathematics, and was skeptical of the value of the investigations, developed by Łukasiewicz's school, which look for the shortest logical axioms or the simplest axiomatic bases of various logical systems. Both Grzegorczyk and Rasiowa also played a role in publishing the earlier unpublished work in computable analysis of Stanisław Mazur, a member of the Polish United Workers' Party, a member of communist Poland's parliament in 1947–1956 and director of the Institute of Mathematics of the University of Warsaw in 1964–1969, awarded the Officer's Cross of the Order of Polonia Restituta in 1946, the Stefan Banach Prize in 1949, and the Order of the Banner of Work in 1951 and 1954, who investigated nonlinear functional analysis and Banach algebras and who in 1936–1939, together with Stefan Banach, elaborated the concept of computable real numbers and functions. Mazur did not succeed in extending his pre-war work to general computable mathematical objects, and the results were assembled and published only in 1963, under the editorship of Grzegorczyk and Rasiowa; the publication had a comparatively small influence on the development of computable analysis, but it advanced the careers of both editors at the University of Warsaw. Among his achievements in logic are contributions to computability and decidability, including recursive functions, computable analysis, axiomatic arithmetic, and concatenation theory. He also did research on systems of logic, including logical axioms, axiomatic geometry, non-classical logics, and interpretations of logic, where, in particular, he defended psychologism from the beginning of his research.
His two famous articles of 1958 and 1961, co-authored with Andrzej Mostowski and Czesław Ryll-Nardzewski, became the starting point for research in axiomatic second-order arithmetic and in arithmetic with an infinitary rule of inference; in particular, the 1958 paper introduces second-order arithmetic formalized in first-order logic, in which both numbers and sets of numbers are taken into account. He contributed research of fundamental importance for theoretical computer science, precursory for computational complexity theory: in 1953 he described and investigated classes of recursive functions generated by superposition, restricted recursion, and the restricted minimum operation from prescribed basic functions containing addition and multiplication, chosen so that each successive class includes more complicated primitive recursive functions. The resulting sub-recursive hierarchy, which exhausts the class of primitive recursive functions, is named the Grzegorczyk hierarchy. This originally recursion-theoretic hierarchy is a strictly increasing infinite sequence of classes of functions whose union is the class of primitive recursive functions: functions at level n+1 are generated from the level-n functions by iterating them a number of times indicated by one of the arguments, and by closure under the primitive recursion scheme bounded from above by an already defined function (a standard presentation is sketched below). In 1964–1968 he researched relational and topological semantics for intuitionistic logic, which, in the context of the famous work of J.C.C. McKinsey and Alfred Tarski, is also a semantics for modal logic. Inspired by Paul Joseph Cohen's notion of forcing and by Evert Willem Beth's semantics for intuitionistic logic, and building on Heyting arithmetic and Jaśkowski's formulation of the intuitionistic propositional calculus, he proposed a model theory for the intuitionistic logic of constant domains (CD) and found a modal formula which is valid in all partially ordered frames with the descending chain condition but not in all topological spaces. Because the logic resulting from this semantics includes the intuitionistically unprovable constant-domain schema ∀x(A ∨ B(x)) → (A ∨ ∀x B(x)), where x is not free in the sentence A, it is stronger than intuitionistic predicate logic. He proposed the semantics as a philosophically plausible formal interpretation of intuitionistic logic, independently of the near-contemporary work of the American philosopher and logician Saul Aaron Kripke; he observed that his semantics validates the schema, and he modified the forcing relation for disjunctions and existential formulas to give an exact interpretation of intuitionistic predicate logic. Sabine Koppelberg née Görnemann, in her 1969 doctoral dissertation, first proved the completeness of a calculus related to Grzegorczyk's semantics, by both Kripke's tableau method and an algebraic method using a language with a restricted set of logical symbols; then in 1971, alongside independent results by Dieter Klemke in 1970–1971 and Dov Gabbay in 1969, she proved that adding the above schema to intuitionistic predicate logic suffices to axiomatize Grzegorczyk's logic. Grzegorczyk's model theory has a fixed constant domain and represents a static ontology, whereas Kripke's model theory involves a quasi-ordered set of classical models whose domains can expand along the quasi-ordering, representing an expanding ontology in which new objects are created as knowledge grows. In other words, Grzegorczyk's semantics for intuitionistic predicate logic is the class of predicate Kripke models with a constant domain function.
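A standard modern presentation of the Grzegorczyk hierarchy described above (a textbook reconstruction rather than Grzegorczyk's original 1953 notation) runs as follows:

\[
E_0(x,y) = x + y, \qquad
E_1(x) = x^2 + 2, \qquad
E_{n+2}(0) = 2, \qquad
E_{n+2}(x+1) = E_{n+1}\bigl(E_{n+2}(x)\bigr),
\]

where \(\mathcal{E}^n\) is the smallest class of functions containing zero, successor, the projections, and the functions \(E_k\) for \(k < n\), closed under composition and bounded (limited) recursion. Then

\[
\mathcal{E}^0 \subsetneq \mathcal{E}^1 \subsetneq \mathcal{E}^2 \subsetneq \cdots,
\qquad
\bigcup_{n} \mathcal{E}^n = \text{the class of primitive recursive functions},
\]

with \(\mathcal{E}^3\) coinciding with the Kalmár elementary functions.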
From the point of view of non-classical logics, with the help of the necessity operator □, he proposed what was later called the Grzegorczyk formula or axiom (shown below), which is not valid in the Lewis normal modal logic S4 but which, when added to S4 as a new axiom schema (the resulting system can be studied via its Lindenbaum–Tarski algebra), gives a modal logic into which intuitionistic logic can be translated by virtue of Gödel's interpretation of intuitionistic logic via a provability operator. Krister Segerberg was the first to propose, for this emerging modal system, the name Grzegorczyk's logic and the symbol S4Grz or S4.Grz, whereas George Boolos contributed the foundational investigation of Grzegorczyk's schema in the context of proof theory and arithmetic. In 2005 Grzegorczyk gave a new proof of the undecidability of the first-order functional calculus without making use of Gödel's arithmetization, and moreover demonstrated the undecidability of a simple theory of concatenation, the operation of joining two texts, understood as sequences of symbols, into one text in which the second is a continuation of the first. Grzegorczyk's undecidability proof for Alfred Tarski's concatenation theory rests on the philosophical motivation that the investigation of formal systems should be carried out with operations on visually comprehensible objects, the most natural element of this approach being the notion of a text. On his analysis, Tarski's simple theory is undecidable although it seems weaker than weak arithmetic; instead of computability, he applies the more epistemological notion of the effective recognizability of properties of a text and of relationships between different texts. In 2011 Grzegorczyk introduced one more logical system, today known as the Grzegorczyk non-Fregean logic or the logic of descriptions (LD), to capture the basic features of descriptive equivalence of sentences; he assumed that human language is applied primarily to form descriptions of reality, represented formally by logical connectives. In this system the logical language is equipped with at least four logical connectives: negation (¬), conjunction (∧), disjunction (∨), and equivalence (≡). He defined this propositional logic from scratch, arguing that neither classical propositional logic nor any of its non-classical extensions can serve as an adequate formal language of descriptions, and that the paradoxes of implication and equivalence result from classical logic restricting itself to only one, admittedly the most important, parameter of the content of a claim, namely its truth value. He rejected all the classical logical tautologies except the law of contradiction and added two logical axioms: (LD1) ≡ represents an equivalence relation and obeys the appropriate extensionality property, i.e. equal descriptions can be substituted for each other; and (LD2) ≡ joins some of the Boolean properties of descriptions, such as associativity, commutativity, and idempotency of ∧ and ∨, distributivity of ∧ over ∨, distributivity of ∨ over ∧, and involution of ¬ additionally satisfying the De Morgan laws.
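The Grzegorczyk formula mentioned above, in the notation now standard:

\[
\mathrm{Grz}\colon\quad \Box\bigl(\Box(p \to \Box p) \to p\bigr) \to p
\]

S4Grz is S4 extended by this schema; under the Gödel translation, which prefixes \(\Box\) to subformulas, intuitionistic propositional logic embeds into S4Grz just as it embeds into S4.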
His results in concatenation theory and in the propositional calculus with the descriptive-equivalence connective were important additions to his signal achievements. Through his investigations of psychologism he converted the paradox of Eubulides into a positive, formally proved theorem: an ideal human tackling a linguistically properly stated problem is able to think about it consistently, sincerely, and fully consciously. In opposition to Alfred Tarski and other anti-psychologist logicians, who taught that natural language leads to contradiction by its very nature, he rejected this conviction and proposed a formal system of Universal Syntax which imitates the versatility of colloquial language. On his approach, the axiomatization of the quotation operator is the best device for marrying logic with metalogic and proving the adequacy theorem for the notion of truth. He also dealt with the problem of the invasion of logic by postmodernity: the deconstruction of logical rules is not at stake, for then there would be no logic, but the foundations of logic could be rocked. Although in the mathematical genealogy he was a son of Andrzej Mostowski and a grandson of Alfred Tarski, he manifestly undermined and rebutted the classical anti-psychologism of modern logic. His best-known logico-mathematical book is An Outline of Mathematical Logic: Fundamental Results and Notions Explained with All Details, published in Polish in 1961 and in English in 1969. His book Fonctions Récursives became a standard handbook at French universities. His book Zarys arytmetyki teoretycznej (An Outline of Theoretical Arithmetic) became the basis for the Mizar system developed by the University of Białystok team of the notable computer scientist Andrzej Trybulec. In Poland, Grzegorczyk was the first to popularize the logical calculus, with the book Logika popularna (Popular Logic), translated into Czech in 1957 and Russian in 1965, and the problems of decidability theory, with the book Zagadnienia rozstrzygalności (Decidability Problems). He studied computable real numbers, in particular providing several different definitions of these numbers and ways of developing mathematical analysis based only on such numbers and on computable functions defined on them. He investigated computable functionals of higher types and proved the undecidability of various weak theories, such as elementary topological algebra; he considered axiomatic foundations of geometry by means of solids instead of points; he showed that mereology is equivalent to Boolean algebra; he approached intuitionistic logic via a semantics of the intuitionistic propositional calculus built on the notion of the enforced recognition of sentences within cognitive procedures, similar to the Kripke semantics created in parallel; and he studied Kotarbiński's reism. He proposed an interpretation of Leśniewski's ontology as a Boolean algebra without zero, and demonstrated the undecidability of the theory of Boolean algebras with a closure operation. His investigation of intuitionistic logic, via the modal interpretation of the Grzegorczyk semantics for intuitionism which anticipated the Kripke semantics, leads to the aforementioned S4.Grz.
He saw logic as a vivid area in the mainstream of European philosophy, and although he belonged to the rather hermetic Lwów–Warsaw school of thought, in opposition to his intellectual masters and academic mentors he contributed an approach to logic by means of psychology. For him, logic is a collection of principles which preserve the emphatic declarations of the majority of properly educated people free from violation or corruption; that is, he attributes logic to a perfect human mind formed by an ideal educational background, an ideal upbringing, and ideal propriety. On his idealist view, classical logic is a formal theory of existence lexicalized by existential quantification, and everything logical is nothing but a text. As a fervent follower of the classical Aristotelian attitude, he saw logic not only as the fundamental tool but also as the basic ontology, in which semantics dealing with human utterances is preceded by attributes of humankind such as rationality, life, and existence. In his methodological approach, logic lays the foundations of science and of the overall European invention.

Philosophy and ethics

In the field of philosophical logic he defended the ontological interpretation of the logical laws, on the basis of his personal belief that these laws describe the world. In straightforward opposition to the teachings of the founding fathers of twentieth-century European transcendentalism, the German logician Gottlob Frege and the Austrian phenomenologist Edmund Husserl, thanks to whom anti-psychologism (also known as logical realism or logical objectivism) came to dominate both the common understanding of logic and formal reasoning in logic, he defended psychologism, which he approached as a thesis about the dependence of the relations of meaning and denotation on the human factor, with their description attributed to human behavior; this was in itself very far from continental philosophy and closer to the English Enlightenment philosopher John Locke and the British utilitarian philosopher John Stuart Mill, whom Husserl had criticized for logical psychologism. According to Grzegorczyk's interpretation, any description is in someone's language and made for someone, whereas logic is applied to describe the world strictly. From this approach he produced a reinterpretation of the semantic antinomies which asserts the limits of applicability of concepts rather than the self-contradiction of a language. In particular, in the book Logic – a Human Affair, in a style appropriate to Tarski, he argued against anti-psychologism and presented the formal construction of the Universal Syntax, which led him to the hypothesis that 'to say that a sentence A is true is equivalent to the statement of this sentence relativized to the domain wherein one applies the sentence A', which trivializes the notion of truth; he also stated that the proof of this trivialization is itself trivial. On his approach, the liar paradox, which underlies this construction, transforms the antinomy into a hypothesis about both human nature and the human condition: 'there is a problem about which a human can not think in an accordant, sincere, and fully conscious way, that is, having full awareness of the recognized and unrecognized sentences'.
Hence, in his fervent battle for psychologism, he linked formal logic to rather non-universal and subjective attributes of the human mind, and moreover claimed that the anti-psychologist interpretation of meaning is inspired by a related idealist vision of the world as formed by ideally formed intellectuals. According to Grzegorczyk, independently of the personal motivations for which one undertakes considerations, the criterion of their value is logic, strictly speaking whether a proof is logical, systematic, and self-conscious. Grzegorczyk approached logic as the morality of speech and thought and sought in logic the foundations of moral discussion; in doing so, he cultivated a form of discrimination fully appropriate to the German idealists – he did not see a morality beyond a selected system and claimed that the selected model of morality is the universal one. Although he conceived logic with wide horizons, included the methodology of science in its foundations, and claimed that logic is a basic component of the intellectual attitude which he identified with European rationalism, he limited logic by rationalism and rationalism by logic. His model of rationalism is open to the realm of values, accessible to reliable knowledge, and advocates ethics in social relations, while logic appears in it as a pure attribute of a human mind – a rational European human. He fought for psychologism in logic; for him, semantic relations are always relations for someone and are mediated by language. Consequently, paradoxes demonstrate the limits of concepts and systems rather than the inconsistency of a language. Furthermore, he contributed a series of writings in philosophical anthropology and ethics within a methodological framework of philosophy which he named rationalism open to values, which was basically no more than a mixture of Christian theology and selected teachings of the Lwów–Warsaw school, and for which he developed selected ideas of Marxist thought in the context of theology and philosophy. The best example of the latter context is his book Mała propedeutyka filozofii naukowej (A Short Propaedeutic of Scientific Philosophy), issued by the aforementioned pro-communist Catholic publisher Instytut Wydawniczy "Pax" on the eve of the fall of communism in Poland: in December 1989 the Marxist–Leninist references were removed from the July 1952 Constitution of the Polish People's Republic and the country was renamed the Republic of Poland, the PZPR was dissolved in late January 1990 by its last First Secretary, Mieczysław Rakowski, and the Warsaw Pact was dissolved on 1 July 1991 in Prague. Yet even in this 1989 book Grzegorczyk fully and fervently developed crucial ideas of Marxist ideology; in particular, the name 'scientific philosophy' there is not equivalent to philosophy of science understood as standard philosophical discourse, but is reserved for the unique meaning given by the ideological deformation of philosophy. The ideological misuses of the word 'scientific' in the context of Marxism–Leninism are well known: 'scientific world view' means the Marxist world view, 'scientific religion' means Marxist–Leninist atheism, 'scientific art' means socialist realism (socrealism), while 'scientific philosophy' is simply a synonym of both dialectical materialism and historical materialism – the very use of the term 'scientific' was the making of the ideology.
Indeed, similarly to ecumenism between different religions, this book, which met a devastative criticism by the Roman Catholic clerical scholars such like a Salesian metaphysicist Andrzej Maryniarczyk of the KUL, under the brightly banners of 'pure philosophy' and 'uniform approach to philosophy which can be the basis of teaching' proposed no more than an ideological uniformity of philosophy without a division onto the subjective branches such like Thomism, Existentialism or Platonism in order to get the alleged advantage of 'maximal objectivity' of philosophy, where Grzegorczyk approaches philosophy as merely a convention based on the alleged values such like 'seriousness' and 'sincerity' claimed by him as the determinants of quality of philosophizing. He did not give the explicit definitions of these alleged merits, but provided some crucial hints on what is 'serious and sincere philosophy': the most general thoughts and insights concerning the world and human life, systematic cognitive activity leading to construction of a philosophical world view, the philosophical views of brilliant writers and even scholars of other disciplines who in the margins of their specialty sometimes made interesting general considerations, amateur general reflection on any topic, professional statements which give very systematic and well exhaustive reflection, texts printed in special philosophical journals, and teachings of university professors. He introduced three categories of philosophers: amateur, traditionalists and professionals, where the only latter one can be performed in the 'scientific' way, that is with a help of natural science and mathematics, but he did not make a difference between a speculation attributed to amateurs and a theoretical approach attributed to philosophy understood as an academic discipline, and, for this reason, made philosophy a part of the collectivist people's culture. The crucial element of his approach is the concept of reality divided onto observable reality formed by 'objects (things) of different properties, joined by relations, creating different sets' by learning of which 'we attribute certain properties to them and combine them into sets', and unobservable reality 'composed of some separate things (...) with certain properties', where he attempted to apply his variant of dialectical materialism to justify an undefined materialistic spiritualism usually attributed to the Marxist-Leninist atheism. Comparatively, in his other book Życie jako wyzwanie (Life as a challenge), published already in 1993 when the primary socio-economic-political transformations were just implemented in the restored Republic of Poland almost free of the fallen Soviet occupation, he presented an unambiguously liberalized ontological declaration '''the world (...) contains objects (...) having various properties, connected by various relations, and belonging to various sets'. From a general philosophical point of view, this book resorted to all the basic notions such like good and evil as the links of the first semantic chain, that is the central ontological notion of an object with the notion of a state of affairs, and his analysis went further precisely along the second ontological chain, that is towards the notions of case and act. 
In general, the whole book reflected its author's full both conformity with and loyalty towards the economic-political transition, started with the 1990 Polish presidential election which made the leader of the anti-communist opposition and the 1983 Nobel Peace Prize laureate Lech Wałęsa the first freely elected President of the Third Republic of Poland, although until 2007 Lustration in Poland the Polish state authorities did not require a political either purge or thaw or at least a humble self-criticism among the Polish academia, and, moreover, until 2005 the successors of the PZPR – the Social Democracy of the Republic of Poland (SdRP) in 1991 and the Democratic Left Alliance (SLD) in 1999 – governed by Aleksander Kwaśniewski who won the 1995 presidential elections over Lech Wałęsa to be the 3rd President for 1995–2005 and to be instrumental in introduction of Poland into the North Atlantic Treaty Organization (NATO) in 1999 and the European Union in 2004 along with his party comrades Józef Oleksy, Włodzimierz Cimoszewicz, Leszek Miller, and Marek Belka, who were the Prime Ministers of Poland for 1995–1996, 1996–1997, 2001–2004, and 2004–2005 respectively. In the matter of philosophical anthropology, on the basis of the ways of reasoning pivotal for the development of the European philosophy, he provided the systematic analysis of mutual relationships between the forms of thinking and cognition in the context of the beginnings of the European culture, gave precise description of the phenomenon of European rationalism, where rational means successful as well as efficient and well-founded, through joining this concept to the internal flow of intellectual life of the European civilization and to a specific literary topos, that is a place where this way of intellectual life is localized. Meanwhile, in the matter of ethics, he claimed that in a mind cleaned from egoism and disciplined by logic there should appear elements of a general human axiology whose presence in a human expression needs a deep ethical shock, experience of either own or others' strong testimony. Particularly, he sought it in the context of Christianity where cultivated saints are the sinners who came through the stage of a great internal conversion metanoi which principally demolished their earlier life's rules, and pointed out that this procedure was unsuccessfully attempted to be implemented on the area of a secular communist society. He specifically criticized mathematicians, thus including himself, by claiming that sometimes seeing on the might of mathematical brains focused on abstract problems it seemed that there is a satanic force which causes that the most able intellectually individuals are paid for the works meaningless for wellness of humans. By his ethics, he manifestly argued against his own life choices and looked for psychological either self-defense or justification, particularly he claimed that scientists are employed in an intellectual circus whenever they do not try to think on what really is worthy to do, and, for this reason, in the isolated intellectuals there should emerge a remorse and the will of more dedicated participation in realization of socially important tasks of either a country or the world, because quite well-paid mathematical games are a waste of energy which could be utilized for thinking on real actions having a good purpose. 
In the case of natural scientists, he claimed that they present a world view in a careless way, although by their scientific authority and reference to a concrete research they get a substantial expertise in general philosophic beliefs, but by propagation of unfriendly ways they propagate absence of precision because they avoid logical constructions of proofs in favour of a better visual impact. He sought logic as the method against epistemological particularism, for example, he claimed that a world view demands logical culture and analytic-philosophic insight, and the way against an intellectual trapping, for example, he had a radical and unpopular point of view that the only formal logic can be the security for language against the issues of a system of assessment by clear extracting, indicating and ordering. In his philosophy, a human condition is a free existence restricted by the various limitations. Accordingly, a human as an animal has the specific features, such like persistent enrichment of life quality, creation of an environment, sensitivity on values, spiritual sphere, investigating sainthood and transcendence, ability to creativity and creative thinking, and, moreover, usage of language and symbolic reasoning which gives a control over emotions. However, his psychologist approach collapses in absence of any of his determinants of a perfect human, what means that his philosophical system, including the psychologist logic, is case-dependent and applies to a selected group of humans by emergence on the basis of discrimination against intellectual features, often either inborn or independent on a personal choice. He was interested in an ethical standpoint and the method of conflict solving known as nonviolence, what means an action without a violence. In relation to this of his research interests, he co-organized stays in Poland for a French pacifist Jean Goss and an Austrian Christian theologian and anti-war activist Hilderad Gross-Mayr, who in 2009 got the Pacem in Terris Peace and Freedom Award established by the Roman Catholic Diocese of Davenport in the U.S. state of Iowa to commemorate Pope John XXIII's the 1963 encyclical Pacem in terris (Peace on Earth) and in 1986 along with her husband got the Pope Paul VI Teacher of Peace Award by a Catholic peace organization Pax Christi USA. In particular, at the beginning of the Polish trade union federation Independent Self-governing Labour Union Solidarity, he organized the meeting of Jean Goss with the then movement leader Lech Wałęsa, who after being awarded by the 1983 Nobel Peace Prize to which the Goss couple were also nominated got also the 2001 Pacem in Terris Peace and Freedom Award to recognize his leadership non-violent attitude. The Goss couple became famous as the apostles of nonviolence, particularly for preparation of the 1986 People Power Revolution ('Yellow Revolution') in Philippines, and lobbying for recognition of the conscientious objection by the Roman Catholic Church during the 1962–1965 Second Vatican Council opened by Pope John XXIII and closed by Pope Paul VI. 
In 1991, just after the dissolution of the Soviet Union, Grzegorczyk was instrumental in organization of a symposium in Moscow dedicated to the nonviolence philosophy with participation of the Goss couple as well as a Canadian Catholic philosopher, theologian, and humanitarian Jean Vanier who was awarded by the 2013 Pacem in Terris Peace and Freedom Award and the 2015 Templeton Prize and an American political scientist and writer on strategy of a nonviolent struggle Gene Sharp who in 1983 was the founding father of the Albert Einstein Institution to explore the methods of nonviolent resistance in conflicts and in 2009–2015 was four times nominated to the Nobel Peace Prize. Despite of some kind of liberalism present in his thought, he was a definitely radical thinker, in particular, he propagated a nonviolent dialogue with everyone including terrorists, he was particularly fervent in forcing towards both logic and dialogue with everyone, and he claimed that going towards defense in any conditions of own dignity and inventory is harmful because the Christian Doctrine of turning the other cheek is correct. He applied his ethics to a conflict resolution, put a particular emphasis onto to the methods of nonviolence professed by Mahatma Gandhi and Martin Luther King Jr., the laureate of the 1965 Pacem in Terris Peace and Freedom Award, he was also one of the first public figures in Poland who focused attention on the ecological issues, and, before it was widely understood in Poland, he popularized the warnings due to the Club of Rome which have claimed that the resources of our planet are scarce and the idea of permanent growth is dangerous. Morality and religious study In his philosophical marriage between religious involvement and both reism and a specific variant of naturalism, the particular position was occupied by the moral dimension of the religious study from the point of view of Christianity. In his pseudo-essays, pseudo-sermons, and pseudo-treatises collected in the book Moralitety awarded in 1987 by the prize Warszawska Premiera Literacka (Warsaw Literary Award), he emphasized the central elements of the Christian doctrine such like the radical command to give a selfless care testimony, especially with respect to an enemy. He attempted to place himself in the role of a Catholic priest who claims that a self-demolition is not important if personal intentions are not allowed to be reduced, whereas in the book Europa: odkrywanie sensu istnienia (Europe: discovering the sense of existence) he emphasized the role of a logical reasoning in the foundations of civilization achievements to give an axiological theory of history. He claimed therein that the evolutionary development of humankind is accompanied by a specific participation of divine forces into development of human cultures, especially he pointed out altruism and a voluntary service as the examples of such either biological theology or theological biology. 
Without taking into account directly the Roman Catholic Church's domination in Europe and its enormous impact onto the foundations of the European philosophy in the form of various intellectual restrictions, such like in the most known cases of the Italian scholar Galileo Gailei and philosopher Giordano Bruno, he claimed European scientific theories were always logically ordered and based on deduction, empiricism and phenomenology, and limited the sense of the world to nothing but seeing the world as similar to an understandable text – hence he claimed that the explored world is already a humanly ordered structure. For him, history of religion is just deepening of this sense, with Abraham placed as the initiator of a new epoch of monotheism and a biblical reasoning as a pictorial system, which both married with the ancient Greco-Roman intellectual traditionalism gives the most appropriate philosophical system. On his theory, Jesus Christ of Nazareth appeals for a well-defined individual testimony, and the issues of a daily life are nothing when compared to the extreme situations because Christianity for him is a realization of spiritual rather than vital values. He saw the teachings of Jesus as the acceptance of the European logic, he radically saw Jesus Christ as the provider of the European moral pattern because Jesus demanded and demonstrated coherent individual testimony. According to Grzegorczyk, in a wide sense logic lays the foundations of the European rationalism, with respect to which he identified himself as a cultivator, which says that any knowledge must emerge by logic and empiricism and must go only towards the essential points. By his personal religious standpoint shared with the passion to logic, he always fought against himself and was limited to no more than two possibilities: the way of ethics and its universality and allowing the inexpressible in order to kill the conflict between a reason and religion. Thus, by his point of view, a rational theology is useless in favour of an imprecisely defined spiritual religiousness merely based on ecumenic approach to Christianity, that is for Grzegorczyk the biblical traditionalism of Catholicism losses the intellectual value when compared to the moral testimony of its creator Jesus Christ – a reason married with an openness towards values. What is more interesting for Grzegorczyk's religious creativity, although his formal membership at the Roman Catholic Church he frequently expressed critical opinions on concrete policies of this religious organization, even on the pages of free-thinkers periodicals, and, similarly to a Russian religious philosopher Vladimir Slovyov, he manifestly ignored the East–West Schism between the Roman Catholicism and Orthodox Christianity, for example he took the Eucharist sacrament at both the Orthodox Church and the Roman Catholic Church. He was a fervent follower of Roman Catholics who felt an affinity to the Russian Orthodoxy, his religious reflection merged Christian morality to the European cultural tradition and by his views the same values lay the foundations of both Christianity and European rationalism. In his approach, history of Christianity, including its biblical roots, can be considered the history of how a sense and understanding of the world can be deepened by contemplating the sacred and transcendent. 
He convinced that Modernity with its technological development creates the new challenges for humans, whose realization demands both an appropriate ethical mode, such like dedication to other and conscious self-limitations, and the rationalist standpoint. What is intriguing, in practice he realized this of his ideas for a general public in the form of a monk-like modest appearance joined with straightforward and often manifestly ignorant and arrogant if no blatant speech, which by virtue of the dogmatic Christianity could be interpreted in terms of either demonic possession or act of blasphemy. For example, in his article Odpowiedzialność filozofów (Responsibility of philosophers) he claimed that a philosopher whose texts are at the service of state authorities, creates a vicious circle of unquestionable truths which have not much in common with reality and strengthen the authorities's monopoly on a political violence, similarly to the Roman Catholic Church's theorists who referred to the holy books instead of commenting on the facts, whereas this article was published in May 1981 on the pages of the Polish Catholic quarterly Więź with death of the Primate of Poland Cardinal Stefan Wyszyński as the main subject of the issue and on the eve of introduction of the martial law in Poland by the communist authorities to crush the political opposition. It was not the only one and episodic case when Grzegorczyk could not clearly and unambiguously decide on his intellectual position with respect to the teachings of the Roman Catholic Church and the political ideology of the communist state authorities. He openly presented himself in a straightforward opposition to a religious organization when it was convenient and useful to his professional activities and when it gave him a professional advance or other at least societal profit, whereas he became suddenly manifestly religious when he sought any gains to support his ideology by a religion. In particular, the latter feature was ostentatiously expressed by his idea of ecumenism between the Roman Catholic Church and the Russian Orthodox Church, especially when he sought the profits in the academic contacts with Russian and Ukrainian academic professionals who humbly supported promotion of his borderline philosophical ideas motivated by himself arranged hospitability and grants. The big gap in conformity with respect to both political and social association which he adopted from the Roman Catholic-oriented teachers of his war-time underground education supported by his father's attitude was completely filled with a big conformity towards his academic career based on the politicized part of the Polish academia independently on a political system of Poland, what he made with extraordinary intellectual autonomy and a coherent reasoning. Following his personal pseudo-clerical image and pseudo-homiletic written style formed on the dogmatic philosophical traditionalism appropriate to both the Fathers and Doctors of the Roman Catholic Church enriched by the arguments of neo-Thomism and Christian existentialism, he attempted to defend a manifestly old-fashioned model of academic work understood as a humble dedication to ideas and humanity rather than a professionalism. In particular, in his part of the 2005's survey Gdzie ta nasza filozofia? (Where is this Our Philosophy?) 
he sought new circumstances of philosophizing as generated by the current fashions and the collective factors such like massiveness of higher education and general commercialization of life, he compared the present-day both philosophers and generally all humanists to businessmen who first of all look for their personal professional success by the way of a relatively fast and an appropriately high return of the incurred hardships rather than a long and arduous effort which leads to a perfect product, and, moreover, he claimed that the sham achievements are fully able to give quite good living conditions to scientists and philosophers because social control over these products is already impossible as a result of development of an academic specialization beyond such control. Social issues For Grzegorczyk, the greatest intellectual challenge was an axiological approach to history of the world. Taking from the Marxist sociology, he approached cultural conflict by wealth of a privileged social class as contrasted to poverty of the rest of a society marginalized and excluded from a system focusing on rich, powerful, and clever individuals. He enriched the Marxian thought by emphasis onto the role of intellectual divisions in perpetuation of conflicts, and, similarly to the creators of the Aryan race theory based on the common proto-language, contributed the Utopian necessity of a common language for agreement and reconciliation in the global scale, as well as the Utopian idea of an identical insight of all people in understanding the wholeness of human affairs. Furthermore, he attempted to convince that the suitable theoretical apparatus is the only way for the worldly peace, that since needs of everyone can not be satisfied there is the necessity to endure limitations collectively and introduce global regulations on the basis of a persuasive argument, and that a synthesis of scientific knowledge is necessary to serve the fair and peaceful coexistence, a theory solidified in 1956 by a Soviet leader Nikita Khrushchev in the Marxist-Leninist-inspired Soviet foreign policy at the 20th Congress of the CPSU and applied by the Soviet Union and the Soviet-allied socialist states such like the communist Poland during the 1947–1991 Cold War, especially in the phases Cold War (1953–1962) and Cold War (1962–1979). Grzegorczyk doctrinally did not respect any form of pluralism and any kind of diversity among humankind, he sought the order in a large scale uniformity which is appropriate to the political ideologies which laid the foundations of the 20th century totalitarianism in Europe – both German Nazism and Soviet communism. For this reason, implicitly, he sought the European totalitarianism as the consequence of the European cultural heritage supported by his logic. On the basis of the Universal Declaration of Human Rights, he appealed to the United Nations to take into account the principle that every person has the right to help any other person in a worse position than himself or herself in whatever country that person may reside, what in 1977 met the applause due to an American-Jewish leftist philosopher and left-wing activist Noam Chomsky who published a fragment of his correspondence with Grzegorczyk. In the aftermath of the famous 1972's report The Limits to Growth by the Club of Rome, he developed his general Utopian idea of all-human solidarity by propagation of self-limitations on consumption and combating wastefulness. 
In contrary to his 'general well-being' utopia, in the case of political ethics he mostly contributed to criticism of the Solidarity, the Eastern bloc's first independent trade union recognized by a communist regime and created in September 1980 at the Vladimir Lenin Shipyard in Gdańsk under the leadership of Lech Wałęsa, in the time when the communist Poland was strongly divided between its supporters and opponents, and when there was not a place for the 'middle option' between the yes and the no for the anti-communist opposition. Despite that Grzegorczyk at this time was not an open supporter to the Polish government, it was understood as his direct conformity with the communist authorities in the name of his false concept of peace, and, consequently, the society of the Catholic intellectuals related to the magazine Tygodnik Powszechny (The Catholic Weekly) broke collaboration with him. Moreover, although he survived the 1939–1945 Nazi–Soviet occupation of Poland and was injured as an insurgent of the 1944 Warsaw Uprising, he manifestly demonstrated a glaring absence of sensitivity on the problem of the Holocaust, and also in this case his attitude entirely confirmed the crucial impact due to the war-time underground teachers of strictly nationalist-Catholic orientation. In 1993, he authored an article Dekalog rozumu (Ten commandments of reason) which proposed the moral rules 1. You will not clap, 2. You will not whistle, 3. Listen to the content, not the tone of expression, 4. Fight with an argument, not with a human, 5. Do not flatter other and yourself, 6. Distrust other and yourself, 7. Search for what is important, 8. Try to build something better, instead to look for scapegoats, 9. Do not generalize too hastily, 10. Do not use proverbs, they are usually a stupidity of nations, which imitated and caricatured the Ten Commandments by an Old-Testament Jewish leader Moses to ridicule the biblical foundations of Judeo-Christianity by comparing them to the academic work. Education Grzegorczyk had a paradoxically formed mindset, a mind which was as modal as a modal logic. By the variety of his views, he criticized and was criticized by more radical exponents of various ideological positions – Catholics disliked his contacts with Atheists, Atheists accused him of clericalism, anti-communists disliked his attempts for searching the middle position, communists questioned his defense of freedom. He wanted to marry all with everyone, but his idea of unification was able to create only a borderline philosophical system whose followers create a dramatic minority even in the academic philosophy of Poland. The unpopularity of his system is contained in the open discrimination inherited with idealism, especially the German idealism such like of Friedrich Nietzsche which served as the philosophical basis for the Nazi Germany political ideology by Adolf Hitler. His logically trivialized philosophical views were inadequate for both communists and anti-communists, for both religious and irreligious individuals, for both philosophers and scientists, because he manifestly avoided a strict association with any concrete group and, in fact, participated in a one-person struggle for independence and freedom. 
Despite that he was a well functioned academic personality among few major specialized groups – mathematicians, philosophers, Catholic intellectuals, people of art – and was active in the international society of socio-political activists, he solicitously worked on the public self-picture which presented him as independent on political, social, religious, and scientific relationships to get some popularity among students in the times when the Polish academia was directly dependent on the communist ideology and governed by the politicized scholars. Nevertheless, in his daily work, he tended to the goals which created the appraisal of an inexhaustible way towards extraordinary egoism and self-comfort with a help of conformity with any of the aforementioned systems, he simply used his social links and professional connections within these systems when it was necessary to him. For example, he manifestly declared in public to audiences and students that he had never tended to get any higher positions in academia, whereas actually he easily accepted all higher positions offered to him and abandoned a work only when he sought is as inconvenient with respect to his purposes, such like the cultural and political ones met in Amsterdam which were in a direct conflict with his religious and ideological views adequate to the reduced both intellectual and social expression in the communist Poland. He had never attempted to be a favourite academic teacher, he openly avoided any close cooperations with his students and deflected their attention by lack of will to co-authorship with a parallel will to interact with them for knowing more about them, he preferred a bold individual research work although he had multiple collaborations. He was hungry for knowledge on world and people, such like in the 'ecumenic dialogue' with the Soviets and the various Polish communists, but was never able to fully follow any concrete system, had never joined any political party, and was focused merely on collecting information only for construction of a possible either critique or irony. Even those very few students who decided to prepare their doctoral dissertations under his supervising, never felt to contribute in the work of their mentor and to continue his intellectual traditionalism, because emergence of an academic school dedicated to development of his heritage in mathematics, logic, ethics, philosophy was impossible by his attitude. He was able to attract the attention of students extremely rarely, the facts say for themselves in the best way – over more than six decades of his academic work, he was able to produce no more than three doctoral students with 10–20 years of difference between the graduations, he supervised a computer scientist Stanisław Waligórski's thesis in mathematics Equations in closure algebras and their applications (1964), a Polish-Jewish philosopher and mathematician Stanisław Krajewski's thesis in logic Nonstandard classes of satisfiability and their applications in investigating of some extensions of axiomatic theories (1975), and an aesthetician Bohdan Misiuna's thesis in ethics Philosophical analysis of disgust phenomena and its axiological consequences (1992). Since the time of his underground higher education, both he had supported and he was supported primarily by the politically involved academic teachers such like Bogdan Suchodolski and also Jan Łukasiewicz. 
When Suchodolski organized interdisciplinary meetings for professors in the Palace of Congresses and Conferences of the PAS in Jabłonna near Warsaw in presence of a meticulously chosen audience, in particular the students who had the preferred 'social origin', then he served humbly as the pillar. In the time of the dominant socialist orthodoxy, he presented himself as the 'Jesus Christ' of the Polish humanities – for students his philosophy was beyond the bias, intellectual enslavement and hardline correctness of the state ideology. For the communist Poland's state authorities he was officially a persona non grata secured by the politically useful historical family background, whereas for the academic youth he served as the 'socrealist Messiah' against the system teachings. He was a talented speaker towards the youth, in both philosophizing and personal action he avoided biased and emotional notions, particularism, cultivation of a scheme, routine, institution in favour of an individual, and impressive words. He impacted onto an authentic individual attitude by his analytical, synthetic and globalist reasoning, criticism on own activities, selflessness in cognitive actions, promotion of a selective truth, constant attention and control over research by the selective values based on the specific union of the Christian theology and liberalized Marxist thought. On the one hand, his style of giving opinions was particularly distinctive in both criticism and straightforwardness, often manifestly explicit and giving the impression of directly either conflicting with or fighting against the ideals which he supported theoretically, but, on the other hand, he expressed opinions only when it was fully convenient and safe for his professional and intellectual position and preserved his personal world view. His logically radicalized philosophical system was the specific mixture of the Christian theology and the Communist Ideology which directed his primary logical thought onto the intellectual margins to walk on the borderlines with a liberation theology and a religious communism, and to visit these extremist and anarchistic territories time by time for an intellectual explosion. In a clear opposition to pluralism of the world, he manifestly claimed existence of the selective values which are absolute and universal in the earthly scale. His theoretical pedagogy, more precisely philosophy of education, was based on the specific ideological mode of nonviolence philosophy, which was particularly attractive in the period of development of anti-communist opposition and political repressions in the communist Poland. He left an explicit context of the Communist Ideology in favour of his own ideology on both similarity of life and creativity, and action and rationalism which by an ecumenic approach married the Christian theology with selected concepts of Marxism with the background in logic. However, his philosophy was able to inspire educational theory and practice in Poland by freedom, tolerance, independent choice, individual rights, and the dictate of the related professional duties of an educator. On the other hand, also his ideology of democracy was attractive for some young intellectuals of the communist Poland, because it employed the concepts of a civil society and discussion community formed by a scattering between totalitarianism and anti-totalitarianism. 
Family In 1953, he married Renata Maria Grzegorczykowa née Majewska who is an internationally renowned Polish philologist and expert in polonist linguistics, since 2001 a professor emeritus at the Section of Grammar, Semantics and Pragmatics of the Institute of Polish language of the Faculty of Polish Studies of the University of Warsaw. In 1964 she got a doctoral degree for a thesis in Polish verbs, in 1974 she completed habilitation with a book in semantic and syntactic functions of Polish adverbs, in 1976–1979 she was a deputy-director and in 1981–1984 was the director of the Institute of Polish Language, she was appointed an associate professor in 1983 and a Full Professor in 1995, and, in 1982–2001, she was the head of the Section of Grammatical Structure of Modern Polish Language. Grzegorczykowa is a full member of the Warsaw Scientific Society, in 2007–2011 she was a member the Phraseology Commission and the Commission of Language Theory, and to which in 2015–2018 has been a collaborator along with the Ethnolinguistic Commission, of the Committee of Linguistics of the PAS. In 1997–1999, she was the head of an international project in comparative lexical semantics at the University of Warsaw in collaboration with specialists from the universities of Prague, Moscow, Kiev, and Stockholm. She contributed to word formation, syntax, and semantics of Polish language, her interests were also related to slavistics, neopragmatism, and philosophy, in her investigations she made use of the methods of structural linguistics, cognitive linguistics and generative grammar. In particular, she investigated the semantic-syntactic interpretation of language and text based on the analysis of predicate-argument structures, the logical structure of Polish sentences such like reference and modality, semantic-pragmatic relationships studied in the context of the theory of speech acts and functions of language and text, linguistic foundations of cognitive science and its role for language mechanisms. By many years until retirement she was a member of the Commission of Grammatical Structure of Slavic languages of the International Committee of Slavists. Her father, Polish mining engineer and industrialist Leszek Majewski, in 1920–1948 was the director of the Pruszków factory of pencils Majewski St. i S-ka, created by his father a Polish engineer, industrialist, and writer Stanisław Jan Majewski who was a member of the Legislative Sejm (1919–1922) and the 1st Parliament (1922–1927) of the Second Polish Republic as a representative of the Popular National Union, and on 2 May 1924 was awarded the Commander's Cross of the Order of Polonia Restituta by President Stanisław Wojciechowski. Stanisław's brother a Polish scientist and novelist Erazm Majewski created the foundations of the Polish academic archeology and in 1919 was appointed the head of the Chair of Prehistoric Archeology of the University of Warsaw. 
Similarly to Grzegorczyk, during the World War II her older brother Second Lieutenant and Sub-Scoutmaster Jacek Majewski was an insurgent of the Warsaw uprising pseudonymed Sielakowa in the rank of acting Commander of Platoon, since 1942 he served in the Assault Groups of the Gray Ranks which was the underground paramilitary of the Polish Scouting Association, in 1944 he commanded the 1st Platoon Sad of the 2nd Company Rudy of the AK Battalion 'Zośka' of the Diversionary Brigade Broda 53 of the 'Radosław' Group of the Home Army, and was killed on 31 August 1944 at the Bielańska street in Warsaw during the unsuccessful attempt of transfer of the Group Północ soldiers, including young Grzegorczyk, by the city canals from the Warsaw Old Town to the Warsaw Śródmieście, and he was awarded by the Cross of Valour. Since her mother Maria née Borsuk was the only daughter of a Polish physician-surgeon Marian Stanisław Borsuk, an active member of the Warsaw Medical Society who in 1907–1923 was the head of the Surgical Department of the former Wolski Hospital in Warsaw and whose origins were in the Wilno Voivodeship of the Second Polish Republic which is a part of the modern-day Lithuania, the family members include also her younger brother a famous Polish mathematician Karol Borsuk of the University of Warsaw and his only daughter a Polish paleontologist and phylogeneticist Maria Magdalena Borsuk-Białynicka who is a Full Professor at the Roman Kozłowski Institute of Paleobiology of the PAS. Grzegorczyk's other kins are also her husband a Polish algebraic geometer Andrzej Szczepan Białynicki-Birula who pioneered in differential algebra and is both a full member of the PAS and a Full Professor at the University of Warsaw, his older brother a Polish theoretical physicist Iwo Białynicki-Birula who is a full member of the PAS with the positions of a Full Professor at the Center for Theoretical Physics of the PAS and a professor emeritus at the University of Warsaw, and his wife a Polish theoretical physicist Zofia Białynicka-Birula née Wiatr who is a Full Professor at the Institute of Physics of the PAS. Grzegorczyk had a daughter and a son. His son Tomasz Grzegorczyk, who was born in 1954 and in 2003 authored the French-Polish translation of a classical academic book in pedagogical ethics Le choix d'éduquer: éthique et pédagogie (1991) by a French theorist of education Philippe Meirieu, studied psychology at the University of Warsaw and at this time was an oppositionist student activist and under the pseudonym Grzegorz Tomczyk collaborated with the Workers' Defense Committee (KOR), a major dissident and civil organization which was founded in June–September 1976 by dominantly Jacek Kuroń as a result of Piotr Jaroszewicz's government crackdown on strikes and imprisonment of striking workers under the 1970–1980 First Secretary of the KC PZPR Edward Gierek when the communist Poland had come into a multi-aspect economic crisis, was intellectually inspired by Kołakowski's 1971 essay Tezy o nadziei i beznadziejności (Theses on hope and hopelessness) published in the No. 6 of the Kultura paryska (Paris Culture) who himself in 1977–1980 officially represented the KOR in the United Kingdom and was responsible for its contacts with the Polish emigration of 1968 to give the idea of creation of the Free Trade Unions of the Coast (WZZW) in April 1978 which by the Gdańsk Agreement in August 1980 generated the Independent Self-governing Labour Union Solidarity which finished the Gierek Decade. 
Grzegorczyk's son-in-law a Roman Catholic journalist and activist Marcin Przeciszewski is a historian of the Roman Catholic Church graduated at the University of Warsaw who also studied at the Institute of Historical geography of the KUL, and who since 1993 has been the editor-in-chief and the president of the Catholic Information Agency (KAI), is a consulor to the Polish Episcopal Conference's Council for Mass Media and the president to the Polish Episcopal Conference's Foundation for Exchange of Religious Information, since 2003 has been a co-organizer of the cross-cultural Gniezno congresses, former member of the program board of the state-funded and government-controlled Polish Television (TVP S.A.). His father a Polish economist and nationalist activist Tadeusz Przeciszewski was an insurgent of the Warsaw Uprising under the pseudonym Michał, he served at the Home Army in the rank of Corporal Officer Cadet, in 1939 was mobilized as a soldier to the World War II and served in Wołyń, in 1940 joined the National Party and co-organized the Interwar period's nationalist Youth of Grand Poland, few months of 1945 was imprisoned in the Soviet NKVD forced labour camp Kashai in the Sverdlovsk Oblast, in 1945 got a magister degree at the Faculty of Law of the KUL, in 1946 graduated at the Warsaw School of Economics and was appointed an assistant at the Department of Political Economics of the Faculty of Law of the University of Warsaw where in 1948 got a doctoral degree for a thesis on J.M. Keynes' economics, in November 1948 – December 1953 was imprisoned for creation of a massive Catholic-nationalist movement after the common trial with his future wife Hanna Iłowiecka-Przeciszewska and Wiesław Chrzanowski who in 1991–1993 was the Marshal of the Sejm of the post-communist Poland, in 1954 co-created the Catholic quarterly Więź (The Tie) along with Tadeusz Mazowiecki who in 1989–1991 was the first Prime Minister and a Catholic activist Janusz Zabłocki who was a member of the PAX Association in 1950–1955 and the Club of Catholic Intelligentsia in 1957–1976, in 1957 was appointed an assistant professor at the Faculty of Political Economics of the University of Warsaw and in 1963 completed habilitation procedure therein, in 1964 for declining to join the communist party was forced to delegation to the Faculty of Economics of the Maria Curie-Skłodowska University (MCSU) in Lublin as a docent and lectured at the KUL, in 1965–1992 was the founder and head of the Section of Planning and Economic Politics at MCSU, in 1966–1967 stayed at the University of Paris and in 1975 at the Johns Hopkins University in Baltimore, in 1971 was elected the chairman of the Section of Cultural Economics at the Warsaw branch of the Polish Economic Society, in 1973–1979 was a member of the Committee of Work and Social Politics of the PAS, in 1981–1996 was a member of the Committee of Economic Sciences of the PAS, in 1972 was appointed an associate professor and in 1982 was appointed a Full Professor, in 1971–1989 was a member of a centrist political party Alliance of Democrats where attempted to create a Christian-social fraction, in 1990 was elected the chairman of the reactivated Christian Democratic Labour Party, was awarded by the Knight's Cross and the Officer's Cross of the Order of Polonia Restituta as well the Home Army Cross. 
His wife Hanna Iłowiecka-Przeciszewska was a Catholic and social activist who studied pedagogy at the University of the Western Lands, in 1948–1952 was imprisoned by the communist authorities, since 1956 was a member of the Warsaw branch of the Club of Catholic Intelligentsia and its many-year president, and was awarded by the Gold Cross of Merit. One of Grzegorczyk's publicly known grandchildren are a Polish physician-internist Franciszek Grzegorczyk who has worked for a few hospitals in Warsaw, including the academic Institute of Tuberculosis and Lung Diseases which was earlier the former Wolski Hospital, an economist Jacek Przeciszewski who graduated at the Warsaw School of Economics and is an activist of the Warsaw branch of the Club of Catholic Intelligentsia, and a lawyer Jan Przeciszewski. Grzegorczyk died of natural causes in Warsaw on 20 March 2014 at the age of 91. His body is buried in the Cemetery of Pruszków. Famous quotes "The achievements of Alfred Tarski were the most brilliant result of a favorable cultural entanglement which took place in Poland in the first half of the twentieth century." - from the book Alfred Tarski: Life and Logic (2008) by Anita Burdman Feferman and Solomon Feferman Selected publications Books Grzegorczyk, Andrzej (2003): Psychiczna osobliwość człowieka. Wydawnictwo Naukowe Scholar, Warszawa Grzegorczyk, Andrzej (2001): Europa: Odkrywanie sensu istnienia. Studium Generale Europa: Instytut Politologii Uniwersytet Kardynała Stefana Wyszyńskiego, Warszawa Grzegorczyk, Andrzej (1997): Logic – a Human Affair. Wydawnictwo Naukowe Scholar, Warszawa Grzegorczyk, Andrzej (1993): Życie jako wyzwanie: Wprowadzenie w filozofię racjonalistyczną. Wydawnictwo Instytutu Filozofii i Socjologii Polskiej Akademii Nauk, Warszawa in Ukrainian: Життя як виклик: Вступ до раціоналістичної філософії. Wydawnictwo Naukowe Scholar, Warszawa & ВНТЛ, Львів, 1997 in Russian: Жизнь как вызов: Введение в рационалистическую философию. «Вузовская книга», Москва, 2006 Grzegorczyk, Andrzej (1989): Mała propedeutyka filozofii naukowej. Instytut Wydawniczy "Pax", Warszawa Grzegorczyk, Andrzej (1989): Etyka w doświadczeniu wewnętrznym. Instytut Wydawniczy "Pax", Warszawa Grzegorczyk, Andrzej (1986): Moralitety. Instytut Wydawniczy "Pax", Warszawa Grzegorczyk, Andrzej (1983): Próba treściowego opisu świata wartości i jej etyczne konsekwencje. Zakład Narodowy imienia Ossolińskich: Wydawnictwo Polskiej Akademii Nauk, Wrocław Grzegorczyk, Andrzej (1979): Filozofia czasu próby. Éditions du Dialogue, Paris (2nd edition 1984, Instytut Wydawniczy "Pax", Warszawa) Grzegorczyk, Andrzej (1971): Zarys arytmetyki teoretycznej. Państwowe Wydawnictwo Naukowe, Warszawa Grzegorczyk, Andrzej (1969): An Outline of Mathematical Logic: Fundamental Results and Notions Explained with All Details. D. Reidel, Boston Grzegorczyk, Andrzej (1967): Basic notes in foundations. Instytut Matematyki PAN, Warszawa Grzegorczyk, Andrzej (1963): Schematy i człowiek: Szkice filozoficzne. Wydawnictwo Znak, Kraków Grzegorczyk, Andrzej (1961, 1973, 1975, 1981, 1984): Zarys logiki matematycznej. Państwowe Wydawnictwo Naukowe, Warszawa Grzegorczyk, Andrzej (1961): Fonctions Récursives. Gauthier-Villars, Paris Grzegorczyk, Andrzej (1957): Zagadnienia rozstrzygalności. Państwowe Wydawnictwo Naukowe, Warszawa Grzegorczyk, Andrzej (1955, 1958, 1961, 2010): Logika popularna: Przystępny zarys logiki zdań. Państwowe Wydawnictwo Naukowe, Warszawa in Czech: Populární logika. 
Státní Nakladatelství Politické Literatury, Praha, 1957 in Russian: Популярная логика: Общедоступный очерк логики предложений, «Наука", Москва, 1965 Journal articles Grzegorczyk, Andrzej (2013): Wizja wartości i dramat wyboru wartości w myśli europejskiej. Wspólnotowość i Postawa Uniwersalistyczna: Rocznik PTU 2012–2013, Number 8, pp. 5–20 Grzegorczyk, Andrzej (2012): Światopoglądowa integracja ludzkiej wiedzy. Przegląd Filozoficzny – Nowa Seria, Volume 21, Issue 2, pp. 29–48 Grzegorczyk, Andrzej (2011): Filozofia logiki I formalna logika niesymplifikacyjna. Zagadnienia Naukoznawstwa, Volume 47, Issue 190, pp. 445–451 (Errata: Zagadnienia Naukoznawstwa, Volume 48, Issue 194, p. 318) Grzegorczyk, Andrzej (2010): Uczciwość w nauce. Próba podsumowania (The moral integrity of the scientific research). Kwartalnik NAUKA, Number 3, pp. 194–200 Przełęcki, Marian; Grzegorczyk, Andrzej; Jadacki, Jacek Juliusz; Brożek, Anna; Baranowska, Małgorzata Maria; Stróżewski, Władysław Antoni (2009): Inspired by the Bible (Z inspiracji biblijnej). Kwartalnik Filozoficzny, Volume 37, Issue 2, pp. 137–160 Grzegorczyk, Andrzej (2008): Prawdziwość cecha ważna, łatwa do określenia, trudniejsza do osiągnięcia (Felieton filozoficzny). Przegląd Humanistyczny, Volume 52, Number 2, Issue 407, pp. 71–81 Stępień, Antoni Bazyli; Krajewski, Radosław; Grzegorczyk, Andrzej (2005): Gdzie ta nasza filozofia? – Dyskusji ciąg dalszy. Miesięcznik Znak, Issue 602–603, pp. 107–114 Heller, Michał Kazimierz; Chwedeńczuk, Bohdan; Szahaj, Andrzej Jarosław; Przełęcki, Marian; Buczyńska-Garewicz, Hanna; Sady, Wojciech Henryk; Bołtuć, Piotr; Piłat, Robert; Grzegorczyk, Andrzej; Koj, Leon Józef; Porębski, Czesław; Ziemiński, Ireneusz; Marciszewski, Witold (2005): Gdzie ta nasza filozofia? – ankieta. Miesięcznik Znak, Issue 600, pp. 29–54 Grzegorczyk, Andrzej (2005): Undecidability without Arithmetization. Studia Logica: An International Journal for Symbolic Logic, Volume 79, Issue 2, pp. 163–230 Grzegorczyk, Andrzej (2004): A Philosophy for That Time: The Philosophy of Selflessness. Dialogue and Universalism, Volume 14, Issue 5–6, pp. 167–171 Grzegorczyk, Andrzej (2004): Decidability without Mathematics. Annals of Pure and Applied Logic, Volume 126, Issue 1–3, pp. 309–312 Grzegorczyk, Andrzej (2003): Czasy i wyzwania. Wspólnotowość i Postawa Uniwersalistyczna: Rocznik PTU 2002–2003, Number 3, pp. 5–20 Grzegorczyk, Andrzej (2002): Europe: discovering the meaning of existence. Dialogue and Universalism, Issue 6–7, pp. 111–126 Grzegorczyk, Andrzej (2000): Racjonalizm europejski jako sposób myślenia. Wspólnotowość i Postawa Uniwersalistyczna: Rocznik PTU 2000–2001, Number 2, pp. 5–8 Grzegorczyk, Andrzej (1999): Antropologiczne podstawy edukacji globalnej. Forum oświatowe, Number 1–2, Issue 20–21, pp. 5–13 Grzegorczyk, Andrzej (1999): The vocation of Europe. Dialogue and Universalism, Issue 5–6, pp. 11–41 Grzegorczyk, Andrzej; Morokhoyeva, Zoya Petrovna; Zapaśnik, Stanisław (1998): Universalistic Social Education. Dialogue and Universalism, Volume 8, Number 5–6, pp. 159–163 Grzegorczyk, Andrzej (1993): Dekalog rozumu. Wiedza i Życie, Volume 3, pp. 18–20 Grzegorczyk, Andrzej (1989): Działania pokojowe a postawy etyczne. Studia Philosophiae Christianae, Issue 1, pp. 141–159 Grzegorczyk, Andrzej (1987): Wierność i świadectwo. Studia Filozoficzne, Issue 5, pp. 33–39 Grzegorczyk, Andrzej (1983): Pojęcie godności jako element poznawczej regulacji ludzkiego zachowania. Studia Filozoficzne, Issue 8–9, pp. 
57–76 Grzegorczyk, Andrzej (1981): Odpowiedzialność filozofów. Więź, Volume 23, Issue 5, pp. 49–56 Grzegorczyk, Andrzej (1981): My Version of the Christian Vision of Sense. Dialectics and Humanism: The Polish Philosophical Quarterly, Volume 8, Issue 3, pp. 51–53 Grzegorczyk, Andrzej (1977): On Certain Formal Consequences of Reism. Dialectics and Humanism: The Polish Philosophical Quarterly, Volume 1, pp. 75–80 Grzegorczyk, Andrzej (1974): Przeżycie transcendencji a mity kultury. Znak, Issue 236, pp. 224–235 Grzegorczyk, Andrzej (1971–1972): An unfinitizability proof by means of restricted reduced power. Fundamenta Mathematicae, Volume 73, pp. 37–49 Grzegorczyk, Andrzej (1971): Klasyczne, relatywistyczne i konstruktywistyczne sposoby uznawania twierdzeń matematycznych. Studia Logica: An International Journal for Symbolic Logic, Volume 27, Issue 1, pp. 151–161 Grzegorczyk, Andrzej (1968): Logical uniformity by decomposition and categoricity in . Bulletin de l’Académie Polonaise des Sciences: Série des sciences mathématiques, astronomiques, et physiques, Volume 16, Issue 9, pp. 687–692 Grzegorczyk, Andrzej (1968): Assertions depending on time and corresponding logical calculi. Compositio Mathematica, Volume 20, pp. 83–87 Grzegorczyk, Andrzej (1967): Non-classical propositional calculi in relation to methodological patterns of scientific investigation. Studia Logica: An International Journal for Symbolic Logic, Volume 20, Issue 1, pp. 117–132 Grzegorczyk, Andrzej (1967): Some relational systems and the associated topological spaces. Fundamenta Mathematicae, Volume 60, pp. 223–231 Grzegorczyk, Andrzej (1965): Nasi bracia mariawici. Więź, Volume 12, Issue 92, pp. 40–45 Grzegorczyk, Andrzej (1964): Recursive objects in all finite types. Fundamenta Mathematicae, Volume 54, pp. 73–93 Grzegorczyk, Andrzej (1964): A note on the theory of propositional types. Fundamenta Mathematicae, Volume 54, pp. 27–29 Grzegorczyk, Andrzej (1964): A philosophically plausible formal interpretation of intuitionistic logic. Indagationes Mathematicae, Volume 26, pp. 596–601 Grzegorczyk, Andrzej (1964): Sprawdzalność empiryczna a matematyczna. In Kotarbiński, Tadeusz; Dąmbska, Izydora (editors) (1964): Rozprawy logiczne: Księga pamiątkowa ku czci Kazimierza Ajdukiewicza. Państwowe Wydawnictwo Naukowe, Warszawa, pp. 73–76. Republished as translation Mathematical and Empirical Verifiability in Przełęcki, Marian; Wójcicki, Ryszard (editors) (1977): Twenty-Five Years of Logical Methodology in Poland. D. Reidel, Boston & PWN–Polish Scientific Publishers, Warszawa, pp. 165–169 Grzegorczyk, Andrzej (1963): Zastosowanie logicznej metody wyodrębniania formalnej dziedziny rozważań w nauce, technice i gospodarce. Studia Filozoficzne, Issue 3–4, pp. 63–75 Mazur, Stanisław; edited by Grzegorczyk, Andrzej and Rasiowa, Helena (1963): Computable analysis. Rozprawy Matematyczne, Volume 33 Grzegorczyk, Andrzej (1962): Uzasadnianie aksjomatów teorii matematycznych. Studia Logica: An International Journal for Symbolic Logic, Volume 13, Issue 1, pp. 197–202 Grzegorczyk, Andrzej (1962): On the concept of categoricity. Studia Logica: An International Journal for Symbolic Logic, Volume 13, Issue 1, pp. 39–66 Grzegorczyk, Andrzej (1962): A kind of categoricity. Colloquium Mathematicae, Volume 9, pp. 183–187 Grzegorczyk, Andrzej (1962): A theory without recursive models. Bulletin de l’Académie Polonaise des Sciences: Série des sciences mathématiques, astronomiques, et physiques, Volume 10, pp. 
63–69 Grzegorczyk, Andrzej (1962): An example of two weak essentially undecidable theories F and F*. Bulletin de l’Académie Polonaise des Sciences: Série des sciences mathématiques, astronomiques, et physiques, Volume 10, pp. 5–9 Grzegorczyk, Andrzej (1961): Le traitement axiomatique de la notion de prolongement temporel. Studia Logica: An International Journal for Symbolic Logic, Volume 11, Issue 1, pp. 31–34 Grzegorczyk, Andrzej (1961): Aksjomatyczne badanie pojęcia przedłużenia czasowego. Studia Logica: An International Journal for Symbolic Logic, Volume 11, Issue 1, pp. 23–30 Grzegorczyk, Andrzej; Mostowski, Andrzej; Ryll-Nardzewski, Czesław (1961): Definability of sets in models of axiomatic theorems. Bulletin de l'Académie polonaise des sciences. Série des sciences mathématiques, astronomiques, et physiques, Volume 9, pp. 163–167 Grzegorczyk, Andrzej (1961): Metafizyka rzeczy żywych. Znak, Issue 80, pp. 154–162 Grzegorczyk, Andrzej (1960): Metafizyka bez spekulacji. Znak, Issue 77, pp. 1417–1421 Grzegorczyk, Andrzej (1960): O nie najlepszym rodzaju apologetyki. Znak, Issue 76, pp. 1343–1344 Grzegorczyk, Andrzej (1960): Axiomatizability of Geometry without Points. Synthese, Volume 12, pp. 228–235 Grzegorczyk, Andrzej (1959): Analiza filozoficzna, kontemplacja, wartościowanie. Studia Filozoficzne, Volume 5, Issue 14, pp. 161–173 Grzegorczyk, Andrzej (1958): Między dyskursywnym a kontemplacyjnym myśleniem. Znak, Issue 43, pp. 36–57 Grzegorczyk, Andrzej; Mostowski, Andrzej; Ryll-Nardzewski, Czesław (1958): The classical and the ω-complete arithmetic. Journal of Symbolic Logic, Volume 23, pp. 188–206 Grzegorczyk, Andrzej (1957): Uwagi z historii logiki. Myśl Filozoficzna, Volume 1, Issue 27, pp. 164–176 Grzegorczyk, Andrzej (1957): On the definitions of computable real continuous functions. Fundamenta Mathematicae, Volume 44, pp. 61–71 Grzegorczyk, Andrzej (1956): Some proofs of undecidability of arithmetic. Fundamenta Mathematicae, Volume 43, pp. 166–177 Mostowski, Andrzej; Grzegorczyk, Andrzej; Jaśkowski, Stanisław; Łoś, Jerzy; Mazur, Stanisław; Rasiowa, Helena; Sikorski, Roman (1955): The present state of investigations on the foundations of mathematics. Rozprawy Matematyczne, Volume 9 Grzegorczyk, Andrzej (1955): Uwagi o rozumieniu praw logiki. Myśl Filozoficzna, Volume 1, Issue 15, pp. 206–221 Grzegorczyk, Andrzej (1955): Uwagi o nauczaniu logiki. Myśl Filozoficzna, Volume 1, Issue 4, pp. 174–177 Grzegorczyk, Andrzej (1955): On the definition of computable functionals. Fundamenta Mathematicae, Volume 42, pp. 232–239 Grzegorczyk, Andrzej (1955): Computable functionals. Fundamenta Mathematicae, Volume 42, pp. 168–202 Grzegorczyk, Andrzej (1955): Elementarily definable analysis. Fundamenta Mathematicae, Volume 41, pp. 311–338 Grzegorczyk, Andrzej (1955): The Systems of Leśniewski in Relation to Contemporary Logical Research. Studia Logica: An International Journal for Symbolic Logic, Volume 3, pp. 77–95 Grzegorczyk, Andrzej (1953): Some classes of recursive functions. Rozprawy Matematyczne, Volume 4, pp. 1–45 Grzegorczyk, Andrzej (1953): Konferencja logików. Myśl Filozoficzna, Volume 1, Issue 7, pp. 340–349 Grzegorczyk, Andrzej; Kuratowski, Kazimierz (1952): On Janiszewski's property of topological spaces. Annales de la Société Polonaise de Mathématique, Volume 25, pp. 69–82 Grzegorczyk, Andrzej (1951): Undecidability of Some Topological Theories. Fundamenta Mathematicae, Volume 38, pp. 137–152 Grzegorczyk, Andrzej (1950): The Pragmatic Foundations of Semantics. Synthese, Volume 8, pp. 
Grzegorczyk, Andrzej (1950): The Pragmatic Foundations of Semantics. Synthese, Volume 8, pp. 300–324. Republished in Przełęcki, Marian; Wójcicki, Ryszard (editors) (1977): Twenty-Five Years of Logical Methodology in Poland. D. Reidel, Boston & PWN–Polish Scientific Publishers, Warszawa, pp. 135–164
Grzegorczyk, Andrzej (1948): Próba ugruntowania semantyki języka opisowego. Przegląd Filozoficzny, Volume 44, Issue 4, pp. 348–371
Other papers
Grzegorczyk, Andrzej; Zdanowski, Konrad (2008): Undecidability and Concatenation. In Ehrenfeucht, Andrzej; Marek, Victor Witold; Srebrny, Marian (editors) (2008): Andrzej Mostowski and Foundational Studies. IOS Press, Amsterdam, pp. 72–91
Grzegorczyk, Andrzej (2006): Empiryczne wyróżnienie duchowości. In Grzegorczyk, Anna; Sójka, Jacek; Koschany, Rafał (editors) (2006): Fenomen duchowości. Wydawnictwo Naukowe UAM, Poznań, pp. 29–34
Grzegorczyk, Andrzej (2006): Prawda przemieniająca. In Grzegorczyk, Anna; Sójka, Jacek; Koschany, Rafał (editors) (2006): Fenomen duchowości. Wydawnictwo Naukowe UAM, Poznań, pp. 247–266
Grzegorczyk, Andrzej (2005): Tezy europejskiej samoświadomości. In Góralski, Andrzej (editor) (2005): Życie pełne, dobrze urządzone a kultura doraźności: intelektualiści i młoda inteligencja w Europie przemian. Materiały VIII Międzynarodowej Konferencji PTU, Warszawa, 9–11 maja 2005 roku. Wspólnotowość i Postawa Uniwersalistyczna: Rocznik PTU 2004–2005, Number 4, pp. 18–25
Grzegorczyk, Andrzej (2004): Używanie "rozumu" a stan aktualny ludzkości. In Wojnar, Irena; Kubin, Jerzy (editors) (2004): Bogdan Suchodolski w stulecie urodzin – trwałość inspiracji: zbiór studiów. Komitet Prognoz "Polska w XXI Wieku" przy Prezydium Polskiej Akademii Nauk, Warszawa, pp. 257–274
Grzegorczyk, Andrzej (1999): Czasy i wyzwania. In Łaszczyk, Jan (editor) (1999): Pedagogika czasu przemian: praca zbiorowa. Wydawnictwo Zakładu Metodologii Wyższej Szkoły Pedagogiki Specjalnej, Warszawa, pp. 9–21
Grzegorczyk, Andrzej (1996): Non-violence: wychowanie do negocjacji, demokracji i współistnienia. In Wojnar, Irena; Kubin, Jerzy (editors) (1996): Edukacja wobec wyzwań XXI wieku: zbiór studiów. Komitet Prognoz "Polska w XXI Wieku" przy Prezydium Polskiej Akademii Nauk & Dom Wydawniczy Elipsa, Warszawa, pp. 57–92
Grzegorczyk, Andrzej (1993): Dekalog rozumu. In Omyła, Mieczysław (editor) (1994): Nauka i język: Marianowi Przełęckiemu w siedemdziesiątą rocznicę urodzin. Biblioteka Myśli Semiotycznej, Volume 32, edited by Pelc, Jerzy. Wydział Filozofii i Socjologii Uniwersytetu Warszawskiego, Warszawa & Znak-Język-Rzeczywistość: Polskie Towarzystwo Semiotyczne, pp. 81–85
Grzegorczyk, Andrzej (1991): The Principle of Transcendence and the Foundation of Axiology. In Geach, Peter; Holowka, Jacek (editors) (1991): Logic and Ethics. Kluwer Academic Publishers, Dordrecht, pp. 71–78
Grzegorczyk, Andrzej (1990): Moralistyczna wizja dziejów. In Ajnenkiel, Andrzej; Kuczyński, Janusz; Wohl, Andrzej (editors) (1990): Sens polskiej historii. Program Badań i Współtworzenia Filozofii Pokoju Uniwersytetu Warszawskiego, Warszawa, pp. 48–60
Grzegorczyk, Andrzej (1989): Filozofia człowieka a pedagogika. In Doktór, Kazimierz; Hajduk, Edward (editors) (1989): Humanizm, prakseologia, pedagogika: materiały konferencji zorganizowanej dla upamiętnienia 100 rocznicy urodzin Tadeusza Kotarbińskiego. Wydawnictwo Instytutu Filozofii i Socjologii Polskiej Akademii Nauk, Warszawa & Zakład Narodowy imienia Ossolińskich: Wydawnictwo Polskiej Akademii Nauk, Wrocław, pp. 199–208
Grzegorczyk, Andrzej (1974): Axiomatic theory of enumeration. In Fenstad, Jens Erik; Hinman, Peter Greayer (editors) (1974): Generalized Recursion Theory: Proceedings of the 1972 Oslo Symposium. North Holland, Amsterdam, pp. 429–436
Grzegorczyk, Andrzej (1970): Decision procedures for theories categorical in ℵ0. In Laudet, Michel; Lacombe, Daniel; Nolin, Louis; Schützenberger, Marcel (editors) (1970): Symposium on Automatic Demonstration Held at Versailles/France, December 1968. Springer, Heidelberg, pp. 87–100
Grzegorczyk, Andrzej (1961): Axiomatizability of Geometry without Points. In Freudenthal, Hans (editor) (1961): The Concept and the Role of the Model in Mathematics and Natural and Social Sciences: Proceedings of the Colloquium sponsored by the Division of Philosophy of Sciences of the International Union of History and Philosophy of Sciences organized at Utrecht, January 1960. D. Reidel, Dordrecht, pp. 104–111
Grzegorczyk, Andrzej (1960): Przerosty cywilizacyjne a wartości twórcze. In Czeżowski, Tadeusz (editor) (1960): Charisteria: Rozprawy filozoficzne złożone w darze Władysławowi Tatarkiewiczowi w siedemdziesiątą rocznicę urodzin. Państwowe Wydawnictwo Naukowe, Warszawa, pp. 97–107
Grzegorczyk, Andrzej (1959): O pewnych formalnych konsekwencjach reizmu. In Kotarbińska, Janina; Ossowska, Maria; Pelc, Jerzy; Przełęcki, Marian; Szaniawski, Klemens (editors) (1959): Fragmenty filozoficzne, Seria Druga: Księga Pamiątkowa ku Uczczeniu Czterdziestolecia Pracy Nauczycielskiej w Uniwersytecie Warszawskim Profesora Tadeusza Kotarbińskiego. Państwowe Wydawnictwo Naukowe, Warszawa, pp. 7–14
Grzegorczyk, Andrzej (1959): Some Approaches to Constructive Analysis. In Heyting, Arend (editor) (1959): Constructivity in Mathematics: Proceedings of the Colloquium held at Amsterdam, 1957. North Holland, Amsterdam, pp. 43–61
Grzegorczyk, Andrzej (1949): Un Essai d'etablir la Sémantique du Langage Descriptif. In Beth, Evert Willem; Pos, Hendrik Josephus; Hollak, Johannes Hermanus Antonius (editors) (1949): Proceedings of the Tenth International Congress of Philosophy (Amsterdam, 11–18 August 1948), Volume 2. North Holland, Amsterdam, pp. 776–778
External links Andrzej Grzegorczyk profile at Mathematics Genealogy Project Andrzej Grzegorczyk's Warsaw Uprising biogram (in Polish) Andrzej Grzegorczyk's obituary (in Polish) by Polish Mathematical Society Andrzej Grzegorczyk's profile by Calculemus 2012 Interview with Andrzej Grzegorczyk (in Polish) Andrzej Grzegorczyk's 90th birthday at the Warsaw branch of the Polish Philosophical Society Andrzej Grzegorczyk's lecture on 17 May 2011, the University of Opole, Poland 2006 Appeal of the Polish Science Representatives to the Minister of Environment Jan Szyszko in Defense of the Tatra Mountains (in Polish) 1922 births 20th-century mathematicians 21st-century mathematicians Computability theorists Mathematical logicians Polish mathematicians Polish logicians Warsaw Uprising insurgents Home Army members Knights of the Order of Polonia Restituta Officers of the Order of Polonia Restituta Recipients of the Order of Polonia Restituta Christian ethicists Catholic philosophers Polish Christians 2014 deaths 20th-century Polish philosophers 21st-century Polish philosophers
22653119
https://en.wikipedia.org/wiki/Dynamic%20provisioning%20environment
Dynamic provisioning environment
Dynamic provisioning environment (DPE) is a simplified term for a complex networked server computing environment where server computing instances or virtual machines (VMs) are provisioned (deployed or instantiated) from a centralized administrative console or client application by the server administrator, network administrator, or any other enabled user. The server administrator or network administrator has the ability to parse out control of the provisioning environment to users or accounts in the network environment (end users, organizational units, network accounts, other administrators). The provisioned servers or VMs can be inside the firewall, outside the firewall, or hosted, depending on how the supporting pool of networked server computing resources is defined. From the perspective of the end user/client, the requested server is deployed automatically. From an easy-to-use client or desktop application, any administrator or designated end user is able to easily instantiate a server instance or virtual machine (VM) instance. The server instance is provisioned for the eligible administrator or end user without anyone having to physically touch the supporting server infrastructure. While defining the server or VM to instantiate, the client application gives the end user or administrator the ability to define the operating system and applications that will run within the server instance to be provisioned automatically. Desktop dynamic provisioning environment (desktop DPE) or client dynamic provisioning environment (client DPE) is the scenario where a DPE is being used to provision client computing or desktop computing instances. Origin of DPE (dynamic provisioning environment): first documented in 2007 in Cambridge, Massachusetts. DPE can be a vendor-independent environment or an environment defined by a specific vendor. A dynamic provisioning environment is flexible: it can be defined as supporting a set of heterogeneous applications, defined as supporting a single application, or created in an appliance model to deploy a discrete application customized for a specific usage scenario. From an operating system perspective, a DPE server infrastructure can exist on one server operating system (homogeneous server infrastructure) or on a defined set of servers with different operating systems (heterogeneous server infrastructure). The server instances or VMs provisioned by the DPE could run one specific server operating system or multiple server operating systems. The same idea applies to client systems instantiated by the DPE, which could run one or multiple client/desktop operating systems. Components of a DPE vary with the density of the computing environment, but would commonly include servers or virtual server instances, a directory server, network connectivity (TCP/IP), a management layer, virtual machine management tools, server provisioning tools, a client application, and a client interface. References Computer networking
21200649
https://en.wikipedia.org/wiki/XtratuM
XtratuM
XtratuM is a bare-metal hypervisor specially designed for embedded real-time systems, available for LEON2/3/4 (SPARC v8) and ARM v7 processors. It has been developed by the Universidad Politécnica de Valencia (Spain) with contributions from Lanzhou University (China). XtratuM is released as free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2 or any later version. Professional versions are commercialized by fentISS under a proprietary license. XtratuM is a hypervisor designed for embedded systems to meet safety-critical real-time requirements. It provides a framework to run several operating systems (or real-time executives) in a robust partitioned environment. XtratuM can be used to build a MILS (Multiple Independent Levels of Security) architecture.
History
The name XtratuM derives from the word stratum. In geology and related fields it means: a layer of rock or soil with internally consistent characteristics that distinguishes it from contiguous layers. In order to stress the tight relation with Linux and the open-source movements, the "S" was replaced by "X". XtratuM would be the first layer of software (the one closest to the hardware), which provides a solid basis for the rest of the system. XtratuM 1.0 was initially designed as a substitute for the RTLinux HAL (Hardware Abstraction Layer) to meet temporal and spatial partitioning requirements. The goal was to virtualize the essential hardware devices to execute several OSes concurrently, with at least one of these OSes being an RTOS. The other hardware devices (including booting) were left to a special domain, named the root domain. After this experience, it was redesigned to be independent of Linux and bootable. The result is XtratuM 2.0, a type 1 hypervisor that uses para-virtualization. The para-virtualized operations are as close to the hardware as possible. Therefore, porting an operating system that already works on the native system is a simple task: replace some parts of the operating system HAL with the corresponding hypercalls.
Overview
The design of a hypervisor for critical real-time embedded systems follows these criteria:
Strong temporal isolation: fixed cyclic scheduler (a minimal sketch appears below).
Strong spatial isolation: all partitions are executed in processor user mode, and do not share memory.
Basic resource virtualization: clock and timers, interrupts, memory, CPU and special devices.
Real-time scheduling policy for partition scheduling.
Efficient context switch for partitions.
Deterministic hypercalls (hypervisor system calls).
Health monitoring support.
Robust and efficient inter-partition communication mechanisms (sampling and queuing ports).
Low overhead.
Small size.
Static system definition via configuration file (XML).
In the case of embedded systems, particularly avionics systems, the ARINC 653 standard defines a partitioning scheme. Although this standard was not designed to describe how a hypervisor must operate, some parts of the model are quite close to the functionality provided by a hypervisor. The XtratuM API and internal operations resemble the ARINC 653 standard, but XtratuM is not an ARINC 653 compliant system. The standard relies on the idea of a separation kernel defining both the API and operations of the partitions and also how the threads or processes are managed inside each partition. The XtratuM hypervisor supports the LEON 2/LEON 3/LEON 4 (SPARCv8) and Cortex R4/Cortex R5/Cortex A9 (ARMv7) architectures.
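To make the first criterion above concrete, the following minimal sketch models a fixed cyclic (major-frame) partition schedule of the sort a partitioning hypervisor defines statically at configuration time. The partition names and slot layout are invented for illustration; this is a plain-Python model of the idea, not XtratuM's actual XML schema or API.

from dataclasses import dataclass

@dataclass
class Slot:
    partition: str    # which partition owns this window
    duration_ms: int  # fixed budget, decided before boot

# One major frame; at run time it simply repeats, so each partition's
# CPU time is fixed in advance: the source of strong temporal isolation.
MAJOR_FRAME = [
    Slot("flight_control", 5),
    Slot("telemetry", 3),
    Slot("flight_control", 5),
    Slot("payload", 7),
]

def dispatch(frames):
    t = 0
    for _ in range(frames):
        for slot in MAJOR_FRAME:
            print(f"t={t:3d} ms: run {slot.partition} for {slot.duration_ms} ms")
            t += slot.duration_ms

dispatch(2)  # replay two major frames deterministically

Because the frame is fixed before boot and merely repeats, no partition's behaviour can change the time another partition receives, which is the essence of the strong temporal isolation criterion.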
XtratuM supports the following execution environments:
XAL (XtratuM Abstraction Layer) for bare-C applications
POSIX PSE51 Partikle RTOS
ARINC-653 P1 compliant LITHOS RTOS
ARINC-653 P4 compliant uLITHOS runtime
Ada Ravenscar profile ORK+
RTEMS
Linux
See also Kernel-based Virtual Machine L4 microkernels Xen Paravirtualization Nanokernel References External links XtratuM Official Page fentISS Free virtualization software Virtualization-related software for Linux
301236
https://en.wikipedia.org/wiki/VIC%20cipher
VIC cipher
The VIC cipher was a pencil and paper cipher used by the Soviet spy Reino Häyhänen, codenamed "VICTOR". If the cipher were to be given a modern technical name, it would be known as a "straddling bipartite monoalphabetic substitution superenciphered by modified double transposition." However, by general classification it is part of the Nihilist family of ciphers. When it was first discovered, it was arguably the most complex hand-operated cipher ever seen. The initial analysis done by the American National Security Agency (NSA) in 1953 did not absolutely conclude that it was a hand cipher, but its placement in a hollowed-out 5¢ coin implied it could be decoded using pencil and paper. The VIC cipher remained unbroken until more information about its structure was available. Although certainly not as complex or secure as modern computer-operated stream ciphers or block ciphers, in practice messages protected by it resisted all attempts at cryptanalysis by at least the NSA from its discovery in 1953 until Häyhänen's defection in 1957.
A revolutionary leap
The VIC cipher can be regarded as the evolutionary pinnacle of the Nihilist cipher family. The VIC cipher has several important integrated components, including mod 10 chain addition, a lagged Fibonacci generator (a recursive formula used to generate a sequence of pseudorandom digits), a straddling checkerboard, and a disrupted double transposition. Until the discovery of VIC, it was generally thought that a double transposition alone was the most complex cipher an agent, as a practical matter, could use as a field cipher.
History
During World War II, several Soviet spy rings communicated to Moscow Centre using two ciphers which are essentially evolutionary improvements on the basic Nihilist cipher. A very strong version was used by Max Clausen in Richard Sorge's network in Japan, and by Alexander Foote in the Lucy spy ring in Switzerland. A slightly weaker version was used by the Rote Kapelle network. In both versions, the plaintext was first converted to digits by use of a straddling checkerboard rather than a Polybius square. This has the advantage of slightly compressing the plaintext, thus raising its unicity distance, and also allowing radio operators to complete their transmissions more quickly and shut down sooner. Shutting down sooner reduces the risk of the operator being found by enemy radio direction finders. Increasing the unicity distance increases strength against statistical attacks. Clausen and Foote both wrote their plaintext in English, and memorized the 8 most frequent letters of English (to fill the top row of the checkerboard) through the mnemonic phrase "a sin to err" (dropping the second "r"). The standard English straddling checkerboard has 28 character slots, and in this cipher the extra two became "full stop" and "numbers shift". Numbers were sent by a numbers shift, followed by the actual plaintext digits in repeated pairs, followed by another shift. Then, similarly to the basic Nihilist, a digital additive was added in, which was called "closing". However, a different additive was used each time, so finally a concealed "indicator group" had to be inserted to indicate what additive was used. Unlike basic Nihilist, the additive was added by non-carrying addition (digit-wise addition modulo 10), thus producing a more uniform output which doesn't leak as much information. More importantly, the additive was generated not through a keyword, but by selecting lines at random from almanacs of industrial statistics.
Such books were deemed dull enough to not arouse suspicion if an agent was searched (particularly as the agents' cover stories were as businessmen), and to have such high entropy density as to provide a very secure additive. Of course the figures from such a book are not actually uniformly distributed (there is an excess of "0" and "1" (see Benford's Law), and sequential numbers are likely to be somewhat similar), but nevertheless they have much higher entropy density than passphrases and the like; at any rate, in practice they seem never to have been successfully cryptanalysed. The weaker version generated the additive from the text of a novel or similar book (at least one Rote Kapelle member actually used The Good Soldier Schweik). This text was converted to a digital additive using a technique similar to a straddling checkerboard. The ultimate development along these lines was the VIC cipher, used in the 1950s by Reino Häyhänen. By this time, most Soviet agents were instead using one-time pads. However, despite the theoretical perfection of the one-time pad, in practice the pads were broken, while VIC was not. The one-time pads could, however, only be broken when cipher pages were re-used due to logistic problems, and were therefore no longer truly one-time.
Mechanics overview
The secret key for the encryption is the following:
A short Phrase (e.g. the first line of a song)
A Date (in a 6-digit format)
A Personal Number (unique to the agent; a 1- or 2-digit number)
The encryption was also aided by the adversary not knowing a 5-digit Keygroup, which was unique to each message. The Keygroup was not strictly a 'secret' (as it was embedded in-clear in the ciphertext), but it was at a location in the ciphertext that was not known to an adversary. The cipher broadly worked as follows:
Use the secrets above (Phrase, Date, Keygroup and Personal Number) to create a 50-digit block of pseudorandom numbers
Use this block to create the message keys for a Straddling Checkerboard and two Columnar Transpositions
Encrypt the Plaintext message via the straddling checkerboard
Apply two transpositions to the resultant (intermediary) ciphertext: a 'Standard' Columnar Transposition and a Diagonal Columnar Transposition
Insert the Keygroup into the ciphertext, as determined by the Personal Number
Detailed mechanics
Note: this section tracks the calculations by referring to [Line-X] or similar. This is to align with the notation stated in the CIA archive description.
Pseudorandom block derivation
[Line-A]: Generate a random 5-digit Keygroup
[Line-B]: Write the first 5 digits of the secret Date
[Line-C]: Subtract [Line-B] from [Line-A] by modular arithmetic (digit-by-digit, not 'borrowing' any tens from a neighboring column)
[Line-D]: Write out the first 20 letters from the secret Phrase
[Line-E.1&2]: Sequence (see below) the first and second ten characters separately (to get [Line-E.1] & [Line-E.2] respectively)
[Line-F.1]: Write out the 5 digits from [Line-C], then apply Chain Addition (see below) to create five more digits
[Line-F.2]: The digit sequence '1234567890' is written out (under [Line-E.2]) as an aid for encoding when creating [Line-H]
[Line-G]: Addition of [Line-E.1] to [Line-F.1] - this is digit-by-digit by mod-10 arithmetic, i.e. no 'carrying' over tens to the next column
[Line-H]: Encoding (see below) of the digits in [Line-G], using [Line-E.2] as the key
[Line-I]: No [Line-I] is used, presumably to avoid confusion (as 'I' may be misread as a '1' or 'J')
[Line-J]: The Sequencing of [Line-H]
[Lines-K,L,M,N,P]: These are five 10-digit lines created by chain addition of [Line-H]. The last two non-equal digits are added to the agent's Personal Number to determine the key lengths of the two transpositions. (Lines K-to-P are in effect a key-driven pseudorandom block used for the next stage of encryption)
[Line-O]: No [Line-O] is used, presumably to avoid confusion (as 'O' may be misread as a zero or 'Q')
Message key derivation
[Line-Q]: The first 'a' digits extracted from [Lines-K,L,M,N,P] when transposed via [Line-J] (where 'a' is the first value resulting from the addition of the last non-equal digits in [Line-P] to the Personal Number). These are used to key the Columnar Transposition.
[Line-R]: The next 'b' digits extracted (after the 'a' digits have been extracted) from [Lines-K,L,M,N,P] when transposed via [Line-J] (where 'b' is the second value resulting from the addition of the last non-equal digits in [Line-P] to the Personal Number). These are used to key the Diagonal Transposition.
[Line-S]: The Sequencing of [Line-P]; this is used as the key to the Straddling Checkerboard
Example of key generation
Personal Number: 6
Date: 13 Sept 1959 // Moon Landing - 13 Sept 1959 ('139195' - truncated to 6 digits)
Phrase: 'Twas the night before Christmas' // from 'A visit from St. Nicholas' - poem
Keygroup: 72401 // randomly generated
[Line-A]: 72401 // Keygroup
[Line-B]: 13919 // Date - truncated to 5 digits
[Line-C]: 69592 // subtract [Line-B] from [Line-A]
[Line-D]: TWASTHENIG HTBEFORECH // Phrase - truncated to 20 characters
[Line-E]: 8017942653 6013589427 // via Sequencing
[Line-F]: 6959254417 1234567890 // from [Line-C] and chain addition, then '1234567890'
[Line-G]: 4966196060 // add [Line-E.1] to [Line-F.1]
[Line-H]: 3288628787 // encode [Line-G] with [Line-E.2] as key, aided by [Line-F.2]
[Line-J]: 3178429506 // the Sequencing of [Line-H]
[Line-K]: 5064805552 // BLOCK: chain addition of [Line-H] for 50 digits
[Line-L]: 5602850077
[Line-M]: 1620350748
[Line-N]: 7823857125
[Line-P]: 5051328370
The last two non-equal digits are '7' and '0'; added to the Personal Number (6), these mean that the permutation keys are 13 and 6 digits long.
[Line-Q]: 0668005552551 // first 13 digits from block
[Line-R]: 758838 // next 6 digits from block
[Line-S]: 5961328470 // Sequencing of [Line-P]
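The whole derivation above is built from four primitive operations ('false' mod-10 addition and subtraction, Sequencing, Chain Addition, and Digit Encoding), each defined in its own section below. As a compact illustration, the following minimal Python sketch implements the four primitives; the function names are my own, and the assertions simply reproduce values from the worked example above.

def false_add(a, b):
    # Digit-by-digit mod-10 addition, with no 'carrying' of tens.
    return "".join(str((int(x) + int(y)) % 10) for x, y in zip(a, b))

def false_sub(a, b):
    # Digit-by-digit mod-10 subtraction, with no 'borrowing'.
    return "".join(str((int(x) - int(y)) % 10) for x, y in zip(a, b))

def sequence(s):
    # Rank up to 10 characters 1..10 (with '0' standing for 10); '0' sorts
    # after '9' among digits, and ties are numbered left to right.
    key = lambda c: 10 if c == "0" else int(c) if c.isdigit() else ord(c)
    order = sorted(range(len(s)), key=lambda i: (key(s[i]), i))
    ranks = [0] * len(s)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank % 10
    return "".join(map(str, ranks))

def chain_add(seed, n):
    # Extend a digit string to length n by false-adding successive pairs
    # of digits, feeding the output back into the stream.
    d = [int(c) for c in seed]
    i = 0
    while len(d) < n:
        d.append((d[i] + d[i + 1]) % 10)
        i += 1
    return "".join(map(str, d))

def encode_digits(number, key):
    # Replace each digit of `number` with the key digit standing above it
    # when '1234567890' is written out under the key (see Digit encoding).
    table = dict(zip("1234567890", key))
    return "".join(table[c] for c in number)

assert false_sub("72401", "13919") == "69592"                    # [Line-C]
assert sequence("TWASTHENIG") == "8017942653"                    # [Line-E.1]
assert chain_add("69592", 10) == "6959254417"                    # [Line-F.1]
assert false_add("8017942653", "6959254417") == "4966196060"     # [Line-G]
assert encode_digits("4966196060", "6013589427") == "3288628787" # [Line-H]
assert sequence("3288628787") == "3178429506"                    # [Line-J]
assert chain_add("3288628787", 20)[10:] == "5064805552"          # [Line-K]
assert sequence("5051328370") == "5961328470"                    # [Line-S]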
Message encryption
Straddling checkerboard
Once the key has been generated, the first stage of actually encrypting the message is to convert it to a series of digits; this is done via a straddling checkerboard. The key (header row) for the checkerboard is based on [Line-S]. Then a pre-agreed series of common letters is used on the second row. The example below uses the English mnemonic 'AT ONE SIR'; however, the Cyrillic mnemonic used by Häyhänen was 'snegopad', the Russian word for snowfall. The remaining cells are filled with the rest of the alphabet and symbols in order. An example encoding is below:
MESSAGE: 'Attack at dawn. By dawn I mean 0500. Not 0915 like you did last time.'
59956 96459 66583 38765 88665 83376 02538 00005 55000 00080 87319 80000 99911 15558 06776 42881 86667 66675 49976 0287-
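For illustration, such a checkerboard can be constructed in a few lines of Python. Note that the filler alphabet, the use of '.' for the full stop and '/' for the numbers shift, and the treatment of the spaces in the mnemonic as the two blank (prefix) columns are assumptions of this sketch rather than a reconstruction of Häyhänen's actual table, so its output does not match the example digits above.

KEY_ROW = "5961328470"    # checkerboard header digits, e.g. [Line-S] above
MNEMONIC = "AT ONE SIR"   # eight frequent letters; spaces mark blank columns

def build_checkerboard(key_row, mnemonic, filler="BCDFGHJKLMPQUVWXYZ./"):
    # Build a char -> digit-string table.  Mnemonic letters encode as a
    # single digit; the two blank header digits become row prefixes for
    # the remaining 20 characters, which encode as two digits each.
    table, prefixes = {}, []
    for digit, ch in zip(key_row, mnemonic):
        if ch == " ":
            prefixes.append(digit)   # blank column: digit becomes a prefix
        else:
            table[ch] = digit        # high-frequency letter: one digit
    for prefix, row in zip(prefixes, (filler[:10], filler[10:])):
        for digit, ch in zip(key_row, row):
            table[ch] = prefix + digit
    return table

table = build_checkerboard(KEY_ROW, MNEMONIC)
print("".join(table[c] for c in "ATTACKATDAWN"))

The straddling property is what makes the substitution hard to attack: one-digit and two-digit codes are mixed in the output, so an adversary cannot even tell where one plaintext letter ends and the next begins without the table.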
Transpositions: columnar transposition
The message is transposed via standard columnar transposition keyed by [Line-Q] above. (Note: if the encoded message length is not a multiple of 5 at this stage, an additional digit is added.) The message is then transposed via Diagonal Transposition keyed by [Line-R] above.
Keygroup insertion
The (unencrypted) Keygroup is inserted into the ciphertext 'P' groups from the end, where 'P' is the unused sixth digit of the Date.
Modular addition/subtraction
Modular addition or subtraction, also known as 'false adding/subtraction', in this context (and many pen and paper ciphers) is digit-by-digit addition and subtraction without 'carrying' or 'borrowing'. For example:
1234 + 6789 = 7913
1234 - 6789 = 5555
Sequencing
Sequencing in this context is ordering the elements of an input from 1-10 (where '0' represents 10). This applies either to letters (whereby alphabetical order is used) or to numbers (where numerical value is used). In the event of equal values, the leftmost value is sequenced first. For example:
LETTERS: The word 'Octopus' is sequenced as '2163475' - (i.e. C=1, first 'O'=2, second 'O'=3, ...)
NUMBERS: The number '90210' is sequenced as '34215' - (by numerical order; zero is valued at '10' in terms of ordering)
Chain addition
Chain addition is akin to a linear-feedback shift register: starting from a seed number, a stream of digits is generated as output and fed back in as input. Within the VIC cipher, chain addition works by (1) taking the original (seed) number, (2) false-adding the first two digits, (3) appending the new digit to the end of the chain. This continues, with the pair of digits being added moving one position to the right at each step. For example, if the seed was '90210', the first 5 iterations are shown below:
90210           // Initial seed value
90210 9         // 9 = 9+0 (first two digits)
90210 92        // 2 = 0+2 (next two...)
90210 923       // 3 = 2+1
90210 9231      // 1 = 1+0
90210 92319     // 9 = 0+9; note how the first '9' generated is being fed back in
Digit encoding
The encoding step replaces each digit in a number (i.e. [Line-G] in the cipher) with one from a key sequence (i.e. [Line-E.2]) that represents its position in the 1-10 ordering. It should be seen that by writing out the series '1234567890' (shown as [Line-F.2]) underneath [Line-E.2], each value from 0-9 has another above it. Simply replace every digit in the number to be encoded with the one above it in the key sequence. For example, with [Line-E.2] = '6013589427' as the key, the digits of '90210' encode as 9→2, 0→7, 2→0, 1→6, 0→7. So the output would be: '27067'.
Decryption
To decrypt the VIC cipher:
Extract the Keygroup - by knowledge of the agent's Personal Number, remove the 5 digits of the Keygroup from the ciphertext
Generate the Message Keys - using knowledge of the various secrets (Phrase, Date, Personal Number, Keygroup), generate the keys in the same manner as in the encryption process
Decrypt the Ciphertext - using the Message Keys, undo the two transpositions and then the straddling checkerboard
Cryptanalysis
The cipher is one of the strongest pen and paper ciphers actually used in the real world, and was not broken (in terms of determining the underlying algorithm) by the NSA at the time. However, with the advent of modern computing and the public disclosure of the algorithm, it would not be considered a strong cipher today. It can be observed that the majority of the entropy in the secret key converges to a 10-digit number, [Line-H]. This 10-digit number carries approximately 34 bits of entropy; combined with the last digit of the Date (needed to identify where the Keygroup is), this gives about 38 bits of entropy in terms of Message Key strength. A 38-bit key is subject to a brute-force attack of less than a day on modern computing hardware.
See also Topics in cryptography References External links FBI page on the hollow nickel case with images of the hollow nickel that contained the VIC encrypted message "The Cipher in a Hollow Nickel" The VIC Cipher Straddling Checkerboards Various different versions of checkerboards on Cipher Machines and Cryptology SECOM, a VIC variant with extended checkerboard "The Rise Of Field Ciphers: straddling checkerboard ciphers" by Greg Goebel 2009 Classical ciphers Science and technology in the Soviet Union
158871
https://en.wikipedia.org/wiki/Root%20directory
Root directory
In a computer file system, and primarily used in the Unix and Unix-like operating systems, the root directory is the first or top-most directory in a hierarchy. It can be likened to the trunk of a tree, as the starting point where all branches originate. The root file system is the file system contained on the same disk partition on which the root directory is located; it is the filesystem on top of which all other file systems are mounted as the system boots up.
Unix-like systems
Unix abstracts the nature of this tree hierarchy entirely, and in Unix and Unix-like systems the root directory is denoted by the / (slash) sign. Though the root directory is conventionally referred to as /, the directory entry itself has no name; its path is the "empty" part before the initial directory separator character (/). All file system entries, including mounted file systems, are "branches" of this root.
chroot
In UNIX-like operating systems, each process has its own idea of what the root directory is. For most processes this is the same as the system's actual root directory, but it can be changed by calling the chroot system call. This is typically done to create a secluded environment to run software that requires legacy libraries, and sometimes to simplify software installation and debugging. Chroot is not meant to be used for enhanced security, as the processes inside can break out. FreeBSD offers a stronger jail() system call that enables operating-system-level virtualization and also serves security purposes, restricting the files a process may access to just a subset of the file system hierarchy.
Super-root
Some Unix systems support a directory below the root directory. Normally, "/.." points back to the same inode as "/", however, under , this can be changed to point to a super-root directory, where remote trees can be mounted. If, for example, two workstations "pcs2a" and "pcs2b" were connected via "connectnodes" and "uunite" startup script, "/../pcs2b" could be used to access the root directory of "pcs2b" from "pcs2a".
DOS, OS/2, and Windows
Under DOS, OS/2, and Microsoft Windows, each partition has a drive letter assignment (labeled C:\ for a particular partition C) and there is no common root directory above that. DOS, OS/2, and Windows do support more abstract hierarchies, with partitions mountable within a directory of another drive, though this is rarely seen. This has been possible in DOS through the command JOIN since it was first added to DOS, and can be achieved in all Windows versions as well. In some contexts, it is also possible to refer to a root directory containing all mounted drives, although it cannot contain files directly as it does not exist on any file system. For instance, when linking to a local file using the "file:" URI scheme, the syntax is of the form "file:///C:/...", where "file://" is the standard prefix, and the third '/' represents the root of the local system.
Related uses
On many Unixes, there is also a directory named /root (pronounced "slash root"). This is the home directory of the 'root' superuser. On many Macintosh and iOS systems, this superuser home directory is /var/root.
See also Filesystem Hierarchy Standard (FHS) Parent directory Working directory References File system directories
23190060
https://en.wikipedia.org/wiki/Speak%20%28Unix%29
Speak (Unix)
speak was a Unix utility that used a predefined set of rules to turn a file of English text into phoneme data compatible with a Federal Screw Works (later Votrax) model VS4 "Votrax" Speech Synthesizer. It was first included in Unix v3 and possibly later versions, with the OS-end support files and help files persisting until v6. As of late 2011, the original source code for , and portions of speak.m (which is generated from speak.v), were discovered. At least three versions of the man page are known to still exist. The main program (speak) was around 4500 bytes, the rule tables (/etc/speak.m) were around 11,000 bytes, and the table viewer (speakm) was around 1900 bytes.
History
The speak utility was developed by Douglas McIlroy in the early 1970s at AT&T Bell Labs in Murray Hill, New Jersey. It was included with the 3rd Edition of Unix in 1973. In 1974, McIlroy published a paper describing the workings of this algorithm. According to the McIlroy paper, "K. Thompson and D. M. Ritchie integrated the device smoothly into the operating system", which is evident from /usr/sys/dev/vs.c "Screw Works Interface via DC-11".
McIlroy Algorithm
The McIlroy Algorithm is a large set of rules, sub-rules, and sub-sub-rules, applied to a word to isolate long vowels and silent 'e's, and gradually convert each letter into its "Screw Works" equivalent phoneme code. The intention of the algorithm is to convert any English text into Votrax phoneme codes, which could be played back/recited by a Federal Screw Works "Votrax" speech synthesizer. A later (1976), simpler text-to-speech algorithm developed jointly by Votrax and the U.S. Naval Research Laboratory, known as the "NRL Algorithm", serves a similar purpose.
References Unix software
932505
https://en.wikipedia.org/wiki/Beerware
Beerware
Beerware is a tongue-in-cheek term for software released under a very relaxed license (beerware licensed software). It provides the end user with the right to use a particular program (or do anything else with the source code).
Description
Should the user of the product meet the author and consider the software useful, they are encouraged to either buy the author a beer "in return" or drink one themselves. The Fedora project and the Humanitarian-FOSS project at Trinity College recognized the "version 42" beerware license variant as an extremely permissive "copyright only" license, and consider it GPL-compatible. The Free Software Foundation does not mention this license explicitly, but its list of licenses contains an entry for informal licenses, which are listed as free, non-copyleft, and GPL-compatible. However, the FSF recommends the use of more detailed licenses over informal ones. Many variations on the beerware model have been created. Poul-Henning Kamp's beerware license is simple and short, in contrast to the GPL, which he has described as a "joke". The full text of Kamp's license is:
/*
 * ----------------------------------------------------------------------------
 * "THE BEER-WARE LICENSE" (Revision 42):
 * <[email protected]> wrote this file. As long as you retain this notice you
 * can do whatever you want with this stuff. If we meet some day, and you think
 * this stuff is worth it, you can buy me a beer in return. Poul-Henning Kamp
 * ----------------------------------------------------------------------------
 */
See also Anti-copyright license Careware Comparison of free and open-source software licenses Donationware WTFPL References Free and open-source software licenses Permissive software licenses Software licenses
45708737
https://en.wikipedia.org/wiki/Liquid%20Rhythm
Liquid Rhythm
Liquid Rhythm is a beat sequencing and rhythm generation software developed by WaveDNA and initially released in 2010. The software's core technology, the Music Molecule, visualizes patterns and relationships between MIDI notes and allows users to create and edit note clusters and patterns rather than individual notes. Liquid Rhythm operates as a standalone program for macOS and Windows, and as a DAW plug-in in the Max for Live, VST, AU, and RTAS formats.
The Music Molecule
Liquid Rhythm is based on WaveDNA's patented Music Molecule technology, which visualizes notes and rests in BeatForms. BeatForms are one 8th note long, with the number of note events indicated by the colour of the BeatForm: red BeatForms contain three note events, blue BeatForms contain two, and purple BeatForms contain one. BarForms are groups of BeatForms that are one bar in length. By grouping these notes into modular containers, the Music Molecule provides structure to raw MIDI and captures the relationships between notes. The Music Molecule engine is patented in both the USA and Canada.
Features
Liquid Rhythm's main 'view' is the Arranger window, where users can create and edit BarForms and BeatForms. Users begin by selecting an instrument and dragging it onto a slot in the Arranger. There is a wide selection of genre-based sounds within the program, ranging from Acoustic to Techno. Users can import custom sample libraries as well. There are a number of features in the software for the formulation of entire BarForms as well as for editing minuscule details of the rhythms. Liquid Rhythm populates the BarForm List with commonly occurring BarForms for the chosen instrument and includes a number of filters to refine the selection. Below the BarForm List is the BeatForm Sequencer, a grid that allows users to insert different BeatForms into each of the eight sections in the selected BarForm. The other BarForm creation tool is the BeatWeaver, wherein the user chooses a series of BeatForms and the BeatWeaver creates every possible combination that can be woven into a BarForm. The Accent Modifiers set the MIDI velocity and timing based on the BeatForm accent color, and also give users the ability to set a range to "humanize" velocity and timing. The GrooveMover changes the arrangement of notes in the bar, and the BeatForm Tumblers increase the complexity of a rhythm selection. The Randomizer will populate a selected portion of the Arranger with a random rhythm created within user-set parameters.
Development
In the 1980s, David Beckford (later WaveDNA's Vice President and Lead Inventor) worked with the University of Toronto's music cognition lab to study how musicians visualize and conceptualize music. This led to his extensive work in trying to unify MIDI and traditional music notations into a new and integrated language for musicians. By 2009, David Beckford had developed a software prototype that allowed him to isolate and catalogue different rhythmic, melodic, and harmonic characteristics and manipulate them digitally. After partnering with Douglas Mummenhoff (CEO), the two founded WaveDNA.
See also List of music software References Audio software Music software Software drum machines
Development In the 1980s, David Beckford, later Vice President and Lead Inventor at WaveDNA, worked with the University of Toronto’s music cognition lab to study how musicians visualize and conceptualize music. This led to his extensive work in trying to unify MIDI and traditional music notations into a new and integrated language for musicians. By 2009, Beckford had developed a software prototype that allowed him to isolate and catalogue different rhythmic, melodic, and harmonic characteristics and manipulate them digitally. After partnering with Douglas Mummenhoff (CEO), the two founded WaveDNA. See also List of music software References Audio software Music software Software drum machines
17888698
https://en.wikipedia.org/wiki/Sugon
Sugon
Sugon (), officially Dawning Information Industry Company Limited, is a supercomputer manufacturer based in the People's Republic of China. The company is a spin-off from research done at the Chinese Academy of Sciences (CAS), and still has close links to it. History The company is a development of work done at the Institute of Computer Science, CAS. Under the Chinese government's 863 Program, for the research and development of high technology products, the group launched their first supercomputer (Dawning No. 1) in 1993. In 1996 the group launched the Dawning Company to allow the transfer of research computers into the market. The company was tasked with developing further supercomputers under the 863 program, which led to the Dawning 5000A and 6000 computers. The company was listed on the Shanghai Stock Exchange in 2014. CAS still retains stock in the company. U.S. sanctions According to the United States Department of Defense, the company has links to the People's Liberation Army and, in 2019, Sugon was added to the Bureau of Industry and Security's Entity List due to U.S. national security concerns. In November 2020, the then President of the United States Donald Trump issued an executive order prohibiting any American company or individual from owning shares in companies that the United States Department of Defense has listed as having links to the People's Liberation Army, which included Sugon. Supercomputers Dawning was the company's initial name; it was later changed to Sugon. The computers were originally known by their Dawning monikers, but Sugon names also appear in the literature. The model series is as below. Dawning No.1 The first supercomputer created was Dawning No.1 (Shuguang Yihao, 曙光一号), which received state certification in October 1993. This supercomputer achieved 640 million FLOPS and utilized Motorola 88100 CPUs (4 total) and 88200 cache/memory-management units (8 total); over 20 were built. The operating system was UNIX V. Dawning 1000 The Dawning 1000 was Sugon's second-generation supercomputer, and was originally named Dawning No.2 (Shuguang Erhao, 曙光二号). Dawning 1000 was released in 1995, and received state certification on 11 May 1995. The family of supercomputers could achieve 2.5 GFLOPS. This series of the Dawning family consists of the Dawning 1000A and 1000L. Dawning 2000 The Dawning 2000 was initially released in 1996, and could achieve a peak performance of 4 GFLOPS. A further variant, the Dawning 2000-I, was released in 1998 with a peak performance of 20 GFLOPS. The final model in the series, the Dawning 2000-II, was released in 1999 with a peak performance of 111.7 GFLOPS. The Dawning 2000 passed state certification on 28 January 2000. The supercomputer model was designed as a cluster to achieve over 100 GFLOPS peak performance. The number of CPUs used was greatly increased to 164 in comparison to older models, and like earlier models, the operating system is UNIX. Dawning 3000 The Dawning 3000 passed state certification on 9 March 2001. Like the Dawning 2000, the system was designed as a cluster, and could achieve 400 GFLOPS peak performance. The number of CPUs increased to 280, and the system consists of ten 2-meter tall racks, weighing 5 tons total. Power consumption is 25 kW, and one of the tasks it was used for was the part of human genome mapping that China was responsible for. Dawning 4000A The fifth member of the Dawning family, Dawning 4000A, debuted as one of the top 10 fastest supercomputers in the world on the TOP500 list, capable of 8,061 billion FLOPS.
The system, at the Shanghai Supercomputer Center, utilizes over 2,560 AMD Opteron processors, and can reach speeds of 8 teraflops. Dawning 5000 The Dawning 5000 series was initially planned to use indigenous Loongson processors. However, the Shanghai Supercomputer Center required Microsoft Windows support, whereas Loongson processors only ran under Linux. The resulting Dawning 5000A uses 7,680 1.9 GHz AMD Opteron quad-core processors, giving 30,720 cores in total, with an InfiniBand interconnecting network. The computer occupies an area of 75 square meters and its power consumption is 700 kW. The supercomputer is capable of 180 teraflops and received state certification in June 2008. The Dawning 5000A was ranked 10th in the November 2008 TOP500 list. At the time, it was also the largest system using Windows HPC Server 2008 for this benchmark. This system is also installed at the Shanghai Supercomputer Center and runs with SUSE Linux Enterprise Server 10. Dawning 6000 The Dawning 6000 was announced in 2011, at 300 TFLOPS, incorporating 3,000 8-core Loongson 3B processors at 3.2 GFLOP/W. It is the "first supercomputer made exclusively of Chinese components" and has a projected speed of over a PFLOP (one quadrillion operations per second). For comparison, the fastest supercomputer as of June 2014 ran at 33 PFLOPS. The same announcement said that a petascale supercomputer was under development and that the launch was anticipated in 2012 or 2013. See also Shanghai Supercomputer Center Nebulae - Dawning TC3600 References External links Supercomputers Computer hardware companies Supercomputing in China Defence companies of the People's Republic of China
1248846
https://en.wikipedia.org/wiki/Direct%20cable%20connection
Direct cable connection
Direct Cable Connection (DCC) is a feature of Microsoft Windows that allows a computer to transfer and share files (or connected printers) with another computer, via a connection using either the serial port, parallel port or infrared port of each computer. It is well-suited for computers that do not have an Ethernet adapter installed, although DCC in Windows XP can be configured to use one (with a proper crossover cable if no Ethernet hub is used) if available. Connection types Serial port If using the serial ports of the computers, a null modem cable (or a null modem adapter connected to a standard serial cable) must be used to connect the two computers so they communicate properly. Such a connection uses the PPP protocol. Parallel port If the parallel ports are used, Windows supports a standard or basic 4-bit cable (commonly known as a LapLink cable), an Enhanced Capabilities Port (ECP) cable, or a Universal Cable Module (UCM) cable (which was known as a DirectParallel cable by Parallel Technologies). IR Infrared communication ports, like the ones found on laptop computers (such as IrDA), can also be used. USB Connecting any two computers using USB requires a special proprietary bridge cable. A directly connected pin-to-pin USB type A cable does not work, as USB does not support such a type of communication. In fact, attempting to do so may even damage the connected computers, as it will effectively short the two computers' power supplies together by connecting their 5V and GND lines. This can possibly destroy one or both machines and cause a fire hazard, since the two machines may not have exactly the same USB source voltage. Therefore, Direct Cable Connection over USB is not possible; a USB link cable must be used, as described in Microsoft knowledge base article 814982. However, with a USB link cable, a program which supports data transfer using that cable must be used. Typically, such a program is supplied with the USB link cable. The DCC wizard or Windows Explorer cannot be used to transfer files over a USB link cable. Windows Vista changes Windows Vista drops support for the Direct Cable Connection feature, as Ethernet, Wi-Fi and Bluetooth have become ubiquitous on current-generation computers. To transfer files and settings, Windows Vista includes Windows Easy Transfer, which uses a proprietary USB-to-USB bridge cable known as the Easy Transfer Cable. See also Null modem LapLink cable Serial line internet protocol (SLIP) Parallel line internet protocol (PLIP) Point-to-Point Protocol (PPP) INTERSVR (DOS command) INTERLNK (DOS command) References External links Direct-Cable Connection Introduction from WindowsNetworking.com How To Set Up a Direct Cable Connection Between Two Computers in Windows XP from Microsoft Support File sharing
991365
https://en.wikipedia.org/wiki/System%20X%20%28telephony%29
System X (telephony)
System X is the digital switching system installed in telephone exchanges throughout the United Kingdom from 1980 onwards. History Development System X was developed by the Post Office (later to become British Telecom), GEC, Plessey, and Standard Telephones and Cables (STC), and was first shown in public in 1979 at the Telecom 79 exhibition in Geneva, Switzerland. In 1982, STC withdrew from System X and, in 1988, the telecommunications divisions of GEC and Plessey merged to form GPT, with Plessey subsequently being bought out by GEC and Siemens. In the late 1990s, GEC acquired Siemens' 40% stake in GPT and, in 1999, the parent company of GPT, GEC, renamed itself Marconi. When Marconi was sold to Ericsson in January 2006, Telent plc retained System X and continues to support and develop it as part of its UK services business. Implementation The first System X unit entered public service in September 1980; installed in Baynard House, London, it was a 'tandem junction unit' which switched telephone calls amongst some 40 local exchanges. The first local digital exchange started operation in 1981 in Woodbridge, Suffolk (near BT's Research HQ at Martlesham Heath). The last electromechanical trunk exchange (in Thurso, Scotland) was closed in July 1990, completing the UK's trunk network transition to purely digital operation; the UK thereby became the first national telephone system to achieve this. The last electromechanical local exchanges, Crawford, Crawfordjohn and Elvanfoot, all in Scotland, were changed over to digital on 23 June 1995, and the last electronic analogue exchanges, Selby, Yorkshire and Leigh on Sea, Essex, were changed to digital on 11 March 1998. In addition to the UK, System X was installed in the Channel Islands, and several systems were installed in other countries, although it never achieved significant export sales. Small exchanges: UXD5 Separately from System X, BT developed the UXD5 ("unit exchange digital"), a small digital exchange which was cost-effective for small and remote communities. Developed by BT at Martlesham Heath and based on the Monarch PABX, the UXD5 was first put into service at Glenkindie, Scotland, in 1979, the year before the first System X. Several hundred of these exchanges were manufactured by Plessey and installed in rural areas, largely in Scotland and Wales. The UXD5 was included as part of the portfolio when System X was marketed to other countries. System X units System X covers three main types of telephone switching equipment, installed all over the United Kingdom. Concentrators are usually kept in local telephone exchanges but can be housed remotely in less populated areas. DLEs and DMSUs operate in major towns and cities and provide call routing functions. The BT network architecture designated exchanges as DLEs / DMSUs / DJSUs etc., but other operators configured their exchanges differently depending on their network architecture. With the focus of the design being on reliability, the general architectural principle of System X hardware is that all core functionality is duplicated across two 'sides' (side 0 and side 1). Either side of a functional resource can be the 'worker', with the other being an in-service 'standby'. Resources continually monitor themselves and, should a fault be detected, the associated resource will mark itself as 'faulty' and the other side will take the load instantaneously.
This resilient configuration allows for hardware changes to fix faults or perform upgrades without interruption to service. Some critical hardware, such as switchplanes and waveform generators, is triplicated and works on an 'any 2 out of 3' basis. The CPUs in an R2PU processing cluster are quadruplicated to retain 75% performance capability with one out of service, instead of the 50% if they were simply duplicated. The System X processing system has been described as the first multi-processor cluster in the world, although this claim is unconfirmed. Line cards providing customer line ports or the 2 Mbps E1 terminations on the switch have no 'second side' redundancy, but a customer can have multiple lines, or an interconnect can have multiple E1s, to provide resilience. Concentrator unit The concentrator unit consists of four main sub-systems: line modules, digital concentrator switch, digital line termination (DLT) units and control unit. Its purpose is to convert speech from analogue signals to digital format, and concentrate the traffic for onward transmission to the digital local exchange (DLE). It also receives dialled information from the subscriber and passes this to the exchange processors so that the call can be routed to its destination. In normal circumstances, it does not switch signals between subscriber lines, but it has limited capacity to do this if the connection to the exchange switch is lost. Each analogue line module unit converts analogue signals from a maximum of 64 subscriber lines in the access network to the 64 kilobit/s digital binary signals used in the core network. This is done by sampling the incoming signal at a rate of 8 kS/s and coding each sample into an 8-bit word using pulse-code modulation (PCM) techniques. The line module also strips out any signalling information from the subscriber line, e.g., dialled digits, and passes this to the control unit. Up to 32 line modules are connected to a digital concentrator switch unit using 2 Mbit/s paths, giving each concentrator a capacity of up to 2048 subscriber lines. The digital concentrator switch multiplexes the signals from the line modules using time-division multiplexing and concentrates the signals onto up to 480 time slots on E1s up to the exchange switch via the digital line termination units. The remaining two time slots on each 2 Mbit/s system, timeslots 0 and 16, are used for synchronisation and signalling respectively. Depending on the hardware used, concentrators support the following line types: analogue lines (either single or multiple line groups), ISDN2 (basic rate ISDN) and ISDN30 (primary rate ISDN). ISDN can run either the UK-specific DASS2 protocol or the ETSI (Euro-ISDN) protocols. Subject to certain restrictions, a concentrator can run any mix of line types; this allows operators to balance business ISDN users with residential users, giving a better service to both and greater efficiency for the operator. Concentrator units can either stand alone as remote concentrators or be co-located with the exchange core (switch and processors).
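The figures above all follow from the same per-channel arithmetic; the short sketch below recomputes them (the constants come from the text, the 16-E1 count is inferred from the 480-slot figure, and the code itself is only illustrative):

# Channel arithmetic behind the concentrator figures quoted above.
SAMPLE_RATE_HZ = 8_000       # speech sampled at 8 kS/s
BITS_PER_SAMPLE = 8          # each sample coded as an 8-bit PCM word
TIMESLOTS = 32               # timeslots per 2 Mbit/s system (2 reserved)
LINES_PER_MODULE = 64        # subscriber lines per analogue line module
MODULES = 32                 # line modules per concentrator

channel_rate = SAMPLE_RATE_HZ * BITS_PER_SAMPLE     # 64,000 bit/s per channel
system_rate = channel_rate * TIMESLOTS              # 2,048,000 bit/s (an E1)
lines = LINES_PER_MODULE * MODULES                  # 2048 subscriber lines
traffic_slots = 16 * (TIMESLOTS - 2)                # 480 slots across 16 E1s

print(channel_rate, system_rate, lines, traffic_slots)
# 64000 2048000 2048 480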
Digital local exchange The Digital Local Exchange (DLE) hosts a number of concentrators and routes calls to different DLEs or DMSUs depending on the destination of the call. The heart of the DLE is the Digital Switching Subsystem (DSS), which consists of Time Switches and a Space Switch. Incoming traffic on the 30-channel PCM highways from the Concentrator Units is connected to Time Switches. The purpose of these is to take any incoming individual Time Slot and connect it to an outgoing Time Slot, and so perform a switching and routing function. To allow access to a large range of outgoing routes, individual Time Switches are connected to each other by a Space Switch. The Time Slot inter-connections are held in Switch Maps, which are updated by software running on the Processor Utility Subsystem (PUS). The nature of the Time Switch–Space Switch architecture is such that the system is very unlikely to be affected by a faulty time or space switch unless many faults are present. The switch is a 'non-blocking' switch.
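A time switch is, in essence, a frame buffer whose read order differs from its write order. The toy sketch below shows a single time-slot interchange stage of the time-space-time path just described; it is a teaching aid under assumed slot counts and names, not System X code.

# Toy time-slot interchange: one frame of 32 incoming timeslots is written
# into a buffer and read out in the order given by a switch map, so each
# incoming slot can be connected to a different outgoing slot.
FRAME_SLOTS = 32

def time_switch(frame, switch_map):
    """frame: 32 PCM samples; switch_map[out_slot] = in_slot to read from."""
    assert len(frame) == FRAME_SLOTS and len(switch_map) == FRAME_SLOTS
    return [frame[switch_map[out_slot]] for out_slot in range(FRAME_SLOTS)]

# Identity map, except the speech in incoming slot 3 is sent out on slot 17
# and vice versa, as the Switch Map for one call leg might require.
switch_map = list(range(FRAME_SLOTS))
switch_map[17], switch_map[3] = 3, 17

incoming = [f"sample{n}" for n in range(FRAME_SLOTS)]
outgoing = time_switch(incoming, switch_map)
assert outgoing[17] == "sample3" and outgoing[3] == "sample17"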
Digital main switching unit The Digital Main Switching Unit (DMSU) deals with calls that have been routed by DLEs or another DMSU and is a 'trunk / transit switch', i.e. it does not host any concentrators. As with DLEs, DMSUs are made up of a Digital Switching Subsystem and a Processor Utility Subsystem, amongst other things. In the British PSTN network, each DMSU is connected to every other DMSU in the country, enabling almost congestion-proof connectivity for calls through the network. In inner London, specialised versions of the DMSU exist and are known as DJSUs; they are practically identical in terms of hardware, both being fully equipped switches, but the DJSU has the distinction of carrying inter-London traffic only. The DMSU network in London has been gradually phased out and moved onto more modern "NGS" switches over the years, as the demand for PSTN phone lines has decreased and BT has sought to reclaim some of its floor-space. The NGS switch referred to is a version of Ericsson's AXE10 product line, phased in between the late '90s and early '00s. It is common to find multiple exchanges (switches) within the same exchange building in large UK cities: DLEs for the directly-connected customers and a DMSU to provide the links to the rest of the UK. Processor utility subsystem The Processor Utility Subsystem (PUS) controls the switching operations and is the brain of the DLE or DMSU. It hosts the Call Processing, Billing, Switching and Maintenance applications software (amongst other software subsystems). The PUS is divided into up to eight 'clusters', depending on the amount of telephony traffic dealt with by the exchange. Each of the first four clusters of processors contains four central processing units (CPUs), the main memory stores (STRs) and the two types of backing store (primary (RAM) and secondary (hard disk)) memory. The PUS was coded with a version of the CORAL66 programming language known as PO CORAL (Post Office CORAL), later known as BTCORAL. The original processor that went into service at Baynard House, London, was known as the MK2 BL processor. It was replaced in 1980 by the POPUS1 (Post Office Processor Utility Subsystem). POPUS1 processors were later installed in Lancaster House in Liverpool and also in Cambridge. Later, these too were replaced with a much smaller system known as R2PU, or Release 2 Processor Utility. This was the system with four CPUs per cluster and up to 8 clusters, as described above. Over time, as the system was developed, additional "CCP / Performance 3" clusters were added (clusters 5, 6, 7 and 8) using more modern hardware, akin to late-1990s computer technology, while the original processing clusters 0 to 3 were upgraded with, for example, larger stores (more RAM). This fault-tolerant system had many advanced features which help explain why these units are still in use today: self fault detection and recovery, battery-backed RAM, mirrored disk storage, automatic replacement of a failed memory unit, and the ability to trial new software and, if necessary, roll back to the previous version. In recent times, the hard disks on the CCP clusters have been replaced with solid-state drives to improve reliability. In modern times, all System X switches show a maximum of 12 processing clusters; 0–3 are the four-CPU System X-based clusters and the remaining eight positions can be filled with CCP clusters, which deal with all traffic handling. Whilst the status quo for a large System X switch is to have four main and four CCP clusters, there are one or two switches that have four main and six CCP clusters. The CCP clusters are limited to call handling only; there was the potential for the exchange software to be re-written to run fully on the CCP clusters, but this was scrapped as too costly a solution to replace a system that was already working well. Should a CCP cluster fail, System X will automatically re-allocate its share of the call handling to another CCP cluster; if no CCP clusters are available, then the exchange's main clusters will begin to take over the work of call handling as well as running the exchange. In terms of structure, the System X processor is a "one master, many slaves" configuration – cluster 0 is referred to as the base cluster and all other clusters are effectively dependent to it. If a slave cluster is lost, then call handling for any routes or concentrators dependent to it is also lost; however, if the base cluster is lost then the entire exchange ceases to function. This is a very rare occurrence, as the design of System X isolates problematic hardware and raises a fault report. During normal operation, the highest level of disruption is likely to be a base cluster restart: all exchange functions are lost for 2–5 minutes while the base cluster and its slaves come back online, but afterwards the exchange will continue to function with the defective hardware isolated. The exchange can and will restart ('rollback') individual processes if it detects problems with them. If that doesn't work, then a cluster restart can be performed. Should the base cluster or switch be irrecoverable via restarts, the latest archive configuration can be manually reloaded using the restoration procedure. This can take hours to bring everything fully back into service, as the switch has to reload all its semi-permanent paths and the concentrators have to download their configurations. Post-2020, exchange software is being modified to reduce the restoration time significantly. During normal operation, the exchange's processing clusters will sit at between 5% and 15% usage, with the exception of the base cluster, which will usually sit at between 15% and 25% usage, spiking as high as 45%; this is due to the base cluster handling far more operations and processes than any other cluster on the switch. Editions of System X System X has gone through two major editions, Mark 1 and Mark 2, referring to the switch matrix used. The Mark 1 Digital Subscriber Switch (DSS) was the first to be introduced. It is a time-space-time switch setup with a theoretical maximum matrix of 96x96 Time Switches. In practice, the maximum size of switch is a 64x64 Time Switch matrix.
Each time switch is duplicated into two security planes, 0 and 1. This allows for error checking between the planes and multiple routing options if faults are found. Every timeswitch on a single plane can be out of service and full function of the switch can be maintained; however, if one timeswitch on plane 0 is out and another on plane 1 is out, then links between the two are lost. Similarly, if a timeswitch has both plane 0 and 1 out, then the timeswitch is isolated. Each plane of the timeswitch occupies one shelf in a three-shelf group – the lower shelf is plane 0, the upper shelf is plane 1 and the middle shelf is occupied by up to 32 DLTs (Digital Line Terminations). The DLT is a 2048 kb/s 32-channel PCM link in and out of the exchange. The space switch is a more complicated entity, but is given a name ranging from AA to CC (or BB within general use), a plane of 0 or 1 and, due to the way it is laid out, an even or odd segment, designated by another 0 and 1. The name of a space switch in software, then, can look like this: SSW H'BA-0-1. The space switch is the entity that provides the logical cross-connection of traffic across the switch, and the time switches are dependent to it. When working on a space switch it is imperative to make sure the rest of the switch is healthy as, due to its layout, powering off either the odd or even segment of a space switch will "kill" all of its dependent time switches for that plane. Mark 1 DSS is controlled by a triplicated set of Connection Control Units (CCUs), which run in a 2/3 majority for error checking, and is monitored constantly by a duplicated Alarm Monitoring Unit (AMU), which reports faults back to the DSS Handler process for appropriate action to be taken. The CCU and AMU also play a part in diagnostic testing of Mark 1 DSS. A Mark 1 System X unit is built in suites, each 8 racks in length, and there can be 15 or more suites. Considerations of space, power demand and cooling demand led to development of the Mark 2. Mark 2 DSS ("DSS2") is the later revision, which continues to use the same processor system as Mark 1 but made serious and much-needed revisions to both the physical size of the switch and the way that the switch functions. It is an optical fibre-based time-space-time-space-time switching matrix, connecting a maximum of 2048 2 Mbps PCM systems, much like Mark 1; however, the hardware is much more compact. The four-rack group of the Mk1 CCU and AMU is gone, replaced neatly by a single connection control rack comprising the Outer Switch Modules (OSMs), Central Switch Modules (CSMs) and the relevant switch/processor interface hardware. The Timeswitch shelves are replaced with Digital Line Terminator Group (DLTG) shelves, which each contain two DLTGs, comprising 16 Double Digital Line Termination boards (DDLTs) and two Line Communication Multiplexors (LCMs), one for each security plane. The LCMs are connected by optical fibre over a forty-megabit link to the OSMs. In total, there are 64 DLTGs in a fully sized Mk2 DSS unit, which is analogous to the 64 Time Switches of the Mk1 DSS unit. The Mk2 DSS unit is a lot smaller than the Mk1, and as such consumes less power and generates less heat. It is also possible to interface directly with SDH transmission over fibre at 40 Mbps, thus reducing the amount of 2 Mbps DDF and SDH tributary usage. Theoretically, a transit switch (DMSU) could purely interface with the SDH over fibre with no DDF at all.
Further to this, due to the completely revised switch design and layout, the Mk2 switch manages to be somewhat faster than the Mk1 (although the actual difference is negligible in practice). It is also far more reliable: having far fewer discrete components in each of its sections means there is much less to go wrong, and when something does go wrong it is usually a matter of replacing the card tied to the software entity that has failed, rather than needing to run diagnostics to determine possible locations for the point of failure, as is the case with Mk1 DSS. Message Transmission Subsystem A System X exchange's processors communicate with its concentrators and other exchanges using its Message Transmission Subsystem (MTS). MTS links are 'nailed up' between nodes by re-purposing individual 64 kbps digital speech channels across the switch into permanent paths for the signalling messages to route over. Messaging to and from concentrators is done using proprietary messaging; messaging between exchanges is done using C7 / SS7 signalling. UK-specific and ETSI variant protocols are supported. It was also possible to use channel-associated signalling, but as the UK's and Europe's exchanges went digital in the same era, this was hardly used. Replacement system Many of the System X exchanges installed during the 1980s continue in service into the 2020s. System X was scheduled for replacement with Next Generation softswitch equipment as part of the BT 21st Century Network (21CN) programme. Some other users of System X – in particular Jersey Telecom and Kingston Communications – replaced their circuit-switched System X equipment with Marconi XCD5000 softswitches (which were intended as the NGN replacement for System X) and Access Hub multiservice access nodes. However, the omission of Marconi from BT's 21CN supplier list, the lack of a suitable replacement softswitch to match System X's reliability, and the shift in focus away from telephony to broadband all led to much of the System X estate being maintained. Later software versions allow more concentrators to be connected to the core of the exchange, and thus BT are rationalising their System X estate by re-parenting concentrators from old exchanges with Mk1 DSS onto newer exchanges with Mk2 DSS, often converting DMSUs into CTLEs (combined trunk & local exchanges). The need for these CTLEs to host large numbers of concentrators (90+) has resulted in unacceptably long restoration times. As such, the exchange software has been heavily re-written to speed up the restoration times. Previously, the exchange would bring resources into service in a fairly random fashion, whereas the new software concentrates on getting service back as fast as possible: reduced downloads, faster downloads, bringing subsystems and concentrators back up single-sided, etc. Closing the subsequently redundant Mk1 exchanges enables a saving in floorspace, power and cooling costs, with some buildings given up entirely. See also System Y AXE telephone exchange TXE PRX References BT Group Plessey Telephone exchange equipment Telecommunications-related introductions in 1980 Telecommunications in the United Kingdom General Electric Company
16836775
https://en.wikipedia.org/wiki/Backscatter%20%28email%29
Backscatter (email)
Backscatter (also known as outscatter, misdirected bounces, blowback or collateral spam) consists of incorrectly automated bounce messages sent by mail servers, typically as a side effect of incoming spam. Recipients of such messages see them as a form of unsolicited bulk email or spam, because they were not solicited by the recipients, are substantially similar to each other, and are delivered in bulk quantities. Systems that generate email backscatter may be listed on various email blacklists and may be in violation of internet service providers' Terms of Service. Backscatter occurs because worms and spam messages often forge their sender addresses. Instead of simply rejecting a spam message, a misconfigured mail server sends a bounce message to such a forged address. This normally happens when a mail server is configured to relay a message to an after-queue processing step, for example, an antivirus scan or spam check, which then fails, and at the time the antivirus scan or spam check is done, the client has already disconnected. In those cases, it is normally not possible to reject the SMTP transaction, since a client would time out while waiting for the antivirus scan or spam check to finish. The best thing to do in this case is to silently drop the message, rather than risk creating backscatter. Measures to reduce the problem include avoiding the need for a bounce message by doing most rejections at the initial SMTP connection stage and, for other cases, sending bounce messages only to addresses which can be reliably judged not to have been forged; in cases where the sender cannot be verified, the message is ignored (i.e., silently dropped). Cause Authors of spam and viruses wish to make their messages appear to originate from a legitimate source to fool recipients into opening the message, so they often use web-crawling software to scan usenet postings, message boards, and web pages for legitimate email addresses. Due to the design of SMTP mail, recipient mail servers receiving these forged messages have no simple, standard way to determine the authenticity of the sender. If they accept the email during the connection phase and then, after further checking, refuse it (e.g., software determines the message is likely spam), they will use the (potentially forged) sender's address to attempt a good-faith effort to report the problem to the apparent sender. Mail servers can handle undeliverable messages in four fundamentally different ways: Reject. A receiving server can reject the incoming email during the connection stage while the sending server is still connected. If a message is rejected at connect time with a 5xx error code, then the sending server can report the problem to the real sender cleanly. Drop. A receiving server can initially accept the full message, but then determine that it is spam or a virus, and delete it automatically, sometimes by rewriting the final recipient to "/dev/null" or similar. This behavior can be used when the "spam score" of an email is seriously high or the mail contains a virus. RFC 3834 says: "silent dropping of messages should be considered only in those cases where there is very high confidence that the messages are seriously fraudulent or otherwise inappropriate." Quarantine. A receiving server can initially accept the full message, but then determine that it is spam, and quarantine it - delivering to "Junk" or "Spam" folders from where it will eventually be deleted automatically. This is common behavior. Bounce. A receiving server can initially accept the full message, but then determine that it is spam or addressed to a non-existent recipient, and generate a bounce message back to the supposed sender indicating that message delivery failed.
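To make the "reject" strategy concrete, the sketch below refuses unknown recipients while the sending server is still connected, so any bounce is generated by the sender's own MTA rather than as backscatter. It uses the third-party aiosmtpd package and a made-up recipient list; it is an illustration of the idea, not a production filter.

# Minimal sketch of connection-stage rejection with aiosmtpd (assumed
# installed).  Unknown recipients are refused with a 5xx code during the
# SMTP dialogue, so no bounce message is ever created on this side.
from aiosmtpd.controller import Controller

KNOWN_RECIPIENTS = {"alice@example.org", "bob@example.org"}  # hypothetical

class RejectUnknownRecipients:
    async def handle_RCPT(self, server, session, envelope, address, rcpt_options):
        if address.lower() not in KNOWN_RECIPIENTS:
            return "550 5.1.1 No such user here"   # refuse while still connected
        envelope.rcpt_tos.append(address)
        return "250 OK"

if __name__ == "__main__":
    controller = Controller(RejectUnknownRecipients(), hostname="127.0.0.1", port=8025)
    controller.start()
    input("Toy SMTP server on port 8025; press Enter to stop.\n")
    controller.stop()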
Backscatter occurs when the "bounce" method is used, and the sender information on the incoming email was that of an unrelated third party. Reducing the problem Every step to control worms and spam messages helps reduce backscatter, but other common approaches, such as those in this section, also reduce the same problem. Connection-stage rejection During the initial SMTP connection, mailservers can do a range of checks, and often reject email with a 5xx error code while the sending server is still connected. Rejecting a message at the connection stage in this way will usually cause the sending MTA to generate a local bounce message or Non-Delivery Notification (NDN) to a local, authenticated user. Reasons for rejection include: failed recipient validation; failed anti-forgery checks such as SPF, DKIM or Sender ID; servers that do not have a forward-confirmed reverse DNS entry; senders on block lists; temporary rejection via greylisting methods. Mail transfer agents (MTAs) which forward mail can avoid generating backscatter by using a transparent SMTP proxy. Checking bounce recipients Mail servers sending email bounce messages can use a range of measures to judge whether a return address has been forged. Filtering backscatter While preventing backscatter is desirable, it is also possible to reduce its impact by filtering for it, and many spam filtering systems now include the option to attempt to detect and reject backscatter email as spam. In addition, systems using schemes such as Bounce Address Tag Validation "tag" their outgoing email in a way that allows them to reliably detect incoming bogus bounce messages.
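The sketch below shows the general shape of such tagging, loosely modelled on BATV's "prvs" address format: the envelope sender of outgoing mail carries a keyed signature, and an incoming bounce is accepted only if it is addressed to a validly signed sender. The key, digest truncation and day handling here are illustrative assumptions rather than the normative scheme (a real implementation would also check that the day stamp is recent).

# Hedged sketch of bounce-address tagging in the spirit of BATV "prvs".
import hashlib
import hmac
import time

SECRET = b"site-local-secret"        # hypothetical signing key

def tag_sender(local, domain, day=None):
    """Return a tagged envelope sender, e.g. prvs=123abc42f=user@example.org."""
    day = (int(time.time() // 86400) if day is None else day) % 1000
    sig = hmac.new(SECRET, f"{day:03d}{local}@{domain}".encode(),
                   hashlib.sha256).hexdigest()[:6]
    return f"prvs={day:03d}{sig}={local}@{domain}"

def bounce_is_genuine(rcpt):
    """True only if an incoming bounce targets a correctly signed sender."""
    if not rcpt.startswith("prvs="):
        return False                  # untagged: we never sent it
    try:
        tag, bare = rcpt[5:].split("=", 1)
        local, domain = bare.rsplit("@", 1)
        day = int(tag[:3])
    except ValueError:
        return False
    return hmac.compare_digest(tag_sender(local, domain, day),
                               f"prvs={tag}={bare}")

assert bounce_is_genuine(tag_sender("alice", "example.org"))
assert not bounce_is_genuine("alice@example.org")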
See also Joe job References External links RFC 3834: Recommendations for Automatic Responses to Electronic Mail. Why you shouldn't bounce spam. Spamming Email authentication
533192
https://en.wikipedia.org/wiki/TrueSpace
TrueSpace
TrueSpace (styled as trueSpace) was a commercial 3D computer graphics and animation application developed by Caligari Corporation, which was bought out by Microsoft. As of May 2009, it was officially discontinued, but with some 'unofficial support' up to February 2010. History The company was founded in 1985 by Roman Ormandy. A prototype 3D video animation package for the Amiga computer led to the incorporation of Octree Software in 1986. From 1988 to 1992, Octree released several software packages including Caligari1, Caligari2, Caligari Broadcast, and Caligari 24. Caligari wanted to provide inexpensive yet professional software for industrial video and corporate presentation. In 1993 Octree Software moved from New York to California and became known as Caligari Corporation. In 1994 trueSpace 1.0 was introduced on the Windows platform. In 1998 an employee inadvertently left a copy of the trueSpace 4.0 source code on the company website's public FTP server. The source code was released to the internet by the piracy release group REVOLT. In early 2008, the company was acquired by Microsoft and trueSpace 7.6 was released for free. On May 19, 2009, Ormandy announced that trueSpace had been discontinued. Elsewhere he thanked everyone and urged people to download all the free software as soon as possible. After 2010, many of the talented developers helped develop Microsoft's 3D Builder application, available for free in the Windows Store. There are many similarities between 3D Builder and the original trueSpace product, even though the former is aimed at a consumer level. Overview TrueSpace was a modeling/animation/rendering package. It featured a plug-in architecture that allowed the user to create tools to enhance the core package. The last release of trueSpace was version 7 (also known to its users as tS7). Point upgrades had brought it up to version Rosetta Beta 7.61 and had added new modeling features. It also had an interface that beginners found easy to learn. Caligari had enhanced the modeling, surfacing and rendering capabilities of trueSpace, and the latest version, trueSpace7, allowed all aspects of real-time design, modeling and animation within a virtual 3D space shared by remote participants over the broadband internet. The trueSpace7 collaboration server enabled multiple participants to connect to a shared 3D space to create and manipulate shared content in real-time. Features One of the most distinctive features of trueSpace is its interface, using mainly 3D widgets for most common editing operations. trueSpace can also be scripted, using Python for creating custom scripts, tools and plugins. trueSpace7 introduces the use of VBScript and JScript as scripting tools for developing plugins and interactive scenes. trueSpace is also known for its icon-heavy interface, which was drastically overhauled for version 7 onwards. While staff at Caligari had originally made the icons in-house during the creation of earlier releases, trueSpace 7 had a new set of icons made by Paul Woodward, a freelance designer and illustrator. Capabilities of the software include creating visualizations and animations with realistic lighting (through the use of radiosity, HDRI and global illumination) and organic modelling using NURBS, subdivision surfaces and metaballs. The software has several native formats: RsScn for scenes, RsObj for objects, RSMat for materials, rsl for layouts, RsLgts for lighting, etc. Older formats native to trueSpace6.6 and earlier are also supported, e.g.
one for standalone objects (with the file extension .cob), and another for scenes (with the file extension .scn). Objects in trueSpace can be embedded in Active Worlds. In addition to its native formats, trueSpace can also import and export several additional model types. Modeling Polygon modeling NURBS Subdivision surface Rendering and surfacing TrueSpace has two native internal rendering engines, together with support for DX9 pixel-shader-quality output alongside the traditional-style render engines. These engines are: LightWorks (from Lightwork Design Ltd.) VirtuaLight TrueSpace7 also includes support for the VRay rendering engine. Rendering HDRI Caustics Multi-pass rendering for the LightWorks rendering engine with output to Photoshop layers integrated into trueSpace7 Hybrid radiosity, ray tracing, Phong shading Image-based lighting Non-linear tone mapping editor Post process editor Advanced shaders (color, reflectance, transparency, displacement, background, foreground, post processing) Volumetric, anisotropic reflectance Surfacing DX9 (SL2.0) pixel shaders and HLSL editing Procedural shaders editable in Link Editor Normal mapping Shader trees Modeless UV Editor Advanced UV Editor with real time UV mapping controls Unwrapper with Slice Breaking and welding of vertices in a UV map See also Lightwave 3D Cinema 4D Modo Blender Aladdin4D References External links Caligari's official website (archive) An archive of the original Caligari sponsored forums trueSpace FAQs including download links trueSpace Unofficial Website YouTube TrueSpace Videos Moonman's trueSpace Resources trueSpace Wiki Truespacers - a trueSpace group on DeviantArt 3D graphics software Freeware 3D graphics software Amiga raytracers
24178316
https://en.wikipedia.org/wiki/Ambrose%20Schindler
Ambrose Schindler
Ambrose "Amblin' Amby" Schindler (April 21, 1917 – December 30, 2018) was an American collegiate football player, coach, and on-field official. He played college football for the University of Southern California. Sports career Schindler prepped at San Diego High School. A star quarterback for the USC Trojans, he led the team in rushing, scoring and total offense during the 1937 season and earned all-conference honors. His senior year, he led the Trojans to a share of the 1939 national championship: at the 1940 Rose Bowl, capping the 1939 season, Schindler ran for a touchdown and passed for another in a 14-0 victory over a Tennessee Volunteers team that had previously gone undefeated for 23 games and unscored upon for the previous 16 games (including the entire 1939 regular season); he was named the game's most valuable player. He went on to be the MVP in the 1940 College All-Star Game, held at Soldier Field in Chicago. Film and stunt work Toward the end of his college career, he appeared in The Wizard of Oz (1939) as a Winkie guard and as Jack Haley's Tin Man stunt double. At the time of his death, Schindler was one of the last surviving people to have worked on the film classic. He also appeared in Sailor's Lady (1940). Later sport career and honours Although selected by the Green Bay Packers in the 1940 draft, Schindler did not play in the National Football League. At the time, coaching at high school and college offered more financial security than the low-paying NFL of the early 1940s; he would later admit that he had lifelong doubts about his decision. His first offer out of college was to coach at Glendale High School, so he chose it over a professional career. He served in the Navy during World War II and returned to a long career as a coach and instructor at El Camino College in Torrance, California. Schindler was also a longtime football game official, working for years in the American Football League and later officiating high school and college games. He was inducted into the San Diego Hall of Champions Breitbard Hall of Fame in 1973. He was inducted into the USC Athletic Hall of Fame in 1997, and the Rose Bowl Hall of Fame in 2002. Personal life Schindler was one of three children born to Charles Anthony Schindler (1880–1961) and Nellie Ethel Parks (1880–1957). Schindler married his wife, Lucille Frances West (1917–1984), on August 29, 1943, and together they had two children. He did occasionally think about what his life would have been like if he had played professional football, but his wife was part of his decision to choose what was, at the time, the more stable career. His descendants noted that Schindler had suffered several concussions during his college career and that his short-term memory in his 90s had deteriorated rapidly compared to his sister's at a similar age; not turning professional may thus have spared him from worse chronic traumatic encephalopathy. Schindler loved surfing and bicycling and was an active surfer until age 75. He drove a Jaguar with a vanity license plate reading "X USC QB." He turned 100 in April 2017 and died in December 2018 of undisclosed causes at the age of 101.
See also List of American Football League officials References External links 1917 births 2018 deaths American centenarians American Football League officials American football quarterbacks College football officials El Camino College faculty Green Bay Packers players Male actors from San Diego Players of American football from San Diego Sportspeople from San Diego USC Trojans football players Men centenarians United States Navy personnel of World War II
1022738
https://en.wikipedia.org/wiki/Cisco%20Discovery%20Protocol
Cisco Discovery Protocol
Cisco Discovery Protocol (CDP) is a proprietary Data Link Layer protocol developed by Cisco Systems in 1994 by Keith McCloghrie and Dino Farinacci. It is used to share information about other directly connected Cisco equipment, such as the operating system version and IP address. CDP can also be used for On-Demand Routing, which is a method of including routing information in CDP announcements so that dynamic routing protocols do not need to be used in simple networks. Operation Cisco devices send CDP announcements to the multicast destination MAC address 01-00-0c-cc-cc-cc, out each connected network interface. These multicast frames may be received by Cisco switches and other networking devices that support CDP on their connected network interfaces. This multicast destination is also used in other Cisco protocols such as the Virtual Local Area Network (VLAN) Trunking Protocol (VTP). By default, CDP announcements are sent every 60 seconds on interfaces that support Subnetwork Access Protocol (SNAP) headers, including Ethernet, Frame Relay and Asynchronous Transfer Mode (ATM). Each Cisco device that supports CDP stores the information received from other devices in a table that can be viewed using the show cdp neighbors command. This table is also accessible via Simple Network Management Protocol (SNMP). The CDP table information is refreshed each time an announcement is received, and the holdtime for that entry is reinitialized. The holdtime specifies the lifetime of an entry in the table - if no announcements are received from a device for a period in excess of the holdtime, the device information is discarded (default 180 seconds). The information contained in CDP announcements varies by the type of device and the version of the operating system running on it. This information may include the operating system version, hostname, every address (i.e. IP address) from all protocols configured on the port where the CDP frame is sent, the port identifier from which the announcement was sent, device type and model, duplex setting, VTP domain, native VLAN, power draw (for Power over Ethernet devices), and other device-specific information. The details contained in these announcements are easily extended due to the use of the type–length–value (TLV) frame format. See external links for a technical definition.
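The TLV layout makes announcements straightforward to walk programmatically. The sketch below parses the CDP payload that follows the SNAP header (a one-byte version, one-byte holdtime and two-byte checksum, then a sequence of TLVs whose two-byte length field covers the four-byte TLV header); the handful of type codes shown are the commonly documented ones, and the code is an illustration rather than a complete decoder.

# Toy walker for a raw CDP payload (the bytes after the SNAP header).
import struct

TLV_NAMES = {           # a few commonly documented CDP type codes
    0x0001: "Device-ID",
    0x0002: "Addresses",
    0x0003: "Port-ID",
    0x0004: "Capabilities",
    0x0005: "Software version",
    0x0006: "Platform",
}

def parse_cdp(payload: bytes):
    """Yield (tlv_name, value_bytes) pairs from a CDP payload."""
    version, holdtime, _checksum = struct.unpack("!BBH", payload[:4])
    print(f"CDP version {version}, holdtime {holdtime}s")
    offset = 4
    while offset + 4 <= len(payload):
        tlv_type, tlv_len = struct.unpack("!HH", payload[offset:offset + 4])
        if tlv_len < 4:                 # malformed TLV; stop rather than loop
            break
        yield TLV_NAMES.get(tlv_type, hex(tlv_type)), payload[offset + 4:offset + tlv_len]
        offset += tlv_len

For example, a TLV of b"\x00\x01\x00\x0brouter1" decodes to ("Device-ID", b"router1"): type 0x0001, length 0x000b (eleven bytes, of which four are the TLV header).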
Support Hewlett-Packard removed support for transmitting CDP from HP ProCurve products shipped after February 2006 and from all subsequent software upgrades. Receiving and processing CDP information is still supported. CDP support was replaced with IEEE 802.1AB Link Layer Discovery Protocol (LLDP), an IEEE standard that is implemented by multiple vendors and is functionally similar to CDP. Several other manufacturers, including Dell and Netgear, have used the brand-neutral name Industry Standard Discovery Protocol (ISDP) to refer to their implementations of a CDP-compatible protocol. CDP was also the abbreviation used by Cabletron, which wrote RFC 2641, "Cabletron's VlanHello Protocol Specification Version 4", for its discovery protocol. See also CDP Spoofing References External links Breakdown and explanation of a CDP packet by Wireshark packet sniffer. cdp-tools FOSS GPL limited set of tools last updated 2007. Cisco Discovery Protocol Configuration Guide, Cisco IOS Release 15M&T CDP Cisco protocols Device discovery protocols
3338112
https://en.wikipedia.org/wiki/Economy%20of%20Scotland
Economy of Scotland
The economy of Scotland had an estimated nominal gross domestic product (GDP) of $205 billion in 2020 including oil and gas extraction in Scottish waters. Since the Acts of Union 1707, Scotland's economy has been closely aligned with the economy of the rest of the United Kingdom (UK), and England has historically been its main trading partner. Scotland still conducts the majority of its trade within the UK: in 2017, Scotland's exports totalled £81.4 billion, of which £48.9 billion (60%) was with constituent nations of the UK, £14.9 billion with the rest of the European Union (EU), and £17.6 billion with other parts of the world. Scotland was one of the industrial powerhouses of Europe from the time of the Industrial Revolution onwards, being a world leader in manufacturing. This left a legacy in the diversity of goods and services which Scotland produces, from textiles, whisky and shortbread to jet engines, buses, computer software, ships, avionics and microelectronics, as well as banking, insurance, investment management and other related financial services. In common with most other advanced industrialised economies, Scotland has seen a decline in the importance of both manufacturing industries and primary-based extractive industries. This has, however, been combined with a rise in the service sector of the economy, which has grown to be the largest sector in Scotland. The governments which involve themselves in Scotland's economy are largely the central UK Government (responsible for reserved matters) and the Scottish Government (responsible for devolved matters) via HM Treasury. Their respective financial functions are headed by the Chancellor of the Exchequer, and the Cabinet Secretary for Finance, Constitution and Economy. Since 1979, management of the UK economy (including Scotland) has followed a broadly laissez-faire approach. The Bank of England is Scotland's central bank and its Monetary Policy Committee is responsible for setting interest rates. The currency of Scotland, as part of the United Kingdom, is the Pound sterling, which is also the world's fourth-largest reserve currency after the US dollar, the euro and Japanese yen. Scotland is a nation within the United Kingdom, which is a member of the Commonwealth of Nations, the G7, the G8, the G20, the International Monetary Fund, the Organisation for Economic Co-operation and Development, the World Bank, the World Trade Organization, Asian Infrastructure Investment Bank and the United Nations. Overview After the Industrial Revolution in Scotland, the Scottish economy concentrated on heavy industry, dominated by the shipbuilding, coal mining and steel industries. Scottish participation in the British Empire also allowed Scotland to export its output throughout much of the world. However heavy industry declined in the late 20th century, leading to a shift in the economy of Scotland towards technology and the service sector. The 1980s saw an economic boom in the Silicon Glen corridor between Edinburgh and Glasgow, with many large technology firms relocating to Scotland. In 2007 the industry employed over 41,000 people. Scottish-based companies have strengths in information systems, defence, electronics, instrumentation and semiconductors. There is also a dynamic and fast growing electronics design and development industry, based around links between the universities and indigenous companies. There was a significant presence of global players like National Semiconductor and Motorola. 
Other major industries include banking and financial services, construction, education, entertainment, biotechnology, transport equipment, oil and gas, whisky, and tourism. The Gross Domestic Product (GDP) of Scotland in 2013 was $248.5 billion including revenue generated from North Sea oil and gas. Edinburgh is the financial services centre of Scotland, with many large financial firms based there. Glasgow is the fourth-largest manufacturing centre in the UK, accounting for well over 60% of Scotland's manufactured exports. Shipbuilding, although significantly diminished from its heights in the early 20th century, is still a large part of the Glasgow economy. Aberdeen is the centre of North Sea offshore oil and gas production, with giants such as Shell and BP housing their European exploration and production HQs in the city. Other important industries include textile production, chemicals, distilling, agriculture, brewing and fishing. History When Scotland ratified the 1707 Act of Union, Scotland's national debt stood at zero while England's stood at £20,000,000; taxes were low due to war avoidance, and trade thrived from the Baltic to the Caribbean. (For balance to this perspective, see the Darien scheme.) As a consequence of the Act of Union, Scotland's established trade with France and the Low Countries was cut off abruptly. The economic benefits of Union which had been promised by proponents of the Act were slow to materialise, causing widespread discontent amongst the population. Despite their new status as citizens of the United Kingdom, it took many decades for Scottish traders to gain a noticeable foothold in the colonial markets which had long been dominated by English merchants and concerns. The economic effects of the Union on Scotland were negative in the short term, due to an increase in unpopular forms of taxation (such as the Malt Tax in 1712) and the introduction of duties on imports, which the Scottish exchequer had previously been lax in enforcing on most trade goods. Eventually, the Union gave Scotland access to England's global marketplace, triggering an economic and cultural boom. German sociologist Max Weber credited the Calvinist "Protestant Ethic", involving hard work and a sense of divine predestination and duty, for the entrepreneurial spirit of the Scots. Growth was rapid after 1700, as Scottish ports, especially those on the Clyde, began to import tobacco from the American colonies. Scottish industries, especially linen manufacturing, were developed. Scotland embraced the Industrial Revolution, becoming a small commercial and industrial powerhouse of the British Empire. Many young men built careers as imperial administrators. Many Scots became soldiers, returning home after 20 years with their pension and skills. From 1790 the chief industry in the west of Scotland became textiles, especially the spinning and weaving of cotton. This flourished until the American Civil War in 1861 cut off the supplies of raw cotton; the industry never recovered. However, by that time Scotland had developed heavy industries based on its coal and iron resources. The invention of the hot blast for smelting iron (1828) revolutionised the iron industry, and Scotland became a centre for engineering, shipbuilding, and locomotive construction. Toward the end of the 19th century, steel production largely replaced iron production. Emigrant Andrew Carnegie (1835–1919) built the American steel industry, and spent much of his time and philanthropy in Scotland.
Agriculture gained after the union, and standards remained high. However, the adoption of free trade in the mid-19th century brought cheap American corn which undersold local farmers. The industrial developments, while they brought work and wealth, were so rapid that housing, town planning, and provision for public health did not keep pace with them, and for a time living conditions in some of the towns were notoriously bad. Shipbuilding reached a peak in the early 20th century, especially during the Great War, but quickly went into a long downward slide when the war ended. The disadvantage of concentration on heavy industry became apparent: other countries were themselves industrialising and were no longer markets for Scottish products. Within Britain itself there was also more centralisation, and industry tended to drift to the south, leaving Scotland as a neglected fringe. The entire period between the world wars was one of economic depression, of which the worldwide Great Depression of 1929–1939 was the most acute phase. The economy revived with munitions production during the Second World War. After 1945, however, the older heavy industries continued to decline and the government provided financial encouragement to many new industries, ranging from atomic power and petrochemical production to light engineering. The economy has thus become more diversified and therefore more stable. Effects of the pandemic Like other international economies, the economy of Scotland suffered losses in revenue as a result of temporary business closures caused by the national lockdown implemented by the Scottish Government in March 2020. The Scottish Government announced that, based on economic predictions in 2020, Scottish unemployment figures were expected to increase to 8.2% by the end of 2020. Many areas of the Scottish economy, such as production markets, began to operate at reduced capacity in an attempt to slow the spread of the virus, meaning that production rates were slower than normal levels prior to the outbreak. The Scottish Government cautioned that Scotland's economic output could fall 9.8% in 2020, with global economic uncertainty remaining elevated. The tourism and hospitality sector particularly suffered. In October 2020, the Scottish Tourism Alliance made this comment: "The devastating impact of this pandemic will make recovery incredibly challenging, if not questionable, without the assurance of continued targeted support from both the Scottish and UK Governments". In a March 2021 speech, First Minister Nicola Sturgeon acknowledged the "acute challenges our tourism and hospitality sectors have faced". Some travel restrictions were loosened in England on 12 April 2021, but not yet in Scotland. There have been suggestions that the economic impact of the COVID-19 outbreak could permanently affect the economy of Scotland, similar to the way deindustrialisation of Scotland's shipyards and coal production had a major, long-lasting effect on the economy during the 1980s. Exports of many food and drink products from the UK declined significantly, including Scotch whisky. Distillers were required to close for some time and the hospitality industry worldwide experienced a major slump. According to news reports in February 2021, the Scotch whisky sector had experienced £1.1 billion in lost sales. Exports to the US were also affected by the 25% tariff that had been imposed. Scotch whisky exports to the US during 2020 "fell by 32%" from the previous year.
Worldwide exports declined by 70%. A BBC News headline on 12 February 2021 summarised the situation: "Scotch whisky exports slump to 'lowest in a decade'".

Primary Industry

Agriculture and forestry

A very small proportion of Scotland's total land mass is classified as arable – circa 10% based on Scottish Government figures. Only about one quarter of the land is under cultivation, mainly in cereals. Barley, wheat and potatoes are grown in eastern parts of Scotland such as Aberdeenshire, Moray, Highland, Fife and the Scottish Borders. The Tayside and Angus area is a centre of production of soft fruits such as strawberries, raspberries and loganberries, owing to the climate. Sheep-raising is important in the less arable mountainous regions, such as the northwest of Scotland, which are used for rough grazing due to their geographical isolation, poor climate and acidic soils. Parts of the east of Scotland (areas such as Aberdeenshire, Fife and Angus) are major centres of cereal production and general cropping; in such areas the land is generally flatter and coastal, and the climate less harsh and more suited to cultivation. The south-west of Scotland – principally Ayrshire and Dumfries and Galloway – is a centre of dairying. Agriculture, especially cropping, is highly mechanised in Scotland and generally efficient, and farms tend to cover larger areas than their European counterparts. Hill farming is also prominent in the Southern Uplands in the south of Scotland, resulting in the production of wool, lamb and mutton, while cattle rearing, particularly in the east and south, results in the production of large amounts of beef. Farming in Scotland was affected by BSE and the European ban on the importation of British beef from 1996. Dairy and cattle farmers in south-west Scotland were affected by the 2001 UK foot-and-mouth outbreak, which resulted in the destruction of much of their livestock as part of the biosecurity effort to control the spread of the disease. Because of the persistence of feudalism and the land enclosures of the 19th century, the ownership of most land is concentrated in relatively few hands (some 350 people own about half the land). As a result, in 2003 the Scottish Parliament passed a Land Reform Act that empowered tenant farmers and communities to purchase land even if the landlord did not want to sell. As of 2019, a century after the founding of the Forestry Commission, about 18.5% of Scotland is forested. The majority of forests are in public ownership, with forestry policy controlled by Scottish Forestry. The biggest plantations and timber resources are to be found in Dumfries and Galloway, Tayside, Argyll and the Scottish Highlands. The economic activities generated by forestry in Scotland include planting and harvesting as well as sawmilling, the production of pulp and paper, and the manufacture of higher-value goods. Forests, especially those surrounding populated areas in Central Scotland, also provide a recreation resource.

Fishing

The waters surrounding Scotland are some of the richest in Europe. Fishing is an economic mainstay in parts of the North East of Scotland and along the west coast, with important fish markets in places such as Aberdeen and Mallaig. Fish and shellfish such as herring, crab, lobster, haddock and cod are landed at ports such as Peterhead (the biggest white fish port in Europe), Fraserburgh (the biggest shellfish port in Europe), Stornoway, Lerwick and Oban.
There has been a large-scale decrease in employment in the fishing industry within Scotland, due initially to the sacrifice of national fishing rights to the EEC on the UK's accession to the Common Market in the 1970s, and latterly to the historically low abundance of commercially valuable fish in the North Sea and parts of the North Atlantic. To rebuild stocks, the EU's Common Fisheries Policy places restrictions on the total tonnage of catch that can be landed, on the days at sea allowed, and on the fishing gear that can be deployed. In tandem with the decline of sea fishing, commercial fish farms, especially for salmon, have increased in prominence in the rivers and lochs of the north and west of Scotland. Inland waters are rich in freshwater fish such as salmon and trout, although here too there has been an inexorable and so far unexplained decline in abundance over the past decades.

Mineral Resources

Scotland has a large abundance of natural resources, from fertile land suitable for agriculture to oil and gas. In terms of mineral resources, Scotland produces coal, zinc, iron and oil shale. The coal seams beneath central Scotland, in particular in Ayrshire and Fife, contributed significantly to the industrialisation of Scotland during the 19th and 20th centuries. The mining of coal – once a major employer in Scotland – has declined in importance since the latter half of the 20th century, due to cheaper foreign coal and the exhaustion of many seams. The last deep coal mine, at Longannet on the Firth of Forth, closed in 2016. A modest amount of opencast coal mining continues.

Fracking

The Scottish Government states that it is taking a cautious, considered and evidence-based approach to fracking. In January 2015 it placed a moratorium on granting consents for unconventional oil and gas extraction, to allow health and environmental impact tests to be carried out as well as a full public consultation in which every interested organisation and any member of the public could input their views, and it stated that no fracking could or would take place in Scotland while the moratorium remained in place. In October 2017 the Scottish Government announced a ban on fracking, after a six-year struggle that saw massive opposition to the industry across the country.

Oil and gas

Scottish waters, consisting of a large sector of the North Atlantic and the North Sea, contain the largest oil resources in Western Europe; the UK is one of Europe's largest petroleum producers, and the discovery of North Sea oil transformed the Scottish economy. Oil was discovered in the North Sea in 1966, with the first year of full production taking place in 1976. With the growth of oil exploration during that time, as well as the ancillary industries needed to support it, the city of Aberdeen became the UK's centre of the North Sea oil industry, with its port and harbour serving many oil fields offshore. Sullom Voe in Shetland is the site of a major oil terminal, where oil is piped in and transferred to tankers. Similarly, the Flotta Oil Terminal in Orkney is linked by a 230 km long pipeline to the Piper and Occidental oil fields in the North Sea. Grangemouth is at the centre of Scotland's petrochemicals industry. The oil-related industries are a major source of employment and income in these regions; it is estimated that the industry employs around 100,000 workers, or 6% of Scotland's working population.
Although North Sea oil production has been declining since 1999, an estimated 920 million tonnes of recoverable crude oil remained in 2009. Over two and a half billion tonnes were recovered from UK offshore oil fields between the first North Sea crude coming ashore in 1975 and 2002, with most oil fields expected to remain economically viable until at least 2020. High oil prices have resulted in a resurgence of oil exploration, specifically in the North East Atlantic basin to the west of Shetland and the Outer Hebrides, in areas that were previously considered marginal and unprofitable. The North Sea oil and gas industry contributed £35 billion to the UK economy (a little under 2% of GDP) in 2014 and is expected to decline in the coming years.

Secondary Sector

Heavy Industry

Scotland's heavy industry began to develop in the second half of the 18th century. The Carron Company established its ironworks at Falkirk in 1759, initially using imported ore but later using locally sourced ironstone. The iron industry expanded tenfold between 1830 and 1844. The shipbuilding industry on the River Clyde increased greatly from the 1840s, and by 1870 the Clyde was producing more than half of Britain's tonnage of shipping. The heavy industries based around shipbuilding and locomotives went into severe decline after the Second World War.

Light Industry

Manufacturing in Scotland has shifted its focus, with heavy industries such as shipbuilding and iron and steel declining in their importance and contribution to the economy. It is generally argued that this has been a response to increasing globalisation and competition from low-cost producers across the world, which eroded Scotland's comparative advantage in such industries over the latter half of the 20th century. However, the decline in heavy industry in Scotland has been offset by the rise of the manufacture of lighter, less labour-intensive products such as optoelectronics, software, chemical products and derivatives, as well as life sciences. The engineering and defence sectors employ around 30,000 people in Scotland. The principal companies operating in the sector include BAE Systems, Rolls-Royce, Raytheon, Alexander Dennis, Thales, SELEX Galileo, Weir Group and Babcock. The decline of heavy industry resulted in a sectoral shift of labour. This led to smaller firms strengthening links with the academic community and to substantial, industry-specific retraining programmes for the workforce.

Whisky

Whisky is probably the best known of Scotland's manufactured products. Exports increased by 87% from 2003 to 2013, and whisky contributes over £4.25 billion to the UK economy, making up a quarter of all its food and drink revenues. It is also one of the UK's overall top five manufacturing export earners and it supports around 35,000 jobs. Principal whisky-producing areas include Speyside and the Isle of Islay, where there are eight distilleries providing a major source of employment. In many places the industry is closely linked to tourism, with many distilleries also functioning as visitor attractions worth £30 million in GVA each year.

Textiles

Historically, Scotland's export trade was based around animal hides and wool. This trade was first organised around religious centres such as Melrose Abbey, and expanded towards long-established maritime bases for Scottish trade at Bruges and then Veere in the Low Countries, and at Elbląg and Gdańsk in the Baltic.
During the 18th century, the trade in linen overtook that in wool, peaking at over 12 million yards produced in 1775. Production remained in cottage industry units, but the trading conditions were locked into the modern economy and gave rise to institutions such as the British Linen Bank. By 1770, Glasgow was the largest linen manufacturer in Britain. Cotton began to replace linen in economic importance during the 1770s, with the first mill opening in Penicuik in 1778. The trade brought urbanisation of the population, including large numbers of migrants from the Highlands and from Ireland. The thread manufacturer Coats plc had its origins in that trade. In 1782, George Houston built what was then one of the largest cotton mills in the country in Johnstone. In modern times, knitwear and tweed are seen as traditional cottage industries, but names like Pringle have given Scottish knitwear and apparel a presence on the international market. Despite increasing competition from low-cost textile producers in South East Asia and the Indian subcontinent, the textile industry in Scotland is still a major employer, with a workforce of around 22,000. Furthermore, the textiles industry is the seventh-largest exporter in Scotland, accounting for over 3% of all Scottish manufactured products.

Electronics

Silicon Glen is the phrase that was used to describe the growth and development of Scotland's hi-tech and electronics industries in the Central Belt through the 1980s and 1990s, analogous to the larger concentration of hi-tech industries in Silicon Valley, California. Companies such as IBM and Hewlett-Packard have been in Scotland since the 1950s, joined in the 1980s by others such as Sun Microsystems (now owned by Oracle). The IBM factory campus in Greenock was demolished in 2019/2020; IBM no longer manufactures any electronics in Scotland, instead providing consultancy services. 45,000 people are employed by electronics and electronics-related firms, accounting for 12% of manufacturing output. In 2006, Scotland produced 28% of Europe's PCs, more than 7% of the world's PCs, and 29% of Europe's notebooks.

Construction

Scotland builds around 21,000 to 22,000 new homes per year, just under 1% of its existing dwelling stock. According to Property Wire, the number of new homes built in Scotland during 2018 exceeded 20,000 for the first time in a decade, a rise of 15% on the previous year, according to official data. In 2019 there were 21,805 new homes built, according to the 'Housing Statistics for Scotland Quarterly Update' published on 10 March 2020. The home building industry in Scotland directly and indirectly contributed around £5 billion to the Scottish economy in 2006 – about 2% of GDP – greater than that of higher-profile industries such as agriculture, fishing, electronics and tourism. The net value of new building and of repairs, maintenance and improvements combined is just under £11.6 billion, which is about 4.5% of Scottish GDP.

Tertiary and Service industries

Financial services

Edinburgh was ranked 15th in the list of world financial centres in 2007, but fell to 37th in 2012, following damage to its reputation, and in 2015 was ranked 71st out of 84. Big financial institutions such as The Royal Bank of Scotland, the Bank of Scotland, Scottish Widows and Standard Life all have a presence in the city. Centred primarily on the cities of Edinburgh and Glasgow, the financial services industry in Scotland grew by over 35% between 2000 and 2005.
The financial services sector employs around 95,000 people and generates £7bn, or 7% of Scotland's GDP. By 2020, Edinburgh was ranked 17th in the world for its financial services sector, and 6th in Western Europe, according to the Global Financial Centres Index.

Banking

Banking in Scotland has a long history, beginning with the creation of the Bank of Scotland in Edinburgh in 1695 and expanding greatly to support the trading developments of the 18th and 19th centuries. Retail banking services to individuals followed in the 19th century, on the trustee savings bank model pioneered by Rev. Henry Duncan. Scotland has four clearing banks: the Bank of Scotland, The Royal Bank of Scotland, the Clydesdale Bank and TSB Bank. The Royal Bank of Scotland expanded internationally to become the second-largest bank in Europe and the fourth-largest in the world by market capitalisation in 2008, but collapsed in the 2008 financial crisis and had to be bailed out by the UK Government at a cost of £76 billion; its new global headquarters in Edinburgh had augmented the city's position as a major financial centre. Prior to the 2008 financial crisis, Scotland ranked second only to London in the European league of headquarters locations of the 30 largest banks in Europe as measured by market value. Although the Bank of England remains the central bank for the UK Government, three Scottish clearing banks still issue their own banknotes: the Bank of Scotland, the Royal Bank of Scotland and the Clydesdale Bank. These notes are legal currency but have no status as legal tender, a concept which does not exist in Scots law (nor do Bank of England banknotes have that status in Scotland); in practice, however, they are accepted throughout Scotland and by some retailers in the rest of the UK. The full range of Scottish banknotes commonly accepted is £5, £10, £20, £50 and £100. (See British banknotes for further discussion.)

Investment, insurance and asset servicing

The first half of the 19th century brought the creation of many life assurance companies in Scotland, predominantly on the mutual model. By the 1980s there were nine members of the Association of Scottish Life Offices (the counterpart of the Life Offices Association), but these have since demutualised and most were taken over. Standard Life, based in Edinburgh, demutualised and has remained independent. Starting in 1873 with Robert Fleming's Scottish American Investment Trust, a relatively broad stratum of Scots invested in international investment trust ventures; around 80,000 Scots held foreign investment assets in the early 20th century. Nowadays Scotland is one of the world's biggest fund management centres, with over £300bn worth of assets directly serviced or managed in the country. Scottish fund management centres have a major presence in areas such as pensions, property funds and investment trusts, as well as in retail and private client markets. Similarly, asset servicing on behalf of fund managers has become an increasingly important component of the financial services industry in Scotland, with Scottish-based companies providing expertise in securities servicing, investment accounting, performance measurement, trustee and depositary services, and treasury services.

Software

The software sector in Scotland has developed rapidly; in 2016 there were an estimated 40,226 people working in the digital economy across Edinburgh, Glasgow and Dundee. Scotland's manufacturing heritage is carrying over into the software sector, and this is attracting companies from around the world.
Companies developing software in Scotland include Skyscanner, FanDuel and Amazon, alongside a thriving fintech community. Several universities play an important role by producing research in computing science, including the University of Edinburgh's School of Informatics; according to the REF 2014 assessment for computer science and informatics, the School of Informatics produced more world-leading and internationally excellent research (4* and 3*) than any other university in the UK.

Tourism

It is estimated that tourism accounts for 5% of Scotland's GDP. Scotland is a well-developed tourist destination, with attractions including unspoilt countryside, mountains and abundant history. The tourism economy and tourism-related industries in Scotland supported around 196,000 jobs in 2014, mainly in the service sector, accounting for around 7.7% of employment in Scotland. In 2014, over 15.5 million overnight tourism trips were taken in Scotland, for which visitor expenditure totalled £4.8 billion. Domestic tourists (those from the United Kingdom) make up the bulk of visitors to Scotland. In 2014, for example, UK visitors made 12.5 million visits to Scotland, staying 41.6 million nights and spending £2.9 billion. In contrast, overseas residents made 2.7 million visits to Scotland, staying 21.5 million nights and spending £1.8 billion. Visitors from the United States made up 15% of overseas visits, making the United States the largest source of overseas visitors, followed by Germany (13%), France (7%), Australia (6%) and Canada (5%).

Effects of the COVID-19 pandemic

Like all of the UK, Scotland was negatively impacted by the restrictions and lockdowns necessitated by the worldwide COVID-19 pandemic, and tourism has particularly suffered. A report published in March 2021 by the Fraser of Allander Institute at the University of Strathclyde indicated that in Scotland there was "no sign of a trend reversal with more than 70% of businesses in the sector reporting lower turnover than usual". The Scottish Tourism Alliance Task Force published its recommendations in October 2020, with "Immediate Actions" for both the Scottish government and the UK government, including financial grants, the funding of marketing for the sector, and a "temporary removal of Air Passenger Duty to boost route competitiveness". On 24 March 2021, the First Minister announced a £25 million tourism recovery programme "to support the industry for the next 6 months to two years". Sturgeon also reminded the hospitality and tourism industry that the government had provided "over £129 million" in support "for this sector".

Trade

Regional trade

Excluding intra-UK trade, the European Union and the United States constitute the largest markets for Scotland's exports. In the 21st century, with the high rates of growth in many emerging economies of southeast Asia such as China, Thailand and Singapore, there was a drive towards marketing Scottish products and manufactured goods in these countries. Note: revenues from North Sea oil and gas are not included in these figures.

International trade

The total value of international exports from Scotland in 2014 (excluding oil and gas) was estimated at £27.5 billion.
The top five exporting industries in 2014 were food and drink (£4.8 billion); legal, accounting, management, architecture, engineering, technical testing and analysis activities (£2.3 billion); manufacture of refined petroleum and chemical products (£2.1 billion); mining and quarrying (£1.9 billion); and wholesale and retail trade (£1.8 billion). The total value of exports from Scotland to the rest of the UK in 2014 (excluding oil and gas) was estimated at £48.5 billion.

Infrastructure

Transport

Ports

The primary airports in Scotland are Edinburgh Airport, Glasgow International Airport, Glasgow Prestwick Airport, Aberdeen Airport, Inverness Airport and Dundee Airport. Most airports in Scotland were privatised in the 1980s, with the exception of those owned and operated by HIAL, which operates airports on many islands providing flights to mainland Scotland. In 2004, 22.6 million passengers used Scotland's airports, with 514,000 aircraft movements; Scottish airports are amongst the fastest growing in the United Kingdom in terms of passenger numbers. Plans have been published by the major airport operator BAA plc to facilitate the expansion of capacity at the major international airports of Aberdeen, Edinburgh and Glasgow, including new terminals and runways to cope with a large forecast rise in passenger use. Prestwick Airport also has large air freight operations and cargo handling facilities. Scotland is well served by many airlines and has an expanding international route network, with long-haul services to Dubai, New York, Atlanta and Canada. Scotland's seaports include Clydeport, Hunterston Ore Terminal, Grangemouth, the Port of Dundee, the Port of Aberdeen and the Port of Inverness; there are also significant ports at Scapa Flow, Sullom Voe and Flotta. Grangemouth is the only port with a railway connection with W12 gauge clearance, allowing intermodal containers to be transported by rail. The Greenock and Ayrshire railway tunnel underneath Greenock, connecting Clydeport to the rail network, is disused, and its tunnels are too small for intermodal containers to be transported by rail. The Port of Aberdeen has a rail connection, but is unable to handle all types of intermodal containers. Many island communities on Scotland's north and western seaboard are served by lifeline ferry services commissioned by the Scottish Government. These ferry services are vital to island communities' economies, bringing in goods and tourists and exporting textiles, whisky and other produce, and they usually interface with the trunk road network on the Scottish mainland.

Trunk Roads

The trunk road network is a network of high-priority roads covering mainland Scotland, ensuring that all the major ports, population centres and islands remain connected. These roads are maintained to a specified standard, with traffic alerts published through Traffic Scotland and made available on the Transport Scotland website in order to assist hauliers. Trunk roads can be single carriageway, dual carriageway or motorway; their designation is not correlated with traffic volumes but is determined by how significantly a road's closure would affect the local economy and supplies. With the exception of the M90 between Perth and western Edinburgh - a dual-carriageway road upgraded to motorway standard - all motorways in Scotland radiate outwards from Strathclyde. Infrastructure in Scotland is varied in its provision and its quality.
The densest network of roads and railways is concentrated in the Central Lowlands of the country, where around 70% of the population live. The motorway and trunk road network is principally centred on the cities of Edinburgh and Glasgow, connecting them to other major concentrations of population, and is vitally important to the economy of Scotland. Key routes include the M8 motorway, one of the busiest and most important major routes in Scotland, with other primary routes such as the A9 connecting the Highlands to the Central Belt and the A90/M90 connecting Edinburgh and Aberdeen in the east. The M74 and A1, in the west and east of the country respectively, provide the main road corridors from Scotland to England. Many roads in the Highlands are single track, with passing places.

Railways

The Scottish railway networks were built in the Victorian era by private investors, primarily for the movement of goods such as coal. In 1963, the UK Transport Minister Ernest Marples commissioned a review of the railway network which recommended closures in many places, and these were adopted. A reprieve was earned by Liberal MPs in the Highlands, who successfully argued that the skeletal railway network in the Highlands was the only system which still functioned during periods of heavy snow when roads were blocked. These railways were kept open; however, many uneconomic local railway services were closed, as they had lost passenger traffic to the bus and the car, and goods traffic to road hauliers. Most railway services today are local passenger services in and around Glasgow, and intercity or regional passenger rail services as part of the ScotRail franchise, currently operated by Abellio. The Scottish Government also commissions the Caledonian Sleeper franchise, providing overnight sleeper services to London. Both cross-border mainline services are commissioned by the UK government, on the West Coast Main Line and the East Coast Main Line. Most railway lines north of the Central Belt are not electrified; however, there are plans to electrify the majority of railway lines.

Local Transport

The majority of transport infrastructure consists of local roads, maintained by councils; the quality of local roads varies according to each council's funding priorities. Bus services are private businesses operating commercially or under contract to the local council. SPT operates the Glasgow Subway - a circular narrow-gauge railway disconnected from the rest of the railway network. The City of Edinburgh embarked on a troubled tram project - the only light railway in Scotland - which is the subject of an ongoing judge-led inquiry into failings at the council in designing and commissioning the project. In many of the seven crofting counties, many roads are single track with passing places. Drivers negotiate these by pulling into a passing place on their left, or waiting opposite one on their right, to let oncoming vehicles pass.

Communications

Scotland is considered to have an advanced communications infrastructure, similar to other Western nations, and has an extensive framework of developed radio, television, landline and mobile phone networks, as well as broadband internet. Because Scotland's landmass is large and its population sparse, 4G connection has been focused on the most populated areas: mainly the Central Belt, Aberdeen, Dundee and Inverness.
Scotland's primary public broadcaster is BBC Scotland, which operates a substantial number of television channels, including satellite channels, and numerous radio stations. Privately owned commercial TV and radio broadcasters operate a multitude of national, regional and local channels.

Energy

Energy policy in Scotland is the responsibility of the UK government; however, the Scottish Parliament and Scottish Government have used their devolved powers to influence energy policy in Scotland, by controlling or preventing the construction of new thermal power stations and by encouraging the construction of wind turbines. Matters such as preventing blackouts and regulating energy costs remain with the UK government, while some matters, such as the Beauly-Denny power line and other projects, are negotiated between the UK and Scottish Governments.

Electricity

The energy market is reserved to the UK government, with only planning policy within the competency of the Scottish Parliament. Electricity transmission infrastructure is split between two privately owned distribution network operators, Scottish Power and Scottish and Southern Energy, and is regulated by Ofgem. In addition to the regulated utilities, private wires are also available. A very large proportion of energy is generated from wind, hydroelectric and nuclear sources. National Grid plc is the transmission system operator for the whole of the UK. Longannet Power Station - the last coal power station in Scotland - closed earlier than anticipated following the reform of electricity connection charges. Scotland has been identified as having significant potential for the development of wind power, and is endowed with some of the best renewable energy resources in Europe; it is ordinarily a net exporter of electricity, with a generating capacity of 2.2 GW from nuclear generation, 1 GW from hydroelectric dams and 9.347 GW of installed wind generation capacity. The carbon intensity of the Scottish grid is among the lowest in Europe, at 44 gCO2e/kWh. The Scottish Government set a target for 40% of Scotland's electricity generation to be derived from renewable sources by 2020. In Q1 2020, 90.1% of electricity was generated from renewable sources, with onshore wind generation making the largest contribution and supporting several thousand jobs. There are many windfarms along the coast and hills, with plans to create one of the world's largest onshore windfarms at Barvas Moor on the Hebridean Isle of Lewis.

Gas

Gas infrastructure in Scotland is owned and operated by SGN and regulated by Ofgem. The UK is no longer self-sufficient in natural gas from the North Sea, as the UK moved away from coal-powered electricity stations. Gas is used across much of the UK for cooking and domestic heating, which also transitioned away from coal over the second half of the 20th century. The Scottish Government plans to decarbonise the gas supply by 2030 by substituting natural gas with hydrogen. As elemental hydrogen does not occur freely in nature on Earth, Scottish energy policy intends that hydrogen be sourced from electrolysis powered by renewable energy and from steam-reformed methane with carbon capture and storage.

Taxation

The majority of public sector revenue payable by Scottish residents and enterprises is collected at the UK level. Generally it is not possible to identify separately the proportion of revenue receivable from Scotland. GERS (Government Expenditure and Revenue Scotland) therefore uses a number of different methodologies to apportion revenue to Scotland.
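The apportionment GERS performs can be pictured with a simple population-share estimate. The sketch below is a minimal Python illustration only; the function, revenue figure and population figures are all hypothetical, and GERS itself applies different apportionment bases (consumption shares, survey data, and so on) to different revenue streams.

```python
# Minimal sketch of a population-share apportionment of a UK-wide revenue
# stream to Scotland. All figures are hypothetical, for illustration only;
# GERS uses different apportionment bases for different revenue streams.

def apportion_by_population(uk_revenue_bn: float,
                            scottish_population_m: float,
                            uk_population_m: float) -> float:
    """Estimate Scotland's share of a UK-wide revenue by population share."""
    return uk_revenue_bn * (scottish_population_m / uk_population_m)

if __name__ == "__main__":
    # Hypothetical inputs: a £120bn UK revenue stream, 5.4m of 66m people.
    share = apportion_by_population(120.0, 5.4, 66.0)
    print(f"estimated Scottish share: £{share:.1f}bn")  # ~£9.8bn
```

A population share is the crudest possible basis; where taxpayer-level data exist, revenues such as income tax can be apportioned from actual records, which is why the devolution of taxes described below improves the accuracy of these estimates.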
Following the implementation of the Scotland Act 2012 and the Scotland Act 2016, an increasing amount of revenue is set to be devolved to the Scottish Parliament, whereby direct Scottish measures of these revenues will be available. The first revenues to be devolved were landfill tax and property transaction taxes, with Scottish revenue collected for these taxes from 2015-16 onwards. With a nominal gross domestic product (GDP) of up to £152 billion in 2015, total public sector non-North Sea current revenue in Scotland was estimated to be £53.7 billion in 2015-16, approximately 36.5% of GDP. Current non-North Sea revenue in Scotland is estimated to have grown by 13.4% between 2011–12 and 2015–16 in nominal terms. Total public sector expenditure for Scotland has been declining as a share of GDP since 2011–12, and in 2015-16 was estimated to be £68.6 billion, around 46.6% of GDP.

Labour market

As of March 2016, there were 348,045 small and medium-sized enterprises (SMEs) operating in Scotland, providing an estimated 1.2 million jobs. SMEs accounted for 99.3% of all private sector enterprises, for 54.6% of private sector employment and for 40.5% of private sector turnover. As of March 2016, there were an estimated 350,410 private sector enterprises operating in Scotland. Almost all of these enterprises (98.2%) were small (0 to 49 employees); 3,920 (1.1%) were medium-sized (50 to 249 employees) and 2,365 (0.7%) were large (250 or more employees).

Public sector

The public sector in Scotland has a significant impact upon the economy and comprises central government departments, local government, and public corporations. As of 2016, there were approximately 545,000 people employed in the public sector, which accounts for 20.9% of employment in Scotland – this includes all medical professionals employed within the National Health Service in Scotland, those employed in the emergency services and those employed in the state education and higher education sector, in addition to employees of the government in the civil service and in local government, as well as public bodies and corporations. Public sector spending in Scotland was reported in 2017 to be more than £1,400 per head higher than the UK average. Following the devolution referendum of 1997, in which the Scottish electorate voted for devolution, the Scottish Parliament was reconvened under the Scotland Act 1998 as a devolved national, unicameral legislature of Scotland. The Act delineates the legislative competence of the Parliament – the areas in which it can make laws – by explicitly specifying powers that are "reserved" to the Parliament of the United Kingdom; the Scottish Parliament has the power to legislate in all areas that are not explicitly reserved to Westminster. There is a clear separation of responsibility between the powers of the UK government and the devolved Scottish Government in relation to the formulation and execution of national economic policy as it affects Scotland; this is set out under Section 5 of the Scotland Act 1998.

UK Government

The UK Government, along with the Parliament of the United Kingdom, retains control over Scotland's fiscal environment in relation to taxation (including tax rates, tax collection and tax criteria) and the overall share of central government expenditure apportioned to Scotland, in the form of an annual block grant.

Defence

There are several military bases within Scotland, as well as the Royal Scots Battalion based at Bourlon Barracks, Yorkshire.
These include RAF Lossiemouth, RAF Kirknewton and RAF West Freugh; Kinloss Barracks, Redford Barracks, Dreghorn Barracks, Glencorse Barracks, Cameron Barracks, Forthside Barracks, Gordon Barracks and Walcheren Barracks; and HMNB Clyde and RNAD Coulport.

Social Security

HM Treasury retains responsibility for the welfare state. National Insurance rates and bands are reserved, as is the National Insurance Fund. The State Pension age is also reserved, as are the rates and eligibility criteria of the UK state pension system. HMRC is also responsible for calculating and paying Child Benefit and Working Tax Credit, in addition to collecting Scottish income taxes. The Department for Work and Pensions is responsible for determining eligibility criteria, processing and paying benefits, and the development of Universal Credit. The Scottish Government has introduced the Scottish Welfare Fund to lessen the impact of cuts to social security benefits.

Scottish Government

The Scottish Government has complete control over Scottish taxes collected by Revenue Scotland, and has the power to set tax rates and bands (but not the personal allowance) for income tax in Scotland, which is collected by HMRC. It also provides the majority of local authority funding and can exert control over Council Tax, such as by capping rates. The Scottish Government has full control over how Scotland's annual block grant is spent, for example on healthcare, education and state-owned enterprises such as Scottish Water and Caledonian MacBrayne. The Scottish Government does not control macroeconomic policy; however, it does use public procurement to influence private sector behaviour on reserved matters, for example by requiring the Real Living Wage to be paid to all its contractors and sub-contractors. In 2016, the budget of the Scottish Government was around £37bn, which it can spend on the areas not reserved under the Scotland Act 1998.

Taxation

Taxes on fuel, motor vehicles and insurance, as well as Corporation Tax and VAT, are reserved to Westminster. The Scottish Government has the power to set tax rates and bands on income earned in Scotland; income taxes are collected by HM Revenue & Customs on behalf of Scottish ministers, and all revenues are paid into the Scottish Consolidated Fund. The Scottish Parliament has full autonomy over Landfill Tax and over Land and Buildings Transaction Tax (LBTT), the Scottish equivalent of Stamp Duty, which is collected by Revenue Scotland. LBTT is a progressive tax, with different rates of tax paid on different bands of value. This differs from UK Stamp Duty as it previously operated, where a single rate of tax applied to the whole value of the property on a 'slab system', which distorted prices near the thresholds between bands. Many aspects of income tax in Scotland remain reserved to Westminster, such as the setting of exemptions and allowances, most notably the personal allowance. The Scottish Parliament first diverged from UK tax policy in FY 2017-18, when it did not match the UK-wide increase in the threshold for the higher rate of income tax (£43,000 in Scotland as opposed to £45,000), following the Scotland Act 2016, which allowed rates and bands to be set with no reference to UK tax policy. The following financial year, two new bands were created, the Starter Rate and the Intermediate Rate, and the Additional Rate was renamed the Top Rate. Scottish taxpayers have an 'S' prefix to their PAYE code.
†Assumes individuals are in receipt of the standard UK Personal Allowance.
††Those earning more than £100,000 will see their Personal Allowance reduced by £1 for every £2 earned over £100,000.
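The two mechanics just described, banded rates versus a slab, and the withdrawal of the personal allowance above £100,000, are simple arithmetic, and a short worked sketch makes the difference concrete. The Python below is illustrative only: the band thresholds, rates and allowance figure are hypothetical stand-ins, not the actual LBTT or Scottish income tax schedules.

```python
# Worked sketch: progressive "band" tax (LBTT-style) versus the old "slab"
# Stamp Duty, plus the personal-allowance taper (lose £1 of allowance for
# every £2 of income over £100,000). Thresholds and rates are hypothetical.

BANDS = [(145_000, 0.00), (250_000, 0.02), (325_000, 0.05), (float("inf"), 0.10)]

def progressive_tax(price: float) -> float:
    """Tax each slice of the price at its own band's rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in BANDS:
        if price > lower:
            tax += (min(price, upper) - lower) * rate
        lower = upper
    return tax

def slab_tax(price: float) -> float:
    """Apply the single rate of whichever band the whole price falls in."""
    for upper, rate in BANDS:
        if price <= upper:
            return price * rate

def tapered_allowance(income: float, allowance: float = 12_500.0) -> float:
    """Reduce the allowance by £1 for every £2 earned over £100,000."""
    return max(0.0, allowance - max(0.0, income - 100_000.0) / 2)

if __name__ == "__main__":
    for price in (250_000, 250_001):  # straddle a band threshold
        print(price, round(progressive_tax(price), 2), round(slab_tax(price), 2))
    # 250000 -> banded 2100.0  vs slab 5000.0
    # 250001 -> banded 2100.05 vs slab 12500.05: the slab jumps for a £1 rise
    print(tapered_allowance(110_000))  # 7500.0: £10,000 over -> £5,000 lost
```

A £1 increase in price across a threshold raises the banded tax by pence but raises the slab tax by thousands of pounds, which is exactly the price distortion near thresholds that the slab system was criticised for.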
Economic Development

The Scottish Government has several economic development agencies, including Highlands and Islands Enterprise, Scottish Enterprise and Scottish Development International. It recently established the Scottish National Investment Bank, whose aim is to provide finance for small and medium-sized enterprises to grow and develop. Skills Development Scotland was also established, to focus on workforce training, apprenticeships and industrial skills.

Local Government

Local government in Scotland currently consists of 32 councils, which govern many aspects of daily life in Scotland, including:
Council Tax
Non-domestic rates
Maintenance of all roads and pavements (except trunk roads, which are the responsibility of Transport Scotland)
Parking policy
Bus stops (but not bus services)
Commissioning socially necessary bus services
Providing some community transport
Nurseries
Primary and secondary schooling
Care of vulnerable adults
Children's social services
Domestic refuse collection and disposal
Town and country planning
Licensing of hours of sale for alcohol
Licensing of cultural music parades
Licensing of taxis and private hire vehicles
Licensing of window cleaners, market traders, scrap metal merchants, and street hawkers
Licensing of sexual entertainment venues
Food hygiene inspections
Regulation of landlords
Arm's-length council leisure centres and swimming baths
Public parks
Administering the Scottish Welfare Fund
Non-domestic rates in Scotland were previously collected by councils, pooled, and redistributed to councils according to a set formula, without passing through central government funds, with nationally set exemptions, rebates and other measures. This arrangement was abolished in 2020, and non-domestic rates are now entirely controlled by councils.

Social Housing

Scotland had some of the worst overcrowding in the postwar period, and many areas of its cities were comprehensively redeveloped, with new modernist housing built either in tower blocks on the sites of former slum housing, on greenfield sites on the periphery of the cities, or in entirely new towns such as Cumbernauld, Livingston, Glenrothes or East Kilbride. Many former council houses are now run by housing associations, while others were sold to their tenants under the right to buy at a heavy discount. Some of these have been sold on again and are now leased as private rental housing inside what was once a wholly council-owned housing scheme. The right to buy council housing was abolished in Scotland in 2017.

Water & Drainage

Water and sewerage utilities were never privatised in Scotland and were previously run by local water boards, which were gradually amalgamated until one national body, Scottish Water, was created in 2002. Competition for retailing water to business customers was introduced in 2008. Unlike in England, water infrastructure remains the property of Scottish Water; however, metering and billing of business customers is now undertaken by water supply companies. The water industry is regulated by the Water Industry Commission for Scotland. Scottish Water's retail company, Business Stream, competes in the water retail market. Council tax bills in Scotland still include water rates if the property has a water mains connection; some properties in rural areas are not connected to the mains network and have their own private water supply. Water for residential properties is not metered in Scotland.
Education

Scotland's public education system mostly follows comprehensive education principles, with two major types of public school: non-denominational schools and denominational schools. Most denominational schools in Scotland are Roman Catholic. Public education in Scotland is more standardised than in England; Scotland has no equivalents of publicly funded grammar schools, free schools or academies, except for Jordanhill School, which is maintained by the Scottish Government through direct grant-in-aid. Scotland also has networks of private schools which are separate from the public schooling system. Confusion over the terminology can occur between Scotland and England, as 'public schools' in England charge fees for educating pupils, whereas public schools in Scotland are local authority run schools. 'Public' schools in England offered their services openly (to the public) rather than running under the patronage of the Church, while council-run schools in Scotland were traditionally referred to as 'public schools', and many Victorian-era schoolhouses to this day have 'public' inscribed on their exterior. Terminology common to both systems is 'state schools' for publicly funded education and 'independent schools'. Education in Scotland is fully devolved, and all of the universities in Scotland are public universities, as are the colleges which provide further education. Most universities are linked with a research and development sector: the University of Dundee is at the heart of a biotechnology and medical research cluster; the University of Edinburgh is a centre of excellence in the field of artificial intelligence; and the University of Aberdeen is a world leader in the study of offshore technology for the oil and gas industry.

Health

Another major component of public expenditure in Scotland is medical and social care services, delivered by the devolved National Health Service (NHS), which provides the majority of medical services in Scotland, and by local authorities, which are responsible for social care services. NHS Scotland is a major employer, with just under 140,000 whole-time equivalent (WTE) staff; a further 150,000 WTE staff work in social care and services. The NHS in Scotland began in 1948 under a separate Act from that covering England and Wales, and before devolution it was the responsibility of the Secretary of State for Scotland rather than the Health Secretary. There is no healthcare purchaser-provider split in Scotland, and the abolition of the internal market in NHS Scotland was completed in 2004. The Cabinet Secretary for Health and Sport is now responsible for the NHS in Scotland. The NHS and social care services are funded from Scottish taxation and the UK block grant, and health is an almost entirely devolved matter, with procurement of prescription medicines done on a UK-wide basis. Medical care is provided free at the point of use to patients registered with a GP practice in Scotland. Scotland has a more generous social care system than England, with free personal nursing care for adults over 65 and those under 60 with certain medical conditions. Scotland's more generous social care provision results in its per capita spending being 43% higher than England's. Prescribed drugs were made free at the point of use in 2011, leaving England as the only UK nation with prescription charges in place (a flat fee of £9.35 per item).
Dental and optometry examinations are also free at the point of use; however, charges for procedures and appliances apply for adults over 18, except in certain circumstances. Per capita spending on medical and social care is the highest in Great Britain, due to a more dispersed population and worse health inequalities, with higher rates of alcohol dependency, alienation, drug addiction, suicide and violence, dubbed 'the Glasgow effect' by the media. Medical and social care spending is forecast to increase, as the population is ageing faster than in England.

Justice

The Scottish legal system draws from the civil law tradition and has more in common with civil law systems such as that of France than with the common law of England and Wales. The Judiciary of Scotland runs the civil and criminal courts and sets court procedure through Acts of Sederunt and Acts of Adjournal respectively. Solicitors in Scotland are regulated by the Law Society of Scotland, rather than through the Solicitors Regulation Authority. Advocates are regulated by the Faculty of Advocates, whereas in England and Wales barristers are regulated by their Inn. The criminal justice system is almost entirely devolved, including the Procurator Fiscal (the Scottish public prosecutor), the police force, employing around 17,000 full-time equivalent (FTE) staff in 2019, and HM prisons in Scotland, which collectively imprison 8,500 people. The most distinct differences in the Scottish criminal justice system are that only a simple majority of the jury of 15 is required to convict, the requirement for corroboration of evidence, and the existence of a third verdict ('not proven'). The Cabinet Secretary for Justice is responsible for policy matters affecting these systems, such as legal aid, prison governance, drugs rehabilitation, reoffending, victims and witnesses, sentencing guidelines and anti-social behaviour, but has a legal duty to uphold the independence of the courts and the legal profession. The Cabinet Secretary for Justice is also responsible for the Scottish Fire and Rescue Service, following its creation from the merger of eight regional fire and rescue services. The civil justice system also has many differences from that of England and Wales, with many differences in contract law, property law and family law. Scots law has 'delict' rather than 'tort' law, and no legal concept of equity. 'Heritable title' is equivalent to a freehold in England and Wales; however, there is no equivalent of a leasehold in Scots law.

Economic performance

In Scotland, GDP per capita varies from €16,200 in North and East Ayrshire to €50,400 in the city of Edinburgh. 1.1 million people (20% of Scots) live in the five Scottish districts where GDP per person is under €20,000: Clackmannanshire & Fife, East & Mid Lothian, West Dunbartonshire, East & North Ayrshire, and Caithness, Sutherland & Ross. According to Eurostat figures (2013) there are huge regional disparities in the UK, with GDP per capita ranging from €15,000 in West Wales to €179,800 in Inner London West. The average GDP per capita in the South East England region (which excludes London) is €34,200, with no local government area showing a GDP per capita of less than €20,000. Equally, there are 21 areas in the rest of the UK where the GDP per person is under €20,000: 4.5 million people (8.5% of England's population) live in these deprived English districts. The figures below, noting the economic position of Scottish regions in terms of GDP and GDP per capita, come from Eurostat (2013) and are denoted in euros.
The Scottish figures exclude offshore oil revenue. There are 26 areas in the UK where the GDP per person is under €20,000.

Relationship with the rest of the United Kingdom

The Scottish Parliament has control of income tax rates and bands, land taxes, property tax and local taxation, as well as some fiscal policy. The Scottish Parliament also controls the areas of health and education policy. Other aspects of economic and fiscal policy remain a matter for Westminster, including currency, corporation tax, energy policy and foreign policy.

See also
Barnett formula
Council of Economic Advisers (Scotland)
Economy of the United Kingdom
Economy of the European Union
Full fiscal autonomy for Scotland
Geography of Scotland
Politics of Scotland
Scottish Council for Development and Industry
Scottish Enterprise
2014 Scottish independence referendum
Sustainable Growth Commission

References

External links
Scottish Government
UK Government HM Treasury
British Chambers of Commerce Quarterly Economic Survey
The "Top Secret" 1974 Gavin McCrone Report into Scotland's Economy – classified
The "Top Secret" 1974 Gavin McCrone Report into Scotland's Economy – unclassified

OECD member economies World Trade Organization member economies
52994038
https://en.wikipedia.org/wiki/Pillars%20of%20Eternity%20II%3A%20Deadfire
Pillars of Eternity II: Deadfire
Pillars of Eternity II: Deadfire is a role-playing video game developed by Obsidian Entertainment and published by Versus Evil. It is the sequel to 2015's Pillars of Eternity, and was released for Microsoft Windows, Linux and macOS in May 2018, and for PlayStation 4 and Xbox One in January 2020, with a Nintendo Switch version planned for a later date. The game was announced in January 2017 with a crowdfunding campaign on Fig, where it reached its funding goal within a day.

Gameplay

Pillars of Eternity II: Deadfire is a role-playing video game that is played from an isometric perspective. Both returning and new companions are available, depending upon the choices made by the player, and play an optional story role within the game. Deadfire focuses on seafaring and island exploration via a ship; crews can be hired to man the ship and assist in ship combat. Class-based gameplay returns, and a new feature compared to the original is sub-classes, with each class having at least four optional sub-classes with unique skills.

Plot

Deadfire is a direct sequel to Pillars of Eternity, taking place in the world of Eora. As with the first game, the player assumes the role of a "Watcher", a character with the ability to look into other people's souls and read their memories, as well as those of their past lives. The story begins five years after the events of the first game. Eothas, the god of light and rebirth who was believed dead, awakens under Caed Nua, the player's stronghold from the first game. Eothas' awakening is extremely violent: he destroys Caed Nua and drains the souls of the people in the surrounding area. The Watcher similarly has a piece of their soul torn out during the attack, but barely manages to cling to life. In this near-dead state, the Watcher is contacted by Berath, the god of death, who offers to restore their soul in exchange for their agreeing to become Berath's herald and take on the task of pursuing Eothas to discover what he is planning. The hunt for Eothas takes the Watcher and their crew via ship to the Deadfire Archipelago, where they must try to seek out answers—answers which could throw mortals and the gods themselves into chaos. The player's actions and decisions in the first game influence certain storyline elements of Deadfire. Throughout the story, the Watcher meets four different factions all vying for control over the Deadfire area: the imperialistic Royal Deadfire Company, acting on behalf of the expansionist Rauatai empire; the more profit-oriented and mercantile Vailian Trading Company, acting on behalf of the Vailian Republics; the traditionalist Huana, a tribal alliance of natives seeking to uphold their people's independence; and the Príncipi sen Patrena, a federation of pirates seeking to establish a republic of their own. The Watcher can help or hinder these factions along the way. Through their pursuit of Eothas, the Watcher eventually discovers the god's true intentions: he aims to break the Wheel, the cycle of reincarnation that governs the souls of Eora and, by extension, feeds the gods with the energy they need to sustain themselves, hoping that in doing so he can break the other gods' control over all mortal beings, allowing them to be free to pursue their own destinies. To that end, he seeks the mythical lost city of Ukaizo, where the mechanism controlling the Wheel is housed. Though the other gods intervene several times in an attempt to stop Eothas, he is undeterred and continues towards his goal.
By either swearing fealty to one of the factions and gaining their help or acting independently, the Watcher and their ship brave the stormy sea of Ondra's Mortar, which protects the city of Ukaizo, just as Eothas makes his final approach to the Wheel, and confront him there. Eothas, though sympathetic to the Watcher, refuses to back down from his endeavor, explaining to the Watcher that destroying the Wheel would most likely kill him and the rest of the gods for good, but that his death will also give him the power to enact a great change upon all of Eora. Before destroying the Wheel, he returns the piece of the Watcher's soul he took, thereby freeing the Watcher from their debt to Berath, and asks for their advice on what that change should be. An epilogue then follows, detailing the effects the Watcher's choices had on their companions, the different factions, the Deadfire, and the world at large. In the end, the Watcher resolves to head home to the Dyrwood, uncertain of what the future now holds for both gods and mortals.

Development

The game was developed by Obsidian Entertainment, creators of the original Pillars of Eternity, and was published by Versus Evil, with partial funding through Fig. In May 2016, Obsidian CEO Feargus Urquhart announced that the game had entered production. Like its predecessor, Obsidian chose to launch a crowdfunding campaign on Fig to raise money for the development. The campaign launched on January 26, 2017, with a funding goal of $1.1 million and $2.25 million open for equity investment. The funding goal was achieved in under 23 hours, and the total surpassed $4.4 million by the end of the campaign. The game was released for Microsoft Windows, Linux and macOS on May 8, 2018; the PlayStation 4 and Xbox One versions followed in January 2020, with a Nintendo Switch version planned for a later date. A downloadable content pack, the Critical Role Pack, was released for free alongside the game's launch, adding additional character voices and portraits from the original campaign of Critical Role. In January 2019, an update for the game was released that added a turn-based combat style. Pillars of Eternity design director Josh Sawyer explained that if the team were to create a sequel, they would set it in a different location within the game's fictional world to ensure the setting felt new and interesting. Sawyer stated that one focus of Deadfire was to address criticisms raised over the abundance of filler combat encounters in the original game. The game is significantly larger than the original.

Reception

Pillars of Eternity II: Deadfire was met with generally positive reviews, according to review aggregator Metacritic. Sawyer said that the game's sales were "relatively low" compared to the team's expectations. Pillars of Eternity II: Deadfire was nominated for "Best Storytelling" and "PC Game of the Year" at the 2018 Golden Joystick Awards, for "Best Role-Playing Game" at The Game Awards 2018, for "Fan Favorite Role Playing Game" at the Gamers' Choice Awards, for "Role-Playing Game of the Year" at the D.I.C.E. Awards, for "Outstanding Achievement in Videogame Writing" at the Writers Guild of America Awards 2018, for "Outstanding Video Game" at the 30th GLAAD Media Awards, and for "Adventure Game" and "Best Writing" at the 2019 Webby Awards.
References

External links

2018 video games Crowdfunded video games Fantasy video games Linux games MacOS games Nintendo Switch games Obsidian Entertainment games Open-world video games PlayStation 4 games Role-playing video games Single-player video games Video game sequels Video games featuring protagonists of selectable gender Video games about pirates Video games developed in the United States Video games with isometric graphics Windows games Xbox Cloud Gaming games Xbox One games
19109098
https://en.wikipedia.org/wiki/Feng%20Office%20Community%20Edition
Feng Office Community Edition
Feng Office Community Edition (formerly OpenGoo) is an open-source collaboration platform developed and supported by Feng Office and the OpenGoo community. It is a fully featured online office suite with a feature set similar to that of other online office suites, such as G Suite, Microsoft Office Live, Zimbra, LibreOffice Online and Zoho Office Suite. The application can be downloaded and installed on a server. Feng Office can also be categorized as collaborative software and as personal information management software. Features Feng Office Community Edition's main features include project management, document management, contact management, e-mail and time management. Text documents and presentations can be created and edited online. Files can be uploaded, organized and shared, independent of file format. Information in Feng Office Community Edition is organized using workspaces and tags. The application presents stored information through different interfaces, such as lists, dashboards and calendar views. Licensing Feng Office Community Edition is distributed under the GNU Affero General Public License, version 3 only. Technology used Feng Office uses PHP, JavaScript, AJAX (ExtJS) and MySQL technology. Several open-source projects served as a basis for development. ActiveCollab's last open-sourced release was used as the initial code base. It includes CKEditor for online document editing. System requirements The server can run on any operating system. The system needs the following packages: Apache HTTP Server 2.0+, PHP 5.0+, and MySQL 4.1+ (InnoDB support recommended); a sketch of the kind of version check an installer performs against these minimums appears at the end of this entry. On the client side, the user is only required to use a modern Web browser. History OpenGoo started as a degree project at the Faculty of Engineering of the University of the Republic, Uruguay. The project was presented and championed by software engineer Conrado Viña. Software engineers Marcos Saiz and Ignacio de Soto developed the first prototype as their thesis. Professors Eduardo Fernández and Tomás Laurenzo served as tutors. Conrado, Ignacio and Marcos founded the OpenGoo community and remain active members and core developers. The thesis was approved with the highest score. In 2008, Viña joined the Uruguayan software development company Moove It. A second OpenGoo project at the same university, developed by students Fernando Rodríguez, Ignacio Vázquez and Juan Pedro del Campo, aims to build an open-source Web-based spreadsheet. In December 2009 the OpenGoo name was changed to Feng Office Community Edition. See also Collaborative software Free Software licensing List of AGPL web applications List of project management software External links Feng Office site Fengoffice opensource site Sourceforge Project site
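The version floors quoted under System requirements above are the kind of prerequisites an installer has to verify before setup. Below is a minimal, purely illustrative sketch of such a check, written in Python rather than the PHP that Feng Office itself uses; the helper names and example version strings are hypothetical and are not part of the product.

# Hypothetical sketch of an installer-style prerequisite check.
# The minimum versions are the ones quoted in this entry:
# Apache HTTP Server 2.0+, PHP 5.0+, MySQL 4.1+.

MINIMUMS = {
    "apache": (2, 0),
    "php": (5, 0),
    "mysql": (4, 1),
}

def parse_version(version):
    """Turn a dotted version string like '5.2.17' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def meets_requirements(detected):
    """Return True if every detected package meets its minimum version."""
    ok = True
    for package, minimum in MINIMUMS.items():
        version = detected.get(package)
        if version is None or parse_version(version)[:len(minimum)] < minimum:
            print(package, "requires", ".".join(map(str, minimum)) + "+, found", version)
            ok = False
    return ok

if __name__ == "__main__":
    # Example versions as a host might report them.
    print(meets_requirements({"apache": "2.2.3", "php": "5.2.17", "mysql": "5.0.95"}))

Comparing tuples rather than raw strings avoids the classic pitfall of "10" sorting before "9" in a string comparison.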
23958411
https://en.wikipedia.org/wiki/Techno
Techno
Techno is a genre of electronic dance music (EDM) characterized by a repetitive four on the floor beat which is generally produced for use in a continuous DJ set. The central rhythm is often in common time (4/4), while the tempo typically varies between 120 and 150 beats per minute (bpm). Artists may use electronic instruments such as drum machines, sequencers, and synthesizers, as well as digital audio workstations. Drum machines from the 1980s such as Roland's TR-808 and TR-909 are highly prized, and software emulations of such retro instruments are popular. Use of the term "techno" to refer to a type of electronic music originated in Germany in the early 1980s. In 1988, following the UK release of the compilation Techno! The New Dance Sound of Detroit, the term came to be associated with a form of electronic dance music produced in Detroit. Detroit techno resulted from the melding of synthpop by artists such as Kraftwerk, Giorgio Moroder and Yellow Magic Orchestra with African American styles such as house, electro, and funk. Added to this is the influence of futuristic and science-fiction themes relevant to life in American late capitalist society, with Alvin Toffler's book The Third Wave a notable point of reference. The music produced in the mid to late 1980s by Juan Atkins, Derrick May, and Kevin Saunderson (collectively known as the Belleville Three), along with Eddie Fowlkes, Blake Baxter, James Pennington and others is viewed as the first wave of techno from Detroit. After the success of house music in a number of European countries, techno grew in popularity in the UK, Germany, Belgium and the Netherlands. In Europe regional variants quickly evolved and by the early 1990s techno subgenres such as acid, hardcore, ambient, and dub techno had developed. Music journalists and fans of techno are generally selective in their use of the term, so a clear distinction can be made between sometimes related but often qualitatively different styles, such as tech house and trance. Detroit techno In exploring Detroit techno's origins writer Kodwo Eshun maintains that "Kraftwerk are to techno what Muddy Waters is to the Rolling Stones: the authentic, the origin, the real." Juan Atkins has acknowledged that he had an early enthusiasm for Kraftwerk and Giorgio Moroder, particularly Moroder's work with Donna Summer and the producer's own album E=MC2. Atkins also mentions that "around 1980 I had a tape of nothing but Kraftwerk, Telex, Devo, Giorgio Moroder and Gary Numan, and I'd ride around in my car playing it." Regarding his initial impression of Kraftwerk, Atkins notes that they were "clean and precise" relative to the "weird UFO sounds" featured in his seemingly "psychedelic" music. Derrick May identified the influence of Kraftwerk and other European synthesizer music in commenting that "it was just classy and clean, and to us it was beautiful, like outer space. Living around Detroit, there was so little beauty... everything is an ugly mess in Detroit, and so we were attracted to this music. It, like, ignited our imagination!". May has commented that he considered his music a direct continuation of the European synthesizer tradition. He also identified Japanese synthpop act Yellow Magic Orchestra, particularly member Ryuichi Sakamoto, and British band Ultravox, as influences, along with Kraftwerk. 
YMO's song "Technopolis" (1979), a tribute to Tokyo as an electronic mecca, is considered an "interesting contribution" to the development of Detroit techno, foreshadowing concepts that Atkins and Davis would later explore with Cybotron. Kevin Saunderson has also acknowledged the influence of Europe, but he claims to have been more inspired by the idea of making music with electronic equipment: "I was more infatuated with the idea that I can do this all myself." These early Detroit techno artists additionally employed science fiction imagery to articulate their visions of a transformed society. School days Prior to achieving notoriety, Atkins, Saunderson, May, and Fowlkes shared common interests as budding musicians, "mix" tape traders, and aspiring DJs. They also found musical inspiration via the Midnight Funk Association, an eclectic five-hour late-night radio program hosted on various Detroit radio stations, including WCHB, WGPR, and WJLB-FM, from 1977 through the mid-1980s by DJ Charles "The Electrifying Mojo" Johnson. Mojo's show featured electronic music by artists such as Giorgio Moroder, Kraftwerk, Yellow Magic Orchestra and Tangerine Dream, alongside the funk sounds of acts such as Parliament Funkadelic and dance-oriented new wave music by bands like Devo and the B-52's. Though the disco boom in Detroit was short-lived, it had the effect of inspiring many individuals to take up mixing, Juan Atkins among them. Subsequently, Atkins taught May how to mix records, and in 1981, "Magic Juan" and Derrick "Mayday", in conjunction with three other DJs, one of whom was Eddie "Flashin" Fowlkes, launched themselves as a party crew called Deep Space Soundworks (also referred to as Deep Space). In 1980 or 1981 they met with Mojo and proposed that they provide mixes for his show, which they did end up doing the following year. During the late 1970s and early 1980s, high school clubs such as Brats, Charivari, Ciabattino, Comrades, Gables, Hardwear, Rafael, Rumours, Snobs, and Weekends allowed the young promoters to develop and nurture a local dance music scene. As the local scene grew in popularity, DJs began to band together to market their mixing skills and sound systems to clubs that were hoping to attract larger audiences. Local church activity centers, vacant warehouses, offices, and YMCA auditoriums were the early locations where the musical form was nurtured. Juan Atkins Of the four individuals responsible for establishing techno as a genre in its own right, Juan Atkins is widely cited as "The Originator". In 1995, the American music technology publication Keyboard Magazine honored him as one of the "12 Who Count" in the history of keyboard music. In the early 1980s, Atkins began recording with musical partner Richard Davis (and later with a third member, Jon-5) as Cybotron. This trio released a number of rock and electro-inspired tunes, the most successful of which were "Clear" (1983) and its moodier follow-up, "Techno City" (1984). Atkins used the term techno to describe Cybotron's music, taking inspiration from futurist author Alvin Toffler, the original source for words such as cybotron and metroplex. Atkins has described earlier synthesizer-based acts like Kraftwerk as techno, although many would consider both Kraftwerk's output and Cybotron's to be electro. Atkins viewed Cybotron's "Cosmic Cars" (1982) as unique, Germanic, synthesized funk, but he later heard Afrika Bambaataa's "Planet Rock" (1982) and considered it to be a superior example of the music he envisioned.
Inspired, he resolved to continue experimenting, and he encouraged Saunderson and May to do likewise. (Cosgrove 1988b: "At the time, [Atkins] believed ["Techno City"] was a unique and adventurous piece of synthesizer funk, more in tune with Germany than the rest of black America, but on a dispiriting visit to New York, Juan heard Afrika Bambaataa's 'Planet Rock' and realized that his vision of a spartan electronic dance sound had been upstaged. He returned to Detroit and renewed his friendship with two younger students from Belleville High, Kevin Saunderson and Derrick May, and quietly over the next few years the three of them became the creative backbone of Detroit Techno." "Techno City" was released in 1984; Sicko 1999:73 clarifies that Atkins was in New York in 1982, trying to get Cybotron's "Cosmic Cars" into the hands of radio DJs, when he first heard "Planet Rock", so "Cosmic Cars", not "Techno City", is the unique and adventurous piece of synthesizer funk.) Eventually, Atkins started producing his own music under the pseudonym Model 500, and in 1985 he established the record label Metroplex. The same year saw an important turning point for the Detroit scene with the release of Model 500's "No UFO's," a seminal work that is generally considered the first techno production (Butler 2006:43; "In 1985 Juan Atkins released the first record on his fledgling label Metroplex, 'No UFO's', now widely regarded as Year Zero of the techno movement": Cox, T. (2008), Model 500: Remake/remodel, interview with Atkins and Mike Banks hosted on www.residentadvisor.net). Chicago The music's producers, especially May and Saunderson, admit to having been fascinated by the Chicago club scene and influenced by house in particular. May's 1987 hit "Strings of Life" (released under the alias Rhythim Is Rhythim) is considered a classic in both the house and techno genres ("RIR singles like 'Strings of Life'...are among the few classics in the debased world of techno"). Juan Atkins also believes that the first acid house producers, seeking to distance house music from disco, emulated the techno sound. Atkins also suggests that the Chicago house sound developed as a result of Frankie Knuckles using a drum machine he bought from Derrick May. In the UK, a club following for house music grew steadily from 1985, with interest sustained by scenes in London, Manchester, Nottingham, and later Sheffield and Leeds. The DJs thought to be responsible for house's early UK success include Mike Pickering, Mark Moore, Colin Faver, and Graeme Park. Detroit sound The early producers, enabled by the increasing affordability of sequencers and synthesizers, merged a European synthpop aesthetic with aspects of soul, funk, disco, and electro, pushing electronic dance music into uncharted terrain. They deliberately rejected the Motown legacy and traditional formulas of R&B and soul, and instead embraced technological experimentation. Cosgrove 1988a. Although the Detroit dance music has been casually lumped in with the jack virus of Chicago house, the young techno producers of the Seventh City claim to have their own sound, music that goes 'beyond the beat', creating a hybrid of post-punk, funkadelia and electro-disco...a mesmerizing underground of new dance which blends European industrial pop with black American garage funk...If the techno scene worships any gods, they are a pretty deranged deity, according to Derrick May. "The music is just like Detroit, a complete mistake.
It's like George Clinton and Kraftwerk stuck in an elevator." ...And strange as it may seem, the techno scene looked to Europe, to Heaven 17, Depeche Mode and the Human League for its inspiration. ...[Says an Underground Resistance-related group] "Techno is all about simplicity. We don't want to compete with Jimmy Jam and Terry Lewis. Modern R&B has too many rules: big snare sounds, big bass and even bigger studio bills." Techno is probably the first form of contemporary black music which categorically breaks with the old heritage of soul music. Unlike Chicago House, which has a lingering obsession with seventies Philly, and unlike New York Hip Hop with its deconstructive attack on James Brown's back catalogue, Detroit Techno refutes the past. It may have a special place for Parliament and Pete Shelley, but it prefers tomorrow's technology to yesterday's heroes. Techno is a post-soul sound...For the young black underground in Detroit, emotion crumbles at the feet of technology. ...Despite Detroit's rich musical history, the young techno stars have little time for the golden era of Motown. Juan Atkins of Model 500 is convinced there is little to be gained from the motor-city legacy... "Say what you like about our music," says Blake Baxter, "but don't call us the new Motown...we're the second coming."Rietveld 1998:124–127 The resulting Detroit sound was interpreted by Derrick May and one journalist in 1988 as a "post-soul" sound with no debt to Motown, but by another journalist a decade later as "soulful grooves" melding the beat-centric styles of Motown with the music technology of the time. May famously described the sound of techno as something that is "...like Detroit...a complete mistake. It's like George Clinton and Kraftwerk are stuck in an elevator with only a sequencer to keep them company." Juan Atkins has stated that it is "music that sounds like technology, and not technology that sounds like music, meaning that most of the music you listen to is made with technology, whether you know it or not. But with techno music, you know it." One of the first Detroit productions to receive wider attention was Derrick May's "Strings of Life" (1987), which, together with May's previous release, "Nude Photo" (1987), helped raise techno's profile in Europe, especially the UK and Germany, during the 1987–1988 house music boom (see Second Summer of Love). It became May's best known track, which, according to Frankie Knuckles, "just exploded. It was like something you can't imagine, the kind of power and energy people got off that record when it was first heard. Mike Dunn says he has no idea how people can accept a record that doesn't have a bassline." Acid house By 1988, house music had exploded in the UK, and acid house was increasingly popular. There was also a long-established warehouse party subculture based around the sound system scene. In 1988, the music played at warehouse parties was predominantly house. That same year, the Balearic party vibe associated with Ibiza-based DJ Alfredo Fiorito was transported to London, when Danny Rampling and Paul Oakenfold opened the clubs Shoom and Spectrum, respectively. Both night spots quickly became synonymous with acid house, and it was during this period that the use of MDMA, as a party drug, started to gain prominence. 
Other important UK clubs at this time included Back to Basics in Leeds, Sheffield's Leadmill and Music Factory, and in Manchester The Haçienda, where Mike Pickering and Graeme Park's Friday night spot, Nude, was an important proving ground for American underground dance music. Acid house party fever escalated in London and Manchester, and it quickly became a cultural phenomenon. MDMA-fueled club goers, faced with 2 A.M. closing hours, sought refuge in the warehouse party scene that ran all night. To escape the attention of the press and the authorities, this after-hours activity quickly went underground. Within a year, however, up to 10,000 people at a time were attending the first commercially organized mass parties, called raves, and a media storm ensued. The success of house and acid house paved the way for wider acceptance of the Detroit sound, and vice versa: techno was initially supported by a handful of house music clubs in Chicago, New York, and Northern England, with London clubs catching up later; but in 1987, it was "Strings of Life" which eased London club-goers into acceptance of house, according to DJ Mark Moore. (Cosgrove 1988a: "Although it can now be heard in Detroit's leading clubs, the local area has shown a marked reluctance to get behind the music. It has been in clubs like the Powerplant (Chicago), The World (New York), The Hacienda (Manchester), Rock City (Nottingham) and Downbeat (Leeds) where the techno sound has found most support. Ironically, the only Detroit club which really championed the sound was a peripatetic party night called Visage, which unromantically shared its name with one of Britain's oldest new romantic groups.") The New Dance Sound of Detroit The mid-1988 UK release of Techno! The New Dance Sound of Detroit, an album compiled by ex-Northern Soul DJ and Kool Kat Records boss Neil Rushton (at the time an A&R scout for Virgin's "10 Records" imprint) and Derrick May, introduced the word techno to UK audiences. Although the compilation put techno into the lexicon of music journalism in the UK, the music was initially viewed as Detroit's interpretation of Chicago house rather than as a separate genre ("Detroit's 'techno' ... and many more stylistic outgrowths have occurred since the word 'house' gained national currency in 1985"). The compilation's working title had been The House Sound of Detroit until the addition of Atkins' song "Techno Music" prompted reconsideration. Rushton was later quoted as saying he, Atkins, May, and Saunderson came up with the compilation's final name together, and that the Belleville Three voted down calling the music some kind of regional brand of house; they instead favored a term they were already using, techno. Derrick May recalls this as one of his busiest periods. Commercially, the release did not fare well and failed to recoup its costs, but Inner City's production "Big Fun" (1988), a track that was almost not included on the compilation, became a crossover hit in fall 1988. The record was also responsible for bringing industry attention to May, Atkins and Saunderson, which led to discussions with ZTT Records about forming a techno supergroup called Intellex. But when the group were on the verge of finalising their contract, May allegedly refused to agree to Top of the Pops appearances and negotiations collapsed. According to May, ZTT label boss Trevor Horn had envisaged that the trio would be marketed as a "black Pet Shop Boys."
Despite Virgin Records' disappointment with the poor sales of Rushton's compilation, the record was successful in establishing an identity for techno and was instrumental in creating a platform in Europe for both the music and its producers. Ultimately, the release served to distinguish the Detroit sound from Chicago house and other forms of underground dance music that were emerging during the rave era of the late 1980s and early 1990s, a period during which techno became more adventurous and distinct. (Sicko 1999:102: "Once Rushton and Atkins set techno apart with the Techno! compilation, the music took off on its own course, no longer parallel to the Windy City's progeny. And as the 1980s came to a close, the difference between techno and house music became increasingly pronounced, with techno's instrumentation growing more and more adventurous.") Music Institute In mid-1988, developments in the Detroit scene led to the opening of a nightclub called the Music Institute (MI), located at 1315 Broadway in downtown Detroit. The venue was secured by George Baker and Alton Miller, with Darryl Wynn and Derrick May participating as Friday night DJs, and Baker and Chez Damier playing to a mostly gay crowd on Saturday nights. The club closed on November 24, 1989, with Derrick May playing "Strings of Life" along with a recording of clock tower bells. Though short-lived, MI was known internationally for its all-night sets, its sparse white rooms, and its juice bar stocked with "smart drinks" (the Institute never served liquor). The MI, notes Dan Sicko, along with Detroit's early techno pioneers, "helped give life to one of the city's important musical subcultures – one that was slowly growing into an international scene." German techno In 1982, while working at Frankfurt's City Music record store, DJ Talla 2XLC started to use the term techno to categorize artists such as New Order, Depeche Mode, Kraftwerk, Heaven 17 and Front 242, with the word used as shorthand for technologically created dance music. Talla's categorization became a point of reference for other DJs, including Sven Väth. Talla further popularized the term in Germany when he founded Technoclub at Frankfurt's No Name Club in 1984; the club night later moved to the Dorian Gray club in 1987. Talla's club spot served as the hub for the regional EBM and electronic music scene, and according to Jürgen Laarmann, of Frontpage magazine, it had historical merit in being the first club in Germany to play almost exclusively electronic dance music. Frankfurt tape scene Inspired by Talla's music selection, in the early 1980s several young artists from Frankfurt started to experiment on cassette tapes with electronic music coming from the City Music record store, mixing the latest catalogue with additional electronic sounds and pitched BPM. This became known as the Frankfurt tape scene. The Frankfurt tape scene evolved around the early and experimental work done by the likes of Tobias Freund, Uwe Schmidt, Lars Müller and Martin Schopf. Some of the work done by Andreas Tomalla, Markus Nikolai and Thomas Franzmann evolved into collaborative work under the Bigod 20 collective. While this early work was characterized by experimental electronic music fused with EBM, krautrock, synthpop and technopop influences, the later work of the mid-to-late 1980s clearly transitioned to a techno sound. Influence of Chicago and Detroit Germany's engagement with American underground dance music during the 1980s paralleled that in the UK.
By 1987 a German party scene based around the Chicago sound was well established. The following year (1988) saw acid house making as significant an impact on popular consciousness in Germany as it had in England. In 1989 German DJs Westbam and Dr. Motte established the Ufo club, an illegal party venue, and co-founded the Love Parade. After the Berlin Wall fell on 9 November 1989, free underground techno parties mushroomed in East Berlin, and a rave scene comparable to that in the UK was established. East German DJ Paul van Dyk has remarked that techno was a major force in reestablishing social connections between East and West Germany during the unification period. Growth of German scene In 1991 a number of party venues closed, including Ufo, and the Berlin techno scene centered itself around four locations close to the foundations of the Berlin Wall: Planet, E-Werk, Bunker, and the long-lived Tresor. It was in Tresor at this time that a trend in paramilitary clothing was established (amongst the techno fraternity) by DJ Tanith, possibly as an expression of a commitment to the underground aesthetic of the music, or perhaps influenced by UR's paramilitary posturing. In the same period, German DJs began intensifying the speed and abrasiveness of the sound, as an acid-infused techno began transmuting into hardcore. DJ Tanith commented at the time that "Berlin was always hardcore, hardcore hippie, hardcore punk, and now we have a very hardcore house sound." This emerging sound is thought to have been influenced by Dutch gabber and Belgian hardcore; styles that were in their own perverse way paying homage to Underground Resistance and Richie Hawtin's Plus 8 Records. Other influences on the development of this style were European electronic body music (EBM) groups of the mid-1980s such as DAF, Front 242, and Nitzer Ebb. Changes were also taking place in Frankfurt during the same period, but the scene there did not share the egalitarian approach found in the Berlin party scene. It was instead very much centered around discothèques and existing arrangements with various club owners. In 1988, after the Omen opened, the Frankfurt dance music scene was allegedly dominated by the club's management, who made it difficult for other promoters to get a start. By the early 1990s Sven Väth had become perhaps the first DJ in Germany to be worshipped like a rock star. He performed center stage with his fans facing him, and as co-owner of Omen, he is believed to have been the first techno DJ to run his own club. One of the few real alternatives then was The Bruckenkopf in Mainz, underneath a Rhine bridge, a venue that offered a non-commercial alternative to Frankfurt's discothèque-based clubs. Other notable underground parties were those run by Force Inc. Music Works and Ata & Heiko from Playhouse Records (Ongaku Musik). By 1992 DJ Dag & Torsten Fenslau were running a Sunday morning session at Dorian Gray, a plush discothèque near the Frankfurt airport. They initially played a mix of different styles including Belgian new beat, deep house, Chicago house, and synthpop such as Kraftwerk and Yello, and it was out of this blend of styles that the Frankfurt trance scene is believed to have emerged. In 1993–94 rave became a mainstream music phenomenon in Germany, bringing with it a return to "melody, New Age elements, insistently kitsch harmonies and timbres".
This undermining of the German underground sound led to the consolidation of a German "rave establishment," spearheaded by the party organisation Mayday, with its record label Low Spirit, DJ Westbam, Marusha, and a music channel called VIVA. At this time the German popular music charts were riddled with Low Spirit "pop-Tekno" German folk music reinterpretations of tunes such as "Somewhere Over The Rainbow" and "Tears Don't Lie", many of which became hits. At the same time, in Frankfurt, a supposed alternative was music characterized by Simon Reynolds as "moribund, middlebrow Electro-Trance music, as represented by Frankfurt's own Sven Väth and his Harthouse label." Tekkno versus techno In Germany, fans started to refer to the harder techno sound emerging in the early 1990s as Tekkno (or Brett). This alternative spelling, with varying numbers of ks, began as a tongue-in-cheek attempt to emphasize the music's hardness, but by the mid-1990s it came to be associated with a controversial point of view that the music was and perhaps always had been wholly separate from Detroit's techno, deriving instead from a 1980s EBM-oriented club scene cultivated in part by DJ/musician Talla 2XLC in Frankfurt. At some point tension over "who defines techno" arose between scenes in Frankfurt and Berlin. DJ Tanith has said that techno as a term already existed in Germany but was to a large extent undefined. Dimitri Hegemann has stated that the Frankfurt definition of techno associated with Talla's Technoclub differed from that used in Berlin. Frankfurt's Armin Johnert viewed techno as having its roots in acts such as DAF, Cabaret Voltaire, and Suicide, but a younger generation of club goers perceived the older EBM and industrial styles as handed down and outdated. The Berlin scene offered an alternative, and many began embracing an imported sound that was being referred to as Techno-House. The move away from EBM had started in Berlin when acid house became popular, thanks to Monika Dietl's radio show on SFB 4. Tanith distinguished acid-based dance music from the earlier approaches, whether DAF or Nitzer Ebb, because he felt the latter was aggressive and epitomized "being against something," whereas of acid house he said, "it's electronic, it's fun, it's nice." By spring 1990, Tanith and Wolle XDP, an East Berlin party organizer responsible for the X-tasy Dance Project, were organizing the first large-scale rave events in Germany. This development would lead to a permanent move away from the sound associated with Techno-House and toward a hard-edged mix of music that came to define Tanith and Wolle's Tekknozid parties. According to Wolle it was an "out and out rejection of disco values"; instead they created a "sound storm" and encouraged a form of "dance floor socialism," where the DJ was not placed in the middle and you "lose yourself in light and sound." Developments As the techno sound evolved in the late 1980s and early 1990s, it also diverged to such an extent that a wide spectrum of stylistically distinct music was being referred to as techno. This ranged from relatively pop-oriented acts such as Moby to the distinctly anti-commercial sentiments of Underground Resistance. Derrick May's experimentation on works such as Beyond the Dance (1989) and The Beginning (1990) was credited with taking techno "in dozens of new directions at once and having the kind of expansive impact John Coltrane had on Jazz".
The Birmingham-based label Network Records was instrumental in introducing Detroit techno to British audiences. By the early 1990s, the original techno sound had garnered a large underground following in the United Kingdom, Germany, the Netherlands and Belgium. The growth of techno's popularity in Europe between 1988 and 1992 was largely due to the emergence of the rave scene and a thriving club culture. Exodus In America, apart from regional scenes in Detroit, New York City, Chicago and Orlando, interest was limited. Producers from Detroit, frustrated by the lack of opportunity in their home country, looked to Europe for their future livelihood. This first wave of Detroit expatriates was soon joined by a number of up-and-coming artists, the so-called "second wave", including Carl Craig, Octave One, Jay Denham, Kenny Larkin, and Stacey Pullen, with UR's Jeff Mills, Mike Banks, and Robert Hood pushing their own unique sound. A number of New York producers were also making an impression at this time, notably Frankie Bones, Lenny Dee, and Joey Beltram. In the same period, in Windsor, Ontario, close to Detroit, Richie Hawtin, with business partner John Acquaviva, launched the influential imprint Plus 8 Records. Developments in American-produced techno between 1990 and 1992 fueled the expansion and eventual divergence of techno in Europe, particularly in Germany (Reynolds 2006:228–229). In Berlin, following the closure of a free party venue called Ufo, the club Tresor opened in 1991. The venue was for a time the standard bearer for techno and played host to many of the leading Detroit producers, some of whom relocated to Berlin. By 1993, as interest in techno in the UK club scene started to wane, Berlin was considered the unofficial techno capital of Europe. Although eclipsed by Germany, Belgium was another focus of second-wave techno in this time period. The Ghent-based label R&S Records embraced harder-edged techno by "teenage prodigies" like Beltram and C.J. Bolland, releasing "tough, metallic tracks...with harsh, discordant synth lines that sounded like distressed Hoovers," according to one music journalist. In the United Kingdom, Sub Club, which opened in Glasgow in 1987, and Trade, which opened its doors to Londoners in 1990, were pioneering venues that helped bring techno into the country. Both clubs were praised for their late opening hours and party-focused clientele. Trade has often been referred to as the "original all night bender". A Techno Alliance In 1993, the German techno label Tresor Records released the compilation album Tresor II: Berlin & Detroit – A Techno Alliance, a testament to the influence of the Detroit sound upon the German techno scene and a celebration of a "mutual admiration pact" between the two cities. As the mid-1990s approached, Berlin was becoming a haven for Detroit producers; Jeff Mills and Blake Baxter even resided there for a time. In the same period, with the assistance of Tresor, Underground Resistance released their X-101/X-102/X-103 album series, Juan Atkins collaborated with 3MB's Thomas Fehlmann and Moritz von Oswald, and the Tresor-affiliated label Basic Channel had its releases mastered by Detroit's National Sound Corporation, the main mastering house for the entire Detroit dance music scene. In a sense, popular electronic music had come full circle, returning to Germany, home of a primary influence on the electronic dance music of the 1980s: Düsseldorf's Kraftwerk.
Even the dance sounds of Chicago had a German connection, as it was in Munich that Giorgio Moroder and Pete Bellotte first produced the 1970s Eurodisco synthpop sound. Minimal techno As techno continued to transmute, a number of Detroit producers began to question the trajectory the music was taking. One response came in the form of so-called minimal techno (a term producer Daniel Bell found difficult to accept, finding the term minimalism, in the artistic sense of the word, too "arty"). It is thought that Robert Hood, a Detroit-based producer and one-time member of UR, is largely responsible for ushering in the minimal strain of techno. Hood describes the situation in the early 1990s as one where techno had become too "ravey", with increasing tempos, the emergence of gabber, and related trends straying far from the social commentary and soul-infused sound of original Detroit techno. In response, Hood and others sought to emphasize a single element of the Detroit aesthetic, interpreting techno with "a basic stripped down, raw sound. Just drums, basslines and funky grooves and only what's essential. Only what is essential to make people move". Jazz influences Some techno has also been influenced by or directly infused with elements of jazz. This led to increased sophistication in the use of both rhythm and harmony in a number of techno productions. Manchester (UK)-based techno act 808 State helped fuel this development with tracks such as "Pacific State" and "Cobra Bora" in 1989. Detroit producer Mike Banks was heavily influenced by jazz, as demonstrated on the influential Underground Resistance release Nation 2 Nation (1991). By 1993, Detroit acts such as Model 500 and UR had made explicit references to the genre, with the tracks "Jazz Is The Teacher" (1993) and "Hi-Tech Jazz" (1993), the latter being part of a larger body of work and group called Galaxy 2 Galaxy, a self-described jazz project based on Kraftwerk's "man machine" doctrine. "Galaxy 2 Galaxy is a band that was conceptualized with the first hitech Jazz record produced by UR in 1986/87 and later released in 1990 which was Nation 2 Nation (UR-005). Jeff Mills and Mike Banks had visions of Jazz music and musicians operating on the same "man machine" doctrine dropped on them from Kraftwerk. Early experiments with synthesizers and jazz by artists like Herbie Hancock, Stevie Wonder, Weather Report, Return to Forever, Larry Heard and Lenny White's Astral Pirates also pointed them in this direction. UR went on to produce and further innovate this form of music which was coined 'Hitech Jazz' by fans after the historic 1993 release of UR's Galaxy 2 Galaxy (UR-025) album which included the underground UR smash titled 'Hitech Jazz'." This lead was followed by a number of techno producers in the UK who were influenced by both jazz and UR, Dave Angel's "Seas of Tranquility" EP (1994) being a case in point (Angelic Upstart: Mixmag interview with Dave Angel detailing his interest in jazz, retrieved from Techno.de). Other notable artists who set about expanding upon the structure of "classic techno" include Dan Curtin, Morgan Geist, Titonton Duvante and Ian O'Brien. Intelligent techno In 1991 UK music journalist Matthew Collin wrote that "Europe may have the scene and the energy, but it's America which supplies the ideological direction...if Belgian techno gives us riffs, German techno the noise, British techno the breakbeats, then Detroit supplies the sheer cerebral depth."
By 1992 a number of European producers and labels began to associate rave culture with the corruption and commercialization of the original techno ideal. Following this, the notion of an intelligent or Detroit-inspired "pure" techno aesthetic began to take hold. Detroit techno had maintained its integrity throughout the rave era and was pushing a new generation of so-called intelligent techno producers forward. Simon Reynolds suggests that this progression "involved a full-scale retreat from the most radically posthuman and hedonistically functional aspects of rave music toward more traditional ideas about creativity, namely the auteur theory of the solitary genius who humanizes technology." The term intelligent techno was used to differentiate more sophisticated versions of underground techno from rave-oriented styles such as breakbeat hardcore, Schranz, and Dutch gabber. Warp Records was among the first to capitalize upon this development with the release of the compilation album Artificial Intelligence. Of this time, Warp founder and managing director Steve Beckett said Warp had originally marketed Artificial Intelligence using the description "electronic listening music", but this was quickly replaced by "intelligent techno". In the same period (1992–93) other names were also bandied about, such as armchair techno, ambient techno, and electronica, but all referred to an emerging form of post-rave dance music for the "sedentary and stay at home". Following the commercial success of the compilation in the United States, Intelligent Dance Music eventually became the name most commonly used for much of the experimental dance music emerging during the mid-to-late 1990s. Although it is primarily Warp that has been credited with ushering in the commercial growth of IDM and electronica, in the early 1990s there were many notable labels associated with the initial intelligence trend that received little, if any, wider attention. Amongst others they include Black Dog Productions (1989), Carl Craig's Planet E (1991), Kirk Degiorgio's Applied Rhythmic Technology (1991), Eevo Lute Muzique (1991), and General Production Recordings (1991). In 1993, a number of new "intelligent techno"/"electronica" record labels emerged, including New Electronica, Mille Plateaux, 100% Pure (1993) and Ferox Records (1993). Free techno In the early 1990s a post-rave, DIY, free party scene had established itself in the UK. It was largely based around an alliance between warehouse party-goers from various urban squat scenes and politically inspired new age travellers. The new agers offered a readymade network of countryside festivals that were hastily adopted by squatters and ravers alike. Prominent among the sound systems operating at this time were Exodus in Luton, Tonka in Brighton, Smokescreen in Sheffield, DiY in Nottingham, Bedlam, Circus Warp, LSDiesel and London's Spiral Tribe. The high point of this free party period came in May 1992 when, with less than 24 hours' notice and little publicity, more than 35,000 people gathered at the Castlemorton Common Festival for five days of partying. This one event was largely responsible for the introduction in 1994 of the Criminal Justice and Public Order Act, effectively leaving the British free party scene for dead. Following this, many of the traveller artists moved away from Britain to Europe, the US, Goa in India, Koh Phangan in Thailand and Australia's East Coast.
In the rest of Europe, due in some part to the inspiration of traveling sound systems from the UK, rave enjoyed a prolonged existence as it continued to expand across the continent. Spiral Tribe, Bedlam and other English sound systems took their cooperative techno ideas to Europe, particularly Eastern Europe, where it was cheaper to live, and audiences were quick to appropriate the free party ideology. It was European Teknival free parties, such as the annual CzechTek event in the Czech Republic, that gave rise to several French, German and Dutch sound systems. Many of these groups found audiences easily and were often centered around squats in cities such as Amsterdam and Berlin. Divergence By 1994 there were a number of techno producers in the UK and Europe building on the Detroit sound, but a number of other underground dance music styles were by then vying for attention. Some drew upon the Detroit techno aesthetic, while others fused components of preceding dance music forms. This led to the appearance (in the UK initially) of inventive new music that sounded far removed from techno. For instance, jungle (drum and bass) demonstrated influences ranging from hip-hop, soul, and reggae to techno and house. With an increasing diversification (and commercialization) of dance music, the collectivist sentiment prominent in the early rave scene diminished, each new faction having its own particular attitude and vision of how dance music (or in certain cases, non-dance music) should evolve. Some examples not already mentioned are trance, industrial techno, breakbeat hardcore, acid techno, and happy hardcore. Less well-known styles related to techno or its subgenres include the primarily Sheffield (UK)-based bleep techno, a regional variant that had some success between 1989 and 1991. According to Muzik magazine, by 1995 the UK techno scene was in decline and dedicated club nights were dwindling. The music had become "too hard, too fast, too male, too drug-oriented, too anally retentive." Despite this, weekly nights at clubs such as Final Frontier (London), House of God (Birmingham), Pure (Edinburgh, whose resident DJ Twitch later founded the more eclectic Optimo), and Bugged Out (Manchester) were still popular. With techno reaching a state of "creative palsy," and with a disproportionate number of underground dance music enthusiasts more interested in the sounds of rave and jungle, in 1995 the future of the UK techno scene looked uncertain as the market for "pure techno" waned. Muzik described the sound of UK techno at this time as "dutiful grovelling at the altar of American techno with a total unwillingness to compromise." By the end of the 1990s, a number of post-techno underground styles had emerged, including ghettotech (a style that combines some of the aesthetics of techno with hip-hop and house music), nortec, glitch, digital hardcore, the so-called no-beat techno, and electroclash. In attempting to sum up the changes since the heyday of Detroit techno, Derrick May has revised his famous quote, stating that "Kraftwerk got off on the third floor and now George Clinton's got Napalm Death in there with him. The elevator's stalled between the pharmacy and the athletic wear store." Commercial exposure While techno and its derivatives only occasionally produce commercially successful mainstream acts—Underworld and Orbital being two better-known examples—the genre has significantly affected many other areas of music.
In an effort to appear relevant, many established artists, for example Madonna and U2, have dabbled with dance music, yet such endeavors have rarely evidenced a genuine understanding or appreciation of techno's origins, with the former proclaiming in January 1996 that "Techno=Death" (Chaplin, Julia & Michel, Sia, "Fire Starters", Spin Magazine, page 40, March 1997, Spin Media LLC). The R&B artist Missy Elliott exposed the popular music audience to the Detroit techno sound when she featured material from Cybotron's "Clear" on her 2005 release "Lose Control"; this resulted in Juan Atkins receiving a Grammy Award nomination for his writing credit. Elliott's 2001 album Miss E... So Addictive also clearly demonstrated the influence of techno-inspired club culture. In the late 1990s the publication of relatively accurate histories by authors Simon Reynolds (Generation Ecstasy, also known as Energy Flash) and Dan Sicko (Techno Rebels), plus mainstream press coverage of the Detroit Electronic Music Festival in the 2000s, helped dispel some of the genre's more dubious mythology. Even the Detroit-based Ford Motor Company eventually became savvy to the mass appeal of techno, noting that "this music was created partly by the pounding clangor of the Motor City's auto factories. It became natural for us to incorporate Detroit techno into our commercials after we discovered that young people are embracing techno." With a marketing campaign targeting under-35s, Ford used "Detroit Techno" as a print ad slogan and chose Model 500's "No UFO's" to underpin its November 2000 MTV television advertisement for the Ford Focus. Antecedents Early use of the term 'Techno' In 1977, Steve Fairnie and Bev Sage formed an electronica band called the Techno Twins in London, England. When Kraftwerk first toured Japan, their music was described as "technopop" by the Japanese press. The Japanese band Yellow Magic Orchestra used the word 'techno' in a number of their works, such as the song "Technopolis" (1979), the album Technodelic (1981), and a rare flexi disc EP, "The Spirit of Techno" (1983). When Yellow Magic Orchestra toured the United States in 1980, they described their own music as technopop, and were written up in Rolling Stone magazine. Around 1980, the members of YMO added synthesizer backing tracks to idol songs such as Ikue Sakakibara's "Robot", and these songs were classified as 'techno kayou' or 'bubblegum techno.' In 1985, Billboard reviewed an album by the Canadian band Skinny Puppy and described the genre as "techno dance". Juan Atkins himself said, "In fact, there were a lot of electronic musicians around when Cybotron started, and I think maybe half of them referred to their music as 'techno.' However, the public really wasn't ready for it until about '85 or '86. It just so happened that Detroit was there when people really got into it." Proto-techno The popularity of Euro disco and Italo disco—referred to as progressive in Detroit—and new romantic synthpop in the Detroit high school party scene from which techno emerged has prompted a number of commentators to try to redefine the origins of techno by incorporating musical precursors to the Detroit sound as part of a wider historical survey of the genre's development (Brewster 2006:343–346). The search for a mythical "first techno record" leads such commentators to consider music from long before the 1988 naming of the genre.
Aside from the artists whose music was popular in the Detroit high school scene ("progressive" disco acts such as Giorgio Moroder, Alexander Robotnick, and Claudio Simonetti and synthpop artists such as Visage, New Order, Depeche Mode, The Human League, and Heaven 17), they point to examples such as "Sharevari" (1981) by A Number of Names, danceable selections from Kraftwerk (1977–83), the earliest compositions by Cybotron (1981), Donna Summer and Giorgio Moroder's "I Feel Love" (1977), Moroder's "From Here to Eternity" (1977), and Manuel Göttsching's "proto-techno masterpiece" E2-E4 (1981). Another example is a record entitled Love in C Minor, released in 1976 by Parisian Euro disco producer Jean-Marc Cerrone, cited as the first so-called "conceptual disco" production and the record from which house, techno, and other underground dance music styles flowed. Yet another example is Yellow Magic Orchestra's work, which has been described as "proto-techno". Around 1983, Sheffield band Cabaret Voltaire began including funk and electronic dance music elements in their sound, and in later years would come to be described as techno. Nitzer Ebb, an Essex band formed in 1982, also showed funk and electronic dance music influences in its sound around this time. The Danish band Laid Back released "White Horse" in 1983 with a similar funky electronica sound. Prehistory Certain electro-disco and European synthpop productions share with techno a dependence on machine-generated dance rhythms, but such comparisons are not without contention. Efforts to reach further into the past in search of earlier antecedents lead to the sequenced electronic music of Raymond Scott, whose "The Rhythm Modulator," "The Bass-Line Generator," and "IBM Probe" are considered early examples of techno-like music. In a review of Scott's Manhattan Research Inc. compilation album, the English newspaper The Independent suggested that "Scott's importance lies mainly in his realization of the rhythmic possibilities of electronic music, which laid the foundation for all electro-pop from disco to techno." In 2008, a tape from the mid-to-late 1960s by Delia Derbyshire, who created the original electronic arrangement of the Doctor Who theme, was found to contain music that sounded remarkably like contemporary electronic dance music. Commenting on the tape, Paul Hartnoll, of the dance group Orbital, described the example as "quite amazing," noting that it sounded not unlike something that "could be coming out next week on Warp Records." Music production practice Stylistic considerations In general, techno is very DJ-friendly, being mainly instrumental (commercial varieties being an exception), and is produced with the intention of being heard in the context of a continuous DJ set, wherein the DJ progresses from one record to the next via a synchronized segue or "mix." Much of the instrumentation in techno emphasizes the role of rhythm over other musical parameters, but the design of synthetic timbres, and the creative use of music production technology in general, are important aspects of the overall aesthetic practice. Unlike other forms of electronic dance music that tend to be produced with synthesizer keyboards, techno does not always strictly adhere to the harmonic practice of Western music, and such strictures are often ignored in favor of timbral manipulation alone. Thus techno inherits from the modernist tradition of the so-called Klangfarbenmelodie, or timbral serialism.
The use of motivic development (though relatively limited) and the employment of conventional musical frameworks is more widely found in commercial techno styles, for example euro-trance, where the template is often an AABA song structure. The main drum part is almost universally in common time (4/4), meaning four quarter-note pulses per bar. In its simplest form, time is marked with kicks (bass drum beats) on each quarter-note pulse, a snare or clap on the second and fourth pulse of the bar, and an open hi-hat sound every second eighth note. This is essentially a drum pattern popularized by disco (or even polka) and is common throughout house and trance music as well. The tempo tends to vary between approximately 120 bpm (quarter note equals 120 pulses per minute) and 150 bpm, depending on the style of techno. Some of the drum programming employed in the original Detroit-based techno made use of syncopation and polyrhythm, yet in many cases the basic disco-type pattern was used as a foundation, with polyrhythmic elaborations added using other drum machine voices. This syncopated feel (funkiness) distinguishes the Detroit strain of techno from other variants. It is a feature that many DJs and producers still use to differentiate their music from commercial forms of techno, the majority of which tend to be devoid of syncopation. Derrick May has summed up the sound as 'Hi-tech Tribalism': something "very spiritual, very bass oriented, and very drum oriented, very percussive. The original techno music was very hi-tech with a very percussive feel... it was extremely, extremely Tribal. It feels like you're in some sort of hi-tech village." Compositional techniques There are many ways to create techno, but the majority will depend upon the use of loop-based step sequencing as a compositional method. Techno musicians, or producers, rather than employing traditional compositional techniques, may work in an improvisatory fashion, often treating the electronic music studio as one large instrument. The collection of devices found in a typical studio will include units that are capable of producing many different sounds and effects. Studio production equipment is generally synchronized using a hardware- or computer-based MIDI sequencer, enabling the producer to combine in one arrangement the sequenced output of many devices. A typical approach to using this type of technology compositionally is to overdub successive layers of material while continuously looping a single measure or sequence of measures. This process will usually continue until a suitable multi-track arrangement has been produced. Once a single loop-based arrangement has been generated, a producer may then focus on developing how the summing of the overdubbed parts will unfold in time, and what the final structure of the piece will be. Some producers achieve this by adding or removing layers of material at appropriate points in the mix. Quite often, this is achieved by physically manipulating a mixer, sequencer, effects, dynamic processing, equalization, and filtering while recording to a multi-track device. Other producers achieve similar results by using the automation features of computer-based digital audio workstations. Techno can consist of little more than cleverly programmed rhythmic sequences and looped motifs combined with signal processing of one variety or another, frequency filtering being a commonly used process.
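To make the pattern and workflow described above concrete, the following is a minimal step-sequencer sketch in Python. It illustrates the grid logic only and is not any particular producer's method or tool: one bar of 4/4 is divided into sixteen sixteenth-note steps, with kicks on every quarter-note pulse, a clap on the second and fourth beats, and an open hi-hat on the off-beat eighth notes.

# One bar of the basic techno pattern described above, as a sixteen-step
# grid. Each 1 marks a drum hit on that sixteenth-note step.
# Illustrative sketch only.

STEPS_PER_BAR = 16  # sixteenth notes in one bar of 4/4

PATTERN = {
    #             1 . . . 2 . . . 3 . . . 4 . . .
    "kick":     [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],  # every quarter-note pulse
    "clap":     [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],  # beats two and four
    "open_hat": [0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0],  # off-beat eighth notes
}

def step_duration(bpm):
    """Seconds per sixteenth-note step at a given tempo."""
    return 60.0 / bpm / 4  # a quarter note spans four sixteenth-note steps

def render(pattern, bars=1):
    """Print the looped grid, one row per voice, the way a sequencer displays it."""
    for voice, steps in pattern.items():
        row = "".join("x" if hit else "." for hit in steps * bars)
        print(voice.rjust(8), "|" + row + "|")

if __name__ == "__main__":
    print("each step lasts", round(step_duration(130), 4), "s at 130 bpm")
    render(PATTERN, bars=2)  # loop the single bar, as a producer would

Loop-based arrangement, as described under Compositional techniques, then amounts to little more than adding or removing rows from such a grid while the bar repeats.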
A more idiosyncratic approach to production is evident in the music of artists such as Twerk and Autechre, where aspects of algorithmic composition are employed in the generation of material. Retro technology Instruments used by the original techno producers based in Detroit, many of which are now highly sought after on the retro music technology market, include classic drum machines like the Roland TR-808 and TR-909, devices such as the Roland TB-303 bass line generator, and synthesizers such as the Roland SH-101, Kawai KC10, Yamaha DX7, and Yamaha DX100 (as heard on Derrick May's seminal 1987 techno release "Nude Photo"). Much of the early music sequencing was executed via MIDI (though neither the TR-808 nor the TB-303 had MIDI, only DIN sync) using hardware sequencers such as the Korg SQD1 and Roland MC-50, and the limited amount of sampling featured in this early style was accomplished using an Akai S900. The TR-808 and TR-909 drum machines have since achieved legendary status, a fact that is now reflected in the prices sought for used devices. During the 1980s, the 808 became the staple beat machine in hip hop production, while the 909 found its home in house music and techno. It was "the pioneers of Detroit techno [who] were making the 909 the rhythmic basis of their sound, and setting the stage for the rise of Roland's vintage Rhythm Composer." By May 1996, the UK music technology magazine Sound on Sound was reporting that the popularity of the 808 had started to decline, with the rarer TR-909 taking its place as "the dance floor drum machine to use." This is thought to have arisen for a number of reasons: the 909 gives more control over the drum sounds, has better programming and includes MIDI as standard. Sound on Sound reported that the 909 was selling for between £900 and £1100 and noted that the 808 was still collectible, but maximum prices had peaked at about £700 to £800. Despite this fascination with retro music technology, according to Derrick May "there is no recipe, there is no keyboard or drum machine which makes the best techno, or whatever you want to call it. There never has been. It was down to the preferences of a few guys. The 808 was our preference. We were using Yamaha drum machines, different percussion machines, whatever." Emulation In the latter half of the 1990s the demand for vintage drum machines and synthesizers motivated a number of software companies to produce computer-based emulators. One of the most notable was the ReBirth RB-338, produced by the Swedish company Propellerhead and originally released in May 1997. Version one of the software featured two TB-303s and a TR-808 only, but the release of version two saw the inclusion of a TR-909. A Sound on Sound review of the RB-338 V2 in November 1998 noted that ReBirth had been called "the ultimate techno software package" and mentions that it was "a considerable software success story of 1997". In America, Keyboard Magazine asserted that ReBirth had "opened up a whole new paradigm: modeled analog synthesizer tones, percussion synthesis, pattern-based sequencing, all integrated in one piece of software". Despite the success of ReBirth RB-338, it was officially taken out of production in September 2005. Propellerhead then made it freely available for download from a website called the "ReBirth Museum". The site also features extensive information about the software's history and development.
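Pattern-based sequencing of the kind ReBirth modeled, and the MIDI workflow described under Retro technology, can be approximated today in a few lines of code. The sketch below is illustrative only: it assumes the third-party Python library mido is installed, and the General MIDI drum note numbers (kick 36, clap 39, open hi-hat 46), tempo, and file name are arbitrary choices, not a recreation of any specific hardware.

# Writes one bar of the classic pattern to a standard MIDI file.
# Requires the third-party mido library (pip install mido).
import mido

BPM = 130
TPB = 480                     # ticks per quarter note
SIXTEENTH = TPB // 4          # ticks per sixteenth-note step
DRUMS = {"kick": 36, "clap": 39, "open_hat": 46}  # General MIDI drum notes
PATTERN = {
    "kick": [0, 4, 8, 12],    # steps carrying a hit (16 steps per bar)
    "clap": [4, 12],
    "open_hat": [2, 6, 10, 14],
}

# Collect (absolute_tick, message) events, then convert to delta times,
# which is how Standard MIDI Files store timing.
events = []
for voice, steps in PATTERN.items():
    note = DRUMS[voice]
    for step in steps:
        tick = step * SIXTEENTH
        events.append((tick, mido.Message("note_on", channel=9, note=note, velocity=100)))
        events.append((tick + SIXTEENTH, mido.Message("note_off", channel=9, note=note)))
events.sort(key=lambda pair: pair[0])

mid = mido.MidiFile(ticks_per_beat=TPB)
track = mido.MidiTrack()
mid.tracks.append(track)
track.append(mido.MetaMessage("set_tempo", tempo=mido.bpm2tempo(BPM)))
previous = 0
for tick, message in events:
    message.time = tick - previous  # delta ticks since the previous event
    previous = tick
    track.append(message)
mid.save("one_bar_techno.mid")

Channel 10 (index 9 here) is the General MIDI drum channel, so any GM-compatible synthesizer will map the three note numbers to a kick, a clap and an open hi-hat.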
In 2001, Propellerhead released Reason V1, a software-based electronic music studio comprising a 14-input automated digital mixer, a 99-note polyphonic 'analogue' synth, a classic Roland-style drum machine, a sample-playback unit, an analogue-style step sequencer, a loop player, a multitrack sequencer, eight effects processors, and over 500 MB of synthesizer patches and samples. With this release Propellerhead were credited with "creating a buzz that only happens when a product has really tapped into the zeitgeist, and may just be the one that many [were] waiting for." As of 2018, Reason is at version 10.

Technological advances
As computer technology became more accessible and music software advanced, interacting with music production technology was possible using means that bore little relationship to traditional musical performance practices: for instance, laptop performance (laptronica) and live coding (Collins, N. (2003a), 'Generative Music and Laptop Performance', Contemporary Music Review, Vol. 22, Issue 4, London: Routledge, pp. 67–79). By the mid-2000s a number of software-based virtual studio environments had emerged, with products such as Propellerhead's Reason and Ableton Live finding popular appeal. These software-based music production tools offer viable and cost-effective alternatives to typical hardware-based production studios and, thanks to advances in microprocessor technology, can create high-quality music using little more than a single laptop computer. Such advances democratized music creation and led to a massive increase in the amount of home-produced music available to the general public via the internet. Artists can now also individuate their sound by creating personalized software synthesizers, effects modules, and various composition environments. Devices that once existed exclusively in the hardware domain can easily have virtual counterparts. Some of the more popular software tools for achieving such ends are commercial releases such as Max/MSP and Reaktor and freeware packages such as Pure Data, SuperCollider, and ChucK. In some sense, as a result of technological innovation, the DIY mentality that was once a core part of dance music culture (Rietveld, H. (1998), 'Repetitive Beats: Free Parties and the Politics of Contemporary DIY Dance Culture in Britain', in George McKay (ed.), DIY Culture: Party and Protest in Nineties Britain, pp. 243–67, London: Verso) is seeing a resurgence (Gillmor, D., 'Technology feeds grassroots media', BBC news report, 9 March 2006).

Notable techno venues
In Germany, noted techno clubs of the 1990s include Tresor and E-Werk in Berlin, Omen and Dorian Gray in Frankfurt, Ultraschall and in Munich, as well as Stammheim in Kassel. In 2007, Berghain was cited as "possibly the current world capital of techno, much as E-Werk or Tresor were in their respective heydays". In the 2010s, aside from Berlin, Germany continued to have a thriving techno scene with clubs such as Gewölbe in Cologne, Institut für Zukunft in Leipzig, MMA Club and Blitz Club in Munich, Die Rakete in Nuremberg, and Robert Johnson in Offenbach am Main. In the United Kingdom, Glasgow's Sub Club has been associated with techno since the early 1990s, and clubs such as London's Fabric and Egg London have become known for supporting techno. In the 2010s, a techno scene also emerged in Georgia, with Bassiani in Tbilisi being the most notable venue.

See also
Detroit Electronic Music Archive

Bibliography
Anz, P. & Walder, P. (eds.), Techno, Hamburg: Rowohlt, 1999.
Barr, T., Techno: The Rough Guide, Rough Guides, 2000.
Brewster, B. & Broughton, F., Last Night a DJ Saved My Life: The History of the Disc Jockey, Avalon Travel Publishing, 2006.
Butler, M.J., Unlocking the Groove: Rhythm, Meter, and Musical Design in Electronic Dance Music, Indiana University Press, 2006.
Cannon, S. & Dauncey, H., Popular Music in France from Chanson to Techno: Culture, Identity and Society, Ashgate, 2003.
Collin, M., Altered State: The Story of Ecstasy Culture and Acid House, Serpent's Tail, 1998.
Cosgrove, S. (a), "Seventh City Techno", The Face (97), p. 88, May 1988 (ISSN 0263-1210).
Cosgrove, S. (b), Techno! The New Dance Sound of Detroit liner notes, 10 Records Ltd. (UK), 1988 (LP: DIXG 75; CD: DIXCD 75).
Cox, C. & Warner, D. (eds.), Audio Culture: Readings in Modern Music, Continuum International Publishing Group Ltd., 2004.
Fritz, J., Rave Culture: An Insider's Overview, Smallfry Press, 2000.
Eshun, K., More Brilliant Than the Sun: Adventures in Sonic Fiction, Quartet Books, 1998.
Nelson, A., Tu, L.T.N. & Headlam Hines, A. (eds.), TechniColor: Race, Technology and Everyday Life, New York University Press, 2001.
Nye, S., "Minimal Understandings: The Berlin Decade, The Minimal Continuum, and Debates on the Legacy of German Techno", Journal of Popular Music Studies 25, no. 2 (2013): 154–84.
Pesch, M. & Weisbeck, M. (ed.), Techno Style: The Album Cover Art, Edition Olms, 5th rev. ed., 1998.
Rietveld, H.C., This is Our House: House Music, Cultural Spaces and Technologies, Ashgate Publishing, Aldershot, 1998.
Reynolds, S., Energy Flash: A Journey Through Rave Music and Dance Culture, Pan Macmillan, 1998.
Reynolds, S., Generation Ecstasy: Into the World of Techno and Rave Culture, Routledge, New York, 1999; Soft Skull Press, 2012.
Reynolds, S., Energy Flash: A Journey Through Rave Music and Dance Culture, Faber and Faber, 2013.
Savage, J., The Hacienda Must Be Built, International Music Publications, 1992.
Sicko, D., Techno Rebels: The Renegades of Electronic Funk, Billboard Books, 1999.
Sicko, D., Techno Rebels: The Renegades of Electronic Funk, 2nd ed., Wayne State University Press, 2010.
St John, G. (ed.), Rave Culture and Religion, New York: Routledge, 2004.
St John, G. (ed.), FreeNRG: Notes From the Edge of the Dance Floor, Common Ground, Melbourne, 2001.
St John, G., Technomad: Global Raving Countercultures, London: Equinox, 2009.
Toop, D., Ocean of Sound, Serpent's Tail, 2001 [new edition].
Watten, B., The Constructivist Moment: From Material Text to Cultural Poetics, Wesleyan University Press, 2003.

Filmography
High Tech Soul – Catalog No.: PLX-029; Label: Plexifilm; Released: September 19, 2006; Director: Gary Bredow; Length: 64 minutes.
Paris/Berlin: 20 Years of Underground Techno – Label: Les Films du Garage; Released: 2012; Director: Amélie Ravalec; Length: 52 minutes.
We Call It Techno! – A documentary about Germany's early techno scene and culture – Label: Sense Music & Media, Berlin, DE; Released: June 2008; Directors: Maren Sextro & Holger Wick.
Tresor Berlin: The Vault and the Electronic Frontier – Label: Pyramids of London Films; Released: 2004; Director: Michael Andrawis; Length: 62 minutes.
Technomania – Released: 1996 (screened at NowHere, an exhibition held at Louisiana Museum of Modern Art, Denmark, between May 15 and September 8, 1996); Director: Franz A. Pandal; Length: 52 minutes.
– Label: Les Films à Lou; Released: 1996; Director: Dominique Deluze; Length: 63 minutes.

References

External links
Techno Live Sets – The #1 resource for Techno sets
"From the Autobahn to I-94: The Origins of Detroit Techno and Chicago House" – reminiscences in 2005 by techno and house innovators
Sounds Like Techno – online historical documentary produced by the Australian Broadcasting Corporation (ABC)
Techno from past years – Oldie but goldie classic techno sets

Culture of Detroit Electronic dance music genres Subcultures
12840831
https://en.wikipedia.org/wiki/Electronic%20Case%20Filing%20System
Electronic Case Filing System
Electronic Case Filing System (ECFS) is an automated system developed in Tarrant County, Texas, that enables law enforcement agencies, the criminal district attorney, county criminal courts, criminal district courts, and the defense bar to process and exchange information about criminal offenses. ECFS software does not work on the Apple Mac platform.

History
ECFS was conceived in November 2002 in Tarrant County, Texas. Initially, the purpose of the system was to enable law enforcement agencies to submit offense reports to the criminal district attorney's office for possible prosecution. In July 2003, the Criminal District Attorney's Office accepted the first electronic case filing via ECFS. Since that time, more than 100,000 cases have been filed in ECFS by the 47 law enforcement agencies located in Tarrant County, Texas. ECFS was expanded in June 2004 to incorporate the grand jury function, which is able to return indictments to the criminal district courts on the same day that a true bill is decided. In January 2005, ECFS was extended to enable the judges and their court staff to effectively manage the docket (case load) for each of the nine (9) criminal district courts. Since the implementation of ECFS, Tarrant County has been able to control the jail population, despite a significant increase in the number of cases being filed. In August 2005, ECFS was extended to enable members of the Tarrant County Criminal Defense Lawyers Association to browse and view defendant, offense, and evidence information via ECFS. Through this process, defense attorneys are no longer required to visit the Criminal District Attorney's Office to view and copy files. Since January 2006, the Criminal District Attorney's Office has been completely paperless, and all offense reports are submitted via ECFS and made available to law enforcement agencies, county and district courts, and defense attorneys. In July 2006, ECFS was extended to allow criminal defendants to be magistrated electronically. This process also allows the Office of Attorney Appointments to be notified that the defendant has requested that defense counsel be appointed, which triggers a business process that captures financial information, facilitates a determination of indigency, and, when appropriate, appoints defense counsel. In 2005, the Tarrant County Criminal Defense Lawyers Association (TCCDLA), a non-profit charitable association, implemented its own software design to enable all attorneys, whether members of TCCDLA or not, to access the ECFS system. The software only works on PCs and will not work on Apple's Mac platform. The process enables an attorney to access his or her case files from any computer on the World Wide Web, and is secure and reliable. TCCDLA has continuously revised its software process to enable access 24/7, with little downtime. TCCDLA also installed computers in the Tarrant County Justice Center that allow subscribers to access files while in the courthouse. Storm's Edge Technologies is the computer software company that exclusively provides support and design of the TCCDLA ECFS access system.
Project contributors
Criminal District Attorney's Office:
Honorable Tim Curry, Tarrant County Criminal District Attorney
Alan Levy, Tarrant County Assistant Criminal District Attorney
David Montague, Tarrant County Assistant Criminal District Attorney
Betty Arvin, Tarrant County Assistant Criminal District Attorney
Kurt Stallings, Tarrant County Assistant Criminal District Attorney
Richard Alpert, Tarrant County Assistant Criminal District Attorney
Mark Thielman, Tarrant County Assistant Criminal District Attorney
Miles Brissette, Tarrant County Assistant Criminal District Attorney
Tracey Kapsidelis, Tarrant County Assistant Criminal District Attorney
John Cramer, Tarrant County Assistant Criminal District Attorney
Information technology:
Steve Smith, Tarrant County Chief Information Officer
Mark O'Neal, Tarrant County Enterprise Architect
Scott Hill, Tarrant County Customer Service and Support Manager
Keith Hughes, Tarrant County Quality Assurance Manager
Steve Harrelson, Tarrant County Application Development
Jan DeBee, Tarrant County Application Development
Martin McCreary, Tarrant County Application Development
Phil Blankenship, Tarrant County Application Development
Divya Gupta, Tarrant County Quality Assurance
Bing Chen, Tarrant County Quality Assurance
Dick Renn, Tarrant County Application Support
Mozelle Duckett, Tarrant County Application Support
Tarrant County Criminal Defense Lawyers Association:
William H. "Bill" Ray, President 2008, overseer of Defense Lawyer application to ECFS
Storm's Edge Technologies:
Dan Fitzgerald, President Storm's Edge Technologies, chief software designer and website designer for TCCDLA
Pilot Agency:
Forest Hill Police Department - Lt. Chris Hebert and Sgt. Dan Dennis

External links
Tarrant County Criminal Defense Lawyers Association

References

Business software Information systems Management systems Tarrant County, Texas
9488407
https://en.wikipedia.org/wiki/Home%20server
Home server
A home server is a computing server located in a private residence, providing services to other devices inside or outside the household through a home network or the Internet. Such services may include file and printer serving, media center serving, home automation control, web serving (on the network or Internet), web caching, file sharing and synchronization, video surveillance and digital video recording, calendar and contact sharing and synchronization, account authentication, and backup services. Because of the relatively low number of computers on a typical home network, a home server commonly does not require significant computing power. Home servers can be implemented do-it-yourself style with a re-purposed older computer or a plug computer; pre-configured commercial home server appliances are also available. An uninterruptible power supply is sometimes used in case of power outages that could otherwise corrupt data.

Services provided by home servers

Administration and configuration
Home servers often run headless, and can be administered remotely through a command shell, or graphically through a remote desktop system such as RDP, VNC, Webmin, Apple Remote Desktop, or many others. Some home server operating systems (such as Windows Home Server) include a consumer-focused graphical user interface (GUI) for setup and configuration that is available on home computers on the home network (and remotely over the Internet via remote access). Others simply enable users to use native operating system tools for configuration.

Centralized storage
Home servers often act as network-attached storage (NAS), providing the major benefit that all users' files can be centrally and securely stored, with flexible permissions applied to them. Such files can be easily accessed from any other system on the network, provided the correct credentials are supplied. This also applies to shared printers. Such files can also be shared over the Internet to be accessible from anywhere in the world using remote access. Servers running Unix or Linux with the free Samba suite (or certain Windows Server products - Windows Home Server excluded) can provide domain control, custom logon scripts, and roaming profiles to users of certain versions of Windows. This allows a user to log on from any machine in the domain and have access to his or her "My Documents" and personalized Windows and application preferences; multiple accounts on each computer in the home are not needed.

Media serving
Home servers are often used to serve multimedia content, including photos, music, and video, to other devices in the household (and even to the Internet; see Space shifting, Tonido and Orb). Using standard protocols such as DLNA or proprietary systems such as iTunes, users can access their media stored on the home server from any room in the house. Windows XP Media Center Edition, Windows Vista, and Windows 7 can act as a home server, supporting a particular type of media serving that streams the interactive user experience to Media Center Extenders, including the Xbox 360. Windows Home Server supports media streaming to the Xbox 360 and other DLNA-based media receivers via the built-in Windows Media Connect technology. Some Windows Home Server device manufacturers, such as HP, extend this functionality with a full DLNA implementation such as the PacketVideo TwonkyMedia server. There are many open-source and fully functional programs for media serving available for Linux.
LinuxMCE is one example, which allows other devices to boot off a hard drive image on the server, allowing them to become appliances such as set-top boxes. Asterisk, Xine, MythTV (another media serving solution), VideoLAN, SlimServer, DLNA, and many other open-source projects are fully integrated for a seamless home theater/automation/telephony experience. On an Apple Macintosh server, options include iTunes, PS3 Media Server, and Elgato. Additionally, for Macs directly connected to TVs, Boxee can act as a full-featured media center interface. Servers are typically always on, so the addition of a TV or radio tuner allows recording to be scheduled at any time. Some home servers provide remote access to media and entertainment content.

Remote access
A home server can be used to provide remote access into the home from devices on the Internet, using remote desktop software and other remote administration software. For example, Windows Home Server provides remote access to files stored on the home server via a web interface, as well as remote access to Remote Desktop sessions on PCs in the house. Similarly, Tonido provides direct access via a web browser from the Internet without requiring any port forwarding or other setup. Some enthusiasts often use VPN technologies as well. On a Linux server, two popular tools are (among many) VNC and Webmin. VNC allows clients to remotely view a server's GUI desktop as if the user were physically sitting in front of the server. A GUI need not be running on the server console for this to occur; there can be multiple 'virtual' desktop environments open at the same time. Webmin allows users to control many aspects of server configuration and maintenance from a simple web interface. Both can be configured to be accessed from anywhere on the Internet. Servers can also be accessed remotely using the command line-based Telnet and SSH protocols.

Web serving
Some users choose to run a web server in order to share files easily and publicly (or privately, on the home network). Others set up web pages and serve them straight from their home, although this may be in violation of some ISPs' terms of service. Sometimes these web servers are run on a nonstandard port in order to avoid the ISP's port blocking. Example web servers used on home servers include Apache and IIS. Many other web servers are available; see Comparison of lightweight web servers and Comparison of web servers.

Web proxy
Some networks have an HTTP proxy which can be used to speed up web access when multiple users visit the same websites, and to get past blocking software while the owner is using the network of some institution that might block certain sites. Public proxies are often slow and unreliable, so it can be worth the trouble of setting up one's own private proxy. Some proxies can be configured to block websites on the local network if set up as a transparent proxy.

E-mail
Many home servers also run e-mail servers that handle e-mail for the owner's domain name. The advantages are a much bigger mailbox and maximum message size than most commercial e-mail services allow. Access to the server, since it is on the local network, is much faster than using an external service. This also increases security, as e-mails do not reside on an off-site server.
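As a concrete illustration of the remote access and web serving scenarios described above, the following is a minimal sketch for a Linux-based home server; the hostname, port numbers, and directory are placeholders, and the exact commands depend on the distribution and the services installed:

# Shell access from anywhere, with sshd listening on a nonstandard port.
ssh -p 2222 user@home.example.org
# Tunnel a VNC desktop session through SSH rather than exposing it directly.
ssh -p 2222 -L 5901:localhost:5901 user@home.example.org
# Serve a directory of files over HTTP on a nonstandard port.
python3 -m http.server 8080 --directory /srv/share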
BitTorrent
Home servers are ideal for utilizing the BitTorrent protocol for downloading and seeding files, as some torrents can take days or even weeks to complete and perform better on an uninterrupted connection. There are many text-based clients such as rTorrent and web-based ones such as TorrentFlux and Tonido available for this purpose. BitTorrent also makes it easier for those with limited bandwidth to distribute large files over the Internet.

Gopher
An unusual service is the Gopher protocol, a hypertext document retrieval protocol which pre-dated the World Wide Web and was popular in the early 1990s. Many of the remaining Gopher servers are run off home servers utilizing PyGopherd and the Bucktooth gopher server.

Home automation
Home automation requires a device in the home that is available 24/7. Often such home automation controllers are run on a home server.

Security monitoring
Relatively low-cost CCTV DVR solutions are available that allow recording of video cameras to a home server for security purposes. The video can then be viewed on PCs or other devices in the house. A series of cheap USB-based webcams can be connected to a home server as a makeshift CCTV system. Optionally, these images and video streams can be made available over the Internet using standard protocols.

Family applications
Home servers can act as a host to family-oriented applications such as a family calendar, to-do lists, and message boards.

IRC and instant messaging
Because a server is always on, an IRC client or IM client running on it will be highly available to the Internet. This way, the chat client will be able to record activity that occurs even while the user is not at the computer, e.g. asleep or at work or school. Textual clients such as Irssi and tmsnc can be detached using GNU Screen, for example, and graphical clients such as Pidgin can be detached using xmove; a short example follows the Operating systems section below. Quassel provides a specific version for this kind of use. Home servers can also be used to run personal XMPP servers and IRC servers, as these protocols can support a large number of users on very little bandwidth.

Online gaming
Some multiplayer games such as Continuum, Tremulous, Minecraft, and Doom have server software available which users may download and use to run their own private game server. Some of these servers are password protected, so only a selected group of people such as clan members or whitelisted players can gain access to the server. Others are open for public use and may move to colocation or other forms of paid hosting if they gain a large number of players.

Federated social networks
Home servers can be used to host distributed federated social networks like diaspora* and GNU Social. Federation protocols like ActivityPub allow many small home servers to interact in a meaningful way and give the perception of being on a large traditional social network. Federation is not just limited to social networks: many innovative new free software web services are being developed that allow people to host their own videos, photos, blogs, etc. and still participate in the larger federated networks.

Third-party platform
Home servers often are platforms that enable third-party products to be built and added over time. For example, Windows Home Server provides a Software Development Kit. Similarly, Tonido provides an application platform that can be extended by writing new applications using its SDK.

Operating systems
Home servers run many different operating systems. Enthusiasts who build their own home servers can use whatever OS is conveniently available or familiar to them, such as Linux, Microsoft Windows, BSD, Solaris or Plan 9 from Bell Labs.
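As noted in the IRC and instant messaging section above, a chat client can be kept running in a detached session. A minimal sketch using irssi follows; the session name "irc" is arbitrary:

# Start irssi inside a detached GNU Screen session on the server...
screen -dmS irc irssi
# ...then reattach to it later from any SSH login.
screen -r irc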
Hardware
Single-board computers are increasingly being used to power home servers, with many of them being ARM devices. Old desktop and laptop computers can also be re-purposed as home servers. Mobile phones are typically just as powerful as ARM-based single-board computers; if mobile phones come to run the Linux operating system, self-hosting might move to mobile devices, with each person's data and services being served from their own mobile phone.

See also
Server definitions
Server (computing)
NAS (Network-Attached Storage)
File server
Print server
Media server
Operating systems
BSD
UNIX
Various Linux distributions
Mac OS X Server
Solaris
Windows Home Server (end of support; Microsoft recommends upgrading to Windows Server Essentials) and other variants of Microsoft Windows
Plan 9 from Bell Labs - The successor to Unix
Products
HP MediaSmart Server
Technologies
Client–server model
Dynamic DNS
File server
Home network
Network-attached storage (NAS)
Residential gateway
Media serving software
Front Row (software) - Mac OS X
LinuxMCE
MythTV
Plex Media Server
Kodi
Jellyfin
Server software
Comparison of web servers
List of mail server software
List of FTP server software
Samba (software)
RealVNC
Tonido
Home networking
DOCSIS
G.hn
HomePNA
Power line communication, HomePlug Powerline Alliance
VDSL, VDSL2
Wireless LAN, IEEE 802.11

References

Server Servers (computing)
291615
https://en.wikipedia.org/wiki/Cross%20compiler
Cross compiler
A cross compiler is a compiler capable of creating executable code for a platform other than the one on which the compiler is running. For example, a compiler that runs on a PC but generates code that runs on an Android smartphone is a cross compiler. A cross compiler is necessary to compile code for multiple platforms from one development host. Direct compilation on the target platform might be infeasible, for example on embedded systems with limited computing resources. Cross compilers are distinct from source-to-source compilers: a cross compiler is for cross-platform generation of machine code, while a source-to-source compiler translates from one programming language to another as text. Both are programming tools.

Use
The fundamental use of a cross compiler is to separate the build environment from the target environment. This is useful in several situations:
Embedded computers where a device has extremely limited resources. For example, a microwave oven will have an extremely small computer to read its keypad and door sensor, provide output to a digital display and speaker, and control the machinery for cooking food. This computer is generally not powerful enough to run a compiler, a file system, or a development environment.
Compiling for multiple machines. For example, a company may wish to support several different versions of an operating system or to support several different operating systems. By using a cross compiler, a single build environment can be set up to compile for each of these targets.
Compiling on a server farm. Similar to compiling for multiple machines, a complicated build that involves many compile operations can be executed across any machine that is free, regardless of its underlying hardware or the operating system version that it is running.
Bootstrapping to a new platform. When developing software for a new platform, or the emulator of a future platform, one uses a cross compiler to compile necessary tools such as the operating system and a native compiler.
Compiling native code for emulators of older, now-obsolete platforms like the Commodore 64 or Apple II, by enthusiasts who use cross compilers that run on a current platform (such as Aztec C's MS-DOS 6502 cross compilers running under Windows XP).
Use of virtual machines (such as Java's JVM) resolves some of the reasons for which cross compilers were developed. The virtual machine paradigm allows the same compiler output to be used across multiple target systems, although this is not always ideal because virtual machines are often slower and the compiled program can only be run on computers with that virtual machine. Typically the hardware architecture differs (e.g. compiling a program destined for the MIPS architecture on an x86 computer), but cross-compilation is also applicable when only the operating system environment differs, as when compiling a FreeBSD program under Linux, or even just the system library, as when compiling programs with uClibc on a glibc host.

Canadian Cross
The Canadian Cross is a technique for building cross compilers for other machines, where the original machine is much slower or less convenient than the target. Given three machines A, B, and C, one uses machine A (e.g. running Windows XP on an IA-32 processor) to build a cross compiler that runs on machine B (e.g. running Mac OS X on an x86-64 processor) to create executables for machine C (e.g. running Android on an ARM processor).
The practical advantage in this example is that machine A is slow but has a proprietary compiler, while machine B is fast but has no compiler at all, and machine C is impractically slow to be used for compilation. When using the Canadian Cross with GCC, as in this example, there may be four compilers involved:
The proprietary native compiler for machine A (1) (e.g. a compiler from Microsoft Visual Studio) is used to build the GCC native compiler for machine A (2).
The GCC native compiler for machine A (2) is used to build the GCC cross compiler from machine A to machine B (3).
The GCC cross compiler from machine A to machine B (3) is used to build the GCC cross compiler from machine B to machine C (4).
The end-result cross compiler (4) will not be able to run on build machine A; instead it runs on machine B to compile an application into executable code that is then copied to machine C and executed there. For instance, NetBSD provides a POSIX Unix shell script named build.sh which will first build its own toolchain with the host's compiler; this, in turn, will be used to build the cross compiler which will be used to build the whole system. The term Canadian Cross came about because, at the time these issues were under discussion, Canada had three national political parties.

Timeline of early cross compilers
1979 – ALGOL 68C generated ZCODE; this aided porting the compiler and other ALGOL 68 applications to alternate platforms. Compiling the ALGOL 68C compiler required about 120 KB of memory, so on the Z80, with its 64 KB of memory, the compiler could not be compiled natively; for the Z80 the compiler itself had to be cross compiled from the larger CAP capability computer or an IBM System/370 mainframe.

GCC and cross compilation
GCC, a free software collection of compilers, can be set up to cross compile. It supports many platforms and languages. GCC requires that a compiled copy of binutils be available for each targeted platform. Especially important is the GNU Assembler. Therefore, binutils first has to be compiled correctly with the switch --target=some-target sent to the configure script. GCC also has to be configured with the same --target option. GCC can then be run normally provided that the tools which binutils creates are available in the path, which can be done using the following (on UNIX-like operating systems with bash):
PATH=/path/to/binutils/bin:${PATH} make
Cross-compiling GCC requires that a portion of the target platform's C standard library be available on the host platform. The programmer may choose to compile the full C library, but this choice can be unreliable. The alternative is to use newlib, which is a small C library containing only the most essential components required to compile C source code. The GNU autotools packages (i.e. autoconf, automake, and libtool) use the notion of a build platform, a host platform, and a target platform. The build platform is where the compiler is actually compiled; in most cases, build should be left undefined (it will default from host). The host platform is where the output artifacts from the compiler will be executed, whether the output is another compiler or not. The target platform is used when cross-compiling cross compilers; it represents what type of object code the package itself will produce; otherwise the target platform setting is irrelevant.
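As a minimal sketch of the configure-time options just described: the arm-none-eabi triplet and the /opt/cross prefix are illustrative only, each configure is normally run in its own empty build directory, and a real toolchain build involves further steps (a C library, target headers, and often multiple GCC passes) that are omitted here:

# First build binutils for the target, then a cross GCC that uses those tools.
../binutils/configure --target=arm-none-eabi --prefix=/opt/cross
make && make install
../gcc/configure --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --target=arm-none-eabi --prefix=/opt/cross
make && make install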
For example, consider cross-compiling a video game that will run on a Dreamcast. The machine where the game is compiled is the build platform, while the Dreamcast is the host platform. The names host and target are relative to the compiler being used, and shift like son and grandson. Another method popularly used by embedded Linux developers involves the combination of GCC compilers with specialized sandboxes like Scratchbox, scratchbox2, or PRoot. These tools create a "chrooted" sandbox where the programmer can build up the necessary tools, libc, and libraries without having to set extra paths. Facilities are also provided to "deceive" the runtime so that it "believes" it is actually running on the intended target CPU (such as an ARM architecture); this allows configuration scripts and the like to run without error. Scratchbox runs more slowly by comparison to "non-chrooted" methods, and most tools that are on the host must be moved into Scratchbox to function.

Manx Aztec C cross compilers
Manx Software Systems, of Shrewsbury, New Jersey, produced C compilers beginning in the 1980s targeted at professional developers for a variety of platforms up to and including PCs and Macs. Manx's Aztec C programming language was available for a variety of platforms including MS-DOS, Apple II DOS 3.3 and ProDOS, Commodore 64, Macintosh 68XXX and Amiga. From the 1980s and continuing throughout the 1990s until Manx Software Systems disappeared, the MS-DOS version of Aztec C was offered both as a native-mode compiler and as a cross compiler for other platforms with different processors, including the Commodore 64 and Apple II. Internet distributions still exist for Aztec C, including their MS-DOS-based cross compilers, and they are still in use today. Manx's Aztec C86, their native-mode 8086 MS-DOS compiler, was also a cross compiler. Although it did not compile code for a different processor like their Aztec C65 6502 cross compilers for the Commodore 64 and Apple II, it created binary executables for then-legacy operating systems for the 16-bit 8086 family of processors. When the IBM PC was first introduced it was available with a choice of operating systems, CP/M-86 and PC DOS being two of them. Aztec C86 was provided with link libraries for generating code for both IBM PC operating systems. Throughout the 1980s, later versions of Aztec C86 (3.xx, 4.xx and 5.xx) added support for MS-DOS "transitory" versions 1 and 2, which were less robust than the "baseline" MS-DOS version 3 and later that Aztec C86 targeted until its demise. Finally, Aztec C86 provided C language developers with the ability to produce ROM-able "HEX" code which could then be transferred using a ROM burner directly to an 8086-based processor. Paravirtualization may be more common today, but the practice of creating low-level ROM code was relatively more common during those years, when device driver development was often done by application programmers for individual applications and new devices amounted to a cottage industry. It was not uncommon for application programmers to interface directly with hardware without support from the manufacturer. This practice was similar to embedded systems development today. Thomas Fenwick and James Goodnow II were the two principal developers of Aztec-C. Fenwick later became notable as the author of the Microsoft Windows CE kernel, or NK ("New Kernel") as it was then called.

Microsoft C cross compilers

Early history – 1980s
Microsoft C (MSC) has a shorter history than other C compilers, dating back to the 1980s.
The first Microsoft C compilers were made by the same company that made Lattice C and were rebranded by Microsoft as its own, until MSC 4 was released, the first version that Microsoft produced itself. In 1987, many developers started switching to Microsoft C, and many more would follow throughout the development of Microsoft Windows to its present state. Products like Clipper and later Clarion emerged that offered easy database application development by using cross-language techniques, allowing part of their programs to be compiled with Microsoft C. Borland C (a California company) was available for purchase years before Microsoft released its first C product. Long before Borland, BSD Unix (from Berkeley) had obtained C from the authors of the C language, Kernighan and Ritchie, who developed it while working for AT&T (Bell Labs). K&R's original need was not only an elegant second-level parsed syntax to replace the first-level parsed syntax of assembly: C was designed so that only a minimal amount of assembly had to be written to support each platform (the original design goal of C was the ability to cross compile using C with the least support code per platform, which they needed). Early C also related directly to assembly code wherever it was not platform-dependent. Today's C (more so C++) is no longer compatible with that model, and the underlying assembly code can be extremely different from what would be written by hand on a given platform (on Linux, library calls are sometimes replaced and detoured by distributor choices). Today's C is a third- or fourth-level language which is used in the old way, like a second-level language.

1987
C programs had long been linked with modules written in assembly language. Most C compilers (even current compilers) offer an assembly language pass (that can be tweaked for efficiency then linked to the rest of the program after assembling). Compilers like Aztec-C converted everything to assembly language as a distinct pass and then assembled the code in a distinct pass, and were noted for their very efficient and small code, but by 1987 the optimizer built into Microsoft C was very good, and only "mission critical" parts of a program were usually considered for rewriting. In fact, C language programming had taken over as the "lowest-level" language, with programming becoming a multi-disciplinary growth industry and projects becoming larger, with programmers writing user interfaces and database interfaces in higher-level languages, and a need had emerged for cross-language development that continues to this day. By 1987, with the release of MSC 5.1, Microsoft offered a cross-language development environment for MS-DOS. 16-bit binary object code written in assembly language (MASM) and Microsoft's other languages including QuickBASIC, Pascal, and Fortran could be linked together into one program, in a process they called "Mixed Language Programming" and now "InterLanguage Calling". If BASIC was used in this mix, the main program needed to be in BASIC to support the internal runtime system that compiled BASIC required for garbage collection and its other managed operations that simulated a BASIC interpreter like QBasic in MS-DOS. The calling convention for C code, in particular, was to pass parameters in "reverse order" on the stack and return values on the stack rather than in a processor register.
There were other programming rules to make all the languages work together, but this particular rule persisted through the cross-language development that continued throughout Windows 16- and 32-bit versions and in the development of programs for OS/2, and it persists to this day. It is known as the Pascal calling convention. Another type of cross compilation that Microsoft C was used for during this time was in retail applications that require handheld devices like the Symbol Technologies PDT3100 (used to take inventory), which provided a link library targeted at an 8088-based barcode reader. The application was built on the host computer, then transferred to the handheld device (via a serial cable) where it was run, similar to what is done today for that same market using Windows Mobile by companies like Motorola, who bought Symbol.

Early 1990s
Throughout the 1990s, and beginning with MSC 6 (their first ANSI C compliant compiler), Microsoft re-focused their C compilers on the emerging Windows market, and also on OS/2 and the development of GUI programs. Mixed-language compatibility remained through MSC 6 on the MS-DOS side, but the API for Microsoft Windows 3.0 and 3.1 was written in MSC 6. MSC 6 was also extended to provide support for 32-bit assemblies and support for the emerging Windows for Workgroups and Windows NT, which would form the foundation for Windows XP. A programming practice called a thunk was introduced to allow passing between 16- and 32-bit programs that took advantage of runtime binding (dynamic linking) rather than the static binding that was favoured in monolithic 16-bit MS-DOS applications. Static binding is still favoured by some native code developers but does not generally provide the degree of code reuse required by newer best practices like the Capability Maturity Model (CMM). MS-DOS support was still provided with the release of Microsoft's first C++ compiler, MSC 7, which was backward compatible with the C programming language and MS-DOS and supported both 16- and 32-bit code generation. MSC took over where Aztec C86 left off. The market share for C compilers had turned to cross compilers which took advantage of the latest and greatest Windows features, offered C and C++ in a single bundle, and still supported MS-DOS systems that were already a decade old, and the smaller companies that produced compilers like Aztec C could no longer compete and either turned to niche markets like embedded systems or disappeared. MS-DOS and 16-bit code generation support continued until MSC 8.00c, which was bundled with Microsoft C++ and Microsoft Application Studio 1.5, the forerunner of Microsoft Visual Studio, the cross development environment that Microsoft provides today.

Late 1990s
MSC 12 was released with Microsoft Visual Studio 6 and no longer provided support for MS-DOS 16-bit binaries, instead providing support for 32-bit console applications, but it provided support for Windows 95 and Windows 98 code generation as well as for Windows NT. Link libraries were available for other processors that ran Microsoft Windows, a practice that Microsoft continues to this day. MSC 13 was released with Visual Studio 2003, and MSC 14 was released with Visual Studio 2005, both of which still produce code for older systems like Windows 95, but which will produce code for several target platforms including the mobile market and the ARM architecture.
.NET and beyond
In 2001 Microsoft developed the Common Language Runtime (CLR), which formed the core of their .NET Framework compiler in the Visual Studio IDE. This layer on top of the operating system's API allows the mixing of development languages compiled across platforms that run the Windows operating system. The .NET Framework runtime and CLR provide a mapping layer to the core routines for the processor and the devices on the target computer. The command-line C compiler in Visual Studio will compile native code for a variety of processors and can be used to build the core routines themselves. Microsoft .NET applications for target platforms like Windows Mobile on the ARM architecture cross-compile on Windows machines with a variety of processors, and Microsoft also offers emulators and remote deployment environments that require very little configuration, unlike the cross compilers in days gone by or on other platforms. Runtime libraries, such as Mono, provide compatibility for cross-compiled .NET programs to other operating systems, such as Linux. Libraries like Qt and its predecessors, including XVT, provide source-code-level cross development capability with other platforms, while still using Microsoft C to build the Windows versions. Other compilers like MinGW have also become popular in this area, since they are more directly compatible with the Unixes that comprise the non-Windows side of software development, allowing those developers to target all platforms using a familiar build environment.

Free Pascal
Free Pascal was developed from the beginning as a cross compiler. The compiler executable (ppcXXX, where XXX is a target architecture) is capable of producing executables (or just object files if no internal linker exists, or even just assembly files if no internal assembler exists) for all operating systems of the same architecture. For example, ppc386 is capable of producing executables for i386-linux, i386-win32, i386-go32v2 (DOS) and all other such OSes. For compiling to another architecture, however, a cross-architecture version of the compiler must be built first. The resulting compiler executable has an additional 'ross' before the target architecture in its name; i.e. if the compiler is built to target x64, then the executable is ppcrossx64. To compile for a chosen architecture and OS, the compiler switches -P and -T (for the compiler driver fpc) can be used. This is also done when cross-compiling the compiler itself, but is then set via the make options CPU_TARGET and OS_TARGET. A GNU assembler and linker for the target platform are required if Free Pascal does not yet have an internal version of the tools for that platform.

Clang
Clang is natively a cross compiler: at build time, one can select which architectures Clang should be able to target.
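By way of illustration, target selection for the two compilers just described might look as follows; the source file names are placeholders, and the Free Pascal line assumes the matching ppcross compiler and target units have already been built and installed:

# Free Pascal: -P selects the target CPU, -T the target OS.
fpc -Px86_64 -Twin64 hello.pas
# Clang: any configured target triple can be selected at compile time;
# a full link still needs a suitable cross linker and sysroot.
clang --target=aarch64-linux-gnu -c hello.c -o hello.o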
See also
MinGW
Scratchbox
Free Pascal
Cross assembler

References

External links
Cross Compilation Tools – reference for configuring GNU cross compilation tools
Building Cross Toolchains with gcc – a wiki of other GCC cross-compilation references
Scratchbox – a toolkit for Linux cross-compilation to ARM and x86 targets
Grand Unified Builder (GUB) – for Linux, to cross-compile multiple architectures (e.g. Win32/Mac OS/FreeBSD/Linux); used by GNU LilyPond
Crosstool – a helpful toolchain of scripts which create a Linux cross-compile environment for the desired architecture, including embedded systems
crosstool-NG – a rewrite of Crosstool that helps build toolchains
buildroot – another set of scripts for building a uClibc-based toolchain, usually for embedded systems; utilized by OpenWrt
ELDK (Embedded Linux Development Kit) – utilized by Das U-Boot
T2 SDE – another set of scripts for building whole Linux systems based on either GNU libc, uClibc or dietlibc, for a variety of architectures
Cross Linux from Scratch Project
IBM has a very clearly structured tutorial about cross-building a GCC toolchain
Cross-compilation avec GCC 4 sous Windows pour Linux – a tutorial on building a cross-GCC toolchain, but from Windows to Linux, a subject rarely covered

Compiler theory
4952814
https://en.wikipedia.org/wiki/MediaMan
MediaMan
MediaMan is a general-purpose collection organizer for establishing a personal database of media collections (DVDs, CDs, books, etc.), developed by He Shiming. Debuting in 2004 as freeware, MediaMan was the first software in its genre built around the concept of a general-purpose organizer; previously, people usually had to pay for two licenses, one for a book organizer and one for a video organizer. MediaMan remained freeware until late 2006, when the author decided to switch to shareware at a price of $39.95 per license. Amazon Web Services (later called E-Commerce Service and then Product Advertising API) was used to retrieve product information automatically during the import process in MediaMan, which means it is also a part of the Amazon Associates program. However, the latest version of MediaMan (the v3.10 series) no longer uses this API, due to the efficiency guidelines introduced in October 2010. MediaMan is also known as a Windows alternative to Mac OS X's Delicious Library. Software development seems to have stalled since the last release, a beta of MediaMan 4.0, in December 2013. A growing number of bugs has made the program unusable for some. Communication with the developer has stopped, development and bug fixes have ceased, and the site has gone offline.

Product history

See also
Delicious Library

References

External links
MediaMan
Neowin review
Softpedia review
Music By Mail Canada review

Windows multimedia software PIM-software for Windows Personal information managers
19279
https://en.wikipedia.org/wiki/Mongolian%20Armed%20Forces
Mongolian Armed Forces
The Mongolian Armed Forces (Mongol: ulsyn zevsegt hüchin) is the collective name for the Mongolian military and the joint forces that comprise it. It is tasked with protecting the independence, sovereignty, and territorial integrity of Mongolia. Defined as the peacetime configuration, its current structure consists of five branches: the Mongolian Ground Force, the Mongolian Air Force, the Construction and Engineering Forces, cyber security, and special forces. In case of a war situation, the Border Troops, Internal Troops and National Emergency Management Agency can be reorganized into the armed forces structure. The General Staff of the Mongolian Armed Forces is the main managing body and operates independently from the Ministry of Defence, its government-controlled parent body. The official holiday of the military is Men's and Soldiers' Day on 18 March, the equivalent of Defender of the Fatherland Day in Russia.

History

Mongol Empire and post-imperial
As a unified state, Mongolia traces its origins to the Mongol Empire created by Genghis Khan in the 13th century. Genghis Khan unified the various tribes on the Mongol steppe, and his descendants eventually conquered much of Asia and the Middle East, and parts of Eastern Europe. The Mongol Army was organized into decimal units of tens, hundreds, thousands, and ten thousands. A notable feature of the army was that it was composed entirely of cavalry units, giving it the advantage of maneuverability. Siege weaponry was adapted from other cultures, with foreign experts integrated into the command structure. The Mongols rarely used naval power, with a few exceptions: in the 1260s and 1270s they used sea power while conquering the Song dynasty of China, though they were unable to mount successful seaborne campaigns against Japan due to storms and rough battles. Around the Eastern Mediterranean, their campaigns were almost exclusively land-based, with the seas controlled by Crusader and Mamluk forces. With the disintegration of the Mongol Empire in the late 13th century, the Mongol Army as a unified force also crumbled. The Mongols retreated to their homeland after the fall of the Mongol Yuan dynasty and once again fell into civil war. Although the Mongols became united once again during the reign of Queen Mandukhai and Batmongkhe Dayan Khan, in the 17th century they were annexed into the Qing dynasty.

Period under Qing rule
Once Mongolia was under the Qing, the Mongol armies were used to defeat the Ming dynasty, helping to consolidate Manchu rule. The Mongols proved a useful ally in the war, lending their expertise as cavalry archers. During most of the Qing dynasty period, the Mongols gave military assistance to the Manchus. With the creation of the Eight Banners, Banner Armies were broadly divided along ethnic lines, namely Manchu and Mongol.

Bogd Khanate (1911–1919)
In 1911, Outer Mongolia declared independence as the Bogd Khaanate under the Bogd Khan. This initial independence did not last, with Mongolia occupied successively by the Chinese Beiyang Government and Baron Ungern's White Russian forces. The foundation of the modern Mongolian Armed Forces was laid during this period, with conscription of men and a permanent military structure starting in 1912.
Mongolian People's Republic
With independence lost again to foreign forces, the newly created Mongolian People's Revolutionary Party formed a native communist army in 1920 under the leadership of Damdin Sükhbaatar in order to fight against Russian troops of the White movement and Chinese forces. The MPRP was aided by the Red Army, which helped to secure the Mongolian People's Republic and remained in its territory until at least 1925. However, during the 1932 armed uprising in Mongolia and the initial Japanese border probes beginning in the mid-1930s, Soviet Red Army troops in Mongolia amounted to little more than instructors for the native army and guards for diplomatic and trading installations.

Battles of Khalkhin Gol
The Battles of Khalkhin Gol began on 11 May 1939. A Mongolian cavalry unit of some 70–90 men had entered the disputed area in search of grazing for their horses. On that day, Manchukuoan cavalry attacked the Mongolians and drove them back across the Khalkhin Gol. On 13 May, the Mongolian force returned in greater numbers and the Manchukuoans were unable to dislodge them. On 14 May, Lt. Col. Yaozo Azuma led the reconnaissance regiment of the 23rd Infantry Division, supported by the 64th Infantry Regiment of the same division under Colonel Takemitsu Yamagata, into the territory, and the Mongolians withdrew. Soviet and Mongolian troops returned to the disputed region, however, and Azuma's force again moved to evict them. This time things turned out differently, as the Soviet–Mongolian forces surrounded Azuma's force on 28 May and destroyed it. The Azuma force suffered eight officers and 97 men killed and one officer and 33 men wounded, for 63% total casualties. The commander of the Soviet forces and the Far East Front from May 1938 was Comandarm Grigory Shtern. Both sides began building up their forces in the area: soon Japan had 30,000 men in the theater. The Soviets dispatched a new corps commander, Comcor Georgy Zhukov, who arrived on 5 June and brought more motorized and armored forces (I Army Group) to the combat zone. Accompanying Zhukov was Comcor Yakov Smushkevich with his aviation unit. Zhamyangiyn Lhagvasuren, Corps Commissar of the Mongolian People's Revolutionary Army, was appointed Zhukov's deputy. The Battles of Khalkhin Gol ended on 16 September 1939.

World War II and immediate aftermath
In the beginning stage of World War II, the Mongolian People's Army was involved in the Battle of Khalkhin Gol, when Japanese forces, together with the puppet state of Manchukuo, attempted to invade Mongolia from the Khalkha River. Soviet forces under the command of Georgy Zhukov, together with Mongolian forces, defeated the Japanese Sixth Army and effectively ended the Soviet–Japanese Border Wars. In 1945, Mongolian forces participated in the Soviet invasion of Manchuria under the command of the Red Army, among the last engagements of World War II. A Soviet–Mongolian cavalry mechanized group under Issa Pliyev took part as part of the Soviet Transbaikal Front. Mongolian troops numbered four cavalry divisions and three other regiments. During 1946–1948, the Mongolian People's Army successfully repelled attacks by the Kuomintang's Hui regiment and their Kazakh allies on the border between Mongolia and Xinjiang. The attacks were prompted by the Ili Rebellion, a Soviet-backed revolt by the Second East Turkestan Republic against the Kuomintang Government of the Republic of China.
This little-known border dispute between Mongolia and the Republic of China became known as the Pei-ta-shan Incident. These engagements would be the last active battles the Mongolian Army would see until after the democratic revolution.

After the Democratic Revolution
Mongolia underwent a democratic revolution in 1990, ending the communist one-party state that had existed since the early 1920s. In 2002, a law was passed that enabled Mongolian army and police forces to conduct UN-backed and other international peacekeeping missions abroad. In August 2003, Mongolia contributed troops to the Iraq War as part of the Multi-National Force – Iraq. Mongolian troops, numbering 180 at their peak, were under Multinational Division Central-South and were tasked with guarding the main Polish base, Camp Echo. Prior to that posting, they had been protecting a logistics base dubbed Camp Charlie in Hillah. Then-Chairman of the Joint Chiefs of Staff, General Richard Myers, visited Ulan Bator on 13 January 2004 and expressed his appreciation for the deployment of a 173-strong contingent to Iraq. He then inspected the 150th Peacekeeping Battalion, which was planned to send a fresh force to replace the first contingent later in January 2004. All troops were withdrawn on 25 September 2008. In June 2005, Batzorigiyn Erdenebat, the Vice Minister of National Defence, told Jane's Defence Weekly that the deployment of forces in Mongolia was moving away from its Cold War posture, which had been oriented southward against China. "Under Mongolia's regional development concept the country has been divided into four regions, each incorporating several provinces. The largest capital city in each region will become the regional centre and we will establish regional military headquarters in each of those cities," he said. However, at the time, implementation had been delayed. In 2009, Mongolia sent 114 troops as part of the International Security Assistance Force to Afghanistan, backing the U.S. surge in troop numbers. Mongolian forces in Afghanistan mostly assist NATO/International Security Assistance Force personnel in training on the former Warsaw Pact weapons that comprise the bulk of the military equipment available to the Afghan National Army. In 2021, on the occasion of the 100th anniversary of the armed forces, the armed forces were awarded the Order of Genghis Khan by President Khaltmaagiin Battulga.

Peacekeeping missions
Mongolian armed forces have performed peacekeeping missions in South Sudan, Chad, Georgia, Ethiopia, Eritrea, Congo, Western Sahara, Sudan (Darfur), Iraq, Afghanistan, and in Sierra Leone under the mandate of the United Nations Mission in Liberia. In 2005/2006, Mongolian troops also served as part of the Belgian KFOR contingent in Kosovo. From 2009 to 2010 the Mongolian Armed Forces deployed their largest peacekeeping mission, to Chad, and completed the mission successfully. In 2011, the government decided to deploy its first fully self-sustained force to the United Nations Mission in South Sudan (UNMISS). Since then a Mongolian infantry battalion has been conducting peacekeeping tasks in Unity State of the Republic of South Sudan. In addition, Mongolian staff officers are deployed at the Force Headquarters and Sector Headquarters of the UNMISS mission. The first general officer deployed to this mission as a brigade commander in 2014. On 17 November 2009, Deputy Assistant Secretary of Defense for Partnership Strategy and Stability Operations James Schear had lunch with Col.
Ontsgoibayar and selected troops from the 150th Peacekeeping Battalion under his command, bound for Chad on 20 November 2009. Afterwards Schear visited the Five Hills Regional Training Center, which hosts numerous combined multinational training opportunities for peacekeepers. Other peacekeeping battalions in the Mongolian forces may include the 084th Special Task Battalion, and the 330th and 350th Special Task Battalions. Historical Mongolian naval forces Historically, during the time of Kublai Khan, the Mongolian Navy was one of the largest in the world. However, most of the fleet sank during the Mongol invasions of Japan. The Mongolian Navy was recreated in the 1930s, while the country was under Soviet rule, and was used to transport oil. By 1990, the Mongolian Navy consisted of a single vessel, the Sukhbaatar III, which was stationed on Lake Khövsgöl, the nation's largest body of water by volume. The navy was made up of seven men, making it the smallest navy in the world at the time. In 1997, the navy was privatized, and offered tours on the lake to cover expenses. Currently, Mongolia does not have an official navy, but it maintains small border patrols on Buir Lake, patrolling the Mongolia–China border which runs through the lake. Military policy Mongolia has a unique military policy due to its geopolitical position and economic situation. Being between two of the world's largest nations, Mongolia's armed forces have a limited capability to protect the country's independence against foreign invasions; its national security therefore depends strongly on diplomacy, a notable part of which is the third neighbor policy. The country's military ideal is to create and maintain small but efficient and professional armed forces. Organization Higher leadership The military order of precedence is as follows: President of Mongolia (Commander-in-Chief) Minister of Defense Deputy Ministers of Defense Chief of the General Staff of the Armed Forces Deputy Chiefs of the General Staff of the Armed Forces Service branch commanders Branches Ground Force The Ground Forces possess over 470 tanks, 650 infantry fighting vehicles and armored personnel carriers, 500 mobile anti-aircraft weapons, more than 700 artillery pieces and mortars, and other military equipment. Most of them are old Soviet models designed between the late 1950s and the early 1980s. There are a smaller number of newer models designed in post-Soviet Russia. Air Force On 25 May 1925 a Junkers F.13 entered service as the first aircraft in Mongolian civil and military aviation. By 1935 Soviet aircraft were based in the country. In May 1937 the air force was renamed the Mongolian People's Republic Air Corps. During 1939–1945 the Soviets delivered Polikarpov I-15s, Polikarpov I-16s, Yak-9s and Ilyushin Il-2s. By 1966 the first SA-2 SAM units entered service, and the air force was renamed the Air Force of the Mongolian People's Republic. The MiG-15UTI and MiG-17, the first combat jet aircraft in the Mongolian inventory, entered service in 1970 and by the mid-1970s were joined by MiG-21s, Mi-8s and Ka-26s. After the end of the Cold War and the advent of the Democratic Revolution, the air force was effectively grounded due to a lack of fuel and spare parts. However, the government has been trying to revive the air force since 2001. The country has the goal of developing a full air force in the future. In 2011, the Ministry of Defense announced that it would buy MiG-29s from Russia by the end of the year, but this did not materialize. 
In October 2012 the Ministry of Defense returned a loaned Airbus A310-300 to MIAT Mongolian Airlines. From 2007 to 2011 the active fleet of MiG-21s was reduced. In 2013 the Air Force examined the possibility of buying three C-130J transport airplanes, manufactured by Lockheed Martin. Left without Russian aid, the Mongolian air force inventory was gradually reduced to a few Antonov An-24/26 tactical airlifters and a dozen airworthy Mi-24 and Mi-8 helicopters. On 26 November 2019 Russia donated two MiG-29 fighter aircraft to Mongolia, which then became the only combat-capable fighter jets in its air force. Construction and Engineering Forces Since 1963, large-scale construction work has been a military affair, with the Council of Ministers on 8 January 1964 establishing the General Construction Military Agency under the Ministry of Defense. In addition, a large number of construction military units have been established. Work to create a new construction and engineering force began in 2010. The Ministry of Defense and the General Staff of the Armed Forces have established six civil engineering units over the last 10 years. Cyber Security Forces The Armed Forces Cyber Security Center has been established under the General Staff of the Armed Forces. A project to upgrade the Armed Forces' information and communication network, conduct integrated monitoring, detect cyber attacks, and install response equipment is expected to be completed in August 2021. A decision has been made to build a Data Center for the Armed Forces' Cyber Security Center. This will be the basis for the creation of a Cyber Security Force. Special Forces The only special forces unit in Mongolia is the 084th Special Task Battalion. Personnel Military education In October 1943, the Sukhe-Bator Officers' School was opened to train personnel of the Mongolian Army in accordance with the experience of the Red Army during the Second World War. The National Defense University serves as the main educational institution of the armed forces. The NDU is composed of the following education institutions: Defense Management Academy, Defense Research Institute, Academic Education Institute, Military Institute, Military Music College, NCO College. In 1994, the NDU maintained a border protection faculty, which would later be expanded to establish the Border Troops Institute and what would later become the Law Enforcement University of Mongolia. Conscription The legal basis of conscription is the Universal Military Service Act. Men are conscripted between the ages of 18 and 25 for a one-year tour of duty. Mongolian men receive their conscription notices through their local administrative unit. Reserve service is still required up until the age of 45. Women in the Armed Forces More than 20 percent of the total personnel of the Armed Forces are female, working mainly in the communications, logistics and medical sectors. In addition, female members of the Armed Forces have been active in UN peacekeeping operations. Major N. Nyamjargal was the first female member of the Armed Forces to serve as a UN-mandated military observer in Western Sahara in 2007. A total of 12 women have served in Western Sahara and Sierra Leone. Most women are assigned duties in kitchen facilities and barracks and remain subject to many gender inequalities, though policies in recent years have aimed at making female military service more equitable. 
Military courts On 16 March 1921, a joint meeting of the Provisional People's Government and the members of the Central Committee of the MPRP decided to establish a "Military Judicial Office under the Ministry of Defense". In 1928, the government approved the "Charter of the Red Army Judiciary" and the Military Judiciary was established under the Ministry of Justice. This was disbanded a year later and the Military College of the Supreme Court was established. It was composed of the Khovd Regional Military Court, the Eastern Military Court, and the Military Courts of the 1st Cavalry Division (Ulaanbaatar). The military courts were referred to as "special courts" at the time and dealt with criminal and civil cases involving military personnel. In 1929, the Provisional Court and the General Military Court were dissolved, and the Military College of the Supreme Court was subordinated to the three former military units. The Military College was dissolved in 1954, and was re-established in 1971. In connection with changes in staffing, the parliament in 1993 ordered the abolition of the All-Military Special Court and the Special Military Court of First Instance, transferring the assets used by the military courts to the General Council of the Judiciary. All activities of the military court system are supervised by the Military Collegium. Equipment References World Aircraft Information Files, Bright Star Publishing, London, File 332 Sheet 3 External links General Staff of the Mongolian Armed Forces Ministry of Defense General Intelligence Agency Photo report on the Military Parade in honor of the National Flag of Mongolia, 2011
25961921
https://en.wikipedia.org/wiki/Empress%20Embedded%20Database
Empress Embedded Database
Empress Embedded Database is a relational database management system that has been embedded into applications, including medical systems, network routers, nuclear power plant monitors, and satellite management systems. Empress is an ACID-compliant relational database management system (RDBMS) with two-phase commit and several transaction isolation levels for real-time embedded applications. It supports both persistent and in-memory storage of data and works with text, binary and multimedia data, as well as traditional data types. History The first version of Empress was created by John Kornatowski and Ivor Ladd in 1979 and was originally named MISTRESS. It was based on research done at the University of Toronto on "MRS: A microcomputer database management system", published by the Association for Computing Machinery in SIGSMALL/SIGMOD 1981. The commercial version was one of the first available relational database management systems (RDBMS) and was named Empress. It first shipped to a customer in early 1981. Empress was the first commercial database to be available on Linux. Its Linux release dates back to early 1995. API and architecture Empress supports many application programming interfaces in several programming languages. The C programming language has the most APIs, including the low-level kernel MR Routines, Embedded SQL, MSCALL and ODBC. There are also APIs for C++ and Java. The layered architecture design provides levels of system optimization for application development. Applications developed using these APIs may be run in standalone and/or server modes. Product features Kernel API SQL API Fast Bulk Data Handling (BLOBs) Bulk Chunks Unlimited Attributes File Indices Persistent Stored Modules Triggers Stored Procedures No Pre-Partitioning required Referential Constraints Range Checks Micro-Second Time Stamps Layered Architecture Text Search Index Spatial Search Index Cancel Functionality Hierarchical Query JDBC Interface C++ APIs Database Encryption 64 BIT Operating System Versions UTF-8 UNICODE & National Language Support Replication Server Time-out Function Supported platforms Empress runs on all major Android, Linux, real-time and Windows platforms: Android BlueCat Linux Debian Fedora HP-UX AIX Linux LynxOS RTOS MontaVista Linux QNX Neutrino Red Hat Linux Solaris Suse Linux Ubuntu Unix VxWorks Windows CE Windows Mobile Windows XP Windows 7 Wind River Linux References External links Product Reviews: Empress RDBMS and Just Logic by Rob Wehrli Data Embedded databases
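The ODBC interface mentioned above can be exercised from any ODBC-capable language. The following is a minimal, illustrative Python sketch using the pyodbc package; the data source name, credentials and table are hypothetical assumptions for the example, not part of Empress's documentation.

# Minimal sketch: querying an Empress database through its ODBC API.
# Assumes pyodbc is installed and that an ODBC data source named
# "EmpressDSN" has been configured; the DSN, credentials and table
# are hypothetical.
import pyodbc

conn = pyodbc.connect("DSN=EmpressDSN;UID=user;PWD=secret")
cursor = conn.cursor()

# Standard SQL should work against any ODBC-compliant RDBMS.
cursor.execute("SELECT id, name FROM sensors WHERE active = 1")
for row in cursor.fetchall():
    print(row.id, row.name)

conn.close()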
18856683
https://en.wikipedia.org/wiki/Bohmini.A
Bohmini.A
Bohmini.A is a configurable remote access tool or Trojan. Bohmini.A exploits security flaws in Adobe Flash 9.0.115 with Internet Explorer 7.0 and Firefox 2.0 under Windows XP SP2. Adobe Flash 9.0.124 is not known to be vulnerable to Bohmini.A. In July 2008, it became known that Bohmini.A spread as malvertising from 247mediadirect through an advertising network via the social networking site Facebook. Bohmini.A is detected by at least one known anti-virus product, Microsoft Windows Live OneCare. However, as of August 12, 2008, Microsoft Windows Live OneCare does not remove Bohmini.A completely, although it claims to have detected and removed it. To remove Bohmini.A under Windows XP, run a known detecting anti-virus product such as Windows Live OneCare, then go to Control Panel and select Switch to Classic View. Then select Scheduled Tasks and remove all tasks with the prefix At, such as At1, ..., At24. The Bohmini.A installation is customizable and therefore implementations vary; for example, the executable names differ from one infection to another. Bohmini.A is configured to notify and update itself over HTTP. See also Trojan External links Threat Analysis from Telenor SOC (Norwegian) (Translated to English via Google) Virustotal MD5:a2cd6617e5b1c4b0a6df375d878d33f1 Virustotal MD5:45ecab7cc3aa1c86889ad6b13ed9838b Trojan horses Windows trojans Internet Protocol based network software Hacking in the 2000s
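The manual clean-up step described above (removing the scheduled tasks At1, ..., At24) can also be scripted. The sketch below is illustrative only: it assumes the at-job IDs match the task names (At1 corresponding to job 1, and so on), and it should be run from an administrator account after an anti-virus scan.

# Sketch of the manual clean-up step described above: removing the
# scheduled tasks At1 ... At24 left behind by Bohmini.A on Windows XP.
# Assumes the at-job IDs match the task names (At1 -> job 1, etc.).
import subprocess

for job_id in range(1, 25):
    # "at <id> /delete" removes a single at-job; errors for IDs that
    # do not exist are simply ignored.
    subprocess.run(["at", str(job_id), "/delete"],
                   capture_output=True, check=False)

# Alternatively, "at /delete /yes" removes all at-jobs at once.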
22133086
https://en.wikipedia.org/wiki/Psyb0t
Psyb0t
Psyb0t or Network Bluepill is a computer worm discovered in January 2009. It is thought to be unique in that it can infect routers and high-speed modems. Progress Psyb0t was first detected in January 2009 by Australian security researcher Terry Baume in a Netcomm NB5 ADSL router/modem. Then, in early March, it ran a DDoS attack against DroneBL (an IP blacklisting service). From this attack, DroneBL estimated that the worm had infected about 100,000 devices. The attack brought the worm some public attention in late March, which probably caused its operator to shut it down. DroneBL also succeeded in bringing down the worm's command-and-control and DNS servers. Description Psyb0t targets modems and routers with little-endian MIPS processors running Mipsel Linux firmware. It forms part of a botnet operated through IRC command-and-control servers. After infecting a device, psyb0t blocks access to the router's TCP ports 22, 23 and 80. Psyb0t contains many attack tools. It is known to be able to scan networks for vulnerable routers/modems, check for MySQL and phpMyAdmin vulnerabilities, and perform website DoS attacks. Two versions are known. The first, version 2.5L, affected the Netcomm NB5 ADSL router/modem. The newer version, 2.9L, affects over 50 models by Linksys, Netgear and other vendors, including those running DD-WRT or OpenWrt firmware. Attack vectors and countermeasures The primary attack vector is SSH or telnet access. Using brute force, it tries to gain access with a list of over 6,000 usernames and 13,000 passwords. However, 90% of infections are caused by insecure configuration, mostly a missing or default administration password combined with remote administration being enabled. Recommended countermeasures are to change default access credentials to more secure ones and to update the router/modem firmware. If infection is suspected, it is advised to perform a hard reset of the router. References External links Psyb0t description DroneBL blog about Psyb0t New worm can infect home modem/routers Computer worms
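Because psyb0t blocks TCP ports 22, 23 and 80 on an infected router, one rough warning sign is that all three suddenly refuse connections from the LAN side. The following Python sketch illustrates such a heuristic check; the gateway address is an assumption for the example, and blocked ports alone do not prove infection.

# Heuristic sketch: psyb0t blocks TCP ports 22, 23 and 80 on an infected
# router, so all three refusing connections from the LAN side is one
# possible warning sign. The router address below is an assumption;
# adjust it for your network.
import socket

ROUTER = "192.168.1.1"   # hypothetical gateway address

def port_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

blocked = [p for p in (22, 23, 80) if not port_open(ROUTER, p)]
if len(blocked) == 3:
    print("All management ports unreachable - consider a hard reset.")
else:
    print("Reachable management ports:",
          [p for p in (22, 23, 80) if p not in blocked])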
49078559
https://en.wikipedia.org/wiki/Remix%20OS
Remix OS
Remix OS was a computer operating system for personal computers with x86 and ARM architectures that, prior to discontinuation of development, shipped with a number of first- and third-party devices. Remix OS allowed PC users to run Android mobile apps on any compatible Intel-based PC. In January 2016 Jide announced a beta version of their operating system called Remix OS for PC, which is based on Android-x86 (an x86 port of the Android operating system) and available for download for free from their website. The beta version of Remix OS for PC brought hard drive installation, 32-bit support, UEFI support and OTA updates. Except for the free software licensed parts available on GitHub, unlike Android-x86, the source code of Remix OS is not available to the public. Google Mobile Services (GMS) were removed from the Remix Mini after Remix OS Update 3.0.207, which Jide claimed was to "ensure a consistent experience across all Android devices for all." Later comments suggest that there was a compatibility issue with some apps which resulted in Google requesting that GMS not be pre-loaded. On July 17, 2017, Jide announced that development of Remix OS for PC, as well as related consumer products in development, was being discontinued, stating that the company would be "restructuring [their] approach to Remix OS and transitioning away from the consumer space". PhoenixOS and PrimeOS are similar Android-x86-based operating systems developed independently by other companies. Version history There were three versions of Remix OS: Remix OS for PC, Remix OS for the Remix Ultratablet, and Remix OS for the Remix Mini. A Remix OS 3.0 device, the Remix Pro 2-in-1 tablet, had been announced in 2016; however, it will no longer be made. Legacy Due to the popularity and affordability of the OS in Asia, similar projects have been made by various firms, most notably PhoenixOS by the Chinese-based Chaozhuo Technology and PrimeOS by the Indian-based Floydwiz Technologies Private Limited. Just like Remix OS, both are predominantly closed source, with improved features intended to optimise both systems for newer applications and PCs. Projects like OPENTHOS and BlissOS intend to release with open source in mind, but OPENTHOS is restricted to Chinese markets at the moment, and BlissOS is based on Android-x86. References Android (operating system) Linux distributions
574964
https://en.wikipedia.org/wiki/Motorboat
Motorboat
A motorboat, speedboat or powerboat is a boat that is exclusively powered by an engine. Some motorboats are fitted with inboard engines, while others have an outboard motor installed on the rear, containing the internal combustion engine, the gearbox and the propeller in one portable unit. An inboard/outboard is a hybrid of an inboard and an outboard, where the internal combustion engine is installed inside the boat, and the gearbox and propeller are outside. There are two configurations of an inboard: V-drive and direct drive. A direct drive has the powerplant mounted near the middle of the boat with the propeller shaft running straight out the back, whereas a V-drive has the powerplant mounted in the back of the boat facing backwards, with the shaft running towards the front of the boat and then making a V towards the rear. Overview A motorboat has one or more engines that propel the vessel over the top of the water. Boat engines vary in shape, size, and type. Engines are installed either inboard or outboard. Inboard engines are part of the boat's construction, while outboard engines are secured to the transom and hang off the back of the boat. Motorboat engines most commonly run on gasoline or diesel fuel, though engine types vary and include gasoline, diesel, gas turbine, rotary combustion and steam designs. Motorboats are commonly used for recreation, sport, or racing. Boat racing is a sport in which drivers and engineers compete to field the fastest boat. The American Power Boat Association (APBA) splits the sport into categories. The categories include inboard, inboard endurance, professional outboard, stock outboard, unlimited outboard performance craft, drag, modified outboard, and offshore. Racing classes are categorized by engine and hull. The two types of hull shape are runabout and hydroplane: a runabout hull is V-shaped, while a hydroplane hull is flat and stepped. The type of hull used depends on the type of water the boat is in and how the boat is being used. Hulls can be made of wood, fiberglass or metal, but most hulls today are fiberglass. High-performance speedboats can reach speeds of over 50 knots. Their high speed and performance can be attributed to their hull technology and powerful engines. With a more powerful and heavier engine, an appropriate hull shape is needed. High-performance boats include yachts, HSICs (high-speed interceptor craft) and racing powerboats. A V-type hull helps a boat cut through the water. A deep V-hull helps keep the boat's bow down at low speeds, improving visibility. V-hulls also improve a boat's speed and maneuvering capabilities. They stabilize a boat in rough conditions. History Invention Although the screw propeller had been added to an engine (a steam engine) as early as the 18th century in Birmingham, England, by James Watt, boats powered by a petrol engine only came about in the later part of the 19th century with the invention of the internal combustion engine. The earliest boat to be powered by a petrol engine was tested on the Neckar River by Gottlieb Daimler and Wilhelm Maybach in 1886, when they trialled their new "longcase clock" engine. It had been constructed in the former greenhouse (converted into a workshop) in Daimler's back yard. The first public display took place on the Waldsee in Cannstatt, today a suburb of Stuttgart, at the end of that year. The engine of this boat had a single cylinder of one horsepower. Daimler's second launch in 1887 had a second cylinder positioned at an angle of 15 degrees to the first one, and was known as the "V-type". 
The first successful motorboat was designed by the Priestman Brothers in Hull, England, under the direction of William Dent Priestman. The company began trials of its first motorboat in 1888. The engine ran on kerosene and used an innovative high-tension (high-voltage) ignition system. The company was the first to begin large-scale production of motorboats, and by 1890, Priestman's boats were successfully being used for towing goods along canals. Another early pioneer was Mr. J. D. Roots, who in 1891 fitted a launch with an internal combustion engine and operated a ferry service between Richmond and Wandsworth along the River Thames during the seasons of 1891 and 1892. The eminent inventor Frederick William Lanchester recognized the potential of the motorboat and over the following 15 years, in collaboration with his brother George, perfected the modern motorboat, or powerboat. Working in the garden of their home in Olton, Warwickshire, they designed and built a flat-bottomed river launch with an advanced high-revving engine that drove a stern paddle wheel in 1893. In 1897, he produced a second engine similar in design to his previous one but running on benzene at 800 r.p.m. The engine drove a reversible propeller. An important part of his new engine was the revolutionary carburettor, for mixing the fuel and air correctly. His invention was known as a "wick carburettor", because fuel was drawn into a series of wicks, from where it was vaporized. He patented this invention in 1905. The Daimler Company began production of motorboats in 1897 from its manufacturing base in Coventry. The engines had two cylinders, and the explosive charge of petroleum and air was ignited by compression into a heated platinum tube. The engine gave about six horsepower. The petrol was fed by air pressure to a large surface carburettor and also to an auxiliary tank which supplied the burners for heating the ignition tubes. Reversal of the propeller was effected by means of two bevel friction wheels which engaged with two larger bevel friction wheels, the intermediate shaft being temporarily disconnected for this purpose. It was not until 1901 that a safer apparatus for igniting the fuel with an electric spark was used in motorboats. Expansion Interest in fast motorboats grew rapidly in the early years of the 20th century. The Marine Motor Association was formed in 1903 as an offshoot of the Royal Automobile Club. Motor Boat & Yachting was the first magazine to address technical developments in the field and was brought out by Temple Press, London, from 1904. Large manufacturing companies, including Napier & Son and Thornycroft, began producing motorboats. The first motorboating competition was established by Alfred Charles William Harmsworth in 1903. The Harmsworth Cup was envisioned as a contest between nations, rather than between boats or individuals. The boats were originally to be designed and built entirely by residents of the country represented, using materials and units built wholly within that country. The first competition, held in July 1903 at Cork Harbour in Ireland, and officiated by the Automobile Club of Great Britain and Ireland and the Royal Victoria Yacht Club, was a very primitive affair, with many boats failing even to start. The competition was won by Dorothy Levitt in a Napier launch designed to the specifications of Selwyn Edge. It was the first proper motorboat designed for high speed. 
She set the world's first water speed record in a steel-hulled, 75-horsepower Napier speedboat fitted with a three-blade propeller. As both the owner and entrant of the boat, "S. F. Edge" was engraved on the trophy as the winner. An article in the Cork Constitution on 13 July reported "A large number of spectators viewed the first mile from the promenade of the Yacht Club, and at Cork several thousand people collected at both sides of the river to see the finishes." Levitt was then commanded to the Royal yacht of King Edward VII, where he congratulated her on her pluck and skill, and they discussed the performance of the motorboat and its potential for British government despatch work. France won the race in 1904, and the boat Napier II set a new world water speed record for a mile at almost 30 knots (56 km/h), winning the race in 1905. The acknowledged genius of motorboat design in America was the naval architect John L. Hacker. His pioneering work, including the invention of the V-hull and the use of dedicated petrol engines, revolutionized boat design from as early as 1908, when he founded the Hacker Boat Co. In 1911, Hacker designed the Kitty Hawk, the first successful step hydroplane, which exceeded speeds then thought unattainable and was at that time the fastest boat in the world. The Harmsworth Cup was first won by Americans in 1907. The US and England traded it back and forth until 1920. From 1920 to 1933, Americans had an unbroken winning streak. Gar Wood won this race eight times as a driver and nine times as an owner between 1920 and 1933. Hull type The type of hull depends on the usage and the type of water that the boat is being used in. Types of hulls include displacement hulls, vee-bottom hulls, modified vee-bottom hulls, deep-vee hulls and vee-bottom hulls with trim tabs. Hulls can be made of different materials. The three main materials are wood, reinforced fiberglass and metal. Wood hulls may be made of planks or plywood. Fiberglass hulls are often reinforced with balsa wood. Metal hulls are either aluminum or steel. Recreational motorboat hulls are distinguished between day cruiser, bow rider, pilothouse and cabin cruiser types. The upper construction of each has different features according to its specific use. They differ especially in the size of the cabin, the dimensions of the half deck, and the position of the helmsman's armchair and crew seats. Racing Powerboat racing engine categories for inboard and outboard engines range from 7.5 cu in to 60 cu in. Categories range from 44 cu in to 450 cu in for inboard only. The two types of motorboat races are speed races and predicted-log races. Speed races involve boats with powerful engines competing for the quickest time and take place on closed courses on freshwater bodies. Races are marked by buoys. For unlimited hydroplanes the race distance ranges from 5 miles to 30 miles. Hydroplanes are drag raced. A predicted-log race involves slow cabin cruisers and is a competition in planning and carrying out a sea voyage. The contestants evaluate different factors and variables that they will encounter along the way. The contestant with the least error at the end of the race is the winner. Speedboat tours are also a common tourist attraction, especially in Dubai. Military application A fast craft is a vessel capable of speeds over 30 knots. HSICs (high-speed interceptor craft), however, surpass fast craft with speeds of over 50 knots. HSICs are usually either custom or law enforcement craft. 
Law enforcement agencies use these extremely fast boats to catch criminals such as smugglers and drug traffickers, who use fast craft as well. The military have started to use HSICs to stop international threats such as pirates, militias, terrorists and weapons traffickers. Originally HSICs were not meant to carry large weapons. Now that the military is using them, they are being built with heavy machine guns. The boats have to be fitted with robust weight control and at the same time carry heavy armour. HSICs must do without propellers to account for cavitation and vibrations. Instead HSICs use surface drives or water jets. Gallery See also Electric boat Go-fast boat Launch (boat) Motorboat racing Motor launch Motorsailer Powerboating Sterndrive Circle of death (boating) References Nautical terminology Vehicles introduced in 1886
28362003
https://en.wikipedia.org/wiki/Agastrophus
Agastrophus
In Greek mythology, Agastrophus (Ancient Greek: Ἀγάστροφος) is a Paionian "hero", "famed for his spear", fighting on the side of Troy in the Trojan War, killed by Diomedes. He was the son of Paeon and brother of Laophoon. Mythology Agastrophus' death comes about as the result of a lapse in judgment. Under the influence of Ate, a kind of judgmental blindness, Agastrophus made the fatal mistake of leaving his chariot too far behind him, and was thus unable to escape when he was wounded by Diomedes. After killing him, Diomedes strips the "gleaming corselet of valiant Agastrophus from about his breast, and the shield from off his shoulder, and his heavy helm". Notes References Connor, Peter, "Paeon" in Gods, Goddesses, and Mythology, Volume 8, editor, C. Scott Littleton, Marshall Cavendish, 2005. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D., in two volumes, Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd., 1924. North, Richard, Pagan Words and Christian Meanings, Rodopi, 1991. Parada, Carlos, Genealogical Guide to Greek Mythology, Jonsered, Paul Åströms Förlag, 1993. Quintus Smyrnaeus, The Trojan Epic: Posthomerica, JHU Press, 2007. T. F. E., "On the Homeric use of the word Ηρως", in The Philological Museum, Volume 2, editor, Julius Charles Hare, Printed by J. Smith for Deightons, 1833. Williams, John, "Homerus", in The Edinburgh Review, Volume 77, A. and C. Black, 1843. Yamagata, Naoko, Homeric Morality, BRILL, 1994. Yamagata, Naoko, "Disaster revisited: Ate and the Litai in Homer's Iliad" in Personification in the Greek World: From Antiquity to Byzantium, editors, Emma Stafford, Judith Herrin, Ashgate Publishing, Ltd., 2005. People of the Trojan War Paeonian mythology
256040
https://en.wikipedia.org/wiki/OpenLDAP
OpenLDAP
OpenLDAP is a free, open-source implementation of the Lightweight Directory Access Protocol (LDAP) developed by the OpenLDAP Project. It is released under its own BSD-style license called the OpenLDAP Public License. LDAP is a platform-independent protocol. Several common Linux distributions include OpenLDAP Software for LDAP support. The software also runs on BSD variants, as well as AIX, Android, HP-UX, macOS, Solaris, Microsoft Windows (NT and derivatives, e.g. 2000, XP, Vista, Windows 7, etc.), and z/OS. History The OpenLDAP project was started in 1998 by Kurt Zeilenga. The project started by cloning the LDAP reference source from the University of Michigan, where a long-running project had supported development and evolution of the LDAP protocol until that project's final release in 1996. Currently, the OpenLDAP project has four core team members: Howard Chu (chief architect), Quanah Gibson-Mount, Hallvard Furuseth, and Kurt Zeilenga. There are numerous other important and active contributors including Ondrej Kuznik, Luke Howard, Ryan Tandy, and Gavin Henry. Past core team members include Pierangelo Masarati. Components OpenLDAP has four main components: slapd – stand-alone LDAP daemon and associated modules and tools. lloadd – stand-alone LDAP load balancing proxy server libraries implementing the LDAP protocol and ASN.1 Basic Encoding Rules (BER) client software: ldapsearch, ldapadd, ldapdelete, and others Additionally, the OpenLDAP Project is home to a number of subprojects: JLDAP – LDAP class libraries for Java JDBC-LDAP – Java JDBC–LDAP bridge driver ldapc++ – LDAP class libraries for C++ LMDB – memory-mapped database library Backends Overall concept Historically the OpenLDAP server (slapd, the Standalone LDAP Daemon) architecture was split between a frontend that handles network access and protocol processing, and a backend that deals strictly with data storage. This split design was a feature of the original University of Michigan code written in 1996 and carried on in all subsequent OpenLDAP releases. The original code included one main database backend and two experimental/demo backends. The architecture is modular and many different backends are now available for interfacing to other technologies, not just traditional databases. Note: In older (1.x) releases, the terms "backend" and "database" were often used interchangeably. To be precise, a "backend" is a class of storage interface, and a "database" is an instance of a backend. The slapd server can use arbitrarily many backends at once, and can have arbitrarily many instances of each backend (i.e., arbitrarily many databases) active at once. Available backends Currently 17 different backends are provided in the OpenLDAP distribution, and various third parties are known to maintain other backends independently. The standard backends are loosely organized into three different categories: Data storage backends – these actually store data back-bdb: the first transactional backend for OpenLDAP, built on Berkeley DB, removed with OpenLDAP 2.5. back-hdb: a variant of back-bdb that is fully hierarchical and supports subtree renames, removed with OpenLDAP 2.5. back-ldif: built on plain text LDIF files back-mdb: a transactional backend built on OpenLDAP's Lightning Memory-Mapped Database (LMDB) back-ndb: a transactional backend built on MySQL's NDB cluster engine, removed with OpenLDAP 2.6. back-wiredtiger: an experimental transactional backend built on WiredTiger, introduced with OpenLDAP 2.5. 
Proxy backends – these act as gateways to other data storage systems back-asyncmeta: an asynchronous proxy with meta-directory features, introduced with OpenLDAP 2.5. back-ldap: simple proxy to other LDAP servers back-meta: proxy with meta-directory features back-passwd: uses a Unix system's passwd and group data back-relay: internally redirects to other slapd backends back-sql: talks to arbitrary SQL databases, deprecated with OpenLDAP 2.5. Dynamic backends – these generate data on the fly back-config: slapd configuration via LDAP back-dnssrv: locates LDAP servers via DNS back-monitor: slapd statistics via LDAP back-null: a sink/no-op backend, analogous to Unix /dev/null back-perl: invokes arbitrary perl modules in response to LDAP requests, deprecated with OpenLDAP 2.5. back-shell: invokes shell scripts for LDAP requests, removed with OpenLDAP 2.5. back-sock: forwards LDAP requests over IPC to arbitrary daemons Some backends available in older OpenLDAP releases have been retired from use, most notably back-ldbm, which was inherited from the original UMich code, and back-tcl, which was similar to back-perl and back-shell. Support for other backends will soon be withdrawn as well. back-ndb has now been removed, since the partnership with MySQL that led to its development was terminated after Oracle acquired MySQL. back-bdb and back-hdb have been removed in favor of back-mdb, since back-mdb is superior in all aspects of performance, reliability, and manageability. In practice, backends like -perl and -sock allow interfacing to any arbitrary programming language, thus providing limitless capabilities for customization and expansion. In effect, the slapd server becomes an RPC engine with a compact, well-defined and ubiquitous API. Overlays Overall concept Ordinarily an LDAP request is received by the frontend, decoded, and then passed to a backend for processing. When the backend completes a request, it returns a result to the frontend, which then sends the result to the LDAP client. An overlay is a piece of code that can be inserted between the frontend and the backend. It is thus able to intercept requests and trigger other actions on them before the backend receives them, and it can likewise act on the backend's results before they reach the frontend. Overlays have complete access to the slapd internal APIs, and so can invoke anything the frontend or other backends could perform. Multiple overlays can be used at once, forming a stack of modules between the frontend and the backend. Overlays provide a simple means to augment the functionality of a database without requiring that an entirely new backend be written, and allow new functionalities to be added in compact, easily debuggable and maintainable modules. Since the introduction of the overlay feature in OpenLDAP 2.2, many new overlays have been contributed by the OpenLDAP community. Available overlays Currently there are 25 overlays in the core OpenLDAP distribution, with another 24 overlays in the user-contributed code section, and more awaiting approval for inclusion. 
The core overlays include: accesslog: log server activity in another LDAP database, for LDAP-accessible logging auditlog: log server activity in a flat text file autoca: Automatic Certificate Authority (OpenLDAP 2.5) chain: intercept referrals and chain them instead; code is part of back-ldap collect: implement X.500-style collective attributes (aka Netscape Class Of Service) constraint: restrict the acceptable values for particular attributes dds: dynamic data service – short-lived, self-expiring entries deref: return information about entries referenced in a given search result dyngroup: simple dynamic group support dynlist: more sophisticated dynamic group support plus more homedir: Remote home directory provisioning (OpenLDAP 2.5) memberof: support for memberOf and similar backlink attributes otp: allows OATH One-Time Passwords to be used in conjunction with the LDAP user password (OpenLDAP 2.5) pcache: cache search results, mainly to improve performance for proxied servers ppolicy: LDAP Password Policy – password quality, expiration, etc. refint: referential integrity remoteauth: Allows delegation of authentication requests to remote directories. Works with Active Directory. (OpenLDAP 2.5) retcode: set predetermined return codes for various operations; used for client debugging rwm: rewrite module, for various alterations of LDAP data seqmod: serialize writes to individual entries sssvlv: Server Side Sorting and Virtual List Views syncprov: Syncrepl Provider, implements the master side of a replication agreement translucent: Semi-transparent pass-through, for locally augmenting data on a proxied server unique: for enforcing uniqueness of attribute values within a tree valsort: maintain various sort orders for values of an attribute The contrib overlays include: addpartial: receive Add requests and turn them into Modifies if the target entry already exists adremap: remaps attributes for PAM/NSS MS AD support (OpenLDAP 2.5) allop: returns all operational attributes, for clients that don't know how to request them authzid: implements RFC 3829 support (OpenLDAP 2.5) autogroup: dynamically managed static groups cloak: hide attributes unless explicitly requested in a search datamorph: stores enumerated values and fixed size integers (OpenLDAP 2.5) denyop: reject arbitrarily configured requests dupent: return multivalued results as separate entries lastbind: record the timestamp of a user's last successful authentication lastmod: maintain the timestamp of the last change within a tree nops: filter out redundant modifies noopsrch: count entries that would be returned by a search nssov: Answer NSS and PAM requests directly in slapd, replaces nss-ldap and pam-ldap. ppm: adds additional password checking criteria to the slapo-ppolicy overlay (OpenLDAP 2.5) proxyOld: support an obsolete encoding of ProxyAuthz used by Sun et al. pw-radius: allows bind operations to be passed to the specified radius server(s) (OpenLDAP 2.5) rbac: intercepts, decodes and enforces specific RBAC policies per the Apache Fortress RBAC data formats (OpenLDAP 2.5) smbk5pwd: Maintain Samba and Kerberos passwords trace: Log every LDAP request and response totp: provides one time password support (OpenLDAP 2.5) usn: Update Sequence Numbers (OpenLDAP 2.5) variant: allows attributes/values to be shared between several entries (OpenLDAP 2.5) vc: provides the verify credentials extended operation (OpenLDAP 2.5) Other modules Backends and overlays are the two most commonly used types of modules. 
Backends were typically built into the slapd binary, but they may also be built as dynamically loaded modules, and overlays are usually built as dynamic modules. In addition, slapd supports dynamic modules for implementing new LDAP syntaxes, matching rules, controls, and extended operations, as well as for implementing custom access control mechanisms and password hashing mechanisms. OpenLDAP also supports SLAPI, the plugin architecture used by Sun and Netscape/Fedora/Red Hat. In current releases, the SLAPI framework is implemented inside a slapd overlay. While many plugins written for Sun/Netscape/Fedora/Red Hat are compatible with OpenLDAP, very few members of the OpenLDAP community use SLAPI. Available modules Native slapd modules acl/posixgroup – support posixGroup membership in access controls comp_match – support component-based matching kinit – maintain/refresh a Kerberos TGT for slapd passwd/ – additional password hashing mechanisms. Currently includes Kerberos, Netscape, RADIUS, and SHA-2. SLAPI plugins addrdnvalue – add RDN value to an entry if it was omitted in an Add request Release summary The major (functional) releases of OpenLDAP Software include: OpenLDAP Version 1 was a general clean-up of the last release from the University of Michigan project (release 3.3), and consolidation of additional changes. OpenLDAP Version 2.0, released in August 2000, included major enhancements including LDAP version 3 (LDAPv3) support, Internet Protocol version 6 (IPv6) support, and numerous other enhancements. OpenLDAP Version 2.1, released in June 2002, included the transactional database backend (based on Berkeley Database or BDB), Simple Authentication and Security Layer (SASL) support, and Meta, Monitor, and Virtual experimental backends. OpenLDAP Version 2.2, released in December 2003, included the LDAP "sync" Engine with replication support (syncrepl), the overlay interface, and numerous database and RFC-related functional enhancements. OpenLDAP Version 2.3, released in June 2005, included the Configuration Backend (dynamic configuration), additional overlays including RFC-compliant Password Policy software, and numerous additional enhancements. OpenLDAP Version 2.4, released in October 2007, introduced N-way MultiMaster replication, Stand-by master, and the ability to delete and modify Schema elements on the fly, plus many more. OpenLDAP Version 2.5, released in April 2021, introduced the LDAP load balancing proxy server, LDAP transaction support, HA proxy protocol v2 support, plus much more. OpenLDAP Version 2.6, released in October 2021, added to the LDAP load balancer daemon additional load balancing strategies and options to improve coherence with certain LDAP controls and extended operations, and introduced the ability for both slapd and lloadd to log directly to a file rather than via syslog. Replication OpenLDAP supports replication using Content Synchronization as specified in RFC 4533. This spec is hereafter referred to as "syncrepl". In addition to the base specification, an enhancement known as delta-syncrepl is also supported. Additional enhancements have been implemented to support multi-master replication. syncrepl The basic synchronization operation is described in RFC 4533. The protocol is defined such that a persistent database of changes is not required. Rather, the set of changes is implied via change sequence number (CSN) information stored in each entry and optimized via an optional session log, which is particularly useful for tracking recent deletes. 
The model of operation is that a replication client (consumer) sends a "content synchronizing search" to a replication server (provider). The consumer can provide a cookie in this search (especially when it has been in sync with the provider previously). In the OpenLDAP implementation of RFC 4533, this cookie includes the latest CSN that has been received from the provider (called the contextCSN). The provider then returns, as search results (or, see the optimization below, sync info replies), entries that are present (unchanged entries, sent with no attributes and used only in the present phase of the refresh stage), added or modified (modifications are represented in the refresh phase as an add with all current attributes), or deleted (sent with no attributes), so as to put the consumer into a synchronized state based on what is known via its cookie. If the cookie is absent or indicates that the consumer is totally out of sync, then the provider will, in the refresh stage, send an add for each entry it has. In the ideal case, the refresh stage of the response contains only a delete phase with just a small set of adds (including those that represent the current result of modifies) and deletes that have occurred since the time the consumer last synchronized with the provider. However, due to limited session log state (also non-persistent) kept in the provider, a present phase may be required, particularly including the presentation of all unchanged entries as an (inefficient) means of implying what has been deleted in the provider since the consumer last synchronized. The search can be done in either refresh or refreshAndPersist mode, which implies what stages occur. The refresh stage always occurs first. During the refresh stage, two phases may occur: present and delete, where present always occurs before delete. The phases are delimited via a sync info response that specifies which phase is completed. The refresh and persist stages are also delimited via such a sync info response. An optional optimization to more compactly represent a group of entries that are to be presented or deleted is to use a sync info response containing a syncIdSet that identifies the list of entryUUID values of those entries. The present phase is differentiated from the delete phase as follows. Results that present unchanged entries may only be returned in the present phase, while results that delete entries may only be provided in the delete phase. In either phase, add entries (including adds that represent all current attributes of modified entries) can be returned. At the end of a present phase, each entry that the consumer holds that was not identified in an add entry or present response during the present phase is implicitly no longer in the provider and thus must be deleted at the consumer so as to synchronize the consumer with the provider. Once the persist stage begins, the provider sends search results that indicate only the adds, modifies and deletes of entries (with no present indications for unchanged entries) for those entries changed since the refresh stage completed. The persist stage continues indefinitely, meaning that the search has no final "done" response. By contrast, in refresh mode only a refresh stage occurs, and that stage completes with a done response that also ends the present or delete phase (whichever was active). delta-syncrepl This protocol keeps a persistent database of write accesses (changes) and can represent each modify precisely (meaning only the attributes that have changed). 
It is still built on the standard syncrepl specification, which always sends changes as complete entries. But in delta-syncrepl, the transmitted entries are actually sent from a log database, where each change in the main database is recorded as a log entry. The log entries are recorded using the LDAP Log Schema. See also List of LDAP software References External links The OpenLDAP Foundation Using libldap, A tutorial on the OpenLDAP client API An OpenLDAP Update article by Marty Heyman 13 September 2007 Free software programmed in C Cross-platform free software Directory services
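To illustrate the client side of the software described above, the following minimal sketch performs a bind and subtree search against a slapd server using the python-ldap package, which wraps OpenLDAP's libldap. The server URL, bind DN, password and search base are hypothetical placeholders.

# Minimal sketch of an LDAP client search against a slapd server, using
# the python-ldap package (a wrapper around OpenLDAP's libldap). The
# server URL, bind DN, password, and search base are all hypothetical.
import ldap

conn = ldap.initialize("ldap://ldap.example.com")
conn.protocol_version = ldap.VERSION3
conn.simple_bind_s("cn=admin,dc=example,dc=com", "secret")

# Roughly equivalent to the ldapsearch client tool:
#   ldapsearch -b dc=example,dc=com "(objectClass=person)" cn mail
results = conn.search_s("dc=example,dc=com", ldap.SCOPE_SUBTREE,
                        "(objectClass=person)", ["cn", "mail"])
for dn, attrs in results:
    print(dn, attrs.get("cn"), attrs.get("mail"))

conn.unbind_s()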
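The present-phase rule of syncrepl described above can be summarized in a few lines: after a refresh in which the provider sends adds and present notices keyed by entryUUID, any entry the consumer holds that was neither added nor presented is implicitly gone. The following toy sketch illustrates only that reconciliation rule; it is not OpenLDAP's implementation, and the data structures are invented for the example.

# Toy illustration (not OpenLDAP's code) of the RFC 4533 present-phase
# rule described above: after a refresh in which the provider sends
# "add" entries and "present" notices keyed by entryUUID, any entry the
# consumer holds that was neither added nor presented must be deleted.

def reconcile_present_phase(consumer_db, adds, present_uuids):
    """consumer_db and adds map entryUUID -> entry attributes."""
    # Adds carry full entries (including the current state of modified ones).
    consumer_db.update(adds)
    # Everything not mentioned during the present phase is implicitly gone.
    mentioned = set(adds) | set(present_uuids)
    for uuid in list(consumer_db):
        if uuid not in mentioned:
            del consumer_db[uuid]
    return consumer_db

db = {"u1": {"cn": "alice"}, "u2": {"cn": "bob"}, "u3": {"cn": "carol"}}
db = reconcile_present_phase(db, adds={"u4": {"cn": "dave"}},
                             present_uuids={"u1", "u3"})
print(sorted(db))   # ['u1', 'u3', 'u4'] -- 'u2' was implicitly deleted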
901027
https://en.wikipedia.org/wiki/Canadian%20Junior%20Football%20League
Canadian Junior Football League
The Canadian Junior Football League (CJFL) is a national Major Junior Canadian football league consisting of 18 teams playing in six provinces across Canada. The teams compete annually for the Canadian Bowl. Many CJFL players move on to professional football careers in the Canadian Football League (CFL) and elsewhere. The CJFL was formed on May 8, 1974, and its formal mission statement is: "The Canadian Junior Football League provides the opportunity for young men aged 17 to 22 to participate in highly competitive post-high school football that is unique in Canada. The goal of the league is to foster community involvement and yield a positive environment by teaching discipline, perseverance and cooperation. The benefits of the league are strong camaraderie, national competition and life-long friends." A handful of standout players are typically signed directly to CFL rosters each season, while U Sports permits up to two years of play in leagues such as the CJFL before a player begins to lose eligibility. The 9-team Quebec Junior Football League was formerly part of the CJFL, but eventually withdrew and now operates independently. Meanwhile, the Ontario Football Conference (OFC) consists of two divisions: a Varsity Division (ages 11 to 19) and a Junior Division (ages 17 to 22). While the Junior Division remains affiliated with the CJFL and its teams compete for the Canadian Bowl, the Varsity Division is operated solely by the OFC. Current teams Ontario Football Conference Prairie Football Conference British Columbia Football Conference Defunct teams Grand River Predators (Kitchener, Ontario) GTA Bears (2012-2013, Brampton, Ontario) Brampton Bears (2009-2011, Brampton, Ontario) Abbotsford Air Force (Abbotsford, B.C.) North Vancouver Argos (North Vancouver, B.C.) North Shore Cougars 1974-1990 Richmond Raiders (1978-1992, Richmond, B.C.) Tri-City Bulldogs (Vancouver Meralomas, 1925-1990, then 1991-2004 in Coquitlam, B.C.) Vancouver Trojans (Renfrew Trojans 1974-1993) (Burnaby, B.C.) Red Deer Packers (Red Deer, Alberta) Calgary Mohawks (Calgary, Alberta) Calgary Cougars (Calgary, Alberta) Medicine Hat Rattlers (Medicine Hat, Alberta) Regina Rams (Regina, Saskatchewan, moved to the CIS) Rosemount Bombers (Montreal, Quebec) St-Leonard Cougars (Montreal, Quebec) Fort Garry Lions (Winnipeg, Manitoba) St. Vital Mustangs (Winnipeg, Manitoba) Winnipeg Hawkeyes (Winnipeg, Manitoba) Winnipeg Rods (Winnipeg, Manitoba) Brampton Satellites (Brampton, Ontario) Brantford Bisons (Brantford, Ontario) Cornwall Emards (Cornwall, Ontario) Niagara Raiders (St. Catharines, Ontario) Oshawa Hawkeyes (Oshawa, Ontario) Ottawa Junior Riders (Ottawa, Ontario, moved back to the QJFL after 2005) Sault Ste. Marie Storm (Sault Ste. Marie, Ontario) Thunder Bay Giants (Thunder Bay, Ontario) Chateauguay Ramblers (Chateauguay, Quebec) Laval Scorpions (Laval, Quebec) Notre-Dame-de-Grace Maple Leafs (Montreal, Quebec, merged with the Verdun Invictus, renamed to the Verdun Maple Leafs, then the Montreal Junior Alouettes, and finally the Montreal Junior Concordes) St. Hubert Rebelles (Saint-Hubert, Quebec) Verdun Shamcats (Verdun, Quebec) Ville-Émard Juveniles (Ville-Émard, Quebec) Toronto Junior Argonauts (Toronto, Ontario) Champions by city since 1947 Leader-Post Trophy, 1908-1973; Armadale Cup, 1974-1988; Canadian Bowl, 1989-present. Saskatoon, Saskatchewan - Saskatoon Hilltops 22 times - 2019, 2018, 2017, 2016, 2015, 2014, 2012, 2011, 2010, 2007, 2003, 2002, 2001, 1996, 1991, 1985, 1978, 1969, 1968, 1959, 1958, 1953. 
Regina, Saskatchewan - 16 times, Regina Rams 15 times, 1998, 1997, 1995, 1994, 1993, 1987, 1986, 1981, 1980, 1976, 1975, 1973, 1971, 1970, 1966; Regina Thunder 1 time, 2013. Edmonton, Alberta - 8 times, Edmonton Huskies 2005, 2004, 1964, 1963, 1962; Edmonton Wildcats 1983, 1977, 1967. Hamilton, Ontario - 5 times, Hamilton Hurricanes 1972; Hamilton Jr. Tiger Cats 1951, 1950; Hamilton Jr. Wildcats 1949, 1948. Ottawa, Ontario - Ottawa Sooners 4 times, 1992, 1984, 1979, 1974. Nanaimo, British Columbia - Vancouver Island Raiders 3 times, 2009, 2008, 2006. Windsor, Ontario - Windsor AKO Fratmen 3 times, 1999, 1954, 1952. Winnipeg, Manitoba - Winnipeg Rods 3 times, 1961, 1956, 1955. Kelowna, British Columbia - Okanagan Sun 2 times, 2000, 1988. Calgary, Alberta - Calgary Colts 2 times, 1990, 1989. Vancouver, British Columbia - 2 times, Renfrew Trojans 1982; Vancouver Blue Bombers 1947. Montreal, Quebec - 2 times, Notre-Dame-de-Grace Maple Leafs 1965; Montreal Rosemount Bombers 1960. Toronto, Ontario - Toronto Parkdale Lions 1 time, 1957. Langley, British Columbia - Langley Rams 1 time, 2021. The national championship was contested from 1908-1946 with breaks for the World Wars and an additional break in the mid-1930s. In these years the championship was won by teams from Toronto (7 times), Montreal (6 times), Hamilton (4 times), Regina (2 times), and once each by Vancouver, Winnipeg, Calgary, Ottawa, Petrolia, St. Thomas, Woodstock, and London. References External links Sports leagues established in 1974 1974 establishments in Canada
54296578
https://en.wikipedia.org/wiki/Phoenix%20Point
Phoenix Point
Phoenix Point is a strategy video game featuring a turn-based tactics system, developed by the Bulgaria-based independent developer Snapshot Games. It was released on December 3, 2019 for macOS and Microsoft Windows, and for Stadia on January 26, 2021. Xbox One and PlayStation 4 ports are set to be released on October 1, 2021. Phoenix Point is intended to be a spiritual successor to the X-COM series that had been originally created by Snapshot Games head Julian Gollop during the 1990s. Phoenix Point is set in 2047 on an Earth in the midst of an alien invasion, with Lovecraftian horrors on the verge of wiping out humanity. Players start the game in command of a lone base, Phoenix Point, and face a mix of strategic and tactical challenges as they try to save themselves and the rest of humankind from annihilation by the alien threat. Between battles, the aliens adapt through accelerated, evolutionary mutations to the tactics and technology which players use against them. Meanwhile, multiple factions of humans pursue their own objectives as they compete with players for limited resources in the apocalyptic world. How players resolve these challenges can result in different endings to the game. Setting In 2022, Earth's scientists discovered an extraterrestrial virus in permafrost that had begun to melt. Only about one percent of the virus's genome matched anything recorded by scientists up to that time. The virus was named the Pandoravirus; humans and animals who came into contact with it mutated into horrific abominations. By the late 2020s, a global apocalypse began when melting polar icecaps released the Pandoravirus into the world's oceans. The alien virus quickly dominated the oceans, mutating sea creatures of every size into hybrid alien monsters capable of crawling onto land. The oceans transformed in alien ways, after which the Pandoravirus began to infect the world's landmasses with an airborne mutagenic mist. The mist was both a microbial contaminant and a conduit that networked the hive-mind of the Pandoravirus. Humanity was not prepared; everyone who failed to reach high ground beyond the mist's reach succumbed to it. The monstrosities of this future world are intended to evoke themes of tentacles and unknown horror familiar to fans of H. P. Lovecraft. Likewise, the works of John Carpenter influence the themes of science fiction horror, particularly related to the mist that both hides alien monsters and creates them. The game starts in 2047. The alien mist and monsters created by the Pandoravirus have overwhelmed and destroyed worldwide civilization, reducing the remnants of humanity to isolated havens that are sparsely spread across the planet. The Pandoravirus controls the oceans, contaminating all sea life, and has brought humans to the brink of extinction on land. Various factions control the havens of humanity, and they each have very different ideas of how to survive the alien threat. Phoenix Project Players begin the game as the leader of a cell of the Phoenix Project, a global and secretive organization. Since the 20th century, the Phoenix Project has been organized to be ready to help humanity in a time of worldwide peril. The Phoenix Project activates the members of the players' cell, who then gather at a base called Phoenix Point. The cell includes some of the world's best remaining soldiers, scientists and engineers. However, after they assemble under the players' leadership, no further instructions come from the Phoenix Project. 
Gameplay focuses on players' strategic and tactical choices in finding out what has happened to the rest of the Phoenix Project while also trying to save humanity from total annihilation. Disciples of Anu The Disciples of Anu are a human cult with beliefs that synthesize parts of Abrahamic religions with aspects of pre-Pandoravirus doomsday cults. Their world view sees human nature as inherently corrupted by human biology. The Disciples of Anu worship an alien god, which they refer to as "the Dead God". Cultists view the alien mist as both punishment and salvation. They have found ways to develop human-alien hybrids. The process for doing so seems to involve the Disciples deliberately exposing humans to the mist in a way that can allow the humans' intelligence to remain. The Disciples of Anu typically locate their havens in caves, and their haven leaders are called Exarchs. Disciples are led with absolute authority by a leader, the Exalted, who seems to exhibit highly advanced and stable mutations. New Jericho New Jericho is a militaristic human faction which seeks to fight the alien threat directly by building a superior force. They are led by Tobias West, a former billionaire who is also a veteran mercenary. West gained his prominence in the 2020s as the head of Vanadium Inc., a technology and security firm which provided escorts for container ships as they traveled the world's oceans when the Pandoravirus mist and mutations first began to appear. A league of human-focused survivalists, New Jericho seeks to wipe out every trace of the aliens on Earth. Their leaders consider warfare and military technology, including the enhancement of humans through technology, to be the only solution to the alien threat. However, they have conflicting ideas among themselves that threaten to splinter the faction before they can realize their objectives. New Jericho havens are typically fortresses at abandoned industrial or hilltop locations, and they have an extensive manufacturing base for military technology. Allied with the Phoenix Project, they successfully place a targeting beacon at the alien control node, allowing for a decapitation strike. With the aliens leaderless, the war shifts in humanity's favor, leading to an eventual victory. All of humanity is thus united under one government, and the Phoenix Project is given the resources to continue with its original mission and prepare for the origin of the Pandoravirus. Synedrion Synedrion have the most advanced technology of all the human factions. They are radical ecologists who seek to build a new and better human civilization out of the detritus of the old. They value knowledge and seek to form a global nation that exists in partnership with both its citizens and the environment. Viewing aliens as part of Earth's environmental landscape, Synedrion seek to coexist with the aliens by using technology such as a wall that can repel the aliens' mist. Synedrion generally place their havens on elevated hi-tech platforms. Overall, they are a decentralized organization that is mostly interconnected through shared philosophy and stable communication networks; however, havens that become disconnected from others prioritize self-sufficiency. The organizational decision-making of Synedrion is slow. Via an alliance with the Phoenix Project, they are able to deliver a command virus into the alien control node, allowing them to take control of the Pandoravirus and use it to terraform the planet into one that is sustainably suited for the human race. 
The Phoenix Project resumes its original mandate. Gameplay Phoenix Point is described as a spiritual successor to X-COM. In the 1990s, the original X-COM series of video games introduced and integrated global strategy and tactical combat through which players try to save Earth from alien invasions. Though a multiplayer mode has not been ruled out, development has focused on Phoenix Point as a single-player game, based on a survey answered by likely players. Novel ideas include having aliens mutate and evolve in semi-random ways as they try to adapt to players' tactics and technology. Altogether, Phoenix Point is described as adding new and improved gameplay dynamics to the genre. Mutating aliens In Phoenix Point, the alien threat evolves as part of a gameplay system designed to generate a wide variety of challenges and surprises for players in tactical combat. Aliens encountered by players are procedurally generated on two basic levels: first, aliens draw upon a pool of available, interchangeable body parts; second, aliens can change in size and shape. When the Pandoravirus encroaches on new regions, animals and other biological material it finds, including humans, are recombined through mutations to increase the pool of available body parts for the creation of new aliens. For example, in Africa, the procedurally generated mutation system might mash up the body of a lion with body parts of humans and other animals to create alien monsters that resemble a Sphinx. When aliens are victorious in combat, they may mutate further in order to use captured weapons and other technology. In contrast, aliens that are consistently defeated will continue to mutate in a natural selection process which mimics evolution. For example, a mutation might generate aliens with a new melee attack ability or a new defensive counter to certain types of weapons used by the players' soldiers. These mutations are somewhat random; however, the game's AI works in the background to find mutations that can defeat players' soldiers by discarding iterations that are unsuccessful. Aliens will continue to evolve until they develop a mutation that allows them to prevail in battle. Aliens with successful mutations are then deployed in increasing numbers. Thus, the Pandoravirus responds and adapts to the tactics and technology used by players. Competing factions While players contend with the alien threat, there are AI-controlled human factions in Phoenix Point that interact with the game's world much like players. The Disciples of Anu, New Jericho, and Synedrion, with their conflicting ideologies, are the major non-player factions in the game. These factions control most of the world's remaining resources. There are also independent havens that the major factions will try to recruit, as well as isolated survivors who can still be found scavenging outside of havens. The three major factions have unique technologies, traits, and diplomatic relations with each other. They have short-term and long-term goals consistent with their ideologies, and they act to accomplish these goals. For example, the factions can work to expand and develop their havens while players do the same. Players can obtain unique technology from the other factions through conquest or trade. Each of the three major factions also has secrets that can help resolve the alien threat. The major factions thus offer three different ways that players can end the game. Players can ally with only one of the major factions. 
Therefore, players are not able to get all technologies and secrets from all of the non-player factions in the same playthrough. Players have to choose one narrative path of the game that forsakes the other options. This means that players are not able to obtain access to all of the ways to defeat the aliens in a single campaign. Global strategy The world for each Phoenix Point campaign is populated by means of procedural generation. Just to survive, players need to locate and acquire scarce resources and make smart strategic choices in how they obtain and use those resources. Players do not have a global reach initially, so they have to expand thoughtfully. How players acquire resources can have dynamic ramifications for their relations with non-player human factions. Players can engage in open hostilities with other havens by either raiding them for resources or conquering their bases. Players can exploit the conflicts of other factions through kidnappings, sabotage, assassinations, and military coups. Players also can pursue more diplomatic options such as mediating conflicts between factions, defending havens from attacks by aliens or rival factions, forming alliances, or trading. Resource scarcity compels players to deal with non-player factions one way or another, or else the factions will deal with the players. How players choose to interact with other factions substantially determines the narrative that players experience in their gameplay. Meanwhile, non-player factions fight or ally with each other regardless of what players do. Players can interact with non-player factions much like in a 4X video game from the Civilization series. In making strategic choices, players use a globe-shaped strategic user interface called the Geoscape. The Geoscape is a more complex version of the strategic user interfaces used in previous X-COM games. The Geoscape serves as the nexus for players to monitor their exploration and make choices concerning strategic operations, development, and relationships with other human factions. Players use the Geoscape to track the spread of the Pandoravirus mist, which correlates with alien activity. Players also use the Geoscape to deploy squads of soldiers on tactical combat missions to different locales spread around the world. For example, mission locations could be havens of other factions, scavenging sites at abandoned military or civilian infrastructure, alien encampments, players' bases, and other Phoenix Project facilities. Players even have missions where soldiers must venture onto the backs of city-sized alien land walkers while the mammoth monsters are moving and trying to rid themselves of the players' soldiers. Tactical combat Tactical combat mission environments are procedurally generated and destructible. Soldiers can deploy on combat missions with a large variety of weapon systems including flamethrowers, chemical weapons, and ordinary explosives. With the right technology, players are able to deploy aerial and ground-mobile drones. Players also can obtain access to vehicles with customization options that their soldiers can bring into battle for heavy weapon support and tactical transportation. Players can deploy squads of four to roughly sixteen soldiers, though limits on squad size are determined mostly by players' availability of healthy soldiers and transportation capacity. While players try to defeat their alien or human enemies in combat, enemies have their own objectives. 
For example, enemies who attack a haven or base will seek out and try to destroy its vital functional elements. Aliens also will try to kill, eat, or abduct civilians they find on the battlefield. If players assault an enemy facility, soldiers can use stealth to avoid alerting the enemy to their presence; however, once alerted, enemies will seek out and attack the soldiers. Combat occurs through turn-based moves which involve tactical options similar to those found in X-COM games. Each soldier has two basic actions to take in a turn, such as moving and firing a weapon. Weapon fire that misses its target will hit something else and potentially injure or damage what it hits. Basic actions can be extended under two circumstances: first, if an enemy is spotted during a movement action, the soldier halts so that the player can choose to react by firing or moving; second, soldiers have special actions that add to what they can do in a turn. Examples of special actions available to soldiers include overwatch and return fire options. Return fire allows units to retaliate against enemy weapon fire with their own weapon fire. Soldiers have a willpower attribute which determines how many "will points" a soldier has. Soldiers expend will points to take special actions. Soldiers lose will points from injuries, a comrade dying, encountering a horrifying monster, and special enemy attacks. A soldier whose will points fall below zero may panic or lose their sanity. Willpower can be regained through rest or through some special abilities such as a leader's rallying action. Willpower and will points relate to a system in Phoenix Point where combat can inflict lasting physical and even psychological injuries on soldiers. While soldiers can be injured, disabled, and knocked unconscious in battle, they are difficult to kill. The permanent death of soldiers, also called permadeath, is not a significant concern for players. The injuries which soldiers suffer, and even just the ordinary experiences of battle, can lead to drug addictions, permanent physical disabilities, or even insanity, which will require players to research new technologies to rehabilitate. During combat missions, players face a wide variety of enemies, including an evolving assortment of aliens. Some of the most challenging enemies that players eventually face are bus-sized boss aliens. For example, one alien boss is called a Crab Queen. Among its abilities, a Crab Queen is able to create a microbial mist which reveals to the aliens on the battlefield any soldiers who enter it and which can buff or revive aliens; this mist creates a literal fog of war which actively works to the aliens' advantage in battle and otherwise bolsters the horror themes of the game. A Crab Queen also is able to spawn new aliens during combat that will quickly mutate into threats for soldiers on the battlefield. Such abilities are often tied to particular body parts that can be targeted by weapons, so tactical targeting can help players defeat giant boss aliens. Early screenshots of a game prototype show that Phoenix Point has a targeting system which works similarly to the V.A.T.S. system used in Fallout. This targeting system provides a wider selection of tactical choices that players can make in combat to take down difficult foes. For example, a soldier might target a claw of an alien boss to disable a melee attack, an arm to disable a weapon, or an organ that gives the alien boss a special ability. 
These tactical options allow players to combat adversaries which may be significantly tougher than those found in more traditional X-COM games. Development Julian Gollop and David Kaye founded Snapshot Games to create Chaos Reborn, a modern version of Gollop's own 1985 Chaos: The Battle of Wizards, which they released on October 26, 2015. Less than six months later, on March 18, 2016, Gollop used Twitter to provide the first teaser for the development of Phoenix Point. A team of eight Snapshot Games developers led by Gollop worked on designing and producing the game over the course of the next year. With Phoenix Point, Gollop returned to the X-COM genre he created. After investing $450,000 into this first year of development, Snapshot Games launched a Fig crowdfunding campaign to obtain the $500,000 they budgeted to complete the game. In Bulgaria, where the studio is based, video game development costs are about a third of what they are in the United States. The campaign ended successfully on June 7, 2017, raising $765,948 from 10,314 contributors. Crediting the success of the campaign, Snapshot Games announced the next day that they had hired four developers and planned to grow their team to around thirty by the end of the year. Phoenix Point was initially expected to be released in the fourth quarter of 2018 through Steam and GOG for Microsoft Windows, macOS, and Linux platforms. Snapshot Games aims to sell at least one million copies of Phoenix Point. Gollop set this goal based on his confidence in the quality of the game being developed and his belief that there is strong interest in another X-COM game from its creator. In May 2018, Snapshot Games announced that the title would not be released until late 2019, to give the team more time to properly integrate the large amount of content being produced into the game. Gollop said in May 2018: "When we launched our crowdfunding campaign for Phoenix Point in May 2017, we hoped that the game would be well received. But what has happened since has been phenomenal, with increasingly strong pre-orders and great press coverage. People's expectations are higher, our team is growing, and Phoenix Point has become a bigger game." Although the game had initially been anticipated for release through Steam and GOG, Snapshot Games announced a one-year exclusivity deal with the Epic Games Store for Microsoft Windows and macOS, offering its backers one year of free DLC or a full refund if requested no later than April 12, 2019. Based on a report from one of the game's Fig investors, the Epic Games Store exclusivity deal was estimated to be worth about . Gollop stated the added funds from the exclusivity deal would help assure a trouble-free launch and better support early post-release content. Development team Julian Gollop, original designer of X-COM: UFO Defense and X-COM: Apocalypse, is the creative lead for Phoenix Point. The game's music is composed by John Broomhall, who had worked on X-COM: UFO Defense, X-COM: Terror from the Deep, and X-COM: Apocalypse. Other sound design is by Simon Dotkov. Artists for the game include Svetoslav Petrov, who drafts and illustrates concept art; Aleksandar Ignatov, who sculpts concept art into Plasticine sculptures as a foundation for rendering 3D computer models; Samuil Stanoev, who creates 3D computer models; and Borislav Bogdanov, the game's art director. 
Petrov and Bogdanov previously worked in similar artistic roles on the development of Chaos Reborn. Narrative content and lore are developed by writers Allen Stroud and Jonas Kyratzes. Stroud provided world-building and novelization for other games, including Chaos Reborn and Elite: Dangerous. Kyratzes wrote the story and premise of The Talos Principle, writing that was noted as being as responsible for that game's success as its gameplay mechanics. Design inspirations With its open-world environment in which multiple AI-controlled human factions act on their own agendas, Gollop's own X-COM: Apocalypse (1997) provided a foundational example of the type of strategic gameplay that Gollop developed for Phoenix Point. In designing improvements to the strategic gameplay systems he developed in the 1990s, Gollop sought to add a grand strategy view. His plans for Phoenix Point borrow from grand strategy video games with procedural generation elements and emergent gameplay, such as Crusader Kings II. Sid Meier's Alpha Centauri similarly influenced how Gollop planned to develop more 4X-like dynamics in the open-world strategy aspects of Phoenix Point. As for combat, the 2012 X-COM reboot, XCOM: Enemy Unknown, by Firaxis Games and its sequel, XCOM 2, inspired the turn-based tactical combat system and user interface found in Phoenix Point. In particular, the visual presentation of tactical combat missions looks similar to these X-COM games of the 2010s; however, the underlying tactical gameplay mechanics continued to draw inspiration from Gollop's original 1994 X-COM game, X-COM: UFO Defense. Phoenix Point also draws inspiration from the Fallout video game series in how players can target specific body parts of enemies during combat. Lore stories Phoenix Point writers Allen Stroud and Jonas Kyratzes wrote short stories which help establish the setting and narrative themes for the game. Other writers who contributed stories include Thomas Turnbull-Ross and Chris Fellows. With these stories, the writers seek to develop the dystopian world in which Phoenix Point occurs through tales of individuals from around the world who experience different aspects of the alien invasion at various points in the years leading up to the start of the game in 2047. Snapshot Games made many of these stories available for free on the game's official website. It also plans to release a compendium of Phoenix Point stories in ebook and print formats. Reception Upon its release, Phoenix Point was met with "mixed or average" reviews from critics for Microsoft Windows, with an aggregate score of 74% on Metacritic. IGN wrote that the game was "in a state that still feels very experimental and unrefined". Polygon wrote that the game felt "unbalanced", was "between onerous and dull" and that the tactical battles were "simply abysmal". The Guardian also felt that the game was lacking compared to X-COM. PC Gamer found the game full of interesting ideas, but also full of bugs and in a messy state. Network N's Strategy Gamer said the game "lacks character" compared to XCOM and "doesn't seem to have found that human touch that made Firaxis' own take so appealing." 
References External links Official website 2019 video games Alien invasions in video games Apocalyptic video games Crowdfunded video games Horror video games Indie video games MacOS games Permadeath games PlayStation 4 games Science fiction video games Single-player video games Turn-based strategy video games Turn-based tactics video games Video games about extraterrestrial life Video games developed in Bulgaria Video games scored by John Broomhall Video games set in the 2040s Video games using procedural generation Video games with alternate endings Windows games Xbox One games Stadia games
4459460
https://en.wikipedia.org/wiki/3Delight
3Delight
3Delight is 3D computer graphics software that runs on Microsoft Windows, OS X and Linux. It is developed by Illumination Research. It is a photorealistic, offline renderer that has been used to render VFX for numerous feature films, including Chappie. History Work on 3Delight started in 1999. The renderer first became publicly available in 2000. 3Delight was the first RenderMan-compliant renderer to combine the REYES algorithm with on-demand ray tracing. The only other RenderMan-compliant renderer capable of ray tracing at the time was BMRT, which was not a REYES renderer. 3Delight was meant to be a commercial product from the beginning. However, the 3Delight team decided to make it available free of charge from August 2000 to March 2005 in order to build a user base. During this time, customers using a large number of licenses on their sites or requiring extensive support were asked to work out an agreement that specified some form of payment. In March 2005, the license was changed. The first license was still free; from the second license onwards, the renderer cost US$1,000 per two-thread node and US$1,500 per four-thread node. The first company to license 3Delight commercially, in early 2005, was Rising Sun Pictures. The licensing scheme was originally based on the number of threads or cores. Since 2018, all purchased licenses are unlimited multi-core. The first license is free; it was initially limited to four cores, later increased to eight, and now to 12. Features Until version 10 (2013), 3Delight primarily used the REYES algorithm but was also well capable of ray tracing and global illumination. As of version 11 (2014), 3Delight primarily uses path tracing, with the option to use REYES plus ray tracing when needed. The renderer is fully multi-threaded and supports RenderMan Shading Language (RSL) 1.0/2.0 with an optimising compiler and last-stage JIT compilation. 3Delight also supports distributed rendering. This allows for accelerated rendering on multi-CPU hosts or in environments where a large number of computers are joined into a grid/cloud. It implements all required capabilities for a RenderMan-compliant renderer and also the following optional ones:
Area light sources
Depth of field
Displacement mapping
Environment mapping
Global illumination
Level of detail
Motion blur
Programmable shading
Special camera projections (through the "ray trace hider")
Ray tracing
Shadow depth mapping
Solid modeling
Texture mapping
Volume shading
3Delight also supports the following capabilities, which are not part of any capabilities list:
Photon mapping
Point clouds
Hierarchical subdivision surfaces
NURB curves
Brick maps (three-dimensional, mip-mapped textures)
RIB conditionals
Class-based shaders
Co-shaders
Other features include:
Extended display subset functionality that allows different geometric primitives writing to the same display variable to be rendered to different images. For example, display subsets could be used to render the skin and fur of a creature to two separate images at once, without the fur matting the skin passes.
Memory-efficient point clouds. Like brick maps, point clouds are organized in a spatial data structure and are loaded lazily, keeping memory requirements as low as possible.
Procedural geometry is instanced lazily, even during ray tracing, keeping memory requirements as low as possible.
Displacement shaders can be stacked.
Displacement shaders can additionally be run on the vertices of a geometric primitive before that primitive is even shaded.
The gather() shadeop can be used on point clouds and to generate sample distributions from (high dynamic range) images, e.g. for easily combining photon mapping with image-based lighting.
First-order ray differentials on any ray fired from within a shader.
A read/write disk cache that allows the renderer to take strain off the network when heavy scene data needs to be repeatedly distributed to clients on a render farm, or when image data is sent back from such clients to a central storage server.
A C API that allows running RenderMan Shading Language (RSL) code on arbitrary data, e.g. inside a modelling application.
Version Release History
3Delight Studio Pro 12: June 2015
3Delight Studio Pro 11: October 2013
3Delight Studio Pro 10 "Blade Runner": October 2011
3Delight 9.0.0 "Antonioni": December 2009
3Delight 8.5.0 "Bronson": May 2009
3Delight 8.0.0 "Midnight Express": October 2008
3Delight 7.0.0 "Django": November 2007
3Delight 6.5.0 "Ennio": February 2007
3Delight 6.0.1 "Argento": November 2006
3Delight 5.0.0 "Moroder": February 2006
3Delight 4.5.0 "Lucio Fulci": August 2005
3Delight 4.0.0 "Indiana": March 2005
3Delight 3.0.0
3Delight 2.1.0: June 2004
3Delight 2.0.0: January 2004
3Delight 1.0.6beta
3Delight 1.0.0beta: January 2003
3Delight 0.9.6: August 2002
3Delight 0.9.4: June 2002
3Delight 0.9.2: December 2001
3Delight 0.9.0: August 2001
3Delight 0.8.0: March 2001
3Delight 0.6.0: September 2000
3Delight 0.5.1: August 2000
Supported platforms
Apple Mac OS X on the PowerPC and x86 architectures (the last version to support the PPC architecture was version 9; versions 10 and up are Intel x86 only and will not run on PowerPC Macs)
Linux on the x86, x86-64 and Cell architectures
Microsoft Windows on the x86 and x86-64 architectures
Operating environments The renderer comes in both 32-bit and 64-bit versions, the latter allowing the processing of very large scene datasets. Discontinued platforms Platforms supported in the past included:
Digital Equipment Corporation Digital UNIX on the Alpha architecture
Silicon Graphics IRIX on the MIPS architecture (might still be supported, on request)
Sun Microsystems Solaris on the SPARC architecture
Film credits 3Delight has been used for visual effects work on many films. Some notable examples are:
Assault on Precinct 13
Bailey's Billions
Black Christmas
Blades of Glory
The Blood Diamond
Charlotte's Web
CJ7 / Cheung Gong 7 hou
The Chronicles of Narnia: The Lion, the Witch and the Wardrobe
The Chronicles of Riddick
Cube Zero
District 9
Fantastic Four
Fantastic Four: Rise of the Silver Surfer
Final Destination 3
Harry Potter and the Half-Blood Prince
Harry Potter and the Order of the Phoenix
Hulk
The Incredible Hulk
The Last Mimzy
The Ruins
The Seeker: The Dark is Rising
Terminator Salvation
Superman Returns
The Woods
X-Men: The Last Stand
X-Men Origins: Wolverine
It was also used to render the following full CG features:
Adventures in Animation (Imax 3D featurette)
Happy Feet Two
Free Jimmy
References External links 3Delight home page Rodmena Network 3D graphics software RenderMan Proprietary commercial software for Linux
1434220
https://en.wikipedia.org/wiki/Command%20hierarchy
Command hierarchy
A command hierarchy is a group of people who carry out orders based on others' authority within the group. It can be viewed as part of a power structure, in which it is usually seen as the most vulnerable and also the most powerful part. Military chain of command In a military context, the chain of command is the line of authority and responsibility along which orders are passed within a military unit and between different units. In simpler terms, the chain of command is the succession of leaders through which command is exercised and executed. Orders are transmitted down the chain of command, from a responsible superior, such as a commissioned officer, to lower-ranked subordinates who either execute the order personally or transmit it down the chain as appropriate, until it is received by those expected to execute it. "Command is exercised by virtue of office and the special assignment of members of the Armed Forces holding military rank who are eligible to exercise command." In general, military personnel give orders only to those directly below them in the chain of command and receive orders only from those directly above them. A service member who has difficulty executing a duty or order and appeals for relief directly to an officer above his immediate commander in the chain of command is likely to be disciplined for not respecting the chain of command. Similarly, an officer is usually expected to give orders only to his or her direct subordinates, even if only to pass an order down to another service member lower in the chain of command than said subordinate. The concept of chain of command also implies that higher rank alone does not entitle a higher-ranking service member to give commands to anyone of lower rank. For example, an officer of unit "A" does not directly command lower-ranking members of unit "B", and is generally expected to approach an officer of unit "B" if he requires action by members of that unit. The chain of command means that individual members take orders from only one superior and only give orders to a defined group of people immediately below them. If an officer of unit "A" does give orders directly to a lower-ranked member of unit "B", it would be considered highly unusual (either a faux pas or a sign of extraordinary circumstances, such as a lack of time or an inability to confer with the officer in command of unit "B"), as officer "A" would otherwise be seen as subverting the authority of the officer of unit "B". Depending on the situation or the standard procedure of the military organization, the lower-ranked member being ordered may choose to carry out the order anyway, or advise that it has to be cleared with his or her own chain of command first, which in this example would be with officer "B". Refusal to carry out an order is almost always considered insubordination; the only exception usually allowed is if the order itself is illegal (i.e., the person carrying out the order would be committing an illegal act). (See Superior Orders.) In addition, within combat units, line officers are in the chain of command, but staff officers in specialist fields (such as medical, dental, legal, supply, and chaplain) are not, except within their own specialty. For example, a medical officer in an infantry battalion would be responsible for the combat medics in that unit but would not be eligible to command the battalion or any of its subordinate units. The term is also used in a civilian management context to describe comparable hierarchical structures of authority. 
Such structures are included in fire departments, police departments, and other organizations that have a paramilitary command or power structure. Sociology In sociology, a command hierarchy is seen as the most visible element of a "power network." In this model, social capital is viewed as being mobilized in response to orders that move through the hierarchy, leading to the phrase "command and control". Features Regardless of the degree of control or results achieved, and regardless of how the hierarchy is justified and rationalized, certain aspects of a command hierarchy tend to be similar:
rank – especially military rank – "who outranks whom" in the power structure
unity of command – each member of the hierarchy has one and only one superior, precluding the possibility of contradictory orders
strict accountability – those who issue orders are responsible for the consequences, not those who carry them out
strict feedback rules – complaints go up the hierarchy to those with power to deal with them, not down to those who do not have that power
detailed rules for decision making – what criteria apply and when
standardized language and terminology
some ethics and key beliefs in common, usually enforced as early as recruiting and screening of recruits
Problems However, people of such compatible views often have similar systemic biases because they are from the same culture. Such problems as groupthink, or a willingness to accept one standard of evidence internal to the group while requiring drastically higher evidence from outside, are common. In part to address these problems, much modern management science has focused on reducing reliance on command hierarchy, especially for information flow, since the cost of communications is now low and the cost of management mistakes is higher. It is also easier to replace managers, so they have a personal interest in more distributed responsibility and perhaps more consensus decision making. Ubiquitous command and control posits, for military organizations, a generalisation from hierarchies to networks that allows for the use of hierarchies when they are appropriate, and non-hierarchical networks when they are not. This includes the notion of mission agreement, to support "edge in" as well as "top-down" flow of intent. See also Control (management) Command (military formation) Hierarchical organization Incident Command System Command and control Military rank Directive control References Military life Military organization Networks Command and control
1956387
https://en.wikipedia.org/wiki/Internet%20Explorer%206
Internet Explorer 6
Microsoft Internet Explorer 6 (IE6) is a graphical web browser developed by Microsoft for Windows operating systems. Released on August 24, 2001, it is the sixth, now-discontinued, version of Internet Explorer and the successor to Internet Explorer 5. It was the default browser in Windows XP (Internet Explorer 8 later became the default) and Windows Server 2003 and can replace previous versions of Internet Explorer on Windows NT 4.0 SP6a, Windows 98, Windows 2000 and Windows ME; unlike version 5, it does not support Windows 95 or earlier versions. IE6 SP2 and later were only included in Windows XP SP2 and later, while IE7 was only available for Windows XP SP2 and later. Despite dominating market share (attaining a peak of 90% in mid-2004), this version of Internet Explorer has been widely criticized for its security issues and lack of support for modern web standards, making frequent appearances in "worst tech products of all time" lists, with PC World labeling it "the least secure software on the planet." In 2004, Mozilla released Firefox to rival IE6, and it became highly popular and acclaimed for its security, add-ons, speed and other modern features such as tabbed browsing. Microsoft planned to fix these issues in Internet Explorer 7 by June–August 2005, but its release was delayed until October 2006, over five years after IE6 debuted. Because a substantial percentage of the web audience still used the outdated browser (especially in China), campaigns were established in the late 2000s to encourage users to upgrade to newer versions of Internet Explorer or switch to different browsers. Some websites dropped support for IE6 entirely, most notably Google, which dropped support in some of its services in March 2010. According to Microsoft's modern.ie website, 3.1% of users in China and less than 1% in other countries were using IE6. Internet Explorer 6 was the last version to be called Microsoft Internet Explorer; the software was rebranded as Windows Internet Explorer starting in 2006 with the release of Internet Explorer 7. Internet Explorer 6 is no longer supported, and is not available for download from Microsoft. Internet Explorer 6 is the final version of Internet Explorer to support Windows NT 4.0 SP6a, Windows 98, Windows 2000 and Windows Me. The next version, Internet Explorer 7, only supports Windows XP SP2 or later and Windows Server 2003 SP1 or later. Overview When IE6 was released, it included a number of enhancements over its predecessor, Internet Explorer 5. It and its browser engine MSHTML (Trident) are required by many programs, including Microsoft Encarta. IE6 improved support for Cascading Style Sheets, adding a number of properties that previously had not been implemented and fixing bugs such as the Internet Explorer box model bug. In Windows XP, IE6 introduced a redesigned interface based on the operating system's default theme, Luna. In addition, IE6 added DHTML enhancements, content-restricted inline frames, and partial support of DOM level 1 and SMIL 2.0. The MSXML engine was also updated to version 3.0. Other new features included a new version of the Internet Explorer Administration Kit (IEAK), which introduced IExpress, a utility to create self-extracting INF-based installation packages; the Media bar; Windows Messenger integration; fault collection; automatic image resizing; and P3P. Meanwhile, in 2002, the Gopher protocol was disabled. IE6 was the most widely used web browser during its tenure, surpassing Internet Explorer 5.x. 
At its peak in 2002 and 2003, IE6 attained a total market share of nearly 90%, with all versions of IE combined reaching 95%. There was little change in IE's market share for several years until Mozilla Firefox was released and gradually began to gain popularity. Microsoft subsequently resumed development of Internet Explorer and released Internet Explorer 7, further reducing the number of IE6 users. In a May 7, 2003, Microsoft online chat, Brian Countryman, Internet Explorer Program Manager, declared that Internet Explorer would cease to be distributed separately from Windows (IE6 would be the last standalone version); it would, however, be continued as a part of the evolution of Windows, with updates bundled only in Windows upgrades. Thus, Internet Explorer and Windows itself would be kept more in sync. However, after one release in this fashion (IE6 SP2 in Windows XP SP2, in August 2004), Microsoft changed its plan and released Internet Explorer 7 for Windows XP SP2 and Windows Server 2003 SP1 in late 2006. Microsoft Internet Explorer 6 was the last version of Internet Explorer to have "Microsoft" in the title: later versions changed branding to "Windows Internet Explorer", as a reaction to the findings of anti-competitive tying of Internet Explorer and Windows raised in United States v. Microsoft and the European Union Microsoft competition case. On March 4, 2011, Microsoft urged web users to stop using IE6 in favor of newer versions of Internet Explorer. It launched a website called IE6 Countdown, which showed what percentage of the world was still using IE6 and aimed to get people to upgrade. Since 2015, all of the older sample questions offered by IE6 Search Companion on Windows XP and other unique functions have been replaced with "Windows 10 Upgrade". Security problems The security advisory site Secunia reported 24 unpatched vulnerabilities in Internet Explorer 6 as of February 9, 2010. These vulnerabilities, which include several "moderately critical" ratings, amount to 17% of the total 144 security risks listed on the website as of February 11, 2010. As of June 23, 2006, Secunia counted 20 unpatched security flaws for Internet Explorer 6, more, and older, than for any other browser, even at each individual criticality level, although some of these flaws only affect Internet Explorer when running on certain versions of Windows or when running in conjunction with certain other applications. On June 23, 2004, an attacker used two previously undiscovered security holes in Internet Explorer to insert spam-sending software on an unknown number of end-user computers. This malware became known as Download.ject and caused users to infect their computers with a back door and key logger merely by viewing a web page. Infected sites included several financial sites. Probably the biggest generic security failing of Internet Explorer (and other web browsers too) is the fact that it runs with the same level of access as the logged-in user, rather than adopting the principle of least user access. Consequently, any malware executing in the Internet Explorer process via a security vulnerability (e.g. Download.ject in the example above) has the same level of access as the user, something that has particular relevance when that user is an Administrator. Tools such as DropMyRights are able to address this issue by restricting the security token of the Internet Explorer process to that of a limited user. 
However, this added level of security is not installed or available by default, and it does not offer a simple way to elevate privileges ad hoc when required (for example, to access Microsoft Update). Art Manion, a representative of the United States Computer Emergency Readiness Team (US-CERT), noted in a vulnerability report that the design of Internet Explorer 6 Service Pack 1 made it difficult to secure. Manion later clarified that most of these concerns were addressed in 2004 with the release of Windows XP Service Pack 2, and that other browsers had begun to suffer the same vulnerabilities he identified in the above CERT report. In response to a belief that Internet Explorer's frequency of exploitation was due in part to its ubiquity, since its market dominance made it the most obvious target, David Wheeler argued that this is not the full story. He noted that Apache HTTP Server had a much larger market share than Microsoft IIS, yet Apache traditionally had fewer security vulnerabilities at the time. As a result of these issues, some security experts, including Bruce Schneier in 2004, recommended that users stop using Internet Explorer for normal browsing and switch to a different browser instead. Several notable technology columnists suggested the same idea, including The Wall Street Journal's Walt Mossberg and eWeek's Steven Vaughan-Nichols. On July 6, 2004, US-CERT released an exploit report in which the last of seven workarounds was to use a different browser, especially when visiting untrusted sites. Market share Internet Explorer 6 was the most widely used web browser during its tenure (surpassing Internet Explorer 5.x), attaining a peak usage share during 2002 and 2003 in the high 80s, and together with other versions up to 95%. It declined only slowly until 2007, losing about half its market share to Internet Explorer 7 and Mozilla Firefox between late 2006 and 2008. IE6 remained more popular than its successor in business use for more than a year after IE7 came out. A 2008 DailyTech article noted, "A Survey found 55.2% of companies still use IE 6 as of December 2007", while "IE 7 only has a 23.4 percent adoption rate". Net Applications estimated IE6 market share at almost 39% for September 2008. According to the same source, IE7 users migrated to IE8 faster than users of its predecessor IE6 did, leading to IE6 once again becoming the most widely used browser during the summer and fall of 2009, eight years after its introduction. As of February 2010, estimates of IE6's global market share ranged from 10 to 20%. Nonetheless, IE6 continued to maintain a plurality or even majority presence in the browser market of certain countries, notably China and South Korea. Google Apps and YouTube dropped support for IE6 in March 2010, followed by Facebook chat in September. On January 3, 2012, Microsoft announced that usage of IE6 in the United States had dropped below 1%. In August 2012, IE6 was still the most popular IE web browser in China. It was also the second most used browser overall, with a total market share of 22.41%, just behind the Chinese-made 360 Secure Browser at 26.96%. In July 2013, Net Applications reported the global market share of IE6 amongst all Internet Explorer browsers to be 10.9%. As of August 2015, IE6 was being used by less than 1% of users in most countries, with the only exception being China (3.1%). Usage in China fell below 1% by the end of the year. 
Criticism A common criticism of Internet Explorer is the speed at which fixes are released after the discovery of security problems. Microsoft attributes the perceived delays to rigorous testing. A posting to the Internet Explorer team blog on August 17, 2004, explained that there are, at minimum, 234 distinct releases of Internet Explorer that Microsoft supports (covering more than two dozen languages, and several different revisions of the operating system and browser level for each language), and that every combination is tested before a patch is released. In May 2006, PC World rated Internet Explorer 6 the eighth worst tech product of all time. A certain degree of complacency has been alleged against Microsoft over IE6. With nearly 90% of the browser market, the motive for innovation was not strongly present, resulting in the five years between IE6's introduction and its replacement with IE7. This was a contributing factor to the rapid rise of the free software alternative Mozilla Firefox. Unlike most other modern browsers, IE6 does not fully or properly support CSS version 2, which made it difficult for web developers to ensure compatibility with the browser without degrading the experience for users of more advanced browsers. Developers often resorted to strategies such as CSS hacks, conditional comments, or other forms of browser sniffing to make their websites work in IE6 (illustrative examples appear below). Additionally, IE6 lacks support for alpha transparency in PNG images, replacing transparent pixels with a solid colour background (grey unless defined in a PNG bKGD chunk). There is a workaround by way of Microsoft's proprietary AlphaImageLoader, but it is more complicated and not wholly comparable in function. Internet Explorer 6 has also been criticized for its instability. For example, the following code on a website would cause a program crash in IE6: <style>*{position:relative}</style><table><input></table> or <script>for (x in open);</script> The user could also crash the browser with a single line of code in the address bar, causing a pointer overflow: ms-its:%F0: Several campaigns were later aimed at ridding the browser market of Internet Explorer 6: In July 2008, 37signals announced it would phase out support for IE6 beginning in October 2008. In February 2009, some Norwegian sites began hosting campaigns with the same aim. In March 2009, a Danish anti-IE6 campaign was launched. In July 2009, developers of YouTube placed a site notice that warned about the impending deprecation of support for Internet Explorer 6, prompting its users to upgrade their browser; IE6 users were claimed to represent 18% of the site's traffic at that time. In January 2010, the German government, and subsequently the French government, each advised their citizens to move away from IE6. Also in January 2010, Google announced it would no longer support IE6. In February 2010, British citizens began to petition their government to stop using IE6, though the petition was rejected in July 2010. In March 2010, in agreement with the EU, Microsoft began prompting users of Internet Explorer 6 in the EU with a ballot screen in which they were presented with a list of browsers, in random order, to select and upgrade to; the website is located at BrowserChoice.eu. In May 2010, Microsoft's Australian division launched a campaign which compared IE6 to 9-year-old milk and urged users to upgrade to IE8. 
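For illustration, the conditional comments and the AlphaImageLoader workaround mentioned above typically took forms like the following sketch; the stylesheet and image filenames here are hypothetical, not taken from any particular site:
<!--[if lte IE 6]>
<link rel="stylesheet" href="ie6-fixes.css">
<![endif]-->
<style>
/* IE6 lacks native PNG alpha support; this proprietary filter substitutes for it */
.pngfix {
filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='image.png', sizingMethod='scale');
}
</style>
Browsers other than IE 6 and earlier treat the conditional comment as an ordinary HTML comment and ignore it, so the extra stylesheet is served only to the affected browser.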
With the increasing lack of compatibility with modern web standards, popular websites began removing support for IE6 in 2010, including YouTube and its parent company Google; however, large IT-company support teams and other employers forcing staff to use IE6 for compatibility reasons slowed upgrades. Microsoft itself eventually began a campaign to encourage users to stop using IE6, though stating that it would support IE6 until support for Windows XP SP3 (including embedded versions) was removed. However, on January 12, 2016, when the new Microsoft Lifecycle Support policy for Internet Explorer went into effect, IE6 support on all Windows versions ended, more than 14 years after its original release, making the January 2016 security update for multiple versions of XP Embedded the last that Microsoft publicly issued for IE6. Security framework Internet Explorer uses a zone-based security framework, which means that sites are grouped based upon certain conditions. IE allows the restriction of broad areas of functionality, and also allows specific functions to be restricted. The administration of Internet Explorer is accomplished through the Internet Properties control panel. This utility also administers the Internet Explorer framework as it is implemented by other applications. Patches and updates to the browser are released periodically and made available through the Windows Update web site. Windows XP Service Pack 2 adds several important security features to Internet Explorer, including a popup blocker and additional security for ActiveX controls. ActiveX support remains in Internet Explorer, although access to the "Local Machine Zone" has been denied by default since Service Pack 2. However, once an ActiveX control runs and is authorized by the user, it can gain all the privileges of the user, instead of being granted limited privileges as Java or JavaScript are. This was later solved in the Windows Vista version of IE7, which supported running the browser in a low-permission mode, making malware unable to run unless expressly granted permission by the user. Quirks mode Internet Explorer 6 dropped Compatibility Mode, which had allowed Internet Explorer 4 to be run side by side with 5.x. Instead, IE6 introduced quirks mode, which causes it to emulate many behaviors of IE 5.5. Rather than being activated by the user, quirks mode is automatically and silently activated when viewing web pages that contain an old, invalid or no DOCTYPE. This feature was later added to all other major browsers to maximize compatibility with old or poorly coded web pages. Supported platforms Internet Explorer 6.0 supports Windows NT 4.0 (Service Pack 6a only), Windows 98, Windows 2000, Windows ME, Windows XP and Windows Server 2003. The Service Pack 1 update supports all of these versions, but Security Version 1 is only available as part of Windows XP Service Pack 2 and Windows Server 2003 Service Pack 1 and later service packs for those versions. Versions of Windows after XP include Internet Explorer 7 and higher only. Release history System requirements IE6 requires at least:
486/66 MHz processor
One of Windows NT 4.0 SP6a, Windows 98, Windows 2000, or Windows ME
Super VGA (800 × 600) monitor with 256 colors
Mouse or compatible pointing device
RAM: 16–32 MB
Free disk space: 8.7–12.7 MB
See also Comparison of web browsers History of the Internet List of web browsers Timeline of web browsers References External links IEBlog — The weblog of the Internet Explorer team IE 6 Countdown webpage by Microsoft 2001 software Internet Explorer Windows components Windows web browsers Windows XP
218221
https://en.wikipedia.org/wiki/Linux%20Gazette
Linux Gazette
The Linux Gazette was a monthly self-published Linux computing webzine, published between July 1995 and June 2011. Its content was published under the Open Publication License. History It was started in July 1995 by John M. Fisk as a free service. He went on to pursue his studies and become a medical doctor. At Fisk's request, the publication was sponsored and managed by SSC (Specialized System Consultants, who at that time were also the publishers of Linux Journal). The content was always provided by volunteers, including most of the editorial oversight. After several years, the volunteer staff and the management of SSC had a schism (see Bifurcation below). Both the volunteer-run magazine and the magazine run by SSC have since been closed down. One way Linux Gazette differed from other similar webzines (and magazines) was The Answer Gang. In addition to providing a regular page devoted to questions and answers, questions to The Answer Gang were answered on a mailing list, and the subsequent conversations were edited and published in conversational form. This started with an arrangement between Marjorie Richardson and Jim Dennis (whom she dubbed "The Answer Guy"): she would forward questions to him; he would answer them to the original querent and copy her on the reply; then she would gather up all of those and include them in the monthly help desk column. With its motto, "Making Linux just a little more fun", the magazine always had a finger on the pulse of Linux's open, collaborating, and sharing culture. The last issue (#186) was published in June 2011. Bifurcation Fisk transferred the management of the Linux Gazette to SSC (under Phil Hughes) in 1996 in order to pursue medical studies, on the understanding that the publication would continue to be open, free, and non-commercial. In October 2003, the Linux Gazette split into two competing groups. The staff of LinuxGazette.net said that their decision to start their own version of Linux Gazette was due to several factors: SSC's assertion that Linux Gazette would no longer be edited or released in monthly issues, as well as the removal of material from older issues without notifying the authors. SSC also attempted to assert trademark claims over the publication; LinuxGazette.net contributing editor Rick Moen addressed this claim in an article for LinuxGazette.net: The very same day it received our notice of the magazine's departure, SSC, Inc. suddenly filed a US $300 fee and trademark application #78319880 with the USA Patent and Trademark Office (USPTO), requesting registration of the name "Linux Gazette" as a service mark. On that form, SSC certified that it had used the mark in commerce starting August 1, 1996. ... SSC's recent legal claim to hegemony over the name “Linux Gazette” strikes us as outrageously unmerited, and cheeky. Starting May 30, 2004, the US Patent and Trademark Office (USPTO) Trademark Electronic Business Center's TDR (Text Document Retrieval) online record showed a USPTO "Office Action" on application #78319880, summarily refusing registration on grounds that the proposed mark "merely describes the subject matter and nature of the applicant's goods and/or services", and also because publishing a journal is not per se a "service" within the meaning of the term in trademark law (SSC having not provided descriptive evidence or arguments to counter that presumption). The notice gave SSC six months to cure these deficiencies. On Dec. 
27, 2004, USPTO's follow-up Notice of Abandonment ruled that "The trademark application below was abandoned because a response to the Office Action mailed on 05-30-2004 was not received within the 6-month response period", adding that any request for reinstatement would have to be received within two additional months. On Jan. 6, 2005, USPTO noted return of its Notice of Abandonment by the post office: "Not deliverable as addressed. Unable to forward." The magazine run by SSC was closed down, and for an undisclosed reason the volunteer-run magazine was also abandoned. In early 2006, SSC closed the Web site at LinuxGazette.com, and made it an HTTP redirect to the Linux Journal site. References External links Monthly magazines published in the United States Online magazines published in the United States Defunct computer magazines published in the United States Linux magazines Linux websites Magazines established in 1995 Magazines disestablished in 2011 Online computer magazines
5464202
https://en.wikipedia.org/wiki/EMule
EMule
eMule is a free peer-to-peer file sharing application for Microsoft Windows and Linux. Started in May 2002 as an alternative to eDonkey2000, eMule now connects to both the eDonkey network and the Kad network. Often used by clients looking for extremely rare content, the distinguishing features of eMule are the direct exchange of sources between client nodes, fast recovery of corrupted downloads, and the use of a credit system to reward frequent uploaders. Furthermore, eMule transmits data in zlib-compressed form to save bandwidth. eMule is coded in C++ using the Microsoft Foundation Classes. Since July 2002 eMule has been free software, released under the GNU General Public License; its popularity has led to eMule's codebase being used as the basis of the cross-platform clients aMule, JMule, and xMule, along with the release of many eMule mods (modifications of the original eMule) on the Internet. As of August 2017, it is the fourth most downloaded project on SourceForge, with over 685 million downloads. Development was later restarted by the community; the latest stable community version is 0.60d. History The eMule project was started on May 13, 2002 by Hendrik Breitkreuz (also known as Merkur), who was dissatisfied with the original eDonkey2000 client. Over time more developers joined the effort. The source was first released at version 0.02 and published on SourceForge on July 6, 2002. eMule was first released as a binary on August 4, 2002 at version 0.05a. The "Credit System" was implemented for the first time on September 14, 2002 in version 0.19a. The eMule project website started up on December 8, 2002. Current versions (v0.40+) of eMule have added support for the Kad network. This network has an implementation of the Kademlia protocol, which does not rely on central servers as the eDonkey network does, but is an implementation of a distributed hash table. Also added in recent versions were the ability to search using Unicode, allowing searches for files in non-Latin alphabets, and the ability to search servers for files with complete sources of unfinished files on the eDonkey network. In newer versions, a "Bad source list" was added: the application adds an IP address to this list after one unsuccessful connection and thereafter treats it as a "dead" IP. Unavailable IPs are banned for a period of 15 to 45 minutes. Some users have complained that this leads to a loss of active sources and consequently slows download speed. Other recent additions include the ability to run eMule from a user account with limited privileges (thus enhancing security), and Intelligent Corruption Handling (so that a corrupted chunk does not need to be re-downloaded entirely). The 0.46b version added the creation and management of "eMule collection" files, which contain a set of links to files intended to be downloaded as a set. From 2007, many ISPs have used bandwidth throttling on common P2P ports, resulting in slow performance; the 0.47b version therefore adds protocol obfuscation, and eMule will automatically select two port numbers at random in the startup wizard. Basic concepts Each file that is shared using eMule is hashed as a hash list comprising separate 9500 KiB chunks using the MD4 algorithm. The top-level MD4 hash, file size, filename, and several secondary search attributes such as bit rate and codec are stored on eD2k servers and the serverless Kad network. 
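As a rough illustration of this chunked hashing scheme (not eMule's actual C++ code), the following JavaScript sketch assumes a Node.js build whose OpenSSL still exposes the legacy MD4 digest; real implementations also differ in how they handle files whose size is an exact multiple of the chunk size:
// Illustrative sketch of eD2k-style chunk hashing.
const crypto = require('crypto');
const CHUNK = 9728000; // 9500 KiB, as described above

function ed2kHash(buffer) {
  const chunkHashes = [];
  for (let off = 0; off < buffer.length; off += CHUNK) {
    // Hash each 9500 KiB chunk separately with MD4.
    chunkHashes.push(crypto.createHash('md4').update(buffer.subarray(off, off + CHUNK)).digest());
  }
  // A single-chunk file is identified by its chunk hash; larger files by
  // the MD4 of the concatenated chunk hashes (the "top-level" hash).
  if (chunkHashes.length === 1) return chunkHashes[0].toString('hex');
  return crypto.createHash('md4').update(Buffer.concat(chunkHashes)).digest('hex');
}
The resulting top-level hash, together with the file name and size, is what identifies a file in eD2k links.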
Users can search for filenames on the servers or the Kad network and are presented with matching filenames and unique identifiers, each consisting of the top-level MD4 hash and the size of a file, which can be added to their downloads. The client then asks the servers for the locations of other clients sharing the file with that hash. The servers return a set of IP addresses and ports indicating those clients, and eMule asks these peers for the file, queuing until an upload slot becomes available. When a complete chunk of 9,728,000 bytes (9500 KiB) is downloaded and verified, this data is also shared by the downloader, helping others to download the file as well. A client may also know other clients that are sharing the same file; in that case, a source exchange is made directly between the peers.

Newer versions of eMule support AICH (Advanced Intelligent Corruption Handling), which is meant to make eMule's corruption handling competitive with BitTorrent. SHA-1 hashes are computed for each 180 KiB sub-chunk, forming a whole SHA-1 hash tree. AICH is processed purely through peer-to-peer source exchanges. Because eMule requires 10 peers agreeing on the SHA-1 hash, rare files generally do not benefit from AICH.

Low ID

Users who cannot be reached from the outside because they are firewalled, behind a NAT device that has not been correctly port forwarded, or whose IP address ends with a zero (e.g. 123.45.67.0) get a "Low ID" from the servers. They are still able to upload and download but need the help of servers or other Kad clients to be reached by other clients. Since they cannot be notified when they reach the front of an upload queue, they have to poll peers to learn whether an upload slot is available. Since they cannot connect to any other Low ID clients, they see only 40–60% of the clients that a High ID can see. Their IPs and ports are not exchanged between other peers, limiting their possibilities for finding sources via eMule's pure-P2P source exchange. A Low ID client also consumes far more data on an eserver than a High ID client, due to the low-ID callbacks. A releaser or heavy uploader using a releaser mod such as MorphXT or Xtreme who is forced to operate on a Low ID (in a hotel room or at a job, for example) will also find that they have little control over their upload priorities (especially powershares), as the servers appear to limit connection-forwarding for each client, turning the upload queue into a contention situation where the first client able to obtain forwarding and find an open slot gets it.
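The link between an address ending in zero and a Low ID can be made concrete. Per the eD2k convention documented in the eMule Protocol Specification (cited in the external links below), a directly reachable client's ID is its IPv4 address packed as a little-endian integer, and IDs below 2^24 are treated as Low IDs. A short illustrative sketch, with hypothetical helper names:

def ed2k_client_id(ip: str) -> int:
    """ID of a reachable client: IPv4 octets packed little-endian."""
    a, b, c, d = (int(octet) for octet in ip.split("."))
    return a | (b << 8) | (c << 16) | (d << 24)

def is_low_id(client_id: int) -> bool:
    # IDs below 16,777,216 (2**24) are Low IDs. An address whose last
    # octet is zero packs to a value below 2**24, which is why such
    # clients are assigned a Low ID regardless of reachability.
    return client_id < (1 << 24)

For 123.45.67.0 this yields 123 + 45·256 + 67·65536 = 4,402,555, well below the 16,777,216 threshold, so the servers assign a Low ID.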
Credit system

Credits are not global; they are exchanged between two specific clients. The credit system is used to reward users contributing to the network, i.e. uploading to other clients. The strict queue system in eMule is based on the waiting time a user has spent in the queue. The credit system provides a major modifier to this waiting time by taking the upload and download between the two clients into consideration: the more a user uploads to a client, the faster they advance in this client's queue. The modifiers are calculated from the amount of data transferred between the two clients. The values used can be seen in the client's details dialog; to view this information, right-click on any user and choose View Details.

All clients uploading to you are rewarded by the credit system, whether or not they support it themselves; non-supporting clients, however, will grant you no credits when you upload to them. Credits are stored in the clients.met file, with the unique user hash used to identify the client. Your own credits are saved by the client that owes you the credit, which prevents faking of credits; as a consequence, your own credits cannot be displayed.

The computation formula for the Official Credit System is composed of two ratios, with transfer totals measured in megabytes:

Ratio 1 = (Uploaded Total × 2) / Downloaded Total
Ratio 2 = √(Uploaded Total + 2)

Both ratios are then compared and the lower one is used as the modifier. A few conditions exist: if the Uploaded Total is less than 1 MB, the modifier remains at 1; if the client uploads data but doesn't download any, the modifier is fixed at 10; and the modifier can only be between 1 and 10. An exception to this rule applies only when a peer is assigned a "Friend Slot" after being added to the client's Friends list. This automatically assigns a reserved upload slot to that peer so that they can begin downloading regardless of their credit rating. Only one Friend Slot can be reserved, so as to prevent any form of abuse such as upload discrimination.
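As a minimal sketch, the modifier computation can be written directly from the rules above; the function name and the use of floating-point megabyte totals are illustrative only, not eMule's internal representation:

from math import sqrt

def credit_modifier(uploaded_mb: float, downloaded_mb: float) -> float:
    """Queue-time modifier one client grants a peer (totals in MB)."""
    if uploaded_mb < 1.0:
        return 1.0                  # under 1 MB uploaded: no bonus
    if downloaded_mb == 0.0:
        return 10.0                 # upload but no download: maximum
    ratio1 = (uploaded_mb * 2.0) / downloaded_mb
    ratio2 = sqrt(uploaded_mb + 2.0)
    # The lower ratio is used, clamped to the allowed range [1, 10].
    return min(10.0, max(1.0, min(ratio1, ratio2)))

A peer that has uploaded 50 MB and downloaded 10 MB would thus get min(10, √52) ≈ 7.2, advancing correspondingly faster through the queue.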
eMule compared to other P2P applications

eMule is said to be the most complete implementation of the eD2k protocol and its extensions. eMule supports AICH, making its corruption handling competitive with BitTorrent. eMule also supports source exchanges, allowing it to substantially reduce the load on the servers and Kad. With a High ID and well-sourced downloads pre-acquired via server and/or Kad, eMule is able to sustain the peer sources on these files independently, long after disconnection from eD2k and Kad.

eMule mods

As a popular open source program, eMule has many variants, usually called mods. Some mods started as forks from official eMule versions and then continued to develop independently rather than modifying newer official versions. An example of this type of mod is the obsolete eMule Plus. Since eMule Plus forked off before the release of v0.30, the first official version to include Kad, eMule Plus does not support this feature, mainly because the project's development has been abandoned for about four years. Other current mods follow official eMule releases and make their own releases based on each new release of the official version. Since distributed mods are required by the GNU General Public License to publicly share their source code, useful features created by mod developers can be quickly incorporated into an official version.

Fake eMule sites and malware

Due to the popularity and open-source nature of eMule, some third parties have created modified versions of it, which frequently contain spyware and other malware. Some fake sites ask for credit card information or require the user to sign up for a paid membership; the official eMule is free and does not ask for such information. These versions are usually found via rotating advertisements sometimes placed on legitimate sites.

Chinese mods of eMule client

VeryCD's easyMule is a popular eMule client among Chinese users. It has a simplified interface and lacks some advanced settings available in the standard eMule client. As of version 1.1 it only supports searching through the VeryCD database, though external eD2k links are accepted. Some have criticized VeryCD for the misleading name "Dianlv" (电驴; generally the Chinese name for eDonkey or eMule) and the site emule.org.cn, which is named "Dianlv (eMule) Chinese Site" (电驴(eMule)中文网站).

Community version

On July 29, 2017, a "Community Version" of eMule, maintained by users of the official forum, was released. This version is available for download from a GitHub repository and is based on the latest official release or beta, with additional features and bug fixes contributed by the community, prioritizing a more up-to-date codebase.

See also

eDonkey network
Kad network
Comparison of eDonkey software
Comparison of file sharing applications
aMule (Mac version)

References

External links

Official forum
Official IRC network (MindForge)
eMule Protocol Specification by Danny Bickson and Yoram Kulbak from Hebrew University of Jerusalem
Glasnost test eMule traffic shaping (Max Planck Institute for Software Systems)

2002 software Free file sharing software Free software programmed in C++ Windows-only free software Portable software Windows file sharing software Beta software
66956
https://en.wikipedia.org/wiki/Ho%20Chi%20Minh%20City
Ho Chi Minh City
Ho Chi Minh City, formerly (and still commonly) known as Saigon, is the largest city in Vietnam, situated in the south. In the southeastern region, the city surrounds the Saigon River and covers about . Prior to Vietnamese settlement in the 17th century, the city was a sparsely populated area that had been part of the historic empires of Funan, Chenla, and Cambodia. With the arrival of the Vietnamese, the area became more populated, and officials gradually established the city from 1623 to 1698. After it was ceded by the last Vietnamese dynasty to the French in 1862, the name Saigon was adopted and the city underwent urbanization to become a financial center in the region. The city was the capital of South Vietnam until the end of the Vietnam War, which concluded with the North Vietnamese victory in 1975. In 1976, the government of the unified Vietnam renamed Saigon in honor of Hồ Chí Minh.

The primary economic center of Vietnam, it is also an emerging international destination, with popular landmarks related to remnants of its history showcased through its architecture. A major transportation hub, the city hosts Tan Son Nhat International Airport, the busiest airport in Vietnam. Sài Gòn or Thành phố Hồ Chí Minh is also expanding its educational institutions and transport infrastructure, and serves as a major media and entertainment outlet.

Etymology

Ho Chi Minh City has gone by several different names during its history, reflecting settlement by different ethnic, cultural and political groups. Originally a trading port city of the Khmer Empire known as Prey Nokor, it is still known as Prey Nokor to Cambodians today. In time, under the control of the Vietnamese, it was officially renamed Gia Dinh, a name that was retained until the time of the French conquest in the 1860s, when the name Sài Gòn, westernized as Saïgon, was adopted, although the city was still indicated as Gia Định on Vietnamese maps written in Chữ Hán until at least 1891. The current name, Ho Chi Minh City, was given after reunification in 1976 to honor Ho Chi Minh. Even today, however, the informal name of Sài Gòn remains in daily speech. There is a technical difference between the two terms: Sài Gòn is commonly used to refer to the city center in District 1 and the adjacent areas, while Ho Chi Minh City refers to all of its urban and rural districts.

Saigon

One etymology of Saigon (Sài Gòn in Vietnamese) is that Sài is a Sino-Vietnamese word (Hán tự: 柴) meaning "firewood, lops, twigs; palisade", while Gòn is another Sino-Vietnamese word (Hán tự: 棍) meaning "stick, pole, bole", whose meaning evolved into "cotton" in Vietnamese (bông gòn, literally "cotton stick", i.e., "cotton plant", then shortened to gòn). This name may refer to the many kapok plants that the Khmer people had planted around Prey Nokor, which can still be seen at Cây Mai temple and the surrounding areas. It may also refer to the dense and tall forest that once existed around the city, a forest to which the Khmer name, Prey Nokor, already referred.

Other proposed etymologies draw parallels from Tai-Ngon (堤岸), the Cantonese name of Cholon, which means "embankment" (French: quais), and Vietnamese Sai Côn, a translation of the Khmer Prey Nokor. Prey means forest or jungle, and nokor is a Khmer word of Sanskrit origin meaning city or kingdom, related to the English word "nation" – thus, "forest city" or "forest kingdom".
Truong Mealy, former director of King Norodom Sihanouk's royal Cabinet, says that, according to a Khmer chronicle, The Collection of the Council of the Kingdom, Prey Nokor's proper name was Preah Reach Nokor, "Royal City", later locally corrupted to "Prey kor", meaning "kapok forest", from which "Saigon" was derived ("kor" meaning "kapok" in Khmer and Cham, entering Vietnamese as "gòn").

Ho Chi Minh City

The current official name, Thành phố Hồ Chí Minh, was first proclaimed in 1946 and later adopted in 1976. It is abbreviated TP.HCM and translated as Ho Chi Minh City, abbreviated HCMC; in French it is Hô Chi Minh Ville (the circumflex is sometimes omitted), abbreviated HCMV. The name commemorates Ho Chi Minh, the first leader of North Vietnam. This name, though not his given name, was one he favored throughout his later years. It combines a common Vietnamese surname (Hồ, 胡) with a given name meaning "enlightened will" (from Sino-Vietnamese 志明; Chí meaning 'will' or 'spirit', and Minh meaning 'light'), in essence meaning "light bringer". Nowadays, "Saigon" is commonly used to refer to the city's central business districts, whereas "Ho Chi Minh City" is used to refer to the whole city.

History

Early settlement

The earliest settlement in the area was a Funan temple at the location of the current Phụng Sơn Buddhist temple, founded in the 4th century AD. A settlement called Baigaur was established on the site in the 11th century by the Champa. Baigaur was renamed Prey Nokor around 1145; Prey Nokor grew on the site of a small fishing village and an area of forest. The first Vietnamese people crossed the sea to explore this land entirely on their own, without any organization by the Nguyễn Lords. Thanks to the marriage between Princess Nguyễn Phúc Ngọc Vạn, daughter of Lord Nguyễn Phúc Nguyên, and the King of Cambodia, Chey Chettha II, in 1620, relations between Vietnam and Cambodia became smooth, and the people of the two countries could move freely back and forth. Vietnamese settlers began to migrate to the area of Saigon and Dong Nai; before that, the Funanese, Khmer, and Cham had lived there, scattered, since time immemorial.

The period from 1623 to 1698 is considered the period of the formation of the later Saigon. In 1623, Lord Nguyễn sent a mission to ask his son-in-law, King Chey Chettha II, to set up tax collection stations in Prey Nokor (Saigon) and Kas Krobei (Ben Nghe). Although this was a deserted jungle area, it lay on the routes of Vietnamese, Chinese and other travellers to Cambodia and Siam. The next two important events of this period were the establishment of the barracks and residence of Vice King Ang Non and the establishment of a palace at Tan My (near the present-day Cong Quynh – Nguyen Trai crossroads). It can be said that Saigon was formed from these three government establishments.

Nguyễn Dynasty rule

In 1679, Lord Nguyễn Phúc Tần allowed a group of Chinese refugees from the Qing Dynasty to settle in My Tho, Bien Hoa and Saigon. In 1698, Nguyễn Hữu Cảnh, a Vietnamese noble, was sent by the Nguyễn rulers of Huế by sea to establish Vietnamese administrative structures in the area, thus detaching it from Cambodia, which was not strong enough to intervene. He is often credited with the expansion of Saigon into a significant settlement. In 1788, Nguyễn Ánh captured the city and used it as a center of resistance against the Tây Sơn. Two years later, a large Vauban citadel called Gia Định, or Thành Bát Quái ("Eight Diagrams"), was built by Victor Olivier de Puymanel, one of Nguyễn Ánh's French mercenaries.
The citadel was captured by Lê Văn Khôi during his revolt of 1833–35 against Emperor Minh Mạng. Following the revolt, Minh Mạng ordered it to be dismantled, and a new citadel, called Phụng Thành, was built in 1836. In 1859, the citadel was destroyed by the French following the Battle of Kỳ Hòa. Initially called Gia Dinh, the Vietnamese city became known as Saigon in the 18th century.

French colonial era

Ceded to France by the 1862 Treaty of Saigon, the city was planned by the French as a large colonial town. During the late 19th and early 20th centuries, various French-style buildings were constructed, including a botanical garden, the Norodom Palace, the Hotel Continental, Notre-Dame Cathedral, and Bến Thành Market, among many others. In April 1865, Gia Dinh Bao was established in Saigon, becoming the first newspaper published in Vietnam. During the French colonial era, Saigon became known as the "Pearl of the Orient" (Hòn ngọc Viễn Đông), or the "Paris of the Extreme Orient". On 27 April 1931, a new région called Saigon–Cholon, consisting of Saigon and Cholon, was formed; the name Cholon was dropped after South Vietnam gained independence from France in 1955. From about 256,000 in 1930, Saigon's population rose to 1.2 million in 1950.

Republic of Vietnam era

In 1949, former Emperor Bảo Đại made Saigon the capital of the State of Vietnam, with himself as head of state. In 1954, the Geneva Agreement partitioned Vietnam along the 17th parallel (Bến Hải River), with the communist Việt Minh, under Ho Chi Minh, gaining complete control of the northern half of the country, while the southern half gained independence from France. The State officially became the Republic of Vietnam when Bảo Đại was deposed by his prime minister, Ngô Đình Diệm, in the 1955 referendum, with Saigon as its capital. On 22 October 1956, the city was given the official name Đô Thành Sài Gòn ("Capital City Saigon"). After the decree of 27 March 1959 came into effect, Saigon was divided into eight districts and 41 wards. In December 1966, two wards from the old An Khánh Commune of Gia Định were joined to District 1, then separated shortly afterwards to become District 9. In July 1969, District 10 and District 11 were founded, and by 1975 the city consisted of eleven districts, Gia Định, Củ Chi District (Hậu Nghĩa) and Phú Hòa District (Bình Dương).

Saigon served as the financial, industrial and transport center of the Republic of Vietnam. In the late 1950s, with the U.S. providing nearly $2 billion in aid to the Diệm regime, the country's economy grew rapidly under capitalism; by 1960, over half of South Vietnam's factories were located in Saigon. Beginning in the 1960s, however, Saigon experienced economic downturn and high inflation, as it was completely dependent on U.S. aid and imports from other countries. As a result of widespread urbanization, with the population reaching 3.3 million by 1970, the city was described by USAID as having been turned "into a huge slum". The city also suffered from "prostitutes, drug addicts, corrupt officials, beggars, orphans, and Americans with money", and according to Stanley Karnow, it was "a black-market city in the largest sense of the word".

On 28 April 1955, the Vietnamese National Army launched an attack against the Bình Xuyên military force in the city. The battle lasted until May, killing an estimated 500 people and leaving about 20,000 homeless. Ngô Đình Diệm later turned on other paramilitary groups in Saigon, including the Hoa Hao Buddhist reform movement.
On 11 June 1963, the Buddhist monk Thích Quảng Đức burned himself to death in the city in protest against the Diệm regime. On 1 November of the same year, Diệm was assassinated in Saigon in a successful coup led by Dương Văn Minh. During the 1968 Tet Offensive, communist forces launched a failed attempt to capture the city. On 30 April 1975, Saigon fell, ending the Vietnam War with a victory for North Vietnam, and the city came under the control of the Vietnamese People's Army.

Post–Vietnam War and today

In 1976, upon the establishment of the unified communist Socialist Republic of Vietnam, the city of Saigon (including the Cholon area), the province of Gia Ðịnh and two suburban districts of two other nearby provinces were combined to create Ho Chi Minh City, in honor of the late Communist leader Ho Chi Minh. At the time, the city covered an area of 1,295.5 square kilometers, with eight urban districts and five rural ones: Thủ Đức, Hóc Môn, Củ Chi, Bình Chánh, and Nhà Bè. Since 1978, the administrative divisions of the city have been revised numerous times, most recently in 2020, when District 2, District 9, and Thủ Đức District were consolidated to form a municipal city. Today, Ho Chi Minh City, along with its surrounding provinces, is described as "the manufacturing hub" of Vietnam and "an attractive business hub". It was ranked the 111th-most expensive major city in the world in a 2020 survey of 209 cities. In terms of international connectedness, as of 2020, the city was classified as a "Beta" city by the Globalization and World Cities Research Network.

Geography

Ho Chi Minh City is located in the south-eastern region of Vietnam, south of Hanoi. The average elevation is above sea level for the city center and for the suburban areas. It borders Tây Ninh Province and Bình Dương Province to the north, Đồng Nai Province and Bà Rịa–Vũng Tàu Province to the east, Long An Province to the west, and Tiền Giang Province and the East Sea to the south, with a coast long. The city covers an area of 2,095 km2 (809 sq mi, or 0.63% of the surface of Vietnam), extending up to Củ Chi District ( from the Cambodian border) and down to Cần Giờ on the East Sea. The distance from the northernmost point (Phú Mỹ Hưng Commune, Củ Chi District) to the southernmost one (Long Hòa Commune, Cần Giờ District) is , and from the easternmost point (Long Bình Ward, District Nine) to the westernmost one (Bình Chánh Commune, Bình Chánh District) is . Due to its location on the Mekong Delta, the city is fringed by tidal flats that have been heavily modified for agriculture.

Climate

The city has a tropical climate, specifically a tropical savanna climate (Aw), with a high average humidity of 78–82%. The year is divided into two distinct seasons. The rainy season, with an average rainfall of about annually (about 150 rainy days per year), usually lasts from May to November. The dry season lasts from December to April. The average temperature is , with little variation throughout the year. The highest temperature recorded was in April, while the lowest was in January. On average, the city experiences between 2,400 and 2,700 hours of sunshine per year.

Flooding

Ho Chi Minh City is considered one of the cities most vulnerable to the effects of climate change, particularly flooding. During the rainy season, a combination of high tides, heavy rains, high flow volumes in the Saigon and Dong Nai rivers, and land subsidence results in regular flooding in several parts of the city. A once-in-100-year flood would cause 23% of the city to flood.
Administration

Ho Chi Minh City is a municipality at the same level as Vietnam's provinces, and is subdivided into 22 district-level sub-divisions (as of 2020):

5 rural districts ( in area), designated as rural (huyện): Củ Chi, Hóc Môn, Bình Chánh, Nhà Bè, Cần Giờ

16 urban districts ( in area), designated urban or suburban (quận): District 1, District 3, District 4, District 5, District 6, District 7, District 8, District 10, District 11, District 12, Gò Vấp, Tân Bình, Tân Phú, Bình Thạnh, Phú Nhuận, Bình Tân

1 city ( in area), designated a municipal city (thành phố thuộc thành phố trực thuộc trung ương): Thủ Đức

They are further subdivided into 5 commune-level towns (or townlets), 58 communes, and 249 wards (see the list of HCMC administrative units below). On 9 December 2020, the consolidation of District 2, District 9 and Thủ Đức District was announced, having been approved by the Standing Committee of the National Assembly.

City government

The Ho Chi Minh City People's Committee is a 13-member executive branch of the city. The current chairman is Nguyễn Thành Phong. Several vice chairmen and chairwomen on the committee hold responsibility over various city departments. The legislative branch of the city is the Ho Chi Minh City People's Council, which consists of 105 members; the current chairwoman is Nguyễn Thị Lệ. The judicial branch of the city is the Ho Chi Minh City People's Court; the current chief judge is Lê Thanh Phong. The executive committee of the Communist Party of Ho Chi Minh City is the leading organ of the Communist Party in the city; the current secretary is Nguyễn Văn Nên. The permanent deputy secretary of the Communist Party is ranked second in city politics after the Secretary of the Communist Party, while the chairman of the People's Committee is ranked third and the chairman of the People's Council fourth.

Demographics

The population of Ho Chi Minh City, as of the 1 October 2004 census, was 6,117,251 (of which the 19 inner districts had 5,140,412 residents and the 5 suburban districts 976,839 inhabitants). In mid-2007, the city's population was 6,650,942, with the 19 inner districts home to 5,564,975 residents and the five suburban districts containing 1,085,967 inhabitants. The 2009 census put the city's population at 7,162,864 people, about 8.34% of the total population of Vietnam, making it the city with the highest population concentration in the country. As of the end of 2012, the total population of the city was 7,750,900, an increase of 3.1% from 2011. As an administrative unit, its population is also the largest at the provincial level. According to the 2019 census, Ho Chi Minh City has a population of over 8.9 million within the city proper and over 21 million within its metropolitan area.

The city's population is expected to grow to 13.9 million by 2025, and is expanding faster than earlier predictions. In August 2017, the city's mayor, Nguyen Thanh Phong, admitted that previous estimates of 8–10 million were drastic underestimations; the actual population (including those who have not officially registered) was estimated at 13 million in 2017. The Ho Chi Minh City Metropolitan Area, a planned metropolitan area covering most of the southeast region plus Tiền Giang Province and Long An Province, is projected to have a population of 20 million inhabitants by 2020. Inhabitants of Ho Chi Minh City are usually known as "Saigonese" in English and "dân Sài Gòn" in Vietnamese.
Ethnic groups

The majority of the population is ethnic Vietnamese (Kinh), at about 93.52%. Ho Chi Minh City's largest minority ethnic group is the Chinese (Hoa), with 5.78%. Cholon – in District 5 and parts of Districts 6, 10 and 11 – is home to the largest Chinese community in Vietnam. The Hoa speak a number of varieties of Chinese, including Cantonese, Teochew (Chaozhou), Hokkien, Hainanese and Hakka; smaller numbers also speak Mandarin Chinese. Other ethnic minorities include the Khmer, with 0.34%, and the Cham, with 0.1%. Various other nationalities, including Koreans, Japanese, Americans, South Africans, Filipinos and Britons, also reside in Ho Chi Minh City as expatriate workers, particularly in Thủ Đức and District 7.

Religion

The three most prevalent religions in Ho Chi Minh City are Mahayana Buddhism, Taoism and Confucianism (via ancestor worship), which are often celebrated together in the same temple. Most Vietnamese and Han Chinese are strongly influenced by these traditional religious practices. There is a sizeable community of Roman Catholics, representing about 10% of the city's population. Other minority groups include Hòa Hảo, Cao Đài, Protestants, Muslims, Hindus, and members of the Baháʼí Faith.

Economy

Ho Chi Minh City is the economic center of Vietnam and accounts for a large proportion of the economy of Vietnam. Although the city takes up just 0.6% of the country's land area, in 2005 it contained 8.34% of the population of Vietnam and accounted for 20.2% of its GDP, 27.9% of industrial output and 34.9% of the FDI projects in the country. In 2005, the city had 4,344,000 labourers, of whom 130,000 were over the labour age norm (in Vietnam, 60 for male and 55 for female workers). In 2009, GDP per capita reached $2,800, compared to the country's average of $1,042.

Sectors

The economy of Ho Chi Minh City consists of industries ranging from mining, seafood processing, agriculture and construction to tourism, finance, industry and trade. The state-owned sector makes up 33.3% of the economy and the private sector 4.6%, with the remainder in foreign investment. Concerning its economic structure, the service sector accounts for 51.1%, industry and construction for 47.7%, and forestry, agriculture and others for just 1.2%. The city and its ports are part of the 21st Century Maritime Silk Road that runs from the Chinese coast via the Suez Canal to the Mediterranean and on to the Upper Adriatic region of Trieste, with its rail connections to Central and Eastern Europe.

Quang Trung Software Park is a software park situated in District 12, approximately from downtown Ho Chi Minh City, hosting software enterprises as well as dot-com companies. The park also includes a software training school. Dot-com investors here are supplied with other facilities and services, such as residences and high-speed access to the Internet, as well as favorable taxation. Together with the Hi-Tech Park in Thủ Đức and the 32 ha software park inside the Tan Thuan Export Processing Zone in District 7, Ho Chi Minh City aims to become an important hi-tech city in the country and the South-East Asia region. These parks help the city in particular, and Vietnam in general, to become an outsourcing location for enterprises in developed countries, as India has done. Some 300,000 businesses, including many large enterprises, are involved in high-tech, electronic, processing and light industries, and also in construction, building materials and agricultural products.
Crude oil is also a significant part of the city's economic base, and investors continue to pour money into the city. Total local private investment was 160 billion đồng (US$7.5 million), with 18,500 newly founded companies; investment trends toward high technology, services and real estate projects. As of June 2006, the city had three export processing zones and twelve industrial parks, in addition to Quang Trung Software Park and the Ho Chi Minh City Hi-Tech Park. Intel has invested about 1 billion dollars in a factory in the city. More than fifty banks with hundreds of branches and about 20 insurance companies are also located in the city. The Stock Exchange, the first stock exchange in Vietnam, was opened in 2001. There are 171 medium- and large-scale markets, as well as several supermarket chains, shopping malls, and fashion and beauty centers.

Shopping

Some of the larger shopping malls and plazas opened recently include:

Maximark – multiple locations (District 10 and Tân Bình District)
Satramart – 460 3/2 Street, Ward 12, District 10
Auchan (2016) – multiple locations (District 10 and Gò Vấp District)
Lotte Mart – multiple locations (District 7, District 11 and Tân Bình District)
AEON Mall – multiple locations (Bình Tân District and Tân Phú District)
SC VivoCity (2015) – 1058 Nguyễn Văn Linh Boulevard, Tân Phong Ward, District 7
Zen Plaza (1995) – 54–56 Nguyễn Trãi St, District 1
Saigon Centre (1997) – 65 Lê Lợi Blvd, Bến Nghé Ward, District 1
Diamond Plaza (1999) – 34 Lê Duẩn Blvd, District 1
Big C (2002) – multiple locations (District 10, Bình Tân District, Gò Vấp District, Phú Nhuận District and Tân Phú District)
METRO Cash & Carry/Mega Market – multiple locations (District 2, District 6 and District 12)
Crescent Mall – Phú Mỹ Hưng, District 7
Parkson (2005–2009) – multiple locations (District 1, District 2, District 5, District 7, District 11 and Tân Bình District)
Saigon Paragon (2009) – 3 Nguyễn Lương Bằng St, Tân Phú Ward, District 7
NowZone (2009) – 235 Nguyễn Văn Cừ Ave, Nguyễn Cư Trinh Ward, District 1
Kumho Asiana Plaza (2010) – 39 Lê Duẩn Blvd, Bến Nghé Ward, District 1
Vincom Centre (2010) – 70–72 Lê Thánh Tôn St, Bến Nghé Ward, District 1
Union Square – 171 Lê Thánh Tôn St, Bến Nghé Ward, District 1
Vincom Mega Mall (2016) – 161 Hà Nội Highway, Thảo Điền Ward, District 2
Bitexco Financial Tower (2010) – Alley 2 Hàm Nghi Blvd, Bến Nghé Ward, District 1
Co.opmart – multiple locations (District 1, District 3, District 5, District 6, District 7, District 8, District 10, District 11, District 12, Bình Chánh District, Bình Tân District, Bình Thạnh District, Củ Chi District, Gò Vấp District, Hóc Môn District, Phú Nhuận District, Tân Phú District and Thủ Đức District)
Landmark 81 (2018) – 208 Nguyễn Hữu Cảnh St, Bình Thạnh District
Minh Hung medicine (2018) – 73 Street No. 5, Bình Hưng Hòa Ward, Bình Tân District
VinMart – multiple locations (District 1, District 2, District 7, District 9, District 10, Bình Chánh District, Bình Thạnh District, Gò Vấp District, Tân Bình District and Thủ Đức District)

In 2007, three million foreign tourists, about 70% of the total number of tourists to Vietnam, visited the city. Total cargo transport through Ho Chi Minh City's ports reached 50.5 million tonnes, nearly one-third of the total for Vietnam.

New urban areas

With a population of 8,382,287 as of the census of 1 April 2010 (registered residents plus migrant workers, with a metropolitan population of 10 million), Ho Chi Minh City needs increased public infrastructure.
To this end, the city and central governments have embarked on an effort to develop new urban centres. The two most prominent projects are the Thu Thiem city centre in District 2 and the Phu My Hung Urban Area, a new city centre in District 7 (part of the Saigon South project), where various international schools, such as Saigon South International School and Australia's Royal Melbourne Institute of Technology, are located. In December 2007, Phu My Hung's new city centre completed the 10–14-lane-wide Nguyen Van Linh Boulevard, linking the Saigon port areas and the Tan Thuan Export Processing Zone to National Highway 1 and the Mekong Delta area. In November 2008, a brand-new trade centre, the Saigon Exhibition and Convention Centre, also opened its doors. Other projects include Grandview, Waterfront, Sky Garden, Riverside and Phu Gia 99. Phu My Hung's new city centre received the first Model New City Award from the Vietnamese Ministry of Construction.

Tourism

Tourist attractions in Ho Chi Minh City are mainly related to the periods of French colonization and the Vietnam War. The city centre has some wide American-style boulevards and a few French colonial buildings. The majority of these tourist spots are located in District 1, a short distance from each other. The most prominent structures in the city centre are the Reunification Palace (Dinh Thống Nhất), City Hall (Ủy ban nhân dân Thành phố), the Municipal Theatre (Nhà hát thành phố, also known as the Opera House), the City Post Office (Bưu điện thành phố), the State Bank Office (Ngân hàng nhà nước), the City People's Court (Tòa án nhân dân thành phố) and Notre-Dame Cathedral (Nhà thờ Đức Bà), constructed between 1863 and 1880. Historic hotels include the Hotel Majestic, dating from the French colonial era, and the Rex and Caravelle hotels, former hangouts for American officers and war correspondents in the 1960s and '70s.

The city has various museums, including the Ho Chi Minh City Museum, the Museum of Vietnamese History, the Revolutionary Museum, the Museum of South-eastern Armed Forces, the War Remnants Museum, the Museum of Southern Women, the Museum of Fine Arts, the Nha Rong Memorial House, and the Ben Duoc Relic of Underground Tunnels. The Củ Chi tunnels are north-west of the city in Củ Chi District. The Saigon Zoo and Botanical Gardens, in District 1, date from 1865. The Đầm Sen Tourist and Cultural Park, Suối Tiên Amusement and Culture Park, and Cần Giờ's eco beach resort are three recreational sites inside the city which are popular with tourists.

Aside from the Municipal Theatre, other places of entertainment include the Bến Thành Theatre, the Hòa Bình Theatre, and the Lan Anh Music Stage. Ho Chi Minh City is home to hundreds of cinemas and theatres, with cinema and drama theatre revenue accounting for 60–70% of Vietnam's total revenue in this industry. Unlike other theatrical organisations found in Vietnam's provinces and municipalities, residents of Ho Chi Minh City keep their theatres active without subsidies from the Vietnamese government. The city is also home to most of the private film companies in Vietnam. Like many of Vietnam's smaller cities, the city boasts a multitude of restaurants serving typical Vietnamese dishes such as phở or rice vermicelli. Backpacking travellers most often frequent the "Backpackers' Quarter" on Phạm Ngũ Lão Street and Bùi Viện Street, District 1.
It was estimated that 4.3 million tourists visited Vietnam in 2007, of whom 70 percent (approximately 3 million) visited Ho Chi Minh City. According to more recent international tourist statistics, Ho Chi Minh City welcomed 6 million tourists in 2017. According to Mastercard's 2019 report, Ho Chi Minh City is also the country's second most visited city (18th in Asia Pacific), with 4.1 million overnight international visitors in 2018 (after Hanoi, with 4.8 million visitors).

Transport

Air

The city is served by Tân Sơn Nhất International Airport, the largest airport in Vietnam in terms of passengers handled (with an estimated number of over 15.5 million passengers per year in 2010, accounting for more than half of Vietnam's air passenger traffic). Long Thành International Airport is scheduled to begin operating in 2025. Based in Long Thành District, Đồng Nai Province, about east of Ho Chi Minh City, Long Thành Airport will serve international flights, with a maximum traffic capacity of 100 million passengers per year when fully completed; Tân Sơn Nhất Airport will serve domestic flights.

Rail

Ho Chi Minh City is also a terminal for many Vietnam Railways routes. The Reunification Express (tàu Thống Nhất) runs from Ho Chi Minh City to Hanoi from Saigon Railway Station in District 3, with stops at cities and provinces along the line. Within the city, the two main stations are Sóng Thần and Sài Gòn; there are also several smaller stations, such as Dĩ An, Thủ Đức, Bình Triệu and Gò Vấp. However, rail transport is not fully developed and presently accounts for only 0.6% of passenger traffic and 6% of goods shipments.

Water

The city's location on the Saigon River makes it a bustling commercial and passenger port; besides a constant stream of cargo ships, passenger boats operate regularly between Ho Chi Minh City and various destinations in southern Vietnam and Cambodia, including Vũng Tàu, Cần Thơ and the Mekong Delta, and Phnom Penh. Traffic between Ho Chi Minh City and Vietnam's southern provinces has steadily increased over the years; the Doi and Te canals, the main routes to the Mekong Delta, receive 100,000 waterway vehicles every year, representing around 13 million tons of cargo. A project to dredge these routes was approved to facilitate transport, to be implemented in 2011–14. HCMC Ferrybus was also established as a form of maritime public transport.

Public transport

Metro

The Ho Chi Minh City Metro, a rapid transit network, is being built in stages. The first line is under construction and expected to be fully operational by 2024. It will connect Bến Thành to Suối Tiên Park in District 9, with a depot in Long Binh. Planners expect the route to serve more than 160,000 passengers daily. A line between Bến Thành and Tham Luong in District 12 has been approved by the government, and several more lines are the subject of ongoing feasibility studies.

Bus

Public buses run on many routes, and tickets can be purchased on the bus. The city has a number of coach stations, which serve coaches to and from other areas of Vietnam. The largest coach station, in terms of passengers handled, is the Mien Dong Coach Station in Bình Thạnh District.

Private transport

The main means of transport within the city are motorbikes, cars, buses, taxis and bicycles. Motorbikes remain the most common way to move around the city.
Taxis are plentiful and usually have meters, although it is also common to agree on a price before taking a long trip, for example from the airport to the city centre. For short trips, "xe ôm" (literally, "hug vehicle") motorcycle taxis are available throughout the city, usually congregating at major intersections. Motorcycle and car taxis can also be booked through ride-hailing apps such as Grab and GoJek. A popular activity for tourists is a tour of the city on cyclos, which allow for longer trips at a more relaxed pace. In recent years, cars have become more popular. There are approximately 340,000 cars and 3.5 million motorcycles in the city, almost double the numbers in Hanoi. The growing number of cars tends to cause gridlock and contributes to air pollution. The government has singled out motorcycles as the cause of the congestion and has developed plans to reduce their number and to improve public transport.

Expressway

Ho Chi Minh City has two expressways forming part of the North–South Expressway system, connecting the city with other provinces. The first, the Ho Chi Minh City–Trung Luong Expressway, opened in 2010 and connects Ho Chi Minh City with Tiền Giang and the Mekong Delta. The second, the Ho Chi Minh City–Long Thanh–Dau Giay Expressway, opened in 2015 and connects the city with Đồng Nai, Bà Rịa–Vũng Tàu and the southeast of Vietnam. The Ho Chi Minh City–Long Khanh Expressway is being planned and will be constructed in the near future.

Healthcare

The city's health care system is relatively developed, with a chain of about 100 government-owned hospitals or medical centres and dozens of privately owned clinics. The 1,400-bed Chợ Rẫy Hospital (upgraded with Japanese aid), the French-sponsored Institute of Cardiology and the City International Hospital are among the top medical facilities in the South-East Asia region.

Education

High schools

Notable high schools in Ho Chi Minh City include Lê Hồng Phong High School for the Gifted, Phổ Thông Năng Khiếu High School for the Gifted, Trần Đại Nghĩa High School for the Gifted, Nguyễn Thượng Hiền High School, Nguyễn Thị Minh Khai High School, Marie Curie High School and Võ Thị Sáu High School, among others. Though the schools named above are all public, private education is also available in Ho Chi Minh City. High school consists of grades 10–12 (sophomore, junior and senior).
List of public high schools in Ho Chi Minh City (limited list):

VNUHCM High School for the Gifted
Lê Hồng Phong High School for the Gifted
Trần Đại Nghĩa High School for the Gifted
Nguyễn Thượng Hiền High School
Nguyễn Thị Minh Khai High School
Bùi Thị Xuân High School
Phú Nhuận High School
Bình Phú High School
Mạc Đĩnh Chi High School
Nguyễn Du Secondary School
Nguyễn Hữu Cầu High School
Nguyễn Hữu Huân High School
Marie Curie High School
Võ Thị Sáu High School
Võ Trường Toản High School
Hùng Vương High School
Chu Văn An High School
Trưng Vương High School
Lương Thế Vinh High School
Trần Khai Nguyên High School
Ten Lơ Man High School
Nguyễn Trãi High School
Nguyễn Khuyến High School
Nguyễn Du High School
Nguyễn Công Trứ High School
Trần Hưng Đạo High School
Nguyễn Chí Thanh High School
Nguyễn Thái Bình High School
Thủ Đức High School
Nguyễn Thị Diệu High School

List of private high schools in Ho Chi Minh City (limited list):

British International School Ho Chi Minh City
International School Ho Chi Minh City
Saigon South International School
Ngô Thời Nhiệm High School
Nguyễn Khuyến High School
Khai Trí High School
Quang Trung Nguyễn Huệ High School
Trí Đức High School
Trương Vĩnh Ký High School
VinSchool
VStar School
Australian International School
Western Australian International School Systems
The Canadian International School
Hong Ha Secondary-High School

Universities

Higher education in Ho Chi Minh City is a burgeoning industry; the city boasts over 80 universities and colleges with a total of over 400,000 students. Notable universities include Vietnam National University, Ho Chi Minh City, with 50,000 students distributed among six schools: the University of Technology (Vietnamese: Đại học Bách khoa, formerly Phú Thọ National Center of Technology); the University of Sciences (formerly Saigon College of Sciences); the University of Social Sciences and Humanities (formerly Saigon College of Letters); the International University; the University of Economics and Law; and the newly established University of Information Technology.

Some other important higher education establishments include HCMC University of Pedagogy, the University of Economics, the University of Architecture, Pham Ngoc Thach University of Medicine, Nong Lam University (formerly the University of Agriculture and Forestry), the University of Law, the University of Technical Education, the University of Banking, the University of Industry, the Open University, the University of Sports and Physical Education, the University of Fine Arts, the University of Culture, the Conservatory of Music, the Saigon Institute of Technology, Văn Lang University, Saigon University and Hoa Sen University.

In addition to the above public universities, Ho Chi Minh City is also home to several private universities. One of the most notable is RMIT International University Vietnam, a campus of the Australian public research university RMIT, with an enrollment of about 6,000 students; tuition at RMIT is about US$40,000 for an entire course of study. Another is the Saigon International University (SIU), run by the Group of Asian International Education. Enrollment at SIU averages about 12,000 students, and depending on the type of program, tuition costs US$5,000–6,000 per year.

Culture

Museums and art galleries

Due to its history, the city's artworks have generally been inspired by both Western and Eastern styles.
Famous art locations in Ho Chi Minh City include the Ho Chi Minh City Museum of Fine Arts and various art galleries located on Nam Ky Khoi Nghia, Tran Phu and Bui Vien streets.

Food and drink

Ho Chi Minh City cultivates a strong food and drink culture, with many roadside restaurants, coffee shops and food stalls where locals and tourists can enjoy local cuisine and beverages at low prices. The city is currently ranked among the top five cities in the world for street food.

Media

The city's media is the most developed in the country. At present, there are seven daily newspapers: Sai Gon Giai Phong (Liberated Saigon) and its Vietnamese, investment and finance, sports, evening and weekly editions; Tuổi Trẻ (Youth), the highest-circulation newspaper in Vietnam; Thanh Niên (Young People), with the second-largest circulation in the south of Vietnam; Người Lao Động (Labourer); The Thao (Sports); Pháp Luật (Law); and The Saigon Times Daily, an English-language newspaper; as well as more than 30 other newspapers and magazines. The city has hundreds of printing and publishing houses, many bookstores and a widespread network of public and school libraries; the city's General Library houses over 1.5 million books. Locally based Ho Chi Minh City Television (HTV) is the second-largest television network in the nation, just behind the national Vietnam Television (VTV), broadcasting 24/7 on seven different channels (using analog and digital technology). Many major international TV channels are provided through two cable networks (SCTV and HTVC), with over one million subscribers. The Voice of Ho Chi Minh City is the largest radio station in south Vietnam.

Internet coverage, especially through ADSL connections, is rapidly expanding, with over 2,200,000 subscribers and around 5.5 million frequent users. Internet service providers (ISPs) operating in Ho Chi Minh City include the Vietnam Data Communication Company (VDC), the Corporation for Finance and Promoting Technology (FPT), Netnam Company, the Saigon Post and Telecommunications Services Corporation (Saigon Postel Corporation, SPT) and Viettel Company. The city has more than two million fixed telephones and about fifteen million cellular phones (the latter growing annually by 20%). Mobile phone service is provided by a number of companies, including Viettel Mobile, MobiFone, VinaPhone and Vietnam Mobile.

Sports

Ho Chi Minh City is home to 91 football fields, 86 swimming pools and 256 gyms. The largest stadium in the city is the 25,000-seat Thống Nhất Stadium, located on Đào Duy Từ Street in Ward 6 of District 10. The next largest is Army Stadium, located near Tan Son Nhat Airport in Tân Bình District. Army Stadium was one of the venues for the 2007 AFC Asian Cup finals; as well as being a sporting venue, it is also the site of a music school. Phú Thọ Racecourse, another notable sporting venue established during colonial times, is the only racetrack in Vietnam. The city's Department of Physical Education and Sports also manages a number of clubs, including Phan Dinh Phung, Thanh Da and Yet Kieu.

Ho Chi Minh City is home to a number of association football clubs. One of the city's largest clubs, Ho Chi Minh City F.C., is based at Thống Nhất Stadium. As Cảng Sài Gòn, the club was a four-time champion of Vietnam's V.League 1 (in 1986, 1993–94, 1997 and 2001–02). Navibank Saigon F.C., founded as Quân Khu 4 and also based at Thống Nhất Stadium, emerged as champions of the First Division in the 2008 season and were promoted to the V-League in 2009.
The city's police department also fielded a football team in the 1990s, Công An Thành Phố, which won the V-League championship in 1995. The celebrated striker Lê Huỳnh Đức, now manager of SHB Đà Nẵng F.C., played for the police club from 1995 to 2000, setting a league record of 25 goals in the 1996 season. Since 2016, Sài Gòn F.C. has competed in V.League 1. In 2011, Ho Chi Minh City was awarded an expansion team for the ASEAN Basketball League; SSA Saigon Heat is the first international professional basketball team to represent Vietnam. Ho Chi Minh City hosts a number of international sports events throughout the year, such as the AFF Futsal Championship and the Vietnam Vertical Run. Several other sports are represented by teams in the city, including volleyball, basketball, chess, athletics and table tennis.

International relations

Twin towns – sister cities

Ho Chi Minh City is twinned with:

Ahmadi Governorate, Kuwait (2010)
Almaty, Kazakhstan (2011)
Auvergne-Rhône-Alpes, France (1998)
Bangkok, Thailand (2014)
Champasak Province, Laos (2001)
Busan, South Korea (1995)
Guangdong Province, China (2009)
Hafnarfjörður, Iceland
Košice, Slovakia (2016)
Leipzig, Germany (2021)
Lyon, France (1997)
Manila, Philippines (1994)
Minsk, Belarus (2008)
Moscow, Russia (2003)
Osaka Prefecture, Japan (2007)
Phnom Penh, Cambodia (1999)
Saint Petersburg, Russia (2005)
San Francisco, United States (1995)
Shandong Province, China (2013)
Shanghai, China (1994)
Sofia, Bulgaria (2015)
Vientiane, Laos (2001)
Vladivostok, Russia (2009)
Yangon, Myanmar (2012)
Zhejiang Province, China (2009)

Cooperation and friendship

In addition to its twin towns, Ho Chi Minh City cooperates with:

Barcelona, Spain (2009)
Budapest, Hungary (2013)
Daegu, South Korea (2015)
Geneva, Switzerland (2007)
Guangzhou, China (1996)
Johannesburg, South Africa (2009)
Moscow Oblast, Russia (2015)
Northern Territory, Australia (2014)
Osaka, Japan (2011)
Queensland, Australia (2005)
Seville, Spain (2009)
Southampton, United Kingdom (1999)
Shiga Prefecture, Japan (2014)
Sverdlovsk Oblast, Russia (2000)
Toronto, Canada (2006)
Yokohama, Japan (2009)

See also

175 Hospital
History of Organized Crime in Saigon
List of East Asian ports
List of historic buildings in Ho Chi Minh City
List of historical capitals of Vietnam

Notes

References

External links

Official website (in Vietnamese and English)
Ho Chi Minh City People's Council

1698 establishments in Vietnam Populated places established in 1698 Cities in Vietnam Populated places in Ho Chi Minh City Port cities in Vietnam Capitals of former nations
172539
https://en.wikipedia.org/wiki/Modchip
Modchip
A modchip (short for modification chip) is a small electronic device used to alter or disable artificial restrictions of computers or entertainment devices. Modchips are mainly used in video game consoles, but also in some DVD or Blu-ray players. They introduce various modifications to their host system's function, including the circumvention of region coding, digital rights management, and copy protection checks, for the purpose of using media intended for other markets, copied media, or unlicensed third-party (homebrew) software.

Function and construction

Modchips operate by replacing or overriding a system's protection hardware or software. They achieve this either by exploiting existing interfaces in an unintended or undocumented manner, or by actively manipulating the system's internal communication, sometimes to the point of re-routing it to substitute parts provided by the modchip. Most modchips consist of one or more integrated circuits (microcontrollers, FPGAs, or CPLDs), often complemented by discrete parts, usually packaged on a small PCB to fit within the console system they are designed for. Although some modchips can be reprogrammed for different purposes, most are designed to work within only one console system, or even only one specific hardware version.

Modchips typically require some degree of technical skill to install, since they must be connected to a console's circuitry, most commonly by soldering wires to select traces or chip legs on the system's circuit board. Some modchips allow for installation by directly soldering the modchip's contacts to the console's circuit ("quicksolder"), by the precise positioning of electrical contacts ("solderless"), or, in rare cases, by plugging them into a system's internal or external connector. Memory cards or cartridges that offer functions similar to modchips work on a completely different principle, namely by exploiting flaws in the system's handling of media. Such devices are not referred to as modchips, even though they are frequently traded under this umbrella term.

The diversity of hardware that modchips operate on, and the varying methods they use, mean that while modchips are often used for the same goal, they may work in vastly different ways, even when intended for the same console. Some of the first modchips for the Nintendo Wii, known as drive chips, modify the behaviour and communication of the optical drive to bypass security. On the Xbox 360, a common modchip took advantage of the fact that short periods of instability in the CPU could fairly reliably cause it to compare security signatures incorrectly; the precision required in this attack meant that the modchip had to make use of a CPLD. Other modchips, such as the XenoGC and its clones for the Nintendo GameCube, invoke a debug mode in which security measures are reduced or absent (in that case, a stock Atmel AVR microcontroller was used). A more recent innovation is the optical disc drive emulator (ODDE), which replaces the optical disc drive and allows data to come from another source, bypassing the need to circumvent any security. These often make use of FPGAs, enabling them to accurately emulate the timing and performance characteristics of the optical drives they replace.

History

Most cartridge-based console systems did not have modchips produced for them; they usually implemented copy protection and regional lockout within the game cartridges, at both the hardware and software level.
Converters or passthrough devices were used to circumvent these restrictions, while flash memory devices (game backup devices) were widely adopted in later years to copy game media. Early in the transition from solid-state to optical media, CD-based console systems did not have regional market segmentation or copy protection measures, due to the rarity and high cost of user-writable media at the time. Modchips started to surface with the PlayStation, due to the increasing availability and affordability of CD writers and the increasing sophistication of DRM protocols. At the time, a modchip's sole purpose was to allow the use of imported and copied game media. Today, modchips are available for practically every current console system, often in a great number of variations. In addition to circumventing regional lockout and copy protection mechanisms, modern modchips may introduce more sophisticated modifications to the system, such as allowing the use of user-created software (homebrew), expanding the hardware capabilities of the host system, or even installing an alternative operating system to completely re-purpose it (e.g. for use as a home theater PC).

Anti-modchip measures

Most modchips open the system to copied media, so the availability of a modchip for a console system is undesirable for console manufacturers. They react by removing the intrusion points exploited by a modchip from subsequent hardware or software versions, by changing the PCB layout the modchips are customized for, or by having the firmware or software detect an installed modchip and refuse operation as a consequence. Since modchips often hook into fundamental functions of the host system that cannot be removed or adjusted, these measures may not completely prevent a modchip from functioning, but only prompt an adjustment of its installation process or programming, e.g. to include measures that make it undetectable ("stealth") to its host system.

With the advent of online services used by video game consoles, some manufacturers have exercised their rights under the service's license agreement to ban consoles equipped with modchips from those services. In an effort to discourage modchip creation, some console manufacturers have included the option to run homebrew software or even an alternative operating system on their consoles, although some of these features were later withdrawn. An argument can be made that a console system remains largely untouched by modchips as long as its manufacturer provides an official way of running unlicensed third-party software.

Legality

One of the most prominent functions of many modchips, the circumvention of copy protection mechanisms, is outlawed by many countries' copyright laws, such as the Digital Millennium Copyright Act in the United States, the European Copyright Directive and its various implementations by the EU member countries, and the Australian Copyright Act. Other laws may apply to the many diversified functions of a modchip; Australian law, for example, specifically allows the circumvention of region coding. The ambiguity of applicable law, its nonuniform interpretation by the courts, and constant profound changes and amendments to copyright law do not allow for a definitive statement on the legality of modchips. A modchip's legality under a country's legislation can only be individually asserted in court.
Most of the very few cases that have been brought before a court ended with the conviction of the modchip merchant or manufacturer under the respective country's anti-circumvention laws. A small number of cases in the United Kingdom and Australia were dismissed under the argument that a system's copy protection mechanism could not prevent the actual infringement of copyright (the actual process of copying game media) and therefore could not be considered an effective technical protection measure protected by anti-circumvention laws. In 2006, Australian copyright law was amended to effectively close this legal loophole.

In a 2017 lawsuit against a retailer, a Canadian court ruled in favor of Nintendo under anti-circumvention provisions in Canadian copyright law, which prohibit any breaching of technical protection measures. Even though the retailer claimed the products could be used for homebrew, thus asserting exemptions for maintaining interoperability, the court ruled that because Nintendo offers development kits for its platforms, interoperability could be achieved without breaching TPMs, and the defence was therefore invalid.

Alternatives
An alternative to installing a modchip is softmodding a device. A softmodded device does not permanently contain any additional hardware; instead, the software of the device (or of one of its internal parts) is modified in order to change the device's behaviour.

See also
Game backup device
Operation Tangled Web

References

Computer hardware tuning
Hardware restrictions
https://en.wikipedia.org/wiki/Summercon
Summercon
Summercon is one of the oldest hacker conventions, and the longest-running such conference in the United States. It helped set a precedent for more modern "cons" such as H.O.P.E. and DEF CON, although it has remained smaller and more personal. Summercon has been hosted in cities such as Pittsburgh, St. Louis, Atlanta, Washington, D.C., New York City, Austin, Las Vegas, and Amsterdam.

Summercon was originally run by Phrack, the underground ezine, and held annually in St. Louis. The organizational responsibilities of running Summercon were transferred to clovis in 1998, and that year's convention took place in Atlanta, dubbed 'Summercon X'. In its modern incarnation, it is organized by redpantz and shmeck, who emphasize the importance of face-to-face interaction as technology increasingly mediates relationships between members of the information security community. Summercon is open to everyone, including "hackers, phreakers, phrackers, feds, 2600 kids, cops, security professionals, U4EA, r00t kids club, press, groupies, chicks, conference whores, k0d3 kids, convicted felons, and concerned parents."

See also
Chaos Communication Congress (C3), the oldest and Europe's biggest hacker conference, held by the Chaos Computer Club (CCC)
HoHoCon, the first modern hacker convention, held by CULT OF THE DEAD COW
Black Hat Briefings, the largest 'official' computer security event in the world
MyDEFCON, a gathering point spawned from the annual DEFCON security conference

References

External links
Summercon Official site
Phrack, Volume Three, Issue Thirty-one, Phile #5 of 10. References Summercon '88
An interview with Loyd Blankenship, aka The Mentor, reflecting on Summercon '88

Hacker conventions
https://en.wikipedia.org/wiki/Fill%20device
Fill device
A fill device or key loader is a module used to load cryptographic keys into electronic encryption machines. Fill devices are usually hand-held, and electronic ones are battery-operated.

Older mechanical encryption systems, such as rotor machines, were keyed by setting the positions of wheels and plugs from a printed keying list. Electronic systems required some way to load the necessary cryptovariable data. In the 1950s and 1960s, systems such as the U.S. National Security Agency KW-26 and the Soviet Union's Fialka used punched cards for this purpose. Later NSA encryption systems incorporated a serial port fill connector, and NSA developed several common fill devices (CFDs) that could be used with multiple systems. A CFD was plugged in when new keys were to be loaded. Newer NSA systems allow "over the air rekeying" (OTAR), but a master key often must still be loaded using a fill device.

NSA uses two serial protocols for key fill, DS-101 and DS-102. Both employ the same U-229 6-pin connector type used for U.S. military audio handsets, with DS-101 being the newer of the two serial fill protocols. The DS-101 protocol can also be used to load cryptographic algorithms and software updates for crypto modules. Besides encryption devices, systems that can require key fill include IFF, GPS and frequency-hopping radios such as Have Quick and SINCGARS.

Common fill devices employed by NSA include:
KYK-28 pin gun, used with the NESTOR (encryption) system
KYK-13 Electronic Transfer Device
KYX-15 Net Control Device
MX-10579 ECCM Fill Device (SINCGARS)
KOI-18 paper tape reader - the operator pulls 8-level tape through this unit by hand
AN/CYZ-10 Data Transfer Device - a small PDA-like unit that can store up to 1000 keys
Secure DTD2000 System (SDS) - named KIK-20, this was the next-generation common fill device replacement for the DTD when it started production in 2006. It employs the Windows CE operating system.
AN/PYQ-10 Simple Key Loader (SKL) - a simpler replacement for the DTD
KSD-64 Crypto ignition key (CIK)
KIK-30 - a more recent fill device, trademarked as the "Really Simple Key Loader" (RASKL), with "single button key-squirt". It supports a wide variety of devices and keys.

The older KYK-13, KYX-15 and MX-10579 are limited to certain key types.

See also
List of cryptographic key types

References

External links
Fill devices
KYX-15 pictures

Key management
Encryption device accessories
National Security Agency encryption devices
https://en.wikipedia.org/wiki/Shadow%20Warrior%20%281997%20video%20game%29
Shadow Warrior (1997 video game)
Shadow Warrior is a first-person shooter video game developed by 3D Realms and published by GT Interactive Software. The shareware version was released for the PC on May 13, 1997, while the full version was released on September 12, 1997. Shadow Warrior was developed using Ken Silverman's Build engine and improved on 3D Realms' previous Build engine game, Duke Nukem 3D. Mark Adams ported Shadow Warrior to Mac OS in August 1997.

The game's improvements included the introduction of true room-over-room situations, the use of 3D voxels instead of 2D sprites for weapons and usable inventory items, transparent water, climbable ladders, and assorted vehicles to drive (some armed with weapons). Although violent, the game had its own sense of humor and contained some sexual themes.

A combination of Shadow Warrior and Duke Nukem 3D: Atomic Edition was published by GT Interactive Software in March 1998, titled East Meets West.

In 2005, 3D Realms released the source code for Shadow Warrior (including compiled Build engine object code) under the GPL-2.0-or-later license, which resulted in the first source port a day later, on April 2, 2005. In 2013, Devolver Digital announced the game would be free to obtain for a limited time on Steam. Later, Devolver Digital announced that it would permanently offer the game for free.

A reboot, also titled Shadow Warrior, was developed by Flying Wild Hog and published by Devolver Digital, launching on September 26, 2013.

Plot
Lo Wang is a bodyguard and enforcer for Zilla Enterprises, a powerful conglomerate that controls every major industry in a futuristic Japan. Although he is aware of the unchecked corruption and crime that has resulted from Zilla Enterprises' dominance, Lo Wang is too content with his well-paid position to challenge his employers. This changes when Master Zilla, the company president, who desires even more power and wealth, embarks on a plan to conquer Japan using creatures from the "dark side", having formed an alliance with the ancient deities that rule over them. When he discovers this, Lo Wang finds he can no longer stomach Zilla's evil and quits his job.

Master Zilla soon realizes the threat that Lo Wang poses and orders the creatures to kill him. Forced to fight for his life, Lo Wang manages to slaughter dozens of Zilla's minions until he discovers that Zilla also had his old mentor, Master Liep, murdered. Following his mentor's dying words, Lo swears to put an end to Zilla's schemes. The game ends with Lo Wang defeating Master Zilla, who tries and fails to kill him while piloting a massive war mech styled after a samurai. However, Zilla is able to escape, and informs his old bodyguard that they will meet again someday.

Gameplay
Shadow Warrior is a first-person shooter similar to Duke Nukem 3D and uses the same Build engine. Players navigate the protagonist, Lo Wang, through three-dimensional environments or "levels". Throughout the levels are enemies that attack Lo Wang, which the player can kill using weapons such as a katana. Shadow Warrior also features puzzles that must be solved to progress in various levels. Lo Wang's arsenal of weaponry includes Japanese-themed weapons such as shurikens (which were "likely [to] be dropped in favor of [a] high tech fun weapon" during development) and a katana, and marked the first appearance of a sticky bomb in an FPS, an idea later popularized by Halo. It also includes guns such as Uzis, a riot gun that fires shotgun shells, and the Eraser-inspired railgun (Lo Wang frequently says "Time to get erased! Ha ha!" when picking up this weapon). In addition, the head and heart of certain enemies can be used as weapons.

Shadow Warrior was an ambitious game, containing many features not seen until later first-person shooter games. For example, the game features turrets and various vehicles (such as tanks) that the player can drive around freely, climbable ladders, and multiple firing modes for various weapons.

Development
Development of Shadow Warrior began in early 1994 as Shadow Warrior 3D, and preliminary screenshots were released with Hocus Pocus in May 1994. Jim Norwood came up with the game idea, George Broussard designed the character Lo Wang, and Michael Wallin did some concept sketches. Broussard stated in 1996: "We want Shadow Warrior to surpass Duke Nukem 3D in features and gameplay and that's a TALL order." To this end, more tongue-in-cheek humor was added to the existing game in order to better match the style of the popular Duke Nukem 3D. The shareware version of Shadow Warrior was published in North America by GT Interactive Software on May 13, 1997, and the full version was published on September 12, 1997. At E3 1997, an area in the GT Interactive Software booth was dedicated to Shadow Warrior.

Soundtrack
Lee Jackson, who had already composed the soundtrack for Duke Nukem 3D, also composed the soundtrack for Shadow Warrior. A Kurzweil K2500R5 keyboard was used to produce the music. Shadow Warrior uses the audio tracks of the game's CD for music playback rather than the system's MIDI device, which allows for a higher general quality and the use of samples and effects not possible with MIDI music. This allowed Lee Jackson to include a wide variety of instruments which support the game's East Asian theme, as well as ambient tracks which depend on advanced sound design. MIDI support, including MIDI versions of five songs from the game's soundtrack, was added exclusively to the shareware version, which had to be kept small in size.

A special song called Lo Wang's Rap was included in one of the game disc's audio tracks. It was created out of sound bites and outtakes from recording sessions with John William Galt, the voice actor cast in the role of Lo Wang. This song was played during the credits sequence after completing the game. Jackson wrote and recorded a backing music track and then used a DAW to arrange the vocals over it in a way that made it sound like Lo Wang was actually rapping. The song was released as an MP3 on 3D Realms' website in 1999.

Release
Versions
Shadow Warrior Registered is the original 1.2 version, released on September 12, 1997 for MS-DOS and on October 1, 1997 for Mac OS.
Shadow Warrior Classic Complete is the PC version of Shadow Warrior that was released on GOG.com and includes the main game and both expansion packs, Wanton Destruction and Twin Dragon. While the Steam version is free (see below), the GOG.com version was initially paid, and includes a digital copy of the game's soundtrack in MP3 and FLAC and the game's manual. Published by Devolver Digital, it was released on November 15, 2012, using DOSBox to run on modern systems. Since September 2, 2016, with the release of Classic Redux on GOG, Classic Complete has been free.
Shadow Warrior (iOS) is the iOS version of Shadow Warrior, ported and published by indie developer General Arcade. It was released on December 19, 2012 on the App Store.
Shadow Warrior Classic (previously Shadow Warrior Original) is the original MS-DOS version of Shadow Warrior that was released on Steam using DOSBox. It is free to play and includes the original registered version but does not include the expansion packs. Published by Devolver Digital, it was released on May 29, 2013.
Shadow Warrior Classic Redux is a PC version of Shadow Warrior, released on GOG.com and Steam for Microsoft Windows, OS X and Linux with the main game and both expansion packs, Wanton Destruction and Twin Dragon. Developed by General Arcade and published by Devolver Digital, it was released on July 8, 2013 with improved OpenGL graphics and visuals, remastered audio, and modern PC compatibility.
Shadow Warrior (Classic) is the PC version rebuilt with Microsoft Windows and OS X support, published by 3D Realms as part of the 3D Realms Anthology Bundle. It was released on October 23, 2014 on their own website and on May 5, 2015 on Steam.

Expansion packs
Two expansion packs, Wanton Destruction and Twin Dragon, were released. A third, Deadly Kiss from SillySoft, remains unreleased, though screenshots were released in January 1998.

Twin Dragon was released as a free download on July 4, 1998. It was created by Level Infinity and Wylde Productions. The game reveals that Lo Wang has a twin brother, Hung Lo, from whom he was separated in early childhood. Hung Lo becomes a dark person whose goal is to destroy the world. Similar to Master Zilla, he uses the creatures from the "dark side", the criminal underworld, and Zilla's remnants to further his goals. Lo Wang has to fight through his dark minions, reach his palace, and defeat the evil Twin Dragon Hung Lo once and for all. The game features 13 new levels, new sounds, artwork, and a new final boss, Hung Lo, who replaced Zilla.

Wanton Destruction was created by Sunstorm Interactive and tested by 3D Realms, but was not released by the distributor. Charlie Wiederhold presented the four maps he created to 3D Realms, and was consequently hired as a level designer for Duke Nukem Forever. With permission, he released the maps on March 22, 2004. On September 5, 2005, Anthony Campiti, former president of Sunstorm Interactive, notified 3D Realms by e-mail that he had found the Wanton Destruction add-on, and it was released for free on September 9, 2005. The add-on chronicles Lo Wang's adventures after the original game. He visits his relatives in the USA, but is forced to fight off Zilla's forces again. The game culminates in a battle against Master Zilla above the streets of Tokyo, which ends with Master Zilla's death. The game features 12 new levels, new artwork, and a couple of new enemy replacements, such as human enemies, though they still act like their original counterparts.

Reception
NPD Techworld, a firm that tracked sales in the United States, reported 118,500 units of Shadow Warrior sold by December 2002.

Reviews from critics were mostly positive, with ratings varying from average to positive. Thierry Nguyen of Computer Gaming World commented: "Shadow Warrior is an average action game. While there are some good enhancements to the BUILD engine and some good level design and enemy AI, the rest of the game is mediocre." By contrast, GamePro said that "Shadow Warrior scores high in both style and level design. ... It's enhanced by a high difficulty level, a great audio soundtrack, and ambient music and environments you can really believe exist ..." Tim Soete of GameSpot called it "a late entry into the realm of sprite-based action games that's pretty fun despite its dated qualities." GamingOnLinux reviewer Hamish Paul Wilson decided in a later retrospective that Shadow Warrior was the weakest of the three major Build engine games, stating that its gunplay was the "least balanced and its levels the most likely to descend into tedium or frustration".

Legacy
3D Realms released the source code of the Shadow Warrior engine on April 1, 2005 under the GPL-2.0-or-later license. Due to the timing of the source code release, some users initially believed that it was an April Fools joke. The first source port, JFShadowWarrior, was created by Jonathon Fowler and released a day later, on April 2, 2005, including Linux support and improvements from his JFDuke3D source port. As of January 2015, there have been no new versions of JFShadowWarrior since October 9, 2005.

Shadow Warrior for iOS was released on December 19, 2012 by 3D Realms and indie developer General Arcade. The official website was created by Jeffrey D. Erb and Mark Farish of Intersphere Communications Ltd.

Two original novels featuring Lo Wang were published: For Dead Eyes Only, written by Dean Wesley Smith, and You Only Die Twice, by Ryan Hughes. The titles of the novels parody titles in the James Bond book series by Ian Fleming.

References

External links

1997 video games
3D Realms games
Build (game engine) games
Classic Mac OS games
Commercial video games with freely available source code
Cooperative video games
Devolver Digital games
DOS games ported to Windows
DOS games
First-person shooters
Freeware games
Games commercially released with DOSBox
GT Interactive Software games
IOS games
Japan in non-Japanese culture
Linux games
MacOS games
Multiplayer and single-player video games
Shadow Warrior
Sprite-based first-person shooters
Video games about ninja
Video games based on Japanese mythology
Video games developed in the United States
Video games scored by Lee Jackson (composer)
Video games set in Japan
Video games with 2.5D graphics
Video games with alternative versions
Video games with digitized sprites
Video games with expansion packs
Video games with voxel graphics
Windows games
https://en.wikipedia.org/wiki/List%20of%20International%20Organization%20for%20Standardization%20standards%2C%2014000-14999
List of International Organization for Standardization standards, 14000-14999
This is a list of published International Organization for Standardization (ISO) standards and other deliverables. For a complete and up-to-date list of all the ISO standards, see the ISO catalogue. The standards are protected by copyright and most of them must be purchased. However, about 300 of the standards produced by ISO and IEC's Joint Technical Committee 1 (JTC1) have been made freely and publicly available.

ISO 14000 – ISO 14999
ISO 14000 Environmental management systems (This is a set of standards, rather than a single standard)
ISO 14001:2015 Environmental management systems – Requirements with guidance for use
ISO 14004:2016 Environmental management systems – General guidelines on implementation
ISO 14005:2010 Environmental management systems – Guidelines for the phased implementation of an environmental management system, including the use of environmental performance evaluation
ISO 14006:2011 Environmental management systems – Guidelines for incorporating ecodesign
ISO 14020:2000 Environmental labels and declarations – General principles
ISO 14031 Environmental management – Environmental performance evaluation – Guidelines
ISO 14046:2014 Environmental management – Water footprint – Principles, requirements and guidelines
ISO 14050:2009 Environmental management – Vocabulary
ISO 14051 Environmental management – Material flow cost accounting – General framework
ISO 14064 Greenhouse gases
ISO/TR 14073:2017 Environmental management – Water footprint – Illustrative examples on how to apply ISO 14046
ISO 14084 Process diagrams for power plants
ISO 14084-1:2015 Part 1: Specification for diagrams
ISO 14084-2:2015 Part 2: Graphical symbols
ISO/TS 14101:2012 Surface characterization of gold nanoparticles for nanomaterial specific toxicity screening: FT-IR method
ISO/IEC 14102:2008 Information technology – Guideline for the evaluation and selection of CASE tools
ISO/TR 14105:2011 Document management – Change management for successful electronic document management system (EDMS) implementation
ISO 14117:2012 Active implantable medical devices – Electromagnetic compatibility – EMC test protocols for implantable cardiac pacemakers, implantable cardioverter defibrillators and cardiac resynchronization devices
ISO 14132 Optics and photonics – Vocabulary for telescopic systems
ISO 14132-1:2015 Part 1: General terms and alphabetical indexes of terms in ISO 14132
ISO 14132-2:2015 Part 2: Terms for binoculars, monoculars and spotting scopes
ISO 14132-3:2014 Part 3: Terms for telescopic sights
ISO 14132-4:2015 Part 4: Terms for astronomical telescopes
ISO 14132-5:2008 Part 5: Terms for night vision devices
ISO/IEC 14136:1995 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Specification, functional model and information flows – Identification supplementary services
ISO 14139:2000 Hydrometric determinations – Flow measurements in open channels using structures – Compound gauging structures
ISO/IEC 14143 Information technology – Software measurement – Functional size measurement
ISO/IEC 14143-1:2007 Part 1: Definition of concepts
ISO/IEC 14143-2:2011 Part 2: Conformity evaluation of software size measurement methods to ISO/IEC 14143-1
ISO/IEC TR 14143-3:2003 Part 3: Verification of functional size measurement methods
ISO/IEC TR 14143-4:2002 Part 4: Reference model
ISO/IEC TR 14143-5:2004 Part 5: Determination of functional domains for use with functional size measurement
ISO/IEC 14143-6:2012 Part 6: Guide for use of ISO/IEC 14143 series and related International Standards
ISO 14145 Roller ball pens and refills
ISO 14145-2:1998 Part 2: Documentary use (DOC)
ISO 14155:2011 Clinical investigation of medical devices for human subjects – Good clinical practice
ISO 14163:1998 Acoustics – Guidelines for noise control by silencers
ISO 14164:1999 Stationary source emissions – Determination of the volume flowrate of gas streams in ducts – Automated method
ISO/IEC 14165 Information technology – Fibre Channel
ISO/IEC 14165-114:2005 Part 114: 100 MB/s Balanced copper physical interface (FC-100-DF-EL-S)
ISO/IEC 14165-115:2006 Part 115: Physical Interfaces (FC-PI)
ISO/IEC 14165-116:2005 Part 116: 10 Gigabit (10GFC)
ISO/IEC TR 14165-117:2007 Part 117: Methodologies for jitter and signal quality (MJSQ)
ISO/IEC 14165-122:2005 Part 122: Arbitrated Loop-2 (FC-AL-2)
ISO/IEC 14165-131:2000 Part 131: Switch Fabric Requirements (FC-SW)
ISO/IEC 14165-133:2010 Part 133: Switch Fabric-3 (FC-SW-3)
ISO/IEC 14165-141:2001 Part 141: Fabric Generic Requirements (FC-FG)
ISO/IEC 14165-211:1999 Part 211: Mapping to HIPPI-FP (FC-FP)
ISO/IEC 14165-222:2005 Part 222: Single-byte command code 2 mapping protocol (FC-SB-2)
ISO/IEC 14165-241:2005 Part 241: Backbone 2 (FC-BB-2)
ISO/IEC 14165-243:2012 Part 243: Backbone 3 (FC-BB-3)
ISO/IEC 14165-251:2008 Part 251: Framing and Signaling (FC-FS)
ISO/IEC TR 14165-312:2009 Part 312: Avionics environment upper layer protocol MIL-STD-1553B Notice 2 (FC-AE-1553)
ISO/IEC TR 14165-313:2013 Part 313: Avionics Environment – Anonymous Synchronous Messaging (FC-AE-ASM)
ISO/IEC TR 14165-314:2013 Part 314: Avionics Environment – Remote Direct Memory Access (FC-AE-RDMA)
ISO/IEC 14165-321:2009 Part 321: Audio-Video (FC-AV)
ISO/IEC 14165-331:2007 Part 331: Virtual Interface (FC-VI)
ISO/IEC TR 14165-372:2011 Part 372: Methodologies of interconnects-2 (FC-MI-2)
ISO/IEC 14165-414:2007 Part 414: Generic Services-4 (FC-GS-4)
ISO/IEC 14165-521:2009 Part 521: Fabric application interface standard (FAIS)
ISO/IEC 14169:1995 Information technology – 90 mm flexible disk cartridges – 21 MBytes formatted capacity – ISO Type 305
ISO 14189:2013 Water quality – Enumeration of Clostridium perfringens – Method using membrane filtration
ISO 14199:2015 Health informatics – Information models – Biomedical Research Integrated Domain Group (BRIDG) Model
ISO 14223 Radiofrequency identification of animals – Advanced transponders
ISO 14224 Petroleum, petrochemical and natural gas industries – Collection and exchange of reliability and maintenance data for equipment
ISO 14230 Road vehicles – Diagnostic systems – Keyword Protocol 2000
ISO 14242 Implants for surgery – Wear of total hip-joint prostheses
ISO 14242-1:2014 Part 1: Loading and displacement parameters for wear-testing machines and corresponding environmental conditions for test
ISO 14242-2:2016 Part 2: Methods of measurement
ISO 14242-3:2009 Part 3: Loading and displacement parameters for orbital bearing type wear testing machines and corresponding environmental conditions for test
ISO 14243 Implants for surgery – Wear of total knee-joint prostheses
ISO 14243-1:2009 Part 1: Loading and displacement parameters for wear-testing machines with load control and corresponding environmental conditions for test
ISO 14243-2:2016 Part 2: Methods of measurement
ISO 14243-3:2014 Part 3: Loading and displacement parameters for wear-testing machines with displacement control and corresponding environmental conditions for test
ISO/IEC 14251:1995 Information technology – Data interchange on 12,7 mm 36-track magnetic tape cartridges
ISO/IEC TR 14252:1996 Information technology – Guide to the POSIX Open System Environment (OSE) [Withdrawn without replacement]
ISO 14253 Geometrical product specifications (GPS) – Inspection by measurement of workpieces and measuring equipment
ISO 14253-1:2013 Part 1: Decision rules for proving conformity or nonconformity with specifications
ISO 14253-2:2011 Part 2: Guidance for the estimation of uncertainty in GPS measurement, in calibration of measuring equipment and in product verification
ISO 14253-3:2011 Part 3: Guidelines for achieving agreements on measurement uncertainty statements
ISO/TS 14253-4:2010 Part 4: Background on functional limits and specification limits in decision rules
ISO 14253-5:2015 Part 5: Uncertainty in verification testing of indicating measuring instruments
ISO/TR 14253-6:2012 Part 6: Generalized decision rules for the acceptance and rejection of instruments and workpieces
ISO/TS 14265:2011 Health Informatics – Classification of purposes for processing personal health information
ISO/TR 14283:2004 Implants for surgery – Fundamental principles
ISO 14289 Document management applications – Electronic document file format enhancement for accessibility
ISO 14289-1:2014 Part 1: Use of ISO 32000-1 (PDF/UA-1)
ISO/TR 14292:2012 Health informatics – Personal health records – Definition, scope and context
ISO 14296:2016 Intelligent transport systems – Extension of map database specifications for applications of cooperative ITS
ISO 14302:2002 Space systems – Electromagnetic compatibility requirements
ISO 14362 Textiles – Methods for determination of certain aromatic amines derived from azo colorants
ISO 14362-1:2017 Part 1: Detection of the use of certain azo colorants accessible with and without extracting the fibres
ISO 14362-3:2017 Part 3: Detection of the use of certain azo colorants, which may release 4-aminoazobenzene
ISO/IEC TR 14369:2018 Information technology – Programming languages, their environments and system software interfaces – Guidelines for the preparation of language-independent service specifications (LISS)
ISO 14405 Geometrical product specifications (GPS) – Dimensional tolerancing
ISO 14405-1:2016 Part 1: Linear sizes
ISO 14405-2:2011 Part 2: Dimensions other than linear sizes
ISO 14405-3:2016 Part 3: Angular sizes
ISO 14406:2010 Geometrical product specifications (GPS) – Extraction
ISO 14408:2016 Tracheal tubes designed for laser surgery – Requirements for marking and accompanying information
ISO 14416:2003 Information and documentation – Requirements for binding of books, periodicals, serials and other paper documents for archive and library use – Methods and materials
ISO/IEC 14417:1999 Information technology – Data recording format DD-1 for magnetic tape cassette conforming to IEC 1016
ISO/TS 14441:2013 Health informatics – Security and privacy requirements of EHR systems for use in conformity assessment
ISO/IEC 14443 Identification cards – Contactless integrated circuit cards – Proximity cards
ISO 14451 Pyrotechnic articles – Pyrotechnic articles for vehicles
ISO 14451-1:2013 Part 1: Terminology
ISO 14452:2012 Network services billing – Requirements
ISO 14461 Milk and milk products – Quality control in microbiological laboratories
ISO 14461-1:2005 Part 1: Analyst performance assessment for colony counts
ISO 14461-2:2005 Part 2: Determination of the reliability of colony counts of parallel plates and subsequent dilution steps
ISO/IEC 14462:2010 Information technology – Open-edi reference model
ISO/TR 14468:2010 Selected illustrations of attribute agreement analysis
ISO/IEC TR 14471:2007 Information technology – Software engineering – Guidelines for the adoption of CASE tools
ISO/IEC 14473:1999 Information technology – Office equipment – Minimum information to be specified for image scanners
ISO/IEC 14474:1998 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Functional requirements for static circuit-mode inter-PINX connections
ISO/IEC TR 14475:2001 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Architecture and scenarios for Private Integrated Services Networking
ISO/IEC 14476 Information technology – Enhanced communications transport protocol
ISO/IEC 14476-1:2002 Specification of simplex multicast transport
ISO/IEC 14476-2:2003 Specification of QoS management for simplex multicast transport
ISO/IEC 14476-3:2008 Specification of duplex multicast transport
ISO/IEC 14476-4:2010 Specification of QoS management for duplex multicast transport
ISO/IEC 14476-5:2008 Specification of N-plex multicast transport
ISO/IEC 14476-6:2010 Specification of QoS management for n-plex multicast transport
ISO/IEC 14478 Information technology – Computer graphics and image processing – Presentation Environment for Multimedia Objects (PREMO)
ISO/IEC 14478-1:1998 Part 1: Fundamentals of PREMO
ISO/IEC 14478-2:1998 Part 2: Foundation Component
ISO/IEC 14478-3:1998 Part 3: Multimedia Systems Services
ISO/IEC 14478-4:1998 Part 4: Modelling, rendering and interaction component
ISO/IEC 14492:2001 Information technology – Lossy/lossless coding of bi-level images
ISO/IEC 14495 Information technology – Lossless and near-lossless compression of continuous-tone still images
ISO/IEC 14495-1:1999 Baseline
ISO/IEC 14495-2:2003 Extensions
ISO/IEC 14496 Information technology – Coding of audio-visual objects
ISO 14500:2003 Textile machinery and accessories – Harnesses for Jacquard weaving machines – Vocabulary
ISO 14509 Small craft – Airborne sound emitted by powered recreational craft
ISO 14509-1:2008 Part 1: Pass-by measurement procedures
ISO 14509-3:2009 Part 3: Sound assessment using calculation and measurement procedures
ISO 14511:2001 Measurement of fluid flow in closed conduits – Thermal mass flowmeters
ISO/IEC 14515 Information technology – Portable Operating System Interface (POSIX®) – Test methods for measuring conformance to POSIX
ISO/IEC 14515-1:2000 Part 1: System interfaces
ISO/IEC TR 14516:2002 Information technology – Security techniques – Guidelines for the use and management of Trusted Third Party services
ISO/IEC 14517:1996 Information technology – 130 mm optical disk cartridges for information interchange – Capacity: 2,6 Gbytes per cartridge
ISO/IEC 14519:2001 Information technology – POSIX Ada Language Interfaces – Binding for System Application Program Interface (API)
ISO 14532:2014 Natural gas – Vocabulary
ISO 14533 Processes, data elements and documents in commerce, industry and administration – Long term signature profiles
ISO 14533-1:2014 Part 1: Long term signature profiles for CMS Advanced Electronic Signatures (CAdES)
ISO 14533-2:2012 Part 2: Long term signature profiles for XML Advanced Electronic Signatures (XAdES)
ISO 14533-3:2017 Part 3: Long term signature profiles for PDF Advanced Electronic Signatures (PAdES)
ISO 14534:2011 Ophthalmic optics – Contact lenses and contact lens care products – Fundamental requirements
ISO 14539:2000 Manipulating industrial robots – Object handling with grasp-type grippers – Vocabulary and presentation of characteristics
ISO/IEC 14543 Information technology – Home Electronic System (HES) architecture
ISO/IEC 14543-2-1:2006 Part 2-1: Introduction and device modularity
ISO/IEC 14543-3-1:2006 Part 3-1: Communication layers – Application layer for network based control of HES Class 1
ISO/IEC 14543-3-2:2006 Part 3-2: Communication layers – Transport, network and general parts of data link layer for network based control of HES Class 1
ISO/IEC 14543-3-3:2007 Part 3-3: User process for network based control of HES Class 1
ISO/IEC 14543-3-4:2007 Part 3-4: System management – Management procedures for network based control of HES Class 1
ISO/IEC 14543-3-5:2007 Part 3-5: Media and media dependent layers – Power line for network based control of HES Class 1
ISO/IEC 14543-3-6:2007 Part 3-6: Media and media dependent layers – Network based on HES Class 1, twisted pair
ISO/IEC 14543-3-7:2007 Part 3-7: Media and media dependent layers – Radio frequency for network based control of HES Class 1
ISO/IEC 14543-3-10:2012 Part 3-10: Wireless short-packet (WSP) protocol optimized for energy harvesting – Architecture and lower layer protocols
ISO/IEC 14543-3-11:2016 Part 3-11: Frequency modulated wireless short-packet (FMWSP) protocol optimised for energy harvesting – Architecture and lower layer protocols
ISO/IEC TR 14543-4:2002 Part 4: Home and building automation in a mixed-use building
ISO/IEC 14543-4-1:2008 Part 4-1: Communication layers – Application layer for network enhanced control devices of HES Class 1
ISO/IEC 14543-4-2:2008 Part 4-2: Communication layers – Transport, network and general parts of data link layer for network enhanced control devices of HES Class 1
ISO/IEC 14543-4-3:2015 Part 4-3: Application layer interface to lower communications layers for network enhanced control devices of HES Class 1
ISO/IEC 14543-5-1:2010 Part 5-1: Intelligent grouping and resource sharing for Class 2 and Class 3 – Core protocol
ISO/IEC 14543-5-3:2012 Part 5-3: Intelligent grouping and resource sharing for HES Class 2 and Class 3 – Basic application
ISO/IEC 14543-5-4:2010 Part 5-4: Intelligent grouping and resource sharing for HES Class 2 and Class 3 – Device validation
ISO/IEC 14543-5-5:2012 Part 5-5: Intelligent grouping and resource sharing for HES Class 2 and Class 3 – Device type
ISO/IEC 14543-5-6:2012 Part 5-6: Intelligent grouping and resource sharing for HES Class 2 and Class 3 – Service type
ISO/IEC 14543-5-7:2015 Part 5-7: Intelligent grouping and resource sharing – Remote access system architecture
ISO/IEC 14543-5-8:2017 Part 5-8: Intelligent grouping and resource sharing for HES Class 2 and Class 3 – Remote access core protocol
ISO/IEC 14543-5-9:2017 Part 5-9: Intelligent grouping and resource sharing for HES class 2 and class 3 – Remote access service platform
ISO/IEC 14543-5-21:2012 Part 5-21: Intelligent grouping and resource sharing for HES Class 2 and Class 3 – Application profile – AV profile
ISO/IEC 14543-5-22:2010 Part 5-22: Intelligent grouping and resource sharing for HES Class 2 and Class 3 – Application profile – File profile
ISO 14560:2004 Acceptance sampling procedures by attributes – Specified quality levels in nonconforming items per million
ISO/TR 14564:1995 Shipbuilding and marine structures – Marking of escape routes
ISO/IEC 14568:1997 Information technology – DXL: Diagram eXchange Language for tree-structured charts
ISO/IEC 14575:2000 Information technology – Microprocessor systems – Heterogeneous InterConnect (HIC) (Low-Cost, Low-Latency Scalable Serial Interconnect for Parallel System Construction)
ISO/IEC 14576:1999 Information technology – Synchronous Split Transfer Type System Bus (STbus) – Logical Layer
ISO 14580:2011 Hexalobular socket cheese head screws
ISO 14588:2000 Blind rivets – Terminology and definitions
ISO/IEC 14598 Software engineering – Product evaluation
ISO/IEC 14598-5:1998 Part 5: Process for evaluators
ISO/IEC 14598-6:2001 Part 6: Documentation of evaluation modules
ISO 14602:2010 Non-active surgical implants – Implants for osteosynthesis – Particular requirements
ISO 14607:2007 Non-active surgical implants – Mammary implants – Particular requirements
ISO 14617 Graphical symbols for diagrams
ISO 14617-1:2005 Part 1: General information and indexes
ISO 14617-2:2002 Part 2: Symbols having general application
ISO 14617-3:2002 Part 3: Connections and related devices
ISO 14617-4:2002 Part 4: Actuators and related devices
ISO 14617-5:2002 Part 5: Measurement and control devices
ISO 14617-6:2002 Part 6: Measurement and control functions
ISO 14617-7:2002 Part 7: Basic mechanical components
ISO 14617-8:2002 Part 8: Valves and dampers
ISO 14617-9:2002 Part 9: Pumps, compressors and fans
ISO 14617-10:2002 Part 10: Fluid power converters
ISO 14617-11:2002 Part 11: Devices for heat transfer and heat engines
ISO 14617-12:2002 Part 12: Devices for separating, purification and mixing
ISO 14617-13:2004 Part 13: Devices for material processing
ISO 14617-14:2004 Part 14: Devices for transport and handling of material
ISO 14617-15:2002 Part 15: Installation diagrams and network maps
ISO 14630:2012 Non-active surgical implants – General requirements
ISO 14638:2015 Geometrical product specifications (GPS) – Matrix model
ISO/TR 14639 Health informatics – Capacity-based eHealth architecture roadmap
ISO/TR 14639-1:2012 Part 1: Overview of national eHealth initiatives
ISO/TR 14639-2:2014 Part 2: Architectural components and maturity model
ISO 14641 Electronic archiving
ISO 14641-1:2012 Specifications concerning the design and the operation of an information system for electronic information preservation
ISO 14644 Cleanrooms and associated controlled environments
ISO/IEC 14651:2020 Information technology – International string ordering and comparison – Method for comparing character strings and description of the common template tailorable ordering
ISO/IEC 14662:2010 Information technology – Open-edi reference model
ISO/TR 14685:2001 Hydrometric determinations – Geophysical logging of boreholes for hydrogeological purposes – Considerations and guidelines for making measurements
ISO 14686:2003 Hydrometric determinations – Pumping tests for water wells – Considerations and guidelines for design, performance and use
ISO 14695:2003 Industrial fans – Method of measurement of fan vibration
ISO 14698 Cleanrooms and associated controlled environments – Biocontamination control
ISO 14698-1:2003 General principles and methods
ISO 14698-2:2003 Evaluation and interpretation of biocontamination data
ISO/IEC 14699:1997 Information technology – Open Systems Interconnection – Transport Fast Byte Protocol
ISO/IEC 14700:1997 Information technology – Open Systems Interconnection – Network Fast Byte Protocol
ISO 14708 Implants for surgery – Active implantable medical devices
ISO 14708-1:2014 Part 1: General requirements for safety, marking and for information to be provided by the manufacturer
ISO 14708-2:2012 Part 2: Cardiac pacemakers
ISO 14708-3:2017 Part 3: Implantable neurostimulators
ISO 14708-4:2008 Part 4: Implantable infusion pumps
ISO 14708-5:2010 Part 5: Circulatory support devices
ISO 14708-6:2010 Part 6: Particular requirements for active implantable medical devices intended to treat tachyarrhythmia (including implantable defibrillators)
ISO 14708-7:2013 Part 7: Particular requirements for cochlear implant systems
ISO/IEC 14709 Information technology – Configuration of Customer Premises Cabling (CPC) for applications
ISO/IEC 14709-1:1997 Part 1: Integrated Services Digital Network (ISDN) basic access
ISO/IEC 14709-2:1998 Part 2: Integrated Services Digital Network (ISDN) primary rate
ISO 14722:1998 Moped and moped-rider kinematics – Vocabulary
ISO 14726:2008 Ships and marine technology – Identification colours for the content of piping systems
ISO 14729:2001 Ophthalmic optics – Contact lens care products – Microbiological requirements and test methods for products and regimens for hygienic management of contact lenses
ISO 14730:2014 Ophthalmic optics – Contact lens care products – Antimicrobial preservative efficacy testing and guidance on determining discard date
ISO 14739 Document management – 3D use of Product Representation Compact (PRC) format
ISO 14739-1:2014 Part 1: PRC 10001
ISO/TR 14742:2010 Financial services – Recommendations on cryptographic algorithms and their use
ISO/IEC 14750:1999 Information technology – Open Distributed Processing – Interface Definition Language
ISO/IEC 14752:2000 Information technology – Open Distributed Processing – Protocol support for computational interactions
ISO/IEC 14753:1999 Information technology – Open Distributed Processing – Interface references and binding
ISO/IEC 14754:1999 Information technology – Pen-Based Interfaces – Common gestures for Text Editing with Pen-Based Systems
ISO/IEC 14755:1997 Information technology – Input methods to enter characters from the repertoire of ISO/IEC 10646 with a keyboard or other input device
ISO/IEC 14756:1999 Information technology – Measurement and rating of performance of computer-based software systems
ISO/IEC 14760:1997 Information technology – Data interchange on 90 mm overwritable and read only optical disk cartridges using phase change – Capacity: 1,3 Gbytes per cartridge
ISO/IEC 14762:2009 Information technology – Functional safety requirements for Home and Building Electronic Systems (HBES)
ISO/IEC 14763 Information technology – Implementation and operation of customer premises cabling
ISO/IEC 14763-1:1999 Part 1: Administration
ISO/IEC 14763-2:2012 Part 2: Planning and installation
ISO/IEC TR 14763-2-1:2011 Part 2-1: Planning and installation – Identifiers within administration systems
ISO/IEC 14763-3:2014 Part 3: Testing of optical fibre cabling
ISO/IEC 14764:2006 Software Engineering – Software Life Cycle Processes – Maintenance
ISO/IEC 14765:1997 Information technology – Framework for protocol identification and encapsulation
ISO/IEC 14766:1997 Information technology – Telecommunications and information exchange between systems – Use of OSI applications over the Internet Transmission Control Protocol (TCP)
ISO/IEC 14769:2001 Information technology – Open Distributed Processing – Type Repository Function
ISO/IEC 14771:1999 Information technology – Open Distributed Processing – Naming framework
ISO/IEC 14772 Information technology – Computer graphics and image processing – The Virtual Reality Modeling Language (VRML)
ISO/IEC 14772-1:1997 Part 1: Functional specification and UTF-8 encoding
ISO/IEC 14772-2:2004 Part 2: External authoring interface (EAI)
ISO/IEC 14776 Information technology – Small Computer System Interface (SCSI)
ISO/IEC 14776-112:2002 Part 112: Parallel Interface-2 (SPI-2)
ISO/IEC 14776-113:2002 Part 113: Parallel Interface-3 (SPI-3)
ISO/IEC 14776-115:2004 Part 115: Parallel Interface-5 (SPI-5)
ISO/IEC 14776-121:2010 Part 121: Passive Interconnect Performance (PIP)
ISO/IEC 14776-150:2004 Part 150: Serial Attached SCSI (SAS)
ISO/IEC 14776-151:2010 Part 151: Serial Attached SCSI – 1.1 (SAS-1.1)
ISO/IEC 14776-153:2015 Part 153: Serial Attached SCSI – 2.1 (SAS-2.1)
ISO/IEC 14776-154:2017 Part 154: Serial Attached SCSI – 3 (SAS-3)
ISO/IEC 14776-222:2005 Part 222: Fibre Channel Protocol for SCSI, Second Version (FCP-2)
ISO/IEC 14776-223:2008 Part 223: Fibre Channel Protocol for SCSI, Third Version (FCP-3)
ISO/IEC 14776-232:2001 Part 232: Serial Bus Protocol 2 (SBP-2)
ISO/IEC 14776-251:2014 Part 251: USB attached SCSI (UAS)
ISO/IEC 14776-261:2012 Part 261: SAS Protocol Layer (SPL)
ISO/IEC 14776-262:2017 Part 262: SAS protocol layer – 2 (SPL-2)
ISO/IEC 14776-321:2002 Part 321: SCSI-3 Block Commands (SBC)
ISO/IEC 14776-322:2007 Part 322: SCSI Block Commands – 2 (SBC-2)
ISO/IEC 14776-323:2017 Part 323: SCSI Block commands – 3 (SBC-3)
ISO/IEC 14776-326:2015 Part 326: Reduced block commands (RBC)
ISO/IEC 14776-331:2002 Part 331: Stream Commands (SSC)
ISO/IEC 14776-333:2013 Part 333: SCSI Stream Commands – 3 (SSC-3)
ISO/IEC 14776-341:2000 Part 341: Controller Commands (SCC)
ISO/IEC 14776-342:2000 Part 342: Controller Commands – 2 (SCC-2)
ISO/IEC 14776-351:2007 Part 351: Medium Changer Commands (SCSI-3 SMC)
ISO/IEC 14776-362:2006 Part 362: Multimedia commands-2 (MMC-2)
ISO/IEC 14776-372:2011 Part 372: SCSI Enclosure Services – 2 (SES-2)
ISO/IEC 14776-381:2000 Part 381: Optical Memory Card Device Commands (OMC)
ISO/IEC 14776-411:1999 Part 411: SCSI-3 Architecture Model (SCSI-3 SAM)
ISO/IEC 14776-412:2006 Part 412: Architecture Model – 2 (SAM-2)
ISO/IEC 14776-413:2007 Part 413: SCSI Architecture Model – 3 (SAM-3)
ISO/IEC 14776-414:2009 Part 414: SCSI Architecture Model – 4 (SAM-4)
ISO/IEC 14776-452:2005 Part 452: SCSI Primary Commands – 2 (SPC-2)
ISO/IEC 14776-453:2009 Part 453: Primary commands – 3 (SPC-3)
ISO 14785:2014 Tourist information offices – Tourist information and reception services – Requirements
ISO/TR 14786:2014 Nanotechnologies – Considerations for the development of chemical nomenclature for selected nano-objects
ISO 14801 Dentistry – Implants – Dynamic fatigue test for endosseous dental implants
ISO/TR 14806:2013 Intelligent transport systems – Public transport requirements for the use of payment applications for fare media
ISO 14813 Intelligent transport systems – Reference model architecture(s) for the ITS sector
ISO 14813-1:2015 Part 1: ITS service domains, service groups and services
ISO 14813-5:2010 Part 5: Requirements for architecture description in ITS standards
ISO 14813-6:2009 Part 6: Data presentation in ASN.1
ISO 14814:2006 Road transport and traffic telematics – Automatic vehicle and equipment identification – Reference architecture and terminology
ISO 14815:2005 Road transport and traffic telematics – Automatic vehicle and equipment identification – System specifications
ISO 14816:2005 Road transport and traffic telematics – Automatic vehicle and equipment identification – Numbering and data structure
ISO 14817 Intelligent transport systems – ITS central data dictionaries
ISO 14817-1:2015 Part 1: Requirements for ITS data definitions
ISO 14817-2:2015 Part 2: Governance of the Central ITS Data Concept Registry
ISO 14817-3:2017 Part 3: Object identifier assignments for ITS data concepts
ISO 14819 Intelligent transport systems – Traffic and travel information messages via traffic message coding
ISO 14819-1:2013 Part 1: Coding protocol for Radio Data System – Traffic Message Channel (RDS-TMC) using ALERT-C
ISO 14819-2:2013 Part 2: Event and information codes for Radio Data System – Traffic Message Channel (RDS-TMC) using ALERT-C
ISO 14819-3:2013 Part 3: Location referencing for Radio Data System – Traffic Message Channel (RDS-TMC) using ALERT-C
ISO 14819-6:2006 Part 6: Encryption and conditional access for the Radio Data System – Traffic Message Channel ALERT C coding
ISO 14823:2017 Intelligent transport systems – Graphic data dictionary
ISO 14825:2011 Intelligent transport systems – Geographic Data Files (GDF) – GDF5.0
ISO 14827 Transport information and control systems – Data interfaces between centres for transport information and control systems
ISO 14827-1:2005 Part 1: Message definition requirements
ISO 14827-2:2005 Part 2: DATEX-ASN
ISO/IEC 14833:1996 Information technology – Data interchange on 12,7 mm 128-track magnetic tape cartridges – DLT 3 format
ISO/IEC 14834:1996 Information technology – Distributed Transaction Processing – The XA Specification
ISO 14837 Mechanical vibration – Ground-borne noise and vibration arising from rail systems
ISO 14837-1:2005 Part 1: General guidance
ISO/TS 14837-32:2015 Part 32: Measurement of dynamic properties of the ground
ISO 14839 Mechanical vibration – Vibration of rotating machinery equipped with active magnetic bearings
ISO 14839-1:2002 Part 1: Vocabulary
ISO 14839-2:2004 Part 2: Evaluation of vibration
ISO 14839-3:2006 Part 3: Evaluation of stability margin
ISO 14839-4:2012 Part 4: Technical guidelines
ISO/IEC 14840:1996 Information technology – 12,65 mm wide magnetic tape cartridge for information interchange – Helical scan recording – Data-D3-1 format
ISO/IEC 14841:1996 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Specification, functional model and information flows – Call offer supplementary service
ISO/IEC 14842:1996 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Specification, functional model and information flows – Do not disturb and do not disturb override supplementary services
ISO/IEC 14843:2003 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Inter-exchange signalling protocol – Call Offer supplementary service
ISO/IEC 14844:2003 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Inter-exchange signalling protocol – Do Not Disturb and Do Not Disturb Override supplementary services
ISO/IEC 14845:1996 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Specification, functional model and information flows – Call intrusion supplementary service
ISO/IEC 14846:2003 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Inter-exchange signalling protocol – Call Intrusion supplementary service
ISO 14855 Determination of the ultimate aerobic biodegradability of plastic materials under controlled composting conditions – Method by analysis of evolved carbon dioxide
ISO 14855-1:2012 General method
ISO 14855-2:2007 Gravimetric measurement of carbon dioxide evolved in a laboratory-scale test
ISO/IEC 14863:1996 Information technology – System-Independent Data Format (SIDF)
ISO/TR 14873:2013 Information and documentation – Statistics and quality issues for web archiving
ISO 14879 Implants for surgery – Total knee-joint prostheses
ISO 14879-1:2000 Part 1: Determination of endurance properties of knee tibial trays
ISO 14880 Optics and photonics – Microlens arrays
ISO 14880-1:2016 Part 1: Vocabulary and general properties
ISO 14880-2:2006 Part 2: Test methods for wavefront aberrations
ISO 14880-3:2006 Part 3: Test methods for optical properties other than wavefront aberrations
ISO 14880-4:2006 Part 4: Test methods for geometrical properties
ISO/TR 14880-5:2010 Part 5: Guidance on testing
ISO 14881:2001 Integrated optics – Interfaces – Parameters relevant to coupling properties
ISO/IEC 14882:2020 Programming languages – C++
ISO/IEC 14888 Information technology – Security techniques – Digital signatures with appendix
ISO/IEC 14888-1:2008 Part 1: General
ISO/IEC 14888-2:2008 Part 2: Integer factorization based mechanisms
ISO/IEC 14888-3:2016 Part 3: Discrete logarithm based mechanisms
ISO 14889:2013 Ophthalmic optics – Spectacle lenses – Fundamental requirements for uncut finished lenses
ISO/TS 14904:2002 Road transport and traffic telematics – Electronic fee collection (EFC) – Interface specification for clearing between operators
ISO 14906:2011 Electronic fee collection – Application interface definition for dedicated short-range communication
ISO/TS 14907 Electronic fee collection – Test procedures for user and fixed equipment
ISO/TS 14907-1:2015 Part 1: Description of test procedures
ISO/TS 14907-2:2016 Part 2: Conformance test for the on-board unit application interface
ISO/IEC 14908 Information technology – Control network protocol
ISO/IEC 14908-1:2012 Part 1: Protocol stack
ISO/IEC 14908-2:2012 Part 2: Twisted pair communication
ISO/IEC 14908-3:2012 Part 3: Power line channel specification
ISO/IEC 14908-4:2012 Part 4: IP communication
ISO 14915 Software ergonomics for multimedia user interfaces
ISO 14915-1:2002 Part 1: Design principles and framework
ISO 14915-2:2003 Part 2: Multimedia navigation and control
ISO 14915-3:2002 Part 3: Media selection and combination
ISO 14917:2017 Thermal spraying – Terminology, classification
ISO 14949:2001 Implants for surgery – Two-part addition-cure silicone elastomers
ISO 14963:2003 Mechanical vibration and shock – Guidelines for dynamic tests and investigations on bridges and viaducts
ISO 14971:2007 Medical devices – Application of risk management to medical devices
ISO 14972:1998 Sterile obturators for single use with over-needle peripheral intravascular catheters
ISO 14975:2000 Surface chemical analysis – Information formats
ISO 14976:1998 Surface chemical analysis – Data transfer format
ISO/IEC 14977:1996 Information technology – Syntactic metalanguage – Extended BNF
ISO 14978:2006 Geometrical product specifications (GPS) – General concepts and requirements for GPS measuring equipment
ISO 14982:1998 Agricultural and forestry machinery – Electromagnetic compatibility – Test methods and acceptance criteria

Notes

References

External links
International Organization for Standardization
ISO 14000

International Organization for Standardization
https://en.wikipedia.org/wiki/Pick%20operating%20system
Pick operating system
The Pick operating system (often called just "the Pick system" or simply "Pick") is a demand-paged, multiuser, virtual memory, time-sharing computer operating system based around a MultiValue database. Pick is used primarily for business data processing. It is named after one of its developers, Richard A. (Dick) Pick. The term "Pick system" has also come to be used as the general name of all operating environments which employ this multivalued database and have some implementation of Pick/BASIC and ENGLISH/Access queries. Although Pick started on a variety of minicomputers, the system and its various implementations eventually spread to a large assortment of microcomputers, personal computers and mainframe computers.

Overview
The Pick operating system consists of a database, dictionary, query language, procedural language (PROC), peripheral management, multi-user management, and a compiled BASIC programming language. The database is a 'hash-file' data management system: a collection of dynamic associative arrays which are organized together, linked, and controlled using associative files as a database management system. Being hash-file oriented, Pick provides efficient data access times. Originally, all data structures in Pick were hash-files (at the lowest level), meaning records are stored as associated couplets of a primary key and a set of values. Today a Pick system can also natively access host files in Windows or Unix in any format.

A Pick database is divided into one or more accounts, master dictionaries, dictionaries, files, and sub-files, each of which is a hash-table-oriented file. These files contain records made up of fields, sub-fields, and sub-sub-fields. In Pick, records are called items, fields are called attributes, and sub-fields are called values or sub-values (hence the present-day label "multivalued database"). All elements are variable-length, with fields and values marked off by special delimiters, so that any file, record, or field may contain any number of entries of the lower level of entity. As a result, a Pick item (record) can be one complete entity (one entire invoice, purchase order, sales order, etc.), or can be like a file on most conventional systems. Entities that are stored as 'files' in other commonplace systems (e.g. source programs and text documents) must be stored as records within files on Pick.

The file hierarchy is roughly equivalent to the common Unix-like hierarchy of directories, sub-directories, and files. The master dictionary is similar to a directory in that it stores pointers to other dictionaries, files and executable programs. The master dictionary also contains the command-line language. All files (accounts, dictionaries, files, sub-files) are organized identically, as are all records. This uniformity is exploited throughout the system, both by system functions and by the system administration commands. For example, the 'find' command will find and report the occurrence of a word or phrase in a file, and can operate on any account, dictionary, file or sub-file.

Each record must have a unique primary key which determines where in a file that record is stored. To retrieve a record, its key is hashed, and the resultant value specifies which of a set of discrete "buckets" (called "groups") to look in for the record. Within a bucket, records are scanned sequentially. Therefore, most records (e.g. a complete document) can be read using one single disk-read operation.
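A minimal Python sketch of these two ideas (delimiter-separated multivalued items, and hashing a primary key to a group that is then scanned sequentially) is shown below. It is an illustration only, not Pick's actual implementation: the delimiter code points (attribute mark 254, value mark 253, sub-value mark 252) are the conventional MultiValue marks, but the hash function, in-memory group layout, and all names here are simplified stand-ins.

```python
# Illustrative model of a Pick-style hashed file. Not the real on-disk
# format: the modulo hash and in-memory "groups" are simplified stand-ins.

AM = chr(254)   # attribute mark: separates attributes (fields) in an item
VM = chr(253)   # value mark: separates values within an attribute
SVM = chr(252)  # sub-value mark: separates sub-values within a value

def parse_item(item: str):
    """Split a delimited item into attributes -> values -> sub-values."""
    return [[value.split(SVM) for value in attr.split(VM)]
            for attr in item.split(AM)]

class HashedFile:
    def __init__(self, groups: int = 7):
        # Each "group" (bucket) holds (key, item) pairs scanned sequentially.
        self.groups = [[] for _ in range(groups)]

    def _group(self, key: str) -> list:
        # Hash the primary key to pick a group; Pick's real hash differs.
        return self.groups[sum(key.encode()) % len(self.groups)]

    def write(self, key: str, item: str):
        group = self._group(key)
        group[:] = [(k, v) for k, v in group if k != key]  # replace if present
        group.append((key, item))

    def read(self, key: str):
        # One hash plus a sequential scan of a single group: the reason a
        # whole logical document is typically a single disk read in Pick.
        for k, item in self._group(key):
            if k == key:
                return item
        return None

invoices = HashedFile()
invoices.write("INV1001",
               "ACME Corp" + AM + "widget" + VM + "gadget"
               + AM + "2 boxes" + SVM + "10 loose")
print(parse_item(invoices.read("INV1001")))
# [[['ACME Corp']], [['widget'], ['gadget']], [['2 boxes', '10 loose']]]
```

Even in this toy model the design point is visible: locating an item costs one hash plus a scan of a single group, regardless of how many attributes, values, or sub-values the item itself contains.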
The same hashing method is used to write the record back to its correct "bucket". In its initial implementation, Pick records were limited to 32 KB in total (when a 10 MB hard disk cost US$5000), although this limit was removed in the 1980s. Files can contain an unlimited number of records, but retrieval efficiency is determined by the number of records relative to the number of buckets allocated to the file. Each file may be initially allocated as many buckets as required, although changing this extent later may (for some file types) require the file to be quiescent. All modern multi-value databases have a special file-type which changes extent dynamically as the file is used. These use a technique called linear hashing, whose cost is proportional to the change in file size, not (as in typical hashed files) the file size itself. All files start as a contiguous group of disk pages, and grow by linking additional "overflow" pages from unused disk space. Initial Pick implementations had no index structures, as they were not deemed necessary. Around 1990, a B-tree indexing feature was added. This feature makes secondary key look-ups operate much like keyed inquiries of any other database system, requiring at least two disk reads (a key read, then a data-record read). Pick data files usually have two levels. The first level is known as the "dictionary" level and is mandatory. It contains the dictionary items, optional items that serve as definitions for the names and structure of the items in the data fork, used in reporting; and the data-level identifier, a pointer to the second or "data" level of the file. Files created with only one level are, by default, dictionary files. Some versions of the Pick system allow multiple data levels to be linked to one dictionary level file, in which case there would be multiple data-level identifiers in the dictionary file. A Pick database has no data typing, since all data is stored as characters, including numbers (which are stored as character decimal digits). Data integrity, rather than being controlled by the system, is controlled by the applications and the discipline of the programmers. Because a logical document in Pick is not fragmented (as it would be in SQL), intra-record integrity is automatic. In contrast to many SQL database systems, Pick allows for multiple, pre-computed field aliases. For example, a date field may have an alias definition for the format "12 Oct 1999", and another alias formatting that same date field as "10/12/99". File cross-connects or joins are handled as a synonym definition of the foreign key. A customer's data, such as name and address, are "joined" from the customer file into the invoice file via a synonym definition of "customer number" in the "invoice" dictionary. Pick record structure favors a non-first-normal-form composition, where all of the data for an entity is stored in a single record, obviating the need to perform joins. Managing large, sparse data sets in this way can result in efficient use of storage space. This is why these databases are sometimes called NF2 or NF-squared databases.
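The synonym-style join just described can be sketched in the same spirit. The file contents and field names below are invented; the point is that the "join" lives in the dictionary as a calculated lookup into another file, not in the query itself.

    # Illustrative sketch of a dictionary "synonym" join (all names invented).
    customers = {"C042": {"NAME": "Acme Ltd", "CITY": "Bristol"}}
    invoices = {"INV1001": {"CUST.NO": "C042", "AMOUNT": "150.00"}}

    # The invoice dictionary defines CUST.NAME as a calculated lookup that
    # follows the foreign key ("customer number") into the customer file.
    invoice_dictionary = {
        "CUST.NO": lambda item: item["CUST.NO"],
        "AMOUNT": lambda item: item["AMOUNT"],
        "CUST.NAME": lambda item: customers[item["CUST.NO"]]["NAME"],
    }

    # An ENGLISH-style report would evaluate dictionary definitions per item:
    item = invoices["INV1001"]
    for field in ("CUST.NO", "CUST.NAME", "AMOUNT"):
        print(field, "=", invoice_dictionary[field](item))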
History Pick was originally implemented as the Generalized Information Retrieval Language System (GIRLS) on an IBM System/360 in 1965 by Don Nelson and Richard (Dick) Pick at TRW, whose government contract for the Cheyenne helicopter project required developing a database. It was intended to be used by the U.S. Army to control the inventory of Cheyenne helicopter parts. Pick was subsequently commercially released in 1973 by Microdata Corporation (and its British distributor CMC) as the Reality Operating System, now supplied by Northgate Information Solutions. McDonnell Douglas bought Microdata in 1981. Originally on the Microdata implementation, and subsequently implemented on all Pick systems, a BASIC language called Data/BASIC with numerous syntax extensions for smart terminal interface and database operations was the primary programming language for applications. A PROC procedure language was provided for executing scripts. A SQL-style language called ENGLISH allowed database retrieval and reporting, but not updates (although later, the ENGLISH command "REFORMAT" allowed updates on a batch basis). ENGLISH did not fully allow manipulating the 3-dimensional multivalued structure of data records. Nor did it directly provide common relational capabilities such as joins, because powerful data dictionary redefinitions for a field allowed joins via the execution of a calculated lookup in another file. The system included a spooler. A simple text editor for file-system records was provided, but the editor was only suitable for system maintenance and could not lock records, so most applications were written with other tools such as Batch, RPL, or the BASIC language to ensure data validation and allow record locking. By the early 1980s observers saw the Pick operating system as a strong competitor to Unix. BYTE in 1984 stated that "Pick is simple and powerful, and it seems to be efficient and reliable, too ... because it works well as a multiuser system, it's probably the most cost-effective way to use an XT". Dick Pick founded Pick & Associates, later renamed Pick Systems, then Raining Data, then TigerLogic, and finally Rocket Software. He licensed "Pick" to a large variety of manufacturers and vendors who have produced different "flavors" of Pick. The database flavors sold by TigerLogic were D3, mvBase, and mvEnterprise. Those previously sold by IBM under the "U2" umbrella are known as UniData and UniVerse. Rocket Software purchased IBM's U2 family of products in 2010 and TigerLogic's D3 and mvBase family of products in 2014. In 2021, Rocket acquired OpenQM and jBASE as well. Dick Pick died of stroke complications in October 1994. Pick Systems often became tangled in licensing litigation, and devoted relatively little effort to marketing and improving its software. Subsequent ports of Pick to other platforms generally offered the same tools and capabilities for many years, usually with relatively minor improvements and simply renamed (for example, Data/BASIC became Pick/BASIC and ENGLISH became ACCESS). Licensees often developed proprietary variations and enhancements (for example, Microdata created their own input processor called ScreenPro). Derivative and related products The Pick database was licensed to roughly three dozen licensees between 1978 and 1984. Application-compatible implementations evolved into derivatives and also inspired similar systems. Reality – The first implementation of the Pick database was on a Microdata platform using firmware and called Reality. The first commercial release was in 1973. Microdata acquired CMC Ltd., which was based in Hemel Hempstead, England, in the early 1980s. The Microdata implementations ran in firmware, so each upgrade had to be accompanied by a new configuration chip. Microdata itself was eventually bought by McDonnell Douglas Information Systems.
Pick and Microdata sued each other for the right to market the database, the final judgment being that they both had the right. In addition to the Reality, Sequoia, and Pegasus series of computers, Microdata and CMC Ltd. sold the Sequel (Sequoia) series, which was a much larger class able to handle over 1000 simultaneous users. The earlier Reality minicomputers were known to handle well over 200 simultaneous users, although performance was slow and this was above the official limit. Pegasus systems superseded Sequoia and could handle even more simultaneous users than their predecessors. The modern version of this original Pick implementation is owned and distributed by Northgate Information Solutions as Reality. Ultimate – The second implementation of the Pick database was developed in about 1978 by a New Jersey company called The Ultimate Corp, run by Ted Sabarese. Like the earlier Microdata port, this was a firmware implementation, with the Pick instruction set in firmware and the monitor in assembly code on a Honeywell Level 6 machine. The system had dual personalities in that the monitor/kernel functions (mostly hardware I/O and scheduling) were executed by the native Honeywell Level 6 instruction set. When the monitor selected the next user for activation, control was passed to the Honeywell WCS (writable control store) to execute Pick assembler code (implemented in microcode) for the selected process. When the user's time slice expired, control was passed back to the kernel running the native Level 6 instruction set. Ultimate took this concept further with the DEC LSI/11 family of products by implementing a co-processor in hardware (bit-slice, firmware driven). Instead of a single processor with a WCS microcode-enhanced instruction set, this configuration used two independent but cooperating CPUs. The LSI11 CPU executed the monitor functions and the co-processor executed the Pick assembler instruction set. The efficiencies of this approach resulted in a 2× performance improvement. The co-processor concept was used again to create 5×, 7×, and dual-7× versions for Honeywell Level 6 systems. Dual-ported memory with private buses to the co-processors was used to increase performance of the LSI11 and Level 6 systems. Another version used a DEC LSI-11 for the IOP and a 7X board. Ultimate enjoyed moderate success during the 1980s, and even included an implementation running as a layer on top of DEC VAX systems, the 750, 780, 785, and later the MicroVAX. Ultimate also had versions of the Ultimate Operating System running on IBM 370 series systems (under VM and native) and also on the 9370 series computers. Ultimate was renamed Allerion, Inc., before liquidation of its assets. Most assets were acquired by Groupe Bull, and the remaining business consisted mostly of maintaining existing hardware. Bull had its own problems, and in approximately 1994 the US maintenance operation was sold to Wang. Prime INFORMATION – Devcom, a Microdata reseller, wrote a Pick-style database system called INFORMATION in FORTRAN and assembler in 1979 to run on Prime Computer 50-series systems. It was then sold to Prime Computer and renamed Prime INFORMATION. It was subsequently sold to VMark Software Inc. This was the first of the guest operating environment implementations. INFO/BASIC, a variant of Dartmouth BASIC, was used for database applications. UniVerse – Another implementation of the system, called UniVerse, was created by VMark Software and operated under Unix and Microsoft Windows.
This was the first one to incorporate the ability to emulate other implementations of the system, such as Microdata's Reality Operating System and Prime INFORMATION. Originally running on Unix, it was later also made available for Windows. It is now owned by Rocket Software. (The systems developed by Prime Computer and VMark are now owned by Rocket Software and referred to as "U2".) UniData – Very similar to UniVerse, but UniData had facilities to interact with other Windows applications. It is also owned and distributed by Rocket Software. PI/open – Prime Computer rewrote Prime INFORMATION in C for the Unix-based systems it was selling, calling it PI+. It was then ported to other Unix systems offered by other hardware vendors and renamed PI/open. Applied Digital Data Systems (ADDS) – This was the first implementation to be done in software only, so upgrades were accomplished by a tape load rather than a new chip. The "Mentor" line was initially based on the Zilog Z-8000 chipset, and this port set off a flurry of other software implementations across a wide array of processors with a large emphasis on the Motorola 68000. Fujitsu Microsystems of America – Another software implementation, existing in the late 1980s. Fujitsu Microsystems of America was acquired by Alpha Microsystems on October 28, 1989. Pyramid – Another software implementation, existing in the 1980s. General Automation "Zebra" – Another software implementation, existing in the 1980s. Altos – A software implementation on an 8086 chipset platform, launched around 1983. WICAT/Pick – Another software implementation, existing in the 1980s. Sequoia – Another software implementation, existing from 1984. Sequoia was best known for its fault-tolerant multi-processor model, which support staff could dial into, with the user's permission, once the user had switched terminal zero to remote using the key on the system console. The user could then watch what the dialed-in support person did on terminal 0, a printer with a keyboard. Pegasus came out in 1987. The Enterprise Systems business unit (which was the unit that sold Pick) was sold to General Automation in 1996/1997. Revelation – In 1984, Cosmos released a Pick-style database called Revelation, later Advanced Revelation, for DOS on the IBM PC. Advanced Revelation is now owned by Revelation Technologies, which publishes a GUI-enabled version called OpenInsight. jBASE – jBASE was released in 1991 by a small company of the same name in Hemel Hempstead, England. Written by former Microdata engineers, jBASE emulates all implementations of the system to some degree. jBASE compiles applications to native machine code form, rather than to an intermediate byte code. In 2015, cloud solutions provider Zumasys in Irvine, California, acquired the jBASE distribution rights from Mpower1 as well as the intellectual property from Temenos Group. On 14 Oct 2021, Zumasys announced they had sold their databases and tools, including jBASE, to Rocket Software. UniVision – UniVision was a Pick-style database designed as a replacement for the Mentor version, but with extended features, released in 1992 by EDP in Sheffield, England. OpenQM – The only MultiValue database product available both as a fully supported non-open source commercial product and in open source form under the General Public License. OpenQM is available from its exclusive worldwide distributor, Zumasys. Caché – In 2005 InterSystems, the maker of the Caché database, announced support for a broad set of MultiValue extensions, Caché for MultiValue.
ONware – ONware equips MultiValue applications with the ability to use common databases such as Oracle and SQL Server. Using ONware, MultiValue applications can be integrated with relational, object, and object-relational applications. D3 – Pick Systems ported the Pick operating system to run as a database product utilizing host operating systems such as Unix, Linux, or Windows servers, with the data stored within the file system of the host operating system. Previous Unix or Windows versions had to run in a separate partition, which made interfacing with other applications difficult. The D3 releases opened the possibility of integrating internet access to the database or interfacing to popular word processing and spreadsheet applications, which has been successfully demonstrated by a number of users. The D3 family of databases and related tools is owned and distributed by Rocket Software. Through the implementations above, and others, Pick-like systems became available as database/programming/emulation environments running under many variants of Unix and Microsoft Windows. Over the years, many important and widely used applications have been written using Pick or one of the derivative implementations. In general, the end users of these applications are unaware of the underlying Pick implementation. Criticisms and comparisons Run-time environment Native Pick did not require an underlying operating system (OS) to run. This changed with later implementations, when Pick was re-written to run on various host OSs (Windows, Linux, Unix, etc.). While the host OS provided access to hardware resources (processor, memory, storage, etc.), Pick had internal processes for memory management. The object-oriented Caché addressed some of these problems. Networking in mvBase was not possible without an accompanying application running in the host OS that could manage network connections via TCP ports and relay them to Pick internal networking (via serial connection). Credentials and security Individual user accounts must be created within the Pick OS, and cannot be tied to an external source (such as local accounts on the host OS, or LDAP). User passwords are stored within the Pick OS as an encrypted value. The encrypted password can be "cracked" via brute-force methods, but cracking it requires system access and Pick programming skills as part of the attack vector. The Rocket D3 implementation supports SSL file encryption. Expertise and support Companies looking to hire developers and support personnel for MultiValue-based (Pick-based) systems recognize that although developers typically do not learn the environment in college and university courses, they can be productive quickly with some mentoring and training. Due to the efficient design and nature of the programming language (a variant of BASIC), the learning curve is generally considered low. Pick products such as D3, UniVerse, UniData, jBASE, Revelation, MVON, Caché, OpenQM, and Reality are still supported globally via well-established distribution channels and resellers. The mvdbms Google Group is a useful place to start when looking for resources. (See mvdbms on Google Groups) See also MUMPS, the predecessor of Caché References Bibliography The REALITY Pocket Guide; Jonathan E. Sisk; Irvine, CA; JES & Associates, Inc.; 1981 The PICK Pocket Guide; Jonathan E. Sisk; Irvine, CA; Pick Systems; 1982 Exploring The Pick Operating System; Jonathan E. Sisk; Steve VanArsdale; Hasbrouck Heights, N.J.; Hayden Book Co. 1985. The Pick Pocket Guide; Jonathan E.
Sisk; Desk reference ed; Hasbrouck Heights, N.J.; Hayden Book Co. 1985. The Pick Perspective; Ian Jeffrey Sandler; Blue Ridge Summit, PA; TAB Professional and Reference Books; 1989. Part of The Pick Library Series, Edited by Jonathan E. Sisk Pick for professionals: advanced methods and techniques; Harvey Rodstein; Blue Ridge Summit, PA; TAB Professional and Reference Books; 1990. Part of The Pick Library Series, Edited by Jonathan E. Sisk Encyclopedia PICK (EPICK); Jonathan E. Sisk; Irvine, CA; Pick Systems; 1992 Le Système d'exploitation PICK; Malcolm Bull; Paris: Masson, 1989. The Pick operating system; Joseph St John Bate; Mike Wyatt; New York: Van Nostrand Reinhold, 1986. The Pick operating system; Malcolm Bull; London; New York: Chapman and Hall, 1987. Systeme pick; Patrick Roussel, Pierre Redoin, Michel Martin; Paris: CEdi Test, 1988. Advanced PICK et UNIX: la nouvelle norme informatique; Bruno Beninca; Aulnay-sous-Bois, Seine-Saint-Denis; Relais Informatique International, 1990. Le systeme PICK: mode d'emploi d'un nouveau standard informatique; Michel Lallement, Jeanne-Françoise Beltzer; Aulnay-sous-Bois, Seine-Saint-Denis; Relais Informatique International, 1987. The Pick operating system: a practical guide; Roger J Bourdon; Wokingham, England; Reading, Mass.: Addison-Wesley, 1987. Le Système d'exploitation: réalités et perspectives; Bernard de Coux; Paris: Afnor, 1988. Pick BASIC: a programmer's guide; Jonathan E. Sisk; Blue Ridge Summit, PA: TAB Professional and Reference Books, 1987. Part of The Pick Library Series, Edited by Jonathan E. Sisk Pick BASIC: a reference guide; Linda Mui; Sebastopol, CA: O'Reilly & Associates, 1990. Programming with IBM PC Basic and the Pick database system; Blue Ridge Summit, PA: TAB Books, 1990. Part of The Pick Library Series, Edited by Jonathan E. Sisk An overview of PICK system; Shailesh Kamat; 1993. Pick: A Multilingual Operating System; Charles M. Somerville; Computer Language Magazine, May 1987, p. 34. Encyclopedia Pick; Jonathan E. Sisk; Pick Systems, June 1991 External links Photo of Dick Pick in his anti-gravity boots on the cover of Computer Systems News, 1983. Pick/BASIC: A Programmer's Guide – the full text of the first and most widely read textbook by Pick educator and author Jonathan E. Sisk. Life the Universe and Everything: introduction to and online training course in Universe developed by Pick software engineer Manny Neira. Video: "History of the PICK System" made in 1990 Pick Publications Database 1987 Interview with Dick Pick in the Pick Pavilion at COMDEX 1990 Interview with Dick Pick in the Pick Pavilion at COMDEX 1990 Interview with Jonathan Sisk in the Pick Pavilion at COMDEX 1991 Pick Rap Show at COMDEX, co-written by Jonathan Sisk and John Treankler 1992 Video of Dick and Zion Pick, who appeared in the Ross Perot campaign rally - includes entire unedited Perot speech An insightful early history of the Pick System, by Chandru Murthi, who was there at the time 1984 PC Magazine article "Choosing the Pick of the Litter", by Jonathan E. Sisk and Steve VanArsdale Database Management Approach to Operating Systems Development, by Richard A.
Pick; Chapter 5 of New Directions for Database Systems, Gad Ariav and James Clifford, editors. Doing More With Less Hardware – Computer History Museum piece on Pick 1965 software Data processing Legacy systems Proprietary database management systems Proprietary operating systems Assembly language software Time-sharing operating systems X86 operating systems 68k architecture
16155375
https://en.wikipedia.org/wiki/Latif%20Khosa
Latif Khosa
Muhammad Latif Khosa is a politician representing the Pakistan People's Party who served as Governor of Punjab from 2011 to 2013. He is a lawyer by profession, having been a senior advocate of the Supreme Court of Pakistan, and is considered an authoritative legal and constitutional expert in the country. A former senator and a former Attorney General of Pakistan, Latif Khosa was appointed Governor of Punjab by the President of Pakistan after the murder of Governor Salman Taseer on 11 January 2011. He had been appointed attorney general on 19 August 2008. Khosa was also a member of the Pakistan Bar Council for three terms (1990-2005) and served as Chairman of its Executive Committee (CEC). He co-authored an electoral fraud report with former prime minister Benazir Bhutto shortly before her assassination in December 2007, and was one of the late Benazir Bhutto's top aides. He belongs to the Khosa tribe, a Baloch tribe. Latif Khosa is the founder of Khosa Law Chambers (KLC), one of the top law firms in Pakistan, based in Lahore. KLC also has offices in Karachi and Islamabad. Education After receiving his early education in Dera Ghazi Khan, Punjab, he joined Government College, Lahore, where he completed his undergraduate degree. During his stay at Government College, he showed excellent results in debate, sports and academics. After graduation, he joined the Punjab University Law College at Lahore, where he became President of the Punjab University Law College Students Union and the editor of the college magazine, "Al-Mizan". He was declared the best English debater of 1967 in the Punjab University after winning the "Krishan Kishore Grover Goodwill Gold Medal Declamation Contest". He completed his LL.B in 1967 with top honours for best all-round activities in academics, sports and debates. Professional career Latif Khosa joined the legal profession at Lahore and enrolled as an advocate of the subordinate courts in 1968. In 1970, he became an advocate of the High Court, and in 1980 an advocate of the Supreme Court. In 1981, 1983 and 1985, Latif Khosa was elected President of the High Court Bar Association Multan. In 1990, Khosa was elected a member of the Pakistan Bar Council for the first time, and subsequently for two more terms in 1995 and 2000. Political career He has held the offices of Senator (2003-2009), Attorney General of Pakistan (2008-2009), Advisor (2010), and Governor of Punjab (2011-2013). Khosa was also instrumental in the 2007 lawyers' movement for the restoration of dozens of senior judges, including Iftikhar Muhammad Chaudhry, sacked by former military ruler Pervez Musharraf in 2007. Khosa has been invited to lecture on the subjects of politics, diplomacy, foreign relations and international law at universities across the globe. Senator Khosa appeared before the acting Chief Justice Javaid Iqbal in a hearing over the handling of the Iftikhar Chaudhry sacking issue. Governor Punjab He was selected as Governor of Punjab in January 2011. His selection as Governor came after the PPP acquiesced to the PML-N's 10-point agenda to improve the relationship between the PML-N's Punjab government and the PPP federal government.
Khosa was instrumental in the smooth running of the Punjab Government during his governorship (2011-2013), in spite of issues between the two mainstream political parties, the PPP (Pakistan People's Party) and the PML-N (Pakistan Muslim League-Nawaz). Secretary General PPP Khosa was selected as the Secretary General of the Pakistan Peoples Party in 2013 and remained in that post until 2017. He is also a member of the CEC of the Pakistan People's Party. He is the Central Chairman of the party's lawyers' wing, the Peoples Lawyers Forum (PLF), and of the Election Monitoring Cell of the Pakistan Peoples Party. Attorney general Soon after the PPP came to power in the 2008 elections, Khosa was appointed Attorney-General for Pakistan, serving until 2009. As attorney general, he represented Pakistan at various international forums and judicial conferences. Federal Minister for Information Technology & Communication He was also appointed Advisor on Information Technology/Minister Incharge in the cabinet of Prime Minister Yousaf Raza Gilani. On 20 July 2010, Advisor to Prime Minister Yousaf Raza Gilani on Information Technology/Minister Incharge Sardar Latif Khosa sent his resignation to the president after developing disagreements with the premier. Senator Khosa was selected as Senator from Punjab Province in 2003. His name was nominated by the late Mohtarma Benazir Bhutto. His term as senator ran until 2009. He was a member of the Senate committees on Foreign Affairs; Law, Justice and Human Rights; Government Assurances; Rules of Procedure and Privileges; the Senate House Committee; and the Devolution Process. Bar politics Khosa has been very active in bar politics, and his group has won important Supreme Court Bar Association and Pakistan Bar Council elections over the last 40 years. He was elected president of the High Court Bar three times (1981, 1983, 1985) and a member of the Pakistan Bar Council three times, 1990-2005 (15 years). Family Khosa's family includes former Governor of Punjab Sardar Zulfiqar Ali Khosa; former Chief Minister of Punjab Sardar Dost Muhammad Khosa; former Inspector General of Police Tariq Khosa; former Chief Secretary of Punjab Nasir Khosa; and former Supreme Court Chief Justice Asif Saeed Khosa. Khosa has four sons and three daughters. Three of his sons, Sardar Khurram Latif Khosa, Balakh Sher Khosa and Shahbaz Khosa, are advocates of the Supreme Court of Pakistan. Another son, Dr Faisal Khosa, is a renowned medical specialist based in the USA. Political offices held References https://www.geo.tv/latest/53701-ppp-decides-to-change-punjab-governor Pakistani lawyers Attorneys General of Pakistan Baloch people Pakistan Peoples Party politicians Living people Governors of Punjab, Pakistan People from Dera Ghazi Khan District Government College University, Lahore alumni Year of birth missing (living people) Chairmen of the Pakistan Bar Council Vice Chairmen of the Pakistan Bar Council
50961554
https://en.wikipedia.org/wiki/Safetica%20Technologies
Safetica Technologies
Safetica Technologies is a European data loss prevention (DLP) software vendor. Safetica DLP is designed for contextual protection of data at the endpoint level. It blocks erroneous or malicious actions which might lead to sensitive files leaving a company. Safetica can also identify costs wasted through low work productivity, ineffective use of software licenses, or unnecessary printing. Safetica Technologies began developing its endpoint DLP software in 2011, and Safetica is now used by organisations in over 110 countries worldwide. History The company's history goes back to 2007, when a group of security professionals led by the founder Jakub Mahdal started developing security solutions in the Czech Republic. In 2011 the company started using the name Safetica Technologies and introduced the first version of its product, Safetica DLP. Soon it partnered with the investor Ondřej Tomek, put $1,000,000 into research and development, and gained its first customers abroad. In 2013 Safetica started distribution to markets in Asia and the Middle East. In 2015 Safetica Technologies received a $1,500,000 investment for further product development, and Petr Žikeš became the new CEO. In 2016 the company partnered with the global antivirus company ESET to bundle its products with their antivirus products, meaning that Safetica is distributed through ESET's global sales network covering 180 countries. In November 2019, Safetica announced a partnership with Seclore, a provider of data-centric security software, to bring automated detection, protection, and tracking of sensitive information to enterprises. Products Safetica DLP Context- and content-aware protection for unstructured data (e.g. AutoCAD files) and structured data (e.g. credit card numbers) External device management (USB, CD/DVD, FireWire) Mobile device management and protection against the danger of loss and theft Encryption for computers and external devices Productivity management, website and application blocking, print control Safetica Auditor Audit of internal security File operations monitoring Employee productivity monitoring Safetica Mobile Company data protection on mobile devices Securing devices when lost Managing and configuring mobile devices References Companies based in Prague Software companies established in 2007 Computer security software companies Computer security companies Computer security software Software companies of the Czech Republic Czech brands Czech companies established in 2007
87858
https://en.wikipedia.org/wiki/Transputer
Transputer
The transputer is a series of pioneering microprocessors from the 1980s, featuring integrated memory and serial communication links, intended for parallel computing. They were designed and produced by Inmos, a semiconductor company based in Bristol, United Kingdom. For some time in the late 1980s, many considered the transputer to be the next great design for the future of computing. While Inmos and the transputer did not achieve this expectation, the transputer architecture was highly influential in provoking new ideas in computer architecture, several of which have re-emerged in different forms in modern systems. Background In the early 1980s, conventional central processing units (CPUs) appeared to reach a performance limit. Up to that time, manufacturing difficulties limited the amount of circuitry that could fit on a chip. Continued improvements in the fabrication process, however, removed this restriction. Within a decade, chips could hold more circuitry than the designers knew how to use. Traditional complex instruction set computer (CISC) designs were reaching a performance plateau, and it was not clear that it could be overcome. It seemed that the only way forward was to increase the use of parallelism, the use of several CPUs that would work together to solve several tasks at the same time. This depended on such machines being able to run several tasks at once, a process termed multitasking. This had generally been too difficult for prior CPU designs to handle, but more recent designs were able to accomplish it effectively. It was clear that in the future, this would be a feature of all operating systems (OSs). A side effect of most multitasking designs is that they often also allow the processes to be run on physically different CPUs, in which case it is termed multiprocessing. A low-cost CPU built for multiprocessing could allow the speed of a machine to be raised by adding more CPUs, potentially far more cheaply than by using one faster CPU design. The first transputer designs were due to computer scientist David May and telecommunications consultant Robert Milne. In 1990, May received an Honorary DSc from the University of Southampton, followed in 1991 by his election as a Fellow of The Royal Society and the award of the Patterson Medal of the Institute of Physics in 1992. Tony Fuge, then a leading engineer at Inmos, was awarded the Prince Philip Designers Prize in 1987 for his work on the T414 transputer. Design The transputer (the name deriving from "transistor" and "computer") was the first general-purpose microprocessor designed specifically to be used in parallel computing systems. The goal was to produce a family of chips ranging in power and cost that could be wired together to form a complete parallel computer. The name was selected to indicate the role the individual transputers would play: numbers of them would be used as basic building blocks, just as transistors had earlier. Originally the plan was to make the transputer cost only a few dollars per unit. Inmos saw them being used for practically everything, from operating as the main CPU for a computer to acting as a channel controller for disk drives in the same machine. In a traditional machine, the processor capability of a disk controller, for instance, would be idle when the disk was not being accessed. In contrast, in a transputer system, spare cycles on any of these transputers could be used for other tasks, greatly increasing the overall performance of the machines.
Even one transputer would have all the circuitry needed to work by itself, a feature more commonly associated with microcontrollers. The intent was to allow transputers to be connected together as easily as possible, with no need for a complex bus or motherboard. Power and a simple clock signal had to be supplied, but little else: random-access memory (RAM), a RAM controller, bus support and even a real-time operating system (RTOS) were all built in. Architecture The original transputer used a very simple and rather unusual architecture to achieve high performance in a small area. It used microcode as the main method to control the data path, but unlike other designs of the time, many instructions took only one cycle to execute. Instruction opcodes were used as the entry points to the microcode read-only memory (ROM) and the outputs from the ROM were fed directly to the data path. For multi-cycle instructions, while the data path was performing the first cycle, the microcode decoded four possible options for the second cycle. The decision as to which of these options would actually be used could be made near the end of the first cycle. This allowed for very fast operation while keeping the architecture generic. The clock rate of 20 MHz was quite high for the era and the designers were very concerned about the practicality of distributing such a fast clock signal on a board. A slower external clock of 5 MHz was used, and this was multiplied up to the needed internal frequency using a phase-locked loop (PLL). The internal clock actually had four non-overlapping phases and designers were free to use whichever combination of these they wanted, so it could be argued that the transputer actually ran at 80 MHz. Dynamic logic was used in many parts of the design to reduce area and increase speed. Unfortunately, these methods are difficult to combine with automatic test pattern generation scan testing, so they fell out of favour for later designs. Prentice-Hall published a book on the general principles of the transputer. Links The basic design of the transputer included serial links known as "os-links" that allowed it to communicate with up to four other transputers, each at 5, 10, or 20 Mbit/s – which was very fast for the 1980s. Any number of transputers could be connected together over links (which could run tens of metres) to form one computing farm. A hypothetical desktop machine might have two of the "low end" transputers handling input/output (I/O) tasks on some of their serial lines (hooked up to appropriate hardware) while they talked to one of their larger cousins acting as a CPU on another. There were limits to the size of a system that could be built in this fashion. Since each transputer was linked to another in a fixed point-to-point layout, sending messages to a more distant transputer required that messages be relayed by each chip in the line. This introduced a delay with every "hop" over a link, leading to long delays on large nets. To solve this problem, Inmos also provided a zero-delay switch that connected up to 32 transputers (or switches) into even larger networks. Booting A transputer could boot from memory, as is the case for most computers, but could also be booted over its network links. A special pin, BootFromROM, indicated which method the chip should use. If BootFromROM was asserted when the chip was reset, it would begin processing at the instruction two bytes from the top of memory, which was normally used to perform a backward jump into the boot code. If this pin was not asserted, the chip would instead wait for bytes to be received on any network link. The first byte to be received was the length of the code to follow. Following bytes were copied into low memory and then jumped into once that number of bytes had been received. The general concept for the system was to have one transputer act as the central authority for booting a system containing a number of connected transputers. The selected transputer would have BootFromROM permanently asserted, which would cause it to begin running a booter process from ROM on startup. The other transputers would have BootFromROM tied low, and would simply wait. The loader would boot the central transputer, which would then begin sending boot code to the other transputers in the network, and could customize the code sent to each one, for instance, sending a device driver to the transputer connected to the hard drives. The system also included the 'special' code lengths of 0 and 1, which were reserved for PEEK and POKE. This allowed inspection and changing of RAM in an unbooted transputer. After a peek (followed by a memory address) or a poke (with an address and a single word of data), the transputer would return to waiting for a bootstrap. This mechanism was generally used for debugging.
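The link-boot protocol just described lends itself to a short behavioural sketch. The Python below is purely illustrative: a real transputer performs this in hardware and microcode, and addresses and poked values are full words rather than the single bytes used here for brevity.

    # Behavioural sketch of a transputer waiting on its boot link (illustrative).
    # Control byte: 0 = poke, 1 = peek, anything greater = length of boot code.
    def await_boot(link, memory):
        while True:
            control = next(link)
            if control == 0:                  # poke: write a value into RAM
                address = next(link)
                memory[address] = next(link)  # a real poke writes a whole word
            elif control == 1:                # peek: report a value back up the link
                address = next(link)
                print("peek", address, "->", memory[address])
            else:                             # boot: copy code into low memory...
                code = bytes(next(link) for _ in range(control))
                memory[0:control] = code
                return code                   # ...and jump into it

    ram = bytearray(64)
    wire = iter([0, 5, 0xAB,                  # poke RAM[5] = 0xAB
                 1, 5,                        # peek RAM[5] and report it
                 3, 0x21, 0x24, 0xF2])        # boot with three bytes of code
    print("booted:", await_boot(wire, ram).hex())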
Scheduler Added circuitry scheduled traffic over the links. Processes waiting for communications would automatically pause while the networking circuitry finished its reads or writes. Other processes running on the transputer would then be given that processing time. The scheduler included two priority levels to improve real-time and multiprocessor operation. The same logical system was used to communicate between programs running on one transputer, implemented as virtual network links in memory. Programs asking for any input or output therefore automatically paused while the operation completed, a task that normally required an operating system to handle as the arbiter of hardware. Operating systems on the transputer did not need to handle scheduling; the chip could be considered to have an OS inside it.
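A rough analogue of this blocking channel communication can be given in Python using threads, with the caveat that the transputer did this in silicon with process switches measured in microseconds, not with operating-system threads. The queue below only approximates a channel.

    # Rough Python analogue of two processes joined by a blocking channel.
    import threading
    import queue

    # A Queue of size one approximates an occam channel; a true occam channel
    # is an unbuffered rendezvous between exactly two processes.
    channel = queue.Queue(maxsize=1)

    def producer():
        for value in range(3):
            channel.put(value)        # blocks while the channel is occupied
            print("sent", value)

    def consumer():
        for _ in range(3):
            print("received", channel.get())  # blocks until a value arrives

    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()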
Instruction set To include all this function on one chip, the transputer's core logic was simpler than most CPUs. While some have called it a reduced instruction set computer (RISC) due to its rather sparse nature, and because that was then a desirable marketing buzzword, it was heavily microcoded, had a limited register set, and had complex memory-to-memory instructions, all of which place it firmly in the CISC camp. Unlike register-heavy load/store RISC CPUs, the transputer had only three data registers, which behaved as a stack. In addition, a workspace pointer pointed to a conventional memory stack, easily accessible via the instructions Load Local and Store Local. This allowed for very fast context switching by simply changing the workspace pointer to the memory used by another process (a method used in a number of contemporary designs, such as the TMS9900). The three register stack contents were not preserved past certain instructions, like Jump, when the transputer could do a context switch. The transputer instruction set consisted of 8-bit instructions assembled from opcode and operand nibbles. The upper nibble contained the 16 possible primary instruction codes, making it one of the very few commercialized minimal instruction set computers. The lower nibble contained the one immediate constant operand, commonly used as an offset relative to the workspace (memory stack) pointer. Two prefix instructions allowed construction of larger constants by prepending their lower nibbles to the operands of following instructions. Further instructions were supported via the instruction code Operate (Opr), which decoded the constant operand as an extended zero-operand opcode, providing for almost endless and easy instruction set expansion as newer implementations of the transputer were introduced. The 16 'primary' one-operand instructions were j (jump), ldlp (load local pointer), pfix (prefix), ldnl (load non-local), ldc (load constant), ldnlp (load non-local pointer), nfix (negative prefix), ldl (load local), adc (add constant), call, cj (conditional jump), ajw (adjust workspace), eqc (equals constant), stl (store local), stnl (store non-local), and opr (operate). All these instructions take a constant representing an offset or an arithmetic value. If this constant was less than 16, the instruction coded to a single byte. The first 16 'secondary' zero-operand instructions (selected through the opr primary instruction) covered arithmetic and comparison (add, sub, diff, prod, gt, rev), byte and word addressing (lb, bsub, wsub), process control (startp, endp, gcall) and channel communication (in, out, outbyte, outword).
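A short encoder, with a simulation of the operand register to check it, illustrates the prefixing scheme described above. The pfix/nfix behaviour follows the published transputer model (each prefix ORs its nibble into the operand register and shifts it four bits left, nfix complementing first); the Python framing is, of course, only an illustration.

    # Building multi-nibble operands with pfix (0x2) and nfix (0x6).
    PFIX, NFIX, LDC = 0x2, 0x6, 0x4

    def encode(fn, operand):
        # Returns the byte sequence for instruction 'fn' with 'operand'.
        if 0 <= operand <= 15:
            return [fn << 4 | operand]          # fits in a single nibble
        if operand > 15:
            return encode(PFIX, operand >> 4) + [fn << 4 | (operand & 0xF)]
        return encode(NFIX, ~operand >> 4) + [fn << 4 | (operand & 0xF)]

    def decode(code):
        # Simulates the 32-bit operand register to check the round trip.
        oreg = 0
        for byte in code:
            fn, nibble = byte >> 4, byte & 0xF
            if fn == PFIX:
                oreg = ((oreg | nibble) << 4) & 0xFFFFFFFF
            elif fn == NFIX:
                oreg = (~(oreg | nibble) << 4) & 0xFFFFFFFF
            else:
                value = oreg | nibble
                if value & 0x80000000:          # interpret as signed 32-bit
                    value -= 1 << 32
                return fn, value

    print([hex(b) for b in encode(LDC, 310)])   # ['0x21', '0x23', '0x46']
    print(decode(encode(LDC, 310)))             # (4, 310)
    print(decode(encode(LDC, -1)))              # (4, -1)

Small positive operands thus cost one byte, while each extra four bits of magnitude adds one prefix byte, which is why workspace-relative offsets were kept small.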
To provide an easy means of prototyping, constructing and configuring multiple-transputer systems, Inmos introduced the TRAM (TRAnsputer Module) standard in 1987. A TRAM was essentially a building-block daughterboard comprising a transputer and, optionally, external memory and/or peripheral devices, with simple standardised connectors providing power, transputer links, clock and system signals. Various sizes of TRAM were defined, from the basic Size 1 TRAM (3.66 in by 1.05 in) up to Size 8 (3.66 in by 8.75 in). Inmos produced a range of TRAM motherboards for various host buses such as Industry Standard Architecture (ISA), MicroChannel, or VMEbus. TRAM links operated at 10 Mbit/s or 20 Mbit/s. Software Transputers were intended to be programmed using the programming language occam, based on the communicating sequential processes (CSP) process calculus. The transputer was built to run Occam specifically, more than contemporary CISC designs were built to run languages like Pascal or C. Occam supported concurrency and channel-based inter-process or inter-processor communication as a fundamental part of the language. With the parallelism and communications built into the chip and the language interacting with it directly, writing code for things like device controllers became trivial; even the most basic code could watch the serial ports for I/O, and would automatically sleep when there was no data. The initial Occam development environment for the transputer was the Inmos D700 Transputer Development System (TDS). This was an unorthodox integrated development environment incorporating an editor, compiler, linker and (post-mortem) debugger. The TDS was a transputer application written in Occam. The TDS text editor was notable in that it was a folding editor, allowing blocks of code to be hidden and revealed, to make the structure of the code more apparent. Unfortunately, the combination of an unfamiliar programming language and an equally unfamiliar development environment did nothing for the early popularity of the transputer. Later, Inmos would release more conventional Occam cross-compilers, the Occam 2 Toolsets. Implementations of more mainstream programming languages, such as C, FORTRAN, Ada and Pascal, were also later released by both Inmos and third-party vendors. These usually included language extensions or libraries providing, in a less elegant way, Occam-like concurrency and channel-based communication. The transputer's lack of support for virtual memory inhibited the porting of mainstream variants of the Unix operating system, though ports of Unix-like operating systems (such as Minix and Idris from Whitesmiths) were produced. An advanced Unix-like distributed operating system, HeliOS, was also designed specifically for multi-transputer systems by Perihelion Software. Implementations The first transputers were announced in 1983 and released in 1984. In keeping with their role as microcontroller-like devices, they included on-board RAM and a built-in RAM controller which enabled more memory to be added with no added hardware. Unlike other designs, transputers did not include I/O lines: these were to be added with hardware attached to the existing serial links. There was one 'Event' line, similar to a conventional processor's interrupt line. Treated as a channel, a program could 'input' from the event channel, and proceed only after the event line was asserted. All transputers ran from an external 5 MHz clock input; this was multiplied to provide the processor clock. The transputer did not include a memory management unit (MMU) or a virtual memory system. Transputer variants (except the cancelled T9000) can be categorised into three groups: the 16-bit T2 series, the 32-bit T4 series, and the 32-bit T8 series with 64-bit IEEE 754 floating-point support. T2: 16-bit The prototype 16-bit transputer was the S43, which lacked the scheduler and DMA-controlled block transfer on the links. At launch, the T212 and M212 (the latter with an on-board disk controller) were the 16-bit offerings. The T212 was available in 17.5 and 20 MHz processor clock speed ratings. The T212 was superseded by the T222, with on-chip RAM expanded from 2 KB to 4 KB, and, later, the T225. This added debugging-breakpoint support (by extending the instruction "J 0") plus some extra instructions from the T800 instruction set. Both the T222 and T225 ran at 20 MHz. T4: 32-bit At launch, the T414 was the 32-bit offering. Originally, the first 32-bit variant was to be the T424, but fabrication difficulties meant that this was redesigned as the T414 with 2 KB on-board RAM instead of the intended 4 KB. The T414 was available in 15 and 20 MHz varieties. The RAM was later reinstated to 4 KB on the T425 (in 20, 25, and 30 MHz varieties), which also added the J 0 breakpoint support and extra T800 instructions. The T400, released in September 1989, was a low-cost 20 MHz T425 derivative with 2 KB of RAM and two instead of four links, intended for the embedded systems market. T8: floating point The second-generation T800 transputer, introduced in 1987, had an extended instruction set. The most important addition was a 64-bit floating-point unit (FPU) and three added registers for floating point, implementing the IEEE 754-1985 floating-point standard. It also had 4 KB of on-board RAM and was available in 20 or 25 MHz versions. Breakpoint support was added in the later T801 and T805, the former featuring separate address and data buses to improve performance. The T805 was also later available as a 30 MHz part. An enhanced T810 was planned, which would have had more RAM, more and faster links, extra instructions, and improved microcode, but this was cancelled around 1990. Inmos also produced a variety of support chips for the transputer processors, such as the C004 32-way link switch and the C011 and C012 "link adapters" which allowed transputer links to be interfaced to an 8-bit data bus. T400 Part of the original Inmos strategy was to make CPUs so small and cheap that they could be combined with other logic in one device. Although systems on a chip (SoC), as they are commonly termed, are ubiquitous now, the concept was almost unheard of in the early 1980s.
Two projects were started in around 1983, the M212 and the TV-toy. The M212 was based on a standard T212 core with the addition of a disk controller for the ST 506 and ST 412 Shugart standards. TV-toy was to be the basis for a video game console and was a joint project between Inmos and Sinclair Research. The links in the T212 and T414/T424 transputers had hardware DMA engines so that transfers could happen in parallel with execution of other processes. A variant of the design, termed the T400, not to be confused with a later transputer of the same name, was designed in which the CPU handled these transfers. This reduced the size of the device considerably, since four link engines were approximately the same size as the whole CPU. The T400 was intended to be used as a core in what were then called systems on silicon (SOS) devices, now better known as system on a chip (SoC). It was this design that was to form part of TV-toy. The project was cancelled in 1985. T100 Although the prior SoC projects had had only limited success (the M212 was sold for a time), many designers still firmly believed in the concept, and in 1987 a new project, the T100, was started, combining an 8-bit version of the transputer CPU with configurable logic based on state machines. The transputer instruction set is based on 8-bit instructions and can easily be used with any word size which is a multiple of 8 bits. The target market for the T100 was to be bus controllers such as Futurebus, and an upgrade for the standard link adapters (C011 etc.). The project was stopped when the T840 (later to become the basis of the T9000) was started. TPCORE TPCORE is an implementation of the transputer, including the os-links, that runs in an FPGA. T9000 Inmos improved on the performance of the T8 series transputers with the introduction of the T9000 (code-named H1 during development). The T9000 shared most features with the T800, but moved several pieces of the design into hardware and added several features for superscalar support. Unlike the earlier models, the T9000 had a true 16 KB high-speed cache (using random replacement) instead of RAM, but also allowed it to be used as memory and included MMU-like functionality to handle all of this (termed the PMI). For more speed, the T9000 cached the top 32 locations of the stack, instead of three as in earlier versions. The T9000 used a five-stage pipeline for even more speed. An interesting addition was the grouper, which would collect instructions out of the cache and group them into larger packages of up to 8 bytes to feed the pipeline faster. Groups then completed in one cycle, as if they were single larger instructions working on a faster CPU. The link system was upgraded to a new 100 MHz mode, but unlike the prior systems, the links were no longer downwardly compatible. This new packet-based link protocol was called DS-Link, and later formed the basis of the IEEE 1355 serial interconnect standard. The T9000 also added link routing hardware called the VCP (Virtual Channel Processor), which changed the links from point-to-point to a true network, allowing for the creation of any number of virtual channels on the links. This meant programs no longer had to be aware of the physical layout of the connections. A range of DS-Link support chips were also developed, including the C104 32-way crossbar switch, and the C101 link adapter. Long delays in the T9000's development meant that the faster load/store designs were already outperforming it by the time it was to be released.
It consistently failed to reach its own performance goal of beating the T800 by a factor of ten. When the project was finally cancelled, it was still achieving only about 36 MIPS at 50 MHz. The production delays gave rise to the quip that the best host architecture for a T9000 was an overhead projector. This was too much for Inmos, which did not have the funding needed to continue development. By this time, the company had been sold to SGS-Thomson (now STMicroelectronics), whose focus was the embedded systems market, and eventually the T9000 project was abandoned. However, a comprehensively redesigned 32-bit transputer intended for embedded applications, the ST20 series, was later produced, using some technology developed for the T9000. The ST20 core was incorporated into chipsets for set-top box and Global Positioning System (GPS) applications. ST20 Although not strictly a transputer, the ST20 was heavily influenced by the T4 and T9 and formed the basis of the T450, which was arguably the last of the transputers. The mission of the ST20 was to be a reusable core in the then-emerging SoC market. The original name of the ST20 was the Reusable Micro Core (RMC). The architecture was loosely based on the original T4 architecture with a microcode-controlled data path. However, it was a full redesign, using VHDL as the design language and with an optimized (and rewritten) microcode compiler. The project was conceived as early as 1990, when it was realized that the T9 would be too big for many applications. Actual design work started in mid-1992. Several trial designs were done, ranging from a very simple RISC-style CPU with complex instructions implemented in software via traps, to a rather complex superscalar design similar in concept to the Tomasulo algorithm. The final design looked very similar to the original T4 core, although some simple instruction grouping and a workspace cache were added to help with performance. Adoption While the transputer was simple but powerful compared to many contemporary designs, it never came close to meeting its goal of being used universally in both CPU and microcontroller roles. The microcontroller market was dominated by 8-bit machines where cost was the most serious consideration. Here, even the T2s were too powerful and costly for most users. In the computer desktop and workstation field, the transputer was fairly fast (operating at about 10 million instructions per second (MIPS) at 20 MHz). This was excellent performance for the early 1980s, but by the time the T800, with its floating-point unit (FPU), was shipping, other RISC designs had surpassed it. This could have been mitigated to a large extent if machines had used multiple transputers as planned, but T800s cost about $400 each when introduced, which meant a poor price/performance ratio. Few transputer-based workstation systems were designed; the most notable probably being the Atari Transputer Workstation. The transputer was more successful in the field of massively parallel computing, where several vendors produced transputer-based systems in the late 1980s. These included Meiko Scientific (founded by ex-Inmos employees), Floating Point Systems, Parsytec, and Parsys. Several British academic institutions established research activities in the application of transputer-based parallel systems, including Bristol Polytechnic's Bristol Transputer Centre and the University of Edinburgh's Edinburgh Concurrent Supercomputer Project.
Also, the Data Acquisition and Second Level Trigger systems of the High Energy Physics ZEUS Experiment for the Hadron Elektron Ring Anlage (HERA) collider at DESY were based on a network of over 300 synchronously clocked transputers divided into several subsystems. These both controlled the readout of the custom detector electronics and ran reconstruction algorithms for physics event selection. The parallel processing abilities of the transputer were put to use commercially for image processing by the world's largest printing company, RR Donnelley & Sons, in the early 1990s. The ability to quickly transform digital images in preparation for print gave the firm a significant edge over its competitors. This development was led by Michael Bengtson in the RR Donnelley Technology Center. Within a few years, the processing ability of even desktop computers ended the need for custom multi-processing systems for the firm. The German company Jäger Messtechnik used transputers for their early ADwin real-time data acquisition and control products. A French company built the Archipel Volvox Supercomputer with up to 144 T800 and T400 transputers. It was controlled by a Silicon Graphics Indigo2 running UNIX and a special card that interfaced to the Volvox backplanes. Transputers also found use in protocol analysers such as the Siemens/Tektronix K1103, and in military applications, where the array architecture suited applications such as radar, and the serial links (which were high speed for the 1980s) served well to save cost and weight in sub-system communications. The transputer also appeared in products related to virtual reality, such as the ProVision 100 system made by Division Limited of Bristol, featuring a combination of Intel i860, 80486/33 and Toshiba HSP processors, together with T805 or T425 transputers, implementing a rendering engine that could then be accessed as a server by PC, Sun SPARCstation or VAX systems. Myriade, a European miniaturized satellite platform developed by Astrium Satellites and CNES and used by satellites such as Picard, is based on the T805, yielding around 4 MIPS, and was scheduled to stay in production until about 2015. The asynchronous operation of the communications and computation allowed the development of asynchronous algorithms, such as Bane's "Asynchronous Polynomial Zero Finding" algorithm. The field of asynchronous algorithms, and the asynchronous implementation of current algorithms, is likely to play a key role in the move to exascale computing. The High Energy Transient Explorer 2 (HETE-2) spacecraft used 4× T805 transputers and 8× DSP56001s, yielding about 100 million instructions per second (MIPS) of performance.
Given these substantial and regular performance improvements to existing code, there was little incentive to rewrite software in languages or coding styles which expose more task-level parallelism. Nevertheless, the model of cooperating concurrent processors can still be found in the cluster computing systems that dominate supercomputer design in the 21st century. Unlike the transputer architecture, the processing units in these systems typically use superscalar CPUs with access to substantial amounts of memory and disk storage, running conventional operating systems and network interfaces. Because of these more complex nodes, the software architecture used for coordinating the parallelism in such systems is typically far more heavyweight than in the transputer architecture. The fundamental motive behind the transputer remained, but it was masked for over 20 years by the repeated doubling of transistor counts. Inevitably, microprocessor designers finally ran out of uses for the greater physical resources, at almost the same time as technology scaling began to hit its limits. Power consumption, and thus heat dissipation needs, rendered further clock rate increases infeasible. These factors led the industry towards solutions little different in essence from those proposed by Inmos. The most powerful supercomputers in the world, based on designs from Columbia University and built as IBM Blue Gene, are real-world incarnations of the transputer dream. They are vast assemblies of identical, relatively low-performance SoCs. Recent trends have also tried to solve the transistor dilemma in ways that would have been too futuristic even for Inmos. On top of adding components to the CPU die and placing multiple dies in one system, modern processors increasingly place multiple cores in one die. The transputer designers struggled to fit even one core into their transistor budget. Today's designers, working with a 1000-fold increase in transistor density, can typically place many. One of the most recent commercial developments has emerged from the firm XMOS, which has developed a family of embedded multi-core multi-threaded processors which resonate strongly with the transputer and Inmos. There is an emerging class of multicore/manycore processors taking the approach of a network on a chip (NoC), such as the Cell processor, Adapteva Epiphany architecture, Tilera, etc. The transputer and Inmos helped establish Bristol, UK, as a hub for microelectronic design and innovation. See also Adapteva David May (computer scientist) Ease (programming language) IEEE 1355 Inmos iWarp Meiko Computing Surface References External links The Transputer FAQ Ram Meenakshisundaram's Transputer Home Page WoTUG A group applying the principles of transputers (e.g., communicating sequential processes (CSP)) in other environments. Transputer emulator – It emulates one T414 transputer (i.e., no FPU, no blitting instructions) and supplies the file and terminal I/O services that were usually supplied by a host computer system. PC-based Transputer emulator – This is a PC port of the original T414 transputer emulator (called jserver) written by Julian Highfield in the mid- to late 1990s. Transputers can be fun. The Transterpreter virtual machine. – A portable runtime for occam-pi and other languages based on the transputer bytecode. The Kent Retargettable occam compiler. – The occam-pi compiler. transputer.net. – Documents and more about the transputer. Inmos alumni Directory of ex-Inmos employees, plus photos and general info. Maintained by Ken Heddings.
Prince Philip Designers Prize winners from 1959 to 2009, Design Council website HETE-2 Spacecraft internal systems 16-bit microprocessors 32-bit microprocessors Parallel computing Stack machines
65028571
https://en.wikipedia.org/wiki/Verkada
Verkada
Verkada Inc. is a San Mateo, CA-based company that develops cloud-based building security systems. The company combines security equipment such as video cameras, access control systems and environmental sensors with cloud-based machine vision and artificial intelligence. The company was founded in 2016. In 2021, it was the target of a data breach that accessed security camera footage and private data. History Verkada Inc. was founded in 2016 in Menlo Park, California by three Stanford University graduates: Filip Kaliszan, James Ren, and Benjamin Bercowitz, who were joined by Hans Robertson, co-founder and former COO of Meraki (now Cisco Meraki). Kaliszan, Ren, and Bercowitz had previously collaborated on Courserank, a class data aggregation platform that was acquired by Chegg in 2010. Verkada exited the beta development stage in September 2017, with a product offering of two camera models. In 2019, Forbes included Verkada in its Next Billion Dollar Startups list, as well as that year’s AI 50 list of most promising artificial intelligence companies. In April, the company announced a $40 million Series B funding round, which valued the company at $540 million. In January 2020, the company raised $80 million in a Series C funding round led by Felicis Ventures, giving the company a $1.6 billion valuation. In Spring 2020, the company launched its first access control device, the first move in a shift beyond cameras toward integrating security cameras and locks onto a single platform. In June, during the COVID-19 crisis, Verkada instituted a program to offer free surveillance kits to businesses and healthcare institutions in order to remotely monitor high-risk locations. It also added features to let customers detect when crowds are forming, and to identify high-traffic areas that might need more cleaning. In September, it introduced a line of integrated environmental sensors for facilities monitoring. In April 2021, Bloomberg News reported allegations by former employees accusing the company of having a "bro" culture, with lax device security, excessive focus on profit, and parties during the COVID-19 pandemic. In the Bloomberg reporting, Verkada acknowledged an internal lapse in judgement, and was reportedly working to create a more inclusive work environment, including reviewing gender pay equity and implementing better training. In September, the company began donating security cameras to Asian Pacific American business communities, starting with the Chinatown Chamber of Commerce in Oakland, California, to address growing anti-Asian threats and violence against its members. Products Verkada develops cloud-managed enterprise building security, including security cameras, door access control systems and environmental sensors designed to integrate with one another. The systems incorporate machine learning and artificial intelligence technology. Cameras The company's cameras combine edge processing and storage with a centralized web-based platform to provide physical security across numerous sites. The cloud-based system allows rapid sharing of video feeds via SMS text or weblink, such as with offsite law enforcement personnel or company management. Access control systems Verkada's cloud-managed access control systems are integrated with its security cameras and can be centrally managed remotely over the cloud, across all sites.
The systems allow security personnel to remotely monitor entryways, provision badges, and unlock doors without requiring IT involvement. Environmental sensors Verkada develops indoor environmental sensors that measure air quality, temperature, humidity, motion and noise. The sensors are integrated with security cameras and send alerts when a threshold reading is exceeded, allowing operators to view the area of interest. Alarms Verkada’s Alarms product analyzes information from the company's cloud-based physical security products, including video security, door-based access control, and environmental sensors. The product's intrusion detection devices include the BP41 Alarm Panel, motion and contact sensors, and the BC51 Alarm Console. Data breach On March 8, 2021, Verkada was hacked by an international group that included Tillie Kottmann and called itself the "APT69420 Arson Cats," which gained access to the company's network for about 36 hours and collected about 5 gigabytes of data. Initially, it was reported that the scope of the incident included live and recorded security camera footage from more than 150,000 cameras. It was later reported that video and image data from 95 customers were accessed. Kottmann told Bloomberg News that the hack "exposes just how broadly we're being surveilled". In April 2021, it was reported that Verkada CEO Filip Kaliszan had announced a series of measures in response to the breach, including red team/blue team exercises, a bug bounty program, mandatory two-factor authentication use by Verkada support staff, and the sharing of more audit logs with Verkada customers. Controversies In August 2021, Motorola Solutions filed a 52-page complaint against Verkada with the United States International Trade Commission, alleging that Verkada cameras and software infringe upon patents held by Motorola subsidiary Avigilon. Verkada subsequently filed a lawsuit against Motorola Solutions in the California Northern District Court in September 2021, arguing that Motorola has "sought to effectively shut Verkada’s business down." Later in September, the International Trade Commission initiated its investigation into Motorola's complaint, with Verkada stating in its response that it does not infringe upon any of Motorola's patents.
2581594
https://en.wikipedia.org/wiki/Richard%20Alston%20%28politician%29
Richard Alston (politician)
Richard Kenneth Robert Alston (born 19 December 1941) is an Australian businessman, former politician and former barrister. He served as a Senator for Victoria from 1986 to 2004, representing the Liberal Party. During the Howard Government he held ministerial office as Minister for Communications and the Arts (1996–1997), Communications, the Information Economy and the Arts (1997–1998), and Communications, Information Technology and the Arts (1998–2003). He later served as High Commissioner to the United Kingdom (2005–2008) and Federal President of the Liberal Party (2014–2017). Early life Alston was educated at Xavier College (Kew), the University of Melbourne and Monash University, graduating with bachelor's degrees in law, arts and commerce from Melbourne University and master's degrees in law and business administration from Monash University. He was a barrister before entering politics. His brother is noted academic Philip Alston. Senate On 7 May 1986 Alston was appointed by the Parliament of Victoria under section 15 of the Australian Constitution to fill the vacancy in the Australian Senate caused by the death of Senator Alan Missen. He was re-elected in 1987, 1990, 1996 and 2001. Alston was a member of the Opposition Shadow Ministry from 1989 to 1996, and was Deputy Leader of the Opposition in the Senate 1993–96. Shadow Minister for Social Security, Child Care and Superannuation, as well as Communications and the Arts, were among the positions he held in the shadow ministry. He was Minister for Communications and the Arts 1996–97, Minister for Communications, the Information Economy and the Arts 1997–98 and Minister for Communications, Information Technology and the Arts 1998–2003. He was also Deputy Leader of the Government in the Senate 1996–2003. Alston resigned from the Senate on 10 February 2004, and he was replaced by Mitch Fifield. Later career From February 2005 to February 2008, Alston served as Australian High Commissioner to the United Kingdom. Since 2004 he has been an Adjunct Professor of Information Technology at Bond University. Since leaving Parliament, Alston has served as Chairman of three listed Australian companies and as a director of a number of listed public companies in both Australia and the United Kingdom. These have been in fields as diverse as information technology, broadcasting services, sandalwood, public relations, advertising and ironsands. Alston served as a member of the international board of CQS LLP, a United Kingdom-based hedge fund, for seven years and as a director of its Australian subsidiary. Alston also served for six years as a director of the United Kingdom-based public company Chime PLC. He also served as Chairman of the advisory board of Qato Capital, an Australian long-short fund, as a director of Nanuk Asset Management, as a director of Balmoral Gardens, a retirement village owner and operator, for many years, and as a director of CPA Australia for three years. He is currently Chairman of Sunny Ridge strawberry farms, a director of China Telecom (Australia), Chairman of National Advisory, a leading Australian corporate advisory firm, a member of the advisory board of Market Eye, a leading Australian investor relations firm, and a member of the Council of the Australian National Gallery. Alston was federal president of the Liberal Party from 2014 to 2017.
Honours At the 2015 Australia Day Honours, Alston was appointed an Officer of the Order of Australia for distinguished service to the Parliament of Australia, to international relations through diplomatic roles, to business development in diverse sectors, and to the community. Alston was also awarded the Centenary Medal in 2001 for service as Minister for Communications, Information Technology and the Arts. References 1941 births Living people High Commissioners of Australia to the United Kingdom Permanent Representatives of Australia to the International Maritime Organization Liberal Party of Australia members of the Parliament of Australia Melbourne Law School alumni Members of the Cabinet of Australia Members of the Australian Senate Members of the Australian Senate for Victoria Monash Law School alumni Officers of the Order of Australia Old Xaverians Football Club players People educated at Xavier College Politicians from Melbourne Recipients of the Centenary Medal 21st-century Australian politicians 20th-century Australian politicians Government ministers of Australia
158859
https://en.wikipedia.org/wiki/Plug%20and%20play
Plug and play
In computing, a plug and play (PnP) device or computer bus is one with a specification that facilitates the discovery of a hardware component in a system without the need for physical device configuration or user intervention in resolving resource conflicts. The term "plug and play" has since been expanded to a wide variety of applications to which the same lack of user setup applies. Expansion devices are controlled and exchange data with the host system through defined memory or I/O space port addresses, direct memory access channels, interrupt request lines and other mechanisms, which must be uniquely associated with a particular device to operate (a toy sketch of this allocation problem appears below). Some computers provided unique combinations of these resources to each slot of a motherboard or backplane. Other designs provided all resources to all slots, and each peripheral device had its own address decoding for the registers or memory blocks it needed to communicate with the host system. Since fixed assignments made expansion of a system difficult, devices used several manual methods for assigning addresses and other resources, such as hard-wired jumpers, pins that could be connected with wire or removable straps, or switches that could be set for particular addresses. As microprocessors made mass-market computers affordable, software configuration of I/O devices was advantageous to allow installation by non-specialist users. Early systems for software configuration of devices included the MSX standard, NuBus, Amiga Autoconfig, and IBM Microchannel. Initially all expansion cards for the IBM PC required physical selection of I/O configuration on the board with jumper straps or DIP switches, but increasingly ISA bus devices were arranged for software configuration. By 1995, Microsoft Windows included a comprehensive method of enumerating hardware at boot time and allocating resources, which was called the "Plug and Play" standard. Plug and play devices can have resources allocated at boot-time only, or may be hotplug systems such as USB and IEEE 1394 (FireWire). History of device configuration Some early microcomputer peripheral devices required the end user to physically cut some wires and solder others together in order to make configuration changes; such changes were intended to be largely permanent for the life of the hardware. As computers became more accessible to the general public, the need developed for more frequent changes to be made by computer users unskilled with soldering irons. Rather than cutting and soldering connections, configuration was accomplished by jumpers or DIP switches. Later, this configuration process was automated: Plug and Play. MSX The MSX system, released in 1983, was designed to be plug and play from the ground up, and achieved this by a system of slots and subslots, where each had its own virtual address space, thus eliminating device addressing conflicts at their very source. No jumpers or any manual configuration was required, and the independent address space for each slot allowed very cheap and commonplace chips to be used, alongside cheap glue logic. On the software side, the drivers and extensions were supplied in the card's own ROM, thus requiring no disks or any kind of user intervention to configure the software. The ROM extensions abstracted any hardware differences and offered standard APIs as specified by ASCII Corporation.
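Before turning to the individual buses, here is the allocation sketch promised in the introduction: a toy backtracking search, in Python, for a conflict-free assignment of interrupt lines. This is not any real operating system's allocator; the device names are hypothetical, and the IRQ option sets simply echo the ISA example given later in this article.

```python
# Toy allocator for the resource-conflict problem: every device must
# receive an IRQ that no other device uses (pre-PCI IRQs were exclusive).
def allocate_irqs(devices, taken=frozenset(), assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(devices):
        return assignment                       # every device satisfied
    name, options = devices[len(assignment)]
    for irq in options:
        if irq not in taken:
            result = allocate_irqs(devices, taken | {irq},
                                   {**assignment, name: irq})
            if result is not None:
                return result
    return None                                 # no conflict-free assignment

# Hypothetical cards with the narrow IRQ choices typical of ISA hardware.
devices = [("network card", [3, 7, 10]),
           ("sound card",   [5, 7, 12]),
           ("serial port",  [3, 4])]
print(allocate_irqs(devices))
# -> {'network card': 3, 'sound card': 5, 'serial port': 4}
```

Real enumerators must juggle port addresses and DMA channels as well as IRQs, but the backtracking shape of the problem is the same.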
NuBus In 1984, the NuBus architecture was developed by the Massachusetts Institute of Technology (MIT) as a platform-agnostic peripheral interface that fully automated device configuration. The specification was sufficiently intelligent that it could work with both big-endian and little-endian computer platforms that had previously been mutually incompatible. However, this agnostic approach increased interfacing complexity and required support chips on every device, which was expensive in the 1980s, and apart from its use in Apple Macintoshes and NeXT machines, the technology was not widely adopted. Amiga Autoconfig and Zorro bus In 1984, Commodore developed the Autoconfig protocol and the Zorro expansion bus for its Amiga line of expandable computers. The first public appearance was in the CES computer show at Las Vegas in 1985, with the so-called "Lorraine" prototype. Like NuBus, Zorro devices had absolutely no jumpers or DIP switches. Configuration information was stored on a read-only device on each peripheral, and at boot time the host system allocated the requested resources to the installed card. The Zorro architecture did not spread to general computing use outside of the Amiga product line, but was eventually upgraded into Zorro II and Zorro III for the later iteration of Amiga computers. Micro-Channel Architecture In 1987, IBM released an update to the IBM PC known as the Personal System/2 line of computers using the Micro Channel Architecture. The PS/2 was capable of totally automatic self-configuration. Every piece of expansion hardware was issued with a floppy disk containing a special file used to auto-configure the hardware to work with the computer. The user would install the device, turn on the computer, load the configuration information from the disk, and the hardware automatically assigned interrupts, DMA, and other needed settings. However, the disks posed a problem if they were damaged or lost, as the only options at the time to obtain replacements were via postal mail or IBM's dial-up BBS service. Without the disks, any new hardware would be completely useless and the computer would occasionally not boot at all until the unconfigured device was removed. Micro Channel did not gain widespread support because IBM wanted to exclude clone manufacturers from this next-generation computing platform. Anyone developing for MCA had to sign non-disclosure agreements and pay royalties to IBM for each device sold, putting a price premium on MCA devices. End-users and clone manufacturers revolted against IBM and developed their own open-standards bus, known as EISA. Consequently, MCA usage languished except in IBM's mainframes. ISA and PCI self-configuration In time, many Industry Standard Architecture (ISA) cards incorporated, through proprietary and varied techniques, hardware to self-configure or to provide for software configuration; often, the card came with a configuration program on disk that could automatically set the software-configurable (but not itself self-configuring) hardware. Some cards had both jumpers and software-configuration, with some settings controlled by each; this compromise reduced the number of jumpers that had to be set, while avoiding great expense for certain settings, e.g. nonvolatile registers for a base address setting. The problems of required jumpers continued on, but slowly diminished as more and more devices, both ISA and other types, included extra self-configuration hardware.
However, these efforts still did not solve the problem of ensuring that the end user has the appropriate software driver for the hardware. ISA PnP or (legacy) Plug & Play ISA was a plug-and-play system that used a combination of modifications to hardware, the system BIOS, and operating system software to automatically manage resource allocations. It was superseded by the PCI bus during the mid-1990s. PCI plug and play (autoconfiguration) was based on the PCI BIOS Specification in the 1990s, which was in turn superseded by ACPI in the 2000s. Legacy Plug and Play In 1995, Microsoft released Windows 95, which tried to automate device detection and configuration as much as possible, but could still fall back to manual settings if necessary. During the initial install process of Windows 95, it would attempt to automatically detect all devices installed in the system. Since full auto-detection of everything was a new process without full industry support, the detection process constantly wrote its progress to a tracking log file (the sketch below illustrates this write-ahead pattern). In the event that device probing failed and the system froze, the end user could reboot the computer, restart the detection process, and the installer would use the tracking log to skip past the point that caused the previous freeze. At the time, there could be a mix of devices in a system, some capable of automatic configuration, and some still using fully manual settings via jumpers and DIP switches. The old world of DOS still lurked underneath Windows 95, and systems could be configured to load devices three different ways: through Windows 95 device manager drivers only using DOS drivers loaded in the CONFIG.SYS and AUTOEXEC.BAT configuration files using both DOS drivers and Windows 95 device manager drivers together Microsoft could not assert full control over all device settings, so configuration files could include a mix of driver entries inserted by the Windows 95 automatic configuration process, and could also include driver entries inserted or modified manually by the computer users themselves. The Windows 95 Device Manager also could offer users a choice of several semi-automatic configurations to try to free up resources for devices that still needed manual configuration. Also, although some later ISA devices were capable of automatic configuration, it was common for PC ISA expansion cards to limit themselves to a very small number of choices for interrupt request lines. For example, a network interface might limit itself to only interrupts 3, 7, and 10, while a sound card might limit itself to interrupts 5, 7, and 12. This resulted in few configuration choices if some of those interrupts were already used by some other device. The hardware of PC computers additionally limited device expansion options because interrupts could not be shared, and some multifunction expansion cards would use multiple interrupts for different card functions, such as a dual-port serial card requiring a separate interrupt for each serial port. Because of this complex operating environment, the autodetection process sometimes produced incorrect results, especially in systems with large numbers of expansion devices. This led to device conflicts within Windows 95, resulting in devices which were supposed to be fully self-configuring failing to work. The unreliability of the device installation process led to Plug and Play being sometimes referred to as Plug and Pray.
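Windows 95's crash-tolerant probing amounts to a write-ahead log: record which device is about to be probed, and on a restart skip anything already recorded, including the device whose probe froze the machine. A minimal sketch of that pattern follows; the file name, device list and probe routine are invented for illustration and are not the actual Windows internals.

```python
import os

LOG = "detection.log"                       # invented name, for illustration
DEVICES = ["com1", "lpt1", "sound", "net"]  # hypothetical probe order

def probe(device):
    # Stand-in for a hardware probe that might freeze the machine.
    print("probing", device)

def detect_all():
    done = set()
    if os.path.exists(LOG):
        with open(LOG) as f:
            done = set(f.read().split())    # devices attempted on earlier runs
    with open(LOG, "a") as log:
        for device in DEVICES:
            if device in done:
                continue                    # includes the one that froze us
            log.write(device + "\n")        # record intent *before* probing
            log.flush()
            os.fsync(log.fileno())          # ensure it survives a hard crash
            probe(device)

detect_all()
```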
Until approximately 2000, PC computers could still be purchased with a mix of ISA and PCI slots, so it was still possible that manual ISA device configuration might be necessary. But with successive releases of new operating systems like Windows 2000 and Windows XP, Microsoft had sufficient clout to say that drivers would no longer be provided for older devices that did not support auto-detection. In some cases, the user was forced to purchase new expansion devices or a whole new system to support the next operating system release. Current plug and play interfaces Several completely automated computer interfaces are currently used, each of which requires no device configuration or other action on the part of the computer user, apart from software installation, for the self-configuring devices. These interfaces include: IEEE 1394 (FireWire) PCI, Mini PCI PCI Express, Mini PCI Express, Thunderbolt PCMCIA, PC Card, ExpressCard SATA, Serial Attached SCSI USB DVI, HDMI For most of these interfaces, very little technical information is available to the end user about the performance of the interface. Although both FireWire and USB have bandwidth that must be shared by all devices, most modern operating systems are unable to monitor and report the amount of bandwidth being used or available, or to identify which devices are currently using the interface. See also Autoconfig (Amiga) Hot plugging PCI configuration space References External links Plug and play in Windows 2000 on ZDNet https://community.rapid7.com/docs/DOC-2150 Computer peripherals Motherboard
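As one concrete view of the self-configuring buses listed above, the sketch below enumerates hot-plugged USB devices on a Linux host by reading the kernel's standard sysfs attributes (idVendor, idProduct and product are real sysfs file names). It is Linux-specific, assumes /sys is mounted, and reports device identity only; per-device bandwidth use is, as noted above, not something most operating systems expose.

```python
import os

SYS_USB = "/sys/bus/usb/devices"   # standard sysfs location on Linux

def read_attr(dev, attr):
    try:
        with open(os.path.join(SYS_USB, dev, attr)) as f:
            return f.read().strip()
    except OSError:
        return None                # attribute absent (e.g. interface entries)

for dev in sorted(os.listdir(SYS_USB)):
    vendor = read_attr(dev, "idVendor")
    product = read_attr(dev, "idProduct")
    if vendor and product:         # only entries describing whole devices
        name = read_attr(dev, "product") or "(unnamed device)"
        print(f"{dev}: {vendor}:{product} {name}")
```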
50559678
https://en.wikipedia.org/wiki/Telfair%20County%20High%20School
Telfair County High School
Telfair County High School is located in McRae, Georgia, United States. It is the only high school in the Telfair County School District. Its teams are known as the Trojans. It shares a campus with its feeder school, Telfair County Middle School. Athletics The Telfair County Trojans field the following sports teams: Baseball Basketball (Boys & Girls) Cross Country (Boys & Girls) Football Golf (Boys & Girls) Softball Tennis (Boys & Girls) Track & Field (Boys & Girls) Wrestling Postseason Success The Trojans' sports teams have reached the postseason a number of times in school history, with notable results outlined below: Baseball: 4 Sweet Sixteen appearances, 2 Elite Eight appearances, 1 Final Four appearance, Class A State Runner-Up (2018), Region 2-A Champion (2008). Basketball: Cross Country (Boys): Cross Country (Girls): Football: 1 Sweet Sixteen appearance, 1 Elite Eight appearance, 2 Region Championships (1992, 1993) Golf (Boys): Golf (Girls): Fastpitch Softball: 3 Region Championships, 5 Sweet Sixteen appearances, 4 Elite Eight appearances, 2 Final Four appearances Tennis (Boys): Tennis (Girls): Track & Field (Boys): Track & Field (Girls): Wrestling: References Public high schools in Georgia (U.S. state)
3329381
https://en.wikipedia.org/wiki/UDP%20flood%20attack
UDP flood attack
A UDP flood attack is a volumetric denial-of-service (DoS) attack using the User Datagram Protocol (UDP), a sessionless/connectionless computer networking protocol. Using UDP for denial-of-service attacks is not as straightforward as with the Transmission Control Protocol (TCP). However, a UDP flood attack can be initiated by sending a large number of UDP packets to random ports on a remote host. As a result, the remote host will: Check for the application listening at that port; See that no application listens at that port; Reply with an ICMP Destination Unreachable packet (the sketch below demonstrates this mechanism with a single datagram). Thus, for a large number of UDP packets, the victimized system will be forced into sending many ICMP packets, eventually leading it to be unreachable by other clients. The attacker(s) may also spoof the IP address of the UDP packets, ensuring that the excessive ICMP return packets do not reach them, and anonymizing their network location(s). Most operating systems mitigate this part of the attack by limiting the rate at which ICMP responses are sent. UDP Flood Attack Tools: Low Orbit Ion Cannon UDP Unicorn This attack can be managed by deploying firewalls at key points in a network to filter out unwanted network traffic. The potential victim never receives and never responds to the malicious UDP packets because the firewall stops them. However, because firewalls are stateful and can hold only a limited number of sessions, they can themselves be overwhelmed by flood attacks. External links Denial-of-service attacks
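A single-datagram illustration of the port-unreachable mechanism described in the article above; this is a one-packet probe for illustration, not a flood tool. It assumes a Linux-style host, where an incoming ICMP Destination Unreachable on a connected UDP socket surfaces as ECONNREFUSED, and it assumes nothing is listening on port 50000 (both the address and the port are arbitrary illustrative choices).

```python
import socket

# Send one UDP datagram to a port we assume has no listener.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(2)
s.connect(("127.0.0.1", 50000))  # connect() lets the kernel report ICMP errors back
s.send(b"probe")
try:
    s.recv(1024)                 # the ICMP Port Unreachable surfaces here
except ConnectionRefusedError:
    print("got ICMP Destination Unreachable: no listener on that port")
except socket.timeout:
    print("no reply: a firewall may have dropped the packet or the ICMP")
finally:
    s.close()
```

The timeout branch mirrors the firewall mitigation noted above: when a firewall silently drops either the UDP packet or the ICMP reply, the sender simply sees nothing.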
47582037
https://en.wikipedia.org/wiki/Crosswalk%20Project
Crosswalk Project
Crosswalk Project is an open-source web app runtime built with the latest releases of Chromium and Blink from Google. These are also used in Google Chrome. The project's focus is to provide the most up-to-date and innovative capabilities to web apps, including experimental APIs and extensibility. A web app that bundles the Crosswalk Project runtime can install and run on different Android versions with consistent behavior and feature parity (Android 4.0 and newer). The project was founded by Intel's Open Source Technology Center in September 2013. It is licensed under the BSD license. As of February 2017, Intel ceased to actively support the project. Features The primary features include: Support for: Android, iOS (limited), Linux (currently deb packages available), Windows 10 and Tizen. Web Audio, WebRTC, Intel RealSense, WebGL, Web Components, Web workers, CSS transforms, HTML canvas 2D context, Media Queries level 3 Experimental APIs: WebCL (compute acceleration using the GPU) and SIMD (parallel data computation) Device capabilities Presentation API (Miracast second screen) Launch screen Raw sockets Apache Cordova Apache Cordova is a set of device APIs for accessing device capabilities and sensors. Crosswalk Project integrates well with Cordova to enable both the Cordova device APIs and the Crosswalk advanced Web runtime. Starting with Apache Cordova Android 4.0, it is possible to add a pluggable webview. This simplifies adding the Crosswalk Project webview into a Cordova project. Tools integrating Crosswalk Project Crosswalk Project is part of the following developer tools: AppGyver: a UI framework for building hybrid mobile apps Cocos2d-x: a suite of open-source, cross-platform, game-development tools Cordova/PhoneGap: a platform for building native mobile applications using HTML, CSS and JavaScript famo.us: a JavaScript framework with an open source 3D layout engine integrated with a 3D physics animation engine that can render to DOM, Canvas, or WebGL Intel XDK: a cross-platform development tool to create and deploy web and hybrid apps across multiple app stores and form factor devices ionic: an open-source front-end SDK for developing hybrid mobile apps with HTML5 ManifoldJS: a tool to create hosted apps across platforms and devices, and package web experience as native apps across Android, iOS, and Windows Monaca: cloud-powered tools and services to simplify PhoneGap/Cordova hybrid mobile app development Scirra's Construct 2: an HTML5 game creator for 2D games Sencha Web Application Manager: an application platform for deploying and managing web apps on desktops, tablets, and smartphones telerik: an instantly available PhoneGap/Cordova-based development environment that enables cross-platform hybrid mobile apps to be created using HTML5, JavaScript and CSS trigger.io: a hybrid app runtime for the artists and artisans of the web Standards Crosswalk Project provides a web application framework based on common standards: HTML, CSS, JavaScript, and web APIs created and supported by W3C, WHATWG and TC39. License Crosswalk Project is open-source and licensed under the BSD license. Versions Each release cycle is about 6 weeks, incorporating the latest release of Chromium and Blink along with other features and APIs ready at the time. New releases are labeled "Canary" (potentially unstable and higher risk). After validation, a level of quality is reached and the version is labeled "Beta". With further testing it becomes "Stable".
References 2013 software Free and open-source Android software Android (operating system) development software Free software programmed in C++ Cross-platform free software Software using the BSD license Google software Software based on WebKit Google Chrome
33125193
https://en.wikipedia.org/wiki/Crowdsourced%20testing
Crowdsourced testing
Crowdsourced testing is an emerging trend in software testing which exploits the benefits, effectiveness, and efficiency of crowdsourcing and the cloud platform. It differs from traditional testing methods in that the testing is carried out by a number of different testers from different places, and not by hired consultants and professionals. The software is put to the test on diverse, realistic platforms, which makes testing more reliable and cost-effective, and often faster. In addition, crowdsourced testing can allow for remote usability testing because specific target groups can be recruited through the crowd. This method of testing is considered when the software is more user-centric: i.e., software whose success is determined by its user feedback and which has a diverse user space. It is frequently used for gaming and mobile applications, when experts who may be difficult to find in one place are required for specific testing, or when the company lacks the resources or time to carry out the testing internally. Crowdsourced testing vs. Outsourced testing Crowdsourced testing may be considered to be a sub-type of software testing outsourcing. While for some projects it may be possible to get away with only using one approach or the other, a more thorough approach would use a more diverse software testing method, using a dedicated testing team in addition to the crowd. Crowdsourced testing is best for things like beta and compatibility testing, which are necessary final steps for testing; however, most software is far too complex for late-stage testing like this to cover all of the possible issues. A dedicated outsourced or in-house testing team will give a better idea of the software's possible defects, but will not give anywhere near the scope of crowdtesting. Therefore, a good solution is to integrate multiple test teams into any development project (and also to develop with the principles of testability in mind from the very beginning). Crowdsourcing alone may not give the best feedback on applications. A diverse testing approach that pools both crowdsourced testing and a dedicated testing team may be favorable. "Having this diversity of staffing allows you to scale your resources up and down in a fluid manner, meeting tight deadlines during peak periods of development and testing, while controlling costs during slow periods." References Crowdsourcing Software testing
2697895
https://en.wikipedia.org/wiki/QtParted
QtParted
QtParted is a Qt4 front-end to GNU Parted and, alongside KDE Partition Manager, an official KDE partition editing application. QtParted is a program for Linux which is used for creating, destroying, resizing and managing partitions. It uses the GNU Parted libraries and is built with the Qt4 toolkit. Like GNU Parted, it has inherent support for the resizing of NTFS partitions, using the ntfsresize utility. It does not handle LVM partitions. The QtParted team does not provide an official Live CD to use QtParted with. However, QtParted is included in the live Linux distribution Knoppix, on the Kubuntu Live CD, in MEPIS, in NimbleX and in the Trinity Rescue Kit. Unmaintained after 2005, it was superseded by KDE Partition Manager, but was later revived by the developers of the (now discontinued) Ark Linux distribution and is again being maintained. See also KDE Partition Manager Partition (computing) List of disk partitioning software GParted, the GTK+ counterpart of QtParted References External links Software that uses Qt Discontinued software Free partitioning software
24686241
https://en.wikipedia.org/wiki/Collaborative%20Computing%20Project%20for%20NMR
Collaborative Computing Project for NMR
The Collaborative Computing Project for NMR (CCPN) is a project that aims to bring together computational aspects of the scientific community involved in NMR spectroscopy, especially those who work in the field of protein NMR. The general aims are to link new and existing NMR software via a common data standard and provide a forum within the community for the discussion of NMR software and the scientific methods it supports. CCPN was initially started in 1999 in the United Kingdom but collaborates with NMR and software development groups worldwide. The Collaborative Project for the NMR Community The Collaborative Computing Project for NMR spectroscopy was set up in 1999 with three main aims: to create a common standard for representing NMR spectroscopy related data, to create a suite of new open-source NMR software packages, and to arrange meetings for the NMR community, including conferences, workshops and courses, in order to discuss and spread best practice within the NMR community, for both computational and non-computational aspects. Primary financial support for CCPN comes from the BBSRC, the UK Biotechnology and Biological Sciences Research Council. CCPN is part of an array of collaborative computing projects (CCP) and follows in a similar vein to the successful and well-established CCP4 project for X-ray crystallography. CCPN is also supported by European Union grants, most recently as part of the Extend-NMR project, which links together several software-producing groups from across Europe. CCPN is governed by an executive committee which draws its members from academics throughout the UK NMR community. This committee is chosen at the CCPN Assembly Meeting, where all UK-based NMR groups may participate and vote. The day-to-day work of CCPN, including the organisation of meetings and software development, is handled by an informal working group, coordinated by Ernest Laue at the University of Cambridge, which comprises the core group of staff and developers, as well as a growing number of collaborators throughout the world who contribute to coordinated NMR software development. NMR Data Standards The many different software packages available to the NMR spectroscopy community have traditionally employed a number of different data formats and standards to represent computational information. The inception of CCPN was partly to look at this situation and to develop a more unified approach. It was deemed that multiple, informally connected data standards not only made it more difficult for a user to move from one program to the next, but also adversely affected data fidelity, harvesting and database deposition. To this end CCPN has developed a common data standard for NMR, referred to as the CCPN data model, as well as software routines and libraries that allow access, manipulation and storage of the data. The CCPN system works alongside the Bio Mag Res Bank, which continues to handle archiving NMR database depositions; the CCPN standard is for active data exchange and in-program manipulation. Although NMR spectroscopy remains at the core of the data standard, it naturally expands into other related areas of science that support and complement NMR. These include molecular and macromolecular description, three-dimensional biological structures, sample preparation, workflow management and software setup.
The CCPN libraries are created using the principles of model-driven architecture and automatic code generation; the CCPN data model provides a specification for the automatic generation of APIs in multiple languages (a toy illustration of this code-generation idea appears at the end of this article). To date, CCPN provides APIs to its data model in the Python, Java and C programming languages. Through its collaborations, CCPN continues to link new and existing software via its data standards. To enable interaction with as much external software as possible, CCPN has created a format conversion program. This allows data to enter from outside the CCPN scheme and provides a mechanism to translate between existing data formats. The open-source CcpNmr FormatConverter software was first released in 2005 and is available for download (from CCPN and SourceForge), and has recently also become accessible as a web application. CCPN Software Suite As well as enabling data exchange, CCPN aims to develop software for processing, analysis and interpretation of macromolecular NMR data. To this end CCPN has created CcpNmr Analysis, a graphical program for spectrum visualisation, assignment and NMR data analysis. Here, the requirement was for a program that used a modern graphical user interface and could run on many types of computer. It would be supported and maintained by CCPN and would allow modification and extension, including for new NMR techniques. The first version of Analysis was released in 2005 and is now at version 2.1. Analysis is built directly on the CCPN data model and its design is partly inspired by the older ANSIG and SPARKY programs, but it has continued to develop from the suggestions, requirements and computational contributions of its user community. Analysis is freely available to academic and non-profit institutions. Commercial users are required to subscribe to CCPN for a moderate fee. CCPN software, including Analysis, is available for download at the CCPN web site and is supported by an active JISC email discussion group. CCPN Meetings Through its meetings CCPN provides a forum for the discussion of computational and experimental NMR techniques. The aim is to debate and spread best practice in the determination of macromolecular information, including structure, dynamics and biological chemistry. CCPN continues to arrange annual conferences for the UK NMR community (the current being the ninth) and a series of workshops to discuss and promote data standards. Because it is vital to the success of CCPN as a software project and as a coordinated NMR community, its software developers run courses to teach the use of CCPN software and its development framework. They also arrange visits to NMR groups to introduce the CCPN program suite and to gain an understanding of the requirements of users. CCPN is especially keen to enable young scientists to contribute to and attend its meetings. Accordingly, wherever possible CCPN tries to keep conference fees at a minimum by using contributions that come from its industrial sponsorship and software subscriptions. Footnotes References Vranken WF, Boucher W, Stevens TJ, Fogh RH, Pajon A, Llinas M, Ulrich EL, Markley JL, Ionides J, Laue ED. (2005) "The CCPN data model for NMR spectroscopy: development of a software pipeline." Proteins 59(4):687-96. Fogh RH, Boucher W, Vranken WF, Pajon A, Stevens TJ, Bhat TN, Westbrook J, Ionides JM, Laue ED. (2005) "A framework for scientific data modeling and automated software development." Bioinformatics.
21(8):1678-84 External links CCPN Website CCPN Community Software Wiki E-Science Information technology organisations based in the United Kingdom Medical Research Council (United Kingdom) Nuclear magnetic resonance Organisations associated with the University of Cambridge Science and technology in Cambridgeshire
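As promised above, the model-driven, code-generating approach can be illustrated with a deliberately tiny sketch: a declarative "data model" from which plain Python classes are generated at runtime. This is purely conceptual and is not the CCPN data model or its real API, which generates far richer Python, Java and C bindings from a full specification.

```python
# A toy "data model": class name -> ordered list of attribute names.
MODEL = {"Molecule": ["name", "residues"],
         "Spectrum": ["name", "dimensions", "peaks"]}

def generate(model):
    """Generate one plain Python class per entry in the model."""
    classes = {}
    for cls_name, attrs in model.items():
        def __init__(self, *args, _attrs=tuple(attrs)):  # bind attrs per class
            if len(args) != len(_attrs):
                raise TypeError(f"expected {len(_attrs)} arguments")
            for attr, value in zip(_attrs, args):
                setattr(self, attr, value)
        classes[cls_name] = type(cls_name, (object,), {"__init__": __init__})
    return classes

api = generate(MODEL)
mol = api["Molecule"]("lysozyme", 129)
print(mol.name, mol.residues)   # -> lysozyme 129
```

The point of the real approach is the same as this toy's: the model is the single source of truth, and the language bindings are derived from it mechanically rather than written by hand.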
3051918
https://en.wikipedia.org/wiki/Data%20I/O
Data I/O
Data I/O Corporation is a manufacturer of programming and automated device handling systems for programmable integrated circuits. The company is headquartered in Redmond, Washington, with sales and engineering offices in multiple countries. History Data I/O was incorporated in 1969. Before the IBM PC was introduced, the company developed equipment that allowed electronic designers to program non-volatile semiconductor devices with data stored on punched cards or ASCII-encoded (eight-level) punched paper tape. Over the next three decades the company rode the non-volatile technology wave as Bipolar, EPROM, EEPROM, NOR FLASH, Antifuse, FRAM and, most recently, NAND FLASH devices were introduced by semiconductor vendors. While not manufacturing semiconductors itself, Data I/O's business is the design and manufacture of equipment that transfers data into various non-volatile semiconductor devices. These devices are commonly flash memory, microcontroller and programmable logic devices. Products Current Introduced in 2000, Data I/O FlashCORE technology is optimized for programming of NAND- and NOR-based flash devices and Flash microcontrollers and is sold in FlashPAK, PS-System, FLX500, and ProLINE-RoadRunner programmer models spanning engineering to high-volume offline and inline "just-in-time" manufacturing. Data I/O provides TaskLink for Windows software to set up FlashCORE programmers and specify data sources. In addition, they develop software that manages automated and remote programming, secures data and manages device serialization. Many of these work with TaskLink, while others are independent software packages. Data I/O manufactures two device programmers that can accommodate DIP (through-hole) devices, the Plus-48 and the Optima. Both are aimed at the small, (relatively) low-cost, desktop programmer (engineering) market. Legacy Model 1 Their first attempts at a 'universal' programmer were the Model 1, the Model 5 (TTL-sequencer based), the Model 9 (microprocessor based), and the System 19 (introduced in the late 1970s). It utilized interchangeable device sockets and configuration plug-in printed-circuit cards, consisting mainly of resistors, diodes and jumpers, to allow reading and programming of a variety of memory devices. System 29 In the early 1980s the System 29 series emerged. The first model, the 29A, added user RAM, and eliminated the need for configuration cards by offering keypad-programmable 'Family' and 'Pinout' codes to configure the programmer. Introduced along with the 29A was the 'Unipak,' a large plug-in adapter that featured multiple sizes of ZIF sockets to reduce the need for changing socket modules. Since the Unipak was limited to dealing with memory devices, an additional accessory series, called the 'LogicPak,' was introduced to handle programmable logic devices (PALs, GALs, etc.) Later models featured a series of fixed sockets and an interchangeable socket module in one housing. Memory devices up to 40 pins in size could be read or programmed with the simple installation of the appropriate socket module. The 29B chassis could accommodate up to 1MB of user RAM. Unifamily Around 1987, Data I/O introduced the first of the 'Unifamily' programmers in the form of the 'Unisite.' This was their first engineering programmer to feature software-programmable pin drivers, a technology that allows any pin of the device socket to be configured, through software, for power, ground or nearly any type of programming waveform.
The first model in this line, the Unisite-40, featured a removable module with a single 40-pin DIP ZIF socket, called the SITE-40, and space to install optional programming adapters to the right of this DIP module. Such modules included the 'SetSite,' a module containing eight 40-pin ZIF sockets to allow gang programming of up to eight identical memory devices, and the 'ChipSite,' an early multi-socket module accommodating several sizes of PLCC and SOIC DIP packages with 'clamshell' ZIF sockets. The final successor to the ChipSite unit was the PinSite. This featured a universal programming base which could accept a variety of socket adapters, including those for chips packaged in PGA, QFP, TSOP, and many others. There was even a special connection module made available which could, through the PinSite's base, allow the Unisite to serve as the programming source in automated device handlers in factory floor environments. The Unifamily was the first series of Data I/O's programmers to feature a built-in user menu. All the programmer required for basic operation was a dumb terminal, hooked up via an RS-232 serial port. Facilities were also provided for computer-based remote control via a second serial port. The early Unifamily all booted and ran from software stored on 720k floppy diskettes (in the case of the Unisite) or on 1.44MB floppies (in the case of other Unifamily members). This software consists of the operator's menu system, self-test routines and device algorithms. Later in production, an option for installation of a miniature hard drive was provided (see MSM, or Mass Storage Module, below). The Unisite is the only programmer that still requires true 720k floppies for non-MSM operation, or for updating the MSM's software without the aid of external PC-based software. The Unisite was the flagship model of the Unifamily line, selling for over $35,000 in a typical configuration and staying in active production for at least 20 years. Data I/O, in an effort to make the Unifamily line more attractive to companies with tighter budgets, introduced several other programmers utilizing the same pin-driver technology as the Unisite, all selling for (typically) under $10,000. These included the models 2900, 3900, 3980, and 3980XPi. These units varied in capability, primarily in terms of the number of pin drivers. The basic 2900 featured 44 drivers, while the 39xx series all had 88. Data I/O developed a proprietary multiplexing scheme which allowed Unifamily programmers, equipped with their maximum number of hardware pin drivers, to handle devices with up to 240 pins. Other differences in the series are minor. Models share a common base design and feature the ability to boot and run from floppy diskettes and provide an internal menu. The differences are primarily in features. The Unisite, less than a year after entering production, was revised in the form of a new DIP module, referred to as the 'Site48.' This adapter had 48 pins in its DIP socket, and remained the standard for many years. Its successor, the Site48-HS, is functionally identical, but utilizes solid-state switching for the socket pins instead of the electromechanical relays in earlier adapters such as the 2900 and 39xx series. The Unisites featured 512K of user RAM, standard. Field-installable upgrade kits, consisting of a separate memory board, an appropriate number of 30-pin SIMMs, a mounting bracket and interconnecting cable, were made available to upgrade these early units to 1MB or 8MB.
The price for the 8MB upgrade kit was around $495 in the mid-1990s. These early kits required considerable labor to install, including extensive disassembly of the programmer, as the memory board was designed to mount under the main circuit board. In response to these difficulties, as well as improvements in available technology, the Unisite's main circuit board soon received major revisions. These included the removal of most of the DIP-based DRAM chips, and the addition of two 30-pin memory module sockets on the main board. With these changes, upgrading the programmer's available RAM became much easier, requiring only removal of the top cover, installation of two SIMMs, and replacement of one PAL chip. Mass Storage Module The revisions to the Unisite main board were done to support a new option. Data I/O created the Mass Storage Module (MSM). This consisted of an additional circuit board containing a miniature hard disk drive (either a 2.5 inch PATA/IDE device or a PCMCIA Type III card drive, depending on revision level) and appropriate interface circuitry. All the programmer's operating software and device algorithms could be transferred to the MSM's drive in less than a half-hour, obsoleting floppy diskettes. The latest revision is entirely solid-state, consisting of a single large FPGA chip as the board's glue logic, an SPROM (Serial Programmable Read-Only Memory) chip containing the FPGA's operating code, a few SRAM chips for buffering, and a solid-state or 'Flash' drive. The MSM remains an optional, field-installable module for the 3900 and Unisite. Unisite programmers require 8MB of user RAM and controller board revision 701-2313-00 or higher to utilize this option. In addition, the MSM requires operating software revisions of 6.6 or above. All 3900 series programmers are MSM-compatible at the hardware level. Successful installation of the MSM in a 3900 programmer automatically turns it into the model 3980. The MSM adds another option, a high-speed parallel port interface that supplements the programmer's serial port. In conjunction with a Windows-based PC and Data I/O's TaskLink software, the parallel port greatly enhances the speed of data transfers to and from the programmer. As one example, a 1MB data file takes at least two minutes to be transferred into or out of a Unifamily programmer via the serial port at its highest available speed (19200 baud). The same file, transferred with the parallel port's help, takes around 30–40 seconds (the arithmetic below bears these figures out). Any Unifamily programmer with 'XPi' after its name (Unisite-XPi, 3980-XPi) already has the MSM and parallel port options installed as standard equipment. These programmers represent the end of the Unifamily line and, although no longer in production, are fully supported. References External links Corporate Webpage Data I/O Germany Webpage Data I/O China Webpage Data I/O Japan Webpage Independent resource for Data I/O hardware Data I/O Optima and Sprint resource Data I/O Labsite resource Data I/O ChipWriter resource Data I/O 3900 resource Data I/O 2900 resource Data I/O Unisite resource Data I/O 2700 resource Companies based in Redmond, Washington Companies listed on the Nasdaq Manufacturing companies based in Washington (state) Manufacturing companies established in 1969
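The serial-transfer figure quoted above is consistent with simple arithmetic, assuming standard asynchronous 8N1 framing (one start bit, eight data bits, one stop bit, so ten line bits per byte):

\[
t_{\text{serial}} \approx \frac{1\,048\,576\ \text{bytes} \times 10\ \text{bits/byte}}{19\,200\ \text{bits/s}} \approx 546\ \text{s} \approx 9\ \text{minutes},
\]

so "at least two minutes" is a comfortable lower bound, while the 30–40 second parallel-port figure implies a sustained throughput of roughly 26–35 KB/s.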
37524684
https://en.wikipedia.org/wiki/Entera
Entera
Entera is a middleware product introduced in the mid-1990s by the Open Environment Corporation (OEC), an early implementation of the three-tiered client–server development model. Entera viewed business software as a collection of services, rather than as a monolithic application. Entera was designed to allow companies to build and manage large, multinational information systems, while preserving existing investments in skill sets, hardware and software systems. The multi-tiered architecture solved problems that were inherent in first-generation client/server application development, including the lack of scalability, manageability, application security, and vendor lock-in. Entera was built on industry-standard distributed computing infrastructures, and supported a variety of programming languages with a management framework. After its origination with OEC, and OEC’s purchase by Borland Software, the rights to Entera were acquired by eCube Systems LLC in 2003. Today, Entera and NXTera are still used by Fortune 1000 companies in several countries and developed, maintained and marketed by eCube Systems. History In 1992, Open Environment Corporation (“OEC”), located in Cambridge, Massachusetts, released a middleware product called “OEC Toolkit.” The product, which eventually would become known as “Entera,” was the first middleware product sold as an application server platform. In subsequent years, the company added to Entera three-tier application monitoring called AppMinder, and a transaction processing system called Entera FX. Over the next nine years, OEC sold tens of millions of dollars worth of Entera to at least 300 Fortune 1000 customers. In 1995 OEC went public, completing its initial public offering on the NASDAQ stock exchange under ticker . As development continued, two primary versions of the software emerged: the Transmission Control Protocol (TCP) version and the Distributed Computing Environment (DCE) version, the latter based on the DCE Cell Directory Service, the DCE endpoint mapper, and DCE threads. Despite various challenges, the DCE version of Entera became popular with IBM and its customer base because it simplified the complexity of DCE development. During this time, OEC developers began work on a fourth version of Entera, aimed at unifying the TCP and DCE versions. They used DCE and its endpoint mapper to provide stability and increased performance. This newer version implemented multithreading and broker persistence. Prior to its acquisition, OEC developed a number of products including the OEC Toolkit, Encompass, Netminder, Entera Workbench, Oec3270, Appminder, Entera OLE, and Entera FX. Borland acquisition In 1996, a year after going public, OEC was acquired by Borland. Borland continued to invest in Entera, but the software peaked in popularity. At this time, over 300 Fortune 1000 companies were using Entera worldwide. During the post-buyout transition phase, the original Entera development team underwent some changes. Entera 4.0 was released prematurely, causing problems with its use as a replacement for Entera 3.2. Borland responded by releasing version 4.1, which proved to be a more stable version. In 2000 the most stable version, 4.2.1, was released. Over the next few years, Borland sold hundreds of copies of Entera to new and former Entera customers. During this time, Borland also developed a professional services organization, capable of working in diverse programming environments.
In November 1997, Borland announced its acquisition of Visigenic Software (VSGN), a CORBA vendor and the developer of VisiBroker. VisiBroker overlapped some of Entera's capabilities while adding support for the Java programming language and object orientation, as well as a degree of complexity, to distributed computing. During the following year, Borland refocused its efforts on enterprise application development, spending less time on developing Entera and turning its attention to VisiBroker. Borland changed its name to Inprise, a fusion of “integrating” and “enterprise.” Its new aim was to integrate Borland's tools, including Delphi, C++Builder, and JBuilder, with Visigenic's CORBA-based VisiBroker for C++ and Java. With its efforts concentrated on VisiBroker, Borland unsuccessfully attempted to switch Entera users to VisiBroker, before ultimately discontinuing active development of Entera in 2001. Even while Entera's customer base was growing, Borland quietly reallocated Entera development resources. eCube acquires license Borland licensed the rights to Entera to eCube Systems LLC in 2002 to provide a migration path for existing clients and to enable further development of the Entera platform. eCube Systems was started by former OEC and Borland engineers and architects who still wanted to develop the product. eCube Systems continues to support and develop existing Entera installations today. The latest version of Entera is called NXTera. It has added new technology models and allows for interoperability with contemporary middleware technologies, including messaging platforms and web service standards such as SOAP and REST. In July 2009, Micro Focus acquired Borland for $75 million. In February 2012 Micro Focus and eCube Systems announced their cooperation in the delivery of middleware consulting. Today eCube and Micro Focus collaborate on a variety of middleware projects. References External links Borland products page at Micro Focus eCubeSystems.com Middleware Distributed computing
44578211
https://en.wikipedia.org/wiki/Ministry%20of%20Information%20Policy%20%28Ukraine%29
Ministry of Information Policy (Ukraine)
The Ministry of Information Policy (MIP) was a government ministry in Ukraine established on 2 December 2014. The Honcharuk Government abolished the ministry on 29 August 2019. The government institution was revived in March 2020 as part of the Ministry of Culture and Information Policy. History The ministry was created concurrently with the formation of the Second Yatsenyuk Government, after the October 2014 Ukrainian parliamentary election. The ministry's task was to oversee information policy in Ukraine. According to the first Minister of Information, Yuriy Stets, one of the goals of its formation was to counteract "Russian information aggression" amidst pro-Russian unrest across Ukraine and the ongoing Russian military intervention of 2014. Ukrainian president Petro Poroshenko said that the main function of the ministry was to stop "the spreading of biased information about Ukraine". A proposal to establish an information ministry for Ukraine was first put forth on 30 November 2014 by Internal Affairs Ministry advisor Anton Herashchenko. He said that the ministry could protect "Ukraine's information space from Russian propaganda and counter propaganda in Russia, in the temporarily occupied territories of Crimea and eastern Ukraine". The proposal was made amidst ongoing efforts to form a government following the October 2014 Ukrainian parliamentary election. Ukrainian president Petro Poroshenko advocated for the establishment of such a ministry through the night of 1–2 December. It was quickly pushed through parliament with little fanfare. The formation of the Second Yatsenyuk Government was announced on 2 December, with Poroshenko ally Yuriy Stets confirmed as the first Minister for Information Policy. One day after his appointment, Stets published the ministry's regulations, which were based on a draft he wrote in 2007–09. According to these regulations, the ministry was meant to "develop and implement professional standards in the media sphere", "ensure freedom of speech", and prevent the spread of "incomplete, outdated, or unreal information". Prior to its establishment, many Ukrainian journalists protested the creation of the ministry. They cited concerns that the ministry would "open the way to grave excesses" in restricting free speech, and that it would inhibit journalists' work. Journalists demonstrating outside the parliament building said that the creation of the ministry was equivalent to "a step back to the USSR". The ministry was given the satirical appellation "Ministry of Truth", a reference to George Orwell's dystopian novel Nineteen Eighty-Four. Reporters Without Borders strongly opposed the creation of the ministry, calling it a "retrograde step". Petro Poroshenko Bloc politician Serhiy Leshchenko called for the ministry's immediate dissolution, whilst fellow Poroshenko Bloc politician Svitlana Zalishchuk said that the ministry's implementation should be put on hold and its regulations redrafted. Newly appointed Minister for Information Policy Yuriy Stets said that one of the primary goals of the ministry was to counteract "Russian information aggression" amidst the ongoing Russo-Ukrainian War in the Donbass region. According to Stets, no other Ukrainian government institution was capable of handling this task.
He stated that "different states with different historical and cultural experiences in times of crisis came to need to create a body of executive power that would control and manage the information security of the country". Stets also said that the ministry "will in no way try to impose censorship or restrict freedom of speech". President Poroshenko told journalists on 7 December 2014 that the main purpose of the ministry was to stop external "information attacks on Ukraine" by promoting the "truth about Ukraine" across the world. Poroshenko added that it was "foolish" to think that the ministry would become an organ of censorship. The ministry was officially established by a resolution of the Ukrainian government on 14 January 2015. The resolution contained the duties and regulations of the ministry. According to the resolution, the primary objectives of the MIP were to "protect the information sovereignty of Ukraine" and to "implement media reforms to spread socially important information". A statement released by the ministry on 19 February 2015 announced the creation of an "information force" to counter misinformation on social media and across the internet. The force was targeted at Russia, which has been said to employ an "army of trolls" to spread false information and propaganda during the Russo-Ukrainian War. Yuriy Stets resigned from his post of Minister of Information Policy on 8 December 2015. He withdrew his letter of resignation on 4 February 2016, and continued in the post. United States Ambassador to Ukraine Geoffrey Pyatt stated in late January 2016: "It's a huge mistake to create a 'Ministry of Truth' that tries to generate alternative stories. That is not the way to defeat this information warfare". On 29 August 2019 the then-new Honcharuk Government abolished the ministry. The ministry was revived and merged on 26 March 2020 into the Ministry of Culture and Information Policy. Staff of the Ministry of Information Policy The approved staff of the Ministry of Information Policy of Ukraine for 2015 comprised 29 employees, a very small figure by Ukrainian standards. The ministry argued that, in line with the experience of many countries and contemporary understanding of management, public institutions with a small staff working on delegated principles are well placed to carry out large state tasks, and that it could perform all assigned tasks with minimal resources within the strict requirements of Ukrainian legislation.
Structure of the Ministry of Information Policy of Ukraine Minister - Yuriy Stets First Deputy Minister - Emine Dzhaparova Deputy Minister - Dmytro Zolotukhin State Secretary - Artem Bidenko Executive Support Service of the Minister Legal Sector Sector for Human Resources Management Financial and Economic Sector Sector for Accountancy and Accounting Control Sector for Administrative Services Sector for Strategic Communications Sector for Information Reintegration of Crimea and Donbas Sector for Implementation of the Doctrine of Information Security Sector for International Cooperation in the Field of Information Security Sector for Promotion of Ukraine in the World Sector for European Integration Sector for Reforming the Information Field and Public Relations Chief Specialist on Internal Audit Chief Specialist on Sensitive and Classified Activities Chief Specialist on Prevention and Exposure of Corruption Main ministry projects Social advertising Under Ukrainian legislation, social advertising is information of any kind, distributed in any form, which aims to achieve social goals and promote human values, and whose distribution is not intended to make a profit. The MIP launched the following social campaigns: Social advertising "Army - pride". Social campaign to mark the anniversary of the Revolution of Dignity. Social campaign under the key programme on information policy for Crimea. Social campaign on mobilization "Dignity, Freedom and Victory". Social campaign "Two Flags - One Country", timed to the Day of the Crimean Tatar Flag. Social campaign aimed at countering manifestations of separatism. Social campaign in support of internally displaced persons, "The life of each person can change momentarily". Social campaign on support for internally displaced persons by the United Nations World Food Programme (WFP). Social campaign to support the decentralization reform in Ukraine. Social campaign "Honour Your Heroes!" Proofs for The Hague The Ministry holds video and photo materials that it claims prove a Russian military presence on the territory of the Ukrainian Donbass. Information army On 23 February 2015 the MIP created the Internet platform "Information Army of Ukraine". The site's pages and associated social networks are used by more than 10,000 users. According to the ministry, dozens of anti-Ukrainian accounts are blocked every month, information attacks by the FSB security services are countered, and almost 80,000 users a month receive information debunking fakes of Russian propaganda; posts are answered with facts and arguments, sharply reducing the effectiveness of anti-Ukrainian propaganda on social networks. Embedded journalism The Ministry of Defense of Ukraine, jointly with the Ministry of Information Policy of Ukraine, continues to accept applications for the "Embedded Journalists" project, which attaches media representatives to military units in the ATO area; journalists are invited to participate. The journalist is not involved in the fighting, but is subordinate to the relevant officer and lives in the same conditions as the rest of the soldiers. Ukrainian and foreign journalists are working successfully with the military in the ATO area within the project. Journalists from the BBC, CNN, The Washington Post, London Evening Standard, The Independent, Newsweek, Agence France-Presse, Polsat, The Daily Signal, Hanslucas, Tsenzor.NET, Radio Liberty, Inter, Business Ukraine, and New Time have already participated.
Results for the first three months of the project: 18 videos, 15 articles and three films in the foreign media. List of ministers See also Freedom of the press in Ukraine Internet censorship and surveillance in Ukraine National Expert Commission of Ukraine on the Protection of Public Morality Notes References 2014 establishments in Ukraine 2019 disestablishments in Ukraine Former government ministries of Ukraine Ministries disestablished in 2019
9638738
https://en.wikipedia.org/wiki/MapWindow%20GIS
MapWindow GIS
MapWindow GIS is a lightweight open-source GIS (mapping) desktop application and set of programmable mapping components. History MapWindow GIS and its associated MapWinGIS ActiveX control were originally developed by Daniel P. Ames and a team of professors and students at Utah State University in 2002–2003, as part of a research project with the Idaho National Laboratory in Idaho Falls, Idaho, as a GIS mapping framework for watershed modelling tools in conjunction with source water assessments conducted by the laboratory. In 2004 the first open-source version of the software was released as MapWindow GIS 3.0, after which it was adopted by the United States Environmental Protection Agency as the primary GIS platform for its BASINS (Better Assessment Science Integrating Point and Nonpoint Sources) watershed analysis and modeling software. As the project has grown, much of the day-to-day management of the code and associated website has been handled by Paul Meems and a group of volunteer user-developers from around the world. Conferences The 1st International MapWindow GIS Users and Developers Conference was held in Orlando, Florida, from March 31 to April 2, 2010, and included 60 participants from multiple countries and government, private, and educational institutions. The 2nd International MapWindow GIS and DotSpatial Conference included the newly developed DotSpatial GIS programming environment and was held in San Diego, California, June 13–15, 2011. The 2012 International Open Source GIS Conference was held in Velp, The Netherlands, July 9–11, 2012. This was the first joint meeting of MapWindow GIS users and developers together with the broader regional open source GIS community. Later MapWindow GIS users and developers meetings have largely been held in conjunction with other communities and conferences, including the American Water Resources Association, American Geophysical Union, OSGeo, and the International Environmental Modelling & Software Society. Technical details Distributed as an open-source application under the Mozilla Public License, MapWindow GIS can be reprogrammed to perform different or more specialized tasks. There are also plug-ins available to expand compatibility and functionality. The core component of MapWindow GIS is the MapWinGIS ActiveX control. This component (MapWinGIS.ocx) is written in the C++ programming language and includes all of the core mapping, data management, and data analysis functions required by the MapWindow GIS desktop application. A user manual for the MapWinGIS ActiveX control, written by Daniel P. Ames and Dinesh Grover, was released in 2007. The MapWindow GIS desktop application is built upon Microsoft .NET technology. Originally written using Visual Basic .NET, the application was re-written in C#. Project source code was originally hosted and maintained on a local SVN server at www.mapwindow.org. Later it was moved to the Microsoft open source code repository, codeplex.com. Presently all project code is hosted on GitHub. Updates for MapWindow GIS are regularly released by a group of student and volunteer developers. MapWindow GIS in scientific literature MapWindow GIS has found much adoption in the water resources and modelling community.
Some example research projects using the software include: MapWindow GIS and its watershed delineation tool were used to generate terrain curvature networks by Burgholzer. Fujisawa used MapWindow GIS in conjunction with Google Earth for data preparation. MapWindow GIS was extended with several plug-ins and custom datasets for the United Nations WaterBase project. MapWindow GIS was extended with a large number of watershed analysis plugins and was completely rebranded as BASINS by the United States Environmental Protection Agency. A "cost efficient" modelling tool for distributed hydrologic modelling was created by Lei et al. Fan et al. coupled a large-scale water quality model with MapWindow GIS. See also List of GIS software Comparison of geographic information systems software References Rafn, E. and Ames, D.P. “Estimating Stream Channel Cross Sections from Watershed Characteristics.” GIS and Water Resources AWRA Spring Specialty Conference, Houston, TX, May 2006. Reed, M. "Strategies for involvement of the United Nations University International Institute for Software Technology (UNU-IIST) in building ICT infrastructure." United Nations University Office in New York Seminar Series. 2006. Taylor, A. "New mapping software developed at ISU in Idaho Falls a hit worldwide." INRA Journal. August 2006. pp. 3–4. External links MapWindow GIS Home page The Soil Company, The Netherlands The Soil Company supplies spatial soil maps and soil-related knowledge for land owners and users and uses MapWindow for their main tasks. U.S. EPA BASINS watershed analysis system WaterBase (United Nations University project supporting Integrated Water Resources Management in developing countries) MapWindowGIS project in Delphi The ISIS flood modelling software suite uses MapWindow in their ISIS Mapper application for building models and viewing results Free GIS software Free software programmed in C Sharp
497847
https://en.wikipedia.org/wiki/TradeStation
TradeStation
TradeStation Group, Inc. is the parent company of online securities and futures brokerage firms and trading technology companies. It is headquartered in Plantation, Florida, and has offices in New York; Chicago; Richardson, Texas; London; Sydney; and Costa Rica. TradeStation is best known for the technical analysis software and electronic trading platform it provides to the active trader and certain institutional trader markets. TradeStation Group was a Nasdaq GS-listed company from 1997 to 2011, when it was acquired by Monex Group, a Tokyo Stock Exchange-listed parent company of one of Japan's leading online securities brokerage firms. History TradeStation was founded by Cuban-born brothers William (Bill) and Rafael (Ralph) Cruz, who sought to create a way to design, test, optimize, and automate their own custom trading strategies. Bill started studying trading at the age of 16, and two years later the brothers pooled $2,400 to open a futures trading account. They gathered trading data to create charts, which were used to test trading ideas. Bill began to test strategies himself without the benefit of formal coding knowledge, a process that resulted in the development of EasyLanguage, the company's proprietary coding language for non-specialists. Bill and Ralph decided to start their own company, then known as Omega Research. The brothers focused on selling tools that would give clients without a technical or computer programming background the ability to program and test their own trading strategies. In 1987, they released System Writer, a software product that enabled users to develop and “back-test” their own trading ideas using historical market data. In 1989, System Writer Plus was released with new and innovative charting features, which the publication Commodity Trader Consumer Reports likened to “the system trading software equivalent of putting a man on the moon.” In 1991, Omega Research released TradeStation and, three years later, struck a licensing deal with Dow Jones Telerate to offer TradeStation as a premium service to Telerate's institutional clients worldwide. In 1997, Omega Research conducted an IPO and became listed on the Nasdaq National Market. The company launched an online version of its product in 1999. In 2001, the company converted itself from a trading software company to an online securities brokerage, and renamed itself “TradeStation”. The software's back-testing, order-generation and trade-execution capabilities were fully integrated for both securities and futures markets in 2003. In 2004 and 2005, TradeStation became a self-clearing equities and options firm. An entire industry eventually grew up around TradeStation's software, including seminars, publications, and user networks. 1982 - Company is formed under the name Omega Research, Inc. 1991 - TradeStation, the company's flagship product, is launched. 1996 - OptionStation, an options trading analytics product that enables traders to explore complex options trading strategies, is launched. 1997 - The company launches an initial public offering (IPO) and is listed on the Nasdaq National Market. 1999 - RadarScreen®, a software application that enables traders to scan hundreds of symbols to identify buy and sell opportunities based on the user's custom criteria, is launched; the company also acquires Window On WallStreet and in 2000 launches WindowOnWallStreet.com, the company's first Internet-based charting and analytics service.
2000 - TradeStation 6, the Internet-based trading platform that serves as the foundation of the company's direct-access brokerage service, is launched; the company also acquires a direct-access securities brokerage, Onlinetrading.com, which is subsequently renamed TradeStation Securities, Inc. 2001 - TradeStation Group replaces Omega Research as the publicly traded Nasdaq company, as parent of two operating subsidiaries – TradeStation Securities, Inc. (formerly onlinetrading.com) and TradeStation Technologies, Inc. (formerly Omega Research); the trading symbol on Nasdaq is changed from OMGA to TRAD. 2006 - TradeStation Europe Limited (now TradeStation International Ltd) receives approval from the United Kingdom's Financial Services Authority (FSA) as an introducing broker. 2011 - TradeStation Group is acquired by Japan's Monex Group. Corporate structure TradeStation Group, Inc.’s principal operating subsidiaries are TradeStation Securities, Inc. and TradeStation Technologies, Inc. TradeStation Group is a wholly owned subsidiary of Monex Group, Inc., one of Japan’s largest online financial services providers. TradeStation Securities, Inc. is a member of the New York Stock Exchange (NYSE), Financial Industry Regulatory Authority (FINRA), Securities Investor Protection Corporation (SIPC), Depository Trust & Clearing Corporation (DTCC), Options Clearing Corporation (OCC) and the National Futures Association (NFA). It is a licensed securities broker-dealer and a registered futures commission merchant, and is also a member of the Boston Options Exchange, Chicago Board Options Exchange, Chicago Stock Exchange, International Securities Exchange and NASDAQ OMX. The company’s technology subsidiary, TradeStation Technologies, Inc., develops and offers strategy trading software tools and subscription services. Its London-based subsidiary, TradeStation International Ltd, an FSA-authorised brokerage firm, introduces UK and other European accounts to TradeStation Securities, Inc. Monex Group, Inc. provides online investment and trading services for retail and institutional customers around the world through its subsidiaries, including Monex, Inc. in Japan, TradeStation in the U.S. and Europe, and Monex Boom in Hong Kong. Monex Group is pursuing its “Global Vision” strategy to establish a global online financial institution that creates positive synergies for all stakeholders. Its main subsidiary, Monex, Inc., one of Japan's largest online securities brokerages, provides financial services to its nearly one million individual investors. Monex Group's services also cover M&A advisory, debt and equity underwriting, asset management focusing on alternative investments, investment education, and other investment banking functions in Japan. TradeStation analysis and trading platform The TradeStation analysis and trading platform is a professional electronic trading platform for financial market traders. It provides extensive functionality for receiving real-time data, displaying charts, entering orders, and managing outstanding orders and market positions. Although it comes with a large number of pre-defined indicators, strategy components, and analysis tools, individuals can modify and customize existing indicators and strategies, as well as create their own indicators and strategies, using TradeStation's proprietary object-oriented EasyLanguage programming language.
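The rule-based strategy development that EasyLanguage supports can be illustrated in miniature. The sketch below is written in Python rather than EasyLanguage (whose actual syntax differs), and the prices and the 3-bar/5-bar averaging windows are invented for illustration; it shows the general idea of back-testing a moving-average crossover rule against historical data, which is the kind of workflow the platform automates.

    # A moving-average crossover back-test in miniature. Python sketch of
    # the general idea, not EasyLanguage; prices and window lengths are
    # invented for illustration only.

    def sma(values, n):
        """Simple moving average of the last n values, or None if too few."""
        return sum(values[-n:]) / n if len(values) >= n else None

    def backtest(prices, fast=3, slow=5):
        """Buy when the fast average is above the slow one, sell when it
        drops back below; return the net profit per share."""
        position, entry, profit = 0, 0.0, 0.0   # 0 = flat, 1 = long
        for i in range(1, len(prices) + 1):
            window = prices[:i]
            f, s = sma(window, fast), sma(window, slow)
            if f is None or s is None:
                continue                         # not enough history yet
            if position == 0 and f > s:          # bullish crossover: buy
                position, entry = 1, window[-1]
            elif position == 1 and f < s:        # bearish crossover: sell
                profit += window[-1] - entry
                position = 0
        if position == 1:                        # close any open position
            profit += prices[-1] - entry
        return profit

    prices = [10, 11, 12, 13, 12, 11, 10, 11, 13, 15, 14, 13]
    print(f"Net profit per share: {backtest(prices):+.2f}")

On this toy series the rule actually loses money, which is precisely why platforms of this kind emphasize back-testing and optimization before a strategy is traded live.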
Traders can also access hundreds of TradeStation-compatible products created by independent third-party developers through the TradeStation TradingApp Store, as well as access strategy trading ideas submitted by the TradeStation community in the EasyLanguage Library. TradeStation supports the development, testing, optimizing, and automation of all aspects of trading. Trading strategies can be back-tested and refined against historical data in simulated trading before being traded "live". TradeStation can be used either as a research and testing tool or as a trading platform, with TradeStation Securities or IBFX acting as the broker. Add-on products A large number of third-party developers create TradeStation-compatible products. Since TradeStation is a development platform, custom trading programs, known as trading systems or trading strategies, can be built on it. A trader with an analysis technique or potentially profitable strategy can either write it in EasyLanguage or have the trading system developed by third-party developers. Traders can also take advantage of the TradeStation TradingApp Store, an online marketplace of ready-to-use add-on products built by independent developers to run on the TradeStation platform. References External links Technical analysis software Online brokerages Foreign exchange companies Electronic trading platforms American companies established in 1982 Financial services companies established in 1982 1982 establishments in Florida Companies based in Broward County, Florida Plantation, Florida 1997 initial public offerings 2011 mergers and acquisitions American subsidiaries of foreign companies Online financial services companies of the United States
2890340
https://en.wikipedia.org/wiki/SpeedScript
SpeedScript
SpeedScript is a word processor originally printed as a type-in machine language listing in 1984–85 issues of Compute! and Compute!'s Gazette magazines. Approximately 5 KB in length, it provided many of the same features as commercial word processing packages of the 8-bit era, such as PaperClip and Bank Street Writer. Versions were published for the Apple II, Commodore 64 and 128, Atari 8-bit family, VIC-20, and MS-DOS. Versions In April 1983 Compute! published Scriptor, a word processor written by staff writer Charles Brannon in BASIC and assembly language, as a type-in program for the Atari 8-bit family. In January 1984 version 1.0 of his new word processor SpeedScript appeared in Compute!'s Gazette for the Commodore 64 and VIC-20. Version 1.1 appeared in Compute!'s Second Book of Commodore 64, version 2.0 on Gazette Disk in May 1984, and version 3.0 in Compute! in March and April 1985. Corrections that updated 3.0 to 3.1 appeared in May 1985, and the full version appeared in a book published by Compute!, SpeedScript: The Word Processor for the Commodore 64 and VIC-20. A 3.2 update appeared in the December 1985 Compute! and January 1986 Compute! Disk, and again later in the May 1987 Compute!'s Gazette issue with three additional utilities. Ports to the Atari and the Apple II were printed in Compute! in May and June 1985, respectively. SpeedScript was written entirely in assembly language, and Compute! Publications later released book/disk combinations that contained the complete commented source code (as well as the machine language in MLX format) for each platform. A version of SpeedScript for MS-DOS was created in 1988 by Randy Thompson and published in book form by Compute! Books. This version was written in Turbo Pascal with portions written in assembly language, and added incremental new features to the word processor such as additional printer commands, full cursor control (to take advantage of the PC's Home, End, PgUp, and PgDn keys), and a native 80-column mode. 80-column updates The original versions of SpeedScript were designed for the 40-column Commodore 64 and the 22-column VIC-20. When the Commodore 128 was released, featuring an 80-column display, many users requested an updated version of SpeedScript to take advantage of this new capability. In June 1986, Compute!'s Gazette published SpeedScript-80, a short patch for SpeedScript 3.0 or higher, which enabled the use of the 80-column capabilities of the VDC (video display controller) on a Commodore 128 running in 64 mode. However, this did not take advantage of the C128's expanded memory, and a few minor commands were eliminated due to the alterations to the existing code. SpeedScript-80 was enhanced soon after with SpeedScript-80 Revisited, by Bob Kodadek. A native version for the C128 called SpeedScript 128, also written by Kodadek, was finally released in October 1987. This version eliminated the problems of the patch and took full advantage of the C128's 80-column screen, its expanded memory and the enhanced keyboard. A later update appeared in September 1989, adding full text justification, tab setting, and online help. In December 1987, Compute!'s Gazette published Instant 80, a utility for the C64 version of SpeedScript that allowed 80-column document previewing (though not editing) on a standard C64. This was done by using half-width characters on a high-resolution graphics screen. Utilities Although SpeedScript did not include a built-in spell checker, additional utilities were soon published. In December 1985, SpeedCheck was published in Compute!'s Gazette.
This external utility accepted SpeedScript files (as well as those from compatible word processors, such as PaperClip) and spell-checked them against a user-defined dictionary. An enhanced 80-column version for the C128, SpeedCheck 128, was published in September 1988. Another utility, ScriptSave, was developed to provide automatic saving functionality for the Commodore 64 version of SpeedScript 3.0. This program would install a timer routine to save documents to disk at intervals, before loading and running SpeedScript itself. Several additional utilities were published in the May 1987 issue of Compute!'s Gazette along with SpeedScript 3.2. ScriptRead was developed to identify and preview SpeedScript documents on a disk, with the ability to scratch any files no longer needed. This was an important addition, as on a single-drive system there would be no way to save work if the disk became full. SpeedSearch provided full-text search of all SpeedScript documents on a disk, returning a count of how many times the searched word or phrase was used in each document. Date and Time Stamper installed a program in the disk drive that added time stamps to files on disk, then executed SpeedScript. Reception In a review of four word processors, The Transactor in May 1986 praised SpeedScript as "extremely sophisticated", citing its large text buffer, logical cursor navigation, and undo command. While criticizing its lack of right justification, the magazine concluded that SpeedScript was not only "an easy winner" among budget-priced word processors, but also "a serious contender even when compared with the higher priced programs". SpeedScript was sufficiently popular to receive coverage in reference works, such as the "Wordprocessing Reference Guide" of Karl Hildon's Inner Space Anthology and Mitchell Waite's The Official Book for the Commodore 128. Columbia University's Kermit software for Commodore computers supported transferring SpeedScript files. References 1984 software Word processors Atari 8-bit family software Apple II word processors Commodore 64 software Commodore 128 software Commodore VIC-20 software Assembly language software Commercial software with available source code
2506658
https://en.wikipedia.org/wiki/SMBRelay
SMBRelay
SMBRelay and SMBRelay2 are computer programs that can be used to carry out SMB man-in-the-middle (MITM) attacks on Windows machines. They were written by Sir Dystic of CULT OF THE DEAD COW (cDc) and released March 21, 2001 at the @lantacon convention in Atlanta, Georgia. More than seven years after its release, Microsoft issued a patch that fixed the hole exploited by SMBRelay. The fix, however, only addresses the vulnerability when the SMB connection is reflected back to the client; if it is forwarded to another host, the vulnerability can still be exploited. SMBRelay SMBRelay receives a connection on TCP port 139 and relays the packets between the client and server of the connecting Windows machine to the originating computer's port 139, modifying these packets when necessary. After connecting and authenticating, the target's client is disconnected and SMBRelay binds to port 139 on a new IP address. This relay address can then be connected to directly using "net use \\192.1.1.1" and then used by all of the networking functions built into Windows. The program relays all of the SMB traffic, excluding negotiation and authentication. As long as the target host remains connected, the user can disconnect from and reconnect to this virtual IP. SMBRelay collects the NTLM password hashes and writes them to hashes.txt in a format usable by L0phtCrack for cracking at a later time. As port 139 is a privileged port that requires administrator access to bind, SMBRelay must be run from an account with administrator access. However, since port 139 is needed for NetBIOS sessions, it is difficult to block. According to Sir Dystic, "The problem is that from a marketing standpoint, Microsoft wants their products to have as much backward compatibility as possible; but by continuing to use protocols that have known issues, they continue to leave their customers at risk to exploitation... These are, yet again, known issues that have existed since day one of this protocol. This is not a bug but a fundamental design flaw. To assume that nobody has used this method to exploit people is silly; it took me less than two weeks to write SMBRelay." SMBRelay2 SMBRelay2 works at the NetBIOS level across any protocol to which NetBIOS is bound (such as NBF or NBT). It differs from SMBRelay in that it uses NetBIOS names rather than IP addresses. SMBRelay2 also supports man-in-the-middle attacks directed at a third host. However, it only supports listening on one name at a time. See also Pass the hash References External links The SMB Man-In-the-Middle Attack by Sir Dystic Symantec Security Bulletin How to disable LM authentication on Windows NT - lists affected operating systems Your Field Guide To Designing Security Into Networking Protocols Extended Protection for Authentication Windows security software Computer security exploits Internet Protocol based network software Cult of the Dead Cow software
69627446
https://en.wikipedia.org/wiki/OCR%20Systems
OCR Systems
OCR Systems, Inc., was an American computer hardware manufacturer and software publisher dedicated to optical character recognition technologies. The company's first product, the System 1000 in 1970, was used by numerous large corporations for bill processing and mail sorting. Following a series of setbacks in the 1970s and early 1980s, founder Theodore Herzl Levine put the company in the hands of Gregory Boleslavsky and Vadim Brikman, the company's vice presidents and recent immigrants from the Soviet Ukraine, who were able to turn OCR Systems' fortunes around and expand its employee base. The company released the software-based OCR application ReadRight for DOS, later ported to Windows, in the late 1980s. Adobe Inc. bought the company in 1992. History OCR Systems was co-founded by Theodore Herzl Levine (c. 1923 – May 30, 2005). Levine served in the U.S. Army Signal Corps during World War II in the Solomon Islands, where he helped develop a sonar to find ejected pilots in the ocean. After the war, Levine spent 22 years at the University of Pennsylvania, earning his bachelor's degree in 1951, his master's degree in electrical engineering in 1957, and his doctorate in 1968. Alongside his studies, Levine taught statistics and calculus at Temple University, Rutgers University, La Salle University and Penn State Abington. Sometime in the 1960s, Levine was hired at Philco. He and two of his co-workers decided to form their own company dedicated to optical character recognition, founding OCR Systems in 1969 in Bensalem, Pennsylvania. OCR Systems' first product, the System 1000, was announced in 1970. OCR Systems entered a partnership with 3M to resell the System 1000 throughout the United States in March 1973. This was 3M's entry into the data entry field, managed by the company's Microfilm Products Division and accompanying 3M's suite of data retrieval systems. It soon found use among Texas Instruments, AT&T, Ricoh, Panasonic and Canon for bill processing and mail sorting. Later in the mid-1970s an unspecified Fortune 500 company reneged on a contract to distribute the System 1000; later still a Canadian company distributing the System 1000 in Canada went defunct. Both incidents nearly bankrupted OCR Systems, although it eventually recovered. By the early 1980s, however, the company was almost insolvent. In 1983 Levine had only $8,000 in his savings and became bedridden with an illness. He left the company in the hands of Gregory Boleslavsky and Vadim Brikman, two Soviet Ukraine expats whom Levine had hired earlier in the 1980s. Boleslavsky was hired as a wire wrapper for the System 1000 and as a programmer and beta tester for ReadRight, a software package developed by Levine implementing patents from Nonlinear Technology, another OCR-centric company from Greenbelt, Maryland. Boleslavsky in turn recommended Brikman to Levine. The two soon became vice presidents of the company while Levine was bedridden; in Boleslavsky's case, he worked 14-hour days for over half a year in pursuit of the title. The two presented OCR Systems' products at the National Computer Conference in Chicago, where they were massively popular. The company soon gained such clients as Allegheny Energy in Pennsylvania and the postal service of Belgium, and received an influx of employees, mostly immigrants from Russia but also from Poland and South Korea, as well as American-born workers.
To accommodate the company's employee base, which had grown to over 30 in 1988, Levine moved OCR Systems' headquarters from Bensalem to the Masons Mill Business Park in Bryn Athyn. Chinon Industries of Japan signed an agreement with OCR Systems in 1987 to distribute OCR's ReadRight 1.0 software with Chinon's scanners, starting with their N-205 overhead scanner. In 1988, OCR opened its agreement to distribute ReadRight to other scanner manufacturers, including Canon, Hewlett-Packard, Skyworld, Taxan, Diamond Flower and Abaton. That year, the company posted a revenue of $3 million. OCR Systems extended their agreement with Chinon in 1989 and introduced version 2.0 of ReadRight. OCR Systems faced stiff competition in the software OCR market at the turn of the 1990s. The Toronto-based software firm Delrina signed a letter of intent to purchase the company in November 1991, expecting the deal to close in December and have OCR software available by Christmas. OCR was to receive $3 million worth of Delrina shares in a stock swap, but the deal collapsed in January 1992. Delrina later marketed its own Extended Character Recognition, or XCR, software package to compete with ReadRight. In July 1992, OCR Systems was purchased by Adobe Inc. for an undisclosed sum. Products System 1000 The System 1000 was based on the 16-bit Varian Data 620/i minicomputer with 4 KB of core memory. The system used the 620/i for controlling the paper feed, interpreting the format of the documents, the optical character recognition process itself, error detection, sequencing and output. The System was initially programmed to recognize 1428 OCR (used by Selectrics); IBM 407 print; and the full character sets of OCR-A, OCR-B and Farrington 7B; as well as optical marks and handwritten numbers. In 1970 OCR Systems promised compatibility with additional fonts down the line, available per request. The number of fonts supported was limited by the amount of core memory, which was expandable in 4 KB increments up to 32 KB. The System 1000 later supported generalized typewriter and photocopier fonts. The rest of the System 1000 comprised the document transport, one or more scanner elements, a CRT display and a Teletype Model 33 or 35. Pages were fed via friction with a rubber belt. Up to three lines could be scanned per document, while the rest of the scanned document could be laid out in any manner, granted there was enough space around the fields to be read. The reader initially supported pages as small as 3.25 by 3.5 inches (later 2.6 by 3.5 inch utility cash stubs) all the way up to the standard ANSI letter size (8.5 by 11 inches; later 8.5 by 12 inches, as used in stock certificates). The initial System 1000 had a maximum throughput of 420 documents per minute per transport (later 500 documents per minute), contingent on document size and content. A feature unique to the System 1000 over other optical character recognition systems of the time was its ability to alert the operator when a field was unreadable or otherwise invalid. This feature, called Document Referral, placed the document in front of the operator and displayed a blank field on the screen of the included CRT monitor for manual re-entry via keyboard. Once input, data could be output to 7- or 9-track tape, paper tape, punched cards and other mass storage media, or to System/360 mainframes for further processing. The complete System 1000 could be purchased for US$69,000.
Options for renting were $1,800 per month on a three-year lease or $1,600 per month for five years. Computerworld wrote that it was less than half the cost of its competitors while more capable and user-friendly. Competing systems included the Recognition Equipment Retina, the Scan-Optics IC/20 and the Scan-Data 250/350. ReadRight ReadRight processes individual letters topographically: it breaks down the scanned letter into parts (strokes, curves, angles, ascenders and descenders) and follows a tree structure of letters broken down into these parts to determine the corresponding character code. ReadRight was entirely software-based, requiring no expansion card to work. Version 2.01, the last version released for DOS, runs in real mode in under 640 KB of RAM. OCR Systems released the Windows-only version 3.0 in 1991 while offering version 2.01 alongside it. The company unveiled a sister product, ReadRight Personal, dedicated to handheld scanners and for Windows only, in October 1991. This version added real-time scanning: each word is updated on the screen while lines are being scanned. ReadRight proper was later made a Windows-only product with version 3.1 in 1992. The inclusion of ReadRight 2.0 with Canon's IX-12F flatbed scanner led PC Magazine to award it an Editor's Choice rating in 1989. Despite this, reviewer Robert Kendall found fault with ReadRight's ability to parse proportional typefaces such as Helvetica and Times New Roman. Mitt Jones of the same publication found version 2.01 to have improved its ability to read such typefaces and praised its ease of use and low resource intensiveness. Jones disliked its inability to handle uneven paragraph column widths and graphics, noting that the manual recommended the user block out graphics with a Post-it Note. Version 3.1 for Windows received mixed reviews. Mike Heck of InfoWorld wrote that its "low cost and rich collection of features are hard to ignore" but rated its speed and accuracy average. Barry Simon of PC Magazine called it economical but inaccurate, unable to correct errors it did not detect, and found its spellchecker flawed and its speed lacking compared to Calera's WordScan Plus. Gary Berline of the same publication wrote that "ReadRight produced serviceable accuracy on clean files with simple layouts, but at a less than sprightly pace", finding it unable to process small type and multicolumn text with small margins between columns. The software also regularly interpreted graphical illustrations as text in his experience. In July 1992, OCR Systems announced a follow-up release promising to correct these issues, but it never came to fruition on account of Adobe buying the company. Citations References Adobe Inc. American companies established in 1969 American companies disestablished in 1992 Computer companies established in 1969 Computer companies disestablished in 1992 Defunct computer companies of the United States Defunct computer hardware companies Defunct software companies of the United States Optical character recognition
33444929
https://en.wikipedia.org/wiki/Timothy%20Binkley
Timothy Binkley
Timothy Binkley (born Timothy Glenn Binkley on September 14, 1943 in Baltimore, MD) is an American philosopher, artist, and teacher, known for his radical writings about conceptual art and aesthetics, as well as several essays that helped define computer art. He is also known for his interactive art installations. Biography Timothy Binkley studied mathematics at the University of Colorado at Boulder, earning a B.A. (1965) and an M.A. (1966). His PhD in philosophy, from the University of Texas at Austin (1970), explored Ludwig Wittgenstein's use of language. Binkley has lectured and taught at several colleges and universities in the United States, most notably at the School of Visual Arts, where he initiated the MFA Computer Art program, the first of its kind in the country. In 1992, he founded the New York Digital Salon, an international exhibition of computer art. He has exhibited his interactive art in the United States, Europe, South America, and Asia. Philosophy Binkley postulates that twentieth-century art is a strongly self-critical discipline, which creates ideas free of traditional piece-specifying conventions, including aesthetic parameters and qualities. The artwork is a piece, and a piece is not necessarily an aesthetic object, or an object at all. Binkley states that anything that can be thought about or referred to can be labeled an artwork by an artist. Binkley argues that the computer is neither a medium nor a tool, since both media and tools have inherent characteristics that can be explored through an artist's gestures or physical events for mark-making. Instead, the computer is a chameleon-like or even promiscuous assistant, whose services can be applied to any number of tasks and whose capabilities can be defined endlessly from application to application. Binkley refers to the computer as a non-specific technology and an incorporeal metamedium. Yet the computer contains phenomena not found in other media: namely, a conceptual space where symbolic content can be modified using mathematical abstractions. The notion of an “original” and its consequent value are considered irrelevant, obsolete, or inapplicable to computer art. Binkley's philosophy extends beyond art and aesthetics to culture itself, whose foundations he believes we are overhauling through our involvement with computers. Bibliography Books Symmetry Studio: Computer-Aided Surface Design. New York: Van Nostrand Reinhold, 1992. With John F. Simon Jr. Includes surface design software on CD. Wittgenstein's Language. The Hague: Martinus Nijhoff, 1973. Selected articles “A Philosophy of Computer Art by Lopes, Dominic McIver”, Journal of Aesthetics and Art Criticism 68(4), (2010): 409–411. "Autonomous Creations: Birthing Intelligent Agents", Leonardo, 31(5), Sixth Annual New York Digital Salon, (1998): 333–336. "The Vitality of Digital Creation", The Journal of Aesthetics and Art Criticism, 55(2), Perspectives on the Arts and Technology. (Spring, 1997): 107–116. "Computer Art" and "Digital Media", Encyclopedia of Aesthetics, New York: Oxford University Press, 1998. 1:412–414, 2:47–50. “Transparent Technology: The Swan Song of Electronics", Leonardo, 28(5), Special Issue "The Third Annual New York Digital Salon" (1995): 427–432. "Creating Symmetric Patterns with Objects and Lists", Symmetry: Culture and Science, 6(1), (1995). "Refiguring Culture", Future Visions: New Technologies of the Screen, London: British Film Institute Publications, (1993): 90–122. "Postmodern Torrents", Millennium Film Journal, 23/24 (Winter 1990–91): 130–141.
"The Computer is Not a Medium", Philosophic Exchange (Fall/Winter, 1988/89). Reprinted in EDB & kunstfag, Rapport Nr. 48, NAVFs EDB-Senter for Humanistisk Forskning. Translated as "L'ordinateur n'est pas un médium", Esthétique des arts médiatiques, Sainte-Foy, Québec: Presses de l'Université du Québec, 1995. "Computed Space", National Computer Graphics Association Conference Proceedings, (1987): 643–652. "Piece: Contra Aesthetics", Philosophy Looks at the Arts: Contemporary Readings in Aesthetics, 3rd Ed., edited by Joseph Margolis, (Philadelphia: Temple University Press, 1987). Originally published in The Journal of Aesthetics and Art Criticism, 35(3), (Spring 1977): 265–277. A French translation appeared in Poétique 79 (Septembre, 1989) and has been collected in Esthétique et Poétique, edited by Gérard Genette, (Paris: Éditions du Seuil, 1992). Also anthologized in The Philosophy of the Visual Arts, edited by Philip Alperson (New York: Oxford University Press, 1990), and A Question of Art, edited by Benjamin F. Ward, (Florence, KY: Brenael Publishing, 1994). "Conceptual Art: Appearance and Reality", Art In Culture, 1, edited by A. Balis, L. Aagaard-Mogensen, R. Pinxten, F. Vandamme (Ghent, Belgium: Communication & Cognition Publishers, 1985). Proceedings of the Ghent colloquium "Art in Culture." "Deciding About Art", Culture and Art, edited by Lars Aagaard-Mogensen (Atlantic Highlands, N.J.: Humanities Press, 1976). Exhibitions Rest Rooms, interactive telecommunications installation with video-conferenced computers, exhibited at SIGGRAPH ’94 in Orlando, FL., Wexner Center for the Arts in Columbus, OH (April 1–30, 1995), Schloss Agathenberg in Germany (September 24 – November 26, 1995), Schloß Arolsen in Germany (February 24 – April 14, 1996). Books of Change, interactive computer installation exhibited in "Tomorrow's Realities", SIGGRAPH 1994. Included in the "Multimedia Playground" at the Exploratorium in San Francisco (February 12 – March 13, 1994). Exhibited at the Hong Kong Arts Centre in Hong Kong (June 26–29, 1994), the Central Academy of Art and Design in Beijing (July 4–8, 1994), and Camera Obscura in Tel Aviv (October 16–20, 1994). Watch Yourself, interactive computer installation. Included in "Tomorrow's Realities" exhibit at SIGGRAPH '91 in Las Vegas (July 29 – August 2, 1991). Exhibited at the National Conference on Computing and Values, New Haven (August 12–16, 1991). Accepted for Ars Electronica in Linz, Austria (1992). Exhibited at Videobrasil International Videofestival in São Paulo (September 21–27, 1992). Exhibited at Digital Jambalaya in New York City (November 16 – December 1, 1992) in conjunction with the international TRIP '92 event. Demonstration tape included on Computer Graphics Access '89-'92 videodisks (Bunkensha: Tokyo, 1992); Electronic Dictionary videodisks (G.R.A.M.: Montréal, 1993). Exhibited at Images du Futur in Montréal (May 13 – September 19, 1993). Exhibited at Vidéoformes in France (April 6–23, 1994). Shown at the Solomon R. Guggenheim Museum in New York City (June 2, 1994). Included in "Art for the End of the Century: Art and Technology" at the Reading Public Museum (July 23, 1995 – January 1, 1996). Exhibited at ciberfestival 96 in Lisbon, Portugal (February 9 – March 17, 1996). Permanent installation at Tempozan Contemporary Museum in Osaka, Japan (opened in September 1996). Personal life and family Binkley is married to artist and author Sonya Shannon and has a daughter Shelley Binkley, M.D., from a previous marriage to Sue Binkley Tatem. 
References 1943 births 20th-century American philosophers Philosophers of art Living people Philosophers from Maryland
2107097
https://en.wikipedia.org/wiki/Jagged%20Alliance%202
Jagged Alliance 2
Jagged Alliance 2 is a tactical role-playing game for the PC, released in 1999 for Microsoft Windows and later ported to Linux by Tribsoft. It is the third entry in the Jagged Alliance series. The game was followed by the expansion Unfinished Business in 2000. Two commercial versions of the mod Wildfire were released in 2004 in the form of expansion packs. The core game and the Unfinished Business expansion were combined and released under the title Jagged Alliance 2 Gold Pack in 2002. The game takes place in the fictional country of Arulco, which has been ruled by the ruthless monarch Deidranna for several years. The player is put in control of hired mercenaries and, with the aid of local citizens and militia, must reclaim Arulco's cities and ultimately defeat Deidranna. The game uses a strategic map screen of Arulco, where the player may issue high-level strategic orders to their troops, such as travelling or prolonged training. Combat and individual location exploration take place on the tactical screen, where the player can issue direct commands to individual mercenaries, such as run, shoot, talk and so on. The game features a wide variety of guns, armour and items that the player may use. The game was commercially successful; Pelit estimated its sales at 300,000 units by 2006. However, it sold poorly in the United States. The game received positive scores from reviewers and was praised for its freedom of action, memorable characters and non-linear, tactical gameplay. Gameplay The game puts the player in control of several mercenaries who must explore and reclaim towns and territories from enemy forces. As the game advances, the player can hire new mercenaries and acquire better weapons and armour to combat opponents. The map screen displays the world map of Arulco as a square grid of sectors, along with the forces deployed by the enemy and the player. This is the strategic side of the game: the player directs their forces and controls the progress of time, which may be sped up or paused. From here, the player can access the game's laptop function, allowing them to receive emails from characters, buy weapons and equipment, and hire and fire mercenaries. This screen is also used to give mercenaries tasks. Mercs with a medical kit and medical skill can be set to tend to wounded mercs; this significantly quickens their recovery. Mercs with a tool box and mechanical skill can be set to repair damaged weapons, tools and armour. Mercs can practice a skill by themselves or work as a trainer or student. Training a student increases the student's chosen skill. A trainer may also train local citizens to become militia, who defend sectors while the mercs are away. Mercs can be ordered to travel on foot between sectors. If the player acquires a ground or aerial vehicle in-game, they may load their troops into it to travel between sectors much faster. On the tactical screen, the player takes control of individual mercenaries during real-time interactions and turn-based combat. The tactical screen shows a sector from an isometric viewpoint. Here the player can view the terrain, explore buildings and find items. Although the game does not feature a visual fog of war, non-player characters (NPCs) can only be seen if a player-controlled or allied character sees them. Game time advances in real time on the tactical screen unless a battle is initiated, at which point the game switches to a turn-based combat mode.
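The visibility rule and the switch from real time into turn-based combat can be modelled in a few lines. The following Python sketch is illustrative only: the flat grid, the fixed sight radius and the two-state mode flag are invented simplifications, and the actual engine also accounts for facing, lighting and terrain.

    # Sketch of the tactical-screen visibility rule: enemies are "seen"
    # only when a player-controlled character sees them, and first contact
    # drops the game from real time into turn-based combat.

    import math

    SIGHT_RADIUS = 8   # hypothetical sight range, in tiles

    def can_see(observer, target):
        """True if the target lies within the observer's sight radius."""
        ox, oy = observer
        tx, ty = target
        return math.hypot(tx - ox, ty - oy) <= SIGHT_RADIUS

    def update_mode(player_units, enemy_units):
        """Return 'turn_based' once any player unit spots any enemy."""
        for merc in player_units:
            for enemy in enemy_units:
                if can_see(merc, enemy):
                    return "turn_based"
        return "real_time"

    mercs = [(2, 3), (4, 4)]
    enemies = [(15, 15), (9, 6)]
    print(update_mode(mercs, enemies))   # 'turn_based': (9, 6) is in range

In the full game, sight is further modified by darkness, stealth and camouflage, as the following paragraphs describe.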
The player can control an individual merc or a group of mercs, issuing movement, communication and various interaction commands. Mercs can run, walk, swim, crouch or crawl. Mercs may climb onto the roofs of flat-roofed buildings. Battles occur whenever the player's and enemy forces occupy the same sector. The game proceeds in real time until a member of one force spots an enemy; the game then switches to turn-based play. Each force takes alternating turns to move, attack, and perform various other actions. Each character has a limited number of action points, which are spent to perform actions. The action points are renewed at the beginning of each round, depending on the physical state of the merc, and some unspent action points are carried over to the next round. If a combatant has action points left over during the enemy's turn and spots an enemy, they stand a chance of interrupting the enemy turn and performing actions. Mercs can attack enemies in many different ways: with firearms such as handguns, machine guns and rifles; close-combat and thrown weapons like knives and hand grenades; heavy weapons such as mortars, rocket-propelled grenades and light anti-tank weapons; and explosives like mines and bombs. When a merc attacks, they have a certain chance to hit the target depending on the appropriate skill, obstacles in the line of fire and the number of action points spent aiming. Walls, doors, and many objects can be destroyed using explosives or heavy weapons. Some battles may be automatically resolved if the player chooses to do so. The game may be played using stealth elements. Mercs may move either in normal or stealth mode. In stealth mode, the merc attempts to move without making any noise. Moving stealthily costs more action points, but may successfully hide the merc's position from enemies. The game features weapons that do not make loud noise, and camouflage kits which, when used, may disguise the merc in their environment. Merc attributes and some special skills affect how stealthy they are. The game features a large array of items, including weapons, armour, tools and miscellaneous items. Items can be traded between mercs, picked up, dropped or thrown. If a merc dies, they drop all their items. Enemies will sometimes drop items upon dying. Mercs can be hired along with their own combat equipment. As the game progresses, the online shop offers a larger and better variety of weapons, armour and tools for sale. Mercenaries can equip and carry various items in their inventory. Mercs can wear armour on their head, chest and legs. Certain skills and interactions require a certain tool or object to be held in the hand. Mercs can hold one large weapon or dual-wield two small weapons. Weapons may be improved via special attachments, for instance a silencer or bipod. The player needs money to pay the mercenaries' hire fees and to purchase equipment. At the start of the game, the player is given a set amount of money. The main source of income in the game is the silver and gold mines located in several towns. The player has to reclaim these town sectors and convince the local miners to work for them in order to receive a daily income. Weaponry, equipment and miscellaneous items may be sold to local merchants. Although the player is directed by the rebels to head to Drassen first, they may choose to capture the towns and explore the countryside in any order they desire; it is not required to capture any towns. Additionally, almost every sector may be entered via two or more entry points.
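The action-point economy and aimed-shot mechanics described above amount to straightforward bookkeeping, as a rough Python sketch shows. The specific point costs, the carry-over cap and the hit-chance weights below are invented for illustration and are not the game's actual numbers.

    # Rough model of the action-point economy: actions cost points, a
    # capped amount of unspent points carries over to the next round, and
    # extra points spent aiming raise the chance to hit.

    import random

    COSTS = {"shoot": 5, "aim": 1}   # hypothetical action-point costs
    CARRYOVER_CAP = 3                # portion of unspent points retained

    class Merc:
        def __init__(self, ap_per_turn, marksmanship):
            self.ap_per_turn = ap_per_turn
            self.marksmanship = marksmanship   # skill on a 0-100 scale
            self.ap = ap_per_turn

        def new_turn(self):
            # Renew points, carrying over a capped amount of unspent ones.
            self.ap = self.ap_per_turn + min(self.ap, CARRYOVER_CAP)

        def shoot(self, cover_penalty, aim_clicks=0):
            """Spend points to fire; extra 'aim' points raise the chance."""
            cost = COSTS["shoot"] + aim_clicks * COSTS["aim"]
            if self.ap < cost:
                return None                    # too few points this turn
            self.ap -= cost
            chance = self.marksmanship + 8 * aim_clicks - cover_penalty
            chance = max(5, min(95, chance))   # clamp to 5-95 percent
            return random.randint(1, 100) <= chance

    merc = Merc(ap_per_turn=20, marksmanship=70)
    print(merc.shoot(cover_penalty=25, aim_clicks=2))  # hits ~61% of the time
    merc.new_turn()
    print(merc.ap)   # 23: the base 20 plus the capped carry-over of 3

The same trade-off drives play in the game itself: points spent aiming are points not available for movement or a second shot.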
The player may choose from a very large array of different mercs, allowing combinations for specific purposes – e.g. stealth combat, night combat, close-quarters combat and so on. The game features random treasure chests, characters and certain events that differ from game to game. Characters The characters in Jagged Alliance 2 are mercenaries, enemies, allies and townsfolk; many of the NPCs may be interacted with. The game is played almost entirely through the mercenaries chosen. Mercenaries can be hired from private military company websites, or recruited from the local citizenry. The player may also create one personalized, unique mercenary. Characters are defined by their skills, expressed in character points. Every character has an experience level, five attributes (agility, dexterity, strength, leadership, wisdom) and four skills (marksmanship, explosives, mechanical, medical). A character's level is increased by actively participating in the game. Skills are increased by performing actions based on these skills or by training. Attribute points may be lost if a character is critically wounded. Apart from this, a mercenary can have two special skills (or one highly developed special skill) enhancing a certain aspect of his or her performance, such as night operations or lockpicking. Each character has a certain number of health points, which are reduced when they take damage. Wounds can be bandaged using first aid kits. This stops the character from bleeding and losing more health, but does not restore health points. A wounded mercenary receives fewer action points, in proportion to their wounds. Health can be restored by resting, by being treated by another mercenary using the 'doctor'–'patient' pair of commands, or by visiting a hospital. When a mercenary runs very low on health, the character falls to the ground, slowly dying, unable to do anything until medically treated. If a character dies, they cannot be resurrected. Characters have an energy level, restored by sleep, rest, fluids or injections. Moving, using stealth and getting hit sap energy. Tired mercs who have not rested for a long time become exhausted faster. Exhausted characters will fall to the ground until they regain some energy. Mercenaries have a morale level, mainly increased by victories and successful kills and decreased by the opposite. Happy mercs perform better, while unhappy mercs will complain and may leave the player's forces altogether. Mercs who like each other and work together will have higher morale than others. Mercenaries who dislike each other will complain often, and eventually one of the two will quit. Mercenaries may refuse to be hired if the player has already hired someone they dislike. Plot Jagged Alliance 2 takes place in the fictional nation of Arulco, ruled until the late 1980s by a unique democratic monarchy – monarchs led the nation, but elections were held every ten years to assert their legitimacy. In 1988, election candidate Enrico Chivaldori took a wife, Deidranna Reitman of Romania, in order to boost his popularity, and was consequently victorious. However, Deidranna proved to be more than a pawn; showing a thirst for power, she framed Chivaldori for the murder of his father. Enrico managed to escape by faking his death. Removing all other obstacles from her way, Deidranna consolidated her power and converted Arulco into an authoritarian state. When the game begins, Chivaldori has hired the player to remove Deidranna by whatever means necessary. 
He puts the player and their team of mercenaries in contact with a rebel movement in the northern town of Omerta. Omerta suffered a massive raid shortly before the events of the game, leaving the town damaged and nearly deserted. The rebel leader Miguel Cordona, a former election candidate and opponent of Enrico, guides the player to the city of Drassen. The game features a science fiction mode that introduces enemies not present in the realistic mode – the "Crepitus", a species of giant insect living underground, infesting mines and occasionally emerging to the surface. Development In an interview with Game.EXE during the game's development, Sir-Tech noted an intention to preserve the freedom of movement, character relationships and flexible storyline of the previous title. Among the new features discussed were a real-time mode and the generation of the player's own character. In a later interview, around the game's alpha release, the game's designer Ian Currie noted the addition of role-playing and strategy elements compared to the previous title. He explained that, while the tactical game mode was well suited to multiplayer, multiplayer would not be implemented, because the non-linear story and role-playing elements did not suit it. The developers attributed the game's delay to heavy improvements to the game engine and graphics and the creation of a detailed world; the game was originally expected to release in October 1998. Currie explained that the game engine had several ground-up rewrites and that significant time was spent on the game's physics and projectile ballistics, as well as the creation of the non-linear storyline. A demo of the game, containing a unique map named Demotown, was released in the summer of 1998 through distribution with various gaming magazines. An open source AmigaOS 4 port was released on June 28, 2008. Reception Sales Jagged Alliance 2 became a commercial success and, according to designer Ian Currie, was the second-largest hit published by TalonSoft. By October 1999, Udo Hoffman of PC Player reported that its sales had surpassed 100,000 units in Germany and that distributor TopWare was pleased with the returns. It debuted at #1 in April 1999 on Germany's computer game sales charts and held the position the following month, before falling to ninth in June. It claimed 14th place in July. However, the game flopped in North America: PC Data reported sales of 24,000 units for Jagged Alliance 2 through the end of 1999. The editors of GameSpot nominated it for their 1999 "Best Game No One Played" award, which ultimately went to Disciples: Sacred Lands. Regarding Jagged Alliance 2's global sales, Pelit's Niko Nirvi reported that "estimates of sales bounced between 150,000–500,000." He remarked that team member Chris Camfield placed its sales at 300,000 units by 2006, a figure that Nirvi said "sounds credible". Critical reviews Jagged Alliance 2 received a score of 85.09% from the review aggregation website GameRankings. GameSpot praised its non-linear gameplay, freedom of action, variety of tactics and mercenary character traits, while IGN highlighted its story-telling and role-playing, detailed world, challenging opponents and excellent audio. Computer Gaming World presented the game with its Editor's Choice award, and Jeff Green described it as a hardcore game, praising its selection of mercenaries, replay value and combat planning, although he noted that the AI was not very intelligent and the graphics were not strong. 
Jagged Alliance 2 was a finalist for the Computer Games Strategy Plus, GameSpot and Computer Gaming World 1999 "Strategy Game of the Year" awards, which it lost to RollerCoaster Tycoon, Age of Empires II: The Age of Kings and Homeworld, respectively. It was also nominated in GameSpot's "Best Sound" and "Best Game No One Played" categories. The editors of Computer Games wrote, "Sirtech's amazing hybrid of detailed tactical strategy and role-playing delivered the goods. How many real-time games have this much tension?" Legacy After the release of the original Jagged Alliance 2, two sequels and various mods have been released, including a prequel titled Jagged Alliance: Flashback, which was released on Steam in October 2014. Unfinished Business Jagged Alliance 2: Unfinished Business, alternatively known as Jagged Alliance 2.5, is a short, mission-based standalone sequel released by Sir-Tech on December 5, 2000. This release adds some tweaks to the combat engine as well as a scenario editor; the gameplay remains largely unchanged. Unfinished Business introduces a new plot: the original owners of Arulco's lucrative mines have returned and established a missile base on the nearby island of Tracona, demanding that the mines be returned to them. They destroy Arulco's now-empty Tixa prison as an example of what will happen if their demands are not met. The player must put together a team of mercenaries to infiltrate Tracona and disable the missile base. Alternatively, the player may choose to use the characters from a previously saved Jagged Alliance 2: Gold savegame. Unfinished Business is notably harder than the original. The product appears to have been rushed: the gameplay is virtually the same as the original's, the play-time of Unfinished Business is much shorter, and the plot is linear and thus lacks replay value. Sir-Tech was experiencing financial problems at the time, most significantly trouble finding a publisher. Gold Pack Jagged Alliance 2: Gold Pack was published by Strategy First on August 6, 2002 and adds the improvements of Unfinished Business to the final release of Jagged Alliance 2. The Unfinished Business campaign and a scenario editor are included in the package. Gold Pack introduces notable changes to the difficulty settings. A player choosing an advanced difficulty level may decide whether player turns are timed and whether to disallow saving during combat; the original Jagged Alliance 2 set these options automatically. As of July 2006 it is available via the Steam content delivery system, as well as Turner Broadcasting's GameTap. Wildfire Jagged Alliance 2: Wildfire was released by i-Deal Games in 2004 as an official expansion pack published by Strategy First. The game's source code was published in the package, under a license permitting non-commercial use. Compared to the original Jagged Alliance 2, Wildfire did not alter the game engine or controls. The focus was instead directed toward designing revamped environments, new items and stronger enemies. This presents players with a more challenging campaign; however, the goals and progression remain the same. In terms of gameplay features, the game remains almost unchanged. A renewed commercial release of Wildfire, dubbed version 6, through European publisher Zuxxez Entertainment in 2005 kept the Jagged Alliance series on shop shelves more than five years after the debut of its second iteration. 
Wildfire version 6 contains modified source code and a tweaked graphics engine that allows for a higher resolution; it introduces new mercenaries and increases the squad size from 6 to 10. Modding and community development Jagged Alliance 2: Gold has seen numerous community mods since its release, and especially since the source code release. Notable examples include v1.13, Urban Chaos and Deidranna Lives. JA2-Stracciatella The JA2-Stracciatella project aims to make Jagged Alliance 2 cross-platform by porting the vanilla Wildfire source code to SDL as the underlying library. Mac OS, Linux, Android and other source ports were released as a result. Stracciatella can also be seen as an unofficial patch project, since its second focus is fixing technical errors (like buffer overflows) and removing technical limitations without changing the game's balancing. The last commit of project initiator Tron in the project repository was r7072, from August 2010. In March 2013 the JA2-Stracciatella project was continued by another developer and transferred to a new repository. Support for higher resolutions and additional fixes were introduced. In March 2016 the project was continued on GitHub. v1.13 v1.13 is an enhanced version of Jagged Alliance 2 Gold and a partial conversion mod of the game. The main change from the original code is the "externalisation" of many previously hardcoded variables to editable XML files, allowing users a great degree of modding flexibility. It introduced many new features and items, as well as a multiplayer mode. Since July 2007, v1.13 has been successfully ported to Linux. It is also backwards-compatible with a base Jagged Alliance 2 installation. References External links Official website via Internet Archive Jagged Alliance 2 at MobyGames 1999 video games Amiga games AmigaOS 4 games Commercial video games with freely available source code Linux games MacOS games Proprietary software that uses SDL Single-player video games Sir-Tech games Strategy First games Tactical role-playing video games Turn-based tactics video games Video game sequels Video games developed in Canada Video games scored by Kevin Manthei Video games with expansion packs Video games with isometric graphics Windows games
91256
https://en.wikipedia.org/wiki/Computer%20and%20network%20surveillance
Computer and network surveillance
Computer and network surveillance is the monitoring of computer activity and data stored locally on a computer, or of data being transferred over computer networks such as the Internet. This monitoring is often carried out covertly and may be conducted by governments, corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent government agencies. Computer and network surveillance programs are widespread today, and almost all Internet traffic can be monitored. Surveillance allows governments and other agencies to maintain social control, recognize and monitor threats or any suspicious activity, and prevent and investigate criminal activities. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens. Many civil rights and privacy groups, such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increasing surveillance of citizens will result in a mass surveillance society, with limited political and/or personal freedoms. Such fear has led to numerous lawsuits, such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance". Network surveillance The vast majority of computer surveillance involves the monitoring of personal data and traffic on the Internet. For example, in the United States, the Communications Assistance For Law Enforcement Act mandates that all phone calls and broadband internet traffic (emails, web traffic, instant messaging, etc.) be available for unimpeded, real-time monitoring by Federal law enforcement agencies. Packet capture (also known as "packet sniffing") is the monitoring of data traffic on a network. Data sent between computers over the Internet or between any networks takes the form of small chunks called packets, which are routed to their destination and assembled back into a complete message. A packet capture appliance intercepts these packets so that they may be examined and analyzed. Computer technology is needed to perform traffic analysis and sift through intercepted data to look for important/useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install such packet capture technology so that Federal law enforcement and intelligence agencies are able to intercept all of their customers' broadband Internet and voice over Internet protocol (VoIP) traffic. There is far too much data gathered by these packet sniffers for human investigators to search through manually. Thus, automated Internet surveillance computers sift through the vast amount of intercepted Internet traffic, filtering out and reporting to investigators those bits of information which are "interesting" – for example, the use of certain words or phrases, visits to certain types of web sites, or communication via email or chat with a certain individual or group. 
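To make the filtering step just described concrete, the following is a minimal sketch of keyword filtering over captured packets. It assumes the third-party scapy library and a capture interface the process is privileged to open (usually root/administrator rights); the watch list is invented for illustration, and only unencrypted payloads can be inspected this way.

```python
# A minimal sketch of automated keyword filtering over captured packets.
# Requires scapy (pip install scapy) and capture privileges; illustrative only.
from scapy.all import sniff, Raw

KEYWORDS = [b"example-keyword", b"another-phrase"]  # hypothetical watch list

def inspect(pkt):
    # Only packets carrying an application-layer payload are of interest.
    if pkt.haslayer(Raw):
        payload = pkt[Raw].load
        if any(k in payload for k in KEYWORDS):
            # A real system would store or forward the hit; here we just print.
            print("flagged:", pkt.summary())

# Capture TCP traffic and run the filter on each packet as it arrives.
sniff(filter="tcp", prn=inspect, store=False)
```

A production system of the kind described above would persist hits together with metadata, and would operate on reassembled streams rather than individual packets.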
Billions of dollars per year are spent by agencies such as the Information Awareness Office, the NSA, and the FBI for the development, purchase, implementation, and operation of systems which intercept and analyze this data, extracting only the information that is useful to law enforcement and intelligence agencies. Similar systems are now used by the Iranian security services to identify and suppress dissidents; the technology has allegedly been installed by Germany's Siemens AG and Finland's Nokia. The Internet has rapidly developed into a primary form of communication, and more people are potentially subject to Internet surveillance. There are advantages and disadvantages to network monitoring. For instance, systems described as "Web 2.0" have greatly impacted modern society. Tim O'Reilly, who first explained the concept of "Web 2.0", stated that Web 2.0 provides communication platforms that are "user generated", with self-produced content, motivating more people to communicate with friends online. However, Internet surveillance also has disadvantages. One researcher from Uppsala University said, "Web 2.0 surveillance is directed at large user groups who help to hegemonically produce and reproduce surveillance by providing user-generated (self-produced) content. We can characterize Web 2.0 surveillance as mass self-surveillance". Surveillance companies monitor people while they are focused on work or entertainment. Employers also monitor their own employees, in order to protect the company's assets and to control public communications, but most importantly to make sure that their employees are actively working and being productive. This can affect people emotionally, because it can cause emotions like jealousy. A research group states, "...we set out to test the prediction that feelings of jealousy lead to 'creeping' on a partner through Facebook, and that women are particularly likely to engage in partner monitoring in response to jealousy". The study shows that women can become jealous of other people when they are in an online group. Virtual assistants (AI) have become integrated into many people's everyday lives. Currently, virtual assistants such as Amazon's Alexa or Apple's Siri cannot call 911 or local services. They are constantly listening for commands and recording parts of conversations that will help improve their algorithms. If law enforcement could be called using a virtual assistant, law enforcement would then be able to access all the information saved on the device. Because the device is connected to the home's Internet connection, law enforcement would know the exact location of the individual making the call. While virtual assistant devices are popular, many debate the lack of privacy they entail. The devices listen to every conversation the owner has; even when the owner is not talking to the virtual assistant, the device keeps listening in the hope that the owner will need assistance, as well as to gather data. Corporate surveillance Corporate surveillance of computer activity is very common. The data collected is most often used for marketing purposes or sold to other corporations, but it is also regularly shared with government agencies. It can be used as a form of business intelligence, which enables the corporation to better tailor its products and/or services to the desires of its customers. 
The data can also be sold to other corporations so that they can use it for the aforementioned purpose, or it can be used for direct marketing purposes, such as targeted advertisements, where ads are targeted to the user of a search engine by analyzing their search history and emails (if they use free webmail services), which are kept in a database. This type of surveillance can also serve several business purposes, which may include the following:
Preventing misuse of resources. Companies can discourage unproductive personal activities such as online shopping or web surfing on company time. Monitoring employee performance is one way to reduce unnecessary network traffic and reduce the consumption of network bandwidth.
Promoting adherence to policies. Online surveillance is one means of verifying employee observance of company networking policies.
Preventing lawsuits. Firms can be held liable for discrimination or employee harassment in the workplace. Organizations can also be involved in infringement suits through employees who distribute copyrighted material over corporate networks.
Safeguarding records. Federal legislation requires organizations to protect personal information. Monitoring can determine the extent of compliance with company policies and programs overseeing information security. Monitoring may also deter unlawful appropriation of personal information, and potential spam or viruses.
Safeguarding company assets. The protection of intellectual property, trade secrets, and business strategies is a major concern. The ease of information transmission and storage makes it imperative to monitor employee actions as part of a broader policy.
A second component of prevention is determining the ownership of technology resources. The ownership of the firm's networks, servers, computers, files, and e-mail should be explicitly stated. There should be a distinction between an employee's personal electronic devices, which should be limited and proscribed, and those owned by the firm. For instance, Google Search stores identifying information for each web search: an IP address and the search phrase used are stored in a database for up to 18 months. Google also scans the content of emails of users of its Gmail webmail service in order to create targeted advertising based on what people are talking about in their personal email correspondence. Google is, by far, the largest Internet advertising agency – millions of sites place Google's advertising banners and links on their websites in order to earn money from visitors who click on the ads. Each page containing Google advertisements adds, reads, and modifies "cookies" on each visitor's computer. These cookies track the user across all of these sites and gather information about their web surfing habits, keeping track of which sites they visit and what they do when they are on these sites. This information, along with the information from their email accounts and search engine histories, is stored by Google and used to build a profile of the user to deliver better-targeted advertising. The United States government often gains access to these databases, either by producing a warrant or by simply asking. The Department of Homeland Security has openly stated that it uses data collected from consumer credit and direct marketing agencies to augment the profiles of the individuals it is monitoring. 
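The cookie-based tracking described above can be illustrated with a short, hypothetical sketch of a third-party ad server. It uses the Flask web framework; the route, cookie name, and in-memory store are assumptions made for illustration, not a description of any real ad network.

```python
# A minimal sketch of third-party tracking cookies: an ad server assigns each
# new visitor an ID cookie and logs every page that embeds its content.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)
visits = {}  # in-memory stand-in for the profile database

@app.route("/ad")
def serve_ad():
    # Reuse the visitor's existing ID if the cookie came back; otherwise mint one.
    uid = request.cookies.get("uid") or str(uuid.uuid4())
    # Record which page embedded the ad, gradually building a browsing profile.
    referrer = request.headers.get("Referer", "unknown")
    visits.setdefault(uid, []).append(referrer)
    resp = make_response("<!-- ad banner -->")
    # Re-set the cookie so the same ID follows the visitor on later requests.
    resp.set_cookie("uid", uid)
    return resp
```

Because every site that embeds content from this server triggers the same request, the single uid cookie is enough to link visits across otherwise unrelated sites.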
Malicious software In addition to monitoring information sent over a computer network, there are also ways to examine data stored on a computer's hard drive and to monitor the activities of a person using the computer. A surveillance program installed on a computer can search the contents of the hard drive for suspicious data, monitor computer use, collect passwords, and/or report activities back in real-time to its operator through the Internet connection. A keylogger is an example of this type of program. Normal keylogging programs store their data on the local hard drive, but some are programmed to automatically transmit data over the network to a remote computer or web server. There are multiple ways of installing such software. The most common is remote installation, using a backdoor created by a computer virus or trojan. This tactic has the advantage of potentially subjecting multiple computers to surveillance. Viruses often spread to thousands or millions of computers and leave "backdoors" which are accessible over a network connection and enable an intruder to remotely install software and execute commands. These viruses and trojans are sometimes developed by government agencies; examples include CIPAV and Magic Lantern. More often, however, viruses created by other people, or spyware installed by marketing agencies, can be used to gain access through the security breaches that they create. Another method is "cracking" into the computer to gain access over a network. An attacker can then install surveillance software remotely. Servers and computers with permanent broadband connections are most vulnerable to this type of attack. Another avenue of attack is employees giving out information, or attackers using brute-force tactics to guess a user's password. One can also physically place surveillance software on a computer by gaining entry to the place where the computer is stored and installing it from a compact disc, floppy disk, or thumbdrive. This method shares a disadvantage with hardware devices in that it requires physical access to the computer. One well-known worm that uses this method of spreading itself is Stuxnet. Social network analysis One common form of surveillance is to create maps of social networks based on data from social networking sites, as well as from traffic analysis information from phone call records such as those in the NSA call database, and Internet traffic data gathered under CALEA. These social network "maps" are then data mined to extract useful information such as personal interests, friendships and affiliations, wants, beliefs, thoughts, and activities. Many U.S. government agencies, such as the Defense Advanced Research Projects Agency (DARPA), the National Security Agency (NSA), and the Department of Homeland Security (DHS), are currently investing heavily in research involving social network analysis. The intelligence community believes that the biggest threat to the U.S. comes from decentralized, leaderless, geographically dispersed groups. These types of threats are most easily countered by finding important nodes in the network and removing them, which requires a detailed map of the network. 
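As a hedged illustration of the "important nodes" idea just described, the sketch below applies betweenness centrality, one standard graph measure, to a toy communication graph using the networkx library. The edge list is invented for illustration; real analyses would ingest call records or traffic data.

```python
# A minimal sketch of social network analysis: find the node that most often
# bridges communication paths between others. Requires networkx.
import networkx as nx

# Each edge represents an observed communication between two people.
G = nx.Graph([("a", "b"), ("b", "c"), ("b", "d"), ("d", "e"), ("e", "f")])

# Betweenness centrality scores nodes by how often they lie on shortest paths
# between other nodes -- a common proxy for "important" network members.
centrality = nx.betweenness_centrality(G)
print(max(centrality, key=centrality.get))  # "b" bridges most of this graph
```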
In his study of modern social network analysis, Jason Ethier of Northeastern University discussed the Scalable Social Network Analysis Program developed by the Information Awareness Office. Monitoring from a distance Using only commercially available equipment, it has been shown that it is possible to monitor computers from a distance by detecting the radiation emitted by the CRT monitor. This form of computer surveillance, known as TEMPEST, involves reading electromagnetic emanations from computing devices in order to extract data from them at distances of hundreds of meters. IBM researchers have also found that, for most computer keyboards, each key emits a slightly different noise when pressed. The differences are individually identifiable under some conditions, so it is possible to log keystrokes without actually requiring logging software to run on the associated computer. In 2015, lawmakers in California passed a law prohibiting any investigative personnel in the state from forcing businesses to hand over digital communications without a warrant, calling this the Electronic Communications Privacy Act. At the same time in California, state senator Jerry Hill introduced a bill requiring law enforcement agencies to disclose more information on their usage of, and the information obtained from, the Stingray phone tracker device. Since the law took effect in January 2016, cities have been required to operate under new guidelines governing how and when law enforcement use this device. Some legislators and holders of public office have disagreed with this technology because of the warrantless tracking, but now, if a city wants to use this device, the matter must be put to a public hearing. Some jurisdictions, such as Santa Clara County, have pulled out of using the StingRay. It has also been shown, by Adi Shamir et al., that even the high-frequency noise emitted by a CPU includes information about the instructions being executed. Policeware and govware In German-speaking countries, spyware used or made by the government is sometimes called govware. Some countries, like Switzerland and Germany, have a legal framework governing the use of such software. Known examples include the Swiss MiniPanzer and MegaPanzer and the German R2D2 (trojan). Policeware is software designed to police citizens by monitoring their discussions and interactions. Within the U.S., Carnivore was the first incarnation of secretly installed e-mail monitoring software, installed in Internet service providers' networks to log computer communication, including transmitted e-mails. Magic Lantern is another such application, this time running on a targeted computer in a trojan style and performing keystroke logging. CIPAV, deployed by the FBI, is a multi-purpose spyware/trojan. The Clipper Chip, formerly known as MYK-78, is a small hardware chip, designed in the 1990s, that the government can install into phones. It was intended to secure private communication and data by encoding voice messages so that they could later be decoded. The Clipper Chip was designed during the Clinton administration to "…protect personal safety and national security against a developing information anarchy that fosters criminals, terrorists and foreign foes." The government portrayed it as the solution to the secret codes or cryptographic keys that the age of technology created. This raised controversy among the public, because the Clipper Chip was thought to be the next "Big Brother" tool. 
This led to the failure of the Clipper proposal, even though there were many attempts to push the agenda. The "Consumer Broadband and Digital Television Promotion Act" (CBDTPA) was a bill proposed in the United States Congress. CBDTPA was known as the "Security Systems and Standards Certification Act" (SSSCA) while in draft form and was killed in committee in 2002. Had CBDTPA become law, it would have prohibited technology that could be used to read digital content under copyright (such as music, video, and e-books) without Digital Rights Management (DRM) that prevented access to this material without the permission of the copyright holder. Surveillance as an aid to censorship Surveillance and censorship are different. Surveillance can be performed without censorship, but it is harder to engage in censorship without some form of surveillance. Even when surveillance does not lead directly to censorship, the widespread knowledge or belief that a person, their computer, or their use of the Internet is under surveillance can lead to self-censorship. In March 2013 Reporters Without Borders issued a Special report on Internet surveillance that examines the use of technology that monitors online activity and intercepts electronic communication in order to arrest journalists, citizen-journalists, and dissidents. The report includes a list of "State Enemies of the Internet" – Bahrain, China, Iran, Syria, and Vietnam – countries whose governments are involved in active, intrusive surveillance of news providers, resulting in grave violations of freedom of information and human rights. Computer and network surveillance is on the increase in these countries. The report also includes a second list, of "Corporate Enemies of the Internet" – Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), Hacking Team (Italy), and Trovicor (Germany) – companies that sell products liable to be used by governments to violate human rights and freedom of information. Neither list is exhaustive, and both are likely to be expanded in the future. Protection of sources is no longer just a matter of journalistic ethics; journalists should equip themselves with a "digital survival kit" if they are exchanging sensitive information online, or storing it on a computer hard drive or mobile phone. Individuals associated with high-profile rights organizations, dissident groups, protest groups, or reform groups are urged to take extra precautions to protect their online identities. See also
Anonymizer, a software system that attempts to make network activity untraceable
Computer surveillance in the workplace
Cyber spying
Datacasting, a means of broadcasting files and Web pages using radio waves, allowing receivers near-total immunity from traditional network surveillance techniques
Differential privacy, a method to maximize the accuracy of queries from statistical databases while minimizing the chances of violating the privacy of individuals
ECHELON, a signals intelligence (SIGINT) collection and analysis network operated on behalf of Australia, Canada, New Zealand, the United Kingdom, and the United States, also known as AUSCANNZUKUS and Five Eyes
GhostNet, a large-scale cyber spying operation discovered in March 2009
List of government surveillance projects
Mass surveillance
China's Golden Shield Project
Mass surveillance in Australia
Mass surveillance in China
Mass surveillance in East Germany
Mass surveillance in India
Mass surveillance in North Korea
Mass surveillance in the United Kingdom
Mass surveillance in the United States
Surveillance
Surveillance by the United States government:
2013 mass surveillance disclosures, reports about NSA and its international partners' mass surveillance of foreign nationals and U.S. citizens
Bullrun (code name), a highly classified NSA program to preserve its ability to eavesdrop on encrypted communications by influencing and weakening encryption standards, by obtaining master encryption keys, and by gaining access to data before or after it is encrypted either by agreement, by force of law, or by computer network exploitation (hacking)
Carnivore, a U.S. Federal Bureau of Investigation system to monitor email and electronic communications
COINTELPRO, a series of covert, and at times illegal, projects conducted by the FBI aimed at U.S. domestic political organizations
Communications Assistance For Law Enforcement Act
Computer and Internet Protocol Address Verifier (CIPAV), a data gathering tool used by the U.S. Federal Bureau of Investigation (FBI)
Dropmire, a secret surveillance program by the NSA aimed at surveillance of foreign embassies and diplomatic staff, including those of NATO allies
Magic Lantern, keystroke logging software developed by the U.S. Federal Bureau of Investigation
Mass surveillance in the United States
NSA call database, a database containing metadata for hundreds of billions of telephone calls made in the U.S.
NSA warrantless surveillance (2001–07)
NSA whistleblowers: William Binney, Thomas Andrews Drake, Mark Klein, Edward Snowden, Thomas Tamm, Russ Tice
Spying on United Nations leaders by United States diplomats
Stellar Wind (code name), code name for information collected under the President's Surveillance Program
Tailored Access Operations, NSA's hacking program
Terrorist Surveillance Program, an NSA electronic surveillance program
Total Information Awareness, a project of the Defense Advanced Research Projects Agency (DARPA)
TEMPEST, codename for studies of unintentional intelligence-bearing signals which, if intercepted and analyzed, may disclose the information transmitted, received, handled, or otherwise processed by any information-processing equipment
References
External links
"Selected Papers in Anonymity", Free Haven Project, accessed 16 September 2011.
Computer forensics
Surveillance
Espionage techniques
9189771
https://en.wikipedia.org/wiki/Inter-server
Inter-server
In computer network protocol design, inter-server communication is an extension of the client–server model in which data are exchanged directly between servers. In some fields server-to-server (S2S) is used as an alternative term, and the term inter-domain can in some cases be used interchangeably. Protocols Protocols that have inter-server functions as well as the regular client–server communications include the following (a minimal sketch of one such exchange, a DNS zone transfer, appears at the end of this article):
IPsec, a secure network protocol that can be used to secure a host-to-host connection;
The Domain Name System (DNS), which uses an inter-server protocol for zone transfers;
The Dynamic Host Configuration Protocol (DHCP);
FXP, allowing file transfer directly between FTP servers;
The Inter-Asterisk eXchange (IAX);
InterMUD;
IRC, an Internet chat system with an inter-server protocol allowing clients to be distributed across many servers;
The Network News Transfer Protocol (NNTP);
The Protocol for SYnchronous Conferencing (PSYC);
SIP, a signaling protocol commonly used for Voice over IP;
SILC, a secure Internet conferencing protocol;
The Extensible Messaging and Presence Protocol (XMPP, formerly named Jabber);
ActivityPub, a client/server API for creating, updating and deleting content, combined with a federated server-to-server API for delivering notifications and content;
SMTP, which accepts both MUA-to-MTA and MTA-to-MTA traffic, although it is usually recommended that different ports be used for these two roles.
Some of these protocols employ multicast strategies to efficiently deliver information to multiple servers at once. See also Overlay network Internet Relay Chat Network protocols
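Below is the zone-transfer sketch referenced in the protocol list above: a DNS AXFR, in which one name server requests an entire zone from another, written with the third-party dnspython library. The server and zone names are placeholders, and most name servers refuse transfers to unauthorized hosts.

```python
# A minimal sketch of an inter-server DNS zone transfer (AXFR), in which a
# secondary name server pulls a whole zone from a primary. Requires dnspython.
import dns.query
import dns.zone

# Ask the primary server for every record in the zone, as a secondary would.
zone = dns.zone.from_xfr(dns.query.xfr("ns1.example.com", "example.com"))
for name, node in zone.nodes.items():
    print(node.to_text(name))
```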
31676840
https://en.wikipedia.org/wiki/User%20virtualization
User virtualization
User virtualization refers to the independent management of all aspects of the user on the desktop environment. User virtualization decouples a user's profile, settings and data from the operating system and stores this information in a centralized data share, either in the data center or in the cloud (a minimal sketch of this round-tripping appears at the end of this article). User virtualization solutions provide consistent and seamless working environments across a range of application delivery mechanisms. Although user virtualization is most closely associated with desktop virtualization, the technology can be used to manage user profiles on physical desktops as well. As the range of currently used operating systems expands, and the use of multiple devices by workers to perform their jobs escalates, user virtualization can support the creation of a "follow-me" identity that allows access to a workspace without being tied to a single device or a single location. User virtualization for virtual desktops For virtualized desktop environments, user virtualization represents a fundamental change in the way the corporate desktop is constructed, delivered and managed. The user's personality is decoupled from the operating system and applications, managed independently and applied to a desktop as needed, without scripting, group policies or the use of user profiles – regardless of how the desktop is being delivered (physical, virtual, cloud, etc.). User virtualization for terminal servers For server-based computing environments, user virtualization enables IT to have more control over the shared environment, optimize infrastructure needs and ensure an optimal experience for users. With application entitlement, unauthorized applications are blocked without the need for complex scripts or high-maintenance lists, providing protection from unknown executables and ensuring compliance with Microsoft licensing. User personality User personality is a combination of corporate policy and user personalization. Policy is used to set up and maintain a user desktop session. Policy also ensures a user session remains compliant by controlling application access, locking down or removing operating system and application functions, and self-healing essential files, folders, processes, services and registry settings. Personalization constitutes any change a user makes to his or her desktop. User persona User persona is another term used interchangeably with user personality or user personalization. References
Coming Together On Virtualization
Above the Cloud – User Virtualization
User virtualization; the key to successful desktop virtualization
User Virtualization, consider the user first
Human–computer interaction
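As referenced in the opening paragraph, the following is a minimal sketch of how a user's settings might be captured to a central share at logoff and reapplied at logon. All paths and hook names are hypothetical; commercial user virtualization products manage settings at a much finer granularity than whole-directory copies.

```python
# A minimal sketch of profile round-tripping: settings are copied to a central
# share at logoff and restored at logon, decoupling them from any one machine.
# Requires Python 3.8+ for shutil.copytree(dirs_exist_ok=True).
import shutil
from pathlib import Path

SHARE = Path("/mnt/profile-share")   # hypothetical centralized data share
LOCAL = Path.home() / ".config"      # per-user settings on this machine

def on_logoff(user: str) -> None:
    # Capture the user's settings into the central store.
    shutil.copytree(LOCAL, SHARE / user, dirs_exist_ok=True)

def on_logon(user: str) -> None:
    # Apply the stored settings into whatever desktop the user logs on to.
    stored = SHARE / user
    if stored.exists():
        shutil.copytree(stored, LOCAL, dirs_exist_ok=True)
```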
35534004
https://en.wikipedia.org/wiki/Signavio
Signavio
Signavio is a vendor of Business Process Management (BPM) software based in Berlin and Silicon Valley. Its main product is Signavio Process Manager, a web-based business process modeling tool. History The company was founded by a team of alumni of the Hasso Plattner Institute (HPI) in Potsdam, Germany. Prior to Signavio, the founders were involved in the development of the world's first web modeler for BPMN at HPI. This technology, known as the "Oryx project", was published under an open source license and served as the blueprint for the Signavio Process Manager. Signavio is headquartered in Berlin, Germany. In 2012 the company was incorporated in the United States as Signavio Inc., with an office in Burlington, Massachusetts. The company is fully owned by its founders and staff. On January 21, 2021, SAP announced that it would acquire Signavio. Awards In 2011 the German Federal Ministry of Economy and Technology named Signavio "ICT Startup of the Year" and selected the firm for its "German Silicon Valley Accelerator" program. References External links Software companies of Germany Companies based in Berlin Diagramming software Technical communication tools Windows graphics-related software Unix software Vector graphics editors Graphics software Workflow applications Announced mergers and acquisitions
52737878
https://en.wikipedia.org/wiki/Cura%20%28software%29
Cura (software)
Cura is an open source slicing application for 3D printers. It was created by David Braam who was later employed by Ultimaker, a 3D printer manufacturing company, to maintain the software. Cura is available under LGPLv3 license. Cura was initially released under the open source Affero General Public License version 3, but on 28 September 2017 the license was changed to LGPLv3. This change allowed for more integration with third-party CAD applications. Development is hosted on GitHub. Ultimaker Cura is used by over one million users worldwide and handles 1.4 million print jobs per week. It is the preferred 3D printing software for Ultimaker 3D printers, but it can be used with other printers as well. Technical specifications Ultimaker Cura works by slicing the user’s model file into layers and generating a printer-specific g-code. Once finished, the g-code can be sent to the printer for the manufacture of the physical object. The open source software, compatible with most desktop 3D printers, can work with files in the most common 3D formats such as STL, OBJ, X3D, 3MF as well as image file formats such as BMP, GIF, JPG, and PNG. Major software versions 7 June 2016: Ultimaker announced the new Cura major release 2.1.2, superseding the previous 15.04.6 version (note the non-sequentiality in the major version numbers). September 2016: Version 2.3 was a major release. It includes new printing profiles, slicing features, as well as increasing speed. It also supported the dual extrusion possible with the Ultimaker 3 model 17 October 2017: Version 3.0 updated the user interface and allowed for CAD integration. This was the first version with plugin support. November 2017: Cura Connect was released to enable users to control, monitor, and configure a group of network-enabled 3D printers from a single interface. October 2018: Beginning with version 3.5, all files are saved in the 3MF format for improved compatibility with other 3D software. Hotkeys were introduced as well as a searchable profile guide. November 2018: Version 3.6 introduced material profile support for materials made by major manufacturers such as BASF, DuPont, Clariant, and other members of the Materials Alliance Program consortium. March 2019: Version 4.0 made significant changes to the user interface. In support of plugin capabilities, a star-based rating system was incorporated to allow users to rate plugins. Cloud-backup functionality was added as well as support for more third-party printers. Plugins Release 3.0 introduced plugin capability. Users can develop their own plugins or use plugins commercially available. Plugins simplify workflow for users by allowing them to quickly perform tasks like opening a file from a menu or exporting a file from an application. Starting with Release 4.0, users can rate plugins using a star system. Current plugins include: SolidWorks, Siemens NX, HP 3D Scanning, MakePrintable, AutoDesk Inventor. Media coverage On August 31, 2014 Cura was included in a review of 3D slicing software by Think3DPrint3D In the summer of 2015, Ultimaker released Cura 2.0. On January 1, 2018, All3DP named Cura one of the best 3D slicer software tools. In 2019, Cura was named one of the top free 3D printing tools by the industry blog, G2 Cura was named Software Tool of the Year at the international 2019 3D Printing Industry Awards ceremony in London. References External links 3D printing Free computer-aided manufacturing software Free software Linux software MacOS software Windows software
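The following is the layer-slicing sketch referenced in the Technical specifications section: it cuts a mesh into horizontal cross-sections, the geometric step that precedes G-code generation. It uses the third-party trimesh library; the file name and layer height are placeholders, and a real slicer adds infill, supports, and motion planning on top of this.

```python
# A minimal sketch of the slicing step: intersect a mesh with horizontal
# planes to recover each layer's outline. Requires trimesh.
import trimesh

mesh = trimesh.load("model.stl")    # placeholder model file
layer_height = 0.2                  # millimetres, a typical default

z_min, z_max = mesh.bounds[:, 2]
z = z_min + layer_height
while z < z_max:
    # Cut the mesh with a horizontal plane to get this layer's cross-section.
    section = mesh.section(plane_origin=[0, 0, z], plane_normal=[0, 0, 1])
    if section is not None:
        # A real slicer would turn these paths into extrusion moves
        # (G1 commands); here we only report the outline length.
        print(f"layer at z={z:.2f}: path length {section.length:.1f}")
    z += layer_height
```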
2259785
https://en.wikipedia.org/wiki/GForge
GForge
GForge is a commercial service originally based on the Alexandria software behind SourceForge, a web-based project management and collaboration system which was licensed under the GPL. Open source versions of the GForge code were released from 2002 to 2009, at which point the company behind GForge focused on its proprietary service offering, which provides project hosting, version control (CVS, Subversion, Git), code reviews, ticketing (issues, support), release management, continuous integration and messaging. The FusionForge project emerged in 2009 to pull together open-source development efforts from the variety of software forks which had sprung up. History In 1999, VA Linux hired four developers, including Tim Perdue, to develop the SourceForge.net service to encourage open-source development and support the open source developer community. SourceForge.net services were offered free of charge to any open source project team. Following the SourceForge launch on November 17, 1999, the free software community rapidly took advantage of SourceForge.net, and traffic and users grew very quickly. As another competitive web service, "Server 51", was being readied for launch, VA Linux released the source code for the SourceForge.net web site on January 14, 2000, as a marketing ploy to show that SourceForge was 'more open source'. Many companies began installing and using it themselves and contacting VA Linux for professional services to set up and use the software. However, the pricing was so unrealistic that there were few customers. By 2001, the company's Linux hardware business had collapsed in the dotcom bust. The company was renamed VA Software and called the closed codebase SourceForge Enterprise Edition, to try to force some of the large companies to purchase licenses. This prompted objections from open source community members. VA Software continued to say that a new source code release would be made at some point, but it never was. Some time later, in 2002, Tim Perdue left VA and started GForge LLC, which released both an open source and a commercial version of GForge. Both codebases were forked from the last publicly released version, 2.6, and merged the debian-sf fork, previously maintained by Roland Mas and Christian Bayle, into the project. In February 2009 there was a break-up of the original open source (GPL) version of GForge, with some of the developers of GForge continuing development of the old open source code under the new name of FusionForge, while Perdue and his new company focused on a commercial offering (GForge Advanced Server and later GForgeNext). GForge and GForge Advanced Server Tim Perdue and his company began focusing on a commercial version of GForge, originally called GForge Advanced Server (also called GForge AS). It saw its first public release on June 21, 2006. While it was offered commercially, it could be used freely (with some restrictions on project limits and the number of users). GForge AS was written in PHP and continued to use PostgreSQL. Plug-ins for the Eclipse IDE as well as Microsoft Visual Studio (only for customers, with no trial available) and other related tools were added to increase developer functionality. Workflow process management was added to support the full software life cycle, from inception and bug tracking to new-release enhancement. In 2011 GForge came under the new ownership of GForge Group, Inc., and while work on the GForge AS 6.x series continued, the company began working on a partial rewrite dubbed GForgeNext.
GForgeNext, later rebranded back to GForge, was released on October 1, 2018; it included a revamped user interface, a REST API and support for Agile/Scrum disciplines, and GForge Group, Inc. expanded to offer SaaS hosting. While not open source, the source code is available (although the portion that performs license enforcement is encrypted), and the downloadable version can be used for free for up to five users. FusionForge In 2007, Bull announced the first public release of Novaforge, which is based on the GForge open source branch. In February 2009 some of the developers of GForge continued development of the old open source code under the new name of FusionForge, after GForge Group focused on GForge Advanced Server. One objective was to merge the GForge forks into a single project, hence the prefix Fusion. In 2011, FusionForge was selected as part of the Coclico project, which aimed to merge three existing trees of forked forges: FusionForge, Codendi and Novaforge. At the end of 2013, the main Savane maintainer, Sylvain Beucler, joined FusionForge as an INRIA contractor for two years. Main contributors to FusionForge include individual contributors such as Roland Mas and small companies such as TrivialDev. In 2017, FusionForge became the first forge software to contribute to the Software Heritage initiative, providing a connector to retrieve information from old FusionForge installations. See also Alioth Computer-aided software engineering (CASE) Computer-supported collaboration GitLab GitHub GNU Savannah Toolkits for User Innovation References External links Free groupware Free project management software Software forks
18532395
https://en.wikipedia.org/wiki/Project%20KickStart
Project KickStart
Project KickStart is desktop project management software by Experience in Software, Inc. in Berkeley, California. The program uses a wizard-like interface for project planning. History The original Project KickStart for DOS was released in 1992. The product far outsold Experience in Software's other titles, and in 1995 Project KickStart for Windows was released. Versions 2, 3, 4 and 5 (all for Windows) followed. Since 2008, the company has sold Project KickStart Standard 5 and Project KickStart Pro 5. Software Project KickStart's wizard prompts users to identify phases, goals, obstacles and personnel assignments for projects, and uses a calendar to produce a Gantt chart that features the project's phases and the goals, tasks and assignments for each. KickStart's project files can be exported into Microsoft Project, Outlook, Word, Excel or PowerPoint, as well as ACT!, Milestones Professional, MindManager and WBS Chart. References External links Official Website Project management software 1992 software Projects established in 1992
293991
https://en.wikipedia.org/wiki/Computer-supported%20cooperative%20work
Computer-supported cooperative work
Computer-supported cooperative work (CSCW) is the study of how people utilize technology collaboratively, often towards a shared goal. CSCW addresses how computer systems can support collaborative activity and coordination. More specifically, the field of CSCW seeks to analyze and draw connections between currently understood human psychological and social behaviors and available collaborative tools, or groupware. Often the goal of CSCW is to help promote and utilize technology in a collaborative way, and to help create new tools that achieve that goal. These parallels allow CSCW research to inform future design patterns and assist in the development of entirely new tools. History The origins of CSCW as a field are intertwined with the rise, and subsequent fall, of office automation; the field arose in response to some of the criticisms of office automation, particularly its failure to address the impact that human psychological and social behaviors can have. Greif and Cashman created the term CSCW to describe how technology could support employees seeking to further their work. A few years later, in 1987, Dr. Charles Findley presented the concept of Collaborative Learning-Work. Computer-supported cooperative work is an interdisciplinary research area of growing interest which relates workstations to digitally advanced networking systems. The first technologies were economically feasible, but their interoperability was lacking, which made it difficult to build well-tailored supporting systems. Due to global markets, more organizations are being pushed to decentralize their corporate systems. When faced with the complexities of today's business issues, a significant effort must be made to improve manufacturing systems' efficiency, improve product quality, and reduce time to market. The idea of CSCW has proven useful over the years since its inception, and most especially during the ongoing crisis of the COVID-19 pandemic. The measures to mitigate the virus's spread have led to firm closures and increased rates of remote working and learning. People now share a common virtual workspace with a group-centered design, hold virtual meetings, and see and hear each other's movements and voices. Only when advanced and generic methods are combined does a CSCW framework seem complete to the consumer. For decades, CSCW studies have proposed a variety of technologies to promote collaborative work, ranging from shared data services to video-mediated networks for synchronous operations. Among the various domains of CSCW, the Audio/Video Conference Module (AVM) has become useful in enabling audiovisual communication via online applications, such as Zoom and ezTalks, that are used to discuss and undertake work. Central Concerns and Concepts CSCW is a design-oriented academic field that is interdisciplinary in nature and brings together librarians, economists, organizational theorists, educators, social psychologists, sociologists, anthropologists and computer scientists, among others. The expertise of researchers in these various and combined disciplines helps identify venues for possible development. Despite the variety of disciplines, CSCW is an identifiable research field focused on understanding the characteristics of interdependent group work, with the objective of designing adequate computer-based technology to support such cooperative work. 
Essentially, CSCW goes beyond building technology itself and looks at how people work within groups and organizations, as well as the impacts of technology on those processes. CSCW has ushered in a great deal of melding between social scientists and computer scientists, who work together to overcome both technical and non-technical problems within the same user spaces. For example, many R&D professionals working with CSCW are computer scientists who have realized that social factors play an important role in the development of collaborative systems. On the flip side, many social scientists who understand the increasing role of technology in our social world become "technologists" who work in R&D labs developing cooperative systems. Over the years, CSCW researchers have identified a number of core dimensions of cooperative work. A non-exhaustive list includes:
Awareness: individuals working together need to be able to gain some level of shared knowledge about each other's activities.
Articulation work: cooperating individuals must be able to partition work into units, divide it amongst themselves and, after the work is performed, reintegrate it.
Appropriation (or tailorability): how an individual or group adapts a technology to their own particular situation; the technology may be appropriated in a manner completely unintended by the designers.
These concepts have largely been derived through the analysis of systems designed by researchers in the CSCW community, or through studies of existing systems (for example, Wikipedia). CSCW researchers who design and build systems try to address the core concepts in novel ways. However, the complexity of a domain can make it difficult to produce conclusive results. Articulation Work Articulation work is essentially the work that makes other work possible: an effort made to make other work easier and more manageable, which may be either planned or unplanned. Articulation work is therefore an integral part of the software process, since software processes can sometimes fail or break down. It is also commonly known as "invisible work", since it is not always noticed. Articulation work was introduced by Anselm Strauss, who developed it as a way to observe the "nature of mutually dependent actors in their division of labour". The concept was then introduced into CSCW by Schmidt and Bannon in 1992, where it would be applied to more realistic work scenarios in society. Articulation work is inherent in collaboration. The idea of articulation work was initially used in relation to computer-supported cooperative work, but it has travelled into other domains of work, such as healthcare. Initially, articulation work was associated with the scheduling and allocation of resources, but it now extends beyond that. Articulation work can also be seen as the response developers make to adapt to changes arising from errors or misjudgments in the real world. There are various models of articulation work that help identify applicable solutions to recover or reorganize planned activities; which model applies can vary depending on the scenario. Often the need for articulation work increases as the situation becomes more complex. Because articulation work is so abstract, it can be split into two categories at the highest level: individual activity and collective activity. In individual activity, articulation work is almost always applicable: the subject is obviously required to articulate his or her own work. 
But when a subject is faced with a new task, there are many questions that must be answered in order to move forward and be successful. This questioning is the articulation work of the actual project: invisible, but necessary. There is also articulation of action within an activity. For example, creating to-do lists and blueprints may be imperative to progressing a project. There is also articulation of operation within an action. In terms of software, the user must have adequate knowledge and skill in using computer systems, and knowledge about software, in order to perform tasks.

In a teamwork setting, articulation is imperative for collective activity. To maximize the efficiency of all the people working, the articulation work must be very solid. Without a solid foundation, the team is unable to collaborate effectively. Furthermore, as the size of the team increases, the articulation work becomes more complex. What goes on between the user and the system is often overlooked, but software process modeling techniques, as well as the model of articulation work, are imperative in creating a solid foundation that allows for improvement and enhancement. In a way, all work needs to be articulated; there needs to be a who, what, where, when and how. With technology, there are many tools that utilize articulation work. Tasks such as planning and scheduling can be considered articulation work. There are also times when the articulation work is bridging the gap between the technology and the user. Ultimately, articulation work is the means that allows cooperative work to be cooperative, a main objective of CSCW.

Matrix
One of the most common ways of conceptualizing CSCW systems is to consider the context of a system's use. One such conceptualization is the CSCW Matrix, first introduced in 1988 by Johansen; it also appears in Baecker (1995). The matrix considers work contexts along two dimensions: whether collaboration is co-located or geographically distributed, and whether individuals collaborate synchronously (same time) or asynchronously (not depending on others to be around at the same time).

Same time/same place - Face-to-face interaction
Roomware
Shared tables, wall displays
Digital whiteboards
Electronic meeting systems
Single display groupware
Group Decision Support System

Same time/different place - Remote interaction
Electronic meeting systems
Videoconferencing
Real-time groupware
Messaging (instant messaging, email, chat)
Telephoning

Different time/same place - Continuous task (ongoing task)
Team rooms
Large displays
Post-it
War-rooms

Different time/different place - Communication & Coordination
Electronic meeting systems
Blogs
Workflow
Version control

This matrix is an outline of CSCW in different contexts, but it has its limitations, particularly for users who are new to CSCW. For example, there is a collaborative mode called multi-synchronous that cannot fit into the matrix. As the field evolves, whether through new social standards or technological development, the simple matrix cannot describe all of CSCW and the fields of research within it.

Model of Coordinated Action (MoCA)
The Model of Coordinated Action, as a framework for analyzing group collaboration, identifies several dimensions of common features of cooperative work that extend beyond the CSCW matrix and allow for more complexity in describing how teams work given certain conditions.
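As a rough illustration of how these dimensions can be thought of, the sketch below represents a coordinated action as a position along each continuum. The field names follow the subsections that define the dimensions next; the 0-to-1 numeric scales and the example values are illustrative assumptions, not part of the published model.

from dataclasses import dataclass

@dataclass
class CoordinatedAction:
    """A coordinated action located along MoCA's seven dimensions.
    Each field is a continuum, not a category; the scales are assumed."""
    synchronicity: float          # 0.0 = fully asynchronous, 1.0 = fully synchronous
    physical_distribution: float  # 0.0 = co-located, 1.0 = globally distributed
    scale: int                    # number of participants
    communities_of_practice: int  # number of distinct communities involved
    nascence: float               # 0.0 = well-established, 1.0 = brand new
    planned_permanence: float     # 0.0 = deliberately temporary, 1.0 = meant to last
    turnover: float               # 0.0 = stable membership, 1.0 = constant churn

# Wikipedia-style crowdsourced editing: asynchronous, globally distributed,
# huge scale, long-lived, and high-turnover (values are illustrative).
wikipedia_editing = CoordinatedAction(
    synchronicity=0.1, physical_distribution=1.0, scale=100_000,
    communities_of_practice=50, nascence=0.1,
    planned_permanence=0.9, turnover=0.9,
)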
The seven dimensions that constitute the model (MoCA) are used to describe essential "fields of action" seen in existing CSCW research. Rather than existing as a rigid matrix with distinct quadrants, this model is to be interpreted as multidimensional, with each dimension existing as its own continuum. The ends of these continua are defined in the following subsections.

Synchronicity
This dimension pertains to the time at which the collaborative work occurs. It can range from live meetings conducted at exact times to viewing recordings or responding to messages that do not require one or all participants to be active at the time the recording, message, or other deliverable was created.

Physical Distribution
This covers the distance across which team members can be geographically separated while still being able to collaborate. The least physically distributed cooperative work is a meeting in which all team members are physically present in the same space and communicating verbally, face-to-face. Conversely, technology now allows for more distanced communication, extending as far as meetings held across multiple countries.

Scale
The scale of a collaborative project refers to how many individuals comprise the project team. As the number of people involved increases, the division of tasks must become more intricate and complex to ensure that each participant is contributing in some way.

Number of Communities of Practice
A community of practice refers to a group of individuals with shared, common knowledge of a specific subject. This group may be composed of newcomers and experts alike. New members gain knowledge through exposure and immersion and become experts as newer members join, thus expanding the community of practice over time. These groups can be as specific or broad as their members feel is necessary, as no two people have the same set of knowledge and diversification of perspectives is common.

Nascence
Some collaborative projects are designed to be more long-lasting than others, often meaning that their standard practices and actions are more established than those of newer, less developed projects. Synonymous with "newness", nascence refers to how established a cooperative effort is at a given point in time. While most work is always developing in some way, newer projects have to spend more time establishing common ground among their team members and thus have a higher level of nascence.

Planned Permanence
This dimension encourages teams to establish common practices, terminology, etc. within the group to ensure cohesion and understanding of the work. It is difficult to gauge how long a project will last; therefore, establishing these foundations in the early stages helps prevent confusion between group members at later stages, when there may be higher stakes or deeper investigation. The notion of planned permanence is essential to the model, as it allows for productive communication between individuals who may have different expertise or are members of different communities of practice.

Turnover
This dimension describes the rate at which individuals leave a collaborative group. Such departures may occur at various rates depending on the impact they have on the individual and the group. In a well-established collaborative action or a group with a small scale, a team member leaving may have detrimental effects, whereas temporary projects with open membership may have high turnover rates offset by the project's large scale.
Crowdsourcing, such as the means by which Wikipedia creates its articles, is an example of an entity with high turnover rates (e.g. a Wikipedian may contribute to only one article at one time) that does not face impactful consequences, due to the large scale of the collaborative work.

Considerations for interaction design
Self-presentation
Self-presentation has been studied in traditional face-to-face environments, but as society has embraced content culture, social platforms have generated new affordances for presenting oneself online. Due to technological growth, social platforms, and their increased affordances, society has reconfigured the way users self-present online in response to audience input and context collapse.

In an online setting, audiences are physically invisible, which complicates the user's ability to distinguish their intended audience. Audience input on social platforms can range from commenting, sharing, liking, and tagging, among other actions. For example, LinkedIn is a platform that encourages commentary where positive feedback outweighs negative feedback on topics such as career announcements. Conversely, audience input can be unwarranted, which can lead to real-life implications, especially for marginalized groups who are prone to both warranted and unwarranted commentary on public posts.

Context collapse occurs when previously separate audiences merge, so that content curated for one intended audience becomes visible to unintended audiences. The likelihood of context collapse is especially challenging with the surge of proprietary software, which introduces a conflict of interest: users have an ideal audience in mind, but the platform's algorithm may have a differing one. Collapsed context influences self-presentation whenever previously separate audiences are merged into one.

Affordance
As media platforms proliferate, so do the affordances offered that directly influence how users manage their self-presentation. According to researchers, the three most influential affordances on how users present themselves in an online domain are anonymity, persistence, and visibility.

Anonymity in the context of social media refers to the separation of an individual's online and offline identity by making the origin of their messages unspecified. Platforms that support anonymity have users that are more likely to depict their offline self accurately online (e.g. Reddit). Comparatively, platforms with fewer constraints on anonymity are subject to users who portray their online and offline selves differently, thus creating a "persona". Facebook, for example, requires its users to abide by its "real-name" policy, further connecting their offline and online identities. Furthermore, being able to unequivocally associate an online persona with a real-life human contributes to users presenting themselves honestly online.

Platforms which have "content persistence" store content so it may be accessed at a later point in time. Platforms including Instagram and Facebook are highly persistent, with content remaining available until deleted, whereas Snapchat has lower persistence because content is ephemeral, which leads users to post content that represents their offline selves more accurately. This affordance strongly affects users' self-presentation management because they recognize that content on highly persistent platforms can be openly accessed later. On social platforms, visibility is created when information can be acquired by searching for a word, phrase, or topic name, an example being a hashtag.
When content is visible, users become aware of their self-presentation and adjust accordingly. However, some platforms give their users leverage in specifying how visible their content is, thus affording visibility control. For example, Snapchat and Instagram both allow users to build a "close friends list" and block specific people from viewing content. Nonetheless, intended audiences are never guaranteed. Facebook is an example of a platform that shares content with both primary (e.g. direct friends) and secondary viewers (e.g. friends of friends). The concern of visibility with Facebook's algorithm is notably challenging for marginalized groups because of such blurred visibility mechanisms. In addition, users face privacy concerns relative to visibility given the current era of screenshotting.

Boundary Object
A boundary object is an informational item which is used differently by various communities or fields of study and may be a concrete, physical item or an abstract concept. Examples of boundary objects include:

Most research libraries, as different research groups may use different resources from the same libraries.
An interdisciplinary research project, as different business sectors and research groups may have different goals for the project.
The outline of a U.S. state's boundaries, which may be drawn on a roadmap for travelers or on an ecological map for biologists.

In computer-supported cooperative work, boundary objects are typically used to study how information and tools are transmitted between different cultures or communities. Some examples of boundary objects in CSCW research are:

Electronic health records, which pass health information between groups with different priorities (such as doctors, nurses, and medical secretaries).
The concept of a "Digital Work Environment," as used in Swedish political debate.

Standardization vs. flexibility in CSCW
Standardization is defined as "agile processes that are enforced as a standard protocol across an organization to share knowledge and best practice." Flexibility, on the other hand, is the "ability to customize and evolve processes to suit the aims of an agile team". As CSCW tools, standardization and flexibility are almost mutually exclusive. In CSCW, flexibility comes in two forms: flexibility for future change and flexibility for interpretation. Everything that is done on the internet has a level of standardization due to internet standards. Email, for instance, has its own set of standards, the first draft of which was created in 1977. No CSCW tool is perfectly flexible, and all lose flexibility at the same three levels: when the programmer makes the toolkit, when the programmer makes the application, and when the user uses the application.

Standardization in information infrastructure
Information infrastructure requires extensive standardization to make collaboration work. Since data is transferred from company to company and occasionally nation to nation, international standards have been put in place to make the communication of data much simpler. Often one company's data will be included in a much larger system, which would be almost impossible without standardization. With information infrastructure, there is very little flexibility for potential future changes. Because the standards have been around for decades and there are hundreds of them, it is nearly impossible to change one standard without greatly affecting the others.
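Email is a concrete illustration of how long-lived such standardization is: because every conforming mail system shares the same header format (today RFC 5322, descended from the 1977 draft), a message composed by one implementation can be parsed by any other. The sketch below shows this round-trip using Python's standard library; the addresses are placeholders.

from email import message_from_bytes
from email.message import EmailMessage

# Compose a message in the standardized header format that all
# conforming mail software shares.
msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Standardization in practice"
msg.set_content("Because the format is standardized, any client can parse this.")

# Any other standards-conforming implementation can read the same bytes back.
parsed = message_from_bytes(msg.as_bytes())
print(parsed["Subject"])  # -> Standardization in practice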
Flexibility in toolkits
Creating CSCW toolkits requires flexibility of interpretation; it is important that these tools are generic and can be used in many different ways. Another important part of a toolkit's flexibility is its extensibility, the extent to which new components or tools can be created using the tools provided. Oval is an example of a toolkit that is flexible through the genericity of its tools. Oval consists of four components: objects, views, agents, and links. The toolkit was used to recreate four previously existing communication systems: The Coordinator, gIBIS, Lotus Notes, and Information Lens. This demonstrated that, owing to its flexibility, Oval could create many forms of peer-to-peer communication applications.

Applications
Applications in Education
Distance education has gone through three main generations: the first through the postal service, the second through mass media such as radio, television, and film, and the third being the current state of e-learning. Technology-enhanced learning, or "e-learning", has been an increasingly relevant topic in education, especially since the COVID-19 pandemic caused many schools to switch to remote learning. E-learning is defined as "the use of technology to support and enhance learning practice". It includes the utilization of many different types of information and communication technologies (ICTs) and the use of the intranet and internet in the teaching and learning process. The development of content mainly relies on using learning objectives to create activities through Virtual Learning Environments, Content Management Systems, and Learning Management Systems. These technologies have created massive change in their use as CSCW tools, allowing students and teachers to work on the same platforms and have a shared online space in which to communicate. The delivery of content can be either asynchronous, such as via email and discussion forums, or synchronous, such as via chat or video conferencing. Synchronous education allows for much more equal interaction between students and instructors and better communication between students for the facilitation of group projects and assignments.

Community of Inquiry Framework
E-learning has been explained by the community of inquiry (COI) framework introduced by Garrison et al. In this framework, there are three major elements: cognitive presence, social presence, and teaching presence.

Cognitive presence in this framework is the measure of how well meaning can be constructed from the content being taught. It assumes that students have access to a large network from which to gain information, including peers, instructors, alumni, and practicing professionals. E-learning has made this network easily accessible through the internet, and these connections can be made synchronously through video, audio, or text.

Social presence in the community of inquiry framework is how well participants can connect with one another on a social level and present themselves as "real people". Video conferencing has been shown to increase social presence among students. One study found that "social presence in VC [Virtual Conferencing] can have a positive effect on group efficacy and performance by amplifying group cohesion". This finding is highly useful in designing future systems because it explains the importance of technology like video conferencing in synchronous e-learning.
Groups that are able to see each other face to face have a stronger bond and are able to complete tasks faster than those without such contact. Increasing social presence in online education environments helps facilitate understanding of the content and the group's ability to solve problems.

Teaching presence in the COI framework contains two main functions: the creation of content and the facilitation of that content. The creation of content is usually done by the instructor, but students and instructors can share the role of facilitator, especially in higher education settings. The goal of teaching presence is "to support and enhance social and cognitive presence for the purpose of realizing educational outcome".

Virtual educational software and tools are becoming more widely used globally. Remote educational platforms and tools must be accessible to various generations, including children as well as guardians and teachers, yet these frameworks are often not adapted to be child-friendly. The lack of interface and design consideration for younger users causes difficulty in potential communication between children and the older generations utilizing the software. This in turn leads to a decrease in virtual learning participation as well as potentially diminished collaboration with peers. In addition, it may be difficult for older teachers to utilize such technology and communicate with their students. As with orienting older workers to CSCW tools, it is difficult to train younger students or older teachers in utilizing virtual technology, and doing so may not be feasible across widely distributed virtual classrooms and learning environments.

Applications in Gaming
Collaborative mixed reality games modify the shared social experience, during which players can interact in real time with physical and virtual gaming environments and with other multiplayer video gamers. This may be done through any means of communication, self-representation, and collaboration.

Communication Systems
Group members experience effective communication practices when a common platform is available for expressing opinions and coordinating tasks. The technology is applicable not only in professional contexts but also in the gaming world. CSCW usually offers synchronous and asynchronous games to allow multiple individuals to compete in a certain activity across social networks. Thus, such tools have made gaming more interesting by facilitating group activities in real time and widespread social interactions beyond geographical boundaries.

Self-presentation
HCI, CSCW, and game studies of MMORPGs highlight the importance of avatar-mediated self-presentation in player experience. These studies have identified two known components of self-presentation in games. First, through the personal choice and personalization of avatars, various social values (such as gender roles and social norms) are integrated and reflected in the player's self-image. Second, self-presentation in games also affords experimentation with entirely new identities or the reaffirmation of existing identities, including cross-gender play and queer gameplay. Computer-mediated communication in gaming settings takes place across different channels, which can consist of structured message systems, bulletin boards, meeting rooms, and shared diaries. As such, players can hold conversations while proceeding with the game to create a lively experience. Thus, the features of video games offer a platform for users to openly express themselves.
Collaborations and game design in multi-user video games
The most collaborative and socially interactive aspect of a video game is its online communities. Popular video games often have various social groups for their diverse community of players. For example, in the quest-based multiplayer game World of Warcraft, the most collaborative and socially interactive aspect of the game is the "Guilds," alliances of individuals with whom players must join forces. By incorporating Guilds, World of Warcraft creates opportunities for players to work together with team members who can be from anywhere in the world. WoW players who are associated with a Guild are more likely to play and do quests with the same Guild mates each time, which develops a strong bond between players and a sense of community. The bonds and friendships formed from playing with Guild mates not only improve collaboration within the game, but also create a sense of belonging and community, one of the most important attributes of online gaming communities.

Designing a multi-user collaborative game involves three guidelines: Positive Interdependence, Personal Accountability, and Social Skills. Positive Interdependence is the reliance on collaboration among members of a group in order to accomplish a task. In video games, this is the idea that players on a team or in a group understand that working together is beneficial, and that the success and failure of the group are shared equally if all members participate. One way of including a positive interdependence aspect in a video game is creating a common shared goal for the team to increase collaboration. The next guideline is Personal Accountability, the idea that each individual in a group must put forth their best effort for the team's overall success. Personal Accountability might be incorporated into video games through an incentive system in which individual players are rewarded with additional points for completing an objective or an action that improves the team's chances of success. The final guideline, Social Skills, is the most important to consider when designing a collaborative game. One example of developing player social skills through a video game is creating in-game situations where players have to assign roles, plan, and execute a solution to a problem. By following these guidelines, game makers can create gaming environments that encourage collaboration and social interaction between players.

Applications of Mobile Devices
Mobile devices are generally more accessible than their non-mobile counterparts, with about 41% of the world's population owning a mobile device according to a 2019 survey. This, coupled with their relative ease of transport, makes mobile devices usable in a large variety of settings in which other computing devices would not function as well. Because of this, mobile devices make videoconferencing and texting possible in a variety of settings that would not be accessible without such a device. The Chinese social media platform WeChat, for example, is utilized to facilitate communication between patients and doctors; it is able to enhance healthcare interactions by allowing direct communication of the patient's symptoms.

Applications in Social Media
Social media tools and platforms have expanded virtual communication amongst various generations.
However, with older individuals being less comfortable with CSCW tools, it is difficult to design social platforms that account for the social needs of both older and younger generations. Often, these social systems focus key functionality and feature development on younger demographics, causing issues of adaptability for older generations. In addition, because these features lack scalability, the tools are not able to adapt to fit the evolving needs of generations as they age. With the difficulty older demographics face in adopting these intergenerational virtual platforms, their risk of social isolation increases. While systems have been created specifically for older generations to communicate amongst one another, system design frameworks are not yet complex enough to support intergenerational communication.

Applications to Ubiquitous Computing
Along the lines of a more collaborative modality is something called ubiquitous computing. The term ubiquitous computing was coined by Mark Weiser of Xerox PARC to describe the phenomenon of computing technologies becoming prevalent everywhere. A new language was created to observe both the dynamics of computers becoming available at mass scale and their effects on users in collaborative systems. Between the use of social commerce apps, the rise of social media, and the widespread availability of smart devices and the Internet, there is a growing area of research within CSCW that has come out of these three trends. These topics include ethnomethodology and conversation analysis (EMCA) within social media, ubiquitous computing, and instant-messaging-based social commerce.

Ethnomethodology and synchronicity
In You Recommend I Buy: How and Why People Engage in Instant Messaging Based Social Commerce, researchers analyzed twelve users of Chinese Instant Messaging (IM) social commerce platforms to study how social recommendation engines on IM commerce platforms result in a different user experience. The study was conducted entirely on Chinese platforms, mainly WeChat, by a team composed of members from Stanford, Beijing, Boston, and Kyoto. The interviewing process took place in the winter of 2020 and was an entirely qualitative analysis, using just interviews. The goal of the interviews was to probe how participants got involved in IM-based social commerce, their experience with it, the reasons for and against it, and the changes it introduced to their lives.

An IM-based service integrates directly with more intimate social experiences. Essentially, IM is real-time texting over a network, which can be either a synchronous or an asynchronous activity. IM-based social commerce makes the user shopping experience more accessible. In terms of CSCW, this is an example of ubiquitous computing. It creates a "jump out of the box" experience, as described in the research, because the IM-based platform facilitates a change in user behavior and the overall experience of social commerce. The benefit of this concept is that the app leverages personal relationships and real-life networks, which can lead to a more meaningful customer experience founded upon trust.

Embeddedness
A second CSCW paper, Embeddedness and Sequentiality in Social Media, explores a new methodology for analyzing social media—another expression of ubiquitous computing in CSCW.
This paper used ethnomethodology and conversation analysis (EMCA) as a framework to research Facebook users. In brief, ethnomethodology studies the everyday interactions of people and how those interactions form their outlook on the world. Conversation analysis delves into the structures of conversations so as to extract information about how people construct their experiences. The team behind this research, hailing from the University of Nottingham and Stockholm University, recognized that "moment-by-moment, unfolding, real-time human action" was somewhat missing from the CSCW literature on social media. They felt that by exploring EMCA, they could provide different insights into collaborative social network systems, as opposed to relying solely on recall. Here is a formal definition of EMCA:

For EMCA, the activities of everyday life are structured in time—some things routinely happen before others. Fundamentally there is a 'sequentiality' to activity, something that has been vital for developing understanding of the orderly nature of talk [45] and bodily interaction [16].

In other words, EMCA pays attention to the sequence of events, so as to reveal some underlying order in our behavior in day-to-day interactions. In the bigger picture, this work reveals that time, as one of the dimensions to consider within collaborative systems design, matters. Another major factor is distance. Does Distance Still Matter? Revisiting the CSCW Fundamentals on Distributed Collaboration is another research article that, as the title suggests, explores under what circumstances distance matters. Most notably, it discusses the "mutual knowledge problem." This problem arises when a group in a distributed collaborative system experiences a breakdown in communication because its members lack a shared understanding of the given context they are working in. According to the article, it matters that everyone is in alignment over the nature of what they are doing.

Co-located, parallel and sequential activities
Unresolved issues in ubiquitous computing systems can now be explored because observations of user experiences in social media no longer need to be based on recollection alone. Some of the unresolved questions include: "How does social media start being used, stop being used? When is it being used, and how is that usage ordered and integrated into other, parallel activities at the time?" Parallel activities refer to occurrences in co-located groupware and ubiquitous computing technologies like social media. Examining these sequential and parallel activities in user groups on social media networks makes it possible to "[manage] the experience of that everyday life." An important takeaway from this paper on EMCA and sequentiality is that it reveals how the choices made by designers of social media apps ultimately mediate our end-user experience, for better or for worse, revealing "when content is posted and sequentially what is associated with it."

Ubiquitous computing infrastructures
On the topic of computing infrastructures, Democratizing Ubiquitous Computing - a Right for Locality presents a study from researchers at Lancaster University on ubiquitous computing ("ubicomp") that seeks to identify where positive or negative effects on users and society at large exist.
The research specifically focuses on cities and urban areas, as they are places where one can expect many technological and social activities to take place. An apparent guiding principle of the research is that the goal of advancing any ubicomp technologies should be to maximize the amount of good for as many people in a society as possible. A key observation is made about the way in which these infrastructures come into being:

A ubiquitous computing infrastructure can play an important role in enabling and enhancing beneficial social processes as, unlike electricity, digital infrastructure enhances a society's cognitive power by its ability to connect people and information [39]. While infrastructure projects in the past had the idealistic notion to connect the urban realm and its communities of different ethnicity, wealth, and beliefs, Graham et al. [28] note the increasing fragmentation of the management and ownership of infrastructures.

This matters because ubicomp has the potential to further disadvantage marginalized communities online. The current disadvantage of ubiquitous computing infrastructures is that they do not best support urban development. Proposals to resolve these social issues include increased transparency about personal data collection, as well as individual and community accountability for the data collection process in ubicomp infrastructure. Data at work: supporting sharing in science and engineering is one such paper that goes into greater depth about how to build better infrastructures that enable open data-sharing and thus empower their users. What this article outlines is that in building better collaborative systems that advance science and society, we are, in effect, "promoting sharing behaviors" that will encourage greater cooperation and more effective outcomes. Essentially, ubiquitous computing will reflect society, and the choices society makes will influence the computing systems that are put in place. Ubiquitous computing matters greatly to the field of CSCW because, as the physical boundaries that separate us break down with the adoption of technology, our relationships to those locations are actually strengthened. However, there remain a few potential challenges when it comes to social collaboration and the workplace.

Challenges
Social-technical gap
The success of CSCW systems is often so contingent on the social context that it is difficult to generalize. Consequently, CSCW systems that are based on the design of successful ones may fail to be appropriated in other, seemingly similar contexts for a variety of reasons that are nearly impossible to identify a priori. CSCW researcher Mark Ackerman calls this "divide between what we know we must support socially and what we can support technically" the social-technical gap, and describes CSCW's main research agenda as "exploring, understanding, and hopefully ameliorating" this gap. It is important to analyze 'what we know we must support socially' for a few reasons. The way interaction takes place in an in-person setting cannot be easily changed, unlike technology, which can be manipulated to fit specific needs. People live up to certain norms and standards in their day-to-day lives, and part of those norms and attitudes carries over into the online world. The problem is mimicking daily communication styles and behavior in an online setting. Schmidt examines this concept in "Mind the Gap", stating: "Cooperative work is a tricky phenomenon.
We are all engaged in cooperative activities of various sorts in our everyday lives and routinely observe others working together around us. We are all experts from our everyday experience. And yet this quotidian insight can be utterly misleading when applied to the design of systems to support cooperative work". Though in-person communication on a day-to-day basis is natural for most, it does not easily translate into designing support for cooperative work. This highlights the need for adaptability within CSCW systems; Schmidt expands on the "crucial requirement of flexibility that arises from the changing needs of the cooperative work setting". Together, these observations highlight the gaps within CSCW.

Leadership
Generally, teams working in a CSCW environment need the same types of leadership as non-CSCW teams. However, research has shown that distributed CSCW teams may need more direction at the time the group is formed than traditional working groups, largely to promote cohesion and liking among people who may not have as many opportunities to interact in person, both before and after the formation of the working group.

Adoption of groupware
Groupware goes hand in hand with CSCW. The term refers to software that is designed to support the activities of a group or organization over a network, and includes email, conferencing tools, group calendars, workflow management tools, etc. While groupware enables geographically dispersed teams to achieve organizational goals and engage in cooperative work, there are also many challenges that accompany the use of such systems. For instance, groupware often requires users to learn a new system, which users may perceive as creating more work for them without much benefit. If team members are not willing to learn and adopt groupware, it is highly difficult for the organization to develop the requisite critical mass for the groupware to be useful. Further, research has found that groupware requires careful implementation into a group setting, and product developers have not yet been able to find the optimal way to introduce such systems into organizational environments. On the technical side, networking issues with groupware often create challenges in using groupware for CSCW. While access to the Internet is becoming increasingly ubiquitous, geographically dispersed users still face the challenges of differing network conditions. For instance, web conferencing can be quite challenging if some members have a very slow connection while others are able to utilize high-speed connections.

Intergenerational groups
Adapting CSCW tools for intergenerational groups is a prevalent issue within all forms of CSCW. Different generations have different feelings towards technology, as well as different ways of utilizing it. However, as technology has become integral to everyday tasks, it must be accessible to all generations of people. With cooperative work becoming increasingly important and diversified, virtual interaction between different generations is also expanding. Given this, many fields that utilize CSCW tools require carefully designed frameworks to account for different generations.

Workplace teams
One of the recurring challenges in CSCW environments is the development of an infrastructure that can bridge cross-generational gaps in virtual teams. Many companies rely on communication and collaboration between intergenerational employees to be successful, and often this collaboration is performed using various software and technologies.
These team-driven groupware platforms range from email and daily calendars to version control platforms, task management software, and more. With remote work becoming more commonplace, these tools must be accessible to workplace teams virtually. Ideally, system designs will accommodate all team members, but orienting older workers to new CSCW tools can often be difficult. This can cause problems in virtual teams, given the necessity of reconciling the wealth of knowledge and expertise that older workers bring to the table with the technological challenges of new virtual environments. Orienting and retraining older workers to effectively utilize new technology can often be difficult, as they generally have less experience than younger workers with learning such new technologies. As older workers delay their retirement and re-enter the workforce, teams are becoming increasingly intergenerational, meaning that the creation of effective intergenerational CSCW frameworks for virtual environments is essential.

Tools in CSCW
Collaboration amongst peers has always been an integral aspect of getting work done. Working together not only eases the difficulty of the task at hand, but also leads to more effective work. As computers and technology become increasingly important in everyday life, communication practices change, because technology allows individuals to stay connected across many previous barriers. Such barriers might once have been the end of the work day, being across the country, or slow applications that were more of a hindrance than an aid. Tried-and-tested collaborative tools have shattered these previous barriers to communication and helped collaboration progress. Tools that have been integral in shaping computer-supported cooperative work can be split into two major categories: communication and organization.

Communication: The ability to communicate with others while working is a luxury that has increased the speed and accuracy with which tasks are accomplished. Individuals can send pictures of code and issues through platforms like Microsoft Teams without anyone needing to switch screens. This particular change increased office productivity and communication by almost half: the ability to send more specific information faster gave employees the ability to get more done with much less effort. Tools like Microsoft Teams and Slack also allow people to collaborate with ease even if they are in different time zones or different geographical areas. This means that work is no longer tied to a specific office or a 9-5 job; it can be done anywhere, because individuals can communicate with single people or groups on a large scale.

Organization: Apps such as iCal and Reminders on the iPhone provide time-oriented structure and remind users of the tasks they must complete. Organization and communication go hand in hand, as such apps help individuals better plan their day by warning them when two events overlap, when a due date approaches, or whether there is time available for an event. This reduces the hassle of daily scheduling and group coordination. Such apps usually tie into different electronic devices such as computers and tablets; therefore, people receive reminders across multiple platforms. If the platform permits, individuals in teams can set reminders for other people.
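The overlap warning such apps perform reduces, at its core, to a simple interval check. The sketch below shows one way this might look; the event names and times are invented for illustration.

from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b):
    # Two events overlap exactly when each one starts before the other ends.
    return start_a < end_b and start_b < end_a

standup = (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 9, 30))
review = (datetime(2023, 5, 1, 9, 15), datetime(2023, 5, 1, 10, 0))

if overlaps(*standup, *review):
    print("Warning: these two events overlap.")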
Departmental Conflicts
Cross-Boundary Breakdowns
Cross-boundary breakdowns occur when different departments of the same organization unintentionally harm one another. They may be caused by failures to coordinate activities across multiple departments, a form of articulation work. Hospitals may experience cross-boundary breakdowns during patient transfers. When a patient is sent from the emergency department to the operating room, the inpatient access department (IPA) must normally be notified, allowing it to track the number and location of available ICU beds. However, when the emergency department fails to notify the IPA, the IPA staff are later unable to find suitable beds for patients.

Re-coordinating Activities
To restore useful communication between departments after a cross-boundary breakdown, organizations may perform re-coordinating activities. Hospitals may respond to cross-boundary breakdowns by explicitly ranking key resources or assigning "integrator" roles to multiple staff members across different departments.

Challenges in research
Differing meanings
In the CSCW field, researchers rely on a variety of sources, including journals and research schools of thought. These different sources may lead to disagreement and confusion, as there are terms in the field that can be used in different contexts ("user", "implementation", etc.). User requirements change over time and are often not clear to participants because they are always in flux.

Identifying user needs
CSCW researchers often have difficulty deciding which set(s) of tools will benefit a particular group because of the nuances within organizations. This is exacerbated by the fact that it is challenging to accurately identify user, group, and organization needs and requirements, since such needs and requirements inevitably change through the introduction of the system itself. By the time researchers have completed a particular iteration of studying requirements, the requirements themselves have often changed and evolved.

Evaluation and measurement
The range of disciplinary approaches leveraged in implementing CSCW systems makes CSCW difficult to evaluate, measure, and generalize to multiple populations. Because researchers evaluating CSCW systems often bypass quantitative data in favor of naturalistic inquiry, results can be largely subjective, due to the complexity and nuances of the organizations themselves. Possibly as a result of the debate between qualitative and quantitative researchers, three evaluation approaches have emerged in the literature examining CSCW systems. However, each approach faces its own unique challenges and weaknesses:

Methodology-oriented frameworks explain the methods of inquiry available to CSCW researchers without providing guidance for selecting the best method for a particular research question or population.
Conceptual frameworks provide guidelines for determining the factors that a researcher should consider and evaluate through CSCW research, but fail to link conceptual constructs with methodological approaches. Thus, while researchers may know what factors are important to their inquiry, they may have difficulty understanding which methodologies will result in the most informative findings.
Concept-oriented frameworks provide specific advice for studying isolated aspects of CSCW, but lack guidance as to how specific areas of study can be combined to form more comprehensive insight.
Diversity, Equity, and Inclusion in CSCW
Gender and CSCW
In computer-supported cooperative work, there are small psychological differences between how men and women approach CSCW programs. This can lead to unintentionally biased systems, because the majority of software is designed and tested by men. As well, in systems where societal gender differences are not accounted for and countered, men tend to be overrepresented relative to women in these online spaces. This can leave women feeling alienated and unfairly targeted by CSCW programs.

In recent years, more studies have been conducted on how men and women interact with each other using CSCW systems. Findings do not indicate that men and women differ in performance on CSCW tasks, but rather that each gender approaches and interacts with software and performs CSCW tasks differently. In most findings, men were more likely to explore potential choices and more willing to take risks than women. In group tasks, women in general were more conservative in voicing their opinions and suggestions on tasks when paired with a man, but were very communicative when paired with another woman. As well, men are found to be more likely to take control of group activities and teamwork, even from a young age, further discouraging women from speaking up in CSCW group work. Additionally, on CSCW message boards, men on average posted more messages and engaged more frequently than their female counterparts.

Increasing female participation
The dynamic of women participating less in the workforce is less a CSCW problem than one prevalent in all workspaces, but software can still be designed to increase female participation in CSCW. In software design, women are more likely to be involved if the design is centered on communication and cooperation. This is one possible method of increasing female participation, though it does not address why CSCW has lower female participation in the first place. In one study, women generally rated themselves as being poor at understanding technology, having difficulty using mobile programs, and disliking CSCW software. However, when asked the same questions about specific software, they rated themselves just as strongly as the men in the study did. This lack of confidence in software as a whole impacts women's ability to use online programs as efficiently and effectively as men, and accounts for some of the difficulties women face in using CSCW software. Although this has been an active area of research since the 1990s, many developers still do not take gender differences into account when designing their CSCW systems. These issues compound on top of the cultural problems mentioned previously and lead to further difficulties for women in CSCW. If developers become more aware of the differences and difficulties facing women in CSCW design, women can become more effective users of CSCW systems through sharing and voicing their opinions.

Conferences
Since 2010, the Association for Computing Machinery (ACM) has hosted a yearly conference on CSCW. From 1986 to 2010, it was held biennially. The conference is currently held in October or November and features research in the design and use of technologies that affect organizational and group work.
With the rapidly increasing development of new devices that allow collaboration from different locations and contexts, CSCW seeks to bring together researchers from across academia and industry to discuss the many facets of virtual collaboration from both social and technical perspectives. Internationally, the Institute of Electrical and Electronics Engineers (IEEE) sponsors the International Conference on Computer Supported Cooperative Work in Design, which takes place yearly. In addition, the European Society for Socially Embedded Technologies sponsors the European Conference on Computer Supported Cooperative Work, which has been held every two years since 1989. CSCW panels are a regular component of conferences of the adjacent field of Science and Technology Studies.

See also
Collaborative working environment
Collaborative working system
Collaborative software
Collaborative innovation network
Collaborative information seeking
Computer-supported collaboration
Commons-based peer production
Electronic meeting system
E-professional
Human–computer interaction
Integrated collaboration environment
Knowledge management
Mass collaboration
Pervasive informatics
Participatory design
Remote work
Social peer-to-peer processes
Sociology
Virtual research environment
Ubiquitous computing

References

Further reading
Most cited papers
The 47 CSCW Handbook Papers. This paper list is the result of a citation graph analysis of the CSCW Conference. It was established in 2006 and reviewed by the CSCW community. The list only contains papers published in one conference; papers published at other venues have also had significant impact on the CSCW community. The "CSCW handbook" papers were chosen as the overall most cited within the CSCW conference <...> It led to a list of 47 papers, corresponding to about 11% of all papers.

External links
CSCW Conference, ACM CSCW Conference Series
European CSCW Conference Foundation, European CSCW Conference Series
GROUP Conference
COOP Conference

Computer-related introductions in 1984
Collaboration
Groupware
Multimodal interaction
Human–computer interaction
Youtube-dl
youtube-dl is a free and open-source download manager for video and audio from YouTube and over 1,000 other video hosting websites. It is released under the Unlicense software license. As of September 2021, youtube-dl is one of the most starred projects on GitHub, with over 100k stars. According to libraries.io, 308 other packages and 1.43k repositories depend on it. Numerous forks of the project exist, including yt-dlp, with 17.8k stars.

History
youtube-dl was created in 2006 by Ricardo Garcia. Initially, only YouTube was supported, but as the project grew, it began supporting other video sharing websites. Ricardo Garcia stepped down as maintainer in 2011 and was replaced by phihag, who later stepped down and was replaced by dstftw, who in turn stepped down in 2021 and was replaced by dirkf.

RIAA takedown request
On October 23, 2020, the Recording Industry Association of America (RIAA) issued a takedown notice to GitHub under the Digital Millennium Copyright Act (DMCA), requesting the removal of youtube-dl and 17 public forks of the project. The RIAA request argued that youtube-dl violates the Section 1201 anti-circumvention provisions of the DMCA, and provisions of German copyright law, since it circumvents a "rolling cipher" used by YouTube to generate the URL for the video file itself (which the RIAA considered to be an effective technical protection measure, since it is "intended to inhibit direct access to the underlying YouTube video files, thereby preventing or inhibiting the downloading, copying, or distribution of the video files"), and that its documentation expressly encouraged its use with copyrighted media by listing music videos by RIAA-represented artists as examples. GitHub complied with the request.

Users criticized the takedown, noting the legitimate uses for the application, including downloading video content released under open licensing schemes or to create derivative works falling under fair use (such as for archival and news reporting purposes). Public attention to the takedown resulted in a Streisand effect reminiscent of that of the DeCSS takedown. Users reposted the software's source code across the internet in multiple formats. For example, users posted images on Twitter containing the whole youtube-dl source code encoded in a different color in each pixel. GitHub users also filed pull requests to GitHub's own repository of DMCA takedown notices that included youtube-dl source code. On November 16, 2020, GitHub publicly reinstated the repository after the Electronic Frontier Foundation sent GitHub a document contesting the takedown notice, which clarified that the software was not capable of breaching commercial DRM systems. GitHub also announced that future takedown claims under Section 1201 would be manually scrutinized on a case-by-case basis by legal and technical experts.

See also
Comparison of YouTube downloaders
Stream ripping

References

External links

Download managers
YouTube
Command-line software
Free software programmed in Python
Software using the Unlicense license
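A usage note: beyond its command-line interface, youtube-dl can be embedded in Python programs. The sketch below follows the embedding pattern documented in the project's README; the option values and the test-video URL are illustrative, and whether a given download is lawful depends on the content's licensing.

# pip install youtube_dl
import youtube_dl

ydl_opts = {
    "format": "bestaudio/best",      # prefer the best available audio stream
    "outtmpl": "%(title)s.%(ext)s",  # name the output file after the video title
}
# YoutubeDL reads its behavior from the options dictionary above.
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=BaW_jenozKc"])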
FidoNet
(FidoNet "Fido" dog logo, ASCII art (c) John Madill)

FidoNet is a worldwide computer network that is used for communication between bulletin board systems (BBSes). It uses a store-and-forward system to exchange private (email) and public (forum) messages between the BBSes in the network, as well as other files and protocols in some cases.

The FidoNet system was based on several small interacting programs, only one of which needed to be ported to support other BBS software. FidoNet was one of the few networks that was supported by almost all BBS software, as well as a number of non-BBS online services. This modular construction also allowed FidoNet to easily upgrade to new data compression systems, which was important in an era of modem-based communications over telephone links with high long-distance calling charges.

The rapid improvement in modem speeds during the early 1990s, combined with the rapid decrease in price of computer systems and storage, made BBSes increasingly popular. By the mid-1990s there were almost 40,000 FidoNet systems in operation, and it was possible to communicate with millions of users around the world. Only UUCPNET came close in terms of breadth or numbers; FidoNet's user base far surpassed other networks like BITNET.

The broad availability of low-cost Internet connections starting in the mid-1990s lessened the need for FidoNet's store-and-forward system, as any system in the world could be reached for equal cost. Direct dialing into local BBS systems rapidly declined. Although FidoNet has shrunk considerably since the late 1990s, it has remained in use even today, despite internet connectivity becoming more widespread.

History
Origins
There are two major accounts of the development of the FidoNet, differing only in small details.

Tom Jennings' account
Around Christmas 1983, Tom Jennings started work on a new bulletin board system that would emerge as Fido BBS. It was called "Fido" because the assorted hardware together was "a real mongrel". Jennings set up the system in San Francisco sometime in early 1984. Another early user was John Madill, who was trying to set up a similar system in Baltimore on his Rainbow 100. Fido started spreading to new systems, and Jennings eventually started keeping an informal list of their phone numbers, with Jennings becoming #1 and Madill #2.

Jennings released the first version of the FidoNet software in June 1984. In early 1985 he wrote a document explaining the operations of the FidoNet, along with a short portion on the history of the system. In this version, FidoNet was developed as a way to exchange mail between the first two Fido BBS systems, Jennings' and Madill's, to "see if it could be done, merely for the fun of it". This was first supported in Fido V7, "sometime in June 84 or so".

Ben Baker's account
In early 1984, Ben Baker was planning to start a BBS for the newly forming computer club at the McDonnell Douglas automation division in St. Louis. Baker was part of the CP/M special interest group within the club. He intended to use the seminal, CP/M-hosted CBBS system, and went looking for a machine to run it on. The club's president told Baker that DEC would be giving them a Rainbow 100 computer on indefinite loan, so he made plans to move the CBBS onto this machine. The Rainbow contained two processors, an Intel 8088 and a Zilog Z80, allowing it to run both MS-DOS and CP/M, with the BBS running on the latter.
When the machine arrived, they learned that the Z80 side had no access to the I/O ports, so CBBS could not communicate with a modem. While searching for software that would run on the MS-DOS side of the system, Baker learned of Fido through Madill. The Fido software required changes to the serial drivers to work properly on the Rainbow. A porting effort started, involving Jennings, Madill and Baker. This caused all involved to rack up considerable long-distance charges as they all called each other during development, or called into each other's BBSes to leave email. During one such call "in May or early June", Baker and Jennings discussed how great it would be if the BBS systems could call each other automatically, exchanging mail and files between them. This would allow them to compose mail on their local machines, and then deliver it quickly, as opposed to calling in and typing the message in while on a long-distance telephone connection. Jennings responded by calling into Baker's system that night and uploading a new version of the software consisting of three files: FIDO_DECV6, a new version of the BBS program itself, FIDONET, a new program, and NODELIST.BBS, a text file. The new version of FIDO BBS had a timer that caused it to exit at a specified time, normally at night. As it exited it would run the separate FIDONET program. NODELIST was the list of Fido BBS systems, which Jennings had already been compiling. The FIDONET program was what later became known as a mailer. The FIDO BBS software was modified to use a previously unused numeric field in the message headers to store a node number for the machine the message should be delivered to. When FIDONET ran, it would search through the email database for any messages with a number in this field. FIDONET collected all of the messages for a particular node number into a file known as a message packet. After all the packets were generated, one for each node, the FIDONET program would look up the destination node's phone number in NODELIST.BBS, and call the remote system. Provided that FIDONET was running on that system, the two systems would handshake and, if this succeeded, the calling system would upload its packet, download a return packet if there was one, and disconnect. FIDONET would then unpack the return packet, place the received messages into the local system's database, and move on to the next packet. When there were no remaining packets, FIDONET would exit, and run the FIDO BBS program. In order to lower long-distance charges, the mail exchanges were timed to run late at night, normally 4 AM. This would later be known as national mail hour, and, later still, as Zone Mail Hour. Up and running By June 1984 Version 7 of the system was being run in production, and nodes were rapidly being added to the network. By August there were almost 30 systems in the nodelist, 50 by September, and over 160 by January 1985. As the network grew, the maintenance of the nodelist became prohibitive, and errors were common. In these cases, people would start receiving phone calls at 4 AM from a caller that would say nothing and then hang up. In other cases the system would be listed before it was up and running, resulting in long-distance calls that accomplished nothing. In August 1984 Jennings handed off control of the nodelist to the group in St. Louis, mostly Ken Kaplan and Ben Baker.
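The nightly FIDONET cycle described above (bundle outstanding mail by destination node, look up each node's phone number in NODELIST.BBS, and dial each in turn) can be sketched in a few lines. The structures and names below are invented for illustration and are not the historical FIDONET code:

```python
# A minimal sketch of the packet-bundling step; all names and numbers are
# illustrative assumptions, not the real FIDONET implementation.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Message:
    dest_node: int   # node number stored in the formerly unused header field
    sender: str
    recipient: str
    text: str

def build_packets(outbound, nodelist):
    """Group outbound messages into one packet per destination node and
    pair each packet with the phone number looked up from the nodelist."""
    packets = defaultdict(list)
    for msg in outbound:
        packets[msg.dest_node].append(msg)
    return [(node, nodelist[node], msgs) for node, msgs in packets.items()]

nodelist = {2: "1-301-555-0100", 51: "1-314-555-0199"}   # invented numbers
outbound = [Message(2, "tom", "john", "hi"),
            Message(2, "tom", "john", "ported the drivers"),
            Message(51, "tom", "nodelist", "please list my node")]
for node, phone, msgs in build_packets(outbound, nodelist):
    print(f"dial {phone} for node {node}: {len(msgs)} message(s)")
```

Each packet corresponds to one telephone call, which is why the later net/node scheme, by cutting the number of long-distance calls, mattered so much.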
Kaplan had come across Fido as part of finding a BBS solution for his company, which worked with DEC computers and had been given a Rainbow computer and a USRobotics 1200 bit/s modem. From then on, joining FidoNet required an operator to set up their system and use it to deliver a netmail message to a special system, Node 51. The message contained various required contact information. If this message was transmitted successfully, it ensured that at least some of the system was working properly. The nodelist team would then reply with another netmail message back to the system in question, containing the assigned node number. If delivery succeeded, the system was considered to be working properly, and it was added to the nodelist. The first new nodelist was published on 21 September 1984. Nets and nodes Growth continued to accelerate, and by the spring of 1985, the system was already reaching its limit of 250 nodes. In addition to the limits on the growth of what was clearly a popular system, nodelist maintenance continued to grow more and more time-consuming. It was also realized that Fido systems were generally clustered – of the 15 systems running by the start of June 1984, 5 were in St. Louis. A user on Jennings's system in San Francisco who addressed emails to different systems in St. Louis would cause calls to be made to each of those BBSes in turn. In the United States, local calls were normally free, and in most other countries were charged at a low rate. Additionally, the initial call setup, generally the first minute of the call, was normally billed at a higher rate than continuing an existing connection. Therefore, it would be less expensive to deliver all the messages from all the users in San Francisco to all of the users in St. Louis in a single call. Packets were generally small enough to be delivered within a minute or two, so delivering all the messages in a single call could greatly reduce costs by avoiding multiple first-minute charges. Once delivered, the packet would be broken out into separate packets for local systems, and delivered using multiple local free calls. The team settled on the concept of adding a new network number patterned on the idea of area codes. A complete network address would now consist of the network and node number pair, which would be written with a slash between them. All mail travelling between networks would first be sent to their local network host, someone who volunteered to pay for any long-distance charges. That single site would collect up all the netmail from all of the systems in their network, then re-package it into single packets destined for each network. They would then call any required network admin sites and deliver the packet to them. That site would then process the mail as normal, although all of the messages in the packet would be guaranteed to be local calls. The network address was placed in an unused field in the Fido message database, which formerly always held a zero. Systems running existing versions of the software already ignored the fields containing the new addressing, so they would continue to work as before; when noticing a message addressed to another node they would look it up and call that system. Newer systems would recognize the network number and instead deliver that message to the network host. To ensure backward compatibility, existing systems retained their original node numbers through this period.
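Under the net/node scheme just described, the saving comes from the network host bundling all inter-network traffic so that each remote network costs a single call. A minimal sketch, with invented structures:

```python
# Two-level net/node routing at a network host: bundle outbound mail by
# destination network. Everything here is illustrative only.
from collections import defaultdict

def host_repackage(messages):
    """At the network host: group all outbound mail by destination network,
    so each remote network is reached with one long-distance call."""
    by_net = defaultdict(list)
    for dest_net, dest_node, text in messages:
        by_net[dest_net].append((dest_node, text))
    return by_net

outbound = [(100, 10, "hi"), (100, 22, "hello"), (125, 1, "hey")]
for net, msgs in host_repackage(outbound).items():
    print(f"one call to net {net}'s host carries {len(msgs)} message(s)")
```

The receiving host then splits its bundle back out and delivers the pieces with free local calls, exactly as the text describes.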
A huge advantage of the new scheme was that node numbers were now unique only within their network, not globally. This meant the previous 250-node limit was gone, but for a variety of reasons this was initially limited to about 1,200. This change also devolved the maintenance of the nodelists down to the network hosts, who then sent updated lists back to Node 51 to be collected into the master list. The St. Louis group now had to only maintain their own local network, and do basic work to compile the global list. At a meeting held in Kaplan's living room in St. Louis on 11 April 1985, the various parties hammered out all of the details of the new concept. As part of this meeting, they also added the concept of a region, a purely administrative level that was not part of the addressing scheme. Regional hosts would handle any stragglers in the network maps, remote systems that had no local network hosts. They then divided up the US into ten regions that they felt would have roughly equal populations. By May, Jennings had early versions of the new software running. These early versions specified the routing manually through a new ROUTE.BBS file that listed network hosts for each node. For instance, an operator might want to forward all mail to St. Louis through a single node, node 10. ROUTE.BBS would then include a list of all the known systems in that area, with instructions to forward mail to each of those nodes through node 10. This process was later semi-automated by John Warren's NODELIST program. Over time, this information was folded into updated versions of the nodelist format, and the ROUTES file is no longer used. A new version of FIDO and FIDONET, 10C, was released containing all of these features. On 12 June 1985 the core group brought up 10C, and most Fido systems had upgraded within a few months. The process went much more smoothly than anyone imagined, and very few nodes had any problems. Echomail Sometime during the evolution of Fido, file attachments were added to the system, allowing a file to be referenced from an email message. During the normal exchange between two instances of FIDONET, any files attached to the messages in the packets were delivered after the packet itself had been uploaded or downloaded. It is not clear when this was added, but it was already a feature of the basic system when the 8 February 1985 version of the FidoNet standards document was released, so this was added very early in Fido's history. At a sysop meeting in Dallas, the idea was raised that it would be nice if there was some way for the sysops to post messages that would be shared among the systems. In February 1986 Jeff Rush, one of the group members, introduced a new mailer that extracted messages from public forums that the sysop selected, much as the original mailer handled private messages. The new program was known as a tosser/scanner. The tosser produced a file that was similar (or identical) to the output from the normal netmail scan, but these files were then compressed and attached to a normal netmail message as an attachment. This message was then sent to a special address on the remote system. After receiving netmail as normal, the scanner on the remote system looked for these messages, unpacked them, and placed them into the same public forum as on the originating system. In this fashion, Rush's system implemented a store and forward public message system similar to Usenet, but based on, and hosted by, the FidoNet system.
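Rush's tosser/scanner flow, as described above, can be illustrated roughly as follows. The use of JSON and zlib here is purely to make the example runnable; the real system used netmail packet formats and period archivers such as ARC:

```python
# Illustrative scan/toss pair; names and formats are assumptions, not Jeff
# Rush's original implementation.
import json, zlib

def scan(forums, exported_echoes):
    """Export messages from the selected public forums (the 'scan' side)."""
    bundle = {echo: forums[echo] for echo in exported_echoes}
    return zlib.compress(json.dumps(bundle).encode())  # attached to a netmail

def toss(attachment, forums):
    """Unpack a received bundle and file each message into the matching
    local forum (the 'toss' side)."""
    bundle = json.loads(zlib.decompress(attachment))
    for echo, messages in bundle.items():
        forums.setdefault(echo, []).extend(messages)

local = {"SYSOP": ["meeting notes"], "TECH": ["modem tips"]}
remote = {"SYSOP": []}
toss(scan(local, ["SYSOP"]), remote)
print(remote["SYSOP"])   # ['meeting notes']
```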
The first such echomail forum was one created by the Dallas area sysops to discuss business, known as SYSOP. Another called TECH soon followed. Several public echos soon followed, including GAYNET and CLANG. These spawned hundreds of new echos, and led to the creation of the Echomail Conference List (Echolist) by Thomas Kenny in January 1987. Echomail produced world-spanning shared forums, and its traffic volume quickly surpassed the original netmail system. By the early 1990s, echomail was carrying over 8 MB of compressed message traffic a day, many times that when uncompressed. Echomail did not necessarily use the same distribution pathways as normal netmail, and the distribution routing was stored in a separate setup file not unlike the original ROUTE.BBS. At the originating site a header line was added to the message indicating the origin system's name and address. After that, each system that the message traveled through added itself to a growing PATH header, as well as a SEENBY header. SEENBY prevented the message from looping around the network in the case of misconfigured routing information. Echomail was not the only system to use the file attachment feature of netmail to implement store-and-forward capabilities. Similar concepts were used by online games and other systems as well. Zones and points The evolution towards the net/node addressing scheme was also useful for reducing communications costs between continents, where time zone differences on either end of the connection might also come into play. For instance, the best time to forward mail in the US was at night, but that might not be the best time for European hosts to exchange mail. Efforts towards introducing a continental level to the addressing system started in 1986. At the same time, it was noted that some power users were interested in using FidoNet protocols as a way of delivering the large quantities of echomail to their local machines where it could be read offline. These users did not want their systems to appear in the nodelist - they did not (necessarily) run a bulletin board system and were not publicly accessible. A mechanism allowing netmail delivery to these systems without the overhead of nodelist maintenance was desirable. In October 1986 the last major change to the FidoNet network was released, adding zones and points. Zones represented major geographical areas roughly corresponding to continents. There were six zones in total: North America, South America, Europe, Oceania, Asia, and Africa. Points represented non-public nodes, which were created privately on a BBS system. Point mail was delivered to a selected host BBS as normal, but then re-packaged into a packet for the point to pick up on-demand. The complete addressing format was now zone:net/node.point, so a real example might be Bob Smith@1:250/250.10. Points were widely used only for a short time; the introduction of offline reader systems filled this role with systems that were much easier to use. Points remain in use to this day but are less popular than when they were introduced. Other extensions Although FidoNet supported file attachments from even the earliest standards, this feature tended to be rarely used and was often turned off. File attachments followed the normal mail routing through multiple systems and could back up transfers all along the line as the files were copied.
A solution was offered in the form of file requests, in which file transfers were driven by the calling system and used one-time point-to-point connections instead of the traditional routing. Two such standards became common, "WaZOO" and "Bark", which saw varying support among different mailers. Both worked similarly, with the mailer calling the remote system and sending a new handshake packet to request the files. Although FidoNet was, by far, the best known BBS-based network, it was by no means the only one. From 1988 on, PCBoard systems were able to host similar functionality known as RelayNet, while other popular networks included RBBSNet from the Commodore 64 world, and AlterNet. Late in the evolution of the FidoNet system, there was a proposal to allow mail (but not forum messages) from these systems to switch into the FidoNet structure. This was not adopted, and the rapid rise of the internet made this superfluous as these networks rapidly added internet exchange, which acted as a lingua franca. Peak FidoNet started in 1984 and listed 100 nodes by the end of that year. Steady growth continued through the 1980s, but a combination of factors led to rapid growth after 1988. These included faster and less expensive modems and rapidly declining costs of hard drives and computer systems in general. By April 1993, the FidoNet nodelist contained over 20,000 systems. At that time it was estimated that each node had, on average, about 200 active users. Of these 4 million users, 2 million commonly used echomail, the shared public forums, while about 200,000 used the private netmail system. At its peak, FidoNet listed approximately 39,000 systems. Throughout its lifetime, FidoNet was beset with management problems and infighting. Much of this can be traced to the fact that inter-network delivery cost real money, and traffic grew more rapidly than costs fell through improving modem speeds and downward-trending long-distance rates. As costs increased, various methods of recouping them were attempted, all of which caused friction in the groups. The problems were so bad that Jennings came to refer to the system as the "fight-o-net". Decline As modems reached speeds of 28.8 kbit/s, the overhead of the TCP/IP protocols was no longer so egregious and dial-up Internet became increasingly common. By 1995, the bulletin board market was reeling as users abandoned local BBS systems in favour of larger sites and web pages, which could be accessed worldwide for the same cost as accessing a local BBS system. This also made FidoNet less expensive to implement, because inter-network transfers could be delivered over the Internet as well, at little or no marginal cost. But this seriously diluted the entire purpose of the store-and-forward model, which had been built up specifically to address a long-distance problem that no longer existed. The FidoNet nodelist started shrinking, especially in areas with a widespread availability of internet connections. This downward trend continues but has levelled out at approximately 2,500 nodes. FidoNet remains popular in areas where Internet access is difficult to come by, or expensive. Resurgence Around 2014, a retro movement led to a slow increase in internet-connected BBSes and nodes. Telnet, rlogin, and SSH are being used between systems. This means the user can telnet to any BBS worldwide as cheaply as one next door.
Also, support for Usenet and internet mail, along with long file names, has been added to many newer versions of BBS software, some of it freeware, resulting in increasing use. Nodelists are no longer declining in all cases. FidoNet organizational structure FidoNet is governed in a hierarchical structure according to FidoNet policy, with designated coordinators at each level to manage the administration of FidoNet nodes and resolve disputes between members. This structure is very similar to the organization structure of the Sicilian Mafia. Network coordinators (referred to as "Button Men") are responsible for managing the individual nodes within their area, usually a city or similar-sized area. Regional coordinators (referred to as "Underbosses") are responsible for managing the administration of the network coordinators within their region, typically the size of a state or small country. Zone coordinators (referred to as either "Dons" or "Godfathers") are responsible for managing the administration of all of the regions within their zone. The world is divided into six zones, the coordinators of which appoint themselves or representatives to the positions of "International Coordinators" of FidoNet (referred to as "La Cosa Nostra"). The six zone "International Coordinators", along with their Counselors (also known as their "Consiglieres"), form the twelve-person body known as "FidoNet Central". Technical structure FidoNet was historically designed to use modem-based dial-up access between bulletin board systems, and much of its policy and structure reflected this. The FidoNet system officially referred only to the transfer of Netmail—the individual private messages between people using bulletin boards—including the protocols and standards with which to support it. A netmail message would contain the name of the person sending, the name of the intended recipient, and the respective FidoNet addresses of each. The FidoNet system was responsible for routing the message from one system to the other (details below), with the bulletin board software on each end being responsible for ensuring that only the intended recipient could read it. Due to the hobbyist nature of the network, any privacy between the sender and recipient was only the result of politeness from the owners of the FidoNet systems involved in the mail's transfer. It was common, however, for system operators to reserve the right to review the content of mail that passed through their system. Netmail allowed for the attachment of a single file to every message. This led to a series of piggyback protocols that built additional features onto FidoNet by passing information back and forth as file attachments. These included the automated distribution of files and transmission of data for inter-BBS games. By far the most commonly used of these piggyback protocols was Echomail, public discussions similar to Usenet newsgroups in nature. Echomail was supported by a variety of software that collected new messages from the local BBSes' public forums (the scanner), compressed them using ARC or ZIP, attached the resulting archive to a Netmail message, and sent that message to a selected system. On receiving such a message, identified because it was addressed to a particular user, the reverse process was used to extract the messages, and a tosser put them back into the new system's forums. Echomail was so popular that for many users, Echomail was the FidoNet. Private person-to-person Netmail was relatively rare.
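As a rough illustration of the netmail message just described (sender, recipient, their FidoNet addresses, and an optional single file attachment), and emphatically not the FTS-0001 packed-message wire format:

```python
# Simplified netmail structure for illustration only; field names are
# assumptions, not the FTS-0001 layout.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Netmail:
    from_name: str
    to_name: str
    orig_addr: str                     # e.g. "1:170/918"
    dest_addr: str
    body: str
    attachment: Optional[str] = None   # a single filename; this is the hook
                                       # that piggyback protocols built on

msg = Netmail("Tom", "John", "1:170/918", "1:250/250", "hello")
print(msg.dest_addr)
```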
Geographical structure FidoNet is politically organized into a tree structure, with different parts of the tree electing their respective coordinators. The FidoNet hierarchy consists of zones, regions, networks, nodes and points broken down more-or-less geographically. The highest level is the zone, which is largely continent-based:
Zone 1 is the United States and Canada
Zone 2 is Europe, Former Soviet Union countries, and Israel
Zone 3 is Australasia
Zone 4 is Latin America (except Puerto Rico)
Zone 5 was Africa
Zone 6 was Asia (excluding Israel and the Asian parts of Russia, which are listed in Zone 2)
On 26 July 2007 zone 6 was removed, and all remaining nodes were moved to zone 3. Each zone is broken down into regions, which are broken down into nets, which consist of individual nodes. Zones 7-4095 are used for othernets: groupings of nodes that use Fido-compatible software to carry their own independent message areas without being in any way controlled by FidoNet's political structure. Using unused zone numbers would ensure that each network would have a unique set of addresses, avoiding potential routing conflicts and ambiguities for systems that belonged to more than one network. FidoNet addresses FidoNet addresses explicitly consist of a zone number, a network number (or region number), and a node number. They are written in the form Zone:Network/Node. The FidoNet structure also allows for semantic designation of region, host, and hub status for particular nodes, but this status is not directly indicated by the main address. For example, consider a node located in Tulsa, Oklahoma, United States with an assigned node number of 918, located in Zone 1 (North America), Region 19, and Network 170. The full FidoNet address for this system would be 1:170/918. The region was used for administrative purposes, and was only part of the address if the node was listed directly underneath the Regional Coordinator, rather than one of the networks that were used to divide the region further. FidoNet policy requires that each FidoNet system maintain a nodelist of every other member system. Information on each node includes the name of the system or BBS, the name of the node operator, the geographic location, the telephone number, and software capabilities. The nodelist is updated weekly, to avoid unwanted calls to nodes that have shut down and whose phone numbers may have been reassigned for voice use by the respective telephone company. To accomplish regular updates, coordinators of each network maintain the list of systems in their local areas. The lists are forwarded back to the International Coordinator via automated systems on a regular basis. The International Coordinator would then compile a new nodelist, and generate the list of changes (nodediff) to be distributed for node operators to apply to their existing nodelist. Routing of FidoNet mail In a theoretical situation, a node would normally forward messages to a hub. The hub, acting as a distribution point for mail, might then send the message to the Net Coordinator. From there it may be sent through a Regional Coordinator, or to some other system specifically set up for the function. Mail to other zones might be sent through a Zone Gate. For example, a FidoNet message might follow the path: 1:170/918 (node) to 1:170/900 (hub) to 1:170/0 (net coordinator) to 1:19/0 (region coordinator) to 1:1/0 (zone coordinator). From there, it was distributed 'down stream' to the destination node(s).
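The Zone:Network/Node notation, optionally extended with the point number covered later, is easy to handle mechanically. A sketch, with an illustrative regular expression:

```python
# Parse FidoNet addresses of the form Zone:Net/Node with an optional .Point
# suffix; the regular expression and function are illustrative only.
import re

ADDR = re.compile(r"^(\d+):(\d+)/(\d+)(?:\.(\d+))?$")

def parse(addr):
    zone, net, node, point = ADDR.match(addr).groups()
    return int(zone), int(net), int(node), int(point or 0)

print(parse("1:170/918"))      # (1, 170, 918, 0), the Tulsa example above
print(parse("1:250/250.10"))   # (1, 250, 250, 10), a point address
```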
Originally there was no specific relationship between network numbers and the regions they reside in. In some areas of FidoNet, most notably in Zone 2, the relationship between region number and network number is entwined. For example, 2:201/329 is in Net 201 which is in Region 20, while 2:2410/330 is in Net 2410 which is in Region 24. Zone 2 also relates the node number to the hub number if the network is large enough to contain any hubs. This effect may be seen in the nodelist by looking at the structure of Net 2410, where node 2:2410/330 is listed under Hub 300. This is not the case in other zones. In Zone 1, things are much different. Zone 1 was the starting point and when Zones and Regions were formed, the existing nets were divided up regionally with no set formula. The only consideration taken was where they were located geographically with respect to the region's mapped outline. As net numbers were added, the formula region number × 20 was used; when some regions started running out of network numbers, region number × 200 was also used. Region 19, for instance, contains nets 380-399 and 3800-3999 in addition to those that were in Region 19 when it was formed. Part of the objective behind the formation of local nets was to implement cost reduction plans by which all messages would be sent to one or more hubs or hosts in compressed form (ARC was nominally standard, but PKZIP was universally supported); one toll call could then be made during off-peak hours to exchange entire message-filled archives with an out-of-town uplink for further redistribution. In practice, the FidoNet structure allows for any node to connect directly to any other, and node operators would sometimes form their own toll-calling arrangements on an ad-hoc basis, allowing for a balance between collective cost saving and timely delivery. For instance, if one node operator in a network offered to make regular toll calls to a particular system elsewhere, other operators might arrange to forward all of their mail destined for the remote system, and those near it, to the local volunteer. Operators within individual networks would sometimes have cost-sharing arrangements, but it was also common for people to volunteer to pay for regular toll calls either out of generosity or to build their status in the community. This ad-hoc system was particularly popular with networks that were built on top of FidoNet. Echomail, for instance, often involved relatively large file transfers due to its popularity. If official FidoNet distributors refused to transfer Echomail due to additional toll charges, other node operators would sometimes volunteer. In such cases, Echomail messages would be routed to the volunteers' systems instead. The FidoNet system was best adapted to an environment in which local telephone service was inexpensive and long-distance calls (or intercity data transfer via packet-switched networks) costly. Therefore, it fared somewhat poorly in Japan, where even local lines are expensive, and in France, where tolls on local calls and competition with Minitel or other data networks limited its growth. Points As the number of messages in Echomail grew over time, it became very difficult for users to keep up with the volume while logged into their local BBS. Points were introduced to address this, allowing technically-savvy users to receive the already compressed and batched Echomail (and Netmail) and read it locally on their own machines.
To do this, the FidoNet addressing scheme was extended with the addition of a final address segment, the point number. For instance, a user on the example system above might be given point number 10, and thus could be sent mail at the address 1:170/918.10. In real-world use, points were fairly difficult to set up. The FidoNet software typically consisted of a number of small utility programs run by manually edited scripts that required some level of technical ability. Reading and editing the mail required either a "sysop editor" program or a BBS program to be run locally. In North America (Zone 1), where local calls are generally free, the benefits of the system were offset by its complexity. Points were used only briefly, and even then only to a limited degree. Dedicated offline mail reader programs such as Blue Wave, Squiggy and Silver Xpress (OPX) were introduced in the mid-1990s and quickly rendered the point system obsolete. Many of these packages supported the QWK offline mail standard. In other parts of the world, especially Europe, this was different. In Europe, even local calls are generally metered, so there was a strong incentive to keep the duration of the calls as short as possible. Point software employs standard compression (ZIP, ARJ, etc.) and so keeps the calls down to a few minutes a day at most. In contrast to North America, pointing saw rapid and fairly widespread uptake in Europe. Many regions distribute a pointlist in parallel with the nodelist. The pointlist segments are maintained by Net- and Region Pointlist Keepers, and the Zone Point List Keeper assembles them into the Zone pointlist. At the peak of FidoNet there were over 120,000 points listed in the Zone 2 pointlist. Listing points is on a voluntary basis and not every point is listed, so how many points there really were is anybody's guess. As of June 2006, there are still some 50,000 listed points. Most of them are in Russia and Ukraine. Technical specifications FidoNet contained several technical specifications for compatibility between systems. The most basic of all is FTS-0001, with which all FidoNet systems are required to comply as a minimum requirement. FTS-0001 defined:
Handshaking - the protocols used by mailer software to identify each other and exchange meta-information about the session.
Transfer protocol (XMODEM) - the protocol to be used for transferring files containing FidoNet mail between systems.
Message format - the standard format for FidoNet messages while they were exchanged between systems.
Other specifications that were commonly used provided for echomail, different transfer protocols and handshake methods (e.g.: Yoohoo/Yoohoo2u2, EMSI), file compression, nodelist format, transfer over reliable connections such as the Internet (Binkp), and other aspects. Zone mail hour Since computer bulletin boards historically used the same telephone lines for transferring mail as were used for dial-in human users of the BBS, FidoNet policy dictates that at least one designated line of each FidoNet node must be available for accepting mail from other FidoNet nodes during a particular hour of each day. Zone Mail Hour, as it was named, varies depending on the geographic location of the node, and was designated to occur during the early morning. The exact hour varies depending on the time zone, and during that hour any node with only one telephone line is required to reject human callers.
In practice, particularly in later times, most FidoNet systems tend to accept mail at any time of day when the phone line is not busy, usually during night. FidoNet deployments Most FidoNet deployments were designed in a modular fashion. A typical deployment would involve several applications that would communicate through shared files and directories, and switch between each other through carefully designed scripts or batch files. However, monolithic software that encompassed all required functions in one package was also available, such as D'Bridge. Such software eliminated the need for custom batch files and was tightly integrated in operation. The choice of deployment style was the operator's, and there were pros and cons to running in either fashion. Arguably the most important piece of software on a DOS-based Fido system was the FOSSIL driver, a small device driver which provided a standard way for the Fido software to talk to the modem. This driver needed to be loaded before any Fido software would work. An efficient FOSSIL driver meant faster, more reliable connections. Mailer software was responsible for transferring files and messages between systems, as well as passing control to other applications, such as the BBS software, at appropriate times. The mailer would initially answer the phone and, if necessary, deal with incoming mail via FidoNet transfer protocols. If the mailer answered the phone and a human caller was detected rather than other mailer software, the mailer would exit, and pass control to the BBS software, which would then initialise for interaction with the user. When outgoing mail was waiting on the local system, the mailer software would attempt to send it from time to time by dialing and connecting to other systems who would accept and route the mail further. Due to the costs of toll calls, which often varied between peak and off-peak times, mailer software would usually allow its operator to configure the optimal times in which to attempt to send mail to other systems. BBS software was used to interact with human callers to the system. BBS software would allow dial-in users to use the system's message bases and write mail to others, locally or on other BBSes. Mail directed to other BBSes would later be routed and sent by the mailer, usually after the user had finished using the system. Many BBSes also allowed users to exchange files, play games, and interact with other users in a variety of ways (e.g.: node-to-node chat). A scanner/tosser application, such as FastEcho, FMail, TosScan and Squish, would normally be invoked when a BBS user had entered a new FidoNet message that needed to be sent, or when a mailer had received new mail to be imported into the local message bases. This application would be responsible for handling the packaging of incoming and outgoing mail, moving it between the local system's message bases and the mailer's inbound and outbound directories. The scanner/tosser application would generally be responsible for basic routing information, determining which systems to forward mail to. In later times, message readers or editors that were independent of BBS software were also developed. Often the System Operator of a particular BBS would use a dedicated message reader, rather than the BBS software itself, to read and write FidoNet and related messages. One of the most popular editors in 2008 was GoldED+.
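The division of labour described above (FOSSIL at the bottom, then mailer, BBS, and scanner/tosser) can be caricatured as a dispatch step: the mailer answers first, mailer traffic is tossed into the message bases, and human callers are handed to the BBS. Everything below is an illustrative assumption, not any particular package:

```python
# A toy dispatch sketch of a modular FidoNet deployment; all names and the
# call representation are invented for illustration.
class Tosser:
    def __init__(self):
        self.message_base = []
    def toss(self, packets):
        self.message_base.extend(packets)   # import received mail

class BBS:
    def serve(self, user):
        print(f"BBS session for {user}")    # interactive session begins

def mailer_answer(call, bbs, tosser):
    """The mailer answers the line first; mail is tossed, humans get the BBS."""
    if call["type"] == "mailer":
        tosser.toss(call["packets"])
    else:
        bbs.serve(call["user"])

tosser, bbs = Tosser(), BBS()
mailer_answer({"type": "mailer", "packets": ["echomail bundle"]}, bbs, tosser)
mailer_answer({"type": "human", "user": "caller"}, bbs, tosser)
print(tosser.message_base)   # ['echomail bundle']
```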
In some cases, FidoNet nodes, or more often FidoNet points, had no public bulletin board attached and existed only for the transfer of mail for the benefit of the node's operator. By 2009 most nodes had no BBS access, only points, if anything. The original Fido BBS software, and some other FidoNet-supporting software from the 1980s, is no longer functional on modern systems. This is for several reasons, including problems related to the Y2K bug. In some cases, the original authors have left the BBS or shareware community, and the software, much of which was closed source, has been rendered abandonware. Several DOS-based legacy FidoNet mailers such as FrontDoor, Intermail, MainDoor and D'Bridge from the early 1990s can still be run today under Windows without a modem, by using the freeware NetFoss Telnet FOSSIL driver and a virtual modem such as NetSerial. This allows the mailer to dial an IP address or hostname via Telnet, rather than dialing a real POTS phone number. There are similar solutions for Linux, such as MODEMU (modem emulator), which has had limited success when combined with DOSEMU (DOS emulator). Mail tossers such as FastEcho and FMail are still used today under both Windows and Linux/DOSEMU. There are several modern Windows-based FidoNet mailers available today with source code, including Argus, Radius, and Taurus. MainDoor is another Windows-based FidoNet mailer, which can be run using either a modem or directly over TCP/IP. Two popular free and open source software FidoNet mailers for Unix-like systems are binkd (cross-platform, IP-only, uses the binkp protocol) and qico (supports modem communication as well as the IP protocol of ifcico and binkp). On the hardware side, Fido systems were usually well-equipped machines, for their day, with quick CPUs, high-speed modems and 16550 UARTs, which were at the time an upgrade. As a FidoNet system was usually a BBS, it needed to quickly process any new mail events before returning to its 'waiting for call' state. In addition, the BBS itself usually required a large amount of storage space. Finally, a FidoNet system usually had at least one dedicated phone line. Consequently, operating a FidoNet system often required significant financial investment, a cost usually met by the owner of the system. FidoNet availability While the use of FidoNet has dropped dramatically compared with its use up to the mid-1990s, it is still used in many countries, especially Russia and former republics of the USSR. Some BBSes, including those that are now available for users with Internet connections via telnet, also retain their FidoNet netmail and echomail feeds. Some of FidoNet's echomail conferences are available via gateways to the Usenet news hierarchy using software like UFGate. There are also mail gates for exchanging messages between the Internet and FidoNet. Widespread net abuse and e-mail spam on the Internet side have caused some gateways (such as the former 1:1/31 IEEE fidonet.org gateway) to become unusable or cease operation entirely. FidoNews FidoNews is the newsletter of the FidoNet community. Affectionately nicknamed The Snooze, it is published weekly. It was first published in 1984. Throughout its history, it has been published by various people and entities, including the short-lived International FidoNet Association.
See also
PODSnet
RelayNet
UUCP
External links
Alternate US FidoNet Home Page
FidoNet Technical Standards Committee Home Page
FidoNews, the weekly newsletter of the FidoNet community
International Echolist Home Page
IFDC FileGate Project
Fidonet Showcase Project
239050
https://en.wikipedia.org/wiki/Project%20manager
Project manager
A project manager is a professional in the field of project management. Project managers are responsible for the planning, procurement and execution of a project in any undertaking that has a defined scope, a defined start and a defined finish, regardless of industry. As the project's representative, the project manager is the first point of contact for any issues or discrepancies arising among the heads of various departments in an organization, before the problem escalates to higher authorities. Project management is the responsibility of a project manager. This individual seldom participates directly in the activities that produce the result, but rather strives to maintain the progress, mutual interaction and tasks of various parties in such a way as to reduce the risk of overall failure, maximize benefits, and minimize costs. Overview A project manager is the person responsible for accomplishing the project objectives. Key project management responsibilities include:
defining and communicating project objectives that are clear, useful and attainable
procuring the project requirements, such as workforce, required information, various agreements, and the material or technology needed to accomplish project objectives
managing the constraints of the project management triangle, which are cost, time, scope and quality
A project manager is a client representative and has to determine and implement the exact needs of the client, based on knowledge of the organization they are representing. Expertise is required in the domain in which project managers work, in order to handle all aspects of the project efficiently. The ability to adapt to the various internal procedures of the client and to form close links with the nominated representatives is essential in ensuring that the key issues of cost, time, quality and, above all, client satisfaction can be realized. Key topics in project management include:
to specify the reason why a project is important
to specify the quality of the deliverables
resource estimates
timescale
investment, corporate agreement and funding
implementation of the management plan onto the project
team building and motivation
risk assessments and change in the project
maintaining and sustaining the project
monitoring
stakeholder management
provider management
closing the project
Project tools The tools, knowledge and techniques for managing projects are often unique to project management, for example: work breakdown structures, critical path analysis and earned value management. Understanding and applying the tools and techniques which are generally recognized as good practices is not sufficient alone for effective project management. Effective project management requires that the project manager understands and uses the knowledge and skills from at least four areas of expertise: the project management body of knowledge (PMBOK); application-area knowledge, such as standards and regulations set forth by ISO for project management; general management skills; and project-environment management. There are also many options for project management software to assist the project manager and his/her team in executing projects. Project teams When recruiting and building an effective team, the manager must consider not only the technical skills of each person, but also the critical roles and chemistry between workers. A project team has mainly three separate components: project manager, core team and contracted team. Risk Most of the project management issues that influence a project arise from risk, which in turn arises from uncertainty.
The successful project manager focuses on this as his/her main concern and attempts to reduce risk significantly, often by adhering to a policy of open communication, ensuring that project participants can voice their opinions and concerns. Responsibilities The project manager is accountable for ensuring that everyone on the team knows and executes his or her role, feels empowered and supported in the role, knows the roles of the other team members and acts upon the belief that those roles will be performed. The specific responsibilities of the project manager may vary depending on the industry, the company size, the company maturity, and the company culture. However, there are some responsibilities that are common to all project managers, including:
developing the project plans
managing the project stakeholders
managing communication
managing the project team
managing the project risks
managing the project schedule
managing the project budget
managing the project conflicts
managing the project delivery
contract administration
Types Architectural project manager Architectural project managers are project managers in the field of architecture. They have many of the same skills as their counterparts in the construction industry, and will often work closely with the construction project manager in the office of the general contractor (GC) while, at the same time, coordinating the work of the design team and numerous consultants who contribute to a construction project, and managing communication with the client. The issues of budget, scheduling, and quality control are the responsibility of the project manager in an architect's office. Construction project manager Until recently, the American construction industry lacked any level of standardization, with individual States determining the eligibility requirements within their jurisdiction. However, several trade associations based in the United States have made strides in creating a commonly accepted set of qualifications and tests to determine a project manager's competency. The Construction Management Association of America (CMAA) maintains the Certified Construction Manager (CCM) designation. The purpose of the CCM is to standardize the education, experience and professional understanding needed to practice construction management at the highest level. The Project Management Institute has made some headway into being a standardizing body with its creation of the Project Management Professional (PMP) designation. The Constructor Certification Commission of the American Institute of Constructors holds semiannual nationwide tests. Eight American construction management programs require that students take these exams before they may receive their Bachelor of Science in construction management degree, and 15 other universities actively encourage their students to consider the exams. The Associated Colleges of Construction Education and the Associated Schools of Construction have made considerable progress in developing national standards for construction education programs. The profession has recently grown to accommodate several dozen construction management Bachelor of Science programs. Many universities have also begun offering a master's degree in project management. These programs generally are tailored to working professionals who have project management experience or project-related experience; they provide a more intense and in-depth education surrounding the knowledge areas within the project management body of knowledge.
The United States Navy construction battalions, nicknamed the SeaBees, put their command through strenuous training and certifications at every level. To become a chief petty officer in the SeaBees is equivalent to a BS in construction management, with the added benefit of several years of experience to their credit. See ACE accreditation. Engineering project manager In engineering, project management involves seeing a product or device through the developing and manufacturing stages, working with various professionals in different fields of engineering and manufacturing to go from concept to finished product. Optionally, this can include different versions and standards as required by different countries, requiring knowledge of laws, requirements and infrastructure. Insurance claim project manager In the insurance industry, project managers often oversee and manage the restoration of a client's home or office after a fire, flood, or other disaster, covering fields from electronics through to demolition and construction contractors. IT project manager IT project management generally falls into two categories, namely software (development) project manager and infrastructure project manager. Software project manager A software project manager has many of the same skills as their counterparts in other industries. Beyond the skills normally associated with traditional project management in industries such as construction and manufacturing, a software project manager will typically have an extensive background in software development. Many software project managers hold a degree in computer science, information technology, management of information systems or another related field. In traditional project management a heavyweight, predictive methodology such as the waterfall model is often employed, but software project managers must also be skilled in more lightweight, adaptive methodologies such as DSDM, Scrum and XP. These project management methodologies are based on the uncertainty of developing a new software system and advocate smaller, incremental development cycles. These incremental or iterative cycles are time boxed (constrained to a known period of time, typically from one to four weeks) and produce a working subset of the entire system deliverable at the end of each iteration. The increasing adoption of lightweight approaches is due largely to the fact that software requirements are very susceptible to change, and it is extremely difficult to elicit all the potential requirements in a single project phase before the software development commences. The software project manager is also expected to be familiar with the software development life cycle (SDLC). This may require in-depth knowledge of requirements elicitation, application development, logical and physical database design and networking. This knowledge is typically the result of the aforementioned education and experience. There is not a widely accepted certification for software project managers, but many will hold the Project Management Professional (PMP) designation offered by the Project Management Institute, PRINCE2 or an advanced degree in project management, such as an MSPM or other graduate degree in technology management. IT infrastructure project management An infrastructure IT PM is concerned with the nuts and bolts of the IT department, including computers, servers, storage, networking, and such aspects of them as backup, business continuity, upgrades, replacement, and growth.
Often, a secondary data center will be constructed in a remote location to help protect the business from outages caused by natural disaster or weather. Recently, cyber security has become a significant growth area within IT infrastructure management. The infrastructure PM usually has an undergraduate degree in engineering or computer science, while a master's degree in project management is required for senior-level positions. Along with the formal education, most senior-level PMs are certified, by the Project Management Institute, as a Project Management Professional. PMI also has several additional certification options, but PMP is by far the most popular. Infrastructure PMs are responsible for managing projects that have budgets from a few thousand dollars up to many millions of dollars. They must understand the business and the business goals of the sponsor and the capabilities of the technology in order to reach the desired goals of the project. The most difficult part of the infrastructure PM's job may be this translation of business needs and wants into technical specifications. Oftentimes, business analysts are engaged to help with this requirement. The team size of a large infrastructure project may run into several hundred engineers and technicians, many of whom have strong personalities and require strong leadership if the project goals are to be met. Due to the high operations expense of maintaining a large staff of highly skilled IT engineering talent, many organizations outsource their infrastructure implementations and upgrades to third-party companies. Many of these companies have strong project management organizations with the ability to not only manage their clients' projects, but also to generate revenue at the same time. See also
Event planning and production
Master of Science in Project Management
Project engineer
Project management
Project planning
Product management
Further reading
Project Management Institute (PMI), USA
US DoD (2003). Interpretive Guidance for Project Manager Positions. August 2003.
Open source handbook for project managers. July 2006.
Project Management Training: Research, a collection of scholarly articles. Nov 2012.
69438196
https://en.wikipedia.org/wiki/Exploit-as-a-Service
Exploit-as-a-Service
Exploit-as-a-service or EaaS is a scheme of cybercriminals whereby zero-day vulnerabilities are leased to hackers. EaaS is typically offered as a cloud service. By the end of 2021, EaaS had become more of a trend among ransomware groups. In the past, zero-day vulnerabilities were often sold on the Dark Web, but this was usually at very high prices, such as millions of US dollars per zero-day. A leasing model makes such vulnerabilities more affordable for many hackers. Even if such zero-day vulnerabilities are eventually sold at a high price, they can be leased out for some time first. The scheme can be compared with similar schemes like Ransomware-as-a-Service (RaaS), Phishing-as-a-Service and Hacking-as-a-Service (HaaS). The latter includes such services as DoS and DDoS attacks and botnets that are maintained for hackers who use these services. Parties who offer exploit-as-a-service need to address various challenges. Payment is usually made in cryptocurrencies like Bitcoin. Anonymity is not always guaranteed when cryptocurrencies are used, and the police have been able to apprehend criminals on various occasions. Zero-day vulnerabilities that are leased could be discovered, and the software that is used to exploit them could be reverse engineered. It is as yet uncertain how profitable the exploit-as-a-service business model will be. If it turns out to be profitable, the number of threat actors offering this service will probably increase. Sources of information on exploit-as-a-service include discussions on the Dark Web, which reveal an increased interest in this kind of service. See also
Exploit (computer security)
Computer security
Computer virus
Crimeware
Exploit kit
IT risk
Metasploit
Shellcode
w3af
External links
Exploit-as-a-service: Cybercriminals exploring potential of leasing out zero-day vulnerabilities, as saved in the Internet Archive
Exploit-as-a-Service, high rollers and zero-day criminal tactics, as saved in the Internet Archive
Hacking as a Service, as saved in the Internet Archive
24445207
https://en.wikipedia.org/wiki/Electronic%20Data%20Systems
Electronic Data Systems
Electronic Data Systems (EDS) was an American multinational information technology equipment and services company headquartered in Plano, Texas. History Electronic Data Systems (EDS) was founded in 1962 by H. Ross Perot, a graduate of the United States Naval Academy and a successful IBM salesman who observed firsthand how inefficiently IBM's customers typically were using their expensive systems. Somewhat to the chagrin of IBM, which wanted to sell as many computers as possible, Perot made a fortune changing this. An early success was in matching the unused computer time at Southwestern Life Insurance Company with the computing needs of rapidly expanding Collins Radio, both located in Dallas, Texas. Perot knew the inside details of both companies. In its early years, EDS was a pioneer in facilities management (becoming the IT department for many companies) as well as beginning to service banks and provide early support for both Medicaid and Medicare in its home state of Texas. Leading the effort internally was Morton H. Meyerson, who joined the company in 1966 as the company's 54th employee. In 1967, he proposed the business model that eventually became known as "outsourcing" and which led to exponential growth for EDS. In the 1970s, EDS expanded initially into more insurance services and later credit unions, and by 1975 revenue topped $100 million and the company began bidding for work internationally. In 1978 EDS expanded into financial markets with the arrival of automated teller machines, electronic funds transfers and real-time point-of-sale terminals. Meyerson was named president in 1979, at which point EDS had revenue of $270 million, was free of debt, and had 8,000 employees. In the 1980s, the company expanded into travel services, supporting payment services between travel agents and airlines represented by the Air Transport Association of America, and provided large-scale contracts for the US military. In 1984, the company was acquired by General Motors for $2.5 billion, with EDS becoming a wholly owned subsidiary of GM. Meyerson remained president, and in 1985 the company had a presence in 21 countries with 40,000 employees. Meyerson retired in 1987. During his years of executive leadership, EDS revenue grew to $4 billion a year, and the company grew to 45,000 employees. By the end of the decade, revenue was $5 billion. In the 1990s, in addition to its existing markets, EDS was entering the telecommunications industry and was providing IT systems in many foreign countries. The company provided information systems for global sporting events including the 1992 Barcelona Olympics, the 1994 FIFA World Cup, and the 1998 FIFA World Cup. In 1994, EDS signed what was at the time the largest information technology contract, with Xerox for $3.2 billion, and also bought the New Zealand banking processing company Databank Systems. In 1995 it purchased A.T. Kearney, the world's 4th largest private management consulting firm. In 1996, GM spun off EDS, which became an independent company again and relisted on the New York Stock Exchange. Before the turn of the century it took part in over 1,300 Year 2000 projects. As a part of the move towards being an independent company, EDS asked its employees to assist in the re-branding effort by submitting designs for a new logo.
While a design (a square with the "E" in it) was selected and used for several years, it was the design of Shawn Downs, an employee in the Charlotte IPC, that was ultimately selected and used in the 2000 launch. In 2000, EDS launched the new logo with an award-winning Super Bowl commercial about herding cats. After 2000, the company continued to sign long-term, billion-dollar contracts with organizations such as Bank of America, American Airlines, General Motors, Kraft Foods and the United States Navy. In 2006 it sold A.T. Kearney in a management buyout. On May 13, 2008, Hewlett-Packard Co. confirmed that it had reached a deal with EDS to acquire the company for $13.9 billion. The deal was completed on August 26, 2008. EDS became an HP business unit and was temporarily renamed "EDS, an HP company". Ronald A. Rittenmeyer, EDS chairman, president, and CEO, remained at the helm and reported to HP CEO Mark Hurd until his retirement. In December 2008, HP announced that Rittenmeyer would retire at the end of the month. As of 2008, EDS employed 300,000 people in 64 countries, the largest locations being the United States, India and the UK. It was ranked as one of the largest service companies on the Fortune 500 list, with around 2,000 clients. As of 23 September 2009, EDS began going to market as HP Enterprise Services, a name change which came one year after HP announced the acquisition of EDS and which marked a critical milestone as the integration of EDS into HP neared completion. On April 3, 2017, Hewlett Packard Enterprise Services merged with Computer Sciences Corporation to form DXC Technology, which retained significant operations in Plano, Texas, and many aspects of EDS. On June 1, 2018, DXC spun off the U.S. public services sector of the business through a Reverse Morris Trust, combining it with Vencore and KeyPoint Government Solutions to create a new independent and publicly traded government contractor, Perspecta Inc. Company structure In 2006, EDS sold its management consulting subsidiary, A.T. Kearney, in a management buyout and retained interests in five related companies: ExcellerateHRO, a human resources outsourcing venture jointly owned with Towers Perrin; Injazat Data Systems, a joint venture between EDS and Mubadala Development Company of Abu Dhabi, whose purpose was to provide IT and business process outsourcing (BPO) services in the United Arab Emirates, Qatar and Oman to the government, oil and gas, utilities, financial services, transportation, telecom and healthcare sectors; SOLCORP, which provided software and consulting services for the life insurance and wealth management industry; EDS Consumer Loan Services (Wendover), which supported consumer loans in the United States; and MphasiS, an applications development, business processing and infrastructure outsourcing company operating from Bangalore, India. MphasiS was merged with the then EDS India unit to become MphasiS, an HP company, with about 33,000 employees. MphasiS operated as an independent subsidiary with its own board and was listed on Indian markets as MphasiS Limited. Acquisitions In June 2006, EDS acquired a majority holding in MphasiS, a leading applications and business process outsourcing (BPO) services company based in Bangalore, India. In March 2007, EDS acquired RelQ Ltd, a testing company based in Bangalore, India. In November 2007, EDS announced that it had agreed to purchase an approximately 93 percent equity interest in Saber Corporation, a leading provider of software and services to U.S.
state governments, from various sellers, including majority shareholder Accel-KKR, for approximately $420 million in cash. Saber became Saber Government Solutions after merging with other EDS state and local non-healthcare groups. In January 2009, it rebranded as EDS, an HP company. In April 2008, EDS acquired Vistorm Holdings Limited, a provider of information assurance and managed security services based in the U.K. The acquisition created one of the largest information assurance and managed security services firms in Europe. In May 2008, HP and EDS announced that they had signed a definitive agreement under which HP would purchase EDS at a price of $25.00 per share, or an enterprise value of approximately $13.9 billion. The terms of the transaction were unanimously approved by the HP and EDS boards of directors. The transaction closed on 26 August 2008. The companies' collective services businesses, as of the end of each company's 2007 fiscal year, had annual revenues of more than $38 billion and 210,000 employees, doing business in more than 80 countries. In September 2009, HP purchased Lecroix Systems and incorporated it into the infrastructure of EDS to facilitate both in-house and client network security needs. Revenue sources For 2006, $9.6 billion of revenue came from the Americas (Canada, Latin America, and the United States); $6.4 billion from Europe, the Middle East, and Africa; and $1.5 billion from Asia-Pacific. By service line, revenue was: Infrastructure $12 billion, Applications Software $5.9 billion, Business Process Outsourcing $3 billion, and all other $421 million. EDS also announced the expansion of its SAP consulting practice: by collaborating with SAP on client engagement training and techniques intended to drive the long-term growth of its consulting practice, EDS aimed to further enhance its existing SAP capabilities and bring end-to-end SAP consulting and systems integration to the market by early 2008. Additionally, EDS was to work closely with SAP's Global Partner and Ecosystem Group on market penetration and value-added customer offerings. Locations EDS operated in 66 countries, with the largest numbers of employees in the cities of Detroit, Michigan, USA; Dallas-Fort Worth, Texas, USA; São Paulo, Brazil; Washington, D.C., USA; Toronto, Ontario, Canada; Rome and Milan, Italy; Paris, France; Adelaide, Australia; Philadelphia, Pennsylvania, USA; Sydney, Australia; Blackpool, UK; Sacramento, California, USA; Tyneside, UK; Madrid and Barcelona, Spain; Antwerp, Belgium; and Frankfurt, Germany. Other major facilities were in Argentina, Australia, Belgium, Brazil, Canada, Egypt, Germany, Hungary, India, Ireland (Dublin), Israel, Italy, Mexico, the Netherlands, New Zealand, South Africa, Spain and the United Kingdom. EDS's Plano, Texas campus is located about 20 miles (30 km) north of downtown Dallas. The campus consists of 3,521,000 square feet (327,000 m²) of office and data center space on 270 acres (1.1 km²) of land. The campus included four Tier-IV data centers, a command center, four clusters of office buildings, a fitness center, a service station, four helipads and a hangar. It is the center of the 2,665-acre (11 km²) Legacy real estate development in Plano, which EDS built. Sponsorships EDS sponsored the Premier League association football team Derby County from 1998 to 2001. EDS was the title sponsor of the PGA Tour's EDS Byron Nelson Championship from 2003 to 2008, played in nearby Irving, Texas. In 2009, it became the HP Byron Nelson Championship.
The tournament raises about $6 million each year for youth and family service centers in Dallas, Texas. EDS signed a sponsorship agreement in 2007 with Nobel Media to become a Global Sponsor of the Nobel Prize Series, and with Nobel Web to become its Global Technology Services Partner. The three-year agreement enabled EDS to apply its technology expertise for the benefit of the Nobel Prize Series and the organization's Web technologies, including supporting the development of content on nobelprize.org, Nobel's award-winning website. EDS sponsored the Formula One team Jaguar Racing and was title sponsor of the 1995 Australian Grand Prix. Services EDS cataloged its services into three portfolios: Infrastructure, Applications, and Business Process Outsourcing. Infrastructure services include maintaining the operation of part or all of a client's computer and communications infrastructure, such as networks, mainframes, "midrange" and Web servers, desktops and laptops, and printers. Applications services involve developing, integrating, and/or maintaining applications software for clients. Business process outsourcing involves performing a business function for a client, such as payroll, call centers, insurance claims processing, and so forth. Partners EDS established a number of business alliances with other companies through its global alliance program. The company had three types of alliances: Agility Alliances, Solution Alliances, and Technology Alliances. The EDS Agility Alliance worked on a range of projects, notably its Agile Enterprise. Members of the EDS Agility Alliance included Cisco Systems, EMC Corporation, Microsoft, Oracle Corporation, SAP, Sun Microsystems, Symantec, and Xerox. Major clients Most of EDS's clients were very large companies and governments that needed services from a company of EDS's scale. EDS's largest clients included Rolls-Royce plc, General Motors, Bank of America, Arcandor, Kraft, the United States Navy, the UK Ministry of Defence and Royal Dutch Shell, although General Motors announced plans in 2012 to move 90% of its IT work back in-house over the following three to five years. EDS formed the National Heritage Insurance Company in 1996. The subsidiary was created to manage Medicare Part B services on behalf of the Centers for Medicare and Medicaid Services (CMS), formerly the Health Care Financing Administration (HCFA). NHIC handles call center, claims processing and payment, fraud investigations, physician enrollment and related functions in many US states. Another large EDS client was the United States Navy. In 2000, EDS won a US$9 billion contract to create an intranet linking the Navy and the Marine Corps. The contract originally ran to late 2006, but on March 24, 2006, it was extended to 2010, adding $3 billion to the accumulated contract value. This initiative is known as the Navy Marine Corps Intranet, or simply NMCI. In 2004, NMCI accounted for about 4% of EDS's revenue. NMCI has been called the largest private network in the world, with approximately 400,000 "seats". EDS provided the network, desktops, laptops, servers, telephones, video-conferencing, satellite transceivers, and overall management of the intranet. Following on from NMCI, EDS in March 2005 won a US$4 billion contract with the UK Ministry of Defence to "consolidate numerous existing information networks into a single next-generation infrastructure ...
The network will provide seamless interaction between headquarters, battlefield support and the front line, linking about 150,000 desktop terminals and 340,000 users in approximately 2,000 locations ..." In February 2008 EDS signed a US$1.3 billion contract with the Infocomm Development Authority of Singapore, one of the largest IT projects ever undertaken in Asia. The agreement was intended to help the Singapore government achieve a standard desktop, network and messaging/collaboration environment across its public sector by the end of fiscal year 2010. In October 2008, the U.S. Defense Information Systems Agency (DISA) signed a US$111 million contract with EDS. Under this contract, EDS was to conduct worldwide security reviews, deliver certification and accreditation support, provide independent evaluation of United States Department of Defense security policies, and conduct security assessments on DOD operating systems, applications, databases, and networks. DOD and EDS had a 13-year relationship in which EDS provided DISA with a wide range of infrastructure services, hardware and software through the DISA I-Assure and Encore contract vehicles. Of historical significance, just prior to the overthrow of the Shah of Iran, EDS was the IT company that developed the Iranian social security information system. During the 1979 overthrow, several EDS employees were detained by the transitioning government of Iran, leading H. Ross Perot to undertake extraordinary clandestine measures to get these employees out of Iran. These events were recounted in Ken Follett's book On Wings of Eagles. Client contract controversies In December 2003, EDS lost a 10-year £3 billion contract to run Inland Revenue IT services after a series of serious delays in the payment of tax credits; the contract was instead awarded to Capgemini. EDS had operated systems for the Inland Revenue since 1994, but the performance of its system had been poor, causing late arrival of tax credit payments for hundreds of thousands of people. In 2004, EDS was criticized by the UK's National Audit Office for its work on IT systems for the UK's Child Support Agency (CSA), which ran seriously over budget, causing problems which led to the resignation of the CSA's head, Doug Smith, on 27 November 2004. The system's rollout had been two years late, and following its introduction in March 2003 the CSA was obliged to write off £1 billion in claims, while £750 million in child support payments from absent parents remained uncollected. A leaked internal EDS memo admitted that the CSA's system was "badly designed, badly tested and badly implemented". UK MPs described it as an "appalling waste of public money" and called for it to be scrapped. In 2006, EDS' Joint Personnel Administration (JPA) system for the RAF led to thousands of personnel not receiving correct pay due to "processing errors". EDS and MoD staff were reported to have "no definitive explanations for the errors". In September 2007 EDS paid $500,000 to settle an action by the U.S. Securities and Exchange Commission regarding charges related to overstatement of its contract revenues in 2001–2003. These overstatements caused a fall in share prices in 2002, which led to legal action against EDS from US shareholder groups. On 16 October 2007, the British TV company BSkyB claimed £709 million in compensation from EDS, claiming that EDS' failure to meet its agreed service standards resulted not just from incompetence, but from fraud and deceit in the way it pitched for the contract.
During the BSkyB case, it was shown that a managing director had obtained a degree over the Internet. Lawyers for Sky were able to demonstrate that the process for awarding the claimed degree would give a degree to a dog, and that the mark attained by the dog was higher than that of the HP executive, who was questioned on his expertise and integrity. HP lost the case, with a preliminary £200 million payment ordered while it appealed over the £700 million total. On 10 October 2008 it was reported that a Ministry of Defence hard drive potentially containing the details of 100,000 Armed Forces personnel could not be located by EDS. In 1997 EDS had signed its largest IT contract to that date, with the Commonwealth Bank of Australia, for $6–8 billion. References External links "Electronic Data Systems" entry in the Handbook of Texas Online, published by the Texas State Historical Association Business services companies of the United States Companies based in Plano, Texas Companies formerly listed on the London Stock Exchange Former General Motors subsidiaries Hewlett-Packard acquisitions International information technology consulting firms Online companies of the United States Multinational companies headquartered in the United States Ross Perot Technology companies established in 1962 2008 mergers and acquisitions
13205719
https://en.wikipedia.org/wiki/DHCP%20snooping
DHCP snooping
In computer networking, DHCP snooping is a series of techniques applied to improve the security of a DHCP infrastructure. DHCP servers allocate IP addresses to clients on a LAN. DHCP snooping can be configured on LAN switches to exclude rogue DHCP servers and remove malicious or malformed DHCP traffic. In addition, information on hosts which have successfully completed a DHCP transaction is accrued in a database of bindings, which may then be used by other security or accounting features. Other features may use DHCP snooping database information to ensure IP integrity on a Layer 2 switched domain. This information enables a network to: Track the physical location of IP addresses, when combined with AAA accounting or SNMP. Ensure that hosts only use the IP addresses assigned to them, when combined with IP source guard (also known as source-lockdown). Sanitize ARP requests, when combined with dynamic ARP inspection (also known as arp-protect). A minimal sketch of the underlying filtering logic is given below. References Internet Standards Application layer protocols
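The filtering decision described above can be illustrated with a short sketch. The following Python fragment is a minimal illustration under stated assumptions, not vendor firmware: the class, the message-type strings, and the port labels are invented for the example, and real switches additionally perform rate limiting, DHCP option-82 handling, and hardware-based forwarding.

from dataclasses import dataclass, field

# DHCP message types that only a legitimate server should originate.
SERVER_MESSAGES = {"DHCPOFFER", "DHCPACK", "DHCPNAK"}

@dataclass
class Binding:
    # One entry in the snooping binding database: which client (MAC)
    # holds which IP address, and the VLAN/port of the transaction.
    mac: str
    ip: str
    vlan: int
    port: str

@dataclass
class SnoopingSwitch:
    trusted_ports: set = field(default_factory=set)  # uplinks toward real DHCP servers
    bindings: dict = field(default_factory=dict)     # binding database, keyed by client MAC

    def trust(self, port):
        # Mark a port as trusted, e.g. the uplink to the legitimate DHCP server.
        self.trusted_ports.add(port)

    def process(self, port, vlan, mac, msg_type, assigned_ip=None):
        # Return True to forward the DHCP message, False to drop it.
        # Rule 1: server-to-client messages arriving on an untrusted (access)
        # port indicate a rogue DHCP server, so the frame is dropped.
        if msg_type in SERVER_MESSAGES and port not in self.trusted_ports:
            return False
        # Rule 2: a DHCPACK completes a transaction; record the lease in the
        # binding database for later use by IP source guard or ARP inspection.
        # (A real switch records the client-facing port learned from the
        # preceding DHCPREQUEST; this sketch simplifies that correlation.)
        if msg_type == "DHCPACK" and assigned_ip is not None:
            self.bindings[mac] = Binding(mac, assigned_ip, vlan, port)
        return True

if __name__ == "__main__":
    sw = SnoopingSwitch()
    sw.trust("uplink-24")
    # A DHCPOFFER arriving on an access port comes from a rogue server: dropped.
    assert sw.process("access-5", 10, "aa:bb:cc:dd:ee:01", "DHCPOFFER") is False
    # The real server's DHCPACK on the trusted uplink is forwarded,
    # and the resulting lease lands in the binding database.
    assert sw.process("uplink-24", 10, "aa:bb:cc:dd:ee:01", "DHCPACK", "10.0.0.5") is True
    print(sw.bindings)

On production switches the same policy is expressed in configuration rather than code: typically snooping is enabled globally and per VLAN, and only the interfaces facing the legitimate DHCP server are marked trusted.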
60015557
https://en.wikipedia.org/wiki/List%20of%20G.I.%20Joe%3A%20A%20Real%20American%20Hero%20characters
List of G.I. Joe: A Real American Hero characters
This is an alphabetical list of G.I. Joe: A Real American Hero characters who are members of the G.I. Joe Team. For Cobra characters, see List of Cobra characters. Ace Agent Faces Agent Faces is the G.I. Joe Team's infiltrator. His real name is Michelino J. Paolino, and he was born in Parma, Ohio. Agent Faces was first released as an action figure in 2003, in a two-pack with Zartan. His primary military specialty is fighting. His secondary military specialty is intelligence. Agent Faces was born with an uncanny talent for mimicry. After doing a brutally accurate impression of his first sergeant during basic training, he was sent to a top-secret intelligence school. There, he learned the tricks of cloak and dagger, and the use of advanced makeup and disguise techniques. Agent Faces appeared in the direct-to-video CGI animated movie G.I. Joe: Spy Troops, voiced by Ward Perry. Agent Helix Agent Helix is a covert operations officer with advanced martial arts training and expert marksmanship. Her favorite weapons are dual 10mm Auto pistols. An Olympic-class gymnast, her distinctive "Whirlwind attack" is an overpowering combination of kicks and firepower. Agent Helix appears as a playable character in the G.I. Joe: The Rise of Cobra video game, voiced by Nancy Truman. She was designed by Mayan Escalante, a character artist at Double Helix Games, as an unlockable character in the video game. She then became an action figure in the 2009 edition of the toyline. Airborne Airtight Airwave Airwave is the G.I. Joe Team's audible frequency specialist. His real name is Cliff V. Mewett, and he was born in Louisville, Kentucky. The same name, Cliff V. Mewett, was also used a few years later for Colonel Courage, even though Colonel Courage is African-American and was born in a different city. Airwave was first released as an action figure in 1990, as part of the "Sky Patrol" line. He is the Sky Patrol communications specialist, and is also the Signal Corps Adjutant to the Joint Chiefs of Staff. He is noted for being able to gain a signal where few others can. Airwave appears in the DiC G.I. Joe cartoon, voiced by Michael Benyaer. Alpine Altitude Altitude is the G.I. Joe Team's recon scout. His real name is John Edwards Jones, and he was born in Cambria, California. Altitude was first released as an action figure in 1990, as part of the "Sky Patrol" line. He is a full-blooded Apache. He joined the military after his budding artistic career was cut short by the collapse of the syndicated cartoon industry. Altitude uses his photographic memory and drawing skills to bring back intelligence as a recon scout. Altitude appeared in the Devil's Due G.I. Joe series. He is part of the assault team sent to Cobra Island to destroy the forces of the revived Serpentor. Altitude appears in the DiC G.I. Joe cartoon, voiced by Terry Klassen. Ambush Ambush is the G.I. Joe Team's concealment specialist. His real name is Aaron McMahon, and he was born in Walnut, California. Ambush was first released as an action figure in 1990. There was a "Dinosaur Hunter" release in 1993. A new version of Aaron "Ambush" McMahon was released in 2004, as part of the Toys R Us exclusive "Desert Patrol Squad" set, which also included the figures Dusty, Gung Ho, Snake-Eyes, Stalker and Tunnel Rat. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 111. There, he is part of an advance recon team sent to the fictional Middle Eastern country of Benzheen.
As the Battle of Benzheen rages on, Ambush, Recoil and Sneak-Peek are shot by Cobra soldiers, who are themselves killed by other Joes. Sneak-Peek does not survive his injuries. Ambush later drove the "Battle Wagon" during a mission in Trans-Carpathia, in which the Joes defended Destro and the Baroness against Cobra forces. Later, Hawk and Lady Jaye were captured in Grodsnz, the capital of Borovia, by local security police. Ambush and the remaining Joes drove the Battle Wagon into the city to rescue their teammates. In the Devil's Due series, Ambush is seen as one of the many Joes interfering in the second Cobra civil war, again caused by Serpentor. This conflict takes place on Cobra Island. His infiltration skills are put to use when a Joe team investigates hostile corporate interests in the fictional country of Darklonia. Ambush appeared in the DiC G.I. Joe animated series, voiced by Andrew Koenig and Ian Corlett. In the episode "United We Stand", Ambush and Pathfinder have to work together or perish. In the episode "I Found you Evy", Ambush reveals a story from his past, about the only person who has ever been able to find him, a childhood friend who had become a female Range-Viper. Armadillo Armadillo is the G.I. Joe Team's driver of the Rolling Thunder vehicle. His real name is Philo R. Makepeace, and his rank is E-7 (Sergeant First Class). Armadillo was born in Fort Huachuca, Arizona, and was first released as an action figure in 1988 with the "Rolling Thunder" missile launcher. His primary military specialty is armored assault vehicle driver. His secondary military specialty is advanced reconnaissance. Prior to his military career, he drove semi trucks, until his aggressive driving style got him into trouble. In the Marvel Comics G.I. Joe series, the character Armadillo was called Rumbler. His first appearance was in issue No. 80, when he helped the G.I. Joe team keep Cobra Command from claiming a newly formed island near the original Cobra Island. However, just as the battle was over, the island sank back beneath the waves. He later participated in a secret mission to rescue captured Joes and members of the Oktober Guard from Sierra Gordo. He participates in the Battle of Benzheen. In Marvel UK's Action Force comic, Armadillo appeared in G.I. Joe Annual 1992, as part of a team sent to the fictional country of Sao Cristobel. The mission is to keep Cobra from acquiring a nuclear warhead. Backblast Backblast is the G.I. Joe Team's anti-aircraft soldier. His real name is Edward J. Menninger, and his rank is that of Sergeant E-5. Backblast was born in New York City, and was first released as an action figure in 1989. The figure was repainted and released as part of the Battle Corps line in 1993. Different versions of the character were released in 2004 and 2005. Backblast's primary military specialty is air defense, and his secondary military specialty is signal corps. He grew up in a house next to one of the busiest airports in the world. His bedroom was directly under the landing path of incoming jets. When asked his job preferences upon his enlistment, he answered, "Where can I go to shoot airplanes out of the sky?" In the Marvel Comics G.I. Joe series, he first appeared in issue No. 92. He was part of a covert team of Joes sent into the fictional country of Sierra Gordo. They successfully rescue Shockwave, Recondo and Lt. Falcon, as well as the surviving members of the Oktober Guard.
Backblast personally shoots down a Cobra Condor plane, which was attempting to destroy the Joes' vehicle, before the team could get across the border into the friendly nation of Punta del Mucosa. Backblast was in the Joes' Utah HQ when Clutch and Rock 'n Roll nearly went berserk due to Cobra brainwashing. He is one of the many Joes sent to the fictional Middle Eastern nation of Benzheen during the conflict there. He works with Rampart to shoot down a Cobra Rattler pursuing Joe pilots. In the Devil's Due series, Backblast is seen as one of the many Joes fighting against the new army created by Serpentor. This conflict takes place on Cobra Island. Backblast appeared in the DiC G.I. Joe animated series, in a non-speaking cameo role in part 5 of "Operation Dragonfire". He also appears as a playable character in the G.I. Joe: The Rise of Cobra video game, voiced by Chopper Bernet. Back-Stop Back-Stop is the G.I. Joe Team's Persuader tank driver. His real name is Robert A. Levin. Back-Stop was born in Montreal, Quebec, Canada. He was first released as an action figure in 1987, packaged with the "Persuader" high-speed tank. A second version of Back-Stop was available as an authorized exclusive figure included in the 2009 Canadian G.I. Joe Convention box set. The set was limited to 100, with all figures done in a 25th Anniversary-style design. Back-Stop's primary military specialty is armor, and his secondary military specialty is mechanized infantry. As a youth playing in junior league hockey in Canada, he injured so many opposing players that his family had to move to the United States to escape angry parents. He grew up in Detroit, where he boxed in the Golden Gloves until he was barred from competing; he also spent two years as his high school's undefeated wrestling champion when no one would challenge him. After a short demolition derby career, he found his true calling in the Army and eventually the G.I. Joe Team. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 64 (October 1987). He joins the team when they are in their Utah Quonset hut base. His Persuader tank is used, with an A.W.E. Striker, for transporting some of the Joes who have joined at the same time. He is not informed of the top-secret aspects of the Joe team right away, such as the underground complex and the space shuttle, the USS Defiant. This is because, while the new Joes had their transit orders, their top-secret clearances simply had not caught up to them yet. The shuttle itself, in its own transport, almost crushes the two vehicles. As Back-Stop was not allowed to see the Defiant, Leatherneck moves the vehicles. Back-Stop is seen driving the Persuader to greet a trio of Joe soldiers who had been illegally imprisoned for five months. He next appears as part of a security team, with Cover-Girl, Frostbite, Iceberg and Hawk, sent to the fictional country of Frusenland to help Battleforce 2000. Back-Stop ends up assisting in a firefight against Cobra forces, who had allied themselves with the country's government and attack as soon as the vehicles are literally dropped onto the runway. The 2009 Canadian G.I. Joe convention had a limited edition box set that included a 25th Anniversary-style figure of Back-Stop. The set also included a comic book, limited to 110 copies, produced for attendees. Part of the story featured Back-Stop fighting Cobra allies in Canada. His Persuader tank is destroyed by his adversaries. The use of the trademarked character was approved by Hasbro.
Back-Stop also appears in the British Action Force continuity. Banzai Banzai is the G.I. Joe Team's Rising Sun ninja. His real name is Robert J. Travalino. His primary military specialty is first-strike commando. His secondary military specialty is nunchaku instructor. His birthplace is Hartsdale, New York. Banzai trained with a reclusive ninja master in the hostile mountains of Tibet for some time. He is noted for training while blindfolded. Barbecue Barrel Roll Barrel Roll is the G.I. Joe Team's high-altitude sniper. His real name is Dwight E. Stall, and he was born in Cincinnati, Ohio. Barrel Roll was first released as an action figure in 2003, and is the brother of both G.I. Joe's Bombstrike and Cobra's Black Out. A version of Barrel Roll with no accessories came with the Built to Rule Rising Tide, which followed the G.I. Joe: Spy Troops story line. The forearms and calves of the figure sported places where blocks could be attached. His primary military specialty is marksmanship instructor. His secondary military specialty is fixed-wing aircraft pilot. Barrel Roll pushes himself to practice daily on the sniper range. He is a crack shot, and a skilled HALO jumper and pilot. He can claim the high ground without being spotted, drifting in silently by glider or parachute, and then disappear into the underbrush, sitting absolutely still to align the perfect shot. Barrel Roll appeared in the direct-to-video CGI animated movie G.I. Joe: Spy Troops, voiced by Paul Dobson. Barricade Barricade is the G.I. Joe Team's bunker buster. His real name is Philip M. Holsinger. Barricade was born in Pittsburg, Kansas, and was first released as an action figure in 1992. His 1993 release was part of the Battle Corps line. In 2004, he was released as part of a Toys R Us exclusive "Anti-Venom Task Force" set. The story behind the Anti-Venom Task Force is that they are G.I. Joe's response to Doctor Mindbender and Cobra Commander turning civilians into dangerous monsters. His primary military specialty is bunker busting, i.e. penetrating hard targets. His secondary military specialty is driver of the "Badger" vehicle. Barricade is also explicitly trained to fight enemy agents in city and urban areas. Bazooka Beach Head Big Ben Big Ben is the G.I. Joe Team's SAS fighter. His real name is David J. Bennett, and his rank is that of Staff Sergeant. Big Ben was born in Burford, England, and was first released as an action figure in 1991. The figure was repainted and released in 1993 as part of the "International Action Force" mail-in offer. Other repainted releases came in 2000, packaged in a two-pack with Whiteout, and two different versions in 2002, packaged in a double-pack with an Alley Viper figure. Big Ben received training at Bradbury Barracks in Hereford, before becoming a cadre member at the NATO Long Range Recon Patrol School in West Germany. He is a member of the 22nd Regiment of the British Special Air Service, on his second assignment with the G.I. Joe Team as part of a temporary exchange program between American Special Forces and the British SAS. His primary military specialty is infantry, with a secondary of subversive operations. In the Marvel Comics G.I. Joe series, he assists the Joes in defending Destro when the allied group is entrenched in Destro's Trans-Carpathian castle. He also appears in issue #137. In the Devil's Due G.I. Joe series, he assists the Joes when they invade Cobra Island to interfere in their second civil war. Big Ben appeared in the DiC G.I.
Joe cartoon, voiced by Maurice LaMarche. Big Brawler Big Brawler is the code name of Brian K. Mulholland. He is the G.I. Joe Team's jungle mission specialist, and was first released as an action figure in 2001. A new version with red hair was released in 2003, in a Tiger Force five-pack exclusive to Toys R Us stores. His specialties are counter-intelligence and espionage, and he is a master of both psychological warfare and hand-to-hand combat. In response to terrorist attacks orchestrated by the Cobra Organization, Big Brawler transferred from Army Intelligence to the G.I. Joe Team. Big Lob Big Lob is a former basketball player who speaks in sports commentator jargon. His real name is Bradley J. Sanders, and he was born in Chicago, Illinois. Big Lob first appeared in G.I. Joe: The Movie, voiced by Brad Sanders. He is established as a member of the "Rawhides", a group of new Joe recruits (including Lt. Falcon, Chuckles, Jinx, Law & Order and Tunnel Rat) trained by Beach Head. Big Lob had no action figure or comic book counterpart until 2010, when his figure became available as a G.I. Joe Club exclusive. He was listed as a reserve member of G.I. Joe during the America's Elite comic series, and is seen on a map as having been deployed as part of the Joes' efforts to battle Cobra Commander's forces worldwide during the "World War III" storyline. Blast-Off Blast-Off is the G.I. Joe Team's flamethrower. His real name is Jeffrey D. Thompson, and he was born in Kirkwood, Missouri. Blast-Off was first released as an action figure in 1993, as part of the "Mega Marines" subset. The Mega-Marines are several Joes teaming up to battle Cobra-allied monsters. His figure came with "moldable bio-armor". His primary military specialty is flamethrower. His secondary military specialty is firefighter. He is recruited into the G.I. Joe Team from his firefighting job, after he single-handedly put out an entire forest fire. When it is discovered that the "Mega-Monsters", a recently emerging threat, are vulnerable to fire, Blast-Off is assigned to the "Mega-Marine" team under the command of Gung-Ho. His other squad-mates are Clutch and Mirage. Blizzard Blizzard is the G.I. Joe Team's arctic attack soldier. His real name is Patch Kelly, and his rank is that of Sergeant First Class E-7. Blizzard was born in Wolfeboro, New Hampshire (spelled "Wolfboro" on the action figure's file card), and was first released as an action figure in 1988. In 1991, he was one of six exclusive European releases under the "Tiger Force" line. In 1997, he was released as part of an "Arctic Mission" triple pack with Iceberg and Snow-Job. Blizzard's primary military specialty is Arctic warfare training instructor, and his secondary military specialty is infantry. Blizzard led an experimental security team based at Thule, Greenland, for an entire winter, whose objective was to determine what kind of training and conditioning worked best to prepare trainees for combat in Arctic conditions. Blizzard's team found that training and conditioning had little effect, as only the hardest and meanest men made it through the course – of which Blizzard was the hardest and meanest. He is noted by his teammates as being difficult to work with, though his success record makes up for it. Blizzard is featured as a playable character in the 1991 G.I. Joe video game created for the Nintendo Entertainment System. His special power is being able to fire weapon-shots through walls. Blowtorch Breaker Budo Budo is the G.I. Joe Team's samurai warrior.
His real name is Kyle A. Jesso, and his rank is that of sergeant E-5. Budo was born in Sacramento, California, and was first released as an action figure in 1988. Budo's primary military specialty is infantry, and his secondary military specialty is hand-to-hand combat instructor. Budo's father was an orthodontist in Oakland, his grandfather a farmer in Fresno, his great-grandfather a track-worker on the Rocky Mountain Line, and his great-great-grandfather a fencing master in one of Japan's last great samurai warrior families. Budo was given the family swords on his eighteenth birthday, as well as a haiku written by his ancestor. Budo has a fifth-degree black belt in Iaidō, and similar rank in Karate, Judo, and Jujutsu. He has an affinity for his chopped, panhead Harley and for heavy metal music. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 82. He has just joined the team, along with Repeater and Lightfoot. Their veteran instructor, Grand Slam, is injured, leaving the three to defend a weapons depot from enemy forces. In the Devil's Due series, Budo has an interrupted romantic relationship with the ninja Jinx. He also spends some time undercover, infiltrating and partially converting a Japanese businessman's private army. Gung-Ho and Wild Bill assist in this mission. His efforts save Japan from a military takeover. Bullet-Proof Bullet-Proof is the G.I. Joe Team's Drug Elimination Force leader. His real name is Earl S. Morris. He was born in Chicago, Illinois, and was first released as an action figure in 1992, as part of the DEF (Drug Elimination Force) line. He was released in 1993 with the Battle Corps line. In addition to leading the G.I. Joe DEF, he is also an official U.S. Marshal. Before being assigned to the G.I. Joe team, he served with the Drug Enforcement Administration in the Caribbean, the "Golden Triangle" and Central America. His code name came from his enemies, who observed how he remained unscathed while leading his men through firefights. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 124. He also appeared in No. 125 and No. 127. As part of the DEF, he helps eliminate the drug trade from the town of Broca Beach, without realizing the entire town was a Cobra front. The DEF also confront the enemy operatives Headman and his Headhunters. Bullet-Proof appeared in the DiC G.I. Joe animated series. Bullhorn Bullhorn is the G.I. Joe Team's hostage negotiator. His real name is Stephen A. Ferreira. He was born in Providence, Rhode Island, and was first released as an action figure in 1990. Version 2 was released in 2008 for the International G.I. Joe Convention held in Dallas, Texas. It came with the transport called "S.W.A.T. R.T.V.", produced in conjunction with the Official G.I. Joe Collectors' Club. Bullhorn taught hand-to-hand combat at the F.B.I. Academy in Quantico, Virginia. He is also a contender for the national practical pistol title, another skill practiced at the Academy. He is noted as being a reckless driver. Bullhorn appeared in the DiC G.I. Joe animated series, voiced by David Willis. Bushido Bushido is the G.I. Joe Team's snow ninja. His real name is Lloyd S. Goldfine. His primary military specialty is cold weather specialist. His secondary military specialty is strategist. His birthplace is Hollis, Queens, New York. He has trained in Iceland and continues to prefer to train in cold weather environments. He wears a helmet similar to the one his father wore.
He considers fellow Ninja Force member Banzai his 'blood brother'. Captain Grid-Iron Captain Grid-Iron is the G.I. Joe Team's hand-to-hand combat specialist. His real name is Terrence Lydon, and his rank is that of captain O-3. Captain Grid-Iron was born in Evergreen Park, Illinois, and was first released as an action figure in 1990. His release continued Hasbro's tradition of issuing a sports-themed figure each year, which began with Bazooka in 1985. A recolored version was also released in India. Grid-Iron was quarterback for the West Point football team. He graduated in the top ten of his class. He passed up an appointment to the U.S. Army War College for a conventional infantry command at the company level. His determination to be "where the action is" brought him to the attention of G.I. Joe. According to his file card, his personality is grating, but tolerable. The other Joes think that if he would stop trying so hard to be likable, "they might let him play quarterback at the annual G.I. Joe Fish Fry Football Game!" Grid-Iron makes a single-panel appearance in issue No. 130 of the Marvel Comics series. He is seen defending G.I. Joe headquarters from Cobra attack. Years later he appears on the cover of the Devil's Due series America's Elite No. 25. He is listed as a reservist in Special Missions: Manhattan. In America's Elite #28, he is listed as fighting in the Sudan. Captain Grid-Iron's most significant appearances were in the first season of the DiC G.I. Joe animated series, voiced by Dale Wilson. His speech was peppered with football terminology. He was in charge of the team in the absence of General Hawk and Sgt. Slaughter, and took orders from both of them when they appeared. Grid-Iron was absent for most of the second season, but was featured in the second-season episode "Metal-Head's Reunion," which revealed that Grid-Iron and the Cobra officer Metal-Head both attended the same school. Captain Grid-Iron is featured as a playable character in the 1991 G.I. Joe video game created for the Nintendo Entertainment System. Chameleon Chameleon is the illegitimate half-sister of the Baroness; she infiltrated the Cobra organization by assuming the Baroness' role. She serves as a secret agent and intelligence officer for G.I. Joe. She was introduced to the toyline when Hasbro lost the trademark to the Baroness' name. Charbroil Charbroil is the G.I. Joe Team's flamethrower. His real name is Carl G. Shannon, and his rank is that of corporal E-4. Charbroil was born in Blackduck, Minnesota, and was first released as an action figure in 1988. The figure was repainted and released as part of the "Night Force" line in 1989, packaged with Repeater. In 2004, he was part of a Toys R Us exclusive "Anti-Venom Task Force", a G.I. Joe response team to enemy agents turning civilians into monsters. Charbroil had a new sculpt in 2009, as part of the line released for the G.I. Joe: The Rise of Cobra movie. Charbroil's primary military specialty is flame weapons specialist, and his secondary military specialty is small arms armorer. One of his childhood chores was to heat the water pipes in his family's basement with a blowtorch in the winter to keep them from freezing and bursting. As a teenager, his job was to feed coal into the blast furnaces in the mills on the Great Lakes. As such, when he was recruited into the Army he requested a job dealing with open flames. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 80 (November 1988).
He is part of a Joe effort to stop Cobra from claiming new territory forming near Cobra Island. The land mass eventually sinks on its own. In Special Missions #21, Charbroil is part of a G.I. Joe squad sent to investigate Cobra activity in the sewers of New York City, along with Airtight, Spearhead & Max and Tunnel Rat. In the Devil's Due G.I. Joe series, Charbroil is one of the many Joes called back into service to fight The Coil, a new army formed by the former Cobra agent Serpentor. This mission again focuses on Cobra Island. Chuckles Claymore Clean-Sweep Clean-Sweep is the G.I. Joe Team's Anti-tox trooper. His real name is Daniel W. Price, and he was first released as an action figure in 1991, as part of the Eco-Warriors line. He is a U.S. Army sergeant, and he was born in Elizabeth, New Jersey. He is a chemical operations specialist and combat engineer. He is often called in to use his remote control devices to clean up Cobra chemical spills; the problem is that Cobra soldiers are often still around. His primary offensive weapon is a laser pistol. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 123. He becomes part of the "Eco-Warriors", assigned to stop environmental threats. Together with the team leader Flint and Ozone, he confronts the Cobra agent Cesspool, who was causing pollution from an abandoned oil platform. Clean-Sweep appeared in the DiC G.I. Joe animated series. Cloudburst Cloudburst is the G.I. Joe Team's glider trooper. His real name is Chuck Ram. He was born in San Diego, California, and was first released as an action figure in 1991, as part of the Air Commandos line. As a teenager, he designed and built his own working prototype gliders. After joining the Army, he helped develop stealth gliders for troop insertion and recon. He is now on special assignment to the G.I. Joe team as their in-house glider specialist. He is noted for constantly working on his equipment, because he knows his services are called upon only as a last resort. In the Marvel Comics G.I. Joe series, he is mentioned by name in issue No. 118, but not seen. Cloudburst appeared in the DiC G.I. Joe animated series. Clutch Cold Front Cold Front is the G.I. Joe Team's Avalanche driver. His real name is Charles Donahue. He was born in Fort Knox, Kentucky, and was first released as an action figure in 1990, packaged with the "Avalanche" arctic tank/hovercraft. This vehicle should not be confused with the G.I. Joe Battleforce 2000 character also called Avalanche. Cold Front's primary military specialty is Avalanche driver, and his secondary military specialty is fire control technician. He grew up close to the weapons testing facilities at Fort Knox, hearing the sounds of the M-80 tanks. This inspired a lifelong love of tanks. Self-taught strategy and his affinity for military vehicles got him an assignment to the 3rd Armored Division when he enlisted in the Army at the age of eighteen. From the Army, he was reassigned to the G.I. Joe "Arctic Patrol". From there, he was picked by General Hawk to drive the Avalanche. He is noted for his poor treatment of civilian vehicles. Colonel Courage Colonel Courage is the G.I. Joe Team's strategic commander. His real name is Cliff V. Mewett, and he was born in Boston, Massachusetts. Colonel Courage was first released as an action figure in 1993, as part of the Battle Corps line. The name Cliff V. Mewett had been used a few years earlier for the character Airwave, though Airwave is Caucasian and was born in a different city.
A Brazil variant of Colonel Courage has him as a Caucasian. His primary military specialty is administrative strategist. His secondary military specialty is Patriot driver. He is often assigned to intelligence tasks behind the lines and behind a desk, partly due to his attention to detail. This also translated into a noted tendency to dress well, something he tries to pass on to those he commands. Countdown Countdown is one of the G.I. Joe Team's astronauts. His real name is David D. Dubosky, and his rank is that of Captain, USAF O-3. Countdown was born in Plainfield, New Jersey, and was first released as an action figure in 1989. The figure was repainted and released as part of the Star Brigade line in 1993, and again in 1994. Countdown's primary military specialty is astronaut/fighter pilot, and his secondary military specialty is electronics engineer. He is a qualified F-16 fighter pilot, a NASA astronaut, an electronics engineer, and even a ranking chess master. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 109, and again in No. 110. He takes part in a mission that launches a Joe vehicle into orbit and then into the fictional country of Trucial Absymia. The mission, which succeeds, is to rescue the survivors of a Joe squad that has suffered many fatalities. Cover Girl Crankcase Crankcase is the G.I. Joe team's A.W.E. Striker driver. His real name is Elwood G. Indiana, and he was born in Lawrence, Kansas. Crankcase's primary military specialty is motor vehicle driver, and his secondary military specialty is armor. He was first released as an action figure in 1985, packaged with the A.W.E. Striker vehicle. Crankcase first appeared in G.I. Joe: A Real American Hero #44 (February 1986), but is among several Joes killed in action by a SAW Viper in issue #109. Crazylegs Cross-Country Cutter Daemon Daemon is the code name of Jeff Lacefield. He was born in Cincinnati, Ohio, and developed an interest in computers at an early age. By the time he graduated from college at age 21, he had become a skilled computer programmer, and had started to develop computer viruses in his spare time. When one of these viruses was inadvertently set loose in the FBI's central computer system, he was tracked down and arrested. However, the Feds saw value in his abilities as a programmer, and instead of being sent to federal prison, Daemon was appointed to the reinstated G.I. Joe task force, to help them thwart the top-secret nano-mite technology that had been stolen from the U.S. Army by Cobra. Daemon is killed in the Devil's Due G.I. Joe series, when his neck is snapped by Serpentor during a battle with The Coil. Dart Dart is the G.I. Joe Team's pathfinder, and he was first released as an action figure in 2002. His real name is Jimmy Tall Elk, and his rank is that of sergeant E-6. Dart was born in White Earth, Minnesota. Dart's primary military specialty is recon, and his secondary military specialty is infantry. He was a hunting guide in Minnesota before joining the G.I. Joe team. Dee-Jay Dee-Jay is the code name of Thomas R. Rossi III. He was born in Providence, Rhode Island, and was the most popular DJ in Boston before he signed up for Battleforce 2000. His primary military specialty is radio telephone operator, and his secondary military specialty is infantry. Dee-Jay was first released as an action figure in 1989. Dee-Jay appeared only in issue #113 of the Marvel Comics G.I. Joe series, and was killed in that same issue. Deep Six Depth Charge Depth Charge is the G.I.
Joe Team's underwater demolitions expert. His real name is Nick H. Langdon, and he was born in Pittsburgh, Pennsylvania. He was first released as an action figure in 2003. He specializes in clearing mines and other devices in the water. Despite having some of the best scores in the history of the UDT program and loving his job, he hates water. Dial Tone Doc Dogfight Dogfight is the G.I. Joe Team's Mudfighter pilot, and he was first released as an action figure in 1989, packaged with the Mudfighter bomber. His real name is James R. King, and his rank is that of 1st Lieutenant, USAF O-2. Dogfight was born in Providence, Rhode Island. Dogfight's primary military specialty is Mudfighter pilot, and his secondary military specialty is electronics technician. The combination of his uncanny depth perception, precise hand/eye coordination, and powerful throwing arm got him permanently banned from every county fair and carnival in Alabama for winning too many stuffed bears. He now uses those same skills to destroy Cobra's vehicles. In the Marvel Comics G.I. Joe series, he first appeared in G.I. Joe Special Missions No. 28. In that issue, Dogfight assists in saving the USS Flagg. In the same issue, he also breaks the "fourth wall" as part of a group addressing the reader. Later, Dogfight is the co-pilot for Ace during a recon mission over the supposedly friendly skies of Benzheen. Their craft is shot up off-panel by a Cobra Rattler. They escape to the awaiting aircraft carrier, the USS Flagg. Dogfight urges Ace to punch out. Ace does not, because he knows Dogfight's ejection system is shot to pieces, and he could not live with knowing he had abandoned his co-pilot. In the same issue, the pilots Slip-Stream and Ghostrider take another flight over Benzheen in a Stealth Fighter. Ghostrider, and later Hawk, both refer to Slip-Stream as Dogfight. Dogfight also appears in the America's Elite G.I. Joe series from Devil's Due. He is part of a small group of Joe pilots sent to assist European military forces. Despite expectations, they survive the mission. He also witnesses Iron Grenadier pilots suffering aircraft malfunctions. Dojo Dojo is the code name of Michael P. Russo. He was born in San Francisco, California. Impressed by his skills and integrity, Storm Shadow recruited Dojo for G.I. Joe's new sub-team, Ninja Force. He is noted for using "patter" to distract his opponents. He also prefers to drive the G.I. Joe vehicle "Brawler". Double Blast Double Blast is a heavy machine gunner for the G.I. Joe Team. He was named after Charles L. Griffith (a real-life G.I. Joe collector), and was released as an action figure in 2001. Double Blast was created to replace Roadblock when Hasbro temporarily lost the trademark to his name. He is known for his ability to assemble, disassemble, and reassemble a weapon in less than 60 seconds in the dark. Downtown Downtown is the G.I. Joe Team's mortar man, and he was first released as an action figure in 1989. His real name is Thomas P. Riley, and his rank is that of corporal E-4. Downtown was born in Cleveland, Ohio. Downtown's primary military specialty is infantry, and his secondary military specialty is special operations. Downtown can keep up with a highly mobile, rapid strike force like G.I. Joe with his high-powered mortar, whereas slow, ponderous artillery cannot. He can judge range and trajectory by eyesight alone. In the America's Elite G.I.
Joe series from Devil's Due, Downtown is one of the many Joes to take part in the second Cobra civil war, which again takes place on Cobra Island. Drop Zone Drop Zone is the G.I. Joe Team's Sky Patrol weapons specialist. His real name is Samuel C. Delisi, and he was born in Poteau, Oklahoma. Drop Zone was first released as an action figure in 1990, as part of the "Sky Patrol" line. He is also a Special Forces adviser. He is noted for volunteering for every dangerous assignment and deeply enjoying his job. Drop Zone appears in the DiC G.I. Joe cartoon, voiced by Don Brown. Duke Dusty Effects Effects is the G.I. Joe Team's explosives expert, and he was first released as an action figure in 1994, as part of the Star Brigade line. His real name is Aron Beck. Effects was born in Fort Worth, Texas. His primary military specialty is explosives/munitions ordnance. His secondary military specialty is special effects coordinator. He uses visual distractions to draw attention away from targets he then destroys. Fast Draw Fast Draw is the G.I. Joe Team's mobile missile specialist, and he was first released as an action figure in 1987. His real name is Eliot Brown, and his rank is that of corporal E-4. Fast Draw was born in Collierville, Tennessee. Fast Draw's primary military specialty is ordnance, and his secondary military specialty is clerk typist. Fast Draw carries the FAFNIR (Fire and Forget Non-tube-launched Infantry Rocket) missile system, and wears a protective suit to shield him from hot exhaust gases. The FAFNIR target acquisition and homing devices are self-contained within the missile, which allows the operator to move and take cover immediately after launch. These missiles are extremely fast, and resistant to ECM jamming. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 60. Along with Chuckles, Falcon, and Law and Order, he is part of a faux G.I. Joe team being used by others for political gain. After the "new" Joes assist Hawk in battling several Dreadnoks, they are made official members of the team. The conflict had been over a rogue US military faction trying to use a high-tech missile to destroy Cobra Island. He is spotlighted in a later incident, destroying Cobra tanks threatening his fellow soldiers. Firewall Firewall is the code name of Michelle LaChance. She was born in Virginia Beach, Virginia, and learned early on that she had a knack for computers. In high school, she figured out how to access protected school records and alter grades. This eventually led to hacking government systems and classified military computers, which landed her in federal prison. But her handiwork impressed enough people that she was sent to the G.I. Joe Team under the supervision of Mainframe. There, she received basic military training, and has since been a loyal member, though she is not a field operative. Firewall was instrumental in developing a counter-program to thwart the top-secret nano-mite technology that was stolen from the U.S. Army by Cobra. Flash Flint Footloose Freefall Freefall is the G.I. Joe Team's paratrooper, and he was first released as an action figure in 1990. He had a 2009 re-release as "Spc. Altitude", but is the same character. This latter release was part of the "Assault On Cobra Island" box set, which included the figures Chuckles, Hit and Run, Outback, Recondo, Wet-Suit and Zap. Freefall's real name is Phillip W. Arndt, and he was born in Downers Grove, Illinois.
To prepare for Airborne Ranger school, he went through the Ranger Indoctrination Course, designed to wash out forty percent of the applicants. Freefall then had to complete a three-week pre-training course simply to qualify for the full eight-week training course. He is noted for having enjoyed it, and for coming out the best of his Ranger class. Freefall has a master's degree in Eastern Philosophy. He is known for having a large ego. Freefall appeared in the DiC G.I. Joe animated series, voiced by Scott McNeil. Fridge The Fridge is the code name used by football player William Perry. He was born in Aiken, South Carolina. During his time as a member of the NFL's Chicago Bears football team, Perry worked with G.I. Joe as a physical training instructor. Though he was one of many Joes listed on the World War III member assignment map in America's Elite No. 28, The Fridge was unavailable during that conflict. Frostbite General Joseph Colton General Flagg General Philip Rey General Philip Rey was introduced in the Devil's Due G.I. Joe series. His real name is Philip A. Rey, and he emerged from seemingly nowhere to become the field commander of the G.I. Joe Team. It was later revealed that Rey is one of the dozen original clones that were produced during Cobra's development of Serpentor. Dr. Mindbender altered Rey's growth patterns and features to hide his connection to the Cobra Emperor. Additionally, Crystal Ball helped construct Rey's personality, and Zandar helped insert him as a U.S. military general, to make him Cobra's most insidious sleeper agent. Unexpectedly, Rey's years of service and his time with G.I. Joe helped him shake off Cobra's control, and he refused to betray his countrymen, despite deeply implanted hypnotic triggers. Rey's past remains classified, known only to a handful of Joes. Ghostrider Ghostrider is the G.I. Joe Team's stealth fighter pilot, and he was first released as an action figure in 1988, packaged with the Phantom X-19 Stealth Fighter. His real name is Jonas S. Jeffries, and his rank is that of Major, USAF O-4. Ghostrider was born in Chicago, Illinois. Ghostrider's primary military specialty is stealth fighter pilot, and his secondary military specialty is aeronautical engineer. Ghostrider has been working at not being noticed since the second grade; teachers never noticed him because he conscientiously worked at it. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 76. There he is one of the many Joes to participate in the first Cobra civil war on Cobra Island. He is featured in issue #16 of G.I. Joe Special Missions. He later spends a week with Scarlett, helping to establish a Stealth Fighter base in South America. It is destroyed in a raid orchestrated by Cobra Commander and Darklon. Ghostrider manages to lift off, and assists in saving the aircraft carrier USS Flagg and the space shuttle USS Defiant. Also in the battle, on the side of the Joes, is the pilot Dogfight in his own craft. Later, Ghostrider and Slip-Stream, working off the USS Flagg, run a recon mission over the fictional country of Benzheen. Rampart and Backblast save the duo by shooting down a Cobra Rattler. As with his other appearances, Ghostrider accepts that nobody can remember his code name. While the mission succeeds, the Stealth Fighter is a complete loss. For most of the issue, Slip-Stream is referred to as "Dogfight", the pilot who had survived an earlier wreck onto the Flagg in the same issue. A running gag throughout the Marvel G.I.
A running gag throughout the Marvel G.I. Joe comic series was that Ghostrider's name was never actually said by any other Joes, or even used in narration. In reality, this was done to avoid any potential issues with Marvel's own Ghost Rider, even though the G.I. Joe character's name is spelled differently, as one word.
Grand Slam
Grunt
Gung-Ho
Hardball
Hardball is the G.I. Joe Team's multi-shot grenadier. His real name is Wilmer S. Duggleby, and his rank is that of corporal E-4. Hardball was born in Cooperstown, New York, and was first released as an action figure in 1988. Hardball's primary military specialty is infantry, and his secondary military specialty is special services. Hardball played centerfield in the minor leagues for five seasons before he realized that the big league scouts were looking for star quality over athletic prowess. The G.I. Joe Team, however, was looking for team players, and had a need for someone who could judge distances accurately and react quickly with deliberation. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 80 (November 1988). Hardball is later selected as one of the many Joes to help protect the President of the United States. His skills are vital to rescuing the President after he is kidnapped by Cobra forces. He later mans a machine-gun turret in the Joe vehicle called "The Mean Dog" that had been headed out to a weapons testing range. Hardball, Repeater and Wildcard assist in a running battle against Dreadnoks, who are trying to capture two other Joes, Clutch and Rock 'n Roll. In the Devil's Due series, the Red Shadows, a Cobra splinter group, wage a campaign against the Joes. While on assignment in South America, Hardball (along with Rampart and Glenda) is killed by the Red Shadows.
Hard Drive
Hard Drive is the G.I. Joe Team's battlefield computer specialist. His real name is Martin A. Pidel, and he was first released as an action figure in 2004.
Hardtop
Hardtop is the designer and driver of the G.I. Joe Team's Crawler. His real name is Nicholas D. Klas, and his rank is that of sergeant E-5. Hardtop was born in Chicago, Illinois, and was first released as an action figure in 1987, packaged with the Defiant space vehicle launch complex. In 2004, he was released as part of the "40 Years of Adventure" Tiger Force Box Set, at the 2004 G.I. Joe Convention in Orlando, Florida. Hardtop's primary military specialty is heavy equipment operator, and his secondary military specialty is electronics. He is a man known for getting the job done without questions; for example, moving the Crawler to the top of a mountain. He is known for being quiet, as talking is not one of his priorities. Budget cuts later force the closing of the G.I. Joe space shuttle program. Hardtop continues to work with the team as a heavy equipment operator, and also becomes their liaison to the National Space Agency. Due to later developments with fuel cells, he is one of Cobra Commander's most wanted prisoners. In the Marvel Comics G.I. Joe series, he first appeared, with Payload, in issue No. 64 (October 1987). In that issue, he almost crushes Crankcase's A.W.E. Striker vehicle and Back-Stop's Persuader tank.
Hawk
Heavy Duty
Heavy Metal
Heavy Metal is the G.I. Joe Team's Mauler M.B.T. tank driver. His action figure debuted in 1985 alongside the Mauler M.B.T. tank. His actual name is Sherman R. Guderian (a combination of the Sherman tank and German general Heinz Guderian). Heavy Metal was born in Brooklyn, New York.
Hi-Tech
Hi-Tech is the G.I. Joe Team's operations support specialist.
His real name is David P. Lewinski, and he was born in St. Paul, Minnesota. Hi-Tech was first released as an action figure in 2004, in a two-pack with Dr. Mindbender. A version of Hi-Tech with no accessories also came with the Built to Rule Patriot Grizzly in 2004. The figure featured additional articulation with a mid-thigh cut joint, and the forearms and the calves of the figure sported places where blocks could be attached. His primary military specialty is armament research and design. His secondary military specialty is telecommunications. Hi-Tech is a technological genius, and is more at home with a soldering gun than an automatic pistol. He can be counted on to repair any computer-controlled device, rewrite computer code on the fly, and enact emergency field repairs, to get the most out of the G.I. Joe Team's cutting-edge arsenal of equipment. Hi-Tech appeared in the direct-to-video CGI animated movies G.I. Joe: Spy Troops and G.I. Joe: Valor vs. Venom, voiced by Mark Hildreth. He also appeared in the animated series for G.I. Joe: Sigma 6, voiced by Eric Stuart.
Hit and Run
Hit & Run is the G.I. Joe Team's light infantryman. His real name is Brent Scott, and his rank is that of corporal E-4. Hit & Run was born in Sioux City, Iowa, and was first released as an action figure in 1988. In 1991, Hit & Run was released in Europe in Tiger Force colors, and he received a 25th anniversary style figure as part of the "Assault on Cobra Island" 7-pack. In the UK Action Force series, Hit and Run's real name is Bryan Scott and he is from Basildon in Essex, England. Hit & Run's primary military specialty is infantry, and his secondary military specialty is mountaineering. He was orphaned at age three by a drunken driver and grew up in a county institution. He escaped from the institution regularly, climbing down sheer walls and running for miles across the plains in the middle of the night. He claimed that he was not running away from anything and was merely "practicing." He joined the Army immediately after leaving the custody of the county. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 80. He assists other Joes in stopping Cobra forces on Cobra Island from claiming a nearby land mass. He later takes part in an attempt to rescue hostages, which turns out to be a Cobra ruse: the terrorists and hostages were all Cobra agents. Later, he deals with a legitimate hostage situation, where an isolated farmhouse is taken over by two criminals, but problems arise when the criminals are initially misidentified. He also joins with Tunnel Rat, Stalker and the rookie Scoop soon after to battle Iron Grenadiers in the fictional country of Sierra Gordo. In the Devil's Due series, he is one of the Joes assigned to invade Cobra Island during their second civil war.
Hollow Point
Hollow Point is a U.S. Marine sniper and the Range Officer of the G.I. Joe Team. His real name is Max V. Corey, and he was born in Quitman, Arkansas. He was first released as an action figure in 2003 with the Built to Rule Locust, which followed the G.I. Joe: Spy Troops story line. The forearms and the calves of the figure sported places where blocks could be attached.
Hot Seat
Hot Seat is the G.I. Joe Team's Raider driver. His real name is Michael A. Provost, and his rank is that of Sergeant First Class E-7. Hot Seat was born in Pawtucket, Rhode Island, and was first released as an action figure in 1989, packaged with the "Raider" 4-track assault vehicle.
Hot Seat's primary military specialty is Raider driver, and his secondary military specialty is drill instructor. He was a boxer and could have been a heavyweight contender; he had a left jab like a jackhammer, reflexes like liquid crystal, and the tactical mind of a 5-star general. When he considered the possibilities of permanent brain damage, he instead opted for the Army and asked for "Anything fast and furious!" In the Marvel Comics G.I. Joe series, he first appeared in issue No. 105. He works with other Joes, the Oktober Guard, and the Tucaro Indian soldiers, long-time Joe allies, in battle against Destro's Iron Grenadiers.
Ice Cream Soldier
Ice Cream Soldier is the G.I. Joe Team's flamethrower commando. His real name is Tom-Henry Ragan, and his rank is that of sergeant E-5. Ice Cream Soldier was born in Providence, Rhode Island, and was first released as an action figure in 1994, as part of the "Battle Corps" line. The entire mold was re-used in 2002 for the Shock-Viper figure. His primary military specialty is fire operations expert. His secondary military specialty is barbecue chef. His code name is designed to cause enemy troops to underestimate him. His equipment is capable of delivering streams of flame up to seventy-five feet.
Iceberg
Iceberg is the G.I. Joe Team's snow trooper. His real name is Clifton L. Nash, and his rank is that of sergeant E-5. Iceberg was born in Brownsville, Texas, and was first released as an action figure in 1986. A new version of Iceberg was released in 1993 as part of the Battle Corps line. His primary military specialty is infantry, and his secondary military specialty is cold weather survival instructor. Iceberg hates hot weather; when he signed up for the Army, he asked for duty in Alaska. He is a qualified expert in the M-16A2, M-79, M-60, and M-1911A1. In the Marvel Comics G.I. Joe series, he first appeared in issue #68, in which he is part of a team sent in to provide security for Battleforce 2000 in Frusenland. In the Sunbow G.I. Joe cartoon, Iceberg is a supporting character in the 1986 second season, and is featured in the episode "Iceberg Goes South", in which he is captured by Dr. Mindbender and mutated into a killer whale, but is restored to human form.
Jinx
Kamakura
Keel-Haul
Keel-Haul is the G.I. Joe Team's Admiral, and was first released as an action figure in 1985, as commander of the aircraft carrier. The figure was repainted and released as part of the "Battle Corps" line in 1993. His real name is Everett P. Colby, and he was born in Charlottesville, Virginia. Keel-Haul's rank is that of O-9 (Vice Admiral, USN). He is the highest-ranking G.I. Joe officer outside of General Joseph Colton (O-10), outranks General Hawk by two pay grades, and serves as head of the Joe team when they operate out of the Flagg. Keel-Haul's primary military specialty is command, and his secondary military specialty is piloting. He graduated from Annapolis and Navy Flight School, and flew F-4 Phantoms off the Intrepid in the late 1960s. He attended the Naval War College in Newport, RI and the Armed Forces Staff College, and is a holder of the Navy Cross, DFC and Air Medal. He is a respected military historian, a nationally-rated chess player, and a clarinet player of questionable talent. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 36 (June 1985), in a cameo appearance rescuing seemingly stranded Joes. Keel-Haul and the USS Flagg serve as support in the first assault on Cobra Island.
Later, Keel-Haul suggests using a captured Cobra "MAMBA" helicopter to insert a recon team onto Cobra Island during the Cobra civil war. Keel-Haul also takes part in the conflict referred to as the "Battle of Benzheen". In the Devil's Due series, he serves as naval support in the second Cobra Island civil war. Later, he assists a Joe team in neutralizing a Cobra submarine armed with a nuclear device. Keel-Haul saves Wet-Suit from death after the sub-infiltration goes badly. Keel-Haul will be appearing in G.I. Joe: Ever Vigilant.
Lady Jaye
Law and Order
Leatherneck
Lifeline
Lift-Ticket
Lift-Ticket is the G.I. Joe Team's rotary-wing aircraft pilot, and his secondary military specialty is fixed-wing aircraft pilot. His real name is Victor W. Sikorski, and his rank is that of chief warrant officer CW-2. Lift-Ticket was born in Lawton, Oklahoma. He joined the Army to get out of his hometown, scoring high enough on the aptitude test to qualify for West Point Prep., O.C.S., and Flight Warrant Officer School. He opted for the latter, thinking that it was the only one which offered training applicable to civilian employment. Lift-Ticket was first released as an action figure in 1986, packaged exclusively with the Tomahawk. In the Marvel Comics G.I. Joe series, he first appeared in G.I. Joe: A Real American Hero No. 49 (July 1986). He is seen transporting several Joes to the American town of Springfield, which was a Cobra stronghold. In the Sunbow animated series, he was often partnered with Lifeline. He had brief appearances in G.I. Joe: The Movie and in the G.I. Joe: Renegades episode "Prodigal", where he was voiced by Charlie Schlatter.
Lightfoot
Lightfoot is the G.I. Joe Team's explosives expert. His real name is Cory R. Owens, and his rank is that of corporal E-4. Lightfoot was born in Wichita, Kansas, and was first released as an action figure in 1988. The figure was repainted and released as part of the Night Force line in 1989, packaged with Shockwave. Lightfoot's primary military specialty is demolitions, and his secondary military specialty is artillery coordinator. Lightfoot has memorized all the mathematical tables he found in military manuals for explosives: for calculating amounts of explosives needed, safe firing distances, power requirements for firing circuits, and formulas for cutting structural steel, timber, and breaching various forms of bunker material. He has also memorized all the conversion tables for foreign and non-military explosives, as he doesn't take any chances. In the Marvel Comics G.I. Joe series, he first appears in Special Missions No. 13. He is sent to the Trucial Abysmia desert with the Joes Outback, Dusty, and fellow trainee Mangler. They are captured by local military forces, who torture the Joes' objective out of Lightfoot: they were sent to Africa to destroy a buried weapons cache. Only Mangler is angry that Lightfoot broke. After escaping, the Joes manage to make their way to the cache. Lightfoot, despite his injuries, succeeds in destroying it. Mangler sacrifices himself to allow the others to escape. Lightfoot spends much time recovering from his injuries, and has to go through training again. Despite the real possibility of washing out, he makes it along with the fresh recruits Budo and Repeater. All three are drawn into a mission under the command of Grand Slam. They are defending a weapons cache from Iron Grenadiers. Despite their leader being badly wounded, the Joes complete the mission, killing all they came across.
Lightfoot saves the day with a time-delayed bomb destroying a retreating helicopter. He is one of the few Joes available to protect a space-based laser weapon from Cobra hands, and later assists in fighting "Darklonian" terrorists in New York City. In the Devil's Due continuity, he makes a cameo appearance in G.I. Joe Frontline #18, walking down a hallway in the current G.I. Joe headquarters. He also appears when Cobra Commander makes an attempt on General Hawk's life by bombing the television studio he had appeared in. Lightfoot and Zap are two of the Joes who safely rescue Hawk. In IDW continuity, Lightfoot is part of a mission to Sierra Gordo, intended to rescue several fellow Joes from imprisonment.
Long Range
Low-Light
Lt. Falcon
Mace
Mace is the G.I. Joe Team's undercover operative. His real name is Thomas S. Bowman, and he was first released as an action figure in 1993. Mace was born in Denver, Colorado. His primary military specialty is undercover surveillance. His secondary military specialty is intelligence. Mace has spent years undercover, working against Cobra and other criminal factions. He feeds information to fellow "Battle Corps" members, who then make the resulting raids and arrests.
Mainframe
Major Altitude
Major Altitude is the G.I. Joe Team's Battle Copter pilot. His real name is Robert D. Owens, and he was born in Rumford, Rhode Island. Major Altitude was first released as an action figure in 1991, as part of the Battle Copters line. He came exclusively with the "Battle Copter" vehicle. He was released again in 1993, as part of a mail-in special called "Terrifying Lasers of Destruction". He was packaged with a Cobra agent, another helicopter pilot, called Interrogator. At the age of eleven, he decided he would eventually join the G.I. Joe team, focusing on its flight school branch. Eight years later, he finished Aviator School and Flight Warrant Officer School, and was recruited right into the Joe team. The "Major" does not reflect his rank; it is part of his code-name. He is noted as one of the most skilled pilots in the world. Major Altitude appeared in the DiC G.I. Joe animated series.
Major Barrage
Major Barrage is the G.I. Joe Team's artillery commander. His real name is David Vennemeyer, and he was first released as an action figure in 2005. He is able to take down a squadron in battle and keep fighting.
Major Storm
Major Storm is the G.I. Joe Team's "General" commander. His real name is Robert G. Swanson, and he was born in Providence, Rhode Island. Major Storm was first released as an action figure in 1990, packaged with the General mobile assault fort. His figure was re-released in 2003; this edition was a G.I. Joe Convention exclusive. His primary military specialty is command of the General, a large armored vehicle with multiple types of offensive weaponry. His secondary military specialty is long range artillery officer. He has extensive experience with most armored vehicles in many battlefield situations. Major Storm is noted as the only one who can decipher some of the General's systems. He leads a battlefield operation to discover the source of major sabotage against the General.
Mercer
Mirage
Mirage is the G.I. Joe Team's Bio-Artillery expert. His real name is Joseph R. Baikun, and his rank is that of U.S. Marine Staff Sergeant. Mirage was born in Molson, Washington, and was first released as an action figure in 1993, as part of the "Mega Marines" subset.
The Mega Marines are a subgroup dedicated to fighting the "Mega Monsters". His figure came with "moldable bio-armor", a clay-like substance. Mirage then had two releases in 2002, one in 2003 and another in 2005. The last release came with the remote-controlled toy called the "Hoverstrike". Mirage is an expert in various weapons, and trains other soldiers in their use. He was trained by Roadblock. Mirage appeared in the Devil's Due series. He assists the Joe team in fighting the second Cobra civil war, which, like the first one, is against Serpentor's forces on Cobra Island. He also appears in issues #34–36.
Muskrat
Muskrat is the G.I. Joe Team's swamp fighter. His real name is Ross A. Williams, and his rank is that of corporal E-4. Muskrat was born in Thibodaux, Louisiana, and was first released as an action figure in 1988. The 1988 Target stores exclusive release of Muskrat is a double-pack with Voltar; the packaging text specifies the two characters have a particular hatred of each other. The figure was repainted and released as part of the Night Force line in 1989, packaged with Spearhead. A new version of Muskrat was released in 1993 as part of the Battle Corps line. Muskrat's primary military specialty is infantry, and his secondary military specialty is social services. He spent his youth in the swamp, hunting raccoon, possum, and wild pig, holding his own against poachers, 'gator skinners, moonshiners, chain gang escapees, and smugglers. Ranger School and Jungle Warfare Training Center seemed easy to him after that. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 80. Muskrat is also part of a rescue squad sent into a hot-spot in Southeast Asia to rescue fellow Joes. He is one of many sent in on a Tomahawk helicopter. He has to assist in dealing with Russian gunships, highly explosive extra fuel, and the wounding of several crew members (himself included).
Mutt
Nunchuk
Nunchuk is the codename of Ralph Baducci. His code-name is a variation on the word nunchaku, the character's preferred weapon. He was born in Brooklyn, New York, and studied with a blind sensei in Denver. Nunchuk felt the need for improvement, and moved to San Francisco. He caught the attention of Storm Shadow, who trained him and supervised his acceptance into G.I. Joe's Ninja Force. Nunchuk later moves to training other Joe soldiers in various forms of hand-to-hand combat. He also develops a grudge against the Cobra operative Firefly, angry that the man would use martial arts for evil purposes.
Outback
Ozone
Ozone is the G.I. Joe Team's ozone replenisher trooper. His real name is David Kunitz, and his rank is that of corporal E-4. Ozone was born in Three Mile Island, Pennsylvania, and was first released as an action figure in 1991, as part of the Eco-Warriors line. He had two releases in 1993 and another in 1994. The last three were releases under the Star Brigade subgroup, establishing that the character has traveled into space. Ozone is a specialist in environmental health and various forms of airborne sludge and other harmful chemicals. He carries equipment designed to neutralize these harmful substances while at the same time replenishing the ozone layer. He can do this while wearing a cumbersome environmental suit and fighting Cobra forces. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 123. There, and in the next two issues, he teams with Flint and Clean-Sweep as the "Eco-Warriors" sub-team.
They confront the Cobra operative Cesspool on a seemingly abandoned oil platform. Ozone stops the confrontation by literally bringing in a lawyer. Ozone appeared in the DiC G.I. Joe animated series.
Pathfinder
Pathfinder is the G.I. Joe Team's jungle assault specialist. His real name is William V. Iannotti, and his rank is that of Staff Sergeant E-6. Pathfinder was born in Key West, Florida, and was first released as an action figure in 1990. He also had a release under the "Action Force" line. He had a 2001 release packaged with the A.W.E. Striker vehicle, and in the same year, he had a release with the V.A.M.P. vehicle. Pathfinder's father was a Korean War veteran who taught him the finer points of military reconnaissance. He was not considered too young to learn how to rough it out in the wild swamps of Florida, which enabled him to breeze through much of the Army's jungle training. It came to the point where he was teaching everyone, including the instructors, what jungle survival is all about. Soon thereafter, he received his certification as a jungle assault specialist, and became part of the G.I. Joe Team. Pathfinder is now responsible for leading all covert attacks on Cobra Island. Pathfinder appears in issue No. 24 of the Devil's Due G.I. Joe series. He is one of many Joes called up to fight against the personal army created by Serpentor. In the DiC G.I. Joe animated series, Pathfinder was voiced by Garry Chalk, and was friends with Capt. Grid-Iron and Ambush.
Payload
Payload is the G.I. Joe Team's Defiant pilot. His real name is Mark Morgan, Jr., and his rank is that of Colonel, USAF O-6. Payload was born in Cape Canaveral, Florida, and was first released as an action figure in 1987, packaged with the Defiant space vehicle complex. He was re-colored and released again in 1989, packaged with the Crusader space shuttle. A new version of Payload was released in 1993 as part of the Star Brigade line. That version was re-colored and released again in 1994. In Europe, Payload was released as an interplanetary Cobra soldier. Payload's primary military specialty is astronaut, and his secondary military specialty is fixed-wing pilot. He grew up watching the early space flights blasting off, staring at the flaming boosters through the hurricane fence. He joined the Air Force to make his dream a reality, flying F-4 Phantoms over Southeast Asia for three tours. He signed up for the astronaut training program after returning to the United States. Payload frequently works closely with Hardtop, the specialist who operates the launch facility that transports the Defiant. In the Marvel Comics G.I. Joe series, he first appears in issue No. 64. He heads up a mission to stop Cobra forces from stealing U.S. spy satellites; the mission fails when Cobra destroys the satellites after being prevented from stealing them. Payload then leads a mission to rescue survivors from a G.I. Joe mission to the fictional land of Trucial Abysmia. Payload is featured in the last issue of the "Special Missions" series, where he, Ace and Slipstream are sent to space to test out various surveillance techniques. When he learns G.I. Joe forces are in trouble on land, Payload goes against plan and pilots the Defiant back to Earth. He uses the Defiant's weaponry to neutralize the threat and lands on the USS Flagg aircraft carrier. He later becomes a member of Star Brigade, which also includes Space Shot, Sci-Fi and Roadblock. The Joes team up with the current Oktober Guard to stop an asteroid endangering Earth.
The shuttles for both teams are damaged in the mission, and Payload cannibalizes the Defiant to fix the Russian spacecraft. Both teams safely leave in the latter one. The Defiant is destroyed when the asteroid safely explodes. Payload and Wild Bill rescue several of their fellow pilots from summary execution in an ill-fated mission to Sierra Gordo. Payload and the Defiant play a critical role in the climax of the G.I. Joe novel "Fool's Gold". He works with Sci-Fi and Hawk to destroy a Cobra weapon aimed at Earth. He also is featured in the Little Golden Books "Tower Of Power" G.I. Joe story.
Psyche-Out
Quick Kick
Rampage
Rampage is the code name of Walter A. McDaniel. He was first released as an action figure in 1989, as a replacement for Heavy Metal. He was re-released in 2003, as the G.I. Joe Team's "Split Fire" driver. Rampage once trained alongside Beach Head.
Rampart
Rampart is the G.I. Joe Team's shoreline defender. His real name is Dwayne A. Felix, and his rank is that of U.S. Navy Petty Officer (2nd class). Rampart grew up in New York City, and was first released as an action figure in 1990. Rampart spent his time mastering all the video games he had access to, at home and in the arcade. He put his hand-eye coordination to use in the Navy. In the air defense artillery, Rampart attained the highest combat success ratio in the 7th Fleet for "splashing" enemy aircraft. He joined the Joes directly from the Navy. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 115. He served in the "Battle of Benzheen". He and Backblast maintain a sentry point deep in the Benzheen desert, and destroy a Rattler plane chasing the Joe pilot Ghostrider. In the Devil's Due series, he was killed by Red Shadow agents while on assignment in South America. Rampart appeared in the DiC G.I. Joe animated series, voiced by Ian James Corlett.
Rapid Fire
Rapid Fire is the G.I. Joe Team's fast attack expert. His real name is Robbie London, named after an executive at DiC Animation. Rapid Fire was born in Seattle, Washington, and was first released as an action figure in 1990. He came with a free VHS tape of the G.I. Joe DiC episode "Revenge Of The Pharaohs"; he does not appear in that episode. He specializes in fast-attack maneuvers and sabotage tactics. He is fluent in three languages, has Airborne Ranger training, and is a recipient of the Medal of Honor. He attended the United States Military Academy, commonly known as "West Point", and completed its ten-week Cadet Summer Orientation in only five weeks.
Recoil
Recoil is the G.I. Joe Team's L.R.R.P. (Long Range Recon Patrol, pronounced "Lurp"). His real name is Joseph Felton, and his rank is that of sergeant E-5. Recoil was born in Fashion Island, Washington, and was first released as an action figure in 1989. Recoil's primary military specialty is infantry, and his secondary military specialty is RTO (Radio Telephone Operator). He was a marathon runner and professional bodybuilder before joining G.I. Joe, and his excellent physical shape made him a good candidate to be a "Lurp". His job is to penetrate deep into enemy territory, gather intelligence and extricate himself without being detected, all the while carrying 100 pounds of gear, including rations, radio, weapons, ammo and climbing rope. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 111. Recoil is one of many Joes sent to the fictional country of Benzheen, to battle Cobra influence.
Recoil's patrol group, consisting of Sneak Peek, Dusty, Stalker and Ambush, comes under fire from a group of Cobra soldiers. Sneak Peek is killed, and Recoil and Ambush are injured. In the Devil's Due G.I. Joe series, Recoil is seen as one of the Joes fighting against the 'Coil', the army created by Serpentor. This conflict takes place on Cobra Island.
Recondo
Red Dog
Red Dog is a member of the G.I. Joe Team as one of Sgt. Slaughter's Renegades. His real name is David Taputapu, and his rank is equivalent to that of sergeant E-5. Red Dog was born in Pago Pago, Samoa, and debuted as an action figure in 1987 as part of the Sgt. Slaughter's Renegades three-pack, along with Mercer and Taurus. Red Dog's primary military specialty is infantry. He had a promising career as a barefoot placekicker on an American football team, until a defensive lineman stomped on his big toe. Red Dog gave the lineman a broken helmet and a concussion in return, for which he was suspended for excessive roughness. After a brief career as a stuntman in "B" movies, he was recruited by the G.I. Joe Command for the Sgt. Slaughter's Renegades sub-team. This team has no official status, and its movements and activities are virtually unrestricted. However, this means that they get no credit when they succeed, and everyone denies all knowledge of them when they fail. Red Dog appeared in the animated film G.I. Joe: The Movie, voiced by Poncie Ponce. The Renegades, under Sgt. Slaughter, operate as drill sergeants.
Red Zone
Red Zone is the code name of Luke Ellison. He is the Steel Brigade's urban assault trooper, and was first released as an action figure in 2006. The G.I. Joe Team took an interest in him when he was "a little too enthusiastic for the FBI."
Repeater
Repeater is the G.I. Joe Team's steadi-cam machine gunner. His real name is Jeffrey R. Therien, and his rank is that of Staff Sergeant E-6. Repeater was born in Cumberland, Rhode Island, and was first released as an action figure in 1988. The figure was repainted and released as part of the Night Force line in 1989, packaged with Charbroil. Repeater's primary military specialty is infantry, and his secondary military specialty is heavy weapons. Repeater had twenty years of top-notch field performance in the Army, although he never did well in the garrison. However, out in the bush he is the one who brings the other grunts back home alive. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 82 as part of a training class of potential G.I. Joe recruits. Only he, Lightfoot and Budo become official Joe members. They are taken into battle swiftly and defeat an Iron Grenadier plot to steal valuable weapons, mainly by killing every adversary involved. He is shot and wounded while defending a "Strategic Defense Initiative" installation. He recovers, and soon after is involved in a fight with Cobra and Dreadnok forces on the Atlantic City Freeway. Several years later, he again appeared to be shot and wounded during the defense of The Pit in a surprise Cobra assault on the Joe base.
Rip Cord
Roadblock
Robo-J.O.E.
Robo-J.O.E. is the G.I. Joe Team's jet-tech operations expert. He is a scientist who was injured by Destro during a raid to steal plans for Bio Armor. To save his life, he was transplanted into armor and rebuilt as a cyborg. His real name is listed as Greg D. Scott, which is the same name used for the Lifeline v5 and v6 file cards. Robo-J.O.E. was born in Casper, Wyoming, and was first released as an action figure in 1993, as part of the Star Brigade line.
Robo-J.O.E.'s only comic book appearance was in the large group shot on the cover of America's Elite No. 25.
Rock 'n Roll
Rumbler
Rumbler is the code name of Earl-Bob Swilley. He was first released as an action figure in 1987, packaged as the driver of the "Crossfire" 4WD vehicle.
Salvo
Salvo is the G.I. Joe Team's Anti-Armor Trooper. His real name is David K. Hasle, and he was born in Arlington, Virginia. Salvo was first released as an action figure in 1990, and again in 2005. Both versions have the T-shirt slogan 'The Right of Might'. Salvo's primary military specialty is anti-armor trooper. He also specializes in repairing "TOW/Dragon" missiles. Salvo expresses a deep distrust of advanced electronic weaponry. He prefers to use mass quantities of conventional explosives to overwhelm enemy forces. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 114. There, he fights as part of a large-scale operation against Cobra forces in the fictional country of Benzheen. Steeler, Dusty, Salvo, Rock 'n Roll, and Hot Seat get into vehicle-based combat against the missile expert Metal-Head. He is later part of the on-site Joe team that defends G.I. Joe headquarters in Utah against a Cobra assault. Salvo appeared in the DiC G.I. Joe animated series, voiced by Brent Chapman.
Scanner
Scanner is the codename of Scott E. Sturgis. His primary military specialty is information technology. He first appears in the Devil's Due series. Snake Eyes and Scarlett hide out with Scanner in Iceland, before they are tracked down by Overlord. Scanner is killed in the process of defending the Iceland base, but is instrumental in destroying the base (with Overlord inside) to save his teammates.
Scarlett
Sci-Fi
Sci-Fi is a character from the G.I. Joe: A Real American Hero toyline, comic books and animated series. He is the G.I. Joe Team's laser trooper and debuted in 1986. His real name is Seymour P. Fine, and his rank is that of corporal E-4. Sci-Fi was born in Geraldine, Montana. His primary military specialty is infantry, and his secondary military specialty is electronics. Sci-Fi was released as an action figure in 1986, and repackaged by Hasbro in 1994 as part of the Star Brigade line. In the Marvel Comics G.I. Joe series, he first appears in issue #64 in a brief cameo, and appears fully in #65. He is a supporting character in a five-issue story arc from #145 to #149 as part of the G.I. Joe Star Brigade team. Sci-Fi is a supporting character in the 1986 second season of the Marvel/Sunbow animated series and the 1989 DiC G.I. Joe series, voiced both times by Jerry Houser.
Scoop
Scoop is the G.I. Joe Team's combat information specialist. His real name is Leonard Michaels, and his rank is that of corporal E-4. Scoop was born in Chicago, Illinois, and was first released as an action figure in 1989. In the animated series, his character was a Cobra spy; in the other continuities he is simply a journalist/soldier. His name, occupation and visage were based on real-life NBC News journalist Mike Leonard. Scoop's primary military specialty is journalist, and his secondary military specialty is microwave transmission specialist. He has an advanced degree in journalism, as well as a master's degree in electrical engineering. Scoop could have worked for a network news team, but instead opted for service on the G.I. Joe Team so he could be on the spot when news was being made. In the Marvel Comics G.I. Joe series, he first appeared in G.I. Joe Special Missions No. 23. He is one of a team sent to Sierra Gordo.
Conflict arises because Scoop, while a trained soldier, barely meets G.I. Joe standards, and he interacts badly with his teammates Muskrat, Leatherneck, Hit and Run, Tunnel Rat and Stalker. Scoop defeats an Iron Grenadier in hand-to-hand combat, smashing the man in the head with his treasured video footage. This also saves the life of Tunnel Rat, who had been wounded. Scoop earns the respect of the other Joe soldiers. He later returns to Sierra Gordo to help rescue Joes and the Oktober Guard. Scoop eventually returns to the reformed G.I. Joe team. Scoop appeared in the DiC G.I. Joe animated series, voiced by Michael Benyaer. Scoop was recruited by Sgt. Slaughter for his "Marauders" sub-team, and was suspected of being a Cobra spy. In the episode "Operation: Dragonfire", Scoop confesses that he is in fact a Cobra spy, and he is placed under arrest by Low-Light. Stalker frees Scoop when convinced he is no longer working for Cobra, after discovering Cobra lied about the Joes destroying his family home. Scoop then spies on Cobra for the Joes. Scoop appears as a non-playable character in the G.I. Joe arcade game.
Sgt. Hacker
Sgt. Hacker is the G.I. Joe Team's information retrieval specialist. His real name is Jesse E. Jordan, and he was first released as an action figure in 2003. He is a computer specialist from Fort Leonard Wood.
Sgt. Slaughter
Sgt. Stone
Shipwreck
Shockwave
Short-Fuze
Sideswipe
Sideswipe is the code name of Andrew Frankel. He is the G.I. Joe Team's medical specialist, and was released as an action figure in 2002.
Sidetrack
Sidetrack was originally the code name of Sean C. McLaughlin. He was the G.I. Joe Team's wilderness survival specialist, and was released as an action figure in 2000. Sidetrack was then used as the code name of John Boyce in 2002. He was a ranger for the G.I. Joe Team, and a former professional wrestler. Boyce was killed by a trap laid out by the Cobra hunter Shadow Tracker in a mini-comic published by the G.I. Joe Collectors' Club.
Skidmark
Skidmark is the G.I. Joe Team's Desert Fox driver. His real name is Cyril Colombani, and his rank is that of corporal E-4. Skidmark was born in Los Angeles, California, and was first released as an action figure in 1988, packaged with the "Desert Fox" 6WD jeep. Skidmark's primary military specialty is fast attack vehicle driver, and his secondary military specialty is infantry. As a kid, he was polite, well groomed, and successful in his studies. However, once he received his first driver's lesson, he shattered all known records for accumulating speeding violations. He is the G.I. Joe Team's fastest and most reliable recon driver. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 72. He joins the team at the same time as Wildcard and Windmill. A Cobra agent, the Star Viper, sneaks onto the Joes' Utah base by holding onto the underside of Skidmark's Desert Fox vehicle. Skidmark and the new Joes pursue the Viper in the next issue. Skidmark returns in the Devil's Due G.I. Joe series in issue No. 24. He is one of the many Joes recalled to duty for the second Cobra civil war, this one also taking place on Cobra Island. In issue No. 25, Skidmark is killed by a falling helicopter while aiding General Hawk in an attempt to arrest Overlord. Skidmark is featured in the 1989 'Golden' G.I. Joe coloring book.
Skydive
Skydive is the G.I. Joe Team's Sky Patrol leader. His real name is Lynton N. Felix, and he was born in Pensacola, Florida.
Skydive was first released as an action figure in 1990, as part of the "Sky Patrol" line. Before he was recruited by G.I. Joe, he spent ten years as a non-commissioned officer teaching Ranger School at Fort Benning. He also specializes in personnel administration. Skydive is voiced by Dale Wilson in the DiC G.I. Joe cartoon.
Skymate
Skymate is the G.I. Joe Team's glider trooper. His real name is Daniel T. Toner, and he was born in Queenstown, Australia. Skymate was first released as an action figure in 1991, as part of the Air Commandos line. Skymate flies the "Air Commando" glider. He grew up in a remote station near the Haast's Bluff Aboriginal Reserve. He received exotic weapons training in the 'Special Air Services', which only complemented his already extensive knowledge of the subject. He is considered very quiet. His preferred weapon is a bow and arrow. In the Marvel Comics G.I. Joe series, he is mentioned by name in issue No. 118, as being part of a mission involving Chuckles and the Air Commandos, but is not seen. In the Devil's Due G.I. Joe series, Skymate is one of many Joes sent to Europe to assist in worldwide outbreaks of Cobra terrorist activity. Skymate appeared in the DiC G.I. Joe animated series.
Skystriker
Skystriker is a member of the special G.I. Joe group Tiger Force, and serves as the jet fighter pilot tasked with operating the "Tiger Rat" assault plane. His real name is Alexander P. Russo, and he was first released as an action figure in 1988. Skystriker was born in Providence, Rhode Island, and grew up around planes on a military base. He is noted for destroying more than fifteen Cobra planes during attacks on Cobra Island.
Slip Stream
Snake Eyes
Sneak Peek
Sneak Peek is the G.I. Joe Team's advanced recon specialist. His real name is Owen King, an apparent reference to Owen King, the son of author Stephen King; Sneak Peek was born in Bangor, Maine, where Stephen King is a longtime resident. He was first released as an action figure in 1987. The figure was repainted and released as part of the Night Force line in 1988, packaged with Lt. Falcon. Sneak Peek's primary military specialty is infantry, and his secondary military specialty is radio-telecommunications. Sneak Peek is known for a mission while in a Ranger recon battalion, in which he was never recalled due to an error; he continued observing enemy activity, taking notes and sketching maps for two weeks, until someone remembered he was still out there and signaled for him to return. Sneak Peek is Ranger qualified and proficient with all NATO night vision devices. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 73. He is part of a recon team that works its way through Cobra Island during the Cobra civil war. Later, he is shot and killed during the Battle of Benzheen. He "dies" saving a little boy placed in danger by a Frag Viper. The same issue reveals details of his friendship with fellow Joe Dusty. In IDW's continuation of this storyline, it is revealed that Sneak Peek survived these wounds and was sent deep undercover in Darklonia. His survival was a secret even to his own friends and family. In the Devil's Due G.I. Joe series, another agent takes his code name, and goes undercover with the Dreadnoks. He is severely injured by a Viper while checking out a Joe nuclear bomb shelter. Sneak Peek is a supporting character in the novel The Sultan's Secret by Peter Lerangis. He also has a role in Invisibility Island.
Snow Job
Snow Storm
Snow Storm is the G.I. Joe Team's high-tech snow trooper. His real name is Guillermo "Willie" Suarez, and his rank is that of Staff Sergeant E-6. Snow Storm was born in Havana, Cuba, and was first released as an action figure in 1993, as part of the Battle Corps line. His primary military specialty is arctic warfare. His secondary military specialty is cold weather survival instructor.
Space Shot
Space Shot is the G.I. Joe Team's combat freighter pilot. His real name is George A. Roberts, and he was born in Everett, Massachusetts. Space Shot was first released as an action figure in 1994, as part of the Star Brigade line. His file card establishes that he flew cargo between planets in Earth's solar system, and that for fun he would fly blindfolded through the rings of Saturn. This earned him the attention of Duke, who recruited him and found it was not easy teaching him military discipline. He has defended four space stations from Cobra attack, and makes Cobra 'Blackstar' pilots look like trainees. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 145. His comics continuity does not match the file card, as he is simply one of many Joes with basic, "real-world" astronaut experience. Space Shot is part of Star Brigade and takes part in a mission to deal with an asteroid threatening all of Earth. With the assistance of the latest version of the Oktober Guard, the Joes fight androids in the asteroid's interior; ultimately the robot army is defeated, and the teams make it off the asteroid before it is safely destroyed.
Sparks
Sparks is the G.I. Joe Team's communication and computer expert. His real name is Alessandro "Alex" D. Verdi, and he was first released as an action figure in 2007. Sparks is the son of a former U.S. ambassador, and was born in Carcare, Italy. He spent his formative years in Europe, becoming fluent in 13 languages, as well as learning the finer points of diplomacy. After graduating from Harvard, he planned to become an interpreter for the military, but instead serves as a liaison to the Pentagon for the G.I. Joe Team. Sparks is an essential cog in G.I. Joe operations, thanklessly filing mountains of paperwork and records, according to the stringent protocols of military bureaucracy. His military specialties include telecommunications, cryptologic operations, and electronic warfare. In the Sunbow G.I. Joe cartoon, he appeared in the 1984 "The Revenge of Cobra" mini-series and later retired from the team, working at a television station, but helped G.I. Joe uncover a Cobra plot in the episode "Grey Hairs and Growing Pains".
Spearhead
Spearhead is the G.I. Joe Team's point man. His real name is Peter R. Millman, and his rank is that of corporal E-4. Spearhead was born in St. Louis, Missouri, and was first released as an action figure in 1988, with his pet bobcat Max. The figure was repainted and released as part of the Night Force line in 1989, packaged with Muskrat. Spearhead's primary military specialty is infantry, and his secondary military specialty is finance. He was once the youngest and most successful insurance salesman in the Pacific Northwest; everybody liked him and trusted him, and bought more insurance from him than they could afford. However, he joined the Army, feeling that somebody had to do it. Thanks to Spearhead's charisma, and with his bobcat Max as a source of inspiration, soldiers are willing to follow him when he takes the lead. In the Marvel Comics G.I. Joe series, he first appeared in G.I. Joe Special Missions No. 21.
He works with Airtight, Charbroil and other Joes in an attempt to stop Dreadnok activity in the sewers of New York. They fail to stop Cobra's plan to create a telemarketing scam center, and their new ally, a homeless veteran, dies believing he saved the Joes' lives. Spearhead returns for active duty when the Joe team is reformed in the Devil's Due series. Spearhead is also one of the many Joes to combat Serpentor in the second Cobra civil war.
Specialist Trakker
Specialist Trakker is the M.A.S.K. character Matt Trakker. He was released in 2008 as an advanced vehicle specialist for the G.I. Joe Team. (In the G.I. Joe universe, according to Specialist Trakker's file card, M.A.S.K.'s enemies in V.E.N.O.M. were a splinter faction of Cobra Command.)
Spirit
Stalker
Starduster
Starduster is the G.I. Joe Team's Jet Pack Trooper. His real name is Edward J. Skylar, and he was born in Burlingame, California. Starduster was first released as an action figure in 1987, as a mail-in exclusive from Action Stars cereal, and later as a mail-in offer from Hasbro Direct. In 2008, he was renamed Skyduster and released with the Toys R Us exclusive Air Command Set, which also included Capt. Ace and Wild Bill. Starduster's primary military specialty is Infantry Transportable Air Recon, and his secondary military specialty is Helicopter Assault. He was a trapeze artist before he enlisted in the Airborne Rangers. Starduster was recruited into the G.I. Joe team by Duke. In 1985, a television commercial for Action Stars cereal depicted a boy making his way to a bowl of cereal, led by the character Duke. After eating the cereal, the boy flies into the air following Starduster. This was the only time that the action-figure Starduster appeared in animated form, as he was never part of the cartoon television series. Starduster was featured in three out-of-continuity mini-comics packaged in Action Stars cereal. Starduster also appeared in the comic tie-in to the Commandos Heroicas, which were released in both toy and comic book form as part of the 2009 G.I. Joe convention. Starduster became commander of this Argentine branch of the G.I. Joe team.
Static Line
Static Line is the G.I. Joe Team's Sky Patrol demolitions expert. His real name is Wallace J. Badducci, and he was born in Chicago, Illinois. Static Line was first released as an action figure in 1990, as part of the "Sky Patrol" line. His primary military specialty is demolitions expert. He is also a trained aircraft mechanic. Static Line is noted for his eye for detail, and for rendering explosive devices inert rather than destroying them.
Steam-Roller
Steam-Roller is the G.I. Joe Team's Mobile Command Center operator. His real name is Averill B. Whitcomb, and his rank is that of sergeant E-5. Steam-Roller was born in Duluth, Minnesota, and was first released as an action figure in 1987, packaged with the Mobile Command Center. Steam-Roller's primary military specialty is heavy equipment operator, and his secondary military specialty is armor. He worked on heavy cranes on the Great Lakes' docks, earth movers in the strip mines of Appalachia, and graders on the blacktop highways of several states. He was operating an M-15A2 50-ton transporter when he was assigned to the G.I. Joe Team. Steam-Roller is a qualified expert with all NATO small-arms and explosives. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 99, and makes an appearance in the following issue. He battles Python Patrol members in the Utah desert.
Steeler
Stretcher
Stretcher is the G.I. Joe Team's Medical Specialist. His real name is Thomas J. Larivee, and he was born in Hartford, Connecticut. Stretcher was first released as an action figure in 1990. Before the G.I. Joe team, he served as a front-line medic in a NATO military unit. Though Stretcher is a qualified medical specialist, his primary purpose is removing wounded soldiers from the battlefield. As such, he is noted for his strength. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 105. He is one of a team of Joes sent to Sierra Gordo to rescue fellow soldiers from Iron Grenadiers. Stretcher is one of the many Joes to take part in a confrontation against Cobra forces in Benzheen, where he is one of many staffing an isolated military outpost. He confirms the death of Sneak Peek, who had died saving a child. Stretcher also appears in issue No. 125. Stretcher returns to the Joe team in the Devil's Due produced comic book series. He is one of the many soldiers to intervene in the second Cobra civil war, which again takes place on Cobra Island. Stretcher appeared in the DiC G.I. Joe animated series, voiced by Alvin Sanders.
Sub-Zero
Sub-Zero is the G.I. Joe Team's winter operations specialist. His real name is Mark Habershaw, and he was born in Smithfield, Rhode Island. Sub-Zero spent time as an instructor at the Army Northern Warfare Training Center at Fort Greely. He was also a consultant to the Cold Regions Test Center at the same base. He also trained military forces in Europe for cold weather combat. He is noted for hating cold weather. Sub-Zero was first released as an action figure in 1990. In 1993, he was part of the mail-order Arctic Commandos subset, part of the mail-in campaign known as 'Terrifying Lasers Of Destruction'. Sub-Zero is included with Stalker, Dee-Jay and a Cobra Snow-Serpent. The fiction of this sub-set is that Sub-Zero's team must stop a Cobra weapon placed atop Mount Everest. Sub-Zero first appeared in G.I. Joe: America's Elite No. 32, providing security at a prison during the World War III event. Sub-Zero appeared in the DiC G.I. Joe animated series, voiced by Don Brown.
Super Trooper
Super Trooper is the code name of Paul Latimer. He was born in Dayton, Ohio, and was first released as a mail-in figure in 1988. His primary military specialty is infantry, and his secondary military specialty is public relations.
Switch Gears
Switch Gears is a tank driver for the G.I. Joe Team, and was released as an action figure in 2003. His real name is Jerome T. Jivoin, and he was born in Bogotá, Colombia. Switch Gears is said to have a high tolerance for pain, and is described as very strong and never giving up. He also likes to show up at fortified Cobra positions disguised as a Cobra courier with fake retreat orders, and prefers his bare hands to weapons.
Taurus
Taurus is a member of the G.I. Joe Team as one of Sgt. Slaughter's Renegades. His real name is Varujan Ayvazyan, and his rank is equivalent to that of sergeant E-5. Taurus was born in Istanbul, Turkey, and was first released as an action figure in 1987, as part of a three-pack with Mercer and Red Dog. Taurus's primary military specialty is demolitions. He was a circus acrobat in Europe, doing occasional undercover work for INTERPOL. When the G.I. Joe top brass witnessed him breaking two-by-fours on his own face as part of his circus act, they recruited him for the Sgt. Slaughter's Renegades sub-team on the spot.
Taurus is fluent in a dozen languages, and has been cross-trained in explosives and mountaineering. The Renegades have a freedom of operation unmatched by the other Joes: they are not carried on the rosters of any existing military unit, there is no computer access to their dossiers, and they are paid through a special fund earmarked for "Pentagon Pest Control". This team has no official status, and its movements and activities are virtually unrestricted. However, this means that they get no credit when they succeed, and that the government can deny the Renegades' existence if they are caught. Taurus is seen in issue No. 32 of G.I. Joe: America's Elite (Feb 2007), fighting Cobra soldiers in his home city of Istanbul. Assisting him are the Joe soldiers Heavy Duty and Bombstrike. Taurus appeared in the animated film G.I. Joe: The Movie, voiced by Earl Boen, as a member of Sgt. Slaughter's Renegades. He operates as an assistant drill sergeant.
T'Gin-Zu
T'Gin-Zu is the G.I. Joe Team's "Pile Driver" operator. His real name is Joseph R. Rainone. His primary military specialty is Pile Driver vehicle operator. His secondary military specialty is ninja swords master. His birthplace is Somers, New York. T'Gin-Zu has studied martial arts for more than two decades. He has learned some of the secrets of the Arashikage ninja clan, and has spent time as a student of Storm Shadow, who considers him his most talented pupil. T'Gin-Zu has developed a deep desire to single-handedly capture Cobra's band of 'Red Ninjas'.
T'Jbang
T'Jbang is the code name of Sam LaQuale. He was born in East Greenwich, Rhode Island. He is a former member of the Arashikage clan founded by Storm Shadow, a ninja who is also his second cousin. He has crafted his own personal sword, designed for his secretive 'Silent Backslash' technique. T'Jbang is also skilled in piloting helicopters.
Thunder
Thunder is a character from the G.I. Joe: A Real American Hero toyline, comic books and animated series. He is the G.I. Joe team's self-propelled gun artilleryman, and debuted in 1984. His real name is Matthew Harris Breckinridge, and his rank is that of sergeant E-5. He was born in Louisville, Kentucky. Thunder was first released as an action figure packaged with the Slugger artillery vehicle. He first appeared in G.I. Joe: A Real American Hero #51 (September 1986). He is among several Joes killed in action in issue #109. Thunder made his debut in the Sunbow/Marvel G.I. Joe animated series in "The Revenge of Cobra".
Tiger Claw
Tiger Claw is the code name of Chad M. Johnson. He was first released as an action figure in 2005, as the ninja apprentice of Snake Eyes. Tiger Claw appeared in the direct-to-video CGI animated movie G.I. Joe: Ninja Battles, voiced by Brian Drummond.
Tollbooth
Tollbooth is the G.I. Joe Team's bridge layer driver. His real name is Chuck X. (for nothing) Goren, and his rank is that of E-5 (Sergeant). Tollbooth was born in Boise, Idaho, and was first released as an action figure in 1984, packaged exclusively with the Bridgelayer (Toss N Cross) as a Sears exclusive. Tollbooth and the Bridgelayer were later released as part of the fourth series in 1985. Tollbooth's primary military specialty is combat engineer, and his secondary military specialty is demolitions. As a child, Tollbooth had a love for construction sets, which he made bigger and more complex until he outgrew them all. As an adult he started building in earnest, and got his master's in engineering from MIT.
When he needed a bigger challenge, he joined the Army to sign up for the G.I. Joe Team. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 51 (September 1986). He is manning the "Chaplain's Assistant Motor Pool" machinery, the figurative and literal cover for the Pit, the headquarters of the G.I. Joe team. He later appeared in issues No. 62, 76, and 77. In issue No. 76, Tollbooth is part of a Joe infiltration team attacking Cobra Island defenses through the swamps. Tollbooth appeared in the G.I. Joe animated series, voiced by Michael Bell. His first appearance was in the first-season episode "Three Cubes to Darkness." His appearance is slightly different from his figure, as he is shown with a green hardhat in the series.
Topside
Topside is the G.I. Joe Team's Navy assault specialist. His real name is John Blanchet, and his rank is that of First Class Petty Officer in the United States Navy. Topside was born in Fort Wayne, Indiana, and was first released as an action figure in 1990. He grew up on a farm with his father; their pigs won many awards at the county fairs, and Topside became known as the Fort Wayne 'Hog Master'. At age twenty, wanting a more exciting career, he joined the Navy. Serving as a deckhand, he overheard boastful tales from a G.I. Joe special ops team on their way to a mission. He challenged the entire team; this led to him being noticed and recruited. A quote on his file card indicates Topside takes physical punishment with ease. Topside appeared in the Devil's Due G.I. Joe series. In terms of the comics, he had worked with the Joe team a short time before they disbanded in 1994. Topside is part of the team to invade Cobra Island. He is also a featured character in part 1 of the "Fun Publishing" official "G.I. Joe Vs. Cobra" comic book released for the G.I. Joe conventions. Topside appeared in three episodes of the DiC G.I. Joe animated series: "An Officer and a Viperman" and "Ghost of Alcatraz" Parts I and II.
Torpedo
Tracker
Tracker was first released as an action figure in 1991. His real name is Christopher R. Groen, and he was born in Helena, Arkansas. Tracker is a Navy SEAL with a specialty in underwater arms development. In terms of tracking, escaping and evading, Tracker has outperformed the best the Joe team has to offer. Tracker appears in the DiC G.I. Joe animated series.
Tripwire
Tunnel Rat
Updraft
Updraft is the G.I. Joe Team's Retaliator pilot. His real name is Matthew W. Smithers, and he was born in Bismarck, North Dakota. Updraft was first released as an action figure in 1990, packaged with the "Retaliator" hi-tech attack copter. Updraft was the team leader in the "World Helicopter Championships", leading the US team to victory twice. He joined the Flight Warrant Officer School at Fort Rucker and became a special instructor. From there, he was selected for G.I. Joe duty. He personally improved much of the "Retaliator" helicopter, a vehicle he later flies into battle. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 130. He assists the Joe team in defending their headquarters from a Cobra attack. He is also part of a mission in the Devil's Due G.I. Joe series, helping the Joe team battle Serpentor and his forces in the second Cobra civil war. As with the first one, this war takes place on Cobra Island.
Wet Suit
Whiteout
Whiteout is an arctic trooper for the G.I. Joe Team. His real name is Leonard J. Lee III, and he was first released as an action figure in 2000. He is a cold weather strategist for the G.I. Joe team and is experienced in polar combat mobility.
Wild Bill

Wildcard
Wildcard is the G.I. Joe Team's Mean Dog vehicle driver. His real name is Eric U. Scott, and his rank is that of corporal E-4. Wildcard was born in Northampton, Massachusetts, and was first released as an action figure in 1988, packaged with the "Mean Dog" 6WD heavy assault vehicle. Wildcard's primary military specialty is armored vehicle operator, and his secondary military specialty is chaplain's assistant. Wildcard possesses an unnatural talent for breaking things, from sturdy steel machines to simple tools, delicate toys, immovable objects of cast iron, and 8-piece dinner settings. When driving the Mean Dog, the vehicle becomes an extension of himself: a raging engine of destruction, pulverizing all in its path. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 72 (June 1988). He joins the team with Skidmark and Windmill. The trio's entry onto the current Joe base, with the Mean Dog and the vehicle Desert Fox, is marred by the discovery that a Cobra agent had snuck in with them. He appears in issue #89, on a trip to the Aberdeen Proving Grounds to test the Mean Dog. Assisted by Repeater and Hardball, he routs Cobra forces chasing other Joes. At the end of the battle, Wildcard personally pulls the fleeing Zanzibar out of his Pogo vehicle.

Windchill
Windchill is the G.I. Joe Team's Arctic Blast vehicle driver. His real name is Jim Steel, and his rank is that of Staff Sergeant E-6. Windchill was born in Cedar Rapids, Iowa. Windchill was first released as an action figure in 1989, packaged with the "Arctic Blast" tundra assault sled. The figure was repainted and released as part of the Battle Corps line in 1994; his 1994 release has him packaged with the "Blockbuster" arctic vehicle, and he is named Jim McDonald in that release. Windchill's primary military specialty is Arctic Blast driver, and his secondary military specialty is cold weather survival instructor. He was an avid skimobiler and hunter, and figured the biathlon would be the ultimate sport for him. He might have qualified for a spot on the American Olympic team if Blizzard hadn't met him at the National Elimination Tournament and given him the idea of getting paid to drive fast, heavily armed snow vehicles.

Windmill
Windmill is the G.I. Joe Team's Skystorm X-Wing Chopper pilot. His real name is Edward J. Roth, and his rank is that of Captain, USAF O-3. Windmill was born in Allentown, Pennsylvania, and was first released as an action figure in 1988, packaged with the Skystorm X-Wing Chopper. Windmill's primary military specialty is stopped-rotor aircraft operator, and his secondary military specialty is attack helicopter pilot. He was a flight instructor at the Army Flight Warrant Officers School at Fort Rucker, later flying experimental helicopter prototypes at that facility for the Army Aviation Department Test Activity. In the Marvel Comics G.I. Joe series, he first appeared in issue No. 72 (June 1988). He drives onto the current Joe base in the "Desert Fox", accompanied by Skidmark and Wildcard, the latter driving the "Mean Dog". The occasion is marred by the discovery of a hostile who had snuck in by clinging to the underside of the Fox.

Zap

See also
List of Cobra characters
List of G.I. Joe Extreme characters
List of G.I. Joe: A Real American Hero action figures

References

External links
Character Guide at JMM's G.I. Joe Comics Home Page
A Real American Hero (A-C)

G.I. Joe: A Real American Hero
Lists of comics characters
9038169
https://en.wikipedia.org/wiki/Trilogy%20Systems
Trilogy Systems
Trilogy Systems Corporation was a computer systems company started in 1980. Originally called ACSYS, the company was founded by Gene Amdahl, his son Carl Amdahl, and Clifford Madden. Flush with the success of his previous company, Amdahl Corporation, Gene Amdahl was able to raise $230 million for his new venture. Trilogy was the best-funded start-up company in Silicon Valley history up to that point. It had corporate support from Groupe Bull, Digital Equipment Corporation, Unisys, Sperry Rand, and others.

The plan was to use extremely advanced semiconductor manufacturing techniques to build an IBM-compatible mainframe computer that was both cheaper and more powerful than existing systems from IBM and Amdahl Corporation. These techniques included wafer-scale integration (WSI), with the goal of producing a computer chip 2.5 inches on a side; at the time, chips of only 0.25 inches on a side could be reliably manufactured. This giant chip was to be connected to the rest of the system through a package with 1,200 pins, an enormous number at the time. Previously, mainframe computers were built from hundreds of computer chips because of the limited size of standard chips. Such systems were hampered by chip-to-chip communication, which both slowed performance and consumed considerable power.

As with other WSI projects, Trilogy's chip design relied on redundancy, that is, replication of functional units, to overcome the manufacturing defects that precluded such large chips. If one functional unit was not fabricated properly, it would be switched out through on-chip wiring and a correctly functioning copy would be used instead. By keeping most communication on-chip, the dual benefits of higher performance and lower power consumption were supposed to be achieved. Lower power consumption meant less expensive cooling systems, which would help reduce system costs. For maximum performance, bipolar emitter-coupled logic was employed, even though its power consumption is large. The large chip size demanded larger minimum dimensions for the transistors (due to photolithography manufacturing tolerances over the large chip) than standard-size chips, so logic density and performance were less than had been forecast. "Triple modular redundancy" was employed systematically: every logic gate and every flip-flop was triplicated, with binary two-out-of-three voting at each flip-flop (a small illustrative sketch of such voting appears below).

Alongside the advances in chip manufacturing, advanced chip packaging techniques were also pursued by the company. These included vertical stacking of computer chips and chip-to-chip interconnect technology that used copper conductors and polyimide insulation, allowing extremely dense packing of signal wiring. Though overall system power consumption would be lower, the power dissipation would be much more concentrated at the single large chip. This required new cooling techniques, such as sealed heat exchangers, to be developed.

The company was beset by many problems. Gene Amdahl was involved in a car accident and preoccupied with the ensuing lawsuit. Madden, the company's president, died from a brain tumor. The company's semiconductor fabrication plant was damaged during construction by a winter storm. The redundancy schemes used in the design were not sufficient to give reasonable manufacturing yields. The chip interconnect technology could not be reliably manufactured, as the layers tended to delaminate, and there was no automated way to repair soldering errors.
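As a rough illustration of the two-out-of-three voting described above, the following short R snippet models three replicated copies of a stored logic bit, one of which has been corrupted by a fault; the majority vote still recovers the intended value. This is a hypothetical sketch of the general technique, not a representation of Trilogy's actual circuit design.

# Hypothetical model of triple modular redundancy (TMR) voting:
# each bit is stored three times, and a majority voter masks any single fault.
vote <- function(a, b, c) {
  as.integer(a + b + c >= 2)  # 1 if at least two of the three copies are 1
}
replicas <- c(1, 0, 1)  # intended bit is 1; the second copy is faulty
vote(replicas[1], replicas[2], replicas[3])  # returns 1: the single fault is masked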
In 1983, the company had an initial public offering and raised $60 million. By mid-1984, the company decided it was too difficult to manufacture its computer design. Gene Amdahl stepped down as CEO, and Henry Montgomery was brought in as his replacement. The new leadership redirected the company to be a technology provider to other computer companies. The only major customer was Digital Equipment, which paid $10 million for the rights to the interconnect and cooling technologies, which it used in its VAX 9000 mainframe computers. Years later, the manufacturing difficulties of the copper/polyimide technology restricted DEC's ability to ship those mainframes. At the end of 1985, Gene Amdahl, as company chairman, decided to stop all Trilogy development and use the remaining $70 million of the raised capital to buy Elxsi, a minicomputer start-up company. In 1989, Gene Amdahl left the merged company. Trilogy Systems was known as one of the largest financial failures in Silicon Valley before the bursting of the Internet/dot-com bubble in 2001. In describing the company, financial columnists coined the term "crater" for companies that consumed huge amounts of venture capital and later imploded, leaving nothing for their investors.

References
Myron Magnet, "Gene Amdahl Fights to Salvage a Wreck", Fortune, September 1, 1986 (article on Trilogy's history)

American companies established in 1980
American companies disestablished in 1985
Computer companies established in 1980
Computer companies disestablished in 1985
Defunct computer companies of the United States
376707
https://en.wikipedia.org/wiki/R%20%28programming%20language%29
R (programming language)
R is a programming language for statistical computing and graphics supported by the R Core Team and the R Foundation for Statistical Computing. Created by statisticians Ross Ihaka and Robert Gentleman, R is used among data miners and statisticians for data analysis and developing statistical software. Users have created packages to augment the functions of the R language. According to surveys such as Rexer's Annual Data Miner Survey and studies of scholarly literature databases, R is one of the most commonly used programming languages in data mining. R ranks 13th in the TIOBE index, a measure of programming language popularity. The official R software environment is an open-source free software environment within the GNU package, available under the GNU General Public License. It is written primarily in C, Fortran, and R itself (making it partially self-hosting). Precompiled executables are provided for various operating systems. R has a command-line interface. Multiple third-party graphical user interfaces are also available, such as RStudio, an integrated development environment, and Jupyter, a notebook interface.

History
R is an open-source implementation of the S programming language combined with lexical scoping semantics from Scheme, under which names are resolved in the block of code where they are defined rather than in the program as a whole. S was created by Rick Becker, John Chambers, Doug Dunn, Jean McRae, and Judy Schilling at Bell Labs around 1976. Designed for statistical analysis, S is an interpreted language whose code can be run directly without a compiler, and many programs written for S run unaltered in R. Scheme, a dialect of the Lisp language, was created by Gerald J. Sussman and Guy L. Steele Jr. at MIT around 1975.

In 1991, statisticians Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, embarked on an S implementation. It was named partly after the first names of the first two R authors and partly as a play on the name of S. They began publicizing it on the data archive StatLib and the s-news mailing list in August 1993. In 1995, statistician Martin Mächler convinced Ihaka and Gentleman to make R free and open-source software under the GNU General Public License. The first official release came in June 1995. The first official "stable beta" version (v1.0) was released on 29 February 2000. The Comprehensive R Archive Network (CRAN) was officially announced on 23 April 1997. CRAN stores R's executable files, source code, and documentation, as well as packages contributed by users. CRAN originally had 3 mirrors and 12 contributed packages; as of January 2022, it has 101 mirrors and 18,728 contributed packages.

The R Core Team was formed in 1997 to further develop the language. It consists of Chambers, Gentleman, Ihaka, and Mächler, plus statisticians Douglas Bates, Peter Dalgaard, Kurt Hornik, Michael Lawrence, Friedrich Leisch, Uwe Ligges, Thomas Lumley, Sebastian Meyer, Paul Murrell, Martyn Plummer, Brian Ripley, Deepayan Sarkar, Duncan Temple Lang, Luke Tierney, and Simon Urbanek, as well as computer scientist Tomas Kalibera. Stefano Iacus, Guido Masarotto, Heiner Schwarte, Seth Falcon, Martin Morgan, and Duncan Murdoch are former members. In April 2003, the R Foundation was founded as a non-profit organization to provide further support for the R project.

Features

Data processing
R's data structures include vectors, arrays, lists, and data frames.
Vectors are ordered collections of values and can be mapped to arrays of one or more dimensions in column-major order: given an ordered collection of dimensions, values fill the first dimension first, then one-dimensional arrays fill out the second dimension, and so on. R supports array arithmetic, and in this regard is like languages such as APL and MATLAB. The special case of an array with two dimensions is called a matrix. Lists serve as collections of objects that do not necessarily have the same data type. Data frames contain a list of vectors of the same length, plus a unique set of row names. R has no scalar data type; instead, a scalar is represented as a length-one vector. R and its libraries implement various statistical techniques, including linear and nonlinear modeling, classical statistical tests, spatial and time-series analysis, classification, clustering, and others. For computationally intensive tasks, C, C++, and Fortran code can be linked and called at run time. Another of R's strengths is static graphics; it can produce publication-quality graphs that include mathematical symbols.

Programming
R is an interpreted language; users can access it through a command-line interpreter. If a user types 2+2 at the R command prompt and presses enter, the computer replies with 4. R supports procedural programming with functions and, for some functions, object-oriented programming with generic functions. Due to its S heritage, R has stronger object-oriented programming facilities than most statistical computing languages. Extending it is facilitated by its lexical scoping rules, which are derived from Scheme. R uses S-expressions to represent both data and code. R's extensible object system includes objects for (among others) regression models, time series, and geospatial coordinates. Advanced users can write C, C++, Java, .NET, or Python code to manipulate R objects directly. Functions are first-class objects and can be manipulated in the same way as data objects, facilitating meta-programming that allows multiple dispatch. Function arguments are passed by value and are lazy; that is, they are only evaluated when they are used, not when the function is called. A generic function acts differently depending on the classes of the arguments passed to it: the generic function dispatches the method implementation specific to that object's class (see the short sketch below). For example, R has a generic print function that can print almost every class of object in R with print(objectname). Many of R's standard functions are written in R itself, which makes it easy for users to follow the algorithmic choices made. R is highly extensible through the use of packages for specific functions and specific applications.

Packages
R's capabilities are extended through user-created packages, which offer statistical techniques, graphical devices, import/export capabilities, and reporting tools (RMarkdown, knitr, Sweave), among others. R's packages, and the ease of installing and using them, have been cited as driving the language's widespread adoption in data science. The packaging system is also used by researchers to create compendia that organise research data, code, and report files in a systematic way for sharing and archiving. Multiple packages are included with the basic installation. Additional packages are available on CRAN, Bioconductor, Omegahat, GitHub, and other repositories.
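As a brief illustration of the generic-function dispatch and lazy argument evaluation described in the Programming section above, the following minimal sketch (using a hypothetical class name, "circle") defines an S3 method that the built-in generic print function dispatches to:

circ <- list(radius = 2)
class(circ) <- "circle"           # tag the object with an S3 class

print.circle <- function(x, ...) {
  # Method invoked whenever print() is called on an object of class "circle"
  cat("Circle of radius", x$radius, "and area", pi * x$radius^2, "\n")
}

print(circ)                       # dispatches to print.circle()
circ                              # auto-printing at the prompt uses the same method

f <- function(x, y) x * 2         # y is declared but never used
f(10, stop("never evaluated"))    # returns 20; the unused lazy argument is never evaluated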
The "Task Views" on the CRAN website lists packages in fields including Finance, Genetics, High Performance Computing, Machine Learning, Medical Imaging, Social Sciences and Spatial Statistics. R has been identified by the FDA as suitable for interpreting data from clinical research. Microsoft maintains a daily snapshot of CRAN that dates back to Sept. 17, 2014. Other R package resources include R-Forge, a platform for the collaborative development of R packages. The Bioconductor project provides packages for genomic data analysis, including object-oriented data-handling and analysis tools for data from Affymetrix, cDNA microarray, and next-generation high-throughput sequencing methods. A group of packages called the Tidyverse, which can be considered a "dialect" of the R language, is increasingly popular among developers. It strives to provide a cohesive collection of functions to deal with common data science tasks, including data import, cleaning, transformation and visualisation (notably with the ggplot2 package). Dynamic and interactive graphics are available through additional packages. R is one of 5 languages with an Apache Spark API, along with Scala, Java, Python, and SQL. Milestones A list of changes in R releases is maintained in various "news" files at CRAN. Some highlights are listed below for several major releases. Interfaces Various applications can be used to edit or run R code. Early developers preferred to run R via the command line console, succeeded by those who prefer an IDE. IDEs for R include (in alphabetical order) Rattle GUI, R Commander, RKWard, RStudio, and Tinn-R. R is also supported in multi-purpose IDEs such as Eclipse via the StatET plugin, and Visual Studio via the R Tools for Visual Studio. Of these, RStudio is the most commonly used. Editors that support R include Emacs, Vim (Nvim-R plugin), Kate, LyX, Notepad++, Visual Studio Code, WinEdt, and Tinn-R. Jupyter Notebook can also be configured to edit and run R code. R functionality is accessible from scripting languages including Python, Perl, Ruby, F#, and Julia. Interfaces to other, high-level programming languages, like Java and .NET C# are available. Implementations The main R implementation is written in R, C, and Fortran. Several other implementations aimed at improving speed or increasing extensibility. A closely related implementation is pqR (pretty quick R) by Radford M. Neal with improved memory management and support for automatic multithreading. Renjin and FastR are Java implementations of R for use in a Java Virtual Machine. CXXR, rho, and Riposte are implementations of R in C++. Renjin, Riposte, and pqR attempt to improve performance by using multiple cores and deferred evaluation. Most of these alternative implementations are experimental and incomplete, with relatively few users, compared to the main implementation maintained by the R Development Core Team. TIBCO built a runtime engine called TERR, which is part of Spotfire. Microsoft R Open (MRO) is a fully compatible R distribution with modifications for multi-threaded computations. As of 30 June 2021, Microsoft started to phase out MRO in favor of the CRAN distribution. Communities R has local communities worldwide for users to network, share ideas, and learn. A growing number of R events bring users together, such as conferences (e.g. useR!, WhyR?, conectaR, SatRdays), meetups, as well as R-Ladies groups that promote gender diversity. The R Foundation taskforce focuses on women and other under-represented groups. useR! 
The official annual gathering of R users is called "useR!". The first such event was useR! 2004, held in May 2004 in Vienna, Austria. After skipping 2005, the useR! conference has been held annually, usually alternating between locations in Europe and North America:
useR! 2006, Vienna, Austria
useR! 2007, Ames, Iowa, US
useR! 2008, Dortmund, Germany
useR! 2009, Rennes, France
useR! 2010, Gaithersburg, Maryland, US
useR! 2011, Coventry, United Kingdom
useR! 2012, Nashville, Tennessee, US
useR! 2013, Albacete, Spain
useR! 2014, Los Angeles, California, US
useR! 2015, Aalborg, Denmark
useR! 2016, Stanford, California, US
useR! 2017, Brussels, Belgium
useR! 2018, Brisbane, Australia
useR! 2019, Toulouse, France
useR! 2020, held online due to the COVID-19 pandemic
useR! 2021, held online due to the COVID-19 pandemic
No date has yet been set for the next event.

The R Journal
The R Journal is an open access, refereed journal of the R project. It features short to medium-length articles on the use and development of R, including packages, programming tips, CRAN news, and foundation news.

Comparison with alternatives
R is comparable to popular commercial statistical packages such as SAS, SPSS, and Stata. One difference is that R is available at no charge under a free software license. In January 2009, the New York Times ran an article charting the growth of R, the reasons for its popularity among data scientists, and the threat it poses to commercial statistical packages such as SAS. In June 2017, data scientist Robert Muenchen published a more in-depth comparison between R and other software packages, "The Popularity of Data Science Software". R is more procedural than either SAS or SPSS, both of which make heavy use of pre-programmed procedures (called "procs") that are built into the language environment and customized by the parameters of each call. R generally processes data in memory, which limits its usefulness in processing larger files.

Commercial support
Although R is an open-source project, some companies provide commercial support and extensions. In 2007, Richard Schultz, Martin Schultz, Steve Weston, and Kirk Mettler founded Revolution Analytics to provide commercial support for Revolution R, their distribution of R, which includes components developed by the company. Major additional components include: ParallelR; the R Productivity Environment IDE; RevoScaleR (for big data analysis); RevoDeployR, a web services framework; and the ability to read and write data in the SAS file format. Revolution Analytics also offers an R distribution designed to comply with established IQ/OQ/PQ criteria, enabling clients in the pharmaceutical sector to validate their installation of Revolution R. In 2015, Microsoft Corporation acquired Revolution Analytics and integrated the R programming language into SQL Server, Power BI, Azure SQL Managed Instance, Azure Cortana Intelligence, Microsoft ML Server, and Visual Studio 2017. In October 2011, Oracle announced the Big Data Appliance, which integrates R, Apache Hadoop, Oracle Linux, and a NoSQL database with Exadata hardware. Oracle R Enterprise later became one of two components of the "Oracle Advanced Analytics Option" (alongside Oracle Data Mining). IBM offers support for in-Hadoop execution of R and provides a programming model for massively parallel in-database analytics in R. TIBCO offers a runtime version of R as part of Spotfire. Mango Solutions offers a validation package for R, ValidR, to comply with drug approval agencies, such as the FDA.
These agencies require the use of validated software, as attested by the vendor or sponsor.

Examples

Basic syntax
The following examples illustrate the basic syntax of the language and use of the command-line interface. (An expanded list of standard language features can be found in the R manual, "An Introduction to R".) In R, the generally preferred assignment operator is an arrow made from two characters, <-, although = can be used in some cases.

> x <- 1:6 # Create a numeric vector in the current environment
> y <- x^2 # Create vector based on the values in x.
> print(y) # Print the vector's contents.
[1]  1  4  9 16 25 36

> z <- x + y # Create a new vector that is the sum of x and y
> z # Return the contents of z to the current environment.
[1]  2  6 12 20 30 42

> z_matrix <- matrix(z, nrow=3) # Create a new matrix that turns the vector z into a 3x2 matrix object
> z_matrix
     [,1] [,2]
[1,]    2   20
[2,]    6   30
[3,]   12   42

> 2*t(z_matrix)-2 # Transpose the matrix, multiply every element by 2, subtract 2 from each element in the matrix, and return the results to the terminal.
     [,1] [,2] [,3]
[1,]    2   10   22
[2,]   38   58   82

> new_df <- data.frame(t(z_matrix), row.names=c('A','B')) # Create a new data.frame object that contains the data from a transposed z_matrix, with row names 'A' and 'B'
> names(new_df) <- c('X','Y','Z') # Set the column names of new_df as X, Y, and Z.
> print(new_df) # Print the current results.
   X  Y  Z
A  2  6 12
B 20 30 42

> new_df$Z # Output the Z column
[1] 12 42

> new_df$Z==new_df['Z'] && new_df[3]==new_df$Z # The data.frame column Z can be accessed using $Z, ['Z'], or [3] syntax, and the values are the same.
[1] TRUE

> attributes(new_df) # Print attributes information about the new_df object
$names
[1] "X" "Y" "Z"

$row.names
[1] "A" "B"

$class
[1] "data.frame"

> attributes(new_df)$row.names <- c('one','two') # Access and then change the row.names attribute; can also be done using rownames()
> new_df
     X  Y  Z
one  2  6 12
two 20 30 42

Structure of a function
One of R's strengths is the ease of creating new functions. Objects in the function body remain local to the function, and any data type may be returned. Example:

# Declare function "f" with parameters "x", "y"
# that returns a linear combination of x and y.
f <- function(x, y) {
  z <- 3 * x + 4 * y
  return(z) ## the return() function is optional here
}

> f(1, 2)
[1] 11

> f(c(1,2,3), c(5,3,4))
[1] 23 18 25

> f(1:3, 4)
[1] 19 22 25

Modeling and plotting
The R language has built-in support for data modeling and graphics. The following example shows how R can generate and plot a linear model with residuals.

> x <- 1:6 # Create x and y values
> y <- x^2
> model <- lm(y ~ x) # Linear regression model y = A + B * x.
> summary(model) # Display an in-depth summary of the model.

Call:
lm(formula = y ~ x)

Residuals:
      1       2       3       4       5       6
 3.3333 -0.6667 -2.6667 -2.6667 -0.6667  3.3333

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  -9.3333     2.8441  -3.282 0.030453 *
x             7.0000     0.7303   9.585 0.000662 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 3.055 on 4 degrees of freedom
Multiple R-squared: 0.9583, Adjusted R-squared: 0.9478
F-statistic: 91.88 on 1 and 4 DF, p-value: 0.000662

> par(mfrow = c(2, 2)) # Create a 2 by 2 layout for figures.
> plot(model) # Output diagnostic plots of the model.

Mandelbrot set
Short R code calculating the Mandelbrot set through the first 20 iterations of the equation z = z^2 + c, plotted for different complex constants c.
This example demonstrates:
the use of community-developed external libraries (called packages), in this case the caTools package
the handling of complex numbers
multidimensional arrays of numbers used as a basic data type (see variables C, Z and X)

install.packages("caTools") # install external package
library(caTools) # external package providing write.gif function
jet.colors <- colorRampPalette(c("red", "blue", "#007FFF", "cyan", "#7FFF7F",
                                 "yellow", "#FF7F00", "red", "#7F0000"))
dx <- 1500 # define width
dy <- 1400 # define height
C <- complex(real = rep(seq(-2.2, 1.0, length.out = dx), each = dy),
             imag = rep(seq(-1.2, 1.2, length.out = dy), dx))
C <- matrix(C, dy, dx) # reshape as matrix of complex numbers
Z <- 0 # initialize Z to zero
X <- array(0, c(dy, dx, 20)) # initialize output 3D array
for (k in 1:20) { # loop with 20 iterations
  Z <- Z^2 + C # the central difference equation
  X[, , k] <- exp(-abs(Z)) # capture results
}
write.gif(X, "Mandelbrot.gif", col = jet.colors, delay = 100)

See also
R package
Comparison of numerical-analysis software
Comparison of statistical packages
List of numerical-analysis software
List of statistical software
Rmetrics

Notes

References

External links
Official homepage of the R project
R Technical Papers

Array programming languages
Cross-platform free software
Data mining and machine learning software
Data-centric programming languages
Dynamically typed programming languages
Free plotting software
Free statistical software
Functional languages
GNU Project software
Literate programming
Numerical analysis software for Linux
Numerical analysis software for MacOS
Numerical analysis software for Windows
Programming languages created in 1993
Science software
Statistical programming languages
Articles with example R code
3497031
https://en.wikipedia.org/wiki/Telecommunication%20Engineering%20Center
Telecommunication Engineering Center
The Telecommunication Engineering Centre (TEC) is a body under the Telecom Commission and a nodal agency of the Department of Telecommunications, Ministry of Communications and Information Technology, Government of India. It is responsible for drawing up standards, generic requirements, interface requirements, service requirements, and specifications for telecom products, services, and networks. TEC is an "S&T" institution of the Department of Telecommunications, with headquarters in New Delhi and four regional centres, in New Delhi, Kolkata, Mumbai, and Bangalore.

Specialised divisions
The following specialised divisions of TEC cover the various telecom technology areas:
External Plant
Information Technology
Networks
Optical Transmission
Line Transmission
Radio Transmission
Satellite Transmission
Value Added Services
Switching
Mobile Communications
These divisions are responsible for standardisation and trials of new technologies, and have the capabilities and human resources to test all kinds of telecom products, services, and networks. The regional centres carry out testing for Interface Approvals and Service Test Certificates for telecom products and services.

TEC Certification
The TEC issues TEC certificates covering the listed products. TEC certification had been a voluntary process since 1991. However, as per announcement No. 10-1/2017-IT/TEC/ER, starting in 2019 the TEC began rolling out the MTCTE (Mandatory Testing and Certification of Telecommunication Equipment) regulations incrementally, and since 2019 many telecommunication products require mandatory product certification in India. Products under the TEC approval scheme have to meet electromagnetic compatibility (EMC/EMI) limits. The TEC registration process must be conducted by an Authorized Indian Representative, and the product must be marked with an ID number (an individual factory code) issued by the TEC. A single TEC certificate covers a maximum of 10 models. International reports from ILAC-accredited test labs were to be accepted only until March 31, 2020, after which all testing is required to be done in India.

Human resources
TEC has a strength of 27 telecom professionals, 94 telecom engineers, and 88 support staff for carrying out its responsibilities.

References

External links
Official website

Telecommunications authorities of India
Ministry of Communications and Information Technology (India)
5180201
https://en.wikipedia.org/wiki/George%20Washington%20University%20School%20of%20Engineering%20and%20Applied%20Science
George Washington University School of Engineering and Applied Science
The School of Engineering and Applied Science (SEAS) at the George Washington University in Washington, D.C. is a technical school specializing in engineering, technology, communications, and transportation. The school is located on the main campus of the George Washington University and offers both undergraduate and graduate programs.

History
In May 2011, site preparation began for construction of the school's $300 million Science and Engineering Hall. The building consists of six below-grade stories used for lab space, parking, and mechanical systems, as well as eight above-grade stories. The design combines flexible, reconfigurable spaces within common areas on each floor to promote collaborative thinking and to integrate lectures and laboratories with hands-on projects. Other key features of the building include a vibration- and particulate-free nanotechnology facility; a three-story high bay, including a strong wall and floor with easy access to a street-level loading dock; and a multi-use auditorium and media center for science and engineering symposia and conferences. The building was designed by the architecture firm Ballinger and conceptualized to meet the growing research needs of engineering disciplines. After four years of construction, which included demolishing the campus's largest parking deck, the building was completed in November 2014. Since January 2015, the School of Engineering and Applied Science has occupied the Science and Engineering Hall on George Washington University's main campus in Foggy Bottom. Previously, the engineering school was housed in Tompkins Hall, which is still used for faculty office space as well as the computing facility. The Science and Engineering Hall is the largest academic building dedicated to these fields in Washington, D.C.: the facility is 500,000 square feet and eight floors tall. It houses 140 faculty members and classrooms used by four schools: the School of Engineering and Applied Science, the Columbian College of Arts & Sciences, the School of Medicine and Health Sciences, and the Milken Institute School of Public Health. The building is designed with sustainability in mind.

Departments

Biomedical Engineering
The Department of Biomedical Engineering offers Bachelor of Science, Master of Science, and Doctor of Philosophy degree programs in biomedical engineering. Until 2015, these programs were administered through the Department of Electrical and Computer Engineering. Undergraduate students may choose from a number of options within the bachelor of science degree, while graduate students may select focus areas in medical imaging or medical instrumentation. Faculty and students conduct research across a wide array of topics, leveraging the proximity of the GW Schools of Medicine and Public Health, Children's National Medical Center, and federal agencies. The department has 6 full-time faculty, 9 affiliated faculty, 228 undergraduate students, and 49 graduate students. Its annual research expenditure is around $1.2 million.

Civil and Environmental Engineering
The department offers a Bachelor of Science degree in civil engineering; a five-year bachelor's/master's degree; master's, doctoral, and professional degree programs in civil engineering; and several graduate certificate programs. The department has 12 full-time faculty, 83 undergraduate students, and 49 graduate students. It has an annual research expenditure of around $551,000.
Computer Science
The Department of Computer Science offers both Bachelor of Science and Bachelor of Arts degree programs in computer science, as well as Master of Science and Doctor of Philosophy programs in computer science and a Master of Science in Cybersecurity in Computer Science. The department also offers a graduate certificate in computer security and information assurance. The department is one of the largest at SEAS, in terms of both faculty and students, with 18 full-time faculty members, 187 undergraduate students, and 445 graduate students. It has an annual research expenditure of $3.7 million.

Electrical and Computer Engineering
The Department of Electrical and Computer Engineering (ECE) offers Bachelor of Science degree programs in computer engineering and electrical engineering, with a number of options within each degree. Graduate students may pursue Master of Science or Doctor of Philosophy degrees in computer engineering or electrical engineering. The department also offers a Master of Science degree program in telecommunications engineering, as well as professional degree programs and graduate certificate programs. The department has 23 full-time faculty members, 87 undergraduate students, and 245 graduate students. It has an annual research expenditure of $2.65 million.

Engineering Management and Systems Engineering
The Department of Engineering Management and Systems Engineering offers Bachelor of Science, Bachelor of Arts, Master of Science, and Doctor of Philosophy degree programs in engineering management and systems engineering, as well as an online Doctor of Engineering in engineering management. The department has 15 full-time faculty, 119 undergraduate students, and 879 graduate students. It has an annual research expenditure of $1.1 million.

Mechanical and Aerospace Engineering
The Department of Mechanical and Aerospace Engineering offers a Bachelor of Science degree program in mechanical engineering and Master of Science and Doctor of Philosophy degrees in mechanical and aerospace engineering. The department also offers professional degree programs and graduate certificate programs. It has 27 full-time faculty, 249 undergraduate students, and 152 graduate students. It has an annual research expenditure of $4.3 million.

Research laboratories
SEAS has research laboratories dedicated to high-performance computing, nanotechnology, robotics, and transportation engineering, among other fields, including:

Biomedical engineering research
Biomedical engineering research at the George Washington University includes biofluid dynamics, medical imaging, cardiac electrophysiology, plasma medicine, therapeutic ultrasound, nanomedicine, and tissue engineering.

Cybersecurity research
Cybersecurity research is spread across six laboratories at the George Washington University, including Dr. Zhang's laboratory, which focuses on data security; the Cyber Security Policy and Research Institute; and Dr. Monteleoni's laboratory in machine learning.

Undergraduate programs
With approximately 780 students enrolled, SEAS offers a variety of undergraduate programs.

Applied Science and Technology (B.S.)

Biomedical Engineering (B.S.)
The Bachelor of Science in Biomedical Engineering is an ABET-accredited program located in the Department of Electrical & Computer Engineering.

Civil Engineering (B.S.)
The Department of Civil and Environmental Engineering (CEE) at SEAS has eleven full-time teaching and/or research faculty.
The following programs are currently offered by the department as B.S. options (note that all B.S. degrees are degrees in civil engineering, not in the concentration):
Civil Engineering – The most general of the options, with an emphasis on structural engineering studies.
Civil Engineering with Medical Preparation Option – The same degree, but with more emphasis on medical school preparation; changes include additional requirements in chemistry and organic chemistry, and introduction to circuit theory.
Environmental Engineering Option in Civil Engineering
Transportation Option in Civil Engineering
Sustainability Option in Civil Engineering
5-year Bachelor's/Master's programs – The department recently enacted options for both general CE and EE option students to complete a Master of Science degree in one additional year. A letter of intent and a 3.0 GPA are required, but application to the graduate school and the GRE are not. Three options are currently available:
B.S./M.S. in Civil Engineering with Structural Engineering Focus
B.S./M.S. in Civil Engineering with Environmental Engineering Focus
B.S./M.S. in Civil Engineering with Transportation Engineering Focus

Computer Engineering (B.S.)
Offered through the Department of Electrical and Computer Engineering, the computer engineering program combines electronic system hardware design with computer software design. Students in the program are prepared in the theory and application of hardware and software design, computer networks, embedded systems, and very-large-scale integration (VLSI) circuit design and applications. Students can take electives in advanced topics, such as optical networks, broadband wireless networks, and technologies for the next generation of information systems. Students work on projects in modern, well-equipped VLSI and computer engineering laboratories. The capstone design sequence involves students in the design and fabrication of a large-scale digital system based on their area of interest. This program is accredited by the Engineering Accreditation Commission of ABET.

Computer Science (B.S.)
Technical tracks:
Computer Security and Information Assurance – for students interested in the design and implementation of secure computing infrastructures.
Artificial Intelligence (AI) – for students interested in artificial intelligence and its applications.
Computational Mathematics and Sciences
Computer Graphics and Digital Media – for students interested in computer graphics, visualization, animation, and digital media.
Data Science
Foundations and Theory – for students interested in exploring theory or developing strong foundations, perhaps in preparation for graduate work in computer science.
Software Engineering and Application Development – for students interested in the software engineering concepts and techniques required for the design and implementation of large software systems and applications.
Systems – for students interested in the software engineering concepts and techniques required for the design and implementation of large software systems and applications.
Individually designed technical track – designed by the student with the agreement of their advisor. It comprises at least three courses, not necessarily with CSci designations, but the content must meet the broad requirement of being closely related to the disciplines of computing.

Computer Science (B.A.)
Medical Preparation option – Students interested in combining a computer science major with preparation for admission to a school of medicine may choose the Medical Preparation options in the B.A. and B.S. programs, which add additional natural science material to the course requirements.
Bioinformatics option – The emerging field of bioinformatics combines the disciplines of computer science and biochemistry, and focuses on the use of computers to characterize the molecular components of living things. Students choosing this option in either the B.S. or the B.A. program study a number of subjects in biology and chemistry, including molecular biology and genetics, and take specific coursework in bioinformatics. Both options also meet the requirements for medical school admission, and the B.A. option in bioinformatics meets the requirements for a second major in biology.
Digital Media option – Digital media encompasses audio, video, the World Wide Web, and other technologies that can be used to create and distribute digital content. Graphics is the use of computers to create virtual worlds from which visuals can be generated and with which humans can interact. Students can choose between two degree options: the Bachelor of Science, which concentrates on the technology, and the Bachelor of Arts, which allows exploration of the use of digital media and computer graphics in the arts, sciences, engineering, business, medicine, and a number of other disciplines. The expanded breadth is made available through the opportunity to take a number of related courses from other departments.
Biomedical Computing option – Biomedical computing is at the intersection of health care and computer science. It involves all aspects of the analysis, management, and visualization of information in biomedical applications. The technology is based on computer science, but the field demands knowledge of the problems that need to be solved in medicine and health care.

Electrical Engineering (B.S.)
Offered through the Department of Electrical and Computer Engineering, the electrical engineering program focuses on signal processing; communication theory and practice; voice, data, video, and multimedia communication networks; very-large-scale integration (VLSI) circuit design and applications; and control systems. Students can take electives in advanced topics, such as optical networks, broadband wireless networks, and technologies for the next generation of information systems. The department's curriculum is complemented by well-staffed and well-equipped laboratories. Students are required to work on real-world projects throughout their education and complete a capstone design sequence with real-world design experience. This program is accredited by the Engineering Accreditation Commission of ABET.

Mechanical Engineering
The mechanical engineering program is one of the oldest SEAS programs. Most graduates easily secure their EIT designation. The specialized major options are as follows:
Aerospace Option in Mechanical Engineering – Leads to a bachelor's degree in mechanical engineering while preparing the student to work in the aerospace industry or to pursue graduate study in aerospace engineering. It provides a strong foundation in aerodynamics, airplane performance, propulsion, aerospace structures, orbital mechanics, spacecraft dynamics, and aircraft and spacecraft design.
Biomechanical Engineering Option – Leads to a bachelor's degree in mechanical engineering while preparing the student to work in the biomedical industry or to pursue graduate study in biomedical engineering. It provides a strong foundation in human anatomy and physiology, biomechanics, biomaterials, and the design of biomedical devices.
Patent Law Option in Mechanical Engineering – Leads to a bachelor's degree in mechanical engineering while providing a strong foundation in the fundamental principles of patent law and the influence of the US patent system on modern engineering design. A student in this option obtains a background that can lead to work as a technical specialist in a patent law firm or in the patent department of an industrial employer. The option also provides strong preparation for a subsequent law school degree in intellectual property.
Robotics Option in Mechanical Engineering

Systems Engineering
Systems engineering is a multidisciplinary field that applies engineering techniques and mathematical methods to improve planning and decision making. By observing systems composed of people, machines, and procedures, systems engineers attempt to model and predict the behavior of complex systems so that they can be (re)designed to operate optimally.

Graduate programs
As of the Spring 2017 semester, SEAS offers 12 master's programs, 8 doctoral programs, and 13 certificate programs. It also facilitates a number of combined B.S./M.S. programs for current GW undergraduate students, as well as accelerated master's programs with global partner institutions and special programs for working professionals in select tech companies and government agencies located in Washington, D.C. and Northern Virginia, such as Booz Allen Hamilton. As of Fall 2016, 834 graduate students were enrolled in a master's or doctoral program. Women make up 27% of SEAS graduate students, one of the highest proportions in the country. Degree programs are offered in the following fields of study and may be completed full-time or part-time on George Washington University's main campus in Foggy Bottom or at off-campus sites in Arlington, Virginia.

Biomedical Engineering (M.S., Ph.D.)
Offered through the Department of Biomedical Engineering, the M.S. and Ph.D. programs in biomedical engineering are designed to prepare students to apply engineering principles to problems in medicine and biology; to understand and model attributes of living systems; and to synthesize biomedical systems and devices. Students choose from two areas of focus: medical imaging or medical instrumentation. Students may complete a master's thesis or take extra courses in lieu of a thesis.

Civil and Environmental Engineering (M.S., Ph.D.)
The Department of Civil and Environmental Engineering offers graduate degree and certificate programs designed to help students explore solutions to issues such as improving access to clean water, designing intelligent transportation systems to alleviate traffic congestion, improving the crashworthiness of cars, and designing bridges to be more resistant to earthquakes. Students choose from six areas of focus:
Engineering mechanics
Environmental engineering
Geotechnical engineering
Structural engineering
Transportation safety engineering
Water safety engineering

Computer Engineering (M.S., Ph.D.)
Offered through the Department of Electrical and Computer Engineering, the Master of Science program in computer engineering offers up-to-date knowledge and skills in computer systems architecture and networking and in the rapidly growing use of superscalar microprocessors, real-time embedded systems, VLSI and ASIC design modules, digital signal processors, and networked computing platforms. Students learn sophisticated computer architecture and integrated circuit design techniques using industry-standard computer-aided design tools, and choose between two areas of focus: computer architecture and high-performance computing, or microelectronics and VLSI systems. The program offers a flexible schedule that includes courses in the late afternoon and evening, as well as a choice of thesis or non-thesis degree options. The doctoral program in computer engineering is designed to involve students in cutting-edge research in the areas of computer architecture and high-performance computing, or microelectronics and VLSI systems. The research interests of the faculty in the computer architecture and high-performance computing area span computer architecture, parallel processing, cloud computing, and high-performance and grid computing. In the microelectronics and VLSI design area, the faculty's interests include the design and modeling of electronic and nanoelectronic devices and systems; microfluidic devices integrated with electronic devices; the design of microelectromechanical systems (MEMS) for sensors and for RF-MEMS devices; micro- and nanoelectronic circuits with applications to sensors and biosensors; and techniques to develop CMOS integrated sensors and their interface circuits using analog and digital circuits. Students choose from the following areas of focus in selecting their coursework:
Computer architecture and high-performance computing
MEMS, electronics, and photonics (microelectronics and VLSI systems)

Computer Science (M.S., Ph.D.)
Offered through the Department of Computer Science, the M.S., Ph.D., and certificate programs in computer science are designed to equip students with excellent skills at the forefront of computing. Through research and teaching, the department contributes to computing breakthroughs that are fueling advances in medicine, communications, transportation, security, and other areas vital to society and the world. Students choose from the following areas of focus:
Algorithms and theory
Computer architecture, networks, parallel and distributed computing
Computer security and information assurance
Database and information retrieval systems
Machine intelligence and cognition
Multimedia, animation, graphics, and user interface
Software engineering and systems

Cybersecurity in Computer Science (M.S.)
Offered through the Department of Computer Science, the M.S. in Cybersecurity in Computer Science is designed to respond to the large and fast-growing need for technical cybersecurity experts both nationally and internationally. The first such degree offered in the D.C. area, the program provides up-to-date knowledge and skills in cybersecurity, a field of increasing importance to national security, the economy, and private citizens. Students take a combination of core courses focused on the design and analysis of algorithms, computer architectures, and advanced software paradigms, combined with courses focused on security (e.g., applied cryptography and computer network defense) and elective courses.
Additionally, the program is federally designated as a National Center of Academic Excellence in Information Assurance by the Department of Homeland Security and the National Security Agency. This recognition uniquely qualifies students for internships, scholarships, and job opportunities with the U.S. government in the cybersecurity field.

Cybersecurity Policy & Compliance (M.Eng.)
Offered through the off-campus division of the Department of Engineering Management and Systems Engineering (EMSE), the Master of Engineering in Cybersecurity Policy and Compliance (M.Eng.(CPC)) is a fully online cybersecurity master's degree for those seeking managerial positions leading an organization's cyber practices. The multidisciplinary curriculum incorporates a blend of engineering and computer science courses, making it suited both to those with a technical or engineering background and to project management professionals seeking to protect proprietary information in the age of cyber warfare and global corporate espionage. The program offers an engineering-management-focused course of study, providing an overview of cryptography, security systems, algorithms, and software paradigms, and is intended for those interested in the intersections between policy, business, and technology. Students of the program become familiar with industry-recognized frameworks and methodologies as they train for the fast-paced world of information security.

Data Analytics (M.S.)
Established in 2017, the Data Analytics program is jointly administered by the Departments of Computer Science (CS) and Engineering Management & Systems Engineering (EMSE). It is a terminal master's degree intended to provide research skills in big data for professionals seeking career advancement in corporate and government organizations.

Electrical Engineering (M.S., Ph.D.)
Offered through the Department of Electrical and Computer Engineering, the Master of Science program in electrical engineering is designed to help students understand and apply the principles of electrical engineering to communications, power and energy, and micro- and nano-electronics. Students can organize their coursework around specific research areas of the department, such as wireless/mobile communications, micro-electro-mechanical systems, magnetics, and remote sensing. Students in the doctoral program in electrical engineering conduct research in a variety of areas with the department's faculty, including communications and networks; electrical power and energy; electromagnetics, radiation systems, and microwave engineering; microelectronics and VLSI systems; and signal and image processing, systems and controls. Students choose from the following areas of focus in selecting their coursework:
Applied electromagnetics
Communications and networks
Electrical power and energy
Electronics, photonics, and MEMS (VLSI systems and microelectronics)
Signal and image processing, systems and controls

Engineering Management (M.S., Ph.D., D.Eng.)
Offered through the Department of Engineering Management and Systems Engineering (EMSE), the M.S., Ph.D., D.Eng., and certificate programs in engineering management are designed to prepare technical managers who need a broad education in order to keep an organization operating efficiently and working ahead of its competitors. The program provides a graduate education in the latest management techniques for technical and scientific organizations. Students choose from five areas of focus:
Crisis, emergency and risk management
Economics, finance and cost engineering
Engineering and technology management
Environmental and energy management
Knowledge and information management

Mechanical and Aerospace Engineering (M.S., Ph.D.)
Programs offered include:
Aerospace engineering
Design of mechanical engineering systems
Fluid mechanics, thermal sciences, and energy
Industrial engineering
Solid mechanics and materials science
Structures and dynamics
Robotics, mechatronics, and controls

Regulatory Biomedical Engineering (M.Eng.)

Systems Engineering (M.S., Ph.D.)
The following options are offered:
Operations research and management science
Systems engineering and integration
Enterprise information assurance

Telecommunications Engineering (M.S.)
Offered through the Department of Electrical and Computer Engineering, the Master of Science in telecommunications engineering is geared toward the practicing or aspiring telecommunications engineer. The program provides students with a foundation in the fundamentals of telecommunications engineering, including topics such as transmission systems, computer networking, network architectures and protocols, and telecommunications security protocols. Optionally, students may take courses on optical networking, wireless networking, cloud computing, and other current topics.

Certificate programs
Certificate programs are offered in several areas. Each program consists of 4–6 courses to be completed within one calendar year or at the student's desired pace. Students enrolled in a master's or doctoral program may also complete a certificate in conjunction with their degree.

Notable alumni
Many of the school's former students have gone on to distinguished careers in both the private and public sectors. Notable alumni include Ian Waitz (Vice Chancellor of the Massachusetts Institute of Technology), Stanley Crane (CEO of Southern Railway (U.S.) and member of the National Academy of Engineering), Mario Cardullo (inventor of read-write radio-frequency identification), and Christopher J. Wiernicki (CEO of the American Bureau of Shipping), among numerous others.

References

External links
School of Engineering and Applied Science web site

Engineering
Engineering schools and colleges in the United States
Engineering universities and colleges in Washington, D.C.
1884 establishments in Washington, D.C.
Educational institutions established in 1884
78597
https://en.wikipedia.org/wiki/Amazonomachy
Amazonomachy
In Greek mythology, Amazonomachy (English translation: "Amazon battle"; plural, Amazonomachiai () or Amazonomachies) was one of various mythical battles between the ancient Greeks and the Amazons, a nation of all-female warriors. Many of the myths portrayed were that of Heracles' ninth labor, which was the retrieval of the girdle of Hippolyta, Queen of the Amazons; and of Theseus' abduction of Hippolyta (or Antiope), whom he claimed as his wife, sparking the Attic War. Another famous scene portrayed is that of Achilles' victorious battle against Penthesilea during the Trojan war. The subject was popular in ancient Greek art and Roman art. Symbolism of Amazonomachy Amazonomachy represents the Greek ideal of civilization. The Amazons were portrayed as a savage and barbaric race, while the Greeks were portrayed as a civilized race of human progress. According to Bruno Snell's view of Amazonomachy: For the Greeks, the Titanomachy and the battle against the giants remained symbols of the victory which their own world had won over a strange universe; along with the battles against the Amazons and Centaurs they continue to signalize the Greek conquest of everything barbarous, of all monstrosity and grossness. Amazonomachy is also seen as the rise of feminism in Greek culture. In Quintus Smyrnaeus's The Fall of Troy, Penthesilea, an Amazonian queen, who joined on the side of the Trojans during the Trojan war, was quoted at Troy, saying: Not in strength are we inferior to men; the same our eyes, our limbs the same; one common light we see, one air we breathe; nor different is the food we eat. What then denied to us hath heaven on man bestowed? According to Josine Blok, Amazonomachy provides two different contexts in defining a Greek hero. Either the Amazons are one of the disasters from which the hero rids the country after his victory over a monster; or they are an expression of the underlying Attis motif, in which the hero shuns human sexuality in marriage and procreation. In the 5th century, the Achaemenid Empire of Persia began a series of invasions against Ancient Greece. Because of this, some scholars believe that on most 5th-century Greek art, the Persians were shown allegorically, through the figure of centaurs and Amazons. In art Warfare was a very popular subject in Ancient Greek art, represented in grand sculptural scenes on temples but also countless Greek vases. On the whole fictional and mythical battles were preferred as subjects to the many historical ones available. Along with scenes from Homer and the Gigantomachy, a battle between the race of Giants and the Olympian gods, the Amazonomachy was a popular choice. Later, in Roman art, there are many depictions on the sides of later Roman sarcophagi, when it became the fashion to depict elaborate reliefs of battle scenes. Scenes were also shown on mosaics. A trickle of medieval depictions increased at the Renaissance, and especially in the Baroque period. West metopes of Parthenon Kalamis, a Greek sculptor, is attributed to designing the west metopes of the Parthenon, a temple on the Athenian Acropolis dedicated to the Greek goddess Athena. The west metopes of the Parthenon depict a battle between Greeks and Amazons. Despite its mutilated state, scholars generally concur that the scene represents the Amazon invasion of Attica. Shield of Athena Parthenos The shield of Athena Parthenos, sculpted by Phidias, depicts a fallen Amazon. 
Athena Parthenos was a massive chryselephantine sculpture of Athena, the main cult image inside the Parthenon at Athens, which is now lost, though known from descriptions and small ancient copies. Bassae Frieze in Temple of Apollo The Bassae Frieze in the Temple of Apollo at Bassae contains a number of slabs portraying Trojan Amazonomachy and Heraclean Amazonomachy. The Trojan Amazonomachy spans three blocks, displaying the eventual death of Penthesilea at the hands of Achilles. The Heraclean Amazonomachy spans eight blocks and represents the struggle of Heracles to seize the belt of the Amazon queen Hippolyta. Amazonomachy frieze from Mausoleum at Halicarnassus Several sections of an Amazonomachy frieze from the Mausoleum at Halicarnassus are now in the British Museum. One part depicts Heracles grasping an Amazon by the hair, while holding a club behind his head in a striking manner. This Amazon is believed to be the Amazon queen Hippolyta. Behind Heracles is a scene of a Greek warrior clashing shields with an Amazon warrior. Another slab displays a mounted Amazon charging at a Greek, who is defending himself with a raised shield. This Greek is believed to be Theseus, who joined Heracles during his labours. Other Micon painted the Amazonomachy on the Stoa Poikile of the Ancient Agora of Athens, which is now lost. Phidias depicted Amazonomachy on the footstool of the chryselephantine statue of Zeus at Olympia. In 2018, archaeologists discovered relief-decorated shoulder boards made from bronze that were part of a breastplate of a Greek warrior at a Celtic sacrificial place near the village of Slatina nad Bebravou in Slovakia. Deputy of director of Slovak Archaeological Institute said that it is the oldest original Greek art relic in the area of Slovakia. Researchers analyzed the pieces and determined they were once part of a relief that depicted the Amazonomachy. Gallery See also For discussion of such battles, see Amazons in historiography For the most famous Amazonomachy, see Attic War For representation of Amazonomachies as depicted in ancient visual art, see Amazons in art and Warfare in Ancient Greek Art Amazon statue types Gigantomachy Centauromachy References Further reading Weitzmann, Kurt, ed., Age of spirituality: late antique and early Christian art, third to seventh century, no. 200, 1979, Metropolitan Museum of Art, New York, ; full text available online from The Metropolitan Museum of Art Libraries External links Fragment of a marble shield Slabs from the Amazonomachy frieze from the Mausoleum at Halikarnassos Amazonmachy: The Art of Progress Hellenistic art War in mythology Ancient Greek military art War art Iconography Women in war in Greece
18959902
https://en.wikipedia.org/wiki/CD-ROM
CD-ROM
A CD-ROM (, compact disc read-only memory) is a pre-pressed optical compact disc that contains data. Computers can read—but not write or erase—CD-ROMs, i.e. it is a type of read-only memory. Some CDs, called enhanced CDs, hold both computer data and audio with the latter capable of being played on a CD player, while data (such as software or digital video) is only usable on a computer (such as ISO 9660 format PC CD-ROMs). During the 1990s, CD-ROMs were popularly used to distribute software and data for computers and fifth generation video game consoles. History The earliest theoretical work on optical disc storage was done by independent researchers in the United States including David Paul Gregg (1958) and James Russel (1965–1975). In particular, Gregg's patents were used as the basis of the LaserDisc specification that was co-developed between MCA and Philips after MCA purchased Gregg's patents, as well as the company he founded, Gauss Electrophysics. The LaserDisc was the immediate precursor to the CD, with the primary difference being that the LaserDisc encoded information through an analog process whereas the CD used digital encoding. Key work to digitize the optical disc was performed by Toshi Doi and Kees Schouhamer Immink during 1979–1980, who worked on a taskforce for Sony and Phillips. The result was the Compact Disc Digital Audio (CD-DA), defined on 1980. The CD-ROM was later designed an extension of the CD-DA, and adapted this format to hold any form of digital data, with an initial storage capacity of 553 MB. Sony and Philips created the technical standard that defines the format of a CD-ROM in 1983, in what came to be called the Yellow Book. The CD-ROM was announced in 1984 and introduced by Denon and Sony at the first Japanese COMDEX computer show in 1985. In November, 1985, several computer industry participants including Microsoft, Philips, Sony, Apple and Digital Equipment Corporation met to create a specification to define a file system format for CD-ROMs. The resulting specification, called the High Sierra format, was published in May 1986. It was eventually standardized, with a few changes, as the ISO 9660 standard in 1988. One of the first CD-ROM products to be made available to the public was the Grolier Academic Encyclopedia, presented at the Microsoft CD-ROM Conference in March 1986. CD-ROMs began being used in home video game consoles starting with the PC Engine CD-ROM² (TurboGrafx-CD) in 1988, while CD-ROM drives had also become available for home computers by the end of the 1980s. In 1990, Data East demonstrated an arcade system board that supported CD-ROMs, similar to 1980s laserdisc video games but with digital data, allowing more flexibility than older laserdisc games. By early 1990, about 300,000 CD-ROM drives were sold in Japan, while 125,000 CD-ROM discs were being produced monthly in the United States. Some computers which were marketed in the 1990s were called "multimedia" computers because they incorporated a CD-ROM drive, which allowed for the delivery of several hundred megabytes of video, picture, and audio data. CD-ROM discs Media CD-ROMs are identical in appearance to audio CDs, and data are stored and retrieved in a very similar manner (only differing from audio CDs in the standards used to store the data). Discs are made from a 1.2 mm thick disc of polycarbonate plastic, with a thin layer of aluminium to make a reflective surface. 
The most common size of CD-ROM is 120 mm in diameter, though the smaller Mini CD standard with an 80 mm diameter, as well as shaped compact discs in numerous non-standard sizes and molds (e.g., business card-sized media), also exist. Data is stored on the disc as a series of microscopic indentations called "pits", with the non-indented spaces between them called "lands". A laser is shone onto the reflective surface of the disc to read the pattern of pits and lands. Because the depth of the pits is approximately one-quarter to one-sixth of the wavelength of the laser light used to read the disc, the reflected beam's phase is shifted in relation to the incoming beam, causing destructive interference and reducing the reflected beam's intensity. This is converted into binary data. Standard Several formats are used for data stored on compact discs, known as the Rainbow Books. The Yellow Book, created in 1983, defines the specifications for CD-ROMs, standardized in 1988 as the ISO/IEC 10149 standard and in 1989 as the ECMA-130 standard. The CD-ROM standard builds on top of the original Red Book CD-DA standard for CD audio. Other standards, such as the White Book for Video CDs, further define formats based on the CD-ROM specifications. The Yellow Book itself is not freely available, but the standards with the corresponding content can be downloaded for free from ISO or ECMA. There are several standards that define how to structure data files on a CD-ROM. ISO 9660 defines the standard file system for a CD-ROM. ISO 13490 is an improvement on this standard which adds support for non-sequential write-once and re-writeable discs such as CD-R and CD-RW, as well as multiple sessions. The ISO 13346 standard was designed to address most of the shortcomings of ISO 9660, and a subset of it evolved into the UDF format, which was adopted for DVDs. A bootable CD specification, called El Torito, was issued in January 1995, to make a CD emulate a hard disk or floppy disk. Manufacture Pre-pressed CD-ROMs are mass-produced by a process of stamping where a glass master disc is created and used to make "stampers", which are in turn used to manufacture multiple copies of the final disc with the pits already present. Recordable (CD-R) and rewritable (CD-RW) discs are manufactured by a different method, whereby the data are recorded on them by a laser changing the properties of a dye or phase transition material in a process that is often referred to as "burning". CD-ROM format Data stored on CD-ROMs follows the standard CD data encoding techniques described in the Red Book specification (originally defined for audio CD only). This includes cross-interleaved Reed–Solomon coding (CIRC), eight-to-fourteen modulation (EFM), and the use of pits and lands for coding the bits into the physical surface of the CD. The structures used to group data on a CD-ROM are also derived from the Red Book. Like audio CDs (CD-DA), a CD-ROM sector contains 2,352 bytes of user data, composed of 98 frames, each consisting of 33 bytes (24 bytes for the user data, 8 bytes for error correction, and 1 byte for the subcode). Unlike audio CDs, the data stored in these sectors corresponds to any type of digital data, not audio samples encoded according to the audio CD specification. To structure, address and protect this data, the CD-ROM standard further defines two sector modes, Mode 1 and Mode 2, which describe two different layouts for the data inside a sector. 
A track (a group of sectors) inside a CD-ROM only contains sectors in the same mode, but if multiple tracks are present in a CD-ROM, each track can have its sectors in a different mode from the rest of the tracks. They can also coexist with audio CD tracks, which is the case of mixed mode CDs. Sector structure Both Mode 1 and 2 sectors use the first 16 bytes for header information, but differ in the remaining 2,336 bytes due to the use of error correction bytes. Unlike an audio CD, a CD-ROM cannot rely on error concealment by interpolation; a higher reliability of the retrieved data is required. To achieve improved error correction and detection, Mode 1, used mostly for digital data, adds a 32-bit cyclic redundancy check (CRC) code for error detection, and a third layer of Reed–Solomon error correction using a Reed-Solomon Product-like Code (RSPC). Mode 1 therefore contains 288 bytes per sector for error detection and correction, leaving 2,048 bytes per sector available for data. Mode 2, which is more appropriate for image or video data (where perfect reliability may be a little bit less important), contains no additional error detection or correction bytes, having therefore 2,336 available data bytes per sector. Note that both modes, like audio CDs, still benefit from the lower layers of error correction at the frame level. Before being stored on a disc with the techniques described above, each CD-ROM sector is scrambled to prevent some problematic patterns from showing up. These scrambled sectors then follow the same encoding process described in the Red Book in order to be finally stored on a CD. The following table shows a comparison of the structure of sectors in CD-DA and CD-ROMs: The net byte rate of a Mode-1 CD-ROM, based on comparison to CD-DA audio standards, is 44,100 Hz × 16 bits/sample × 2 channels × 2,048 / 2,352 / 8 = 150 KB/s (150 × 210) . This value, 150 kbit/s, is defined as "1× speed". Therefore, for Mode 1 CD-ROMs, a 1× CD-ROM drive reads 150/2 = 75 consecutive sectors per second. The playing time of a standard CD is 74 minutes, or 4,440 seconds, contained in 333,000 blocks or sectors. Therefore, the net capacity of a Mode-1 CD-ROM is 650 MB (650 × 220). For 80 minute CDs, the capacity is 703 MB. CD-ROM XA extension CD-ROM XA is an extension of the Yellow Book standard for CD-ROMs that combines compressed audio, video and computer data, allowing all to be accessed simultaneously. It was intended as a bridge between CD-ROM and CD-i (Green Book) and was published by Sony and Philips, and backed by Microsoft, in 1991, first announced in September 1988. "XA" stands for eXtended Architecture. CD-ROM XA defines two new sector layouts, called Mode 2 Form 1 and Mode 2 Form 2 (which are different from the original Mode 2). XA Mode 2 Form 1 is similar to the Mode 1 structure described above, and can interleave with XA Mode 2 Form 2 sectors; it is used for data. XA Mode 2 Form 2 has 2,324 bytes of user data, and is similar to the standard Mode 2 but with error detection bytes added (though no error correction). It can interleave with XA Mode 2 Form 1 sectors, and it is used for audio/video data. Video CDs, Super Video CDs, Photo CDs, Enhanced Music CDs and CD-i use these sector modes. 
The following table shows a comparison of the structure of sectors in CD-ROM XA modes: Disc images When a disc image of a CD-ROM is created, this can be done in either "raw" mode (extracting 2,352 bytes per sector, independent of the internal structure), or obtaining only the sector's useful data (2,048/2,336/2,352/2,324 bytes depending on the CD-ROM mode). The file size of a disc image created in raw mode is always a multiple of 2,352 bytes (the size of a block). Disc image formats that store raw CD-ROM sectors include CCD/IMG, CUE/BIN, and MDS/MDF. The size of a disc image created from the data in the sectors will depend on the type of sectors it is using. For example, if a CD-ROM mode 1 image is created by extracting only each sector's data, its size will be a multiple of 2,048; this is usually the case for ISO disc images. On a 74-minute CD-R, it is possible to fit larger disc images using raw mode, up to 333,000 × 2,352 = 783,216,000 bytes (~747 MB). This is the upper limit for raw images created on a 74 min or ≈650 MB Red Book CD. The 14.8% increase is due to the discarding of error correction data. Capacity CD-ROM capacities are normally expressed with binary prefixes, subtracting the space used for error correction data. The capacity of a CD-ROM depends on how close the outward data track is extended to the disc's outer rim. A standard 120 mm, 700 MB CD-ROM can actually hold about 703 MB of data with error correction (or 847 MB total). In comparison, a single-layer DVD-ROM can hold 4.7 GB (4.7 × 109) of error-protected data, more than 6 CD-ROMs. CD-ROM drives CD-ROM discs are read using CD-ROM drives. A CD-ROM drive may be connected to the computer via an IDE (ATA), SCSI, SATA, FireWire, or USB interface or a proprietary interface, such as the Panasonic CD interface, LMSI/Philips, Sony and Mitsumi standards. Virtually all modern CD-ROM drives can also play audio CDs (as well as Video CDs and other data standards) when used with the right software. Laser and optics CD-ROM drives employ a near-infrared 780 nm laser diode. The laser beam is directed onto the disc via an opto-electronic tracking module, which then detects whether the beam has been reflected or scattered. Transfer rates Original speed CD-ROM drives are rated with a speed factor relative to music CDs. If a CD-ROM is read at the same rotational speed as an audio CD, the data transfer rate is 150 kbit/s, commonly called "1×" (with constant linear velocity, short "CLV"). At this data rate, the track moves along under the laser spot at about 1.2 m/s. To maintain this linear velocity as the optical head moves to different positions, the angular velocity is varied from about 500 rpm at the inner edge to 200 rpm at the outer edge. The 1× speed rating for CD-ROM (150 kbit/s) is different from the 1× speed rating for DVDs (1.32 MB/s). Speed advancements By increasing the speed at which the disc is spun, data can be transferred at greater rates. For example, a CD-ROM drive that can read at 8× speed spins the disc at 1600 to 4000 rpm, giving a linear velocity of 9.6 m/s and a transfer rate of 1200 kbit/s. Above 12× speed most drives read at Constant angular velocity (CAV, constant rpm) so that the motor is not made to change from one speed to another as the head seeks from place to place on the disc. In CAV mode the "×" number denotes the transfer rate at the outer edge of the disc, where it is a maximum. 
20× was thought to be the maximum speed due to mechanical constraints until Samsung Electronics introduced the SCR-3230, a 32x CD-ROM drive which uses a ball bearing system to balance the spinning disc in the drive to reduce vibration and noise. As of 2004, the fastest transfer rate commonly available is about 52× or 10,400 rpm and 7.62 MB/s. Higher spin speeds are limited by the strength of the polycarbonate plastic of which the discs are made. At 52×, the linear velocity of the outermost part of the disc is around 65 m/s. However, improvements can still be obtained using multiple laser pickups as demonstrated by the Kenwood TrueX 72× which uses seven laser beams and a rotation speed of approximately 10×. The first 12× drive was released in late 1996. Above 12× speed, there are problems with vibration and heat. CAV drives give speeds up to 30× at the outer edge of the disc with the same rotational speed as a standard (constant linear velocity, CLV) 12×, or 32× with a slight increase. However, due to the nature of CAV (linear speed at the inner edge is still only 12×, increasing smoothly in-between) the actual throughput increase is less than 30/12; in fact, roughly 20× average for a completely full disc, and even less for a partially filled one. Physical limitations Problems with vibration, owing to limits on achievable symmetry and strength in mass-produced media, mean that CD-ROM drive speeds have not massively increased since the late 1990s. Over 10 years later, commonly available drives vary between 24× (slimline and portable units, 10× spin speed) and 52× (typically CD- and read-only units, 21× spin speed), all using CAV to achieve their claimed "max" speeds, with 32× through 48× most common. Even so, these speeds can cause poor reading (drive error correction having become very sophisticated in response) and even shattering of poorly made or physically damaged media, with small cracks rapidly growing into catastrophic breakages when centripetally stressed at 10,000–13,000 rpm (i.e. 40–52× CAV). High rotational speeds also produce undesirable noise from disc vibration, rushing air and the spindle motor itself. Most 21st-century drives allow forced low speed modes (by use of small utility programs) for the sake of safety, accurate reading or silence, and will automatically fall back if numerous sequential read errors and retries are encountered. Workarounds Other methods of improving read speed were trialled such as using multiple optical beams, increasing throughput up to 72× with a 10× spin speed, but along with other technologies like 90~99 minute recordable media, GigaRec and double-density compact disc (Purple Book standard) recorders, their utility was nullified by the introduction of consumer DVD-ROM drives capable of consistent 36× equivalent CD-ROM speeds (4× DVD) or higher. Additionally, with a 700 MB CD-ROM fully readable in under 2½ minutes at 52× CAV, increases in actual data transfer rate are decreasingly influential on overall effective drive speed when taken into consideration with other factors such as loading/unloading, media recognition, spin up/down and random seek times, making for much decreased returns on development investment. A similar stratification effect has since been seen in DVD development where maximum speed has stabilised at 16× CAV (with exceptional cases between 18× and 22×) and capacity at 4.3 and 8.5 GB (single and dual layer), with higher speed and capacity needs instead being catered to by Blu-ray drives. 
Speed ratings CD-Recordable drives are often sold with three different speed ratings, one speed for write-once operations, one for re-write operations, and one for read-only operations. The speeds are typically listed in that order; i.e. a 12×/10×/32× CD drive can, CPU and media permitting, write to CD-R discs at 12× speed (1.76 MB/s), write to CD-RW discs at 10× speed (1.46 MB/s), and read from CDs at 32× speed (4.69 MB/s). Speed table Copyright issues Software distributors, and in particular distributors of computer games, often make use of various copy protection schemes to prevent software running from any media besides the original CD-ROMs. This differs somewhat from audio CD protection in that it is usually implemented in both the media and the software itself. The CD-ROM itself may contain "weak" sectors to make copying the disc more difficult, and additional data that may be difficult or impossible to copy to a CD-R or disc image, but which the software checks for each time it is run to ensure an original disc and not an unauthorized copy is present in the computer's CD-ROM drive. Manufacturers of CD writers (CD-R or CD-RW) are encouraged by the music industry to ensure that every drive they produce has a unique identifier, which will be encoded by the drive on every disc that it records: the RID or Recorder Identification Code. This is a counterpart to the Source Identification Code (SID), an eight character code beginning with "IFPI" that is usually stamped on discs produced by CD recording plants. See also ATA Packet Interface (ATAPI) Optical recording (history of) CD/DVD authoring Compact Disc Digital Audio Computer hardware DVD-Audio DVD-ROM MultiLevel Recording Optical disc drive Phase-change Dual Thor-CD DVP Media, patent holder for self-loading and self configuring CD-ROM technology List of optical disc manufacturers Notes References Compact disc Rotating disc computer storage media Ecma standards Optical computer storage Optical computer storage media Rainbow Books Video game distribution
6231532
https://en.wikipedia.org/wiki/Genigraphics
Genigraphics
Genigraphics is a large-format printing service bureau specializing in providing poster session services to medical and scientific conferences throughout the US and Canada. The company began in 1973 as a division of General Electric. History Genigraphics began as a computer graphics system, developed by General Electric in the late 1960s, for NASA to use in space flight simulation. The technologies thus developed provided a foundation for the company's expansion into the commercial market. The Computed Images System & Services division (CISS, to become Genigraphics Corporation) of GE delivered the first presentation graphics system to Amoco Oil's corporate headquarters in 1973. It was named the 100 Series, and was based on DEC's PDP 11 series of mini computer systems. The first Genigraphics systems (100 Series and 100A Series) used an array of buttons, dials, knobs and joysticks, along with a built in keyboard, as the means of user interface. The PDP-11/40 computer was housed in a tall cabinet and used random access magnetic tape drives (DECtape) for storing completed presentations. The graphics generator (Forox recorder) was capable of outputting 2,000 line resolution, suitable for 35mm and 72mm film and large sheet film positive using larger cassettes for recording. 4000 and 8000 line resolution was later achieved with duplex scanning and 4x scanning by modifying to the Forox recorder's settings menu. Subsequent models (100B,C,D,D+ and D+/GVP) replaced the knobs and dials with an on screen, text based menu system, a graphics tablet and a pen. The pen/tablet combination gave way to a mouse like device in later models, and served to provide the interface with the graphics tools. User interaction with the computer for functions such as media initialization or modem to modem data transfer required a DECwriter serial terminal. In 1982, GE divested the Genigraphics division along with a host of other "non essential" business units (Genitext, Geniponics) and Genigraphics Corporation was born. Shortly after the divestiture, the headquarters of Genigraphics was moved from Liverpool, New York to Saddle Brook, New Jersey. Major success followed as the company grew exponentially over the next few years selling both systems and slide creation services. Genigraphics film recorders produced high-resolution digital images on 35mm film. The computer-generated scenes for The Last Starfighter were calculated on a Cray X-MP supercomputer and mastered with a Genigraphics film recorder. At its peak, Genigraphics Corporation employed roughly 300 people in 24 offices worldwide, with revenues upwards of $70 million annually. By the late 1980s Genigraphics saw demand for its proprietary systems dwindle and began selling the MASTERPIECE 8770 film recorder and GRAFTIME software as a peripheral for DEC Vaxes, IBM PC AT’s, and Mac NuBus machines. But the MASTERPIECE film recorder proved too expensive to sell in volume. In 1988, the company began a partnership with Microsoft to help develop the PowerPoint software. In exchange, every copy of PowerPoint included a “Send to Genigraphics” link to have files sent to a Genigraphics service bureau to be produced as 35mm slides. This partnership continued until 2001. In 1989, after three years of flat revenue, Genigraphics sold its hardware business in order to focus on its service bureau business and partnership with Microsoft via PowerPoint. 
In 1994, all assets of Genigraphics, including equipment, software development, in-house artwork, trademarks, and rights to the Microsoft partnership, were sold to InFocus Corporation of Wilsonville, Oregon who continued to operate under the Genigraphics brand name. The twenty-four service bureaus were consolidated to a 20,000 square foot facility next to the FedEx hub in Memphis, Tennessee. This allowed PowerPoint slide orders to be received until 10pm and delivered across the United States by the following morning. In 1995, InFocus registered www.genigraphics.com and was among the first to offer a form of ecommerce allowing 35mm slides, color prints and transparencies, printed booklets, and digital projectors to be purchased online. In 1998, then current management bought Genigraphics from InFocus and have operated it continuously ever since as Genigraphics LLC. That same year, InFocus projector rentals were added to the “Send to Genigraphics” link in PowerPoint and Genigraphics became the rental and repair center for all InFocus national accounts. It also marked Genigraphics entry into the new industry of large format printing; leveraging their knowledge of, and access to, PowerPoint programming code to develop a proprietary printer driver to output directly to an Epson 9500 wide format printer. At the time, Genigraphics was the exclusive 35mm slide vendor for all Kinko’s stores in the United States and poster printing was added to the arrangement. In 2003, Genigraphics closed their 35mm slide E6 photo lab – one of the last high-volume commercial E6 labs in the US – and expanded their large format printing capabilities. Since 2003, Genigraphics has become a major player in the poster session market, providing printing and on-site services to medical and scientific conferences throughout the US and Canada. As of February 2019, over 150,000 medical or scientific ‘ePosters’ are made available through their ResearchPosters.com archive service. Partnership with Microsoft and development of PowerPoint As presentations began to be created on personal computers in the late 80’s, Genigraphics sought presentation software partners in Silicon Valley who would be interested in sending files to Genigraphics via dial-up modem to be produced on 35mm slides. In 1987, Michael Beetner, Director of Marketing Planning for Genigraphics, met with Robert Gaskins, head of Microsoft's Graphics Business Unit, who was leading the development of the newly released PowerPoint software. A joint development agreement between Microsoft and Genigraphics was agreed upon and announced at Mac World 1988. According to Erica Robles-Anderson and Patrik Svensson, "It would be hard to overestimate Genigraphics’ influence on PowerPoint. PowerPoint 2.0 was designed for Genigraphics film recorders. It shipped with Genigraphics color palettes, schemes, and the distinctively Genigraphics color-gradient backgrounds. The application contained a ‘Send to Genigraphics’ menu item that wrote the presentation to floppy disk or transmitted the order directly via modem. Within three and a half months PowerPoint orders accounted for ten percent of revenue at Genigraphics service centers. PowerPoint 3.0 was even more intimately dependent upon Genigraphics. The software incorporated a collection of clip art images and symbols that had been produced by hundreds of artists at dozens of service centers across tens of thousands of presentations. Genigraphics artists designed PowerPoint 3.0 colors, templates, and sample presentations. 
The software even used Genigraphics (rather than Excel) chart style. Bar charts were rendered two-dimensionally with apparent thickness added to make them seemingly recede from the axes. The technique made it easier for viewers to compare bar heights and estimate values from axis ticks and labels. Pie charts were handled analogously. Microsoft paid Genigraphics to produce more than 500 clip art drawings and symbols used in Microsoft programs.” In exchange for Genigraphics development efforts, Microsoft included a “Send to Genigraphics” link in every copy of PowerPoint through the 10.0 version (2000/2001). The arrangement came to an end when Microsoft restructured as a result of anti-trust lawsuits. References External links Official site Computer graphics
8968314
https://en.wikipedia.org/wiki/GatorBox
GatorBox
The GatorBox is a LocalTalk-to-Ethernet bridge, a router used on Macintosh-based networks to allow AppleTalk communications between clients on LocalTalk and Ethernet physical networks. The GatorSystem software also allowed TCP/IP and DECnet protocols to be carried to LocalTalk-equipped clients via tunneling, providing them with access to these normally Ethernet-only systems. When the GatorBox is running GatorPrint software, computers on the Ethernet network can send print jobs to printers on the LocalTalk network using the 'lpr' print spool command. When the GatorBox is running GatorShare software, computers on the LocalTalk network can access Network File System (NFS) hosts on Ethernet. Specifications The original GatorBox (model: 10100) is a desktop model that has a 10 MHz Motorola 68000 Cpu, 1MB RAM, 128k EPROM for boot program storage, 2 kB NVRAM for configuration storage, LocalTalk Mini-DIN-8 connector, Serial port Mini-DIN-8 connector, BNC connector, AUI connector, and is powered by an external power supply (16VAC 1 A transformer that is connected by a 2.5 mm plug). This model requires a software download when it is powered on to be able to operate. The GatorBox CS (model: 10101) is a desktop model that uses an internal power supply (120/240 V, 1.0 A, 50–60 Hz). The GatorMIM CS is a media interface module that fits in a Cabletron Multi-Media Access Center (MMAC). The GatorBox CS/Rack (model: 10104) is a rack-mountable version of the GatorBox CS that uses an internal power supply (120/240 V, 1.0 A, 50–60 Hz). The GatorStar GXM integrates the GatorMIM CS with a 24 port LocalTalk repeater. The GatorStar GXR integrates the GatorBox CS/Rack with a 24 port LocalTalk repeater. This model does not have a BNC connector and the serial port is a female DE-9 connector. All "CS" models have 2MB of memory and can boot from images of the software that have been downloaded into the EPROM using the GatorInstaller application. Software There are three disks in the GatorBox software package. Note that the content of the disks for an original GatorBox is different from that of the GatorBox CS models. Configuration - contains GatorKeeper, MacTCP folder and either GatorInstaller (for CS models) or GatorBox TFTP and GatorBox UDP-TFTP (for original GatorBox model) Application - contains GatorSystem, GatorPrint or GatorShare, which is the software that runs in the GatorBox. The application software for the GatorBox CS product family has a "CS" at the end of the filename. GatorPrint includes GatorSystem functionality. GatorShare includes GatorSystem and GatorPrint functionality. Network Applications - NCSA Telnet, UnStuffit Software Requirements The GatorKeeper 2.0 application requires Macintosh System version 6.0.2 up to 7.5.1 and Finder version 6.1 (or later) MacTCP (not Open Transport) See also Kinetics FastPath Line Printer Daemon protocol – Print Spooling LocalTalk-to-Ethernet bridge – Other LocalTalk-to-Ethernet bridges/routers MacIP – TCP/IP Gateway References External links GatorBox CS configuration information Internet Archive copy of a configuration guide produced by the University of Illinois Juiced.GS magazine Volume 10, Issue 4 (Dec 2005) – contains an article on how to set up a GatorBox for use with an Apple IIgs Software and scanned manuals for the GatorBox and GatorBox CS Networking hardware
1758239
https://en.wikipedia.org/wiki/Maximo%20%28software%29
Maximo (software)
Maximo is enterprise asset management software originally developed by Project Software & Development (later MRO Software) with the first commercial version released in 1985. Purchased by IBM in 2005, it is now branded as IBM Maximo Asset Management. Maximo is designed to assist an organisation in managing its assets such as buildings, vehicles, fire extinguishers, equipment recording details such as details, maintenance schedules and participating in workflows to manage the assets. History Maximo was originally developed by Project Software & Development Inc (PSDI) which changed its name to MRO Software in 2000. The product was acquired by IBM and placed in the Tivoli Portfolio. Previously the Tivoli portfolio contained software that was related to the Information Technology sphere; this acquisition brought management of non Information Technology assets into the portfolio. With release 7.6 the program has been developed with options to be deployed in a multitenancy solution which has options for deployment to the cloud and delivery by Software as a Service (SaaS) solution. The program has traditionally been based on a character-based user experience known as the classic interface. Later versions have also provided a graphical interface termed by IBM as a Work Center based graphical interface. In 2020, the IBM Maximo 8.0 release IBM Maximo Application Suite, where all licensed software and add-ons are integrated into one suite of applications, such as ViiBE integration in late 2021. Architecture Maximo originated as a stand-alone solution running on an IBM Personal Computer. it is supported on specified versions of AIX, Linux and Windows Server, previously HP-UX and Solaris were also supported. Successive versions have developed to leverage newer technologies. Interfaces have been developed for automated interfacing feeds, integration with enterprise level database, resource and reporting tools. Pre-configured models for some industries are available including rail, nuclear and mining. Software is also available to assist in the interfacing with other software packages and protocols. Disputes Kalibrate Asset Management, a consultancy specializing in Maximo, is suing IBM for 500,000 dollars in a deal registration dispute. This suit was dismissed by the court on 23 March 2018 as Kalibrate had failed to demonstrate it had a contractual right to the commission, nor had it established a claim for misleading and deceptive conduct. Kalibrate was ordered to pay IBM's costs. References External links Business software IBM software
17640584
https://en.wikipedia.org/wiki/Ikonboard
Ikonboard
Ikonboard was a free online forum or Bulletin Board System developed in Perl, PHP for use on MySQL, PostgreSQL, Oracle, as well as flat file databases. Ikonboard History Early years Ikonboard was originally developed by Matt Mecham, with the first release being Ikonboard 0.9 beta in September 1999. Originally much of the development took place on ikondiscussion.com (no longer active) until it suffered a server crash in March 2001 and it was initially thought everything might have been lost, including early work on version 3. As a result, they switched to Ikonboard.com, and by April 2001 the majority had been recovered. During ownership of Jarvis Entertainment Group (JEG) In late April 2001 Ikonboard officially joined the Jarvis Network. At this point the latest available release was 2.1.8, with 3.0 in beta development. Matt Mecham sold Ikonboard to the Jarvis Entertainment Group for 50,000 shares of common stock in the publicly traded company, but said in a 2002 interview that he was unable to sell any of the shares. Soon after the release of 3.0 Mecham stopped developing Ikonboard, and departed to work on Invision Power Board. It is believed that Mecham had been paying for the domain during the time JEG owned Ikonboard. A year after his departure the DNS was altered to point to a holding page which redirected users to other software, during this time there was a legal dispute over the domain ownership. After the departure of Matt Mecham, owner of JEG made a public request for individuals to staff and develop Ikonboard. As a result, a group of individuals emerged with separate development and support teams formed. Amongst the others to later emerge were Sly, Camil, Quasi and these along with the others were seen as the 'second set' of coders. On June 12, 2002 Ikonboard 3.1 was made with plans for PHP versions being announced at the same time. Initially the release represented small bug fixes to Mechams' 3.0.x series. In October 2003 chairman (and CEO) of JEG John Jarvis was forced out of JEG. During ownership of Westlin Corporation In February 2004 the company changed its name to Westlin Corporation. The departure of the second set of coders was fairly low profile, with many of them departing to work on their own project Infinite Core Technology. In summer 2005 it emerged that former JEG chairman John Jarvis was taking legal action against Westlin, to regain ownership of Ikonboard amongst other things. During much of September that year the sites' server was down. Westlin declined to comment on the outage, prompting several staff members to quit. In October 2005, with Westlin still declining to talk to the support staff and developers, Ikonboard releases were no longer available for download. During ownership post-Westlin Corporation On October 28, 2005, after weeks of speculation, ownership of Ikonboard was officially transferred to John Jarvis. However the change of ownership resulted in the site being down until December, with a new parent company 'Pitboss Entertainment'. Though John Jarvis officially owned Ikonboard he had no visible presence on the site, with Joshua Johnson managing Ikonboard on his behalf. Soon after coming back online development on 3.1.3 resumed, being developed by a group of volunteers who referred to themselves as 'The Ikonboard Team' (and in addition provided support via the site's forums). Ikonboard 3.1.3 was publicly released on January 30, 2006, and 3.1.4 was subsequently released in February. 
On March 22, 2006, it was announced that the parent company Pitboss Entertainment no longer existed, and that all its assets (including Ikonboard) now came under 'Level 6 Studios', and subsequently related contact details were amended. Development was unaffected by the parent company changes, and 3.1.5 was publicly released on May 30, 2006. Additionally the Ikonboard team started development on '3.2' though their work was never publicly released under the Ikonboard name. On September 10, 2006 development on this release was discontinued as the Ikonboard team departed to work on IkonForums 1.0.0, their own independent project. With most of the developers gone efforts began on recruiting new developers, and plans for forthcoming releases were announced. In January 2007 the domain record contacts were revised to those of John Jarvis. This change implied that John Jarvis was still the owner of at least the domain, which appeared to contradict announcements made by those managing the site. For most of June 2007 Ikonboard suffered downtime, with 'DNS issues' being cited as the cause. On January 8, 2008 the (former) official German support site 'ikonboard.de - Reloaded' completed its transition in converting to IkonForums, though still provided support for Ikonboard. During ownership - Ikonboard Services Inc. As of September 2009, ownership of the ikonboard name (and possibly the software itself) passed back to Joshua Johnson. In June 2010, Ikonboard 3.1.5A was released, containing bug fixes and minor updates. In May 2012 it was reported that Ikonboard PHP was making progress, and is now used as the support forum. However, as of January 2017 no new software releases have been made, and their website has ceased to exist. The WHOIS records suggest the domain is no longer owned by either Ikonboard Services Inc or Joshua Johnson. Version History Ikonboard 0.9 The first released version of Ikonboard was 0.9 beta, in September 1999. It was written by Matthew Mecham in Perl and stored all data in flat text files. Compared to present-day message board software, such as Mecham's own Invision Power Board, it contained only basic features. Ikonboard 1.x Releases in the 1.x series of Ikonboard built on the original 0.9 beta code. It was still authored entirely by Mecham, and stored data in text files. Ikonboard 2.1.x Ikonboard's 2.1.x series incorporated some of the ideas and developments from the 1.x series, but started with a new codebase. It was with the 2.1.x series releases that Ikonboard really became popular on the web, perhaps due to its status as a free alternative to UBB. The 2.1.x releases of Ikonboard contained many of the features found in today's forum software. As with previous releases, Ikonboard 2.1.x releases were written in Perl and used a flat file storage system. While the actual code for the software continued to be written exclusively by Mecham, other members from the "Ikonboard community" assisted in providing things like images and documentation. Ikonboard 2.2 Ikonboard 2.2 was an effort to continue improving the 2.x series after Mecham shifted his efforts to Ikonboard 3.0, and development occurred alongside that of Ikonboard 3.0. Most of the development, promotion, and support efforts were focused on the 3.0 series, however, and there has never been a stable release of Ikonboard 2.2. Since late December 2005 the download for the 2.2 development was removed from the site, and effectively discontinued as the team work on the 3.x series. 
Ikonboard 3.0 Ikonboard 3.0 represented a "total rewrite" for Ikonboard. The board was still coded entirely in Perl; however, Mecham adopted an object-oriented style of coding, and did away with flat files in favor of storage abstraction, allowing data storage in a relational database such as MySQL or Oracle. After the release of Ikonboard 3.0, Mecham stopped developing Ikonboard. Further releases in the 3.0.x series represented small fixes and improvements on Mecham's 3.0.0 code. Ikonboard 3.1 As the new developers gained familiarity with the code, larger changes were made in the 3.1.x series releases. For a long time the stable version was 3.1.2a. Noticeable additions included the addition of a 'quick reply' box below topics. The next release was due to be 3.2 and was originally started in 2003. However, for various reasons work of this version stalled, this resulting in the next release of Ikonboard becoming 3.1.3 and was developed mainly by 2 coders. With the change of ownership around August 2005 the two coders departed leaving development on 3.1.3 at beta 09 stage. When Ikonboard returned online work on 3.1.3 continued, and Ikonboard 3.1.3 was publicly released on January 30, 2006. It contained many new features as well as fixing bugs and a couple security issues. The most significant new additions in this release was Humain Readable Image (HRI) on registration which keeps malicious users from mass registering, and an update centre in the adminCP. Within a few weeks 3.1.4 was released, this release fixed bugs found since the previous release. On 2 June 2006 another release was made, like the previous release 3.1.5 fixed bugs that had been found. June 2010 Ikonboard Released 3.1.5a with minor updates and bug releases. Ikonboard 3.2 Ikonboard 3.2 is yet to be publicly released, though already has a notable history. Originally development began in late April 2003, however work on this release stalled, and development made later became part of iB 3.1.3. Then development on this release was restarted on February 10, 2006 by 'The Ikonboard Team', however they departed on September 10, 2006 taking their work with them to IkonForums. Since their departure Ikonboards' developer has said that he'll be working on a 3.2 release. The release is believed to be similar to the previous development, with additional features similar to those found on Netgimmicks.com and Swarf.net. In January 2008 it was announced via mass email that '3.2 would be released in February 2008', and a topic in the support forums also announced its forums would be upgraded on 14 February to this release. However to date no release has been made and the topic has been removed. It has also been noticed that users enquiring about the 3.2 release via the forums have had their posting rights removed and the threads have been removed within a few days. Related products myIkonboard myIkonboard is a separate product originally created by Ikonboards' parent company. While not directly connected to Ikonboard, it was originally powered by a Perl version of Ikonboard with a custom installer. In March 2006 it was announced by Ikonboards' new owners that myIkonboard would return, powered by the Ikonboard Perl software. In June 2007 the website became unavailable, initially due to the domain expiring, though this was renewed after a few days. From June 2007 to February 2010 myIkonboard has been unavailable, with the a 'server not found' error message appearing when attempting to connect to the site. 
As of February 2010, myIkonboard is now back online, under control of ikonboard. Ikonboard PHP (Project Mongoose) In 2002 work on a PHP based version of Ikonboard was commissioned. This work was entitled "Project Mongoose" and was a complete re-write of Ikonboard in PHP. It was similar only in name and ownership. Once 'Project Mongoose' reached a release candidate stage the core functionality which included an advanced session-handler and template system, then owners (Jarvis Entertainment Group, Inc.) requested that the existing MyIkonboard service - which was based on the Perl-Based Ikonboard product and running on over a dozen Windows-based servers. Ikonboard PHP was adapted to a multi-user system that allowed thousands of individual forums to be created and hosted through a completely PHP-based service and a converter was created to automate the process of transferring forums that were hosted on the old Perl-based MyIkonboard service. There was never a public release of Ikonboard PHP because the developers left the company in February 2003. Ikonboard PHP Lite Ikonboard PHP Lite was a re-branded version of a forum that was purchased by John Jarvis (Then CEO of Jarvis Entertainment Group) through an eBay auction. The only involvement the Project Mongoose developers had with this product was building a rudimentary installer at the request of Jarvis Entertainment Group.) References Open Tech Support interview with Mecham (July 2001) Sitepoint interview with Mecham (September 2002) The Forum Insider interview with Ron Witmer (March 2004) The Admin Zone interview with David Munn (December 2005) Internet forum software Perl software
371700
https://en.wikipedia.org/wiki/Locale%20%28computer%20software%29
Locale (computer software)
In computing, a locale is a set of parameters that defines the user's language, region and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language code and a country/region code. On POSIX platforms such as Unix, Linux and others, locale identifiers are defined by ISO/IEC 15897, which is similar to the BCP 47 definition of language tags, but the locale variant modifier is defined differently, and the character set is included as a part of the identifier. It is defined in this format: . (For example, Australian English using the UTF-8 encoding is .) General locale settings These settings usually include the following display (output) format settings: Number format setting Character classification, case conversion settings Date-time format setting String collation setting Currency format setting Paper size setting Color setting The locale settings are about formatting output given a locale. So, the time zone information and daylight saving time are not usually part of the locale settings. Less usual is the input format setting, which is mostly defined on a per application basis. Programming and markup language support In these environments, C C++ Eiffel Java .NET Framework REBOL Ruby Perl PHP Python XML JSP JavaScript and other (nowadays) Unicode-based environments, they are defined in a format similar to BCP 47. They are usually defined with just ISO 639 (language) and ISO 3166-1 alpha-2 (2-letter country) codes. POSIX platforms On POSIX platforms, locale identifiers are defined by ISO/IEC 15897, which is similar to the BCP 47 definition of language tags, but the locale variant modifier is defined differently, and the character set is included as a part of the identifier. In the next example there is an output of command locale for Czech language (cs), Czech Republic (CZ) with explicit UTF-8 encoding: $ locale LANG=cs_CZ.UTF-8 LC_CTYPE="cs_CZ.UTF-8" LC_NUMERIC="cs_CZ.UTF-8" LC_TIME="cs_CZ.UTF-8" LC_COLLATE="cs_CZ.UTF-8" LC_MONETARY="cs_CZ.UTF-8" LC_MESSAGES="cs_CZ.UTF-8" LC_PAPER="cs_CZ.UTF-8" LC_NAME="cs_CZ.UTF-8" LC_ADDRESS="cs_CZ.UTF-8" LC_TELEPHONE="cs_CZ.UTF-8" LC_MEASUREMENT="cs_CZ.UTF-8" LC_IDENTIFICATION="cs_CZ.UTF-8" LC_ALL= Specifics for Microsoft platforms Windows uses specific language and territory strings. The locale identifier (LCID) for unmanaged code on Microsoft Windows is a number such as 1033 for English (United States) or 1041 for Japanese (Japan). These numbers consist of a language code (lower 10 bits) and a culture code (upper bits), and are therefore often written in hexadecimal notation, such as 0x0409 or 0x0411. Microsoft is starting to introduce managed code application programming interfaces (APIs) for .NET that use this format. One of the first to be generally released is a function to mitigate issues with internationalized domain names, but more are in Windows Vista Beta 1. Starting with Windows Vista, new functions that use BCP 47 locale names have been introduced to replace nearly all LCID-based APIs. 
See also Internationalization and localization ISO 639 language codes ISO 3166-1 alpha-2 region codes ISO 15924 script codes IETF language tag C localization functions CCSID Code page Common Locale Data Repository Date and time representation by country AppLocale References External links BCP 47 Language Subtag Registry Common Locale Data Repository Javadoc API documentation Locale and Language information from Microsoft MS-LCID: Windows Language Code Identifier (LCID) Reference from Microsoft Microsoft LCID list Microsoft LCID chart with decimal equivalents POSIX Environment Variables Low Level Technical details on defining a POSIX locale ICU Locale Explorer Debian Wiki on Locales Article "The Standard C++ Locale" by Nathan C. Myers locale(7): Description of multi-language support - Linux man page Apache C++ Standard Library Locale User's Guide Sort order charts for various operating system locales and database collations NATSPEC Library Description of locale-related UNIX environment variables in Debian Linux Reference Manual Guides to locales and locale creation on various platforms Unix user management and support-related utilities Unix SUS2008 utilities Internationalization and localization
1740222
https://en.wikipedia.org/wiki/Selective%20Repeat%20ARQ
Selective Repeat ARQ
Selective Repeat ARQ/Selective Reject ARQ is a specific instance of the automatic repeat request (ARQ) protocol used to manage sequence numbers and retransmissions in reliable communications. Summary Selective Repeat is part of the automatic repeat request (ARQ). With selective repeat, the sender sends a number of frames specified by a window size even without the need to wait for individual ACK from the receiver as in Go-Back-N ARQ. The receiver may selectively reject a single frame, which may be retransmitted alone; this contrasts with other forms of ARQ, which must send every frame from that point again. The receiver accepts out-of-order frames and buffers them. The sender individually retransmits frames that have timed out. Concept It may be used as a protocol for the delivery and acknowledgement of message units, or it may be used as a protocol for the delivery of subdivided message sub-units. When used as the protocol for the delivery of messages, the sending process continues to send a number of frames specified by a window size even after a frame loss. Unlike Go-Back-N ARQ, the receiving process will continue to accept and acknowledge frames sent after an initial error; this is the general case of the sliding window protocol with both transmit and receive window sizes greater than 1. The receiver process keeps track of the sequence number of the earliest frame it has not received, and sends that number with every acknowledgement (ACK) it sends. If a frame from the sender does not reach the receiver, the sender continues to send subsequent frames until it has emptied its window. The receiver continues to fill its receiving window with the subsequent frames, replying each time with an ACK containing the sequence number of the earliest missing frame. Once the sender has sent all the frames in its window, it re-sends the frame number given by the ACKs, and then continues where it left off. The size of the sending and receiving windows must be equal, and half the maximum sequence number (assuming that sequence numbers are numbered from 0 to n−1) to avoid miscommunication in all cases of packets being dropped. To understand this, consider the case when all ACKs are destroyed. If the receiving window is larger than half the maximum sequence number, some, possibly even all, of the packets that are present after timeouts are duplicates that are not recognized as such. The sender moves its window for every packet that is acknowledged. When used as the protocol for the delivery of subdivided messages it works somewhat differently. In non-continuous channels where messages may be variable in length, standard ARQ or Hybrid ARQ protocols may treat the message as a single unit. Alternately selective retransmission may be employed in conjunction with the basic ARQ mechanism where the message is first subdivided into sub-blocks (typically of fixed length) in a process called packet segmentation. The original variable length message is thus represented as a concatenation of a variable number of sub-blocks. While in standard ARQ the message as a whole is either acknowledged (ACKed) or negatively acknowledged (NAKed), in ARQ with selective transmission the ACK response would additionally carry a bit flag indicating the identity of each sub-block successfully received. In ARQ with selective retransmission of sub-divided messages each retransmission diminishes in length, needing to only contain the sub-blocks that were linked. 
In most channel models with variable-length messages, the probability of error-free reception decreases as message length increases; in other words, it is easier to receive a short message than a longer one. Standard ARQ techniques therefore have increasing difficulty delivering longer messages, since each repeat must resend the message at its full length. Selective retransmission applied to variable-length messages largely eliminates this difficulty: successfully delivered sub-blocks are retained after each transmission, so the number of outstanding sub-blocks diminishes with each subsequent transmission. Selective repeat is also used by reliable-transfer protocols layered on top of UDP. Examples The Transmission Control Protocol uses a variant of Go-Back-N ARQ to ensure reliable transmission of data over the Internet Protocol, which does not provide guaranteed delivery of packets; with the Selective Acknowledgement (SACK) extension, it may also use Selective Repeat ARQ. The ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 gigabit/s) local area network using existing home wiring (power lines, phone lines and coaxial cables), uses Selective Repeat ARQ to ensure reliable transmission over noisy media. G.hn employs packet segmentation to subdivide messages into smaller units, to increase the probability that each one is received correctly. The STANAG 5066 Profile for HF Radio Data Communications uses Selective Repeat ARQ, with a maximum window size of 128 protocol data units (PDUs). References Further reading Logical link control Error detection and correction
3468593
https://en.wikipedia.org/wiki/Computer%20Literacy%20Bookshops
Computer Literacy Bookshops
Computer Literacy Bookshops was a local chain of bookstores selling primarily technical books in Northern California. It was founded in 1983 in Sunnyvale, California, where its concentration in technical books fit well with its Silicon Valley customer base. Computer Literacy was acquired by CBooks Express in 1997, and after going public traded as fatbrain.com, selling books both online and in brick-and-mortar stores. Fatbrain was acquired by Barnes & Noble in 2000, which absorbed the company into its main enterprise and shut down the physical stores the following year. History The first Computer Literacy Bookshop was opened in March 1983 on Lawrence Expressway between Lakeside Drive and Titan Way in Sunnyvale, California, by founders Dan Doernberg and Rachel Unkefer. It was located in the heart of Silicon Valley, not far from where the original Fry's Electronics store opened two years later. In 1987 the company opened two additional stores: one on North First Street in San Jose and another in the TechMart complex near Great America in Santa Clara. The San Jose store was probably the largest computer bookstore in America, with over 14,000 square feet of floorspace dedicated to new computer books. The TechMart store subsequently relocated to the headquarters of Apple Computer, Inc. at One Infinite Loop in Cupertino. The stores not only sold books and periodicals but also displayed galley pre-prints for browsing and hosted speaking events by authors and guest engineers such as Gene Amdahl and Donald Knuth. In 1993, the only East Coast location was opened in the Tysons Corner area of suburban Washington, DC, for a total of four brick-and-mortar locations. On August 25, 1991, the company registered the domain name clbooks.com and began taking book orders from customers worldwide via email. Their UUCP hostname was clb_books. Acquisition by CBooks Express In 1995, Chris MacAskill and Kim Orumchian started an online bookstore called CBooks Express, specializing in computer-related books. The domain for CBooks Express was cbooks.com. Computer Literacy Bookshops moved to sue CBooks Express for trademark infringement. Instead, the young company acquired Computer Literacy Bookshops in 1997. The combined company became ComputerLiteracy.com, and it went public in 1998. Fatbrain Soon after going public the company was renamed Fatbrain.com (NASDAQ: FATB) after a six-month process to come up with a new name. Company executives worked with branding specialists Interbrand Group, but eventually a name suggested by the company's editorial director, Deborah Bohn, was chosen. Along with the new name, a new logo (an emoticon: {*}) and slogan were introduced. eMatter and MightyWords In the summer of 1999 Fatbrain started selling electronic documents under the eMatter brand. This was eventually spun off as a new company called MightyWords. Acquisition by Barnes & Noble Fatbrain.com was acquired and absorbed by Barnes & Noble, the large bookstore chain, in November 2000. The physical stores were closed on December 1, 2001, and the domain name clbooks.com was retired; it is now operated by an unrelated organization. References External links Defunct retail companies of the United States Bookstores in the San Francisco Bay Area Companies based in Sunnyvale, California Barnes & Noble Defunct companies based in California Computer literacy
61239757
https://en.wikipedia.org/wiki/Clyde%20Foster
Clyde Foster
Clyde Foster (November 21, 1931 – March 6, 2017) was an American scientist and mathematician. He worked for the Army Ballistic Missile Agency and then for NASA, and from 1975 to 1986 was the head of Equal Employment Opportunity at Marshall Space Flight Center in Huntsville, Alabama. He is credited with setting up programs that gave hundreds of African Americans the training necessary for positions and promotions at NASA in Huntsville, at a time when Alabama was segregated and African Americans were denied those opportunities. To that end he also helped found a Computer Science program at his alma mater, Alabama A&M University, a historically black university, where he headed the program while on loan from NASA. In 1981 he was awarded the Philip A. Hart Award for his "significant contribution toward improving urban and working environments". Foster was also a community activist, and helped revive Triana, Alabama, a small majority-African American community near Huntsville; he was instrumental in the community regaining its charter, was a plaintiff in lawsuits filed over DDT pollution in Triana, and served as its mayor for two decades. Biography Clyde Foster was born in Birmingham, Alabama, on November 21, 1931, the sixth of twelve children. He attended A. H. Parker High School, and the experience of living as an African American in segregated Birmingham made him realize he needed to get away; for that reason he attended Alabama A&M University (a historically black university in the north of the state), where he received his BS degree in Mathematics and Chemistry in 1954. After serving two years in the United States Army, he moved to Selma, Alabama, and worked as a science teacher in Dallas County, Alabama, from 1956 to 1957. Foster left Selma and became a mathematics technician at the Army Ballistic Missile Agency, at Redstone Arsenal in Huntsville, Alabama, as part of a team that did calculations for rockets. ABMA became part of NASA in 1958, and in 1960 he and a group of colleagues were transferred to NASA's Marshall Space Flight Center, which had just been founded. He was assigned a position as a mathematician and instructor in the Computation Laboratory. Around that time Foster was considering leaving, but President John F. Kennedy's announcement that the new administration would charge NASA with going to the Moon made him stay. Foster worked regularly as a recruiter, trying to attract black workers to Marshall. The problem, both for hiring new workers and for promoting current ones, was that training was required. While NASA itself, as a federal entity, did not segregate, its location in a segregated state meant that current and prospective employees who were African-American could not attend the training programs they needed in order to be hired or promoted, since those were held in segregated public facilities (such as the ballrooms of hotels that allowed whites only). Soon after he joined NASA he was asked to train a white colleague to become his boss, at a time when the Civil Rights Movement in Alabama was demanding change. Foster complained to his boss and refused the assignment, and then demanded that NASA start a program to train black workers. In the end, NASA agreed and started a training program in collaboration with Alabama A&M University. That the program in a way continued segregation was of secondary concern to Foster.
At the end of the 1960s he persuaded Wernher von Braun, who had worked in Nazi Germany's rocket development program and later headed the Marshall Center, to support him in setting up a Computer Science program at Alabama A&M. The university was initially not interested, being focused on more traditional training in nursing, education, and farming, besides civil engineering. Foster persisted, and in 1968 he became the director of Alabama A&M's Computer Science Department (until 1970) and established the program's undergraduate degree; NASA paid his salary for those two years. In 1972, Foster joined the Equal Opportunity Office at Marshall as a staff officer, and in 1975 he became the office's director. His job was to ensure that all the Center's operations and its contractors provided equal opportunity. In this capacity, and through the establishment of training programs, he helped hundreds of African Americans become employed by NASA. He retired in 1986. Foster died on March 6, 2017. Other activities Foster was mayor of Triana, Alabama, a settlement of fewer than a hundred African-American families near Huntsville. The town had collapsed after a railroad was re-routed to bypass it. Foster came to know the community after he met his future wife Dorothy in college; she lived in Triana, and he moved there after coming to work for NASA. Eight years after moving there, he persuaded a probate judge to revive the municipality, having discovered that its municipal charter had never actually been dissolved and that Alabama law allowed such still-chartered municipalities to be revived. The judge subsequently named Foster mayor and appointed a city council. A feature article in Ebony credited Foster with reviving the town, which he hoped would share in the development boom prompted in part by NASA. He was mayor for twenty years, from 1964 until September 1984. He also served on the state's Commission on Higher Education, on an appointment made in 1974 by Alabama Governor George C. Wallace. In 1972, with his lifelong friends Alonzo Toney and George Malone, he co-founded Triana Industries, an electronics manufacturer with 28 employees, serving as its president. Toney was the company's vice-president and also the operations manager of the Alabama A&M Computer Services Center, and in 1984 succeeded Foster as mayor of Triana. Malone was the general manager of Triana Industries. In 1981 Foster was awarded the Philip A. Hart Award for his "significant contribution toward improving urban and working environments". He was one of the plaintiffs in the 1980s lawsuits filed against Olin Corporation over DDT pollution in Triana. References Reference bibliography Further reading External links "Race and the Space Race", including interview with Clyde Foster 1931 births 2017 deaths African-American academics African-American mathematicians African-American mayors in Alabama African-American military personnel Alabama A&M University alumni Alabama A&M University faculty Mathematicians from Alabama Mayors of places in Alabama Military personnel from Birmingham, Alabama NASA people People from Huntsville, Alabama Politicians from Birmingham, Alabama 20th-century African-American people 21st-century African-American people
31736324
https://en.wikipedia.org/wiki/Kenneth%20Hess
Kenneth Hess
Kenneth Lafferty Hess (born January 22, 1953) is an engineer, author, entrepreneur, and philanthropist. Hess is the founder and president of Science Buddies, a non-profit organization dedicated to furthering science literacy through the creation of free resources and services for K-12 students, teachers, and families. He holds one of the first software patents ever granted and has designed and/or developed dozens of commercial software, content, and Internet products, including Family Tree Maker, one of the all-time best-selling home software programs. Among his awards are a PC Magazine Editor's Choice, a PC Magazine Top 100 Web Site listing, a Codie award and a Science Prize for Online Resources in Education (SPORE). Science Buddies As a ninth-grader, Hess built a cloud chamber for his science fair project and tested it using a dentist's X-ray machine; he was interested in observing the trails of the radioactive particles as they moved through the chamber. Later, as a parent, Hess observed his daughter's success and enjoyment of the science fair process. At the same time, he recognized that many students lack the resources and support they need to get the maximum educational benefit from a science fair project. With a goal of supporting students from all walks of life (as well as their teachers, parents, and schools) in doing science research projects, Hess founded Science Buddies in 2001 under the umbrella of the Kenneth Lafferty Hess Family Charitable Foundation. (The organization formalized its name change to Science Buddies in 2010.) Since the inception of Science Buddies, Hess has led the organization in creating an innovative library of resources designed to enrich and support science education. These resources include 1000+ scientist-vetted project ideas in more than 30 scientific fields, a Topic Selection Wizard to help students find exciting and appropriate science and engineering projects, tools and materials for classroom use, guides to help science-fair administrators, and a complete project guide to help students with all steps of conducting a science or engineering project. During 2010, 9.8 million unique individuals visited the Science Buddies website, a number equal to approximately 18% of U.S. students in grades K–12. In recognition of the quality of the resources Hess has implemented and developed at Science Buddies to meet the needs of K-12 educators and to help bridge the gap between researchers and K-12 students, Science Buddies was awarded the Science Prize for Online Resources in Education (SPORE) in April 2011. SPORE awards are given by Science and the American Association for the Advancement of Science (AAAS). Career Prior to launching his first company, Hess worked at Intel Corporation, Teradyne, Hewlett-Packard, and Symantec. He founded Banner Blue Software in 1984. Tapping into a growing societal interest in genealogy and personal ancestry, Banner Blue developed Family Tree Maker, genealogy software that enabled users to locate and organize ancestral information. During the first half of 1996, Family Tree Maker was one of the three top-selling personal productivity product lines, according to PC Data. Banner Blue also developed Org Plus, a tool for creating corporate organization charts. A version of Org Plus, labeled Microsoft Organization Chart, was bundled into copies of Microsoft Office for many years. Hess wrote the initial versions of both Family Tree Maker and Org Plus and designed the initial version of Family Tree Maker Online.
Hess' success with Banner Blue Software was an exercise in "bootstrapping." Starting the company with a personal investment of $20,000, Hess bootstrapped Banner Blue Software into a company with 100 employees, annual sales of $25 million, and approximately 2 million copies of Family Tree Maker sold by the time Broderbund Software, Inc. acquired it in 1995. Hess outlined the success of Banner Blue and his "bootstrapping" approach in Bootstrap: Lessons Learned Building a Successful Company from Scratch. In 1999, Hess co-founded Pocket Express, a company that developed software for Palm handheld computers. Pocket Express' product line was sold to Handmark, Inc. in 2002. Boards and advisory committees Chairman of California State Summer School for Mathematics and Science (COSMOS) Advisory Board Sensant Corporation, San Leandro, CA, 1998–2005 (Sensant was acquired by Siemens in 2005) Carmel Bach Festival, Carmel, CA, 2001–2004 The Hoover Institution, Stanford University, Board of Overseers, 1996–2002 Software Forum Advisory Board, 1993–1998 Publications and patents Science Buddies: Advancing Informal Science Education, Science, 29 April 2011: Vol. 332 no. 6029, pp. 550–551 (video, 2011) Bootstrap: Lessons Learned Building A Successful Company From Scratch, S-Curve Press, 2001. Remarks to Software Forum Dinner Meeting, 1997. U.S. Patent 4,764,867, "Display System and Method for Constructing and Editing a Hierarchical Arrangement of Information," issued August 1988. "Picking the Best Display: An Easy-to-Follow Guide." Electronic Design (August 19, 1982): 139–146. Early life and education Hess was born in Warren, Ohio, to Phyllis Lafferty Hess and Richard Morton Hess. After graduating from Howland High School in 1971, Hess attended Stanford University, where he earned a BS in Engineering. His course of study was interdisciplinary, with emphasis in engineering, computer science, and political science. Following Stanford, Hess received an MBA from Harvard. Family and hobbies Hess and his wife, Constance, have one daughter, Amber. Hess' personal interests include photography and astronomy. Combining his interests in photography, astrophotography, and science literacy, Hess has authored the following Science Buddies resources and materials: Camera Lens Testing The Golden State Star Party, Science Buddies Blog The Golden State Star Party - II, Science Buddies Blog The Golden State Star Party - III, Science Buddies Blog References External links KLHess.com Science Buddies Corporate Chronology of Family Tree Maker, October 17, 2006 1953 births American male writers 21st-century American engineers American philanthropists American businesspeople Living people Harvard Business School alumni Stanford University alumni
1942523
https://en.wikipedia.org/wiki/Architecture%20of%20Windows%20NT
Architecture of Windows NT
The architecture of Windows NT, a line of operating systems produced and sold by Microsoft, is a layered design that consists of two main components, user mode and kernel mode. It is a preemptive, reentrant multitasking operating system, which has been designed to work with uniprocessor and symmetrical multiprocessor (SMP)-based computers. To process input/output (I/O) requests, it uses packet-driven I/O, which utilizes I/O request packets (IRPs) and asynchronous I/O. Starting with Windows XP, Microsoft began making 64-bit versions of Windows available; before this, there were only 32-bit versions of these operating systems. Programs and subsystems in user mode are limited in terms of what system resources they have access to, while kernel mode has unrestricted access to the system memory and external devices. Kernel mode in Windows NT has full access to the hardware and system resources of the computer. The Windows NT kernel is a hybrid kernel; the architecture comprises a simple kernel, hardware abstraction layer (HAL), drivers, and a range of services (collectively named Executive), which all exist in kernel mode. User mode in Windows NT is made of subsystems capable of passing I/O requests to the appropriate kernel mode device drivers by using the I/O manager. The user mode layer of Windows NT is made up of the "Environment subsystems", which run applications written for many different types of operating systems, and the "Integral subsystem", which operates system-specific functions on behalf of environment subsystems. Kernel mode stops user mode services and applications from accessing critical areas of the operating system that they should not have access to. The Executive interfaces with all the user mode subsystems and deals with I/O, object management, security and process management. The kernel sits between the hardware abstraction layer and the Executive to provide multiprocessor synchronization, thread and interrupt scheduling and dispatching, and trap handling and exception dispatching. The kernel is also responsible for initializing device drivers at bootup. Kernel mode drivers exist in three levels: highest level drivers, intermediate drivers and low-level drivers. The Windows Driver Model (WDM) exists in the intermediate layer and was mainly designed to be binary and source compatible between Windows 98 and Windows 2000. The lowest level drivers are either legacy Windows NT device drivers that control a device directly or drivers for a plug and play (PnP) hardware bus. User mode User mode is made up of various system-defined processes and DLLs. The interface between user mode applications and operating system kernel functions is called an "environment subsystem." Windows NT can have more than one of these, each implementing a different API set. This mechanism was designed to support applications written for many different types of operating systems. None of the environment subsystems can directly access hardware; access to hardware functions is done by calling into kernel mode routines. There are three main environment subsystems: the Win32 subsystem, an OS/2 subsystem and a POSIX subsystem. Win32 environment subsystem The Win32 environment subsystem can run 32-bit Windows applications. It contains the console as well as text window support, shutdown and hard-error handling for all other environment subsystems. It also supports Virtual DOS Machines (VDMs), which allow MS-DOS and 16-bit Windows (Win16) applications to run on Windows NT.
There is a specific MS-DOS VDM that runs in its own address space and which emulates an Intel 80486 running MS-DOS 5.0. Win16 programs, however, run in a Win16 VDM. By default each program runs in the same process, and thus the same address space, with the Win16 VDM giving each program its own thread on which to run. However, Windows NT does allow users to run a Win16 program in a separate Win16 VDM, which allows the program to be preemptively multitasked, as Windows NT will pre-empt the whole VDM process, which then contains only one running application. The Win32 environment subsystem process (csrss.exe) also includes the window management functionality, sometimes called a "window manager". It handles input events (such as from the keyboard and mouse), then passes messages to the applications that need to receive this input. Each application is responsible for drawing or refreshing its own windows and menus, in response to these messages. OS/2 environment subsystem The OS/2 environment subsystem, available on x86 machines only, supports 16-bit character-based OS/2 applications and emulates OS/2 1.x, but not the 32-bit or graphical OS/2 applications used with OS/2 2.x or later. To run graphical OS/2 1.x programs, the Windows NT Add-On Subsystem for Presentation Manager must be installed. The last version of Windows NT to have an OS/2 subsystem was Windows 2000; it was removed as of Windows XP. POSIX environment subsystem The POSIX environment subsystem supports applications that are strictly written to either the POSIX.1 standard or the related ISO/IEC standards. This subsystem has been replaced by Interix, which is a part of Windows Services for UNIX. This was in turn replaced by the Windows Subsystem for Linux. Security subsystem The security subsystem deals with security tokens, grants or denies access to user accounts based on resource permissions, handles login requests and initiates login authentication, and determines which system resources need to be audited by Windows NT. It also manages Active Directory. The workstation service implements the network redirector, which is the client side of Windows file and print sharing; it fulfills local requests for remote files and printers by "redirecting" them to the appropriate servers on the network. Conversely, the server service allows other computers on the network to access file shares and shared printers offered by the local system. Kernel mode Windows NT kernel mode has full access to the hardware and system resources of the computer and runs code in a protected memory area. It controls access to scheduling, thread prioritization, memory management and the interaction with hardware. Kernel mode stops user mode services and applications from accessing critical areas of the operating system that they should not have access to; user mode processes must ask kernel mode to perform such operations on their behalf. While the x86 architecture supports four different privilege levels (numbered 0 to 3), only the two extreme privilege levels are used. User-mode programs run with CPL 3 and the kernel runs with CPL 0. These two levels are often referred to as "ring 3" and "ring 0", respectively. This design decision was made to achieve code portability to RISC platforms that support only two privilege levels, though it breaks compatibility with OS/2 applications that contain I/O privilege segments that attempt to access hardware directly.
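As a rough illustration of this separation, the sketch below (Python with ctypes; Windows-only, and illustrative rather than canonical) opens a file through the documented Win32 API. The call travels from the user-mode program into kernel32.dll, through the ntdll.dll system-call stub, and only then traps from ring 3 to ring 0, where the Executive's I/O manager acts on the request; the file path used is an assumption (present on a default installation).

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

GENERIC_READ = 0x80000000
FILE_SHARE_READ = 1
OPEN_EXISTING = 3
INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value

# User mode cannot touch the disk directly; it asks the kernel to do
# so via kernel32 -> ntdll -> system call -> Executive I/O manager.
handle = kernel32.CreateFileW(
    r"C:\Windows\win.ini",   # assumed present on a default install
    GENERIC_READ,
    FILE_SHARE_READ,
    None,                    # default security attributes
    OPEN_EXISTING,
    0,
    None,
)
if handle == INVALID_HANDLE_VALUE:
    raise ctypes.WinError(ctypes.get_last_error())

print("kernel object opened; user mode holds only an opaque handle:", handle)
kernel32.CloseHandle(handle)   # release our reference to the object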
Code running in kernel mode includes: the Executive, which is itself made up of many modules that do specific tasks; the kernel, which provides low-level services used by the Executive; the Hardware Abstraction Layer (HAL); and kernel drivers. Executive The Windows Executive services make up the low-level kernel-mode portion, and are contained in the file NTOSKRNL.EXE. It deals with I/O, object management, security and process management. These are divided into several subsystems, among which are Cache Manager, Configuration Manager, I/O Manager, Local Procedure Call (LPC), Memory Manager, Object Manager, Process Structure and Security Reference Monitor (SRM). Grouped together, the components can be called Executive services (internal name Ex). System Services (internal name Nt), i.e., system calls, are implemented at this level too, except for a very few that call directly into the kernel layer for better performance. The term "service" in this context generally refers to a callable routine, or set of callable routines. This is distinct from the concept of a "service process", which is a user mode component somewhat analogous to a daemon in Unix-like operating systems. Object Manager The Object Manager (internal name Ob) is an executive subsystem that all other executive subsystems, especially system calls, must pass through to gain access to Windows NT resources—essentially making it a resource management infrastructure service. The object manager is used to reduce the duplication of object resource management functionality in other executive subsystems, which could potentially lead to bugs and make development of Windows NT harder. To the object manager, each resource is an object, whether that resource is a physical resource (such as a file system or peripheral) or a logical resource (such as a file). Each object has a structure or object type that the object manager must know about. Object creation is a two-phase process: creation and insertion. Creation causes the allocation of an empty object and the reservation of any resources required by the object manager, such as an (optional) name in the namespace. If creation was successful, the subsystem responsible for the creation fills in the empty object. Finally, if the subsystem deems the initialization successful, it instructs the object manager to insert the object, which makes it accessible through its (optional) name or a cookie called a handle. From then on, the lifetime of the object is handled by the object manager, and it is up to the subsystem to keep the object in a working condition until being signaled by the object manager to dispose of it. Handles are identifiers that represent a reference to a kernel resource through an opaque value. Opening an object through its name is subject to security checks, but acting through an existing, open handle is limited only to the level of access requested when the object was opened or created. Object types define the object procedures and any data specific to the object. In this way, the object manager allows Windows NT to be an object-oriented operating system, as object types can be thought of as polymorphic classes that define objects. Most subsystems, though, with a notable exception in the I/O Manager, rely on the default implementation for all object type procedures. Each instance of an object that is created stores its name, the parameters that are passed to the object creation function, security attributes and a pointer to its object type.
The object also contains an object close procedure and a reference count that tells the object manager how many other objects in the system reference that object, and thereby determines whether the object can be destroyed when a close request is sent to it. Every named object exists in a hierarchical object namespace. Cache Controller Closely coordinates with the Memory Manager, I/O Manager and I/O drivers to provide a common cache for regular file I/O. The Windows Cache Manager operates on file blocks (rather than device blocks), for consistent operation between local and remote files, and ensures a certain degree of coherency with memory-mapped views of files, since cache blocks are a special case of memory-mapped views and cache misses a special case of page faults. Configuration Manager Implements the system calls needed by the Windows Registry. I/O Manager Allows devices to communicate with user-mode subsystems. It translates user-mode read and write commands into read or write IRPs which it passes to device drivers. It accepts file system I/O requests and translates them into device-specific calls, and can incorporate low-level device drivers that directly manipulate hardware to either read input or write output. It also includes a cache manager to improve disk performance by caching read requests and writing to the disk in the background. Local Procedure Call (LPC) Provides inter-process communication ports with connection semantics. LPC ports are used by user-mode subsystems to communicate with their clients, by Executive subsystems to communicate with user-mode subsystems, and as the basis for the local transport for Microsoft RPC. Memory Manager Manages virtual memory, controlling memory protection and the paging of memory in and out of physical memory to secondary storage, and implements a general-purpose allocator of physical memory. It also implements a parser of PE executables that lets an executable be mapped or unmapped in a single, atomic step. Starting from Windows NT Server 4.0, Terminal Server Edition, the memory manager implements a so-called session space, a range of kernel-mode memory that is subject to context switching just like user-mode memory. This lets multiple instances of the kernel-mode Win32 subsystem and GDI drivers run side-by-side, despite shortcomings in their initial design. Each session space is shared by several processes, collectively referred to as a "session". To ensure a degree of isolation between sessions without introducing a new object type, the association between processes and sessions is handled by the Security Reference Monitor, as an attribute of a security subject (token), and it can only be changed while holding special privileges. The relatively unsophisticated and ad hoc nature of sessions is due to the fact that they were not part of the initial design, and had to be developed, with minimal disruption to the main line, by a third party (Citrix Systems) as a prerequisite for their terminal server product for Windows NT, called WinFrame. Starting with Windows Vista, though, sessions finally became a proper aspect of the Windows architecture. No longer a memory manager construct that creeps into user mode indirectly through Win32, they were expanded into a pervasive abstraction affecting most Executive subsystems. As a matter of fact, regular use of Windows Vista always results in a multi-session environment.
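The handle and reference-count behaviour described under the Object Manager can be observed from user mode. The following Python/ctypes sketch (Windows-only and illustrative; the event name is invented for the example) creates a named event object and then opens it a second time by name: the two distinct handle values both refer to a single kernel object, which the Object Manager may dispose of only after every handle to it has been closed.

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateEventW.restype = wintypes.HANDLE
kernel32.OpenEventW.restype = wintypes.HANDLE
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

EVENT_ALL_ACCESS = 0x1F0003
NAME = "Local\\ObjectManagerDemoEvent"   # hypothetical object name

# Creation + insertion: the object is built, inserted into the
# Object Manager namespace, and returned to us as an opaque handle.
h1 = kernel32.CreateEventW(None, True, False, NAME)

# Opening by name passes through the Object Manager's security
# checks and yields a second, distinct handle to the same object.
h2 = kernel32.OpenEventW(EVENT_ALL_ACCESS, False, NAME)
if not h1 or not h2:
    raise ctypes.WinError(ctypes.get_last_error())

print("two opaque handles, one kernel object:", h1, h2)

# Each close drops the object's reference count; the object itself
# can be destroyed only after the last outstanding handle is closed.
kernel32.CloseHandle(h1)
kernel32.CloseHandle(h2)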
Process Structure Handles process and thread creation and termination, and implements the concept of Job, a group of processes that can be terminated as a whole, or be placed under shared restrictions (such as a total maximum of allocated memory or CPU time). Job objects were introduced in Windows 2000. PnP Manager Handles plug and play and supports device detection and installation at boot time. It also has the responsibility to stop and start devices on demand—this can happen when a bus (such as USB or IEEE 1394 FireWire) gains a new device and needs to have a device driver loaded to support it. Its bulk is actually implemented in user mode, in the Plug and Play Service, which handles the often complex tasks of installing the appropriate drivers, notifying services and applications of the arrival of new devices, and displaying a GUI to the user. Power Manager Deals with power events (power-off, stand-by, hibernate, etc.) and notifies affected drivers with special IRPs (Power IRPs). Security Reference Monitor (SRM) The primary authority for enforcing the security rules of the security integral subsystem. It determines whether an object or resource can be accessed, via the use of access control lists (ACLs), which are themselves made up of access control entries (ACEs). ACEs contain a Security Identifier (SID) and a list of operations that the ACE gives a select group of trustees—a user account, group account, or login session—permission (allow, deny, or audit) to perform on that resource. GDI The Graphics Device Interface is responsible for tasks such as drawing lines and curves, rendering fonts and handling palettes. The Windows NT 3.x series of releases had placed the GDI component in the user-mode Client/Server Runtime Subsystem, but this was moved into kernel mode with Windows NT 4.0 to improve graphics performance. Kernel The kernel sits between the HAL and the Executive and provides multiprocessor synchronization, thread and interrupt scheduling and dispatching, and trap handling and exception dispatching; it is also responsible for initializing device drivers at bootup that are necessary to get the operating system up and running. That is, the kernel performs almost all the tasks of a traditional microkernel; the strict distinction between Executive and Kernel is the most prominent remnant of the original microkernel design, and historical design documentation consistently refers to the kernel component as "the microkernel". The kernel also interfaces with the process manager, though the level of abstraction is such that the kernel never calls into the process manager, only the other way around (save for a handful of corner cases, and even then never to the point of a functional dependence). Hybrid kernel design The Windows NT design includes many of the same objectives as Mach, the archetypal microkernel system, one of the most important being its structure as a collection of modules that communicate via well-known interfaces, with a small microkernel limited to core functions such as first-level interrupt handling, thread scheduling and synchronization primitives. This allows for the possibility of using either direct procedure calls or interprocess communication (IPC) to communicate between modules, and hence for the potential location of modules in different address spaces (for example in either kernel space or server processes).
Other design goals shared with Mach included support for diverse architectures, a kernel with abstractions general enough to allow multiple operating system personalities to be implemented on top of it and an object-oriented organisation. The primary operating system personality on Windows is the Windows API, which is always present. The emulation subsystem which implements the Windows personality is called the Client/Server Runtime Subsystem (csrss.exe). On versions of NT prior to 4.0, this subsystem process also contained the window manager, graphics device interface and graphics device drivers. For performance reasons, however, in version 4.0 and later, these modules (which are often implemented in user mode even on monolithic systems, especially those designed without internal graphics support) run as a kernel-mode subsystem. Applications that run on NT are written to one of the OS personalities (usually the Windows API), and not to the native NT API, for which documentation is not publicly available (with the exception of routines used in device driver development). An OS personality is implemented via a set of user-mode DLLs (see Dynamic-link library), which are mapped into application processes' address spaces as required, together with an emulation subsystem server process (as described previously). Applications access system services by calling into the OS personality DLLs mapped into their address spaces, which in turn call into the NT run-time library (ntdll.dll), also mapped into the process address space. The NT run-time library services these requests by trapping into kernel mode to either call kernel-mode Executive routines or make Local Procedure Calls (LPCs) to the appropriate user-mode subsystem server processes, which in turn use the NT API to communicate with application processes, the kernel-mode subsystems and each other. Kernel-mode drivers Windows NT uses kernel-mode device drivers to enable it to interact with hardware devices. Each of the drivers has well-defined system routines and internal routines that it exports to the rest of the operating system. All devices are seen by user mode code as file objects in the I/O manager, though to the I/O manager itself the devices are seen as device objects, which it defines as either file, device or driver objects. Kernel mode drivers exist in three levels: highest level drivers, intermediate drivers and low level drivers. The highest level drivers, such as file system drivers for FAT and NTFS, rely on intermediate drivers. Intermediate drivers consist of function drivers—the main driver for a device—that are optionally sandwiched between lower and higher level filter drivers. The function driver then relies on a bus driver—a driver that services a bus controller, adapter, or bridge—which can have an optional bus filter driver that sits between itself and the function driver. Intermediate drivers rely on the lowest level drivers to function. The Windows Driver Model (WDM) exists in the intermediate layer. The lowest level drivers are either legacy Windows NT device drivers that control a device directly or drivers for a plug and play (PnP) hardware bus. These lower level drivers directly control hardware and do not rely on any other drivers. Hardware abstraction layer The Windows NT hardware abstraction layer (HAL) is a layer between the physical hardware of the computer and the rest of the operating system. It was designed to hide differences in hardware and provide a consistent platform on which the kernel is run.
The HAL includes hardware-specific code that controls I/O interfaces, interrupt controllers and multiple processors. However, despite its purpose and designated place within the architecture, the HAL is not a layer that sits entirely below the kernel, the way the kernel sits below the Executive: all known HAL implementations depend in some measure on the kernel, or even the Executive. In practice, this means that kernel and HAL variants come in matching sets that are specifically constructed to work together. In particular, hardware abstraction does not involve abstracting the instruction set, which generally falls under the wider concept of portability. Abstracting the instruction set, when necessary (such as for handling the several revisions to the x86 instruction set, or emulating a missing math coprocessor), is performed by the kernel, or via hardware virtualization. See also Microsoft Windows library files MinWin Unix architecture Comparison of operating system kernels User-Mode Driver Framework Kernel-Mode Driver Framework Hybrid kernel Further reading Martignetti, E.; What Makes It Page?: The Windows 7 (x64) Virtual Memory Manager Russinovich, Mark E.; Solomon, David A.; Ionescu, A.; Windows Internals, Part 1: Covering Windows Server 2008 R2 and Windows 7 Russinovich, Mark E.; Solomon, David A.; Ionescu, A.; Windows Internals, Part 2: Covering Windows Server 2008 R2 and Windows 7 Notes and references Notes References External links Memory management in the Windows XP kernel Windows NT Windows NT kernel