10704823
https://en.wikipedia.org/wiki/Panorama%20Software
Panorama Software
Panorama Software is a Canadian software and consulting company specializing in business intelligence. The company was founded by Rony Ross in Israel in 1993; it relocated its headquarters to Toronto, Canada in 2003. Panorama sold its online analytical processing (OLAP) technology to Microsoft in 1996; Microsoft built it into Microsoft OLAP Services and later SQL Server Analysis Services, an integrated component of Microsoft SQL Server. Products The company's main product is a business intelligence (BI) suite named Necto. Before 2011 it had a product called NovaView. Necto offers data mining and report generation, allowing custom views of the data without having to wait to run a report. It lets users create collaborative "workboards" and visual presentations. Users are able to discover others who are analyzing similar data sets. The company contends that this leads to a focus on social analysis, starting with business data and connecting it to the people involved with that data. Necto is a BI application based upon understanding of user behavior, one-click reporting, and collaborative decision making. It supports social sharing of data, similar to the sharing found on consumer-oriented social networking sites. Data analysis is treated as "conversations" which can themselves be followed and analyzed. Panorama encourages Necto enterprise users to form cross-departmental teams based on data research behaviors. It allows tracking of user behavior and making corresponding adjustments. Necto includes analytics, custom reporting, intuitive dashboards, and integration with Microsoft technology. It can use data sources including spreadsheets, in-memory data, OLAP, or relational databases. Integration on Microsoft Azure and optimization for the Microsoft SQL Server 2012 platform are also available. Necto integrates with SharePoint. It can be scaled up to manage thousands of users and several terabytes of data. Panorama and Microsoft Panorama Software is the original developer of the online analytical processing technology that Microsoft acquired in 1996 and rebranded as SQL Server Analysis Services, a component of Microsoft SQL Server. Since this acquisition, Panorama has sold support as a Microsoft Gold ISV Competency partner. References External links Panorama Software web site Software companies of Canada Online analytical processing Companies based in Toronto Software companies established in 1993 Privately held companies of Canada Software companies of Israel 1993 establishments in Ontario Canadian companies established in 1993
2029255
https://en.wikipedia.org/wiki/System%20Restore
System Restore
System Restore is a feature in Microsoft Windows that allows the user to revert their computer's state (including system files, installed applications, the Windows Registry, and system settings) to that of a previous point in time, which can be used to recover from system malfunctions or other problems. First included in Windows Me, it has been included in all subsequent desktop versions of Windows, excluding Windows Server. In Windows 10, System Restore is turned off by default and must be enabled by users in order to function. Restoring does not affect personal files such as documents, music, pictures, and videos. In prior Windows versions it was based on a file filter that watched for changes to a certain set of file extensions, and then copied files before they were overwritten. An updated version of System Restore introduced by Windows Vista uses the Shadow Copy service as a backend (allowing block-level changes in files located in any directory on the volume to be monitored and backed up regardless of their location) and allows System Restore to be used from the Windows Recovery Environment in case the Windows installation no longer boots at all. Overview In System Restore, the user may create a new restore point manually (as opposed to the system creating one automatically), roll back to an existing restore point, or change the System Restore configuration. Moreover, the restore itself can be undone. Old restore points are discarded in order to keep the volume's usage within the specified amount. For many users, this can provide restore points covering the past several weeks. Users concerned with performance or space usage may also opt to disable System Restore entirely. Files stored on volumes not monitored by System Restore are never backed up or restored. System Restore backs up system files of certain extensions (.exe, .dll, etc.) and saves them for later recovery and use. It also backs up the registry and most drivers. Resources monitored Starting with Windows Vista, System Restore takes a snapshot of all volumes it is monitoring. However, on Windows XP, it only monitors the following: the Windows Registry; files in the Windows File Protection folder (Dllcache); local user profiles; COM+ and WMI databases; and the IIS metabase. Specific file types monitored The list of file types and directories to be included or excluded from monitoring by System Restore can be customized on Windows Me and Windows XP by editing %windir%\system32\restore\Filelist.xml. Disk space consumption The amount of disk space System Restore consumes can be configured. Starting with Windows XP, the disk space allotted is configurable per volume and the data stores are also stored per volume. Files are stored using NTFS compression and a Disk Cleanup handler allows deleting all but the most recent restore points. System Restore can be disabled completely to regain disk space. It automatically disables itself if the volume's free space is too low for it to operate. Restore points Windows creates restore points: when software is installed using Windows Installer or other installers that are aware of System Restore; when Windows Update installs new updates; when the user installs a driver that is not digitally signed by Windows Hardware Quality Labs; and periodically. 
By default, Windows XP creates a restore point every 24 hours; Windows Vista creates a restore point if none has been created within the last 24 hours; and Windows 7 creates a restore point if none has been created within the last seven days. Restore points are also created on the user's command. Windows XP stores restore point files in a hidden folder named "System Volume Information" on the root of every drive, partition or volume, including most external drives and some USB flash drives. The operating system deletes older restore points per the configured space constraint on a first in, first out basis. Implementation differences There are considerable differences between how System Restore works under Windows XP and later Windows versions. Configuration user interface – In Windows XP, there is a graphical slider to configure the amount of disk space allotted to System Restore. In Windows Vista, the slider to configure the disk space is not available; the space reserved can be adjusted using the command-line tool Vssadmin.exe or by editing the appropriate registry key. Starting with Windows 7, the slider is available once again. Maximum space – In Windows XP, System Restore can be configured to use up to a maximum of 12% of the volume's space for most disk sizes; however, this may be less depending on the volume's size. Restore points over 90 days old are automatically deleted, as specified by the registry value RPLifeInterval (Time to Live – TTL), whose default value is 7,776,000 seconds (90 days). In Windows Vista and later, System Restore is designed for larger volumes. By default, it uses 15% of the volume's space. File paths monitored – Up to Windows XP, files are backed up only from certain directories. On Windows Vista and later, this set of files is defined by monitored extensions outside of the Windows folder, and everything under the Windows folder. File types monitored – Up to Windows XP, it excludes any file types that are considered "personal" to the user, such as documents, digital photographs, media files, e-mail, etc. It also excludes the monitored set of file types (.exe, .dll, etc.) from folders such as My Documents. Microsoft recommends that if a user is unsure as to whether certain files will be modified by a rollback, they should keep those files under My Documents. When a rollback is performed, the files that were being monitored by System Restore are restored and newly created folders are removed. However, on Windows Vista and later, it excludes only document file types; it does not exclude any monitored system file type regardless of its location. Configuring advanced System Restore settings – Windows XP supports customizing System Restore settings via the Windows Registry and a file at %windir%\system32\restore\Filelist.xml. Windows Vista and later no longer support this. FAT32 volume support – On Windows Vista and later, System Restore no longer works on FAT32 disks and cannot be enabled on disks smaller than 1 GB. Restoring the system Up to Windows XP, the system can be restored as long as it is in an online state, that is, as long as Windows boots normally or from Safe mode. It is not possible to restore the system if Windows is unbootable without using third-party bootable recovery media such as ERD Commander. Under Windows Vista and later, the Windows Recovery Environment can be used to launch System Restore and restore a system in an offline state, that is, in case the Windows installation is unbootable. 
Since the advent of the Microsoft Desktop Optimization Pack, its Diagnostics and Recovery Toolset can be used to create a bootable recovery disc that can log on to an unbootable Windows installation and start System Restore. The toolset includes ERD Commander for Windows XP, which was previously a third-party product by Winternals. Limitations and complications A limitation which applies to System Restore in Windows versions prior to Windows Vista is that only certain file types and files in certain locations on the volume are monitored; therefore, unwanted software installations and especially in-place software upgrades may be incompletely reverted by System Restore. Consequently, there may be little or no practical beneficial impact. Certain issues may also arise when attempting to run or completely uninstall the incompletely reverted application. In contrast, various other utilities have been designed to provide much more complete reversal of system changes, including software upgrades. However, beginning with Windows Vista, System Restore monitors all system file types on all file paths on a given volume, so there is no issue of incomplete restoration. It is not possible to create a permanent restore point. All restore points will eventually be deleted after the time specified in the RPLifeInterval registry setting is reached, or earlier if the allotted disk space is insufficient. Even if no user- or software-triggered restore points are generated, the allotted disk space is consumed by automatic restore points. Consequently, in systems with little space allocated, if a user does not notice a new problem within a few days, it may be too late to restore to a configuration from before the problem arose. For data integrity purposes, System Restore does not allow other applications or users to modify or delete files in the directory where the restore points are saved. On NTFS volumes, the restore points are protected using ACLs. Since its method of backup is fairly simplistic, it may end up archiving malware such as viruses, for example in a restore point created before using antivirus software to clean an infection. Antivirus software is usually unable to remove infected files from System Restore; the only way to actually delete the infected files is to disable System Restore, which results in losing all saved restore points; otherwise, they will remain until Windows deletes the affected restore points. However, stored infected files are in themselves harmless unless executed; they will only pose a threat if the affected restore point is reinstated. Windows System Restore is not compatible with restore points made by third-party applications. Changes made to a volume from another operating system (in multi-booting scenarios) cannot be monitored. In addition, multi-booting different versions of Windows can disrupt the operation of System Restore. Specifically, Windows XP and Windows Server 2003 delete the checkpoints created by Windows Vista and later. Also, checkpoints created by Windows 8 may be destroyed by previous versions of Windows. See also Backup References Further reading External links Microsoft Support article Windows components Windows administration
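For illustration, the manual restore-point operations and disk-space configuration described above can also be scripted. The following minimal sketch assumes Windows PowerShell on a client edition of Windows; the drive letter, description, sequence number, and size are arbitrary example values.

```powershell
# Enable System Restore on the system drive (off by default in Windows 10).
Enable-ComputerRestore -Drive "C:\"

# Manually create a restore point before making a risky change.
Checkpoint-Computer -Description "Before driver install" -RestorePointType "MODIFY_SETTINGS"

# List existing restore points (sequence number, description, creation time).
Get-ComputerRestorePoint

# Roll back to a restore point by its sequence number (this restarts the machine,
# so it is shown commented out).
# Restore-Computer -RestorePoint 42

# Adjust the shadow-storage space reserved for restore points on C:.
vssadmin Resize ShadowStorage /For=C: /On=C: /MaxSize=10GB
```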
16443260
https://en.wikipedia.org/wiki/5028%20Halaesus
5028 Halaesus
5028 Halaesus is a Jupiter trojan from the Greek camp, approximately 51 kilometers in diameter. It was discovered on 23 January 1988 by American astronomer Carolyn Shoemaker at the Palomar Observatory in California. The dark D-type asteroid has a rotation period of 24.9 hours and is among the 100 largest Jupiter trojans. It was named after Halaesus from Greek mythology. Orbit and classification Halaesus is a Jovian asteroid orbiting in the leading Greek camp at Jupiter's L4 Lagrangian point, 60° ahead of the planet in a 1:1 resonance with it. It is a non-family asteroid in the Jovian background population. It orbits the Sun at a distance of 4.57–5.95 AU once every 12 years and 1 month (4,408 days; semi-major axis of 5.26 AU). Its orbit has an eccentricity of 0.13 and an inclination of 21° with respect to the ecliptic. The asteroid was first observed at CERGA Observatory in October 1985. The body's observation arc begins with its official discovery observation at Palomar in January 1988. Physical characteristics In the SDSS-based taxonomy, Halaesus is a D-type asteroid. The Pan-STARRS survey also characterized it as a D-type, the most common spectral type among the Jupiter trojan population. It has a typical V–I color index of 0.90. Rotation period In September 1996, photometric observations of Halaesus were made by Italian astronomer Stefano Mottola, using the now decommissioned Bochum 0.61-metre Telescope at ESO's La Silla Observatory in Chile. The resulting rotational lightcurve showed a well-defined period of 24.9 hours. In August 2015, observations by the Kepler space telescope gave two period determinations of 25.052 and 29.95 hours with amplitudes of 0.23 and 0.19 magnitude, respectively. Diameter and albedo According to the survey carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Halaesus measures 50.77 kilometers in diameter and its surface has an albedo of 0.057. The Collaborative Asteroid Lightcurve Link adopts an albedo of 0.057 and a diameter of 50.77 kilometers based on an absolute magnitude of 10.2. Naming This minor planet was named from Greek mythology after Halaesus, a son of King Agamemnon, after whom the asteroid 911 Agamemnon is named. The official naming citation was published by the Minor Planet Center on 4 June 1993. References External links Asteroid Lightcurve Database (LCDB), query form (info) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (5001)-(10000) – Minor Planet Center Asteroid 5028 Halaesus at the Small Bodies Data Ferret Discoveries by Carolyn S. Shoemaker Minor planets named from Greek mythology Named minor planets
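As a cross-check on the figures quoted above, the adopted diameter can be reproduced from the stated absolute magnitude and albedo using the standard asteroid diameter–albedo relation, and the orbital period follows from Kepler's third law applied to the stated semi-major axis:

```latex
% Diameter from absolute magnitude H and geometric albedo p_V (standard relation)
D \approx \frac{1329\,\mathrm{km}}{\sqrt{p_V}}\,10^{-H/5}
  = \frac{1329\,\mathrm{km}}{\sqrt{0.057}}\,10^{-10.2/5} \approx 50.8\,\mathrm{km}

% Orbital period from the semi-major axis a (in AU), via Kepler's third law
P \approx a^{3/2}\ \mathrm{yr} = 5.26^{3/2}\ \mathrm{yr} \approx 12.1\ \mathrm{yr} \approx 4{,}400\ \mathrm{days}
```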
26252738
https://en.wikipedia.org/wiki/Wendy%20Wilkins
Wendy Wilkins
Wendy Wilkins (born March 1, 1949) was provost and executive vice president at New Mexico State University until November 2012. She took the post in mid-July 2010 after resigning from the University of North Texas at the end of the business day on July 1, 2010. Prior to beginning her service as the University of North Texas Provost and Vice President for Academic Affairs in 2007, Wilkins served as Dean of the College of Arts and Letters at Michigan State University and as Associate Dean for Academic Personnel in the College of Liberal Arts and Sciences at Arizona State University. Her work in academic administration began with service as Associate Chair, and then Chair, of the Department of English at ASU. Education Wilkins' academic preparation, in Linguistics, includes a PhD from the University of California at Los Angeles, two post-doctoral appointments in Cognitive Science at the University of California at Irvine, and a pre-doctoral appointment in the Department of Linguistics and Philosophy at MIT. Employment and research Wilkins' primary research training was in syntactic theory; more recently she has worked on the evolutionary neurobiology of language and comparative linguistic and musical cognition. As a faculty member, Wilkins has held numerous positions both in the United States and in Mexico. In the U.S., in addition to ASU and MSU, she has served in a visiting capacity at the University of Massachusetts Amherst and at the University of Washington. In Mexico City, she was a professor of Linguistics at Universidad Autónoma Metropolitana, Unidad Ixtapalapa; Centro de Estudios Lingüísticos y Literarios, El Colegio de México; and Departamento de Lingüística, Escuela Nacional de Antropología e Historia. She also held a research position at Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México. Since becoming an administrator, Wilkins has remained involved in service to the profession. She engaged actively with the Council of Colleges of Arts and Sciences, including service on the Board of Directors. Within Linguistics, she was elected delegate-at-large for Section Z (Linguistics and Language Science) of the American Association for the Advancement of Science (AAAS) and has accepted numerous service assignments for the Linguistic Society of America (LSA). Administrative professional activities National and state 2009 - Presenter, Panel on Cost Containment and Enhancing Effectiveness, Council on Academic Affairs Summer Meeting, Association of Public and Land-Grant Universities (APLU) 2008-09 - Participant, Institute for New Chief Academic Officers, American Council on Education (ACE) 2002-04 - Member, Board of Directors, Boar's Head Theatre, Lansing, Michigan 2002 - Chair, Workshop for Assistant and Associate Deans, Council of Colleges of Arts and Sciences (CCAS) 2002 - Co-Chair, Program Committee, Annual Meeting of Arts and Sciences Deans, Association of American Universities (AAU), Berkeley, CA 2001-04 - Member, Board of Directors, East Lansing Film Festival 2001 - Organizer, Panel on Budgeting, Annual Meeting of Arts and Sciences Deans, AAU, Tucson, AZ 2000-03 - Member, Board of Directors, CCAS 2000 - Member, External Review Team, Departments of Modern Languages and Literatures and Language Media Center, University of Iowa 2000 - Participant, Re-Envisioning the PhD Project, University of Washington. 
Project funded by the Pew Charitable Trusts and the Council of Graduate Schools; project directors at the University of Washington, with invited participation of others 2000 - Organizer, Case Studies Session, Annual Meeting, CCAS 2000 - Member, Program Committee, CCAS 1999-2004 - Participant, Imagining America/Imagining Michigan. Project conceived and developed out of the White House Millennium Council, with leadership provided by the University of Michigan and the Woodrow Wilson Foundation, with invited participation of others 1999 - Presenter, Dean's Clinic (Large Institutions), Annual Meeting, CCAS 1999-2000 - Member, Committee on the Internationalization of Education, CCAS 1998-2004 - Member, Arts and Sciences Deans Group, Committee on Institutional Cooperation (CIC). The CIC comprises the Big Ten universities plus the University of Illinois at Chicago and the University of Chicago. 1998-2004 - Member, Arts and Sciences Deans Group, AAU Public Institutions 1996-1999 - Member, Resolutions Committee, CCAS. Developed two resolutions, both adopted by CCAS at the 1996 annual meeting, on fair and appropriate use of electronic communications technologies. 1996-98 - Founder, Arizona Council of Arts and Sciences Administrators (ACASA). ACASA includes the arts and sciences deans, associate deans, and assistant deans from the Arizona state universities (ASU, UA, NAU). 1996 - Panel Chair, CCAS. Panel on information technologies (including technology transfer, the "virtual" university, and evaluating technology innovations for personnel reviews) 1995 - Presenter, CCAS. Panel addressing partner accommodation issues in faculty hiring. 1995 - Panel Chair, Association of Departments of English, Summer Seminar. Workshop for newly appointed English department chairs. 1992 - Chair, Host Committee, Association of Departments of English, Summer Seminar Michigan State University (1998–2004) 2001-04 - Member, Board of Directors, Estate Wealth and Strategies Institute. One of three deans appointed to the Board of a non-profit institute focusing on inter-generational wealth transfer issues. 2001-04 - Member, Board of Directors, Michigan State University Alumni Association. One of two deans who serve on the Association Board, along with elected MSU alumni and other ex-officio members from University central administration. 2001-04 - Member, Selection Committee, University Distinguished Professorships. The UDP is MSU's highest internal honor for an individual faculty member. 2001-04 - Member, Enrollment Strategy Council. One of three deans on an advisory committee working with the Director of Admissions. 2000-04 - Member, Executive Advisory Committee, Executive Development Center. One of six on-campus members of a committee serving in an advisory capacity. 1999-2003 - Member, Campaign Strategy Committee (Office of the Provost and University Development). Committee of six deans plus members of the central administration and University Development, involved in developing policies and procedures for MSU's billion-dollar-plus (sesquicentennial) capital campaign 1999-2000 - Chair, Steering Committee, Center for Great Lakes Culture 1999-2000 - Member, Planning Committee, Academic Leadership Program 1999-2004 - Member, Jewish Studies Program Advisory Board 1998-2004 - Lead Dean, Integrative Studies Program. Integrative Studies is the core curriculum in general education at MSU 1998-2004 - Member, Study Abroad Deans Group. 
On-campus committee, advisory to the Dean of International Studies and Programs 1998-2004 - Member, Campus Cultural Coordinating Committee 1998-99 - Member, MSU Team, Wharton/IRHE Program for the Knight Collaborative, The Wharton School, University of Pennsylvania. Team composed of two deans and four other academic leaders; team project focused on collaborative work Arizona State University (1987–1997) 1997-98 - Member, Workforce for the University for the Next Century Project Team. Task force charged with revisioning Human Resources in the University 1997 - Member, Search Committee, Director of Faculty Development Program 1997-98 - Member, Advisory Network, Employee Career Enrichment Program 1996-98 - Member, Main Campus Information Technology Strategic Planning Committee. Committee charged to address both strategic and implementation plans for Information Technology 1996-98 - Member, Provost’s Working Group on Diversity 1995-98 - Chair, University Information Technology Advisory Committee. Committee advisory to Vice Provost for Information Technology. 1995-98 - Member, Employee Assistance Program Advisory Committee. Committee advisory to EAP Program Director. 1995-98 - Facilitator, Associate and Assistant Deans Council. Program to coordinate on-campus professional development activities for assistant and associate deans 1995-96 - Member, Human Resources Oversight Committee. Committee charged with on-campus review of human resources office. 1995-96 - Vice President, Board of Directors, University Club 1995 - Member, Coordinating Team, Student Process Re-engineering Project. SPRP was the University-level project with the fundamental goal of improving services to students; Wilkins was the Academic Affairs representative 1994-98 - Chair, Strategic Planning and Academic Resources Advisory Committee. Committee responsibilities included advising the Dean on both short- and long-term priorities and writing and revising the College strategic plan. 1994-1997 - Member, Chicana and Chicano Studies Advisory Committee. Chicana/o Studies, a new degree program and department, gained Regents’ approval in 1997. 1994 - Member, Main Campus Strategic Budgeting Committee. Subcommittee of the Main Campus Strategic Planning Committee. 1994 - Member, Preparing Future Faculty project. Graduate student training for the future professoriate; project directed by the Graduate College and funded by the Pew Foundation and the Council of Graduate Schools. 1994 - Member, Selection Committee, Legislative Intern Program 1994 - Member, University Summer Bridge Program Committee. Committee charged with developing programs to ease transition for marginally prepared high school students becoming first-year ASU students. 1994-95 - Member, Commission on New Student Orientation. Committee charged with developing a plan to improve orientation activities. 1994-95 - Member, Killer Course Task Force. Task force charged with determining which lower division courses were obstacles to student success, especially first- to second-year persistence. 1994-95 - Member, University Strategic Planning Committee for Mathematical Sciences. Committee charged with developing a plan to improve student learning and satisfaction in lower division mathematics. 1994 - Member, Search Committee, Humanities Computing Facility director 1992-1996 - Member, Board of Directors, University Club 1992-93 - Chair, Task Force on Quality and Diversity. 
College-level subcommittee of University task force charged with developing a plan for enhancing quality and diversity among the students and faculty. 1992-93 - Member, Search Committee, Dean of the College of Extended Education 1990-92 - Member, CLAS Strategic Planning Committee 1988-90 - Chair, CLAS Student Affairs and Grievances Committee 1987-89 - Chair, University Interdisciplinary Committee on Adaptive Neural Systems Publications Books 2007 - Phrasal and clausal architecture: Syntactic derivation and interpretation, Amsterdam: John Benjamins (Editor, with S. Karimi and V. Samiian). 1988 - Thematic Relations, Syntax and Semantics, Vol. 21. New York: Academic Press (Editor). 1984 - Locality in Linguistic Theory, New York: Academic Press (with P. Culicover). Refereed Publications 2010 - Toward an evolutionary biology of language through comparative neuroanatomy, in The Oxford Handbook of Language Evolution, Maggie Tallerman and Kathleen Gibson (editors), Oxford University Press, in press. 2009 - Mosaic neurobiology and anatomical plausibility, in The Prehistory of Language, Rudolph Botha and Chris Knight (editors), Oxford University Press, pp. 390–420. 2007 - Layers, mosaic pieces, and tiers, The Linguistic Review, 24(4):475–484. 2007 - Conceptual space. In Karimi, S., Samiian, V., and Wilkins, W.K. (Eds.), Phrasal and clausal architecture: Syntactic derivation and interpretation, Amsterdam: John Benjamins, pp. 365–395 (with J. Wakefield). 2005 - Anatomy matters, The Linguistic Review, 22(2-4):271-288. 1998 - El lexicón posminimista y las construcciones con se en español y macedonio. In De Báez, Y.J., Venier, M.E., Butragueño, P.M., and Barriga Villanueva, R. (Eds.), Volume to commemorate the 50th anniversary of the Centro de Estudios Lingüísticos y Literarios, El Colegio de México, pp. 143–163 (with V. Todoroska and C. Agüero Bautista). 1997 - Further issues in neurolinguistic preconditions, Behavioral and Brain Sciences, 19(4):788-798 (with J. Wakefield). 1997 - El lexicón posminimista: el caso de se. In Pool, M. (Ed.), Estudios de lingúística formal, El Colegio de México, pp. 67–86. 1995 - Issues and non-issues in the origins of language (reply to commentary), Behavioral and Brain Sciences, 18(1):205-226 (with J. Wakefield). 1995 - Brain evolution and neurolinguistic preconditions, Behavioral and Brain Sciences, 18(1):161-182 (with J. Wakefield). 1994 - Lexical learning by error detection, Language Acquisition, 3(2):121-157. 1991 - Autonomy and the nature of the input, Behavioral and Brain Sciences, 14(4):638. 1990 - In defense of exaptation, Behavioral and Brain Sciences, 13(4):763-764 (with J. Dumford). 1989 - Linguistics, learnability, and computation. In Brink, J.R. and Haden, C.R. (Eds.), The Computer and the Brain: Perspectives on Human and Artificial Intelligence, Amsterdam: Elsevier North-Holland, 197-207. 1989 - Why degree-0/?, Behavioral and Brain Sciences, 12(2):362-363. 1988 - Linguistics and the teaching of science, in Linguistics in the Undergraduate Curriculum, Linguistic Society of America (position paper). 1988 - Thematic structure and reflexivization, in Wilkins, W.K. (Ed.), Thematic Relations, New York: Academic Press, 191-213. 1987 - On the linguistic function of event roles, in Grammar and Cognition, Berkeley Linguistics Society, 460-472. 1987 - On the learnability of the scope of reflexivization, Proceedings of the Sixth Annual West Coast Conference on Formal Linguistics, (WCCFL VI), 317-327. 
1986 - Control, PRO, and the Projection Principle, Language, 62(2):120-153 (with P. Culicover). 1986 - El sintagma nominal de infinitivo, Revista Argentina de Lingüística, 2(2):209-229. 1981 - On the nonnecessity of the Locality Principle, Linguistic Analysis, 8(2):111-144. 1980 - Adjacency and variables in syntactic transformations, Linguistic Inquiry, 11(4):709-758. 1977 - Wh-fronting and the Variable Interpretation Convention, Proceedings of the Seventh Annual Meeting of the Northeastern Linguistics Society, MIT, 365-381. 1975 - Strategies in constructing a definite description: some evidence from KinyaRwanda, Studies in African Linguistics, 6 (2):151-169 (with A. Kimenyi). Book reviews 2006 - Review of The Birth of the Mind, by Gary Marcus, Language, Vol. 82, No. 4, 921-924. 1992 - Review of Lexical Semantics Without Thematic Roles, by Yael Ravin, Journal of Linguistics, Vol. 28, No. 1, 241-250. 1988 - Review of The Mind's New Science by Howard Gardner, Annals of the History of Computing, Vol. 10, No. 1, 89-93. 1983 - Review of Approaches to Island Phenomena by Alexander Grosu, Language, Vol. 59, No. 4, 902-905 Recent Academic Presentations 2008 - Biological plausibility and comparative anatomy, paper presented at the Annual Meeting, Linguistic Society of America, Chicago 2008 - Language in the light of evolution, Symposium organized and presented (with James Hurford, University of Edinburgh) at the 2008 Annual Meeting, Linguistic Society of America, Chicago References External links Provost and Academic Affairs - The University of North Texas 1949 births Living people
290474
https://en.wikipedia.org/wiki/Alice%20Munro
Alice Munro
Alice Ann Munro (born 10 July 1931) is a Canadian short story writer who won the Nobel Prize in Literature in 2013. Munro's work has been described as revolutionizing the architecture of short stories, especially in its tendency to move forward and backward in time. Her stories have been said to "embed more than announce, reveal more than parade." Munro's fiction is most often set in her native Huron County in southwestern Ontario. Her stories explore human complexities in an uncomplicated prose style. Munro's writing has established her as "one of our greatest contemporary writers of fiction", or, as Cynthia Ozick put it, "our Chekhov." Munro has received many literary accolades, including the 2013 Nobel Prize in Literature for her work as "master of the contemporary short story", and the 2009 Man Booker International Prize for her lifetime body of work. She is also a three-time winner of Canada's Governor General's Award for fiction, and received the Writers' Trust of Canada's 1996 Marian Engel Award and the 2004 Rogers Writers' Trust Fiction Prize for Runaway. Early life and education Munro was born Alice Ann Laidlaw in Wingham, Ontario. Her father, Robert Eric Laidlaw, was a fox and mink farmer, and later turned to turkey farming. Her mother, Anne Clarke Laidlaw (née Chamney), was a schoolteacher. She is of Irish and Scottish descent; her father is a descendant of James Hogg, the Ettrick Shepherd. Munro began writing as a teenager, publishing her first story, "The Dimensions of a Shadow", in 1950 while studying English and journalism at the University of Western Ontario on a two-year scholarship. During this period she worked as a waitress, a tobacco picker, and a library clerk. In 1951, she left the university, where she had been majoring in English since 1949, to marry fellow student James Munro. They moved to Dundarave, West Vancouver, for James's job in a department store. In 1963, the couple moved to Victoria, where they opened Munro's Books, which still operates. Career Munro's highly acclaimed first collection of stories, Dance of the Happy Shades (1968), won the Governor General's Award, then Canada's highest literary prize. That success was followed by Lives of Girls and Women (1971), a collection of interlinked stories. In 1978, Munro's collection of interlinked stories Who Do You Think You Are? was published (titled The Beggar Maid: Stories of Flo and Rose in the United States). This book earned Munro a second Governor General's Literary Award. From 1979 to 1982, she toured Australia, China and Scandinavia for public appearances and readings. In 1980 Munro held the position of writer in residence at both the University of British Columbia and the University of Queensland. From the 1980s to 2012, Munro published a short-story collection at least once every four years. First versions of Munro's stories have appeared in journals such as The Atlantic Monthly, Grand Street, Harper's Magazine, Mademoiselle, The New Yorker, Narrative Magazine, and The Paris Review. Her collections have been translated into 13 languages. On 10 October 2013, Munro was awarded the Nobel Prize in Literature, cited as a "master of the contemporary short story". She is the first Canadian and the 13th woman to receive the Nobel Prize in Literature. Munro is noted for her longtime association with editor and publisher Douglas Gibson. 
When Gibson left Macmillan of Canada in 1986 to launch the Douglas Gibson Books imprint at McClelland and Stewart, Munro returned the advance Macmillan had already paid her for The Progress of Love so that she could follow Gibson to the new company. Munro and Gibson have retained their professional association ever since; when Gibson published his memoirs in 2011, Munro wrote the introduction, and to this day Gibson often makes public appearances on Munro's behalf when her health prevents her from appearing personally. Almost 20 of Munro's works have been made available for free on the web, in most cases only the first versions. From the period before 2003, 16 stories have been included in Munro's own compilations more than twice, with two of her works republished four times: "Carried Away" and "Hateship, Friendship, Courtship, Loveship, Marriage". Film adaptations of Munro's short stories have included Martha, Ruth and Edie (1988), Edge of Madness (2002), Away from Her (2006), Hateship, Loveship (2013) and Julieta (2016). Writing Many of Munro's stories are set in Huron County, Ontario. Her strong regional focus is one of her fiction's features. Another is an omniscient narrator who serves to make sense of the world. Many compare Munro's small-town settings to those of writers from the rural American South. As in the works of William Faulkner and Flannery O'Connor, Munro's characters often confront deep-rooted customs and traditions, but her characters' reactions are generally less intense than their Southern counterparts'. Her male characters tend to capture the essence of the everyman, while her female characters are more complex. Much of Munro's work exemplifies the Southern Ontario Gothic literary genre. Munro's work is often compared with that of the great short-story writers. In her stories, as in Chekhov's, plot is secondary and "little happens". As in Chekhov, Garan Holcombe says, "All is based on the epiphanic moment, the sudden enlightenment, the concise, subtle, revelatory detail." Munro's work deals with "love and work, and the failings of both. She shares Chekhov's obsession with time and our much-lamented inability to delay or prevent its relentless movement forward." A frequent theme of her work, particularly in her early stories, has been the dilemmas of a girl coming of age and coming to terms with her family and her small hometown. In recent work such as Hateship, Friendship, Courtship, Loveship, Marriage (2001) and Runaway (2004) she has shifted her focus to the travails of middle age, women alone, and the elderly. Her characters often experience a revelation that sheds light on, and gives meaning to, an event. Munro's prose reveals the ambiguities of life: "ironic and serious at the same time," "mottoes of godliness and honor and flaming bigotry," "special, useless knowledge," "tones of shrill and happy outrage," "the bad taste, the heartlessness, the joy of it." Her style juxtaposes the fantastic and the ordinary, with each undercutting the other in ways that simply and effortlessly evoke life. Many critics have written that Munro's stories often have the emotional and literary depth of novels. Some have asked whether Munro actually writes short stories or novels. Alex Keegan, writing in Eclectica, gave a simple answer: "Who cares? In most Munro stories there is as much as in many novels." Research on Munro's work has been undertaken since the early 1970s, with the first PhD thesis published in 1972. 
The first book-length volume collecting the papers presented at the University of Waterloo's first conference on her work was published in 1984, The Art of Alice Munro: Saying the Unsayable. In 2003/2004, the journal Open Letter: Canadian quarterly review of writing and sources published 14 contributions on Munro's work. In autumn 2010, the Journal of the Short Story in English (JSSE)/Les cahiers de la nouvelle dedicated a special issue to Munro, and in May 2012 an issue of the journal Narrative focussed on a single story by Munro, "Passion" (2004), with an introduction, summary of the story, and five analytical essays. Creating new versions Munro publishes variant versions of her stories, sometimes within a short span of time. Her stories "Save the Reaper" and "Passion" came out in two different versions in the same year, in 1998 and 2004 respectively. Two other stories were republished in variant versions about 30 years apart, "Home" (1974/2006/2014) and "Wood" (1980/2009). In 2006 Ann Close and Lisa Dickler Awano reported that Munro had not wanted to reread the galleys of Runaway (2004): "No, because I'll rewrite the stories." In their symposium contribution An Appreciation of Alice Munro they say that of her story "Powers", for example, Munro did eight versions in all. Awano writes that "Wood" is a good example of how Munro, "a tireless self-editor", rewrites and revises a story, in this case returning to it for a second publication nearly 30 years later, revising characterizations, themes and perspectives, as well as rhythmic syllables, a conjunction or a punctuation mark. The characters change, too. Judging from the perspective they take on things, they are middle-aged in 1980, and in 2009 they are older. Awano perceives a heightened lyricism brought about not least by the poetic precision of the revision Munro undertakes. The 2009 version comprises eight sections to the 1980 version's three, and has a new ending. Awano writes that Munro literally "refinishes" the first take on the story with an ambiguity characteristic of Munro's endings, and that Munro reimagines her stories throughout her work in a variety of ways. Several stories were republished with considerable variation as to which content goes into which section. This can be seen, for example, in "Home", "The Progress of Love", "What Do You Want to Know For?", "The Children Stay", "Save the Reaper", "The Bear Came Over the Mountain", "Passion", "The View From Castle Rock", "Wenlock Edge", and "Deep-Holes". Personal life Munro married James Munro in 1951. Their daughters Sheila, Catherine, and Jenny were born in 1953, 1955, and 1957 respectively; Catherine died the day of her birth due to the lack of functioning kidneys. In 1963, the Munros moved to Victoria, where they opened Munro's Books, a popular bookstore still in business. In 1966, their daughter Andrea was born. Alice and James Munro divorced in 1972. Munro returned to Ontario to become writer in residence at the University of Western Ontario, and in 1976 received an honorary LLD from the institution. In 1976, she married Gerald Fremlin, a cartographer and geographer she met in her university days. The couple moved to a farm outside Clinton, Ontario, and later to a house in Clinton, where Fremlin died on 17 April 2013, aged 88. Munro and Fremlin also owned a home in Comox, British Columbia. At a Toronto appearance in October 2009, Munro indicated that she had received treatment for cancer and for a heart condition requiring coronary-artery bypass surgery. 
In 2002, Sheila Munro published a childhood memoir, Lives of Mothers and Daughters: Growing Up with Alice Munro. Works Original short-story collections Dance of the Happy Shades – 1968 (winner of the 1968 Governor General's Award for Fiction) Lives of Girls and Women – 1971 (winner of the Canadian Bookseller's Award) Something I've Been Meaning to Tell You – 1974 Who Do You Think You Are? – 1978 (winner of the 1978 Governor General's Award for Fiction; also published as The Beggar Maid; short-listed for the Booker Prize for Fiction in 1980) The Moons of Jupiter – 1982 (nominated for a Governor General's Award) The Progress of Love – 1986 (winner of the 1986 Governor General's Award for Fiction) Friend of My Youth – 1990 (winner of the Trillium Book Award) Open Secrets – 1994 (nominated for a Governor General's Award) The Love of a Good Woman – 1998 (winner of the 1998 Giller Prize and the 1998 National Book Critics Circle Award) Hateship, Friendship, Courtship, Loveship, Marriage – 2001 (republished as Away from Her) Runaway – 2004 (winner of the Giller Prize and Rogers Writers' Trust Fiction Prize) The View from Castle Rock – 2006 Too Much Happiness – 2009 Dear Life – 2012 Short-story compilations Selected Stories (later retitled Selected Stories 1968–1994 and A Wilderness Station: Selected Stories, 1968–1994) – 1996 No Love Lost – 2003 Vintage Munro – 2004 Alice Munro's Best: A Selection of Stories – Toronto 2006 / Carried Away: A Selection of Stories – New York 2006; both 17 stories (spanning 1977–2004) with an introduction by Margaret Atwood My Best Stories – 2009 New Selected Stories – 2011 Lying Under the Apple Tree. New Selected Stories, 434 pages, 15 stories, c Alice Munro 2011, Vintage, London 2014, paperback Family Furnishings: Selected Stories 1995–2014 – 2014 Selected awards and honours Awards Governor General's Literary Award for English language fiction (1968, 1978, 1986) Canadian Booksellers Award for Lives of Girls and women (1971) Shortlisted for the annual (UK) Booker Prize for Fiction (1980) for The Beggar Maid The Writers' Trust of Canada's Marian Engel Award (1986) for her body of work Rogers Writers' Trust Fiction Prize (2004) for Runaway Trillium Book Award for Friend of My Youth (1991), The Love of a Good Woman (1999) and Dear Life (2013) WH Smith Literary Award (1995, UK) for Open Secrets Lannan Literary Award for Fiction (1995) PEN/Malamud Award for Excellence in Short Fiction (1997) National Book Critics Circle Award (1998, U.S.) For The Love of a Good Woman Giller Prize (1998 and 2004) Rea Award for the Short Story (2001) given to a living American or Canadian author. Libris Award Edward MacDowell Medal for outstanding contribution to the arts by the MacDowell Colony (2006). O. Henry Award for continuing achievement in short fiction in the U.S. for "Passion" (2006), "What Do You Want To Know For" (2008) and "Corrie" (2012) Man Booker International Prize (2009, UK) Canada-Australia Literary Prize Commonwealth Writers Prize Regional Award for Canada and the Caribbean. Nobel Prize in Literature (2013) as a "master of the contemporary short story". Honours 1992: Foreign Honorary Member of the American Academy of Arts and Letters 1993: Royal Society of Canada's Lorne Pierce Medal 2005: Medal of Honor for Literature from the U.S. 
National Arts Club 2010: Knight of the Order of Arts and Letters 2014: Silver coin released by the Royal Canadian Mint in honour of Munro's Nobel Prize win 2015: Postage stamp released by Canada Post in honour of Munro's Nobel Prize win References Further reading Atwood, Margaret et al. "Appreciations of Alice Munro." Virginia Quarterly Review 82.3 (Summer 2006): 91–107. Interviews with various authors (Margaret Atwood, Russell Banks, Michael Cunningham, Charles McGrath, Daniel Menaker and others) presented in first-person essay format Awano, Lisa Dickler. "Kindling The Creative Fire: Alice Munro's Two Versions of 'Wood.'" New Haven Review (30 May 2012). Examining overall themes in Alice Munro's fiction through a study of her two versions of "Wood." Awano, Lisa Dickler. "Alice Munro's Too Much Happiness." Virginia Quarterly Review (22 October 2010). Long-form book review of Too Much Happiness in the context of Alice Munro's canon. Besner, Neil Kalman. Introducing Alice Munro's Lives of Girls and Women: a reader's guide. (Toronto: ECW Press, 1990) Blodgett, E. D. Alice Munro. (Boston: Twayne Publishers, 1988) Buchholtz, Miroslawa (ed.). Alice Munro. Understanding, Adapting, Teaching (Springer International Publishing, 2016) Carrington, Ildikó de Papp. Controlling the Uncontrollable: the fiction of Alice Munro. (DeKalb: Northern Illinois University Press, 1989) Carscallen, James. The Other Country: patterns in the writing of Alice Munro. (Toronto: ECW Press, 1993) Cox, Alisa. Alice Munro. (Tavistock: Northcote House, 2004) Dahlie, Hallvard. Alice Munro and Her Works. (Toronto: ECW Press, 1984) Davey, Frank. 'Class, Family Furnishings, and Munro's Early Stories.' In Ventura and Conde. 79–88. de Papp Carrington, Ildiko."What's in a Title?: Alice Munro's 'Carried Away.'" Studies in Short Fiction. 20.4 (Fall 1993): 555. Dolnick, Ben. "A Beginner's Guide to Alice Munro" The Millions (5 July 2012) Elliott, Gayle. "A Different Track: Feminist meta-narrative in Alice Munro's 'Friend of My Youth.'" Journal of Modern Literature. 20.1 (Summer 1996): 75. Fowler, Rowena. "The Art of Alice Munro: The Beggar Maid and Lives of Girls and Women." Critique. 25.4 (Summer 1984): 189. Garson, Marjorie. "Alice Munro and Charlotte Bronte." University of Toronto Quarterly 69.4 (Fall 2000): 783. Genoways, Ted. "Ordinary Outsiders." Virginia Quarterly Review 82.3 (Summer 2006): 80–81. Gibson, Douglas. Stories About Storytellers: Publishing Alice Munro, Robertson Davies, Alistair MacLeod, Pierre Trudeau, and Others. (ECW Press, 2011.) Excerpt. Gittings, Christopher E.. "Constructing a Scots-Canadian Ground: Family history and cultural translation in Alice Munro." Studies in Short Fiction 34.1 (Winter 1997): 27 Hebel, Ajay. The Tumble of Reason: Alice Munro's discourse of absence. (Toronto: University of Toronto Press, 1994) Hiscock, Andrew. "Longing for a Human Climate: Alice Munro's 'Friend of My Youth' and the culture of loss." Journal of Commonwealth Literature 32.2 (1997): 18. Hooper, Brad The Fiction of Alice Munro: An Appreciation (Westport, Conn.: Praeger, 2008), Houston, Pam. "A Hopeful Sign: The making of metonymic meaning in Munro's 'Meneseteung.'" Kenyon Review 14.4 (Fall 1992): 79. Howells, Coral Ann. Alice Munro. (New York: Manchester University Press, 1998), Hoy, H. "'Dull, Simple, Amazing and Unfathomable': Paradox and Double Vision In Alice Munro's Fiction." Studies in Canadian Literature/Études en littérature canadienne, Volume 5.1. (1980). Lecercle, Jean-Jacques. 'Alice Munro's Two Secrets.' 
In Ventura and Conde. 25–37. Levene, Mark. "It Was About Vanishing: A Glimpse of Alice Munro's Stories." University of Toronto Quarterly 68.4 (Fall 1999): 841. Lorre-Johnston,Christine, and Eleonora Rao, eds. Space and Place in Alice Munro's Fiction: "A Book with Maps in It." Rochester, NY: Camden House, 2018.. Lynch, Gerald. "No Honey, I'm Home." Canadian Literature 160 (Spring 1999): 73. MacKendrick, Louis King. Some Other Reality: Alice Munro's Something I've Been Meaning to Tell You. (Toronto: ECW Press, 1993) Martin, W.R. Alice Munro: paradox and parallel. (Edmonton: University of Alberta Press, 1987) Mazur, Carol and Moulder, Cathy. Alice Munro: An Annotated Bibliography of Works and Criticism. (Toronto: Scarecrow Press, 2007) McCaig, JoAnn. Reading In: Alice Munro's archives. (Waterloo: Wilfrid Laurier University Press, 2002) Miller, Judith, ed. The Art of Alice Munro: saying the unsayable: papers from the Waterloo conference. (Waterloo: Waterloo Press, 1984) Munro, Sheila. Lives of Mother and Daughters: growing up with Alice Munro. (Toronto: McClelland & Stewart, 2001) Murray, Jennifer. Reading Alice Munro with Jacques Lacan. (Montreal: McGill-Queen's University Press, 2016) Pfaus, B. Alice Munro. (Ottawa: Golden Dog Press, 1984.) Rasporich, Beverly Jean. Dance of the Sexes: art and gender in the fiction of Alice Munro. (Edmonton: University of Alberta Press, 1990) Redekop, Magdalene. Mothers and Other Clowns: the stories of Alice Munro. (New York: Routledge, 1992) Ross, Catherine Sheldrick. Alice Munro: a double life. (Toronto: ECW Press, 1992.) Simpson, Mona. A Quiet Genius The Atlantic. (December 2001) Smythe, Karen E. Figuring Grief: Gallant, Munro and the poetics of elegy. (Montreal: McGill-Queen's University Press, 1992) Somacarrera, Pilar. A Spanish Passion for the Canadian Short Story: Reader Responses to Alice Munro's Fiction in Web 2.0 Open Access, in: Made in Canada, Read in Spain: Essays on the Translation and Circulation of English-Canadian Literature Open Access, edited by Pilar Somacarrera, de Gruyter, Berlin 2013, p. 129–144, Steele, Apollonia and Tener, Jean F., editors. The Alice Munro Papers: Second Accession. (Calgary: University of Calgary Press, 1987) Tausky, Thomas E. Biocritical Essay. The University of Calgary Library Special Collections (1986) Thacker, Robert. Alice Munro: writing her lives: a biography. (Toronto: McClelland & Stewart, 2005) Thacker, Robert. Ed. The Rest of the Story: critical essays on Alice Munro. (Toronto: ECW Press, 1999) Ventura, Héliane, and Mary Condé, eds. Alice Munro. Open Letter 11:9 (Fall-Winter 2003-4). ISSN 0048-1939. Proceedings of the Alice Munro conference L'écriture du secret/Writing Secrets, Université d'Orléans, 2003. External links List of Works "Alice Munro, The Art of Fiction No. 137", The Paris Review No. 
131, Summer 1994 Alice Munro at the British Council Writers Directory Stories by Alice Munro accessible online Alice Munro's papers (fonds) held at the University of Calgary How To Tell If You Are in an Alice Munro Story, 8 December 2014 with a pre-recorded video conversation with the Laureate Alice Munro: In Her Own Words 1931 births Living people 20th-century Canadian short story writers 20th-century Canadian women writers 21st-century Canadian short story writers 21st-century Canadian women writers Canadian Nobel laureates Canadian people of Irish descent Canadian people of Scottish descent Canadian women short story writers Chevaliers of the Ordre des Arts et des Lettres Fellows of the Royal Society of Literature Governor General's Award-winning fiction writers International Booker Prize winners Members of the Order of Ontario Nobel laureates in Literature PEN/Malamud Award winners People from Wingham, Ontario The New Yorker people University of Western Ontario alumni Women Nobel laureates Writers from Ontario
24002637
https://en.wikipedia.org/wiki/Hisashi%20Kobayashi
Hisashi Kobayashi
Hisashi Kobayashi (Japanese: 小林久志 Kobayashi Hisashi; born on June 13, 1938) is the Sherman Fairchild University Professor of Electrical Engineering and Computer Science, Emeritus at Princeton University in Princeton, New Jersey. His fields of expertise include applied probability; queueing theory; system modeling and performance analysis; digital communication and networks; network architecture; investigation of the Riemann hypothesis; and stochastic modeling of an infectious disease. He was a Senior Distinguished Researcher at the National Institute of Information and Communications Technology (NICT), Japan from September 2008 to March 2016. He was President of Friends of UTokyo, Inc. (FUTI), New York from April 2011 to September 2015, Chair of its Advisory Committee from September 2015 to September 2019, and has been an advisory member since September 2019. He also served on the Board of Directors of the Armstrong Memorial Research Foundation, Inc. from September 2008 to August 2021. Early life in Japan Hisashi Kobayashi was born in Tokyo, Japan. The mathematician Shoshichi Kobayashi (1932–2012) was Hisashi's elder brother. Hisashi studied at the University of Tokyo, and completed a Bachelor of Engineering and a Master of Engineering in electrical engineering in 1961 and 1963, respectively. He was a recipient of the Sugiyama Scholarship (1958–61) and the RCA David Sarnoff Scholarship (1960). He worked as a radar system designer at Toshiba in Kawasaki in 1963–65. Life and career in the United States In 1965 Kobayashi came to the United States as a recipient of the Orson Desaix Munn Fellowship of Princeton University and received a PhD degree in electrical engineering in 1967. He worked for the IBM Thomas J. Watson Research Center at Yorktown Heights, New York, for fifteen years (1967–1982). He was a research staff member in its Applied Research Department from 1967 to 1970, where he worked on seismic signal processing, data transmission theory, digital magnetic recording, and image compression algorithms. In 1971 he moved to its Computer Science Department as Manager of the then newly created "System Measurement and Modeling" group, was promoted in 1976 to Senior Manager of "Systems Analysis and Algorithms", and in 1981 became Department Manager of "VLSI Design". During his tenure at IBM Research, he was granted sabbatical leaves to accept invitations from several institutions. He was a visiting professor at the University of California, Los Angeles (September 1969 – March 1970), the University of Hawaii (July 1975 – December 1975), Stanford University (January 1976 – June 1976), Technische Universität Darmstadt (September 1979 – August 1980), and the Free University of Brussels (September 1980 – December 1980). In 1982, Kobayashi was appointed the founding director of the IBM Japan Science Institute (later renamed the IBM Tokyo Research Laboratory), and served in that position until 1986, when he joined Princeton University's faculty as Dean of the School of Engineering and Applied Science (SEAS) and the Sherman Fairchild University Professor of Electrical Engineering and Computer Science. He was Dean from 1986 to 1991, and was responsible for establishing multiple interdisciplinary and/or inter-institutional centers and programs in such academic disciplines as materials science, opto-electronics, earthquake engineering, surface-engineered materials, discrete mathematics for computer science, and plasma etching. 
After finishing his tenure as Dean, he was an NEC C&C visiting professor at the RCAST (Research Center for Advanced Science and Technology), the University of Tokyo (1991–1992). From the fall of 1992 until June 2008, he held a full-time research and teaching position in Princeton University's Department of Electrical Engineering. He was a BC ASI Visiting Fellow at the University of Victoria in Canada from 1998 to 1999. He retired from Princeton University in June 2008. In the fall semesters of 2012–13 and 2013–14, however, Kobayashi was a visiting lecturer at Princeton University and taught a graduate course, "ELE 526: Random processes in information systems". He was a distinguished researcher (part-time) at the National Institute of Information and Communications Technology (NICT) of Japan from 2008 to 2016. Since 2016 he has been investigating the Riemann hypothesis. Since 2020, he has been actively pursuing a new stochastic model of an infectious disease. Major awards and honors 1977: Fellow of IEEE (Institute of Electrical and Electronics Engineers) 1979: Silver Core Award of IFIP (International Federation of Information Processing) 1979: Humboldt Prize (Senior U.S. Scientist Award) from the Alexander von Humboldt Foundation, West Germany 1984: Member of the Engineering Academy of Japan 1992: Fellow of IEICE (Institute of Electronics, Information and Communication Engineers) of Japan Commendation List for Outstanding Teaching, School of Engineering and Applied Science, Princeton University 2005: Life Fellow of IEEE 2005: Technology Award from the Eduard Rhein Foundation of Germany, with Dolivo and Eleftheriou, for their pioneering contributions to PRML (Partial-Response, Maximum-Likelihood), which allowed dramatic increases in the storage capacity of computer hard disks. 2006: Guest speech at the Entrance Ceremony of the University of Tokyo, Graduate School 2012: C&C Prize from the NEC Foundation, Japan for his "pioneering and leading contributions both to the invention of high-density and highly reliable digital recording technology and to the creation and development of a performance-evaluation methodology for computer and communication systems." 2019: Honorary Doctorate Degree from Ghent University, Belgium for his "research contributions in queuing theory, systems performance modeling and evaluation, the Riemann hypothesis." University of Ghent article and photos from ceremony. List of books Probability, Random Processes and Statistical Analysis (2012), coauthored by Brian L. Mark and William Turin, Cambridge University Press. System Modeling and Analysis: Foundations of System Performance Evaluation (2009), coauthored with Brian L. Mark, Pearson/Prentice Hall. Modeling and Analysis: An Introduction to System Performance Evaluation Methodology (1978), Addison-Wesley Publishing Company. External links Dean of Faculty, Princeton University: Hisashi Kobayashi; Friends of UTokyo, Inc.; Hisashi Kobayashi's Blog References 1938 births Japanese electrical engineers Princeton University alumni Princeton University faculty University of Tokyo alumni Living people Technische Universität Darmstadt faculty
3026193
https://en.wikipedia.org/wiki/Alias%20%28command%29
Alias (command)
In computing, alias is a command in various command-line interpreters (shells), which enables the replacement of a word by another string. It is mainly used for abbreviating a system command, or for adding default arguments to a regularly used command. alias is available in Unix shells, AmigaDOS, 4DOS/4NT, KolibriOS, Windows PowerShell, ReactOS, and the EFI shell. Aliasing functionality in the MS-DOS and Microsoft Windows operating systems is provided by the DOSKey command-line utility. An alias will last for the life of the shell session. Regularly used aliases can be set from the shell's rc file (such as .bashrc) so that they will be available upon the start of the corresponding shell session. The alias commands may either be written in the config file directly or sourced from a separate file. History In Unix, aliases were introduced in the C shell and survive in descendant shells such as tcsh and bash. C shell aliases were strictly limited to one line. This was useful for creating simple shortcut commands, but not more complex constructs. Older versions of the Bourne shell did not offer aliases, but they did provide functions, which are more powerful than the csh alias concept. The alias concept from csh was imported into the Bourne Again Shell (bash) and the Korn shell (ksh). With shells that support both functions and aliases but no parameterized inline shell scripts, the use of functions wherever possible is recommended. Cases where aliases are necessary include situations where chained aliases are required (bash and ksh). The command has also been ported to the IBM i operating system. Usage Creating aliases Unix Non-persistent aliases can be created by supplying name/value pairs as arguments for the alias command. In Unix shells the syntax is: alias gc='git commit' C shell The corresponding syntax in the C shell or tcsh shell is: alias gc "git commit" This alias means that when the command gc is read in the shell, it will be replaced with git commit and that command will be executed instead. 4DOS In the 4DOS/4NT shell the following syntax is used to define cp as an alias for the 4DOS copy command: alias cp copy Windows PowerShell To create a new alias in Windows PowerShell, the new-alias cmdlet can be used: new-alias ci copy-item This creates a new alias called ci that will be replaced with the copy-item cmdlet when executed. In PowerShell, an alias cannot be used to specify default arguments for a command. Instead, this must be done by adding items to the collection $PSDefaultParameterValues, one of the PowerShell preference variables. Viewing currently defined aliases To view defined aliases the following commands can be used: alias # Used without arguments; displays a list of all current aliases alias -p # List aliases in a way that allows re-creation by sourcing the output; not available in 4DOS/4NT and PowerShell alias myAlias # Displays the command for a defined alias Overriding aliases In Unix shells, it is possible to override an alias by quoting any character in the alias name when using the alias. For example, consider the following alias definition: alias ls='ls -la' To override this alias and execute the ls command as it was originally defined, the following syntax can be used: 'ls' or \ls In the 4DOS/4NT shell it is possible to override an alias by prefixing it with an asterisk. For example, consider the following alias definition: alias dir = *dir /2/p The asterisk in the second instance of dir causes the unaliased dir to be invoked, preventing recursive alias expansion.
Also the user can get the unaliased behaviour of dir at the command line by using the same syntax: *dir Changing aliases In Windows PowerShell, the set verb can be used with the alias cmdlet to change an existing alias: set-alias ci cls The alias ci will now point to the cls command. In the 4DOS/4NT shell, the eset command provides an interactive command line to edit an existing alias: eset /a cp The /a causes the alias cp to be edited, as opposed to an environment variable of the same name. Removing aliases In Unix shells and 4DOS/4NT, aliases can be removed by executing the unalias command: unalias copy # Removes the copy alias unalias -a # The -a switch will remove all aliases; not available in 4DOS/4NT unalias * # 4DOS/4NT equivalent of `unalias -a` - wildcards are supported In Windows PowerShell, the alias can be removed from the alias:\ drive using remove-item: remove-item alias:ci # Removes the ci alias Features Chaining An alias usually replaces just the first word. But some shells, such as bash and ksh, allow a sequence of words to be replaced. This particular feature is unavailable through the function mechanism. The usual syntax is to define the first alias with a trailing space character. For instance, using the two aliases: alias list='ls ' # note the trailing space to trigger chaining alias long='-Flas' # options to ls for a long listing allows: list long myfile # becomes "ls -Flas myfile" when run for a long listing, where "long" is also evaluated as an alias. Command arguments In the C Shell, arguments can be embedded inside the command using the string \!*. For example, with this alias: alias ls-more 'ls \!* | more' ls-more /etc /usr expands to ls /etc /usr | more to list the contents of the directories /etc and /usr, pausing after every screenful. Without \!*, alias ls-more 'ls | more' would instead expand to ls | more /etc /usr which incorrectly attempts to open the directories in more. The Bash and Korn shells instead use shell functions — see § Alternatives below. Alternatives Aliases should usually be kept simple. Where keeping them simple is not possible, the recommendation is usually to use one of the following: Shell scripts, which essentially provide the full ability to create new system commands. Symbolic links in the user's PATH (such as /bin). This method is useful for providing an additional way of calling the command, and in some cases may allow access to a buried command function for the small number of commands that use their invocation name to select the mode of operation. Shell functions, especially if the command being created needs to modify the internal runtime environment of the shell itself (such as environment variables), needs to change the shell's current working directory, or must be implemented in a way which guarantees it appears in the command search path for anything but an interactive shell (especially any "safer" version of commonly used commands). The most common form of aliases, which just add a few options to a command and then include the rest of the command line, can be converted easily to shell functions following this pattern: alias ll='ls -Flas' # long listing, alias ll () { ls -Flas "$@" ; } # long listing, function To prevent a function from calling itself recursively, use command: ls () { command ls --color=auto "$@" ; } In older Bourne shells use /bin/ls instead of command ls.
References Further reading External links Bash man page for alias The alias Command by The Linux Information Project (LINFO) Alias ReactOS commands Windows commands Unix SUS2008 utilities Windows administration
4023692
https://en.wikipedia.org/wiki/Nanoindentation
Nanoindentation
Nanoindentation, also called instrumented indentation testing, is a variety of indentation hardness tests applied to small volumes. Indentation is perhaps the most commonly applied means of testing the mechanical properties of materials. The nanoindentation technique was developed in the mid-1970s to measure the hardness of small volumes of material. Background In a traditional indentation test (macro or micro indentation), a hard tip whose mechanical properties are known (frequently made of a very hard material like diamond) is pressed into a sample whose properties are unknown. The load placed on the indenter tip is increased as the tip penetrates further into the specimen and soon reaches a user-defined value. At this point, the load may be held constant for a period or removed. The area of the residual indentation in the sample is measured and the hardness, H, is defined as the maximum load, P_max, divided by the residual indentation area, A_r: H = P_max / A_r. For most techniques, the projected area may be measured directly using light microscopy. As can be seen from this equation, a given load will make a smaller indent in a "hard" material than a "soft" one. This technique is limited due to large and varied tip shapes, with indenter rigs which do not have very good spatial resolution (the location of the area to be indented is very hard to specify accurately). Comparison across experiments, typically done in different laboratories, is difficult and often meaningless. Nanoindentation improves on these macro- and micro-indentation tests by indenting on the nanoscale with a very precise tip shape, high spatial resolutions to place the indents, and by providing real-time load-displacement (into the surface) data while the indentation is in progress. In nanoindentation small loads and tip sizes are used, so the indentation area may only be a few square micrometres or even nanometres. This presents problems in determining the hardness, as the contact area is not easily found. Atomic force microscopy or scanning electron microscopy techniques may be utilized to image the indentation, but can be quite cumbersome. Instead, an indenter with a geometry known to high precision (usually a Berkovich tip, which has a three-sided pyramid geometry) is employed. During the course of the instrumented indentation process, a record of the depth of penetration is made, and then the area of the indent is determined using the known geometry of the indentation tip. While indenting, various parameters such as load and depth of penetration can be measured. A record of these values can be plotted on a graph to create a load-displacement curve (such as the one shown in Figure 1). These curves can be used to extract mechanical properties of the material. Young's modulus The slope of the curve, dP/dh, upon unloading is indicative of the stiffness S of the contact. This value generally includes a contribution from both the material being tested and the response of the test device itself. The stiffness of the contact can be used to calculate the reduced Young's modulus E_r: E_r = (1/β) · (√π / 2) · S / √A_p(h_c), where A_p(h_c) is the projected area of the indentation at the contact depth h_c, and β is a geometrical constant on the order of unity. A_p(h_c) is often approximated by a fitting polynomial as shown below for a Berkovich tip: A_p(h_c) = C_0 h_c² + C_1 h_c + C_2 h_c^(1/2) + C_3 h_c^(1/4) + C_4 h_c^(1/8) + C_5 h_c^(1/16), where C_0 for a Berkovich tip is 24.5 while for a cube corner (90°) tip it is 2.598.
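The relations above translate directly into a short calculation. The following Python sketch is not part of the original article: the load, depth and stiffness values are illustrative assumptions, the higher-order area-function coefficients are placeholder zeros, and only the ideal Berkovich constant C_0 = 24.5 and the stiffness-to-modulus formula come from the text.

import numpy as np

def berkovich_area(h_c, coeffs=(24.5, 0.0, 0.0)):
    """Projected contact area A_p(h_c) = C0*h_c**2 + C1*h_c + C2*h_c**0.5.

    C0 = 24.5 is the ideal Berkovich value; C1 and C2 would come from a tip
    calibration and are placeholder zeros here.
    """
    C0, C1, C2 = coeffs
    return C0 * h_c**2 + C1 * h_c + C2 * np.sqrt(h_c)

def reduced_modulus(S, A_p, beta=1.0):
    """Reduced modulus E_r = (1/beta) * (sqrt(pi)/2) * S / sqrt(A_p);
    beta is the geometrical constant of order unity mentioned above."""
    return np.sqrt(np.pi) / (2.0 * beta) * S / np.sqrt(A_p)

# Illustrative numbers: 100 nm contact depth, 50 kN/m unloading stiffness, 0.5 mN peak load
h_c, S, P_max = 100e-9, 5.0e4, 0.5e-3
A_p = berkovich_area(h_c)
E_r = reduced_modulus(S, A_p)
H = P_max / A_p  # hardness based on the projected contact area
print(f"A_p = {A_p:.2e} m^2, E_r = {E_r/1e9:.0f} GPa, H = {H/1e9:.1f} GPa")

With these assumed numbers the sketch returns a reduced modulus of roughly 90 GPa and a hardness of about 2 GPa, the kind of quick sanity check usually run before trusting a full analysis pipeline.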
The reduced modulus E_r is related to Young's modulus E_s of the test specimen through the following relationship from contact mechanics: 1/E_r = (1 − ν_i²)/E_i + (1 − ν_s²)/E_s. Here, the subscript i indicates a property of the indenter material and ν is Poisson's ratio. For a diamond indenter tip, E_i is 1140 GPa and ν_i is 0.07. Poisson's ratio of the specimen, ν_s, generally varies between 0 and 0.5 for most materials (though it can be negative) and is typically around 0.3. There are two different types of hardness that can be obtained from a nanoindenter: one is as in traditional macroindentation tests where one attains a single hardness value per experiment; the other is based on the hardness as the material is being indented, resulting in hardness as a function of depth. Hardness The hardness is given by the equation above, relating the maximum load to the indentation area. The area can be measured after the indentation by in-situ atomic force microscopy, or by 'after-the-event' optical (or electron) microscopy. An example indentation image, from which the area may be determined, is shown at right. Some nanoindenters use an area function based on the geometry of the tip, compensating for elastic load during the test. Use of this area function provides a method of gaining real-time nanohardness values from a load-displacement graph. However, there is some controversy over the use of area functions to estimate the residual areas versus direct measurement. An area function A_p(h_c) typically describes the projected area of an indent as a polynomial function of the indenter depth h_c. When too many coefficients are used, the function will begin to fit to the noise in the data, and inflection points will develop. If the curve can fit well with only two coefficients, this is the best. However, if many data points are used, sometimes all 6 coefficients will need to be used to get a good area function. Typically, 3 or 4 coefficients work well. Exclusive application of an area function in the absence of adequate knowledge of material response can lead to misinterpretation of resulting data. Cross-checking of areas microscopically is to be encouraged. Strain-rate sensitivity The strain-rate sensitivity of the flow stress, m, is defined as m = ∂ln σ / ∂ln ε̇, where σ is the flow stress and ε̇ is the strain rate produced under the indenter. For nanoindentation experiments which include a holding period at constant load (i.e. the flat, top area of the load-displacement curve), m can be determined from m = ∂ln H / ∂ln ε̇_p. The subscripts indicate these values are to be determined from the plastic components only. Activation volume Interpreted loosely as the volume swept out by dislocations during thermal activation, the activation volume is V* = k_B T · ∂ln ε̇ / ∂σ, where T is the temperature and k_B is Boltzmann's constant. From the definition of m, it is easy to see that V* = k_B T / (m σ). Hardware Sensors The construction of a depth-sensing indentation system is made possible by the inclusion of very sensitive displacement and load sensing systems. Load transducers must be capable of measuring forces in the micronewton range and displacement sensors are very frequently capable of sub-nanometer resolution. Environmental isolation is crucial to the operation of the instrument. Vibrations transmitted to the device, fluctuations in atmospheric temperature and pressure, and thermal fluctuations of the components during the course of an experiment can cause significant errors.
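As a companion to the contact-mechanics relation above, the following Python sketch (again an illustration, not part of the original text) inverts 1/E_r = (1 − ν_i²)/E_i + (1 − ν_s²)/E_s to recover the specimen modulus. The diamond-indenter values E_i = 1140 GPa and ν_i = 0.07 are those quoted above; the specimen Poisson's ratio of 0.3 is an assumed typical value.

def specimen_modulus(E_r, nu_s=0.3, E_i=1140e9, nu_i=0.07):
    """Invert 1/E_r = (1 - nu_i**2)/E_i + (1 - nu_s**2)/E_s for E_s.

    E_i and nu_i are the diamond-tip values given in the text; nu_s = 0.3 is
    an assumed, typical Poisson's ratio for the specimen.
    """
    inv_specimen_term = 1.0 / E_r - (1.0 - nu_i**2) / E_i
    return (1.0 - nu_s**2) / inv_specimen_term

# e.g. a measured reduced modulus of 89.5 GPa gives a specimen modulus near 88 GPa
print(specimen_modulus(89.5e9) / 1e9)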
Continuous stiffness measurement (CSM) Dynamic nanoindentation or continuous stiffness measurement (CSM, also offered commercially as CMX, dynamics...), introduced in 1989, is a significant improvement over the quasi-static mode described above. It consists of superimposing a very small, fast (> 40 Hz) oscillation onto the main loading signal and evaluating the magnitude of the resulting partial unloadings with a lock-in amplifier, so as to quasi-continuously determine the contact stiffness. This allows for the continuous evaluation of the hardness and Young's modulus of the material over the depth of the indentation, which is of great advantage with coatings and graded materials. The CSM method is also pivotal for the experimental determination of the local creep and strain-rate dependent mechanical properties of materials, as well as the local damping of visco-elastic materials. The harmonic amplitude of the oscillations is usually chosen around 2 nm (RMS), which is a trade-off value avoiding an underestimation of the stiffness due to the "dynamic unloading error" or the "plasticity error" during measurements on materials with an unusually high elastic-to-plastic ratio (E/H > 150), such as soft metals. Atomic Force Microscopy Nanoindentation studies with nanometer depth and sub-nanonewton force resolution are also possible using a standard AFM setup. The AFM allows for nanomechanical studies to be conducted alongside topographic analyses, without the use of dedicated instruments. Load-displacement curves can be collected similarly for a variety of materials - provided that they are softer than the AFM tip - and mechanical properties can be directly calculated from these curves. Conversely, some commercial nanoindentation systems offer the possibility to use a piezo-driven stage to image the topography of residual indents with the nanoindenter tip. Software Experimental software The indentation curves often have at least thousands of data points. The hardness and elastic modulus can quickly be calculated by using a programming language or a spreadsheet. Instrumented indentation testing machines come with software specifically designed to analyze the indentation data from their own machine. The Indentation Grapher (Dureza) software is able to import text data from several commercial machines or custom made equipment. Spreadsheet programs such as MS-Excel or OpenOffice Calc do not have the ability to fit the non-linear power law equation to indentation data. A linear fit can be done by offsetting the displacement so that the data pass through the origin, and then selecting the power law equation from the graphing options. The Martens hardness, HM, is simple to compute in software, even for a programmer with a minimal background. The software starts by searching for the maximum displacement, h_max, and the maximum load, P_max. The displacement is used to calculate the contact surface area, A_s, based on the indenter geometry. For a perfect Berkovich indenter the relationship is A_s ≈ 26.43 h_max². The indentation hardness, H_IT, is defined slightly differently: here, the hardness is related to the projected contact area A_p. As the indent size decreases the error caused by tip rounding increases. The tip wear can be accounted for within the software by using a simple polynomial function. As the indenter tip wears, this value will increase.
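The hardness definitions just described are easy to prototype. The Python sketch below is illustrative only: it assumes an ideal, unworn Berkovich geometry (surface area A_s ≈ 26.43·h_max² and projected area A_p = 24.5·h_c²) and made-up load and depth values; a real analysis would substitute the calibrated, wear-corrected area function discussed above.

def martens_hardness(P_max, h_max):
    """Martens hardness HM = P_max / A_s(h_max) for an ideal Berkovich tip,
    using the surface-area relation A_s ~ 26.43 * h_max**2."""
    return P_max / (26.43 * h_max**2)

def indentation_hardness(P_max, h_c):
    """Indentation hardness H_IT = P_max / A_p(h_c), with the ideal
    projected-area relation A_p = 24.5 * h_c**2."""
    return P_max / (24.5 * h_c**2)

# Illustrative values: 5 mN peak load, 300 nm maximum depth, 230 nm contact depth
print(martens_hardness(5e-3, 300e-9) / 1e9)      # ~2.1 GPa
print(indentation_hardness(5e-3, 230e-9) / 1e9)  # ~3.9 GPa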
The user enters these values based on direct measurements, such as SEM or AFM images of the indenter tip, or indirectly by using a material of known elastic modulus or an atomic force microscope (AFM) image of an indentation. Calculating the elastic modulus with software involves using software filtering techniques to separate the critical unloading data from the rest of the load-displacement data. The start and end points are usually found by using user-defined percentages. This user input increases the variability because of possible human error. It would be best if the entire calculation process were done automatically for more consistent results. A good nanoindentation machine prints out the load-unload curve data with labels for each of the segments, such as loading, top hold, unload, bottom hold, and reloading. If multiple cycles are used then each one should be labeled. However, most nanoindenters only give the raw data for the load-unload curves. An automatic software technique finds the sharp change from the top hold time to the beginning of the unloading. This can be found by doing a linear fit to the top hold time data. The unload data start when the load is 1.5 times the standard deviation less than the hold time load. The minimum data point is the end of the unloading data. The computer calculates the elastic modulus with this data according to the Oliver-Pharr method (nonlinear). The Doerner-Nix method is less complicated to program because it is a linear curve fit of the selected minimum to maximum data. However, it is limited because the calculated elastic modulus will decrease as more data points are used along the unloading curve. The Oliver-Pharr nonlinear method fits the unloading curve data to the power-law relation P = A(h − h_f)^m, where h is the depth variable, h_f is the final depth, and A and m are fitting constants and coefficients. The software must use a nonlinear convergence method to solve for the A, h_f and m that best fit the unloading data. The slope S = dP/dh is then calculated by differentiating this fit at the maximum displacement. An image of the indent can also be measured using software. The atomic force microscope (AFM) scans the indent. First the lowest point of the indentation is found. Then an array of section lines is made around this point, running from the indent center outward along the indent surface. Where a section line deviates by more than several standard deviations (>3σ) from the surface noise, an outline point is created. All of the outline points are then connected to build the entire indent outline. This outline will automatically include the pile-up contact area. For nanoindentation experiments performed with a conical indenter on a thin film deposited on a substrate or on a multilayer sample, the NIMS Matlab toolbox is useful for load-displacement curve analysis and calculation of the Young's modulus and hardness of the coating. In the case of pop-in, the PopIn Matlab toolbox is a solution for statistically analyzing the pop-in distribution and extracting the critical load or critical indentation depth just before pop-in. Finally, for indentation maps obtained following the grid indentation technique, the TriDiMap Matlab toolbox offers the possibility to plot 2D or 3D maps and to statistically analyze the distribution of the mechanical properties of each constituent, in the case of a heterogeneous material, by deconvolution of the probability density function. Computational software Molecular dynamics (MD) has been a very powerful technique for investigating nanoindentation at the atomic scale.
For instance, Alexey et al. employed MD to simulate the nanoindentation of a titanium crystal and observed a dependence of the deformation of the crystalline structure on the type of indenter, something that is very hard to obtain experimentally. Tao et al. performed MD simulations of nanoindentation on Cu/Ni nanotwinned multilayer films using a spherical indenter and investigated the effects of the hetero-twin interface and twin thickness on hardness. Recently, a review paper by Carlos et al. was published on atomistic studies of nanoindentation. This review covers different nanoindentation mechanisms and the effects of surface orientation, crystallography (fcc, bcc, hcp, etc.), and surface and bulk damage on plasticity. All of these MD-obtained results are very difficult to achieve experimentally due to the resolution limitations of structural characterization techniques. Among the various MD simulation packages, such as GROMACS, Xenoview, and Amber, LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator), which is developed by Sandia National Laboratories, is the most widely used for simulation. An interaction potential and an input file including information on atom IDs, coordinates, charges, the ensemble, the time step, etc. are fed to the simulator, and the run can then be executed. After a specified number of timesteps, information such as the energy, atomic trajectories, and structural information (such as the coordination number) can be output for further analysis, which makes it possible to investigate the nanoindentation mechanism at the atomic scale. Another interesting Matlab toolbox called STABiX has been developed to quantify slip transmission at grain boundaries by analyzing indentation experiments in bicrystals. Applications Nanoindentation is a robust technique for the determination of mechanical properties. By combining the application of low loads, measuring the resulting displacement, and determining the contact area between the tip of the indenter and the sample, a wide range of mechanical properties can be measured. The application that drove the innovation of the technique is testing thin film properties, for which conventional testing is not feasible. Conventional mechanical testing such as tensile testing or dynamic mechanical analysis (DMA) can only return the average property without any indication of variability across the sample. However, nanoindentation can be used for the determination of local properties of homogeneous as well as heterogeneous materials. The reduction in sample size requirements has allowed the technique to become broadly applied to products where the manufactured state does not present enough material for microhardness testing. Applications in this area include medical implants, consumer goods, and packaging. The technique is also used to test MEMS devices by utilizing the low loads and small-scale displacements the nanoindenter is capable of. Limitations Conventional nanoindentation methods for the calculation of the modulus of elasticity (based on the unloading curve) are limited to linear, isotropic materials. Pile up and sink in Problems associated with the "pile-up" or "sink-in" of the material on the edges of the indent during the indentation process remain under investigation. It is possible to measure the pile-up contact area using computerized image analysis of atomic force microscope (AFM) images of the indentations. This process also depends on the linear isotropic elastic recovery for the indent reconstruction.
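The AFM-based estimate of the pile-up contact area described above can be approximated with a simple thresholding pass over a leveled height image. The Python sketch below is a simplified variant of the outline procedure from the software section, not the article's exact algorithm; the image array, pixel size and 3-sigma noise criterion are assumptions for illustration.

import numpy as np

def contact_area_from_afm(height, pixel_size, n_sigma=3.0):
    """Estimate the projected contact area (including pile-up) of a residual
    indent from a leveled AFM height map.

    height:     2D array of heights in meters, undeformed surface near zero
    pixel_size: lateral size of one pixel in meters
    n_sigma:    pixels deviating by more than n_sigma times the surface noise
                (in either direction, so pile-up is included) are counted
    """
    # surface noise estimated from the image border, assumed to be undeformed
    border = np.concatenate([height[0, :], height[-1, :], height[:, 0], height[:, -1]])
    noise = border.std()
    indent_mask = np.abs(height) > n_sigma * noise
    # a fuller implementation would keep only the connected region around the
    # indent (e.g. with scipy.ndimage.label) so far-field roughness is excluded
    return indent_mask.sum() * pixel_size**2

# usage (hypothetical): a 512 x 512 scan covering 1 micrometre
# area = contact_area_from_afm(height_map, pixel_size=1e-6 / 512)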
Nanoindentation on soft materials Nanoindentation of soft materials has intrinsic challenges due to adhesion, surface detection and the tip dependency of results. There is ongoing research to overcome such problems. Two critical issues need to be considered when attempting nanoindentation measurements on soft materials: stiffness and viscoelasticity. The first is the requirement that in any force-displacement measurement platform the stiffness of the machine, k_i, must approximately match the stiffness of the sample, k_s, at least in order of magnitude. If k_i is too high, then the indenter probe will simply run through the sample without being able to measure the force. On the other hand, if k_i is too low, then the probe simply will not indent into the sample, and no reading of the probe displacement can be made. For samples that are very soft, the first of these two possibilities is likely. The stiffness of a sample is given by k_s ≈ a × E_s, where a is the size of the contact region between the indenter and the sample, and E_s is the sample's elastic modulus. Typical atomic-force microscopy (AFM) cantilevers have k_i in the range 0.05 to 50 N/m, and a probe size a in the range ~10 nm to 1 μm. Commercial nanoindenters are also similar. Therefore, if k_i ≈ k_s, then a typical AFM cantilever-tip or a commercial nanoindenter can only measure E_s in the ~kPa to GPa range. This range is wide enough to cover most synthetic materials including polymers, metals and ceramics, as well as a large variety of biological materials including tissues and adherent cells. However, there may be softer materials with moduli in the Pa range, such as floating cells, and these cannot be measured by an AFM or a commercial nanoindenter. To measure in the Pa range, “pico-indentation” using an optical tweezers system is suitable. Here, a laser beam is used to trap a translucent bead which is then brought into contact with the soft sample so as to indent it. The trap stiffness k_i depends on the laser power and bead material, and a typical value is ~50 pN/μm. The probe size a can be a micron or so. Then the optical trap can measure E_s (≈ k_i/a) in the Pa range. The second issue concerning soft samples is their viscoelasticity. Methods to handle viscoelasticity include the following. In the classical treatment of viscoelasticity, the load-displacement (P-h) response measured from the sample is fitted to predictions from an assumed constitutive model (e.g. the Maxwell model) of the material comprising spring and dashpot elements. Such an approach can be very time consuming, and cannot in general prove the assumed constitutive law in an unambiguous manner. Dynamic indentation with an oscillatory load can be performed, and the viscoelastic behavior of the sample is presented in terms of the resultant storage and loss moduli, often as variations over the load frequency. However, the storage and loss moduli obtained this way are not intrinsic material constants, but depend on the oscillation frequency and the indenter probe geometry. A rate-jump method can be used to return an intrinsic elastic modulus of the sample that is independent of the test conditions. In this method, a constitutive law comprising any network of (in general) non-linear dashpots and linear elastic springs is assumed to hold within a very short time window about the time instant t_c at which a sudden step change in the loading rate is applied to the sample.
Since the dashpots are described by relations of the form ε̇_ij = ε̇_ij(σ_kl), but the stress σ_kl is continuous across the step change Δσ̇_kl in the stress-rate field σ̇_kl at t_c, there will not be any corresponding change in the strain-rate field ε̇_ij across the dashpots. However, because the linear elastic springs are described by relations of the form ε_ij = S_ijkl σ_kl, where S_ijkl are elastic compliances, a step change across the springs will result according to Δε̇_ij = S_ijkl Δσ̇_kl. The last equation indicates that the fields Δσ̇_kl and Δε̇_ij can be solved as a linear elastic problem with the elastic spring elements in the original viscoelastic network model while the dashpot elements are ignored. The solution for a given test geometry is a linear relation between the step changes in the load and displacement rates at t_c, and the linking proportionality constant is a lumped value of the elastic constants in the original viscoelastic model. Fitting such a relation to experimental results allows this lumped value to be measured as an intrinsic elastic modulus of the material. Specific equations from this rate-jump method have been developed for specific test platforms. For example, in depth-sensing nanoindentation, the elastic modulus and hardness are evaluated at the onset of an unloading stage following a load-hold stage. Such an onset point for unloading is a rate-jump point, and solving the equation Δε̇_ij = S_ijkl Δσ̇_kl across this leads to the Tang-Ngan method of viscoelastic correction 1/S_e = 1/S − ḣ_h/Ṗ_u, where S = dP/dh is the apparent tip-sample contact stiffness at the onset of unload, ḣ_h is the displacement rate just before the unload, Ṗ_u is the unloading rate, and S_e is the true (i.e. viscosity-corrected) tip-sample contact stiffness, which is related to the reduced modulus E_r and the tip-sample contact size a by the Sneddon relation S_e = 2 E_r a. The contact size a can be estimated from a pre-calibrated shape function a = f(h_c) of the tip, where the contact depth h_c is obtainable using the Oliver-Pharr relation with the apparent contact stiffness S replaced by the true stiffness S_e: h_c = h_max − ε P_max/S_e, where ε is a factor depending on the tip (say, 0.72 for a Berkovich tip). Tip dependence While nanoindentation testing can be relatively simple, the interpretation of the results is challenging. One of the main challenges is the use of a proper tip for the application and the proper interpretation of the results. For instance, it has been shown that the elastic modulus can be tip dependent. References Further reading Hardness tests
2000141
https://en.wikipedia.org/wiki/Linux%20%28disambiguation%29
Linux (disambiguation)
Linux is a family of computer operating systems based on the Linux kernel. Linux may also refer to: 9885 Linux, an asteroid Linux distribution, an operating system made as a collection of software based on the Linux kernel Linux kernel, an operating system kernel See also List of Linux distributions
5166
https://en.wikipedia.org/wiki/Copenhagen
Copenhagen
Copenhagen is the capital and most populous city of Denmark. As of 1 January 2022, the city had a population of 805,402 (644,431 in Copenhagen Municipality, 103,608 in Frederiksberg Municipality, 42,723 in Tårnby Municipality, and 14,640 in Dragør Municipality). It forms the core of the wider urban area of Copenhagen (population 1,336,982) and the Copenhagen metropolitan area (population 2,057,142). Copenhagen is situated on the eastern coast of the island of Zealand; another portion of the city is located on Amager, and it is separated from Malmö, Sweden, by the strait of Øresund. The Øresund Bridge connects the two cities by rail and road. Originally a Viking fishing village established in the 10th century in the vicinity of what is now Gammel Strand, Copenhagen became the capital of Denmark in the early 15th century. Beginning in the 17th century, it consolidated its position as a regional centre of power with its institutions, defences, and armed forces. During the Renaissance the city served as the de facto capital of the Kalmar Union, being the seat of the monarchy that governed the entire present-day Nordic region in a personal union with Sweden and Norway, with the Danish monarch serving as head of state. The city flourished as the cultural and economic centre of Scandinavia under the union for well over 120 years, from the 15th century until the beginning of the 16th century, when the union was dissolved after Sweden left it through a rebellion. After a plague outbreak and fire in the 18th century, the city underwent a period of redevelopment. This included construction of the prestigious district of Frederiksstaden and the founding of such cultural institutions as the Royal Theatre and the Royal Academy of Fine Arts. After further disasters in the early 19th century, when Horatio Nelson attacked the Dano-Norwegian fleet and bombarded the city, rebuilding during the Danish Golden Age brought a Neoclassical look to Copenhagen's architecture. Later, following the Second World War, the Finger Plan fostered the development of housing and businesses along the five urban railway routes stretching out from the city centre. Since the turn of the 21st century, Copenhagen has seen strong urban and cultural development, facilitated by investment in its institutions and infrastructure. The city is the cultural, economic and governmental centre of Denmark; it is one of the major financial centres of Northern Europe with the Copenhagen Stock Exchange. Copenhagen's economy has seen rapid developments in the service sector, especially through initiatives in information technology, pharmaceuticals and clean technology. Since the completion of the Øresund Bridge, Copenhagen has become increasingly integrated with the Swedish province of Scania and its largest city, Malmö, forming the Øresund Region. With a number of bridges connecting the various districts, the cityscape is characterised by parks, promenades, and waterfronts. Copenhagen's landmarks such as Tivoli Gardens, The Little Mermaid statue, the Amalienborg and Christiansborg palaces, Rosenborg Castle, Frederik's Church, Børsen and many museums, restaurants and nightclubs are significant tourist attractions. Copenhagen is home to the University of Copenhagen, the Technical University of Denmark, Copenhagen Business School and the IT University of Copenhagen. The University of Copenhagen, founded in 1479, is the oldest university in Denmark. Copenhagen is also home to the football club F.C. Copenhagen.
The annual Copenhagen Marathon was established in 1980. Copenhagen is one of the most bicycle-friendly cities in the world. Movia is the public mass transit company serving all of eastern Denmark, except Bornholm. The Copenhagen Metro, launched in 2002, serves central Copenhagen. Additionally, the Copenhagen S-train, the Lokaltog (private railway), and the Coast Line network serve and connect central Copenhagen to outlying boroughs. Serving roughly 2.5 million passengers a month, Copenhagen Airport, Kastrup, is the busiest airport in the Nordic countries. Etymology Copenhagen's name ( in Danish), reflects its origin as a harbour and a place of commerce. The original designation in Old Norse, from which Danish descends, was (cf. modern Icelandic: , Faroese ), meaning 'merchants' harbour'. By the time Old Danish was spoken, the capital was called , with the current name deriving from centuries of subsequent regular sound change. An exact English equivalent would be "chapman's haven". The English chapman, German , Dutch , Swedish , Danish , and Icelandic share a derivation from Latin , meaning 'tradesman'. However, the English term for the city was adapted from its Low German name, . Copenhagen's Swedish name is , a direct translation of the mutually intelligible Danish name. History Early history Although the earliest historical records of Copenhagen are from the end of the 12th century, recent archaeological finds in connection with work on the city's metropolitan rail system revealed the remains of a large merchant's mansion near today's Kongens Nytorv from c. 1020. Excavations in Pilestræde have also led to the discovery of a well from the late 12th century. The remains of an ancient church, with graves dating to the 11th century, have been unearthed near where Strøget meets Rådhuspladsen. These finds indicate that Copenhagen's origins as a city go back at least to the 11th century. Substantial discoveries of flint tools in the area provide evidence of human settlements dating to the Stone Age. Many historians believe the town dates to the late Viking Age, and was possibly founded by Sweyn I Forkbeard. The natural harbour and good herring stocks seem to have attracted fishermen and merchants to the area on a seasonal basis from the 11th century and more permanently in the 13th century. The first habitations were probably centred on Gammel Strand (literally 'old shore') in the 11th century or even earlier. The earliest written mention of the town was in the 12th century when Saxo Grammaticus in Gesta Danorum referred to it as , meaning 'Merchants' Harbour' or, in the Danish of the time, . Traditionally, Copenhagen's founding has been dated to Bishop Absalon's construction of a modest fortress on the little island of Slotsholmen in 1167 where Christiansborg Palace stands today. The construction of the fortress was in response to attacks by Wendish pirates who plagued the coastline during the 12th century. Defensive ramparts and moats were completed and by 1177 St. Clemens Church had been built. Attacks by the Wends continued, and after the original fortress was eventually destroyed by the marauders, islanders replaced it with Copenhagen Castle. Middle Ages In 1186, a letter from Pope Urban III states that the castle of Hafn (Copenhagen) and its surrounding lands, including the town of Hafn, were given to Absalon, Bishop of Roskilde 1158–1191 and Archbishop of Lund 1177–1201, by King Valdemar I. On Absalon's death, the property was to come into the ownership of the Bishopric of Roskilde. 
Around 1200, the Church of Our Lady was constructed on higher ground to the northeast of the town, which began to develop around it. As the town became more prominent, it was repeatedly attacked by the Hanseatic League, and in 1368 successfully invaded during the Second Danish-Hanseatic War. As the fishing industry thrived in Copenhagen, particularly in the trade of herring, the city began expanding to the north of Slotsholmen. In 1254, it received a charter as a city under Bishop Jakob Erlandsen who garnered support from the local fishing merchants against the king by granting them special privileges. In the mid 1330s, the first land assessment of the city was published. With the establishment of the Kalmar Union (1397–1523) between Denmark, Norway and Sweden, by about 1416 Copenhagen had emerged as the capital of Denmark when Eric of Pomerania moved his seat to Copenhagen Castle. The University of Copenhagen was inaugurated on 1 June 1479 by King Christian I, following approval from Pope Sixtus IV. This makes it the oldest university in Denmark and one of the oldest in Europe. Originally controlled by the Catholic Church, the university's role in society was forced to change during the Reformation in Denmark in the late 1530s. 16th and 17th centuries In disputes prior to the Reformation of 1536, the city which had been faithful to Christian II, who was Catholic, was successfully besieged in 1523 by the forces of Frederik I, who supported Lutheranism. Copenhagen's defences were reinforced with a series of towers along the city wall. After an extended siege from July 1535 to July 1536, during which the city supported Christian II's alliance with Malmö and Lübeck, it was finally forced to capitulate to Christian III. During the second half of the century, the city prospered from increased trade across the Baltic supported by Dutch shipping. Christoffer Valkendorff, a high-ranking statesman, defended the city's interests and contributed to its development. The Netherlands had also become primarily Protestant, as were northern German states. During the reign of Christian IV between 1588 and 1648, Copenhagen had dramatic growth as a city. On his initiative at the beginning of the 17th century, two important buildings were completed on Slotsholmen: the Tøjhus Arsenal and Børsen, the stock exchange. To foster international trade, the East India Company was founded in 1616. To the east of the city, inspired by Dutch planning, the king developed the district of Christianshavn with canals and ramparts. It was initially intended to be a fortified trading centre but ultimately became part of Copenhagen. Christian IV also sponsored an array of ambitious building projects including Rosenborg Slot and the Rundetårn. In 1658–1659, the city withstood a siege by the Swedes under Charles X and successfully repelled a major assault. By 1661, Copenhagen had asserted its position as capital of Denmark and Norway. All the major institutions were located there, as was the fleet and most of the army. The defences were further enhanced with the completion of the Citadel in 1664 and the extension of Christianshavns Vold with its bastions in 1692, leading to the creation of a new base for the fleet at Nyholm. 18th century Copenhagen lost around 22,000 of its population of 65,000 to the plague in 1711. The city was also struck by two major fires that destroyed much of its infrastructure. The Copenhagen Fire of 1728 was the largest in the history of Copenhagen. 
It began on the evening of 20 October, and continued to burn until the morning of 23 October, destroying approximately 28% of the city, leaving some 20% of the population homeless. No less than 47% of the medieval section of the city was completely lost. Along with the 1795 fire, it is the main reason that few traces of the old town can be found in the modern city. A substantial amount of rebuilding followed. In 1733, work began on the royal residence of Christiansborg Palace which was completed in 1745. In 1749, development of the prestigious district of Frederiksstaden was initiated. Designed by Nicolai Eigtved in the Rococo style, its centre contained the mansions which now form Amalienborg Palace. Major extensions to the naval base of Holmen were undertaken while the city's cultural importance was enhanced with the Royal Theatre and the Royal Academy of Fine Arts. In the second half of the 18th century, Copenhagen benefited from Denmark's neutrality during the wars between Europe's main powers, allowing it to play an important role in trade between the states around the Baltic Sea. After Christiansborg was destroyed by fire in 1794 and another fire caused serious damage to the city in 1795, work began on the classical Copenhagen landmark of Højbro Plads while Nytorv and Gammel Torv were converged. 19th century On 2 April 1801, a British fleet under the command of Admiral Sir Hyde Parker attacked and defeated the neutral Danish-Norwegian fleet anchored near Copenhagen. Vice-Admiral Horatio Nelson led the main attack. He famously disobeyed Parker's order to withdraw, destroying many of the Dano-Norwegian ships before a truce was agreed. Copenhagen is often considered to be Nelson's hardest-fought battle, surpassing even the heavy fighting at Trafalgar. It was during this battle that Lord Nelson was said to have "put the telescope to the blind eye" in order not to see Admiral Parker's signal to cease fire. The Second Battle of Copenhagen (or the Bombardment of Copenhagen) (16 August – 5 September 1807) was from a British point of view a preemptive attack on Copenhagen, targeting the civilian population to yet again seize the Dano-Norwegian fleet. But from a Danish point of view, the battle was a terror bombardment on their capital. Particularly notable was the use of incendiary Congreve rockets (containing phosphorus, which cannot be extinguished with water) that randomly hit the city. Few houses with straw roofs remained after the bombardment. The largest church, Vor frue kirke, was destroyed by the sea artillery. Several historians consider this battle the first terror attack against a major European city in modern times. The British landed 30,000 men, they surrounded Copenhagen and the attack continued for the next three days, killing some 2,000 civilians and destroying most of the city. The devastation was so great because Copenhagen relied on an old defence-line whose limited range could not reach the British ships and their longer-range artillery. Despite the disasters of the early 19th century, Copenhagen experienced a period of intense cultural creativity known as the Danish Golden Age. Painting prospered under C.W. Eckersberg and his students while C.F. Hansen and Gottlieb Bindesbøll brought a Neoclassical look to the city's architecture. In the early 1850s, the ramparts of the city were opened to allow new housing to be built around The Lakes () that bordered the old defences to the west. 
By the 1880s, the districts of Nørrebro and Vesterbro developed to accommodate those who came from the provinces to participate in the city's industrialization. This dramatic increase of space was long overdue, as not only were the old ramparts out of date as a defence system but bad sanitation in the old city had to be overcome. From 1886, the west rampart (Vestvolden) was flattened, allowing major extensions to the harbour leading to the establishment of the Freeport of Copenhagen 1892–94. Electricity came in 1892 with electric trams in 1897. The spread of housing to areas outside the old ramparts brought about a huge increase in the population. In 1840, Copenhagen was inhabited by approximately 120,000 people. By 1901, it had some 400,000 inhabitants. 20th century By the beginning of the 20th century, Copenhagen had become a thriving industrial and administrative city. With its new city hall and railway station, its centre was drawn towards the west. New housing developments grew up in Brønshøj and Valby while Frederiksberg became an enclave within the city of Copenhagen. The northern part of Amager and Valby were also incorporated into the City of Copenhagen in 1901–02. As a result of Denmark's neutrality in the First World War, Copenhagen prospered from trade with both Britain and Germany while the city's defences were kept fully manned by some 40,000 soldiers for the duration of the war. In the 1920s there were serious shortages of goods and housing. Plans were drawn up to demolish the old part of Christianshavn and to get rid of the worst of the city's slum areas. However, it was not until the 1930s that substantial housing developments ensued, with the demolition of one side of Christianhavn's Torvegade to build five large blocks of flats. World War II In Denmark during World War II, Copenhagen was occupied by German troops along with the rest of the country from 9 April 1940 until 4 May 1945. German leader Adolf Hitler hoped that Denmark would be "a model protectorate" and initially the Nazi authorities sought to arrive at an understanding with the Danish government. The 1943 Danish parliamentary election was also allowed to take place, with only the Communist Party excluded. But in August 1943, after the government's collaboration with the occupation forces collapsed, several ships were sunk in Copenhagen Harbor by the Royal Danish Navy to prevent their use by the Germans. Around that time the Nazis started to arrest Jews, although most managed to escape to Sweden. In 1945 Ole Lippman, leader of the Danish section of the Special Operations Executive, invited the British Royal Air Force to assist their operations by attacking Nazi headquarters in Copenhagen. Accordingly, air vice-marshal Sir Basil Embry drew up plans for a spectacular precision attack on the Sicherheitsdienst and Gestapo building, the former offices of the Shell Oil Company. Political prisoners were kept in the attic to prevent an air raid, so the RAF had to bomb the lower levels of the building. The attack, known as "Operation Carthage", came on 22 March 1945, in three small waves. In the first wave, all six planes (carrying one bomb each) hit their target, but one of the aircraft crashed near Frederiksberg Girls School. Because of this crash, four of the planes in the two following waves assumed the school was the military target and aimed their bombs at the school, leading to the death of 123 civilians (of which 87 were schoolchildren). 
However, 18 of the 26 political prisoners in the Shell Building managed to escape while the Gestapo archives were completely destroyed. On 8 May 1945 Copenhagen was officially liberated by British troops commanded by Field Marshal Bernard Montgomery who supervised the surrender of 30,000 Germans situated around the capital. Post-war decades Shortly after the end of the war, an innovative urban development project known as the Finger Plan was introduced in 1947, encouraging the creation of new housing and businesses interspersed with large green areas along five "fingers" stretching out from the city centre along the S-train routes. With the expansion of the welfare state and women entering the work force, schools, nurseries, sports facilities and hospitals were established across the city. As a result of student unrest in the late 1960s, the former Bådsmandsstræde Barracks in Christianshavn was occupied, leading to the establishment of Freetown Christiania in September 1971. Motor traffic in the city grew significantly and in 1972 the trams were replaced by buses. From the 1960s, on the initiative of the young architect Jan Gehl, pedestrian streets and cycle tracks were created in the city centre. Activity in the port of Copenhagen declined with the closure of the Holmen Naval Base. Copenhagen Airport underwent considerable expansion, becoming a hub for the Nordic countries. In the 1990s, large-scale housing developments were realized in the harbour area and in the west of Amager. The national library's Black Diamond building on the waterfront was completed in 1999. Gallery 21st century Since the summer of 2000, Copenhagen and the Swedish city of Malmö have been connected by the Øresund Bridge, which carries rail and road traffic. As a result, Copenhagen has become the centre of a larger metropolitan area spanning both nations. The bridge has brought about considerable changes in the public transport system and has led to the extensive redevelopment of Amager. The city's service and trade sectors have developed while a number of banking and financial institutions have been established. Educational institutions have also gained importance, especially the University of Copenhagen with its 35,000 students. Another important development for the city has been the Copenhagen Metro, the railway system which opened in 2002 with additions until 2007, transporting some 54 million passengers by 2011. On the cultural front, the Copenhagen Opera House, a gift to the city from the shipping magnate Mærsk Mc-Kinney Møller on behalf of the A.P. Møller foundation, was completed in 2004. In December 2009 Copenhagen gained international prominence when it hosted the worldwide climate meeting COP15. Geography Copenhagen is part of the Øresund Region, which consists of Zealand, Lolland-Falster and Bornholm in Denmark and Scania in Sweden. It is located on the eastern shore of the island of Zealand, partly on the island of Amager and on a number of natural and artificial islets between the two. Copenhagen faces the Øresund to the east, the strait of water that separates Denmark from Sweden, and which connects the North Sea with the Baltic Sea. The Swedish towns of Malmö and Landskrona lie on the Swedish side of the sound directly across from Copenhagen. By road, Copenhagen is northwest of Malmö, Sweden, northeast of Næstved, northeast of Odense, east of Esbjerg and southeast of Aarhus by sea and road via Sjællands Odde. 
The city centre lies in the area originally defined by the old ramparts, which are still referred to as the Fortification Ring (Fæstningsringen) and kept as a partial green band around it. Then come the late-19th- and early-20th-century residential neighbourhoods of Østerbro, Nørrebro, Vesterbro and Amagerbro. The outlying areas of Kongens Enghave, Valby, Vigerslev, Vanløse, Brønshøj, Utterslev and Sundby followed from 1920 to 1960. They consist mainly of residential housing and apartments often enhanced with parks and greenery. Topography The central area of the city consists of relatively low-lying flat ground formed by moraines from the last ice age while the hilly areas to the north and west frequently rise to above sea level. The slopes of Valby and Brønshøj reach heights of over , divided by valleys running from the northeast to the southwest. Close to the centre are the Copenhagen lakes of Sortedams Sø, Peblinge Sø and Sankt Jørgens Sø. Copenhagen rests on a subsoil of flint-layered limestone deposited in the Danian period some 60 to 66 million years ago. Some greensand from the Selandian is also present. There are a few faults in the area, the most important of which is the Carlsberg fault which runs northwest to southeast through the centre of the city. During the last ice age, glaciers eroded the surface leaving a layer of moraines up to thick. Geologically, Copenhagen lies in the northern part of Denmark where the land is rising because of post-glacial rebound. Beaches Amager Strandpark, which opened in 2005, is a long artificial island, with a total of of beaches. It is located just 15 minutes by bicycle or a few minutes by metro from the city centre. In Klampenborg, about 10 kilometers from downtown Copenhagen, is Bellevue Beach. It is long and has both lifeguards and freshwater showers on the beach. The beaches are supplemented by a system of Harbour Baths along the Copenhagen waterfront. The first and most popular of these is located at Islands Brygge and has won international acclaim for its design. Climate Copenhagen is in the oceanic climate zone (Köppen: Cfb). Its weather is subject to low-pressure systems from the Atlantic which result in unstable conditions throughout the year. Apart from slightly higher rainfall from July to September, precipitation is moderate. While snowfall occurs mainly from late December to early March, there can also be rain, with average temperatures around the freezing point. June is the sunniest month of the year with an average of about eight hours of sunshine a day. July is the warmest month with an average daytime high of 21 °C. By contrast, the average hours of sunshine are less than two per day in November and only one and a half per day from December to February. In the spring, it gets warmer again with four to six hours of sunshine per day from March to May. February is the driest month of the year. Exceptional weather conditions can bring as much as 50 cm of snow to Copenhagen in a 24-hour period during the winter months while summer temperatures have been known to rise to heights of . Because of Copenhagen's northern latitude, the number of daylight hours varies considerably between summer and winter. On the summer solstice, the sun rises at 04:26 and sets at 21:58, providing 17 hours 32 minutes of daylight. On the winter solstice, it rises at 08:37 and sets at 15:39 with 7 hours and 1 minute of daylight. 
There is therefore a difference of 10 hours and 31 minutes in the length of days and nights between the summer and winter solstices. Administration According to Statistics Denmark, the urban area of Copenhagen () consists of the municipalities of Copenhagen, Frederiksberg, Albertslund, Brøndby, Gentofte, Gladsaxe, Glostrup, Herlev, Hvidovre, Lyngby-Taarbæk, Rødovre, Tårnby and Vallensbæk as well as parts of Ballerup, Rudersdal and Furesø municipalities, along with the cities of Ishøj and Greve Strand. They are located in the Capital Region (). Municipalities are responsible for a wide variety of public services, which include land-use planning, environmental planning, public housing, management and maintenance of local roads, and social security. Municipal administration is also conducted by a mayor, a council, and an executive. Copenhagen Municipality is by far the largest municipality, with the historic city at its core. The seat of Copenhagen's municipal council is the Copenhagen City Hall (), which is situated on City Hall Square. The second largest municipality is Frederiksberg, an enclave within Copenhagen Municipality. Copenhagen Municipality is divided into ten districts (bydele): Indre By, Østerbro, Nørrebro, Vesterbro/Kongens Enghave, Valby, Vanløse, Brønshøj-Husum, Bispebjerg, Amager Øst, and Amager Vest. Neighbourhoods of Copenhagen include Slotsholmen, Frederiksstaden, Islands Brygge, Holmen, Christiania, Carlsberg, Sluseholmen, Sydhavn, Amagerbro, Ørestad, Nordhavnen, Bellahøj, Brønshøj, Ryparken, and Vigerslev. Law and order Most of Denmark's top legal courts and institutions are based in Copenhagen. A modern-style court of justice, Hof- og Stadsretten, was introduced in Denmark, specifically for Copenhagen, by Johann Friedrich Struensee in 1771. Now known as the City Court of Copenhagen (), it is the largest of the 24 city courts in Denmark with jurisdiction over the municipalities of Copenhagen, Dragør and Tårnby. With its 42 judges, it has a Probate Division, an Enforcement Division and a Registration and Notorial Acts Division while bankruptcy is handled by the Maritime and Commercial Court of Copenhagen. Established in 1862, the Maritime and Commercial Court () also hears commercial cases including those relating to trade marks, marketing practices and competition for the whole of Denmark. Denmark's Supreme Court (), located in Christiansborg Palace on Prins Jørgens Gård in the centre of Copenhagen, is the country's final court of appeal. Handling civil and criminal cases from the subordinate courts, it has two chambers which each hear all types of cases. The Danish National Police and Copenhagen Police headquarters is situated in the Neoclassical-inspired Politigården building built in 1918–1924 under architects Hack Kampmann and Holger Alfred Jacobsen. The building also contains administration, management, emergency department and radio service offices. In their efforts to deal with drugs, the police have noted considerable success in the two special drug consumption rooms opened by the city where addicts can use sterile needles and receive help from nurses if necessary. Use of these rooms does not lead to prosecution; the city treats drug use as a public health issue, not a criminal one. The Copenhagen Fire Department forms the largest municipal fire brigade in Denmark with some 500 fire and ambulance personnel, 150 administration and service workers, and 35 workers in prevention. 
The brigade began as the Copenhagen Royal Fire Brigade on 9 July 1687 under King Christian V. After the passing of the Copenhagen Fire Act on 18 May 1868, the Copenhagen Fire Brigade became a municipal institution in its own right on 1 August 1870. The fire department has its headquarters in the Copenhagen Central Fire Station, which was designed by Ludvig Fenger in the Historicist style and inaugurated in 1892. Environmental planning Copenhagen is recognized as one of the most environmentally friendly cities in the world. As a result of its commitment to high environmental standards, Copenhagen has been praised for its green economy, ranked as the top green city for the second time in the 2014 Global Green Economy Index (GGEI). In 2001 a large offshore wind farm was built just off the coast of Copenhagen at Middelgrunden. It produces about 4% of the city's energy. Years of substantial investment in sewage treatment have improved water quality in the harbour to an extent that the inner harbour can be used for swimming, with facilities at a number of locations. Copenhagen aims to be carbon-neutral by 2025. Commercial and residential buildings are to reduce electricity consumption by 20 percent and 10 percent respectively, and total heat consumption is to fall by 20 percent by 2025. Renewable energy features such as solar panels are becoming increasingly common in the newest buildings in Copenhagen. District heating will be carbon-neutral by 2025, through waste incineration and biomass. New buildings must now be constructed according to Low Energy Class ratings and, from 2020, as near net-zero energy buildings. By 2025, 75% of trips should be made on foot, by bike, or by using public transit. The city plans that 20–30% of cars will run on electricity or biofuel by 2025. The investment is estimated at $472 million in public funds and $4.78 billion in private funds. The city's urban planning authorities continue to take full account of these priorities. Special attention is given both to climate issues and to efforts to ensure maximum application of low-energy standards. Priorities include sustainable drainage systems, recycling rainwater, green roofs and efficient waste management solutions. In city planning, streets and squares are to be designed to encourage cycling and walking rather than driving. Further, the city administration is working with smart city initiatives to improve how data and technology can be used to implement new solutions that support the transition toward a carbon-neutral economy. These solutions support operations run by the city administration, improving, for example, public health, district heating, urban mobility and waste management systems. Smart city operations in Copenhagen are maintained by Copenhagen Solutions Lab, the city's official smart-city development unit under the Technical and Environmental Administration. Demographics and society Copenhagen is the most populous city in Denmark and one of the most populous in the Nordic countries. For statistical purposes, Statistics Denmark considers the City of Copenhagen to consist of the Municipality of Copenhagen plus three adjacent municipalities: Dragør, Frederiksberg, and Tårnby. Their combined population stands at 763,908. The Municipality of Copenhagen is by far the most populous in the country and one of the most populous Nordic municipalities, with 601,448 inhabitants. There was a demographic boom in the 1990s and the first decades of the 21st century, largely due to immigration to Denmark.
According to figures from the first quarter of 2016, approximately 76% of the municipality's population was of Danish descent, defined as having at least one parent who was born in Denmark and has Danish citizenship. The remaining 24% were of a foreign background, defined as immigrants (18%) or descendants of recent immigrants (6%). There are no official statistics on ethnic groups. According to Statistics Denmark, Copenhagen's urban area has a larger population of 1,280,371. The urban area consists of the municipalities of Copenhagen and Frederiksberg plus 16 of the 20 municipalities of the former Copenhagen and Roskilde counties, though five of them only partially. Metropolitan Copenhagen has a total of 2,016,285 inhabitants. The area of Metropolitan Copenhagen is defined by the Finger Plan. Since the opening of the Øresund Bridge in 2000, commuting between Zealand and Scania in Sweden has increased rapidly, leading to a wider, integrated area. Known as the Øresund Region, it has 4.1 million inhabitants (of whom 2.7 million (August 2021) live in the Danish part of the region). Religion According to 2019 figures, a majority (56.9%) of those living in Copenhagen are members of the Lutheran Church of Denmark, a share 0.6 percentage points lower than one year earlier. The National Cathedral, the Church of Our Lady, is one of the dozens of churches in Copenhagen. There are also several other Christian communities in the city, of which the largest is Roman Catholic. Foreign migration to Copenhagen, rising over the last three decades, has contributed to increasing religious diversity; the Grand Mosque of Copenhagen, the first in Denmark, opened in 2014. Islam is the second largest religion in Copenhagen, accounting for approximately 10% of the population. While there are no official statistics, a significant portion of the estimated 175,000–200,000 Muslims in the country live in the Copenhagen urban area, with the highest concentration in Nørrebro and the Vestegnen. There are also some 7,000 Jews in Denmark, most of them in the Copenhagen area where there are several synagogues. There is a long history of Jews in the city, and the first synagogue in Copenhagen was built in 1684. Today, the history of the Jews of Denmark can be explored at the Danish Jewish Museum in Copenhagen. Quality of living For a number of years, Copenhagen has ranked high in international surveys for its quality of life. Its stable economy together with its education services and level of social safety make it attractive for locals and visitors alike. Although it is one of the world's most expensive cities, it is also one of the most liveable with its public transport, facilities for cyclists and its environmental policies. In elevating Copenhagen to "most liveable city" in 2013, Monocle pointed to its open spaces, increasing activity on the streets, city planning in favour of cyclists and pedestrians, and features to encourage inhabitants to enjoy city life with an emphasis on community, culture and cuisine. Other sources have ranked Copenhagen high for its business environment, accessibility, restaurants and environmental planning. However, Copenhagen ranked only 39th for student friendliness in 2012. Despite a top score for quality of living, its scores were low for employer activity and affordability. Economy Copenhagen is the major economic and financial centre of Denmark. The city's economy is based largely on services and commerce.
Statistics for 2010 show that the vast majority of the 350,000 workers in Copenhagen are employed in the service sector, especially transport and communications, trade, and finance, while fewer than 10,000 work in the manufacturing industries. The public sector workforce is around 110,000, including education and healthcare. From 2006 to 2011, the economy grew by 2.5% in Copenhagen, while it fell by some 4% in the rest of Denmark. In 2017, the wider Capital Region of Denmark had a gross domestic product (GDP) of €120 billion, and the 15th largest GDP per capita of regions in the European Union. Several financial institutions and banks have headquarters in Copenhagen, including Alm. Brand, Danske Bank, Nykredit and Nordea Bank Danmark. The Copenhagen Stock Exchange (CSE) was founded in 1620 and is now owned by Nasdaq, Inc. Copenhagen is also home to a number of international companies including A.P. Møller-Mærsk, Novo Nordisk, Carlsberg and Novozymes. City authorities have encouraged the development of business clusters in several innovative sectors, which include information technology, biotechnology, pharmaceuticals, clean technology and smart city solutions. Life science is a key sector with extensive research and development activities. Medicon Valley is a leading bi-national life sciences cluster in Europe, spanning the Øresund Region. Copenhagen is rich in companies and institutions with a focus on research and development within the field of biotechnology, and the Medicon Valley initiative aims to strengthen this position and to promote cooperation between companies and academia. Many major Danish companies like Novo Nordisk and Lundbeck, both of which are among the 50 largest pharmaceutical and biotech companies in the world, are located in this business cluster. Shipping is another important sector, with Maersk, the world's largest shipping company, having its world headquarters in Copenhagen. The city has an industrial harbour, Copenhagen Port. Following decades of stagnation, it has experienced a resurgence since 1990 following a merger with Malmö harbour. Both ports are operated by Copenhagen Malmö Port (CMP). The central location in the Øresund Region allows the ports to act as a hub for freight that is transported onward to the Baltic countries. CMP annually receives about 8,000 ships and handled some 148,000 TEU in 2012. Copenhagen has some of the highest gross wages in the world. High taxes mean that wages are reduced after mandatory deductions. A beneficial researcher scheme with low taxation of foreign specialists has made Denmark an attractive location for foreign labour. Copenhagen is, however, also among the most expensive cities in Europe. Denmark's Flexicurity model features some of the most flexible hiring and firing legislation in Europe, providing attractive conditions for foreign investment and international companies looking to locate in Copenhagen. In Dansk Industri's 2013 survey of employment factors in the ninety-six municipalities of Denmark, Copenhagen came in first place for educational qualifications and for the development of private companies in recent years, but fell to 86th place in local companies' assessment of the employment climate. The survey revealed considerable dissatisfaction with the level of dialogue companies enjoyed with the municipal authorities. Tourism Tourism is a major contributor to Copenhagen's economy, attracting visitors due to the city's harbour, cultural attractions and award-winning restaurants.
Since 2009, Copenhagen has been one of the fastest growing metropolitan destinations in Europe. Hotel capacity in the city is growing significantly. From 2009 to 2013, it experienced a 42% growth in international bed nights (total number of nights spent by tourists), including a rise of nearly 70% for Chinese visitors. The total number of bed nights in the Capital Region surpassed 9 million in 2013, while international bed nights reached 5 million. In 2010, city break tourism was estimated to have contributed DKK 2 billion in turnover. However, 2010 was an exceptional year for city break tourism, and turnover increased by 29% in that one year. Some 680,000 cruise passengers visited the port in 2015. In 2019 Copenhagen was ranked first among Lonely Planet's top ten cities to visit. In October 2021, Copenhagen was shortlisted for the European Commission's 2022 European Capital of Smart Tourism award along with Bordeaux, Dublin, Florence, Ljubljana, La Palma de Mallorca and Valencia. Cityscape The city's appearance today is shaped by the key role it has played as a regional centre for centuries. Copenhagen has a multitude of districts, each with its distinctive character and representing its own period. Other distinctive features of Copenhagen include the abundance of water, its many parks, and the bicycle paths that line most streets. Architecture The oldest section of Copenhagen's inner city is often referred to as the medieval city. However, the city's most distinctive district is Frederiksstaden, developed during the reign of Frederick V. It has the Amalienborg Palace at its centre and is dominated by the dome of Frederik's Church (or the Marble Church) and several elegant 18th-century Rococo mansions. The inner city includes Slotsholmen, a little island on which Christiansborg Palace stands, and Christianshavn with its canals. Børsen on Slotsholmen and Frederiksborg Palace in Hillerød are prominent examples of the Dutch Renaissance style in Copenhagen. Around the historical city centre lies a band of congenial residential boroughs (Vesterbro, Inner Nørrebro, Inner Østerbro) dating mainly from the late 19th century. They were built outside the old ramparts when the city was finally allowed to expand beyond its fortifications. Sometimes referred to as "the City of Spires", Copenhagen is known for its horizontal skyline, broken only by the spires and towers of its churches and castles. Most characteristic of all is the Baroque spire of the Church of Our Saviour with its narrowing external spiral stairway that visitors can climb to the top. Other important spires are those of Christiansborg Palace, the City Hall and the former Church of St. Nikolaj that now houses a modern art venue. Not quite so high are the Renaissance spires of Rosenborg Castle and the "dragon spire" of Christian IV's former stock exchange, so named because it resembles the intertwined tails of four dragons. Copenhagen is recognised globally as an exemplar of best practice urban planning. Its thriving mixed-use city centre is defined by striking contemporary architecture, engaging public spaces and an abundance of human activity. These design outcomes have been deliberately achieved through careful replanning in the second half of the 20th century. Recent years have seen a boom in modern architecture in Copenhagen, both in Danish designs and in works by international architects.
For a few hundred years, virtually no foreign architects had worked in Copenhagen, but since the turn of the millennium the city and its immediate surroundings have seen buildings and projects designed by top international architects. The British design magazine Monocle named Copenhagen the world's best design city in 2008. Copenhagen's urban development in the first half of the 20th century was heavily influenced by industrialisation. After World War II, Copenhagen Municipality adopted Fordism and repurposed its medieval centre to facilitate private automobile infrastructure in response to innovations in transport, trade and communication. Copenhagen's spatial planning in this time frame was characterised by the separation of land uses: an approach which requires residents to travel by car to access facilities of different uses. The boom in urban development and modern architecture has brought some changes to the city's skyline. A political majority has decided to keep the historical centre free of high-rise buildings, but several areas will see or have already seen massive urban development. Ørestad has seen most of the recent development. Located near Copenhagen Airport, it currently boasts one of the largest malls in Scandinavia and a variety of office and residential buildings as well as the IT University and a high school. Parks, gardens and zoo Copenhagen is a green city with many parks, both large and small. King's Garden, the garden of Rosenborg Castle, is the oldest and most frequented of them all. It was Christian IV who first developed its landscaping in 1606. Every year it sees more than 2.5 million visitors, and in the summer months it is packed with sunbathers, picnickers and ballplayers. It serves as a sculpture garden with both a permanent display and temporary exhibits during the summer months. Also located in the city centre are the Botanical Gardens, noted for their large complex of 19th-century greenhouses donated by Carlsberg founder J. C. Jacobsen. Fælledparken is the largest park in Copenhagen. It is popular for sports fixtures and hosts several annual events including a free opera concert at the opening of the opera season, other open-air concerts, carnival and Labour Day celebrations, and the Copenhagen Historic Grand Prix, a race for antique cars. A historical green space in the northeastern part of the city is Kastellet, a well-preserved Renaissance citadel that now serves mainly as a park. Another popular park is the Frederiksberg Gardens, a 32-hectare romantic landscape park. It houses a colony of tame grey herons and other waterfowl. The park offers views of the adjacent Copenhagen Zoo's elephants and of the elephant house designed by the British architect Norman Foster. Langelinie, a park and promenade along the inner Øresund coast, is home to one of Copenhagen's most-visited tourist attractions, the Little Mermaid statue. In Copenhagen, many cemeteries double as parks, though only for the quieter activities such as sunbathing, reading and meditation. Assistens Cemetery, the burial place of Hans Christian Andersen, is an important green space for the district of Inner Nørrebro and a Copenhagen institution. The lesser-known Vestre Kirkegaard is the largest cemetery in Denmark and offers a maze of dense groves, open lawns, winding paths, hedges, overgrown tombs, monuments, tree-lined avenues, lakes and other garden features.
It is official municipal policy in Copenhagen that by 2015 all citizens must be able to reach a park or beach on foot in less than 15 minutes. In line with this policy, several new parks, including the innovative Superkilen in the Nørrebro district, have been completed or are under development in areas lacking green spaces. Landmarks by district Indre By The historic centre of the city, Indre By or the Inner City, features many of Copenhagen's most popular monuments and attractions. The area known as Frederiksstaden, developed by Frederik V in the second half of the 18th century in the Rococo style, has the four mansions of Amalienborg, the royal residence, and the wide-domed Marble Church at its centre. Directly across the water from Amalienborg, the 21st-century Copenhagen Opera House stands on the island of Holmen. To the south of Frederiksstaden, the Nyhavn canal is lined with colourful houses from the 17th and 18th centuries, many now with lively restaurants and bars. The canal runs from the harbour front to the spacious square of Kongens Nytorv, which was laid out by Christian V in 1670. Important buildings include Charlottenborg Palace, famous for its art exhibitions, the Thott Palace (now the French embassy), the Royal Danish Theatre and the Hotel D'Angleterre, dated to 1755. Other landmarks in Indre By include the parliament building of Christiansborg, the City Hall and Rundetårn, originally an observatory. There are also several museums in the area, including the Thorvaldsen Museum, dedicated to the Danish sculptor Bertel Thorvaldsen. Closed to traffic since 1964, Strøget, one of the world's oldest and longest pedestrian streets, runs from Rådhuspladsen to Kongens Nytorv. With its speciality shops, cafés, restaurants, and buskers, it is always full of life and includes the old squares of Gammel Torv and Amagertorv, each with a fountain. Rosenborg Castle on Øster Voldgade was built by Christian IV in 1606 as a summer residence in the Renaissance style. It houses the Danish crown jewels and crown regalia, the coronation throne and tapestries illustrating Christian V's victories in the Scanian War. Christianshavn Christianshavn lies to the southeast of Indre By on the other side of the harbour. The area was developed by Christian IV in the early 17th century. Impressed by the city of Amsterdam, he employed Dutch architects to create canals within its ramparts, which are still well preserved today. The canals themselves, branching off the central Christianshavn Canal and lined with houseboats and pleasure craft, are one of the area's attractions. Another interesting feature is Freetown Christiania, a fairly large area which was initially occupied by squatters during student unrest in 1971. Today it still maintains a measure of autonomy. The inhabitants openly sell drugs on "Pusher Street" as well as their arts and crafts. Other buildings of interest in Christianshavn include the Church of Our Saviour with its spiralling steeple and the magnificent Rococo Christian's Church. Once a warehouse, the North Atlantic House now displays culture from Iceland and Greenland and houses the Noma restaurant, known for its Nordic cuisine. Vesterbro Vesterbro, to the southwest of Indre By, begins with the Tivoli Gardens, the city's top tourist attraction with its fairground atmosphere, its Pantomime Theatre, its Concert Hall and its many rides and restaurants.
The Carlsberg neighbourhood has some interesting vestiges of the old brewery of the same name, including the Elephant Gate and the Ny Carlsberg Brewhouse. The Tycho Brahe Planetarium is located on the edge of Skt. Jørgens Sø, one of the Copenhagen lakes. Halmtorvet, the old hay market behind the Central Station, is an increasingly popular area with its cafés and restaurants. The former cattle market Øksnehallen has been converted into a modern exhibition centre for art and photography. The Radisson Blu Royal Hotel, built by Danish architect and designer Arne Jacobsen for the airline Scandinavian Airlines System (SAS) between 1956 and 1960, was once the tallest hotel in Denmark and the city's only skyscraper until 1969. Completed in 1908, Det Ny Teater (the New Theatre), located in a passage between Vesterbrogade and Gammel Kongevej, has become a popular venue for musicals since its reopening in 1994, attracting the largest audiences in the country. Nørrebro Nørrebro, to the northwest of the city centre, has recently developed from a working-class district into a colourful cosmopolitan area with antique shops, non-Danish food stores and restaurants. Much of the activity is centred on Sankt Hans Torv and around Rantzausgade. Copenhagen's historic cemetery, Assistens Kirkegård, halfway up Nørrebrogade, is the resting place of many famous figures including Søren Kierkegaard, Niels Bohr, and Hans Christian Andersen, but is also used by locals as a park and recreation area. Østerbro Just north of the city centre, Østerbro is an upper middle-class district with a number of fine mansions, some now serving as embassies. The district stretches from Nørrebro to the waterfront, where The Little Mermaid statue can be seen from the promenade known as Langelinie. Inspired by Hans Christian Andersen's fairy tale, it was created by Edvard Eriksen and unveiled in 1913. Not far from the Little Mermaid, the old Citadel (Kastellet) can be seen. Built by Christian IV, it is one of northern Europe's best preserved fortifications. There is also a windmill in the area. The large Gefion Fountain, designed by Anders Bundgaard and completed in 1908, stands close to the southeast corner of Kastellet. Its figures illustrate a Nordic legend. Frederiksberg Frederiksberg, a separate municipality within the urban area of Copenhagen, lies to the west of Nørrebro and Indre By and north of Vesterbro. Its landmarks include Copenhagen Zoo, founded in 1869 and home to over 250 species from all over the world, and Frederiksberg Palace, built as a summer residence by Frederick IV, who was inspired by Italian architecture. Now a military academy, it overlooks the extensive landscaped Frederiksberg Gardens with its follies, waterfalls, lakes and decorative buildings. The wide tree-lined avenue of Frederiksberg Allé, connecting Vesterbrogade with the Frederiksberg Gardens, has long been associated with theatres and entertainment. While a number of the earlier theatres are now closed, the Betty Nansen Theatre and Aveny-T are still active. Amagerbro Amagerbro (also known as Sønderbro) is the district located immediately south-east of Christianshavn at northernmost Amager. The old city moats and their surrounding parks constitute a clear border between these districts. The main street is Amagerbrogade which, after the harbour bridge Langebro, is an extension of H. C. Andersens Boulevard and has a variety of stores and shops as well as restaurants and pubs.
Amagerbro was built up during the first two decades of the twentieth century and is the city's northernmost block-built area, typically of 4–7 floors. Further south follow the Sundbyøster and Sundbyvester districts. Other districts Not far from Copenhagen Airport on the Kastrup coast, The Blue Planet, completed in March 2013, now houses the national aquarium. With its 53 aquariums, it is the largest facility of its kind in Scandinavia. Grundtvig's Church, located in the northern suburb of Bispebjerg, was designed by P.V. Jensen Klint and completed in 1940. A rare example of Expressionist church architecture, its striking west façade is reminiscent of a church organ. Culture Apart from being the national capital, Copenhagen also serves as the cultural hub of Denmark and wider Scandinavia. Since the late 1990s, it has undergone a transformation from a modest Scandinavian capital into a metropolitan city of international appeal in the same league as Barcelona and Amsterdam. This is a result of huge investments in infrastructure and culture as well as the work of successful new Danish architects, designers and chefs. Copenhagen Fashion Week, the largest fashion event in Northern Europe, takes place every year in February and August. Museums Copenhagen has a wide array of museums of international standing. The National Museum is Denmark's largest museum of archaeology and cultural history, comprising the histories of Danish and foreign cultures alike. Denmark's National Gallery is the national art museum with collections dating from the 12th century to the present. In addition to Danish painters, artists represented in the collections include Rubens, Rembrandt, Picasso, Braque, Léger, Matisse, Emil Nolde, Olafur Eliasson, Elmgreen and Dragset, Superflex and Jens Haaning. Another important Copenhagen art museum is the Ny Carlsberg Glyptotek, founded by second-generation Carlsberg philanthropist Carl Jacobsen and built around his personal collections. Its main focus is classical Egyptian, Roman and Greek sculptures and antiquities and a collection of Rodin sculptures, the largest outside France. Besides its sculpture collections, the museum also holds a comprehensive collection of paintings by Impressionist and Post-Impressionist painters such as Monet, Renoir, Cézanne, van Gogh and Toulouse-Lautrec, as well as works by the Danish Golden Age painters. The Louisiana Museum of Modern Art is situated on the coast just north of Copenhagen, in the middle of a sculpture garden on a cliff overlooking Øresund. Its collection of over 3,000 items includes works by Picasso, Giacometti and Dubuffet. The Danish Design Museum is housed in the 18th-century former Frederiks Hospital and displays Danish design as well as international design and crafts. Other museums include: the Thorvaldsens Museum, dedicated to the oeuvre of romantic Danish sculptor Bertel Thorvaldsen, who lived and worked in Rome; the Cisternerne museum, an exhibition space for contemporary art, located in former cisterns that come complete with stalactites formed by the changing water levels; and the Ordrupgaard Museum, located just north of Copenhagen, which features 19th-century French and Danish art and is noted for its works by Paul Gauguin. Entertainment and performing arts The new Copenhagen Concert Hall opened in January 2009. Designed by Jean Nouvel, it has four halls, with the main auditorium seating 1,800 people.
It serves as the home of the Danish National Symphony Orchestra and, along with the Walt Disney Concert Hall in Los Angeles, is among the most expensive concert halls ever built. Another important venue for classical music is the Tivoli Concert Hall, located in the Tivoli Gardens. Designed by Henning Larsen, the Copenhagen Opera House opened in 2005. It is among the most modern opera houses in the world. The Royal Danish Theatre also stages opera in addition to its drama productions. It is also home to the Royal Danish Ballet. Founded in 1748 along with the theatre, it is one of the oldest ballet troupes in Europe, and is noted for its Bournonville style of ballet. Copenhagen has a significant jazz scene that has existed for many years. It developed when a number of American jazz musicians such as Ben Webster, Thad Jones, Richard Boone, Ernie Wilkins, Kenny Drew, Ed Thigpen, Bob Rockwell, Dexter Gordon, and others such as rock guitarist Link Wray came to live in Copenhagen during the 1960s. Every year in early July, Copenhagen's streets, squares, parks as well as cafés and concert halls fill up with big and small jazz concerts during the Copenhagen Jazz Festival. One of Europe's top jazz festivals, the annual event features around 900 concerts at 100 venues with over 200,000 guests from Denmark and around the world. The largest venue for popular music in Copenhagen is Vega in the Vesterbro district. It was chosen as "best concert venue in Europe" by the international music magazine Live. The venue has three concert halls: the great hall, Store Vega, accommodates audiences of 1,550; the middle hall, Lille Vega, has space for 500; and Ideal Bar Live has a capacity of 250. Every September since 2006, the Festival of Endless Gratitude (FOEG) has taken place in Copenhagen. This festival focuses on indie counterculture, experimental pop music and left-field music combined with visual arts exhibitions. For free entertainment one can stroll along Strøget, especially between Nytorv and Højbro Plads, which in the late afternoon and evening is a bit like an impromptu three-ring circus with musicians, magicians, jugglers and other street performers. Literature Most of Denmark's major publishing houses are based in Copenhagen. These include the book publishers Gyldendal and Akademisk Forlag and the newspaper publishers Berlingske and Politiken (the latter also publishing books). Many of the most important contributors to Danish literature, such as Hans Christian Andersen (1805–1875) with his fairy tales, the philosopher Søren Kierkegaard (1813–1855) and the playwright Ludvig Holberg (1684–1754), spent much of their lives in Copenhagen. Novels set in Copenhagen include Baby (1973) by Kirsten Thorup, The Copenhagen Connection (1982) by Barbara Mertz, Number the Stars (1989) by Lois Lowry, Miss Smilla's Feeling for Snow (1992) and Borderliners (1993) by Peter Høeg, Music and Silence (1999) by Rose Tremain, The Danish Girl (2000) by David Ebershoff, and Sharpe's Prey (2001) by Bernard Cornwell. Michael Frayn's 1998 play Copenhagen, about the meeting between the physicists Niels Bohr and Werner Heisenberg in 1941, is also set in the city. On 15–18 August 1973, an oral literature conference took place in Copenhagen as part of the 9th International Congress of Anthropological and Ethnological Sciences. The Royal Library, belonging to the University of Copenhagen, is the largest library in the Nordic countries with an almost complete collection of all printed Danish books since 1482.
Founded in 1648, the Royal Library is located at four sites in the city, the main one being on the Slotsholmen waterfront. Copenhagen's public library network has over 20 outlets, the largest being the Central Library on Krystalgade in the inner city. Art Copenhagen has a wide selection of art museums and galleries displaying both historic works and more modern contributions. They include Statens Museum for Kunst, i.e. the Danish national art gallery, in the Østre Anlæg park, and the adjacent Hirschsprung Collection, specialising in the 19th and early 20th century. Kunsthal Charlottenborg in the city centre exhibits national and international contemporary art. Den Frie Udstilling near the Østerport Station exhibits paintings created and selected by contemporary artists themselves rather than by the official authorities. The Arken Museum of Modern Art is located in southwestern Ishøj. Among artists who have painted scenes of Copenhagen are Martinus Rørbye (1803–1848), Christen Købke (1810–1848) and the prolific Paul Gustav Fischer (1860–1934). A number of notable sculptures can be seen in the city. In addition to The Little Mermaid on the waterfront, there are two historic equestrian statues in the city centre: Jacques Saly's Frederik V on Horseback (1771) in Amalienborg Square and the statue of Christian V on Kongens Nytorv, created in 1688 by Abraham-César Lamoureux, who was inspired by the statue of Louis XIII in Paris. Rosenborg Castle Gardens contains several sculptures and monuments including August Saabye's Hans Christian Andersen, Aksel Hansen's Echo, and Vilhelm Bissen's Dowager Queen Caroline Amalie. Copenhagen is believed to have invented the photomarathon photography competition, which has been held in the city each year since 1989. Cuisine Copenhagen has 15 Michelin-starred restaurants, the most of any Scandinavian city, and is increasingly recognized internationally as a gourmet destination. These include Den Røde Cottage, Formel B Restaurant, Grønbech & Churchill, Søllerød Kro, Kadeau, Kiin Kiin (Denmark's first Michelin-starred Asian gourmet restaurant), the French restaurant Kong Hans Kælder, Relæ, Restaurant AOC, Noma (short for Danish: nordisk mad, English: Nordic food) with two stars and Geranium with three. Noma was ranked as the Best Restaurant in the World by Restaurant magazine in 2010, 2011, 2012, and again in 2014, sparking interest in the New Nordic Cuisine. Apart from the selection of upmarket restaurants, Copenhagen offers a great variety of Danish, ethnic and experimental restaurants. It is possible to find modest eateries serving open sandwiches, known as smørrebrød – a traditional Danish lunch dish; however, most restaurants serve international dishes. Danish pastry can be sampled from any of numerous bakeries found in all parts of the city. The Copenhagen Bakers' Association dates back to the 1290s, and Denmark's oldest confectioner's shop still operating, Conditori La Glace, was founded in 1870 in Skoubogade by Nicolaus Henningsen, a trained master baker from Flensburg. Copenhagen has long been associated with beer. Carlsberg beer has been brewed at the brewery's premises on the border between the Vesterbro and Valby districts since 1847 and has long been almost synonymous with Danish beer production. However, recent years have seen an explosive growth in the number of microbreweries, so that Denmark today has more than 100 breweries, many of which are located in Copenhagen.
Some, like Nørrebro Bryghus, also act as brewpubs where it is possible to eat on the premises. Nightlife and festivals Copenhagen has one of the highest numbers of restaurants and bars per capita in the world. The nightclubs and bars stay open until 5 or 6 in the morning, some even longer. Denmark has a very liberal alcohol culture and a strong tradition of beer brewing, although binge drinking is frowned upon and the Danish police take driving under the influence very seriously. Inner city areas such as Istedgade and Enghave Plads in Vesterbro, Sankt Hans Torv in Nørrebro and certain places in Frederiksberg are especially noted for their nightlife. Notable nightclubs include Bakken Kbh, ARCH (previously ZEN), Jolene, The Jane, Chateau Motel, KB3, At Dolores (previously Sunday Club), Rust, Vega Nightclub, Culture Box and Gefährlich, which also serves as a bar, café, restaurant, and art gallery. Copenhagen has several recurring community festivals, mainly in the summer. Copenhagen Carnival has taken place every year since 1982 during the Whitsun Holiday in Fælledparken and around the city, with the participation of 120 bands, 2,000 dancers and 100,000 spectators. Since 2010, the old B&W Shipyard at Refshaleøen in the harbour has been the location for Copenhell, a heavy metal music festival. Copenhagen Pride is a gay pride festival taking place every year in August. The Pride has a series of different activities all over Copenhagen, but it is at the City Hall Square that most of the celebration takes place. During the Pride the square is renamed Pride Square. Copenhagen Distortion has emerged as one of the biggest street festivals in Europe, with 100,000 people joining parties at the beginning of June every year. Amusement parks Copenhagen has the two oldest amusement parks in the world. Dyrehavsbakken, a fairground and pleasure park established in 1583, is located in Klampenborg just north of Copenhagen in a forested area known as Dyrehaven. Created as an amusement park complete with rides, games and restaurants by Christian IV, it is the oldest surviving amusement park in the world. Pierrot, a nitwit dressed in white with a scarlet grin, wearing a boat-like hat while entertaining children, remains one of the park's key attractions. In Danish, Dyrehavsbakken is commonly known simply as Bakken. There is no entrance fee to pay, and Klampenborg Station, on the C-line, is situated nearby. The Tivoli Gardens is an amusement park and pleasure garden located in central Copenhagen between the City Hall Square and the Central Station. It opened in 1843, making it the second-oldest amusement park in the world. Among its rides are the oldest still-operating roller coaster, dating from 1915, and the oldest Ferris wheel still in use, opened in 1943. Tivoli Gardens also serves as a venue for various performing arts and as an active part of the cultural scene in Copenhagen. Education Copenhagen has over 94,000 students enrolled in its largest universities and institutions: the University of Copenhagen (38,867 students), Copenhagen Business School (19,999 students), Metropolitan University College and University College Capital (10,000 students each), the Technical University of Denmark (7,000 students), KEA (c. 4,500 students), the IT University of Copenhagen (2,000 students) and the Copenhagen campus of Aalborg University (2,300 students). The University of Copenhagen, founded in 1479, is Denmark's oldest university. It attracts some 1,500 international and exchange students every year.
The Academic Ranking of World Universities placed it 30th in the world in 2016. The Technical University of Denmark is located in Lyngby in the northern outskirts of Copenhagen. In 2013, it was ranked as one of the leading technical universities in Northern Europe. The IT University is Denmark's youngest university, a mono-faculty institution focusing on technical, societal and business aspects of information technology. The Danish Academy of Fine Arts has provided education in the arts for more than 250 years. It includes the historic School of Visual Arts, and has in later years come to include a School of Architecture, a School of Design and a School of Conservation. Copenhagen Business School (CBS) is an EQUIS-accredited business school located in Frederiksberg. There are also branches of both University College Capital and Metropolitan University College inside and outside Copenhagen. Sport The city has a variety of sporting teams. The major football teams are the historically successful FC København and Brøndby. FC København plays at Parken in Østerbro. Formed in 1992, it is a merger of two older Copenhagen clubs, B 1903 (from the inner suburb Gentofte) and KB (from Frederiksberg). Brøndby plays at Brøndby Stadion in the inner suburb of Brøndbyvester. BK Frem is based in the southern part of Copenhagen (Sydhavnen, Valby). Other notable teams are FC Nordsjælland (from suburban Farum), Fremad Amager, B93, AB, Lyngby and Hvidovre IF. Copenhagen has several handball teams—a sport which is particularly popular in Denmark. Of clubs playing in the "highest" leagues, there are Ajax, Ydun, and HIK (Hellerup). The København Håndbold women's club has recently been established. Copenhagen also has ice hockey teams, of which three play in the top league: Rødovre Mighty Bulls, Herlev Eagles and Hvidovre Ligahockey, all inner suburban clubs. Copenhagen Ice Skating Club, founded in 1869, is the oldest ice hockey team in Denmark but is no longer in the top league. Rugby union is also played in the Danish capital with teams such as CSR-Nanok, Copenhagen Business School Sport Rugby, Frederiksberg RK, Exiles RUFC and Rugbyklubben Speed. Rugby league is now played in Copenhagen, with the national team playing out of Gentofte Stadion. The Danish Australian Football League, based in Copenhagen, is the largest Australian rules football competition outside the English-speaking world. Copenhagen Marathon, Copenhagen's annual marathon event, was established in 1980. The Round Christiansborg Open Water Swim Race is an open water swimming competition taking place each year in late August. This amateur event is combined with a Danish championship. In 2009 the event included a FINA World Cup competition in the morning. Copenhagen hosted the UCI Road World Championships in September 2011, taking advantage of its bicycle-friendly infrastructure. It was the first time that Denmark had hosted the event since 1956, when it was also held in Copenhagen. Transport Airport The greater Copenhagen area has a very well-established transport infrastructure, making it a hub in Northern Europe. Copenhagen Airport, opened in 1925, is Scandinavia's largest airport, located in Kastrup on the island of Amager. It is connected to the city centre by metro and main-line railway services. October 2013 was a record month with 2.2 million passengers, and November 2013 figures reveal that the number of passengers is increasing by some 3% annually, about 50% more than the European average.
Road, rail and ferry Copenhagen has an extensive road network including motorways connecting the city to other parts of Denmark and to Sweden over the Øresund Bridge. The car is still the most popular form of transport within the city itself, representing two-thirds of all distances travelled. This can, however, lead to serious congestion in rush hour traffic. The Øresund train links Copenhagen with Malmö 24 hours a day, 7 days a week. Copenhagen is also served by a daily ferry connection to Oslo in Norway. In 2012, Copenhagen Harbour handled 372 cruise ships and 840,000 passengers. The Copenhagen S-Train, Copenhagen Metro and the regional train networks are used by about half of the city's passengers, the remainder using bus services. Nørreport Station near the city centre serves passengers travelling by main-line rail, S-train, regional train, metro and bus. Some 750,000 passengers make use of public transport facilities every day. Copenhagen Central Station is the hub of the DSB railway network serving Denmark and international destinations. The Copenhagen Metro expanded radically with the opening of the City Circle Line (M3) on 29 September 2019. The new line connects all inner boroughs of the city by metro, including the Central Station, and opens up 17 new stations for Copenhageners. On 28 March 2020, the Nordhavn extension of the Harbour Line (M4) opened. Running from Copenhagen Central Station, the new extension is a branch line of the M3 Cityring to Østerport. The M4 Sydhavn branch is expected to open in 2024. The new metro lines are part of the city's strategy to transform mobility towards sustainable modes of transport such as public transport and cycling as opposed to automobility. Copenhagen is cited by urban planners for its exemplary integration of public transport and urban development. In implementing its Finger Plan, Copenhagen is considered the world's first example of a transit metropolis, and areas around S-Train stations like Ballerup and Brøndby Strand are among the earliest examples of transit-oriented development. Cycling Copenhagen has been rated as the most bicycle-friendly city in the world since 2015, with bicycles outnumbering its inhabitants. In 2012 some 36% of all working or studying city-dwellers cycled to work, school, or university, with 1.27 million km covered every working day by Copenhagen's cyclists (including both residents and commuters) and 75% of Copenhageners cycling throughout the year. The city's bicycle paths are extensive and well used, boasting cycle lanes not shared with cars or pedestrians, which sometimes have their own signal systems – giving the cyclists a lead of a couple of seconds to accelerate. Healthcare Promoting health is an important issue for Copenhagen's municipal authorities. Central to its sustainability mission is its "Long Live Copenhagen" scheme, which has the goal of increasing the life expectancy of citizens, improving quality of life through better standards of health, and encouraging more productive lives and equal opportunities. The city has targets to encourage people to exercise regularly and to reduce the number who smoke and consume alcohol. Copenhagen University Hospital forms a conglomerate of several hospitals in Region Hovedstaden and Region Sjælland, together with the faculty of health sciences at the University of Copenhagen; Rigshospitalet and Bispebjerg Hospital in Copenhagen belong to this group of university hospitals.
Rigshospitalet began operating in March 1757 as Frederiks Hospital, and became state-owned in 1903. With 1,120 beds, Rigshospitalet has responsibility for 65,000 inpatients and approximately 420,000 outpatients annually. It seeks to be the number one specialist hospital in the country, with an extensive team of researchers into cancer treatment, surgery and radiotherapy. In addition to its 8,000 personnel, the hospital has training and hosting functions. It benefits from the presence of in-service students of medicine and other healthcare sciences, as well as scientists working under a variety of research grants. The hospital became internationally famous as the location of Lars von Trier's television horror mini-series The Kingdom. Bispebjerg Hospital was built in 1913, and serves about 400,000 people in the Greater Copenhagen area, with some 3,000 employees. Other large hospitals in the city include Amager Hospital (1997), Herlev Hospital (1976), Hvidovre Hospital (1970), and Gentofte Hospital (1927). Media Many Danish media corporations are located in Copenhagen. DR, the major Danish public service broadcasting corporation, consolidated its activities in a new headquarters, DR Byen, in 2006 and 2007. Similarly, TV2, which is based in Odense, has concentrated its Copenhagen activities in a modern media house in Teglholmen. The two national daily newspapers Politiken and Berlingske and the two tabloids Ekstra Bladet and BT are based in Copenhagen. Kristeligt Dagblad is based in Copenhagen and is published six days a week. Other important media corporations include Aller Media, which is the largest publisher of weekly and monthly magazines in Scandinavia, the Egmont media group and Gyldendal, the largest Danish publisher of books. Copenhagen has a large film and television industry. Nordisk Film, established in Valby, Copenhagen, in 1906, is the oldest continuously operating film production company in the world. In 1992 it merged with the Egmont media group and currently runs the 17-screen Palads Cinema in Copenhagen. Filmbyen (movie city), located in a former military camp in the suburb of Hvidovre, houses several movie companies and studios. Zentropa is a film company co-owned by Danish director Lars von Trier. He is behind several international movie productions and was a co-founder of the Dogme movement. CPH:PIX is Copenhagen's international feature film festival, established in 2009 as a fusion of the 20-year-old NatFilm Festival and the four-year-old CIFF. The CPH:PIX festival takes place in mid-April. CPH:DOX is Copenhagen's international documentary film festival, held every year in November. In addition to a documentary film programme of over 100 films, CPH:DOX includes a wide event programme with dozens of events, concerts, exhibitions and parties all over town. Twin towns – sister cities Copenhagen is twinned with: Beijing, China Marseille, France Honorary citizens While honorary citizenship is no longer granted in Copenhagen, three people have been awarded the title of honorary Copenhageners (æreskøbenhavnere).
See also :Category: People from Copenhagen 2009 United Nations Climate Change Conference in Copenhagen Architecture in Copenhagen Carlsberg Fault zone, a concealed tectonic formation that runs across the city Copenhagen Climate Council List of urban areas in Denmark by population Outline of Denmark Ports of the Baltic Sea Footnotes Citations Copenhagen City - Driving in Denmark References Further reading External links VisitCopenhagen.dk – Official VisitCopenhagen tourism website Capitals in Europe Cities and towns in the Capital Region of Denmark Municipal seats in the Capital Region of Denmark Municipal seats of Denmark Populated places established in the 11th century Port cities and towns in Denmark Port cities and towns of the Baltic Sea
27864261
https://en.wikipedia.org/wiki/Eastern%20Junior%20A%20Hockey%20League
Eastern Junior A Hockey League
The Eastern Junior A Hockey League (EJHL, EJAHL) was a Junior "A" ice hockey league from Cape Breton Island, Nova Scotia, Canada. The Eastern Junior A Hockey League was in competition for the Manitoba Centennial Cup, the National Junior A Championship from 1975 until 1978. History In the mid-1970s, the Maritime Amateur Hockey Association allowed the Eastern Junior B Hockey League of Cape Breton Island to play at the Junior A level. The 1975 Champion Port Hawkesbury Strait Pirates and Antigonish Bulldogs refused to jump to Junior A and elected to play in the Northumberland Junior B Hockey League. The EJAHL expanded with a new team in New Waterford and continued on with four teams. Over the next two seasons, the Sydney Millionaires were the cream of the crop in the EJAHL. Winning both the 1976 and 1977 Regular Season and Playoff Championships, the Millionaires twice made it into the Centennial Cup playdowns. In 1976, the Millionaires were dropped in five games by the Charlottetown Colonels of the Island Junior Hockey League. A year later, the Millionaires first played the Corner Brook Jr. Royals of the Newfoundland Junior A Hockey League sweeping them in the process. In the second round of the playoffs, the Millionaires met Charlottetown again and were swept again. In the 1977–78 season, the New Waterford Jets found their legs and took both the regular season and playoff crowns. In an anticlimactic ending to the league, the Jets took on the Metro Valley Junior Hockey League's first ever Junior A champion in 1978, the Cole Harbour Colts from Nova Scotia's mainland and were defeated by them. In 1979, New Waterford repeated as champions. They lost to the MVJHL's Halifax Lions 4-games-to-none in the provincial final. In 1980, the Northside Trojans won the league. They lost to the MVJHL's Cole Harbour Colts in four straight games. In the early 1980s, the league was demoted to Jr. B and merged with the Northumberland Junior B Hockey League. In 1992, that league merged with the Mainland Junior B Hockey League to form the Nova Scotia Junior B Hockey League that exists to this day. Even now, the Strait Pirates of Port Hawkesbury still exist and the Cape Breton Canadians who were formed as the Sydney Millionaires in the 1970s are still playing in the NSJHL. Teams Antigonish Bulldogs County Centennials Glace Bay Miners New Waterford Jets North Sydney Victorias/Northside Trojans Port Hawkesbury Strait Pirates Sydney Millionaires Regular Season Champions 1975 Port Hawkesbury Strait Pirates 1976 Sydney Millionaires 1977 Sydney Millionaires 1978 New Waterford Jets 1979 New Waterford Jets 1980 Glace Bay Miners Playoff Champions 1975 Port Hawkesbury Strait Pirates 1976 Sydney Millionaires 1977 Sydney Millionaires 1978 New Waterford Jets 1979 New Waterford Jets 1980 Northside Trojans National Playdowns 1976 - Lost Eastern Manitoba Centennial Cup Semi-final Charlottetown Colonels (IJHL) defeated Sydney Millionaires 4-games-to-1 in East MCC semi-final 1977 - Lost Eastern Manitoba Centennial Cup Semi-final Sydney Millionaires defeated Corner Brook Jr. 
Royals (NJAHL) 4-games-to-none in East MCC quarter-final Charlottetown Generals (IJHL) defeated Sydney Millionaires 4-games-to-none in East MCC semi-final 1978 - Lost Eastern Manitoba Centennial Cup Quarter-final Cole Harbour Colts (MVJHL) defeated New Waterford Jets 4-games-to-none in East MCC quarter-final 1979 - Lost Eastern Manitoba Centennial Cup Semi-final Halifax Lions (MVJHL) defeated New Waterford Jets 4-games-to-none in East MCC semi-final 1980 - Lost Eastern Manitoba Centennial Cup Semi-final Cole Harbour Colts (MVJHL) defeated Northside Trojans 4-games-to-none in East MCC semi-final See also Maritime Hockey League Canadian Junior Hockey League Fred Page Cup Royal Bank Cup Hockey Nova Scotia External links MHL website CJHL website Defunct junior ice hockey leagues in Canada
33984733
https://en.wikipedia.org/wiki/Applied%20Maths
Applied Maths
Applied Maths NV, headquartered in Sint-Martens-Latem, Belgium, is a bioinformatics company owned by bioMérieux that develops software for the biosciences. History Applied Maths was founded in 1992 and gained worldwide recognition with the software GelCompar, used as a standard tool for the normalization and comparative analysis of electrophoresis patterns (PFGE, AFLP, RAPD, REP-PCR and variants, etc.). GelCompar II was released in 1998 to deal with the ever-growing amounts of information following the success and expansion of electrophoresis and other fingerprinting techniques in various application fields in microbiology, virology and mycology. Following the introduction of the concepts of polyphasic taxonomy and the growing need to combine genotypic, phenotypic, electrophoresis and sequence information, Applied Maths released the software package BIONUMERICS in 1996, which remains a platform for the management, storage and (statistical) analysis of all types of biological data. BIONUMERICS and GelCompar II are used by several networks around the globe, such as PulseNet and CaliciNet, to share and identify strain information. In January 2016, Applied Maths was acquired by bioMérieux. Products BIONUMERICS: BIONUMERICS is a commercial suite of four configurations used for the analysis of all major applications in bioinformatics: 1D electrophoresis gels, chromatographic and spectrometric profiles, phenotype characters, microarrays, sequences, etc. GelCompar II: GelCompar II is a suite of five modules developed for the analysis of fingerprint patterns, covering normalization, import into a relational database and comparative analysis. BNServer: BNServer is a web-based platform generally installed between a centrally maintained database and distributed clients using BIONUMERICS, GelCompar II or a web browser to exchange biological information and analysis results. BNServer has been used since the nineties in food outbreak detection. Reception Over 15,000 peer-reviewed research articles mention the use of the Applied Maths software packages BIONUMERICS or GelCompar II. References External links Biotechnology companies of Belgium Computational science Bioinformatics companies Software companies established in 1992 Biotechnology companies established in 1992 1992 establishments in Belgium Software companies of Belgium Companies based in East Flanders
1016348
https://en.wikipedia.org/wiki/Paul%20Baran
Paul Baran
Paul Baran (born Pesach Baran; April 29, 1926 – March 26, 2011) was a Polish-American engineer who was a pioneer in the development of computer networks. He was one of the two independent inventors of packet switching, which is today the dominant basis for data communications in computer networks worldwide, and went on to start several companies and develop other technologies that are an essential part of modern digital communication. Early life He was born in Grodno (then Second Polish Republic, since 1945 part of Belarus) on April 29, 1926. He was the youngest of three children in his Jewish family, with the Yiddish given name "Pesach". His family moved to the United States on May 11, 1928, settling in Boston and later in Philadelphia, where his father, Morris "Moshe" Baran (1884–1979), opened a grocery store. He graduated from Drexel University (then called Drexel Institute of Technology) in 1949, with a degree in electrical engineering. He then joined the Eckert-Mauchly Computer Company, where he did technical work on UNIVAC models, the first brand of commercial computers in the United States. In 1955 he married Evelyn Murphy, moved to Los Angeles, and worked for Hughes Aircraft on radar data processing systems. He obtained his master's degree in engineering from UCLA in 1959, taking night classes under advisor Gerald Estrin. His thesis was on character recognition. While Baran initially stayed on at UCLA to pursue his doctorate, a heavy travel and work schedule forced him to abandon his doctoral work. Packet switched network design After joining the RAND Corporation in 1959, Baran took on the task of designing a "survivable" communications system that could maintain communication between end points in the face of damage from nuclear weapons during the Cold War. At the time, most American military communications used high-frequency connections, which could be put out of action for many hours by a nuclear attack. Baran decided to automate RAND Director Franklin R. Collbohm's previous work with emergency communication over conventional AM radio networks and showed that a distributed relay node architecture could be survivable. The Rome Air Development Center soon showed that the idea was practicable. Using the minicomputer technology of the day, Baran and his team developed a simulation suite to test basic connectivity of an array of nodes with varying degrees of linking. That is, a network of n-ary degree of connectivity would have n links per node. The simulation randomly "killed" nodes and subsequently tested the percentage of nodes that remained connected. The result of the simulation revealed that networks in which n ≥ 3 had a significant increase in resilience against even as much as 50% node loss. Baran's insight gained from the simulation was that redundancy was the key. His first work was published as a RAND report in 1960, with more papers generalizing the techniques in the next two years. After proving survivability, Baran and his team needed to show proof of concept for that design so that it could be built. That involved high-level schematics detailing the operation, construction, and cost of all the components required to construct a network that leveraged the new insight of redundant links. The result was one of the first store-and-forward data layer switching protocols, a link-state/distance vector routing protocol, and an unproved connection-oriented transport protocol.
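The qualitative result described above—that a connectivity degree of three or more keeps most surviving nodes mutually reachable even after heavy random node loss—can be reproduced with a short Monte Carlo sketch. The snippet below is an illustrative reconstruction, not Baran's original RAND simulation: the 200-node random topology, the 50% loss rate and the use of the largest connected component as the survivability measure are assumptions chosen only to demonstrate the effect.

```python
# Illustrative Monte Carlo sketch of Baran-style survivability analysis.
# Assumptions (not from the original RAND study): a random topology of
# 200 nodes with roughly `degree` links per node, 50% random node loss,
# and survivability measured as the largest connected fraction of the
# surviving nodes.
import random
from collections import deque

def random_network(num_nodes, degree, seed=0):
    """Build an undirected network where each node gets at least `degree` links."""
    rng = random.Random(seed)
    links = {v: set() for v in range(num_nodes)}
    for v in range(num_nodes):
        while len(links[v]) < degree:
            w = rng.randrange(num_nodes)
            if w != v:
                links[v].add(w)
                links[w].add(v)
    return links

def largest_component_fraction(links, alive):
    """Fraction of surviving nodes that sit in the largest connected component (BFS)."""
    alive = set(alive)
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        queue, comp = deque([start]), 0
        seen.add(start)
        while queue:
            v = queue.popleft()
            comp += 1
            for w in links[v]:
                if w in alive and w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, comp)
    return best / len(alive) if alive else 0.0

def survivability(degree, loss=0.5, num_nodes=200, trials=50):
    """Average connected fraction after randomly 'killing' a share of the nodes."""
    rng = random.Random(42)
    total = 0.0
    for t in range(trials):
        links = random_network(num_nodes, degree, seed=t)
        survivors = [v for v in range(num_nodes) if rng.random() > loss]
        total += largest_component_fraction(links, survivors)
    return total / trials

if __name__ == "__main__":
    for n in (1, 2, 3, 4):
        print(f"degree {n}: ~{survivability(n):.0%} of survivors still connected")
```

With these assumptions, the connected fraction of survivors rises sharply once the degree reaches about three, matching the redundancy effect the text describes.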
Explicit detail of the designs can be found in the complete series of reports On Distributed Communications, published by RAND in 1964. The design flew in the face of telephony design of the time by placing inexpensive and unreliable nodes at the center of the network and more intelligent terminating 'multiplexer' devices at the endpoints. In Baran's words, unlike the telephone company's equipment, his design did not require expensive "gold plated" components to be reliable. The Distributed Network that Baran introduced was intended to route around damage. It provided connection to others through many points, not one centralized connection. Fundamental to the scheme was the division of the information into "blocks" before they were sent out across the network. That enabled the data to travel faster and communications lines to be used more efficiently. Each block was sent separately, traveling different paths and rejoining into a whole when they were received at their destination. Selling the idea After the publication of On Distributed Communications, he presented the findings of his team to a number of audiences, including AT&T engineers (not to be confused with Bell Labs engineers, who at the time provided Paul Baran with the specifications for the first generation of T1 circuit that he used as the links in his network design proposal). In subsequent interviews, Baran mentioned how the AT&T engineers scoffed at his idea of non-dedicated physical circuits for voice communications, at times claiming that Baran simply did not understand how voice telecommunication worked. Donald Davies, at the National Physical Laboratory in the United Kingdom, also thought of the same idea and implemented a trial network. While Baran used the term "message blocks" for his units of communication, Davies used the term "packets," as it was capable of being translated into languages other than English without compromise. He applied the concept to a general-purpose computer network. Davies's key insight came in the realization that computer network traffic was inherently "bursty" with periods of silence, compared with relatively-constant telephone traffic. It was in fact Davies's work on packet switching, not Baran's, that initially caught the attention of the developers of ARPANET at the Symposium on Operating Systems Principles in October 1967. Baran was happy to acknowledge that Davies had come up with the same idea as him independently. In an e-mail to Davies, he wrote: Leonard Kleinrock, a contemporary working on analyzing message flow using queueing theory, developed a theoretical basis for the operation of message switching networks in his proposal for a Ph.D. thesis in 1961-2, published as a book in 1964. In the early 1970s, he applied this theory to model the performance of packet switching networks. However, the representation of Kleinrock's early work as originating the concept of packet switching is disputed, including by Robert Taylor, Baran and Davies. Baran and Davies are recognized by historians and the U.S. National Inventors Hall of Fame for independently inventing the concept of digital packet switching used in modern computer networking including the Internet. In 1969, when the US Advanced Research Projects Agency (ARPA) started developing the idea of an internetworked set of terminals to share computing resources, the reference materials that they considered included Baran and the RAND Corporation's "On Distributed Communications" volumes. 
The resiliency of a packet-switched network that uses link-state routing protocols, which are used on the Internet, stems in some part from the research to develop a network that could survive a nuclear attack. Later work In 1968, Baran was a founder of the Institute for the Future and was then involved in other networking technologies developed in Silicon Valley. He participated in a review of the NBS proposal for a Data Encryption Standard in 1976, along with Martin Hellman and Whitfield Diffie of Stanford University. In the early 1980s, Baran founded PacketCable, Inc, "to support impulse-pay television channels, locally generated videotex, and packetized voice transmission." PacketCable, also known as Packet Technologies, spun off StrataCom to commercialize his packet voice technology for the telephony market. That technology led to the first commercial pre-standard Asynchronous Transfer Mode product. He founded Telebit after conceiving its discrete multitone modem technology in the mid-1980s. It was one of the first commercial products to use orthogonal frequency-division multiplexing, which was later widely deployed in DSL modems and Wi-Fi wireless modems. In 1985, Baran founded Metricom, the first wireless Internet company, which deployed Ricochet, the first public wireless mesh networking system. In 1992, he also founded Com21, an early cable modem company. After Com21, Baran founded and was president of GoBackTV, which specializes in personal TV and cable IPTV infrastructure equipment for television operators. Most recently, he founded Plaster Networks, providing an advanced solution for connecting networked devices in the home or small office through existing wiring. Baran extended his work in packet switching to wireless-spectrum theory, developing what he called "kindergarten rules" for the use of wireless spectrum. In addition to his innovation in networking products, he is also credited with inventing the first doorway gun detector. He received an honorary doctorate when he gave the commencement speech at Drexel in 1997. Death Baran died in Palo Alto, California, at the age of 84 on March 26, 2011 from complications caused by lung cancer. Upon his death, RAND President James Thomson, stated, "Our world is a better place for the technologies Paul Baran invented and developed, and also because of his consistent concern with appropriate public policies for their use." One of the fathers of the Internet, Vinton Cerf, stated, "Paul wasn't afraid to go in directions counter to what everyone else thought was the right or only thing to do." According to Paul Saffo, Baran also believed that innovation was a "team process" and avoided seeking credit for himself. On hearing news of his death, Robert Kahn, co-inventor of the Internet, said: "Paul was one of the finest gentlemen I ever met and creative to the very end." Awards and honors IEEE Alexander Graham Bell Medal (1990) Marconi Prize (1991) Nippon Electronics Corporation C&C Prize (1996) Bower Award and Prize for Achievement in Science (2001) Fellow of the American Academy of Arts and Sciences (2003) Fellow of the Computer History Museum (2005) "for fundamental contributions to the architecture of the Internet and for a lifetime of entrepreneurial activity." 
National Inventors Hall of Fame (2007) National Medal of Technology and Innovation (2007) UCLA Engineering Alumnus of the Year (2009) Internet Hall of Fame (2012) See also Internet pioneers References External links A 44-page transcript in which Baran describes his working environment at RAND, his initial interest in survivable communications, the evolution of his plan for distributed networks, the objections he received, and the writing and distribution of his eleven-volume work, On Distributed Communications. Baran discusses his interaction with the group at ARPA who were responsible for the later development of the ARPANET. This describes Paul Baran's development of packet switching and its application to wireless computing. A transcript of Baran's keynote address at the Countdown to Technology 2000 Winter Conference that includes a photo. Paul Baran named 1991 Marconi Fellow 1926 births 2011 deaths American communications businesspeople American people of Belarusian-Jewish descent American people of Polish-Jewish descent Belarusian Jews Drexel University alumni Fellows of the American Academy of Arts and Sciences Internet pioneers Packets (information technology) People from Grodno People from Białystok Voivodeship (1919–1939) National Medal of Technology recipients RAND Corporation people Polish emigrants to the United States UCLA Henry Samueli School of Engineering and Applied Science alumni 20th-century American inventors
28808483
https://en.wikipedia.org/wiki/John%20C.%20Mitchell
John C. Mitchell
John Clifford Mitchell is professor of computer science and (by courtesy) electrical engineering at Stanford University. He has published in the areas of programming language theory and computer security. John C. Mitchell was the Vice Provost for Teaching and Learning at Stanford University, the Mary and Gordon Crary Family Professor in Computer Science and Electrical Engineering at Stanford University, co-director of the Stanford Computer Security Lab, and Professor (by courtesy) of Education. He is a member of the steering committee for Stanford University's Cyber Initiative. Mitchell has been Vice Provost at Stanford University since 2012, first as the inaugural Vice Provost for Online Learning and now in a broader role for Teaching and Learning. Under Mitchell's direction, the Office of the Vice Provost for Teaching and Learning (VPTL) is advancing teaching and learning through faculty-driven initiatives and research, transforming education in Stanford's classrooms and beyond. Mitchell's first research project in online learning started in 2009, when he and six undergraduate students built Stanford CourseWare, an innovative platform that expanded to support interactive video and discussion. CourseWare served as the foundation for initial flipped-classroom experiments at Stanford and helped inspire the first massive open online courses (MOOCs) from Stanford, which captured worldwide attention in 2011. The Office of the Vice Provost for Online Learning was established in August 2012, after Mitchell served as special assistant for educational technology to John L. Hennessy, Stanford University's 10th president, and chaired a faculty committee that established initial priorities for Stanford and developed intellectual property guidelines for publicly released online courses. To help build faculty experience and a catalogue of online material, Vice Provost Mitchell launched a faculty seed grant program in summer 2012. This program has helped faculty across campus transform their Stanford campus courses and release public courses to the world, generating informed discussion and debate among faculty in the process. In addition to supporting delivery of digital course content, the VPTL engineering team is working to expand the features of Lagunita, Stanford's instance of the open-source release of the edX platform. Mitchell and his team, in partnership with edX, announced the release of Open edX in June 2013: an open-source hosting platform that provides a customizable alternative for all colleges and universities and supports open educational research and innovation. Stanford's online courses are generating a wealth of course participant data. In collaboration with Stanford centers of scholarship such as the Lytics Lab, which is jointly supervised by Mitchell, and Mitchell Stevens and Candace Thille of the Graduate School of Education, VPTL is playing a key role in evaluating educational outcomes and improving online learning based on data-driven research and iterative design. In May 2014, Mitchell's team issued a comprehensive report to share benchmark information with other institutions of higher education. Mitchell holds a B.S. from Stanford University and an M.S. and Ph.D. from the Massachusetts Institute of Technology (MIT). He has served on the editorial boards of ten academic journals, acted as consultant and advisor to numerous companies, and spent sabbaticals at the Newton Institute for Mathematical Sciences and Coverity, Inc. 
Mitchell is the author of two books, over 170 research papers, and is among the most-cited scholars in computer science. Research Together with Gordon Plotkin he noted the connection between existential types and abstract data types. Mitchell's early computer science research focused on programming analysis and design, where he played a pivotal role in developing type theory as a foundation for programming languages, a view that is now dominant in the field. For the past 15 years, his research has focused on computer security, developing analysis methods and improving network protocol security, authorization and access control, web security, and privacy. Mitchell has been at the forefront of Web and network security research and education for more than a decade and has helped train thousands of students in programming languages and hundreds of expert-level professionals in the area of cyber-security. His efforts have resulted in the development of concepts used in the popular Java programming language, improved the security of widely used wireless networking protocols, contributed to the security architecture of the Chrome browser and other components of the modern web. In August 2012, Mitchell was appointed by Stanford President John L. Hennessy as the Vice Provost for Online Learning, a newly created position responsible for overseeing Stanford's online learning initiatives. References American computer scientists Programming language researchers Living people Year of birth missing (living people)
28095113
https://en.wikipedia.org/wiki/XenClient
XenClient
XenClient is a desktop virtualization solution from Citrix that runs secure virtual desktops on endpoint devices. Desktops are run locally, without hosting applications or the operating system in a datacenter. It consists of a Type-1 Xen client hypervisor and a management server, which provides features such as centralized provisioning, patching, updating, monitoring, policy controls, and de-provisioning. It enforces security through features including AES-256 full disk encryption, VM isolation, remote kill, lockout, USB filtering, and VLAN tagging. XenClient supports use cases such as disconnected operation on laptops, limited connectivity environments (such as branch offices), and other use cases where use of local execution is desired and centralized management is required. History XenClient was announced as "Project Independence", a joint project with Intel in January 2009. XenClient 1.0 was released by Citrix in September 2010 after working with partners such as Intel, Dell, HP, and Microsoft. In May 2012, Citrix acquired Virtual Computer, another provider of client-hosted desktop virtualization solutions. Today, Citrix has combined the NxTop solution from Virtual Computer with the Xen hypervisor in XenClient 4.5, which was released in December 2012. Overview XenClient is available in three different editions: XenClient Enterprise, XenClient Express, and XenClient XT. XenClient Enterprise is a desktop virtualization product composed of a client hypervisor, the XenClient Enterprise Engine, and a management server, the XenClient Enterprise Synchronizer. It includes features such as image management, patching and updating, backup and recovery, security and policy management, and PC migration. It is designed for enterprises and is available either standalone or through XenDesktop Enterprise and Platinum editions. XenClient Express is the free edition of XenClient Enterprise which provides access to the XenClient Engine and a license to use the XenClient Synchronizer for up to 10 devices. It is designed for IT professionals, consultants, and small businesses. XenClient XT has a special, hardened version of the XenClient Engine client hypervisor and hardware-assisted features for a high level of security. It is designed for the public sector and other industries with extreme security requirements. How XenClient Enterprise works XenClient Enterprise has two major components. The first is the XenClient Enterprise Engine, which includes the client hypervisor and any virtual machines managed by the hypervisor. The second is the XenClient Enterprise Synchronizer, the management server which manages multiple XenClient Enterprise Engines. XenClient Enterprise Engine XenClient Enterprise Engine is a Type-1 client hypervisor which runs on bare metal or directly on the hardware. It uses the open source Xen hypervisor which lets users run multiple virtual machines simultaneously. Since the virtual machines are executed locally, XenClient users can work online or offline, regardless of network connection or location. XenClient Enterprise Synchronizer Users of XenClient Enterprise Engine download images from the XenClient Enterprise Synchronizer and run them locally on their laptops or PCs. These images are created on the Synchronizer and deployed (often as a golden image) to users across an organization. Images can be backed up and restored, patched and updated, and security policies may be defined for them. 
Security policies include remote wipe/kill capabilities, AES-NI 256-bit full disk encryption, and time-based lockout policies. Moreover, images can be migrated to a different laptop or PC in case of loss, corruption, or disaster. The Synchronizer also has a remote office server capability to support branch office and remote users. Remote office servers are managed by a central Synchronizer and images are downloaded from the central server to remote servers over the WAN. These images are then cached and transmitted over the LAN to branch office and remote users to reduce WAN utilization. Backups are also done locally over the LAN for improved performance and disaster recovery. Use XenClient is used by organizations for many different purposes. Some of the major uses of XenClient are to: • Simplify management of PCs by letting IT admins deploy and manage a single golden image for many PCs • Allow users to work from anywhere and give IT central control by offering desktop virtualization for offline laptops • Protect corporate data by enforcing security policies and backing up corporate and user data • Reduce the cost of supporting PCs in remote and branch offices with remote office servers • Support Windows 7 migrations with hardware-independent images propagated to all users • Manage shared PCs for kiosks, labs and training facilities by provisioning and managing several PCs at a time • Repurpose existing PCs as thin clients by managing the thin-client device Customers Many businesses in different industries have deployed XenClient, including companies from industries such as healthcare, energy, and retail. XenClient customers include: Swisscom, Afval Energie Bedrijf (AEB), Advanced Medical Imaging, Visma ITC, Residential Finance Corporation (RFC), Eyecare Medical Group, InTown Veterinary Group, Town of Lincoln, Massachusetts Recognition XenClient has been recognized as an industry leader in client virtualization. Citrix placed in the Leaders category of the latest “Worldwide Client Virtualization 2012 Vendor Analysis” IDC MarketScape report. In 2011, Frank Ohlhorst of SearchVirtualDesktop said that XenClient ‘offers excellent performance and improved stability’. In 2012, Andrew Wood of The Virtualization Practice called XenClient ‘a revolution in desktop virtualization for laptops’. Known limitations Since XenClient is a Type-1 client hypervisor, it does not work with every type of operating system or hardware. For example, it does not support the Mac operating system. However, the latest version of XenClient (XenClient 5.0) has an expanded Hardware Compatibility List (HCL) with support for newer laptops or PCs, including Ultrabooks and 3rd generation Intel Core processors. In addition, XenClient has been demonstrated on the Mac OS in the past, meaning that a Mac solution is possible sometime in the future. References External links Citrix XenClient site Citrix Systems Virtualization software
32611383
https://en.wikipedia.org/wiki/Duplicati
Duplicati
Duplicati is a backup client that securely stores encrypted, incremental, compressed remote backups of local files on cloud storage services and remote file servers. Duplicati supports not only various online backup services like OneDrive, Amazon S3, Backblaze, Rackspace Cloud Files, Tahoe LAFS, and Google Drive, but also any servers that support SSH/SFTP, WebDAV, or FTP. Duplicati uses standard components such as rdiff, zip, AESCrypt, and GnuPG. This allows users to recover backup files even if Duplicati is not available. Released under the terms of the GNU Lesser General Public License (LGPL), Duplicati is free software. Technology Duplicati is written mostly in C# and implemented completely within the CLR, which enables it to be cross-platform. It runs well on 32-bit and 64-bit versions on Windows, macOS and Linux using either .NET Framework or Mono. Duplicati has both a graphical user interface with a wizard-style interface and a command-line version for use in headless environments. Both interfaces use the same core and thus have the same set of features and capabilities. The command-line version is similar to the Duplicity interface. Duplicati has some unique features that are usually only found in commercial systems, such as remote verification of backup files, disk snapshots, and backup of open files. The disk snapshots are performed with VSS on Windows and LVM on Linux. History The original Duplicati project was started in June 2008 and intended to produce a graphical user interface for the Duplicity program. This included a port of the Duplicity code for use on Windows, but was dropped in September 2008, where work on a clean re-implementation began. This re-implementation includes all the sub-programs found in Duplicity, such as rdiff, ftp, etc. This initial version of Duplicati saw an initial release in June 2009. In 2012, work on Duplicati 2 started, which is a complete rewrite. It includes a new storage engine that allows efficient, incremental, continuous backups. The new user interface is web-based, which makes it possible to install Duplicati 2 on headless systems like servers or a NAS. As it is also responsive, it can be easily used on mobile devices. Implementation The Duplicati GUI and command-line interface both call a common component called Main, which serves as a binding point for all the operations supported. Currently the encryption, compression and storage component are considered subcomponent and are loaded at runtime, making it possible for a third-party developer to inject a subcomponent into Duplicati without access to the source or any need to modify Duplicati itself. The license type is also flexible enough to allow redistribution of Duplicati with a closed-source storage provider. Duplicati is designed to be as independent of the provider as possible, which means that any storage medium that supports the common commands (GET, PUT, LIST, DELETE) can work with Duplicati. The Duplicity model, on which Duplicati is based, relies heavily on components in the system, such as librdiff, TcFTP and others. Since Duplicati is intended to be cross-platform, and it is unlikely that all those components are available on all platforms, Duplicati re-implements the components instead. Most notably, Duplicati features an rdiff and AESCrypt implementation that work on any system that supports a Common Language Runtime. Limitations of Duplicati 1 The GUI frontend in Duplicati 1.x is intended to be used on a single machine with a display attached. 
However, it is also possible to install Duplicati as a Windows service or Linux daemon and to prevent the Duplicati system tray application from starting its own instance of the Duplicati service. This limitation has been addressed in Duplicati 2, which has a web interface and can be used on headless systems. Duplicati 1.x has extremely slow file listings, so browsing a file tree to do restores can take a long time. Since Duplicati produces incremental backups, a corrupt or missing incremental volume can render all following incremental backups (up to the next full backup) useless. Duplicati 2 regularly tests the backup to detect corrupted files early. Duplicati 1.x only stores the file modification date, not metadata like permissions and attributes. This has been addressed in Duplicati 2. See also List of backup software References External links 2008 software Free backup software Free software programmed in C Sharp
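To illustrate the provider-independence point made in the Implementation section above, the sketch below shows what a "four primitive" storage backend contract can look like. This is a hypothetical Python illustration, not Duplicati's actual plug-in API (which is written in C#); all class and method names here are invented for clarity.

# Hypothetical sketch of a backend that only needs GET, PUT, LIST and DELETE.
# Not Duplicati's real API; names are illustrative only.
import os
import shutil
from abc import ABC, abstractmethod
from typing import List

class StorageBackend(ABC):
    """Any store supporting these four primitives could, in principle, hold backups."""

    @abstractmethod
    def get(self, remote_name: str, local_path: str) -> None: ...

    @abstractmethod
    def put(self, local_path: str, remote_name: str) -> None: ...

    @abstractmethod
    def list(self) -> List[str]: ...

    @abstractmethod
    def delete(self, remote_name: str) -> None: ...

class LocalFolderBackend(StorageBackend):
    """Toy implementation that treats a local folder as the 'remote' store."""

    def __init__(self, folder: str):
        self.folder = folder
        os.makedirs(folder, exist_ok=True)

    def get(self, remote_name, local_path):
        shutil.copyfile(os.path.join(self.folder, remote_name), local_path)

    def put(self, local_path, remote_name):
        shutil.copyfile(local_path, os.path.join(self.folder, remote_name))

    def list(self):
        return sorted(os.listdir(self.folder))

    def delete(self, remote_name):
        os.remove(os.path.join(self.folder, remote_name))

An SFTP, WebDAV or S3 backend would differ only in how these four methods are implemented, which is the property that makes such an application largely independent of the storage provider.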
496990
https://en.wikipedia.org/wiki/Didaktik
Didaktik
The Didaktik was a series of 8-bit home computers based on the clones of Intel 8080 and Zilog Z80 processors produced in former Czechoslovakia. Didaktik Alfa Didaktik Alfa was produced in 1986, as a "more professional" clone of PMD 85. It featured 2.048 MHz Intel 8080 CPU, 48 KB RAM, 8 KB ROM with built-in BASIC, good keyboard (compared with PMD 85), monitor video output (but no TV output) with 288×256 resolution and four possible colours. Despite some changes in ROM, it was mostly compatible with PMD 85. Didaktik Alfa 1 was a clone of PMD 85-1, Didaktik Alfa 2 of PMD 85-2. Didaktik Beta Didaktik Beta was a slightly improved version of previous Didaktik Alfa, having almost identical hardware. While Didaktik Alfa and Beta were mostly deployed in schools (to replace older PMD 85 computers), there was another production line, meant as home computers. These were Sinclair ZX Spectrum 48K clones. Didaktik Gama Didaktik Gama was the first clone of the ZX Spectrum with one speciality: 80 KB RAM divided into two switched 32 KB memory banks and 16 KB of slower RAM containing graphical data for video output, while the size of ROM was 16 KB. This computer had become an unreachable dream for many children and adults in former socialist Czechoslovakia as the computer was considerably expensive and seldom available to buy. It is said there were waiting lists several years long. The design of the computer was very simple just a grey or black box the size of A5 with flat plastic keyboard and connectors mounted on the rear side. All games developed for the ZX Spectrum 48K were generally compatible with this computer. There is no need to say that it established massive and flourishing black market with these games country-wide as they were officially unavailable behind the "iron curtain". An audio cassette was used as the data store and a TV served as the monitor. Didaktik Gama was produced in three variants: the first, Gama '87, fixed some bugs in the original ZX Spectrum ROM, thus breaking compatibility in some percentage of applications (read: games), and introduced its own bugs effectively inhibiting the use of the second 32 KB memory bank from BASIC. Gama '88 fixed the original ZX Spectrum bugs in a more compatible way and fixed the memory switching bug. The last and the best model was Gama '89 which fixed some more bugs. Production of Didaktik Gama computers ceased in 1992. Didaktik M The next version, the Didaktik M introduced in 1990, was more advanced in design and reliability. The machine resembled more of a professional home computer with arrow keys separated from the rest of the keyboard and a more ergonomic shape of the case. Inside there was only 64 KB of total memory (16 KB ROM and 48 KB RAM) which was a disappointment in comparison to the Gama. The computer was considerably redesigned. A custom circuit from Russian company Angstrem was used instead of the original ULA as a result the screen had a square aspect ratio instead of a rectangle 4:3. In addition the whole RAM was realized by one set of 64 KB chips from which only 48 KB were used and there was no difference between the fast and slow memory with the video content. There were two separated connectors for joysticks and one connector for additional interfaces, such as a printer interface. Unlike the previous version of Didaktik, these connectors were typical "socialistic solution" compatible with nothing that was then available in the ČSSR. 
Thus, users were forced to develop and build various, sometimes peculiar, home-made interfaces to satisfy their needs. Data storage and display options were the same as for the Gama. Two floppy disk drives were developed and released later to offer fast saving and loading of programs. The 5.25-inch floppy disk drive called D40 was introduced in 1992 and featured a "Snapshot" (see also Hibernation (computing)) button that allowed the current contents of memory (a memory image) to be stored on diskette. It was later also possible to load the memory image and continue playing the game (or whatever was stored) from the saved state. The 3.5-inch floppy disk drive called D80 was introduced later in 1992, at the same time as the Didaktik Kompakt was released. Didaktik Kompakt The Didaktik Kompakt from 1992 was basically a Didaktik M with a built-in 3.5-inch 720 KB floppy drive and a parallel printer port. These computers were known for their simplicity, which allowed people with little technical ability to produce various hardware add-ons (such as FDD controllers and AD/DA converters) and software (such as Desktop, a unique WYSIWYG word processor with proportional text, pictures in text, block functions, multi-font support, etc.). Both versions of these computers were produced in Skalica, Slovakia. Didaktik's popularity diminished as the prices of 16-bit computers such as the Atari and Amiga fell around the middle of the 1990s, and the line was finally displaced by the PC soon after. Production of Didaktik computers stopped in 1994. References External links Didaktik computers Didaktik computers on old-computers.com PCB scans A schematic including the inside of the modulator Didaktik družstvo Skalica the website of the company Т34ВГ1 an article in the Russian Wikipedia about the Russian ULA replacement Computer-related introductions in 1986 Home computers ZX Spectrum clones Science and technology in Czechoslovakia
35391005
https://en.wikipedia.org/wiki/Larsen%20%26%20Toubro%20Infotech
Larsen & Toubro Infotech
Larsen & Toubro Infotech Limited (LTI) is an Indian multinational information technology services and consulting company based in Mumbai, India. In 2017, NASSCOM ranked LTI as the sixth-largest Indian IT services company in terms of export revenues. It was among the top 15 IT service providers globally in 2017, according to the Everest Group's PEAK Matrix for IT service providers. It employs standards of the Software Engineering Institute's (SEI) Capability Maturity Model Integration (CMMI) and is a Maturity Level 5 assessed organization. History Founded as L&T Information Technology Ltd in December 1996, LTI is a wholly owned subsidiary of Larsen & Toubro (L&T). During 2001–2002 the company's name was changed from L&T Information Technology Limited to Larsen & Toubro Infotech Limited, and in the same year the company achieved the assessed level of SEI Level 5. L&T Infotech dropped the word 'Infotech' from its name to reflect the changed business environment and rebranded itself as 'LTI', with the tag line 'Let's Solve', in May 2017. Global presence LTI has a presence across the following regions: India: Mumbai – (Powai, Airoli, Mahape) Pune – (Shivajinagar, Hinjawadi) Bangalore/Bengaluru – (Whitefield) Chennai – (Manapakkam) Hyderabad – (Kokapet) North America: Canada, Mexico, United States. Europe: United Kingdom, Germany, Denmark, France, Sweden, Norway, Finland, Belgium, Ireland, Netherlands, Poland, Spain, Luxembourg, Switzerland Middle East: Kuwait, United Arab Emirates, Saudi Arabia, Qatar South America: Costa Rica Africa: South Africa, Morocco Asia Pacific: Australia, Japan, Singapore, Philippines, China, Thailand Subsidiaries Acquisitions LTI Mosaic LTI Mosaic is a data-to-decisions platform that offers data engineering, advanced analytics, knowledge-led automation and IoT connectivity using exponential technologies. In 2019, with the acquisition of Lymbyc, LTI added Leni, a virtual intelligent analyst, to its Mosaic portfolio. Leni lets users conversationally access information and insights at large scale; LTI claims it is the world's first virtual analyst. Awards and recognition In January 2019, LTI and ACORD announced a collaboration to drive digital adoption in the insurance industry. In November 2018, the Inclusive Tech Alliance recognized LTI's President of Sales, Sudhir Chaturvedi, among the top 100 most influential leaders in the UK tech sector. In January 2018, LTI CEO and MD Sanjay Jalona was chosen as Exemplary CEO of the Year by BW Business World. In March 2017, LTI was ranked amongst the 'Super 50' in the Dalal Street Investment Journal (DSIJ) 150 Wealth-Creators List. In March 2017, LTI was named a Top 15 Sourcing Service Provider by Information Services Group (ISG). In March 2016, LTI was positioned among Everest Group's Top 20 IT Services Providers. In February 2016, L&T Group Executive Chairman A.M. Naik was awarded an honorary Doctorate of Letters degree from the Veer Narmad South Gujarat University, in recognition of Mr. Naik's long and distinctive contribution to the nation and to society. In December 2015, LTI was named one of the 'Star Performers' and 'Major Contenders' in Everest Group's PEAK Matrix Assessment 2015 for global insurance service providers. LTI was ranked number 6 among Indian IT companies for 2013–2014, 2014–2015 and 2015–2016 in NASSCOM's ratings. Controversy In 2016, LTI management revoked the offer letters of several recruits after they had waited for a period of 18 months. 
This caused some disruption in Chennai's IT Corridor, and there were protests against the company's management in front of its Chennai office. See also List of IT consulting firms Larsen & Toubro L&T Technology Services References External links Companies based in Mumbai Software companies of India Information technology consulting firms of India Larsen & Toubro Software companies based in Mumbai Software companies established in 1996 1996 establishments in Maharashtra Indian companies established in 1996 Companies listed on the National Stock Exchange of India Companies listed on the Bombay Stock Exchange
63807
https://en.wikipedia.org/wiki/Shorten%20%28codec%29
Shorten (codec)
Shorten (SHN) is a file format used for compressing audio data. It is a lossless data compression format used to compress CD-quality audio files (44.1 kHz 16-bit stereo PCM). Shorten is no longer developed, and other lossless audio codecs such as FLAC, Monkey's Audio (APE), TTA, and WavPack (WV) have become more popular. However, Shorten is still in use by some people because there are legally traded concert recordings in circulation that are encoded as Shorten files. Shorten files use the .shn file extension. Handling Shorten files Since few players or media writers attempt to decompress Shorten files, a standalone decompression program is usually required to convert them to a different file format that those applications can handle. Some Rockbox applications can play Shorten files without decompression, and third-party Shorten plug-ins exist for Nero Burning ROM, Foobar2000, and Winamp. All libavcodec-based players and converters support the Shorten codec. Converting on Linux Current versions of ffmpeg or avconv support the Shorten format. To convert all .shn files in the current directory to FLAC on Linux: for f in *.shn; do ffmpeg -i "$f" "${f/%.shn/.flac}"; done There are also various GUI programs which can be used, such as SoundConverter. Converting on Windows A similar command using the freely available ffmpeg for the Microsoft Windows command line, converting all .shn files in the current directory and its subdirectories: for /r %i in (*.shn) do ffmpeg -i "%i" "%~dpni.flac" For a GUI-based solution, dBpoweramp can be used; however, on a 64-bit version of Windows the 32-bit version of the app must be installed, as the Shorten codec does not come in a 64-bit variant. To install the 32-bit version on a 64-bit system, hold down the right Shift key and double-click the installer; keep it held down until the installer is on-screen. Converting on macOS X Lossless Decoder (XLD), an open source graphical and command-line application powered by the libsndfile and SoX libraries, supports transcoding Shorten files to a variety of lossless and lossy formats. ffmpeg is also available and can be used from the terminal in the same way as on Linux. See also FLAC MPEG-4 ALS Meridian Lossless Packing Monkey's Audio (APE) TTA WavPack References External links Shorten Research Paper, written by the author of Shorten and detailing how it works. Trader's Little Helper Download page. Trader's Little Helper converts SHN to WAV, among other things. etree.org Wiki article. etree.org is a trading site for authorized recordings of live performances; etree formerly used Shorten exclusively but is increasingly using FLAC. Shorten FAQ (note: if looking for software to play .shn files, you will probably be better served by the etree software page, as the Shorten FAQ has many broken and outdated links.) Lossless audio formats, a performance comparison of lossless audio formats, including Shorten. A Small SHN and MD5 FAQ Includes a decent list of programs to handle Shorten files. Lossless audio codecs Cross-platform software
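For readers who want a single cross-platform alternative to the per-operating-system shell loops above, the following Python sketch performs the same batch conversion. It assumes that an ffmpeg build with Shorten support is installed and on the PATH; the starting directory is an example.

# Batch-convert .shn files to FLAC by calling ffmpeg (assumed to be on the PATH).
# Cross-platform alternative to the shell loops shown above.
import subprocess
from pathlib import Path

def convert_tree(root: str) -> None:
    for shn in Path(root).rglob("*.shn"):
        flac = shn.with_suffix(".flac")
        if flac.exists():
            continue  # skip files that have already been converted
        subprocess.run(["ffmpeg", "-n", "-i", str(shn), str(flac)], check=True)  # -n: never overwrite

convert_tree(".")  # convert everything under the current directory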
31291516
https://en.wikipedia.org/wiki/Pillai%27s%20Institute%20of%20Information%20Technology%20Engineering%2C%20Media%20Studies%20%26%20Research
Pillai's Institute of Information Technology Engineering, Media Studies & Research
Pillai College of Engineering (autonomous), formerly Pillai's Institute of Information Technology, Engineering, Management Studies and Research, is an engineering college in New Panvel, Navi Mumbai, Maharashtra, India. It was established in 1999 (with courses commencing from the academic year 2000–2001) as a self-financed Malayalam linguistic-minority autonomous institute affiliated to the University of Mumbai, approved by AICTE and recognised by the Government of Maharashtra. The college is recognized by the UGC under sections 12(b) and 2(f). It operates under the banner of the Mahatma Education Society (MES). It is commonly referred to as PCE, and also as PIIT or PIITE. Pillai College is considered one of the better engineering colleges in Navi Mumbai, with good infrastructure, faculty and a blend of students from Mumbai and Navi Mumbai. It is an autonomous institute affiliated to the University of Mumbai. Pillai College of Engineering was started as Pillai Institute of Information Technology, Engineering, Management Studies and Research, popularly known as PIIT, in the year 2000. The institute was renamed Pillai College of Engineering in 2016 and is recognized by the All India Council for Technical Education (AICTE), Government of India. Pillai College of Engineering is accredited with an A+ grade by the National Assessment and Accreditation Council (NAAC). PCE offers four-year Bachelor of Technology (B.Tech.) courses in Computer, Information Technology, Electronics and Telecommunication, Mechanical Engineering, Automobile Engineering, and Electronics Engineering streams. In addition, it offers two-year master's degree (M.E.) courses in Electronics, Mechanical (CAD/CAM & Robotics), Mechanical (Thermal Engineering), Computer Engineering and Information Technology. It also offers numerous value-added and skill courses to enhance the employability of its students. Fields Automobile Engineering Information Technology Computer Engineering Electronics & Telecommunication Engineering Mechanical Engineering Electronics and Computer Science Engineering The college has postgraduate (MTech) as well as PhD courses in electronics, mechanical engineering, computer engineering and information technology. Festival Alegria – The Festival of Joy is held annually at PCE. It is one of the biggest college festivals in Navi Mumbai. See also University of Mumbai List of Mumbai Colleges References External links Official website Education in Navi Mumbai Engineering colleges in Mumbai Affiliates of the University of Mumbai Educational institutions established in 2000 2000 establishments in Maharashtra
37706146
https://en.wikipedia.org/wiki/2011%20SL25
2011 SL25
2011 SL25 is an asteroid and Mars trojan candidate that shares the orbit of the planet Mars at its L5 point. Discovery, orbit and physical properties 2011 SL25 was discovered on 21 September 2011 at the Alianza S4 Observatory on Cerro Burek in Argentina and classified as a Mars-crosser by the Minor Planet Center. It follows a relatively eccentric orbit (eccentricity 0.11) with a semi-major axis of 1.52 AU. The object has a noticeable orbital inclination (21.5°). Its orbit was initially poorly constrained, with only 76 observations over 42 days, but the object was recovered in January 2014. 2011 SL25 has an absolute magnitude of 19.5, which gives a characteristic diameter of 575 m. Mars trojan and orbital evolution Recent calculations indicate that it is a stable Mars trojan with a libration period of 1400 yr and an amplitude of 18°. These values, as well as its short-term orbital evolution, are similar to those of 5261 Eureka. Origin Long-term numerical integrations show that its orbit is stable on Gyr time-scales (1 Gyr = 1 billion years). It appears to be stable for at least 4.5 Gyr, but its current orbit indicates that it has not been a dynamical companion to Mars for the entire history of the Solar System. See also 5261 Eureka (1990 MB) References Further reading Three new stable L5 Mars Trojans de la Fuente Marcos, C., de la Fuente Marcos, R. 2013, Monthly Notices of the Royal Astronomical Society: Letters, Vol. 432, Issue 1, pp. 31–35. Orbital clustering of Martian Trojans: An asteroid family in the inner solar system? Christou, A. A. 2013, Icarus, Vol. 224, Issue 1, pp. 144–153. External links 2011 SL25 data at MPC. Mars trojans Minor planet object articles (unnumbered) 20110921
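The characteristic diameter quoted above follows from the standard relation between an asteroid's diameter D (in km), its absolute magnitude H and its geometric albedo p: D = (1329 / sqrt(p)) * 10^(-H/5). The albedo of this object has not been measured, so the short calculation below only illustrates the conversion for a few assumed albedo values; an assumed albedo of roughly 0.08–0.09 reproduces the quoted 575 m.

# Illustrative conversion from absolute magnitude H to diameter, in metres.
# The albedo values are assumptions chosen for illustration, not measurements.
import math

def diameter_km(h_mag: float, albedo: float) -> float:
    return 1329.0 / math.sqrt(albedo) * 10 ** (-h_mag / 5.0)

for p in (0.05, 0.085, 0.15, 0.25):
    print(f"albedo {p:.3f}: {diameter_km(19.5, p) * 1000:.0f} m")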
36048359
https://en.wikipedia.org/wiki/Michael%20Lin%20%28mathematician%29
Michael Lin (mathematician)
Michael Lin () (born June 8, 1942) is an Israeli mathematician, who has published scientific articles in the field of probability concentrating on Markov chains and ergodic theory. He serves as professor emeritus at the Department of Mathematics in Ben-Gurion University of the Negev (BGU). Additionally, he is a member of the academic board and serves as the academic coordinator at Achva Academic College. Professor Lin is considered a Zionist, as he gave up a position at Ohio State University in order to promote the field of mathematics in Israel. Biography Michael Lin was born in Israel. He holds a Bachelor of Science in Mathematics and Physics from The Hebrew University of Jerusalem (1963), Master of Science in Mathematics (1967) and a PhD in Mathematics also from The Hebrew University of Jerusalem (1971). In 1971 he was appointed as an assistant professor in Ohio State University. In 1976 he returned to Israel and became a senior lecturer in the Department of Mathematics at Ben-Gurion University of the Negev. Only 4 years later, at 1979, he became an associate professor and in 1984 he became a full professor. In 2011, Professor Lin retired and nowadays he serves as professor emeritus. During his career at Ben-Gurion University of the Negev he acted as: Computer Science Coordinator, Department of Mathematics and Computer Science, BGU. Member of BGU Computer Policy committee. Chairman and Computer Science Coordinator, Department of Mathematics and Computer Science, BGU. Senate representative to Executive Committee of Board of Trustees of BGU. Senate representative to the BGU Executive Committee's subcommittee for student affairs. President, Israel Mathematical Union. Head of the Ethical Code Committee of BGU. In 2004 Professor Lin also acted as a member of the committee electing the recipients of the Israel Prize in mathematics. In addition to his academic activities, Professor Lin made a social-academic contribution, as he took a part in the 'Kamea Program'. The program helped immigrant scientists to continue working in their profession in the academy in Israel. Professor Lin assisted in the absorption of these immigrants in Ben-Gurion University of the Negev, specifically in the Department of Mathematics. Until his retirement, he was listed as an absorbing researcher of two scientists in his department. He was also responsible for the immigrants’ employment terms and insisted that they will be members of the Academic Staff Union. Additionally, Professor Lin was the university representative in a discussion regarding this program in the Israeli Parliament (knesset) and acted as an advisor regarding the newcomers’ academic seniority. Research and Publications Professor Lin's published work focuses on two main areas of research in the field of probability: Ergodic theory and Markov chain. More specifically, he researched in several areas: mean and individual Ergodic theory, Central limit theorem and functional analysis. Professor Lin has received various grants including: Israel Science Foundation (1995–1998) grant, Research Support Fund grant (2001–2002), Milken Families Foundation Chair in Mathematics grant (1989 until retirement). Among others, these grants supported his scientific research which has already yielded 85 scientific publications. References 1942 births Living people Israeli mathematicians Einstein Institute of Mathematics alumni
1313463
https://en.wikipedia.org/wiki/Vergilius%20Vaticanus
Vergilius Vaticanus
The Vergilius Vaticanus, also known as Vatican Virgil (Vatican, Biblioteca Apostolica, Cod. Vat. lat. 3225), is a Late Antique illuminated manuscript containing fragments of Virgil's Aeneid and Georgics. It was made in Rome in around 400 C.E., and is one of the oldest surviving sources for the text of the Aeneid. It is the oldest and one of only three ancient illustrated manuscripts of classical literature. Contents The two other surviving illustrated manuscripts of classical literature are the Vergilius Romanus and the Ambrosian Iliad. The Vergilius Vaticanus is not to be confused with the Vergilius Romanus (Vatican City, Biblioteca Apostolica, Cod. Vat. lat. 3867) or the unillustrated Vergilius Augusteus, two other ancient Vergilian manuscripts in the Biblioteca Apostolica. Virgil created a classic of Roman literature in the Aeneid. He used vivid locations and emotions to create images through poetry in the story. It discusses Aeneas and his Trojan comrades wandering into the seas until reaching their final destination in Italy after escaping the city of Troy after the Trojan War. The narrative includes themes of love, loss, and war. The Trojan War is an inspiration and a catalyst in a long line of epic poetry that came after Virgil. Attribution The illustrations were added by three different painters, all of whom used iconographic copybooks. The first worked on the Georgics and parts of the Eclogues; only two worked on the Aeneid. Each individual artist's illustrations are apparent based on their ability. The first artist is distinguished by his knowledge of spatial perspective and anatomy. His illustrations creates in the Georgics and Eclogues focus on his skill of creating distances and landscapes. The illustration of the herd being led to water is found in the artist's illustration of the Georgics. Each figure and object in the background is distinguishable with a realistic spatial arrangement. Compared to the first artist, the second artist, who worked on the Aeneid, lacks the same familiarity with spatial perspective. The inclusion of crowds with buildings, people and mountains creates a striking contrast in the discovery of Carthage. There are ideas of spatial perspective, realistic space and figures in the third artist's illustrations. He used his ability to depict a realistic background based on his skill in human anatomy. One example was where the deceased Dido lays in repose in her decorated bedchamber in the Lamentation Over Dido. Description There are 76 surviving leaves in the manuscript with 50 illustrations. If, as was common practice at the time, the manuscript contained all of the canonical works of Virgil, the manuscript would originally have had about 440 leaves and 280 illustrations. The illustrations are contained within frames and include landscapes and architectural and other details. Many of the folios survive in fragments. Some fragments are grouped in fours or fives. There are 50 damaged illustrations in poor condition. It is simple to reconstruct the original book based on each fragment. The canonical works of Virgil, containing 440 folios with 280 illustrations, was customary at the time containing no introductions. It is easy and handy to read. There is no evidence of a significantly older book which can be compared based on quality. Of the several editions of Virgil, the Vergilius Vaticanus is the first edition in codex form. It may have been copied from a set of scrolls, which caused a lack of clarity with the transmission of the text. 
There was a well-organized workshop that created the Vergilius Vaticanus. By leaving spaces at certain places in the text, a master scribe planned the inclusion of illustrations when copying the text. Out of tradition and convenience, there were iconographic models that were from three different artists who filled in the illustrations. For the painter to finish this work, a set of illustrated rolls was studied and adapted which were to serve as iconographic models for the Aeneid. These kinds of small illustrations were placed in the columns of text which was written in papyrus style on rolls. Text and script The text was written by a single scribe in rustic capitals. As was common at the time, there is no separation between words. One scribe used a late antique brown ink to write the entire text. Using eight-power magnification, the ink is grainless which appears smooth and well-preserved. The Vergilius Vaticanus is in very good condition, compared to other manuscripts from the same time period, which had been inefficiently prepared causing the parchment to break from oak gallnuts and ferrous sulphate. For distinctions to be made between thick and thin strokes the scribe used a trimmed broad pen. The pen was nearly held at 60 degrees where most strokes would occur. Illustrations The miniatures are set within the text column, although a few miniatures occupy a full page. The human figures are painted in classical style with natural proportions and drawn with vivacity. The illustrations often convey the illusion of depth quite well. The gray ground of the landscapes blend into bands of rose, violet, or blue to give the impression of a hazy distance. The interior scenes are based on earlier understanding of perspective, but occasional errors suggest that the artists did not fully understand the models used. The style of these miniatures has much in common with the surviving miniatures of the Quedlinburg Itala fragment and have also been compared to the frescos found at Pompeii. Each miniature had a proportional figure with a landscape creating a hazy effect, featuring classical architecture and clothing. Popular myths such as Hylas and the Nymphs were represented by the nine surviving illustrations of the Georgics through a command of pastoral and genre scenes by competent artists. The illustrations in the Aeneid are mixed with the Georgics; however, the Georgics’ artistic ingenuity is greater than the Aeneid illustrations. Without decorated frames and painted backgrounds or landscape settings, the illustrated verses used only the essentials for telling a story using minimal figures and objects. Neither framed nor painted in the background, the Vergilius Vaticanus uses a roll of illustrations in the Papyrus style. Miniatures One important miniature depicts Aeneas and Achates discovering Carthage (folio 13). The artist sacrificed style for pictorial accuracy in order to capture the city in its urgent progress and unity. There are two Trojans standing on a cliff surveying Carthage in an image that lacks perspective. Achates and Aeneas are identified with labels above their heads. Aeneas’ body provides material for examination such as deteriorated drapes of clothes which prevents Achates’ body from being examined. Aeneas’s awkwardly composed anatomy is created with the artist’s smooth and thick brushstrokes. The drapery unnaturally positions the legs after covering the body. Aeneas’s body, in contrast to Achates, implies speech based on the extended stance. 
Maintaining textual accuracy, there is a clumsy perspective on the right of Aeneas which outlines Carthage. Two workers and an overseer, in the quarry below Aeneas, extracts raw materials for construction. There are stonemasons being watched by the supervisors in the background. Only stone arches and walls are being displayed, not the actions in progress. Materials Preparing 220 sheets of parchment paper measured from 25 by 43 centimeters (10 by 17 in) was the first step in the bookmaking process. This is comparable to other luxurious manuscripts of the time, some of which required approximately 74 sheep in order for the manuscript to be created. Despite a few holes in the Vergilius Vaticanus, it remained in excellent quality. Parchment is very thin with a smooth polished surface which are described by being the best books of the period. The hair side is slightly yellow creating a curl while the side of the flesh is white. The usage of the parchment at extensive length will lead to deterioration, fortunately the parchment can be used at great length based on its stiffness. Provenance The manuscript was probably made for a pagan noble. This late antique illuminated manuscript remained actively used for at least 200 years in Italy. The first record of the almost complete manuscript showed up at the monastery of Saint-Martin in Tours during the second quarter of the ninth century. There are only a few corrections and replacement have been done on the text. Before the manuscript was dismembered and further disappeared, the illustrations from the manuscript were studied delicately by artists. Those artists later established one of the most influential painting schools of Carolingian painting and one of the artists actually copied two figures from the manuscript to use in the Vivian Bible. In 1448, the Italian humanist Giovanni Pontano studied and collected the manuscript. At that time, as the manuscript lost around 164 folios. In addition, several folios were appear to have been cut by someone else. In around 1514, the manuscript, after it disappeared and suffered more dismemberment, showed up in Rome in the circle of Raphael where several of the surviving illustrations were copied and adapted for other purpose. People feared that the manuscript would eventually completely deteriorate, so a copy of all the illustrations from the manuscript was created in the circle of Raphael (now Princeton MS104). In 1513, the manuscript was transferred to Rome, to the library of the humanist Pietro Bembo. When Pope Leo X died in 1521, Bembo retired to Padua and brought the manuscript with him. Next, Pietro Bembo passed down the manuscript to his son, Torquato Bembo. Finally in 1579, the manuscript returned to Rome and some folios got trimmed down. Fulvio Orsini bought the manuscript from Torquato Bembo. Fulvio Orsini eventually bequeathed his library to the Vatican library in 1600.  Shortly after 1642, the manuscript was altered due to rebinding the codex and added patches on the parchment. Facsimiles The manuscript was published with color reproductions in 1980. Print Facsimile: Wright, David H. Vergilius Vaticanus: vollständige Faksimile-Ausgabe im Originalformat des Codex Vaticanus Latinus 3225 der Biblioteca Apostolica Vaticana. Graz, Austria: Akademische Druck- u. Verlagsanstalt, 1984. 
Digital Facsimile: Vatican Library All the illustrations are online, with commentary, in Wright, David H., The Vatican Vergil, a Masterpiece of Late Antique Art, Berkeley, University of California Press, 1993, google books, full online text Notes References Calkins, Robert G. Illuminated Books of the Middle Ages. Ithaca, New York: Cornell University Press, 1983. Walther, Ingo F. and Norbert Wolf. Codices Illustres: The world's most famous illuminated manuscripts, 400 to 1600. Köln, TASCHEN, 2005. Weitzmann, Kurt. Late Antique and Early Christian Book Illumination. New York: George Braziller, 1977. Weitzmann, Kurt, ed., Age of spirituality: late antique and early Christian art, third to seventh century, no. 203 & 224, 1979, Metropolitan Museum of Art, New York, ; full text available online from The Metropolitan Museum of Art Libraries Stevenson, Thomas B. Miniature decoration in the Vergilius Vaticanus : a study in late antique iconography. Tübingen, Verlag E. Wasmuth, 1983. David Wright, “From Copy to Facsimile: A Millennium of Studying the Vatican Vergil,” The British Library Journal, Vol. 17, No. 1 (Spring 1991), 12-35. Wright, David H., The Vatican Vergil, a Masterpiece of Late Antique Art . Berkeley, University of California Press, 1993, google books, full online text Wright, David H. Codicological notes on the Vergilius Romanus (Vat. lat. 3867). Vatican City, Biblioteca apostolica vaticana, 1992. External links Complete reproduction of Vergilius Vaticanus (Digital version at DigitaVaticana) More information at Earlier Latin Manuscripts Italian poetry collections Literary illuminated manuscripts Aeneid 5th-century illuminated manuscripts Manuscripts of the Vatican Library
67064985
https://en.wikipedia.org/wiki/Vir%20Phoha
Vir Phoha
Vir Virander Phoha is a professor of electrical engineering and computer science at the Syracuse University College of Engineering and Computer Science. Phoha is known for developing practicable foundations of behavioral biometrics for active and continuous authentication. His research focuses on attack-averse authentication, spoof resistance, anomaly detection, machine learning, optimized attack formulation, and spatial-temporal pattern detection and event recognition. Phoha's work also provides protection for many classified information systems, and his inventions have resulted in the widespread commercial use of active authentication biometric methods. Education Phoha earned his MSc in mathematical statistics at Kurukshetra University in Kurukshetra, India. He came to the United States in 1988 as a graduate student at Texas Tech University, where he worked under William J. B. Oldham. His 1992 PhD thesis was titled "Self-repair and adaptation in collective and parallel computational networks". Career Phoha began his career as a professor of computer science at the University of Central Texas, now Texas A&M University–Central Texas, and was a faculty member at Northeastern State University in Tahlequah, Oklahoma. Later, he was a professor of computer science and the director of the Center for Secure Cyberspace at Louisiana Tech University in Ruston, Louisiana. In 2015, Phoha was appointed professor of electrical engineering and computer science in the L.C. Smith College of Engineering and Computer Science at Syracuse University. Phoha has published 250 papers and six books on security-related topics and holds 14 U.S. patents in behavioral authentication. Phoha serves as an associate editor of the journals Digital Threats: Research and Practice and Transactions on Computational Social Systems. Awards In 2008, Phoha was named a Distinguished Scientist by the Association for Computing Machinery. In 2017, Phoha was awarded the IEEE Region 1 Technological Innovation in Academia Award for his contributions to authentication using behavioral biometrics. He is also a fellow of the Society for Design and Process Science (SDPS). In 2018, he was elected a fellow of the American Association for the Advancement of Science. In 2020, Phoha was elected a National Academy of Inventors fellow. References External links Official Website 20th-century Indian engineers Year of birth missing (living people) American computer scientists Fellows of the American Association for the Advancement of Science Indian emigrants to the United States Living people Texas Tech University alumni Texas A&M University faculty Northeastern State University faculty Louisiana Tech University faculty Syracuse University faculty Fellows of the National Academy of Inventors
29016797
https://en.wikipedia.org/wiki/Mohammad%20S.%20Obaidat
Mohammad S. Obaidat
Mohammad Salameh Obaidat is a Jordanian-American academic, computer engineer, and computer scientist, and the founding dean of the College of Computing and Informatics at the University of Sharjah, UAE. He is a past president and chair of the board of directors, and a Fellow, of the Society for Modeling and Simulation International (SCS), and a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He was born in Jordan to the well-known Obaidat family and is a cousin of the former Prime Minister of Jordan, Ahmed Obaidat (also spelled Obeidat). He received his M.S. and Ph.D. in computer engineering from the Ohio State University, Columbus, Ohio, USA. He is known for his contributions in the fields of cybersecurity, biometrics-based cybersecurity, wireless networks, modeling and simulation, and AI/data analytics. He served as president and chair of the board of directors of the Society for Modeling and Simulation International (SCS), as a tenured professor and chair of the Department of Computer Science at Monmouth University, as a tenured professor and chair of the Department of Computer and Information Sciences at Fordham University, USA, as dean of the College of Engineering at Prince Sultan University, and as advisor to the president of Philadelphia University for research, development and IT. He has chaired numerous international conferences and has given numerous keynote speeches. Biography Obaidat was born in Kufr Soum, Irbid, Jordan, to the well-known Obaidat family. He is a cousin of the former Prime Minister of Jordan, Ahmed Obaidat (also spelled Obeidat). He obtained his M.S. and Ph.D. degrees from the Ohio State University, Columbus, Ohio, USA, where he studied on a scholarship. After graduation he worked briefly at Jordan University of Science and Technology (JUST), then moved to the USA and worked at several universities as an assistant professor, associate professor and full professor, as well as an academic leader, including the City University of New York, Monmouth University and Fordham University. During the 2014/2015 academic year, he was awarded the Fulbright Distinguished Award and served in Jordan as advisor to the president of Philadelphia University for research, development and IT; the latter, Dr. Adnan Badran, became the Prime Minister of Jordan in 2005. He has received extensive research funding and has published, to date (2020), about 1,000 refereed technical articles (about half of them journal articles), over 95 books, and 55 book chapters. He is the Editor-in-Chief of the Wiley International Journal of Communication Systems, the founding Editor-in-Chief of the Wiley Security and Privacy journal, and an editor of other international journals. Moreover, he is the founder or co-founder of five international conferences: SPECTS, CITS, CCCI, DCNET and SIMULTECH. Among his previous positions are advisor to the former president of Philadelphia University (HE Dr. 
Adnan Badran) for Research, Development and Information Technology (who became the Prime Minister of Jordan in 2005), president and chair of the board of directors of the Society for Modeling and Simulation International (SCS), senior vice president of SCS, dean of the College of Engineering at Prince Sultan University, chair and tenured professor of the Department of Computer and Information Science and director of the M.S. graduate program in data analytics at Fordham University, chair and tenured professor of the Department of Computer Science and director of the graduate program at Monmouth University, tenured full professor at the King Abdullah II School of Information Technology, University of Jordan, the PR of China Ministry of Education Distinguished Overseas Professor at the University of Science and Technology Beijing, China, and an honorary distinguished professor at Amity University, a global university. He is now the founding dean of the College of Computing and Informatics at the University of Sharjah, UAE. He has held visiting professorships at various universities, including Aberdeen University, UK, INSA-Rouen, France, Philadelphia University, Jordan, University of Seville, Spain, University of Oviedo, Spain, National Ilan University, Taiwan, Tamkang University, Taiwan, Korea Advanced Institute of Science and Technology (KAIST), South Korea, University of Girona, Spain, Genoa University, Italy, KFUPM, Saudi Arabia, Carthage University, Tunisia, EMI, Morocco, Torino Polytechnic, Italy, Boğaziçi University, Turkey, International Islamic University Malaysia, Prince Sultan University, Saudi Arabia, Beihang University, China, Nanjing University of Posts & Telecommunications, China, Beijing University of Posts and Telecommunications (BUPT), China, University of Science and Technology Beijing, China, Fudan University, China, University of Calabria, Italy, Khalifa University, UAE, Huazhong University of Science and Technology, China, University of Haute Alsace, France, Indian Institute of Technology-Dhanbad, India, RV College of Engineering and Technology, India, Sri Padmavati Mahila University Tirupati, India, Harbin Institute of Technology, China, Jaypee University of Information Technology, India, and Haldia Institute of Technology, India. He has chaired numerous international conferences and has given numerous keynote speeches worldwide. He has served as an ABET/CSAB evaluator and on the IEEE CS Fellow Evaluation Committee. He has served as an IEEE CS Distinguished Speaker/Lecturer and an ACM Distinguished Lecturer, and since 2004 he has been serving as an SCS Distinguished Lecturer. He has received many best paper awards, including ones from the IEEE ICC 2018, IEEE Globecom 2009, AICSA 2009, CITS 2015, CITS 2019, SPECTS 2018 and DCNET 2011 international conferences. He also received Best Paper awards from the IEEE Systems Journal in 2018 and in 2019 (two Best Paper awards), and in 2020 he received four best paper awards from the IEEE Systems Journal. 
He also received many other worldwide awards for his technical contributions, including the 2018 IEEE ComSoc Technical Committee on Communications Software Technical Achievement Award for contributions to cybersecurity, wireless networks, computer networks, and modeling and simulation; the prestigious SCS McLeod Founder's Award; the Presidential Service Award; and the SCS Hall of Fame Lifetime Achievement Award for his technical contributions to modeling and simulation and for his outstanding visionary leadership and dedication to increasing the effectiveness and broadening the applications of modeling and simulation worldwide. He also received the SCS Outstanding Service Award. He was awarded the IEEE CITS Hall of Fame Distinguished and Eminent Award. He is a Life Fellow of IEEE and a Fellow of SCS. Editorship of Journals Since 1997, Obaidat has been the editor-in-chief of the International Journal of Communication Systems, published by John Wiley & Sons. He has also been the founding editor-in-chief of the Wiley Security and Privacy journal since 2018, and an editor of other international journals. Obaidat is the editor of the Journal of Convergence, published by FTRA, and of the Journal of Information Processing Systems (KIPS). He is currently editor, advisory editor, or editorial board member of numerous other scholarly journals and transactions, including IEEE Systems Journal, IEEE Wireless Communications, Simulation: Transactions of the Society for Modeling & Simulation International, the Elsevier Computer Communications journal, and the Springer Journal of Supercomputing, among others. He also served as editor of IEEE Transactions on SMC, Parts A, B and C. Prof. Obaidat has chaired a number of international conferences and served as an advisor to many international conferences and organizations. He has given numerous keynote speeches and invited talks worldwide. Awards Obaidat has received many awards, including best paper awards from the IEEE ICC 2018, IEEE Globecom 2009, AICSA 2009, CITS 2015, CITS 2019, SPECTS 2018 and DCNET 2011 international conferences. He also received Best Paper awards from the IEEE Systems Journal in 2018 and in 2019 (two Best Paper awards), and in 2020 he received four best paper awards from the IEEE Systems Journal. His other worldwide awards for technical contributions include the 2018 IEEE ComSoc Technical Committee on Communications Software Technical Achievement Award for contributions to cybersecurity, wireless networks, computer networks, and modeling and simulation; the prestigious SCS McLeod Founder's Award; the Presidential Service Award; and the SCS Hall of Fame Lifetime Achievement Award for his technical contributions to modeling and simulation and for his outstanding visionary leadership and dedication to increasing the effectiveness and broadening the applications of modeling and simulation worldwide. He also received the SCS Outstanding Service Award, the Nokia Research Fellowship Award, and the Fulbright Distinguished Scholar Award. He was awarded the IEEE CITS Hall of Fame Distinguished and Eminent Award and is an IEEE Life Fellow. References 1. "Archived copy". Archived from the original on 2012-03-13. Retrieved 2010-10-04. 2. "Fellow & Distinguished Lectureship". 3. "International Journal of Communication Systems". International Journal of Communication Systems. 2008. CiteSeerX 10.1.1.606.6233. doi:10.1002/(ISSN)1099-1131. 4. The Society for Modeling & Simulation International. "Awards | The Society for Modeling & Simulation International". Scs.org. 
Retrieved 2010-10-02. 5. "Archived copy". Archived from the original on 2011-02-22. Retrieved 2010-10-04. 6. "IEEE Fellows Directory - Alphabetical Listing". services27.ieee.org. Retrieved January 25, 2019. 7. IEEE Systems Journal Best Paper Awards: https://ieeesystemscouncil.org/awards/systems-journal-best-paper-award?page=1 8. University of Sharjah 9. Founding Dean of College of Computing and Informatics 10. Fulbright Distinguished Award 11. 2018 IEEE ComSoc Technical Committee on Communications Software Technical Achievement Award 12. IEEE ICC 2018 13. IEEE Globecom 2009 14. Wiley Security and Privacy Journal https://en.wikipedia.org/wiki/List_of_Arab_Americans American computer scientists 21st-century American engineers Fellow Members of the IEEE Living people Modeling and simulation Ohio State University alumni International Islamic University Malaysia faculty Fordham University faculty American people of Jordanian descent University of Oviedo faculty Year of birth missing (living people)
23347
https://en.wikipedia.org/wiki/Paul%20Allen
Paul Allen
Paul Gardner Allen (January 21, 1953 – October 15, 2018) was an American business magnate, computer programmer, researcher, investor, and philanthropist. He co-founded Microsoft Corporation with childhood friend Bill Gates in 1975, which helped spark the microcomputer revolution of the 1970s and 1980s. Microsoft became the world's largest personal computer software company. Allen was ranked as the 44th-wealthiest person in the world by Forbes in 2018, with an estimated net worth of $20.3 billion at the time of his death. Allen left regular work at Microsoft in early 1983 after a Hodgkin lymphoma diagnosis, remaining on its board as vice-chairman. He and his sister, Jody Allen, founded Vulcan Inc. in 1986, a privately held company that managed his business and philanthropic efforts. He had a multi-billion dollar investment portfolio, including technology and media companies, scientific research, real estate holdings, private space flight ventures, and stakes in other sectors. He owned the Seattle Seahawks of the National Football League and the Portland Trail Blazers of the National Basketball Association, and was part-owner of the Seattle Sounders FC of Major League Soccer. In 2000 he resigned from his position on Microsoft's board and assumed the post of senior strategy advisor to the company's management team. Allen founded the Allen Institutes for Brain Science, Artificial Intelligence and Cell Science, as well as companies like Stratolaunch Systems and Apex Learning. He gave more than $2 billion to causes such as education, wildlife and environmental conservation, the arts, healthcare, and community services. In 2004, he funded the first crewed private spaceplane with SpaceShipOne. He received numerous awards and honors, and was listed among the Time 100 Most Influential People in the World in 2007 and 2008. Allen was diagnosed with Non-Hodgkin lymphoma in 2009. He died of septic shock related to cancer on October 15, 2018, at the age of 65. Early life Allen was born on January 21, 1953, in Seattle, Washington, to Kenneth Sam Allen and Edna Faye (née Gardner) Allen. From 1965 to 1971 he attended Lakeside School, a private school in Seattle where he befriended Bill Gates, with whom he shared an enthusiasm for computers. They used Lakeside's Teletype terminals to develop their programming skills on several time-sharing computer systems. They also used the laboratory of the Computer Science Department of the University of Washington for personal research and computer programming until they were banned in 1971 for abusing their privileges. Gates and Allen joined with Ric Weiland and Gates' childhood best friend and first collaborator, Kent Evans, to form the Lakeside Programming Club and find bugs in Computer Center Corporation's software, in exchange for extra computer time. In 1972, after Evans' sudden death due to a mountain climbing accident, Gates turned to Allen for help finishing an automated class scheduling system for Lakeside. They then formed Traf-O-Data to make traffic counters based on the Intel 8008 processor. According to Allen, he and Gates would go dumpster diving during their teenage years for computer program code. Allen achieved a perfect SAT score of 1600 and went to Washington State University, where he joined the Phi Kappa Theta fraternity. He dropped out of college after two years to work as a programmer for Honeywell in Boston near Harvard University where Gates was enrolled. Allen convinced Gates to drop out of Harvard in order to found Microsoft. 
Microsoft Allen and Gates formed Microsoft in 1975 in Albuquerque, New Mexico, and began marketing a BASIC programming language interpreter, with their first employee being high school friend and collaborator Ric Weiland. Allen came up with the name of "Micro-Soft", a combination of "microcomputer" and "software". Microsoft committed to delivering a disk operating system (DOS) to IBM for the original IBM PC in 1980, although they had not yet developed one, and Allen spearheaded a deal for Microsoft to purchase QDOS (Quick and Dirty Operating System) written by Tim Paterson who was employed at Seattle Computer Products. As a result of this transaction, Microsoft secured a contract to supply the DOS that ran on IBM's PC line, which opened the door to Allen's and Gates' wealth and success. The relationship between Allen and Gates became strained as they argued even over small things. Allen effectively left Microsoft in 1982 after being diagnosed with Hodgkin's lymphoma, though he remained on the board of directors as vice chairman. Gates reportedly asked Allen to give him some of his shares to compensate for the higher amount of work that Gates was doing. According to Allen, Gates said that he "did almost everything on BASIC" and the company should be split 60–40 in his favor. Allen agreed to this arrangement, which Gates later renegotiated to 64–36. In 1983, Gates tried to buy Allen out at $5 per share, but Allen refused and left the company with his shares intact; this made him a billionaire when Microsoft went public. Gates later repaired his relationship with Allen, and the two men donated $2.2 million to their childhood school Lakeside in 1986. They remained friends for the rest of Allen's life. Allen resigned from his position on the Microsoft board of directors on November 9, 2000, but he remained as a senior strategy advisor to the company's executives. In January 2014, he still held 100 million shares of Microsoft. Businesses and investments Financial and technology Vulcan Capital is an investment arm of Allen's Seattle-based Vulcan Inc., which has managed his personal fortune. In 2013, Allen opened a new Vulcan Capital office in Palo Alto, California, to focus on making new investments in emerging technology and internet companies. Patents: Allen held 43 patents from the United States Patent and Trademark Office. Apps: Allen backed A.R.O., the startup behind the mobile app Saga; SportStream, a social app for sports fans; and a content-management app called Fayve. Interval Research Corporation: In 1992, Allen and David Liddle co-founded Interval Research Corporation, a Silicon Valley-based laboratory and new business incubator that was dissolved in 2000 after generating over 300 patents, four of which were the subject of Allen's August 2010 patent infringement lawsuit against AOL, Apple, eBay, Facebook, Google, Netflix, Office Depot, OfficeMax, Staples, Yahoo!, and YouTube. Ticketmaster: In November 1993, Allen invested more than $325 million to acquire 80% of Ticketmaster. In 1997, Home Shopping Network acquired 47.5% of Allen's stock in exchange for $209 million worth of their own stock. Charter Communications: In 1998, Allen bought a controlling interest in Charter Communications. Charter filed for bankruptcy reorganization in 2009, with Allen's loss estimated at $7 billion. Allen kept a small stake after Charter emerged from reorganization, worth $535 million in 2012. 
The company's 2016 purchase and subsequent merger of Time Warner Cable with Charter's subsidiary, Spectrum, made Charter Communications the second-largest cable company in the U.S. Aerospace Allen confirmed that he was the sole investor behind aerospace engineer and entrepreneur Burt Rutan's SpaceShipOne suborbital commercial spacecraft on October 4, 2004. The craft was developed and flown by Mojave Aerospace Ventures, a joint venture between Allen and Rutan's aviation company, Scaled Composites. SpaceShipOne climbed to suborbital altitude over the Mojave Air and Space Port and was the first privately funded effort to successfully put a civilian in suborbital space. It won the Ansari X Prize competition and received the $10 million prize. On December 13, 2011, Allen announced the creation of Stratolaunch Systems, based at the Mojave Air and Space Port. The Stratolaunch is a proposed orbital launch system consisting of a dual-bodied, six-engine jet aircraft capable of carrying a rocket to high altitude; the rocket would then separate from its carrier aircraft and fire its own engines to complete its climb into orbit. If successful, this project would be the first wholly privately funded space transport system. Stratolaunch, which partnered with Orbital ATK and Scaled Composites, is intended to be able to launch in inclement weather, fly without concern for the availability of launch pads, and operate from different locations. Stratolaunch plans to ultimately host six to ten missions per year. On April 13, 2015, Vulcan Aerospace was announced. It is the company within Allen's Vulcan Inc. that plans and executes projects to shift how the world conceptualizes space travel through cost reduction and on-demand access. On April 13, 2019, the Stratolaunch aircraft made its maiden flight, reaching 165 kn (305 km/h) in a 2 h 29 min flight. Stratolaunch CEO Jean Floyd offered this comment: "We dedicate this day to the man who inspired us all to strive for ways to empower the world's problem-solvers, Paul Allen. Without a doubt, he would have been exceptionally proud to see his aircraft take flight". Upon its flight, the airplane became the largest in history by wingspan. As of the end of May 2019, Stratolaunch Systems Corporation had ceased operations. Real estate Allen's Vulcan Real Estate division offers development and portfolio management services, and is known for the redevelopment of the South Lake Union neighborhood immediately north of downtown Seattle. Vulcan has developed new residential, office, retail and biotechnology research space in the neighborhood, with capacity for further development. Vulcan advocated for the Seattle Streetcar line known as the South Lake Union Streetcar, which runs from Seattle's Westlake Center to the south end of Lake Union. In 2012, The Wall Street Journal called Allen's South Lake Union investment "unexpectedly lucrative" and one that led to his firm selling an office complex to Amazon.com for US$1.16 billion, one of the most expensive office deals ever in Seattle. "It's exceeded my expectations", Allen said of the South Lake Union development. Venues Sports and event centers: Allen funded the development of Portland's Moda Center, which he purchased in 2007. He also contributed $130 million to help build CenturyLink Field in Seattle. Seattle Cinerama: Allen purchased Seattle's historic Cinerama Theater in 1998, and upgraded it with 3-D capability and digital sound, in addition to interior and exterior refurbishing. 
The theater installed the world's first commercial digital laser projector in 2014. Hospital Club: Allen opened the Hospital Club in London in 2004 as a professional and social hub for people working in the creative arts. A second location in Los Angeles is under construction. Sports team ownership Portland Trail Blazers Allen purchased the Portland Trail Blazers NBA team in 1988 from California real estate developer Larry Weinberg for $70 million. He was instrumental in the development and funding of the Moda Center (previously known as the Rose Garden), the arena where the Blazers play. He purchased the arena on April 2, 2007, and stated that this was a major milestone and a positive step for the franchise. The Allen-owned Trail Blazers reached the playoffs 19 times, including NBA Finals appearances in 1990 and 1992. According to Forbes, the Blazers were valued at $2.09 billion in 2021 and ranked No. 13 out of 30 NBA teams. Seattle Seahawks Allen purchased the NFL's Seattle Seahawks in 1997 from owner Ken Behring, who had attempted to move the team to southern California the previous year. Herman Sarkowsky, a former Seahawks minority owner, told The Seattle Times about Allen's decision to buy the team, "I'm not sure anybody else in this community would have done what [Allen] did." In 2002, the team moved into Seahawks Stadium (now known as Lumen Field), after Allen invested in upgrading the stadium. Acquired for US$200 million in 1997, the Seahawks were valued at $1.33 billion in August 2014 by Forbes, which says the team has "one of the most rabid fan bases in the NFL". Under Allen's ownership, the Seahawks made the Super Bowl three times following NFC Championship victories (2005, 2013, 2014), and won Super Bowl XLVIII in February 2014. Seattle Sounders FC Allen's Vulcan Sports & Entertainment is part of the ownership team of the Seattle Sounders FC, a Major League Soccer (MLS) franchise that began play in 2009 at CenturyLink Field, a stadium which was also controlled by Allen. The ownership team also includes film producer Joe Roth, businessman Adrian Hanauer, and comedian Drew Carey. The Sounders sold out every home game during their first season, setting a new MLS record for average match attendance. Filmmaking Allen and his sister, Jody Allen, together were the owners and executive producers of Vulcan Productions, a television and film production company headquartered in Seattle within the entertainment division of Vulcan Inc. Their films received recognition ranging from a Peabody Award to Independent Spirit Awards, Grammys and Emmys. In 2014 alone, Allen's film We The Economy won 12 awards, including a Webby Award for best Online News & Politics Series. The films have also been nominated for Golden Globes and Academy Awards, among many others. Vulcan Productions' films and documentary projects include Far from Heaven (2002), Hard Candy (2005), Rx for Survival: A Global Health Challenge (2005), Where God Left His Shoes (2006), Judgment Day: Intelligent Design on Trial (2007), This Emotional Life (2010), We The Economy (2014), Racing Extinction (2015) and the Oscar-nominated Body Team 12 (2015). In 2013, Vulcan Productions co-produced the Richard E. Robbins-directed film Girl Rising, which tells the stories of girls from different parts of the world who seek an education. Globally, over 205 million households watched Girl Rising during the CNN premiere, and over 4 million people have engaged with Girl Rising through websites and social media. 
Through the associated 10×10 program, over $2.1 million has been donated to help girls worldwide receive an education. Also in 2013, Vulcan Productions signed on as a producing partner of Pandora's Promise, a documentary about nuclear power directed by Oscar-nominated director Robert Stone. It was released on CNN in November 2013, and a variety of college and private screenings as well as panel discussions have been hosted throughout the country. Philanthropy Allen gave more than $2 billion towards the advancement of science, technology, education, wildlife conservation, the arts, and community services in his lifetime. The Paul G. Allen Family Foundation, which he founded with his sister Jody, was established to administer a portion of Allen's philanthropic contributions. Since its formation, the foundation has given more than $494 million to over 1,500 nonprofits, and in 2010 Allen became a signatory of The Giving Pledge, promising to give at least half of his fortune to philanthropic causes. Allen received commendations for his philanthropic commitments, including the Andrew Carnegie Medal of Philanthropy and Inside Philanthropy's "Philanthropist of the Year". Science and research In September 2003, Allen launched the Allen Institute for Brain Science with a $100 million contribution dedicated to understanding how the human brain works. In total, Allen donated $500 million to the institute, making it his single largest philanthropic recipient. Since its launch, the Allen Institute for Brain Science has taken a Big Science and open science approach to tackling projects. The institute makes research tools available to the scientific community using an open data model. Some of the institute's projects include the Allen Mouse Brain Atlas, the Allen Human Brain Atlas and the Allen Mouse Brain Connectivity Atlas. The Allen Institute is also helping to advance and shape the White House's BRAIN Initiative as well as the Human Brain Project. Founded in 2014, the Allen Institute for Artificial Intelligence (AI2) focuses on researching and engineering artificial intelligence. The institute is modeled after the Allen Institute for Brain Science and led by researcher and professor Dr. Oren Etzioni. AI2 has undertaken four main projects: Aristo, Semantic Scholar, Euclid, and Plato. Project Aristo is working to build an AI system capable of passing an 8th-grade science exam. In December 2014, Allen committed $100 million to create the Allen Institute for Cell Science in Seattle. The institute is investigating and creating a virtual model of cells in the hope of advancing treatments for different diseases. Like the institutes before it, all data generated and tools developed will be made publicly available online. Launched in 2016 with a $100 million commitment, the Paul G. Allen Frontiers Group aims to discover and support ideas at the frontier of bioscience in an effort to accelerate the pace of discovery. The group targets scientists and research areas that "some might consider out-of-the-box at the very edges of knowledge". Allen launched the Allen Distinguished Investigators Awards (ADI) in 2010 to support scientists pursuing early-stage research projects who often have difficulty securing funding from traditional sources. Allen donated the seed money to build SETI's Allen Telescope Array, eventually contributing $30 million to the project. Paul Allen's flower fly was named in recognition of his contributions to dipterology. 
Environment and conservation Allen provided more than $7 million to fund a census of elephant populations in Africa, the largest such endeavour since the 1970s. The Great Elephant Census team flew over 20 countries to survey African savannah elephants. The survey results were published in 2015 and showed rapid and accelerating rates of decline. He began supporting the University of British Columbia's Sea Around Us Project in 2014 to improve data on global fisheries as a way to fight illegal fishing. Part of his $2.6 million in funding went towards the creation of FishBase, an online database about adult finfish. Allen funded the Global FinPrint initiative, launched in July 2015, a three-year survey of sharks and rays in coral reef areas. The survey is the largest of its kind and is designed to provide data to help conservation programs. Allen backed Washington state initiative 1401 to prohibit the purchase, sale and distribution of products made from 10 endangered species, including elephants, rhinos, lions, tigers, leopards, cheetahs, marine turtles, pangolins, sharks and rays. The initiative gained enough signatures to be on the state's ballot on November 3, 2015, and passed. Alongside the United States Department of Transportation (USDOT), Allen and Vulcan Inc. launched the Smart City Challenge, a contest inviting American cities to transform their transportation systems. Created in 2015 with the USDOT's $40 million commitment as well as $10 million from Allen's Vulcan Inc., the challenge aims to create a first-of-its-kind modern city that will demonstrate how cities can improve quality of life while lowering greenhouse gas emissions. The winning city was Columbus, Ohio. As a founding member of the International SeaKeepers Society, Allen hosted its proprietary SeaKeeper 1000 oceanographic and atmospheric monitoring system on all three of his megayachts. Allen funded the building of microgrids, which are small-scale power grids that can operate independently, in Kenya to help promote renewable energy and empower its businesses and residents. He was an early investor in Mawingu Networks, a wireless and solar-powered Internet provider which aims to connect rural Africa with the world, and in Off Grid Electric, a company focused on providing solar energy to people in emerging nations. Ebola In 2014, Allen pledged at least $100 million toward the fight to end the Ebola virus epidemic in West Africa, making him the largest private donor in the Ebola crisis. He also created a website called TackleEbola.org as a way to spread awareness and serve as a vehicle for donors to fund projects in need. The site highlighted organizations working to stop Ebola that Allen supported, such as the International Red Cross and Red Crescent Movement, Médecins Sans Frontières, Partners in Health, UNICEF and World Food Program USA. On April 21, 2015, Allen brought together key leaders in the Ebola fight at the Ebola Innovation Summit in San Francisco. The summit aimed to share key learnings and reinforce the need for continued action and support to reduce the number of Ebola cases to zero, which was achieved in January 2016. In October 2015, the Paul G. Allen Family Foundation announced it would award seven new grants totaling $11 million to prevent future widespread outbreaks of the virus. 
Exploration In 2012, along with his research team and the Royal Navy, Allen attempted to retrieve the ship's bell from HMS Hood, which sank in the Denmark Strait during World War II, but the attempt failed due to poor weather. On August 7, 2015, they tried again and recovered the bell in very good condition. It was restored and put on display in May 2016 in the National Museum of the Royal Navy, Portsmouth, in remembrance of the 1,415 crewmen lost. From 2015, Allen funded the research ship RV Petrel, which he purchased in 2016. The project team aboard Petrel was responsible for locating the Japanese battleship Musashi in 2015. In 2017, at Allen's direction, Petrel found the wreck of USS Indianapolis, as well as wrecks from the Battle of Surigao Strait and the Battle of Ormoc Bay. In 2018, Petrel found a lost US Navy C-2A Greyhound aircraft in the Philippine Sea, the wreck of USS Lexington in the Coral Sea, and the wreck of USS Juneau off the coast of the Solomon Islands. The arts Allen established non-profit community institutions to display his collections of historic artifacts. These include: Museum of Pop Culture, or MoPOP, a nonprofit museum dedicated to contemporary popular culture, housed in a Frank Gehry-designed building at Seattle Center and established in 2000. Flying Heritage Collection, which showcases restored vintage military aircraft and armaments primarily from the World War II era, established in 2004. STARTUP Gallery, a permanent exhibit at the New Mexico Museum of Natural History and Science in Albuquerque dedicated to the history of the microcomputer, established in 2007. Living Computers: Museum + Labs, a collection of vintage computers in working order and available for interactive sessions on-site or through networked access, opened to the public in 2012. An active art collector, Allen gifted more than $100 million to support the arts. On October 15, 2012, Americans for the Arts gave Allen the Eli and Edythe Broad Award for Philanthropy in the Arts. Allen loaned out more than 300 pieces from his private art collection to 47 different venues. The original 541-page typescript of Bram Stoker's novel Dracula was in his collection at one point. In 2013, Allen sold Barnett Newman's Onement VI (1953) at Sotheby's in New York for $43.8 million, then the record for a work by the abstract artist. In 2015, Allen founded the Seattle Art Fair, a four-day event with 60-plus galleries from around the world, including the Gagosian Gallery and David Zwirner. The event drew thousands and inspired other satellite fairs throughout the city. In August 2016, Allen announced the launch of Upstream Music Fest + Summit, an annual festival fashioned after South by Southwest. Held in Pioneer Square, the first festival took place in May 2017. 
The gift was the largest private donation in the university's history. In 2016, Allen pledged a $10 million donation over four years for the creation of the Allen Discovery Centers at Tufts University and Stanford University. The centers would fund research that would read and write the morphogenetic code. Over eight years the donation could be as much as $20 million. In 2017, Allen donated $40 million (with an additional $10 million from Microsoft) to reorganize the University of Washington's Computer Science and Engineering department into the Paul G. Allen School of Computer Science and Engineering. Personal life While Allen expressed interest in romantic love and one day having a family, he never married and had no children. His marriage plans with his first girlfriend were cancelled as he felt he "was not ready to marry at 23". He was sometimes considered reclusive. Music Allen received his first electric guitar at the age of sixteen, and was inspired to play it by listening to Jimi Hendrix. In 2000, Allen played rhythm guitar on the independently produced album Grown Men. In 2013, he had a major label release on Sony's Legacy Recordings: Everywhere at Once by Paul Allen and the Underthinkers. PopMatters.com described Everywhere at Once as "a quality release of blues-rock that's enjoyable from start to finish". On February 7, 2018, an interview with Quincy Jones was released by the magazine New York on their Vulture website. In this interview, Jones said that he had extreme respect for Eric Clapton, his band Cream, and Allen. Referencing Allen's Hendrix-like playing, the article mentioned a jam session on a yacht with Stevie Wonder. Yachting Allen's yacht, Octopus, was launched in 2003. As of 2019, it was 20th on the list of motor yachts by length. The yacht is equipped with two helicopters, a submarine, an ROV, a swimming pool, a music studio and a basketball court. Octopus is a member of AMVER, a voluntary group ship reporting system used worldwide by authorities to arrange assistance for those in distress at sea. The ship is also known for the annual celebrity-studded parties which Allen hosted at the Cannes Film Festival, where Allen and his band played for guests. These performances included musicians such as Usher and David A. Stewart. Octopus was also used in the search for a missing American pilot and two officers whose plane disappeared off Palau, and in the study of a rare fish called the coelacanth, among other projects. Allen also owned Tatoosh, which is one of the world's 100 largest yachts. In January 2016, it was reported that Tatoosh had allegedly damaged coral in the Cayman Islands. In April 2016, the Department of Environment (DoE) and Allen's Vulcan Inc. successfully completed a restoration plan to help speed recovery and protect the future of coral in this area. Idea Man In 2011, Allen's memoir, Idea Man: A Memoir by the Cofounder of Microsoft, was published by Portfolio, a Penguin Group imprint. The book recounts how Allen became enamored with computers at an early age, conceived the idea for Microsoft, recruited his friend Bill Gates to join him, and launched what would become the world's most successful software company. It also explores Allen's business and creative ventures following his 1983 departure from Microsoft, including his involvement in SpaceShipOne, his purchase of the Portland Trail Blazers and Seattle Seahawks, his passion for music, and his ongoing support for scientific research. The book made the New York Times Best Seller list. 
A paperback version, which included a new epilogue, was published on October 30, 2012. (Allen, Paul, Idea Man: A Memoir by the Co-founder of Microsoft, New York: Portfolio/Penguin, 2011.) Death Allen was diagnosed with Stage 1-A Hodgkin's lymphoma in 1982. That cancer was successfully treated with several months of radiation therapy. Allen was diagnosed with non-Hodgkin lymphoma in 2009; this cancer was also successfully treated, but it returned in 2018 and ultimately caused his death by septic shock on October 15, 2018. He was 65 years old. Following his death, Allen's sister Jody Allen was named executor and trustee of all of Paul Allen's estate, pursuant to his instructions. She had responsibility for overseeing the execution of his will and settling his affairs with tax authorities and parties with an interest in his projects. Several Seattle-area landmarks, including the Space Needle, Columbia Center and Lumen Field, as well as various Microsoft offices throughout the United States, were illuminated in blue on November 3, 2018, as a tribute to Allen. He was also honored by his early business partner and lifelong friend Bill Gates, who paid tribute to him in a statement. Awards and recognition Allen received numerous awards in many different areas, including sports, technology, philanthropy, and the arts: In 2004, Allen, Burt Rutan, Doug Shane, Mike Melvill, and Brian Binnie won the Collier Trophy for SpaceShipOne. On March 9, 2005, Allen, Rutan, and the rest of the SpaceShipOne team were awarded the 2005 National Air and Space Museum Trophy for Current Achievement. In 2007 and 2008, Allen was listed among the Time 100 Most Influential People in the World. He received the Vanguard Award from the National Cable & Telecommunications Association on May 20, 2008. On October 30, 2008, the Seattle-King County Association of Realtors honored Allen for his "unwavering commitment to nonprofit organizations in the Pacific Northwest and lifetime giving approaching US$1 billion". In 2009, Allen's philanthropy as the long-time owner of the Portland Trail Blazers was recognized with an Oregon Sports Award. On October 26, 2010, Allen was awarded the W. J. S. Krieg Lifetime Achievement Award for his contributions to the field of neuroscience by the Cajal Club. On January 26, 2011, at Seattle's Benaroya Hall, Allen was named Seattle Sports Commission Sports Citizen of the Year, an award that has since been renamed the Paul Allen Award. In 2011, Allen was elected to the American Academy of Arts and Sciences. On October 15, 2012, Allen received the Eli and Edythe Broad Award for Philanthropy in the Arts at the National Arts Awards. On February 2, 2014, Allen received a Super Bowl ring as the Seattle Seahawks won the Vince Lombardi Trophy. On October 22, 2014, Allen received a Lifetime Achievement Award from Seattle Business magazine for his impact in and around the greater Puget Sound region. On December 31, 2014, the online philanthropy magazine Inside Philanthropy made Allen its inaugural "Philanthropist of the Year" for his ongoing effort to stop the Ebola outbreak in West Africa, breaking ground on a new research center in Seattle, and his battle to save the world's oceans. In 2014, Allen was inducted into the International Space Hall of Fame. On July 18, 2015, the Ischia Global Film and Music Festival recognized Allen with the Ischia Humanitarian Award. Event organizers honored Allen for his contributions to social issues through his philanthropic efforts. 
On August 25, 2015, Allen was named a recipient of the Andrew Carnegie Medal of Philanthropy for his work to "save endangered species, fight Ebola, research the human brain, support the arts, protect the oceans, and expand educational opportunities for girls". On October 3, 2015, the Center for Infectious Disease Research presented Allen with the 2015 "Champion for Global Health Award" for his leadership and effort to fight Ebola. On December 10, 2016, Allen, as co-owner of the Seattle Sounders, won the 2016 MLS Cup. On October 3, 2019, Allen was posthumously inducted into the Seattle Seahawks Ring of Honor; fittingly, he was the 12th person inducted, a nod to the number 12, which represents the fans. Honorary degrees Honorary degree from Washington State University, which also bestowed its highest honor, the Regents' Distinguished Alumnus Award, upon him. Honorary doctorate in Philosophy from Nelson Mandela Metropolitan University. Honorary doctorate of Science from the Cold Spring Harbor Laboratory's Watson School of Biological Sciences. Honorary degree from the École Polytechnique Fédérale de Lausanne. See also Altair 8800 The Spaceship Company Open Letter to Hobbyists List of select cases of Hodgkin's disease Pirates of Silicon Valley, a 1999 film about the rise of the personal computer; Allen is portrayed by Josh Hopkins. Black Sky: The Race For Space, a 2005 documentary about Allen, SpaceShipOne and the Ansari X Prize. References Further reading Rich, Laura, The Accidental Zillionaire: Demystifying Paul Allen, Hoboken, N.J.: John Wiley & Sons, 2003. External links Paul Allen entry from The Oregon Encyclopedia Paul Allen at THOCP.net Business profile at Forbes Bloomberg Billionaires Index entry 1953 births 2018 deaths 20th-century American businesspeople 20th-century American guitarists 20th-century American male musicians 21st-century American businesspeople 21st-century American guitarists 21st-century American male musicians 21st-century American memoirists American aerospace businesspeople American art collectors American billionaires American communications businesspeople American computer businesspeople American computer programmers American construction businesspeople American documentary filmmakers American film producers American humanitarians American inventors American investors American male guitarists American mass media owners American philanthropists American real estate businesspeople American soccer chairmen and investors American software engineers American technology chief executives American technology company founders American technology writers American television producers American venture capitalists Businesspeople from Seattle Businesspeople in software Deaths from cancer in Washington (state) Deaths from lymphoma Fellows of the American Academy of Arts and Sciences Film producers from Washington (state) Giving Pledgers Guitarists from Washington (state) History of Microsoft Lakeside School alumni Major League Soccer executives Major League Soccer owners Members of the United States National Academy of Engineering Microsoft employees Musicians from Seattle People named in the Paradise Papers Portland Trail Blazers owners Record producers from Washington (state) Seattle Seahawks owners Space advocates Washington State University alumni Writers from Seattle
91099
https://en.wikipedia.org/wiki/Business%20continuity%20planning
Business continuity planning
Business continuity may be defined as "the capability of an organization to continue the delivery of products or services at pre-defined acceptable levels following a disruptive incident", and business continuity planning (or business continuity and resiliency planning) is the process of creating systems of prevention and recovery to deal with potential threats to a company. In addition to prevention, the goal is to enable ongoing operations before and during execution of disaster recovery. Business continuity is the intended outcome of proper execution of both business continuity planning and disaster recovery. Several business continuity standards have been published by various standards bodies to assist in checklisting ongoing planning tasks. An organization's resistance to failure is "the ability ... to withstand changes in its environment and still function". Often called resilience, it is a capability that enables an organization either to endure environmental changes without having to adapt permanently, or to adopt a new way of working that better suits the new environmental conditions. Overview Any event that could negatively impact operations should be included in the plan, such as supply chain interruption or loss of or damage to critical infrastructure (major machinery or computing/network resources). As such, BCP is a subset of risk management. In the US, government entities refer to the process as continuity of operations planning (COOP). A business continuity plan outlines a range of disaster scenarios and the steps the business will take in any particular scenario to return to regular trade. BCPs are written ahead of time and can also include precautions to be put in place. Usually created with the input of key staff as well as stakeholders, a BCP is a set of contingencies to minimize potential harm to businesses during adverse scenarios. Resilience A 2005 analysis of how disruptions can adversely affect the operations of corporations, and how investments in resilience can give a competitive advantage over entities not prepared for various contingencies, extended then-common business continuity planning practices. Business organizations such as the Council on Competitiveness embraced this resilience goal. Adapting to change in an apparently slower, more evolutionary manner, sometimes over many years or decades, has been described as being more resilient, and the term "strategic resilience" is now used to mean continuously anticipating and adjusting rather than merely resisting a one-time crisis, "before the case for change becomes desperately obvious". This approach is sometimes summarized as preparedness, protection, response and recovery. Resilience theory can be related to the field of public relations. Resilience is a communicative process constructed by citizens, families, media systems, organizations and governments through everyday talk and mediated conversation. The theory is based on the work of Patrice M. Buzzanell, a professor at the Brian Lamb School of Communication at Purdue University. In her 2010 article, "Resilience: Talking, Resisting, and Imagining New Normalcies Into Being", Buzzanell discussed the ability of organizations to thrive after a crisis through building resilience. 
Buzzanell notes that there are five different processes that individuals use when trying to maintain resilience: crafting normalcy, affirming identity anchors, maintaining and using communication networks, putting alternative logics to work, and downplaying negative feelings while foregrounding positive emotions. Crisis communication theory is similar to resilience theory, but it is not the same: crisis communication theory is based on the reputation of the company, whereas resilience theory is based on the company's process of recovery. The five main components of resilience are crafting normalcy, affirming identity anchors, maintaining and using communication networks, putting alternative logics to work, and downplaying negative feelings while foregrounding positive emotions. Each of these processes can be applicable to businesses in times of crisis, making resilience an important factor for companies to focus on while training. There are three main groups that are affected by a crisis: micro (individual), meso (group or organization) and macro (national or interorganizational). There are also two main types of resilience: proactive resilience and post-resilience. Proactive resilience is preparing for a crisis and creating a solid foundation for the company, dealing with issues at hand before they cause a possible shift in the work environment. Post-resilience includes continuing to maintain communication, checking in with employees, and accepting changes after an incident has happened. Resilience can be applied to any organization. Continuity Plans and procedures are used in business continuity planning to ensure that the critical organizational operations required to keep an organization running continue to operate during events when key dependencies of operations are disrupted. Continuity does not need to apply to every activity which the organisation undertakes. For example, under ISO 22301:2019, organisations are required to define their business continuity objectives, the minimum levels of product and service operations which will be considered acceptable, and the maximum tolerable period of disruption (MTPD) which can be allowed. A major cost in planning for this is the preparation of audit compliance management documents; automation tools are available to reduce the time and cost associated with manually producing this information. Inventory Planners must have information about: Equipment Supplies and suppliers Locations, including other offices and backup/work area recovery (WAR) sites Documents and documentation, including which have off-site backup copies: Business documents Procedure documentation Analysis The analysis phase consists of impact analysis, threat analysis and impact scenarios. Quantification of loss ratios must also include "dollars to defend a lawsuit." It has been estimated that a dollar spent in loss prevention can prevent "seven dollars of disaster-related economic loss." Business impact analysis (BIA) A business impact analysis (BIA) differentiates critical (urgent) and non-critical (non-urgent) organization functions/activities. A function may be considered critical if dictated by law. Each function/activity typically relies on a combination of constituent components in order to operate: Human resources (full-time staff, part-time staff, or contractors) IT systems Physical assets (mobile phones, laptops/workstations etc.) 
Documents (electronic or physical) For each function, two values are assigned: Recovery Point Objective (RPO) – the acceptable latency of data that will not be recovered. For example, is it acceptable for the company to lose 2 days of data? The recovery point objective must ensure that the maximum tolerable data loss for each activity is not exceeded. Recovery Time Objective (RTO) – the acceptable amount of time to restore the function. Maximum time constraints for how long an enterprise's key products or services can be unavailable or undeliverable before stakeholders perceive unacceptable consequences have been named: Maximum Tolerable Period of Disruption (MTPoD) Maximum Tolerable Downtime (MTD) Maximum Tolerable Outage (MTO) Maximum Allowable Outage (MAO) According to ISO 22301, the terms maximum acceptable outage and maximum tolerable period of disruption mean the same thing and are defined using exactly the same words. Consistency When more than one system crashes, recovery plans must balance the need for data consistency with other objectives, such as RTO and RPO. Recovery Consistency Objective (RCO) is the name of this goal. It applies data consistency objectives to define a measurement for the consistency of distributed business data within interlinked systems after a disaster incident. Similar terms used in this context are "Recovery Consistency Characteristics" (RCC) and "Recovery Object Granularity" (ROG). While RTO and RPO are absolute per-system values, RCO is expressed as a percentage that measures the deviation between the actual and targeted state of business data across systems for process groups or individual business processes. RCO is calculated across the "n" business processes involved, with "entities" representing an abstract value for the business data; an RCO of 100% means that, post recovery, no business data deviation occurs. Threat and risk analysis (TRA) After defining recovery requirements, each potential threat may require unique recovery steps. Common threats include natural disasters, fires, utility outages, cyber attacks, sabotage and disease outbreaks. These threat areas can cascade: Responders can stumble. Supplies may become depleted. During the 2002–2003 SARS outbreak, some organizations compartmentalized and rotated teams to match the incubation period of the disease. They also banned in-person contact during both business and non-business hours. This increased resiliency against the threat. Impact scenarios Impact scenarios are identified and documented: need for medical supplies need for transportation options civilian impact of nuclear disasters need for business and data processing supplies These should reflect the widest possible damage. Tiers of preparedness SHARE's seven tiers of disaster recovery, released in 1992, were updated in 2012 by IBM as an eight-tier model: Tier 0 - No off-site data • Businesses with a Tier 0 Disaster Recovery solution have no Disaster Recovery Plan. There is no saved information, no documentation, no backup hardware, and no contingency plan. Typical recovery time: The length of recovery time in this instance is unpredictable. In fact, it may not be possible to recover at all. Tier 1 - Data backup with no Hot Site • Businesses that use Tier 1 Disaster Recovery solutions back up their data at an off-site facility. Depending on how often backups are made, they are prepared to accept several days to weeks of data loss, but their backups are secure off-site. However, this tier lacks the systems on which to restore data. An example is the Pickup Truck Access Method (PTAM). 
Tier 2 - Data backup with Hot Site • Tier 2 Disaster Recovery solutions make regular backups on tape. This is combined with an off-site facility and infrastructure (known as a hot site) in which to restore systems from those tapes in the event of a disaster. This tier solution will still result in the need to recreate several hours' to days' worth of data, but it is less unpredictable in recovery time. Examples include: PTAM with Hot Site available, IBM Tivoli Storage Manager.
Tier 3 - Electronic vaulting • Tier 3 solutions utilize components of Tier 2. Additionally, some mission-critical data is electronically vaulted. This electronically vaulted data is typically more current than that which is shipped via PTAM. As a result, there is less data recreation or loss after a disaster occurs.
Tier 4 - Point-in-time copies • Tier 4 solutions are used by businesses that require both greater data currency and faster recovery than users of lower tiers. Rather than relying largely on shipping tape, as is common in the lower tiers, Tier 4 solutions begin to incorporate more disk-based solutions. Several hours of data loss is still possible, but it is easier to make such point-in-time (PIT) copies with greater frequency than data that can be replicated through tape-based solutions.
Tier 5 - Transaction integrity • Tier 5 solutions are used by businesses with a requirement for consistency of data between production and recovery data centers. There is little to no data loss in such solutions; however, the presence of this functionality is entirely dependent on the application in use.
Tier 6 - Zero or little data loss • Tier 6 Disaster Recovery solutions maintain the highest levels of data currency. They are used by businesses with little or no tolerance for data loss and who need to restore data to applications rapidly. These solutions have no dependence on the applications to provide data consistency.
Tier 7 - Highly automated, business-integrated solution • Tier 7 solutions include all the major components being used for a Tier 6 solution with the additional integration of automation. This allows a Tier 7 solution to ensure consistency of data beyond that granted by Tier 6 solutions. Additionally, recovery of the applications is automated, allowing for restoration of systems and applications much faster and more reliably than would be possible through manual Disaster Recovery procedures.
Solution design
Two main requirements from the impact analysis stage are:
For IT: the minimum application and data requirements and the time in which they must be available.
Outside IT: preservation of hard copy (such as contracts). A process plan must consider skilled staff and embedded technology.
This phase overlaps with disaster recovery planning. The solution phase determines:
crisis management command structure
telecommunication architecture between primary and secondary work sites
data replication methodology between primary and secondary work sites
Backup site - applications, data and work space required at the secondary work site
ISO Standards
Many standards are available to support business continuity planning and management.
ISO has, for example, developed a whole series of standards on business continuity management systems under the responsibility of technical committee ISO/TC 292:
ISO 22300:2021 Security and resilience – Vocabulary
ISO 22301:2019 Security and resilience – Business continuity management systems – Requirements
ISO 22313:2020 Security and resilience – Business continuity management systems – Guidance on the use of ISO 22301
ISO/TS 22317:2021 Security and resilience – Business continuity management systems – Guidelines for business impact analysis
ISO/TS 22318:2021 Security and resilience – Business continuity management systems – Guidelines for supply chain continuity
ISO/TS 22330:2018 Security and resilience – Business continuity management systems – Guidelines for people aspects on business continuity
ISO/TS 22331:2018 Security and resilience – Business continuity management systems – Guidelines for business continuity strategy
ISO/TS 22332:2021 Security and resilience – Business continuity management systems – Guidelines for developing business continuity plans and procedures
ISO/IEC/TS 17021-6:2015 Conformity assessment – Requirements for bodies providing audit and certification of management systems – Part 6: Competence requirements for auditing and certification of business continuity management systems
ISO/IEC 27031:2011 Security techniques — Guidelines for information and communication technology readiness for business continuity
British standards
The British Standards Institution (BSI Group) released a series of standards:
1995: BS 7799, which peripherally addressed information security procedures (withdrawn)
2006: BS 25999-1 Business Continuity Management. Code of Practice (withdrawn)
2007: BS 25999-2 Specification for Business Continuity Management, which specifies requirements for implementing, operating and improving a documented business continuity management system (BCMS) (withdrawn)
2008: BS 25777, published specifically to align computer continuity with business continuity (withdrawn March 2011)
These standards have since been withdrawn and replaced by the ISO standards above. Within the UK, BS 25999-2:2007 and BS 25999-1:2006 were being used for business continuity management across all organizations, industries and sectors. These documents give a practical plan to deal with most eventualities—from extreme weather conditions to terrorism, IT system failure, and staff sickness. ITIL has defined some of these terms.
Civil Contingencies Act
In 2004, following crises in the preceding years, the UK government passed the Civil Contingencies Act 2004: businesses must have continuity planning measures to survive and continue to thrive whilst working towards keeping the incident as minimal as possible. The Act was separated into two parts:
Part 1: civil protection, covering roles and responsibilities for local responders
Part 2: emergency powers
Australia and New Zealand
The United Kingdom and Australia have incorporated resilience into their continuity planning. In the United Kingdom, resilience is implemented locally by the Local Resilience Forum. In New Zealand, the Canterbury University Resilient Organizations programme developed an assessment tool for benchmarking the resilience of organizations. It covers 11 categories, each having 5 to 7 questions. A Resilience Ratio summarizes this evaluation.
Implementation and testing
The implementation phase involves policy changes, material acquisitions, staffing and testing.
Testing and organizational acceptance
The 2008 book Exercising for Excellence, published by the British Standards Institution, identified three types of exercises that can be employed when testing business continuity plans.
Tabletop exercises - a small number of people concentrate on a specific aspect of a BCP. Another form involves a single representative from each of several teams.
Medium exercises - several departments, teams or disciplines concentrate on multiple BCP aspects; the scope can range from a few teams from one building to multiple teams operating across dispersed locations. Pre-scripted "surprises" are added.
Complex exercises - all aspects of a medium exercise remain, but for maximum realism no-notice activation, actual evacuation and actual invocation of a disaster recovery site are added. While start and stop times are pre-agreed, the actual duration might be unknown if events are allowed to run their course.
Maintenance
Biannual or annual maintenance of a BCP manual is broken down into three periodic activities:
Confirmation of information in the manual, roll-out to staff for awareness and specific training for critical individuals.
Testing and verification of technical solutions established for recovery operations.
Testing and verification of organization recovery procedures.
Issues found during the testing phase often must be reintroduced to the analysis phase.
Information/targets
The BCP manual must evolve with the organization, and maintain information about:
who has to know what
a series of checklists
job descriptions, skillsets needed, training requirements
documentation and document management
definitions of terminology to facilitate timely communication during disaster recovery
distribution lists (staff, important clients, vendors/suppliers)
information about communication and transportation infrastructure (roads, bridges)
Technical
Specialized technical resources must be maintained. Checks include:
Virus definition distribution
Application security and service patch distribution
Hardware operability
Application operability
Data verification
Data application
Testing and verification of recovery procedures
Software and work process changes must be documented and validated, including verification that documented work process recovery tasks and supporting disaster recovery infrastructure allow staff to recover within the predetermined recovery time objective.
SPC.1-2009, "Organizational Resilience: Security, Preparedness, and Continuity Management Systems—Requirements with Guidance for Use", approved by American National Standards Institute
International Organization for Standardization
ISO 22300:2018 Security and resilience - Vocabulary
ISO 22301:2019 Security and resilience – Business continuity management systems – Requirements
ISO 22313:2013 Security and resilience - Business continuity management systems - Guidance on the use of ISO 22301
ISO/TS 22315:2015 Societal security – Business continuity management systems – Guidelines for business impact analysis (BIA)
ISO/PAS 22399:2007 Guideline for incident preparedness and operational continuity management (withdrawn)
ISO/IEC 24762:2008 Guidelines for information and communications technology disaster recovery services
ISO/IEC 27001:2013 (formerly BS 7799-2:2002) Information technology — Security techniques — Information security management systems — Requirements
ISO/IEC 27002:2013 Information technology — Security techniques — Code of practice for information security controls
ISO/IEC 27031:2011 Information technology – Security techniques – Guidelines for information and communication technology readiness for business continuity
IWA 5:2006 Emergency Preparedness (withdrawn)
British Standards Institution
BS 25999-1:2006 Business Continuity Management Part 1: Code of practice (superseded, withdrawn)
BS 25999-2:2007 Business Continuity Management Part 2: Specification (superseded, withdrawn)
Australia Standards
HB 292-2006, "A practitioners guide to business continuity management"
HB 293-2006, "Executive guide to business continuity management"
Others
International Glossary for Resilience, DRI International.
External links
The tiers of Disaster Recovery and TSM. Charlotte Brooks, Matthew Bedernjak, Igor Juran, and John Merryman. In Disaster Recovery Strategies with Tivoli Storage Management. Chapter 2. Pages 21–36. Red Books Series. IBM. Tivoli Software. 2002.
SteelStore Cloud Storage Gateway: Disaster Recovery Best Practices Guide. Riverbed Technology, Inc. October 2011.
Disaster Recovery Levels. Robert Kern and Victor Peltz. IBM Systems Magazine. November 2003.
Business Continuity: The 7-tiers of Disaster Recovery. Recovery Specialties. 2007.
Continuous Operations: The Seven Tiers of Disaster Recovery. Mary Hall. The Storage Community (IBM). 18 July 2011. Retrieved 26 March 2013.
Maximum Tolerable Period of Disruption (MTPOD)
Maximum Tolerable Period of Disruption (MTPOD): BSI committee response
Janco Associates Department of Homeland Security Emergency Plan Guidelines
CIDRAP/SHRM Pandemic HR Guide Toolkit
Adapt and respond to risks with a business continuity plan (BCP)
Systems thinking
Business continuity and disaster recovery
Collaboration
Backup
Disaster preparedness
Disaster recovery
Emergency management
IT risk management
2617814
https://en.wikipedia.org/wiki/AMD%20FirePro
AMD FirePro
AMD FirePro was AMD's brand of graphics cards designed for use in workstations and servers running professional Computer-aided design (CAD), Computer-generated imagery (CGI), Digital content creation (DCC), and High-performance computing/GPGPU applications. The GPU chips on FirePro-branded graphics cards are identical to the ones used on Radeon-branded graphics cards. The end products (i.e. the graphics card) differentiate substantially by the provided graphics device drivers and through the available professional support for the software. The product line is split into two categories: "W" workstation series focusing on workstation and primarily focusing on graphics and display, and "S" server series focused on virtualization and GPGPU/High-performance computing. The release of the Radeon Pro Duo in April 2016 and the announcement of the Radeon Pro WX Series in July 2016 marked the succession of Radeon Pro as AMD's professional workstation graphics card solution. Radeon Instinct is the current brand for servers. Competitors included Nvidia's Quadro-branded and to an extent, Nvidia Tesla-branded product series and Intel's Xeon Phi-branded products. History The FireGL line was originally developed by the German company Spea Software AG until it was acquired by Diamond Multimedia in November 1995. The first FireGL board used the 3Dlabs GLINT 3D processor chip. Deprecated brand names are ATI FireGL, ATI FirePro 3D, and AMD FireStream. In July 2016, AMD announced it would be replacing the FirePro brand with Radeon Pro for workstations. The new brand for servers is Radeon Instinct. Features Multi-monitor support AMD Eyefinity can support multi-monitor set-ups. One graphics card can drive up to a maximum of six monitors; the supported number depends on the distinct product and the number of DisplayPort displays. The device driver facilitates the configuration of diverse display group modes. Differences with the Radeon Line The FirePro line is designed for compute intensive, multimedia content creation (such as video editors), and mechanical engineering design software (such as CAD programs). Their Radeon counterparts are suited towards video games and other consumer applications. Because they use the same drivers (Catalyst) and are based on the same architectures and chipsets, the major differences are essentially limited to price and double-precision performance. However, some FirePro cards may have major feature differences to the equivalent Radeon card, such as ECC RAM and differing physical display outputs. Since the 2007 series, high-end and ultra-end FireGL/FirePro products (based on the R600 architecture) have officially implemented stream processing. The Radeon line of video cards, although present in hardware, did not offer any support for stream processing until the HD 4000 series where beta level OpenCL 1.0 support is offered, and the HD 5000 series and later, where full OpenCL 1.1 support is offered. Heterogeneous System Architecture HSA is intended to facilitate the programming for stream processing and/or GPGPU in combination with CPUs and DSPs. All models implementing the Graphics Core Next microarchitecture support hardware features defined by the HSA Foundation and AMD has provided corresponding software. 
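The stream processing/OpenCL capability described above is reached through the standard OpenCL platform and device enumeration calls rather than a FirePro-specific API. The short sketch below, using the third-party pyopencl bindings (an illustrative assumption, not AMD-supplied software), shows how an application might list OpenCL-capable GPUs such as a FirePro card; the platform and device names it prints depend entirely on the installed driver.

# Minimal sketch (assumes the third-party pyopencl package is installed):
# enumerate OpenCL GPU devices, e.g. a FirePro card exposed by the vendor driver.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices(device_type=cl.device_type.GPU):
        print(platform.name, "->", device.name)
        print("  compute units:", device.max_compute_units)
        print("  global memory (MiB):", device.global_mem_size // (1024 * 1024))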
FirePro DirectGMA
GPUOpen: http://gpuopen.com/compute-product/direct-gma/
Soft-mods
Because of the similarities between FireGL and Radeon cards, some users soft-mod their Radeon cards by using third-party software or automated scripts, accompanied by a modified FireGL driver patch, to enable FireGL capabilities on their hardware, effectively getting cheaper, equivalent FireGL cards, often with better OpenGL capabilities but usually with half the amount of video memory. Some variants can also be soft-modded to a FireStream stream processor. The trend of soft-mods continued with the 2007 series FireGL cards.
Products
Note: the effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
Workstation
Pre-ATI FireGL cards
FireGL Series
FireMV (Multi-View) Series
FirePro (Multi-View) Series
FirePro 3D Series (V000)
Windows 7, 8.1 and 10 support for FirePro cards with TeraScale 2 and later is provided by FirePro driver 15.301.2601.
FirePro Series (Vx900)
Support for OpenGL 4.4 and OpenCL 2.0 on Windows 7 and 8.1 is available when the hardware is set up with FirePro driver 14.502.1045.
FirePro Workstation Series (Wx000)
Vulkan 1.0 and OpenGL 4.5 are possible for GCN with a FirePro driver update equal to Radeon Crimson 16.3 or higher. Vulkan 1.1 is possible for GCN with Radeon Pro Software 18.Q1.1 or higher; it might not fully apply to GCN 1.0 or 1.1 GPUs. OpenGL 4.4 is supported with AMD FirePro driver release 14.301.000 or later, per the footnotes of the specifications.
FirePro D-Series
In 2013, AMD released the D-Series specifically for Mac Pro workstations.
FirePro Workstation Series (Wx100)
Vulkan 1.0 and OpenGL 4.5 are possible for GCN with a FirePro driver update equal to Radeon Crimson 16.3 or higher. OpenCL 2.1 and 2.2 are possible for all OpenCL 2.0 cards with a future driver update (Khronos). Linux support for OpenCL with the AMDGPU driver 16.60 is currently limited to version 1.2. Vulkan 1.1 is possible for GCN with Radeon Pro Software 18.Q1.1 or higher; it might not fully apply to GCN 1.0 or 1.1 GPUs. OpenGL 4.6 is available in 18.Q2 (or later), analogous to Adrenalin 18.4.1.
FirePro Workstation Series (Wx300)
Vulkan 1.1 is possible for GCN with Radeon Pro Software 18.Q1.1 or higher; it might not fully apply to GCN 1.0 or 1.1 GPUs.
Mobile Workstation
Mobility FireGL Series
FirePro Mobile Series
Server
FireStream Series
FirePro Remote Series
FirePro Server Series (S000x/Sxx 000)
Vulkan 1.0 and OpenGL 4.5 are possible for GCN with a FirePro driver update equal to Radeon Crimson 16.3 or higher; OpenGL 4.5 was available only on Windows. The current Linux driver supports OpenGL 4.5 and Vulkan 1.0, but only OpenCL 1.2 with the AMDGPU driver 16.60. Vulkan 1.1 is possible for GCN with Radeon Pro Software 18.Q1.1 or higher; it might not fully apply to GCN 1.0 or 1.1 GPUs. OpenGL 4.6 is available in 18.Q2 (or later), analogous to Adrenalin 18.4.1.
Radeon Sky Series
See also
Nvidia Quadro – Nvidia's competing workstation graphics solution
Nvidia Tesla – Nvidia's competing GPGPU solution
Xeon Phi - Intel's competing high-performance computing solution
List of AMD graphics processing units
References
External links
AMD FirePro Professional Graphics
Advanced Micro Devices graphics cards
Graphics cards
25010449
https://en.wikipedia.org/wiki/Federal%20Investigation%20Agency
Federal Investigation Agency
The Federal Investigation Agency (; reporting name: FIA) is a border control, criminal investigation, counter-intelligence and security agency under the control of the Interior Secretary of Pakistan, tasked with investigative jurisdiction on undertaking operations against terrorism, espionage, federal crimes, smuggling as well as infringement and other specific crimes. Codified under the Constitution of Pakistan in 1974, the institution functions under the Ministry of Interior (MoI). The FIA also undertakes international operations with the close co-operation and co-ordination of Interpol. Headquartered in Islamabad, the agency has various branch and field offices located in all major cities throughout Pakistan. Objectives and Mission Statements The FIA's main goal and priority is to protect the nation's interests and defend Pakistan, to uphold and enforce criminal law, and law enforcement in the country. Its current mission statement is: Departments & Priorities of FIA As of 2021, FIA has 11 active departments to lead criminal charges and investigation, with priorities: Anti-Corruption Anti Human Trafficking and Smuggling Counter-Terrorism Economic Crime Electricity, Gas, Oil Anti Theft Unit FIA Academy Immigration Interpol Intellectual Property Rights Integrated Border Management System National Response Center For Cyber Crimes Priorities Counter-terrorism Wing (CTW)—Tasked to protect Pakistan from all kinds of terrorist attacks, including cyber, bioterrorism, chemical, electronic and nuclear terrorism (see counter-terrorism). Anti-Corruption Wing (ACW)—Tasked with undertaking investigations and combat all public corruption at all levels of command (see also NAB). Economic Crime Wing (ECW)—Mandate to protect Pakistan from economic terrorism and protection of intellectual property rights of the people. (see also: Economic terrorism). Immigration Wing (IW)—Combat human trafficking activities and resist illegal immigration in Pakistan. Technical Wing (TW)—Tasked to make efforts to protect Pakistan against foreign intelligence operations and espionage (see counterintelligence and counterproliferation) as well as using scientific assistance to resolve high-technology crimes. Legal Branch (LB)—Responsible to provide legal guidance in all administrative and operational matters as well as protect civil rights. National Central Bureau (NCB)—Tasked to combat transnational/national criminal organisations and enterprises (see organised crime) with assistance from Interpol and the US Federal Bureau of Investigation (FBI). Anti Trafficking Unit (ATU)—Tasked to combat major violent crimes, to ensure country-wide coverage of human trafficking, as well as to prevent and protect the victims of trafficking. History Background In 1942, the British government established the Special Police Establishment (SPE) in British India during the midst of World War II. Originally the Police Establishment was tasked to take up investigation of corruption in Indian Civil Service (ICS), rampant in the Supplies and Procurement Department of the then British Raj. After the independence of Pakistan in 1947, the SPE was equally divided between the governments of India, which formed the CBI, and Pakistan, which retained it as the Pakistan Special Police Establishment (PSPE) in 1947. 
With the passage of time, the PSPE, apart from investigating the offences of bribery and corruption against officials of the federal government, was also given powers to investigate cases relating to offences under the following laws:
Official Secrets Act, 1923
Foreign Exchange Regulation Act, 1947
Passport Offenses Act, 1952
Customs Act, 1959
Creation
After the 1971 war with India, police reforms were carried out by Prime Minister Zulfikar Ali Bhutto, who adopted recommendations from the report submitted by bureaucrat G. Ahmad in the Prime Minister's Secretariat on 7 March 1972. The Federal Investigation Agency (FIA) was created on 13 January 1975, after being codified in the Constitution with the passing of the FIA Act, 1974, by the state parliament. Its initial roles were to build efforts against organised crime, smuggling, human trafficking, immigration offences, and passport scandals. When the FIA was created, it took on cases of corruption at every level of government. Although ostensibly a crime-investigation service, the FIA also investigated political opponents and critics accused of financial impropriety, from tax evasion to taking bribes while in office.
National security and efforts against terrorism
Initially, its role was to conduct investigations of public corruption, but the scope of the FIA's investigations was widened to take action against communist terrorism in the 1980s. In 1981, FIA agents successfully investigated and interrogated the culprits behind the hijacking of a Pakistan International Airlines Boeing 720 CR, immediately holding Murtaza Bhutto responsible. The FIA keenly tracked the whereabouts of Murtaza Bhutto in Syria, and successfully limited the influence of his al-Zulfikar group. In 1985, an FIA undercover operation, known as the "Pakistan League Affair", busted the illicit drug trade, with the trade's leaders and their accomplices apprehended by the FIA; the agency effectively put an end to the illicit drug trade with the arrest of the gang's key drug lord. From 1982 to 1988, the FIA launched a series of investigations and probes against Pakistan Communist Party leader Jam Saqi and aided the court proceedings relating to its findings. In 1986, the FIA successfully infiltrated the terrorist group responsible for hijacking Pan Am Flight 73, and quickly detained the Libyan commercial pilot suspected of having a role in the hijack. After securing a plurality in the 1993 Pakistani general elections, Prime Minister Benazir Bhutto further widened the scope of the FIA, making it akin to the Inter-Services Intelligence (ISI) in the intelligence community. After approving the appointment of senior FIA agent Rehman Malik, the FIA's intelligence and investigations were now conducted at the international level, in close co-ordination with the American FBI. The FIA notably worked together with the FBI to conduct investigations of the 1993 bombing at the World Trade Center in New York, United States. The FIA and FBI tracked down the mastermind of that bombing, Ramzi Yousef, in Pakistan. In 1995, the successful investigation led to the extradition of Yousef to the United States. In the 1990s, the FIA, directed by Malik, was involved in leading investigations and actions against the al-Qaeda operatives Khalid Sheikh Mohammed and Ramzi Yousef, and assisted the FBI in apprehending Yousef in 1995 and Mohammed in 2003. The FIA pushed its efforts against terrorism and tracked crime syndicates affiliated with terrorist organisations.
The FIA was said to be launching secret intelligence operations against the terrorist organisations, which created a covert rivalry with the ISI. Despite difficulties, the FIA gained world prominence after reportedly leading successful operations against terrorism in 1996. In 2001, the FIA successfully investigated the case against Sultan Mahmood for his alleged part in nuclear terrorism, though the FIA cleared Mahmood of the charges in 2002. In 2003, the role of counter-terrorism was assigned to the FIA, which led to the establishment of the Counter-Terrorism Wing (CTW). CTW agents were provided extensive training and equipment handling by the FBI under the Anti-Terrorism Assistance Program (ATTP). The FIA began investigating Khalid Sheikh Mohammed, and his movements around the world were monitored by the agency. FIA agents pursued the Mohammed case jointly with FBI agents. Eventually, the FIA's successful investigations led to Mohammed's capture in Rawalpindi, Punjab, in a paramilitary operation conducted jointly by the CIA and the ISI in 2003. In 2002, the FIA continued its investigation and kept strict surveillance on the movements of Afia Siddiqui in Karachi. In 2003, the FIA was still investigating Siddiqui's movements and activities, subsequently sharing its findings with the United States.
Anti-infringement efforts
Probes against copyright violation increased after a petition was filed by the FBI in 2001 disputing Pakistan's commitment to rooting out infringement within its national borders. In 2002, the FIA launched several probes against copyright infringement, and Pakistan was on course to have its US duty-free GSP arrangements taken away from it in 2005. To avert any further negative fallout, sections of the Copyright Ordinance 1962 were included in the FIA's schedule of offences. This legislation paved the way, in 2005 and under the direction of the federal interior ministry, for a raid on the country's largest video wholesale centre, the Rainbow Centre. Raiding the factories of the dealers that operated within the centre proved to be a highly successful enterprise, resulting in a 60% reduction in sales of bootlegged video material. A spokesman from the International Federation of the Phonographic Industry (IFPI) later confirmed that many of the outlets had stopped selling unlicensed video goods and were now selling mobile phones, highlighting that the FIA's raids and the resultant legal action were a success.
Intelligence Operations
In 1972–73, Prime Minister Zulfikar Ali Bhutto adopted many recommendations of the Hamoodur Rahman Commission's papers after seeing the intelligence failure in East Pakistan. This led to the reformation of the FIA, as Prime Minister Bhutto envisioned the FIA as an equivalent of the American FBI that would protect the country not only from internal crises but also from suspected foreign threats; he therefore established the FIA on the same pattern. In the 1970s, Prime Minister Bhutto had Pakistani intelligence actively run military intelligence programs in various countries to procure scientific expertise and technical papers, along the lines of the Alsos Mission of the Manhattan Project. Both the FIA and the Intelligence Bureau (IB) were empowered under his government, and the scope of their operations was expanded during the 1970s. Though the ISI lost importance in the 1970s, it regained its importance in the 1980s after successfully running the military intelligence program against the Soviet Union.
Sensing the nature of this competition, President Zia-ul-Haq consolidated the intelligence services after the ISI received training from the CIA in the 1980s, and subsequently improved its methods of intelligence. In the 1990s, the ISI and the FIA were, in many ways, at war in the poverty-stricken landscape of Pakistani politics. The ISI used its Islamic guerrillas as deniable foot soldiers to strike at the FIA's credibility, and according to published accounts, the FIA turned to the Israeli Mossad and the Israeli intelligence community, through Pervez Musharraf, to help take down the terrorist networks in the country. Throughout the 1990s, the intelligence community remained under fire, with each service competing for credibility. After the 11 September 2001 attacks in the United States, the FBI launched the largest investigation in its history and soon determined that the hijackers were linked to al-Qaeda, led by the exiled Saudi Osama bin Laden. Just after the 9/11 attacks, the FIA gained credibility over the ISI in the United States. The FIA and ISI were also mentioned in the television series The Path to 9/11.
Special FIA teams
On 9 June 1975, the FIA formed the Immigration Wing (IW) to help NARA agents and officials conduct probes against illegal immigration to Pakistan. In 1979, the Anti-Human Smuggling Wing was formed to take action against human smugglers and trafficking. In 1975, another special team, the Technical Wing, was formed to tackle financial crimes. In 2003, the FIA formed an elite counter-terrorism unit to help root out terrorism in the country. Named the Counter-Terrorism Wing (CTW), it acts as an elite team in related procedures and all counter-terrorism cases. In the same year, the FIA formed the Cyber Investigation Unit (CIU) and the Computer Analysis and Forensic Laboratory. Since then, FIA agents and officials have received physical training and electronic equipment from the United States. In 2011, the CTW was further expanded and the FIA established a Terrorist Financing Investigation Unit (TFIU) to conduct and lead operations against terrorism financing. CTW operatives have also assisted other intelligence agencies in conducting joint investigations against terror groups. To date, the CTW has been an integral part of high-profile terrorism cases and has also managed the most-wanted list of individuals directly or indirectly involved in terrorism. In 2008, the FIA successfully led and concluded the joint investigations into the Marriott Hotel bombing in Islamabad. The FIA shared with the FBI most of the findings gathered through interrogation of the arrested suspects, to help hunt down top members of al-Qaeda. Other FIA special teams included the Anti-Corruption Wing (ACW), which was established in 2004 to lead probes against corrupt officials and other white-collar criminals. The Economic Crime Wing (ECW) was also established in 2004 but was transferred to the NAB, though it was restored to the FIA in 2008. In 2014, the FIA formed an elite response team with the National Counter Terrorism Authority (NACTA) to take decisive and thorough counter-terrorism action against terror groups, based on internal intelligence.
In 1990s, the FIA was involved in running active intelligence operation against the Bonded Labour Liberation Front (BLLF) on behest of the government. Critics of FIA have called the agency "secret police". In the 1980s, the FIA also targeted Pakistani leftist groups and was instrumental in conducting inquiries (Jam Saqi case) in preparation against the Communist Party of Pakistan. 2005 FIA scandal In 2005, Pakistani Ministry of Justice agents successfully infiltrated the immigration wing of the FIA, leading the FIA to launch an investigation against its own department. The special agents of the FIA arrested five of FIA's immigration agents for their alleged role in human trafficking, and most of them were dismissed from the service. 2008 Lahore bombings On 11 March 2008, the Pakistani Taliban (TTP) coordinated the twin suicide bombings to target the counter-terrorist unit of FIA based in Lahore. The building was severely damaged during the suicidal attack. The Geo News later reported that "the building also housed the offices of a special US-trained units of FBI created to counter terrorism" suggesting a motive. International access Cooperation with foreign counterparts In 2006, the FIA resumed operational links with its Indian counterpart, the Central Bureau of Investigation, after a gap of 17 years. The FIA participates in the PISCES project which was initiated by the US Department of State, Terrorist Interdiction Program (TIP) in 1997, as a system to improve their watch-listing capabilities by providing a mainframe computer system to facilitate immigration processing. The PISCES system has been installed at seven major airports of the country i.e. Islamabad, Karachi, Lahore, Peshawar, Quetta, Multan and Faisalabad airports. The system has provision to accommodate about information on suspects from all law enforcement agencies like Immigration, Police, Narcotics Control, Anti-smuggling, and Intelligence Services. Organization The FIA is headed by the appointed Director-General who is appointed by the Prime Minister, and confirmed by the President. Appointment for the Director of FIA either comes from the high-ranking officials of police or the civil bureaucracy. The DG FIA reports to the Interior Secretary of Pakistan. The Director General of the FIA is assisted by three Additional Director-Generals and ten Directors for effective monitoring and smooth functioning of the operations spread all over the country. The FIA is headquartered in Islamabad and also maintains a separate training FIA Academy, also in Islamabad which was opened in 1976. In 2002, FIA formed a specialised wing for investigating Information and Communication Technology (ICT) related crimes. This wing is commonly known as the National Response Centre for Cyber Crimes (NR3C) and has the credit of arresting 12 hackers, saving millions of dollars for the government exchequer. This wing of the FIA has state-of-the-art Digital Forensic Laboratories managed by highly qualified Forensic Experts and is specialised in Computer and Cell Phone Forensics, cyber/electronic crime investigation, Information System Audits and Research & Development. Officers of the NR3C carry out training for Officers of Police and other Law Enforcement Agencies of Pakistan. Organization structure The organizational structure of the Federal Investigation Agency (FIA) is as follows: Structure The Federal Investigation Agency (FIA) has its headquarters in Islamabad. 
The chief presiding officer of the agency is called the director general, and is appointed by the Ministry of Interior. The headquarters provide support to 7 provincial offices, i.e. Sindh-I, Sindh-II, Punjab-I, Punjab-II, KPK, Balochistan and Federal Capital Islamabad. The provincial heads of agency are called "directors". Further there are about (?) smaller offices known as wings or circles e.g. Crime, Corporate Crime, Banking Crime and Anti-Human Trafficking Circles at the provincial level. They are headed by additional and deputy directors, helped by investigation officers (I.O.) like assistant directors, inspectors, sub-inspectors and assistant sub-inspectors etc. for running of bureau business. The wings are major segments of the agency known as Anti-Corruption or Crime Wing, Cyber Crime Wing, Immigration Wing, Technical Wing, Legal Wing, Administration Wing and National Response Centre for Cyber Crimes (NR3C). Further there are branches working under the command of above-mentioned wings, viz. counter-terrorism branch (SIG), Interpol branch, legal branch, crime branch, economic crime branch, intellectual property rights branch, immigration branch, anti-human smuggling branch, PISCES branch, administration branch, implementation and monitoring branch. FIA Director Generals The FIA is headed by a Director-General (DG), appointed by the federal government, generally a very senior Police Officer of BPS 21/22, based at the Headquarters in Islamabad. The current Director-General is Sanaullah Abbasi, a PSP officer who was appointed on 9 June 2021. References Further reading External links Law enforcement in Pakistan 1975 establishments in Pakistan Pakistani intelligence agencies National Central Bureaus of Interpol Government agencies established in 1975 Pakistan federal departments and agencies Ministry of Interior (Pakistan)
956071
https://en.wikipedia.org/wiki/Windows%20domain
Windows domain
A Windows domain is a form of computer network in which all user accounts, computers, printers and other security principals are registered with a central database located on one or more clusters of central computers known as domain controllers. Authentication takes place on domain controllers. Each person who uses computers within a domain receives a unique user account that can then be assigned access to resources within the domain. Starting with Windows 2000, Active Directory is the Windows component in charge of maintaining that central database. The concept of a Windows domain is in contrast with that of a workgroup, in which each computer maintains its own database of security principals.
Configuration
Computers can connect to a domain via LAN, WAN or using a VPN connection. Users of a domain are able to use enhanced security for their VPN connection due to the support for a certification authority which is gained when a domain is added to a network; as a result, smart cards and digital certificates can be used to confirm identities and protect stored information.
Domain controller
In a Windows domain, the directory resides on computers that are configured as domain controllers. A domain controller is a Windows or Samba server that manages all security-related aspects of interactions between users and the domain, centralizing security and administration. A domain controller is generally suitable for networks with more than 10 PCs. A domain is a logical grouping of computers. The computers in a domain can share physical proximity on a small LAN or they can be located in different parts of the world. As long as they can communicate, their physical location is irrelevant.
Integration
Where PCs running a Windows operating system must be integrated into a domain that includes non-Windows PCs, the free software package Samba is a suitable alternative. Whichever package is used to control it, the database contains the user accounts and security information for the resources in that domain.
Active Directory
Computers inside an Active Directory domain can be assigned to organizational units according to location, organizational structure, or other factors. In the original Windows Server domain system (shipped with Windows NT 3.x/4), machines could only be viewed in two states from the administration tools: computers detected (on the network), and computers that actually belonged to the domain. Active Directory makes it easier for administrators to manage and deploy network changes and policies (see Group Policy) to all of the machines connected to the domain.
Workgroups
Windows Workgroups, by contrast, is the other model shipped with Windows for grouping computers in a networking environment. Workgroup computers are considered to be 'standalone' - i.e. there is no formal membership or authentication process formed by the workgroup. A workgroup does not have servers and clients, and hence represents the peer-to-peer (or client-to-client) networking paradigm, rather than the centralized client–server architecture. Workgroups are considered difficult to manage beyond a dozen clients, and lack single sign-on, scalability, resilience/disaster recovery functionality, and many security features. Windows Workgroups are more suitable for small or home-office networks.
See also
Active Directory
Security Accounts Manager (SAM)
Notes
Microsoft server technology
Windows architecture
Computer networking
18806794
https://en.wikipedia.org/wiki/Comparison%20of%20tablet%20computers
Comparison of tablet computers
This is a list of tablet computers, grouped by intended audience and form factor. Media tablets Multimedia tablets are compared in the following tables. Larger than screen Following two tables compare larger than screen multimedia tablets released in 2012 and later. Android This table compares multimedia tablets running Android operating systems. iOS This table compares multimedia tablets running iOS operating systems. Windows This table compares multimedia tablets running Windows operating systems. screen This table compares screen (multi-)media tablets released in 2012 and later. Android This table compares multimedia tablets running Android operating systems. iOS This table compares multimedia tablets running iOS operating systems. Windows This table compares multimedia tablets running Windows operating systems. Older 2011 screen and larger This table compares and larger screen (multi-)media tablets released in 2011. screen This table compares screen (multi-)media tablets released in 2011. 2010 screen and larger This table compares and larger screen (multi-)media tablets released in 2010. screen This table compares screen (multi-)media tablets released in 2010. 2009 and earlier This table compares (multi-)media tablets released in 2009 and earlier. Industrial tablets This table compares tablet computers designed to be used by professionals in various harsh environmental conditions. Most of them are rugged. Some are meant to be mounted in vehicles or used as terminals. Convertible tablets Hybrid tablets See also Tablet computer Comparison of e-book readers Comparison of netbooks Slate phone, a mobile phone form factor Comparison of portable media players Smartbook Ultra-mobile PC External links Annotated bibliography of references to gesture and pen computing Notes on the History of Pen-based Computing (YouTube) Tablet News References List Linux-based devices Computing comparisons Tablet computers
17668450
https://en.wikipedia.org/wiki/Hellenic%20American%20University
Hellenic American University
The Hellenic American University was founded in 2004 in Manchester, New Hampshire, United States, as a private degree-granting institution of higher education by an act of the New Hampshire State Legislature. The university rents office space in Nashua, New Hampshire, but its primary campus is located in Athens, Greece, in buildings it shares with the Hellenic American Union (HAU) and the Hellenic American College (HAEC), a Greek affiliate. Hellenic American University is a member of the New Hampshire College & University Council (NHCUC), a non-profit consortium of 17 public and private institutions of higher education in the state of New Hampshire. Description Hellenic American University began with a small and focused program, a Master's in Business Administration, launched in November 2004, following approval by the New Hampshire Postsecondary Education Commission (NH-PEC), now the New Hampshire Department of Education Division of Higher Education—Higher Education Commission. The university's leadership, including its first president, Chris Spirou, and most faculty members, reside in Greece. Initial funding for the university came from the Hellenic American Union (HAU), a non-profit Greek association established in 1957 with U.S. government encouragement to promote U.S.-Greek educational and cultural relations, including through English-language teaching and testing. Spirou served as president of the HAU board from 1994 until 2020, and as president of the university until 2012, when he was replaced as university president by Leonidas-Phoibos Koskos, who served concurrently as managing director of the HAU. The university shares use of the classrooms, library, cafe, and other facilities of the Union in central Athens. Article 16 of the Greek Constitution prohibits the establishment of university-level institutions by private persons. However, foreign universities can operate Greek affiliates, provided they do not use the word "university" in their name. Therefore, the Hellenic American University conducts most of its academic programs in Greece through an entity called the Hellenic American College, while degrees are awarded by the university through its Nashua offices. Over the ensuing six years, the institution evolved steadily from its initial identity as a business school to its beginnings as a fully developed university: with NH-PEC approval, it developed and implemented twelve more degree programs: a Professional Master's in Business Administration (PMBA), a Bachelor of Science in Business Administration (BSBA), a Bachelor of Science in Information Technology (BSIT), a Master of Arts in Applied Linguistics (MAAL), a Bachelor of Arts in English Language and Literature (BAELL), a Master of Science in Information Technology (MSIT), a Master of Arts in Translation (MAT), a Bachelor of Music (BM), a Bachelor of Science in Psychology (BSPSY), an Associate of Science in Enterprise Network Administration (ASENA), and a Master of Arts in Conference Interpretation (MACI). In September 2008, the university launched its first doctoral program, a Ph.D. in Applied Linguistics. In 2017 the university moved its U.S. offices from Manchester to Nashua, New Hampshire. In September 2011 the university started two new associate programs, the Associate of Science in General Engineering (ASGE) and the Associate of Science in Hospitality Management (ASHM). In fall 2012, the Master's of Science in Psychology (MSPsy) was launched. 
The State of New Hampshire Post-Secondary Education Commission has granted the Hellenic American University the authority to grant the following degrees: The Undergraduate Program at Hellenic American University incorporates the following major concentrations: Bachelor of Arts in English Language and Literature Bachelor of Science in Business Administration Bachelor of Science in Information Technology Bachelor of Music Bachelor of Science in Psychology The Associate of Science in General Engineering The Associate of Science in Hospitality Management The Graduate Programs offered at Hellenic American University are: Master of Business Administration (MBA) Professional Master of Business Administration (PMBA) Master of Science in Information Technology (MSIT) Master of Arts in Applied Linguistics (MAAL) Master of Arts in Translation (MAT) Master of Arts in Conference Interpretation (MACI). Master of Science in Psychology (MSPsy) Hellenic American University is accredited by the New England Commission of Higher Education (NECHE). References External links Official website Hellenic American College website Universities and colleges in Hillsborough County, New Hampshire Buildings and structures in Manchester, New Hampshire Education in Manchester, New Hampshire
696613
https://en.wikipedia.org/wiki/Stickies%20%28Apple%29
Stickies (Apple)
Stickies is an application for Apple Macintosh computers that puts Post-it note-like windows on the screen for the user to write short reminders, notes and other clippings. Contents are automatically stored, and restored when the application is restarted. An unrelated freeware program with the same name and functionality is available for Microsoft Windows. Similar applications (described as "desktop notes") are available for most operating systems. History In 1994, the first version of Stickies was written by Apple employee Jens Alfke and included in System 7.5. Alfke had originally developed it in his free time as Antler Notes and intended to release it as shareware, doing business as Antler Software. Apple planned to acquire it from him, but realized that they already legally owned it under the terms of his employment. During the transition to Mac OS X in 2001, Stickies was rewritten in Cocoa, and is still included in macOS, with features such as transparent notes, styled text, lists, and the ability to hold pictures. The ability to collapse note windows, which is present in all versions of Stickies, is a holdover from System 7.5's WindowShade feature. The window button layout, which is unusual for a modern macOS application, is retained from Mac OS 8. In macOS Big Sur, the icon has been changed to look like a stack of sticky notes in a rounded square design. Features The Stickies application currently supports the following usage scenarios: Float above all windows: Press Command-Option-F with a sticky note selected and it will float above any other windows that are visible Translucent: Press Command-Option-T with a sticky note selected and it will become translucent Embed other media: Add other media besides text, such as QuickTime movies, PDFs, images, just by dragging them into the sticky note. Backup data: Stickies keeps all information in a self-contained database. The database can be backed up by copying the file StickiesDatabase, which is located in a user's Library directory ("~/Library/StickiesDatabase") Search data: Press Command-F to search for words or phrases within a sticky note Create a note from selection: In a Cocoa-based application (such as the Safari web browser), select some text and press Command-Shift-Y or use the Services menu to create a new sticky note with the text selection. References External links Desktop widgets Macintosh operating systems
53199712
https://en.wikipedia.org/wiki/DXC%20Technology
DXC Technology
DXC Technology is an American multinational information technology (IT) services and consulting company headquartered in Ashburn, Virginia, U.S.
History
Creation
DXC Technology was founded on April 3, 2017, when the Hewlett Packard Enterprise Company (HPE) spun off its Enterprise Services business and merged it with Computer Sciences Corporation (CSC). At the time of its creation, DXC Technology had revenues of $25 billion, employed 170,000 people and operated in 70 countries. By June 2021, the employee count of DXC had come down to 134,000. The spinoff from Hewlett Packard Enterprise did not include two parts of the Enterprise Services segment: the Mphasis Limited reporting unit and the Communications and Media Solutions product group. In India, the company started a three-year plan to reduce the number of offices in the country from 50 to 26, and reduce headcount by 5.9% (around 10,000 employees). With about 43,000 employees (more than a third of its workforce) in India, the company is restructuring its workforce to meet its new revenue profile. In 2017, DXC split off its US public sector segment to create a new company, Perspecta Inc. In 2019, Mike Salvino was named president and CEO of DXC Technology. He previously served as group chief executive for Accenture Operations. In February 2021, French technology services and consulting firm Atos ended talks for a potential acquisition of DXC. Atos had proposed US$10 billion, including debt, for the acquisition. By June 2021, DXC had 98,000 employees, of which 47,000 were in countries such as India, the Philippines, Eastern Europe, and Vietnam.
Acquisitions
In 2017, the company completed its first acquisition, buying Tribridge, a provider of Microsoft Dynamics 365 software. In 2018, it announced additional acquisitions, including Molina Medicaid Solutions (previously part of Molina Healthcare), Argodesign and two ServiceNow partners, BusinessNow and TESM. In January 2019, DXC Technology acquired Luxoft. According to information from the SEC database, DXC Technology then owned 83% of Luxoft. The deal closed in June 2019.
Programs and sponsorships
Dandelion Program
Piloted in Adelaide, Australia, in 2014, the DXC Dandelion Program has grown to over 100 employees in Australia, working with more than 240 organizations in 71 countries to secure sustainable employment for individuals with autism. In June 2021, DXC piloted the Dandelion Program in the UK.
Sport
The company sponsors Team Penske with 2016 Series Champion and 2019 Indianapolis 500 winner Simon Pagenaud, and in 2018 became title sponsor of the IndyCar Series race DXC Technology 600. DXC is also a partner of the Australian Rugby Union team the Brumbies.
See also
List of IT consulting firms
References
External links
American companies established in 2017
Companies based in Fairfax County, Virginia
Technology companies established in 2017
Companies listed on the New York Stock Exchange
2017 establishments in Virginia
Information technology consulting firms of the United States
International information technology consulting firms
1807889
https://en.wikipedia.org/wiki/Handheld%20PC
Handheld PC
A handheld personal computer (PC) is a miniature computer typically built around a clamshell form factor and is significantly smaller than any standard laptop computer, but based on the same principles. It is sometimes referred to as a palmtop computer, not to be confused with Palmtop PC which was a name used mainly by Hewlett-Packard. Most handheld PCs use an operating system specifically designed for mobile use. Ultra-compact laptops capable of running common x86-compatible desktop operating systems are typically classified as subnotebooks. The first hand-held device compatible with desktop IBM personal computers of the time was the Atari Portfolio of 1989. Other early models were the Poqet PC of 1989 and the Hewlett Packard HP 95LX of 1991 which run the MS-DOS operating system. Other DOS-compatible hand-held computers also existed. After 2000 the handheld PC segment practically halted, replaced by other forms, although later communicators such as Nokia E90 can be considered to be of the same class. The name Handheld PC was used by Microsoft from 1996 until the early 2000s to describe a category of small computers having keyboards and running the Windows CE operating system. Microsoft's Handheld PC standard The Handheld PC (with capital "H") or H/PC for short was the official name of a hardware design for personal digital assistant (PDA) devices running Windows CE. The intent of Windows CE was to provide an environment for applications compatible with the Microsoft Windows operating system, on processors better suited to low-power operation in a portable device. It provides the appointment calendar functions usual for any PDA. Microsoft was wary of using the term "PDA" for the Handheld PC. Instead, Microsoft marketed this type of device as a "PC companion". To be classed as a Windows CE Handheld PC, the device must: Run Microsoft's Windows CE Be bundled with an application suite only found through an OEM Platform Release and not in Windows CE itself Use ROM Have a screen supporting a resolution of at least 480×240 Include a keyboard (except tablet models) Include a PC card slot Include an infrared (IrDA) port Provide wired serial and/or Universal Serial Bus (USB) connectivity HP's first displays' widths were more than a third larger than that of Microsoft's specification. Soon, all of their competition followed. Examples of Handheld PC devices are the NEC MobilePro 900c, HP 320LX, Sharp Telios, HP Jornada 720, IBM WorkPad Z50, and Vadem Clio. Also included are tablet computers like the Fujitsu PenCentra 130, and even communicators like the late Samsung NEXiO S150. In 1998 Microsoft released the Palm-size PC, which have smaller screen sizes and lack keyboards compared to Handheld PC. Palm-size PC became Pocket PC in 2000. Due to limited success of Handheld PC, Microsoft focused more on the keyboard-less Pocket PC. In September 2000, the updated Handheld PC 2000 was announced which is based on version 3.0 of Windows CE. Interest in the form factor overall quickly evaporated, and by early 2002 Microsoft were no longer working on Handheld PC, with its distinct functionality removed from version 4.0 of Windows CE. HP and Sharp both discontinued their Windows CE H/PCs in 2002, while NEC was last to leave the market in 2005. However, some manufacturers abandoned the format even before Microsoft did, such as Philips and Casio. See also References Mobile computers Windows CE devices de:Handheld PC lt:Delninis kompiuteris
10919910
https://en.wikipedia.org/wiki/Whiting%20School%20of%20Engineering
Whiting School of Engineering
The G.W.C. Whiting School of Engineering is a division of the Johns Hopkins University located on the university's Homewood campus in Baltimore, Maryland, United States.

History
Engineering at Johns Hopkins was originally created in 1913 as an educational program that included exposure to liberal arts and scientific inquiry. In 1919, the engineering department became a separate school, known as the School of Engineering. By 1937, over 1,000 students had graduated with engineering degrees. By 1946 the school had six departments.
In 1961, the School of Engineering changed its name to the School of Engineering Sciences and, in 1966, merged with the Faculty of Philosophy to become part of the School of Arts and Sciences. In 1979, the engineering programs were organized into a separate academic division that was named the G.W.C. Whiting School of Engineering. The school's named benefactor is George William Carlyle Whiting, co-founder of The Whiting-Turner Contracting Company.
Several departments at the school have been nationally and historically recognized. The Johns Hopkins Department of Biomedical Engineering is recognized as the top-ranked program in the nation. The Department of Geography and Environmental Engineering has consistently ranked as one of the top 5 programs nationally by U.S. News & World Report in recent years.
The Department of Mechanical Engineering is well known for its fundamental and historic contributions, especially in the fields of mechanics and fluid dynamics. Although it has always been a very small department, an uncharacteristically large number of highly acclaimed scholars have been associated with it over the years. These include Clifford Truesdell, Owen Martin Phillips, Jerald Ericksen, James Bell, Stanley Corrsin, Robert Kraichnan, John L. Lumley, Leslie Kovasznay, Walter Noll, K. R. Sreenivasan, Hugh Dryden, Shiyi Chen, Andrea Prosperetti, Fazle Hussain, Harry Swinney, Stephen H. Davis, Gregory L. Eyink, Charles Meneveau, Joseph Katz (professor), Lauren Marie Gardner, Gretar Tryggvason and Mohamed Gad-el-Hak. Many of the landmark papers in the field of fluid mechanics (turbulence in particular) were written using data from the Corrsin Wind Tunnel Laboratory. The wind tunnel is still in operation today. The department was also home to the school of rational mechanics. It was recently ranked as one of the top 5 departments in the nation for research activity by the National Research Council (the department was ranked 13th by the generic U.S. News & World Report rankings), and is still considered one of the main centers of fundamental research in fluid dynamics and solid mechanics.

Departments
The Whiting School contains nine departments:
Applied Mathematics & Statistics
Biomedical Engineering
Chemical and Biomolecular Engineering
Computer Science
Electrical and Computer Engineering
Civil and Systems Engineering
Environmental Health and Engineering (formerly Geography and Environmental Engineering)
Materials Science & Engineering
Mechanical Engineering

Engineering Programs for Professionals
The Engineering for Professionals (EP) program is a part-time and online program at the Whiting School. EP offers master's degree programs and courses in 21 distinct disciplines. The Johns Hopkins University first offered courses to working engineers in 1916, when it held “Night Courses for Technical Workers” in response to the potential for United States involvement in World War I.
The part-time undergraduate engineering program saw its largest enrollments in the years after World War II, when returning servicemen and women received GI Bill benefits for a college education. Until the late 1950s, part-time courses were primarily offered at the undergraduate level on the Johns Hopkins Homewood campus. In 1958, the Johns Hopkins Applied Physics Laboratory (APL) began to offer advanced technical courses at the graduate level with credit toward Johns Hopkins academic degrees under the auspices of that institution’s Evening College. By 1963, APL established a formal center for the Evening College to meet growing demand. Over the years, the number and variety of engineering and applied science courses and master’s degree programs expanded, so that by 1983 five master's degrees were offered at the APL Education Center: Applied Physics, Computer Science, Electrical Engineering, Numerical Science, and Technical Management.
In 1983, the APL-based programs came under the oversight of the re-established engineering school at Johns Hopkins, the G.W.C. Whiting School of Engineering. At that time, eight additional degree programs were added: undergraduate programs in Civil, Electrical, and Mechanical Engineering and five master's degree programs in Chemical Engineering, Civil Engineering, Environmental Engineering, Materials Engineering and Mechanical Engineering.
Johns Hopkins' professional engineering education has changed its name several times to reflect added programs, advancing technology, and a changing workforce. Its name was the Part-Time Engineering Program from 1983 to 1987, Continuing Professional Programs from 1987 to 1992, Part-Time Programs in Engineering and Applied Science from 1992 to 2004, Engineering and Applied Science Programs for Professionals from 2004 to 2008, and Engineering for Professionals from 2008 to the present. Also, several degree programs have changed their names to reflect changes in focus, and in several cases, concentrations in existing programs have become new programs in their own right. EP is accredited by the Middle States Association of Colleges and Schools.

Academic Centers and Institutes
Center for Educational Outreach
Center for Leadership Education
Johns Hopkins University Information Security Institute is the newest addition to the graduate programs affiliated with Johns Hopkins. The Institute is the "university's focal point for research and education in information security, assurance and privacy." JHUISI is the only Institute in the Whiting School with an academic degree program, offering the Master of Science in Security Informatics (MSSI).
Research Centers and Institutes
Advanced Technology Laboratory
Applied Physics Laboratory
Center for Advanced Metallic and Ceramic Systems
Center for Systems Science and Engineering
Malone Center for Engineering in Healthcare
Center for Cardiovascular Bioinformatics and Modeling
Center for Contaminant Transport, Fate, and Remediation
Center for Environmental and Applied Fluid Mechanics
Center for Hazardous Substances in Urban Environments
Center for Imaging Science
Center for Language and Speech Processing
Center for Materials Sensing and Detection
Center for Multi-Functional Appliqué
Center for Networking and Distributed Systems
Chemical Propulsion Information Analysis Center
Engineering Research Center for Computer-Integrated Surgical Systems and Technology
Institute for Computational Medicine
Institute for Data Intensive Engineering and Science
Institute in Multiscale Modeling of Biological Interactions
Institute for NanoBioTechnology
Johns Hopkins Systems Institute
Johns Hopkins University Information Security Institute
Materials Research Science and Engineering Center
Whitaker Biomedical Engineering

References

External links
School of Engineering Official Website

Engineering
Engineering schools and colleges in the United States
Engineering universities and colleges in Maryland
Educational institutions established in 1913
1913 establishments in Maryland
43246686
https://en.wikipedia.org/wiki/Battleborn%20%28video%20game%29
Battleborn (video game)
Battleborn was a free-to-play first-person shooter video game developed by Gearbox Software and published by 2K Games for Microsoft Windows, PlayStation 4 and Xbox One. The game was released worldwide on May 3, 2016.
Battleborn was a hero shooter with elements of multiplayer online battle arenas (MOBA). Players select one of several pre-designed characters with different attacks and skills, and participate in single-player or cooperative matches, or in competitive matches against other players. During matches, players gain experience to advance their character along the Helix Tree, selecting one of two new abilities or buffs with each advancement step, which allows the player to create a custom loadout for that character for the duration of that match. Furthermore, as the player completes matches, they earn randomized gear (generated similarly to the Borderlands series' randomized weapon feature) that can also be equipped as part of the loadout to provide further buffs and abilities, or purchased through microtransactions.
The game received mixed reviews upon release, with reviewers finding the complex gameplay systems ultimately deep and rewarding but difficult to learn and off-putting to new players. Battleborn was overshadowed by Blizzard Entertainment's Overwatch, another hero shooter released a few weeks later, which caused a large drop in Battleborn's player count within the month. In response, Gearbox made adjustments to pricing and downloadable content to try to draw new players to the game. In June 2017, it was transitioned to a pricing scheme comparable to a free-to-play title. Gearbox announced a year-long phased shutdown of the game's servers by January 31, 2021, with the game removed from sale in November 2019 and in-game purchases planned to be shut down by February 2020.

Gameplay
Battleborn is described as a hero shooter, primarily a first-person shooter incorporating multiplayer online battle arena elements. In any of the game's modes, the player selects one of several pre-defined hero characters that they have available, each with their own unique attributes, attacks, powers and skills, which can include casting magic and area-of-effect attacks. During character selection, the player also selects a loadout of up to three pieces of gear to bring into a match, earned from previous matches. Each piece requires a number of shards, an in-game currency collected during a match, to activate during that match, and once activated, it stays active for the remainder of the match. This equipment can boost or detract from base attributes or give additional benefits to the character. Shards can also be used to activate special turrets on maps to strategically defend points.
The hero's base combat level starts at 1 at the start of a round, and as the player defeats enemies and completes other objectives, they gain experience towards further levels. With each new level, the player can then select one of two or more perks specific to that character along that character's "Helix Tree", including unlocking the character's strongest "ultimate ability". These experience levels only apply within the current match and reset at the start of a new match. In Battleborn's meta-game, both the player's "command level" (applying across all characters for a given player) and a character-specific level can be increased based on performance in matches.
In this meta-game, new character levels unlock new perks that can be selected on the Helix Tree, alternative outfits, and other cosmetic improvements. Higher command levels unlock additional characters that the player can select from, among other benefits. New gear is earned as loot drops from the completion of a match, or through picking up loot during a match, or may be purchased in special loot chests using in-game coins earned from playing matches or from gaining command levels. The player can only store a limited number of pieces of equipment, but can sell unneeded equipment for coins to use in the meta-game. Battleborn requires a constant Internet connection to play due to the game's meta-game features.
The game featured 25 playable characters upon release and later added 5 downloadable content characters, each having different abilities and weapons; characters are broadly categorized based on their movement speed and agility, combat range and effectiveness, and the difficulty of playing the character. For example, Rath is a melee-based character who is equipped with a katana, while Thorn is a long-ranged character whose primary weapon is a bow. Characters in a supporting role, such as Miko, who specializes in healing other characters on the same team, are also playable. For some time after the game's release, not all characters were available when the player started the game; they were unlocked for play by completing missions and multiplayer games, or by raising the player's command rank. Starting with a major update in February 2017, all 25 base-game characters became available to play right after the tutorial story mission. After the "Free Trial" update in June 2017, access to all base-game heroes in the free version was granted by purchasing the "Full Game update", along with all story missions; otherwise they could be purchased separately in the Marketplace.
Battleborn includes a story-based mission mode that enables five players to cooperate to complete various missions, which include objective-driven narratives used as backstory for some of the hero characters. The five additional story missions, called Story Operations, were added as DLC; their main differences from the main story mode are the earning of OPs points, which give the player various rewards depending on the chosen character and increase the mission's difficulty, and changing dialogue for the main non-playable characters, which fully reveals the story's narrative on the tenth playthrough. There are also three main multiplayer modes in the game that pit two teams of five players each against each other (or against a team of artificial-intelligence-controlled heroes): Meltdown, a mode where each team escorts and sacrifices AI-controlled robots while destroying the robots of the enemy team; Incursion, a mode similar to most MOBAs where teams have to take the opponents' base while protecting their own base; and Capture, a mode where teams vie to seize and hold several control points on a map. Two further multiplayer modes were added later: Face-off, a mode where the main objective is hoarding masks from AI-controlled Varelsi mobs, with the team that deposits the most masks within a given time or score limit winning; and Supercharge, which unlike the other modes features 3v3 teams and is in effect a slightly different version of the Meltdown mode, with a second main objective being the capture of an advantage-giving Supercharge pad.
Battleborn initially did not support microtransactions; they were only introduced in a June 2016 update.
Players can use real money to obtain in-game credits that can be spent on skins and taunts for the various characters (some of which can already be earned through advancement with the character), but these otherwise have no direct influence on gameplay. Gearbox planned to release downloadable content that would include new skins and taunts for existing characters prior to the announcement of the game's shutdown.

Setting
The game is set in a space fantasy universe in which every species fled to a star known as Solus after a disastrous event destroyed most other planets and stars. These species were divided into different factions upon their arrival, but eventually united and cooperated with each other by sending out their best fighters, labelled the Battleborn, to fight against the Varelsi, the origin of the catastrophe.

Plot
After the leader of the Jennerit Imperium, Lothar Rendain, started a rampage to destroy all the stars in the universe, the captain of the UPR (United Peacekeeping Republics), Trevor Ghalt, formed a coalition in order to save the last star, Solus. The first mission has the player play as Mellka, an Eldrid operative who is ordered to rendezvous with Deande, a former Jennerit assassin. During this mission, Rendain offers Deande one last chance to prove her loyalty to him, but she refuses. After this mission, the player is allowed to use any of the Battleborn for the next mission, in which the player must defeat ISIC in order to make him an ally. In the third mission, the player has to escort an M7 Super Sentry and kill the Varelsi Conservator.

Development
Battleborn was announced by Gearbox Software and 2K Games and revealed by Game Informer on July 8, 2014. It is the first original game developed by Gearbox Software since the release of Borderlands in 2009, and is also claimed to be "the most ambitious video game that Gearbox has ever created" and a "genre-fused" video game by Gearbox Software's president, Randy Pitchford. In a 2017 interview, Pitchford said that prior to Battleborn's development, the principal way to market and promote first-person shooters was to emphasize what abilities the player-character had, which made elements like a detailed game world or narrative a secondary consideration. This led to them thinking about creating a game with "a wide spectrum" of characters and abilities which could then be used in large-player game modes. Pitchford recognized that they already had experience creating a diverse array of characters through the Borderlands series, and that this roster-of-heroes approach was gaining popularity in multiplayer online battle arenas. With Battleborn, Pitchford wanted to take the combination of first-person shooter and role-playing elements they had from Borderlands, and mix in the range of heroes so as to create a new genre, what has become known as the "hero shooter". Several gameplay elements from Brothers in Arms: Furious 4, another project from Gearbox which was cancelled in July 2015, were transferred to the game.
A closed technical test for the game, which allowed Gearbox to test the multiplayer servers and alter the balance between characters, was held on October 29, 2015. A beta for the game was released on April 8, 2016, coming to the PlayStation 4 first, before other platforms. Battleborn's PC open beta began on April 13, 2016 and lasted until April 18, 2016. Players were able to participate in both story mode and two competitive multiplayer modes: Incursion and Meltdown. More than two million players participated in the beta.
The game was set to be released worldwide for Microsoft Windows, PlayStation 4, Xbox One on February 9, 2016, but was later delayed to May 3, 2016. The game's graphics are inspired by computer-generated imagery like the movies produced by Pixar, as well as anime. When creating the game's 2D graphics, the team hired Michel Gagné to work on the 2D effects of the game's maps and characters' abilities. The team also drew inspirations from a variety of fighting games, multiplayer online battle arena games, role-playing games and toys from the 1980s. In addition to the standard version, players can purchase the Digital Deluxe Edition, which includes the game's Season Pass and cosmetic items. Five additional characters are set to be released for the game upon release for free, and five different paid packs, which includes additional story content, are also scheduled to be released after the game's launch. Rumors surrounding the game turning free-to-play started to circulate around the internet, but Gearbox president Randy Pitchford declined any plans of turning the game free-to-play, although he did mention ideas of releasing a free "trial version". On October 3, 2016 2K Games announced the addition of a competitive game mode called Face-Off, to be released for free to all players who own Battleborn on October 13, 2016. The game mode launched alongside the first downloadable content Story Operation, titled "Attikus and the Thrall Rebellion". In June 2017, the game was updated to add support in for its "Free Trial" mode. This mode effectively makes the game a free-to-play title; at no cost, players can download the game, play a rotating selection of the game's heroes in any of the game's public multiplayer modes, and earn in-game currency towards permanently unlocking other characters and upgrades; the Free Trial does not include the game's story missions, nor allows these players to set up or participate in private matches. Players that already owned the game are designated as "Founders" and were given in-game rewards. Players still can purchase the game, unlocking most of the roster of characters and gaining access to private and story-based game modes. Reception Battleborn received "mixed or average" reviews, according to video game review aggregator Metacritic. Destructoid awarded it a score of 6 out of 10, saying that the game is "Slightly above average or simply inoffensive. Fans of the genre should enjoy this game, but a fair few will be left unfulfilled." GameSpot awarded it a score of 7 out of 10, saying "With so many moving parts that never quite gel, I found plenty of things to love but just as much to feel confused by and ambivalent about." IGN awarded it a score of 7.1 out of 10, saying "Battleborn's fun heroes and leveling will keep you hooked despite a lack of content." Hardcore Gamer awarded it a score of 4.5 out of 5, saying "Battleborn has done what I would have previously thought was impossible: it has kept me interested in its multiplayer. I typically grow bored with adversarial multiplayer after about an hour or two, yet I have spent so much time with this title already and want to keep going." PlayStation LifeStyle awarded it a score of 8 out of 10, saying "If Borderlands and the MOBA genre could have a baby, I imagine it would look something like Battleborn. 
Gearbox Software’s signature style shines here, even if the humor falls flat most of the time" In looking back at the game a year after its release, Destructoids Darren Nakamura felt Battleborn was an excellent game once a player had put time to acquire a larger number of playable characters to select from, and had amassed gear that created unique and powerful loadouts for specific characters, but a player would need to work through tens to hundreds of hours of gameplay to acquire these elements, making it off-putting to new players. Battleborn was the best selling retail game in its week of release in the UK. It fell, however, to being the 12th best selling retail game in the UK the next week. According to director Randy Varnell, the launch week sales of the game is similar to that of the first Borderlands, which had gone on to sell over 7.8 million units. It was the fourth highest-selling game in the United States for May 2016 according to NPD Group. It has grossed $18 million. Decline and shutdown The game's player base quickly declined after release, primarily due to the release of another hero shooter, Overwatch by Blizzard Entertainment, on May 24, 2016. Gearbox had already been developing Battleborn and had fixed their expected release date when Blizzard announced Overwatch for release within a few weeks of Battleborn. Though they had considered trying to change the release date as Blizzard had a larger promotional budget that could easily overwhelm 2K Games, Pitchford opted to stay to what they planned and instead make sure the game's release went smoothly and to highlight how Battleborn differed from Overwatch. In a September 2017 interview, Pitchford believed that the fallout in sales from Battleborn was not necessarily due to the presence of Overwatch, but that people had made comparisons between the two games which reflected negatively on Battleborn and cost them sales. By July 2016, the number of concurrent players on PC had dropped below 1000, compared to more than 12,000 at the launch of the game, and Destructoid reported that at the game's first anniversary, the number of concurrent players sometimes dropped below 100 during off-peak hours, making it difficult to launch a match. Although Take-Two revealed that the game did not meet their sales expectations, 2K announced their intention to continue to support the game through add-on content and virtual currency. An industry rumour in September 2016 suggested that the game would soon switch to a free to play model, which would follow a similar path that 2K Games' Evolve had done in July 2016 and subsequently saw an increase in player base size. This rumor was later refuted by Gearbox president Randy Pitchford. Pitchford claimed they have seen 3 million unique players across all systems and that "we're okay. I'm not freaking out. We're fine." in terms of sales. On June 6, 2017, Battleborn released a patch that introduced the "Free Trial" of the game, effectively changing the game to a free-to-play model despite Pitchford's comments. This introduced several changes to the game, such as a character rotation, a founder's pack, as well as unlockable characters and cosmetics. By September 2017, Gearbox announced that after the release of the game's final update, released on October 23, 2017, their support for the title would enter "maintenance mode", effectively shutting down any further development but keeping a skeleton team to maintain the game's servers and fix any critical bugs that may be found. 
Gearbox announced in November 2019 that the game's servers would be completely shut down by January 2021, rendering the game unplayable. As part of the closeout of the game, the game was removed from sale on digital storefronts, and Gearbox announced plans to disable the in-game store in February 2020. The server shutdown was originally scheduled for January 25, 2021. 2K Support later announced that this was rescheduled to January 31, 2021, after which the game became unplayable.

References

External links

2016 video games
First-person shooters
Free-to-play video games
Gearbox Software games
Hero shooters
Inactive multiplayer online games
Multiplayer and single-player video games
Multiplayer online battle arena games
Online-only games
PlayStation 4 games
Products and services discontinued in 2021
Science fantasy video games
Split-screen multiplayer games
Take-Two Interactive games
Unreal Engine games
Video games scored by Cris Velasco
Video games developed in the United States
Video games featuring female protagonists
Windows games
Xbox One games
PlayStation 4 Pro enhanced games
2582359
https://en.wikipedia.org/wiki/Homeworld
Homeworld
Homeworld is a real-time strategy video game developed by Relic Entertainment and published by Sierra Studios on September 28, 1999, for Microsoft Windows. Set in space, the science fiction game follows the Kushan exiles of the planet Kharak after their home planet is destroyed by the Taiidan Empire in retaliation for developing hyperspace jump technology. The survivors journey with their spacecraft-constructing mothership to reclaim their ancient homeworld of Hiigara from the Taiidan, encountering a variety of pirates, mercenaries, traders, and rebels along the way. In each of the game's levels, the player gathers resources, builds a fleet, and uses it to destroy enemy ships and accomplish mission objectives. The player's fleet carries over between levels, and can travel in a fully three-dimensional space within each level rather than being limited to a two-dimensional plane. Homeworld was created over two years, and was the first game developed by Relic. Studio co-founders Alex Garden and Luke Moloney served as the director and lead programmer for the game, respectively. The initial concept for the game's story is credited to writer David J. Williams, while the script itself was written by Martin Cirulis and the background lore was written by author Arinn Dembo. The music of the game was written by composer Paul Ruskay as the first title from his Studio X Labs, with the exceptions of Samuel Barber's 1936 Adagio for Strings, considered the defining theme of the game, and a licensed track from English rock band Yes, "Homeworld (The Ladder)". Homeworld is listed by review aggregator Metacritic as the highest rated computer game of 1999, and the fourth-highest on any platform for the year. Critics praised the game's graphics, unique gameplay elements, and multiplayer system, though opinions were divided on the game's plot and high difficulty. The game sold over 500,000 copies in its first six months, and received several awards and nominations for best strategy game of the year and best game of the year. A release of the game's source code in 2003 sparked unofficial ports to Mac OS X and Linux, and three more games in the Homeworld series have been produced: Homeworld: Cataclysm (2000), Homeworld 2 (2003), and Homeworld: Deserts of Kharak (2016). Gearbox Software purchased the rights to the series from then-owners THQ in 2013, and released a remastered collection of Homeworld and Homeworld 2 in 2015 for Windows and OS X which was also highly regarded. Gearbox announced the fifth game in the series, Homeworld 3, on August 30, 2019; the game development is being crowdfunded through Fig, and is slated for a Q4 2022 release. Gameplay Homeworld is a real-time strategy game set in space. Gameplay, as in most real-time strategy titles, is focused on gathering resources, building military forces, and using them to destroy enemy forces and accomplish an objective. The game includes both single-player and multiplayer modes; the single-player mode consists of one story-driven campaign, broken up into levels. In each level, the player has an objective to accomplish before they can end the level, though the ultimate objective of the mission can change as the level's story unfolds. Between each of the 16 levels is a hand-drawn, black-and-white cutscene with narrative voiceovers. The central ship of the player's fleet is the mothership, a large base which can construct other ships; unlike other spacecraft, in the single-player campaign the mothership is unable to move. 
Present in each level are stationary rocks, gas clouds, or dust clouds, which can be mined by specialized harvesting ships (resource collectors) which then empty their loads at the mothership in the form of "resources", the game's only currency. Resources can be spent by the player on building new ships, which are constructed by the mothership. Buildable ships come in a variety of types, which are discovered over the course of the game. They include resource collectors, fighters and corvettes, frigates, destroyers, and heavy cruisers, as well as specialized non-combat ships such as research vessels and repair corvettes. Fighter ships need to dock with support ships or return to the mothership periodically to refuel, while salvage corvettes can capture enemy ships and tow them to the mothership to become part of the player's fleet. In some levels, new ship types can be unlocked by capturing an enemy ship of that type, through research performed at the research vessel, or through plot elements. At the beginning of the campaign, the player may select between controlling the "Kushan" or "Taiidan" fleet; this affects the designs of the ships and changes some of the specialized ship options, but has no effect on the plot or gameplay. Each level's playable area is a sphere, bisected by a circular plane. Ships can be directed to move anywhere in that sphere, either singularly or in groups. The game's camera can be set to follow any ship and view them from any angle, as well as display the ship's point of view. The player may also view the "Sensors Manager", wherein they can view the entire game map along with all visible ships. Ships can be grouped into formations, such as wedges or spheres, in order to provide tactical advantages during combat with enemy ships. Non-specialized ships are equipped with weapons to fire upon enemy ships, which include ballistic guns, beam weapons, and missiles. As a ship is damaged by weapons its health bar depletes, visual effects such as fire and smoke are added, and it can eventually explode. When all mission objectives are completed, the player is given the option to make a hyperspace jump to end the level. This may be postponed in order to gather more resources or build more ships. When the hyperspace jump is initiated, all fighters and corvettes return to the mothership while larger ships line up next to it, and blue rectangles, or hyperspace gates, pass over the ships, and all ships are brought to the next level. The player retains their fleet between levels, and the difficulty of each mission is adjusted to a small extent based on how many ships are in the player's fleet at the beginning of each level. In multiplayer games, the objective is typically to destroy the enemy mothership(s) and any carriers, though other battle-oriented victory conditions are available. In multiplayer mode, the mothership is capable of slow movement, and all research options permitted by the map are available via a technology tree, rather than dependent on a plot point. Multiple maps are available, as are options to turn off the need to research technologies or fuel consumption for smaller ships. Plot A century prior to the start of the game, the Kushan, humanoid inhabitants of the desert planet Kharak, discovered a spaceship buried in the sands, which holds a stone map marking Kharak and another planet across the galaxy labelled "Hiigara", meaning "home". The discovery united the clans of Kharak, who had previously determined that they were not indigenous to the planet. 
Together, they spent the next century developing and building a giant mothership that would carry 600,000 people to Hiigara, with neuroscientist Karan S'jet neurally wired into the ship as Fleet Command to replace an unsustainably large crew. The game opens with the maiden voyage of the mothership, testing the hyperspace drive which brings the fleet to a new destination by faster than light travel. Instead of the support ship that was expected to be there, the mothership finds a hostile alien carrier. After driving them off, the mothership returns to Kharak, to discover that the planet has been razed by another alien fleet, and that only the 600,000 migrants in suspended animation have survived. A captured enemy captain claims that the Kharak genocide was the consequence of their violation of a 4,000-year-old treaty between the interstellar Taiidan Empire and the Kushan, which forbade the latter from developing hyperspace technology. After destroying the remnants of both alien fleets, the nascent Kushan fleet sets out for Hiigara, intent on reclaiming their ancient homeworld. Their multi-stage journey across the galaxy takes them through asteroid fields, a giant nebula, a ship graveyard and several imperial outposts. Along the way, they fight other descendants of their Hiigaran ancestors who have started worshipping a nebula which conceals them as a holy place, and who do not allow outsiders to leave due to fear of discovery. They also meet the Bentusi, a race of traders, who sell them advanced technology. After discovering that the Bentusi have given aid to the exiles, the empire attempts to destroy them, but are stopped by the Kushan fleet. The Bentusi then reveal that the Kushan had once ruled their own empire, before being destroyed by the Taiidan, and were exiled from Hiigara. In gratitude for the Kushan's intervention, they promise to summon the Galactic Council to recognize their claim to Hiigara. As their journey continues, the Kushan fleet gives sanctuary to the rebel imperial captain Elson. After helping him access a rebel communication network, he provides information on the defenses around Hiigara. In a final battle above Hiigara, he arrives with a rebel fleet to help fight the Imperial fleet led by the emperor himself. The emperor manages to knock Karan into a coma via her neural connection with the mothership, but the combined Kushan and rebel fleets defeat the emperor regardless. The Galactic Council arrives shortly thereafter and confirms the Kushan's claim to Hiigara, a lush world in contrast to the desert planet of Kharak. When the Kushan make landfall, Karan insists that she be the last one to set foot on the planet. Development Relic Entertainment was founded in Vancouver, Canada, on June 1, 1997, and began work on Homeworld as their first title. Relic co-founders Alex Garden and Luke Moloney served as the director and lead programmer for the game, respectively, while Erin Daly was the designer and Aaron Kambeitz the lead artist. Garden was 22 years old when he founded the company. Writer David J. Williams is credited with the original story concept, while the script itself was written by Martin Cirulis and the background lore was written by author Arinn Dembo. Cirulis and Dembo, credited jointly as "Marcus Skyler", were selected by the publisher, Sierra Studios, partway through development to expand the story concept of Relic and Williams. Sierra agreed to publish the game early in development based on, according to Garden, "two whiteboard presentations and no demo". 
The development of the game took over two years; the game systems were largely complete by the final eight months, which Relic spent polishing and improving the game, including adding the whole-map Sensors Manager view. In a February 1999 interview, Garden said the game's testers had found it to be much harder to play than it was for the developers, leading to the addition of features like short briefings at the beginning of levels to explain new concepts. The game was initially expected to be released at the end of 1998; Garden stated in a 1999 interview that the team found creating the core game itself much easier than getting it to the quality level they wanted, and that if they had known how difficult it was going to be they may have chosen not to do the project. He claimed that Sierra did not put much pressure on the studio to release the game before it was ready, and that Relic felt much more pressure from impatient fans. Several ideas for the game, including ship customization, convoy routes, and different unit types for the Kushan and Taiidan fleets were cut during development as they could not be done well enough for the project. Relic did not specifically set out to create a real-time strategy game; Garden and Relic were primarily focused on making a game with exciting large-scale space battles, and chose the genre in order to support that. As a result, they did not try to make innovative gameplay changes in the real-time strategy genre, but instead worked on making implementing the genre in a fully 3D space to make the space battles they envisioned. Garden told Computer Games Magazine in 1998 that "there's no sort of design philosophy behind it. The fact that it's real-time strategy was almost a fluke." The original Star Wars film trilogy was one of the game's primary inspirations, along with the 1970s television series Battlestar Galactica: in a 1998 interview with PC Zone, Garden stated that his original concept for the game was "a 3D game that looked like you were watching Star Wars but had a storyline like Battlestar Galactica". He drew further inspiration from correcting what he felt were the limitations of the first-person space flight game Star Wars: X-Wing vs. TIE Fighter. He felt that having the player control a ship from a cockpit view detracted from the feeling of the overall battles, and so chose to have the player control a whole fleet from an external view. According to art director Rob Cunningham, the visual design of the game was inspired by the sci-fi art by Peter Elson, Chris Foss, and John Harris, as well by Star Wars movies and Masamune Shirow. The vertical design of the mothership and the horizontal galaxy in the background were intended to give the player a visual alignment to orient themselves in 3D space. The focus on the combat drove several other areas of focus for the development team: according to Garden, Relic spent considerable effort on making high-quality ship models, computer-controlled flight tactics, and maneuvers like immelmann turns because "everyone is going to zoom right in on the first battle, just to watch". They felt that the advanced unit-level maneuvers in the context of large fleet-wide battles would increase immersion in the game for players, and create a "Star Wars feeling" to the battles. 
To accentuate this, instead of recording stock sound files for units to use when maneuvering, the team instead recorded several thousand smaller clips which are combined to describe exactly which ships were taking which action, and are then modified by the game's audio engine to reflect the position and motion of the ships relative to the player's camera. The working title for the game was Spaghetti Ball, chosen for Garden's early vision of the battles in the game as a mass of tangled flight paths as ships maneuvered around each other, contained within a larger sphere of available space. Although early previews expressed concern about the difficulties of controlling so many ships in 3D space, according to Garden the team felt that moving the game's camera and controlling the fleet were two wholly separate actions, and by treating them as such it made designing them and using them much simpler. The sound design, audio production, and music composition for the game was contracted to composer Paul Ruskay and Studio X Labs, which he founded in February 1999 after starting on Homeworld in October 1998. In addition to his original music compositions, Ruskay used a recording of the 1936 piece for string orchestra by Samuel Barber, Adagio for Strings, in an early scene in the game when the player finds the destruction of Kharak. The piece, in turn, became a central theme on the soundtrack. Adagio for Strings was proposed by Garden, who heard it on the radio and felt it fit the mood of the game "perfectly"; Ruskay had a new recording of the piece made by a University of California choir, as the license fees for the recording Garden heard were too high for the studio. The closing song of the game, "Homeworld (The Ladder)", was composed for the game by English rock band Yes. Yes were in Vancouver at the time recording The Ladder, and learned of the in-development Homeworld. Lead singer Jon Anderson was interested in writing something based on a videogame, and wrote the lyrics to fit with the game. The soundtrack was released in a 13-track album that was bundled with the Game of the Year Edition of Homeworld in May 2000, and again in a 37-track Homeworld Remastered Original Soundtrack digital album alongside the Homeworld Remastered Collection in March 2015. Reception Homeworld was highly regarded by critics upon release, and is listed by review aggregator Metacritic as the highest rated computer game of 1999, and the fourth-highest on any platform for the year. The graphics were highly praised; Michael Ryan of GameSpot claimed it had "some of the most impressive graphics ever", while Jason Levine of Computer Games Magazine said that "no game—ever—has made space itself look like this". Eurogamer's review praised the "big, brash and colourful" backgrounds, which was echoed by Levine and John Keefer of GameSpy. Multiple reviewers, such as Vincent Lopez of IGN, also praised the detail and variety of the spaceships, and Jason Samuel of GamePro noted that Relic was able to use their graphics engine to create the game's intricate cutscenes rather than relying on prerecorded videos. Greg Fortune of Computer Gaming World added that the rotatable camera was one of the "real joys of the game", allowing the player to view the action from any angle or ship's viewpoint and "creating an impressively sweeping cinematic feel". 
The sound and music were also lauded; Levine claimed the sound was "on par with the graphics", praising how it changed when the player zoomed into or away from a battle, while Eurogamer and Lopez applauded the "atmospheric" soundtrack for creating the mood of the game. The gameplay advances were also highly praised by critics: Lopez claimed that "Relic not only tackled space, but may have just changed strategy games forever." Reviewers praised the full 3D nature of the game as elevating it from its otherwise standard real-time strategy gameplay systems; Levine said that the 3D was what made the game unique, and Ryan explicitly termed the base gameplay as "fairly similar to any tried-and-true real-time strategy game" but said that the 3D elements and connected mission structure turned it into a "different breed" of game. Fortune, however, focused instead on the difficulty of the missions themselves, praising the challenge and variety of tactics needed to complete the game and lauding it for having "some of the best fleet battles ever seen in a computer game". Levine, Ryan, Keefer, and Lopez all noted the connected mission structure as an innovation in the genre: by limiting the resources to build ships and pulling the same fleet through the missions, Homeworld converted what would usually be a set of disconnected missions into "chapters" of a continuous game. They felt this connected the player to their fleet as more than disposable units, and added a level of strategy to the game. Keefer and Levine did note, however, that this added a great deal of difficulty to the game, especially for more casual players, as the player could make decisions in earlier levels that rendered later ones very difficult to complete without an explicit difficulty level to counteract it. Ryan and Keefer also considered the 3D movement disorienting at first, though Levine and Samuel said the controls were "as easy as possible". Homeworlds single-player plot received mixed reviews; Lopez claimed it would keep players "rapt with attention", Samuel summarized it as a "superb story", and Levine said that it was "the first computer game to capture the grandeur and epic feel of the Star Wars movies". The Eurogamer review, however, considered it only "(mostly) engaging", and Keefer said that "although the story line is fluid and intriguing", for each mission "the overall theme is the same: Kill the enemy", while Ryan criticized the length for such a "meager single-player game". The multiplayer gameplay was praised, especially against human opponents: Levine stated that "multiplayer in Homeworld is a joy", while the Eurogamer review called it the game's strongest part. The Eurogamer review, along with Samuel, also called the multiplayer mode more difficult and engaging than the single-player game. Legacy Homeworld was released to strong sales and won multiple awards; it sold more than 250,000 copies in its initial weeks, and over 500,000 in its first six months. In the United States alone, the game's sales surpassed 95,000 copies by the end of 1999, while sales in Germany reached 60,000 units by April 2000. It debuted in third place on Germany's computer game sales rankings for October 1999, before dropping to 25th, 31st and 32nd in the following three months, respectively. Homeworld also received a "Silver" sales award from the Entertainment and Leisure Software Publishers Association (ELSPA), indicating sales of at least 100,000 copies in the United Kingdom. 
Homeworld won Best Strategy Game at the 1999 Game Critics Awards prior to release, and was nominated for Computer Game of the Year and Computer Strategy Game of the Year at the 2000 Academy of Interactive Arts & Sciences Interactive Achievement Awards. It was awarded Game of the Year by IGN and PC Gamer, won Strategy Game of the Year by Computer Gaming World and was nominated for the same by Computer Games Magazine, and won Best Original Storyline and Best Original Score at the 2000 Eurogamer Gaming Globes awards. It also won PC Gamers "Special Achievement in Music" and "Special Achievement in Art Direction" prizes. Maximum PC claimed in 2003 that Homeworld "did what no game had successfully done before: create a truly three-dimensional space-combat strategy game". HardwareMAG similarly claimed the "revolutionary" and "ground-breaking" status of the game in the real-time strategy genre in 2004, as did Computer Gaming World in 2003. Homeworld inspired a series of real-time strategy games in the same universe, beginning in September 2000 when Sierra Studios released a stand-alone expansion by Barking Dog Studios, Homeworld: Cataclysm. Taking place 15 years after the events of Homeworld, the story centers on Kiith Somtaaw—a Hiigaran clan—and its struggles to protect Hiigara from a parasitic entity known as the Beast. A full sequel, Homeworld 2, was developed by Relic Entertainment and released by Sierra in late 2003. The game, set a century after the original Homeworld, pits the Hiigarans against a powerful, nomadic raider race called the Vaygr. A fourth game in the series, Homeworld: Deserts of Kharak, was developed by Blackbird Interactive and published by Gearbox Software in 2016. A prequel to the series, it is set on the planet of Kharak instead of in space, and features a war between Kushan clans during the discovery of the buried spaceship from Homeworld. Gearbox announced the fifth game in the series, Homeworld 3, on August 30, 2019; the game development is being crowdfunded through Fig, and is slated for a Q4 2022 release. Additionally, in 2003 Relic released the source code for Homeworld under license to members of the Relic Developer Network. The source code became the base of several source ports to alternative platforms, such as Linux. Remaster In 2004, Relic Entertainment was bought by THQ, which confirmed in 2007 that it had acquired the rights to the series from Relic and Sierra. No further game in the series was made before THQ declared bankruptcy in 2013; on April 22, 2013, Gearbox Software announced that they had bought the rights to the series at auction for US$1.35 million. On July 19, 2013, Gearbox announced the production of remakes of Homeworld and Homeworld 2 as Homeworld HD, later renamed Homeworld Remastered Collection. The following month, collection producer Brian Burleson stated that Gearbox had purchased the property with the express purpose of making a collection including the original and remastered versions of the game. He also noted that neither game had code in a releasable or playable state when purchased, and they ended up recreating many of the original development tools with the assistance of the Homeworld mod community. A later posting by a developer at Gearbox further praised the mod community for their assistance in getting the original code to be playable on modern computers. 
The stand-alone expansion Homeworld: Cataclysm was not announced for a remake, despite the outspoken interest of Gearbox, as they were unable to find the original source code for the game. Released digitally on February 25, 2015, for Windows computers by Gearbox and on August 6, 2015, for OS X by Aspyr Media, the collection includes the original and remastered versions of the two games. A retail edition of the PC version of the game was released by Ubisoft on May 7, 2015. In addition to compatibility fixes for modern computers, the "classic" version of Homeworld removes local multiplayer and the licensed Yes song; the "remastered" version adds a new game engine for the two games, and upgraded visuals, graphical effects, models, and sound. It also initially removed some functionality not present in Homeworld 2, such as the fuel system, ballistic projectile modeling, and tactical ship formations; some of these were restored in a 2016 patch. As of February 2017, Steam Spy estimates that over 700,000 copies of the Homeworld Remastered Collection have been sold on the Steam distribution platform. The remastered version was warmly received by critics; reviewers such as IGN's Dan Stapleton and Game Informers Daniel Tack praised the story as still "fantastic" and "emotional", while Kevin VanOrd of GameSpot claimed that the gameplay was still entertaining 16 years later, and Tom Senior of PC Gamer applauded Gearbox's visual updates to the game. Reviewers were more mixed on the gameplay changes included as part of the upgrade to a new game engine; VanOrd noted that some of the changes did not fit with the original game, while Tack made note of several bugs due to the gameplay modifications. Overall, the updated game was highly praised, with Senior concluding that "Homeworld is simply incredible and everyone should play it." References External links (archived) Official website for Homeworld Remastered Collection Homeworld Embracer Group franchises 1999 video games MacOS games Multiplayer and single-player video games Real-time strategy video games Relic Entertainment games Space opera video games Sierra Entertainment games Video games with Steam Workshop support Video games developed in Canada Video games scored by Paul Ruskay Video games with available source code Video games with expansion packs Windows games
3543438
https://en.wikipedia.org/wiki/Business%20process%20interoperability
Business process interoperability
Business process interoperability (BPI) is a property referring to the ability of diverse business processes to work together, that is, to "inter-operate". It is a state that exists when a business process can meet a specific objective automatically, utilizing essential human labor only. Typically, BPI is present when a process conforms to standards that enable it to achieve its objective regardless of ownership, location, make, version or design of the computer systems used.

Overview
The main attraction of BPI is that a business process can start and finish at any point worldwide regardless of the types of hardware and software required to automate it. Because of its capacity to offload human "mind" labor, BPI is considered by many to be the final stage in the evolution of business computing. BPI's twin criteria of specific objective and essential human labor are both subjective.
The objectives of BPI vary, but tend to fall into the following categories:
Enable end-to-end straight-through processing ("STP") by interconnecting data and procedures trapped in information silos
Let systems and products work with other systems or products without special effort on the part of the customer
Increase productivity by automating human labor
Eliminate redundant business processes and data replications
Minimize errors inherent in manual processes
Introduce mainstream enterprise software-as-a-service
Give top managers a practical means of overseeing processes used to run business operations
Encourage development of innovative Internet-based business processes
Place emphasis on business processes rather than on the systems required to operate them
Strengthen security by eliminating gaps among proprietary software systems
Improve privacy by giving users complete control over their data
Enable realtime enterprise scenarios and forecasts
Business process interoperability is limited to enterprise software systems in which functions are designed to work together, such as a payroll module and a general ledger module that are part of the same program suite, and to controlled software environments that use EDI. Interoperability is also present between incompatible systems where middleware has been applied. In each of these cases, however, the processes seldom meet the test of BPI because they are constrained by information silos and the systems' inability to communicate freely with each other.

History
The term "Business process interoperability" (BPI) was coined in the late 1990s, mostly in connection with the value chain in electronic commerce. BPI has been utilized in promotional materials by various companies, and appears as a subject of research at organizations concerned with computer science ontologies.
Despite the attention it has received, business process interoperability has not been applied outside of limited information system environments. A possible reason is that BPI requires universal conformance to standards so that a business process can start and finish at any point worldwide. The standards themselves are fairly straightforward: organizations use a finite set of shared processes to manage most of their operations. Bringing enterprises together to create and adopt the standards is another matter entirely. The world of management systems is, after all, characterized by information silos.
Moving away from silos requires organizations to deal with cultural issues such as ownership and sharing of processes and data, competitive forces and security, not to mention the effect of automation on their work forces. While the timetable for adoption of BPI cannot be predicted, it remains a subject of interest in organizations and think tanks alike.

Testing for BPI
To test for BPI, an organization analyzes a business process to determine if it can meet its specific objective utilizing essential human labor only. The specific objective must be clearly defined from start to finish. Start and finish are highly subjective, however. In one organization, a process may start when a customer orders a product and finish when the product is delivered to the customer. In another organization, the same process may be preceded by product manufacture and distribution, and may be followed by management of after-sale warranty and repairs.
Essential human labor includes:
Tasks that must be performed by people because no practical means of automation is available. Examples include fighting a fire, driving a bus and preparing a meal.
Tasks that, in the opinion of management, are more effectively performed by people. Examples include answering a telephone call with a human voice and offering investment advice in person.
Tasks where the cost of automation is greater than the cost of human labor.
To qualify for BPI, every process task must be taken into account from start to finish, including the labor that falls between the cracks created by incompatible software applications, such as gathering data from one system and re-inputting it in another, and preparing reports that include data from disparate systems. The process must flow uninterrupted regardless of the underlying computerized systems used. If non-essential human labor exists at any point, the process fails the test of BPI.

Achieving BPI
To assure that business processes can meet their specific objectives automatically, utilizing essential human labor only, BPI takes a "service-oriented architecture" (SOA) approach, which focuses on the processes rather than on the technologies required to automate them. A widely used SOA is an effective way to address the problems caused by the disparate systems at the heart of each information silo. SOA makes practical sense because organizations cannot be expected to replace or modify their current enterprise software to achieve BPI, regardless of the benefits involved. Many workers' jobs are built around the applications they use, and most organizations have sizable investments in their current information infrastructures, which are so complex that even the smallest modification can be very costly, time-consuming and disruptive.
Even if software makers were to unite and conform their products to a single set of standards, the problem would not be solved. Besides software from well-known manufacturers, organizations use a great many legacy software systems, custom applications, manual procedures and paper forms. Without SOA, streamlining such a huge number of disparate internal processes so that they interoperate across the entire global enterprise spectrum is simply out of the question.
To create an SOA for widespread use, BPI relies on a centralized database repository containing shared data and procedures common to applications in every industry and geographical area.
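As a rough illustration of the test described under "Testing for BPI" above, the following Python sketch checks whether every task in a process is either automated or classified as essential human labor. It is not drawn from any BPI standard or tool; the Task structure and the example process are hypothetical and chosen only for illustration.

# Illustrative sketch of the BPI test described above: a process qualifies
# only if every task from start to finish is either automated or counts as
# essential human labor. The Task structure is hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    automated: bool               # performed without human labor
    essential_human_labor: bool   # no practical automation exists, people are
                                  # judged more effective, or automation costs
                                  # more than the labor it would replace

def meets_bpi_test(process: list[Task]) -> bool:
    """Return True if the process contains no non-essential human labor."""
    return all(t.automated or t.essential_human_labor for t in process)

order_to_delivery = [
    Task("customer places order online", automated=True, essential_human_labor=False),
    Task("re-key order into billing system", automated=False, essential_human_labor=False),
    Task("deliver product to customer", automated=False, essential_human_labor=True),
]

# The manual re-keying step is non-essential human labor (labor "between the
# cracks" of incompatible systems), so this process fails the test of BPI.
print(meets_bpi_test(order_to_delivery))   # False

In this sketch, the re-keying task is exactly the kind of labor the section above singles out: work created by incompatible applications rather than by the business objective itself.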
In essence, the repository serves as a top application layer, enabling organizations to export their data to its distributed database and obtain the programs they need by simply logging on via a portal. To assure security and commercial neutrality, the repository conforms to standards promulgated by the community of BPI stakeholders. Organizations and interest groups that wish to achieve business process interoperability begin by establishing one or more BPI initiatives. See also Information silo, the antithesis of BPI References Further reading O. Adam et al. (2005). A Collaboration Framework for Cross-enterprise Business Process Management. Paper First International Conference on Interoperability of Enterprise Software and Applications, INTEROP-ESA'2005. Khalid Belhajjame, Marco Brambilla. Ontology-Based Description and Discovery of Business Processes. In Proceedings of the 10th Workshop on Business Process Modeling, Development, and Support (BPMDS) at CAiSE 2009, Amsterdam, June 2009, Springer LNBIP, vol. 29, pp. 85–98. Kurt Kosanke (2005). "INTEROP-ESA’2005, Summary of Papers" Richard A. Martin (2004). "A Standards’ Foundation for Interoperability" Paper 2004 International Conference on Enterprise Integration and Modelling Technology. 9–11 October 2004. University of Toronto, Canada. M.P. Papazoglou et al. (2000) "Integrated value chains and their implications from a business and technology standpoint," Decision Support Systems 29 2000 p. 323–342 External links Center for E-Commerce Infrastructure Development Business software Enterprise modelling Interoperability Business process
1637868
https://en.wikipedia.org/wiki/Actor%20model
Actor model
The actor model in computer science is a mathematical model of concurrent computation that treats actor as the universal primitive of concurrent computation. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other indirectly through messaging (removing the need for lock-based synchronization). The actor model originated in 1973. It has been used both as a framework for a theoretical understanding of computation and as the theoretical basis for several practical implementations of concurrent systems. The relationship of the model to other work is discussed in actor model and process calculi. History According to Carl Hewitt, unlike previous models of computation, the actor model was inspired by physics, including general relativity and quantum mechanics. It was also influenced by the programming languages Lisp, Simula, early versions of Smalltalk, capability-based systems, and packet switching. Its development was "motivated by the prospect of highly parallel computing machines consisting of dozens, hundreds, or even thousands of independent microprocessors, each with its own local memory and communications processor, communicating via a high-performance communications network." Since that time, the advent of massive concurrency through multi-core and manycore computer architectures has revived interest in the actor model. Following Hewitt, Bishop, and Steiger's 1973 publication, Irene Greif developed an operational semantics for the actor model as part of her doctoral research. Two years later, Henry Baker and Hewitt published a set of axiomatic laws for actor systems. Other major milestones include William Clinger's 1981 dissertation introducing a denotational semantics based on power domains and Gul Agha's 1985 dissertation which further developed a transition-based semantic model complementary to Clinger's. This resulted in the full development of actor model theory. Major software implementation work was done by Russ Atkinson, Giuseppe Attardi, Henry Baker, Gerry Barber, Peter Bishop, Peter de Jong, Ken Kahn, Henry Lieberman, Carl Manning, Tom Reinhardt, Richard Steiger and Dan Theriault in the Message Passing Semantics Group at Massachusetts Institute of Technology (MIT). Research groups led by Chuck Seitz at California Institute of Technology (Caltech) and Bill Dally at MIT constructed computer architectures that further developed the message passing in the model. See Actor model implementation. Research on the actor model has been carried out at California Institute of Technology, Kyoto University Tokoro Laboratory, Microelectronics and Computer Technology Corporation (MCC), MIT Artificial Intelligence Laboratory, SRI, Stanford University, University of Illinois at Urbana–Champaign, Pierre and Marie Curie University (University of Paris 6), University of Pisa, University of Tokyo Yonezawa Laboratory, Centrum Wiskunde & Informatica (CWI) and elsewhere. Fundamental concepts The actor model adopts the philosophy that everything is an actor. This is similar to the everything is an object philosophy used by some object-oriented programming languages. An actor is a computational entity that, in response to a message it receives, can concurrently: send a finite number of messages to other actors; create a finite number of new actors; designate the behavior to be used for the next message it receives. 
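These three capabilities can be made concrete with a small sketch. The following is a minimal, illustrative toy actor runtime in plain Python, with threads and queues standing in for a real actor system; the names Actor, spawn, send, and become are hypothetical and are not taken from any particular actor library or publication discussed in this article.

```python
# Illustrative sketch only: a toy actor runtime in plain Python (threads + queues).
# The names Actor, spawn, send and become are hypothetical, not from a real library.
import queue
import threading
import time

class Actor:
    """An actor with a private mailbox and a current behavior (a plain function)."""
    def __init__(self, behavior):
        self._mailbox = queue.Queue()           # messages addressed to this actor
        self._behavior = behavior               # behavior used for the next message
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        """Asynchronous, fire-and-forget send."""
        self._mailbox.put(message)

    def become(self, new_behavior):
        """Designate the behavior to be used for the next message received."""
        self._behavior = new_behavior

    def _run(self):
        while True:
            message = self._mailbox.get()
            self._behavior(self, message)       # local decisions, sends, creations, become

def spawn(behavior):
    """Create a new actor and return its address (here, simply the object itself)."""
    return Actor(behavior)

def logger(self, message):
    print("log:", message)

def greeter(self, message):
    log = spawn(logger)                         # create a new actor
    log.send(f"hello, {message}")               # send a message to another actor
    self.become(lambda a, m: log.send(m))       # designate behavior for later messages

if __name__ == "__main__":
    g = spawn(greeter)
    g.send("world")
    g.send("again")
    time.sleep(0.1)                             # give the daemon threads time to run
```

In this sketch an actor's address is just the object returned by spawn; sending is asynchronous, and become designates the behavior used for the next message.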
There is no assumed sequence to the above actions and they could be carried out in parallel. Decoupling the sender from communications sent was a fundamental advance of the actor model enabling asynchronous communication and control structures as patterns of passing messages. Recipients of messages are identified by address, sometimes called "mailing address". Thus an actor can only communicate with actors whose addresses it has. It can obtain those from a message it receives, or if the address is for an actor it has itself created. The actor model is characterized by inherent concurrency of computation within and among actors, dynamic creation of actors, inclusion of actor addresses in messages, and interaction only through direct asynchronous message passing with no restriction on message arrival order. Formal systems Over the years, several different formal systems have been developed which permit reasoning about systems in the actor model. These include: Operational semantics Laws for actor systems Denotational semantics Transition semantics There are also formalisms that are not fully faithful to the actor model in that they do not formalize the guaranteed delivery of messages including the following (See Attempts to relate actor semantics to algebra and linear logic): Several different actor algebras Linear logic Applications The actor model can be used as a framework for modeling, understanding, and reasoning about a wide range of concurrent systems. For example: Electronic mail (email) can be modeled as an actor system. Accounts are modeled as actors and email addresses as actor addresses. Web services can be modeled with Simple Object Access Protocol (SOAP) endpoints modeled as actor addresses. Objects with locks (e.g., as in Java and C#) can be modeled as a serializer, provided that their implementations are such that messages can continually arrive (perhaps by being stored in an internal queue). A serializer is an important kind of actor defined by the property that it is continually available to the arrival of new messages; every message sent to a serializer is guaranteed to arrive. Testing and Test Control Notation (TTCN), both TTCN-2 and TTCN-3, follows actor model rather closely. In TTCN actor is a test component: either parallel test component (PTC) or main test component (MTC). Test components can send and receive messages to and from remote partners (peer test components or test system interface), the latter being identified by its address. Each test component has a behaviour tree bound to it; test components run in parallel and can be dynamically created by parent test components. Built-in language constructs allow the definition of actions to be taken when an expected message is received from the internal message queue, like sending a message to another peer entity or creating new test components. Message-passing semantics The actor model is about the semantics of message passing. Unbounded nondeterminism controversy Arguably, the first concurrent programs were interrupt handlers. During the course of its normal operation a computer needed to be able to receive information from outside (characters from a keyboard, packets from a network, etc). So when the information arrived the execution of the computer was interrupted and special code (called an interrupt handler) was called to put the information in a data buffer where it could be subsequently retrieved. 
In the early 1960s, interrupts began to be used to simulate the concurrent execution of several programs on one processor. Having concurrency with shared memory gave rise to the problem of concurrency control. Originally, this problem was conceived as being one of mutual exclusion on a single computer. Edsger Dijkstra developed semaphores and later, between 1971 and 1973, Tony Hoare and Per Brinch Hansen developed monitors to solve the mutual exclusion problem. However, neither of these solutions provided a programming language construct that encapsulated access to shared resources. This encapsulation was later accomplished by the serializer construct ([Hewitt and Atkinson 1977, 1979] and [Atkinson 1980]). The first models of computation (e.g., Turing machines, Post productions, the lambda calculus, etc.) were based on mathematics and made use of a global state to represent a computational step (later generalized in [McCarthy and Hayes 1969] and [Dijkstra 1976] see Event orderings versus global state). Each computational step was from one global state of the computation to the next global state. The global state approach was continued in automata theory for finite-state machines and push down stack machines, including their nondeterministic versions. Such nondeterministic automata have the property of bounded nondeterminism; that is, if a machine always halts when started in its initial state, then there is a bound on the number of states in which it halts. Edsger Dijkstra further developed the nondeterministic global state approach. Dijkstra's model gave rise to a controversy concerning unbounded nondeterminism (also called unbounded indeterminacy), a property of concurrency by which the amount of delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources while still guaranteeing that the request will eventually be serviced. Hewitt argued that the actor model should provide the guarantee of service. In Dijkstra's model, although there could be an unbounded amount of time between the execution of sequential instructions on a computer, a (parallel) program that started out in a well defined state could terminate in only a bounded number of states [Dijkstra 1976]. Consequently, his model could not provide the guarantee of service. Dijkstra argued that it was impossible to implement unbounded nondeterminism. Hewitt argued otherwise: there is no bound that can be placed on how long it takes a computational circuit called an arbiter to settle (see metastability (electronics)). Arbiters are used in computers to deal with the circumstance that computer clocks operate asynchronously with respect to input from outside, e.g., keyboard input, disk access, network input, etc. So it could take an unbounded time for a message sent to a computer to be received and in the meantime the computer could traverse an unbounded number of states. The actor model features unbounded nondeterminism which was captured in a mathematical model by Will Clinger using domain theory. In the actor model, there is no global state. Direct communication and asynchrony Messages in the actor model are not necessarily buffered. This was a sharp break with previous approaches to models of concurrent computation. The lack of buffering caused a great deal of misunderstanding at the time of the development of the actor model and is still a controversial issue. Some researchers argued that the messages are buffered in the "ether" or the "environment". 
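The unbounded nondeterminism at issue in this controversy can be illustrated with a small sketch: a counter that keeps incrementing until a stop request is finally delivered. Because delivery of the request is guaranteed but may take arbitrarily long, the value returned is always finite yet has no bound fixed in advance. The Python below is illustrative only, with a queue and a thread standing in for an actor and its arbiter; none of the names come from an actual actor implementation.

```python
# Illustrative sketch only: unbounded nondeterminism as a counter that stops
# when a "stop" message is finally delivered. The request is guaranteed to be
# serviced, but there is no bound on how large the count may have grown.
import queue
import threading

def counter(mailbox, reply):
    count = 0
    while True:
        try:
            message = mailbox.get_nowait()   # has the stop request arrived yet?
        except queue.Empty:
            count += 1                       # not yet: keep counting
            continue
        if message == "stop":
            reply.put(count)                 # eventually answers with some finite value
            return

if __name__ == "__main__":
    mailbox, reply = queue.Queue(), queue.Queue()
    threading.Thread(target=counter, args=(mailbox, reply), daemon=True).start()
    mailbox.put("stop")                      # delivery timing is outside the sender's control
    print("counter stopped at", reply.get())
```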
Also, messages in the actor model are simply sent (like packets in IP); there is no requirement for a synchronous handshake with the recipient. Actor creation plus addresses in messages means variable topology A natural development of the actor model was to allow addresses in messages. Influenced by packet switched networks [1961 and 1964], Hewitt proposed the development of a new model of concurrent computation in which communications would not have any required fields at all: they could be empty. Of course, if the sender of a communication desired a recipient to have access to addresses which the recipient did not already have, the address would have to be sent in the communication. For example, an actor might need to send a message to a recipient actor from which it later expects to receive a response, but the response will actually be handled by a third actor component that has been configured to receive and handle the response (for example, a different actor implementing the observer pattern). The original actor could accomplish this by sending a communication that includes the message it wishes to send, along with the address of the third actor that will handle the response. This third actor that will handle the response is called the resumption (sometimes also called a continuation or stack frame). When the recipient actor is ready to send a response, it sends the response message to the resumption actor address that was included in the original communication. So, the ability of actors to create new actors with which they can exchange communications, along with the ability to include the addresses of other actors in messages, gives actors the ability to create and participate in arbitrarily variable topological relationships with one another, much as the objects in Simula and other object-oriented languages may also be relationally composed into variable topologies of message-exchanging objects. Inherently concurrent As opposed to the previous approach based on composing sequential processes, the actor model was developed as an inherently concurrent model. In the actor model sequentiality was a special case that derived from concurrent computation as explained in actor model theory. No requirement on order of message arrival Hewitt argued against adding the requirement that messages must arrive in the order in which they are sent to the actor. If output message ordering is desired, then it can be modeled by a queue actor that provides this functionality. Such a queue actor would queue the messages that arrived so that they could be retrieved in FIFO order. So if an actor X sent a message M1 to an actor Y, and later X sent another message M2 to Y, there is no requirement that M1 arrives at Y before M2. In this respect the actor model mirrors packet switching systems which do not guarantee that packets must be received in the order sent. Not providing the order of delivery guarantee allows packet switching to buffer packets, use multiple paths to send packets, resend damaged packets, and to provide other optimizations. For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing. 
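The resumption (reply-to address) pattern described above can be sketched as follows. This is only an illustrative Python fragment in which a queue doubles as an actor's address; the names run, adder, and printer are hypothetical and do not belong to any actor framework mentioned here.

```python
# Illustrative sketch only: a request carries the address of a third actor
# (the resumption), so the response is delivered there, not to the sender.
import queue
import threading
import time

def run(behavior):
    """Spawn a toy actor: a mailbox drained by a thread running `behavior`."""
    mailbox = queue.Queue()
    def loop():
        while True:
            behavior(mailbox.get())
    threading.Thread(target=loop, daemon=True).start()
    return mailbox                           # the mailbox doubles as the actor's address

def adder(msg):
    a, b, reply_to = msg                     # the resumption's address travels in the message
    reply_to.put(a + b)                      # respond to the resumption, not the sender

def printer(msg):
    print("sum received by resumption actor:", msg)

if __name__ == "__main__":
    adder_addr = run(adder)
    printer_addr = run(printer)              # the third actor that will handle the response
    adder_addr.put((2, 3, printer_addr))     # request includes the resumption's address
    time.sleep(0.1)
```

Because the request includes the printer's address, the adder's response goes to the resumption rather than back to the original sender.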
Whether a message is pipelined is an engineering tradeoff. How would an external observer know whether the processing of a message by an actor has been pipelined? There is no ambiguity in the definition of an actor created by the possibility of pipelining. Of course, it is possible to perform the pipeline optimization incorrectly in some implementations, in which case unexpected behavior may occur. Locality Another important characteristic of the actor model is locality. Locality means that in processing a message, an actor can send messages only to addresses that it receives in the message, addresses that it already had before it received the message, and addresses for actors that it creates while processing the message. (But see Synthesizing addresses of actors.) Locality also means that there is no simultaneous change in multiple locations. In this way it differs from some other models of concurrency, e.g., the Petri net model, in which tokens are simultaneously removed from multiple locations and placed in other locations. Composing actor systems The idea of composing actor systems into larger ones is an important aspect of modularity that was developed in Gul Agha's doctoral dissertation and was developed later by Gul Agha, Ian Mason, Scott Smith, and Carolyn Talcott. Behaviors A key innovation was the introduction of behavior specified as a mathematical function to express what an actor does when it processes a message, including specifying a new behavior to process the next message that arrives. Behaviors provided a mechanism to mathematically model the sharing in concurrency. Behaviors also freed the actor model from implementation details, e.g., the Smalltalk-72 token stream interpreter. However, it is critical to understand that the efficient implementation of systems described by the actor model requires extensive optimization. See Actor model implementation for details. Modeling other concurrency systems Other concurrency systems (e.g., process calculi) can be modeled in the actor model using a two-phase commit protocol. Computational Representation Theorem There is a Computational Representation Theorem in the actor model for systems which are closed in the sense that they do not receive communications from outside. The mathematical denotation of a closed system \(S\) is constructed from an initial behavior \(\bot_S\) and a behavior-approximating function \(\mathrm{progression}_S\). These obtain increasingly better approximations and construct a denotation (meaning) \(\mathrm{Denote}_S\) for \(S\) as follows [Hewitt 2008; Clinger 1981]: \(\mathrm{Denote}_S \equiv \bigsqcup_{i \in \omega} \mathrm{progression}_S^{\,i}(\bot_S)\). In this way, \(S\) can be mathematically characterized in terms of all its possible behaviors (including those involving unbounded nondeterminism). Although \(\mathrm{Denote}_S\) is not an implementation of \(S\), it can be used to prove a generalization of the Church-Turing-Rosser-Kleene thesis [Kleene 1943]. A consequence of the above theorem is that a finite actor can nondeterministically respond with an unbounded number of different outputs. Relationship to logic programming One of the key motivations for the development of the actor model was to understand and deal with the control structure issues that arose in the development of the Planner programming language. Once the actor model was initially defined, an important challenge was to understand the power of the model relative to Robert Kowalski's thesis that "computation can be subsumed by deduction". Hewitt argued that Kowalski's thesis turned out to be false for the concurrent computation in the actor model (see Indeterminacy in concurrent computation). 
Nevertheless, attempts were made to extend logic programming to concurrent computation. However, Hewitt and Agha [1991] claimed that the resulting systems were not deductive in the following sense: computational steps of the concurrent logic programming systems do not follow deductively from previous steps (see Indeterminacy in concurrent computation). Recently, logic programming has been integrated into the actor model in a way that maintains logical semantics. Migration Migration in the actor model is the ability of actors to change locations. E.g., in his dissertation, Aki Yonezawa modeled a post office that customer actors could enter, change locations within while operating, and exit. An actor that can migrate can be modeled by having a location actor that changes when the actor migrates. However the faithfulness of this modeling is controversial and the subject of research. Security The security of actors can be protected in the following ways: hardwiring in which actors are physically connected computer hardware as in Burroughs B5000, Lisp machine, etc. virtual machines as in Java virtual machine, Common Language Runtime, etc. operating systems as in capability-based systems signing and/or encryption of actors and their addresses Synthesizing addresses of actors A delicate point in the actor model is the ability to synthesize the address of an actor. In some cases security can be used to prevent the synthesis of addresses (see Security). However, if an actor address is simply a bit string then clearly it can be synthesized although it may be difficult or even infeasible to guess the address of an actor if the bit strings are long enough. SOAP uses a URL for the address of an endpoint where an actor can be reached. Since a URL is a character string, it can clearly be synthesized although encryption can make it virtually impossible to guess. Synthesizing the addresses of actors is usually modeled using mapping. The idea is to use an actor system to perform the mapping to the actual actor addresses. For example, on a computer the memory structure of the computer can be modeled as an actor system that does the mapping. In the case of SOAP addresses, it's modeling the DNS and the rest of the URL mapping. Contrast with other models of message-passing concurrency Robin Milner's initial published work on concurrency was also notable in that it was not based on composing sequential processes. His work differed from the actor model because it was based on a fixed number of processes of fixed topology communicating numbers and strings using synchronous communication. The original communicating sequential processes (CSP) model published by Tony Hoare differed from the actor model because it was based on the parallel composition of a fixed number of sequential processes connected in a fixed topology, and communicating using synchronous message-passing based on process names (see Actor model and process calculi history). Later versions of CSP abandoned communication based on process names in favor of anonymous communication via channels, an approach also used in Milner's work on the calculus of communicating systems and the π-calculus. These early models by Milner and Hoare both had the property of bounded nondeterminism. Modern, theoretical CSP ([Hoare 1985] and [Roscoe 2005]) explicitly provides unbounded nondeterminism. 
Petri nets and their extensions (e.g., coloured Petri nets) are like actors in that they are based on asynchronous message passing and unbounded nondeterminism, while they are like early CSP in that they define fixed topologies of elementary processing steps (transitions) and message repositories (places). Influence The actor model has been influential on both theory development and practical software development. Theory The actor model has influenced the development of the π-calculus and subsequent process calculi. In his Turing lecture, Robin Milner wrote: Now, the pure lambda-calculus is built with just two kinds of thing: terms and variables. Can we achieve the same economy for a process calculus? Carl Hewitt, with his actors model, responded to this challenge long ago; he declared that a value, an operator on values, and a process should all be the same kind of thing: an actor. This goal impressed me, because it implies the homogeneity and completeness of expression ... But it was long before I could see how to attain the goal in terms of an algebraic calculus... So, in the spirit of Hewitt, our first step is to demand that all things denoted by terms or accessed by names—values, registers, operators, processes, objects—are all of the same kind of thing; they should all be processes. Practice The actor model has had extensive influence on commercial practice. For example, Twitter has used actors for scalability. Also, Microsoft has used the actor model in the development of its Asynchronous Agents Library. There are many other actor libraries listed in the actor libraries and frameworks section below. Addressed issues According to Hewitt [2006], the actor model addresses issues in computer and communications architecture, concurrent programming languages, and Web services including the following: Scalability: the challenge of scaling up concurrency both locally and nonlocally. Transparency: bridging the chasm between local and nonlocal concurrency. Transparency is currently a controversial issue. Some researchers have advocated a strict separation between local concurrency using concurrent programming languages (e.g., Java and C#) from nonlocal concurrency using SOAP for Web services. Strict separation produces a lack of transparency that causes problems when it is desirable/necessary to change between local and nonlocal access to Web services (see Distributed computing). Inconsistency: inconsistency is the norm because all very large knowledge systems about human information system interactions are inconsistent. This inconsistency extends to the documentation and specifications of very large systems (e.g., Microsoft Windows software, etc.), which are internally inconsistent. Many of the ideas introduced in the actor model are now also finding application in multi-agent systems for these same reasons [Hewitt 2006b 2007b]. The key difference is that agent systems (in most definitions) impose extra constraints upon the actors, typically requiring that they make use of commitments and goals. Programming with actors A number of different programming languages employ the actor model or some variation of it. 
These languages include: Early actor programming languages Act 1, 2 and 3 Acttalk Ani Cantor Rosette Later actor programming languages ABCL AmbientTalk Axum CAL Actor Language D Dart E Elixir Erlang Fantom Humus Io LFE Encore Pony Ptolemy Project P P# Rebeca Modeling Language Reia SALSA Scala TNSDL Actor libraries and frameworks Actor libraries or frameworks have also been implemented to permit actor-style programming in languages that don't have actors built-in. Some of these frameworks are: See also Data flow Gordon Pask Input/output automaton Scientific community metaphor References Further reading Gul Agha. Actors: A Model of Concurrent Computation in Distributed Systems. MIT Press 1985. Paul Baran. On Distributed Communications Networks IEEE Transactions on Communications Systems. March 1964. William A. Woods. Transition network grammars for natural language analysis CACM. 1970. Carl Hewitt. Procedural Embedding of Knowledge In Planner IJCAI 1971. G.M. Birtwistle, Ole-Johan Dahl, B. Myhrhaug and Kristen Nygaard. SIMULA Begin Auerbach Publishers Inc, 1973. Carl Hewitt, et al. Actor Induction and Meta-evaluation Conference Record of ACM Symposium on Principles of Programming Languages, January 1974. Carl Hewitt, et https://link.springer.com/chapter/10.1007/3-540-06859-7_147al. Behavioral Semantics of Nonrecursive Control Structure Proceedings of Colloque sur la Programmation, April 1974. Irene Greif and Carl Hewitt. Actor Semantics of PLANNER-73 Conference Record of ACM Symposium on Principles of Programming Languages. January 1975. Carl Hewitt. How to Use What You Know IJCAI. September, 1975. Alan Kay and Adele Goldberg. Smalltalk-72 Instruction Manual Xerox PARC Memo SSL-76-6. May 1976. Edsger Dijkstra. A discipline of programming Prentice Hall. 1976. Carl Hewitt and Henry Baker Actors and Continuous Functionals Proceeding of IFIP Working Conference on Formal Description of Programming Concepts. August 1–5, 1977. Carl Hewitt and Russ Atkinson. Synchronization in Actor Systems Proceedings of the 4th ACM SIGACT-SIGPLAN symposium on Principles of programming languages. 1977 Carl Hewitt and Russ Atkinson. Specification and Proof Techniques for Serializers IEEE Journal on Software Engineering. January 1979. Ken Kahn. A Computational Theory of Animation MIT EECS Doctoral Dissertation. August 1979. Carl Hewitt, Beppe Attardi, and Henry Lieberman. Delegation in Message Passing Proceedings of First International Conference on Distributed Systems Huntsville, AL. October 1979. Nissim Francez, C.A.R. Hoare, Daniel Lehmann, and Willem-Paul de Roever. Semantics of nondetermiism, concurrency, and communication Journal of Computer and System Sciences. December 1979. George Milne and Robin Milner. Concurrent processes and their syntax JACM. April 1979. Daniel Theriault. A Primer for the Act-1 Language MIT AI memo 672. April 1982. Daniel Theriault. Issues in the Design and Implementation of Act 2 MIT AI technical report 728. June 1983. Henry Lieberman. An Object-Oriented Simulator for the Apiary Conference of the American Association for Artificial Intelligence, Washington, D. C., August 1983 Carl Hewitt and Peter de Jong. Analyzing the Roles of Descriptions and Actions in Open Systems Proceedings of the National Conference on Artificial Intelligence. August 1983. Carl Hewitt and Henry Lieberman. Design Issues in Parallel Architecture for Artificial Intelligence MIT AI memo 750. Nov. 1983. C.A.R. Hoare. Communicating Sequential Processes Prentice Hall. 1985. Carl Hewitt. 
The Challenge of Open Systems Byte. April 1985. Reprinted in The foundation of artificial intelligence: a sourcebook Cambridge University Press. 1990. Carl Manning. Traveler: the actor observatory ECOOP 1987. Also appears in Lecture Notes in Computer Science, vol. 276. William Athas and Charles Seitz Multicomputers: message-passing concurrent computers IEEE Computer August 1988. William Athas and Nanette Boden Cantor: An Actor Programming System for Scientific Computing in Proceedings of the NSF Workshop on Object-Based Concurrent Programming. 1988. Special Issue of SIGPLAN Notices. Jean-Pierre Briot. From objects to actors: Study of a limited symbiosis in Smalltalk-80 Rapport de Recherche 88–58, RXF-LITP, Paris, France, September 1988 William Dally and Wills, D. Universal mechanisms for concurrency PARLE 1989. W. Horwat, A. Chien, and W. Dally. Experience with CST: Programming and Implementation PLDI. 1989. Carl Hewitt. Towards Open Information Systems Semantics Proceedings of 10th International Workshop on Distributed Artificial Intelligence. October 23–27, 1990. Bandera, Texas. Akinori Yonezawa, Ed. ABCL: An Object-Oriented Concurrent System MIT Press. 1990. K. Kahn and Vijay A. Saraswat, "Actors as a special case of concurrent constraint (logic) programming", in SIGPLAN Notices, October 1990. Describes Janus. Carl Hewitt. Open Information Systems Semantics Journal of Artificial Intelligence. January 1991. Carl Hewitt and Jeff Inman. DAI Betwixt and Between: From "Intelligent Agents" to Open Systems Science IEEE Transactions on Systems, Man, and Cybernetics. Nov./Dec. 1991. Carl Hewitt and Gul Agha. Guarded Horn clause languages: are they deductive and Logical? International Conference on Fifth Generation Computer Systems, Ohmsha 1988. Tokyo. Also in Artificial Intelligence at MIT, Vol. 2. MIT Press 1991. William Dally, et al. The Message-Driven Processor: A Multicomputer Processing Node with Efficient Mechanisms IEEE Micro. April 1992. S. Miriyala, G. Agha, and Y.Sami. Visualizing actor programs using predicate transition nets Journal of Visual Programming. 1992. Carl Hewitt and Carl Manning. Negotiation Architecture for Large-Scale Crisis Management AAAI-94 Workshop on Models of Conflict Management in Cooperative Problem Solving. Seattle, WA. Aug. 4, 1994. Carl Hewitt and Carl Manning. Synthetic Infrastructures for Multi-Agency Systems Proceedings of ICMAS '96. Kyoto, Japan. December 8–13, 1996. S. Frolund. Coordinating Distributed Objects: An Actor-Based Approach for Synchronization MIT Press. November 1996. W. Kim. ThAL: An Actor System for Efficient and Scalable Concurrent Computing PhD thesis. University of Illinois at Urbana Champaign. 1997. Jean-Pierre Briot. Acttalk: A framework for object-oriented concurrent programming-design and experience 2nd France-Japan workshop. 1999. N. Jamali, P. Thati, and G. Agha. An actor based architecture for customizing and controlling agent ensembles IEEE Intelligent Systems. 14(2). 1999. Don Box, David Ehnebuske, Gopal Kakivaya, Andrew Layman, Noah Mendelsohn, Henrik Nielsen, Satish Thatte, Dave Winer. Simple Object Access Protocol (SOAP) 1.1 W3C Note. May 2000. M. Astley, D. Sturman, and G. Agha. Customizable middleware for modular distributed software CACM. 44(5) 2001. Edward Lee, S. Neuendorffer, and M. Wirthlin. Actor-oriented design of embedded hardware and software systems Journal of Circuits, Systems, and Computers''. 2002. P. Thati, R. Ziaei, and G. Agha. 
A Theory of May Testing for Actors Formal Methods for Open Object-based Distributed Systems. March 2002. P. Thati, R. Ziaei, and G. Agha. A theory of may testing for asynchronous calculi with locality and no name matching Algebraic Methodology and Software Technology. Springer Verlag. September 2002. LNCS 2422. Stephen Neuendorffer. Actor-Oriented Metaprogramming PhD Thesis. University of California, Berkeley. December, 2004 Carl Hewitt (2006a) The repeated demise of logic programming and why it will be reincarnated What Went Wrong and Why: Lessons from AI Research and Applications. Technical Report SS-06-08. AAAI Press. March 2006. Carl Hewitt (2006b) What is Commitment? Physical, Organizational, and Social COIN@AAMAS. April 27, 2006b. Carl Hewitt (2007a) What is Commitment? Physical, Organizational, and Social (Revised) Pablo Noriega .et al. editors. LNAI 4386. Springer-Verlag. 2007. Carl Hewitt (2007b) Large-scale Organizational Computing requires Unstratified Paraconsistency and Reflection COIN@AAMAS'07. D. Charousset, T. C. Schmidt, R. Hiesgen and M. Wählisch. Native actors: a scalable software platform for distributed, heterogeneous environments in AGERE! '13 Proceedings of the 2013 workshop on Programming based on actors, agents, and decentralized control. External links Hewitt, Meijer and Szyperski: The Actor Model (everything you wanted to know, but were afraid to ask) Microsoft Channel 9. April 9, 2012. Functional Java – a Java library that includes an implementation of concurrent actors with code examples in standard Java and Java 7 BGGA style. ActorFoundry – a Java-based library for actor programming. The familiar Java syntax, an ant build file and a bunch of example make the entry barrier very low. ActiveJava – a prototype Java language extension for actor programming. Akka – actor based library in Scala and Java, from Lightbend Inc. GPars – a concurrency library for Apache Groovy and Java Asynchronous Agents Library – Microsoft actor library for Visual C++. "The Agents Library is a C++ template library that promotes an actor-based programming model and in-process message passing for coarse-grained dataflow and pipelining tasks. " ActorThread in C++11 – base template providing the gist of the actor model over naked threads in standard C++11 Concurrent computing
8328149
https://en.wikipedia.org/wiki/Storm%20Impact
Storm Impact
Storm Impact was a Macintosh software developer and publisher located in Glenview, Illinois, active from 1989 to 1997. Storm Impact's development team consisted of David Cook and artist Tom Zehner, with help from Dan Schwimmer and Dave Friedman. Storm Impact initially licensed their products to third-party publishers, but switched to self-publishing their products as shareware in 1993. Products Storm Impact's first product was the role-playing video game TaskMaker, released in 1989 by XOR Corporation. In 1990, XOR released Storm Impact's most commercially successful published product, the skiing sim MacSki. MacSki was reviewed positively in Macworld, and was inducted into the Macworld Game Hall of Fame as Best Sports Game for 1990. In 1993, Storm Impact released its most commercially successful shareware product, an upgraded version of TaskMaker. In 1994, Storm Impact released an upgraded shareware version of MacSki. In 1996, Storm Impact released the shareware shoot 'em up Asterbamm and the technical support utility Technical Snapshot. David Cook describes both releases as "sales bombs". In 1997, Storm released The Tomb of the TaskMaker, a sequel to TaskMaker. Litigation In 1998, Storm Impact and David Cook sued Software of the Month Club, a California corporation that distributed a commercial shareware compilation CD-ROM including the shareware versions of TaskMaker and MacSki. Storm Impact and Cook alleged that Software of the Month Club's distribution of Storm Impact's products constituted copyright infringement, unfair competition, false designation of origin, and deceptive trade practices. United States District Judge James Zagel found for Storm on the count of copyright infringement, stating that Software of the Month Club "unquestionably violated the express restrictions of both TaskMaker and MacSki, eviscerating any claim that Storm effectively consented to unlimited distribution of its products by posting them on the Internet." Storm was awarded $20,000 in statutory damages. Demise Storm Impact's owner David Cook attributes the company's demise to market change and undercapitalization. He notes a number of contributing factors: the Macintosh's market share had declined, game technology progressed beyond the company's ability to produce a competitive product, and the company's shareware model meant that developers had to process orders and support products years after their release. References External links Storm Impact Macintosh software companies Video game development companies Defunct software companies of the United States Defunct video game companies of the United States
44595351
https://en.wikipedia.org/wiki/Vocaloid%20%28software%29
Vocaloid (software)
Vocaloid is a singing voice synthesizer and the first engine released in the Vocaloid series. It was succeeded by Vocaloid 2. This version was made to be able to sing in both English and Japanese. History The earliest known development related to Vocaloid was a project that had occurred two years prior and was funded by Yamaha. The project was codenamed "Elvis" and did not become a product because of the scale of vocal building required for just a single song. It is credited as the project that established many of the earliest models and ideas that would later be tested and tried for Vocaloid. Yamaha started development of Vocaloid in March 2000 and announced it for the first time at the German fair Musikmesse on March 5–9, 2003. It was created under the name "Daisy", in reference to the song "Daisy Bell", but for copyright reasons this name was dropped in favor of "Vocaloid". The first Vocaloids, Leon and Lola, were released by the studio Zero-G on March 3, 2004, both of which were sold as a "Virtual Soul Vocalist". Leon and Lola made their first appearance at the NAMM Show on January 15, 2004. Leon and Lola were also demonstrated at the Zero-G Limited booth during Wired Nextfest and won the 2005 Electronic Musician Editor's Choice Award. Zero-G later released Miriam, with her voice provided by Miriam Stockley, in July 2004. Later that year, Crypton Future Media also released the first Japanese Vocaloid, Meiko, who, along with Kaito, was developed by Yamaha. In June 2005, Yamaha upgraded the engine version to 1.1. A patch was later released to update all Vocaloid engines to Vocaloid 1.1.2, adding new features to the software, although there were differences between the output results of the engine versions. A total of five Vocaloid products were released from 2004 to 2006. Vocaloid was also noted for producing huskier results than later engine versions. Vocaloid had no previous rival technology to contend with at the time of its release, with the English version only having to face the later release of VirSyn's Cantor software during its original run. Despite handling Japanese phonetics, the software lacked a Japanese-language interface; both the Japanese and English vocals shipped with an English interface. The only differences between versions were the color and logo, which changed per template. As of 2011, this version of the software is no longer supported by Yamaha and will no longer be updated. All Vocaloid 1 products were permanently retired on January 1, 2014. Products A total of five products were released for the engine version. Leon A male vocal capable of singing in English, released on January 15, 2004. He was built as a soul singer. His provider is unknown, save for the fact that he was a "British black singer". Leon, along with his complementary vocal "Lola", was noted for the attempt to convey racial qualities within a vocal, owing to their genre choice of "soul music". He was considered the overall weaker of the two soul singer voices and the weakest of the three English vocals. Lola A female vocal released as a complementary vocal to Leon, Lola also sings in English and was likewise released on January 15, 2004. Lola was noted for her deep tone that left her sounding "like Big Ma", but she was generally considered the better of the two soul singers. Nothing is known about her voice provider except that she was a black singer established in Great Britain whose roots trace back to the Caribbean. 
A notable issue with her voice was that when it was used in genres other than soul, her provider's Caribbean accent would come through, giving a result atypical of a soul singer. Lola is also known to have the oldest Vocaloid works of all Vocaloids on the website Nico Nico Douga. She was also used by Susumu Hirasawa on various tracks for the film Paprika. Miriam Miriam, based on the voice of Miriam Stockley, was the third English vocal released for the engine. Released on July 1, 2004, Miriam was an improvement over Leon and Lola and had a softer-toned vocal. She was considered the strongest English vocal released for the engine version. Meiko Meiko was the first of the two Japanese vocals developed by Yamaha. Her commercial handling was done by Crypton Future Media and she was released on November 5, 2004. Originally codenamed "Hanako", Meiko was initially the most successful of the five vocals released for the engine. Kaito A complementary male vocal for Meiko, Kaito was released on February 17, 2006 for the Japanese version of the software. He was developed by Yamaha and was originally codenamed "Taro". Kaito was initially acknowledged as a commercial failure and sold poorly, bringing in only 500 units in contrast to Meiko's 3,000. His lack of sales in contrast to Meiko was put down to the reader demographic of DTM magazine, of which 80% were male. Sales picked up suddenly in 2008, to the surprise of Wataru Sasaki and other members of Crypton Future Media, and by 2010 Kaito had overtaken Meiko in popularity. Reception Reviewers such as Michael Stipe of R.E.M. praised the software when it was first announced in 2003. Stipe noted that one of the more useful aspects of the software was that it gave singers a method of preserving their voice for future use should they lose their own, and that as the technology progressed it could also be used to bring back the voices of singers whose voices had already been lost. However, while the provider of "Miriam", Miriam Stockley, had accepted that there was little point in fighting progress, she noted there was little control over how her voice was used once the software was in the hands of others. At the time of its release, Popular Science reported that, "Synthetic vocals have never even come close to fooling the ear, and outside of certain Kraftwerk chestnuts, robo-crooning is offputting." It was noted that the Vocaloid software was the first to take on the uncanny job of recreating the human voice. Yamaha received much praise and Vocaloid was hailed as a "quantum leap" in vocal synthesis, with the software receiving much attention and praise within the industry. Sales of the product were nevertheless reported to be very sluggish at first. The CEO of Crypton Future Media noted that the overall lack of interest in Vocaloids was put down to the poor response to the initial Vocaloid software. With regard to the development of the English version of the software specifically, many studios approached by Crypton Future Media for recommendations on developing the English Vocaloids initially had no interest in the software, with one company representative calling it a "toy". A degree of failure was attributed to Leon and Lola for their lack of sales in the United States, with the blame placed on their British accents. Prior to the release of the Hatsune Miku product, Crypton Future Media had also noted there was some criticism of the choice to release the original Vocaloid engine as a commercial licensing product, although it felt that the choice was for the better of the engine. 
Furthermore, it was noted that the original Vocaloid engine felt more like a prototype for future engine versions. References Speech synthesis software Vocaloid
1805449
https://en.wikipedia.org/wiki/Software%20brittleness
Software brittleness
In computer programming and software engineering, software brittleness is the increased difficulty in fixing older software that may appear reliable, but actually fails badly when presented with unusual data or altered in a seemingly minor way. The phrase is derived from analogies to brittleness in metalworking. Causes When software is new, it is very malleable; it can be formed to be whatever is wanted by the implementers. But as the software in a given project grows larger and larger, and develops a larger base of users with long experience with the software, it becomes less and less malleable. Like a metal that has been work-hardened, the software becomes a legacy system, brittle and unable to be easily maintained without fracturing the entire system. Brittleness in software can be caused by algorithms that do not work well for the full range of input data. A good example is an algorithm that allows a divide by zero to occur, or a curve-fitting equation that is used to extrapolate beyond the data that it was fitted to. Another cause of brittleness is the use of data structures that restrict values. This was commonly seen in the late 1990s as people realized that their software only had room for a 2 digit year entry; this led to the sudden updating of tremendous quantities of brittle software before the year 2000. Another more commonly encountered form of brittleness is in graphical user interfaces that make invalid assumptions. For example, a user may be running on a low resolution display, and the software will open a window too large to fit the display. Another common problem is expressed when a user uses a color scheme other than the default, causing text to be rendered in the same color as the background, or a user uses a font other than the default, which won't fit in the allowed space and cuts off instructions and labels. Very often, an old code base is simply abandoned and a brand-new system (which is intended to be free of many of the burdens of the legacy system) created from scratch, but this can be an expensive and time-consuming process. Some examples and reasons behind software brittleness: Users expect a relatively constant user interface; once a feature has been implemented and exposed to the users, it is very difficult to convince them to accept major changes to that feature, even if the feature was not well designed or the existence of the feature blocks further progress. A great deal of documentation may describe the current behavior and would be expensive to change. In addition, it is essentially impossible to recall all copies of the existing documentation, so users are likely to continue to refer to obsolete manuals. The original implementers (who knew how things really worked) have moved on and left insufficient documentation of the internal workings of the software. Many small implementation details were only understood through the oral traditions of the design team, and many of these details eventually are irretrievably lost, although some can be rediscovered through the diligent (and expensive) application of software archaeology. Patches have probably been issued throughout the years, subtly changing the behavior of the software. In many cases, these patches, while correcting the overt failure for which they were issued, introduce other, more subtle, failures into the system. If not detected by regression testing, these subtle failures make subsequent changes to the system more difficult. 
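Two of the causes described above, an algorithm that fails outside the input range its author had in mind and a data structure that restricts values, can be shown in a short illustrative sketch. The Python below is only an illustration; the function names are hypothetical.

```python
# Illustrative sketch only: two of the brittleness causes described above --
# an algorithm that fails outside its expected input range, and a data
# structure that restricts values (a two-digit year field).

def average_rate(distance_km: float, hours: float) -> float:
    # Works for the "normal" inputs the author had in mind, but is brittle:
    # hours == 0 raises ZeroDivisionError instead of being handled.
    return distance_km / hours

def expand_year(two_digit_year: int) -> int:
    # A two-digit year field "worked" for decades, then broke at the
    # century boundary -- the year 2000 problem mentioned above.
    return 1900 + two_digit_year     # 99 -> 1999, but 0 -> 1900, not 2000

if __name__ == "__main__":
    print(expand_year(99))           # 1999, as intended
    print(expand_year(0))            # 1900 -- silently wrong once the year 2000 arrives
    try:
        print(average_rate(120.0, 0.0))
    except ZeroDivisionError:
        print("brittle failure: divide by zero was never considered")
```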
More subtle forms of brittleness commonly occur in artificial intelligence systems. These systems often rely on significant assumptions about the input data. When these assumptions aren't met – and, because they may not be stated, this may easily be the case – then the system will respond in completely unpredictable ways. Systems can also be brittle if the component dependencies are too rigid. One example of this is seen in the difficulties transitioning to new versions of dependencies. When one component expects another to output only a given range of values, and that range changes, then it can cause errors to ripple through the system, either during building or at runtime. Fewer technical resources are available to support changes when a system is in a maintenance phase than there are for a system that is in a development or implementation phase of the Systems Development Life Cycle (SDLC). See also Brittle system Software entropy Software rot Robustness (computer science) References Computer errors Computer jargon Software maintenance
17009855
https://en.wikipedia.org/wiki/Explorer/85
Explorer/85
The Netronics Explorer 85 was an Intel 8085-based computer produced by Netronics R&D Ltd. of New Milford, Connecticut, between 1979 and 1984. Netronics also produced the better-known ELF II computer and the ill-fated Explorer 88 computer. Specifications The Intel 8085 CPU used a 6.144 MHz crystal, resulting in the processor operating at 3.072 MHz. The basic system had 256 bytes of RAM and 2048 bytes of ROM. The base system also had cassette tape IO, serial IO which could be configured for RS-232 or current loop, and thirty-eight bits of parallel IO. The system could be expanded to have from two to six S-100 bus sockets. Unique features Although this computer did have an S-100 bus, it was different from most of its contemporary S-100 bus computers in that it had a large motherboard containing the CPU and associated circuitry, and only two S-100 bus sockets. The Sol-20 computer also had this arrangement. Another unique feature of this computer was its serial port. The serial data was connected to the Intel 8085's SID and SOD (Serial In Data and Serial Out Data) pins. This allowed the use of the Intel 8085's RIM and SIM instructions to read the level on the SID line and set the level on the SOD line. What was so unique about this implementation was that after resetting the Explorer 85, the user had to press the space bar on the attached computer terminal. The Explorer 85's firmware would measure the time between the start bit and the first data bit in the ASCII code for the space character. This allowed the Explorer 85 to automatically calculate and match the baud rate of the terminal. The downside of this technique was that the firmware needed to be in a loop monitoring the level on the SID pin to receive data from the terminal. If the processor was doing some other task when the user pressed a key on the terminal, that data would be lost. In addition to having a reset button on the front of the computer, the Explorer 85 had an interrupt button. This allowed the user to interrupt a locked-up program and return to the debugger without resetting the computer and losing all of their work. Available configurations The Explorer 85 was available at five different levels. Level A Level A was just the motherboard with no S-100 bus sockets loaded. It could be ordered with firmware configured either for a computer terminal or for a hexadecimal keypad available from Netronics. The Level A configuration did not include a power supply, so the user had to provide their own eight-volt power supply or purchase one from Netronics. The Level A motherboard contained a prototyping area where the user could add circuitry of their own design. In 1982 the Level A system sold for $129.95. Level B Level B added the circuitry to drive the two S-100 bus connectors, which could be soldered into the Level A motherboard. This allowed the owner to use any of the myriad of S-100 bus cards available on the market. In 1982 the Level B upgrade sold for $49.95, plus $4.85 for each S-100 bus connector. Level C Level C was a card cage and S-100 bus expander card. This card would plug into one of the S-100 bus sockets on the motherboard; up to five more S-100 bus cards could then be plugged into the expander card. One of the two original slots on the motherboard was still available for use, giving the Level C Explorer 85 a capacity of six S-100 bus cards. The card cage held all of the cards in place. In 1982 the Level C upgrade sold for $39.95, plus $4.85 for each S-100 bus connector. Level D Level D was a RAM upgrade. 
This could take the form of up to 4k of RAM on the motherboard, or Netronics S-100 Jaws memory board. The Jaws memory board used from eight to thirty two 4116 16k by 1 bit dynamic random access memory chips, which could be added in groups of eight. The Jaws memory board used an Intel 8202 dynamic random access memory controller chip to refresh the memory, and multiplex the address bits. In 1982 the Level D upgrade sold for as little as $49.95 for the 4k motherboard upgrade, to as much as $299.95 for the 64k Jaws upgrade. Level E Level E activated the ROM sockets in the motherboard. This allowed the user to put custom programs in the ROM socket, or the user could purchase a Microsoft ROM BASIC interpreter from Netronics. The Microsoft ROM BASIC could store and load programs using a cassette tape recorder. In 1982 the Level E upgrade sold for as little as $5.95. References Home computers Early microcomputers Computer-related introductions in 1979
57093107
https://en.wikipedia.org/wiki/Manifold%20Garden
Manifold Garden
Manifold Garden is an indie first-person puzzle video game developed by American artist William Chyr. It was released on Windows, macOS, and iOS on October 18, 2019. The player must navigate an abstract series of structures that appear to repeat into infinity, while solving a progression of puzzles. Ports for PlayStation 4, Nintendo Switch and Xbox One were released on August 18, 2020. An upgraded version of the game was released for Xbox Series X and Series S as a launch title on November 10, 2020, and an upgraded PlayStation 5 version released on May 20, 2021. Gameplay The game takes place in a "universe with a different set of physical laws" where the player can manipulate gravity, being able to "turn walls into floors". The player must solve puzzles using the world's geometry in addition to devices within the architecture of the world. To aid the player, the world's tone takes on one of six colors depending on which direction they have manipulated gravity. Several facets of the game's world may only be interacted with when the gravity is oriented directly, with these objects sharing one of the six colors. The game's worlds frequently appear to repeat into infinity into all directions. Because of this, many puzzles revolve around falling off a ledge from a lower part of a structure to land back on the top of the next version of that structure below. Later puzzles will involve growing trees and natural elements to bring life back to the "sterile" world. Development Creator William Chyr had a passion for large-scale artwork and was previously known for massive balloon sculptures. Seeking to change his work with sculptures, and finding other mediums cost-prohibitive, he decided to move to a video game with no space limitations. Work on the game started in November 2012. Further he wanted to make a puzzle game that he felt he could finish as a player, having learned that after playing through The Witness that there had been an entire section of puzzles he had missed there. The game was originally known as Relativity, after the M.C. Escher print Relativity upon which it was based, before it was re-revealed as Manifold Garden. Chyr observed that some have called the game taking place in non-Euclidean geometry, but he asserts Manifold Garden uses "impossible geometry" in Euclidean space, employing a method of world wrapping in three-dimensional space to make the world appear infinite. This often required rendering more than 500 times the typical geometries that a 3D game engine would provide. The game engine still supports the use and rendering of non-Euclidean portals, with the player able to see through a portal into a different level, and with possible recursion if other portals are in view. Chyr attributes the capabilities of the engine to their graphics programmer Arthur Brussee. A core part of Chyr's design for the game was to leave it absent of any explicit instructions, using an initial puzzle that required the player, in order to activate a button across a chasm, to jump into the chasm and land on the other side as a result of falling through the game's repeating geometries. His team early on had discovered that playtesters would often become confused in walking around the various levels, combined with the shifting changes in gravity. 
To help, they designed each outdoor level to appear unique so that players could immediately recognize the setting, and made strategic use of windows and other visibility features so that players could have a sense of a reference point with respect to gravity. Chyr used about 2000 hours of professional playtesting throughout the game's development to make sure players could navigate the game world and solve the puzzles. Chyr worked on the game on his own for the first three years, until around 2015, when he brought on additional staff to help build out the game with a stronger focus on finishing it. He had become concerned at this time by industry fears of a possible "indiepocalypse" caused by oversaturation of indie games, and was not sure if Manifold Garden would reach completion. By April 2018, the development of the game had been ongoing for five years. Chyr stated then that he might leave game development after it was released, due to art games not being financially sustainable. He estimated that the game would have to sell at least 40,000 units to be successful and not a "total disaster" financially. He stated that a lack of experience cost him a year or two of development time, and that development was further protracted by the game's scope, which was realistically large enough to require three developers. At that point, he and the other developers on the game were not taking a salary beyond cost-of-living expenses, in order to maximize the amount of funding they had been able to get. Sometime before release, Chyr signed a year-long exclusivity deal with the Epic Games Store, as well as with Apple Inc., which helped to finance the completion of the game and to see it through the period after release. The game's inspirations include the video games Starseed Pilgrim, Antichamber, Infinifactory, Portal, Fez, and The Witness. Chyr was also inspired by the films Inception, Blade Runner, and 2001: A Space Odyssey, the books House of Leaves by Mark Z. Danielewski and Blame! by Tsutomu Nihei, and the architecture of Frank Lloyd Wright and Tadao Ando. The game was also intended as a metaphor for the last 400 years of physics discoveries. The game was released on October 18, 2019 for Microsoft Windows and macOS, and for iOS devices through Apple Arcade. Versions for Nintendo Switch, PlayStation 4, and Xbox One were released on August 18, 2020. Reception Manifold Garden received "generally favorable reviews" according to review aggregator Metacritic. The game was called "beautifully hypnotic" by Philippa Warr of Rock, Paper, Shotgun. Nathan Grayson of Kotaku called the game "incredibly pretty" and "damn cool", remarking that the game "broke" his brain. Christian Donlan of Eurogamer named Manifold Garden as a "Eurogamer Essential" title, praising its puzzle design by writing that its "epiphanies are not cheap at all. They are surprisingly frequent, sure, but each one stands for something, a little bit more understanding, a little bit more progress, a fresh way of viewing the world that had been obscured until now." Polygon's Nicole Carpenter called the game a "surreal masterpiece", and also commented favorably on the puzzle design, stating "I often feel like I have no idea what I’m doing, but I never feel despondent or troubled by that confusion." The game was nominated for "Game of the Year" and "Best Audio/Visual Accomplishment" at the Pocket Gamer Mobile Games Awards, and for "Best Debut" with William Chyr Studios at the Game Developers Choice Awards.
It also received honorable mentions for Best Audio Design at the Independent Games Festival, and was also nominated for "Debut Game" at the 16th British Academy Games Awards. References External links 2019 video games Apple Arcade games Art games First-person video games Indie video games IOS games MacOS games Nintendo Switch games PlayStation 4 games PlayStation 5 games Puzzle video games Single-player video games Video games developed in the United States Video games inspired by M. C. Escher Windows games Xbox One games Xbox Series X and Series S games
36219037
https://en.wikipedia.org/wiki/Ansible%20%28software%29
Ansible (software)
Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems, and can configure both Unix-like systems and Microsoft Windows. It includes its own declarative language to describe system configuration. Ansible was written by Michael DeHaan and acquired by Red Hat in 2015. Ansible is agentless, temporarily connecting remotely via SSH or Windows Remote Management (allowing remote PowerShell execution) to do its tasks. History The term "ansible" was coined by Ursula K. Le Guin in her 1966 novel Rocannon's World, and refers to fictional instantaneous communication systems. The Ansible tool was developed by Michael DeHaan, the author of the provisioning server application Cobbler and co-author of the Fedora Unified Network Controller (Func) framework for remote administration. Ansible, Inc. (originally AnsibleWorks, Inc.) was the company founded in 2013 by Michael DeHaan, Timothy Gerla, and Saïd Ziouani to commercially support and sponsor Ansible. Red Hat acquired Ansible in October 2015. Ansible is included as part of the Fedora distribution of Linux, owned by Red Hat, and is also available for Red Hat Enterprise Linux, CentOS, openSUSE, SUSE Linux Enterprise, Debian, Ubuntu, Scientific Linux, and Oracle Linux via Extra Packages for Enterprise Linux (EPEL), as well as for other operating systems. Architecture Overview Ansible helps to manage multiple machines by selecting portions of Ansible's inventory, stored in simple ASCII text files. The inventory is configurable, and target machine inventory can be sourced dynamically or from cloud-based sources in different formats (YAML, INI). Sensitive data can be stored in encrypted files using Ansible Vault since 2014. In contrast with other popular configuration-management software such as Chef, Puppet, and CFEngine, Ansible uses an agentless architecture, with Ansible software not normally running or even installed on the controlled node. Instead, Ansible orchestrates a node by installing and running modules on the node temporarily via SSH. For the duration of an orchestration task, a process running the module communicates with the controlling machine using a JSON-based protocol via its standard input and output. When Ansible is not managing a node, it does not consume resources on the node because no daemons are run and no software is installed. Dependencies Ansible requires Python to be installed on all managing machines, along with the pip package manager, the configuration-management software itself, and its dependent packages. Managed network devices require no extra dependencies and are agentless. Control node The control node (master host) is intended to manage (orchestrate) target machines (nodes, termed the "inventory"; see below). Control nodes can only run on Linux and similar Unix-like systems; Windows is not supported as a control node. Multiple control nodes are allowed. Ansible does not require a single controlling machine for orchestration, which keeps disaster recovery simple. Nodes are managed by the controlling node over SSH. Design goals The design goals of Ansible include: Minimal in nature. Management systems should not impose additional dependencies on the environment. Consistent. With Ansible one should be able to create consistent environments. Secure. Ansible does not deploy agents to nodes. Only OpenSSH and Python are required on the managed nodes. Reliable. When carefully written, an Ansible playbook can be idempotent, preventing unexpected side-effects on the managed systems. It is possible to write playbooks that are not idempotent. Minimal learning required. Playbooks use an easy and descriptive language based on YAML and Jinja templates. Modules Modules are mostly standalone and can be written in a standard scripting language (such as Python, Perl, Ruby, Bash, etc.). One of the guiding properties of modules is idempotency, which means that even if an operation is repeated multiple times (e.g., upon recovery from an outage), it will always place the system into the same state.
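As a rough sketch of this module model, the skeleton below shows what a minimal custom module written in Python might look like. The module name and its single "name" parameter are hypothetical; AnsibleModule, argument_spec, and exit_json are the standard helpers Ansible provides to Python modules.

#!/usr/bin/python
# Minimal illustrative Ansible module (hypothetical example).
from ansible.module_utils.basic import AnsibleModule

def main():
    # Declare the parameters this module accepts; Ansible validates them
    # and passes the values in over the JSON-based protocol described above.
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
        ),
        supports_check_mode=True,
    )
    # A real module would inspect the managed host here and only make a
    # change when one is actually needed, which is what gives modules
    # their idempotency.
    module.exit_json(changed=False, name=module.params['name'])

if __name__ == '__main__':
    main()

Copied to the managed node and executed, such a module reports its result as JSON on standard output, which the control node then collects.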
Inventory configuration The location of target nodes is specified through inventory configuration lists (INI or YAML formatted), read from a default location on Linux. The configuration file lists either the IP address or hostname of each node that is accessible by Ansible. In addition, nodes can be assigned to groups. An example inventory in INI format:

192.168.6.1

[webservers]
foo.example.com
bar.example.com

This configuration file specifies three nodes: the first node is specified by an IP address and the latter two nodes are specified by hostnames. Additionally, the latter two nodes are grouped under the webservers group. Ansible can also use a custom Dynamic Inventory script, which can dynamically pull data from a different system, and supports groups of groups. Playbooks Playbooks are YAML files that store lists of tasks for repeated execution on managed nodes. Each playbook maps (associates) a group of hosts to a set of roles. Each role is represented by calls to Ansible tasks. Ansible Tower Ansible Tower is a REST API, web service, and web-based interface (application) designed to make Ansible more accessible to people with a wide range of IT skillsets. It is a hub for automation tasks. Tower is a commercial product supported by Red Hat, Inc., but is derived from the upstream AWX project, which has been open source since September 2017. There was also another open-source alternative to Tower, Semaphore, written in Go. Platform support Control machines have to be Linux/Unix hosts (for example BSD, CentOS, Debian, macOS, Red Hat Enterprise Linux, SUSE Linux Enterprise, Ubuntu), and Python 2.7 or 3.5 is required. Managed nodes, if they are Unix-like, must have Python 2.4 or later. For managed nodes with Python 2.5 or earlier, the python-simplejson package is also required. Since version 1.7, Ansible can also manage Windows nodes. In this case, native PowerShell remoting supported by the WS-Management protocol is used instead of SSH. Cloud integration Ansible can deploy to bare metal hosts, virtual machines, and cloud environments, including Amazon Web Services, Atomic, Lumen, Cloudscale, CloudStack, DigitalOcean, Dimension Data, Docker, Google Cloud Platform, KVM, Linode, LXC, LXD, Microsoft Azure, OpenStack, Oracle Cloud, OVH, oVirt, Packet, Profitbricks, PubNub, Rackspace, Scaleway, SmartOS, SoftLayer, Univention, VMware, Webfaction, and XenServer. AnsibleFest AnsibleFest is an annual conference of the Ansible community of users, contributors, etc. See also Comparison of open-source configuration management software Infrastructure as code (IaC) Infrastructure as Code Tools References External links Free software programmed in Python Orchestration software Remote administration software Software distribution Software using the GPL license
1931059
https://en.wikipedia.org/wiki/Lincity
Lincity
Lincity is a free and open-source software construction and management simulation game, which puts the player in control of managing a city's socio-economy, similar in concept to SimCity. The player can develop a city by buying appropriate buildings, services and infrastructure. Its name is both a Linux reference and a play on the title of the original city-building game, SimCity, and it was released under the GNU General Public License v2. Gameplay Lincity features complex 2D and top-down gameplay. The simulation considers population, employment, basic water management and ecology, goods (availability and production), raw materials (ore, steel, coal), services (education, health, fire protection, leisures), energy (electricity and charcoal, coal with finite reserves, solar and wind power) and other constraints such as finance, pollution and transports. The player has to take care of population growth and various socio-economic balances. Lincity can be won in two ways: reaching sustainable development, or evacuating the entire population with spacecraft. The Lincity homepage has a Hall of Fame, listing players who have succeeded in one of these two goals. History Lincity was created around 1995 as Simcity clone for Linux by I. J. Peters and hosted on SourceForge in 2001. Lincity was originally designed for Linux, but was ported later to Microsoft Windows, BeOS, OS/2, AmigaOS 4, and other operating systems. Mac OS X is supported when compiled from source code using GCC and run using X11.app. It uses SVGALib or X11 as its graphics interface API on Unix systems. As Lincity does software rendering it requires no 3D graphics card and also has very low demands on other computing resources, e.g. much memory or a fast processor. Since 1999 there have been only minor changes to Lincity; the last update was in August 2004. In 2005 significant development continued with the fork Lincity-NG, which was later transferred to Google Code and then to GitHub. Lincity-NG uses SDL and OpenGL, and features an isometric view, based on Simcity 3000, and graphics which resemble Simcity 3000's. Critical reception In 2000, a CNN article on Linux games highlighted Lincity's sophistication. It was The Linux Game Tome Game of The Month for January 2005. Lincity was 2008 a featured freeware title on 1up.com. The Washington Post featured Lincity in 2009. See also List of open source games Simutrans OpenTTD OpenCity Micropolis References External links Lincity, at SourceForge Lincity-NG Source, at GitHub (mirror) Lincity-NG downloads (alternate) 1995 video games AmigaOS 4 games BeOS games Open-source video games City-building games Fangames Linux games OS/2 games Video game clones Amiga games Free software programmed in C MacOS games SimCity Unix games Video games developed in the United Kingdom Video games with isometric graphics Windows games
866768
https://en.wikipedia.org/wiki/Win4Lin
Win4Lin
Win4Lin is a discontinued proprietary software application for Linux which allowed users to run a copy of Windows 9x, Windows 2000 or Windows XP and their applications on the Linux desktop. Win4Lin was based on Merge software, a product which changed owners several times until it was bought by Win4Lin Inc. Citing changes in the desktop virtualization industry, the software's publisher, Virtual Bridges, discontinued Win4Lin Pro. Products and technology In 2006, Win4Lin came in three different versions, depending on the virtualization requirements of the user. Win4Lin 9x allowed the user to run a full copy of Windows 98 or Windows Me inside a virtual machine. Win4Lin Home allowed users to only emulate applications. Win4Lin Pro offered users the ability to install a fully virtualized Windows 2000 or Windows XP. The Win4Lin 9x/Pro technology (henceforth the only technology discussed in this section) operates by running Windows applications in a virtual machine. Unlike Wine or CrossOver, which are emulation-based, virtualization-based software such as VMware or Win4Lin requires users to have a Windows license in order to run applications, since they must install a full copy of Windows within the virtual machine. Unlike VMware, however, Win4Lin provides the virtual guest operating system with access to the native Linux filesystem, and allows the Linux host to access the guest's files even when the virtual machine is not running. In addition to the convenience this offers, Computerworld found in their 2002 review that Win4Lin gained significant performance over VMware by using the native Linux filesystem, but also noted that this approach (unlike VMware's) limited the installation to only one version of Windows on a Win4Lin machine. When the Win4Lin application starts, it displays a window on the Linux desktop which contains the Windows desktop environment. Users can then install or run applications as they normally would from within Windows. Win4Lin supports Linux printers, internet connections, and Windows networking, but does not support DirectX and by extension most Windows games. The company also offered Win4BSD for FreeBSD. History Win4Lin was initially based on Merge software originally developed at Locus Computing Corporation, which changed hands several times until it ended up in the assets of NeTraverse; these were purchased in 2005 by Win4Lin Inc., which introduced Win4Lin Pro Desktop. This was based on a 'tuned' version of QEMU and KQEMU, and it hosted Windows NT-based versions of Windows. In June 2006, Win4Lin released Win4VDI for Linux based on the same code base. Win4VDI for Linux served Microsoft Windows desktops to thin clients from a Linux server. Virtual Bridges discontinued support for Win4Lin 9x in 2007. The Win4Lin Pro Desktop product ceased to be supported in March 2010. Reception Many users reported that the 9x version ran Windows software at near-native speed, even on quite low-powered machines such as Pentium IIs. Nicholas Petreley praised Win4Lin in two of his columns in 2000 for its significantly faster performance compared to its competitor VMware.
See also x86 virtualization References External links Win4Lin 5.0 makes big improvements, Linux.com, 2008 Win4Lin Pro Desktop 4.0 lags behind free alternatives, Linux.com, 2007 Break the Hardware Upgrade Cycle with Win4Lin Windows Virtual Desktop Server, Linux Journal, 2007 Run Windows On Linux: Win4Lin Revisited [Win4Lin Pro 3.0 review], Tom's Hardware, 2006 INQUIRER helps debug Win4Lin Pro [2.7], The Inquirer, 2006 Product Review — Running Windows on Linux, Win4Lin 2.7 vs. VMware Workstation 5.5.1., Open Source Magazine, 2006 Review: Win4Lin Pro [2.0], Linux.com, 2005 A Look at Win4Lin 5.1, OSNews, 2004 Review of Win4Lin 4.0, OSNews, 2002 VMware Express 2.0 and Win4Lin 2.0: A Comparison Review, Linux Journal, 2001 TreLOS's Win4Lin (2000) Virtualization software Linux emulation software Discontinued software
61531741
https://en.wikipedia.org/wiki/Alexander%20Heid
Alexander Heid
Alexander Heid is an American computer security consultant, white hat hacker, and business executive. Heid is a co-founder of the South Florida hacker conference and hacker group known as HackMiami, and currently serves as the chief research officer of the New York City information security firm SecurityScorecard. Early life and education Alexander Heid grew up in Miami, Florida, and attended Barbara Goleman Senior High School. Career Alexander Heid currently serves as chief research officer of the New York City information security firm SecurityScorecard. Heid joined the company in 2014, working directly with Aleksandr Yampolskiy and Sam Kassoumeh to develop the signal collection methodologies that power the cyber threat intelligence and third-party management aspects of the platform. Heid is documented as being one of the first researchers to attribute the Equifax data breach to a vulnerability in Apache Struts 2 within the first hours of the breach announcement. Prior to SecurityScorecard, Heid was the head of threat intelligence at Prolexic. Heid developed counterattack and neutralization methodologies against DDoS campaigns by discovering vulnerabilities in attackers' botnet command-and-control servers. During his time at Prolexic, Heid was involved in the defense and mitigation of the Operation Ababil campaigns that were targeting the financial sector. Additionally, Heid has held senior security roles within the banking industry, specializing in web application vulnerability analysis and botnet cyber threat intelligence. Heid has given multiple presentations at hacker conferences demonstrating exploitable vulnerabilities within crimeware applications that can be leveraged by white hat researchers for the purposes of attribution and threat neutralization. Heid is also the author of the 2013 cryptocurrency threat intelligence report, "Analysis of the Cryptocurrency Marketplace," which was the first forensic report about malware threats relating to blockchain technologies. The report is ranked as one of the Top 1000 'Most Cited Blockchain Publications' by BlockchainLibrary. References Living people Hackers American technology executives People in information technology Year of birth missing (living people)
66601502
https://en.wikipedia.org/wiki/Kobalos%20%28malware%29
Kobalos (malware)
Kobalos is a type of backdoor malware that runs on Linux, FreeBSD, and Solaris. The malware has been targeting supercomputers, especially those used in academia and scientific institutions, by stealing SSH credentials. Artifacts in the code suggest that it may have once run on AIX, Windows 95, and Windows 3.11. References Linux malware Trojan horses
14989884
https://en.wikipedia.org/wiki/IBM%20Rome%20Software%20Lab
IBM Rome Software Lab
The IBM Rome Software Lab (formerly known as IBM Tivoli Rome Laboratory) is one of the largest software development laboratories in Italy, and one of the largest IBM Software Group Labs in Europe. Founded in 1978, the Rome Lab (located in Rome) now has more than 550 professionals among software developers, project managers, IT specialists, and IT architects. The main mission of the Rome Lab is focused on IBM Tivoli development, including Tivoli Configuration Manager, Tivoli Remote Control, Tivoli Workload Scheduler, and Tivoli Monitoring. In 2004, the laboratory focused on the adoption of the IBM Rational Unified Process (RUP) in Tivoli and in the wider Software Group inside IBM. In 2007, IBM opened the "Rome SOA Leadership Center" hosted by and formed with a team of experts on service-oriented architecture (SOA) coming from the IBM Rome Software Laboratory. The Rome laboratory constantly collaborates with Italian Universities and Research Centers. Another portion of this reality is the "Rome Solutions Lab", which includes two main areas: the "Publishing" area involved in the development, support and delivery of the IBM NICA (Networked Interactive Content Access); and the "Industry Solutions" area, an internal development organization, working on the development, support and delivery of customer software solutions based on and providing extensions to IBM Software Group products. References IBM facilities
46528021
https://en.wikipedia.org/wiki/EC-IT%20product%20dispute
EC-IT product dispute
European Communities and its member states — Tariff treatment of certain information technology products, or EC - IT Product Dispute (DS376) for short, is a WTO dispute initiated by Japan with the European Communities and its member states as respondents. The dispute was initiated on September 28, 2008 and was settled on August 16, 2010, with the Dispute Settlement Body recognizing the European Union's violations. Products at issue The products at issue in the Japan - European Union dispute were IT products such as FDPs (flat panel display devices) that have digital DVI connectors used for connecting to computers and other equipment; STBCs (set-top boxes), which have a communication function for accessing the Internet and recording; and MFMs (multifunctional digital machines), used for scanning, printing, copying, and faxing. History The dispute was initiated by Japan when it requested consultations with the European Communities on September 28, 2008. The issue at hand was the tariff treatment of certain information technology products. Japan claimed that this tariff treatment violated the European Union's commitments under the WTO's Information Technology Agreement. According to Japan, instead of granting the duty-free tariff concessions provided by the ITA, the European Union and its member states were imposing duties on certain Japanese technologies. On August 18, 2008, Japan, along with the United States and Chinese Taipei, jointly requested the establishment of a panel. On 21 July 2009, the Chairman of the panel informed the Dispute Settlement Body (DSB) that, due to the complex nature of the issue, the panel would not be able to complete its work within six months from the date of its creation. It was estimated that the panel would issue its final report in December 2009, but issuance was postponed several times, with a final issuance date of July 2010. On August 16, 2010 the panel was concluded and its reports were circulated to Members. WTO findings and ruling The European Communities had committed to provide duty-free treatment of certain IT products under the Information Technology Agreement (ITA). The panel found that the measures at issue (the tariffs) were not consistent with the following Articles: Art. II:1(a) and II:1(b) of the General Agreement on Tariffs and Trade of 1994. The measures required classification of certain FDPs, STBCs and MFMs under dutiable headings, although these products fell under HS1996 subheadings which are supposed to be duty-free. The panel concluded that the European Communities had failed to promptly publish the explanatory notes so as to enable governments and traders to become acquainted with them. The Panel also found that the European Communities violated Art. X:2 by enforcing the explanatory notes before they were published officially. After WTO ruling On October 25, 2010, the European Union informed the Dispute Settlement Body that it intended to implement the recommendations and rulings of the panel. It requested a reasonable period of time. Japan and the European Union informed the Dispute Settlement Body that the stated time period would be nine months and nine days (ending on 20 June 2011). On 20 July 2011, the European Union stated that it had adopted all measures necessary to follow the DSB's rulings in June, although Japan said that it had several concerns about certain measures taken by the European Union and was not yet in a position to agree with the European Union's claim. In order to come to a resolution and reduce the scope of procedural disputes, Japan and the European Union concluded a sequencing agreement.
See also List of WTO dispute settlement cases References World Trade Organization dispute settlement cases Economy of the European Union Foreign trade of Japan
79155
https://en.wikipedia.org/wiki/Phegeus
Phegeus
In Greek mythology, Phegeus (Ancient Greek: Φηγεύς) was the name of the following characters: Phegeus, another name for Aegialeus, son of Inachus and king of Sicyon. Phegeus, king of Psophis. Phegeus, was one of the Thebans who ambushed Tydeus during the war of the Seven against Thebes. Like others participating in this ambush he was killed by Tydeus. Phegeus, a defender of Thebes in the war of the Seven against Thebes. He was killed by Agreus. Phegeus, an Athenian messenger whom Theseus sent to Creon with a threat of war against Thebes, if Creon would not let the bodies of those who had died attacking Thebes in the war of the Seven against Thebes be burned. Phegeus, son of Dares, priest of Hephaestus at Troy. He was the brother of Idaeus and was killed by Diomedes during the Trojan War. Phegeus, one of Aeneas' companions in Italy. He was killed by Turnus, the man who opposed Aeneas in Italy. Phegeus, soldier in the army of Aeneas. He was killed by Turnus, the man who opposed Aeneas in Italy. Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library. Publius Papinius Statius, The Thebaid translated by John Henry Mozley. Loeb Classical Library Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1928. Online version at the Topos Text Project. Publius Papinius Statius, The Thebaid. Vol I-II. John Henry Mozley. London: William Heinemann; New York: G.P. Putnam's Sons. 1928. Latin text available at the Perseus Digital Library. Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library. Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library. Trojans People of the Trojan War Characters in the Iliad Characters in the Aeneid Characters in Seven against Thebes Argive characters in Greek mythology Attican characters in Greek mythology Theban characters in Greek mythology Characters in Greek mythology Theseus Mythology of Argolis Mythology of Sicyon
19282424
https://en.wikipedia.org/wiki/Devil%20Summoner
Devil Summoner
Devil Summoner, initially marketed as Shin Megami Tensei: Devil Summoner, is a video game franchise developed and primarily published by Atlus. Focused on a series of role-playing video games, Devil Summoner is a spin-off from Atlus' Megami Tensei franchise. The first entry in the series, Shin Megami Tensei: Devil Summoner, was released in 1995 for the Sega Saturn. The series has seen several more games since, with the most recent main entry being the upcoming Soul Hackers 2, scheduled for 2022. Devil Summoner began as a spin-off based on the positively-received Shin Megami Tensei If... (1994). The games, set on an alternate Earth between the 1920s and a fictionalised near future, feature a person either related to or holding the Kuzunoha family name who uses demons to investigate cases involving the supernatural. Created by Kouji Okada, the series was developed by multiple Megami Tensei veterans including artist Kazuma Kaneko and composer Shoji Meguro. While each entry has a different story and time period, the games share a universe and use recurring detective story elements. The series remained exclusive to Japan until the release of Devil Summoner: Raidou Kuzunoha vs. the Soulless Army in 2006, with all games apart from the original receiving English localizations from Atlus USA. Several entries have been supported by spin-off media and supplementary game materials. The first two Devil Summoner titles were among the best-selling titles for the Saturn. Games in the series have seen generally positive reception in both Japan and the West. Titles Main series Shin Megami Tensei: Devil Summoner is the first entry in the series, and was released in Japan for the Sega Saturn in 1995. The game was later ported to the PlayStation Portable, released in Japan in 2009. To date, it remains exclusive to Japan. Set in the coastal city of Hirasaki, the game follows a silent protagonist brought back from death into the body of supernatural detective Kyouji Kuzunoha as he investigates recent supernatural activities. Devil Summoner: Soul Hackers is the second entry in the series, released in Japan for the Sega Saturn in 1997. An expanded PlayStation port was released in Japan in 1999. Another port for Nintendo 3DS was released in 2012 in Japan, and in the West in 2013. Set in the new coastal city of Amami City, created and run by the tech company Algon Soft, the game follows a member of the Spookies hacker group investigating unusual events surrounding Algon's virtual city. Devil Summoner: Raidou Kuzunoha vs. the Soulless Army is the third entry in the series, and was released for the PlayStation 2 (PS2) in 2006 in Japan and North America, and 2007 in Europe. The storyline follows the exploits of titular protagonist Raidou Kuzunoha XIV as he investigates the Soulless Army, which is threatening Tokyo. Devil Summoner 2: Raidou Kuzunoha vs. King Abaddon is the fourth game in the series, and was released for PS2 in 2008 in Japan and 2009 in North America as a limited release. Continuing the story of Raidou Kuzunoha XIV, the storyline follows an investigation into the titular antagonist King Abaddon. Soul Hackers 2 is the upcoming fifth game and a sequel to Soul Hackers, releasing worldwide in 2022 for Microsoft Windows, PlayStation 4, PlayStation 5, Xbox One and Xbox Series X/S. Set in an unspecified near future, the plot follows agents of the divine being Aion as they gather summoners from rival clans to face a world-ending threat.
Related media The characters and art of the original Devil Summoner were used for the mobile pinball game Shin Megami Tensei Pinball: Judgement, released in Japan in 2006 through EZweb. Soul Hackers saw two mobile follow-ups in 2008 through EZweb. They are Devil Summoner: Soul Hackers Intruder, a tactical role-playing game with adventure game elements; and Devil Summoner: Soul Hackers New Generation, a turn-based game set in a virtual world. Devil Summoner was adapted into a live-action television series in 1997, with its popularity prompting a second series in 1998. Two novels based on the series written by Ryo Suzukaze were published by the Aspect Books imprint of Media Works in 1996. Soul Hackers received two novel spin-offs: Devil Summoner Soul Hackers: Death City Korin by Osamu Makino in April 1998 from Aspect Books, and Devil Summoner Soul Hackers: Nightmare of the Butterfly by Shinya Kasai in May 1999 from Famitsu Bunko. A manga adaptation, written by Fumio Sasahara and illustrated by Kazumi Takasawa, was released in two volumes in March and August 1999 by Kadokawa Shoten. The Raidou Kuzunoha duology saw multiple media expansions. A spin-off novel called Devil Summoner: Raidou Kuzunoha vs. the Dead Messengers, written by Boogey Toumon and illustrated by Kazuma Kaneko, was released by Kadokawa Shoten in 2006. A spin-off manga, Devil Summoner: Raidou Kuzunoha vs. Kodoku no Marebito, began serialization through the online Famitsu Comic Clear in 2009, being released in six volumes between 2010 and 2012. The manga was written by Kirihito Ayamura based on a story draft by Kaneko, and supervised by Atlus's Kazuyuki Yamai. A two-part CD drama, Devil Summoner: Raidou Kuzunoha vs. the One-Eyed God, was released by Frontier Works during 2009. Recurring elements Rather than the post-apocalyptic setting of the main Megami Tensei series, Devil Summoner takes place on an alternate modern Earth where humans form contracts with supernatural demons using devices called COMPs, becoming known as "Devil Summoners". The protagonists, Devil Summoners often associated with the Kuzunoha family name, investigate misuse of demons. A recurring element is rival clans of summoners, the benevolent Yatagarasu and the malevolent Phantom Society. The Raidou Kuzunoha duology takes place in a fictionalized version of Japan's Taishō period. The storylines follow the protagonist, the fourteenth trained devil summoner to take on the title of Raidou Kuzunoha, facing supernatural threats in Tokyo while working at the Narumi Detective Agency. The first two titles use a traditional turn-based battle system taken from the main Megami Tensei series, with the player character and a team of up to five demons taking part in battles from a first-person perspective while navigating both an overworld map and dungeons. Soul Hackers 2 again uses a turn-based battle system, taking elements of exploiting enemy weaknesses for extra turns from the main Shin Megami Tensei series. The Raidou Kuzunoha duology shifts to an action-based battle system, with the player navigating pre-rendered town and dungeon environments. The protagonist fights in separate battle arenas with two assigned demons through random encounters, with Kuzunoha capturing demons during battles in the first game, and persuading them to join him through a conversation system in the second. An assigned demon can also be used to solve environmental puzzles. A recurring element is the player's relationship with their demons.
While demons are acquired in different ways across the games, a demon's alignment and actions in battle all play a role in how it responds to commands. If a demon uses a skill it has low affinity for too many times, it will not respond as well to commands. Recruited demons can also be fused into new demons, carrying over particular traits from their predecessors. A resource called Magnatite, or its equivalent, which is needed to keep demons summoned or to power different elements of attacks, appears in multiple entries. A recurring character is Dr Victor, who takes charge of demon fusion and takes on different appearances throughout the series. History and development Following the success of the Megami Tensei spin-off Shin Megami Tensei If... in 1994, lead developer Kouji Okada decided to create spin-off series to explore different narrative possibilities; the two initial spin-off titles were Revelations: Persona (1996) for the PlayStation, and Shin Megami Tensei: Devil Summoner for the Saturn. Devil Summoner drew on themes from detective fiction, particularly the melancholic and hardboiled fiction of Raymond Chandler. It was the first Megami Tensei title to be released on a 32-bit fifth-generation home video game console, and the first Megami Tensei game to feature 3D graphics. The staff included Okada as director, recurring writer Ryutaro Ito, and artist Kazuma Kaneko. Following the success of Devil Summoner, development of a sequel moved forward, drawing inspiration from the potential dangers of the internet. Okada and Kaneko returned to their respective roles, while Shogo Isogai created the scenario based on Kaneko's draft. Following the release of Shin Megami Tensei III: Nocturne in 2003, producer Kazuyuki Yamai wanted a project for his team that would offer new challenges, deciding to make a new Devil Summoner title based on staff feedback. Kaneko returned as character designer. A sequel was produced shortly afterwards, continuing Raidou Kuzunoha's story while being a standalone entry for newcomers, with Kaneko returning as both character designer and producer. It saw mechanical improvements and additions taken from the main series. A sequel to Soul Hackers was long requested by fans, though the original game was growing in age and losing mainstream recognition. Eiji Ishida and Mitsuru Hirata, who had previously worked on multiple entries in the Megami Tensei series, began production on a sequel with reworked mechanics and a new art style led by Shirow Miwa. While an overseas release was rumoured at the time of its release, the original Devil Summoner remains exclusive to Japan, with its age compared to other titles keeping it from being released during the PS2 era. Soul Hackers was originally also exclusive to Japan, with an overseas release only coming with the 3DS port. The localization was created by Atlus USA, with their focus being on emulating its time period through slang and references to cyberpunk fiction. The two Raidou Kuzunoha titles were the first Devil Summoner titles to be released in the West. For the localization of the first Raidou Kuzunoha, project leader Yu Namba incorporated slang from the 1920s to ground the storyline in that period. The second Raidou Kuzunoha saw a limited release in North America. Only three games have been released in Europe through third-party publishers: Soul Hackers by NIS America, the original Raidou Kuzunoha by Koei, and Soul Hackers 2 by Sega.
Music The original game's music was composed by Toshiko Tasaki and Tsukasa Masuko, with several tracks being repurposed during production or switching roles. For Soul Hackers, Tasaki and Masuko were joined by Shoji Meguro; Meguro focused on the game's cyberpunk themes and atmosphere, lamenting his lack of creative freedom compared to his work on Maken X. Meguro returned for the Raidou Kuzunoha duology, using brass and jazz instrumentation to emulate the 1920s alongside his signature guitar-heavy "MegaTen sound". The music of Soul Hackers 2 is being composed by the music group Monaca. Reception The first two games were among the best-selling titles on the Sega Saturn in Japan. Technology Tell, in a retrospective article on the Megami Tensei franchise, noted the series as standing out from the rest of the franchise due to not having a post-apocalyptic setting. Reception of the first Raidou Kuzunoha game was generally positive, with critics noting its break from the traditional turn-based gameplay of other Megami Tensei titles, though noting some problems caused by the new elements. Raidou Kuzunoha 2 saw a stronger reception, with critics praising it as an improvement over the first game. References Notes Atlus games Role-playing video games by series Sega Games franchises Video game franchises Video game franchises introduced in 1995 Megami Tensei
48028592
https://en.wikipedia.org/wiki/Falklands%20%2782
Falklands '82
Falklands '82 (released as Malvinas '82 in Spanish markets) is a 1986 turn-based strategy video game developed and published by Personal Software Services for the ZX Spectrum and Commodore 64. It is the fifth instalment of the Strategic Wargames series. The game is set during the 1982 Falklands War and revolves around the Argentine occupation and subsequent British re-capture of the Falkland Islands. The player controls the British Task Force as they must either defeat all Argentine forces on the archipelago or re-capture every settlement. A port for the Amstrad CPC was advertised but never released. During development, the developers obtained information and statistics of the war from NATO. The game met with mixed reviews and controversy: critics praised the detailed graphics, but some were divided over the gameplay and authenticity; others criticised the in-game potential of an Argentine "victory". Gameplay Falklands '82 is a turn-based strategy game focusing on land battles during the Falklands War. The player commands the British Task Force against the Argentine ground forces, who are occupying the islands. The game begins by allocating fifteen Royal Navy ships for the task force; a proportionate amount must be devoted for attack and defence purposes. The player must then choose one of four landing spots in northern East Falkland to begin the invasion: Port Stanley, Berkeley Sound, Cow Bay and San Carlos Bay. The SAS or SBS are available throughout the game to provide intelligence on Argentine movements; however, intelligence is limited and may only be collected a certain number of times. At any time, the player may request reinforcements from either one of the two aircraft carriers, HMS Hermes or HMS Invincible. The main objective of the game is to either defeat all occupying Argentine forces in the archipelago, or to capture and hold all ten settlements of the Falklands simultaneously. Depending on the difficulty setting, the game lasts 25 or 30 turns; if every settlement has not been occupied or any Argentine forces remain by the end of the last turn, the game will end. The capital of the Falklands, Stanley, has the highest concentration of Argentine forces. There are a total of four choices for combat: attack, move, pass, and "recce". The game includes a weather system that changes from every turn and provides obstructions for various forces. For example, stormy seas will temporarily render naval vessels and troop reinforcements unavailable, while fog will render both naval and air forces unavailable. During the course of the game, Argentine airstrikes will frequently sink Royal Navy ships, depending on how many of them were initially allocated to defensive positions. In addition, Argentine air forces will occasionally bomb and destroy British forces on the ground, which are represented as animated sprites on the map. The map also displays terrain details, including rivers and mountains. If troops are situated on top of a mountain, they will receive a defensive bonus once attacked; however, due to the steep terrain, they will move more slowly. If the player chooses to enter an enemy-controlled zone, the move will instantly end, leaving the unit vulnerable to an Argentine attack. Background and release Personal Software Services was founded in Coventry, England, by Gary Mays and Richard Cockayne in November 1981. The company was known for creating games that revolved around historic war battles and conflicts, such as Theatre Europe, Bismarck and Battle of Britain. 
The company had a partnership with French video game developer ERE Informatique and published localised versions of their products to the United Kingdom. The Strategic Wargames series was conceptualised in 1984 by software designer Alan Steel; during development of these titles, Steel would often research the upcoming game's topic and pass on his findings to associates in Coventry and London. In 1983, the company was recognised as "one of the top software houses" in the United Kingdom, and was a finalist for BBC Radio 4's New Business Enterprise Award. During development of both games, Cockayne and Mays obtained statistics for both the Cold War and Falklands War from NATO and the Soviet embassy in London. In an interview with Your Computer magazine, Richard Cockayne stated that both Theatre Europe and Falklands '82 received heavy criticism from the Campaign for Nuclear Disarmament and The Sun newspaper, respectively. An editor from The Sunday Press suggested that Falklands '82 was "distasteful" because of the game's possibility of an Argentine victory. The game was planned for an Amstrad CPC port, but was never released for that computer. In Spanish markets, the game was released as Malvinas '82, the Spanish name for the Falkland Islands. In 1986, Cockayne decided to alter products for release on 16-bit consoles, since smaller 8-bit consoles, such as the ZX Spectrum, lacked the processing power for larger strategy games. The decision was falsely interpreted by video game journalist Phillipa Irving as "pulling out" from the Spectrum market. Following years of successful sales throughout the mid-1980s, Personal Software Services experienced financial difficulties, and Cockayne admitted in a retrospective interview that "he took his eye off the ball". The company was acquired by Mirrorsoft in February 1987, and was later dispossessed due to debt. Reception The Sun newspaper criticised Falklands '82 for including a scenario where "Argentina could win," but Cockayne maintained that his company's video games did not trivialise the war. The game received mostly positive reviews from critics upon release. Rachael Smith of Your Sinclair praised the overall experience of the gameplay, stating that it was "ideal" for newcomers and plays "smooth"; however, she criticised it for being "annoyingly slow" at times. Sean Masterson of Crash criticised the gameplay, stating that it fails to "offer a serious challenge" and prohibits the player from experimenting with choices the real commanders never had, such as planning tactical air strikes. A reviewer from Sinclair User praised the gameplay, stating that it was "swift" and had "nice touches" for beginners to the wargame genre. He sarcastically remarked that the inability to play on the Argentine side would help improve Anglo-Argentinian relations. A reviewer from Zzap!64 criticised the game's lack of authenticity and strategy, stating that the developer's previous games had more credence if the player "played them with their eyes shut". M. Evan Brooks reviewed the game for Computer Gaming World, and stated that "While Iwo/Falklands may not be to the taste of the experienced wargamer, they may prove just the ticket to gaining another convert to computer conflict simulations." A reviewer from ZX Computing heralded the graphics and details of the map but suggested that "hardened wargamers" would not be interested in graphical advancements. 
A reviewer from Computer Gamer praised its simplicity, stating that it was a "simple game" and would prove to be an "excellent" introduction to the wargame strategy genre. In a 1994 survey of wargames Computer Gaming World gave the title one star out of five, stating that "it has aged poorly". In a retrospective review, Tim Stone of Rock, Paper, Shotgun praised the game's ability to display the war in a neutral manner; however, he questioned the inability to play on the Argentine side. Stone concluded that the game had "greater significance" over other war strategy games at the time and had an "undeniable quality". References 1986 video games Commodore 64 games Falklands War video games Turn-based strategy video games Video games developed in the United Kingdom ZX Spectrum games Personal Software Services games
2556691
https://en.wikipedia.org/wiki/Long%20mode
Long mode
In the x86-64 computer architecture, long mode is the mode where a 64-bit operating system can access 64-bit instructions and registers. 64-bit programs are run in a sub-mode called 64-bit mode, while 32-bit programs and 16-bit protected mode programs are executed in a sub-mode called compatibility mode. Real mode or virtual 8086 mode programs cannot be natively run in long mode. Overview An x86-64 processor acts identically to an IA-32 processor when running in real mode or protected mode, which are supported sub-modes when the processor is not in long mode. A bit in the CPUID extended attributes field informs programs in real or protected modes if the processor can go to long mode, which allows a program to detect an x86-64 processor. This is similar to the CPUID attributes bit that Intel IA-64 processors use to allow programs to detect if they are running under IA-32 emulation. With a computer running a legacy BIOS, the BIOS and the boot loader run in real mode; the 64-bit operating system kernel then checks for long mode support, switches the CPU into long mode, and starts new kernel-mode threads running 64-bit code. With a computer running UEFI, the UEFI firmware (except the CSM and legacy Option ROMs), the UEFI boot loader and the UEFI operating system kernel all run in long mode. Memory limitations While register sizes have increased to 64 bits from the previous x86 architecture, memory addressing has not yet been increased to the full 64 bits. For the time being, it is impractical to equip computers with sufficient memory to require a full 64 bits. As long as that remains the case, load/store unit(s), cache tags, MMUs and TLBs can be simplified without any loss of usable memory. Despite this limitation, software is programmed using full 64-bit pointers, and will therefore be able to use progressively larger address spaces as they become supported by future processors and operating systems. Current limits The first CPUs implementing the x86-64 architecture, namely the AMD Athlon 64 / Opteron (K8) CPUs, had 48-bit virtual and 40-bit physical addressing. The virtual address space of these processors is divided into two 47-bit regions, one starting at the lowest possible address, the other extending down from the largest. Attempting to use addresses falling outside this range will cause a general protection fault. The limit of physical addressing constrains how much installed RAM is able to be accessed by the computer. On a ccNUMA multiprocessor system (Opteron) this includes the memory which is installed in the remote nodes, because the CPUs can directly address (and cache) all memory, regardless of whether it is on the home node or a remote node. The 1 TB limit (40-bit) for physical memory on the K8 is huge by typical personal computer standards, but might have been a limitation for use in supercomputers. Consequently, the K10 (or "10h") microarchitecture implements 48-bit physical addresses and so can address up to 256 TB of RAM. When needed, the microarchitecture can be expanded step by step without side effects for software, while keeping implementation costs down. For future expansion, the architecture supports expanding the virtual address space to 64 bits, and physical memory addressing to 52 bits (limited by the page table entry format). This would allow the processor to address 2^64 bytes (16 exabytes) of virtual address space and 2^52 bytes (4 petabytes) of physical address space.
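As a concrete illustration of the canonical-address rule described above, the following Python sketch checks whether a 64-bit value is a valid (canonical) virtual address under 48-bit virtual addressing. The function name and the default of 48 implemented virtual-address bits are illustrative; real processors perform this check in hardware whenever an address is used.

def is_canonical(addr, virt_bits=48):
    # A canonical x86-64 address is the sign extension of its low
    # virt_bits bits: bits 63 down to virt_bits-1 must be all 0s or all 1s.
    upper = addr >> (virt_bits - 1)
    return upper == 0 or upper == (1 << (65 - virt_bits)) - 1

# The two canonical regions for 48-bit virtual addressing:
print(is_canonical(0x0000_7FFF_FFFF_FFFF))  # top of the lower region -> True
print(is_canonical(0xFFFF_8000_0000_0000))  # bottom of the upper region -> True
print(is_canonical(0x0000_8000_0000_0000))  # falls in the gap -> False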
See also x86-64 64-bit compatibility mode References External links X86 operating modes Programming language implementation de:AMD64#Betriebsmodi
367283
https://en.wikipedia.org/wiki/Degrees%20of%20the%20University%20of%20Oxford
Degrees of the University of Oxford
The system of academic degrees at the University of Oxford can be confusing to those not familiar with it. This is not merely because many degree titles date from the Middle Ages, but also because many changes have been haphazardly introduced in recent years. For example, the (medieval) BD, BM, BCL, etc. are postgraduate degrees, while the (modern) MPhys, MEng, etc. are undergraduate degrees. In postnominals, "University of Oxford" is normally abbreviated "Oxon.", which is short for (Academia) Oxoniensis: e.g., MA (Oxon.), although within the university itself the abbreviation "Oxf" can be used. Undergraduate Awards Bachelor of Arts (BA) Bachelor of Fine Art (BFA) The bachelor's degree is awarded soon after the end of the degree course (three or four years after matriculation). Contrary to common UK practice, Oxford does not award bachelor's degrees with honours; however, a student whose degree is classified third class or higher is considered "to have achieved honours status". Until recently, all undergraduates studied for the degree of Bachelor of Arts. The BFA was introduced in 1978. Holders of the degrees of BA and BFA both proceed in time to the degree of Master of Arts (MA). Note that the BA is still awarded even for some science courses, such as the three-year Physics degree which also still adheres to the traditional degree classification structure, with only 30% of candidates being awarded an upper second class honours or above. The degree of Bachelor of Science (BSc) has never been awarded as an undergraduate degree at Oxford; it used to be awarded as a graduate qualification, however. Bachelor of Theology (BTh) Bachelor of Education (BEd) The BTh is awarded primarily to students of the various theological colleges and halls, such as Wycliffe Hall, Regent's Park College, Blackfriars, St Stephen's House, Ripon College Cuddesdon and the former Westminster College, Oxford. Usually, these students are candidates for the ordained ministry of one of the mainstream Christian denominations, but may be drawn from any faith background or none at the discretion of the College or Hall. It should not be confused with the degree of bachelor of divinity (BD), which is a postgraduate degree. The BEd was formerly awarded to students residing at Westminster College, Culham College of Education, the Lady Spencer Churchill College of Education, and Milton Keynes College of Education (formerly the North Buckinghamshire College of Education) who read concurrently at the university. Undergraduate Advanced Diploma (UGAdvDip) The UGAdvDip is a FHEQ Level 6 award which is equivalent to the third year of undergraduate study and it is generally accepted as equivalent to a second bachelor's degree or a Graduate Diploma. Undergraduate Advanced Diplomas are only offered at the University of Cambridge and the University of Oxford. 
Undergraduate Master's degrees In the 1990s the following degrees were introduced to increase public recognition of the four-year undergraduate science programmes in these subjects: Master of Biochemistry (MBiochem) Master of Chemistry (MChem) Master of Computer Science (MCompSci) Master of Computer Science and Philosophy (MCompSciPhil), first entry 2012 Master of Earth Sciences (MEarthSc) Master of Engineering (MEng) Master of Mathematics (MMath) Master of Mathematics and Computer Science (MMathCompSci) Master of Mathematics and Philosophy (MMathPhil) Master of Physics (MPhys) Master of Physics and Philosophy (MPhysPhil) The holders of these degrees have the academic precedence and standing of BAs until the twenty-first term from matriculation, when they rank as MAs. From 2014 graduates with these degrees wear the same academic gown as a Master of Studies, with a black silk hood lined with sand fabric. Previously the academic dress was simply the BA gown and hood, (from 2009, even after achieving MA status). In contrast, science undergraduates at Cambridge may be granted the additional degree of Master of Natural Sciences (MSci) while continuing to be awarded the BA (and the subsequent MA). Note that biology and physiology undergraduates are still awarded the BA/MA, as these are currently three-year courses. All other undergraduates, whether their degree courses last three years or four years, are awarded the BA/MA Degree of Master of Arts The degree of Master of Arts is awarded to BAs and BFAs seven years after matriculation, without further examination, upon the payment of a nominal fee. Recipients of undergraduate master's degrees are not eligible to incept as MA, but are afforded the same privileges after the statutory twenty-one terms. This system dates from the Middle Ages, when the study of the liberal arts took seven years. Postgraduate degrees Bachelors' degrees Bachelor of Divinity (BD) Bachelor of Medicine & Bachelor of Surgery (BM, BCh) Bachelor of Civil Law (BCL) Bachelor of Letters (BLitt) (no longer awarded) Bachelor of Science (BSc) (no longer awarded) Bachelor of Music (BMus) Bachelor of Philosophy (BPhil) (now only awarded in Philosophy) In medieval times a student could not study some subjects until he had completed his study in the liberal arts. These were known as the higher faculties. The degrees in Science and Letters were added in the 19th century, and the degree in Philosophy was added in 1914. The higher bachelor's degree programme is generally a taught programme of one or two years for graduates. In Medicine and Surgery this corresponds to the clinical phase of training, after which they are accorded the courtesy title "Doctor". The BD and BMus are open only to Oxford graduates who have done well in the BA examinations in divinity and music, respectively. The BPhil/MPhil is a part-taught, part-research degree which is often a stepping stone to the DPhil. Masters' degrees Master of Surgery (MCh) The MCh is the higher degree in surgery, and is awarded on similar conditions to higher doctorates such as the DM, e.g., ten years must have passed since the lower degree in the faculty. In medieval times the distinction between a master and doctor was not significant, and both words signified the higher degree in a faculty. The title "master" is used instead of "doctor", as surgeons in England are traditionally known as "Mr" rather than "Dr". 
Master of Philosophy (MPhil) Master of Letters (MLitt) Master of Science (MSc) (awarded by examination or by research) Due to pressure from employers and overseas applicants to conform with United States practice, which is also that of most other UK universities, the BLitt, BSc, and BPhil (in degrees other than philosophy) were re-titled master's degrees. Magister Juris (MJur) Master of Studies (MSt) Master of Theology (MTh) Master of Business Administration (MBA) Master of Education (MEd) Master of Fine Arts (MFA) Master of Public Policy (MPP) The MJur and MBA are awarded after taught courses, the MJur being the equivalent of the BCL for students from non-common-law backgrounds. The MSt is a one-year hybrid research/taught course which is the equivalent of the taught master's degree in most other UK universities. The MTh is an applied theology course for those intending to enter holy orders. The degree of Master of Education was formerly awarded to students at Westminster College, when that course was validated by the University. Diplomas Historically at the oldest universities in the world (University of Oxford and University of Cambridge), a Diploma was a postgraduate qualification, but it has mostly been replaced by the more common master's degree. However, in some cases the naming remains unchanged for historically significant areas or very specialised curricula (for instance, the Cambridge Diploma in Computer Science retains its archaic name due to its significance in the history of computer science). Whether a Diploma is offered depends on the programme; in many cases at Oxbridge a Diploma is a master's-level qualification and requires a thesis, the name Diploma being reserved for very specialised areas that are explored in depth over the course of study. Examples at Oxford include the Diploma in Geography (J. N. L. Baker, 1921) and the Diploma in Legal Studies (Barney Williams, 2004). Doctorates Doctor of Divinity (DD) Doctor of Civil Law (DCL) Doctor of Medicine (DM) Doctor of Letters (DLitt) Doctor of Science (DSc) Doctor of Music (DMus) Bachelors in the higher faculties other than Medicine can proceed to a doctorate in the same faculty without further examination, on presentation of evidence of an important contribution to their subject, e.g., published work, research, etc. Doctorates in the higher faculties may also be awarded honoris causa, i.e., as honorary degrees. It is traditional for the Chancellor to be made a DCL jure officio (by virtue of his office). Until the 19th century all bishops who had studied at Oxford were made DDs jure officio. Doctor of Philosophy (DPhil) The DPhil is a research degree, modelled on the German and American PhD, that was introduced in 1914; rather atypically, Oxford was the first university in the UK to accept this innovation. Doctor of Clinical Psychology (DClinPsychol) Doctor of Engineering (EngD) The new degrees of DClinPsychol and EngD are professional degrees in the American model. The EngD is the only Oxford degree to use the Cambridge abbreviation format. Order of academic standing Members of the University of Oxford are ranked according to their degree. 
The order is as follows: Doctor of Divinity Doctor of Civil Law Doctor of Medicine if also a Master of Arts Doctor of Letters if also a Master of Arts Doctor of Science if also a Master of Arts Doctor of Music if also a Master of Arts Doctor of Philosophy if also a Master of Arts Doctor of Clinical Psychology if also a Master of Arts Doctor of Engineering if also a Master of Arts Master of Surgery if also a Master of Arts Master of Science if also a Master of Arts Master of Letters if also a Master of Arts Master of Philosophy if also a Master of Arts Master of Studies if also a Master of Arts Master of Theology if also a Master of Arts Master of Education if also a Master of Arts Master of Business Administration if also a Master of Arts Master of Fine Art if also a Master of Arts Master of Public Policy if also a Master of Arts Master of Arts, or Master of Biochemistry or Chemistry or Computer Science or Earth Sciences or Engineering or Mathematics or Mathematics and Computer Science or Mathematics and Philosophy or Physics or Physics and Philosophy with effect from the twenty-first term from matriculation Doctor of Medicine if not also a Master of Arts Doctor of Letters if not also a Master of Arts Doctor of Science if not also a Master of Arts Doctor of Music if not also a Master of Arts Doctor of Philosophy if not also a Master of Arts Doctor of Clinical Psychology if not also a Master of Arts Doctor of Engineering if not also a Master of Arts Master of Surgery if not also a Master of Arts Master of Science if not also a Master of Arts Master of Letters if not also a Master of Arts Master of Philosophy if not also a Master of Arts Master of Studies if not also a Master of Arts Master of Theology if not also a Master of Arts Master of Education if not also a Master of Arts Master of Business Administration if not also a Master of Arts Master of Fine Art if not also a Master of Arts Master of Public Policy if not also a Master of Arts Bachelor of Divinity Bachelor of Civil Law Magister Juris Bachelor of Medicine Bachelor of Surgery Bachelor of Letters Bachelor of Science Bachelor of Music Bachelor of Philosophy Bachelor of Arts, or Master of Biochemistry or Chemistry or Computer Science or Earth Sciences or Engineering or Mathematics or Mathematics and Computer Science or Mathematics and Philosophy or Physics or Physics and Philosophy until the twenty-first term from matriculation Bachelor of Fine Art Bachelor of Theology Bachelor of Education Within each degree the holders are ranked by the date on which they proceeded to their degree. In the case of people who graduated on the same day they are ranked by alphabetical order. If the Degree of Master of Biochemistry, or Chemistry, or Computer Science, or Earth Sciences, or Engineering, or Mathematics, or Mathematics and Computer Science, or Mathematics and Philosophy, or Physics, or Physics and Philosophy, is held together with a higher degree, the holder will rank in precedence equally with a person who holds the same higher degree together with the Degree of Master of Arts. See also Academic degree Bachelor's degree Master's degree Doctorate University of Oxford Academic dress of the University of Oxford References External links University of Oxford Oxford University degrees Degrees
66391444
https://en.wikipedia.org/wiki/Richard%20Benham
Richard Benham
Richard Benham (born August 1965) is Professor of Cyber Security Management at Coventry University. Career Coventry University appointed Benham to the Chair of Cyber Security Management in 2013, where together with the team at Coventry Business School he wrote and accredited The National MBA in Cyber Security. This degree received the personal support of Prime Minister David Cameron and is the only degree to have been launched at the House of Commons with cross-party support. He also holds visiting Chairs at the University of Gloucestershire and Staffordshire University. Since 2016 Benham has been a council member of The Winston Churchill Memorial Trust. In March 2016 Benham wrote the inaugural Institute of Directors cyber white paper ‘Cyber Security – Underpinning the Digital Economy’. In 2017 he founded The Cyber Trust, a charity to help educate disadvantaged groups on cyber and digital safety, chaired by Dame Janet Trotter, and in 2019 he established The National Cyber Awards. Benham currently holds a British Army officer's commission with the rank of Major, advising the Ministry of Defence on cyber-related matters. Publications Richard Benham, Cyber Risk Management, Kogan Page, 2018. Richard Benham, ‘Cyber Security – Underpinning the Digital Economy’, Institute of Directors Policy Paper, March 2016. Richard Benham, ‘Cyber Security: Ensuring Business is Ready for the 21st Century’, Institute of Directors Policy Paper, March 2017. Richard Benham, ‘Ten tips to help beat the hackers and stay safe online’, ITProPortal, February 2017. Richard Benham, ‘The Cyber Ripple Theory’, in Dr A. Radley, Absolute Security: Theory and Principles of Secure Communication (2016). The Cyber Ripple Theory highlights the human aspect of an attack, defining what is human and at what point ‘cyber’ occurs. James Barrington and Richard Benham, ‘Cyberstrike: London’, Canelo Books, 2020. James Barrington and Richard Benham, ‘Cyberstrike: DC’, Canelo Books, 2021. Richard Benham, ‘Black Ops, Red Alert’, PS Editions, 2021. Notes 1965 births Writers about computer security Living people
1988240
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20Kalm%C3%A1r
László Kalmár
László Kalmár (27 March 1905, Edde – 2 August 1976, Mátraháza) was a Hungarian mathematician and Professor at the University of Szeged. Kalmár is considered the founder of mathematical logic and theoretical computer science in Hungary. Biography Kalmár was of Jewish ancestry. His early life mixed promise and tragedy. His father died when he was young, and his mother died when he was 17, the year he entered the University of Budapest, making him essentially an orphan. Kalmár's brilliance manifested itself while he was in Budapest schools. At the University of Budapest, his teachers included Kürschák and Fejér. His fellow students included the future logician Rózsa Péter. Kalmár graduated in 1927. He discovered mathematical logic, his chosen field, while visiting Göttingen in 1929. Upon completing his doctorate at Budapest, he took up a position at the University of Szeged. That university was mostly made up of staff from the former University of Kolozsvár, a major Hungarian university before World War I that found itself after the War in Romania. Kolozsvár was renamed Cluj. The Hungarian university moved to Szeged in 1920, where there had previously been no university. The appointment of Haar and Riesz turned Szeged into a major research center for mathematics. Kalmár began his career as a research assistant to Haar and Riesz. Kalmár was appointed a full professor at Szeged in 1947. He was the inaugural holder of Szeged's chair for the Foundations of Mathematics and Computer Science. He also founded Szeged's Cybernetic Laboratory and the Research Group for Mathematical Logic and Automata Theory. In mathematical logic, Kalmár proved that certain classes of formulas of the first-order predicate calculus were decidable. In 1936, he proved that the predicate calculus could be formulated using a single binary predicate, if the recursive definition of a term was sufficiently rich. (This result is commonly attributed to a 1954 paper of Quine's.) He discovered an alternative form of primitive recursive arithmetic, known as elementary recursive arithmetic, based on primitive functions that differ from the usual kind. He did his utmost to promote computers and computer science in Hungary. He wrote on theoretical computer science, including programming languages, automatic error correction, non-numerical applications of computers, and the connection between computer science and mathematical logic. Kalmár is one of the very few logicians who have raised doubts about Church's thesis that all intuitively mechanistic, algorithmic functions are representable by recursive functions. Kalmár was elected to the Hungarian Academy of Sciences in 1949, and was awarded the Kossuth Prize in 1950 and the Hungarian State Prize in 1975. In 1933 Kalmár married Erzsébet Arvay; they had four children. Elementary functions Kalmár defined what are known as elementary functions: number-theoretic functions (i.e. those based on the natural numbers) built up from the notions of composition and variables, the constants 0 and 1, repeated addition + of the constants, proper subtraction ∸, bounded summation and bounded product (Kleene 1952:526). Elimination of the bounded product from this list yields the subelementary or lower elementary functions. By use of the abstract computational model called a register machine, Schwichtenberg provides a demonstration that "all elementary functions are computable and totally defined" (Schwichtenberg 58).
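A minimal illustrative sketch in Python of these building blocks follows; the helper names and examples are invented for illustration and are not Kalmár's notation. From the constants, addition, proper subtraction, bounded summation and bounded product, familiar functions such as multiplication and exponentiation can be assembled.

```python
# Illustrative sketch of Kalmár's elementary-function building blocks over the
# natural numbers (composition, the constants 0 and 1, addition, proper
# subtraction, bounded summation and bounded product). Helper names are mine.

def monus(a: int, b: int) -> int:
    """Proper subtraction (a - b), truncated at 0."""
    return a - b if a > b else 0

def bounded_sum(f, n: int) -> int:
    """Sum of f(i) for i < n (bounded summation)."""
    return sum(f(i) for i in range(n))

def bounded_product(f, n: int) -> int:
    """Product of f(i) for i < n (bounded product)."""
    result = 1
    for i in range(n):
        result *= f(i)
    return result

# Multiplication and exponentiation built from the primitives above:
def mul(a: int, b: int) -> int:
    return bounded_sum(lambda _i: a, b)       # a added to itself b times

def power(a: int, b: int) -> int:
    return bounded_product(lambda _i: a, b)   # a multiplied by itself b times

# Dropping bounded_product yields the subelementary (lower elementary) class.
print(mul(6, 7), power(2, 10), monus(3, 5))   # 42 1024 0
```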
Notes References Stephen C. Kleene, Introduction to Metamathematics, North-Holland Publishing Company, Amsterdam, 1952 (6th reprint with emendations 1971; 10th printing 1999). Helmut Schwichtenberg, see under "Computability" at http://sakharov.net/foundation.html, or http://www.mathematik.uni-muenchen.de/~schwicht/lectures/logic/ws03/comp.pdf. Exact source of this TBD. Kalmár, L., Zurückführung des Entscheidungsproblems auf den Fall von Formeln mit einer einzigen binären Funktionsvariablen, Comp. Math. Bd. 4 (1936). Quine, W. V., Reduction to a Dyadic Predicate, J. Symbolic Logic 19 (1954), no. 3, 180–182. External links MacTutor The source for most of this entry. 1905 births 1976 deaths University of Szeged faculty Members of the Hungarian Academy of Sciences Hungarian computer scientists Mathematical logicians Hungarian logicians Hungarian Jews Jewish philosophers 20th-century Hungarian mathematicians Austro-Hungarian mathematicians 20th-century Hungarian philosophers
43689873
https://en.wikipedia.org/wiki/Beqom
Beqom
beqom is a global provider of compensation management software, delivered using a cloud computing platform. beqom has its global headquarters in Nyon, Switzerland, with offices throughout North America and EMEA. Components of the software include functionality to manage sales incentives, bonuses, equity, merit pay, long-term incentives and channel partner incentives. Competitors include Xactly, Oracle, and SuccessFactors. History beqom was founded as Excentive International in 2009, by software executives coming from Hyperion, OutlookSoft and SAP. The company gained notability in 2009 as one of the few HR software vendors offering a comprehensive compensation application. That same year, the company began its worldwide expansion with the opening of its North American headquarters. On September 1, 2013, the company changed its name to beqom. The company has completed five funding rounds, starting with a $2.5 million seed round in 2009 and an $8.5 million venture round in 2011. In July 2014, Swisscom Ventures, the investing arm of Swisscom AG, and Renaissance PME, a Swiss pension investing fund managed by Swiss technology investor Vinci Capital, announced that they had completed a financing round with beqom of $10.6 million to help fund its international expansion efforts. In July 2017, beqom raised a further $35 million from Goldman Sachs to reinforce its global presence, increase the direct sales force, and further develop indirect sales channels. Altogether, beqom has raised a total of $59.6 million in funding. In October 2019, beqom CEO Fabio Ronga was named the winner of the 2019 EY Entrepreneur Of The Year award. Products and Services beqom provides a Software as a Service (SaaS) solution, using Microsoft Azure, for enterprise compensation. It addresses performance and compensation aspects such as salary review, bonus, long-term incentives, commissions, benefits, and non-cash rewards. The company estimates it has approximately 100 user firms. Some of the large, multinational companies that have publicly disclosed contracts or deployments of beqom include PepsiCo, Daimler, Syngenta, DPDHL, and Capgemini. In January 2019 beqom was named a "Challenger" in the Gartner Magic Quadrant for Sales Performance Management. Notes External links Cloud applications Software companies of Switzerland Software companies established in 2009 Swiss companies established in 2009
45456991
https://en.wikipedia.org/wiki/Swecha
Swecha
Swecha is a non-profit organization, formerly known as the Free Software Foundation Andhra Pradesh (FSF-AP), which later changed its name to Swecha. The name also refers to a Telugu operating system released by the group in 2005, and the organization is part of the Free Software Movement of India (FSMI). The organization is a social movement working towards educating the masses on the essence of free software and providing knowledge to the common people. Swecha organizes workshops and seminars in the Indian states of Telangana and Andhra Pradesh. Presently Swecha runs active GLUGs (GNU/Linux User Groups) in many engineering colleges, such as International Institute of Information Technology, Hyderabad, Jawaharlal Nehru Technological University, Hyderabad, Chaitanya Bharathi Institute of Technology, St. Martin's Engineering College, Sridevi Women's Engineering College, Mahatma Gandhi Institute of Technology, SCIENT Institute of Technology, CMR Institute of Technology, Hyderabad, Jyothishmathi College of Engineering and Technology, MVGR College of Engineering, K L University and Ace Engineering College. Objectives The main objectives of the organization are as follows: To take forward free software and its ideological implications to all corners of our country, from the developed domains to the underprivileged. To create awareness among computer users in the use of free software. To work towards usage of free software in all streams of sciences and research. To take forward implementation and usage of free software in school education, academics and higher education. To work towards e-literacy and bridging the digital divide based on free software and mobilizing the underprivileged. To work among developers on solutions catering to societal & national requirements. To work towards a policy change favoring free software in all walks of life. Activities Swecha hosted a National Convention for Academics and Research which was attended by researchers and academicians from different parts of the country. Former President of India Dr A.P.J. Abdul Kalam, while inaugurating the conference and declaring it open, asked everyone to embrace free software and the philosophy associated with it, and said that technology should be within the reach of everybody. At a time when technology has become all-pervasive and people are increasingly dependent on it, transparency in software code is the need of the hour. "One needs to know what is going on in your mobile phone or computer," said D. Bhuvan Krishna, co-convener of the Swecha project, whose code is available for anyone and everyone to modify. Free software fills this gap, felt speakers at an event organised to spread the word of free and open-source software (FOSS) and celebrate the launch of the latest web browser from the Mozilla Foundation's stable, Mozilla Firefox 3.5. Swecha has organised a 15-day workshop at Chaitanya Bharathi Institute of Technology (CBIT) for budding software engineers from across the country. The idea is to impress upon students the importance of contributing towards the development of free software, as it would not only allow students to exercise their creative faculties but also help society to free itself from the clutches of proprietary software. 
Swecha, an organisation floated to promote the free software movement in India, organised a one-day workshop on free software in the Department of Computer Science and Systems Engineering, Andhra University College of Engineering; the students who attended the workshop, along with the faculty members, joined to formally launch a GNU/Linux User Group (GLUG). In order to build a mass movement for free software, Swecha organizes Freedom Fest to promote the use of free software. About 1,500 students from 80 colleges across Andhra Pradesh, Tamil Nadu and Chhattisgarh converged on the campus to voice their concerns against proprietary software and share their passion for free software. Swecha hosted a two-day international technical symposium on Free Internet (DFI) and Free Software in Gachibowli, Hyderabad, beginning 24 January 2014. More than 4,700 participants attended, including 20+ delegates from ThoughtWorks and social activists from across the world. Swecha organises summer camps every year in which a large number of students participate. The camps focus on training students on free software technology and the culture of sharing and collaborative development of free software. In 2014 alone, 15-day camps were conducted for more than 2,000 students. It is here that participants collaboratively engage in the conduct of the summer camps. Projects Swecha is a free software project aimed at producing a localised version of the Linux operating system in Telugu and providing global software solutions to local people through the free software development model, working together with the community of developers and users everywhere. The prime objective of Swecha OS is to provide a complete computing solution to a population that speaks and understands only Telugu. The target users of the distro are the entire community that falls prey to the digital divide. The project helps provide a solution to the digital divide and makes the possibility of digital unity a reality. The project aims at bridging the gap between computer technology, which exists predominantly in English, and the Telugu-speaking community of India. The project also aims at providing a framework for the development and maintenance of free software projects taken up by the community. Bala Swecha is a free software project initiated by Swecha for young children. It is a school distro with many useful interactive applications for school-goers. Its stack is filled with educational suites for all standards, right from elementary to tenth. These cover a wide range of applications which help students learn Maths, Physics, Geography, Chemistry and other subjects very easily. Swecha has taken up many activities in training school teachers and computer instructors of several government schools. The aim of the distro is to deliver a free-software-based operating system for the "Sarva Shiksha Abhiyan" project initiated by the government; no other operating system to date offers full freedom together with an educational stack. Swecha plans to localise BalaSwecha for the benefit of Telugu-medium students. E-Swecha is a free software project initiated by Swecha and aimed at developing a free operating system that is built neither by a software firm nor by a few programmers: it is the collaborative work of hundreds of Swecha volunteers and engineering students in and around Hyderabad, to, for and by the engineering students. 
Activism Swecha organised a free software workshop at which Mr. Palagummi Sainath delivered a talk on "The Age of Inequality". He told the gathered engineering students and researchers that half of the country's children suffered from malnourishment while, at the same time, the situation was getting worse for farmers, among whom suicides were high. Despite a high growth rate, malnourishment of children in the country remained at 46 per cent, behind countries of Sub-Saharan Africa where these figures stood between 32 per cent and 35 per cent. Swecha joined the widespread protests taking place across the country after the arrest of two girls over a Facebook comment, which reached Hyderabad. On Sunday, a group, consisting mostly of IT professionals, students and academicians, protested at Indira Park against the controversial Section 66(A) of the Information Technology Act. Swecha was at the forefront of the protests against the inclusion of proprietary software, in the representation to the All India Council for Technical Education (AICTE) against the deal with Microsoft. Swecha organised a seminar on "Employment opportunities in the changing technology landscape" on 23 September at Mahima Gardens; Andy Müller-Maguhn, member of the German hacker association Chaos Computer Club, Matt Simons, director of Social and Economic Justice at ThoughtWorks, and Y. Kiran Chandra, secretary of the Free Software Movement of India, addressed the students and later joined the free software movement started by Richard Stallman. At a seminar organised by Swecha on the free software development model, Mr Neville Roy Singham explained that spying or surveillance can easily be done through hardware as well as software, and that no electronic device can be safe from it; the NSA does it because it is simply very cheap for them, and they are taking in literally every piece of information they can get. More than 3,000 students, mainly from the engineering stream, attended the seminar, which continued till late evening, as many other speakers, such as Renata Avila, a human rights lawyer and internet freedom activist from Guatemala, Dmytri Kleiner, Telemiscommunications specialist, and Zack Exley, former Chief Revenue Officer of the Wikimedia Foundation, also conducted seminars and interacted with students. Internet surveillance and digital snooping on the people is the biggest threat to democracy, according to Richard Stallman. Internet surveillance and spying is dangerous and threatens the functioning of democracy, Dr. Stallman told students at a seminar on "Free Software and Internet Freedom", organised by Swecha on the Acharya Nagarjuna University campus. Swecha has demanded that the Central and State governments bring in policy changes on information technology to give a fillip to hardware manufacturing and the setting up of data centres and software design centres. Mr. Y. Kiran Chandra, chairman of Swecha, quoted data from the mobile telephony market to support his demand: "The annual market for mobile telephones in the country is about Rs. 16,000 crore, yet India is yet to have its own mobile manufacturing unit. The mobile handsets designed in China, South Korea and Finland are simply being relabelled and sold in the country". 
See also Free Software Foundation Free Software Movement Free Software Foundation Tamil Nadu Free Software Movement of Karnataka Public Patent Foundation Software Freedom Law Center Guifi.net References External links Free Software Foundation Intellectual property activism Organisations based in Telangana Organisations based in Andhra Pradesh Organizations established in 2005 Science and technology think tanks Non-profit organisations based in India Free and open-source software organizations Software industry in India Digital rights organizations Non-profit technology Human rights organisations based in India 2005 establishments in Andhra Pradesh
30971915
https://en.wikipedia.org/wiki/Instructure
Instructure
Instructure, Inc. is an educational technology company based in Salt Lake City, Utah. It is the developer and publisher of Canvas, a Web-based learning management system, and MasteryConnect, an assessment management system. The company is owned by private equity firm Thoma Bravo. Canvas is used by some schools to make it easy for students to submit work and for teachers to post work. Students can access and upload media through Canvas's studio feature, retrieve files from their Google Drive, and submit files saved to their computer. Canvas also supports a mobile app that students can download onto their personal devices, making it easier for them to do their work. Teachers also have a student view to better understand what it is like for students to navigate their courses. History Instructure was founded in 2008 by two BYU graduate students, Brian Whitmer and Devlin Daley. Instructure's initial funding came from Mozy founder Josh Coates, who served as Instructure's CEO from 2010 to 2018 and chairman of the board through 2020, as well as from Epic Ventures. In December 2010 the Utah Education Network (UEN), which represents a number of Utah colleges and universities, announced that Instructure would be replacing Blackboard as their preferred LMS supplier. By January 2013, Instructure's platform was in use by more than 300 colleges, universities and K–12 districts, and the company's customer base had increased to 9 million users by the end of 2013. In February 2011 Instructure announced that they were making their flagship product, Canvas, freely available under an AGPL license as open source software, a move which garnered press coverage due to its threat to then market leader Blackboard. As of 2020, while the core Canvas LMS remains open source, according to its GitHub FAQ some Canvas functions and add-ons are proprietary. In June 2013 Instructure secured $30 million in Series D funding, bringing their lifetime funding total to $50M. In February 2015, it raised another $40 million in Series E funding, raising their lifetime funding total to $90M. CEO Josh Coates described it as "a pre-IPO round." On November 13, 2015, the firm began trading as a publicly held company on the New York Stock Exchange. In 2016 Glassdoor named the firm #4 on its Best Places to Work list. In 2018, the Salt Lake Tribune named it its #6 "Top Work Places" for large businesses. In December 2019 Instructure announced that Thoma Bravo would acquire the company for $2 billion. Thoma Bravo completed the acquisition of Instructure in March 2020. In June 2021, Instructure again filed for an Initial Public Offering. Products Canvas Instructure Inc. was created to support the continued development of a learning management system (LMS) originally named Instructure. Once incorporated, the founders changed the name of the software to Canvas. The Utah-based company tested the LMS at several local schools including Utah State University and Brigham Young University before officially launching the system. As of 2020, it is used in approximately 4,000 institutions around the world. In 2011 Canvas launched its iOS app, and in 2013, its Android app, enabling mobile access to the platform. The apps were eventually split into Canvas Student and Canvas Teacher, separating features for students and instructors. In 2016, the firm launched Canvas Parent, their mobile app for parents, for both iOS and Android, allowing parents of K–12 students to stay informed on their children's assignments, grades, and overall schooling. 
In August 2020, 16 states across the United States confirmed a partnership with Instructure in order to adopt its Canvas LMS platform for their educational institutions. The states said that, by adopting Canvas as a statewide solution to the challenges raised by the global COVID-19 pandemic, they hoped to provide stronger support to educators and students. Canvas Catalog In 2014 Instructure launched Canvas Catalog, a course catalog tool designed to help schools market their course offerings to non-traditional learners and for professional development. Canvas Studio In 2016 Instructure launched Canvas Studio (originally branded as Arc), a video learning platform tightly integrated with Canvas LMS designed to deliver asynchronous video content and video quizzing, as well as video editing and archiving functionality. Portfolium In 2019 Instructure acquired Portfolium and integrated their Pathways, Program Assessment and ePortfolio network into the Canvas learning management platform. Portfolium is designed to simplify the assessment of student learning, showcase evidence of learning, and keep students engaged along pathways to prepare them for careers. MasteryConnect In 2019 Instructure also acquired MasteryConnect. MasteryConnect is a K-12 assessment management system that enables data-driven instruction and personalized learning with tools for formative and interim assessments, teacher collaboration, and curriculum planning. Certica In 2020 Instructure acquired Certica Solutions, which provides assessment content and analytics primarily to K-12 districts in the USA. Products added through this acquisition include the Navigate Item Bank, CASE Benchmarks, CASE Item Banks, Videri, Academic Benchmarks, Certify, DataConnect, and prebuilt formative offerings. Impact In 2021 Instructure acquired EesySoft, a Dutch educational technology company, and rebranded the EesySoft product to Impact by Instructure. Impact is designed to help educators and students to more effectively use educational technology products by embedding guides, tutorials, and prompts into the software. Bridge On February 18, 2015, Instructure officially launched Bridge, a cloud-based corporate LMS. This launch followed $40 million raised in Series E funding, led by Insight Venture Partners. EPIC Ventures and Bessemer Venture Partners also participated in the funding. This round raised their total lifetime funding to close to $90 million. At launch, Bridge served six corporate clients, including Oregon State University, which had also been using Instructure's Canvas LMS for their students since fall 2015. Bridge now serves more than 500 clients. On February 15, 2021 it was announced that Bridge was acquired by Learning Technologies Group (LTG). See also History of virtual learning environments Virtual learning environment References 2008 establishments in the United States 2008 establishments in Utah 2015 initial public offerings 2019 mergers and acquisitions American companies established in 2008 Companies based in Salt Lake City Companies formerly listed on the New York Stock Exchange Education companies established in 2008 Educational software companies Educational technology companies of the United States Learning management systems Private equity portfolio companies Software companies based in Utah Software companies established in 2008 Software companies of the United States Virtual learning environments
8446001
https://en.wikipedia.org/wiki/IEEE%20802.1AE
IEEE 802.1AE
IEEE 802.1AE (also known as MACsec) is a network security standard that operates at the medium access control layer and defines connectionless data confidentiality and integrity for media access independent protocols. It is standardized by the IEEE 802.1 working group. Details Key management and the establishment of secure associations are outside the scope of 802.1AE, but are specified by 802.1X-2010. The 802.1AE standard specifies the implementation of MAC Security Entities (SecY) that can be thought of as part of the stations attached to the same LAN, providing secure MAC service to the client. The standard defines the MACsec frame format, which is similar to the Ethernet frame, but includes additional fields: Security Tag, which is an extension of the EtherType Message authentication code (ICV) Secure Connectivity Associations that represent groups of stations connected via unidirectional Secure Channels Security Associations within each secure channel. Each association uses its own key (SAK). More than one association is permitted within the channel for the purpose of key change without traffic interruption (standard requires devices to support at least two) A default cipher suite of GCM-AES-128 (Galois/Counter Mode of Advanced Encryption Standard cipher with 128-bit key) GCM-AES-256 using a 256-bit key was added to the standard 5 years later. Security tag inside each frame in addition to EtherType includes: association number within the channel packet number to provide unique initialization vector for encryption and authentication algorithms as well as protection against replay attack optional LAN-wide secure channel identifier (not required on point-to-point links). (A short illustrative sketch of this tag layout is given at the end of this entry.) The IEEE 802.1AE (MACsec) standard specifies a set of protocols to meet the security requirements for protecting data traversing Ethernet LANs. MACsec allows unauthorised LAN connections to be identified and excluded from communication within the network. In common with IPsec and TLS, MACsec defines a security infrastructure to provide data confidentiality, data integrity and data origin authentication. By assuring that a frame comes from the station that claimed to send it, MACsec can mitigate attacks on Layer 2 protocols. Publishing history: 2006 – Original publication (802.1AE-2006) 2011 – 802.1AEbn amendment adds the option to use 256 bit keys to the standard. (802.1AEbn-2011) 2013 – 802.1AEbw amendment defines GCM-AES-XPN-128 and GCM-AES-XPN-256 cipher suites in order to extend the packet number to 64 bits. (802.1AEbw-2013) 2017 – 802.1AEcg amendment specifies Ethernet Data Encryption devices. (802.1AEcg-2017) 2018 – 802.1AE-2018 See also Kerberos – using tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner Virtual LAN (VLAN) – any broadcast domain that is partitioned and isolated in a computer network at the data link layer IEEE 802.11i-2004 (WPA2) Wi-Fi Protected Access (WPA) Wired Equivalent Privacy (WEP) References External links 802.1AE-2018 MACsec Toolkit - A source code toolkit implementation of IEEE 802.1X-2010 (MACsec control plane) and IEEE802.1AE (MACsec data plane) IEEE 802 Cryptography standards Networking standards Link protocols
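As a rough illustration of the security tag fields listed above, the following Python sketch packs a MACsec-style SecTAG. Only the fields named in this entry (the MACsec EtherType, association number, packet number, optional secure channel identifier) are taken from the text; the TCI bit positions and all helper names are my own assumptions, not normative wording from the standard.

```python
# Illustrative sketch only -- not an authoritative 802.1AE implementation.
# It packs a MACsec-style SecTAG: the MACsec EtherType, a TCI/AN octet,
# a Short Length octet, a 32-bit packet number and an optional 8-byte
# Secure Channel Identifier. Bit positions in the TCI are my assumption.
import struct

MACSEC_ETHERTYPE = 0x88E5  # EtherType assigned to MACsec

def build_sectag(assoc_num: int, packet_num: int,
                 sci: bytes = b"", short_len: int = 0) -> bytes:
    if not 0 <= assoc_num <= 3:
        raise ValueError("association number is only 2 bits")
    tci = 0x0C                 # assumed E and C bits set (payload encrypted, ICV changed)
    if sci:
        tci |= 0x20            # assumed SC bit: an explicit SCI follows the tag
    tci_an = tci | assoc_num   # association number in the low two bits
    tag = struct.pack("!HBBI", MACSEC_ETHERTYPE, tci_an, short_len & 0x3F, packet_num)
    return tag + sci           # SCI is omitted on point-to-point links

# Example: association 1, packet number 42, point-to-point link (no SCI).
# The 16-byte ICV produced by GCM-AES-128 would be appended after the payload.
print(build_sectag(1, 42).hex())
```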
35931597
https://en.wikipedia.org/wiki/University%20of%20Missouri%20College%20of%20Engineering
University of Missouri College of Engineering
The University of Missouri College of Engineering is one of the 19 academic schools and colleges of the University of Missouri, a public land-grant research university in Columbia, Missouri. The College, also known as Mizzou Engineering, has an enrollment of 3,204 students who are enrolled in 10 bachelor’s programs, nine master’s programs and seven doctorate programs. There are six academic departments within the College: Biomedical, Biological and Chemical Engineering; Civil and Environmental Engineering; Electrical Engineering and Computer Science; Industrial and Manufacturing Systems Engineering; Information Technology; and Mechanical and Aerospace Engineering. The college traces its beginning to the first engineering courses taught west of the Mississippi River in 1849. The college was ranked 88th nationally by the U.S. News and World Report in 2016. History 1849-1979: Early courses and department In 1849, the University of Missouri offered the first collegiate engineering course west of the Mississippi River – a civil engineering course focusing on "Surveying, Levelling and Classical Topography," taught by the university's acting president, William Wilson Hudson. Hudson would go on to become the first chair of civil engineering in 1856, and the Board of Curators’ officially would create a School of Civil Engineering in 1859 before losing it in an organizational reshuffling in 1860. The Morrill Land-Grant Acts, the first of which passed in 1862 and accepted by the State of Missouri the following year, provided space for institutions with specialties in agriculture and engineering. By the end of the 1860s, the University of Missouri had departments of civil and military engineering, and in 1871, the School of Engineering was incorporated by the College of Agriculture as a special department before separating into its own institution in 1877 with Thomas J. Lowry as its first dean. The building that eventually would become the current Thomas and Nell Lafferre Hall was constructed in 1893, giving the college its own home. 1980-2019: Addition of IT and Department Mergers Computer science moved from the University of Missouri College of Arts and Science to Engineering in 1995, with the Information Technology program launching in 2005. James Thompson stepped down as dean of the University of Missouri College of Engineering on September 1, 2014, after being in the role for around 20 years. While he had been dean, the college stated it had added new programs in bioengineering, computer science, and IT, and the General Assembly approved the funds to renovate Lafferre Hall. There had also been some controversy in 2012 about merging with the Nuclear Science and Engineering Institute. On August 15, 2015, the university and the College of Engineering announced the hiring of the college's 11th full-time dean, Elizabeth Loboa, who previously served as associate chair and professor of the Joint Department of Biomedical Engineering at University of North Carolina-Chapel Hill and North Carolina State University, and a professor of materials science and engineering at North Carolina State. Loboa began her tenure on October 15 as the first female dean in the history of the college. Mizzou Engineering was one of fewer than 65 engineering colleges in the United States with a female dean as of 2018. In 2017, the Department of Electrical and Computer Engineering and Department of Computer Science were merged to form the Department of Electrical Engineering and Computer Science. 
In 2018, MU became the first public university in the state to offer a degree in biomedical engineering. That same year, Mizzou Engineering combined its focus on bioengineering and chemical engineering fields by merging the Department of Bioengineering and Department of Chemical Engineering to form the Department of Biomedical, Biological and Chemical Engineering, overseen by both College of Engineering and the College of Agriculture, Food and Natural Resources. Academics As of the end of the 2017-18 academic year, the MU College of Engineering had a total enrollment of 3,207 students — 2,724 undergraduates, 190 masters students and 283 doctoral candidates. The average freshman ACT score for College of Engineering students was 29.2. For the 2017-18 academic year, total scholarship money totaled more than $1.3 million. The average starting salary for a Mizzou Engineering alumnus was $61,315 as of 2018. The college's 10 undergraduate degree programs had their ABET accreditation renewed in 2018. More than 50 student organizations and design teams are affiliated with the college, several of which regularly win awards and accolades either from the University of Missouri or their national chapters. Degrees In 2016, Mizzou Engineering offered degrees in 10 undergraduate, eight masters and seven doctoral programs. The undergraduate degree programs were: Bioengineering Biomedical engineering Chemical engineering Civil engineering Computer engineering Computer science Electrical engineering Industrial engineering Information technology Mechanical engineering Graduate degrees are awarded in the following disciplines: Bioengineering Chemical Engineering Civil & Environmental Engineering Computer Science Electrical & Computer Engineering Industrial Manufacturing & Systems Engineering Mechanical & Aerospace Engineering Data Science & Analytics Mizzou Engineering will begin offering a fully online master's degree in both bioengineering and industrial engineering and a bachelor's degree in information technology beginning in 2019, and the college offers online courses in various disciplines. Research In 2016, Mizzou Engineering has raised its number of awarded research grants by 32%, including grants from the National Science Foundation, National Institutes of Health, Department of Energy, Department of Defense and many more. Faculty and alumni The total amount of faculty is 113, and the college has more than 38,000 living alumni, more than 500 of which currently serve as owners, presidents or CEOs of companies in industry. Deans James Thompson (around 1994 to September 1, 2014) Dean Robert Schwartz - interim dean (2014 until November 4, 2015) Elizabeth Loboa - 11th Dean (October 15, 2015 to May 2020) Noah Manring - dean (May 2020 to present) Notable staff Steven Nagel (died 2014) - former NASA astronaut, he joined the University Of Missouri College of Engineering as an instructor in the Mechanical and Aerospace Engineering Department in 2011 William Carson - professor emeritus Alumni Alumni of the college include the following: Mike Brown - Euronet Worldwide chairman, president and CEO Peggy Cherng - Panda Express co-founder, No. 12 on Forbes’ 2016 “America’s Richest Self-Made Women” Steve Edwards - Black & Veatch CEO, 2015 Kansas City Business Journal Power 100 Jim Fitterling - Dow Chemical Company president and COO of the Materials Science Division of DowDuPont David Haffner - Leggett & Platt CEO and chairman (retired) Martin Heinrich - U.S. 
Senator for New Mexico Kelly King - AT&T president of mobility and consumer market for south central U.S. Ray Kowalik - Burns & McDonnell CEO, 2016 Kansas City Business Journal Power 100 Thompson Lin - Applied Optoelectronics Inc. founder, chairman and CEO Michael Melton - TME Enterprises president and CEO Jim O’Neill - The Boeing Co. president of defense space and security development and St. Louis senior executive (retired) Daniel O’Shaughnessy - mission systems engineer position at Johns Hopkins University’s applied physics lab for the NASA MESSENGER program Christine Pierson - NexGen Technology Solutions managing partner Rodger O. Riney - Scottrade Inc. founder, owner, president and CEO, named to Forbes’ 2016 “America’s Best Mid-Size Employers” list William F. Baker - structural engineering partner at Skidmore, Owings & Merrill LLP David D. Casey - co-founder of Garmin International, Inc. (retired) Jeffrey Davis - chairman, president and CEO at Perficient, Inc. William S. Thompson - CEO of Pacific Investment Management (retired) Amit Midha - president, Asia Pacific and Japan Commercial at Dell EMC Mohsen Sohi - CEO, speaker of the Management Board at Freudenberg & Co. Events and student life Engineers' Week The tradition of celebrating St. Patrick as the patron Saint of engineers began at Mizzou Engineering. From its beginnings the tradition spread to the offshoot of the university that eventually became the Missouri University of Science and Technology. Today, it is celebrated on campuses nationwide. The concept began at the University of Missouri with the "discovery" that St. Patrick was an engineer in 1903. As former dean Huber O. Croft wrote in "A Brief History of the College of Engineering – University of Missouri-Columbia": By 1905, the event grew to include a parade and kowtow to a student dressed as St. Patrick, the latter a tradition that continues to this day. Several lasting traditions of Engineers’ Week began by 1906, including the Engineer's Song, St. Patrick's Ball, the knighting ceremony, and the discovery of the "Blarney Stone." Since the early days, Engineers’ Week has grown to include the green tea ceremony, lighting the dome of Jesse Hall green, the tradition of knight candidates being required to carry large, ornate shillelaghs at all times, and more. St. Patrick and the shamrock have become symbols of the MU College of Engineering, and legend has it that anyone who walks across the shamrock painted in the courtyard of Lafferre Hall is destined to one day marry an engineer. Buildings Thomas and Nell Lafferre Hall is the main building of the MU College of Engineering, with F. Robert and Patricia Naka Hall and Engineering Building North providing additional classroom, laboratory and office space. Lafferre Hall has been named for College of Engineering alumnus Thomas Lafferre and his wife, Nell, since 2004. The original buildings that provide the foundation of what is now Lafferre Hall were built in 1892 and 1893, with additions constructed in 1935, 1944, 1958, 1991 and 2009. In 2014, the State of Missouri's Board of Public Buildings — Governor Jay Nixon, Lieutenant Governor Peter Kinder and Attorney General Chris Koster — approved $38.5 million in bonds issued by the Missouri General Assembly for renovations and repairs to Lafferre Hall. Demolition of the 1935 and 1944 sections of the building began in May 2015, and the project was expected to be finished by December 2016. 
The renovation was to "allow for space to accommodate student competition teams, student conference rooms and study spaces on the main floor, alongside expanded laboratory space to better accommodate research. The building's various additions will be connected, and the project will make the entire building accessible according to the guidelines set forth by the Americans with Disabilities Act." The College held a ribbon cutting for the newly renovated section of the building in December 2016. Naka Hall, formerly Engineering Building West, was renamed for College of Engineering alumnus F. Robert Naka and his wife, Patricia, in 2016. Naka is considered the father of stealth technology and was a former Chief Scientist of the United States Air Force. Rankings U.S. News & World Report's Best Engineering Schools: #88 (2017) References External links College of Engineering Missouri Engineering schools and colleges in the United States Missouri University subdivisions in Missouri
32248020
https://en.wikipedia.org/wiki/Blue%20Whale%20Clustered%20file%20system
Blue Whale Clustered file system
Blue Whale Clustered file system (BWFS) is a shared disk file system (also called a clustered file system, shared storage file system or SAN file system) made by Tianjin Zhongke Blue Whale Information Technologies Company in China. Overview BWFS enables simultaneous file access across heterogeneous platforms and high-performance file creation, storage, and sharing. BWFS is installed on hosts that are connected to the same disk array in a storage area network (SAN). Client systems are not required to run the same operating system to access a shared BWFS filesystem. As of January 2010, the operating systems with available client software are Microsoft Windows, Linux, and Mac OS X. BWFS can convert many FibreChannel or iSCSI disk arrays into a storage cluster that supports multiple servers for parallel processing, provides a high-performance and extensible file-sharing service, and sustains multi-machine workflows or applications in a cluster environment. The BWFS file system is realized through direct data access: shared file data is transferred directly to and from the FC or iSCSI disk array over the SAN, bypassing any file server or NAS head, which takes full advantage of the high bandwidth of the SAN environment. BWFS greatly enhances a system's ability to process files concurrently without changing the front-end application environment or the back-end SAN configuration. BWFS supports a redundant Metadata Controller (MDC) structure, providing excellent performance and high availability capabilities which, combined with the SAN infrastructure, bring enterprise-level system reliability and data security to storage. Data access process BWFS supports heterogeneous multi-operating-system platforms, allowing multiple servers to concurrently access the same set of disks and files regardless of the type of their respective file systems. Currently, BWFS supports a variety of enterprise-class Linux platforms as well as Windows 2000, Windows XP and Windows 2003. For the different operating systems, BWFS provides different client programs, which identify and provide access to the BWFS shared file system, ensure a consistent presentation of the file system across operating systems, and handle IO requests properly. When multiple servers concurrently access the same file system, a mechanism is needed to prevent two servers from writing to the same disk location. It must also be ensured that a server will not read inconsistent content from a file while another server is updating it. In BWFS, this mechanism is provided by the Metadata Controller. The MDC is responsible for coordinating server access to the BWFS file system and is located outside the read and write path of file data. The client communicates with the MDC over a separate IP link to obtain file locations and data-block allocation information, and then reads and writes the disks directly at block level over the SAN. Such a design is known technically as an "out-of-band" or "asymmetric" architecture. The data access process can be broken down as follows (a conceptual sketch is given after the steps): The application program issues a write request. The BWFS client sends an operation request to the MDC through the LAN. The MDC processes this request and responds to the client, through the LAN, indicating which disk blocks to use. The BWFS client writes the data directly to the file system at line speed. 
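The sketch below (Python) is a conceptual illustration of this out-of-band pattern only; it is not the real BWFS client protocol or API, and every name, message format and device path in it is hypothetical. It is meant to show the split between the control path (metadata requests to the MDC over IP) and the data path (block-level I/O directly against the shared SAN device).

```python
# Conceptual sketch only -- NOT the actual BWFS protocol or API. All names,
# message formats and paths below are hypothetical. Control traffic goes to
# the metadata controller (MDC) over the LAN; file data is written directly
# to the shared SAN block device.
import json
import socket

BLOCK_SIZE = 4096  # hypothetical allocation unit

def request_extents(mdc_addr, path, length):
    """Ask the MDC (over IP) which disk blocks to use for this write."""
    with socket.create_connection(mdc_addr) as s:
        s.sendall(json.dumps({"op": "alloc", "path": path, "len": length}).encode())
        reply = json.loads(s.recv(65536).decode())
    return reply["extents"]  # e.g. [{"offset": 123 * BLOCK_SIZE, "blocks": 8}, ...]

def write_file(mdc_addr, san_device, path, data):
    """Control path via the MDC, data path directly to the shared block device."""
    extents = request_extents(mdc_addr, path, len(data))
    pos = 0
    with open(san_device, "r+b") as dev:  # e.g. "/dev/sdb", the shared SAN LUN
        for ext in extents:
            chunk = data[pos:pos + ext["blocks"] * BLOCK_SIZE]
            dev.seek(ext["offset"])
            dev.write(chunk)              # block-level write, no NAS head involved
            pos += len(chunk)

# Hypothetical usage:
# write_file(("mdc.example.local", 9000), "/dev/sdb", "/shared/video.mov", b"...")
```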
BWFS is designed around a SAN environment, allowing a large number of servers or workstations connected to an FC SAN or IP SAN (iSCSI) to directly access the same file system. BWFS can use one or more FC links to access disk resources, so that the IO performance of a single server can be extended from more than 100 MB/s to several GB/s simply by adding FC HBA cards. Of course, the overall performance of a system depends not only on the performance of the hosts and network, but also on the performance of the disks constituting the file system. A BWFS file system can therefore be built from LUNs drawn from multiple disk arrays, which amounts to another layer of RAID structured across the arrays and maximizes the performance of the disk arrays. Another performance factor to consider is the location of metadata. A file consists of actual data and metadata. Actual data is the content of a file, while metadata includes file attributes, permissions and so on. When a file is created, modified, or deleted, its metadata must also be modified, which means that processing a file involves accessing both file data and metadata. Usually, large files are read and written sequentially, while reading metadata requires moving the disk head to other locations; for a disk, sequential read and write performance is much higher than random-access performance. If data and metadata are stored on the same disk (as in most file systems), access to large files becomes correspondingly more random, reducing read and write performance. For this reason, the BWFS file system lays out metadata on a different disk or volume, so that sequential file reading and writing is separated from the random access of metadata; they do not interfere with each other, providing as much IO bandwidth as possible. In addition, after the separation of data and metadata, the two can be processed independently on different hosts without occupying bandwidth on the data channel, which improves the concurrency of data and metadata operations and further enhances file system performance. Commercialization A 2006 Gartner publication said: "BWFS, an Internet Protocol (IP) cluster file system (CFS), has moved beyond the research lab and into the commercialization stage, and has now been successfully deployed in various industries including the energy, automotive, military and the media sectors. Its success demonstrates the strengths of China's research institutes in the technology realm, despite their relative lack of commercial experience and investment resources compared to many Western technology providers. Although CFSs are not yet prevalent in the mainstream storage market, for some users who need very high input/output I/O performance — especially leading-edge applications such as oil and gas, biotech and computer-aided design (CAD) — BWFS offers a good price/performance solution. Users should also consider BWFS if looking for a lower-priced CFS. Users that need a more commercialized solution — or that like to have a more “out of box” interface — should consider other vendors such as Panasas, Isilon and Ibrix rather than BWFS." BWFS was developed at the National Research Centers for High Performance Computers of the Chinese Academy of Sciences. In 2007, FalconStor announced a joint venture to sell the software. The joint venture was named Tianjin Zhongke Blue Whale Information Technologies Company, located in Tianjin, China. Venture capital firm VantagePoint Capital also made an investment. 
It was announced that BWFS would be used for video from a satellite intended to cover the 2008 Summer Olympics. See also List of file systems References Further reading Zhenhan Liu, Xiaoxuan Meng, Lu Xu. Lock management in blue whale file system. In Proceedings of the 2nd International Conference on Interaction Sciences: Information Technology, Culture and Human (ICIS 2009) A Storage Slab Allocator for Disk Storage Management in File System[Q],NAS’09,2009 Lu Xu, Hongyuan Ma, Zhenjun Liu, Huan Zhang, Shuo Feng, Xiaoming Han, "Experiences with Hierarchical Storage Management Support in Blue Whale File System," pdcat, pp. 369–374, 2010 International Conference on Parallel and Distributed Computing, Applications and Technologies, 2010 External links Shared disk file systems
30873602
https://en.wikipedia.org/wiki/Realm-Specific%20IP
Realm-Specific IP
RSIP also stands for "Radar System Improvement Program", see E-3 Sentry. Realm-Specific IP was an experimental IETF framework and protocol intended as an alternative to network address translation (NAT) in which the end-to-end integrity of packets is maintained. RSIP lets a host borrow one or more IP addresses (and UDP/TCP ports) from one or more RSIP gateways, which lease (usually public) IP addresses and ports to RSIP hosts located in other (usually private) addressing realms. The RSIP client requests registration with an RSIP gateway. The gateway in turn delivers either a unique IP address or a shared IP address and a unique set of TCP/UDP ports, and associates the RSIP host address with this address. The RSIP host uses this address to send packets to destinations in the other realm. The tunnelled packets between the RSIP host and gateway contain both addresses, and the RSIP gateway strips off the host address header and sends the packet to the destination. RSIP can also be used to relay traffic between several different privately addressed networks by leasing several different addresses to reach different destination networks. RSIP was intended to be useful for NAT traversal as an IETF standard alternative to Universal Plug and Play (UPnP). The protocol remained in the experimental stage and did not see widespread use. See also Interactive Connectivity Establishment (ICE) Middlebox Middlebox Communications (MIDCOM) Simple Traversal of UDP over NATs (STUN) SOCKS Traversal Using Relay NAT (TURN) Universal Plug and Play (UPnP) IETF References - Realm Specific IP: Framework - Realm Specific IP: Protocol Specification - RSIP Support for End-to-end IPsec Internet protocols
46694890
https://en.wikipedia.org/wiki/Information%20Management%20Body%20of%20Knowledge
Information Management Body of Knowledge
The Information Management Body of Knowledge (IMBOK) is a management framework that organizes the concept of Information Management in the full context of business and organizational strategy, management and operations. It is specifically intended to provide researchers and practicing managers with a tool that makes clear the conjunction of the worlds of information technology and the world at large. The IMBOK framework The IMBOK comprises six 'knowledge' areas and four 'process' areas. The knowledge areas identify domains of management expertise and capability that are each distinctly different to the others, as shown in the figure. The process areas identify critical activities that move the value from the left to the right. For example: Projects transform Information technology into information systems by engineering technology components into systems that deliver the required functionality Business change management deploys information systems in business processes so as to improve the performance and capability of those business processes Business operations deliver the business benefits expected by stakeholders Performance management ensures and oversees the delivery of benefits appropriate to an organization's strategic intentions. Origin The IMBOK was a major deliverable out of a research project at the University of the Western Cape in South Africa, funded by the Carnegie Corporation of New York. It has been adopted as a standard Information Management and Information Systems course text in South Africa, Europe, North America and elsewhere. A monograph describing the IMBOK was made available on the World Wide Web in 2004, but it has been withdrawn and republished in an extended form in a book: "Investing in Information". Community The Community web site at IMBOK.ORG has lapsed and the content is now available at IMBOK.INFO. See also Information management Records management External links Supporting website for the IMBOK Community References Information technology Information management Information systems Works about information
1068512
https://en.wikipedia.org/wiki/VINSON
VINSON
VINSON is a family of voice encryption devices used by U.S. and allied military and law enforcement, based on the NSA's classified Suite A SAVILLE encryption algorithm and 16 kbit/s CVSD audio compression. It replaces the Vietnam War-era NESTOR (KY-8/KY-28/KY-38) family. These devices provide tactical secure voice on UHF and VHF line of sight (LOS), UHF SATCOM communication and tactical phone systems. The terminals are unclassified Controlled Cryptographic Items (CCI) when unkeyed; once keyed for secure operation, they are classified at the level of their keying material. VINSON devices include: KY-57 KY-58 KY-68 KY-99a (MINTERM) KY-100 (AIRTERM) KYV-2 FASCINATOR VINSON is embedded into many modern military radios, such as SINCGARS. Many multi-algorithm COMSEC modules are also backwards-compatible with VINSON. See also Advanced Narrowband Digital Voice Terminal (ANDVT) system for low bandwidth secure voice communications that replaced VINSON. National Security Agency encryption devices
38439866
https://en.wikipedia.org/wiki/AFX%20Windows%20Rootkit%202003
AFX Windows Rootkit 2003
AFX Windows Rootkit 2003 is a user-mode rootkit that hides files, processes and registry entries. Installation When the rootkit's installer is executed, it creates the files iexplore.dll and explorer.dll in the system directory. iexplore.dll is injected into explorer.exe, and explorer.dll is injected into all running processes. Payload The injected DLLs hook Windows API functions to hide files, processes and registry entries. References Encyclopedia entry: Trojan:Win32/Delf.M - Learn more about malware - Microsoft Malware Protection Center Rootkits Windows malware
37197870
https://en.wikipedia.org/wiki/Space%20Defense%20Center
Space Defense Center
The Space Defense Center (SDC) was a space operations center of the North American Aerospace Defense Command. It was successively housed at two Colorado locations: Ent Air Force Base, followed by Cheyenne Mountain's Group III Space Defense Center. The 1st Aerospace Control Squadron manned the SDC at both locations, which used the Electronic Systems Division's 496L System for processing and displaying data combined from the U.S. Air Force's Space Track and the Navy's Spasur (NAVSPASUR). The 496L System's only inputs were card readers and paper tape readers, and its only output came from two large line printers; operator consoles were not introduced until the later 427M system. History The initial 496L System was installed at Hanscom Field's National Space Surveillance Control Center, and the second at Ent Air Force Base's Space Defense Center. The Ent SDC was one of several facilities providing data to the Cheyenne Mountain Combat Operations Center when the nuclear bunker achieved full operational capability on July 1, 1966. The SDC's approximately $5 million Delta I computer system at Cheyenne Mountain became operational on October 28, 1966, with about 53 individual computer programs totaling 345,000 instructions. The Space Defense Center mission moved from Ent Air Force Base to a location adjacent to the NORAD command center in Cheyenne Mountain on February 6, 1967. The NORAD Cheyenne Mountain Complex Improvements Program (the Electronic Systems Division's 427M program, contracted in 1972 and operational in 1979) included the Space Computational Center (SCC), intended to replace the Space Defense Center; the Space Defense Operations Center (SPADOC), established in 1979, took over space defense operations in Cheyenne Mountain that October. The Space Computational Center (using the 427M computer system) replaced the Space Defense Center (which used the 496L computer system), and the SDC was closed. References Cold War military computer systems of the United States North American Aerospace Defense Command Cheyenne Mountain Complex
6529589
https://en.wikipedia.org/wiki/Tinker%20%28software%29
Tinker (software)
Tinker, previously stylized as TINKER, is a suite of computer software applications for molecular dynamics simulation. The codes provide a complete and general set of tools for molecular mechanics and molecular dynamics, with some special features for biomolecules. The core of the software is a modular set of callable routines which allow manipulating coordinates and evaluating potential energy and derivatives via straightforward means. Tinker works on Windows, macOS, Linux and Unix. The source code is available free of charge to non-commercial users under a proprietary license. The code is written in portable FORTRAN 77, Fortran 95 or CUDA with common extensions, and some C. Core developers are: (a) the Jay Ponder lab, at the Department of Chemistry, Washington University in St. Louis, St. Louis, Missouri. Laboratory head Ponder is Full Professor of Chemistry, and of Biochemistry & Molecular Biophysics; (b) the Pengyu Ren lab, at the Department of Biomedical Engineering, University of Texas at Austin, Austin, Texas. Laboratory head Ren is Full Professor of Biomedical Engineering; (c) Jean-Philip Piquemal's research team at Laboratoire de Chimie Théorique, Department of Chemistry, Sorbonne University, Paris, France. Research team head Piquemal is Full Professor of Theoretical Chemistry. Features The Tinker package is based on several related codes: (a) the canonical Tinker, version 8, (b) the Tinker9 package as a direct extension of canonical Tinker to GPU systems, (c) the Tinker-HP package for massively parallel MPI applications on hybrid CPU and GPU-based systems, (d) Tinker-FFE for visualization of Tinker calculations via a Java-based graphical interface, and (e) the Tinker-OpenMM package for Tinker's use with GPUs via an interface for the OpenMM software. All of the Tinker codes are available from the TinkerTools organization site on GitHub. Additional information is available from the TinkerTools community web site. Programs are provided to perform many functions including: energy minimization over Cartesian coordinates, torsional angles, or rigid bodies via conjugate gradient, variable metric or a truncated Newton method molecular, stochastic, and rigid body dynamics with periodic boundaries and control of temperature and pressure normal mode vibrational analysis distance geometry including an efficient random pairwise metrization building protein and nucleic acid structures from sequence simulated annealing with various cooling protocols analysis and breakdown of single point potential energies verification of analytical derivatives of standard and user-defined potentials location of a transition state between two minima full energy surface search via a Conformation Scanning method free energy calculations via free energy perturbation or weighted histogram analysis fitting of intermolecular potential parameters to structural and thermodynamic data global optimization via energy surface smoothing, including a Potential Smoothing and Search (PSS) method Awards Tinker-HP received the 2018 Atos-Joseph Fourier Prize in High Performance Computing. See also List of software for Monte Carlo molecular modeling Comparison of software for molecular mechanics modeling Molecular dynamics Molecular geometry Molecular design software Comparison of force field implementations References License External links Science software Molecular dynamics software Monte Carlo molecular modelling software Washington University in St. Louis
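The kind of task performed by the minimization programs listed above can be sketched, in highly reduced form, by a toy steepest-descent minimizer acting on a single Lennard-Jones pair. This is illustrative Python, not Tinker code, and the potential parameters and step size are arbitrary assumptions.

```python
def lj_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy as a function of separation r."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def lj_gradient(r, epsilon=1.0, sigma=1.0):
    """Analytical derivative dE/dr of the Lennard-Jones energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (-12.0 * sr6 ** 2 + 6.0 * sr6) / r

def minimize(r, step=1e-3, tol=1e-8, max_iter=100_000):
    """Steepest-descent minimization of the pair separation."""
    for _ in range(max_iter):
        g = lj_gradient(r)
        if abs(g) < tol:
            break
        r -= step * g
    return r

if __name__ == "__main__":
    r_min = minimize(1.5)
    # The analytical minimum lies at r = 2**(1/6) * sigma, about 1.1225.
    print(f"minimized separation: {r_min:.4f}, energy: {lj_energy(r_min):.4f}")
```

Production codes such as Tinker use far more capable optimizers (conjugate gradient, variable metric, truncated Newton) over full molecular force fields, but the basic loop of evaluating an energy gradient and stepping downhill is the same idea.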
16361606
https://en.wikipedia.org/wiki/Win32-loader
Win32-loader
win32-loader (officially Debian-Installer Loader) is a component of the Debian Linux distribution that runs on Windows and has the ability to load the actual Debian installer either from the network (as in the version available from an official website) or from CD-ROM media (as in the version included in Jessie CD images). win32-loader began as an independent project, for which only the network version was available. Later the code went through a long review and polishing process to become part of the official Debian distribution. Influences win32-loader strongly relies on projects such as NSIS, GRUB 2, loadlin and Debian-Installer to perform its task. Additionally, it has drawn inspiration and ideas from similar projects such as Wubi and Instlux. Features Auto-detects 64-bit (x86-64) support in host CPUs, and automatically selects the x86-64 flavor of Debian whenever supported, completely transparently to the user. Detects a number of settings from the Windows environment (time zone, proxy settings, etc.) and feeds them to the Debian Installer via a "preseeding" mechanism so that the user doesn't have to select them. Translated into 51 languages. The selected language is used for user interaction from the first template onward, and is seamlessly passed on to the Debian Installer via "preseeding". Similar projects Topologilinux: uses coLinux to run on Windows. Instlux, included on openSUSE since the 10.3 release. Wubi UNetbootin See also NSIS UNetbootin References External links Package description in Debian The network version homepage The network version (exe) The network version (README) Free system software Linux installation software Windows-only free software
3172082
https://en.wikipedia.org/wiki/Network%20Access%20Protection
Network Access Protection
Network Access Protection (NAP) is a Microsoft technology for controlling network access of a computer, based on its health. With NAP, system administrators of an organization can define policies for system health requirements. Examples of system health requirements are whether the computer has the most recent operating system updates installed, whether the computer has the latest version of the anti-virus software signature, or whether the computer has a host-based firewall installed and enabled. Computers with a NAP client will have their health status evaluated upon establishing a network connection. NAP can restrict or deny network access to the computers that are not in compliance with the defined health requirements. NAP was deprecated in Windows Server 2012 R2 and removed from Windows Server 2016. Overview Network Access Protection Client Agent makes it possible for clients that support NAP to evaluate software updates for their statement of health. NAP clients are computers that report their system health to a NAP enforcement point. A NAP enforcement point is a computer or device that can evaluate a NAP client’s health and optionally restrict network communications. NAP enforcement points can be IEEE 802.1X-capable switches or VPN servers, DHCP servers, or Health Registration Authorities (HRAs) that run Windows Server 2008 or later. The NAP health policy server is a computer running the Network Policy Server (NPS) service in Windows Server 2008 or later that stores health requirement policies and provides health evaluation for NAP clients. Health requirement policies are configured by administrators. They define criteria that clients must meet before they are allowed undeterred connection; these criteria may include the version of the operating system, a personal firewall, or an up-to-date antivirus program. When a NAP-capable client computer contacts a NAP enforcement point, it submits its current health state. The NAP enforcement point sends the NAP client’s health state to the NAP health policy server for evaluation using the RADIUS protocol. The NAP health policy server can also act as a RADIUS-based authentication server for the NAP client. The NAP health policy server can use a health requirement server to validate the health state of the NAP client or to determine the current version of software or updates that need to be installed on the NAP client. For example, a health requirement server might track the latest version of an antivirus signature file. If the NAP enforcement point is an HRA, it obtains health certificates from a certification authority for NAP clients that it deems to be compliant with the relevant requirements. NAP clients can be placed on a restricted network if they are deemed non-compliant. The restricted network is a logical subset of the intranet and contains resources that allow a noncompliant NAP client to correct its system health. Servers that contain system health components or updates are known as remediation servers. A noncompliant NAP client on the restricted network can access remediation servers and install the necessary components and updates. After remediation is complete, the NAP client can perform a new health evaluation in conjunction with a new request for network access or communication. NAP client support A NAP client ships with Windows Vista, Windows 7, Windows 8 and Windows 8.1 but not with Windows 10. A limited NAP client is also included in Windows XP Service Pack 3. 
It has no MMC snap-in and does not support AuthIP-based IPsec enforcement. As such, it can only be managed via a command-line tool called netsh, and the IPsec enforcement is IKE-based only. Microsoft partners provide NAP clients for other operating systems such as Mac OS X and Linux. See also Access control Network Admission Control Network Access Control Network security Computer security PacketFence References External links Microsoft's Network Access Protection Web page Microsoft's Network Access Protection Web page on Microsoft Technet NAP Blog on Microsoft Technet Microsoft's Network Access Protection Design Guide on Microsoft Technet Microsoft's Network Access Protection Deployment Guide on Microsoft Technet Microsoft's Network Access Protection Troubleshooting Guide on Microsoft Technet Microsoft Windows security technology Windows Server
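The health-evaluation flow described above can be mimicked with a small illustrative Python model. It is not the NAP API; the policy fields, thresholds and names below are invented assumptions used only to show how a statement of health might be compared against a health requirement policy and mapped to full or restricted network access.

```python
from dataclasses import dataclass

@dataclass
class StatementOfHealth:
    """Toy statement of health a client might report (illustrative only)."""
    os_updates_current: bool
    antivirus_signature_version: int
    firewall_enabled: bool

@dataclass
class HealthPolicy:
    """Toy health requirement policy defined by an administrator."""
    min_antivirus_signature_version: int
    require_firewall: bool
    require_os_updates: bool

def evaluate(soh, policy):
    """Return 'full-access' for compliant clients, otherwise 'restricted'.

    In real NAP the enforcement point forwards the statement of health
    to the health policy server over RADIUS and acts on the verdict;
    here both roles are collapsed into one function for illustration.
    """
    compliant = (
        (not policy.require_os_updates or soh.os_updates_current)
        and soh.antivirus_signature_version >= policy.min_antivirus_signature_version
        and (not policy.require_firewall or soh.firewall_enabled)
    )
    return "full-access" if compliant else "restricted"

if __name__ == "__main__":
    policy = HealthPolicy(min_antivirus_signature_version=20240101,
                          require_firewall=True, require_os_updates=True)
    client = StatementOfHealth(os_updates_current=True,
                               antivirus_signature_version=20231201,
                               firewall_enabled=True)
    # Out-of-date antivirus signatures: the client lands on the restricted network.
    print(evaluate(client, policy))
```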
19368110
https://en.wikipedia.org/wiki/Tennessee%20College%20of%20Applied%20Technology%20-%20Shelbyville
Tennessee College of Applied Technology - Shelbyville
The Tennessee College of Applied Technology - Shelbyville is one of 27 colleges of applied technology in the Tennessee Board of Regents System, one of the largest systems of higher education in the nation. This system comprises thirteen community colleges and twenty-seven colleges of applied technology. More than 60 percent of all Tennessee students attending public institutions are enrolled in a Tennessee Board of Regents institution. History This institution was authorized by House Bill 633, passed by the Tennessee General Assembly on March 15, 1963, and approved by the Governor on March 22, 1963. The college was governed by the Tennessee Department of Education until 1983 when control was transferred to the Tennessee Board of Regents by House Bill 697 and Senate Bill 746. Located on a tract of land at 1405 Madison Street (U.S. Highway 41-A) approximately two miles east of downtown Shelbyville, the college serves individuals from a broad geographical area including but not limited to Bedford, Coffee, Franklin, Lincoln, Marshall, Moore, and Rutherford counties. The first of its kind to be constructed, the Tennessee College of Applied Technology - Shelbyville, opened its doors on November 30, 1964, for full-time preparatory programs with forty-one students enrolled in six programs (Air Conditioning/Refrigeration, Auto Mechanics, Drafting, Industrial Electricity, Machine Shop and Welding). The Tennessee Technology Center at Shelbyville became the Tennessee College of Applied Technology - Shelbyville on July 1, 2013 under Senate Bill No. 643 House Bill No. 236*. Approval of Public Chapter No. 473. The colleges have been recognized by the Bill and Melinda Gates Foundation, Harvard's Graduate School of Education, the New York Times, EcoSouth and other leading organizations for job placement and completion rates. The colleges were also credited for completion and placement rates in the New York Times. Office of Tennessee Colleges of Applied Technology The office of the Tennessee Colleges of Applied Technology is in Nashville Tennessee at the Tennessee Board of Regents offices. James King is the Vice Chancellor for the Tennessee Colleges of Applied Technology. Academic programs Each of the Tennessee Colleges of Applied Technology offers programs based on geographic needs of businesses and industry. Therefore, each college can have different academic programs and offerings. The Tennessee College of Applied Technology - Shelbyville offers Certificates and Diplomas in the following programs: In January 1965, evening programs (part-time) were opened. Automotive Technology Administrative Office Technology Collision Repair Drafting and CAD HVAC (Heating, Ventilation, Air Conditioning) Information Technology and Infrastructure Management Industrial Electricity Industrial Maintenance Automation Machine Tool Technology Patient Care Tech/Medical Assistant Practical Nursing Truck Driving Welding The college offers supplemental programs based on business, industry and public demand. These classes include Computer Technology, Leadership, Office Occupations, Industrial (Electricity, Machine Tool, Industrial Maintenance) or can be customized to meet client needs. Beginning in July 2000 the college began delivering professional testing through Prometric. This testing allows for career based testing. 
The college began delivering professional exams through Pearson VUE in 2007 allowing for additional delivery of career based testing expanding its services to allow professional certifications and higher-education exams. During this same year, the college began using Certiport and COMPASS as additional ways to achieve certifications and qualification based testing. Beginning in July 2012 the college began delivering ASE certifications through Prometric. In October 2012, the Collision Repair class began delivering virtual hands on painting, using state of the art 3D virtualization. Beginning in January 2014, the Industrial Maintenance program expanded to a campus in Winchester, Tennessee. The Medical Assistant program also opened at a remote campus on the west side of Shelbyville. In August 2015, the college expanded to Lewisburg, Shelbyville (MTEC Building) and Tullahoma, Tennessee with the Industrial Maintenance Program. The Computer Information Technology program also expanded to the MTEC Building in August. In 2017 construction on a new building in Winchester, Tennessee will begin and will provide classrooms for Information Technology and Infrastructure Management, Machine Tool Technology, Industrial Maintenance, Welding, CNA and Nursing. Beginning August 1, 2017, the CIT program became the Information Technology and Infrastructure Management Program. Student organizations TCAT Shelbyville provides memberships and organizations for students. SkillsUSA National Technical Honor Society Student Government Association TCAT Shelbyville Technical Blog, Web 2.0 and Cloud Services The Tennessee College of Applied Technology - Shelbyville began a technical blog in September 2007 to supplement programs and focus on new technologies. TCAT Shelbyville Technical Blog's readership grew to over 3.6 million by mid 2019 and has a global following. In 2010 the information technology department implemented the Tennessee College of Applied Technology - Shelbyville Learning Management System. This LMS Cloud array is used to supplement classes with Moodle Learning Management System Servers, Nida Servers, streaming video, online classes, Microsoft SharePoint Services, medical education, file sharing and collaboration. Currently TCAT Shelbyville is the only institution with a comprehensive online learning center. Beginning August 2011, TCAT Shelbyville became the first institution to offer online through their on campus LMS cloud servers. Beginning in 2012 the Industrial Maintenance department implemented a web-based SCADA curriculum. This curriculum uses the physical hardware in the cloud combining and integrating the existing curriculum of Programmable Logic Controllers (PLCs), Robotics, touchscreens along with industrial high speed cameras and other hardware on campus. The Program became one of the first classrooms in the TBR system allowing the integration of cloud based SCADA/PLC systems with an on ground industrial training environment. In May 2012 the CIT program moved live hardware into the cloud for live hands on. This move became one of the first higher education live hardware projects in the cloud presenting CIT students IaaS (infrastructure as a service) cloud computing to practice configuring servers, network devices and other advanced hardware from anywhere in the world. In September 2013 TCAT Shelbyville expanded their cloud services to include file sharing for instructors and students. 
Professional memberships Air Conditioning Contractors of America American Design and Drafting Association American Digital Design Association American Technical Educators Association American Welding Society Automotive Service Excellence CompTIA HVAC Excellence Microsoft National League for Nursing National Association of Publicly Funded Truck Driving Schools National Center for Women of Information Technology National Association of Student Financial Aid Administrators National League of Nursing Precision Metalforming Association Professional Truck Driving Institute (PTDI) Shelbyville/Bedford County Chamber of Commerce SkillsUSA Southern Association of Student Financial Aid Administrators Tennessee Business Education Association Tennessee State Board of Dentistry Tennessee State Board of Nursing Tennessee State Board of Vocational Education Building expansions In 1981 the school was expanded to give more space for existing programs. In July 1994, the name was changed by the Tennessee Legislature to "Tennessee Technology Center at Shelbyville". The name was again changed in July 2013 by the Tennessee Legislature to "Tennessee College of Applied Technology - Shelbyville". Another expansion in 1996 with the addition of approximately and renovation to the existing building. The expansion brought the total square footage of the college to approximately . Also included in the expansion was money for the upgrade of equipment in all program and classroom areas. Renovations in 2008–2009 included monies to update classrooms with state-of-the-art equipment and to renovate the lobby and all hallways. Plans are to expand the school to Winchester, TN in a new building, expand the Information Technology and Infrastructure Management program to Spot Lowe in Lewisburg, TN and to Lincoln County schools in August 2019. In 2020, the college will start an Airframe and Powerplant program in Winchester. Remote campuses were added in Winchester, Tullahoma, Fayetteville, Lewisburg and Shelbyville, Tennessee. Accreditation The Tennessee College of Applied Technology - Shelbyville is accredited by the Council on Occupational Education (COE). The Council on Occupational Education is a national accrediting agency which was originally established in 1971 as a regional agency under the Southern Association of Colleges and Schools. Program awards and recognition See also List of colleges and universities in Tennessee References External links Tennessee College of Applied Technology - Shelbyville TBR Tennessee Colleges of Applied Technology Profile Tennessee Board of Regents Online Programs for TCATS Tennessee College of Applied Technology - Shelbyville Blog Education in Tennessee Educational institutions established in 1964 Buildings and structures in Bedford County, Tennessee Education in Bedford County, Tennessee 1964 establishments in Tennessee Shelbyville, Tennessee Public universities and colleges in Tennessee
19750688
https://en.wikipedia.org/wiki/Diane%20Souvaine
Diane Souvaine
Diane L. Souvaine (born 1954) is a professor of computer science and adjunct professor of mathematics at Tufts University. Contributions Souvaine's research is in computational geometry and its applications, including robust non-parametric statistics and molecular modeling. She has also worked to encourage women and minorities in mathematics and the sciences and to advocate gender neutrality in science teaching. Education and career After undergraduate and masters studies at Radcliffe College of Harvard University and at Dartmouth College, Souvaine earned her Ph.D. in 1986 from Princeton University under the supervision of David P. Dobkin. She held a faculty position at Rutgers University from 1986 to 1998, and from 1992 to 1994 served first as acting associate director and then as acting director of DIMACS. From 1994 to 1995 she took a visiting position in mathematics at the Institute for Advanced Study in Princeton, New Jersey, and in 1998 she took a permanent position at Tufts University. Leadership and administration At Tufts, Souvaine was department chair from 2002 to 2005 and (after a sabbatical at Harvard and the Massachusetts Institute of Technology) was reappointed as chair in 2006. She was Vice Provost for Research from 2012 to 2016. She joined National Science Board, a 24-member body that governs the National Science Foundation and advises the United States government about science policy, in 2008, and is chair of the board for 2018–2020. She also served for several years on the board of advisors for the Computer Science Department at the University of Vermont as well as for the Computer Science Department at Lehigh University. Recognition In 2008 Souvaine won Tufts' Lillian and Joseph Leibner Award for Excellence in Teaching and Advising of Students. In 2011 she was listed as a fellow of the Association for Computing Machinery for her research in computational geometry and her service to the computing community. She became a fellow of the American Association for the Advancement of Science in 2016. The Association for Women in Mathematics has included her in the 2020 class of AWM Fellows for "sustained advocacy, support and mentorship of women and students underrepresented in STEM fields in mathematics and theoretical computer science at multiple scales, from impacting individual mentees and advisees, to creating deep and broad institutional cultural change". References 1954 births Living people Radcliffe College alumni Dartmouth College alumni Princeton University alumni Rutgers University faculty Tufts University faculty Researchers in geometric algorithms American women mathematicians American women computer scientists Fellows of the American Association for the Advancement of Science Fellows of the Association for Computing Machinery Fellows of the Association for Women in Mathematics American computer scientists 20th-century American mathematicians 21st-century American mathematicians 20th-century women mathematicians 21st-century women mathematicians 20th-century American women 21st-century American women
38545877
https://en.wikipedia.org/wiki/RetroArch
RetroArch
RetroArch (pronounced ) is a free and open-source, cross-platform frontend for emulators, game engines, video games, media players and other applications. It is the reference implementation of the libretro API, designed to be fast, lightweight, portable and without dependencies. It is licensed under the GNU GPLv3. RetroArch runs programs converted into dynamic libraries called libretro cores, using several user interfaces such as command-line interface, a few graphical user interfaces (GUI) optimized for gamepads (the most famous one being called XMB, a clone of Sony's XMB), several input, audio and video drivers, plus other sophisticated features like dynamic rate control, audio filters, multi-pass shaders, netplay, gameplay rewinding, cheats, etc. RetroArch has been ported to many platforms. It can run on several PC operating systems (Windows, macOS, Linux), home consoles (PlayStation 3, Xbox 360, Wii U, etc.), handheld consoles (PlayStation Vita, Nintendo 3DS, etc.), on smartphones (Android, iOS, etc.), single-board computers (Raspberry Pi, ODROID, etc.) and even on web browsers by using the Emscripten compiler. History Formerly known as SSNES, initially based on pseudonymous programmer Near's libretro predecessor libsnes, it began its development in 2010 with Hans-Kristian "themaister" Arntzen committing the first change on GitHub. It was intended as a replacement to bsnes's Qt-based interface but it grew to support more emulation "cores". On April 21, 2012, SSNES was officially renamed to RetroArch to reflect this change in direction. RetroArch's version 1.0.0.0 was released on January 11, 2014, and at the time was available on seven distinct platforms. On February 16, 2016, RetroArch became one of the first ever applications to implement support for the Vulkan graphics API, having done so on the same day of the API's official release day. On November 27, 2016, the Libretro Team announced that, alongside Lakka (LibreELEC-based RetroArch operating system), RetroArch would be on the Patreon crowdfunding platform to allow providing bounties for developers who fix specific software bugs and to cover the costs for matchmaking servers. In December 2016, GoGames – a company contracted by video game developer and publisher Sega – approached the RetroArch developers with the intention of using their software in their SEGA Forever project but ultimately the cooperation did not come to fruition due to licensing disagreements. In April 2018, an input lag compensation feature called "Run-Ahead" was added. The Libretro Team planned to release RetroArch onto Steam as a free download, integrating Steamworks features into the platform in July 2019. It will be the first major dedicated emulation title to be released on the platform. In August 2020, someone impersonating a trusted member of the team got access to the buildbot server and the GitHub account for the libretro organization, causing vandalism and server wipes. In November 2020, RetroArch in conjunction with a PCSX2 libretro core allowed the Xbox Series X and Series S to emulate the PlayStation 2, something that Sony's own PlayStation 5 could not do at the time. On September 14, 2021, RetroArch was released on Steam. 
Features Its major features include: Advanced GPU shader support - A multi-pass post-processing shader pipeline to allow efficient usage of image scaling algorithms, emulation of complex CRT, NTSC video artifacts and other effects; Dynamic Rate Control to synchronize video and audio while smoothing out timing imperfections; FFmpeg recording - Built-in support for lossless video recording using FFmpeg's libavcodec; Gamepad abstraction layer called Retropad; Gamepad auto-configuration - Zero-input needed from the user after plugging gamepads in; Peer-to-peer netplay that uses a rollback technique similar to GGPO; Audio DSP plugins like an equalizer, reverb and other effects; Advanced savestate features - Automatic savestate loading, disabling SRAM overwriting, etc.; Frame-by-frame gameplay rewinding; Button overlays for touchscreen devices like smartphones; Thumbnails of game box art; Low input and audio lag options; Automatically build categorized playlists by scanning directories for games/ROMs; Multiple interfaces including: CLI, XMB (optimized for gamepads), GLUI/MaterialUI (optimized for touch devices), RGUI and Ozone (available everywhere); Game ROM scanner - Automatically constructs playlists by comparing the hashsums of a directory's files against databases of hashsums of known good game copies; Libretro database of cores, games, cheats, etc.; OpenGL and Vulkan API support; Run-Ahead - Hide the input lag of emulated systems by using both savestates and fast-forwarding; Achievement tracking - Integration with the RetroAchievements service to unlock trophies and badges; AI Service - Uses machine translation external services to translate game text on screen. Supported systems RetroArch can run any libretro core. While RetroArch is available for many platforms, the availability of a specific core varies per platform. Below is a non-exhaustive table of which systems are available to RetroArch and what project the core is based on: Below is a non-exhaustive list of things that do not fit in the list above, such as individual games, libraries, or programming languages. Reception RetroArch has been praised for the number of systems and games it can play under a single interface. It has been criticized for how difficult it is to configure, due to the extensive number of options available to the user, and at the same time has been praised for the more advanced features it possesses. On Android, it has been praised for the fact that overlays can be customized, for the expandability of the libretro cores it supports, for its compatibility with several USB and Bluetooth controller peripherals, in addition to the app being free and having no ads. Tyler Loch, writing for Ars Technica, said that RetroArch's 'Run-Ahead' feature is "arguably the biggest improvement to the experience the retro gaming community has yet seen". See also List of free and open-source software packages List of video game emulators References Android emulation software Arcade video game emulators Atari 2600 Doom (franchise) DOS emulators Free and open-source Android software Free emulation software Game Boy Advance emulators Game Boy emulators MSX Nintendo DS emulators Nintendo Entertainment System emulators PlayStation emulators Sega Genesis emulators Sega Master System emulators Sega Saturn Super Nintendo Entertainment System emulators TurboGrafx-16 emulators
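As an illustration of the command-line interface listed among the features above, the following Python sketch launches RetroArch with a libretro core. It assumes a retroarch executable is on the PATH and that the -L/--libretro option selects the core, which is how the CLI is commonly invoked; the core and content paths are placeholders.

```python
import shutil
import subprocess

def launch_retroarch(core_path, content_path, fullscreen=False):
    """Launch RetroArch with a given libretro core and content file.

    Assumes a `retroarch` binary on the PATH; paths are placeholders.
    """
    if shutil.which("retroarch") is None:
        raise FileNotFoundError("retroarch not found on PATH")
    cmd = ["retroarch", "-L", core_path, content_path]
    if fullscreen:
        cmd.append("--fullscreen")
    return subprocess.call(cmd)

if __name__ == "__main__":
    # Hypothetical paths; substitute a real core and content you own.
    launch_retroarch("/usr/lib/libretro/snes9x_libretro.so",
                     "/home/user/roms/game.sfc")
```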
20111042
https://en.wikipedia.org/wiki/Edos
Edos
Edos is a discontinued operating system based upon IBM's original mainframe DOS (not to be confused with the unrelated and better known MS-DOS for the IBM PC). The name stood for extended (or enhanced) disk operating system. In 1970, IBM announced the IBM/370 product line along with new peripherals, software products, and operating systems, including DOS/VS that supplanted DOS. Although IBM was rightly focused on their new products, the computing world was dominated by the IBM/360 line, which left a lot of users nervous about their investment. Although there were a couple of projects emulating the IBM/370 on the IBM/360 (e.g., CFS, Inc.), a couple of companies took a different approach, extending the then-current (and limited) DOS. The Computer Software Company (TCSC) took the latter approach. Starting in 1972, they developed Edos, Extended Disk Operating System. They extended the number of fixed program space partitions from 3 to 6, added support for new hardware, and included features that IBM had offered separately. The first version of Edos was released in 1972, in response to the announcement by IBM that DOS Release 26 was the last DOS release to be supported on the System 360, and future DOS Releases would support System 370 machines only. They also made available other third party enhancements such as a spooler and DOCS, from CFS, Inc. Edos/VS and Edos/VSE TCSC enhanced EDOS to become EDOS/VS, which was announced in 1977 and delivered it to beta test sites in 1978. In May 1977, TCSC announced it would release Edos/VS in response to IBM's release of DOS/VS Release 34 and Advanced Functions-DOS/VS. Edos/VS was based on IBM's DOS/VS Release 34, and provided equivalent functionality to IBM's Advanced Functions-DOS/VS product. Unlike IBM's offerings, Edos/VS would run on System 360 machines and System 370 machines lacking virtual storage hardware (non-VS machines), whereas IBM's offerings only supported the latest System 370 models with VS hardware included. TCSC identified the parts of IBM's DOS/VS Release 34 operating system which relied upon System/370-only machine instructions and rewrote them to use instructions supported by the System/360. TCSC was legally able to reuse IBM's DOS/VS Release 34 code, since IBM had (intentionally) published the code without a copyright notice, which made it public domain under US copyright law at the time. In 1981, NCSC announced plans to release an Edos/VSE 2.0, based on IBM DOS/VSE Release 35, suitable for IBM 4300 machines. TCSC corporate history TCSC was founded by Jerry Enfield and Tom Steel, responsible for development and marketing, respectively. Company headquarters were in Richmond, Virginia. TCSC expanded into Canada, Australia, and Europe. In 1980, the company was acquired by Nixdorf and became NCSC. Other products of TCSC included the Extended Console (Econ) system, which enabled display of the system console using a CRT terminal such as an IBM 3270. Econ was available for IBM's DOS and DOS/VS and TCSC's Edos and Edos/VS operating systems. TCSC licensed DATACOM/DB from Applied Data Research (ADR) to run under its EDos and EDos/VS operating systems. When in 1980 Nixdorf bought TCSC, Nixdorf sought to continue the licensing arrangement; ADR and NCSC went to court in a dispute over whether the licensing arrangement was terminated by the acquisition. ADR and Nixdorf settled out of court in 1981, with an agreement that Nixdorf could continue to resell ADR's products. 
Add-on products for Edos In 1973, TCSC released a remote job entry (RJE) option for Edos. In 1975, TCSC released a tape management system for Edos known as TMS. In 1983, NCSC announced a Unix compatibility subsystem for IBM mainframes running IBM's DOS/VS(E) and Nixdorf's Edos/VS and Edos/VSE operating systems, known as Programmer Work Station/VSE-Advanced Functions, or PWS/VSE-AF for short. PWS/VSE-AF was based on the Coherent Unix clone developed by Mark Williams Company. References Disk operating systems IBM mainframe operating systems 1972 software
37085
https://en.wikipedia.org/wiki/Software%20bug
Software bug
A software bug is an error, flaw or fault in computer software that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. The process of finding and correcting bugs is termed "debugging" and often uses formal techniques or tools to pinpoint bugs. Since the 1950s some computer systems have been designed to deter, detect or auto-correct various computer bugs during operations. Most bugs arise from mistakes and errors made in either a program's design or its source code, or in components and operating systems used by such programs. A program with many, or serious, bugs is said to be buggy. Bugs can trigger errors that may have ripple effects. Bugs may have subtle effects, or cause a program to crash, or freeze the computer. Other bugs qualify as security bugs and might, for example, enable a malicious user to bypass access controls in order to obtain unauthorized privileges. Some software bugs have been linked to disasters. Bugs in code that controlled the Therac-25 radiation therapy machine were directly responsible for patient deaths in the 1980s. In 1996, the European Space Agency's US$1 billion prototype Ariane 5 rocket was destroyed less than a minute after launch due to a bug in the on-board guidance computer program. In 1994, an RAF Chinook helicopter crashed, killing 29; this was initially blamed on pilot error, but was later thought to have been caused by a software bug in the engine-control computer. Buggy software caused the early 21st century British Post Office scandal, the most widespread miscarriage of justice in British legal history. In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology concluded that "software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product". History The Middle English word bugge is the basis for the terms "bugbear" and "bugaboo" as terms used for a monster. The term "bug" to describe defects has been a part of engineering jargon since the 1870s and predates electronics and computers; it may have originally been used in hardware engineering to describe mechanical malfunctions. For instance, Thomas Edison wrote in a letter to an associate in 1878: Baffle Ball, the first mechanical pinball game, was advertised as being "free of bugs" in 1931. Problems with military gear during World War II were referred to as bugs (or glitches). In a book published in 1942, Louise Dickinson Rich, speaking of a powered ice cutting machine, said, "Ice sawing was suspended until the creator could be brought in to take the bugs out of his darling." Isaac Asimov used the term "bug" to relate to issues with a robot in his short story "Catch That Rabbit", published in 1944. The term "bug" was used in an account by computer pioneer Grace Hopper, who publicized the cause of a malfunction in an early electromechanical computer. A typical version of the story is: Hopper was not present when the bug was found, but it became one of her favorite stories. The date in the log book was September 9, 1947. The operators who found it, including William "Bill" Burke, later of the Naval Weapons Laboratory, Dahlgren, Virginia, were familiar with the engineering term and amusedly kept the insect with the notation "First actual case of bug being found." This log book, complete with attached moth, is part of the collection of the Smithsonian National Museum of American History. 
The related term "debug" also appears to predate its usage in computing: the Oxford English Dictionarys etymology of the word contains an attestation from 1945, in the context of aircraft engines. The concept that software might contain errors dates back to Ada Lovelace's 1843 notes on the analytical engine, in which she speaks of the possibility of program "cards" for Charles Babbage's analytical engine being erroneous: "Bugs in the System" report The Open Technology Institute, run by the group, New America, released a report "Bugs in the System" in August 2016 stating that U.S. policymakers should make reforms to help researchers identify and address software bugs. The report "highlights the need for reform in the field of software vulnerability discovery and disclosure." One of the report's authors said that Congress has not done enough to address cyber software vulnerability, even though Congress has passed a number of bills to combat the larger issue of cyber security. Government researchers, companies, and cyber security experts are the people who typically discover software flaws. The report calls for reforming computer crime and copyright laws. Terminology While the use of the term "bug" to describe software errors is common, many have suggested that it should be abandoned. One argument is that the word "bug" is divorced from a sense that a human being caused the problem, and instead implies that the defect arose on its own, leading to a push to abandon the term "bug" in favor of terms such as "defect", with limited success. Since the 1970s Gary Kildall somewhat humorously suggested to use the term "blunder". In software engineering, mistake metamorphism (from Greek meta = "change", morph = "form") refers to the evolution of a defect in the final stage of software deployment. Transformation of a "mistake" committed by an analyst in the early stages of the software development lifecycle, which leads to a "defect" in the final stage of the cycle has been called 'mistake metamorphism'. Different stages of a "mistake" in the entire cycle may be described as "mistakes", "anomalies", "faults", "failures", "errors", "exceptions", "crashes", "glitches", "bugs", "defects", "incidents", or "side effects". Prevention The software industry has put much effort into reducing bug counts. These include: Typographical errors Bugs usually appear when the programmer makes a logic error. Various innovations in programming style and defensive programming are designed to make these bugs less likely, or easier to spot. Some typos, especially of symbols or logical/mathematical operators, allow the program to operate incorrectly, while others such as a missing symbol or misspelled name may prevent the program from operating. Compiled languages can reveal some typos when the source code is compiled. Development methodologies Several schemes assist managing programmer activity so that fewer bugs are produced. Software engineering (which addresses software design issues as well) applies many techniques to prevent defects. For example, formal program specifications state the exact behavior of programs so that design bugs may be eliminated. Unfortunately, formal specifications are impractical for anything but the shortest programs, because of problems of combinatorial explosion and indeterminacy. Unit testing involves writing a test for every function (unit) that a program is to perform. 
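For instance, a minimal unit test in Python might look like the following sketch; the alphabetize function is an invented example rather than code from any particular project.

```python
import unittest

def alphabetize(words):
    """Return the words sorted alphabetically, case-insensitively."""
    return sorted(words, key=str.lower)

class TestAlphabetize(unittest.TestCase):
    def test_sorts_all_words(self):
        # A test like this would catch an off-by-one bug that drops
        # the last word from the result.
        self.assertEqual(alphabetize(["pear", "Apple", "fig"]),
                         ["Apple", "fig", "pear"])

    def test_empty_list(self):
        self.assertEqual(alphabetize([]), [])

if __name__ == "__main__":
    unittest.main()
```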
In test-driven development unit tests are written before the code and the code is not considered complete until all tests complete successfully. Agile software development involves frequent software releases with relatively small changes. Defects are revealed by user feedback. Open source development allows anyone to examine source code. A school of thought popularized by Eric S. Raymond as Linus's law says that popular open-source software has more chance of having few or no bugs than other software, because "given enough eyeballs, all bugs are shallow". This assertion has been disputed, however: computer security specialist Elias Levy wrote that "it is easy to hide vulnerabilities in complex, little understood and undocumented source code," because, "even if people are reviewing the code, that doesn't mean they're qualified to do so." An example of an open-source software bug was the 2008 OpenSSL vulnerability in Debian. Programming language support Programming languages include features to help prevent bugs, such as static type systems, restricted namespaces and modular programming. For example, when a programmer writes (pseudocode) LET REAL_VALUE PI = "THREE AND A BIT", although this may be syntactically correct, the code fails a type check. Compiled languages catch this without having to run the program. Interpreted languages catch such errors at runtime. Some languages deliberately exclude features that easily lead to bugs, at the expense of slower performance: the general principle being that, it is almost always better to write simpler, slower code than inscrutable code that runs slightly faster, especially considering that maintenance cost is substantial. For example, the Java programming language does not support pointer arithmetic; implementations of some languages such as Pascal and scripting languages often have runtime bounds checking of arrays, at least in a debugging build. Code analysis Tools for code analysis help developers by inspecting the program text beyond the compiler's capabilities to spot potential problems. Although in general the problem of finding all programming errors given a specification is not solvable (see halting problem), these tools exploit the fact that human programmers tend to make certain kinds of simple mistakes often when writing software. Instrumentation Tools to monitor the performance of the software as it is running, either specifically to find problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code explicitly (perhaps as simple as a statement saying PRINT "I AM HERE"), or provided as tools. It is often a surprise to find where most of the time is taken by a piece of code, and this removal of assumptions might cause the code to be rewritten. Testing Software testers are people whose primary task is to find bugs, or write code to support testing. On some projects, more resources may be spent on testing than in developing the program. Measurements during testing can provide an estimate of the number of likely bugs remaining; this becomes more reliable the longer a product is tested and developed. Debugging Finding and fixing bugs, or debugging, is a major part of computer programming. Maurice Wilkes, an early computing pioneer, described his realization in the late 1940s that much of the rest of his life would be spent finding mistakes in his own programs. Usually, the most difficult part of debugging is finding the bug. Once it is found, correcting it is usually relatively easy. 
Programs known as debuggers help programmers locate bugs by executing code line by line, watching variable values, and other features to observe program behavior. Without a debugger, code may be added so that messages or values may be written to a console or to a window or log file to trace program execution or show values. However, even with the aid of a debugger, locating bugs is something of an art. It is not uncommon for a bug in one section of a program to cause failures in a completely different section, thus making it especially difficult to track (for example, an error in a graphics rendering routine causing a file I/O routine to fail), in an apparently unrelated part of the system. Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the programmer. Such logic errors require a section of the program to be overhauled or rewritten. As a part of code review, stepping through the code and imagining or transcribing the execution process may often find errors without ever reproducing the bug as such. More typically, the first step in locating a bug is to reproduce it reliably. Once the bug is reproducible, the programmer may use a debugger or other tool while reproducing the error to find the point at which the program went astray. Some bugs are revealed by inputs that may be difficult for the programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug (specifically, a race condition) that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may stop occurring whenever the setup is augmented to help find the bug, such as running the program with a debugger; these are called heisenbugs (humorously named after the Heisenberg uncertainty principle). Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, interest in automated aids to debugging rose, such as static code analysis by abstract interpretation. Some classes of bugs have nothing to do with the code. Faulty documentation or hardware may lead to problems in system use, even though the code matches the documentation. In some cases, changes to the code eliminate the problem even though the code then no longer matches the documentation. Embedded systems frequently work around hardware bugs, since to make a new version of a ROM is much cheaper than remanufacturing the hardware, especially if they are commodity items. Benchmark of bugs To facilitate reproducible research on testing and debugging, researchers use curated benchmarks of bugs: the Siemens benchmark ManyBugs is a benchmark of 185 C bugs in nine open-source programs. Defects4J is a benchmark of 341 Java bugs from 5 open-source projects. It contains the corresponding patches, which cover a variety of patch type. BEARS is a benchmark of continuous integration build failures focusing on test failures. It has been created by monitoring builds from open-source projects on Travis CI. Bug management Bug management includes the process of documenting, categorizing, assigning, reproducing, correcting and releasing the corrected code. Proposed changes to software – bugs as well as enhancement requests and even entire releases – are commonly tracked and managed using bug tracking systems or issue tracking systems. 
The items added may be called defects, tickets, issues, or, following the agile development paradigm, stories and epics. Categories may be objective, subjective or a combination, such as version number, area of the software, severity and priority, as well as what type of issue it is, such as a feature request or a bug. A bug triage reviews bugs and decides whether and when to fix them. The decision is based on the bug's priority, and factors such as project schedules. The triage is not meant to investigate the cause of bugs, but rather the cost of fixing them. The triage happens regularly, and goes through bugs opened or reopened since the previous meeting. The attendees of the triage process typically are the project manager, development manager, test manager, build manager, and technical experts. Severity Severity is the intensity of the impact the bug has on system operation. This impact may be data loss, financial, loss of goodwill and wasted effort. Severity levels are not standardized. Impacts differ across industry. A crash in a video game has a totally different impact than a crash in a web browser, or real time monitoring system. For example, bug severity levels might be "crash or hang", "no workaround" (meaning there is no way the customer can accomplish a given task), "has workaround" (meaning the user can still accomplish the task), "visual defect" (for example, a missing image or displaced button or form element), or "documentation error". Some software publishers use more qualified severities such as "critical", "high", "low", "blocker" or "trivial". The severity of a bug may be a separate category to its priority for fixing, and the two may be quantified and managed separately. Priority Priority controls where a bug falls on the list of planned changes. The priority is decided by each software producer. Priorities may be numerical, such as 1 through 5, or named, such as "critical", "high", "low", or "deferred". These rating scales may be similar or even identical to severity ratings, but are evaluated as a combination of the bug's severity with its estimated effort to fix; a bug with low severity but easy to fix may get a higher priority than a bug with moderate severity that requires excessive effort to fix. Priority ratings may be aligned with product releases, such as "critical" priority indicating all the bugs that must be fixed before the next software release. Software releases It is common practice to release software with known, low-priority bugs. Bugs of sufficiently high priority may warrant a special release of part of the code containing only modules with those fixes. These are known as patches. Most releases include a mixture of behavior changes and multiple bug fixes. Releases that emphasize bug fixes are known as maintenance releases, to differentiate it from major releases that emphasize feature additions or changes. Reasons that a software publisher opts not to patch or even fix a particular bug include: A deadline must be met and resources are insufficient to fix all bugs by the deadline. The bug is already fixed in an upcoming release, and it is not of high priority. The changes required to fix the bug are too costly or affect too many other components, requiring a major testing activity. It may be suspected, or known, that some users are relying on the existing buggy behavior; a proposed fix may introduce a breaking change. The problem is in an area that will be obsolete with an upcoming release; fixing it is unnecessary. "It's not a bug, it's a feature". 
A misunderstanding has arisen between expected and perceived behavior, or an undocumented feature. Types In software development projects, a "mistake" or "fault" may be introduced at any stage. Bugs arise from oversights or misunderstandings made by a software team during specification, design, coding, data entry or documentation. For example, in a relatively simple program to alphabetize a list of words, the design might fail to consider what should happen when a word contains a hyphen. Or, when converting an abstract design into code, the coder might inadvertently create an off-by-one error, such as writing "<" where "<=" was intended, and fail to sort the last word in the list. Another category of bug, called a race condition, may occur when programs have multiple components executing at the same time. If the components interact in a different order than the developer intended, they could interfere with each other and stop the program from completing its tasks. These bugs may be difficult to detect or anticipate, since they may not occur during every execution of a program. Conceptual errors are a developer's misunderstanding of what the software must do. The resulting software may perform according to the developer's understanding, but not what is really needed. Other types: Arithmetic Division by zero. Arithmetic overflow or underflow. Loss of arithmetic precision due to rounding or numerically unstable algorithms. Logic Infinite loops and infinite recursion. Off-by-one error, counting one too many or too few when looping. Syntax Use of the wrong operator, such as performing assignment instead of an equality test. For example, in some languages x=5 will set the value of x to 5 while x==5 will check whether x is currently 5 or some other number. Interpreted languages typically allow such code to run and fail only at run time, whereas compiled languages can catch some of these errors before testing begins. Resource Null pointer dereference. Using an uninitialized variable. Using an otherwise valid instruction on the wrong data type (see packed decimal/binary-coded decimal). Access violations. Resource leaks, where a finite system resource (such as memory or file handles) becomes exhausted by repeated allocation without release. Buffer overflow, in which a program tries to store data past the end of allocated storage. This may or may not lead to an access violation or storage violation; such overflows are frequently security bugs. Excessive recursion which, though logically valid, causes stack overflow. Use-after-free error, where a pointer is used after the system has freed the memory it references. Double free error. Multi-threading Deadlock, where task A cannot continue until task B finishes, but at the same time, task B cannot continue until task A finishes. Race condition, where the computer does not perform tasks in the order the programmer intended. Concurrency errors in critical sections, mutual exclusions and other features of concurrent processing. Time-of-check-to-time-of-use (TOCTOU) is a form of unprotected critical section. Interfacing Incorrect API usage. Incorrect protocol implementation. Incorrect hardware handling. Incorrect assumptions about a particular platform. Incompatible systems. A new API or communications protocol may seem to work when two systems use different versions, but errors may occur when a function or feature implemented in one version is changed or missing in another. 
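Version mismatches of this kind are easy to reproduce in a few lines. The sketch below is purely illustrative: the JSON message format, field names and function names are invented, and the point is only that each side looks correct in isolation while the combination fails.

```python
# Illustration of an interfacing bug: two components speak different versions
# of the same (invented) message format. The v2 reader requires a "checksum"
# field that the v1 writer never sends, so each side looks correct on its own
# but the combination fails.
import json


def encode_v1(payload: str) -> str:
    """Writer from the older component: no checksum field."""
    return json.dumps({"version": 1, "payload": payload})


def decode_v2(message: str) -> str:
    """Reader from the newer component: assumes the v2-only checksum field."""
    record = json.loads(message)
    if record["checksum"] != len(record["payload"]):  # KeyError on v1 input
        raise ValueError("corrupt message")
    return record["payload"]


print(decode_v2(encode_v1("hello")))  # raises KeyError: 'checksum'
```

A more defensive reader would inspect the message's version field before assuming that version 2 fields are present, or handle missing fields explicitly.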
In production systems that must run continually, such as in the telecommunications industry or on the Internet, shutting down the entire system for a major update may not be possible. In this case, smaller segments of a large system are upgraded individually, to minimize disruption to a large network. However, some sections could be overlooked and not upgraded, causing compatibility errors which may be difficult to find and repair. Incorrect code annotations. Teamworking Unpropagated updates; e.g. a programmer changes "myAdd" but forgets to change "mySubtract", which uses the same algorithm. These errors are mitigated by the Don't Repeat Yourself philosophy. Comments out of date or incorrect: many programmers assume the comments accurately describe the code. Differences between documentation and product. Implications The amount and type of damage a software bug may cause naturally affects decision-making, processes and policy regarding software quality. In applications such as human spaceflight or automotive safety, since software flaws have the potential to cause human injury or even death, such software will have far more scrutiny and quality control than, for example, an online shopping website. In applications such as banking, where software flaws have the potential to cause serious financial damage to a bank or its customers, quality control is also more important than in, say, a photo editing application. NASA's Software Assurance Technology Center managed to reduce the number of errors to fewer than 0.1 per 1000 lines of code (SLOC), but this was not felt to be feasible for projects in the business world. According to a NASA study on "Flight Software Complexity", "an exceptionally good software development process can keep defects down to as low as 1 defect per 10,000 lines of code." Other than the damage caused by bugs, some of their cost is due to the effort invested in fixing them. In 1978, Lientz et al. showed that the median project invests 17 per cent of its development effort in bug fixing. Research in 2020 on GitHub repositories showed the median to be 20%. Well-known bugs A number of software bugs have become well-known, usually due to their severity: examples include various space and military aircraft crashes. Possibly the most famous bug is the Year 2000 problem or Y2K bug, which caused many programs written long before the transition from 19xx to 20xx dates to malfunction, for example treating a date such as "25 Dec 04" as being in 1904, displaying "19100" instead of "2000", and so on. A huge effort at the end of the 20th century resolved the most severe problems, and there were no major consequences. The 2012 stock trading disruption involved one such incompatibility between the old API and a new API. In popular culture In both the 1968 novel 2001: A Space Odyssey and the corresponding 1968 film 2001: A Space Odyssey, a spaceship's onboard computer, HAL 9000, attempts to kill all its crew members. In the follow-up 1982 novel, 2010: Odyssey Two, and the accompanying 1984 film, 2010, it is revealed that this action was caused by the computer having been programmed with two conflicting objectives: to fully disclose all its information, and to keep the true purpose of the flight secret from the crew; this conflict caused HAL to become paranoid and eventually homicidal. 
In the English version of Nena's 1983 song 99 Luftballons (99 Red Balloons), as a result of "bugs in the software", the release of a group of 99 red balloons is mistaken for an enemy nuclear missile launch, requiring an equivalent launch response and resulting in catastrophe. In the 1999 American comedy Office Space, three employees attempt (unsuccessfully) to exploit their company's preoccupation with the Y2K computer bug using a computer virus that sends rounded-off fractions of a penny to their bank account, a long-known technique described as salami slicing. The 2004 novel The Bug, by Ellen Ullman, is about a programmer's attempt to find an elusive bug in a database application. The 2008 Canadian film Control Alt Delete is about a computer programmer at the end of 1999 struggling to fix bugs at his company related to the year 2000 problem. See also Anti-pattern Bug bounty program Glitch removal ISO/IEC 9126, which classifies a bug as either a defect or a nonconformity Orthogonal Defect Classification Racetrack problem RISKS Digest Software defect indicator Software regression Software rot Automatic bug fixing References External links "Common Weakness Enumeration" – an expert webpage focused on bugs, at NIST.gov BUG type of Jim Gray – another Bug type "The First Computer Bug!" – an email from 1981 about Adm. Hopper's bug "Toward Understanding Compiler Bugs in GCC and LLVM". A 2016 study of bugs in compilers
1963187
https://en.wikipedia.org/wiki/Institution%20of%20Engineering%20and%20Technology
Institution of Engineering and Technology
The Institution of Engineering and Technology (IET) is a multidisciplinary professional engineering institution. The IET was formed in 2006 from two separate institutions: the Institution of Electrical Engineers (IEE), dating back to 1871, and the Institution of Incorporated Engineers (IIE), dating back to 1884. Its worldwide membership is currently in excess of 158,000 in 153 countries. The IET's main offices are at Savoy Place in London, England and at Michael Faraday House in Stevenage, England. In the United Kingdom, the IET has the authority to establish professional registration for the titles of Chartered Engineer, Incorporated Engineer, Engineering Technician, and ICT Technician, as a licensed member institution of the Engineering Council. The IET is registered as a charity in England and Wales, and in Scotland. Formation Discussions started in 2004 between the IEE and the IIE about merging to form a new institution. In September 2005, both institutions held votes on the merger and the members voted in favour (73.5% IEE, 95.7% IIE). This merger also needed government approval, so a petition was then made to the Privy Council of the United Kingdom for a Supplemental Charter, to allow the creation of the new institution. This was approved by the Privy Council on December 14, 2005, and the new institution emerged on March 31, 2006. History of the IEE The Society of Telegraph Engineers (STE) was formed on May 17, 1871, and it published the Journal of the Society of Telegraph Engineers from 1872 through 1880. Carl Wilhelm Siemens was the Society's first President, in 1872. On December 22, 1880, the STE was renamed the Society of Telegraph Engineers and of Electricians and, as part of this change, it renamed its journal the Journal of the Society of Telegraph Engineers and of Electricians (1881–82) and later the Journal of the Society of Telegraph-Engineers and Electricians (1883–88). Following a meeting of its Council on 10 November 1887, it was decided to adopt the name of the Institution of Electrical Engineers (IEE). As part of this change, its Journal was renamed the Journal of the Institution of Electrical Engineers in 1889, and it kept this title through 1963. In 1921, the Institution was incorporated by Royal Charter and, following mergers with the Institution of Electronic and Radio Engineers (IERE) in 1988 and the Institution of Manufacturing Engineers (IMfgE) in 1990, it had a worldwide membership of around 120,000. The IEE represented the engineering profession, operated Professional Networks (worldwide groups of engineers sharing common technical and professional interests), had an educational role including the accreditation of degree courses, and operated schemes to provide awards, scholarships, grants and prizes. It was well known for publication of the IEE Wiring Regulations, which continue to be written by the IET and published by the British Standards Institution as BS 7671. The IET hosts the archive for the Women's Engineering Society (WES) and has also provided office space for WES since 2005. History of the IIE The modern Institution of Incorporated Engineers (IIE) traced its heritage to The Vulcanic Society, founded in 1884, which became the Junior Institution of Engineers in 1902 and the Institution of General Technician Engineers in 1970. It changed its name in 1976 to the Institution of Mechanical and General Technician Engineers. 
In 1988 it merged with the Institution of Technician Engineers in Mechanical Engineering to form the Institution of Mechanical Incorporated Engineers. The Institution of Engineers in Charge, which was founded in 1895, was merged into the Institution of Mechanical Incorporated Engineers (IMechIE) in 1990. The Institution of Electrical and Electronic Technician Engineers, the Society of Electronic and Radio Technicians, and the Institute of Practitioners in Radio and Electronics merged in 1990 to form the Institution of Electronics and Electrical Incorporated Engineers (IEEIE). The IIE was formed in April 1998 by the merger of The Institution of Electronic and Electrical Incorporated Engineers (IEEIE), The Institution of Mechanical Incorporated Engineers (IMechIE), and The Institute of Engineers and Technicians (IET, not to be confused with the later-formed Institution of Engineering and Technology). In 1999 there was a further merger with The Institution of Incorporated Executive Engineers (IIExE). The IIE had a worldwide membership of approximately 40,000. History of the Institution of Manufacturing Engineers This forerunner institution was known in all but its last year as the Institution of Production Engineers (IProdE) and was initiated by H. E. Honer. He wrote to the technical periodical Engineering Production, suggesting that the time was ripe to form an institution for the specialised interests of engineers engaged in manufacture/production. The resulting mass of correspondence spawned a meeting at the Cannon Street Hotel on 26 February 1921. There it was decided to form the IProdE to: establish the status and designation of production or manufacturing engineers promote the science of practical production in industry facilitate the interchange of ideas between engineers, manufacturers and other specialists. The term ‘production engineering’ came into use to describe the management of factory production techniques first developed by Henry Ford, which had expanded greatly during World War I. The IProdE was incorporated in 1931 and was granted armorial bearings in 1937. From the outset it operated through decentralised branches called local sections wherever enough members existed. These were self-governing and elected their own officers. They held monthly meetings at which papers were read and discussed. Outstanding papers were published in the IProdE's Journal. The work of six foremost production engineers took centre stage in certain national meetings: Viscount Nuffield, Sir Alfred Herbert, Colonel George Bray, Lord Sempill, E. H. Hancock, and J. N. Kirby. National and regional conferences were arranged dealing with specific industrial problems. Sister councils were established in countries including Australia, Canada, India, New Zealand, and South Africa. The Institution's education committee established a graduate examination which all junior entrants undertook from 1932 onwards. An examination for Associate Membership was introduced in 1951. World War II accelerated developments in production engineering and by 1945 membership of the IProdE stood at 5,000. The 1950s and 1960s were perhaps the most fruitful period for the Institution. Major conferences such as ‘The Automatic Factory’ in 1955 ensured that the Institution held a place at the forefront of production technology. A Royal Charter was granted in 1964 and membership stood at over 17,000 by 1969. 
In 1981, to mark its Diamond Jubilee, the IProdE instituted four medals: the International Award, the Mensforth Gold Medal, the Nuffield Award and the Silver Medal. The Mensforth Gold Medal was named after Sir Eric Mensforth, founder and chairman of Westland Helicopters and a former IProdE President. It was awarded to British recipients who had made an outstanding contribution to the advancement of production engineering technology. Renamed the Mensforth Manufacturing Gold Medal, it is the IET's top manufacturing award. Financial constraints, a slowing in membership and a blurring of distinctions between the various branches of engineering led the IProdE to merger proposals in the late 1980s. The Institution of Electrical Engineers (IEE) had interests very close to those of the IProdE. The IEE was a much larger organisation than the IProdE, and the proposal was that the IProdE should be represented as a specialist division within the IEE. While these talks were reaching fruition in 1991, the IProdE changed its name to the Institution of Manufacturing Engineers. A merger with the IEE took place the same year, with the IMfgE becoming the IEE's new Manufacturing Division. IET Presidents The IET is governed by the President and Board of Trustees. The IET Council, on the other hand, serves as the advisory and consultative body, representing the views of the members at large and offering advice to the Board of Trustees. Since the founding of the IET, several prominent engineers have served as its President. Purpose and function The IET represents the engineering profession in matters of public concern and assists governments in making the public aware of engineering and technological issues. It provides advice on all areas of engineering, regularly advising Parliament and other agencies. The IET also grants the Chartered Engineer, Incorporated Engineer, Engineering Technician, and ICT Technician professional designations on behalf of the Engineering Council UK. IEng is roughly equivalent to North American Professional Engineer designations, and CEng is set at a higher level; both designations have far greater geographical recognition. The IET supports its members through a number of networks, including the Professional Networks, worldwide groups of engineers sharing common technical and professional interests. Through the IET website, these networks provide up-to-date sector-specific news, stock a library of technical articles and give members the opportunity to exchange knowledge and ideas with peer groups through dedicated discussion forums. Particular areas of focus include education, IT, energy and the environment. The IET has an educational role, seeking to support its members through their careers by offering a professional home for life, producing advice and guidance at all levels to secure the future of engineering. For instance, the IET accredits degree courses worldwide in subjects relevant to electrical, electronic, manufacturing and information engineering. In addition, it secures funding for professional development schemes for engineering graduates, including awards, scholarships, grants and prizes. For the public, the IET website provides factfiles on topics such as solar power, nuclear power, fuel cells, micro-generation and the possible effects on health of mobile phones and power lines. 
The IET runs the bibliographic information service Inspec, a major indexing database of scientific and technical literature, and publishes books, journals such as Electronics Letters, magazines such as Engineering & Technology, and conference proceedings. Over 80,000 technical articles are available via the IET Digital Library. IET.tv is one of the world's largest collated resources of authoritative and multidisciplinary engineering and technology content. Comprising in excess of 6,500 presentation, lecture and training videos, this high-quality engineering content offers research insight, workflow solutions and access to inspirational events and expert communities. With a range of search and user functionalities, IET.tv enables online video access to a wide range of topics and expertise. IET.tv also has a YouTube presence, where it publishes a wide variety of content related to engineering and technology. In August 2019, the Department for Digital, Culture, Media and Sport (DCMS) appointed the IET as the lead organisation in charge of designing and delivering the new UK Cyber Security Council, alongside 15 other cyber security professional organisations collectively known as the Cyber Security Alliance. The council, which officially launched in April 2021, will be "charged with the development of a framework that speaks across the different specialisms, setting out a comprehensive alignment of career pathways, including the certifications and qualifications required within certain levels". Membership The IET has several categories of membership, some with designatory postnominals: Honorary Fellow (HonFIET) Refers to distinguished individuals whom the IET desires to honour for services rendered to the IET. Fellow (FIET) Fellow of the Institution of Engineering and Technology (FIET) refers to a person who has demonstrated significant individual responsibility, sustained achievement and professionalism in engineering areas relevant to the interests of the Institution. Member (MIET or TMIET) This category is open to professional engineers (MIET) and technicians (TMIET) with suitable qualifications and involvement in areas relevant to the interests of the Institution. MIET is a regulated professional title recognised in Europe by Directive 2005/36. MIET is listed in Part 2 (professions regulated by professional bodies incorporated by Royal Charter) of Statutory Instrument 2007 No. 2781 (Professional Qualifications), The European Communities (Recognition of Professional Qualifications) Regulations 2007. Associate Open to persons with an interest in areas relevant to the interests of the Institution who do not qualify for the Member category. Student Open to persons studying to become professional engineers and technicians. Publications The IET has a journals publishing programme totalling 24 titles as of March 2012, such as IET Software (with the addition of IET Biometrics and IET Networks). The journals contain both original and review-oriented papers relating to various disciplines in electrical, electronics, computing, control, biomedical and communications technologies. Electronics Letters is a peer-reviewed rapid-communication journal, which publishes short original research papers every two weeks. Its scope covers developments in all electronic and electrical engineering-related fields. Also available to Electronics Letters subscribers is a publication called the Insight Letters. 
Micro & Nano Letters, first published in 2006, specialises in the rapid online publication of short research papers concentrating on advances in miniature and ultraminiature structures and systems that have at least one dimension ranging from a few tens of micrometres to a few nanometres. It offers a rapid route for international dissemination of research findings generated by researchers from the micro and nano communities. Awards and scholarships Achievement Medals The IET Achievement Medals are awarded to individuals who have made major and distinguished contributions in the various sectors of science, engineering and technology. The medals are named after famous engineers and scientists, such as Michael Faraday, John Ambrose Fleming, J. J. Thomson, and Oliver Heaviside. The judging panel looks for outstanding and sustained excellence in one or more activities, for example research and development, innovation, design, manufacturing, technical management, and the promotion of engineering and technology. Faraday Medal The Faraday Medal is the highest medal and honour of the IET. Named after Michael Faraday, the medal is awarded for notable scientific or industrial achievement in engineering or for conspicuous service rendered to the advancement of science, engineering and technology without restriction as regards nationality, country of residence or membership of the Institution. It is awarded not more frequently than once a year. The award was established in 1922 to commemorate the 50th Anniversary of the first Ordinary Meeting of the Society of Telegraph Engineers. J J Thomson Medal for Electronics The J J Thomson Medal for Electronics, named after J. J. Thomson, was created in 1976 by the Electronics Divisional Board of the Institution of Electrical Engineers (IEE), and is awarded to candidates who have made major and distinguished contributions in electronics. Ambrose Fleming Medal The Ambrose Fleming Medal for Information and Communications was first awarded in 2007, to Professor Simon Kingsley. It was named after John Ambrose Fleming, the inventor of the vacuum tube, and is awarded to candidates who have made outstanding and distinguished contributions to digital communications, telecommunications, and information engineering. Mensforth Manufacturing Gold Medal The Mensforth Manufacturing Gold Medal is awarded to candidates who have made major and distinguished contributions to advancing the manufacturing sector. Like the Faraday Medal, the Mensforth Manufacturing Gold Medal is awarded without restriction regarding nationality, country of residence or membership of the IET. Mountbatten Medal The Mountbatten Medal celebrates individuals who have made an outstanding contribution, over a period of time, to the promotion of electronics or information technology and their application. Contributions can be within the spheres of science, technology, industry or commerce and in the dissemination of the understanding of electronics and information technology, whether to young people or adults. The Medal was founded by the National Electronics Council in 1992 and named after The Earl Mountbatten of Burma, Admiral of the Fleet and Governor-General of India. Other Recognitions IET Volunteer Medal Introduced in 2015, the IET Volunteer Medal is awarded to individuals for major and outstanding contributions voluntarily given to furthering the aims of the IET. 
Young Woman Engineer (YWE) Since 1978 the IET has awarded the Young Woman Engineer award to top female engineers in the UK to recognize the contribution they make and to encourage young women and girls to consider engineering as a career. The award was created as part of an initiative to address the shortage of women in engineering roles. Scholarships The IET offers Diamond Jubilee undergraduate scholarships to first-year students studying an IET-accredited degree. Winners receive between £1,000 and £2,000 per year, for up to four years, to help with their studies. Eligibility is partially based on exam results in the final year of school prior to university. The IET also offers postgraduate scholarships for IET members carrying out doctoral research; these assist members with awards of up to £10,000 to further research on engineering-related topics at universities. IET Engineering Horizons Bursaries are offered at £1,000 per year to undergraduate students on IET-accredited degree courses in the UK and to apprentices starting an IET Approved Apprenticeship scheme, and are aimed at UK residents who have overcome personal challenges to pursue an engineering education. The IET outside the United Kingdom The IET refers to its region-specific branches as "Local Networks". Australia IET Australia is the Australian Local Network of the IET (Institution of Engineering and Technology). The Australian Local Network of the IET has representation in all the states and territories of Australia. These include the state branches, their associated Younger Members Sections, and university sections in Australia. The Younger Members Sections are divided into categories based on each state, e.g. IET YMS New South Wales (IET YMS NSW). Canada The IET Toronto Network covers IET activities in the Southern and Western areas of Ontario and has approximately 500 members. The first Canadian Branch of the IEE (now the IET) was inaugurated by John Thompson, FIEE, and Harry Copping, FIEE, in Toronto in the early 1950s. China The IET China office is in Beijing. It started in 2005 with core purposes of international collaboration, engineering exchange, organization of events and seminars, and the promotion of the concept and requirements of, and awarding of, the title of Chartered Engineer. Hong Kong IET Hong Kong is the Hong Kong Local Network (formerly Branch) of the IET (Institution of Engineering and Technology). The Hong Kong Local Network of the IET has representation in the Asian region and provides a critical link into mainland China. It includes six sections: the Electronics & Communications Section (ECS); Informatics and Control Technologies Section (ICTS); Management Section (MS); Power and Energy Section (PES); Manufacturing & Industrial Engineering Section (MIES); and Railway Section (RS), as well as the Younger Members Section. It has over 5,000 members and activities are coordinated locally. It is one of the professional organisations for chartered engineers in Hong Kong. Italy The IET Italy Local Network was established in 2007 by a group of active members led by Dr M Fiorini with the purpose of representing the aims and services of the IET locally. The vision of sharing and advancing knowledge throughout the global science, engineering and technology community to enhance people's lives is achieved by building up an open, flexible and global knowledge network supported by individuals, companies and institutions and facilitated by the IET and its members. 
India An IET India Office was established in 2006. It has eight Local Networks: Bengaluru, Chennai, Delhi, Kanyakumari, Kolkata, Mumbai, Nashik and Pune. Kenya An IET in Kenya was established on November 16, 2011, enacted by the Kenya National Assembly with powers that include recognition of its awards. With the support of faculty at the newly established Technical University of Kenya (formerly the Kenya Polytechnic) and Jomo Kenyatta University of Agriculture & Technology, the institution considers applications for the registration of Technologists, Technicians and Craftspeople, being particularly open to those excluded from Engineering Board of Kenya registration. Kuwait The IET Kuwait community was established in 2013 by Dr. Abdelrahman Abdelazim. The community is active in the region, overseeing four student chapters at universities in Kuwait. The community's most notable event was the 2015 GCC robotics challenge, which involved collaboration with many networks in the region. Malaysia The IET Malaysia Local Network has more than 1,900 members in Malaysia. In addition, the network has facilitated IET On Campus groups at public and private universities. These are mentored by the Young Professional Section (YPS) of the IET. As of December 2019, there were 19 active On Campus groups. See also Engineering Glossary of engineering Engineering ethics Faraday Medal IET Achievement Medals Mountbatten Medal Society of Engineers UK Society of Professional Engineers UK References External links Engineering Communities 2006 establishments in the United Kingdom Bibliographic database providers ECUK Licensed Members Electrical engineering organizations Engineering societies based in the United Kingdom Learned societies of the United Kingdom Organisations based in Hertfordshire Organisations based in the City of Westminster Organizations established in 1871 Organizations established in 2006 Engineering and Technology Science and technology in Hertfordshire Scientific organisations based in the United Kingdom Scientific organizations established in 2006 Stevenage
594417
https://en.wikipedia.org/wiki/README
README
A README file contains information about the other files in a directory or archive of computer software. A form of documentation, it is usually a simple plain text file called README, Read Me, READ.ME, README.TXT, README.md (to indicate the use of Markdown), or README.1ST. The file's name is generally written in uppercase. On Unix-like systems in particular, this causes it to stand out, both because lowercase filenames are more common and because the ls command commonly sorts and displays files in ASCII-code order, in which uppercase filenames will appear first. Contents A README file typically encompasses: Configuration instructions Installation instructions Operating instructions A file manifest (a list of files in the directory or archive) Copyright and licensing information Contact information for the distributor or author A list of known bugs Troubleshooting instructions Credits and acknowledgments A changelog (usually aimed at fellow programmers) A news section (usually aimed at end users) History It is unclear when the convention of including a README file began, but examples dating to the mid-1970s have been found. Early Macintosh system software installed a Read Me on the Startup Disk, and README files commonly accompanied third-party software. In particular, there is a long history of free software and open-source software including a README file; the GNU Coding Standards encourage including one to provide "a general overview of the package". Since the advent of the web as a de facto standard platform for software distribution, many software packages have moved (or occasionally, copied) some of the above ancillary files and pieces of information to a website or wiki, sometimes including the README itself, or sometimes leaving behind only a brief README file without all of the information required by a new user of the software. The popular source code hosting website GitHub strongly encourages the creation of a README file: if one exists in the main (top-level) directory of a repository, it is automatically presented on the repository's front page. In addition to plain text, various other formats and file extensions are also supported, and HTML conversion takes extensions into account; in particular, a README.md is treated as GitHub Flavored Markdown. As a generic term The expression "readme file" is also sometimes used generically, for other files with a similar purpose. For example, the source-code distributions of many free software packages (especially those following the Gnits Standards or those produced with GNU Autotools) include a standard set of readme files:
README – General information
AUTHORS – Credits
THANKS – Acknowledgments
CHANGELOG – A detailed changelog, intended for programmers
NEWS – A basic changelog, intended for users
INSTALL – Installation instructions
COPYING / LICENSE – Copyright and licensing information
BUGS – Known bugs and instructions on reporting new ones
CONTRIBUTING / HACKING – Guide for prospective contributors to the project
Also commonly distributed with software packages are an FAQ file and a TODO file, which lists planned improvements. See also FILE_ID.DIZ DESCRIPT.ION .nfo man page Notes References Further reading Software documentation Filenames
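The ASCII-ordering effect mentioned above, where an uppercase README sorts ahead of typical lowercase filenames, can be seen with any byte-wise sort; the small Python snippet below (file names chosen purely for illustration) demonstrates it.

```python
# ASCII places the uppercase letters (codes 65-90) before the lowercase
# letters (97-122), so a plain byte-wise sort, such as ls collating in
# ASCII order, lists "README" ahead of typical lowercase file names.
files = ["configure", "README", "src", "install.sh"]
print(sorted(files))  # ['README', 'configure', 'install.sh', 'src']
```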
5044374
https://en.wikipedia.org/wiki/Chain%20of%20trust
Chain of trust
In computer security, a chain of trust is established by validating each component of hardware and software from the end entity up to the root certificate. It is intended to ensure that only trusted software and hardware can be used while still retaining flexibility. Introduction A chain of trust is designed to allow multiple users to create and use software on the system, which would be more difficult if all the keys were stored directly in hardware. It starts with hardware that will only boot from software that is digitally signed. The signing authority will only sign boot programs that enforce security, such as only running programs that are themselves signed, or only allowing signed code to have access to certain features of the machine. This process may continue for several layers. This process results in a chain of trust. The final software can be trusted to have certain properties, because if it had been illegally modified its signature would be invalid, and the previous software would not have executed it. The previous software can be trusted, because it, in turn, would not have been loaded if its signature had been invalid. The trustworthiness of each layer is guaranteed by the one before, back to the trust anchor. It would be possible to have the hardware check the suitability (signature) for every single piece of software. However, this would not produce the flexibility that a "chain" provides. In a chain, any given link can be replaced with a different version to provide different properties, without having to go all the way back to the trust anchor. This use of multiple layers is an application of a general technique to improve scalability, and is analogous to the use of multiple certificates in a certificate chain. Computer security In computer security, digital certificates are verified using a chain of trust. The trust anchor for the digital certificate is the root certificate authority (CA). The certificate hierarchy is a structure of certificates that allows individuals to verify the validity of a certificate's issuer. Certificates are issued and signed by certificates that reside higher in the certificate hierarchy, so the validity and trustworthiness of a given certificate is determined by the corresponding validity of the certificate that signed it. The chain of trust of a certificate chain is an ordered list of certificates, containing an end-user subscriber certificate and intermediate certificates (that represents the intermediate CA), that enables the receiver to verify that the sender and all intermediate certificates are trustworthy. This process is best described in the page Intermediate certificate authority. See also X.509 certificate chains for a description of these concepts in a widely used standard for digital certificates. Data security Trusted computing Public-key cryptography
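The issuer-by-issuer verification described above can be sketched in a few lines. The example below is a minimal illustration using the pyca/cryptography package, assuming RSA-signed PEM certificates and placeholder file names; a real validator would also check validity periods, name constraints, key usage, and revocation.

```python
# Minimal chain-of-trust walk: confirm that each certificate in the list was
# signed by the next one, ending at a self-signed root (the trust anchor).
# Assumes RSA-signed PEM certificates; the file names are placeholders.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding


def load(path: str) -> x509.Certificate:
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())


def signed_by(cert: x509.Certificate, issuer: x509.Certificate) -> bool:
    """Return True if the issuer's public key verifies cert's signature."""
    try:
        issuer.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            cert.signature_hash_algorithm,
        )
        return True
    except InvalidSignature:
        return False


# Chain ordered from the end entity up to the root (placeholder file names).
chain = [load(p) for p in ("leaf.pem", "intermediate.pem", "root.pem")]

for cert, issuer in zip(chain, chain[1:] + [chain[-1]]):  # root checks itself
    if cert.issuer != issuer.subject or not signed_by(cert, issuer):
        raise SystemExit(f"chain broken at {cert.subject.rfc4514_string()}")
print("chain verifies up to the trust anchor")
```

Each link is trusted only because the link above it vouches for it, which is the layering the article describes: replacing one intermediate certificate does not require touching the trust anchor.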
23762941
https://en.wikipedia.org/wiki/Healthcare%20Cost%20and%20Utilization%20Project
Healthcare Cost and Utilization Project
The Healthcare Cost and Utilization Project (HCUP, pronounced "H-Cup") is a family of healthcare databases and related software tools and products from the United States that is developed through a Federal-State-Industry partnership and sponsored by the Agency for Healthcare Research and Quality (AHRQ). General Information HCUP provides access to healthcare databases for research and policy analysis, as well as tools and products to enhance the capabilities of the data. HCUP databases combine the data collection efforts of State data organizations, hospital associations, private data organizations, and the Federal Government to create a national information resource of patient-level healthcare data. State organizations that provide data to HCUP are called Partners. HCUP includes multiyear hospital administrative (inpatient, outpatient, and emergency department) data in the United States, with all-payer, encounter-level information beginning in 1988. These databases enable research on health and policy issues at the national, State, and local levels, including cost and quality of health services, medical practice patterns, access to healthcare, and outcomes of treatments. AHRQ has also developed a set of software tools to be used when evaluating hospital data. These software tools can be used with the HCUP databases and with other administrative databases. HCUP’s Supplemental Files are only for use with HCUP databases. HCUP databases have been used in various studies on a number of topics, such as breast cancer, depression, and multimorbidity, incidence and cost of injuries, role of socioeconomic status in patients leaving against medical advice, multiple chronic conditions and disparities in readmissions, and hospitalization costs for cystic fibrosis. HCUP User Support Website (HCUP-US) The HCUP User Support website is the main repository of information for HCUP. It is designed to answer HCUP-related questions; provide detailed information on HCUP databases, tools, and products; and offer technical assistance to HCUP users. HCUP’s tools, publications, documentation, news, services, HCUP Fast Stats, and HCUPnet (the online data query system) may all be accessed through HCUP-US. HCUP-US is located at https://www.hcup-us.ahrq.gov. HCUP Overview Course HCUP has developed an interactive online course that provides an overview of the features, capabilities, and potential uses of HCUP. The course is modular, so users can either move through the entire course or access the resources in which they are most interested. The On-line HCUP Overview Course (https://www.hcup-us.ahrq.gov/overviewcourse.jsp) can work both as an introduction to HCUP data and tools and a refresher for established users. HCUP Online Tutorial Series The HCUP Online Tutorial Series (https://www.hcup-us.ahrq.gov/tech_assist/tutorials.jsp) is a set of interactive training courses that provide HCUP data users with information about HCUP data and tools, and training on technical methods for conducting research with HCUP data. The online courses are modular, so users can move through an entire course or access the sections in which they are most interested. Topics include loading and checking HCUP data, understanding HCUP’s sampling design, calculating standard errors, producing national estimates, conducting multiyear analysis, and using the Nationwide Readmissions Database (NRD). 
HCUP Databases HCUP databases bring together data from State data organizations, hospital associations, private data organizations, and the Federal Government to create an information resource of patient-level healthcare data. HCUP’s databases (https://www.hcup-us.ahrq.gov/databases.jsp) date back to 1988 data files. The databases contain encounter-level information for all payers compiled in a uniform format with privacy protections in place. Researchers and policymakers can use the records to identify, track, and analyze national trends in healthcare use, access, charges, quality, and outcomes. HCUP databases are released approximately 6 to 18 months after the end of a given calendar year, with State databases available earlier than the national or nationwide datasets. Currently, there are eight types of HCUP databases: five with national- and regional-level data and three with State- and local-level data. National Databases National Inpatient Sample (NIS) (formerly the Nationwide Inpatient Sample): A 20 percent stratified sample of all-payer, inpatient discharges from U.S. community hospitals (excluding rehabilitation and long-term acute-care hospitals). The NIS is available from 1988 forward, and a new database is released annually, approximately 18 months after the end of a calendar year. Kids’ Inpatient Database (KID): A nationwide sample of all-payer pediatric inpatient care discharges. Its large sample size is ideal for developing national and regional estimates and enables analyses of rare conditions, such as congenital anomalies, as well as uncommon treatments, such as organ transplantation. The KID was released every 3 years from 1997 to 2012, and release resumed in 2016. Nationwide Emergency Department Sample (NEDS): An all-payer emergency department (ED) database of approximately 30 million records that yields national estimates of 145 million ED visits. The NEDS captures encounters where the patient is admitted for inpatient treatment, as well as those in which the patient is treated and released. The NEDS is released annually and is available from 2006 forward. Nationwide Readmissions Database (NRD): The NRD is designed to support various types of analyses of national readmission rates for all patients, regardless of expected payer for the hospital stay. The NRD is released annually and is available from 2010 forward. Nationwide Ambulatory Surgery Sample (NASS): The NASS is the largest all-payer ambulatory surgery database that has been constructed in the United States, yielding national estimates of major ambulatory surgery encounters performed in hospital-owned facilities. The NASS is released annually and is available starting with the 2016 data year. State Databases State Inpatient Databases (SID): The SID are databases from the universe of inpatient discharge abstracts from participating States, released annually. Data are available from 1995 forward. The SID are released on a rolling basis, as early as 6 months following the end of a calendar year. State Ambulatory Surgery and Services Databases (SASD): The SASD are ambulatory surgery and other outpatient service abstracts from hospital-owned and sometimes freestanding ambulatory surgery sites in participating States. Data are available from 1997 forward. The SASD are released on a rolling basis, as early as 6 months following the end of a calendar year. 
State Emergency Department Databases (SEDD): The SEDD are hospital-affiliated emergency department data for visits in participating States that do not result in hospitalizations. Data are available from 1999 forward. The SEDD are released on a rolling basis, as early as 6 months following the end of a calendar year. HCUP Tools and Software HCUP provides a number of tools and software programs that can be applied to HCUP and other similar administrative databases. Readily Available HCUP Statistics HCUPnet HCUPnet (https://hcupnet.ahrq.gov/) is an online query system that provides healthcare statistics and information from the HCUP national (NIS, NEDS, KID, and NRD) and State (SID, SASD, and SEDD) databases for those States that have agreed to participate. HCUPnet can be used for identifying, tracking, analyzing, and comparing statistics on hospital inpatient stays, emergency care, and ambulatory surgery, as well as obtaining measures of quality-based information from the AHRQ Quality Indicators. Select statistics are available at a national- and county-level. HCUPnet can also be used for trend analysis with healthcare data available from 1993 forward. HCUPnet also includes a feature called hospital readmissions that provides users with some statistics on hospital readmissions within 7 and 30 days of hospital discharge. HCUP Fast Stats HCUP Fast Stats (https://www.hcup-us.ahrq.gov/faststats/landing.jsp) is a web-based tool that provides easy access to the latest HCUP-based statistics for healthcare information topics. HCUP Fast Stats uses visual statistical displays in standalone graphs, trend figures, or simple tables to convey complex information at a glance. Fast Stats topics are updated regularly (quarterly or annually, as newer data become available) for timely, topic-specific national and State-level statistics. The following topics are available: State Trends in Hospital Use by Payer (formerly called Effect of Health Insurance Expansion on Hospital Use and Effect of Medicaid Expansion on Hospital Use). This topic includes statistics from up to 44 States on the number of hospital discharges by payer group. National Hospital Utilization and Costs. This topic focuses on national statistics on inpatient stays: Trends, Most Common Diagnoses, and Most Common Operations. State Trends in Emergency Department Visits by Payer. These ED statistics are a supplement to the existing State-level inpatient stay trends by expected payer. Opioid-Related Hospital Use, National and State. This topic reports population-based rates of opioid-related hospital use by discharge quarter. Trends are available for inpatient stays and emergency department visits by expected payer. Neonatal Abstinence Syndrome (NAS), National and State. This new topic provides trends in NAS-related newborn hospitalizations at the national and State level. Rates of NAS per 1,000 newborn hospitalizations are presented overall as well as by sex, expected payer, community-level income, and patient location. Hurricane Impact on Hospital Use. This new topic provides historical inpatient and treat-and-release emergency department utilization information from 11 U.S. hurricanes between 2005 and 2017. 
Supported by the Patient-Centered Outcomes Research Trust Fund (PCORTF) and created in collaboration with the Office of the Assistant Secretary for Planning and Evaluation (ASPE) and the Office of the Assistant Secretary for Preparedness and Response (ASPR), this topic is designed to help HCUP users understand medical care utilization during and after past hurricanes to assist in the preparation for and deployment of medical services in future disasters. HCUP Software The HCUP software can be applied to HCUP databases, to systematically create new data elements from existing data, thereby enhancing a researcher's ability to conduct analyses. While designed to be used with HCUP databases, the analytic tools may be applied to other administrative databases. Clinical Classifications Software (CCS) The Clinical Classifications Software (CCS) provides a method for classifying diagnoses or procedures into clinically meaningful categories. These can be used for aggregate statistical reporting of a variety of topics, such as identifying populations for disease- or procedure-specific studies or developing statistical reports providing information (i.e., charges and length of stay) about relatively specific conditions. Four versions of the CCS Software are available: The Clinical Classifications Software Refined (CCSR) for ICD-10-CM aggregates more than 70,000 ICD-10-CM diagnosis codes into a manageable number of clinically meaningful categories. The categories are organized across 21 body systems, which generally follow the structure of the ICD-10-CM diagnosis chapters. The Clinical Classification Software (CCS) for ICD-10-PCS procedures (beta version) categorizes more than 77,000 ICD-10-PCS procedure codes into clinically meaningful categories and can be used to identify populations for procedure-specific studies or to develop statistical reports about relatively specific procedures. Clinical Classifications Software (CCS) for ICD-9-CM is based on the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), a uniform and standardized coding system. The CCS for ICD-9-CM provides a diagnosis and procedure categorization scheme that can be used to classify more than 14,000 ICD-9-CM diagnosis and 3,900 procedure codes into a manageable number of clinically meaningful categories. The CCS for ICD-9-CM was updated annually starting January 1980 through September 30, 2015. ICD-9-CM codes were frozen in preparation for ICD-10-CM implementation and regular maintenance of the codes has been suspended. Clinical Classifications Software (CCS) for Services and Procedures provides users with a method of classifying Current Procedural Terminology (CPT®) codes and Healthcare Common Procedure Coding System (HCPCS) codes into 244 clinically meaningful procedure categories. More than 9,000 CPT/HCPCS codes and 6,000 HCPCs codes are accounted for. The CCS versions and their user guides are available for download from the HCUP-US website: https://www.hcup-us.ahrq.gov/tools_software.jsp. Chronic Condition Indicator The Chronic Condition Indicator (CCI) facilitates health services research on diagnoses using administrative data. The CCI tools categorize ICD-9-CM/ICD-10-CM diagnoses codes into two classifications: chronic or not chronic. 
A chronic condition is defined as a condition that lasts 12 months or longer and meets one or both of the following tests: (a) it places limitations on self-care, independent living, and social interactions; and (b) it results in the need for ongoing intervention with medical products, services, and special equipment. Two versions of the CCI software are available, CCI for ICD-9-CM and CCI for ICD-10-CM (beta version). The ICD-9-CM CCI was updated annually and is valid for codes from January 1, 1980, through September 30, 2015. The ICD-10-CM CCI is updated annually and is valid for codes from October 1, 2015, forward. The CCI Software is available for download on the HCUP-US website: https://www.hcup-us.ahrq.gov/tools_software.jsp. Elixhauser Comorbidity Software Elixhauser Comorbidity Software assigns variables that identify comorbidities in hospital discharge records using ICD-9-CM or ICD-10-CM diagnosis coding. Two versions of the Elixhauser Comorbidity Software are available: Elixhauser Comorbidity Software for ICD-10-CM (beta version) and Elixhauser Comorbidity Software for ICD-9-CM. The Elixhauser Software for ICD-9-CM was updated annually from January 1, 1980, through September 30, 2015. The Elixhauser Comorbidity Software for ICD-10-CM (beta version) is updated annually and based on the ICD-10-CM and MS-DRG codes that are valid through September 30 of the designated fiscal year after October 1, 2015. The Elixhauser Comorbidity Software is available for download on the HCUP-US website: https://www.hcup-us.ahrq.gov/tools_software.jsp. Procedure Classes Procedure Classes facilitate research on hospital services using administrative data by identifying whether an ICD-9-CM or ICD-10-PCS procedure is (a) diagnostic or therapeutic, and (b) minor or major in terms of invasiveness and/or resource use. There are two versions of Procedure Classes tools, Procedure Classes for ICD-9-CM and Procedure Classes for ICD-10-PCS (beta version). The Procedure Classes can be used to categorize procedure codes into one of four broad categories: minor diagnostic, minor therapeutic, major diagnostic, and major therapeutic. The Procedure Classes for ICD-9-CM were updated annually from January 1, 1980, through September 30, 2015. The Procedure Classes for ICD-10-PCS (beta version) are updated annually and valid for codes from October 1, 2015, forward. Utilization Flags Utilization Flags combine information from Uniform Billing (UB-04) revenue codes and ICD-9-CM or ICD-10-PCS procedure codes to create flags (or indicators) of utilization of services rendered in healthcare settings such as hospitals, emergency departments, and ambulatory surgery centers. The Utilization Flags can be used to study a broad range of services, including simple diagnostic tests and resource-intense procedures, such as use of intensive care units. They can also be used to more reliably examine utilization of diagnostic and therapeutic services. There are two types of Utilization Flags, Utilization Flags for ICD-9-CM and Utilization Flags for ICD-10-CM/PCS (beta version). The Utilization Flags for ICD-9-CM were updated annually from January 1, 2003, through September 30, 2015. The Utilization Flags for ICD-10-CM/PCS (beta version) are updated annually and valid for codes from October 1, 2015, forward. 
The Utilization Flags are available for download from the HCUP-US website: https://www.hcup-us.ahrq.gov/tools_software.jsp. Surgery Flags Surgery Flag Software classifies procedures and encounters in ICD-9-CM or CPT-based inpatient and ambulatory surgery into two types of surgical categories: NARROW and BROAD. NARROW surgery is based on a narrow, targeted, and restrictive definition and includes invasive surgical procedures. An invasive therapeutic surgical procedure involves incision, excision, manipulation, or suturing of tissue that penetrates or breaks the skin; typically requires use of an operating room; and requires regional anesthesia, general anesthesia, or sedation to control pain. BROAD surgery includes procedures that fall under the NARROW category but adds less invasive therapeutic surgeries and diagnostic procedures often performed in surgical settings. Users must agree to a license agreement with the American Medical Association to use the Surgery Flags before accessing the software. The Surgery Flags are available for download from the HCUP-US website: https://www.hcup-us.ahrq.gov/tools_software.jsp. AHRQ Quality Indicators (QIs) The AHRQ Quality Indicators (QIs) (https://www.qualityindicators.ahrq.gov/) are standardized, evidence-based measures of healthcare quality that can be used with readily available hospital inpatient administrative data to measure and track clinical performance and outcomes. The AHRQ QIs consist of four modules measuring various aspects of quality: Prevention Quality Indicators (PQIs) identify issues of access to outpatient care, including appropriate followup care after hospital discharge. More specifically, the PQIs use data from hospital discharges to identify admissions that might have been avoided through access to high-quality outpatient care. The PQIs are population based and adjusted for covariates. Inpatient Quality Indicators (IQIs) provide a perspective on quality of care inside hospitals, including: Inpatient mortality for surgical procedures and medical conditions; Utilization of procedures for which there are questions of overuse, underuse, and misuse; and Volume of procedures for which hospital procedure volume is an important indicator of performance. Patient Safety Indicators (PSIs) provide information on potentially avoidable safety events that represent opportunities for improvement in the delivery of care. More specifically, they focus on potential in-hospital complications and adverse events following surgeries, procedures, and childbirth. Pediatric Quality Indicators (PDIs) focus on potentially preventable complications and iatrogenic events for pediatric patients treated in hospitals and on preventable hospitalizations among pediatric patients, taking into account the special characteristics of the pediatric population. HCUP Supplemental Files The HCUP Supplemental Files augment applicable HCUP databases with additional data elements or analytically useful information that is not available when the HCUP databases are originally released. They cannot be used with other administrative databases. The HCUP Supplemental Files are available for download from the HCUP-US website: https://www.hcup-us.ahrq.gov/tools_software.jsp. 
Cost-to-Charge Ratio Files (CCR) The Cost-to-Charge Ratio (CCR) Files (https://www.hcup-us.ahrq.gov/db/state/costtocharge.jsp) are hospital-level files designed to convert the hospital total charge data to total cost estimates for services when merged with data elements exclusively in the HCUP NIS, KID, NRD, and SID. HCUP databases are limited to information on total hospital charges, which reflect the amount billed to the payer per patient encounter. Total charges do not reflect the actual cost of providing care or the payment received by the hospital for services provided. This total charge data can be converted into cost estimates using the CCR Files, which include hospital-wide values of the all-payer inpatient cost-to-charge ratio for nearly every hospital in the participating NIS, KID, NRD, and SID. The CCR Files are updated annually and available for the HCUP inpatient databases beginning with 2001 data. CCR Files for use with the HCUP emergency department databases (NEDS and SEDD) are under development. Hospital Market Structure (HMS) Files The Hospital Market Structure (HMS) Files (https://www.hcup-us.ahrq.gov/toolssoftware/hms/hms.jsp) are hospital-level files designed to supplement the data elements in the NIS, KID, and SID databases. The HMS Files contain various measures of hospital market competition. Hospital market definitions were based on hospital locations, and in some cases, patient ZIP Codes. Hospital locations were obtained from the American Hospital Association (AHA) Annual Survey Database, Area Resource File (ARF), HCUP Historical Urban/Rural – County (HURC) file, and ArcView GIS. Patient ZIP Codes were obtained from the SID. HMS Files are useful for performing empirical analyses that examine the effects of hospital competition on the cost, access, and quality of hospital services. The HCUP HMS Files are available for the 1997, 2000, 2003, 2006, and 2009 data years. HCUP Supplemental Variables for Revisit Analyses The HCUP Supplemental Variables for Revisit Analyses (https://www.hcup-us.ahrq.gov/toolssoftware/revisit/revisit.jsp) allow users to track sequential visits for a patient within a State and across facilities and hospital settings (inpatient, emergency department, and ambulatory surgery) while adhering to strict privacy guidelines. Users can use the available clinical information to determine if sequential visits are unrelated, an expected followup, complications from a previous treatment, or an unexpected revisit or rehospitalization. The supplemental files must be merged with the corresponding SID, SASD, or SEDD for any analysis. Beginning with 2009 data, the revisit variables are included in the Core file of the HCUP State Databases when possible. NIS and KID Trend Files The NIS-Trends (https://www.hcup-us.ahrq.gov/db/nation/nis/nistrends.jsp) and KID-Trends (https://www.hcup-us.ahrq.gov/db/nation/kid/kidtrends.jsp) files are available to help researchers conduct longitudinal analyses. They are discharge-level files that provide researchers with the trend weights and, in the case of the NIS-Trends, data elements that are consistently defined across data years. 
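As a brief illustration of how a hospital-level supplemental file such as the CCR is typically applied, the sketch below merges a cost-to-charge ratio onto discharge records and multiplies each stay's total charges by its hospital's ratio to estimate costs. The file layout and column names here are hypothetical; the actual CCR file structure and merge variables are documented on the HCUP-US website.

```python
import pandas as pd

# Hypothetical discharge records: one row per stay, with total charges billed
discharges = pd.DataFrame({
    "hospital_id": [101, 101, 202],
    "total_charges": [18_000.0, 4_500.0, 27_300.0],
})

# Hypothetical hospital-level cost-to-charge ratios: one row per hospital
ccr = pd.DataFrame({
    "hospital_id": [101, 202],
    "cost_to_charge_ratio": [0.42, 0.55],
})

# Merge the hospital-level ratio onto each discharge, then estimate cost
merged = discharges.merge(ccr, on="hospital_id", how="left")
merged["estimated_cost"] = merged["total_charges"] * merged["cost_to_charge_ratio"]
print(merged[["hospital_id", "total_charges", "estimated_cost"]])
```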
American Hospital Association (AHA) Linkage Files The American Hospital Association (AHA) Linkage Files (https://www.hcup-us.ahrq.gov/db/state/ahalinkage/aha_linkage.jsp) are hospital-level files that contain a small number of data elements that allow researchers to supplement the HCUP State Databases with information from the AHA Annual Survey Databases (https://www.ahadata.com/aha-annual-survey-database). The files are designed to support richer empirical analysis where hospital characteristics may be important factors. The files are specific to each State and data year, and linkage is possible only in States that allow the release of hospital identifiers. HCUP News and Reports HCUP produces material to report new findings based on HCUP data and to announce HCUP news. The HCUP eNews summarizes activities of the HCUP project quarterly. The HCUP Mailing List sends email updates on news, product releases, events, and the quarterly eNews to interested subscribers. HCUP Statistical Briefs provide healthcare statistics for various topics based on HCUP databases. HCUP Infographics present visual representations of data from the HCUP Statistical Brief series. Topics have included inpatient vs. outpatient surgeries in U.S. hospitals, neonatal hospital stays related to substance use, and characteristics of hospital stays involving malnutrition. HCUP Methods Series Reports offer methodological information on the HCUP databases and software tools. HCUP Findings-At-A-Glance provide snapshots covering a broad range of health policy issues related to hospital use and costs. See also Agency for Healthcare Research and Quality United States Department of Health and Human Services MONAHRQ International Statistical Classification of Diseases and Related Health Problems Medicine Patient safety Emergency Department Hospital Inpatient care References Zoorob RJ, Salemi JL, Mejia de Grubb MC, et al. A nationwide study of breast cancer, depression, and multimorbidity among hospitalized women and men in the United States. Breast Cancer Res Treat 2018 Nov 21. [Epub ahead of print] Zonfrillo MR, Spicer RS, Lawrence BA, et al. Incidence and costs of injuries to children and adults in the United States. Inj Epidemiol 2018 Oct 8;5(1):37. Yuan S, Ashmore S, Chaudhary KR, et al. The role of socioeconomic status in individuals that leave against medical advice. S D Med 2018 May;71(5):214-219. Basu J, Hanchate A, Koroukian S. Multiple chronic conditions and disparities in 30-day hospital readmissions among nonelderly adults. J Ambul Care Manage 2018 Oct/Dec;41(4):262-273. Vadagam P, Kamal KM. Hospitalization costs of cystic fibrosis in the United States: a retrospective analysis. Hosp Pract (1995) 2018 Oct;46(4):203-213. Epub 2018 Aug 9. Bath J, Dombrovskiy VY, Vogel TR. Impact of Patient Safety Indicators on readmission after abdominal aortic surgery. J Vasc Nurs 2018 Dec;36(4):189-195. Epub 2018 Oct 2. Nguyen MC, Moffatt-Bruce SD, Van Buren A. Daily review of AHRQ patient safety indicators has important impact on value-based purchasing, reimbursement, and performance scores. Surgery 2018 Mar;163(3):542-546. Epub 2017 Dec 21. Engineer LD, Winters BD, Weston CM, et al. Hospital characteristics and the Agency for Healthcare Research and Quality Inpatient Quality Indicators: a systematic review. J Healthc Qual 2016 Sep-Oct;38(5):304-313. Al-Qurayshi Z, Baker SM, Garstka M, et al. Post-operative infections: trends in distribution, risk factors, and clinical and economic burdens. Surg Infect (Larchmt) 2018 Oct;19(7):717-722. Epub 2018 Sep 5. 
Chan L, Chauhan K, Poojary P, et al. National estimates of 30-day unplanned readmissions of patients on maintenance hemodialysis. Clin J Am Soc Nephrol 2017 Oct 6;12(10):1652-1662. Epub 2017 Sep 28. Gardner J, Sexton KW, Taylor J, et al. Defining severe traumatic brain injury readmission rates and reasons in a rural state. Trauma Surg Acute Care Open 2018 Sep 8;3(1):e000186. eCollection 2018. Moore BJ, White S, Washington R, et al. Identifying increased risk of readmission and in-hospital mortality using hospital administrative data: the AHRQ Elixhauser Comorbidity Index. Med Care 2017 Jul;55(7):698-705. Harris CR, Osterberg EC, Sanford T, et al. National variation in urethroplasty cost and predictors of extreme cost: a cost analysis with policy implications. Urology 2016 Aug;94:246-54. Epub 2016 Apr 20. Cerullo M, Chen SY, Dillhoff M, et al. Variation in markup of general surgical procedures by hospital market concentration. Am J Surg 2018 Apr;215(4):549-556. Epub 2017 Oct 23. United States Department of Health and Human Services Medical databases Databases in the United States
3960515
https://en.wikipedia.org/wiki/Jumbo%20frame
Jumbo frame
In computer networking, jumbo frames are Ethernet frames with more than 1500 bytes of payload, the limit set by the IEEE 802.3 standard. Commonly, jumbo frames can carry up to 9000 bytes of payload, but smaller and larger variations exist and some care must be taken using the term. Many Gigabit Ethernet switches and Gigabit Ethernet network interface controllers and some Fast Ethernet switches and Fast Ethernet network interface cards can support jumbo frames. Inception Each Ethernet frame must be processed as it passes through the network. Processing the contents of a single large frame is preferable to processing the same content broken up into smaller frames, as this makes better use of available CPU time by reducing interrupts. This also minimizes the overhead byte count and reduces the number of frames needing to be processed. This is analogous to physically mailing a packet of papers instead of several single envelopes with one sheet each, saving envelopes and cutting sorting time. Jumbo frames gained initial prominence in 1998, when Alteon WebSystems introduced them in their ACEnic Gigabit Ethernet adapters. Many other vendors also adopted the size; however, jumbo frames are not part of the official IEEE 802.3 Ethernet standard. Adoption Jumbo frames have the potential to reduce overheads and CPU cycles and have a positive effect on end-to-end TCP performance. The presence of jumbo frames may have an adverse effect on network latency, especially on low-bandwidth links. The frame size used by an end-to-end connection is typically limited by the lowest frame size in intermediate links. 802.5 Token Ring can support frames with a 4464-byte MTU, FDDI can transport 4352-byte, ATM 9180-byte and 802.11 can transport 7935-byte MTUs. The IEEE 802.3 Ethernet standard originally mandated support for 1500-byte MTU frames, 1518 byte total frame size (1522 byte with the optional IEEE 802.1Q VLAN/QoS tag). The IEEE 802.3as update grandfathered in multiple common headers, trailers, and encapsulations by creating the concept of an envelope where up to 482 bytes of header and trailer could be included, and the largest IEEE 802.3 supported Ethernet frame became 2000 bytes. The use of 9000 bytes as preferred payload size for jumbo frames arose from discussions within the Joint Engineering Team of Internet2 and the U.S. federal government networks. Their recommendation has been adopted by all other national research and education networks. In order to meet this mandatory purchasing criterion, manufacturers have in turn adopted 9000 bytes as the conventional MTU size, with a jumbo frame size of at least 9018/9022 bytes (without/with IEEE 802.1Q field). Most Ethernet equipment can support jumbo frames up to 9216 bytes. IEEE 802.1AB-2009 and IEEE 802.3bc-2009 added LLDP discovery to standard Ethernet for maximum frame length (TLV subtype 4). It allows frame length detection on a port by a two-octet field. As of IEEE 802.3-2015, allowed values are 1518 (only basic frames), 1522 (802.1Q-tagged frames), and 2000 (multi-tagged, envelope frames). Error detection Simple additive checksums as contained within the UDP and TCP transports have proven ineffective at detecting bus-specific bit errors because with simple summations, these errors tend to self-cancel. Before the adoption of RFC 3309, testing with simulated error injection against real data showed that as much as 2% of these errors were not being detected. 
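As a small demonstration of why additive checksums miss certain error patterns, the sketch below computes a 16-bit ones'-complement Internet checksum (the style used by TCP and UDP) and the Ethernet CRC-32 over a message and over a corrupted copy in which two 16-bit words have been swapped: the additive checksum cannot see the change, while the CRC does. This is an illustrative sketch only, not part of any standard's reference implementation.

```python
import zlib

def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum in the style of RFC 1071 (TCP/UDP/IP)."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += int.from_bytes(data[i:i + 2], "big")
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return (~total) & 0xFFFF

original = bytes(range(32))
# Corrupt the message by swapping two aligned 16-bit words -- an error pattern
# an additive checksum cannot detect, because addition is order-independent.
corrupted = original[:4] + original[6:8] + original[4:6] + original[8:]

print(internet_checksum(original) == internet_checksum(corrupted))  # True  (error undetected)
print(zlib.crc32(original) == zlib.crc32(corrupted))                # False (error detected)
```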
Larger frames are more likely to suffer undetected errors with the simple CRC32 error detection used in Ethernet frames – as packet size increases it becomes more likely that multiple errors cancel each other out. One IETF approach for adopting jumbo frames avoids data integrity reduction of the service data unit by performing an extra CRC at the next network protocol layer above Ethernet. The Stream Control Transmission Protocol (SCTP) transport (RFC 4960) and iSCSI (RFC 7143) use the Castagnoli CRC polynomial. The Castagnoli polynomial 0x1EDC6F41 achieves the Hamming distance HD=6 beyond one Ethernet MTU (to a 16,360-bit data word length) and HD=4 to 114,663 bits, which is more than 9 times the length of an Ethernet MTU. This gives two additional bits of error detection ability at MTU-sized data words compared to the standard Ethernet CRC polynomial while not sacrificing HD=4 capability for data word sizes up to and beyond 72 kbits. Support for the Castagnoli CRC polynomial in a general-purpose transport designed to handle data chunks (SCTP) and in a TCP-based transport designed to carry SCSI data (iSCSI) therefore provides improved error detection rates despite the use of jumbo frames, where an increase of the Ethernet MTU would otherwise have resulted in a significant reduction in error detection. Configuration Some vendors include the frame headers in the configured size while others do not; that is, the setting may refer either to the maximum frame size (the maximum layer-2 packet size, including frame headers) or to the maximum transmission unit (the maximum layer-3 packet size, excluding frame headers). Different values may therefore need to be configured in equipment from different vendors to make the settings match. A mixture of devices configured for jumbo frames and devices not configured for jumbo frames on a network has the potential to cause network performance issues. Bandwidth efficiency Jumbo frames can increase the efficiency of Ethernet and network processing in hosts by reducing the protocol overhead. For example, with TCP over IPv4 and no header options, a 1500-byte MTU carries roughly 1460 bytes of TCP payload in about 1538 bytes on the wire (roughly 95% efficiency), while a 9000-byte MTU carries roughly 8960 bytes of payload in about 9038 bytes on the wire (roughly 99% efficiency); a worked sketch of this arithmetic follows the Baby giant frames section below. The processing overhead of the hosts can potentially decrease by the ratio of the payload sizes (approximately six times improvement in this example). Whether this is significant depends on how packets are processed in the host. Hosts that use a TCP offload engine will receive less benefit than hosts that process frames with their CPU. The relative scalability of network data throughput as a function of packet transfer rates is related in a complex manner to payload size per packet. Generally, as line bit rate increases, the packet payload size should increase in direct proportion to maintain equivalent timing parameters. This however implies the scaling of numerous intermediating logic circuits along the network path to accommodate the maximum frame size required. Baby giant frames Baby giant or baby jumbo frames are Ethernet frames that are only slightly larger than allowed by the IEEE Ethernet standards. Baby giant frames are, for example, required for IP/MPLS over Ethernet to deliver Ethernet services with standard 1500-byte payloads. Most implementations require non-jumbo user frames to be encapsulated in the MPLS frame format, which in turn may be encapsulated in a proper Ethernet frame format with an EtherType value of 0x8847 or 0x8848. The increased overhead of the extra MPLS and Ethernet headers means that support for frames of up to 1600 bytes is required in Carrier Ethernet networks. 
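Revisiting the bandwidth-efficiency arithmetic above, the short sketch below computes the share of on-the-wire bytes that carry TCP payload for a standard 1500-byte MTU and a 9000-byte jumbo MTU. It assumes TCP over IPv4 with no header options and counts the Ethernet preamble, MAC header, FCS, and inter-frame gap as overhead; these are conventional figures, and real links may differ (VLAN tags, IP or TCP options, and so on).

```python
# Per-frame Ethernet overhead on the wire: preamble + SFD (8) + MAC header (14)
# + FCS (4) + inter-frame gap (12) = 38 bytes. IPv4 and TCP headers without
# options consume another 20 + 20 = 40 bytes out of the MTU itself.
ETHERNET_OVERHEAD = 8 + 14 + 4 + 12
IP_TCP_HEADERS = 20 + 20

def tcp_payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_HEADERS        # TCP payload bytes per frame
    wire_bytes = mtu + ETHERNET_OVERHEAD  # bytes occupied on the wire per frame
    return payload / wire_bytes

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {tcp_payload_efficiency(mtu):.1%} of wire bytes carry TCP payload")
# MTU 1500: 94.9%; MTU 9000: 99.1%
```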
Super jumbo frames Super jumbo frames (SJFs) are frames that have a payload size over 9000 bytes. As it has been a relatively difficult, and somewhat lengthy, process to increase the path MTU of high-performance national research and education networks from 1500 bytes to 9000 bytes or so, a subsequent increase, possibly to 64,000 bytes, is under consideration. The main factor involved is an increase in the available memory buffer size in every intervening persistence mechanism along the path. Another important factor to consider is the further reduction of CRC32's effectiveness in detecting errors within even larger frame sizes. Alternate approach Large send offload and large receive offload move per-frame processing out of the host CPU's per-packet path (typically into the network interface hardware or driver), making CPU load largely independent of frame size. They are another way to address the per-packet overhead that jumbo frames were designed to reduce. Jumbo frames are still useful from a bandwidth perspective, as they reduce the amount of bandwidth used for non-data overhead. See also Jumbogram, large packets for IPv6 Notes References External links Jumbo Frames – Where to use it? Jumbo frames? Yes!, by Selina Lo, Alteon Networks, 2/23/1998 in NetworkWorld Pushing up the Internet MTU IEEE 802.3as Frame Expansion Task Force 32-Bit Cyclic Redundancy Codes for Internet Applications Need To Know: Jumbo Frames in Small Networks Jumbo frames in Arch Linux wiki Network architecture Packets (information technology)
42320190
https://en.wikipedia.org/wiki/Caching%20SAN%20adapter
Caching SAN adapter
In an enterprise server, a Caching SAN Adapter is a host bus adapter (HBA) for storage area network (SAN) connectivity that accelerates performance by transparently storing duplicate data so that future requests for that data can be serviced faster than retrieving the data from the source. A caching SAN adapter is used to accelerate the performance of applications across multiple clustered or virtualized servers and uses DRAM, NAND Flash or other memory technologies as the cache. The key requirement for the memory technology is that it is faster than the media storing the original copy of the data to ensure performance acceleration is achieved. A caching SAN adapter's cached data is not captive to the server that hosts the adapter, which enables clustered enterprise servers to share the cache for fault tolerance and application performance acceleration. Server application transparency is a key attribute of a caching SAN adapter as it ensures caching benefits without additional changes to the operating system and application stacks that can adversely impact interoperability and latency. Caching SAN Adapters are a newer, hybrid approach to server-based caching that addresses the drawbacks of traditional implementations. Rather than creating a discrete captive cache for each server, a Caching SAN Adapter uses a cache that is integrated with a Host Bus Adapter. The adapter uses a cache-coherent design that relies on the existing SAN infrastructure to create a shared cache resource distributed over multiple physical servers. This capability eliminates the single-server limitation for caching and provides the performance benefits of cached-data acceleration to the high I/O demands of clustered applications and highly virtualized data center environments. A Caching SAN Adapter incorporates a class of host-based, intelligent I/O optimization engines that provide integrated storage network connectivity, storage capacity, and the embedded processing required to make all cache management entirely transparent to the host. The only host-resident software required for operation is a standard host operating system (OS) device driver. A Caching SAN Adapter appears to the host as a standard Host Bus Adapter and uses a common Host Bus Adapter driver. Caching SAN Adapters deliver something beyond other server-based caching implementations: the ability to provide cluster caching for SAN adapters and to share their caches between servers. Clustering Caching SAN Adapters creates a logical group that provides a single point of management and cooperates to maintain cache coherence, high availability, and allocation of cache resources. Unlike standard Host Bus Adapters, Caching SAN Adapters communicate with each other as both initiators and targets, using Fibre Channel or similar storage networking infrastructure. This communication allows the Caching SAN Adapter cluster to share and manage caches across multiple server nodes. This distributed cache model enables a single copy of cache data, which ensures coherent cache operation, maximizes the use of caching resources, simplifies the architecture, and increases scalability. History The caching SAN adapter in the enterprise server is a hybrid of two widely used technologies: the SAN HBA and cached memory. SAN HBAs provide server access to a high-speed network of consolidated storage devices. Cached memory has had two primary implementations: DRAM modules and NAND Flash in the form of a Solid State Drive (SSD). 
DRAM cache is used primarily at the CPU level and offers the highest levels of performance. Due to its high cost per gigabyte (GB) and volatility, DRAM cache has not been used extensively to accelerate larger data sets. SSD caching has gained popularity in recent years and typically comes in a drive, PCIe or other form factor. SSD cache offers an exceptional performance boost to server operations but is captive to the server in which it is contained. Typically, a single server SSD residing in a physical server is not sharable with other servers within a clustered environment. Furthermore, SSD caches lack transparency requiring caching software, special operating system drivers and other changes to make applications aware the cache is available for utilization. Future Caching SAN adapter platforms are available now that utilize server-based NAND flash technology as the cache and Fibre Channel as the server-to-storage interconnect. This configuration enables clustered servers to share cached data. There are new caching SAN adapter technologies in development that will expand utilization to other server-to-storage communication methodologies including Ethernet-based iSCSI and Fibre Channel over Ethernet (FCoE). See also Cache Host Bus Adapter Storage Area Network References Data Center Knowledge: A Distributed Caching Approach to Server-based Storage Acceleration Network World: Where to deploy flash-based SSD caching to optimize application acceleration Storage Switzerland: Overcoming the Weaknesses of Server-Side Flash Storage Networking Industry Association: Architecting Flash-based Caching for Storage Acceleration External links Taneja Group: Solving Caching Issues in Virtualized and Clustered Environments Enterprise Storage Group (ESG) Validates Caching SAN Adapter Performance The Register: 3 Words - Caching SAN Adapter. Just blew your mind, didn't we? Storage Review: QLogic Announces FabricCache Server-Based SSD Caching The SSD Review: Server Side SSD Caching Achieves A New Summit For SAN Storage SSG-Now: Mount Rainier project simplifies and advances server-side cache - Flash memory cache can be transparently shared across multiple physical servers Storage area networks
2615837
https://en.wikipedia.org/wiki/MasPar
MasPar
MasPar Computer Corporation was a minisupercomputer vendor that was founded in 1987 by Jeff Kalb. The company was based in Sunnyvale, California. History While Kalb was the vice-president of the division of Digital Equipment Corporation (DEC) that built integrated circuits, some researchers in that division were building a supercomputer based on the Goodyear MPP (massively parallel processor) supercomputer. The DEC researchers enhanced the architecture by making the processor elements 4-bit instead of 1-bit, increasing the connectivity of each processor element from 4 neighbors to 8, and adding a global interconnect for all of the processing elements, a triple-redundant switch that was easier to implement than a full crossbar switch. After Digital decided not to commercialize the research project, Kalb decided to start a company to sell this minisupercomputer. In 1990, the first-generation product, the MP-1, was delivered. In 1992, the follow-on MP-2 was shipped. The company shipped more than 200 systems. In the pages of Datamation, MasPar and nCUBE criticized the open government support, by DARPA, of competitors Intel, with its hypercube Personal SuperComputers (iPSC), and Thinking Machines, with its Connection Machine. Samples of MasPar MPs, from the NASA Goddard Space Flight Center, are in storage at the Computer History Museum. MasPar offered a family of SIMD machines, second-sourced by DEC. The processor units were proprietary. There was no MP-3. MasPar exited the computer hardware business in June 1996, halting all hardware development and transforming itself into a new data mining software company called NeoVista Software. NeoVista was acquired by Accrue Software in 1999, which in turn sold the division to JDA Software in 2001. Hardware MasPar was unique in being a manufacturer of SIMD supercomputers (as opposed to vector machines). In this approach, a collection of ALUs listens to a program broadcast from a central source. The ALUs can do their own data fetch, but are all under the control of a central Array Control Unit. There is a central clock. The emphasis is on communications efficiency and low latency. The MasPar architecture is designed to scale and to balance processing, memory, and communication. The MasPar MP-1 PE and the later, binary-compatible MasPar MP-2 PE are full-custom CMOS chips, designed in-house and fabricated by various vendors such as HP or TI. The Array Control Unit (ACU) handles instruction fetch. It is a load-store architecture. The MasPar architecture is Harvard in a broad sense. The ACU implements a microcoded instruction fetch, but achieves a RISC-like 1 instruction per clock. The arithmetic units, ALUs with data fetch capability, are implemented 32 to a chip. Each ALU is connected in a nearest-neighbor fashion to 8 others. The edge connections are brought off-chip. In this scheme, the perimeter can be wrapped into a torus. Up to 16,384 units can be connected within the confines of a cabinet. A global router, essentially a crossbar switch, provides external I/O to the processor array. The MP-2 PE chip contains 32 processor elements, each a full 32-bit ALU with floating point, registers, and a barrel shifter. Only the instruction fetch feature is removed and placed in the ACU. The PE design is literally replicated 32 times on the chip. The chip is designed to interface to DRAM, to other processor array chips, and to communication router chips. Each ALU, called a PE slice, contains sixty-four 32-bit registers that are used for both integer and floating point. 
The registers are both bit and byte addressable. The floating point unit handles single precision and double precision arithmetic on IEEE format numbers. Each PE slice contains two registers for data memory address, and the data. Each PE also has two one-bit serial ports, one for inbound and one for outbound communication to its nearest neighbor. The direction of communication is controlled globally. The PEs also have inbound and outbound paths to a global router for I/O. A broadcast port allows a single instance of data to be "promoted" to parallel data. Alternately, global data can be 'or-ed' to a scalar result. The serial links support 1 Mbyte/s bit-serial communication that allows coordinated register-register communication between processors. Each processor has its own local memory, implemented in DRAM. No internal memory is included on the processors. Microcoded instruction decode is used. The 32 PEs on a chip are clustered into two groups sharing a common memory interface, or M-machine, for access. A global scoreboard keeps track of memory and register usage. The path to memory is 16 bits wide. Both big and little endian formats are supported. Each processor has its own 64 Kbyte of memory. Both direct and indirect data memory addressing are supported. The chip is implemented in 1.0-micrometre, two-level, metal CMOS, dissipates 0.8 watt, and is packaged in a 208-pin PQFP. A relatively low clock rate of 12.5 MHz is used. The Maspar machines are front ended by a host machine, usually a VAX. They are accessed by extensions to Fortran and C. Full IEEE single- and double-precision floating point are supported. There is no cache for the ALUs. Cache is not required, due to the memory interface operating at commensurate speed with the ALU data accesses. The ALUs do not implement memory management for data memory. The ACU uses demand paged virtual memory for the instruction memory. See also Goodyear MPP ICL DAP Thinking Machines Corporation Parsytec SUPRENUM References External links Ian Kaplan's history of venture capital American companies established in 1987 American companies disestablished in 1999 Companies based in Sunnyvale, California Computer companies established in 1987 Computer companies disestablished in 1999 Defunct computer companies of the United States Defunct computer hardware companies Massively parallel computers Parallel computing Serial computers SIMD computing Supercomputers
4328138
https://en.wikipedia.org/wiki/Jmol
Jmol
Jmol is computer software for molecular modelling of chemical structures in three dimensions. Jmol returns a 3D representation of a molecule that may be used as a teaching tool or for research, e.g., in chemistry and biochemistry. It is written in the programming language Java, so it can run on the operating systems Windows, macOS, Linux, and Unix, if Java is installed. It is free and open-source software released under a GNU Lesser General Public License (LGPL) version 2.0. A standalone application and a software development kit (SDK) exist that can be integrated into other Java applications, such as Bioclipse and Taverna. A popular feature is an applet that can be integrated into web pages to display molecules in a variety of ways. For example, molecules can be displayed as ball-and-stick models, space-filling models, ribbon diagrams, etc. Jmol supports a wide range of chemical file formats, including Protein Data Bank (pdb), Crystallographic Information File (cif), MDL Molfile (mol), and Chemical Markup Language (CML). There is also a JavaScript-only (HTML5) version, JSmol, that can be used on computers with no Java. The Jmol applet, among other abilities, offers an alternative to the Chime plug-in, which is no longer under active development. While Jmol has many features that Chime lacks, it does not claim to reproduce all Chime functions, most notably the Sculpt mode. Chime requires plug-in installation and Internet Explorer 6.0 or Firefox 2.0 on Microsoft Windows, or Netscape Communicator 4.8 on Mac OS 9. Jmol requires Java installation and operates on a wide variety of platforms. For example, Jmol is fully functional in Mozilla Firefox, Internet Explorer, Opera, Google Chrome, and Safari. Screenshots See also Chemistry Development Kit (CDK) Comparison of software for molecular mechanics modeling List of free and open-source software packages List of molecular graphics systems Molecular graphics Molecule editor Proteopedia PyMOL SAMSON SMILES References External links Wiki with listings of websites, wikis, and moodles Jmol extension for MediaWiki Biomodel Molview Chemistry software for Linux Free chemistry software Free software programmed in Java (programming language) Molecular modelling software
30982866
https://en.wikipedia.org/wiki/3D-Coat
3D-Coat
3D-Coat is a commercial digital sculpting program from Pilgway designed to create free-form organic and hard-surfaced 3D models from scratch, with tools that enable users to sculpt, add polygonal topology (automatically or manually), create UV maps (automatically or manually), texture the resulting models with natural painting tools, and render static images or animated "turntable" movies. The program can also be used to modify imported 3D models from a number of commercial 3D software products by means of plugins called Applinks. Imported models can be converted into voxel objects for further refinement and for adding high resolution detail, complete UV unwrapping and mapping, as well as adding textures for displacement, bump maps, specular and diffuse color maps. A live connection to a chosen external 3D application can be established through the Applink pipeline, allowing for the transfer of model and texture information. 3D-Coat specializes in voxel sculpting and polygonal sculpting using dynamic patch tessellation technology and polygonal sculpting tools. It includes "auto-retopology", a proprietary skinning algorithm. With minimal input from the user, this technology generates an accurate and functional polygonal mesh skin (composed primarily of quadrangles) over any voxel sculpture, which is the standard widely used in 3D production studios. Normally, this kind of polygonal topology must be painstakingly produced by hand. Features Texturing and physically based rendering Microvertex, per-pixel or ptex painting approaches Realtime physically based rendering viewport with HDRL Smart materials with set-up options Multiple paint layers, blending modes, layer groups Tight interaction with Photoshop Texture size up to 16K Ambient occlusion and curvature map calculation Toolset for painting tasks Digital sculpting Voxel (volumetric) sculpting No topological constraints Complex boolean operations, kit bashing workflow Traditional sculpting Adaptive dynamic tessellation (Live Clay) Dozens of sculpting brushes Boolean operations with crisp edges 3D printing export wizard Retopology tools Auto-retopology (Autopo) with user-defined edge loops Manual retopo tools Ability to import a reference mesh for retopology Ability to use current low-poly mesh as retopo mesh Retopo groups with color palette for better management Advanced baking settings dialog UV mapping Professional toolset for creating and editing UV-sets Native global uniform unwrapping algorithm Multiple UV-sets support and management Supports ABF, LSCM, and planar unwrapping algorithms Individual islands tweaking References External links 3D-Coat website Graphics software 3D modeling software for Linux Proprietary commercial software for Linux MacOS software Windows software
34384
https://en.wikipedia.org/wiki/Yukihiro%20Matsumoto
Yukihiro Matsumoto
Yukihiro Matsumoto, also known as Matz, is a Japanese computer scientist and software programmer best known as the chief designer of the Ruby programming language and its original reference implementation, Matz's Ruby Interpreter (MRI). His demeanor has brought about a motto in the Ruby community: "Matz is nice and so we are nice," commonly abbreviated as MINASWAN. Matsumoto is the Chief Architect of Ruby at Heroku, an online cloud platform-as-a-service in San Francisco. He is a fellow of Rakuten Institute of Technology, a research and development organisation in Rakuten Inc. He was appointed to the role of technical advisor for VASILY, Inc. starting in June 2014. Early life Born in Osaka Prefecture, Japan, he was raised in Tottori Prefecture from the age of four. According to an interview conducted by Japan Inc., he was a self-taught programmer until the end of high school. He graduated with an information science degree from the University of Tsukuba, where he was a member of Ikuo Nakata's research lab on programming languages and compilers. Work He works for the Japanese open source company Netlab.jp. Matsumoto is known as one of the open source evangelists in Japan. He has released several open source products, including cmail, the Emacs-based mail user agent, written entirely in Emacs Lisp. Ruby is his first piece of software that has become known outside Japan. Ruby Matsumoto released the first version of the Ruby programming language on 21 December 1995. He still leads the development of the language's reference implementation, MRI (for Matz's Ruby Interpreter). MRuby In April 2012, Matsumoto open-sourced his work on a new implementation of Ruby called mruby. It is a minimal implementation based on his virtual machine, called ritevm, and is designed to allow software developers to embed Ruby in other programs while keeping the memory footprint small and performance optimised. streem In December 2014, Matsumoto open-sourced his work on a new scripting language called streem, a concurrent language based on a programming model similar to shell, with influences from Ruby, Erlang and other functional programming languages. Treasure Data Matsumoto has been listed as an investor in Treasure Data; many of the company's programs such as Fluentd use Ruby as their primary language. Written works オブジェクト指向スクリプト言語 Ruby (Object-Oriented Scripting Language Ruby) Ruby in a Nutshell The Ruby Programming Language Recognition Matsumoto received the 2011 Award for the Advancement of Free Software from the Free Software Foundation (FSF) at the 2012 LibrePlanet conference at the University of Massachusetts Boston in Boston. Personal life Matsumoto is married and has four children. He is a member of The Church of Jesus Christ of Latter-day Saints, did standard service as a missionary and is now a counselor in the bishopric in his church ward. See also Ruby (programming language) Ruby MRI Ruby on Rails References External links Matz's web diary (and translated to English with Google Translate) Ruby Design Principles talk from IT Conversations The Ruby Programming Language – An introduction to the language by its own author Treating Code as an Essay – Matz's writeup for the book Beautiful Code, edited by Andy Oram, Greg Wilson, O'Reilly, 2007. 1965 births Free software programmers Japanese computer programmers Japanese computer scientists Japanese Latter Day Saints Living people People from Osaka Prefecture People from Tottori Prefecture Programming language designers Rakuten Ruby (programming language) University of Tsukuba alumni
4133123
https://en.wikipedia.org/wiki/Synapse%20Software
Synapse Software
Synapse Software Corporation (marketed as SynSoft in the UK) was an American video game development and publishing company founded in 1981. They initially focused on the Atari 8-bit family, then later developed for the Commodore 64 and other systems. Synapse was founded by Ihor Wolosenko and Ken Grant. The company was purchased by Broderbund in late 1984, and the Synapse label retired in 1985. Synapse is primarily known for a series of highly regarded action games such as Fort Apocalypse, Blue Max, The Pharaoh's Curse, and Shamus, including some unusual games not based on established concepts, like Necromancer and Alley Cat. The company also sold databases and a 6502 assembler, as well as a series of productivity applications which led to its downfall. Action games Synapse's first releases were for the Atari 8-bit computers, starting in 1981. Some of their early games were based on elements of contemporary arcade games. Protector (1981) uses elements of Defender, and Dodge Racer (1981) is a clone of Sega's Head On. Chicken (1982) has the same basic concept as Kaboom! for the Atari 2600, which itself is similar to the arcade game Avalanche. Nautilus (1982) features a split-screen so two players can play at once. In one-player mode the user controls a submarine, the Nautilus, in the lower screen while the computer controls a destroyer, the Colossus, in the upper screen. Similar to Atari's Combat, in two-player mode another player takes control of the destroyer. The same basic system was later re-used in other games, including Shadow World. Survivor (1982) supports up to four simultaneous players, a side effect of the Atari 400 and Atari 800 having four joystick ports. Each player commands a different part of a single spaceship. In single-player mode it operates like the ship in Asteroids, while in two player mode one drives and the other fires in any direction. In an interview with Antic, Wolosenko agreed that 1982's Shamus was the beginning of Synapse's reputation for quality products. Other high quality, better advertised games followed in 1982-3. These include Necromancer, Rainbow Walker, Blue Max, Fort Apocalypse, Alley Cat, and The Pharaoh's Curse. It was during this period that the company branched out and started supporting other systems, especially the Commodore 64, which became a major platform. Many of Synapse's games made their way to the UK as part of the initial wave of U.S. Gold-distributed imports (under the "Synsoft" imprint). Some were also converted to run on the more popular UK home computers, such as the Sinclair ZX Spectrum. Synapse was an early developer for the unsuccessful graphics-accelerated Mindset PC project and created the first-person game Vyper (1984). Ports and re-releases Synapse developed an official port of the arcade game Zaxxon for the Commodore 64. The Atari port was from Datasoft. Synapse also published Encounter! in 1983, which was originally released in the UK by Novagen Software without the exclamation mark in the name. Utilities and productivity software Although it is for their success with arcade-style games that they are primarily remembered, Synapse started out selling database software for the Atari 8-bit computers. In 1982 Synapse released SynAssembler, a 6502 development system which was much faster than Atari's offerings at the time. SynAssembler is a port of the S-C Assembler II Version 4.0 from the Apple II. The port was done by Steve Hales, who also wrote a number of games for Synapse. 
Synapse was developing a series of home productivity and financial applications: SynFile+ (written in Forth by Steve Ahlstrom and Dan Moore of The 4th Works), SynCalc, SynChron, SynComm, SynStock, and SynTrend. Interactive fiction Some time before their demise, Synapse had started work on interactive fiction games (or as they called them, "Electronic Novels"). The games were all based on a parser called "BTZ" (Better Than Zork), written by William Mataga and Steve Hales. Seven games were written using the system but only four released, the best-known being the critically acclaimed Mindwheel. Downfall By early 1984 Synapse was the largest third-party provider of Atari 8-bit software, but 65% of its sales came from the Commodore market. The company ran into financial difficulty. According to Steve Hales they had taken a calculated risk in developing the series of productivity applications and had entered into a collaboration with Atari, Inc. When Jack Tramiel purchased Atari's consumer division from Warner Communications, he refused to pay for the 40,000 units of software that had been shipped. Thrown into a cash crisis, Synapse was purchased by Broderbund Software in late 1984. Although the intention had been to keep Synapse going, the market had changed, and they were unable to make money from the electronic novels. Approximately one year after the takeover, Broderbund closed Synapse down. Software published by Synapse Games separated by a slash were sold together as "Double Plays," with one being a bonus game on the other side of the disk. Rainbow Walker was initially sold by itself, and the second game added later. Showcase Software At the 1983 Consumer Electronics Show, Synapse announced it would publish games for the VIC-20. These were a mix of original titles and ports and were sold under the name Showcase Software. Only some of the announced games were released. Astro Patrol Salmon Run - originally published in 1982 for the Atari 8-bit family through the Atari Program Exchange Squeeze References External links Scans and information on Synapse's Atari products Atari 8-bit family Defunct software companies of the United States Defunct video game companies of the United States 1984 disestablishments in California American companies disestablished in 1984
14286866
https://en.wikipedia.org/wiki/Virtual%20actor
Virtual actor
A virtual human, virtual persona, or digital clone is the creation or re-creation of a human being in image and voice using computer-generated imagery and sound, that is often indistinguishable from the real actor. The idea of a virtual actor was first portrayed in the 1981 film Looker, wherein models had their bodies scanned digitally to create 3D computer generated images of the models, and then animating said images for use in TV commercials. Two 1992 books used this concept: Fools by Pat Cadigan, and Et Tu, Babe by Mark Leyner. In general, virtual humans employed in movies are known as synthespians, virtual actors, vactors, cyberstars, or "silicentric" actors. There are several legal ramifications for the digital cloning of human actors, relating to copyright and personality rights. People who have already been digitally cloned as simulations include Bill Clinton, Marilyn Monroe, Fred Astaire, Ed Sullivan, Elvis Presley, Bruce Lee, Audrey Hepburn, Anna Marie Goddard, and George Burns. By 2002, Arnold Schwarzenegger, Jim Carrey, Kate Mulgrew, Michelle Pfeiffer, Denzel Washington, Gillian Anderson, and David Duchovny had all had their heads laser scanned to create digital computer models thereof. Early history Early computer-generated animated faces include the 1985 film Tony de Peltrie and the music video for Mick Jagger's song "Hard Woman" (from She's the Boss). The first actual human beings to be digitally duplicated were Marilyn Monroe and Humphrey Bogart in a March 1987 film "Rendez-vous in Montreal" created by Nadia Magnenat Thalmann and Daniel Thalmann for the 100th anniversary of the Engineering Institute of Canada. The film was created by six people over a year, and had Monroe and Bogart meeting in a café in Montreal, Quebec, Canada. The characters were rendered in three dimensions, and were capable of speaking, showing emotion, and shaking hands. In 1987, the Kleiser-Walczak Construction Company (now Synthespian Studios), founded by Jeff Kleiser and Diana Walczak coined the term "synthespian" and began its Synthespian ("synthetic thespian") Project, with the aim of creating "life-like figures based on the digital animation of clay models". In 1988, Tin Toy was the first entirely computer-generated movie to win an Academy Award (Best Animated Short Film). In the same year, Mike the Talking Head, an animated head whose facial expression and head posture were controlled in real time by a puppeteer using a custom-built controller, was developed by Silicon Graphics, and performed live at SIGGRAPH. In 1989, The Abyss, directed by James Cameron included a computer-generated face placed onto a watery pseudopod. In 1991, Terminator 2: Judgment Day, also directed by Cameron, confident in the abilities of computer-generated effects from his experience with The Abyss, included a mixture of synthetic actors with live animation, including computer models of Robert Patrick's face. The Abyss contained just one scene with photo-realistic computer graphics. Terminator 2: Judgment Day contained over forty shots throughout the film. In 1997, Industrial Light & Magic worked on creating a virtual actor that was a composite of the bodily parts of several real actors. 21st century By the 21st century, virtual actors had become a reality. The face of Brandon Lee, who had died partway through the shooting of The Crow in 1994, had been digitally superimposed over the top of a body-double in order to complete those parts of the movie that had yet to be filmed. 
By 2001, three-dimensional computer-generated realistic humans had been used in Final Fantasy: The Spirits Within, and by 2004, a synthetic Laurence Olivier co-starred in Sky Captain and the World of Tomorrow. Star Wars The Star Wars franchise has become particularly notable for its prominent usage of virtual actors, driven by a desire in recent entries to reuse characters that first appeared in the original trilogy during the late 20th century. The 2016 Star Wars Anthology film Rogue One: A Star Wars Story is a direct prequel to the 1977 film Star Wars: A New Hope, with the ending scene of Rogue One leading almost immediately into the opening scene of A New Hope. As such, Rogue One necessitated digital recreations of Peter Cushing in the role of Grand Moff Tarkin (played and voiced by Guy Henry), and Carrie Fisher as Princess Leia (played by Ingvild Deila), appearing the same as they did in A New Hope. Fisher's sole spoken line near the end of Rogue One was added using archival voice footage of her saying the word "hope". Cushing had died in 1994, while Fisher was not available to play Leia during production and died a few days after the film's release. Industrial Light & Magic created the special effects. Similarly, the 2020 second season of The Mandalorian briefly featured a digital recreation of Mark Hamill's character Luke Skywalker (played by an uncredited body double and voiced by Hamill) as portrayed in the 1983 film Return of the Jedi. Canonically, The Mandalorian's storyline takes place roughly five years after the events of Return of the Jedi. Legal issues Critics such as Stuart Klawans in the New York Times expressed worry about the loss of "the very thing that art was supposedly preserving: our point of contact with the irreplaceable, finite person". Even more problematic are the issues of copyright and personality rights. Actors have little legal control over a digital clone of themselves. In the United States, for instance, they must resort to database protection laws in order to exercise what control they have (The proposed Database and Collections of Information Misappropriation Act would strengthen such laws). An actor does not own the copyright on his or her digital clones unless he or she created them. Robert Patrick, for example, would not have any legal control over the liquid metal digital clone of himself that was created for Terminator 2: Judgment Day. The use of digital clones in the movie industry to replicate the acting performances of a cloned person represents a controversial aspect of these implications, as it may cause real actors to land fewer roles and put them at a disadvantage in contract negotiations, since a clone could always be used by the producers at potentially lower cost. It can also create career difficulties, since a clone could be used in roles that a real actor would not accept for various reasons. Both Tom Waits and Bette Midler have won actions for damages against people who employed their images in advertisements that they had refused to take part in themselves. In the USA, the use of a digital clone in advertisements is required to be accurate and truthful (under section 43(a) of the Lanham Act, which makes deliberate confusion unlawful). The use of a celebrity's image would be an implied endorsement. The United States District Court for the Southern District of New York held that an advertisement employing a Woody Allen impersonator would violate the Act unless it contained a disclaimer stating that Allen did not endorse the product. 
Other concerns include posthumous use of digital clones. Barbara Creed states that "Arnold's famous threat, 'I'll be back', may take on a new meaning". Even before Brandon Lee was digitally reanimated, the California Senate drew up the Astaire Bill, in response to lobbying from Fred Astaire's widow and the Screen Actors Guild, who were seeking to restrict the use of digital clones of Astaire. Movie studios opposed the legislation, and as of 2002 it had yet to be finalized and enacted. Several companies, including Virtual Celebrity Productions, have purchased the rights to create and use digital clones of various dead celebrities, such as Marlene Dietrich and Vincent Price. In fiction S1m0ne, a 2002 science fiction drama film written, produced and directed by Andrew Niccol, stars Al Pacino as a filmmaker who creates a computer-generated woman he can easily animate to play his film's central character. The Congress, a 2013 science fiction drama film written, produced and directed by Ari Folman and starring Robin Wright, deals with this issue extensively. In the Black Mirror episode "USS Callister," Callister Inc.'s CTO Robert Daly (portrayed by Jesse Plemons) uses DNA samples secretly obtained from items discarded by his Callister Inc. co-workers to create, through his Digital Clone Replicator, sentient digital clones who serve as his crew aboard the titular spaceship in his modded version of the MMORPG video game "Infinity", which Callister Inc. created. In Tokumei Sentai Go-Busters, the Metaloid Filmloid created evil clones of the Go-Busters. In Power Rangers Beast Morphers, they were adapted as the Evil Beast Morpher Ranger clones created by Filmloid's English-adapted equivalent, Gamertron. See also Avatar (computing) Live2D Timeline of computer animation in film and television Timeline of CGI in film and television Uncanny valley Virtual band Virtual cinematography Virtual humans Virtual influencer Virtual newscaster Virtual YouTuber References Further reading — a detailed discussion of the law, as it stood in 1997, relating to virtual humans and the rights held over them by real humans — how trademark law affects digital clones of celebrities who have trademarked their personæ 3D computer graphics Animated characters Anatomical simulation
1323039
https://en.wikipedia.org/wiki/1981%20in%20video%20games
1981 in video games
Fueled by the previous year's release of the colorful and appealing Pac-Man, the audience for arcade games in 1981 became much wider. Pac-Man-influenced maze games began appearing in arcades and on home systems. Pac-Man was again the year's highest-grossing video game for the second year in a row. Nintendo released the arcade game Donkey Kong, which defined the platformer genre. Other arcade hits released in 1981 include Defender, Scramble, Frogger, Galaga and Zaxxon. The year's best-selling home system was Nintendo's Game & Watch, for the second year in a row. Financial performance The arcade video game market in the US generates $4.8 billion in revenue. The home video game market in the US generates $1 billion in sales revenue, with Atari remaining the market leader. The home video game market in Europe is worth $200 million. Highest-grossing arcade games The year's highest-grossing video game was Pac-Man, whose arcade revenue was three times the box office revenue that the highest-grossing film, Star Wars (1977), had earned in five years. Japan In Japan, the following titles were the highest-grossing arcade video games of 1981, according to the annual Game Machine chart. United States In the United States, the following titles were the top three highest-grossing arcade games of 1981, according to the annual Cash Box and RePlay arcade charts. The following titles were the top-grossing arcade games of each month in 1981, according to the Play Meter and RePlay arcade charts. Best-selling home video games The following titles were the best-selling home video games in 1981. Best-selling home systems Events Magazines January – Atari computer magazine ANALOG Computing begins 9 years of publication. Most issues include at least one BASIC game and one machine language game. November – The British video game magazine Computer and Video Games (C&VG) starts. Winter – Arnie Katz and Bill Kunkel found Electronic Games, the first magazine on video games and generally recognized as the beginning of video game journalism. Business New companies: DK'Tronics, Games by Apollo, Gebelli Software, Imagic, Spectravision, Starpath, Synapse Software Defunct: APF Electronics Births May May 11 – JP Karliak: American actor, voice actor and comedian Notable releases Games Arcade February – Konami releases Scramble, the first side-scrolling shooter with forced scrolling and multiple distinct levels. February – Williams Electronics releases influential scrolling shooter Defender. July 9 – Nintendo releases Donkey Kong, which introduces the characters of Donkey Kong and Mario, and sets the template for the platform game genre. It is also one of the first video games with an integral storyline. September – Namco releases Galaga, the sequel to Galaxian, which becomes more popular than the original. June – Konami releases Frogger. October – Frogger is distributed in North America by Sega-Gremlin. October 18 – Sega releases Turbo, a racing video game that features a third-person perspective, rear-view racer format. October 21 – Williams Electronics releases Stargate, the sequel to Defender. October – Rock-Ola's Fantasy is the first game with a continue feature. October – Atari Inc. releases Tempest, one of the first games to use Atari's Color-QuadraScan vector display technology. 
It was also the first game to allow the player to choose their starting level (a system Atari dubbed "SkillStep"). November – Namco releases Bosconian, a multidirectional shooter with voice. December – Jump Bug, the first scrolling platform game, developed by Hoei/Coreland and Alpha Denshi, is distributed in North America by Rock-Ola under license from Sega. Midway releases fixed-shooter Gorf with multiple distinct stages. Taito releases abstract, twin-stick shooter Space Dungeon. Data East releases the vertically-scrolling isometric maze game Treasure Island. Console Atari, Inc.'s port of Asteroids is a major release for the Atari VCS, and is the first game for the system to use bank-switching. Mattel releases Utopia for Intellivision, one of the first city construction games and possibly the first sim game for a console. Computer June – Ultima is released, beginning a successful computer role-playing game series. September – Wizardry for the Apple II is the first in a computer role-playing franchise that eventually spans eight games. IBM and Microsoft include the game DONKEY.BAS with the IBM PC, arguably the first IBM PC compatible game. Muse Software releases the stealth action adventure Castle Wolfenstein for the Apple II. The Atari Program Exchange publishes Caverns of Mars, a vertically scrolling shooter for the Atari 8-bit family, and wargame Eastern Front (1941). APX also sells the source code to Eastern Front. Epyx releases turn-based monster game Crush, Crumble and Chomp!. BudgeCo's Raster Blaster sparks interest in more realistic Apple II pinball simulations and is the precursor to Pinball Construction Set. Infocom releases Zork II: The Wizard of Frobozz. Hardware Arcade July – the Namco Warp & Warp arcade system board is released. October – the VCO Object, the first arcade system board dedicated to pseudo-3D, sprite-scaling graphics, debuts with the release of Turbo. Computer March 5 – Timex releases the Sinclair Research ZX81 in the UK, which is significantly less expensive than other computers on the market. June – Texas Instruments releases the TI-99/4A, an update to 1979's TI-99/4. August 12 – the IBM Personal Computer is released for USD$1,565, with 16K RAM, no disk drives, and 4-color CGA graphics. Astrovision distributes the Bally Computer System after buying the rights from Bally/Midway. Acorn Computers Ltd releases the BBC Micro home computer. Commodore Business Machines releases the Commodore VIC-20 home computer. NEC releases the PC-8801 home computer in Japan. Handheld November – Nintendo's Game & Watch is released in Sweden. Microvision is discontinued. References Video games Video games by year
14018049
https://en.wikipedia.org/wiki/Influence%20of%20the%20IBM%20PC%20on%20the%20personal%20computer%20market
Influence of the IBM PC on the personal computer market
Following the introduction of the IBM Personal Computer, or IBM PC, many other personal computer architectures became extinct within just a few years. It led to a wave of IBM PC compatible systems being released. Before the IBM PC's introduction Before the IBM PC was introduced, the personal computer market was dominated by systems using the 6502 and Z80 8-bit microprocessors, such as the TRS 80, Commodore PET, and Apple II series, which used proprietary operating systems, and by computers running CP/M. After IBM introduced the IBM PC, it was not until 1984 that IBM PC and clones became the dominant computers. In 1983, Byte forecast that by 1990, IBM would command only 11% of business computer sales. Commodore was predicted to hold a slim lead in a highly competitive market, at 11.9%. Around 1978, several 16-bit CPUs became available. Examples included the Data General mN601, the Fairchild 9440, the Ferranti F100-L, the General Instrument CP1600 and CP1610, the National Semiconductor INS8900, Panafacom's MN1610, Texas Instruments' TMS9900, and, most notably, the Intel 8086. These new processors were expensive to incorporate in personal computers, as they used a 16-bit data bus and needed rare (and thus expensive) 16-bit peripheral and support chips. More than 50 new business-oriented personal computer systems came on the market in the year before IBM released the IBM PC. Very few of them used a 16- or 32-bit microprocessor, as 8-bit systems were generally believed by the vendors to be perfectly adequate, and the Intel 8086 was too expensive to use. Some of the main manufacturers selling 8-bit business systems during this period were: Acorn Computers Apple Computer Inc. Atari Inc. Commodore International Cromemco Digital Equipment Corporation Durango Systems Inc. Hewlett-Packard Intersystems Morrow Designs North Star Computers Ohio Scientific Olivetti Processor Technology Sharp South West Technical Products Corporation Tandy Corporation Zenith Data Systems/Heathkit The IBM PC On August 12, 1981, IBM released the IBM Personal Computer. One of the most far-reaching decisions made for IBM PC was to use an open architecture, leading to a large market for third party add-in boards and applications; but finally also to many competitors all creating "IBM-compatible" machines. The IBM PC used the then-new Intel 8088 processor. Like other 16-bit CPUs, it could access up to 1 megabyte of RAM, but it used an 8-bit-wide data bus to memory and peripherals. This design allowed use of the large, readily available, and relatively inexpensive family of 8-bit-compatible support chips. IBM decided to use the Intel 8088 after first considering the Motorola 68000 and the Intel 8086, because the other two were considered to be too powerful for their needs. Although already established rivals like Apple and Radio Shack had many advantages over the company new to microcomputers, IBM's reputation in business computing allowed the IBM PC architecture to take a substantial market share of business applications, and many small companies that sold IBM-compatible software or hardware rapidly grew in size and importance, including Tecmar, Quadram, AST Research, and Microsoft. As of mid-1982, three other mainframe and minicomputer companies sold microcomputers, but unlike IBM, Hewlett-Packard, Xerox, and Control Data Corporation chose the CP/M operating system. Many other companies made "business personal computers" using their own proprietary designs, some still using 8-bit microprocessors. 
The ones that used Intel x86 processors often used the generic, non-IBM-compatible specific version of MS-DOS or CP/M-86, just as 8-bit systems with an Intel 8080 compatible CPU normally used CP/M. The use of MS-DOS on non-IBM PC compatible systems Within a year of the IBM PC's introduction, Microsoft licensed MS-DOS to over 70 other companies. One of the first computers to achieve 100% PC compatibility was the Compaq Portable, released in November 1982; it remained the most compatible clone into 1984. Before the PC dominated the market, however, most systems were not clones of the IBM PC design, but had different internal designs, and ran Digital Research's CP/M. The IBM PC was difficult to obtain for several years after its introduction. Many makers of MS-DOS computers intentionally avoided full IBM compatibility because they expected that the market for what InfoWorld described as "ordinary PC clones" would decline. They feared the fate of companies that sold computers plug-compatible with IBM mainframes in the 1960s and 1970s—many of which went bankrupt after IBM changed specifications—and believed that a market existed for personal computers with a similar selection of software to the IBM PC, but with better hardware. While Microsoft used a sophisticated installer with its DOS programs like Multiplan that provided device drivers for many non IBM PC-compatible computers, most other software vendors did not. Columbia University discussed the difficulty of having Kermit support many different clones and MS-DOS computers. Peter Norton, who earlier had encouraged vendors to write software that ran on many different computers, by early 1985 admitted—after experiencing the difficulty of doing so while rewriting Norton Utilities—that "there's no practical way for most software creators to write generic software". Dealers found carrying multiple versions of software for clones of varying levels of compatibility to be difficult. To get the best results out of the 8088's modest performance, many popular software applications were written specifically for the IBM PC. The developers of these programs opted to write directly to the computer's (video) memory and peripheral chips, bypassing MS-DOS and the BIOS. For example, a program might directly update the video refresh memory, instead of using MS-DOS calls and device drivers to alter the appearance of the screen. Many notable software packages, such as the spreadsheet program Lotus 1-2-3, and Microsoft's Microsoft Flight Simulator 1.0, directly accessed the IBM PC's hardware, bypassing the BIOS, and therefore did not work on computers that were even trivially different from the IBM PC. This was especially common among PC games. As a result, the systems that were not fully IBM PC-compatible could not run this software, and quickly became obsolete. Rendered obsolete with them was the CP/M-inherited concept of OEM versions of MS-DOS meant to run (through BIOS calls) on non IBM-PC hardware. Cloning the PC BIOS In 1984, Phoenix Technologies began licensing its clone of the IBM PC BIOS. The Phoenix BIOS and competitors such as AMI BIOS made it possible for anyone to market a PC compatible computer, without having to develop a compatible BIOS like Compaq. 
Decline of the Intel 80186 Although based on the i8086 and enabling the creation of relatively low-cost x86-based systems, the Intel 80186 quickly lost appeal for x86-based PC builders because the supporting circuitry inside the Intel 80186 chip was incompatible with those used in the standard PC chipset as implemented by IBM. It was very rarely used in personal computers after 1982. Domination of the clones "Is it PC compatible?" In February 1984 BYTE described how "the personal computer market seems to be shadowed under a cloud of compatibility: the drive to be compatible with the IBM Personal Computer family has assumed near-fetish proportions", which it stated was "inevitable in the light of the phenomenal market acceptance of the IBM PC". The magazine cited the announcement by North Star in fall 1983 of its first PC-compatible microcomputer. Founded in 1976, North Star had long been successful with 8-bit S-100 bus products, and had introduced proprietary 16-bit products, but now the company acknowledged that the IBM PC had become a "standard", one which North Star needed to follow. BYTE described the announcement as representative of the great impact IBM had made on the industry: The magazine expressed concern that "IBM's burgeoning influence in the PC community is stifling innovation because so many other companies are mimicking Big Blue". Admitting that "it's what our dealers asked for", Kaypro also introduced the company's first IBM compatible that year. Tandy—which had once had as much as 60% of the personal-computer market, but had attempted to keep technical information secret to monopolize software and peripheral sales—also began selling non-proprietary computers; four years after its Jon Shirley predicted to InfoWorld that the new IBM PC's "major market would be IBM addicts", the magazine in 1985 similarly called the IBM compatibility of the Tandy 1000 "no small concession to Big Blue's dominating stranglehold" by a company that had been "struggling openly in the blood-soaked arena of personal computers". The 1000 was compatible with the PC but not compatible with its own Tandy 2000 MS-DOS computer. IBM's mainframe rivals, the BUNCH, introduced their own compatibles, and when Hewlett-Packard introduced the Vectra InfoWorld stated that the company was "responding to demands from its customers for full IBM PC compatibility". Mitch Kapor of Lotus Development Corporation said in 1984 that "either you have to be PC-compatible or very special". "Compatibility has proven to be the only safe path", Microsoft executive Jim Harris stated in 1985, while InfoWorld wrote that IBM's competitors were "whipped into conformity" with its designs, because of "the total failure of every company that tried to improve on the IBM PC". Customers only wanted to run PC applications like 1-2-3, and developers only cared about the massive PC installed base, so any non-compatible—no matter its technical superiority—from a company other than Apple failed for lack of customers and software. Compatibility became so important that Dave Winer joked that year (referring to the PC AT's incomplete compatibility with the IBM PC), "The only company that can introduce a machine that isn't PC compatible and survive is IBM". By 1985, the shortage of IBM PCs had ended, causing financial difficulties for many vendors of compatibles; nonetheless, Harris said, "The only ones that have done worse than the compatibles are the noncompatibles". 
The PC standard was similarly dominant in Europe, with Honeywell Bull, Olivetti, and Ericsson selling compatibles and software companies focusing on PC products. By the end of the year PC Magazine stated that even IBM could no longer introduce a rumored proprietary, non-compatible operating system. Noting that the company's unsuccessful PCjr's "cardinal sin was that it wasn't PC compatible", the magazine wrote that "backward compatibility [with the IBM PC] is the single largest concern of hardware and software developers. The user community is too large and demanding to accept radical changes or abandon solutions that have worked in the past." Within a few years of the introduction of fully compatible PC clones, almost all rival business personal computer systems, and alternate x86 using architectures, were gone from the market. Despite the inherent dangers of an industry based on a de facto "standard", a thriving PC clone industry emerged. The only other non-IBM PC-compatible systems that remained were those systems that were classified as home computers, such as the Apple II series, or business systems that offered features not available on the IBM PC, such as a high level of integration (e.g., bundled accounting and inventory) or fault-tolerance and multitasking and multi-user features. Wave of inexpensive clones Compaq's prices were comparable to IBM's, and the company emphasized its PC compatibles' features and quality to corporate customers. From mid-1985, what Compute! described as a "wave" of inexpensive clones from American and Asian companies caused prices to decline; by the end of 1986, the equivalent to a $1600 real IBM PC with 256K RAM and two disk drives cost as little as $600, lower than the price of the Apple IIc. Consumers began purchasing DOS computers for the home in large numbers; Tandy estimated that half of its 1000 sales went to homes, the new Leading Edge Model D comprised 1% of the US home-computer market that year, and toy and discount stores sold a clone manufactured by Hyundai, the Blue Chip PC, like a stereo—without a demonstrator model or salesman. Tandy and other inexpensive clones succeeded with consumers—who saw them as superior to lower-end game machines—where IBM failed two years earlier with the PCjr. They were as inexpensive as home computers of a few years earlier, and comparable in price to the Commodore Amiga, Atari ST, and Apple IIGS. Unlike the PCjr, clones were as fast as or faster than the IBM PC and highly compatible so users could bring work home; the large DOS software library reassured those worried about orphaned technology. Consumers used them for both spreadsheets and entertainment, with the former ability justifying buying a computer that could also perform the latter. PCs and compatibles also gained a significant share of the educational market, while longtime leader Apple lost share. At the January 1987 Consumer Electronics Show, both Commodore and Atari announced their own clones. By 1987 the PC industry was growing so quickly that the formerly business-only platform had become the largest and most important market for computer game companies, outselling games for the Apple II or Commodore 64. With the EGA video card, an inexpensive clone was better for games than the other computers. MS-DOS software was 77% of all personal computer software sold by dollar value in the third quarter of 1988, up 47% year over year. By 1989 80% of readers of Compute! 
owned DOS computers, and the magazine announced "greater emphasis on MS-DOS home computing". IBM's influence on the industry decreased, as competition increased and rivals introduced computers that improved on IBM's designs while maintaining compatibility. In 1986 the Compaq Deskpro 386 was the first computer based on the Intel 80386. In 1987 IBM unsuccessfully attempted to regain leadership of the market with the Personal System/2 line and proprietary MicroChannel Architecture. Clones conquer the home By 1990, Computer Gaming World told a reader complaining about the many reviews of PC games that "most companies are attempting to get their MS-DOS products out the door, first". It reported, in a US context, that MS-DOS comprised 65% of the computer-game market, with the Amiga at 10%; all other computers, including the Macintosh, were below 10% and declining. The Amiga and most others, such as the ST and various MSX2 computers, remained on the market until PC compatibles gained sufficient multimedia capabilities to compete with home computers. With the advent of inexpensive versions of the VGA video card and the Sound Blaster sound card (and its clones), most of the remaining home computers were driven from the market. The market in 1990 was more diverse outside the United States, but MS-DOS/Windows machines nonetheless came to dominate by the end of the decade. By 1995, other than the Macintosh, almost no new consumer-oriented systems were sold that were not IBM PC clones. The Macintosh originally used Motorola's 68000 family of processors, later migrating to the PowerPC architecture. Throughout the 1990s Apple would steadily transition the Macintosh platform from proprietary expansion interfaces to use standards from the PC world such as IDE, PCI and USB. In 2006, Apple converted the Macintosh to the Intel x86 architecture. Macintosh computers released between 2006 and 2020 were essentially IBM PC compatibles, capable of booting Microsoft Windows and running most IBM PC-compatible software, but still retained unique design elements to support Apple's Mac OS X operating system. In 2008, Sid Meier listed the IBM PC as one of the three most important innovations in the history of video games. Systems launched shortly after the IBM PC Shortly after the IBM PC was released, an obvious split appeared between systems that opted to use an x86-compatible processor, and those that chose another architecture. Almost all of the x86 systems provided a version of MS-DOS. The others used many different operating systems, although the Z80-based systems typically offered a version of CP/M. The common usage of MS-DOS unified the x86-based systems, promoting growth of the x86/MS-DOS "ecosystem". As the non-x86 architectures died off, and x86 systems standardized into fully IBM PC compatible clones, a market filled with dozens of different competing systems was reduced to a near-monoculture of x86-based, IBM PC compatible, MS-DOS systems. x86-based systems (using OEM-specific versions of MS-DOS) Early after the launch of the IBM PC in 1981, there were still dozens of systems that were not IBM PC-compatible, but did use Intel x86 chips. They used Intel 8088, 8086, or 80186 processors, and almost without exception offered an OEM version of MS-DOS (as opposed to the OEM version customized for IBM's use). 
However, they generally made no attempt to copy the IBM PC's architecture, so these machines had different I/O addresses, a different system bus, different video controllers, and other differences from the original IBM PC. These differences, which were sometimes rather minor, were used to improve upon the IBM PC's design, but as a result of the differences, software that directly manipulated the hardware would not run correctly. In most cases, the x86-based systems that did not use a fully IBM PC compatible design did not sell well enough to attract support from software manufacturers, though a few computer manufacturers arranged for compatible versions of popular applications to be developed and sold specifically for their machines. Fully IBM PC-compatible clones appeared on the market shortly thereafter, as the advantages of cloning became impossible to ignore. But before that some of the more notable systems that were x86-compatible, but not real clones, were: the ACT Apricot by ACT the Dulmont Magnum the Epson QX-16 the Seequa Chameleon the HP-150 by Hewlett-Packard and the later HP 95LX, HP 100LX, HP 200LX, HP 1000CX, HP OmniGo 700LX, HP OmniGo 100, and HP OmniGo 120. the Hyperion by Infotech Cie used its own H-DOS OEM version of MS-DOS and was, for a time, licensed but never manufactured by Commodore, as its first PC compatible. the MBC-550 by Sanyo had many differences, including non-interchangeability of diskettes and non-standard ROM location. the DG-One by Data General was an early laptop with full 80x25 LCD screen that could boot some generic DOSes but worked best with their OEM version of MS-DOS, and had some hardware incompatibilities (especially in the serial I-O chip) as part of the compromise to reduce power consumption. Later models were more compatible with generic PC clones. the DG/10 by Data General had two processors, one an Intel 8086, running a very-modified version of MSDOS (alternatively: CP/M-86) in a patented closely coupled arrangement with Data General's own microECLIPSE (the 8086 "invisibly" calling the microECLIPSE whenever it needed access to some peripherals, such as disks, while the 8086 had control over other peripherals such as the screen). the 80186-based Mindset graphics computer the Morrow Designs' Morrow Pivot the MZ-5500 by Sharp the Decision Mate V from NCR Corporation; its version of MS-DOS was called NCR-DOS the MikroMikko 2 by Nokia the NorthStar Advantage the PC-9801 systems from NEC the Rainbow 100 from DEC had both an 8088 and Zilog Z80 for Digital Research's CP/M-80 Operating System the RM Nimbus by RM plc the Tandy 2000 by RadioShack had a Intel 8186 the Texas Instruments TI Professional the Torch Graduate by Torch Computers the Tulip System-1 by Tulip the Victor 9000 by Sirius Systems Technology the :YES by Philips was late on the market, ran DOS Plus and MS-DOS, but by using an 80186 it was incompatible with IBM's PC the Z-100 by Zenith with an MS-DOS OEM version named Z-DOS Non-x86-based systems Not all manufacturers immediately switched to the Intel x86 microprocessor family and MS-DOS. A few companies continued releasing systems based on non-Intel architectures. Some of these systems used a 32-bit microprocessor, the most popular being the Motorola 68000. Others continued to use 8-bit microprocessors. Many of these systems were eventually forced out of the market by the onslaught of the IBM PC clones, although their architectures may have had superior capabilities, especially in the area of multimedia. 
Other non-x86-based systems available at the IBM PC's launch the Apple II and Apple II+ with MOS Technology's 6502 CPU In January 1983, the Apple IIe was introduced the 8-bit Commodore PET and CBM series The Commodore 64 was released a year later the 8-bit Atari 400, Atari 800 and successors the Cromemco CS-1 Intertec's Compustar II VPU Model 20 the Corvus Concept the Kaypro 10 the Fujitsu Micro 16s the Micro Decision by Morrow Designs the MTU-130 by Micro Technology Unlimited the Xerox 820 the Epson QX-10 the RoadRunner from MicroOffice the TRS-80 Model II and TRS-80 Model III by Tandy/Radio Shack the following year the TRS-80 Model 12 and TRS-80 models 16 and 16e See also Open standard Open architecture Compaq Compaq Portable and Compaq Portable series Timeline of DOS operating systems Comparison of DOS operating systems Wintel PC DOS MS-DOS History of computing hardware (1960s–present) IBM PC compatible De facto standard Dominant design List of machines running CP/M References External links Dedicated to the preservation and restoration of the IBM 5150 personal computer OLD-COMPUTERS.COM : The Museum History of computing hardware IBM PC compatibles
3932847
https://en.wikipedia.org/wiki/List%20of%20statistical%20software
List of statistical software
Statistical software are specialized computer programs for analysis in statistics and econometrics. Open-source ADaMSoft – a generalized statistical software with data mining algorithms and methods for data management ADMB – a software suite for non-linear statistical modeling based on C++ which uses automatic differentiation Chronux – for neurobiological time series data DAP – free replacement for SAS Environment for DeveLoping KDD-Applications Supported by Index-Structures (ELKI) a software framework for developing data mining algorithms in Java Epi Info – statistical software for epidemiology developed by Centers for Disease Control and Prevention (CDC). Apache 2 licensed Fityk – nonlinear regression software (GUI and command line) GNU Octave – programming language very similar to MATLAB with statistical features gretl – gnu regression, econometrics and time-series library intrinsic Noise Analyzer (iNA) – For analyzing intrinsic fluctuations in biochemical systems JASP – A free software alternative to IBM SPSS Statistics with additional option for Bayesian methods Just another Gibbs sampler (JAGS) – a program for analyzing Bayesian hierarchical models using Markov chain Monte Carlo developed by Martyn Plummer. It is similar to WinBUGS JMulTi – For econometric analysis, specialised in univariate and multivariate time series analysis KNIME – An open source analytics platform built with Java and Eclipse using modular data pipeline workflows LIBSVM – C++ support vector machine libraries mlpack – open-source library for machine learning, exploits C++ language features to provide maximum performance and flexibility while providing a simple and consistent application programming interface (API) Mondrian – data analysis tool using interactive statistical graphics with a link to R Neurophysiological Biomarker Toolbox – Matlab toolbox for data-mining of neurophysiological biomarkers OpenBUGS OpenEpi – A web-based, open-source, operating-independent series of programs for use in epidemiology and statistics based on JavaScript and HTML OpenNN – A software library written in the programming language C++ which implements neural networks, a main area of deep learning research OpenMx – A package for structural equation modeling running in R (programming language) Orange, a data mining, machine learning, and bioinformatics software Pandas – High-performance computing (HPC) data structures and data analysis tools for Python in Python and Cython (statsmodels, scikit-learn) Perl Data Language – Scientific computing with Perl Ploticus – software for generating a variety of graphs from raw data PSPP – A free software alternative to IBM SPSS Statistics R – free implementation of the S (programming language) Programming with Big Data in R (pbdR) – a series of R packages enhanced by SPMD parallelism for big data analysis R Commander – GUI interface for R Rattle GUI – GUI interface for R Revolution Analytics – production-grade software for the enterprise big data analytics RStudio – GUI interface and development environment for R ROOT – an open-source C++ system for data storage, processing and analysis, developed by CERN and used to find the Higgs boson Salstat – menu-driven statistics software Scilab – uses GPL-compatible CeCILL license SciPy – Python library for scientific computing that contains the stats sub-package which is partly based on the venerable |STAT (a.k.a. 
PipeStat, formerly UNIX|STAT) software scikit-learn – extends SciPy with a host of machine learning models (classification, clustering, regression, etc.) statsmodels – extends SciPy with statistical models and tests (regression, plotting, example datasets, generalized linear model (GLM), time series analysis, autoregressive–moving-average model (ARMA), vector autoregression (VAR), non-parametric statistics, ANOVA, empirical likelihood) Shogun (toolbox) – open-source, large-scale machine learning toolbox that provides several SVM (Support Vector Machine) implementations (like libSVM, SVMlight) under a common framework and interfaces to Octave, MATLAB, Python, R Simfit – simulation, curve fitting, statistics, and plotting SOCR SOFA Statistics – desktop GUI program focused on ease of use, learn as you go, and beautiful output Stan (software) – open-source package for obtaining Bayesian inference using the No-U-Turn sampler, a variant of Hamiltonian Monte Carlo. It is somewhat like BUGS, but with a different language for expressing models and a different sampler for sampling from their posteriors Statistical Lab – R-based and focusing on educational purposes TOPCAT (software) – interactive graphical analysis and manipulation package for astronomers that understands FITS, VOTable and CDF formats. Torch (machine learning) – a deep learning software library written in Lua (programming language) Weka (machine learning) – a suite of machine learning software written at the University of Waikato Public domain CSPro (core is public domain but without publicly available source code; the web UI has been open sourced under Apache version 2 and the help system under GPL version 3) X-13ARIMA-SEATS (public domain in the United States only; outside of the United States is under US government copyright) Freeware BV4.1 GeoDA MaxStat Lite – general statistical software MINUIT WinBUGS – Bayesian analysis using Markov chain Monte Carlo methods Winpepi – package of statistical programs for epidemiologists Proprietary Alteryx – analytics platform with drag and drop statistical models; R and Python integration Analytica – visual analytics and statistics package Angoss – products KnowledgeSEEKER and KnowledgeSTUDIO incorporate several data mining algorithms ASReml – for restricted maximum likelihood analyses BMDP – general statistics package DataGraph – visual analysis with linear and nonlinear regression DB Lytix – 800+ in-database models EViews – for econometric analysis FAME (database) – a system for managing time-series databases GAUSS – programming language for statistics Genedata – software solution for integration and interpretation of experimental data in the life science R&D GenStat – general statistics package GLIM – early package for fitting generalized linear models GraphPad InStat – very simple with much guidance and explanations GraphPad Prism – biostatistics and nonlinear regression with clear explanations IMSL Numerical Libraries – software library with statistical algorithms JMP – visual analysis and statistics package LIMDEP – comprehensive statistics and econometrics package LISREL – statistics package used in structural equation modeling Maple – programming language with statistical features Mathematica – a software package with statistical particularly ŋ features MATLAB – programming language with statistical features MaxStat Pro – general statistical software MedCalc – for biomedical sciences Microfit – econometrics package, time series Minitab – general statistics package MLwiN – multilevel 
models (free to UK academics) Nacsport Video Analysis Software – software for analysing sports and obtaining statistical intelligence NAG Numerical Library – comprehensive math and statistics library Neural Designer – commercial deep learning package NCSS – general statistics package NLOGIT – comprehensive statistics and econometrics package nQuery Sample Size Software – Sample Size and Power Analysis Software O-Matrix – programming language OriginPro – statistics and graphing, programming access to NAG library PASS Sample Size Software (PASS) – power and sample size software from NCSS Plotly – plotting library and styling interface for analyzing data and creating browser-based graphs. Available for R, Python, MATLAB, Julia, and Perl Primer-E Primer – environmental and ecological specific PV-WAVE – programming language comprehensive data analysis and visualization with IMSL statistical package Qlucore Omics Explorer – interactive and visual data analysis software RapidMiner – machine learning toolbox Regression Analysis of Time Series (RATS) – comprehensive econometric analysis package SAS (software) – comprehensive statistical package SHAZAM (Econometrics and Statistics Software) – comprehensive econometrics and statistics package Simul – econometric tool for multidimensional (multi-sectoral, multi-regional) modeling SigmaStat – package for group analysis SmartPLS – statistics package used in partial least squares path modeling (PLS) and PLS-based structural equation modeling SOCR – online tools for teaching statistics and probability theory Speakeasy (computational environment) – numerical computational environment and programming language with many statistical and econometric analysis features SPSS Modeler – comprehensive data mining and text analytics workbench SPSS Statistics – comprehensive statistics package Stata – comprehensive statistics package StatCrunch – comprehensive statistics package, originally designed for college statistics courses Statgraphics – general statistics package to include cloud computing and Six Sigma for use in business development, process improvement, data visualization and statistical analysis, design of experiment, point processes, geospatial analysis, regression, and time series analysis are all included within this complete statistical package. Statistica – comprehensive statistics package StatsDirect – statistics package designed for biomedical, public health and general health science uses StatXact – package for exact nonparametric and parametric statistics Systat – general statistics package SuperCROSS – comprehensive statistics package with ad-hoc, cross tabulation analysis S-PLUS – general statistics package Unistat – general statistics package that can also work as Excel add-in The Unscrambler – free-to-try commercial multivariate analysis software for Windows WarpPLS – statistics package used in structural equation modeling Wolfram Language – the computer language that evolved from the program Mathematica. It has similar statistical capabilities as Mathematica. World Programming System (WPS) – statistical package that supports the use of Python, R and SAS languages within in a single user program. 
XploRe Add-ons Analyse-it – add-on to Microsoft Excel for statistical analysis Statgraphics Sigma Express – add-on to Microsoft Excel for Six Sigma statistical analysis SUDAAN – add-on to SAS and SPSS for statistical surveys XLfit add-on to Microsoft Excel for curve fitting and statistical analysis See also Comparison of statistical packages Econometric software Free statistical software List of computer algebra systems List of graphing software List of numerical analysis software List of numerical libraries Mathematical software Psychometric software References External links Statistical packages Statistical packages
1219489
https://en.wikipedia.org/wiki/Siemens%20Mobile
Siemens Mobile
Siemens Mobile was a German mobile phone manufacturer and a division of Siemens AG. Siemens sold Siemens Mobile to the Taiwan-based BenQ in 2005; the unit subsequently became BenQ-Siemens and was succeeded by Gigaset. The last Siemens-branded mobile phones, the AL21, A31 and AF51, were released in November 2005. History The first Siemens mobile phone, the Siemens Mobiltelefon C1, was launched in 1985. In 1994 the Siemens S1 GSM phone was launched. In 1997 Siemens launched the first phone with a colour screen, the Siemens S10, with a screen capable of displaying red, green, blue and white. In the same year Siemens launched the first "outdoor" phone, the Siemens S10 Active, with enhanced shock, dust and splash protection. Siemens launched the first slider phone, the Siemens SL10, in 1999. Siemens acquired the mobile phone division of Bosch in 2000. In the same year Siemens launched one of the first phones with an MP3 player and external memory card support (MultiMediaCard), the Siemens SL45. In 2003 Siemens launched its first phone running on the Symbian OS operating system, the Siemens SX1. The phone featured a hot-swappable MultiMediaCard. In the same year Siemens launched the Xelibri range of fashion phones. In 2005 Siemens launched the first phone with integrated GPS support, the Siemens SXG75. As of Q3 2000, Siemens had an 8.6% mobile handset market share, putting it behind Ericsson, Motorola and Nokia. For the calendar year 2003, Siemens was again fourth behind Samsung, Motorola and Nokia, with a figure of 8.5%. In 2004 it decreased to 7.2%. Siemens Mobile was making large losses and suffering plummeting sales at this time. By the first quarter of 2005, market share was down to 5.6% as it fell behind competitors LG and Sony Ericsson. Its Xelibri range of phones, the company's answer to the fashionable handset trend at the time, became a costly failure. On 7 June 2005, the Taiwanese company BenQ agreed to acquire the loss-making Siemens Mobile from Siemens, together with the exclusive right to use the Siemens trademark on its mobile phones for 5 years. Before transferring the mobile phone subsidiary to BenQ, Siemens invested 250 million euros and wrote down assets amounting to 100 million euros. Siemens also acquired a 2.5% stake in BenQ for 50 million euros. BenQ subsequently released mobile phones under the BenQ-Siemens brand from its German unit. In 2006 the German unit of BenQ filed for bankruptcy. Siemens restarted the production of mobile phones under the Gigaset brand name. Product Classification Depending on their names, Siemens mobile phones have the following classifications: A series: low-cost phones with basic functionality. C series: mid-range phones. M series: phones with military specifications for outdoor activities. S series: flagship phones. U series: UMTS devices manufactured in 2003 by Motorola and rebranded under the Siemens name. Second-letter initials: xF: clamshell (flip) phones. xL: slider phones (exceptions: Siemens SL45, Siemens CL75). xX: phones with advanced features. xK: phones with a full QWERTY keyboard (Siemens SK65). Within a class, the numbers have the following meaning: the first digit (1-9) denotes the level of functionality of the phone, and the second digit indicates the generation of the phone. Phones containing the numbers 1, 6 or 8 also belong to the first generation of a model. 
Siemens A31 Siemens A35 Siemens A36 Siemens A40 Siemens A50: one of the most popular Siemens phones; shares the same hardware as the C45 and can be upgraded to a C45 by firmware flash. Siemens A51 Siemens A52: monochrome, GSM 900 / GSM 1800, no GPRS, no USB, no IrDA or Bluetooth. Same as the A55/A56. Siemens A53 Siemens A55: monochrome, GSM 900 / GSM 1800, no GPRS, no USB, no IrDA or Bluetooth. Shares the same hardware as the C55 and can be upgraded to a C55 by firmware flash. Siemens A56: monochrome, GSM 850 / GSM 1900, no GPRS, no USB, no IrDA or Bluetooth. Shares the same hardware as the C56 and can be upgraded to a C56 by firmware flash. Siemens A56i Siemens A57 Siemens A60 Siemens A62 Siemens A65: CSTN 101x80 display, triband (900 / 1800 / 1900) Siemens A70: the last Siemens with a monochrome display, triband (900 / 1800 / 1900), completely unrelated to the A55/C55. Siemens A75 Siemens AF51 Siemens AP75 (developed by BenQ, same as the BenQ M580) Siemens AL21/AL21 Hello Kitty (TFT 130x130 display, GSM 900 / 1800 / 1900, no camera (same as the CF110)) Siemens AX72 Siemens AX75 (supports taking pictures when connected to the QuickPic IQP-500 external camera module, like the S55, S57, CF62 etc.) Siemens C1 Siemens C2 Siemens C3 Siemens C4 Siemens C5 Siemens C10 Siemens C11 Siemens C25 Siemens C28 Siemens C30 Siemens C35 Siemens C35i Siemens C45 Siemens C55: monochrome, GSM 900 / GSM 1800, with GPRS, no USB, no IrDA or Bluetooth. One of the most popular Siemens phones. Siemens C56: monochrome, GSM 850 / GSM 1900, with GPRS, no USB, no IrDA or Bluetooth. Siemens C60 Siemens C61 Siemens C62 (co-developed with Sony Ericsson on the same platform as the T600 and T610i) Siemens C65 Siemens C66 Siemens C70: identical hardware to the C65 Siemens C71 Siemens C72 Siemens C75 Siemens CC75 (cancelled) Siemens CF62 Siemens CF65 Siemens CF75 (CF65-based) Siemens CF110: TFT 130x130 display, GSM 900 / 1800 / 1900, no camera Siemens CFX65 (the first Siemens clamshell with a built-in VGA camera and flash; the CFX65 is the only Siemens model with a torch: with the flip closed, double-pressing the volume-up key turns the torch on) Siemens CL50 (developed by Arista) Siemens CL55 (developed by LG Electronics) Siemens CL75, CL75 Black Edition, CL75 Poppy Edition Siemens CX65: TFT 132x176 display, GSM 900 / 1800 / 1900, VGA (640 x 480 pixel) camera, IrDA (based on the M65) Siemens CX66 (minor changes for some countries) Siemens CX70: TFT 132x176 display, GSM 900 / 1800 / 1900, VGA (640 x 480 pixel) camera, IrDA (identical hardware to the CX65) Siemens CX70 Emoty (in the CX70 family) Siemens CXV70 (in the CX70 family, for Vodafone) Siemens CXT70 (in the CX70 family, for T-Mobile) Siemens CXO70 (in the CX70 family, for O2) Siemens CX75 Siemens E10 Siemens M30: one of the most popular Siemens phones Siemens M35 Siemens M35i: one of the most popular Siemens phones Siemens M45 Siemens M50 Siemens M55 Scorpions Siemens M56: GSM 850 / GSM 1900 version of the M55. Note that it is not triband. 
Siemens M65 Siemens M65 Rescue Edition Siemens M75 Siemens MC60 Siemens ME45 Siemens ME75 (C75-based) Siemens MT50 Siemens P1 Siemens S1 Siemens S3 Siemens S3 COM 1995 Siemens S4 Siemens S6 Siemens S10 Siemens S15 Siemens S25 Siemens S35i Siemens S40 Siemens S42 Siemens S45 Siemens S45i Siemens S46 Siemens S55 (full wireless connectivity with Bluetooth 1.1 and an IrDA port) Siemens S55 Formula One Siemens S56 Siemens S57 (IrDA port only) Siemens S65 Siemens S65 DVB-H, SXX65 (concept phone, running on Linux) Siemens S66 (minor changes for some countries), sold as S66 or S65 for Cingular Siemens S68 (rebranded and sold as the BenQ-Siemens S68; successor to the SP65) Siemens S75 (FE75 internal pre-production model) Siemens SF65 (co-developed with Philips (France); the same model as the Philips 760 twist) Siemens SFG75 (developed by BenQ, same as the BenQ S80 UMTS) Siemens SG75 (cancelled; in the same family as the SXG75) Siemens SK65, SK6B and SK6R (co-developed with RIM; runs the Siemens X65 platform in parallel with BlackBerry 3.8 software. Selecting the BB icon in the main menu and pressing "Option" -> "About" shows the version of the pre-installed BlackBerry software.) Siemens SK65 Burlwood Edition Siemens SP65 (S65 without a camera) Siemens SL10: the world's first sliding mobile phone, with a four-colour screen (red, green, blue and white) Siemens SL42 Siemens SL45: the world's first phone with an MP3 player Siemens SL45i Siemens SL55 Siemens SL56: the American version of the Siemens SL55, available only on AT&T. Siemens SL65: TFT 130x130 display, GSM 900 / 1800 / 1900, VGA (640 x 480 pixel) camera, IrDA, slider form factor Siemens SL65 ESCADA Edition Siemens SL75/SL7C Siemens SL80 (rebranded and sold as the BenQ-Siemens SL80 Hello Kitty Edition) Siemens SLV140 (rebranded and sold as the BenQ-Siemens EF81) Siemens ST55 (ODM-developed by Arista) Siemens ST60 (for T-Mobile) Siemens SX1: the first and only smartphone developed and produced by Siemens AG, running on Symbian OS Siemens SX2 (cancelled), running on Symbian OS Siemens SX45 (developed by HTC, running Windows Mobile 2003) Siemens SX56 (developed by HTC, running Windows Mobile 5.0) Siemens SX66 (developed by HTC, running Windows Mobile 5.0, upgradable to Windows Mobile 6.0) Siemens SXG75 (the first Siemens running Qualcomm's Brew OS, with built-in real-time GPS functions) Siemens SXX75 (concept phone with a full-touchscreen slider design, running Qualcomm's Brew OS) Siemens U10 (UMTS) (the first Siemens UMTS (3G) model, running on Motorola's Linux platform A; developed by Motorola) Siemens U15 (the second Siemens UMTS (3G) model, running on Motorola's Linux platform A; developed by Motorola) Xelibri X1 Xelibri X2 Xelibri X3 Xelibri X4 Xelibri X5 Xelibri X6 Xelibri X7 Xelibri X8 Gallery Sponsorships Siemens Mobile sponsored Lazio from the 2000/2001 season to the 2002/2003 season and Real Madrid from the 2002/2003 season to the 2005/2006 season. The Siemens contract with Real Madrid began on July 17, 2002, the same season in which Ronaldo Nazário de Lima was signed. During the Siemens sponsorship, however, Real Madrid won only five titles; the period from 2003 to 2006, which ended with the resignation of Florentino Pérez, became known as a title drought. Siemens Mobile also had non-shirt sponsorship partnerships with a number of English football clubs, including Liverpool and Chelsea. Siemens Mobile also sponsored ski jumping during the 2000-2002 seasons, for example appearing on the suit of ski jumper Adam Małysz. Siemens Mobile also sponsored Olympiacos until the 2005/2006 season. 
In the final season of the Greek club's shirt sponsorship, Real Madrid and Olympiacos were drawn together in Group F of the UEFA Champions League, and the two clubs agreed that the visiting team would not carry Siemens Mobile advertising. Siemens Mobile also sponsored the McLaren Mercedes Formula One team between 2001 and 2004. References German brands Defunct mobile phone manufacturers Mobile phone companies of Germany Telecommunications companies established in 1985 Companies disestablished in 2005 Consumer electronics brands 1985 establishments in Germany
929837
https://en.wikipedia.org/wiki/UltraEdit
UltraEdit
UltraEdit is a commercial text editor for Microsoft Windows, Linux and OS X created in 1994 by the founder of IDM Computer Solutions Inc., Ian D. Mead, and owned by Idera, Inc. since August 2021. The editor contains tools for programmers, including macros, configurable syntax highlighting, code folding, file type conversions, project management, regular expressions for search-and-replace, a column-edit mode, remote editing of files via FTP, interfaces for APIs or command lines of choice, and more. Files can be browsed and edited in tabs, and it also supports Unicode and hex editing mode. Originally called MEDIT, it was designed to run in Windows 3.1. A version called UltraEdit-32 was later created to run in Windows NT and Windows 95. The last 16-bit UltraEdit program was 6.20b. Beginning with version 11, the Wintertree spell check engine was replaced by GNU Aspell. In version 13 (2007), JavaScript was added to the existing Macro facility for automation tasks. UltraEdit's JavaScript uses JavaScript 1.7. UltraEdit-32 was renamed to UltraEdit in version 14.00. Version 22.2 was the first native 64-bit version of the text editor. An installation of UltraEdit takes about 100 MB of disk space. HTML editing features include: Integration with CSE HTML Validator for offline HTML, XHTML and CSS checking HTML toolbar preconfigured for popular functions and HTML tags Customize tags in HTML toolbar or create new tags and buttons UltraEdit is Trialware: It can be evaluated for free for 30 or 15 days, depending on usage. After expiration of this period, the application will work only with a regular license key. UltraEdit also includes UltraCompare Professional. UltraEdit is CA Veracode Verified. Key Features Facilitates the opening and editing of large files, up to 4GB and greater in size Native 64-bit architecture Multi-caret editing and multi-select Document map navigation Column (block) mode editing Regular expression find and replace Find/Replace in Files Extensible code highlighting, with 'wordfiles' already available for many languages Code folding and hierarchical function listing Beautify and reformat source code XML editing features, such as XML tree view, reformatting and validation Auto-closing XML and HTML tags Smart templates for code completion Editor themes Integrated FTP, SSH, telnet Hex editing Log file polling File/data sorting File encryption and decryption Project management Bookmarking Automation via macros and scripts Integrated file compare UltraCompare File, Folder, Excel, PDF, Zip/Rar/Jar Compare Features include: Compare, Merge, Sync, UltraEdit integration, Source Control Integration, handles large files UEStudio It is a variant with additional support for IDE editing. It also enhances file handling, file editing, HTML editing over UltraEdit. IDE features include: Workspace Manager, project builder (interactive and batch), resource editor, project converter, class viewer, native compiler support, debugger with integrated debugging (via WinDBG). File handling features include: Project Manager, Git/SVN/CVS version control. File editing features include: Tabbed Output Window for script commands, intelligent auto complete tooltip. HTML editing features include: Integrated PHP, Ruby support. 
UltraFinder Finds text in files. Features: search capability of 2,000,000+ files in minutes, duplicate identification. Searches: files, PC, network, servers. UltraFTP FTP client. Features: commercially supported, 64-bit FTP client, native Unicode, fast upload/download, tabbed connections, secure. Reviews In a review published in PC Magazine on June 4, 2004, the author said that UltraEdit v10.0 was their favorite text editor. In a review published on July 9, 2006, Softpedia wrote that UltraEdit contains plenty of features useful for all types of users and considered the program "excellent". CNET/Download.com says of UltraEdit: "With its clear layout and powerful project and work-space features, it can handle complex and sophisticated software-development projects. But despite its vast range of features, UltraEdit never feels overwhelming. It's flexible and easy to customize, and the polished user interface provides easy access to the most important options..." See also List of text editors Comparison of text editors Comparison of file comparison tools References External links Windows text editors MacOS text editors Linux text editors Proprietary commercial software for Linux
3341249
https://en.wikipedia.org/wiki/Graphic%20design%20occupations
Graphic design occupations
Graphic design careers include creative director, art director, art production manager, brand identity developer, illustrator and layout artist. Graphic art managers The following are positions or responsibilities and usually titles, held by experienced graphic designers in related management roles: Creative Director A creative director's range of experience can be broad and encompass a number of disciplines; visual design; copywriting, art direction, advertising account director, film/video director . A Creative Director's job is to initiate the creative concept of a project and drive the direction of the project. The role of a Creative Director is to formulate creative concepts, whether it is an advertising campaign, brand identity, TV commercial, marketing campaign. A Creative Director was often referred to the 'Ideas Guy' and works with a team of 'creatives' - art director, graphic designer, copywriter, film director to produce the concept and final production. Art director Art directors make sure that illustrators and production artists produce and complete their work on time and to the creative director or client's satisfaction. Art directors also play a major role in the development of a project by making decisions on the visual elements of the project, and by giving the final say on the selection of models, art, props, colors, and other elements. Art directors need advanced training in graphic design as they often do artwork and designing themselves. However, an art director's time may be consumed doing supervisory and administrative work . Art production manager Art production managers or traffic managers oversee the production aspect of art to improve efficiency and cost effectiveness. Art production managers supervise artists or advise the supervisors of artists. Creative directors and art directors often assume the role of art production managers, especially when production cost is not a critical enough concern to designate a manager for the specific role. Hands-on graphic designer The following are positions or responsibilities, not necessarily titles, held by art directors and graphic designers: Brand identity developer Brand identity design is concerned with the visual aspects of a company or organization's brand or identity. A brand identity design is the visual element that represents how a company wants to be seen; it is the company's visual identity, and is how a company illustrates its ‘image.’ A company's brand identity can be represented in terms of design through a unique logo, or signage, and is then often integrated throughout all the elements of a company's materials such as business cards, stationery, packaging, media advertising, promotions, and more. Brand identity may include logo design. Brand identity development is usually a collaborative effort between creative directors, art directors, copywriters, account managers and the client. Broadcast designers A broadcast designer is a person involved with creating graphic designs and electronic media incorporated in television productions that are used by character generator (CG) operators. A broadcast designer may have a degree in digital media (or a similar degree), or is self-taught in the software needed to create such content. Logo designer The job of a logo designer is to provide a new and innovative way to express the key points of a company through an image. 
Logo designers take the information given to them by the client and work, using their own creativity along with marketing strategy to find an appropriate image that their client can use to represent what they are trying to encourage, sell, or what they are. It is not likely that a company will specialize in logo design or have a position for a designated logo designer. Art directors and graphic designers usually perform logo designs. Marketing designer A marketing designer creates illustrations and digital images. He or she also develops presentations for companies and businesses to market and promote their goods and services to the public. Illustrator Illustrators conceptualize and create illustrations that represent an idea or a story through two-dimensional or three-dimensional images. Illustrators may do drawings for printed materials such as books, magazines, and other publications, or for commercial products such as textiles, packaging, wrapping paper, greeting cards, calendars, stationery, and more. Illustrators use many different media, from pencil and paint to digital formatting, to prepare and create their illustrations. An illustrator consults with clients in order to determine what illustrations will best meet the story they are trying to tell, or what message they are trying to communicate. Illustrating may be a secondary skill requirement of graphic design or a specialty skill of a freelance artist, usually known for a unique style of illustrating. Illustration may be published separately as in fine art. However, illustrations are usually inserted into page layouts for communication design in the context of graphic design professions. Visual image developer Similar to illustration are other methods of developing images such as photography, 3D modeling, and image editing. Creative professionals in these positions are not usually called illustrators, but are utilized the same way. Photographers are likely to freelance. 3D modelers are likely to be employed for long-term projects. Image editing is usually a secondary skill to either of the above, but may also be a specialty to aid web development, software development, or multimedia development in a job title known as multimedia specialist. Although these skills may require technical knowledge, graphic design skills may be applied as well. Multimedia developer Multimedia developers may come from a graphic design or illustration background and apply those talents to motion, sound, or interactivity. Motion designers are graphic designers for motion. Animators are illustrators for motion. Videographers are photographers for motion. Multimedia developers may also image edit, sound edit, program, or compose multimedia just as multimedia specialists. Content developer Content developers include illustrators, visual image developers, and multimedia developers in software and web development. Content development includes non-graphical content as well. Visual journalist Visual journalists, also known as Infographic Artists create information graphics or Infographics; visual representations of information, data or knowledge. These graphics are used anywhere where information needs to be explained quickly or simply, such as in signs, maps, journalism, technical writing, and education. They are also used extensively as tools by computer scientists, mathematicians, and statisticians to ease the process of developing and communicating conceptual information. They are applied in all aspects of scientific visualization. 
Layout artist A layout artist deals with the structure and layout of images and text in a pleasing format. This can include magazine work, brochures, flyers, books, CD booklets, posters, and similar formats. For magazines and similar productions, color, typeface, text formatting, graphic layout and more must be considered. Page layouts are usually done by art directors, graphic designers, artworkers, production artists or a combination of those positions. Entry level layout work is often known as paste up art. Entry level layout graphic designers are often known as production artists. In an in-house art department, layout artists are sometimes known as DTP artists or DTP associates. Interface designer Interface designers are graphical user interface (GUI) layout artists. They are employed by multimedia, software, and web development companies. Because graphical control elements are interactive, interface design often overlaps interaction design. Because interfaces are not usually composed as single computer files, interface design may require technical understanding, including graphical integration with code. Because interfaces may require hundreds of assets, knowledge of how to automate graphic production may be required. An interface designer may hold the job title of web designer in a web development company. Web designer A web designer's work could be viewed by thousands of people every day. Web designers create the pages, layout, and graphics for web pages, and play a key role in the development of a website. Web designers have the task of creating the look and feel of a website by choosing the style, and by designing attractive graphics, images, and other visual elements, and adapting them for the website's pages. Web designers also design and develop the navigation tools of a site. Web designers may make decisions regarding what content is included on a web page, where things are placed, and how the aesthetic and continuity is maintained from one screen to the next. All of this involves skill and training in computer graphics, graphic design, and in the latest in computer and web technology. Depending on the scope of the project, web design may involve collaboration between software engineers and graphic designers. The graphic design of a website may be as simple as a page layout sketch or handling just the graphics in an HTML editor, while the advance coding is done separately by programmers. In other cases, graphic designers may be challenged to become both graphic designer and programmer in the process of web design in positions often known as web masters. Package designer A package designer or packaging technician may utilize technical skills aside from graphic design. Knowledge of cuts, crease, folding, nature and behavior of the packaging material such as paper, corrugated sheet, synthetic or other type of materials may also be required. A customer may see the top/outside of a package at first, but may also be drawn to other package design features. A packaging design may require three-dimensional space layout skills in addition to visual communication to consider how well a design works at multiple angles. 
CAD software applications specifically for packaging design may be utilized. See also Advertising agency Graphic design Graphic arts References The Universal Arts of Graphic Design – Documentary produced by Off Book Graphic Designers, entry in the Occupational Outlook Handbook of the Bureau of Labor Statistics of the United States Department of Labor Communication design Visual arts occupations Graphic design
25739
https://en.wikipedia.org/wiki/Rich%20Text%20Format
Rich Text Format
The Rich Text Format (often abbreviated RTF) is a proprietary document file format with a published specification, developed by Microsoft Corporation from 1987 until 2008 for cross-platform document interchange with Microsoft products. Prior to 2008, Microsoft published updated specifications for RTF with major revisions of Microsoft Word and Office versions. Most word processors are able to read and write some versions of RTF. There are several different revisions of the RTF specification; portability of files will depend on what version of RTF is being used. RTF should not be confused with enriched text or its predecessor Rich Text, nor with IBM's RFT-DCA (Revisable Format Text-Document Content Architecture), as these are different specifications. History Richard Brodie, Charles Simonyi, and David Luebbert, members of the Microsoft Word development team, developed the original RTF in the middle to late 1980s. The first RTF reader and writer shipped in 1987 as part of Microsoft Word 3.0 for Macintosh, which implemented the RTF version 1.0 specification. All subsequent releases of Microsoft Word for Macintosh, as well as all Windows versions, can read and write in RTF format. Microsoft maintained RTF until 2008; the final version was 1.9.1, released that year, which implemented features of Office 2007. Microsoft has discontinued enhancements to the RTF specification, so features new to Word 2010 or a later version will not save properly to RTF. Microsoft anticipates no further updates to RTF, but has stated willingness to consider editorial and other non-substantive modifications of the RTF specification during an associated ISO/IEC 29500 balloting period. RTF files were used to produce Windows Help files, though these have since been superseded by Microsoft Compiled HTML Help files. Code syntax An RTF file is built from groups, control words introduced by a backslash, and delimiters. Groups are contained within curly braces ({}) and indicate which attributes should be applied to certain text. The backslash (\) introduces a control word, which is a specifically programmed command for RTF. Control words can have certain states in which they are active. These states are represented by numbers. For example, \b0 indicates that bold text is off, while \b indicates that bold text is on. A delimiter is one of three things: a space; a digit or hyphen (e.g. -23, 23, 275); or a character other than a digit or letter (e.g. \, /, }). As an example, the following RTF code {\rtf1\ansi{\fonttbl\f0\fswiss Helvetica;}\f0\pard This is some {\b bold} text.\par } would be rendered as follows: This is some bold text. Character encoding A standard RTF file can only consist of 7-bit ASCII characters, but can use escape sequences to encode other characters. The two character escapes are code page escapes and, starting with RTF 1.5, Unicode escapes. In a code page escape, two hexadecimal digits following a backslash and typewriter apostrophe denote a character taken from a Windows code page. For example, if the code page is set to Windows-1256, the sequence \'c8 will encode the Arabic letter bāʼ ب. It is also possible to specify a "Character Set" in the preamble of the RTF document and associate it with a font declared there. For example, if the preamble contains \f3\fnil\fcharset128, then, in the body of the document, the text \f3\'bd\'f0 will represent the code point 0xbd 0xf0 from Character Set 128 (which corresponds to the Shift-JIS code page), which encodes "金". 
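To make the syntax above concrete, here is a minimal sketch in Python that assembles an RTF document from groups and control words and uses \'xx code page escapes for non-ASCII characters. The helper name and output file name are invented for illustration, and Windows-1252 is assumed as the code page; this is a sketch, not a complete RTF writer.

def rtf_escape_cp1252(text):
    """Escape plain text for RTF, using \\'xx code page escapes (Windows-1252 assumed)."""
    out = []
    for ch in text:
        if ch in "\\{}":
            out.append("\\" + ch)                  # backslash and braces are RTF syntax characters
        elif ord(ch) < 128:
            out.append(ch)                         # 7-bit ASCII passes through unchanged
        else:
            out.append("\\'%02x" % ch.encode("cp1252")[0])   # e.g. é becomes \'e9
    return "".join(out)

# Groups are delimited by braces; \rtf1, \ansi, \b and \par are control words.
document = (
    "{\\rtf1\\ansi\\ansicpg1252"
    "{\\fonttbl\\f0\\fswiss Helvetica;}"
    "\\f0\\pard This is some {\\b bold} text and an accented word: "
    + rtf_escape_cp1252("café") + ".\\par}"
)

with open("example.rtf", "w", encoding="ascii") as f:
    f.write(document)

Opened in WordPad or TextEdit, the resulting file should show one line with the word "bold" in boldface; characters that the chosen code page cannot represent would instead need the Unicode escapes described next.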
For a Unicode escape, the control word \u is used, followed by a 16-bit signed integer which corresponds to the Unicode UTF-16 code unit number. For the benefit of programs without Unicode support, this must be followed by the nearest representation of this character in the specified code page. For example, \u1576? would give the Arabic letter bāʼ ب, but indicates that older programs which do not support Unicode should render it as a question mark instead. The control word \uc0 can be used to indicate that subsequent Unicode escape sequences within the current group do not specify the substitution character. Until the RTF 1.5 specification was released in 1997, RTF only handled 7-bit characters directly and 8-bit characters encoded as hexadecimal (using \'xx). Since RTF 1.5, however, RTF control words generally accept signed 16-bit numbers as arguments. Unicode values greater than 32767 must therefore be expressed as negative numbers. If a Unicode character is outside the BMP (Basic Multilingual Plane), it is encoded as a surrogate pair. Support for Unicode was added because of text handling changes in Microsoft Word: Microsoft Word 97 is a partially Unicode-enabled application that handles text using the 16-bit Unicode character encoding scheme, and Microsoft Word 2000 and later versions are Unicode-enabled applications that handle text the same way. Because RTF files are usually 7-bit ASCII plain text, they can be easily transmitted between PC-based operating systems. Converters that communicate with Microsoft Word for MS Windows or Macintosh generally expect data transfer as 8-bit characters and binary data which can contain any 8-bit values. Human readability RTF is a data format for saving and sharing documents, not a markup language; it is not intended for intuitive and easy typing. Nonetheless, unlike many word processing formats, RTF code can be human-readable. When an RTF file containing mostly Latin characters without diacritics is viewed as a plain text file, the underlying ASCII text is readable, provided that the author has kept formatting concise. When RTF was released, most word processors used binary file formats; Microsoft Word, for example, used the .DOC file format. RTF was unique in its simple formatting control, which allowed non-RTF-aware programs like Microsoft Notepad to open its files and display readable text. Today, most word processors have moved to XML-based file formats (Word has switched to the .docx file format). Regardless, these files contain large amounts of formatting code, so are often ten or more times larger than the corresponding plain text. To be standard-compliant RTF, non-ASCII characters must be escaped. Thus, even with concise formatting, text that uses certain dashes and quotation marks is less legible. Latin languages with many diacritics are particularly difficult to read in RTF, as they result in substitutions like \'f1 for ñ and \'e9 for é. Non-Latin scripts are illegible in RTF; \u21563, for example, is used for 吻. From the beginning, RTF has also supported Microsoft OLE embedded objects and Macintosh Edition Manager subscriber objects, which are not human-readable. Common uses and interoperability Most word processing software supports importing and exporting some version of RTF, or editing it directly, which makes it a "common" format between otherwise incompatible word processing software and operating systems. Most applications that read RTF files silently ignore unknown RTF control words. 
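The signed 16-bit convention and surrogate-pair handling can be illustrated with a small sketch in the same hypothetical Python setting as above; the function name is invented, and the output targets RTF 1.5+ readers.

def rtf_unicode_escape(text, fallback="?"):
    """Encode text using RTF 1.5+ \\uN escapes, one fallback character per escape (\\uc1)."""
    out = ["\\uc1 "]                       # declare that each \u escape carries one fallback char
    for ch in text:
        cp = ord(ch)
        if ch in "\\{}":
            out.append("\\" + ch)
        elif cp < 128:
            out.append(ch)
        elif cp <= 0xFFFF:
            # Control word arguments are signed 16-bit, so values above 32767 wrap to negative.
            out.append("\\u%d%s" % (cp - 65536 if cp > 32767 else cp, fallback))
        else:
            # Characters outside the BMP become a UTF-16 surrogate pair, i.e. two \u escapes.
            cp -= 0x10000
            for unit in (0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)):
                out.append("\\u%d%s" % (unit - 65536, fallback))
    return "".join(out)

print(rtf_unicode_escape("ب 金"))   # prints: \uc1 \u1576? \u-28207?

Here ب (U+0628, code point 1576) stays positive, while 金 (U+91D1, code point 37329) exceeds 32767 and is written as the negative value -28207; a character outside the BMP would produce two negative escapes for its surrogate pair.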
Tolerant parsing and broad application support contribute to RTF's interoperability, though it is still dependent on the specific RTF version in use. There are several consciously designed or accidentally created RTF dialects. RTF is the internal markup language used by Microsoft Word. Since 1987, RTF files have been transferable back and forth between many old and new computer systems (and now over the Internet), despite differences between operating systems and their versions. This makes it a useful format for basic formatted text documents such as instruction manuals, résumés, letters, and modest information documents. These documents, at minimum, support bold, italic and underline text formatting. Also typically supported are left-, center- and right-aligned text, font specification and document margins. Font and margin defaults, style presets and other functions vary according to program defaults. There may also be incompatibilities between different RTF versions, e.g. between the original 1987 RTF 1.0 specification and later versions, or between RTF 1.0-1.4 and RTF 1.5+ in their use of Unicode characters. And though RTF supports metadata like title and author, not all implementations support this. Nevertheless, the RTF format is consistent enough to be considered highly portable and acceptable for cross-platform use. Objects Microsoft Object Linking and Embedding (OLE) objects and Macintosh Edition Manager subscriber objects allow embedding of other files inside the RTF, such as tables or charts from a spreadsheet application. However, since these objects are not widely supported in programs for viewing or editing RTF files, they also limit RTF's interoperability. If software that understands a particular OLE object is not available, the object is displayed using a picture of the object which is embedded along with it. Pictures RTF supports inclusion of JPEG, PNG, Enhanced Metafile (EMF), Windows Metafile (WMF), Apple PICT, Windows device-dependent bitmap, Windows device-independent bitmap and OS/2 Metafile picture types in hexadecimal (the default) or binary format in an RTF file. Not all of these picture types are supported in all RTF readers, however. When an RTF document is opened in software that does not support the picture type of an inserted picture, the picture is not displayed. RTF writers usually either convert an inserted picture in an unsupported picture type to one in a supported picture type, or do not include the picture at all. For better compatibility with Microsoft products, some RTF writers include the same picture in two different picture types in one RTF file: one supported picture type to display, and one uncompressed WMF copy of the original picture to improve compatibility with some Microsoft applications like WordPad. This method increases the RTF file size dramatically. The RTF specification does not require this method, and several implementations do not include the WMF copy (e.g. AbiWord or Ted). For Microsoft Word, it is also possible to set a specific registry value ("ExportPictureWithMetafile=0") to prevent Word from saving the WMF copy. Fonts RTF supports embedding of fonts used in the document, but this feature is not widely supported in software implementations. RTF also supports generic font family names used for font substitution: roman (serif), Swiss (sans-serif), modern (monospace), script, decorative and technical. This feature is not widely supported either. 
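As a rough sketch of how a picture ends up inside an RTF file, the following hypothetical helper wraps raw PNG bytes in a \pict group in the default hexadecimal form. The file name, helper name, DPI assumption and the exact sizing control words used are illustrative choices, not a definitive writer; readers vary in which sizing fields they honour.

def rtf_embed_png(png_bytes, width_px, height_px, dpi=96):
    """Wrap PNG data in an RTF \\pict group (hexadecimal form); display size given in twips."""
    twips_w = width_px * 1440 // dpi       # 1 twip = 1/1440 inch
    twips_h = height_px * 1440 // dpi
    return ("{\\pict\\pngblip"
            "\\picw%d\\pich%d"             # picture dimensions
            "\\picwgoal%d\\pichgoal%d "    # desired display size in twips
            % (width_px, height_px, twips_w, twips_h)
            + png_bytes.hex() + "}")

with open("logo.png", "rb") as f:          # placeholder input file
    pict_group = rtf_embed_png(f.read(), 120, 40)

A writer aiming for maximum compatibility with older Microsoft applications would additionally emit an uncompressed WMF copy of the same image, as described above, at the cost of a much larger file.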
Annotations Since RTF 1.0, the RTF specification has supported document annotations/comments. The RTF 1.7 specification defined some new features for annotations, including the date stamp (there was previously only a time stamp) and parent annotations. When an RTF document with annotations is opened in an application that does not support RTF annotations, the annotations are not shown. Similarly, when a document with annotations is saved as RTF in an application that does not support RTF annotations, the annotations are not preserved in the RTF file. Some implementations, like AbiWord (since version 2.8) and IBM Lotus Symphony (up to version 1.3), may hide annotations by default or require some user action to display them. The RTF specification also supports footnotes, which are widely supported in RTF implementations (e.g. in OpenOffice.org, AbiWord, KWord and Ted, but not in WordPad). Endnotes are implemented as a variation on footnotes, so applications that support footnotes but not endnotes will render an endnote as a footnote. Microsoft products do not support comments within footers, footnotes or headers. Similarly, Microsoft products do not support footnotes in headers, footers, or comments. Inserting a comment or a footnote in one of these disallowed contexts may result in a corrupted document. Drawing objects The RTF 1.2 specification defined the use of drawing objects, known as shapes, such as rectangles, ellipses, lines, arrows and polygons. The RTF 1.5 specification introduced many new control words for drawing objects. However, many RTF implementations, such as Apache OpenOffice (drawing objects are supported in LibreOffice 4.0 onward) and AbiWord, do not support drawing objects. Applications which do not support RTF drawing objects do not display or save the shapes. Some implementations will also not display any text inside drawing objects. Security concerns Unlike Microsoft Word's DOC format, as well as the newer Office Open XML and OpenDocument formats, RTF does not support macros. For this reason, RTF was often recommended over those formats when the spread of computer viruses through macros was a concern. However, having the .RTF extension does not guarantee a safe file, since Microsoft Word will open standard DOC files renamed with an RTF extension and run any contained macros as usual. Manual examination of a file in a plain text editor such as Notepad, or use of the file command on a UNIX-like system, is required to determine whether or not a suspect file is really RTF. Enabling Word's "Confirm file format conversion on open" option can also assist, by warning that a document being opened is in a format that does not match the format implied by the file's extension and giving the option to abort opening that file. One exploit attacking a vulnerability in Microsoft Word was patched in April 2015. Since 2014 there have been malicious RTF files embedding OpenXML exploits. 
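A cheap way to apply that advice programmatically is to look at a file's leading bytes: genuine RTF always begins with the opening brace and \rtf control word, whereas a legacy binary .doc renamed to .rtf begins with the OLE compound-file signature. A minimal sketch (file path is a placeholder):

OLE_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"    # signature of legacy binary Office files

def looks_like_rtf(path):
    """Return True if the file really begins like an RTF document."""
    with open(path, "rb") as f:
        head = f.read(8)
    if head.startswith(OLE_MAGIC):
        return False                               # a renamed .doc or other OLE container
    return head.startswith(b"{\\rtf")

print(looks_like_rtf("suspect.rtf"))

This only checks the header, so it is a first-pass filter rather than a guarantee of safety, but it catches the common renamed-.doc trick described above.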
Implementations Each RTF implementation usually implements only some versions or subsets of the RTF specification. Many of the available RTF converters cannot understand all new features in the latest RTF specifications. The WordPad editor in Microsoft Windows creates RTF files by default. It once defaulted to the Microsoft Word 6.0 file format, but write support for Word documents (.doc) was dropped in a security update; read support was also dropped in Windows 7. WordPad does not support some RTF features, such as headers and footers. However, WordPad can read and save many RTF features that it cannot create, including tables, strikeout, superscript, subscript, "extra" colors, text background colors, numbered lists, right or left indent, quasi-hypertext and URL linking, and various line spacings. RTF is also the data format for "rich text controls" in MS Windows APIs. The default text editor for macOS, TextEdit, can also view, edit and save RTF files as well as RTFD files, and uses the format as its default. As of July 2009, TextEdit has limited ability to edit RTF document margins. Much older Mac word processing applications such as MacWrite and WriteNow had the same RTF abilities as TextEdit. The free and open-source word processors AbiWord, Apache OpenOffice, Bean, Calligra, KWord, LibreOffice and NeoOffice can view, edit and save RTF files. The RTF format is also used in the Ted word processor. Scrivener uses individual RTF files for all the text files that make up a given "project". Toolbox, SIL International's freeware application for developing and publishing dictionaries, uses RTF as its most common form of document output. RTF files produced by Toolbox are designed to be used in Microsoft Word, but can also be used by other RTF-aware word processors. RTF can be used on some ebook readers because of its interoperability, simplicity and low CPU processing requirements. Libraries and converters The open-source script rtf2xml can partially convert RTF to XML. GNU UnRTF is an open-source program to convert RTF into HTML, LaTeX, troff macros and other formats. pyth is a Python library to create and convert documents in RTF, XHTML and PDF format. Ruby RTF is a project to create Rich Text content via Ruby. RaTFink is a library of Tcl routines, free software, to generate RTF output, and a Cost script to convert SGML to RTF. RTF::Writer is a Perl module for generating RTF documents. PHPRtfLite is an API enabling developers to create RTF documents with PHP. Pandoc is an open source document converter with multiple output formats, including RTF. RTFGen is a project to create RTF documents via pure PHP. rtf.js is a JavaScript based library to render RTF documents in HTML. The macOS command line tool textutil can convert files between rtf, rtfd, text, doc, docx, wordml, odt and webarchive formats. The editor Ted can also convert RTF files to HTML and PS format. Criticism The Rich Text Format was the standard file format for text-based documents in applications developed for Microsoft Windows. Microsoft did not initially make the RTF specification publicly available, making it difficult for competitors to develop document conversion features in their applications. Because Microsoft's developers had access to the specification, Microsoft's applications had better compatibility with the format. Also, each time Microsoft changed the RTF specification, Microsoft's own applications had a lead in time-to-market, because competitors had to redevelop their applications after studying the newer version of the format. Novell alleged that Microsoft's practices were anticompetitive in its 2004 antitrust complaint against Microsoft. 
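For quick one-off conversions with the tools listed above, shelling out from a script is often enough. The sketch below assumes macOS with textutil available and Pandoc installed; the file names are placeholders.

import subprocess

# Convert an RTF file to HTML with the macOS textutil tool.
subprocess.run(["textutil", "-convert", "html", "notes.rtf", "-output", "notes.html"], check=True)

# Produce an RTF file from Markdown with Pandoc (RTF is one of Pandoc's output formats).
subprocess.run(["pandoc", "notes.md", "-s", "-o", "notes.rtf"], check=True)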
See also Rich Text Format Directory (.rtfd file type) Enriched text format List of document markup languages Comparison of document markup languages Revisable-Form Text (RFT), part of IBM's Document Content Architecture (DCA) TNEF, Transport Neutral Encapsulation Format, the Microsoft Outlook default message format References External links RTF 1.9.1 specification, March 2008, from Microsoft RTF 1.9.1 specification, March 2008, via earlier download from Microsoft and Internet Archive RTF 1.8 specification, April 2004, from ysagnier.free.fr RTF 1.6 specification, May 1999, from Microsoft RTF 1.5 specification, April 1997, from biblioscape.com RTF 1.0, 1.2, 1.3, 1.5 and 1.7 specifications, from the RTF Tools open-source project RTF 1.0 specification, June 1992, from the latex2rtf open-source project RTF Pocket Guide, book homepage RTF Character Set to Code Page, last edited June 2017 Computer file formats Technical communication Office document file formats Text file formats
67888797
https://en.wikipedia.org/wiki/American%20Thief
American Thief
American Thief is a 2020 action thriller film directed by Miguel Silveira. The film is about a teen hacker who, seeking revenge for his father's murder, becomes a pawn in a plot to derail the 2016 presidential election. The film had its international premiere at the 2020 Oldenburg International Film Festival. Synopsis Inspired by Haskell Wexler's seminal film Medium Cool, American Thief straddles fiction and documentary as its protagonists all become pawns in a plot to derail the 2016 presidential election. Filmed and scripted around true events between 2015 and 2019, American Thief is a fast-paced action/thriller that integrates fictional characters with real events, creating an atmosphere of uncertainty that seems all too familiar. Diop, a teenage hacker, wants to awaken society to the reality of overreaching government surveillance programs, while fellow hacker Toncruz wants to use the technology to avenge his father's murder. As Toncruz connects with internet criminals on the deep web, Paul, a disgruntled vlogger, rants about political conspiracy theories. Both Paul and Toncruz are contacted by an Unidentified User who claims he can provide what is needed to expose the truth. Meanwhile, Josephina, an artificial intelligence programmer, observes what unfolds as she attempts to contain the monster she's created. The result is an intense tale of dystopian conspiracy. Cast Xisko Maximo Monroe – Toncruz Khadim Diop – Diop Mason Ben Becher – Paul Hunter Josefina Scaro – Josephine Aronovich Madeline McCray – Mrs. Mason Julia Morrison – Meeks Kai Monroe – Jimmorries Fernando Frias de la Parra – Rogue Systems Specialist Cacau Rosa – Claudia Jackson Johnny Ma – Hacker / Whistleblower Production American Thief was shot between 2015 and 2020 in New York City and Washington DC. The film mixes documentary and fiction as it covers the events surrounding the election of Donald Trump in 2016. The film received the Jerome Foundation Production Grant. References External links 2020 films 2020 action thriller films American films American action thriller films Films set in New York City Films shot in New York City
290072
https://en.wikipedia.org/wiki/Atari%20XEGS
Atari XEGS
The Atari XE Video Game System (Atari XEGS) is an industrial redesign of the Atari 65XE home computer and the final model in the Atari 8-bit family. It was released by Atari Corporation in 1987 and marketed as a home video game console alongside the Nintendo Entertainment System, Sega's Master System, and Atari's own Atari 7800. The XEGS is compatible with existing Atari 8-bit family hardware and software. Without the keyboard, the system operates as a stand-alone game console. With the keyboard, it boots identically to the Atari XE computers. Atari packaged the XEGS as a basic set consisting of only the console and joystick, and as a deluxe set consisting of the console, keyboard, CX40 joystick, and XG-1 light gun. The XEGS release was backed by new games, including Barnyard Blaster and Bug Hunt, plus cartridge ports of older games, such as Fight Night (Accolade, 1985), Lode Runner (Broderbund, 1983), Necromancer (Synapse Software, 1982), and Ballblazer (Lucasfilm Games, 1985). Support for the system was dropped in 1992, along with the rest of Atari's 8-bit computers, the Atari 2600, and the Atari 7800. Development In 1984, following the video game crash of 1983 when Atari, Inc. had great financial difficulties as a division of Warner Communications, John J. Anderson of Creative Computing stated that Atari should have released a video game console in 1981 based on its Atari 8-bit family and compatible with that software library. The company instead released the Atari 5200, which had almost exactly the same technology as the 8-bit computers but was incompatible with their software. After Jack Tramiel purchased the company, Atari Corporation re-released two game consoles in 1986: the Atari 7800, which had previously been released in a brief test run in 1984; and a lower-cost redesign of the Atari 2600. Atari conceived the console in a plan to increase the company's console market share while improving sales of its 8-bit home computer family, which had started with the Atari 400 and 800. Providing a "beginning computer" and a "sophisticated game console" in one device was thought to convince more retailers and software developers to support the platform. Matthew Ratcliff, who had been a contributing editor for Antic magazine, recalled this: "Atari executives asked the heads of several major toy store chains which product they'd rather sell: the powerful 65XE home computer for about $80, or a fancy new game system for about $150. The answer was, 'You can keep the computer, give us that game machine!'" In May 1987, Atari's Director of Communications, Neil Harris, updated the online Atari community by outlining this plan, noting that the XEGS was intended to further the 8-bit line by providing mass merchants with a device that was more appealing to their markets. The XEGS is a repackaged Atari 65XE home computer, compatible with the existing range of Atari 8-bit computer software and peripherals, and thus can function as a home computer. At a more premium price, it co-existed with the Atari 7800 and remodeled Atari 2600 and was occasionally featured alongside those systems in Atari print ads and television commercials. Games The XEGS shipped with the Atari 8-bit version of Missile Command built in, Flight Simulator II bundled with the keyboard component, and Bug Hunt, which is compatible with the light gun. As the XEGS is compatible with the earlier 8-bit software, many games released under the XEGS banner are simply older games rebadged. 
This was done to the extent that some games were shipped in the old Atari 400/800 packaging, bearing only a new sticker to indicate that they are also compatible with the XEGS. Peripherals The XEGS was released in a basic set and a deluxe set. The basic set includes only the console and a standard CX40 joystick with a grey base to match the XEGS rather than its original black. The deluxe set consists of the console, the CX40 joystick, a keyboard which enables home computer functionality, and the XG-1 light gun. The keyboard and light gun were also released separately outside North America. This is the first light gun produced by Atari, and it is also compatible with the Atari 7800 and Atari 2600. The system can use Atari 8-bit computer peripherals, such as disk drives, modems, and printers. Reception Atari sold 100,000 XE Game Systems during the Christmas season in 1987, every unit that was produced during its launch window. Matthew Ratcliff called the game and computer combination "a brilliant idea", which "has been selling out almost as fast as toy stores can get them in". He said, "The XEGS may not seem like such a hot idea to serious Atari computer users. But just think about it. If you were afraid of computers or don't have the foggiest idea what to do with one, you'd have absolutely no interest in an Atari 65XE, even if it could play great games. However, you'd probably have no compunction about buying a great video game system, the XEGS, as a new addition to the family entertainment center." In 1988, he wrote in Antic magazine that, in order to switch between light gun and joystick games, active XEGS gamers were frustrated by the need to continually re-plug their devices and power-cycle the system, a consequence of the system's lack of autodetection that is aggravated by its awkwardly downward-slanting ports. He said "Barnyard Blaster and Bug Hunt could have been just a bit smarter" by including the simple routine that he was forced to write and publish as a workaround. See also History of Atari Atari 8-bit computers Atari 8-bit peripherals Commodore 64 Games System References Atari 8-bit family Xegs Home video game consoles Backward-compatible video game consoles Third-generation video game consoles 1987 in video gaming 1980s toys Computer-related introductions in 1987 Products introduced in 1987 Products and services discontinued in 1992
9022289
https://en.wikipedia.org/wiki/Salt%20of%20the%20Earth%20%28song%29
Salt of the Earth (song)
"Salt of the Earth" is the final song from English rock band the Rolling Stones album Beggars Banquet (1968). Written by Mick Jagger and Keith Richards, the song includes an opening lead vocal by Richards. It is the second official track by the group to feature him on lead vocal (the first being "Something Happened to Me Yesterday" from Between the Buttons). Composition and lyrics The song was reportedly inspired by John Lennon, with Jagger attempting to write a working class anthem. The lyrics were written primarily by Jagger and salute the working class: In a twice-repeated stanza, the singer professes a distance from his subject that seemingly belies the sentiment of the verses: The song uses a quote that refers to a passage in the Bible where Jesus is trying to encourage people to give the best of themselves "Salt of the Earth" features the acoustic work of Richards, typical of most songs from Beggars Banquet. Richards also performs the slide guitar throughout the song (Brian Jones, who often played slide on previous songs, was absent from these sessions). While some songs from Beggars Banquet were recorded by Jagger and Richards using a personal tape recorder, "Salt of the Earth" was recorded at London's Olympic Sound Studios in May 1968. Featuring on the song are the Los Angeles Watts Street Gospel Choir and a piano performance by Nicky Hopkins. These additions, and their prominence near the end of the song, are further developed on their next album Let It Bleeds closing song, "You Can't Always Get What You Want". Critical reception Jim Beviglia ranked "Salt of the Earth" the 25th best Rolling Stones song in Counting Down the Rolling Stones: Their 100 Finest Songs. Paste called it "a simple ode to the proletariat" and ranked it 37th in its Top 50 Rolling Stones songs. Rolling Stone ranked it 45th in its countdown of the band's top 100 songs, praising Richards' vocals and "gospel reverie." Other appearances "Salt of the Earth" has a unique live history. It has only been played once to an instrumental playback and live five times. The first filmed rendition was for the taping of the 1968 television special The Rolling Stones Rock and Roll Circus (not released until 1996). However, this version features Keith Richards and Mick Jagger singing live while sitting with the audience as the backing track that appeared on Beggars Banquet is played. It was then revived 21 years later for three performances in Atlantic City during the 1989-1990 Steel Wheels/Urban Jungle Tour, where the Stones were joined onstage by Axl Rose and Izzy Stradlin of Guns N' Roses. Axl and Izzy were given their choice of songs, and when they chose this, the Stones had forgotten it, and had to listen to it to remember. Jagger and Richards performed it as a duet for the 2001 "The Concert for New York City", commemorating the fallen of September 11, 2001, although they changed the lyrics to make its message more positive (most notably "Let's drink to the good and the evil" was changed to "Let's drink to the good not the evil"). Its only other performance was in Twickenham Stadium on 20 September 2003 during the Licks Tour. Folk singers Joan Baez and Judy Collins each recorded versions of the song (in 1971 and 1975, respectively). Baez included the song in her set during her October 2011 performance for Occupy Wall Street protesters in Manhattan. Rotary Connection covered the song for their album, Songs, (1969). Jamaican Reggae musician Dandy Livingstone covered the song in 1971. 
A soul version was recorded by Johnny Adams for his album From the Heart (1984). Blues singer Bettye LaVette covered the song on her 2010 album, Interpretations: The British Songbook. "Salt of the Earth" is also the title of a documentary on the Rolling Stones' 2005–06 A Bigger Bang world tour. Personnel Mick Jagger – vocals; Keith Richards – guitars, vocals; Bill Wyman – bass guitar; Charlie Watts – drums; Nicky Hopkins – piano; Watts Street Gospel Choir – background vocals Cover versions 1970: Jamaican reggae band the Cables as a single, released on Trojan Records in the UK. 1971: Dandy Livingstone as a single, also on Trojan. 1971: Joan Baez on the studio album, Blessed Are... 1975: Judy Collins on the studio album, Judith. 2001: Proud Mary on the studio album, The Same Old Blues. References Bibliography Davis, Stephen. Watch You Bleed: The Saga of Guns N' Roses. New York: Penguin Group, 2008. 1968 songs 1970 singles Song recordings produced by Jimmy Miller Songs written by Jagger–Richards The Rolling Stones songs Trojan Records singles