id (stringlengths 3–8) | url (stringlengths 32–207) | title (stringlengths 1–114) | text (stringlengths 93–492k)
---|---|---|---
46322092
|
https://en.wikipedia.org/wiki/Laurie%20Hendren
|
Laurie Hendren
|
Laurie Hendren (December 13, 1958 – May 27, 2019) was a Canadian computer scientist noted for her research in programming languages and compilers.
Biography
Hendren received a B.Sc. and M.Sc. in computer science from Queen's University, Kingston, in 1982 and 1984 respectively. She received a Ph.D. in computer science from Cornell University in 1990.
She then joined the School of Computer Science at McGill University as an assistant professor in 1990. While there, she was promoted to associate professor in 1995 and full professor in 2001. She also served as Associate Dean (Academic) for the Faculty of Science at McGill University from 2005 to 2014. In 2014, she became the 5 of diamonds in the Notable Women of Computing card deck.
Awards and notable achievements
Hendren was awarded the Leo Yaffe Award for Excellence in Teaching in the Faculty of Science at McGill University for the academic year 2006–2007. She was made an ACM Fellow in 2009, awarded a Canada Research Chair in 2011, and elected as a fellow of the Royal Society of Canada in 2012.
Hendren was the programming languages area editor of the ACM Books series and served as program chair of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI).
In 2019, Hendren was awarded the senior AITO Dahl-Nygaard Prize, but died before the ECOOP conference at which the prize is usually awarded. It was thus awarded posthumously.
Research projects
Hendren led or co-led several large open-source research projects at McGill University, including:
Soot: a framework for analyzing and transforming Java and Android Applications
SableVM: an open implementation of a Java virtual machine
abc: the AspectBench Compiler for AspectJ
McLab: compiler tools for array-based languages
HIG: health informatics research for radiation oncology
References
External links
McGill University: Laurie Hendren, School of Computer Science
Blog: Flat-chested warriors blog about breast cancer and the Goldilocks Mastectomy
Papers and citations: Google Scholar Profile for Laurie J. Hendren
1958 births
2019 deaths
Canadian women computer scientists
Canadian computer scientists
Fellows of the Association for Computing Machinery
Fellows of the Royal Society of Canada
Cornell University alumni
McGill University faculty
People from Peterborough, Ontario
|
4778882
|
https://en.wikipedia.org/wiki/Islamic%20University%20of%20Technology
|
Islamic University of Technology
|
Islamic University of Technology, commonly known as IUT, is an international university located in Gazipur, Bangladesh. IUT offers undergraduate and graduate programmes in engineering and technical education. It is the only international engineering university in Bangladesh.
IUT is a subsidiary organ of Organisation of Islamic Cooperation (OIC). The university receives direct endowment from OIC member states and offers scholarships to some students in the form of tuition waiver and free accommodation.
The campus was designed by the Turkish architect Pamir Mehmet, an MIT graduate.
Accreditation
IUT is affiliated with or accredited by the following organizations:
International Association of Universities
Federation of the Universities of the Islamic World
Association of the Universities of Asia and the Pacific (AUAP)
Institution of Engineers, Bangladesh (IEB)
University Grants Commission of Bangladesh (UGC)
History
Islamic University of Technology, established in 1978, was first known as Islamic Center for Technical, Vocational Training and Research (ICTVTR). It was proposed in the 9th Islamic Conference of Foreign Ministers (ICFM) held in Dakar, Senegal on 24–28 April 1978. The establishment of IUT in Dhaka, Bangladesh was then approved by the foreign ministers. All the members of the Organisation of the Islamic Conference (OIC) agreed to co-operate for the implementation of the project.
The implementation of the infrastructure commenced with the first meeting of the Board of Governors in June 1979. The foundation stone of ICTVTR was laid by President Ziaur Rahman of Bangladesh on 27 March 1981 in the presence of Yasir Arafat, the then-chairman of the PLO, and Habib Chatty, the then-Secretary General of the OIC. The construction of the campus was completed in 1987 at a cost of US$11 million (about US$25 million in 2019 dollars). ICTVTR was formally inaugurated by Hossain Mohammad Ershad, president of Bangladesh, on 14 July 1988.
The 22nd ICFM held in Casablanca, Morocco on 10–11 December 1994 renamed the ICTVTR as Islamic Institute of Technology (IIT). IIT was formally inaugurated by Begum Khaleda Zia, prime minister of Bangladesh on 21 September 1995. The 28th ICFM held in Bamako, Republic of Mali on 25–29 June 2001 commended the efforts of IIT and decided to rename the IIT as Islamic University of Technology (IUT). IUT was formally inaugurated by Begum Khaleda Zia, prime minister of Bangladesh on 29 November 2001.
Over the past three decades IUT has expanded with the construction of new academic buildings, halls of residence, and student facilities.
The university started offering regular courses from December 1986 and completed 34 academic years in 2021.
Administration and governance
Administrative structure
Joint General Assembly: The Islamic Commission for Economic, Cultural and Social Affairs, consisting of all member states of the OIC, acts as the Joint General Assembly of the subsidiary organs, including IUT. This assembly acts as the General Assembly of the university. It determines general policy and provides general guidance. It examines the activities of the university and submits recommendations to the ICFM. The internal rules and regulations that govern the university's activities are shaped through the decisions of this assembly. It elects the members of the governing body and examines the annual budget. The Finance Control Organ of the university audits the university's finances and submits its report to this assembly.
Governing Board: The board is composed of nine members, including a member from the host country, selected by the Joint General Assembly with regard to geographical distribution and the importance of the countries and peoples represented. The Secretary General of the OIC or his representative and the vice-chancellor of the university are ex-officio members. The board reviews the activities and programs of IUT and sends recommendations to the Joint General Assembly. It consults with the General Secretariat on measures to promote IUT and approves the final curricula of training and research programs. One of its prime responsibilities is to grant degrees, diplomas and certificates according to academic regulations.
Executive Committee: This is an organ of the Governing Board, empowered to deal, between meetings of the board, with any matter referred to it by the vice-chancellor or delegated to it by the board. All interim actions of this committee are reported to the Governing Board. The committee consists of the secretary of the Ministry of Labour and Employment of Bangladesh as chairman, the heads of the diplomatic missions of OIC member states in Bangladesh (nominated by the Governing Board), and the vice-chancellor of IUT as a general member.
Academic Council: Subject to other provisions, this council advises the Governing Board on all academic matters. It ensures the proper conduct of teaching, training and examinations and administers the award of fellowships, scholarships, medals and prizes.
Some statutory committees are formed to ensure management of programmes and activities in the relevant and related fields. These committees include the Administrative Advisory Committee, Departmental Committee, Disciplinary Committee, Finance Committee, Planning and Development Committee, Research Committee, Selection Committee, Students' Welfare Committee, Syllabus Committee.
List of vice-chancellors
Rafiquddin Ahmad (Director, May 1979 – April 1987)
Abdul Matin Patwari (Director General, May 1987 – April 1999)
M Anwar Hossain (May 1999 – January 2003)
Muhammad Fazli Ilahi (January 2003 – March 2008)
M. Imtiaz Hossain (April 2008 – March 2016)
Munaz Ahmed Noor (April 2016 – February 2018)
Omar Jah, Acting Vice-Chancellor (February 2018 – August 2020)
Prof. Dr. M. Rafiqul Islam (September 2020 – present)
Academics
The academic year at the university begins in January. Each academic year is composed of two semesters: winter and summer. An undergraduate degree takes four years of full-time study to complete and a master's degree takes two years. The medium of instruction is English. Commencement generally takes place at the end of every year for the graduating students. The top student from each department is awarded the "IUT Gold Medal", and the top student across all departments is awarded the international "OIC Gold Medal" for academic excellence and overall performance.
Faculties and departments
Undergraduate and graduate programmes are offered by six departments under two faculties. The faculties are Faculty of Engineering and Faculty of Science & Technical Education.
Faculty of Engineering
Department of Computer Science & Engineering (CSE)
Department of Electrical & Electronic Engineering (EEE)
Department of Mechanical & Production Engineering (MPE)
Department of Civil & Environmental Engineering (CEE)
Faculty of Science & Technical Education
Department of Business and Technology Management (BTM)
Department of Technical and Vocational Education (TVE)
Institute
Institute of Energy and Environment (IEE)
Rankings
In the 2022 edition of QS World University Rankings, the university ranked 401-450 among the Asian universities and sixth in Bangladesh.
Enrollment
IUT was established to support students from the OIC member states. Students from South Asia, the Middle East and Africa join the university every year. Admissions are highly competitive, with thousands of applicants competing for a limited number of seats. Students are admitted once per academic year. The admission process starts in August and concludes by December.
Undergraduate programmes
Every year 630 students are accepted to various undergraduate programmes. Prospective international students from over fifty OIC member states are selected by the nominating authority of the respective member state.
Students from the host country Bangladesh are selected based on a placement test conducted by the university. Thousands of initial applicants are screened to select about 5,500 candidates for the placement test based on their secondary and higher-secondary level results (grades in Mathematics, Physics, Chemistry and English). Of the 5,500 students appearing for the placement test, only the top 10% are accepted for admission.
Graduate programmes
Students for Master's and PhD programmes are selected by Post Graduate Committee (PGC) of respective departments. Similar to undergraduate admission, international students are selected by the nominating authority of the respective OIC member state. Students from the host country are required to appear for interview/placement test.
Engineering and technology programmes
Graduate programmes
Doctor of Philosophy (PhD)
Computer Science and Engineering
Electrical and Electronic Engineering
Mechanical Engineering
Civil Engineering
Master of Science / Master of Engineering
Computer Science and Engineering
Computer Science and Application
Electrical and Electronic Engineering
Mechanical Engineering
Civil Engineering
Master of Science in Technical Education
Computer Science and Engineering
Electrical and Electronic Engineering
Mechanical Engineering
Undergraduate programmes
Bachelor of Science
Electrical and Electronic Engineering
Computer Science and Engineering
Software Engineering
Mechanical Engineering
Industrial and Production Engineering
Civil Engineering
Bachelor of Business Administration
Business & Technology Management (BTM)
Bachelor of Tourism and Hospitality Technology
Bachelor of Science in Technical Education
Computer Science and Engineering
Electrical and Electronic Engineering
Mechanical Engineering
Campus
Academic buildings
IUT has three academic buildings and a network of laboratories/workshop buildings.
First Academic Building
Second Academic Building
Third Academic Building
Administrative building
The administrative building is used for the offices of the Vice-Chancellor, Pro Vice-Chancellor, Registrar, Comptroller and other administrative staff.
Library
The library is located on the first floor of the Library/Cafeteria building, overlooking the lake on the eastern and western sides. It is divided into two sections: General and Research/Reference. The library holds books on engineering, technical and vocational subjects. The library subscribes to numerous online and printed technical journals to support research work. An automated circulation system allows users to borrow books using a bar-code system. The library catalog is available online.
Auditorium
IUT has a fully air-conditioned multi-purpose auditorium. The auditorium can accommodate about 600 people. The degree/diploma awarding convocation ceremony, seminars, cultural functions and examinations are held in the auditorium. The auditorium has a stage, green room, special guest room, film-projection facilities along with a conference room and balconies along the adjacent lake.
Student life
Student housing
IUT has three Halls of Residence for on campus accommodation.
North Hall of Residence
South Hall of Residence
Female Hall of Residence
The north hall and the south hall are for male students, while the female hall is for female students. The rooms in the halls are fully furnished, and each room can accommodate up to four students. Two common-facilities buildings serve the halls: one serves the north and south halls, while the other serves the female hall.
The administrative head of a hall of residence is a Provost, typically a senior faculty member. The provost is supported by Assistant Provosts and support staff.
Cafeterias
IUT has three self-service cafeterias (Central, North and Female) where residential students can take their meals. The cafeterias serve breakfast, lunch, evening snacks and dinner. Non-residential students can purchase meals from the cafeteria using their smart card. The cafeterias are managed by the Cafeteria Committee composed of faculty members, students and administrative staff.
Mosque
There is a mosque at the heart of the campus. The mosque is two-storied, with an adjacent minaret. It is open to the public during the Friday Jumma prayer.
Athletics
The university attaches great importance to co-curricular activities and encourages students to participate in various games and sports. The Games and Sports Committee, consisting of student members and a few staff members, looks after the indoor and outdoor games. It also organizes the annual athletic competition.
Sports infrastructure
Student center
The university has two student centers with a TV room, a newspaper room, an indoor games room, and facilities for board games, chess and table tennis.
Gymnasium
The university gymnasium is a spacious area with an indoor basketball court and sporting equipment.
Fitness center
The university has a fitness center. This center is equipped with various fitness training equipment including parallel bars, uneven bars, and weight exercise equipment.
Outdoor facilities
Outdoor sporting facilities include football grounds, volleyball grounds, basketball courts, lawn tennis court, cricket practice pitch and badminton courts.
Student organizations
Student societies
IUTDS – IUT Debating Society
Established in 2002 with the motto "Debating for Knowledge Dissemination", IUT Debating Society (IUTDS) is one of the most successful debating clubs in Bangladesh. Throughout its journey, IUTDS has achieved success both in Bangladesh, by becoming national champions, and internationally, by ranking highly at the World Universities Debating Championship (WUDC). Along with these tangible achievements, IUTDS has also established a tradition and culture of free speech and exchange of ideas among its members.
IUTCS – IUT Computer Society
IUT Computer Society (IUTCS) was formed in 2008 by students of the Department of Computer Science and Information Technology (later renamed the Department of Computer Science and Engineering). The activities of the society include programming classes, programming contests, application development classes, co-curricular aid and projects. In addition, workshops on mainstream applications and technologies, as well as seminars with notable personalities and professionals in the ICT sector, are a regular part of the society's calendar.
IUTSIKS - IUT Society of Islamic Knowledge Seekers
The motto of IUT Society of Islamic Knowledge Seekers is "Seeking the true knowledge of Islam". IUTSIKS was established in 2008. Initially it was called IUT Islamic Study Society. In 2016, it was renamed to IUTSIKS.
IUTPS – IUT Photographic Society
Islamic University of Technology Photographic Society (IUTPS) is an on-campus organization that aims to bring together students who share an interest in photography and creativity. Founded in 2010, IUTPS is now a well-known name among university-level photography clubs. IUTPS arranges classes, organizes regular photo walks, and holds intra-university photography competitions.
IUTCBS - IUT Career and Business Society
The Career and Business Society prepares students for their professional lives by engaging in career-oriented activities such as career fairs and networking events. It creates a platform for students to develop communication skills and prepares them for the professional world through grooming sessions and seminars.
IUT MODEL OIC
IUT MODEL OIC is one of the most active clubs among the recognized societies at IUT. The club was established in 2017 with a vision of giving youth a platform to learn more about the OIC and the Muslim world, and of helping IUT students develop their diplomatic and leadership skills as well as their public speaking and critical thinking abilities. The club is part of a project of the OIC Youth Forum, and its motto is "Learners today, Leaders tomorrow".
International organization chapters
IEEE IUT Student Branch
Founded in 1963 in the US, the Institute of Electrical and Electronics Engineers (IEEE) is the world's largest technical professional organization, with over 420,000 members in 160 countries. The IEEE IUT Student Branch is a chapter of the Bangladesh Section of IEEE. The branch was founded in 1999 by students of the EEE department.
IMechE IUT Student Chapter
Founded in 1847 in the UK, the Institution of Mechanical Engineers (IMechE) has over 120,000 members in 140 countries. The IMechE IUT Student Chapter was founded by students of the MPE department.
ASME IUT Student Section
Founded in 1880 as an engineering society focused on mechanical engineering in North America, the American Society of Mechanical Engineers (ASME) is now a multidisciplinary global organization with over 110,000 members in more than 150 countries. The ASME IUT Student Section was initiated by students of the MPE department in 2021.
ACI IUT Student Chapter
Founded in 1904 in the USA, the American Concrete Institute (ACI) is a leading authority and resource worldwide for individuals and organizations involved in concrete design, construction, and materials. ACI has over 100 chapters, 200 student chapters, and 30,000 members spanning over 120 countries.
The initiative to start the ACI IUT Student Chapter was taken by students and faculty members of the CEE department in April 2020. The chapter started its journey on 4 May 2020.
Alumni association
IUT Alumni Association (IUTAA) was formed in 2004 by IUT alumni to establish a common platform for social, cultural and professional exchanges among the alumni. IUTAA is run by a ten-member executive committee led by the president of the association. The executive committee is elected by registered graduates through a voting process. The association maintains an office on the IUT campus.
Technology festivals
Every year four technology festivals are organized by the four departments of the university. Each departmental festival is organized by the students of the respective department with support from the faculty members. Students from various universities and colleges participate in the competitions of the fests. Competitions are primarily based on engineering, technology, general knowledge and business. The university is open to the public during these festivals.
See also
Committee on Scientific and Technological Cooperation, COMSTECH
List of universities in Bangladesh
Bangladesh University of Engineering and Technology
References
Further reading
External links
Official website of OIC
Organisation of Islamic Cooperation subsidiary organs
Technological institutes of Bangladesh
Engineering universities and colleges in Bangladesh
Public universities of Bangladesh
1981 establishments in Bangladesh
Organisations based in Gazipur
Universities and colleges in Gazipur District
International universities
Islamic universities and colleges in Bangladesh
|
10933722
|
https://en.wikipedia.org/wiki/Michael%20Arias
|
Michael Arias
|
Michael Arias (born 1968) is an American-born filmmaker active primarily in Japan.
Though Arias has worked variously as visual effects artist, animation software developer, and producer, he is best known for his directorial debut, the anime feature Tekkonkinkreet, which established him as the first non-Japanese director of a major anime film.
Early life
Michael Arias was born in Los Angeles, California. His father, Ron Arias (born 1941), is a former senior writer and correspondent for People magazine and a highly regarded Chicano writer. Michael Arias' mother, Dr. Joan Arias, was a professor of Spanish and an IBM software sales specialist.
When still a young boy, Arias often watched movies in the theater with his parents and borrowed 16mm prints from a local public library for screening at home; it was at this stage in his life that he developed his passion for cinema.
Arias graduated from the Webb School of California at the age of 16. He then attended Wesleyan University in Connecticut, majoring in linguistics for two years, before dropping out to pursue a career as a musician. Michael's early musical associates include Moby and Margaret Fiedler McGinnis.
Soon after quitting Wesleyan, Arias moved to Los Angeles, abandoned his musical ambitions and, through the efforts of a family friend, began working in the film industry.
Career
Early filmmaking career
Michael Arias' early filmmaking career is marked by stints in both the U.S. and Japan, working in VFX, CG production and software development, and as a producer of animated films.
Dream Quest Images
Michael Arias began his film career in 1987 at nascent visual effects powerhouse Dream Quest Images (DQ), first as an unpaid intern and then as a full-time employee and IATSE member. The bulk of his time at DQ was spent as a camera assistant on the motion control stages, working on such effects-heavy Hollywood films as The Abyss, Total Recall, and Fat Man and Little Boy.
At the time, the visual effects industry had only just begun adopting digital technologies, and analog techniques such as motion control and stop motion photography, miniatures, optical compositing, matte painting, and pyrotechnics still dominated. Arias, by his own account, flourished in the hands-on environment of DQ ("a big tinkertoy factory run by car nuts and mad bikers").
Back To The Future: The Ride
After two years of working at Dream Quest, Arias returned to the East Coast with the intention of finishing his studies, this time at NYU's Music Technology program. Soon after enrolling though, Arias was contacted by visual effects veteran and fellow Abyss alumnus Susan Sitnek, who invited Arias to join the crew of Universal Studios’ immersive attraction Back To The Future: The Ride (BTTFTR), helmed by visual effects legend Douglas Trumbull. Once relocated to the Berkshires, where pre-production was underway, Arias was drafted by Trumbull to animate the attraction's flight-simulator-style ride vehicles. Of his time working under Trumbull, Arias recalls, "Doug was – IS – such an inspiring figure. For me and the other younger crew, including John Gaeta, now VFX Supervisor on the Matrix films, Doug was so generous with his knowledge; such a very warm and receptive and articulate and creative guy."
Arias' association with Trumbull proved fortuitous, not only for the experience of working daily with Trumbull himself, but also because it resulted in Arias' first trip to Japan, during which the two toured the Osaka Expo and visited post-production monolith Imagica and video game giant Sega Enterprises. That first visit, combined with Arias' friendship with key members of BTTFTR's largely Japanese modelmaking crew, set the stage for Arias' subsequent long-term stay in Japan.
Imagica and Sega Enterprises
In 1991 Arias accepted an offer to work as a motion-control camera operator in Imagica's Special Effects department and moved to Tokyo. After less than a year at Imagica, he was invited by up-and-coming game producer Tetsuya Mizuguchi to join a newly formed computer graphics unit at Sega Enterprises' Amusement Research and Development facility. At Sega, Arias co-directed and animated the ridefilm Megalopolice: Tokyo City Battle (featured in SIGGRAPH 1993's Electronic Theater).
Syzygy Digital Cinema
In 1993 Arias returned to the US and teamed up with renowned New York City title designers Randall Balsmeyer & Mimi Everett, with whom he co-founded CG design boutique Syzygy Digital Cinema, creators of digital sequences for David Cronenberg’s M. Butterfly, Joel and Ethan Coen’s The Hudsucker Proxy, Robert Altman’s Prêt-à-Porter, and Spike Lee’s Crooklyn and Clockers. Their title sequence for M. Butterfly was honored by inclusion in the SIGGRAPH 1994 Screening Room and Montreal's Cinéma Du Futur festival of the same year.
Softimage
Exhausted by the demands of production and hoping to gain further experience developing computer graphics software, Arias accepted an offer from 3D-animation software innovator Softimage to join their newly formed Special Projects group, a "S.W.A.T." team of artists and engineers established to assist key high-end customers on-site.
Encouraged by colleagues, Arias quickly immersed himself in the Mental Ray rendering API and thereafter began experimenting with techniques for simulating traditional animation imagery using computer graphics tools. This research led to Arias developing and eventually patenting Softimage's Toon Shaders, rendering software for facilitating the integration of computer graphics imagery with cel animation. Newly minted Toon Shaders in hand, Arias worked closely with the staff of DreamWorks Animation and Studio Ghibli to add a distinct visual flavor to the traditional/digital hybrid animation of films such as The Prince of Egypt, The Road to El Dorado, and Hayao Miyazaki's features.
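The core idea behind this kind of toon (cel) shading is to quantize a smooth lighting term into a few flat bands so that rendered surfaces resemble hand-painted cels. The following Python sketch illustrates only that banding step; it is not the patented Softimage Toon Shaders implementation, and the function names and band thresholds are invented for illustration.

```python
# Minimal cel/toon-shading sketch: quantize Lambertian diffuse lighting into
# flat bands. Illustrative only; not the Softimage Toon Shaders code.
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def dot(self, other: "Vec3") -> float:
        return self.x * other.x + self.y * other.y + self.z * other.z

    def normalized(self) -> "Vec3":
        n = math.sqrt(self.dot(self))
        return Vec3(self.x / n, self.y / n, self.z / n)

def toon_shade(normal: Vec3, light_dir: Vec3, bands=(0.75, 0.35)) -> float:
    """Return a flat intensity level instead of a smooth gradient."""
    # Standard Lambertian term: cosine of the angle between normal and light.
    diffuse = max(0.0, normal.normalized().dot(light_dir.normalized()))
    # Quantize into discrete bands (thresholds here are arbitrary).
    if diffuse > bands[0]:
        return 1.0   # fully lit cel
    if diffuse > bands[1]:
        return 0.6   # mid-tone cel
    return 0.25      # shadow cel

# A surface tilted 60 degrees away from the light falls into the mid-tone band.
print(toon_shade(Vec3(0.0, 1.0, 0.0), Vec3(0.0, 0.5, math.sqrt(3) / 2)))
```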
Tekkonkinkreet Pilot Film
In 1995, after establishing himself definitively in Tokyo, Arias was introduced by a friend to Taiyō Matsumoto's manga Tekkonkinkreet, a work that profoundly affected him. Tekkonkinkreet (Tekkon) is a metaphysical coming-of-age story concerning two orphans and their struggle to survive in Treasure Town, a pan-Asian metropolis beset by evil. Of first discovering Tekkon, Arias recalls that a friend loaned him the manga to read, "And that was it. Hooked. ...I cried many times reading it, also a new experience for me to be moved to tears by a manga." In November 1997, a conversation with animation auteur Kōji Morimoto, who had shown interest in Arias' software projects, led to Arias' introduction to manga artist Taiyō Matsumoto. From there, what had begun as a simple software demo for Morimoto rapidly escalated to a full-fledged all-CG feature-film project, helmed by Morimoto, with computer graphics efforts directed by Arias himself.
Though the completed four-minute pilot film went on to take an Outstanding Performance award for Non-Interactive Digital Art at the Japan Media Arts Festival and to be featured in the SIGGRAPH 2000 Animation Theater, the project was abandoned shortly thereafter for lack of funding and director Morimoto's flagging interest in Tekkonkinkreet.
The Animatrix
Then, in 2000, while still under contract to Softimage, Michael accepted an invitation from Joel Silver and Lilly and Lana Wachowski (the Wachowskis) to produce Warner Bros’ Matrix-inspired animation anthology The Animatrix, a project that consumed him for over three years. On being pegged to produce The Animatrix, despite his lack of experience producing, Arias recounts, "I really had to draw on a great deal of experience that had sat unused in the background while I’d been pursuing software development. Everything I’d learned until this point: a brief career in recording studios, composing music and doing sound effects for short films in college, having my own company, working in special effects. It was a great chance to exercise some dormant (or damaged) brain cells."
Arias worked closely with the Wachowskis to refine the project's unique specifications: though initially conceived of as a television series, The Animatrix evolved into a collection of nine non-episodic animated shorts, each six to ten minutes long. With co-producers Hiroaki Takeuchi and Eiko Tanaka (president of maverick animation boutique Studio 4°C, where much of The Animatrix was animated), Arias ultimately developed and produced eight of the nine Animatrix segments (the lone exception being a CG-animated short created by Square Pictures). To helm the films, Arias and his partners assembled a "dream team" of anime luminaries that included Yoshiaki Kawajiri, Kōji Morimoto, Shinichiro Watanabe, and Mahiro Maeda. The Animatrix was a commercial success and went on to garner the 2004 ASIFA Annie Award for Outstanding Achievement in an Animated Home Entertainment Production.
Recent filmmaking career
Michael Arias' recent career has been focused primarily on directing.
Tekkonkinkreet
In 2003, while working on The Animatrix, Arias picked up Tekkonkinkreet again. Armed with an English-language screenplay penned by screenwriter Anthony Weintraub, and encouraged by mentor Morimoto, Arias moved forward with plans to revive Tekkon at Studio 4°C, with Animatrix collaborator and 4°C president Eiko Tanaka producing and Arias directing.
The film was completed in August 2006 and premiered at the Tokyo International Film Festival soon thereafter.
Museum of Modern Art (MoMA) curator Barbara London named Tekkonkinkreet "Best Film of 2006" in her Art Forum roundup, and subsequently arranged for the film's North American premiere to be held at MoMA.
With his adaptation of Tekkonkinkreet, Arias, with Production Designer Shinji Kimura, had re-imagined the manga's Treasure Town as a chaotic pan-Asian hybrid, part Hong Kong, part Bombay, with futuristic and industrial elements densely layered over a foundation that borrowed much from images of Shōwa-era Tokyo. New York Times critic Manohla Dargis, in her review of Tekkonkinkreet, describes Treasure Town as "a surreal explosion of skewed angles, leaning towers, hanging wires, narrow alleys and gaudily cute flourishes that bring to mind a yakuza cityscape by way of a Hello Kitty theme park." Indeed, Tekkon's sumptuous art direction was widely praised, with Production Designer Kimura receiving the Best Art Direction award at the 2008 Tokyo International Anime Fair. Tekkon was further lauded, not only for Arias' innovative use of computer graphics techniques and seamless integration of digital and traditional animation, but also for the film's handmade, documentary-style approach to storytelling. After an early Tekkon screening at the Los Angeles Asian Pacific Film Festival, filmjourney.org editor Doug Cummings elaborated:
Arias’ angles and compositions are inventive and striking, and most impressively, he incorporates a bevy of live action camera techniques: handheld framing, long tracking shots through corridors, rack focusing and shifting depths of field–that generate considerable immediacy and environmental realism (despite the obvious hand-drawn artifice). More than simple technological advances, these elements have long been untapped by feature animation due to their inability to be storyboarded – they’re traditional luxuries of live action spontaneity. For all the accolades bestowed upon Alfonso Cuarón’s digitally-composited tracking shots in Children of Men, Arias’ techniques here are arguably greater achievements.
Though Tekkonkinkreet was considered a hit locally and generally well-received by critics and audiences worldwide (particularly in France, where author Taiyō Matsumoto's work is well known among manga readers), North American anime fans questioned Arias' filmmaking credentials and criticized his decidedly non-purist approach to adapting manga to anime (including his decision to work from Anthony Weintraub's English-language screenplay rather than a Japanese text). Online animation forum Animation Insider pointedly asked, "What in the hell does Michael Arias think he's doing?"
In defense of Weintraub's screenplay, Arias explained to readers of AniPages Daily, "He really got it right – the story of Treasure Town, the sense of doom, the action in Kiddie Kastle all fit together very seamlessly." Regarding Tekkonkinkreet's evident subversion of (so-called) traditional animation conventions, he added, "I wanted to do things differently.... Ōtomo once said to me and [chief animation director] Nishimi, 'if you're not doing things differently you shouldn't even bother'."
In the final analysis, Tekkonkinkreet remains a milestone in Japanese animation. It was awarded Japan's prestigious Ōfuji Noburō Award at home, went on to compete for two awards at the 57th Berlin International Film Festival, and later won the 2008 Japan Academy Prize for Animation of the Year. The Guardian listed Tekkonkinkreet third in its roundup of the ten most underrated movies of the decade.
Association with Asmik Ace
Summer 2007 was marked by the formalization of the long-standing relationship between Arias and the Japanese film distribution and production company Asmik Ace Entertainment. Arias is the first to join the roster of Asmik's artist management division.
Short film: Okkakekko
Shortly after finishing publicity for Tekkonkinkreet, Arias began writing and directing Okkakekko, one of fifteen one-minute animated shorts comprising the NHK animation anthology Ani*Kuri15. Individual segments were streamed from the official Ani*Kuri15 website and broadcast piecemeal starting in May 2007.
Arias created the film at Studio 4°C, calling on animation prodigy Takayuki Hamada (one of Tekkon's lead animators) to design and animate characters. Other contributors to Arias' Ani*Kuri short include colorist Miyuki Itō, CG director Takuma Sakamoto, British electronic music composers Plaid, and sound designer Mitch Osias, all Tekkonkinkreet alumni. Tekkon chief animation director Shōjirō Nishimi and production designer Shinji Kimura each directed his own Ani*Kuri segment, and other directors included Satoshi Kon, Mamoru Oshii, and Mahiro Maeda.
The entire collection of fifteen Anikuri shorts has since been released as a DVD-book set, featuring production detail, creator interviews, and storyboard and background artwork.
Heaven's Door
In 2007 Arias began work on Heaven's Door, a Japanese live-action feature film loosely based on the German hit Knockin' on Heaven's Door, directed by Thomas Jahn and written by Jahn and actor Til Schweiger. Arias' adaptation features J-Pop heartthrob Tomoya Nagase and ingenue Mayuko Fukuda as unlikely comrades who flee the hospital where they first meet and embark on a road trip to reach the ocean and watch the sun set there in the short time they have left. Heaven's Door marks the return of Arias' Tekkonkinkreet collaborators Min Tanaka (the voice of Tekkonkinkreet's "Suzuki"), composers Plaid, and sound designer Mitch Osias.
Heaven's Door was released in Japanese theaters on February 7, 2009 to mixed press and lukewarm box office, but was praised for its cast, music, cinematography, and sound design. Screen International correspondent Jason Gray concludes, "I think younger audiences will find the tragedy of Heaven's Door palpable.... For someone like me who devoured American cinema of the early 70s, tearing up at films like Thunderbolt and Lightfoot, I might not be the best judge. Who knows, Heaven's Door may become the new Léon for the teen set here and regarded as a minor indie classic overseas."
Short film: Hope
On June 23, 2009 Japanese pay-per-view broadcaster WOWOW announced the upcoming on-air premiere of Arias' surreal short film Hope, featuring popular actress Juri Ueno as a struggling animator trapped overnight in an elevator.
Hope was penned by Arisa Kaneko, shot by Heaven's Door DP Takashi Komatsu, J.S.C., and features original score by Plaid and sound design by Mitch Osias.
The animation studio interiors were shot on location at Tokyo's Madhouse studios, while veteran production designer (and frequent Sōgo Ishii collaborator) Toshihiro Isomi created a rotating set for Miyuki's elevator on a Yokohama soundstage.
Harmony
On November 27, 2014, Japanese broadcaster Fuji Television announced that Arias would co-direct (with Takashi Nakamura) the feature-film adaptation of the late Project Itoh's dystopian sci-fi novel Harmony, recipient of a Philip K. Dick Award Special Citation in 2010. At the time of Fuji Television's announcement, production was ongoing at Studio 4°C with the film slated for a 2015 theatrical release. Harmony was released in Japan on November 13, 2015, and internationally in the spring of 2016. The film was praised for its innovative visuals and novel mixture of science fiction action and philosophical rumination, but criticized for its profusion of cerebral digressions.
Tokyo Alien Bros.
In May 2018 Nippon Television announced a live-action television series adaptation of Keigo Shinzō's manga Tokyo Alien Bros., co-directed by Michael Arias and veteran dorama director Shintaro Sugawara and written by Shō Kataoka. NTV broadcast the series weekly beginning on July 23, 2018, and ending with the tenth and final episode on September 24, 2018. For director Arias, Tokyo Alien Bros. marked both a return to live-action filmmaking and a reunion with frequent collaborators Plaid and director of photography Takashi Komatsu.
Translator of works by Taiyō Matsumoto
Michael Arias has translated and adapted to English some of Tekkonkinkreet author Taiyō Matsumoto's manga.
Sunny
Arias' English translation of Taiyō Matsumoto's quasi-autobiographical manga Sunny was included in the Young Adult Library Services Association's Great Graphic Novels list for 2014, and awarded the Best Graphic Novel prize by the Slate Book Review and the Center for Cartoon Studies.
Cats of the Louvre
Arias is also credited with the 2019 English translation and adaptation of Matsumoto's surreal tale of anthropomorphized stray cats, Cats of the Louvre, for publisher Viz's Signature collection; the edition received the Eisner Award for Best U.S. Edition of International Material – Asia.
Ping Pong
In 2020 Viz published Arias' English translation of the full two-volume edition of Ping Pong, Matsumoto's popular high school table tennis epic.
No. 5
In 2021 Viz announced a forthcoming English-language edition of No. 5, Matsumoto's surreal sci-fi saga, also translated by Arias.
Personal life
Arias has lived in Tokyo, Japan since he was 23 and speaks and writes Japanese fluently.
In 2011 Arias documented his experiences providing relief to relatives in Miyagi Prefecture during the days immediately following the Tōhoku earthquake and tsunami.
Filmography
Achievements and recognitions
At SIGGRAPH 1996 Arias presented the technical sketch "Toon Shaders for Simulating Cel Animation" detailing his work on rendering software for use in simulating the appearance of cel animation.
Arias was Guest Editor of the SIGGRAPH Computer Graphics journal, Volume 32, Number 1, published February 1999. The issue focused on non-photorealistic rendering (NPR).
On October 12, 1999 the United States Patent and Trademark Office awarded Arias U.S. Patent 5,966,134 for invention of a technique for simulating cel animation and shading.
Arias was a contributor to the Animation and Special Effects program of SIGGRAPH 2000. As a member of the panel "Digital Cel Animation in Japan", Arias presented his work on the Tekkonkinkreet pilot and, with moderator Ken Anjyo and co-panelists Youichi Horry and Yoshiyuki Momose, discussed historical developments, cultural influences, and technical innovations particular to Japanese animation.
Arias is listed as a member of the Visual Effects Society.
Arias has served on the juries for the SIGGRAPH (2013) and SIGGRAPH Asia (2015 and 2016) computer animation festivals.
References
External links
Michael Arias' Official Site
Tekkonkinkreet Official Site (Japan)
Tekkonkinkreet Official Site (US)
Heaven's Door Official Site (Japan)
Asmik Ace Entertainment
Studio 4°C Official Site
The Art Of The Title Sequence
1968 births
Wesleyan University alumni
Anime directors
Living people
Filmmakers from California
Film directors from Los Angeles
People from Tokyo
Studio 4°C
American storyboard artists
Visual effects artists
American people of Mexican descent
American film directors of Mexican descent
|
591994
|
https://en.wikipedia.org/wiki/Cryptographic%20protocol
|
Cryptographic protocol
|
A security protocol (cryptographic protocol or encryption protocol) is an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives. A protocol describes how the algorithms should be used and includes details about data structures and representations, at which point it can be used to implement multiple, interoperable versions of a program.
Cryptographic protocols are widely used for secure application-level data transport. A cryptographic protocol usually incorporates at least some of these aspects:
Key agreement or establishment
Entity authentication
Symmetric encryption and message authentication material construction
Secured application-level data transport
Non-repudiation methods
Secret sharing methods
Secure multi-party computation
For example, Transport Layer Security (TLS) is a cryptographic protocol that is used to secure web (HTTPS) connections. It has an entity authentication mechanism, based on the X.509 system; a key setup phase, where a symmetric encryption key is formed by employing public-key cryptography; and an application-level data transport function. These three aspects have important interconnections. Standard TLS does not have non-repudiation support.
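As a concrete illustration of those three aspects, the minimal Python sketch below opens a TLS connection using only the standard library's ssl module: the default context performs X.509-based server authentication, the handshake carries out the key setup, and the wrapped socket then transports application data. The host name is a placeholder, and error handling is omitted.

```python
# Minimal TLS client sketch using Python's standard-library ssl module.
# The host is a placeholder; any HTTPS server would do.
import socket
import ssl

HOST, PORT = "example.org", 443  # placeholder endpoint

# Entity authentication: the default context verifies the server's X.509
# certificate chain and host name against the system trust store.
context = ssl.create_default_context()

with socket.create_connection((HOST, PORT)) as raw_sock:
    # Key setup: the TLS handshake (public-key based key establishment)
    # runs when the plain socket is wrapped.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated:", tls_sock.version(), tls_sock.cipher())
        # Application-level data transport over the encrypted channel.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(256))
```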
There are other types of cryptographic protocols as well, and even the term itself has various readings; cryptographic application protocols often use one or more underlying key agreement methods, which are also sometimes themselves referred to as "cryptographic protocols". For instance, TLS employs what is known as the Diffie–Hellman key exchange, which, although it is only a part of TLS per se, may be seen as a complete cryptographic protocol in itself for other applications.
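To make that distinction concrete, here is a toy finite-field Diffie–Hellman exchange in Python. The small modulus and bare exchange are for illustration only: real deployments use standardized large groups (or elliptic curves) and authenticate the exchanged values, as TLS does during its handshake.

```python
# Toy Diffie–Hellman key agreement. Illustrative only: the modulus below is
# far too small for real use (practical groups are 2048 bits or more).
import secrets

p = 0xFFFFFFFB  # a small prime modulus (2**32 - 5), placeholder parameter
g = 5           # generator

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice sends A to Bob
B = pow(g, b, p)   # Bob sends B to Alice

# Both sides derive the same shared secret without ever transmitting it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```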
Advanced cryptographic protocols
A wide variety of cryptographic protocols go beyond the traditional goals of data confidentiality, integrity, and authentication to also secure a variety of other desired characteristics of computer-mediated collaboration. Blind signatures can be used for digital cash and digital credentials to prove that a person holds an attribute or right without revealing that person's identity or the identities of parties that person transacted with. Secure digital timestamping can be used to prove that data (even if confidential) existed at a certain time. Secure multiparty computation can be used to compute answers (such as determining the highest bid in an auction) based on confidential data (such as private bids), so that when the protocol is complete the participants know only their own input and the answer. End-to-end auditable voting systems provide sets of desirable privacy and auditability properties for conducting e-voting. Undeniable signatures include interactive protocols that allow the signer to prove a forgery and limit who can verify the signature. Deniable encryption augments standard encryption by making it impossible for an attacker to mathematically prove the existence of a plain text message. Digital mixes create hard-to-trace communications.
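As one concrete example from this family, the sketch below walks through the arithmetic of a textbook RSA blind signature in Python: the requester blinds a message, the signer signs the blinded value without seeing the content, and the requester unblinds the result into an ordinary, verifiable signature. The tiny hard-coded key is purely illustrative and omits the hashing and padding any real scheme requires.

```python
# Textbook RSA blind signature sketch. Illustrative only: toy key size,
# no hashing or padding.
import math
import secrets

# Toy RSA key (never use parameters this small in practice).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

m = 42  # message representative, 0 <= m < n

# Requester: pick a blinding factor r coprime to n and blind the message.
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# Signer: signs the blinded value without learning m.
blind_sig = pow(blinded, d, n)

# Requester: unblind to obtain an ordinary RSA signature on m.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m  # verifies like a normal RSA signature
```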
Formal verification
Cryptographic protocols can sometimes be verified formally on an abstract level. When this is done, the environment in which the protocol operates must be formalized in order to identify threats. This is frequently done through the Dolev–Yao model.
Logics, concepts and calculi used for formal reasoning about security protocols:
Burrows–Abadi–Needham logic (BAN logic)
Dolev–Yao model
π-calculus
Protocol composition logic (PCL)
Strand space
Research projects and tools used for formal verification of security protocols:
Automated Validation of Internet Security Protocols and Applications (AVISPA) and follow-up project AVANTSSAR
Constraint Logic-based Attack Searcher (CL-AtSe)
Open-Source Fixed-Point Model-Checker (OFMC)
SAT-based Model-Checker (SATMC)
Casper
CryptoVerif
Cryptographic Protocol Shapes Analyzer (CPSA)
Knowledge In Security protocolS (KISS)
Maude-NRL Protocol Analyzer (Maude-NPA)
ProVerif
Scyther
Tamarin Prover
Notion of abstract protocol
To formally verify a protocol it is often abstracted and modelled using Alice & Bob notation. A simple example is the following:
A → B : {X}_{K_{A,B}}
This states that Alice (A) intends a message for Bob (B) consisting of a message X encrypted under their shared key K_{A,B}.
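A minimal Python sketch of this pattern is shown below, using the third-party cryptography package (an assumption; any authenticated symmetric cipher would serve): Alice and Bob share a key K, Alice sends the message X encrypted under K, and Bob decrypts it.

```python
# Sketch of A -> B : {X}_{K_{A,B}} with a shared symmetric key.
# Assumes the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()  # K_{A,B}, agreed or distributed out of band

# Alice encrypts the message X under the shared key and sends the ciphertext.
alice = Fernet(shared_key)
ciphertext = alice.encrypt(b"X: meet at noon")

# Bob, holding the same key, recovers X (and rejects tampered ciphertexts).
bob = Fernet(shared_key)
assert bob.decrypt(ciphertext) == b"X: meet at noon"
```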
Examples
Internet Key Exchange
IPsec
Kerberos
Off-the-Record Messaging
Point to Point Protocol
Secure Shell (SSH)
Signal Protocol
Transport Layer Security
ZRTP
See also
List of cryptosystems
Secure channel
Security Protocols Open Repository
Comparison of cryptography libraries
References
Further reading
|
15526
|
https://en.wikipedia.org/wiki/Id%20Software
|
Id Software
|
id Software LLC is an American video game developer based in Richardson, Texas. It was founded on February 1, 1991, by four members of the computer company Softdisk: programmers John Carmack and John Romero, game designer Tom Hall, and artist Adrian Carmack.
id Software made important technological developments in video game technologies for the PC (running MS-DOS and Windows), including work done for the Wolfenstein, Doom, and Quake franchises. id's work was particularly important in 3D computer graphics technology and in game engines that are used throughout the video game industry. The company was involved in the creation of the first-person shooter (FPS) genre: Wolfenstein 3D is often considered to be the first true FPS; Doom is a game that popularized the genre and PC gaming in general; and Quake was id's first true 3D FPS.
On June 24, 2009, ZeniMax Media acquired the company. In 2015, id Software opened a second studio in Frankfurt, Germany.
History
Formation
The founders of id Software – John Carmack, John Romero, and Tom Hall – met in the offices of Softdisk, developing multiple games for Softdisk's monthly publications, including Dangerous Dave. Along with another Softdisk employee, Lane Roathe, they had formed a small group they called Ideas from the Deep (IFD), a name that Romero and Roathe had come up with. In September 1990, Carmack developed an efficient way to rapidly side-scroll graphics on the PC. Upon making this breakthrough, Carmack and Hall stayed up late into the night making a replica of the first level of the popular 1988 NES game Super Mario Bros. 3, inserting stock graphics of Romero's Dangerous Dave character in lieu of Mario. When Romero saw the demo, entitled Dangerous Dave in Copyright Infringement, he realized that Carmack's breakthrough could have potential. The IFD team moonlighted over a week and two weekends to create a larger demo of their PC version of Super Mario Bros. 3, and sent their work to Nintendo. According to Romero, Nintendo told them that the demo was impressive, but "they didn't want their intellectual property on anything but their own hardware, so they told us Good Job and You Can't Do This". While the team had not publicly shared the demo, they acknowledged its existence in the years since; a working copy was discovered in July 2021 and preserved at the Museum of Play.
Around the same time in 1990, Scott Miller of Apogee Software learned of the group and their exceptional talent, having played one of Romero's Softdisk games, Dangerous Dave, and contacted Romero under the guise of multiple fan letters that Romero came to realize all originated from the same address. When he confronted Miller, Miller explained that the deception was necessary since Softdisk screened letters it received. Although disappointed by not actually having received mail from multiple fans, Romero and other Softdisk developers began proposing ideas to Miller. One of these was Commander Keen, a side-scrolling game that incorporated the previous work they had done on the Super Mario Bros. 3 demonstration. The first Commander Keen game, Commander Keen in Invasion of the Vorticons, was released through Apogee in December 1990, which became a very successful shareware game. After their first royalty check, Romero, Carmack, and Adrian Carmack (no relation) decided to start their own company. After hiring Hall, the group finished the Commander Keen series, then hired Jay Wilbur and Kevin Cloud and began working on Wolfenstein 3D. id Software was officially founded by Romero, John and Adrian Carmack and Hall on February 1, 1991. The name "id" came out of their previous IFD; Roathe had left the group, and they opted to drop the "F" to leave "id". They initially used "id" as an initialism for "In Demand", but by the time of the fourth Commander Keen game, they opted to let "id" stand out "as a cool word", according to Romero.
The shareware distribution method was initially employed by id Software through Apogee Software to sell their products, such as the Commander Keen, Wolfenstein and Doom games. They would release the first part of their trilogy as shareware, then sell the other two installments by mail order. Only later (about the time of the release of Doom II) did id Software release their games via more traditional shrink-wrapped boxes in stores (through other game publishers).
After Wolfenstein 3D's great success, id began working on Doom. After Hall left the company, Sandy Petersen and Dave Taylor were hired before the release of Doom in December 1993.
The end of the classic lineup
Quake was released on June 22, 1996, and was considered a difficult game to develop due to creative differences. Animosity grew within the company and caused a conflict between Carmack and Romero, which led the latter to leave id after the game's release. Soon after, other staff left the company as well, such as Michael Abrash, Shawn Green, Jay Wilbur, Sandy Petersen and Mike Wilson. Petersen claimed in July 2021 that the lack of a team leader was the cause of it all; he had volunteered to take the lead, having five years of experience as a project manager at MicroProse, but was turned down by Carmack.
ZeniMax Media and Microsoft
On June 24, 2009, it was announced that id Software had been acquired by ZeniMax Media (owner of Bethesda Softworks). The deal would eventually affect publishing deals id Software had made before the acquisition, namely Rage, which was being published through Electronic Arts. In July, ZeniMax received a $105 million investment from StrongMail Systems for the id acquisition; it is unclear whether that was the exact price of the deal. id Software moved from the "cube-shaped" Mesquite office to a location in Richardson, Texas, during the spring of 2011.
On June 26, 2013, id Software president Todd Hollenshead quit after 17 years of service.
On November 22, 2013, it was announced that id Software co-founder and technical director John Carmack had fully resigned from the company to work full-time at Oculus VR, which he had joined as CTO in August 2013. He was the last of the original founders to leave the company.
Tim Willits left the company in 2019. ZeniMax Media was acquired by Microsoft in March 2021 and became part of Xbox Game Studios.
Company name
The company writes its name with a lowercase id, which is pronounced as in "did" or "kid". According to the book Masters of Doom, the group identified itself as "Ideas from the Deep" in the early days of Softdisk but, in the end, the name "id" came from the phrase "in demand". Disliking "in demand" as "lame", someone suggested a connection with Sigmund Freud's psychological concept of the id, which the others accepted. Evidence of the reference can be found as early as Wolfenstein 3D, with the statement "that's id, as in the id, ego, and superego in the psyche" appearing in the game's documentation. Prior to an update to the website, id's History page made a direct reference to Freud.
Key employees
Kevin Cloud — Artist (1992–2006), Executive Producer (2007–present)
Donna Jackson — Office manager / "id mom" (1994–present)
Marty Stratton — Director of Business Development (1997–2006), Executive Producer (2006–present), Studio Director (2019–present)
Robert Duffy — Chief Technology Officer (1998–present)
Hugo Martin — Creative Director (2013–present)
Former key employees
Arranged in chronological order:
Tom Hall — Co-founder, game designer, level designer, writer, creative director (1991–1993). After a dispute with John Carmack over the designs of Doom, Hall was forced to resign from id Software in August 1993. He joined 3D Realms soon afterwards.
Bobby Prince — Music composer (1991–1994). A freelance musician who went on to pursue other projects after Doom II.
Dave Taylor — Programmer (1993–1996). Taylor left id Software and co-founded Crack dot Com.
John Romero — Co-founder, game designer, programmer (1991–1996). Romero resigned on August 6, 1996. He established Ion Storm along with Hall on November 15, 1996.
Michael Abrash — Programmer (1995–1996). Returned to Microsoft after the release of Quake.
Shawn Green — Software support (1991–1996). Left id Software to join Romero at Ion Storm.
Jay Wilbur — Business manager (1991–1997). Left id Software after Romero's departure and joined Epic Games in 1997.
Sandy Petersen — Level designer (1993–1997). Left id Software for Ensemble Studios in 1997.
Mike Wilson — PR and marketing (1994–1997). Left id Software to become CEO of Ion Storm with Romero. Left a year later to found Gathering of Developers and later Devolver Digital.
American McGee — Level designer (1993–1998). McGee was fired after the release of Quake II. He joined Electronic Arts and created American McGee's Alice.
Adrian Carmack — Co-founder, artist (1991–2005). Carmack was forced out of id Software after the release of Doom 3 because he would not sell his stock at a low price to the other owners. Adrian sued id Software, and the lawsuit was settled during the ZeniMax acquisition in 2009.
Todd Hollenshead — President (1996–2013). Left id Software on good terms to work at Nerve Software.
John Carmack — Co-founder, technical director (1991–2013). He joined Oculus VR on August 7, 2013, as a side project, but unable to handle two companies at the same time, Carmack resigned from id Software on November 22, 2013, to pursue Oculus full-time, making him the last founding member to leave the company.
Tim Willits — Level designer (1995–2001), creative director (2002–2011), studio director (2012–2019). He is now the chief creative officer at Saber Interactive.
Timeline
Game development
Technology
Starting with their first shareware game series, Commander Keen, id Software has licensed the core source code for the game, or what is more commonly known as the engine. Brainstormed by John Romero, id Software held a weekend session titled "The id Summer Seminar" in the summer of 1991 with prospective buyers including Scott Miller, George Broussard, Ken Rogoway, Jim Norwood and Todd Replogle. One of the nights, id Software put together an impromptu game known as "Wac-Man" to demonstrate not only the technical prowess of the Keen engine, but also how it worked internally.
id Software has developed their own game engine for each of their titles when moving to the next technological milestone, including Commander Keen, Wolfenstein 3D, ShadowCaster, Doom, Quake, Quake II, and Quake III, as well as the technology used in making Doom 3. After being used first for id Software's in-house game, the engines are licensed out to other developers. According to Eurogamer.net, "id Software has been synonymous with PC game engines since the concept of a detached game engine was first popularized", and during the mid to late 1990s, with "the launch of each successive round of technology it's been expected to occupy a headlining position". The Quake III engine became the most widely adopted of their engines. However, id Tech 4 had far fewer licensees than the Unreal Engine from Epic Games, due to the long development time that went into Doom 3, which id Software had to release before licensing that engine to others.
Despite his enthusiasm for open source code, Carmack revealed in 2011 that he had no interest in licensing the technology to the mass market. Beginning with Wolfenstein 3D, he felt bothered when third-party companies started "pestering" him to license the id tech engine, adding that he wanted to focus on new technology instead of providing support for existing engines. He felt strongly that this was not what he had signed up to be a game programmer for: "holding the hands" of other game developers. Carmack commended Epic Games for pursuing licensing to the market beginning with Unreal Engine 3. Even though Epic Games has gained more success with its game engine than id Software over the years, Carmack had no regrets about his decision and continued to focus on open source until his departure from the company in 2013.
In conjunction with his self-professed affinity for sharing source code, John Carmack has open-sourced most of the major id Software engines under the GNU General Public License. Historically, the source code for each engine has been released once the code base is 5 years old. Consequently, many homegrown projects have sprung up porting the code to different platforms, cleaning up the source code, or providing major modifications to the core engine. Wolfenstein 3D, Doom and Quake engine ports are ubiquitous on nearly all platforms capable of running games, such as hand-held PCs, iPods, the PSP, the Nintendo DS and more. Notable core modifications include DarkPlaces, which adds stencil shadow volumes to the original Quake engine along with a more efficient network protocol. Another such project is ioquake3, which maintains a goal of cleaning up the source code, adding features and fixing bugs. Even earlier id Software code, namely for Hovertank 3D and Catacomb 3D, was released in June 2014 by Flat Rock Software.
The GPL release of the Quake III engine's source code was moved from the end of 2004 to August 2005 as the engine was still being licensed to commercial customers who would otherwise be concerned over the sudden loss in value of their recent investment.
On August 4, 2011, John Carmack revealed during his QuakeCon 2011 keynote that the source code of the Doom 3 engine (id Tech 4) would be released later that year.
id Software publicly stated they would not support the Wii console (possibly due to technical limitations), although they have since indicated that they may release titles on that platform (limited to their games released during the 1990s). They took the same position with the Wii U, but for the Nintendo Switch they collaborated with Panic Button, starting with 2016's Doom and Wolfenstein II: The New Colossus.
Since id Software revealed their engine id Tech 5, they call their engines "id Tech", followed by a version number. Older engines have retroactively been renamed to fit this scheme, with the Doom engine as id Tech 1.
Linux gaming
id Software was an early pioneer in the Linux gaming market, and id Software's Linux games have been some of the most popular on the platform. Many id Software games won the Readers' and Editors' Choice awards of Linux Journal (Readers' Choice Awards, November 2000; Editors' Choice, November 2006). Some id Software titles ported to Linux are Doom (the first id Software game to be ported), Quake, Quake II, Quake III Arena, Return to Castle Wolfenstein, Wolfenstein: Enemy Territory, Doom 3, Quake 4, and Enemy Territory: Quake Wars. Since id Software and some of its licensees released the source code for some of their previous games, several games which were not ported (such as Wolfenstein 3D, Spear of Destiny, Heretic, Hexen, Hexen II, and Strife) can run on Linux and other operating systems natively through the use of source ports. Quake Live also launched with Linux support, although this, alongside OS X support, was later removed when the game changed to a standalone title.
The tradition of porting to Linux was first started by Dave D. Taylor, with David Kirsch doing some later porting. Since Quake III Arena, Linux porting had been handled by Timothee Besset. The majority of all id Tech 4 games, including those made by other developers, have a Linux client available, the only current exceptions being Wolfenstein and Brink. Similarly, almost all of the games utilizing the Quake II engine have Linux ports, the only exceptions being those created by Ion Storm (Daikatana later received a community port). Despite fears by the Linux gaming community that id Tech 5 would not be ported to that platform, Timothee Besset in his blog stated "I'll be damned if we don't find the time to get Linux builds done". Besset explained that id Software's primary justification for releasing Linux builds was better code quality, along with a technical interest in the platform. However, on January 26, 2012, Besset announced that he had left id.
John Carmack has expressed his stance with regard to Linux builds in the past. In December 2000 Todd Hollenshead expressed support for Linux: "All said, we will continue to be a leading supporter of the Linux platform because we believe it is a technically sound OS and is the OS of choice for many server ops." However, on April 25, 2012, Carmack revealed that "there are no plans for a native Linux client" of id's most recent game, Rage. In February 2013, Carmack argued for improving emulation as the "proper technical direction for gaming on Linux", though this was also due to ZeniMax's refusal to support "unofficial binaries", given that all prior ports (except for Quake III Arena, via Loki Software, and earlier versions of Quake Live) had only ever been unofficial. Carmack did not mention Quake: The Offering and Quake II: Colossus, official Linux releases ported by id Software and published by Macmillan Computer Publishing USA.
Despite no longer releasing native binaries, id has been an early adopter of Stadia, a cloud gaming service powered by Debian Linux servers, and the cross-platform Vulkan API.
Games
Commander Keen
Commander Keen in Invasion of the Vorticons, a platform game in the style of those for the Nintendo Entertainment System, was one of the first MS-DOS games with smooth horizontal scrolling. Published by Apogee Software, the title and follow-ups brought id Software success as a shareware developer. It is the series of id Software that designer Tom Hall is most affiliated with. The first Commander Keen trilogy was released on December 14, 1990.
Wolfenstein
The company's breakout product was released on May 5, 1992: Wolfenstein 3D, a first-person shooter (FPS) with smooth 3D graphics that were unprecedented in computer games, and with violent gameplay that many gamers found engaging. After essentially founding an entire genre with this game, id Software created Doom, Doom II: Hell on Earth, Quake, Quake II, Quake III Arena, Quake 4, and Doom 3. Each of these first-person shooters featured progressively higher levels of graphical technology. Wolfenstein 3D spawned a prequel and a sequel: the prequel, Spear of Destiny, and the sequel, Return to Castle Wolfenstein, which used the id Tech 3 engine. A third Wolfenstein sequel, simply titled Wolfenstein, was developed by Raven Software using the id Tech 4 engine. Another sequel, Wolfenstein: The New Order, was developed by MachineGames using the id Tech 5 engine and released in 2014; it received a prequel, Wolfenstein: The Old Blood, a year later, followed by a direct sequel, Wolfenstein II: The New Colossus, in 2017.
Doom
Eighteen months after the release of Wolfenstein 3D, on December 10, 1993, id Software released Doom, which would again set new standards for graphic quality and graphic violence in computer gaming. Doom featured a sci-fi/horror setting with graphic quality that had never been seen on personal computers or even video game consoles. Doom became a cultural phenomenon, and its violent theme would eventually launch a new wave of criticism decrying the dangers of violence in video games. Doom was ported to numerous platforms, inspired many knock-offs, and was eventually followed by the technically similar Doom II: Hell on Earth. id Software made its mark in video game history with the shareware release of Doom, and eventually revisited the theme of this game in 2004 with the release of Doom 3. John Carmack said in an interview at QuakeCon 2007 that there would be a Doom 4. It began development on May 7, 2008. Doom (2016), the fourth installment of the Doom series, was released on Microsoft Windows, PlayStation 4, and Xbox One on May 13, 2016, and was later released on Nintendo Switch on November 10, 2017. Doom Eternal, the sequel to the 2016 Doom, was officially announced at E3 2018 in June with a teaser trailer, followed by a gameplay reveal at QuakeCon in August 2018.
Quake
On June 22, 1996, the release of Quake marked the third milestone in id Software history. Quake combined a cutting-edge, fully 3D engine, the Quake engine, with a distinctive art style to create critically acclaimed graphics for its time. Audio was not neglected either: id recruited Nine Inch Nails frontman Trent Reznor to facilitate unique sound effects and ambient music for the game. (A small homage was paid to Nine Inch Nails in the form of the band's logo appearing on the ammunition boxes for the nailgun weapon.) It also included the work of Michael Abrash. Furthermore, Quake's main innovation, the capability to play a deathmatch (competitive gameplay between living opponents instead of against computer-controlled characters) over the Internet (especially through the add-on QuakeWorld), seared the title into the minds of gamers as another smash hit.
In 2008, id Software was honored at the 59th Annual Technology & Engineering Emmy Awards for the pioneering work Quake represented in user modifiable games. id Software is the only game development company ever honored twice by the National Academy of Television Arts & Sciences, having been given an Emmy Award in 2007 for creation of the 3D technology that underlies modern shooter video games.
The Quake series continued with Quake II in 1997. Activision purchased a 49% stake in id Software, making it a second party which took publishing duties until 2009. However, the game is not a storyline sequel, and instead focuses on an assault on an alien planet, Stroggos, in retaliation for Strogg attacks on Earth. Most of the subsequent entries in the Quake franchise follow this storyline. Quake III Arena (1999), the next title in the series, has minimal plot, but centers around the "Arena Eternal", a gladiatorial setting created by an alien race known as the Vadrigar and populated by combatants plucked from various points in time and space. Among these combatants are some characters either drawn from or based on those in Doom ("Doomguy"), Quake (Ranger, Wrack), and Quake II (Bitterman, Tank Jr., Grunt, Stripe). Quake IV (2005) picks up where Quake II left off – finishing the war between the humans and Strogg. The spin-off Enemy Territory: Quake Wars acts as a prequel to Quake II, when the Strogg first invade Earth. Quake IV and Enemy Territory: Quake Wars were made by outside developers and not id.
There have also been other spin-offs such as Quake Mobile in 2005 and Quake Live, an internet browser based modification of Quake III. A game called Quake Arena DS was planned and canceled for the Nintendo DS. John Carmack stated, at QuakeCon 2007, that the id Tech 5 engine would be used for a new Quake game.
Rage
Todd Hollenshead announced in May 2007 that id Software had begun working on an all-new series that would use a new engine. Hollenshead also mentioned that the title would be completely developed in-house, marking the first title since 2004's Doom 3 to be developed that way. At 2007's WWDC, John Carmack showed the new engine, called id Tech 5. Later that year, at QuakeCon 2007, the title of the new game was revealed as Rage.
On July 14, 2008, id Software announced at the 2008 E3 event that they would be publishing Rage through Electronic Arts, and not id's longtime publisher Activision. However, since then ZeniMax has also announced that they are publishing Rage through Bethesda Softworks.
On August 12, 2010, during QuakeCon 2010, id Software announced a US ship date for Rage of September 13, 2011, and a European ship date of September 15, 2011. During the keynote, id Software also demonstrated a Rage spin-off title running on the iPhone. This technology demo later became Rage HD.
On May 14, 2018, Bethesda Softworks announced Rage 2, a co-development between id Software and Avalanche Studios.
Other games
During its early days, id Software produced much more varied games; these include the early 3D first-person shooter experiments that led to Wolfenstein 3D and Doom – Hovertank 3D and Catacomb 3D. There was also the Rescue Rover series, which had two games – Rescue Rover and Rescue Rover 2. Also there was John Romero's Dangerous Dave series, which included such notables as the tech demo Dangerous Dave in Copyright Infringement, which led to the Commander Keen engine, and the decently popular Dangerous Dave in the Haunted Mansion. In the Haunted Mansion was powered by the same engine as the earlier id Software game Shadow Knights, which was one of the several games written by id Software to fulfill their contractual obligation to produce games for Softdisk, where the id Software founders had been employed. id Software has also overseen several games using its technology that were not made in one of their IPs, such as ShadowCaster (early id Tech 1), Heretic, Hexen: Beyond Heretic (id Tech 1), Hexen II (Quake engine), and Orcs and Elves (Doom RPG engine).
Other media
id Software has also published novels based on the Doom series. After a brief hiatus from publishing, id resumed and re-launched the novel series in 2008 with new Doom 3 novels by Matthew J. Costello, a story consultant for Doom 3 and later Rage: Worlds on Fire and Maelstrom.
id Software became involved in film development when it oversaw the 2005 film adaptation of its Doom franchise. In August 2007, Todd Hollenshead stated at QuakeCon 2007 that a Return to Castle Wolfenstein movie was in development, re-teaming the Silent Hill writer/producer team of Roger Avary as writer and director and Samuel Hadida as producer. A new Doom film, titled Doom: Annihilation, was released in 2019, although id itself stressed its lack of involvement.
Controversy
id Software was the target of controversy over two of their most popular games, Doom and the earlier Wolfenstein 3D:
Doom
Doom was notorious for its high levels of gore and occultism along with satanic imagery, which generated controversy from a broad range of groups. Yahoo! Games listed it as one of the top ten most controversial games of all time.
The game again sparked controversy throughout a period of school shootings in the United States when it was found that Eric Harris and Dylan Klebold, who committed the Columbine High School massacre in 1999, were avid players of the game. While planning for the massacre, Harris said that the killing would be "like playing Doom", and "it'll be like the LA riots, the Oklahoma bombing, World War II, Vietnam, Duke Nukem and Doom all mixed together", and that his shotgun was "straight out of the game". A rumor spread afterwards that Harris had designed a Doom level that looked like the high school, populated with representations of Harris's classmates and teachers, and that Harris practiced for his role in the shootings by playing the level over and over. Although Harris did design Doom levels, none of them were based on Columbine High School.
While Doom and other violent video games have been blamed for nationally covered school shootings, 2008 research featured by Greater Good Science Center shows that the two are not closely related. Harvard Medical School researchers Cheryl Olson and Lawrence Kutner found that violent video games did not correlate to school shootings. The United States Secret Service and United States Department of Education analyzed 37 incidents of school violence and sought to develop a profile of school shooters; they discovered that the most common traits among shooters were that they were male and had histories of depression and attempted suicide. While many of the killers—like the vast majority of young teenage boys—did play video games, this study did not find a relationship between gameplay and school shootings. In fact, only one-eighth of the shooters showed any special interest in violent video games, far less than the number of shooters who seemed attracted to books and movies with violent content.
Wolfenstein 3D
As for Wolfenstein 3D, due to its use of Nazi symbols such as the swastika and the anthem of the Nazi Party, Horst-Wessel-Lied, as theme music, the PC version of the game was withdrawn from circulation in Germany in 1994, following a verdict by the Amtsgericht München on January 25, 1994. Despite the fact that Nazis are portrayed as the enemy in Wolfenstein, the use of those symbols is a federal offense in Germany unless certain circumstances apply. Similarly, the Atari Jaguar version was confiscated following a verdict by the Amtsgericht Berlin Tiergarten on December 7, 1994.
Due to concerns from Nintendo of America, the Super NES version was modified to not include any swastikas or Nazi references; furthermore, blood was replaced with sweat to make the game seem less violent, and the attack dogs in the game were replaced by giant mutant rats. Employees of id Software are quoted in The Official DOOM Player Guide about the reaction to Wolfenstein, claiming it to be ironic that it was morally acceptable to shoot people and rats, but not dogs. Two new weapons were added as well. The Super NES version was not as successful as the PC version.
People
In 2003, the book Masters of Doom chronicled the development of id Software, concentrating on the personalities and interaction of John Carmack and John Romero. Below are the key people involved with id's success.
John Carmack
Carmack's skill at 3D programming is widely recognized in the software industry and from its inception, he was id's lead programmer. On August 7, 2013, he joined Oculus VR, a company developing virtual reality headsets, and left id Software on November 22, 2013.
John Romero
John Romero saw the horizontal-scrolling demo Dangerous Dave in Copyright Infringement and immediately had the idea to form id Software on September 20, 1990. Romero pioneered the game engine licensing business with his "id Summer Seminar" in 1991, where the Keen4 engine was licensed to Apogee for Bio Menace. John also worked closely with the DOOM community and was the face of id to its fans. One success of this engagement was the fan-made game Final DOOM, published in 1996. John also created the control scheme for the FPS, and the abstract level design style of DOOM that influenced many 3D games that came after it. John added par times to Wolfenstein 3D, and then DOOM, which started the phenomenon of speedrunning. Romero wrote almost all the tools that enabled id Software and many others to develop games with id Software's technology. Romero was forced to resign in 1996 after the release of Quake, and later formed the company Ion Storm. There, he became infamous through the development of Daikatana, which was received negatively by reviewers and gamers alike upon release. Afterward, Romero co-founded The Guildhall in Dallas, Texas, served as chairman of the CPL eSports league, created an MMORPG publisher and developer named Gazillion Entertainment, created a hit Facebook game named Ravenwood Fair that garnered 25 million monthly players in 2011, and started Romero Games in Galway, Ireland in 2015.
Both Tom Hall and John Romero have reputations as designers and idea men who have helped shape some of the key PC gaming titles of the 1990s.
Tom Hall
Tom Hall was forced to resign by id Software during the early days of Doom development, but not before he had some impact; for example, he was responsible for the inclusion of teleporters in the game. He was let go before the shareware release of Doom and then went to work for Apogee, developing Rise of the Triad with the "Developers of Incredible Power". When he finished work on that game, he found he was not compatible with the Prey development team at Apogee, and therefore left to join his ex-id Software compatriot John Romero at Ion Storm. Hall has frequently commented that if he could obtain the rights to Commander Keen, he would immediately develop another Keen title.
Sandy Petersen
Sandy Petersen designed 19 of the 27 levels in the original Doom as well as 17 of the 32 levels of Doom II. As a fan of H. P. Lovecraft, he brought a Lovecraftian feel to the monsters of Quake, and he created Inferno, the third "episode" of the first Doom. He was forced to resign from id Software during the production of Quake II, and most of his work was scrapped before the title was released.
American McGee
American McGee was a level designer for Doom II, The Ultimate Doom, Quake, and Quake II. He was asked to resign after the release of Quake II, and he then moved to Electronic Arts where he gained industry notoriety with the development of his own game American McGee's Alice. After leaving Electronic Arts, he became an independent entrepreneur and game developer. McGee headed the independent game development studio Spicy Horse in Shanghai, China from 2007 to 2016.
References
Literature
Kushner, David (2003). Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture. New York: Random House.
External links
American companies established in 1991
American corporate subsidiaries
1991 establishments in Louisiana
Microsoft subsidiaries
2009 mergers and acquisitions
Companies based in Richardson, Texas
Video game companies based in Texas
Video game companies established in 1991
Video game companies of the United States
Video game development companies
ZeniMax Media
|
335615
|
https://en.wikipedia.org/wiki/Adaptive%20Communication%20Environment
|
Adaptive Communication Environment
|
The Adaptive Communication Environment (ACE) is an open source software framework used for network programming. It provides a set of object-oriented C++ classes designed to help address the inherent complexities and challenges in network programming by preventing common errors.
History
ACE was initially developed by Douglas C. Schmidt during his graduate work at the University of California, Irvine. Development followed him to Washington University in St. Louis, where he was employed. ACE is open-source software released by WU's Distributed Object Computing (DOC) Group. Its development continued at the Institute for Software Integrated Systems (ISIS) at Vanderbilt University.
Features
ACE provides a standardized, portable interface to operating-system- and machine-specific features. It provides common data types and methods to access the powerful but complex features of modern operating systems, including inter-process communication, thread management, and efficient memory management.
It was designed to be portable and to provide a common framework. The same code will work on most Unixes, Windows, VxWorks, QNX, OpenVMS, etc., with minimal changes. Due to this cross-platform support, it has been widely used in the development of communication software. Successful projects that have used ACE include the Motorola Iridium satellites and Boeing Wedgetail's Australian airborne early warning and control (AEW&C) system, among others.
The design of ACE makes extensive use of documented software design patterns, such as Reactor and Acceptor-Connector.
See also
Communication software
Component-integrated ACE ORB (CIAO, a CORBA implementation)
Cross-platform support middleware
TAO (software)
References
External links
Distributed Object Computing (DOC) Group website
Institute for Software Integrated Systems (ISIS) website
ACE Doxygen reference
ACE github code repository
Application programming interfaces
C++ libraries
Cross-platform software
|
516845
|
https://en.wikipedia.org/wiki/Dartmouth%20BASIC
|
Dartmouth BASIC
|
Dartmouth BASIC is the original version of the BASIC programming language. It was designed by two professors at Dartmouth College, John G. Kemeny and Thomas E. Kurtz. With the underlying Dartmouth Time Sharing System (DTSS), it offered an interactive programming environment to all undergraduates as well as the larger university community.
Several versions were produced at Dartmouth, implemented by undergraduate students and operating as a compile and go system. The first version ran on 1 May 1964, and it was opened to general users in June. Upgrades followed, culminating in the seventh and final release in 1979. Dartmouth also introduced a dramatically updated version known as Structured BASIC (or SBASIC) in 1975, which added various structured programming concepts. SBASIC formed the basis of the ANSI-standard Standard BASIC efforts in the early 1980s.
Most dialects of BASIC trace their history to the Fourth Edition, but generally leave out more esoteric features like matrix math. In contrast to the Dartmouth compilers, most other BASICs were written as interpreters. This decision allowed them to run in the limited main memory of early microcomputers. Microsoft BASIC is one example, designed to run in only 4 KB of memory. By the early 1980s, tens of millions of home computers were running some variant of the MS interpreter. It became the de facto standard for BASIC, which led to the abandonment of the ANSI SBASIC efforts. Kemeny and Kurtz later formed a company to develop and promote a version of SBASIC known as True BASIC.
Many early mainframe games trace their history to Dartmouth BASIC and the DTSS system. A selection of these were collected, in HP Time-Shared BASIC versions, in the People's Computer Company book What to do after you hit Return. Many of the original source listings in BASIC Computer Games and related works also trace their history to Dartmouth BASIC.
Development history
Earlier work
John G. Kemeny joined the mathematics department of Dartmouth College in 1953 and later became its department chairman. In 1956 he gained access to an IBM 704 via MIT's New England Regional Computer Center efforts. That year, he wrote the DARSIMCO language, which simplified the programming of mathematical operations. He was aided by Thomas E. Kurtz, who joined the department that year.
DARSIMCO was forgotten when the first FORTRAN compiler was installed on the machine in 1957. The arrival of FORTRAN instilled an important lesson. Kurtz, having been indoctrinated that FORTRAN was slow, spent several months writing a program in 704 assembler which had taken up about an hour of CPU time to debug and still was not running. Giving up, he rewrote it in FORTRAN and had it running in five minutes. The lesson was that high-level languages could save time, regardless of their measured performance.
In 1959, the school received its first computer, the drum-based LGP-30. One student wrote a FORTRAN-inspired language called DART for the machine. This led to an effort to produce an ALGOL 58 compiler, turning to ALGOL 60 when that definition was finalized. Writing the compiler was difficult due to the very small memory size, 32 KB in modern terms, and was extremely slow, based on the drum speed of 30 rpm. Nevertheless, they were able to produce a functional cut-down version known as ALGOL 30. Further development produced SCALP, the "Self-Contained Algol Processor", a one-pass compiler that was ready to run the compiled program as soon as the punched tape finished reading in the source. This compile-and-go style operation would later be used by BASIC.
In 1962, Kemeny and student Sidney Marshall began experimenting with a new language, DOPE (Dartmouth Oversimplified Programming Experiment). This used numbered lines to represent instructions, for instance, to add two numbers, DOPE used:
5 + A B C
This meant "on line 5, perform an addition of the values in variables A and B and put the result in C". Although somewhat cryptic in layout, the basis for the future BASIC language can be seen. In addition to basic mathematical operations, the language included SQR, EXP, LOG, SIN and a simple branching construct.
Computing in liberal arts
Kemeny and Kurtz agreed on the need for programming literacy among students outside the traditional STEM fields; only 25% of the students at Dartmouth took STEM-related courses, but some level of mathematics was used in almost every field. Moreover, as computers became more important in society, they wondered "How can sensible decisions about computing and its use be made by persons essentially ignorant of it?"
Kemeny later noted that "Our vision was that every student on campus should have access to a computer, and any faculty member should be able to use a computer in the classroom whenever appropriate. It was as simple as that." But doing so would be largely impossible given what they had to work with; the turnaround on a typical SCALP run was about 15 minutes, and the languages were far too difficult for non-STEM users to use for basic tasks.
It was not simply the complexity that was a problem, but the entire concept of batch processing. Students would prepare their programs on punch cards or paper tape, submit them to the computer operators, and then at some future point receive their output. This would often reveal an error that required the entire process to be repeated. As they later put it, "If it takes on the order of 1 day for one try, the student will either lose interest or forget what the problems were. At best, he will waste time standing around waiting for the day's results to appear."
In 1959, due largely to Kemeny's reputation as an innovator in math teaching, the department won an Alfred P. Sloan Foundation award for $500,000 to build a new department building.
Developing the concept
During a 1961 visit to MIT, they were introduced to the PDP-1 and its recently completed experimental time-sharing operating system. John McCarthy asked Kurtz why they didn't use time sharing for their efforts to bring computing to the masses. Kurtz later told Kemeny "we should do time sharing", to which Kemeny replied "OK". The arrival of the Teletype Model 33 teleprinter using the newly introduced ASCII over telephone lines solved the problem of access; no longer would the programmers have to submit the programs on cards or paper tape, they would now use the Model 33 to type directly into the computer. All that was needed was a new machine that was fast enough to host a time-sharing system, and a simple language for the programmers to use.
When the topic of a simple language began to be considered seriously, Kemeny immediately suggested writing a new one. Kurtz was more interested in a cut-down version of FORTRAN or ALGOL. But these languages had so many idiosyncrasies that Kurtz quickly came to agree with Kemeny. Over time, four key elements emerged; the system would use time-sharing, a new language would be needed, to get users onto the system new courses would introduce programming as an adjunct to other subjects, and finally, the terminals would be open to all users.
Initial version
The project officially started in September 1963. The goal was to develop the language and operating system on an off-the-shelf computer. In early 1964, two grants from the National Science Foundation, one to develop the time-sharing system and another the language, along with educational discounts from General Electric led to the purchase of a GE-225 computer. This was paired with the much simpler DATANET-30 (DN-30) machine and a hard drive connected to both machines in order to share data.
The system would work by having the DN-30 run the terminals and save the user's work to the disk. When the user typed RUN, the GE-225 would read that file, compile it, run it, and pass back the results to the DN-30 which would print the output on the terminal. This combination of machines was later known as the GE-265, adding their model numbers together. GE built about fifty additional examples of the GE-265, many for their service bureau business. GE referred to these as their Mark I time-sharing systems.
In the summer of 1963, pending purchase of the computer, GE provided access to one of their GE-225s. Kemeny began working on a prototype compiler. Students Michael Busch and John McGeachie began working on the operating system design that fall. Both the language and the OS were extensively modified during this period, although the basic goals remained the same and were published in a draft form that November.
The school's machine arrived in the last week of February 1964, was operational by mid-March, and officially handed over on 1 April. By that point, the operating system design was already well developed. Most of the student programmers working on the operating system did so for 50 hours a week, in addition to their normal course load. The language was developed in parallel on borrowed time on another 225 machine. The OS was completed in April, and the entire system running on three Model 33 terminals was ready by the end of the month. John Kemeny and John McGeachie ran the first BASIC program on 1 May 1964 at 4 a.m. ET.
It is not completely clear what the first programs were. Many sources, including Dartmouth, claim it was this simple program:
PRINT 2 + 2
Over the next month the system was tested by having a numerical analysis class run programs on it. During this period, the machine stayed up properly for an average of only five minutes at a time. However, the problems were rapidly addressed, and in June it was decided to increase the number of terminals to eleven. It was around this time that a faster GE-235 replaced the 225. By the fall, 20 terminals were in use.
New system
One of the original goals of the program was to work programming into other coursework. This was a success, but it put considerable strain on the system and it became clear that it had no room for future growth.
In 1965, the team approached GE for support with ongoing development. In September, Vice President Louis Rader offered the new GE-635, which ran approximately 10 times as fast and included two CPUs. Additionally, a second DN-30 would be added to handle more lines, enough for 150 simultaneous users. To house it, a larger facility than the basement of College Hall, where the 265 was running, would be needed. Support from Peter Kiewit, Class of '22, along with additional support from the NSF, led to the construction of the Kiewit Computation Center, which opened in December 1966.
While waiting for this machine to arrive, in the summer and fall of 1966 a GE-635 at the Rome Air Development Center was used to develop MOLDS, the "Multiple User On-Line Debugging System". The GE-635 was operational in early 1967, and using MOLDS the new operating system was fully functional in September, at which time the GE-265 was sold off.
GE provided the machine for free for three years as part of a wider agreement under which Dartmouth would develop new versions of BASIC while GE used it to develop a new release of their version of the operating system. This collaboration proved to be a success; GE began deploying these machines as their Mark II time-sharing systems, and by the end of the decade they were one of the largest time-sharing vendors in the world.
When this "Phase I" system became operational, the Dartmouth team began development of "Phase II", the ideal operating system. This was installed in March 1969, and changed its name to the Dartmouth Time Sharing System shortly thereafter. When the three-year period was up, GE gifted the machine to the university. Although the two teams remained in contact, and several good-faith attempts were made to continue the relationship, little further collaboration occurred and the partnership officially ended on 20 September 1972.
Expanding user base
A review in 1968 noted that 80% of the students and 70% of the faculty were making some use of the system. Hundreds of terminals were spread across the campus, from the hospital to the business school. 57% of the CPU time was used for coursework, 16% for research, and the remaining 27% for "recreational use"; Dartmouth actively encouraged users to play games as a way to get hands-on use and overcome fear of the computer.
Beginning with another NSF grant, in 1967 Dartmouth also began placing terminals in off-campus locations, including high schools in the area. In terms of user counts, these terminals hosted 69% of the total users, although they used a smaller amount of computer time. By 1971 there were 79 remote terminals, as far away as New Jersey and Bangor, Maine. These were supported by multiplexer systems that allowed up to 12 terminals to share a single voice-grade telephone line. Additionally, a number of these lines were available for dial-up use with a modem.
Influence
Time-sharing was a major area of research in the 1960s, with many in the computer industry predicting that computing power would become inexpensive and widespread. This was most famously stated by John McCarthy, who said "computing may someday be organized as a public utility just as the telephone system is a public utility."
With BASIC, such services became far more accessible to end users whose tasks would otherwise have taken too long to code to be worth solving on a computer. This led a number of manufacturers to introduce computers specifically designed for this market of users who wanted to solve small or medium-scale tasks and were not as worried about outright performance. In particular, two machines aimed directly at this market became the "most widely used small time-sharing systems ever developed".
The HP 2000 ran HP Time-Shared BASIC, a combination of a BASIC and a time-share operating system almost identical to the DTSS setup. The system supported up to 32 simultaneous users, using a low-end HP 2100 CPU to run the terminals in the same fashion as the Datanet-30 of the original GE-265 setup, while the programs ran on a higher-end model of the same machine, typically differing in that it had more core memory. HP's BASIC used a semi-compiled "tokenized" format for storing programs, which improved loading times and meant "compiles" were zero-time.
Digital Equipment Corporation took a different approach, using a single-machine offering based on their existing PDP-11 line with the new RSTS/E operating system and BASIC-PLUS. BASIC-PLUS more closely followed the Fifth Edition, including the MAT commands, but was implemented as a pure interpreter, as opposed to the Dartmouth compiler or HP's tokenized format. It also included a number of control structures following the JOSS model. Tymshare SUPER BASIC also supported JOSS-style structures and matrix math, but retained the original compile-and-go operation.
Practically every vendor of the era offered some solution to this same problem, although their offerings were not always so closely similar to the original. When Kurtz began considering the formation of an ANSI standard for BASIC in 1973, he found that the number of time-sharing service bureaus offering BASIC was greater than for any other language. Unfortunately, this success was also a problem; by that point, there were so many variations that a standard seemed impossible.
Games in BASIC
Kemeny actively encouraged games on the DTSS platform, and considered them to be one of the major reasons for the success of the DTSS system. He was likely the author of FTBALL, an early mainframe game. Although Kemeny did not take credit for it, he later referred to FTBALL by stating it "was written on Sunday after a certain Dartmouth-Princeton game in 1965 when Dartmouth won the Lambert trophy. It's sort of a commemorative program". The game was an upset victory over heavily favored Princeton.
As the system expanded, especially after the addition of string handling in BASIC, the DTSS system became a major platform for the development of many text-based games. In the early 1970s, the People's Computer Company began publishing these in their magazine, typically converted to the more widely available HP BASIC. Many of these listings were collected in their 1975 book, What to do after you hit return. Although these are published in HP BASIC form, the majority of them trace their history to either DTSS or the Lawrence Hall of Science in California where a similar machine was set up, known as DECISION.
A more famous collection is BASIC Computer Games of 1978, where about half of the programs in the book were written either at Dartmouth, including another by Kemeny, Batnum, or, more commonly, at one of the many high schools that were connected to the system after 1968. A particularly prolific high school was Lexington High School in Massachusetts, but many other schools appear as well. A number of the programs do not list their original locations, but come from authors that were likely connected to the system through a school or public projects like Project SOLO.
Multi-user games became possible in BASIC when Stephen Garland and John McGeachie developed the MOTIF Multiple Online Terminal Interface for DTSS. To start a game, a user typed LINK followed by a session name instead of RUN, thereby enabling other users to connect to the game by typing JOIN followed by the session name. MOTIF then multiplexed input and output for the BASIC program, prepending a string identifier to the beginning of each line of input and output. The first programs developed with this interface were a two-person version of FTBALL and a five-person poker game. More serious was a management game that allowed up to ten students at the Amos Tuck School of Business Administration to compete in the production and marketing of a single product.
Versions
First Edition
The original version, retroactively known as version one, supported the commands LET, PRINT, END, FOR...NEXT, GOTO, GOSUB...RETURN, IF...THEN, DEF, READ, DATA, DIM, and REM. It included the basic mathematical operators for addition, subtraction, multiplication and division, as well as the up-arrow for exponents "...since on a teletype typewriter it is impossible to print superscripts." In modern varieties, the up-arrow is normally replaced by the "hat" character, ^. Exponentiation took the absolute value of the base before calculation, so to calculate, for example, (-3)↑3, one had to write -(3↑3). There was a further problem in the exponent function, which treated -X↑2 as (-X)↑2 as opposed to the correct order of operations, -(X↑2); this was not corrected until the third release. The INT() function always truncated towards zero.
The language had a number of idiosyncrasies of its own. In contrast to later versions, the LET command was required on all statements lacking another command, so a bare assignment such as X = 5 was not valid in this version; it had to be written LET X = 5. The PRINT statement used the comma when printing multiple variables, advancing to the next of five "zones". The comma was not needed in the case where one was printing a prompt and a single value, so a statement such as PRINT "THE VALUE IS" X was valid. A somewhat hidden feature was that all variables were capable of representing arrays (vectors) of up to ten elements (subscripts 1 to 10, changed to 0 to 10 in the Second Edition) without being declared that way using DIM.
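A short program in the style of the First Edition, reflecting the rules just described (LET required on every assignment, commas separating PRINT items); the line numbers, values, and message text are illustrative rather than taken from any Dartmouth listing:
10 REM PRINT A SHORT TABLE OF SQUARES
20 FOR I = 1 TO 5
30 LET S = I*I
40 PRINT "THE SQUARE OF", I, "IS", S
50 NEXT I
60 END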
Variable names were limited to a single letter or a letter followed by a digit (286 possible variable names). All operations were done in floating point. On the GE-225 and GE-235, this produced a precision of about 30 bits (roughly ten digits) with a base-2 exponent range of -256 to +255.
Additionally, because the GE-235's word size was 20 bits and it used a six-bit character code, the language enshrined the use of three-letter function names, since three six-bit characters could be stored in a 20-bit word (using 18 bits). This is why BASIC functions are three letters, like INT or SQR, something that remained in the many varieties of the language long after they left the GE-235.
Second Edition, CARDBASIC
The Second Edition of BASIC, although not referred to as such at the time, only made minimal changes. Released in October 1964, it allowed arrays to begin at subscript 0 instead of 1 (useful for representing polynomials) and added the semicolon, ;, to the PRINT statement. Unlike later implementations, where this left space between items, the semicolon advanced printing to the next multiple of three characters, which allowed more numbers to be "packed" into a line of output than the existing comma separator.
The October version also included a separate definition for CARDBASIC, which was simply a version of BASIC for use on card-based workflows. CARDBASIC was almost identical to the interactive version, with the exception being that it did not include the zero-based arrays. More important to the language's future, CARDBASIC added the MAT commands that worked with numerical matrixes. CARDBASIC was not developed further, as the entire idea of BASIC had been to be interactive.
Third Edition
The Third Edition, released in 1966 and the first to use the "edition" naming, was the first designed specifically with the intent of running on the new GE-635 computer, which was due to arrive shortly. This version included the MAT functions from CARDBASIC, although they now allowed for a subscript of 0.
The new SGN function gave the sign of its argument (positive → 0 and negative → 1), while RESTORE was added to "rewind" the position of READ/DATA. The exponentiation problem was fixed, so -X↑2 would be interpreted as -(X↑2). Additionally, the INT function was changed to be a true floor, as opposed to trimming toward zero, which allowed rounding to be implemented with INT(X+0.5).
The major change in this version was the new INPUT statement, which allowed the user to type in numeric values, making the language truly interactive during execution for the first time; previously the only control one had during execution was to type STOP in the monitor. Additionally, the system now allowed, and encouraged, loops to be indented, a feature that was not seen on most other versions of BASIC.
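A brief sketch combining the new INPUT statement with the floor-style INT for rounding, as described above; the prompt text and line numbers are illustrative:
10 REM ROUND A NUMBER TO THE NEAREST INTEGER
20 PRINT "TYPE A NUMBER"
30 INPUT X
40 LET R = INT(X + 0.5)
50 PRINT "ROUNDED VALUE IS", R
60 END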
Fourth Edition
The Third Edition remained in use through the GE-235's lifetime into the fall of 1967. However, as plans were made to receive the GE-635, an experimental version was created on the 635 in the spring of 1967. This version was a partnership between GE and Dartmouth, with GE contributing a new operating system as well as a number of features of BASIC from their own Mark 1 BASIC efforts.
This version, initially published as a supplement to the Third Edition, added the RANDOMIZE command to "seed" the RND function, and the ON...GOTO "computed goto" that closely matched the similar feature in FORTRAN. This version also allowed ON...THEN, arguing that IF...THEN did not require the GOTO so the same format should be allowed here. The new TAB function allowed the printing to be moved to a given column, from 0 to 74. Another internal change was to once again change the MAT to be 1-based; one could use the 0th index, but it would normally be ignored by the various commands.
Two major additions were made during the development. The first addition was string variables, along with changes to the READ/DATA statements to allow strings to be stored in them and the INPUT statement to read them in interactively. One feature of the string system was that trailing spaces were deliberately ignored in comparisons, so that "YES" and "YES " were considered equal. This was later realized to be a grave error. This version also added the semicolon to PRINT statements to do "close packing" of output.
The official Fourth Edition did not appear until 1968, which added several new features on top of the previous additions. This included the ability to define multi-line functions with the DEF command, and the powerful CHANGE statement that treated strings as arrays of ASCII-like codes to allow per-character operations without having to loop over the string. This was also the only string manipulation function; to extract a single character or substring, one had to use CHANGE to convert it into an array of numbers, manipulate that array, and then convert it back. This was the reason MAT was 1-based again, as the length of the string was placed in location zero so it was normally ignored.
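A hedged sketch of the CHANGE mechanism described above, assuming the CHANGE A$ TO A and CHANGE A TO A$ forms used in later Dartmouth documentation, with element zero of the numeric array holding the string length; the quoting of the DATA item is also an assumption:
10 REM PRINT THE CHARACTER CODES OF A STRING
20 READ A$
30 DATA "HELLO"
40 CHANGE A$ TO A
50 FOR I = 1 TO A(0)
60 PRINT A(I);
70 NEXT I
80 END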
Fifth Edition
The Fifth Edition, from late 1970, once again started as two supplements to the Fourth Edition, from February and April 1969.
The major change was the introduction of file handling. Previously, any pre-defined data used in the program had to be placed in DATA lines and then read in one at a time using the READ command. This extension allowed files to be accessed and read in a similar fashion. The INPUT command could now be used to read a single item from a file, while PRINT would write one. For random access, the READ could be positioned anywhere in the file with the RESET command, while WRITE would write at that location. The current location was returned by the LOC function, and the file length by LOF. One could also test whether the end of the file had been reached during sequential reads using IF END THEN....
Another major change was the ability for one BASIC program to call another using the CHAIN command, and pass variables to it using the COMMON list. It was later realized that this basic concept had a number of problems, but it was nevertheless used to write some large programs.
Numerous more minor changes were also added. Among these were two-dimensional string arrays, as opposed to the one-dimensional arrays of the previous version, as well as the ability to use DEF to define string-based functions rather than just mathematical ones. New system-oriented functions included CLK$ and DAT$ to work with times and dates, TIM, which returned the elapsed time, and USR$, which returned the user number, what would today be the username. New string functions included LEN, STR$, VAL, and ASC, which are common in modern BASIC dialects. A short form for REM also appeared in this version.
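A small sketch using the new string functions named above; their behaviour is assumed to match the familiar modern forms (LEN returning the length, VAL converting text to a number, STR$ doing the reverse), and the exact output spacing may have differed:
10 REM STRING-FUNCTION ILLUSTRATION
20 LET A$ = "42"
30 LET N = VAL(A$)
40 PRINT LEN(A$), N + 1, STR$(N + 1)
50 END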
Sixth Edition
Work on the Sixth Edition began in the fall of 1969, before the Fifth Edition was finalized. In contrast to the previous versions, where the specification documents were based on whatever changes had been made to the compiler, for the new version a complete specification was written beforehand. This version was worked on by Kemeny and Kurtz, as well as several former students who returned as faculty; Stephen Garland, John McGeachie, and Robert Hargraves. It was allowed considerable time to mature, with a beta version running for three months during the summer of 1971, before it was finally released on 21 September 1971. As a result of giving the design time to mature, it was, as Kurtz described, "probably the best-designed and most stable software system Dartmouth has ever written."
One of the more major changes was the replacement of the earlier CHAIN concept with the much better defined CALL, which operated in a fashion similar to GOSUB but referred to a function name rather than a line number. The functions were defined using SUB...SUBEND, and allowed arbitrary parameters to be passed in as part of the call, rather than using the COMMON system. Another major change was to use file handles (numbers) created with the FILE command, which is similar to the OPEN found in most modern BASICs. New string functions included SEG$, which returned substrings in a fashion similar to the MID$ found in MS-derived BASICs, POS, which returns the location of one string inside another, and the & operator for concatenation. PRINT USING provided formatted output in a fashion somewhat similar to FORTRAN.
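A sketch of the CALL and SUB...SUBEND mechanism described above; the placement of the subprogram after the main program's END and the exact parameter syntax are assumptions made for illustration rather than details taken from the Sixth Edition manual:
10 REM MAIN PROGRAM CALLS A SUBPROGRAM
20 CALL CUBE(3, C)
30 PRINT "THE CUBE IS", C
40 END
100 SUB CUBE(X, Y)
110 LET Y = X*X*X
120 SUBEND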
The Sixth Edition was essentially the last version of the original BASIC concept. It remained unchanged for many years. Later versions were significantly different languages.
SBASIC
In 1976, Stephen Garland collected a number of structured programming additions to create Dartmouth Structured BASIC, or SBASIC. The primary goal was to replace the control structures based on IF...THEN and GOTO with a variety of block-oriented structures. It did this using a precompiler that took SBASIC source code, converted that to 6th Edition BASIC, and then compiled and ran that as normal. SBASIC also added a number of graphics features, based on the PLOT command that had been added by other programmers.
Block structures were terminated by matching closing statements, as was the case in ALGOL 68, as opposed to the generic block structures found in languages like Pascal or C. For instance, DO WHILE... spanned multiple lines until it ended with a LOOP. The DO loop could also be bottom-exited by removing the WHILE or UNTIL and placing the conditional on the LOOP line at the bottom. Infinite loops were supported using DO FOREVER or LOOP FOREVER.
"Original" Dartmouth BASIC did not allow statements after a THEN, only a line number to branch to. SBASIC allowed any statement, so, for instance, a line such as IF X > 5 THEN LET X = 0 was permitted. This basic expansion to IF...THEN, pioneered in 1972 with BASIC-PLUS, was already widely supported by most varieties of BASIC by this point, including the microcomputer versions that were being released at this time. On top of this, SBASIC added a block-oriented IF by placing the THEN on a separate line and then ending the block with CONTINUE, as well as the SELECT CASE mechanism that survives to this day in Visual Basic .NET.
SBASIC also added a number of graphics commands intended to be used with plotters. This required the PLOTTER "plottername" to direct subsequent commands to a selected device, and the WINDOW... to set up its parameters. From then on, PLOT X,Y would produce dots on the selected plotter, while adding the semicolon at the end of the statement, as used in PRINT, would leave the pen on the paper and produce a line, for instance PLOT 10,10;20,20.
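A short sketch of the SBASIC control structures described above, combining a DO WHILE...LOOP with a statement placed after THEN; the keywords are taken from this description rather than from the SBASIC manual, and the line numbers and indentation are illustrative:
10 REM COUNT DOWN FROM 10, FLAGGING THE HALFWAY POINT
20 LET I = 10
30 DO WHILE I > 0
40    IF I = 5 THEN PRINT "HALFWAY"
50    PRINT I
60    LET I = I - 1
70 LOOP
80 END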
SBASIC eventually formed the basis of the ANSI X3.113-1987 Standard for Full BASIC, which extended the earlier ANSI X3.60-1978 Standard for Minimal BASIC. The long delay in producing that standard, along with the lack of regard among computer scientists for unstructured BASIC, led the College Board committee developing the Advanced Placement course in computer science, which Garland chaired, to opt for requiring Pascal and not allowing BASIC as the language for the course.
Garland used SBASIC to teach the introductory course in computer science at Dartmouth, but rewrote his textbook for the course in Pascal so that it could be used to teach the AP course.
Seventh Edition
Garland's SBASIC was written as a pre-compiler, itself in SBASIC source code. The system would read SBASIC source, write the corresponding 6th Edition code, and then compile that output. The Seventh Edition, released in 1980, was a version of SBASIC that was a stand-alone compiler in its own right. It added a number of features of its own. Most of the changes were further elaborations to the system for calling external programs and "overlays" that allowed a program to be broken up into parts. In this version, SUBs sharing a single file shared data among themselves, providing a modicum of data hiding within the group of routines, or what would today be known as a module.
In addition, this edition added structured error handling and allowed arbitrary matrix math in LET statements, so one could write LET A = M*4, where M was a matrix variable, and receive another matrix in A with all the elements of M multiplied by 4. Finally, another major update was that subroutines now used an activation record system, which allowed recursion.
ANSI BASIC, Eighth Edition
By the early 1970s, the number of BASIC implementations had grown to dozens, all of which had their own changes to the basic concept introduced in the original version. Most of these were based on the Fifth Edition, although they often lacked the MAT instructions and the ability to indent code. GE was one of these companies; they released their Mark II systems with the 5th edition rather than waiting for the 6th to arrive a few months later. BASIC-PLUS on the DEC platform was perhaps the closest implementation, including the MAT commands for instance, but then added a number of changes that were not backward-compatible.
After the release of the 6th edition, Kurtz became involved in an effort to define a standard BASIC. An American National Standards Institute (ANSI) working group, X3J2, formed in January 1974, and a corresponding European Computer Manufacturers Association (ECMA) group, TC21, formed that September. The goal at that time was to produce two related standards. Minimal BASIC would be similar to the Second Edition but with strings added, a standard to which practically every BASIC would already be able to conform. Standard BASIC would add more functionality to produce something more in keeping with the real BASIC varieties seen in the market.
The process was slow, and the first draft of Minimal BASIC was not published until January 1976, leading to it being officially adopted in December 1977 by ECMA, and in 1979 by ANSI as X3.60-1978. Minimal BASIC was similar to the 3rd edition, including string variables, while lacking MAT and other advanced features. In contrast, Standard BASIC had many new features that did not exist in other BASICs, and many of these were poorly considered and the subject of some criticism. For instance, the standard included a line-continuation character, but chose the ampersand (&), which was also used for string concatenation. Using these in a single line could lead to very confusing code.
By this time, the release of the first microcomputer systems in 1975 had quickly led to the introduction of Altair BASIC, the first version of what would soon be known as Microsoft BASIC. MS BASIC was patterned on BASIC-PLUS, and thus ultimately the Fifth Edition, but lacked indenting, MAT, and other features. It also added the LEFT$ and RIGHT$ functions, breaking the three-letter convention. As the number of microcomputers grew, and turned into the home computer market in the late 1970s, MS BASIC became the de facto standard.
With this rapid change in the market, the Standard BASIC effort slowed further and was not formally ratified until 1987 as X3.113-1987. By this time, there was no real purpose to the standards; not only was MS BASIC everywhere, but by the mid-1980s the use of BASIC was declining as shrinkwrap software took over from type-in programs. Both standards were eventually withdrawn.
In spite of the eventual failure of the ANSI efforts, the draft of Standard BASIC was implemented at Dartmouth as the Eighth Edition in 1982.
DTSS interface
DTSS implemented an early integrated development environment (IDE): an interactive command line interface. This provided a number of user and job control commands. For instance, an idle terminal could be connected to a user account by typing HELLO, and logged out again with BYE.
Any line typed in by the user, and beginning with a line number, was added to the program, replacing any previously stored line with the same number; anything else was assumed to be a DTSS command and immediately executed. Lines which consisted solely of a line number were not stored, but did remove any previously stored line with the same number. This method of editing was necessary due to the use of teleprinters as the terminal units.
Each user account could have any number of BASIC programs stored offline, while administrator accounts could also leave programs in permanent storage. Any one of these was active at a given time for a given user account, and if no program had been loaded, a new program was assumed. Stored programs were accessed using commands that are today better known as parts of the BASIC language itself; for instance, the LIST command instructed DTSS to print out the currently active program.
List of commands
HELLO log into DTSS
BYE log off from DTSS
BASIC start BASIC mode
NEW name and begin writing a program
OLD retrieve a previously named program from permanent storage
LIST display the current program
SAVE save the current program in permanent storage
UNSAVE clear the current program from permanent storage
CATALOG display the names of programs in permanent storage
SCRATCH erase the current program without clearing its name
RENAME change the name of the current program without erasing it
RUN execute the current program
STOP interrupt the currently running program
TEST use an instructor-provided program to test the current program
FRI Friden mode for teletypes with mechanical linefeeds
NFR exit Friden mode
EXP explain (help); EXP EXP gives a list of commands that can be explained by the system
REPLACE save the current program using a name already in use for another file
LINK execute the current program in a multiple-terminal mode
JOIN join a program being executed in multiple-terminal mode
The commands were often believed to be part of the BASIC language by users, but, in fact, were part of the time sharing system and were also used when preparing ALGOL or FORTRAN programs via the DTSS terminals.
TEACH/Test System
Some 80% of all Dartmouth students in the late 1960s took two mathematics courses and learned Basic in the second course, either in calculus or in finite mathematics. They received two one-hour lectures on Basic near the beginning of these courses and then had to write four programs in Basic, ranging from programs for approximating π or finding a root of a quintic polynomial to programs for solving a differential equation or finding (by simulation) the limiting probability in a Markov chain. The TEACH/Test system helped students complete these assignments. When they thought they had a working program, they typed the command TEST, and an instructor-written program either approved what they had written or provided a hint about where they might have gone wrong. Students were required to hand in a listing of each program, a sample RUN, and an approval from the TEACH/Test system. The system did not grade assignments or keep a record of how many times a student typed TEST; it simply assisted students and their instructors.
BASIC language
The first release implemented the following statement types, taking some of its operators and keywords from FORTRAN II and some from ALGOL 60. Overall, the language more closely follows the FORTRAN model, in that it generally has one statement per line of code, lacks ALGOL's "blocks" to group code (these structured programming constructs were a primary reason for ALGOL's development) and uses GOTO to control program flow.
From ALGOL it took the FOR...TO...STEP style loops that replaced FORTRAN's unwieldy DO...CONTINUE statements. BASIC also simplified the IF...THEN construct to allow simple comparisons like IF X>5 THEN GOTO 20, as opposed to FORTRAN's IF (X-5) 20,20,30. A FORTRAN-style "computed" branch was reintroduced with the ON...GOTO command in later versions.
Variable names were limited to A to Z, A0 to A9, B0 to B9, ..., Z0 to Z9, giving a maximum of 286 possible distinct variables. FORTRAN's odd system for setting up the variables I through N as integers and the rest as floating point was removed, and all variables were assumed to be floating point and dimensioned with up to 10 elements. The DIM command was only required if the array held more than ten elements. Array names were restricted to A to Z only.
List of BASIC statements
DEF define single line functions
DIM (short for dimension) define the size of arrays
END define the end of the program
STOP stop a program before the textual end
FOR / TO / STEP define loops
NEXT mark the end of loops
GOSUB transfer control to simple subroutines
RETURN return control from simple subroutines
GOTO transfer control to another statement
IF / THEN decision making
LET / = assign formula results to a variable
PRINT output results
DATA store static data within the program
READ input data stored in DATA statements
REM comment ("REMark")
It also implemented floating-point numeric variables and arithmetic.
List of operators
List of functions
ABS Absolute value
ATN Arctangent value (result in radians)
COS Cosine value (argument in radians)
EXP Exponential value
INT Integer value
LOG Natural Logarithmic value
RND Random value
SIN Sine value (argument in radians)
SQR Square root value
TAN Tangent value (argument in radians)
Examples
Early versions of BASIC did not have the ability to read and write external files. To represent lists of data that would normally be read from a file, BASIC included the DATA keyword, which could be followed by an arbitrarily long list of elements, ending only at the limit of the line length. The DATA statement was non-executable and was skipped if encountered. READ commands would consume the data one by one, keeping track of the location within the complete collection of DATA elements using an internal pointer. In version 3, a RESTORE command was added to reset the pointer to the first DATA command in a program.
In this example, "the first three data values are read into X, Y, and Z respectively. The value -1 is read into N. The next 11 values, .1 through .3, are read into the 11 elements of array B."
15 READ X, Y, Z
20 READ N
24 FOR I=0 TO 10
25 READ B(I)
26 NEXT I
40 DATA 4.2, 7.5, 25.1, -1, .1, .01, .001, .0001
45 DATA .2, .02, .002, .0002, .015, .025, .3, .03, .003
Unlike most subsequent BASICs, Dartmouth BASIC, from the Third Edition onwards, had a matrix keyword, MAT, which could prefix a number of other commands to operate on entire arrays of data with a single command. In this example, from the 1968 manual, MAT INPUT V is used to input a series of variables. When the user enters nothing on a line, this process ends and the total number of elements is accessed in the NUM pseudovariable. The code then adds up all of the individual elements in the matrix and calculates the average. The Third Edition also added indentation, which is used here to clarify the loop structure.
5 LET S = 0
10 MAT INPUT V
20 LET N = NUM
30 IF N = 0 THEN 99
40 FOR I = 1 TO N
45 LET S = S + V(I)
50 NEXT I
60 PRINT S/N
70 GO TO 5
99 END
Notes
References
Citations
Bibliography
Further reading
Kemeny, John G. & Kurtz, Thomas E. (1985). Back to BASIC: The History, Corruption and Future of the Language. Addison-Wesley Publishing Company, Inc.
External links
Listing of the source code for version 2 of the Dartmouth BASIC compiler circa 1965 (archived 2007)
Scans of original documentation and software.
Dartmouth BASIC Interpreter, RetroWiki.es
Dartmouth College history
Programming languages created in 1964
Dartmouth BASIC
Time-sharing
BASIC programming language family
|
1332019
|
https://en.wikipedia.org/wiki/Source%20Code%20Control%20System
|
Source Code Control System
|
Source Code Control System (SCCS) is a version control system designed to track changes in source code and other text files during the development of a piece of software. This allows the user to retrieve any of the previous versions of the original source code and the changes which are stored. It was originally developed at Bell Labs beginning in late 1972 by Marc Rochkind for an IBM System/370 computer running OS/360.
A characteristic feature of SCCS is the sccsid string that is embedded into source code, and automatically updated by SCCS for each revision. This example illustrates its use in the C programming language:
static char sccsid[] = "@(#)ls.c 8.1 (Berkeley) 6/11/93";
This string contains the file name and date, and can also contain a comment. After compilation, the string can be found in binary and object files by looking for the pattern "@(#)" and can be used to determine which source code files were used during compilation. The "what" command is available to automate this search for version strings.
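As an illustration, the kind of search that the what command automates can be sketched in Python as follows; this is not the implementation of what itself, and the path passed in at the end is only an example:
import re

def find_sccs_ids(path):
    # Scan a compiled file for SCCS identification strings, which begin with
    # the marker "@(#)" and run up to the next NUL or newline byte.
    with open(path, "rb") as f:
        data = f.read()
    return [m.group(1).decode("ascii", "replace")
            for m in re.finditer(rb"@\(#\)([^\x00\n]*)", data)]

print(find_sccs_ids("/bin/ls"))  # may be empty on systems without sccsid strings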
History
In 1972, Marc Rochkind developed SCCS in SNOBOL4 at Bell Labs for an IBM System/370 computer running OS/360 MVT. He rewrote SCCS in the C programming language for use under UNIX, then running on a PDP-11, in 1973.
The first publicly released version was SCCS version 4 from February 18, 1977. It was available with the Programmer's Workbench (PWB) edition of the operating system. Release 4 of SCCS was the first version that used a text-based history file format; earlier versions used binary history file formats. Release 4 was no longer written or maintained by Marc Rochkind. Subsequently, SCCS was included in AT&T's commercial System III and System V distributions. It was not licensed with 32V, the ancestor to BSD. The SCCS command set is now part of the Single UNIX Specification.
SCCS was the dominant version control system for Unix until later version control systems, notably RCS and later CVS, gained more widespread adoption. Today, these early version control systems are generally considered obsolete, particularly in the open-source community, which has largely embraced distributed version control systems. However, the SCCS file format is still used internally by a few newer version control programs, including BitKeeper and TeamWare. The latter is a frontend to SCCS. Sablime was developed from a modified version of SCCS but uses a history file format that is incompatible with SCCS. The SCCS file format uses a storage technique called interleaved deltas (or the weave). This storage technique is now considered by many version control system developers to be foundational to advanced merging and versioning techniques, such as the "Precise Codeville" ("pcdv") merge.
Apart from correcting Year 2000 problems in 1999, no active development has taken place on the various UNIX vendor-specific SCCS versions.
In 2006, Sun Microsystems (today part of Oracle) released their Solaris version of SCCS as open-source under the CDDL license as part of their efforts to open-source Solaris.
Background
The Source Code Control System (SCCS) is a system for controlling file and history changes. Software is typically upgraded to a new version by fixing bugs, optimizing algorithms and adding extra functions. Changing software causes problems that require version control to solve.
Source code takes up too much space because it is repeated in every version.
It is hard to acquire information about when and where changes occurred.
Finding the exact version which the client has problems with is difficult.
SCCS was built to solve these problems. SCCS from AT&T had five major versions for the IBM OS and five major versions for UNIX.
Two specific implementations of SCCS were the PDP-11 under Unix and the IBM System/370 under its operating system.
Composition
SCCS consists of two parts: SCCS commands and SCCS files. All basic operations (e.g., create, delete, edit) are carried out by SCCS commands. SCCS files have a distinctive filename prefix, s., and are manipulated by the SCCS commands.
SCCS files
An SCCS file consists of three parts:
Delta table
Access and tracking flags
Body of the text
Delta table
In SCCS, a delta is a single revision in an SCCS file. Deltas are stored in a delta table, so each SCCS file has its own record of changes.
Control and tracking flags in SCCS files
Every operation on each SCCS file is tracked by flags. Their functions are as follows:
Setting permissions for editing of every SCCS file.
Controlling each release of every SCCS file.
Permitting collaborative editing of every SCCS file.
Cross-referencing changes of every SCCS file.
Body
SCCS uses three types of control records for keeping track of insertions and deletions applied in different deltas. They are the insertion control record, the deletion control record, and the end control record. Whenever a user changes some part of the text, a control record is inserted surrounding the change. The control records are stored in the body along with the original text records.
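The effect of these control records can be pictured with a deliberately simplified model; the following Python sketch assumes a strictly linear history and does not reproduce SCCS's actual on-disk syntax:
# Toy model of interleaved deltas: every line of the file's history is stored
# once, annotated with the delta that inserted it and, optionally, the delta
# that deleted it.  Retrieving a version keeps only the lines visible in it.
weave = [
    ("line one",               1, None),  # inserted in delta 1, never deleted
    ("line two (old wording)", 1, 2),     # inserted in delta 1, deleted in delta 2
    ("line two (new wording)", 2, None),  # inserted in delta 2
]

def get_version(weave, version):
    # A line is present if it was inserted at or before this version and
    # has not yet been deleted by it (linear history only).
    return [text for text, inserted, deleted in weave
            if inserted <= version and (deleted is None or deleted > version)]

print(get_version(weave, 1))  # ['line one', 'line two (old wording)']
print(get_version(weave, 2))  # ['line one', 'line two (new wording)']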
SCCS basic commands
SCCS provides a set of commands in the form of macro invocations that perform or initiate source code management functions with a simple syntax, such as create, get, edit, prt. It also provides access to the revision history of files under management. These commands are implemented as argument verbs to the driver program sccs.
Create
The sccs command create uses the text of a source file to create a new history file. For example:
$ sccs create program.c
program.c:
1.1
87 lines
The output shows the file name, the initial version number, and the number of lines.
The command is a macro that expands to admin to create the new history file followed by get to retrieve the file.
Edit
$ sccs edit program.c
1.1
new delta 1.2
87 lines
Edit a specific file.
The command is a macro that expands to get -e.
Delget
$ sccs delget program.c
comments? main function enhanced
1.2
10 inserted
0 deleted
87 unchanged
1.2
97 lines
Check in the new version and then retrieve it again from SCCS.
The command is a macro that expands to delta to check in the new version file followed by get to retrieve the file.
Get
$ sccs get program.c
1.1
87 lines
The output shows the version and the number of lines retrieved from the specified file.
Prt
$ sccs prt program.c
This command produces a report of source code changes.
Implementations
UNIX SCCS versions
Most UNIX versions include a version of SCCS, which, however, is often no longer actively developed.
Jörg Schilling's fork
Jörg Schilling (who requested the release of SCCS in the early days of the OpenSolaris project) maintained a fork of SCCS that is based on the OpenSolaris source code. It has received major feature enhancements but remains compatible with the original SCCS versions unless using the "new project" mode.
Heirloom Project
The Heirloom Project includes a version of SCCS derived from the OpenSolaris source code and maintained between December 2006 and April 2007.
GNU conversion utility
GNU offers the SCCS compatible program GNU CSSC ("Compatibly Stupid Source Control"), which is occasionally used to convert SCCS archives to newer systems like CVS or Subversion; it is not a complete SCCS implementation and not recommended for use in new projects, but mostly meant for converting to a modern version control system.
Other version control systems
Since the 1990s, many new version control systems have been developed and become popular that are designed for managing projects with a large number of files and that offer advanced functionality such as multi-user operation, access control, automatic building, network support, release management and distributed version control. Bitkeeper and TeamWare use the SCCS file format internally and can be considered successors to SCCS.
On BSD systems, the SCCSID is replaced by an RCSID starting and ending with $; the corresponding tool is ident. This system was originally used by RCS, which adds the string automatically on checkout, but it has since become an integral part of the style guide of the FreeBSD code base, which defines a custom keyword and macro of its own.
The SRC version control system can also use the SCCS file format internally (or RCS's) and aims to provide a better user interface for SCCS while still managing only single-file projects.
References
Further reading
Essay from Marc Rochkind on how SCCS was invented
1972 software
Version control systems
Free version control software
Unix archivers and compression-related utilities
Unix SUS2008 utilities
Self-hosting software
Software using the CDDL license
|
16428481
|
https://en.wikipedia.org/wiki/4827%20Dares
|
4827 Dares
|
4827 Dares is a larger Jupiter trojan from the Trojan camp, approximately 43 kilometers in diameter. It was discovered on 17 August 1988 by American astronomer Carolyn Shoemaker at the Palomar Observatory in California. The dark D-type asteroid has a rotation period of 19.0 hours. It was named after Dares from Greek mythology.
Orbit and classification
Dares is a dark Jovian asteroid in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the Gas Giant's L5 Lagrangian point, 60° behind the planet on its orbit. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 4.9–5.4 AU once every 11 years and 7 months (4,233 days; semi-major axis of 5.12 AU). Its orbit has an eccentricity of 0.05 and an inclination of 8° with respect to the ecliptic.
The body's observation arc begins with a precovery at Palomar in November 1954, almost 34 years prior to its official discovery observation.
Physical characteristics
In the SDSS-based taxonomy, Dares is a dark D-type asteroid. It is also characterized as a D-type by Pan-STARRS' survey.
Rotation period
In February 1994, a rotational lightcurve of Dares was obtained over five nights of observation by Stefano Mottola and Anders Erikson using the ESO 1-metre telescope at La Silla Observatory in Chile. Lightcurve analysis showed a well-defined rotation period of 19.0 hours with a brightness variation of 0.24 magnitude.
In October 2013, photometric observations in the R-band by astronomers at the Palomar Transient Factory in California gave a concurring period of 18.967 hours with an amplitude of 0.23 magnitude.
Diameter and albedo
According to the survey carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Dares measures 42.77 kilometers in diameter and its surface has an albedo of 0.067, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 44.22 kilometers based on an absolute magnitude of 10.5.
Naming
This minor planet was named by the discoverer from Greek mythology after the Trojan Dares, one of Aeneas' wandering companions (Aeneads) who were not killed or enslaved by the end of the Trojan War. The official naming citation was published by the Minor Planet Center on 25 August 1991.
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
004827
Discoveries by Carolyn S. Shoemaker
Minor planets named from Greek mythology
Named minor planets
19880817
|
15890762
|
https://en.wikipedia.org/wiki/Andrew%20Glassner
|
Andrew Glassner
|
Andrew S. Glassner (born 1960) is an American expert in computer graphics, well known in the computer graphics community as the originator and editor of the Graphics Gems series, An Introduction to Ray Tracing, and Principles of Digital Image Synthesis. His later interests include interactive fiction, writing and directing, and consulting in the computer game and online entertainment industries. He worked at the New York Institute of Technology Computer Graphics Lab.
He started working in 3D computer graphics in 1978. He earned his B.S. in computer engineering (1984) from Case Western Reserve University, Cleveland, Ohio, M.S. in computer science (1987) and Ph.D. (1988, advisor Frederick Brooks) from the University of North Carolina at Chapel Hill, Chapel Hill, NC.
He was a researcher in computer graphics with Xerox Palo Alto Research Center (1988–1994) and with Microsoft Research (1994–2000).
His other positions include founding editor of the Journal of Graphics Tools, founding member of the advisory board of Journal of Computer Graphics Techniques, and editor-in-chief of ACM Transactions on Graphics (1995–1997). He served as Papers Chair for SIGGRAPH '94.
Since 1996 he has been writing the Andrew Glassner's Notebook column in the IEEE Computer Graphics & Applications journal, collected into three books.
In 2018 he digitally published the book Deep Learning From Basics to Practice.
In July 2019, he took up a position as Senior Research Scientist at visual effects company Weta Digital.
Bibliography
Deep Learning From Basics to Practice, Amazon Digital Services, 2018
Morphs, Mallards & Montagues, AK Peters Publishers, 2004,
Interactive Storytelling: Techniques for 21st Century Fiction, AK Peters Publishers, 2004,
Andrew Glassner's Third Notebook, AK Peters Publishers, 2004
Tomorrow's Stories: The Future of Interactive Entertainment, MIT Press, Cambridge, 2003
Andrew Glassner's Other Notebook, AK Peters Publishers, Natick, 2002
Andrew Glassner's Notebook, Morgan-Kaufmann Publishers, San Francisco, 1999
Principles of Digital Image Synthesis, Morgan-Kaufmann Publishers, San Francisco, 1995; author later (2011) released both volumes under Creative Commons license: Principles of Digital Image Synthesis, Version 1.01, January 19, 2011
(Series creator and editor) Graphics Gems, Academic Press, Cambridge (volumes I through V)
3D Computer Graphics: A Handbook for Artists and Designers, Design Press, New York, 1989 (Japanese translation 1990 by ASCII Press, Japan)
(editor) An Introduction to Ray Tracing, Academic Press, London, 1989 (creator, editor, and multiple contributor)
(Ph.D. thesis) Algorithms for Efficient Image Synthesis, 1988
Computer Graphics User's Guide, Howard W. Sams & Co., Indianapolis, 1984. (Japanese translation 1987 by ASCII Press, Japan)
References
External links
Real Time Rendering blog
1960 births
Living people
American computer scientists
Computer graphics professionals
University of North Carolina at Chapel Hill alumni
Case Western Reserve University alumni
New York Institute of Technology faculty
|
5574257
|
https://en.wikipedia.org/wiki/Gramps%20%28software%29
|
Gramps (software)
|
Gramps (formerly GRAMPS, an acronym for Genealogical Research and Analysis Management Programming System) is free and open-source genealogy software. Gramps is programmed in Python using PyGObject. It uses Graphviz to create relationship graphs.
Gramps is a rare example of commons-based peer production as free and open-source software created by genealogists, for genealogists. It has been described as intuitive and easy-to-use for hobbyists and "feature-complete for professional genealogists". The program is acknowledged as "most popular FOSS program for genealogy" by Eastman and others. The Australian consumer advocacy group, CHOICE, has recommended Gramps.
The program is extensible such that, in addition to human family trees, it has been used to create animal pedigree charts as well as academic genealogy showing mentoring relationships between scientists, physicians, and scholars.
Features
Gramps is one of the biggest offline genealogy suites available. Features include:
Supports multiple languages and cultures, including patronymic, matronymic, and multiple surname systems.
Full Unicode support.
Relationship calculators. Some languages have relationship terminology with no proper translation to other languages. Gramps deals with this by allowing for language specific relationship calculators.
Generates reports in multiple formats, including .odt, LaTeX, .pdf, .rtf, .html, and .txt.
Produces a wide variety of reports and charts, including relationship graphs of large, complex acyclic charts.
Gramps is easily extended via plugins called Gramplets. A Gramplet is a view of data that either changes dynamically during the running of Gramps, or provides interactivity to your genealogical data.
Gramps employs an explicit event-centric documentation approach, similar to the CIDOC Conceptual Reference Model used by many cultural heritage institutions.
"Sanity check" flagging of improbable events, such as births involving people extremely young or old.
Support for multiple calendars, e.g. Gregorian calendar, Julian calendar, Islamic calendar, etc.
Complete programmer's API documentation with free and open source code made publicly available.
File format
The core export file format of Gramps is named Gramps XML and uses the file extension .gramps. It is an XML-based format. Gramps XML is a free format. Gramps usually compresses Gramps XML files with gzip. The file format Portable Gramps XML Package uses the extension .gpkg and is currently a .tar.gz archive including Gramps XML together with all referenced media. The user may rename the file extension .gramps to .gz for editing the content of the genealogy document with a text editor. Internally, Gramps uses SQLite as the default database backend, with other databases available as plugins.
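Because a .gramps export is normally just gzip-compressed XML, it can be inspected with standard tools. The following minimal Python sketch (not part of Gramps itself) assumes a gzip-compressed export; the file name used is hypothetical:
import gzip
import xml.etree.ElementTree as ET

def read_gramps_xml(path):
    # A .gramps export is usually a gzip-compressed Gramps XML document.
    with gzip.open(path, "rb") as f:
        return ET.parse(f).getroot()

root = read_gramps_xml("family-tree.gramps")  # hypothetical file name
print(root.tag)  # name of the top-level element of the Gramps XML document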
Gramps can import from the following formats: Gramps XML, Gramps Package (Portable Gramps XML), Gramps 2.x .grdb (older versions Gramps), GEDCOM, CSV.
Gramps supports exporting data in the following formats: Gramps XML, Gramps Package (Portable Gramps XML), GEDCOM, GeneWeb's GW format, Web Family Tree (.WFT) format, vCard, vCalendar, CSV.
Programs that support Gramps XML
PhpGedView (version 4.1 and up) supports output to Gramps XML.
The script tmg2gramps by Anne Jessel converts The Master Genealogist v6 genealogy software datafile to a Gramps v2.2.6 XML.
The Gramps PHP component JoomlaGen for Joomla uses an upload of the GRAMPS XML database export to show genealogical information and overviews. JoomlaGen is compatible with GRAMPS 3.3.0.
Betty by Bart Feenstra generates static websites from Gramps XML and Gramps XML Package files as alternatives to GEDCOM.
Languages
Gramps is available in 45 languages (as of December 2014).
Gramps also has two special use sub-translation languages:
Animal pedigree, which allows users to keep track of the pedigree and breed of animals
Same gender/sex which gives the option of removing gender-biased verbiage from reports.
Release history
The project began as GRAMPS in 2001, and the first stable release was in 2004.
The following table shows a selected history of new feature releases for the project. (Patches and bug fixes are published on GitHub and periodically collated in minor "bug fix" releases.)
Full history of previous releases.
References
External links
Gramps wiki site
Gramps database formats
Source code
Mailing list
Reviews on Gramps
Genealogy research with Gramps. LWN.net 2014.
This article contains text from the GNU GPL Gramps Manual V2.9.
Free genealogy software
Free software programmed in Python
Cross-platform free software
Free multilingual software
MacOS software
Linux software
Windows software
Software that uses GTK
Software that uses PyGObject
|
31108829
|
https://en.wikipedia.org/wiki/Second%20Level%20Address%20Translation
|
Second Level Address Translation
|
Second Level Address Translation (SLAT), also known as nested paging, is a hardware-assisted virtualization technology which makes it possible to avoid the overhead associated with software-managed shadow page tables.
AMD has supported SLAT through the Rapid Virtualization Indexing (RVI) technology since the introduction of its third-generation Opteron processors (code name Barcelona). Intel's implementation of SLAT, known as Extended Page Table (EPT), was introduced in the Nehalem microarchitecture found in certain Core i7, Core i5, and Core i3 processors.
ARM's virtualization extensions support SLAT, known as Stage-2 page-tables provided by a Stage-2 MMU. The guest uses the Stage-1 MMU. Support was added as optional in the ARMv7ve architecture and is also supported in the ARMv8 (32-bit and 64-bit) architectures.
Overview
Modern processors use the concepts of physical memory and virtual memory; running processes use virtual addresses, and when an instruction requests access to memory, the processor translates the virtual address to a physical address using a page table or translation lookaside buffer (TLB). When running a virtual machine, the memory allocated to the guest is itself virtual memory of the host system, which serves as the guest's physical memory, and the same process of address translation also takes place within the guest system. This increases the cost of memory access, since the address translation needs to be performed twice: once inside the guest system (using the software-emulated guest page table) and once inside the host system (using the physical map, or pmap).
To make this translation efficient, hypervisor developers originally implemented software-based shadow page tables. A shadow page table translates guest-virtual addresses directly to host-physical addresses. Each virtual machine has a separate shadow page table, and the hypervisor is in charge of managing them. This is expensive, however, because every time a guest updates its page table, the hypervisor must intervene to keep the shadow copy in sync.
In order to make this translation more efficient, processor vendors implemented technologies commonly called SLAT. By treating each guest-physical address as a host-virtual address, a slight extension of the hardware used to walk a non-virtualized page table (now the guest page table) can walk the host page table. With multilevel page tables the host page table can be viewed conceptually as nested within the guest page table. A hardware page table walker can treat the additional translation layer almost like adding levels to the page table.
Using SLAT and multilevel page tables, the number of levels needed to be walked to find the translation doubles when the guest-physical address is the same size as the guest-virtual address and the same size pages are used. This increases the importance of caching values from intermediate levels of the host and guest page tables. It is also helpful to use large pages in the host page tables to reduce the number of levels (e.g., in x86-64, using 2 MB pages removes one level in the page table). Since memory is typically allocated to virtual machines at coarse granularity, using large pages for guest-physical translation is an obvious optimization, reducing the depth of look-ups and the memory required for host page tables.
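The two-level lookup can be pictured with a deliberately simplified model in which each page table is reduced to a single page-number mapping; the following toy Python sketch is illustrative only and ignores multilevel tables, page offsets and the TLB:
# Toy model: one-step "page tables" mapping page numbers to page numbers.
guest_page_table = {0x1000: 0x5000}  # guest-virtual page -> guest-physical page
host_page_table = {0x5000: 0x9000}   # guest-physical page -> host-physical page

def translate(guest_virtual_page):
    # The guest mapping is applied first; the resulting guest-physical page
    # is then treated as a host-virtual page and translated again (SLAT).
    guest_physical = guest_page_table[guest_virtual_page]
    return host_page_table[guest_physical]

print(hex(translate(0x1000)))  # 0x9000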
Implementations
Rapid Virtualization Indexing
Rapid Virtualization Indexing (RVI), known as Nested Page Tables (NPT) during its development, is an AMD second generation hardware-assisted virtualization technology for the processor memory management unit (MMU). RVI was introduced in the third generation of Opteron processors, code name Barcelona.
A VMware research paper found that RVI offers up to 42% gains in performance compared with software-only (shadow page table) implementation. Tests conducted by Red Hat showed a doubling in performance for OLTP benchmarks.
Extended Page Tables
Extended Page Tables (EPT) is an Intel second-generation x86 virtualization technology for the memory management unit (MMU). EPT support is found in Intel's Core i3, Core i5, Core i7 and Core i9 CPUs, among others. It is also found in some newer VIA CPUs. EPT is required in order to launch a logical processor directly in real mode, a feature called "unrestricted guest" in Intel's jargon, and introduced in the Westmere microarchitecture.
According to a VMware evaluation paper, "EPT provides performance gains of up to 48% for MMU-intensive benchmarks and up to 600% for MMU-intensive microbenchmarks", although it can actually cause code to run slower than a software implementation in some corner cases.
Stage-2 page-tables
Stage-2 page-table support is present in ARM processors that implement exception level 2 (EL2).
Extensions
Mode Based Execution Control
Mode Based Execution Control (MBE) is an extension to x86 SLAT implementations first available in Intel Kaby Lake and AMD Zen 2 CPUs. The extension extends the execute bit in the extended page table (guest page table) into 2 bits - one for user execute, and one for supervisor execute.
MBE was introduced to speed up guest usermode unsigned code execution with kernelmode code integrity enforcement. Under this configuration, unsigned code pages can be marked as execute under usermode, but must be marked as no-execute under kernelmode. To maintain integrity by ensuring that all guest kernelmode executable code is signed even when the guest kernel is compromised, the guest kernel does not have permission to modify the execute bit of any memory page. Modification of the execute bit, or switching of the guest page table which contains the execute bit, is delegated to a higher privileged entity, in this case the host hypervisor. Without MBE, each entrance from unsigned usermode execution to signed kernelmode execution must be accompanied by a VM exit to the hypervisor to perform a switch to the kernelmode page table. On the reverse operation, an exit from signed kernelmode to unsigned usermode must be accompanied by a VM exit to perform another page table switch. VM exits significantly impact code execution performance. With MBE, the same page table can be shared between unsigned usermode code and signed kernelmode code, with two sets of execute permissions depending on the execution context. VM exits are no longer necessary when the execution context switches between unsigned usermode and signed kernelmode.
Support in software
Hypervisors that support SLAT include the following:
Hyper-V for Windows Server 2008 R2, Windows 8 and later. The Windows 8 (and later Microsoft Windows) Hyper-V actually requires SLAT.
Hypervisor.framework, a native macOS hypervisor, available since macOS 10.10
KVM, since version 2.6.26 of the Linux kernel mainline
Parallels Desktop for Mac, since version 5
VirtualBox, since version 2.0.0
VMware ESX, since version 3.5
VMware Workstation. The VMware Workstation 14 (and later VMware Workstation) actually requires SLAT.
Xen, since version 3.2.0
Qubes OS — SLAT mandatory
bhyve — SLAT mandatory and slated to remain mandatory
vmm, a native hypervisor on OpenBSD — SLAT mandatory
ACRN, an open-source lightweight hypervisor, built with real-time and safety-criticality in mind, optimized for IoT and Edge usages.
Some of the above hypervisors actually require SLAT in order to work at all (not just faster) as they do not implement a software shadow page table; the list is not fully updated to reflect that.
See also
AMD-V (codename Pacifica) the first-generation AMD hardware virtualization support
Page table
VT-x
References
External links
Method and system for a second level address translation in a virtual machine environment (patent)
Second Level Address Translation Benefits in Hyper-V R2
Virtualization in Linux KVM + QEMU (PDF)
Intel x86 microprocessors
Hardware virtualization
ja:X86仮想化#プロセッサ(第2世代)
|
4522834
|
https://en.wikipedia.org/wiki/Kana%20Software
|
Kana Software
|
KANA Software, Inc. is a wholly owned subsidiary of Verint Systems (NASDAQ: VRNT) and provides on-premises and cloud computing hosted customer relationship management software products to many of the Fortune 500, mid-market businesses and government agencies.
History
Mark Gainey founded KANA, named after a rescued Shepherd-Husky mix, in 1996. The purpose was to market a software package designed to help businesses manage email and Web-based communications. It grew around this core offering.
In 1999, KANA Communications (as it was then known) acquired Connectify followed by Business Evolution and NetDialog.
In 2000, KANA made its then-largest acquisition, Silknet Software. The purchase price was $4.2 billion, despite the fact that both companies were relatively small. Silknet was an early multichannel marketing software company. Industry analysts were generally cool to the purchase though some said it made sense strategically.
In 2001, KANA merged with BroadBase software. KANA was a major stock market success during the dot-com bubble, and while it contracted significantly during the following downturn, it remained in business as an independent company through the following decade.
In 2010, Accel-KKR acquired KANA's assets and liabilities for approximately $40.82 million. The same year, KANA acquired Lagan Technologies, a government-to-citizen customer relationship management company based in Northern Ireland. The software was rebranded as LAGAN Enterprise, a package that compiles information from sources such as 311 calls and map overlays to improve resource management.
In 2011, KANA purchased Overtone, which allowed companies to monitor social media outlets like Facebook, Twitter and LinkedIn. The software was rebranded as KANA Experience Analytics.
In 2012, KANA bought Trinicom, a Dutch company providing mid-market multichannel customer service and e-commerce software, especially in the Benelux region. Less than three months later, KANA purchased Sword Ciboodle, a company that specializes in contact center software. Industry analysts generally looked favorably on the acquisition; Ciboodle's established business process management gave KANA products for a full-featured CRM package for customer service with social media marketing. "Between the two companies, almost every aspect of customer relationship experience... is covered." The combined organization operates under the KANA brand. Ciboodle's CEO, Mike Hughes, who had led the company prior to its purchase by Sword, left the company after KANA's purchase was finalized. He was replaced by KANA executives.
In 2013, KANA announced the KANA Enterprise product which the company marketed as "a unified platform supporting both agent-based and customer self-service scenarios".
In 2014, Verint acquired the operating assets of KANA for $514 million.
Product families
KANA Enterprise: Enterprise omni-channel CRM package
LAGAN Enterprise: G2C enterprise CRM package
KANA Express: Cloud-based multichannel customer service system
References
1996 establishments in California
American companies established in 1996
Software companies based in the San Francisco Bay Area
Companies based in Sunnyvale, California
Software companies established in 1996
2014 mergers and acquisitions
CRM software companies
Software companies of the United States
|
28416221
|
https://en.wikipedia.org/wiki/SAND%20CDBMS
|
SAND CDBMS
|
SAND Nucleus CDBMS is a column-oriented database management system (DBMS) optimized for business intelligence applications, delivering the data warehousing component; it was developed by SAND Technology Inc.
Company history
SAND Technology was founded in 1983.
SAND CDBMS traces its roots to developments by Nucleus International Corporation research and eventual patent issued to, among others, Edward L. Glaser on “Bit string compressor with boolean operation processing capability.”
Originally encoded on firmware, the application is now completely software based.
SAND Technology is now a division of N. Harris Computer Corporation.
Description
A fully tokenized, bit-array-encoded and compressed database, SAND CDBMS stores data column-oriented, using domains across schemas/tables rather than rows of data within tables. This results in a platform optimized for data analytics and data mining, although not suitable for transaction processing.
This architecture exhibits the following characteristics (a brief illustrative sketch follows the list):
All columns act as if they are indexed
Actual data values are stored only once and referenced by their token
Columns use lossless data compression when stored
Only columns requested in a query are accessed from the database
Queries are done directly on the compressed columns and only the result set is decompressed.
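The first two characteristics above can be pictured with a deliberately simplified Python sketch of tokenized, column-oriented storage; this illustrates the general idea only and is not SAND's actual storage format:
rows = [("red", 10), ("blue", 20), ("red", 10)]

token_dictionaries = {"colour": {}, "amount": {}}  # distinct value -> token, per column
columns = {"colour": [], "amount": []}             # token streams, one per column

for colour, amount in rows:
    for name, value in (("colour", colour), ("amount", amount)):
        tokens = token_dictionaries[name]
        token = tokens.setdefault(value, len(tokens))  # each distinct value stored once
        columns[name].append(token)                    # the column holds only tokens

print(columns["colour"])  # [0, 1, 0] -- "red" appears once in the dictionary, twice as a token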
Platform agnostic, SAND CDBMS runs on 64 bit Windows or the following 64 bit Linux/Unix environments: HP-UX, IBM-AIX, Red Hat Linux, SuSE Linux and Sun Solaris.
Database Administration
SAND/DNA Analytics is managed via ANSI standard SQL and DML commands.
Database space allocation and core management are done by the database engine. This means typical database administration is focused on data modeling, data content, managing the data life cycle and managing the user profiles and access permissions.
Data loading is straightforward and only requires pointing to the source and destination in load scripts. There are multiple data manipulation functions and commands available, but all internal structure optimizations are automatically managed by the database engine.
While data load performance can be slower than that of row-based databases, this is mitigated by not having to build any indexes or run post-load administration routines once the load is complete. Parallel load processing or segmented pre-load processing can also be used to improve load performance.
A core feature of SAND CDBMS is its support of “virtual” mounting of a database. This provides an isolated environment for developing and testing changes to a database, where upon dismounting, the entire environment is removed.
References
External links
SAND Technology Inc. website
Data warehousing products
Proprietary database management systems
|
1568
|
https://en.wikipedia.org/wiki/Ajax%20the%20Great
|
Ajax the Great
|
Ajax or Aias (genitive Aíantos) is a Greek mythological hero, the son of King Telamon and Periboea, and the half-brother of Teucer. He plays an important role, and is portrayed as a towering figure and a warrior of great courage in Homer's Iliad and in the Epic Cycle, a series of epic poems about the Trojan War, being second only to Achilles among Greek heroes of the war. He is also referred to as "Telamonian Ajax" (in Etruscan recorded as Aivas Tlamunus), "Greater Ajax", or "Ajax the Great", which distinguishes him from Ajax, son of Oileus, also known as Ajax the Lesser. His name means "of the earth".
Family
Ajax is the son of Telamon, who was the son of Aeacus and grandson of Zeus, and his first wife Periboea. Through his uncle Peleus (Telamon's brother), he is the cousin of Achilles, and is the elder half-brother of Teucer. His given name is derived from the root of "to lament", translating to "one who laments; mourner". Hesiod, however, includes a story in "The Great Eoiae" that indicates Ajax received his name when Heracles prayed to Zeus that a son might be born to Telemon and Eriboea. Zeus sent an eagle (aetos – αετός) as a sign. Heracles then bade the parents call their son Ajax after the eagle. Many illustrious Athenians, including Cimon, Miltiades, Alcibiades and the historian Thucydides, traced their descent from Ajax. On an Etruscan tomb dedicated to Racvi Satlnei in Bologna (5th century BC) there is an inscription that says aivastelmunsl, which means "[family] of Telamonian Ajax".
Description
In Homer's Iliad he is described as of great stature, colossal frame and strongest of all the Achaeans. Known as the "bulwark of the Achaeans", he was trained by the centaur Chiron (who had trained Ajax's father Telamon and Achilles's father Peleus and later died of an accidental wound inflicted by Heracles, whom he was at the time training) at the same time as Achilles. He was described as fearless, strong and powerful but also with a very high level of combat intelligence. Ajax commands his army wielding a huge shield made of seven cow-hides with a layer of bronze. Most notably, Ajax is not wounded in any of the battles described in the Iliad, and he is the only principal character on either side who does not receive substantial assistance from any of the gods (except for Agamemnon) who take part in the battles, although, in book 13, Poseidon strikes Ajax with his staff, renewing his strength. Unlike Diomedes, Agamemnon, and Achilles, Ajax appears as a mainly defensive warrior, instrumental in the defense of the Greek camp and ships and that of Patroclus' body. When the Trojans are on the offensive, he is often seen covering the retreat of the Achaeans. Significantly, while one of the deadliest heroes in the whole poem, Ajax has no aristeia depicting him on the offensive.
Trojan War
In the Iliad, Ajax is notable for his abundant strength and courage, seen particularly in two fights with Hector. In Book 7, Ajax is chosen by lot to meet Hector in a duel which lasts most of a whole day. Ajax at first gets the better of the encounter, wounding Hector with his spear and knocking him down with a large stone, but Hector battles on until the heralds, acting at the direction of Zeus, call a draw, with the two combatants exchanging gifts, Ajax giving Hector a purple sash and Hector giving Ajax his sharp sword.
The second fight between Ajax and Hector occurs when the latter breaks into the Mycenaean camp, and battles with the Greeks among the ships. In Book 14, Ajax throws a giant rock at Hector which almost kills him. In Book 15, Hector is restored to his strength by Apollo and returns to attack the ships. Ajax, wielding an enormous spear as a weapon and leaping from ship to ship, holds off the Trojan armies virtually single-handedly. In Book 16, Hector and Ajax duel once again. Hector then disarms Ajax (although Ajax is not hurt) and Ajax is forced to retreat, seeing that Zeus is clearly favoring Hector. Hector and the Trojans succeed in burning one Greek ship, the culmination of an assault that almost finishes the war. Ajax is responsible for the death of many Trojan lords, including Phorcys.
Ajax often fought in tandem with his brother Teucer, known for his skill with the bow. Ajax would wield his magnificent shield, as Teucer stood behind picking off enemy Trojans.
Achilles was absent during these encounters because of his feud with Agamemnon. In Book 9, Agamemnon and the other Mycenaean chiefs send Ajax, Odysseus and Phoenix to the tent of Achilles in an attempt to reconcile with the great warrior and induce him to return to the fight. Although Ajax speaks earnestly and is well received, he does not succeed in convincing Achilles.
When Patroclus is killed, Hector tries to steal his body. Ajax, assisted by Menelaus, succeeds in fighting off the Trojans and taking the body back with his chariot; however, the Trojans have already stripped Patroclus of Achilles' armor. Ajax's prayer to Zeus to remove the fog that has descended on the battle to allow them to fight or die in the light of day has become proverbial. According to Hyginus, in total, Ajax killed 28 people at Troy.
Death
As the Iliad comes to a close, Ajax and the majority of other Greek warriors are alive and well. When Achilles dies, killed by Paris (with help from Apollo), Ajax and Odysseus are the heroes who fight against the Trojans to get the body and bury it with his companion, Patroclus. Ajax, with his great shield and spear, manages to recover the body and carry it to the ships, while Odysseus fights off the Trojans. After the burial, each claims Achilles' magical armor, which had been forged on Mount Olympus by the smith-god Hephaestus, for himself as recognition for his heroic efforts. A competition is held to determine who deserves the armor. Ajax argues that because of his strength and the fighting he has done for the Greeks, including saving the ships from Hector, and driving him off with a massive rock, he deserves the armor. However, Odysseus proves to be more eloquent, and with the aid of Athena, the council gives him the armor. Ajax, distraught by this result and “conquered by his own grief”, plunges his sword into his own chest killing himself. In the Little Iliad, Ajax goes mad with rage at Odysseus’ victory and slaughters the cattle of the Greeks. After returning to his senses, he kills himself out of shame.
The Belvedere Torso, a marble torso now in the Vatican Museums, is considered to depict Ajax "in the act of contemplating his suicide".
In Sophocles' play Ajax, a famous retelling of Ajax's demise, after the armor is awarded to Odysseus, Ajax feels so insulted that he wants to kill Agamemnon and Menelaus. Athena intervenes and clouds his mind and vision, and he goes to a flock of sheep and slaughters them, imagining they are the Achaean leaders, including Odysseus and Agamemnon. When he comes to his senses, covered in blood, he realizes that what he has done has diminished his honor, and decides that he prefers to kill himself rather than live in shame. He does so with the same sword which Hector gave him when they exchanged presents. From his blood sprang a red flower, as at the death of Hyacinthus, which bore on its leaves the initial letters of his name Ai, also expressive of lament. His ashes were deposited in a golden urn on the Rhoetean promontory at the entrance of the Hellespont.
Ajax's half-brother Teucer stood trial before his father for not bringing Ajax's body or famous weapons back. Teucer was acquitted for responsibility but found guilty of negligence. He was disowned by his father and was not allowed to return to his home, the island of Salamis off the coast of Athens.
Homer is somewhat vague about the precise manner of Ajax's death but does ascribe it to his loss in the dispute over Achilles' armor; when Odysseus visits Hades, he begs the soul of Ajax to speak to him, but Ajax, still resentful over the old quarrel, refuses and descends silently back into Erebus.
Like Achilles, he is represented (although not by Homer) as living after his death on the island of Leuke at the mouth of the Danube. Ajax, who in the post-Homeric legend is described as the grandson of Aeacus and the great-grandson of Zeus, was the tutelary hero of the island of Salamis, where he had a temple and an image, and where a festival called Aianteia was celebrated in his honour. At this festival a couch was set up, on which the panoply of the hero was placed, a practice which recalls the Roman Lectisternium. The identification of Ajax with the family of Aeacus was chiefly a matter which concerned the Athenians, after Salamis had come into their possession, on which occasion Solon is said to have inserted a line in the Iliad (2.557–558), for the purpose of supporting the Athenian claim to the island. Ajax then became an Attic hero; he was worshiped at Athens, where he had a statue in the market-place, and the tribe Aiantis was named after him. Pausanias also relates that a gigantic skeleton, its kneecap alone of enormous size, appeared on the beach near Sigeion, on the Trojan coast; these bones were identified as those of Ajax.
Palace
In 2001, Yannos Lolos began excavating a Mycenaean palace near the village of Kanakia on the island of Salamis which he theorized to be the home of the mythological Aiacid dynasty. The multi-story structure covers and had perhaps 30 rooms. The palace appears to have been abandoned at the height of the Mycenaean civilization, roughly the same time the Trojan War may have occurred.
See also
Corpus vasorum antiquorum
List of suicides in fiction
Troy VII
Notes
Notes
References
Homer. Iliad, 7.181–312.
Homer, Odyssey 11.543–67.
Bibliotheca. Epitome III, 11-V, 7.
Graves, Robert, The Greek Myths, Harmondsworth, London, England, Penguin Books, 1960.
Graves, Robert, The Greek Myths: The Complete and Definitive Edition. Penguin Books Limited. 2017.
Ovid. Metamorphoses 12.620–13.398.
Friedrich Schiller, Das Siegerfest.
Pindar's Nemeans, 7, 8; Isthmian 4
Tzetzes, John, Allegories of the Iliad translated by Goldwyn, Adam J. and Kokkini, Dimitra. Dumbarton Oaks Medieval Library, Harvard University Press, 2015.
External links
A translation of the debate and Ajax's death. http://classics.mit.edu/Ovid/metam.13.thirteenth.html
Suitors of Helen
Achaean Leaders
Kings of Argos
Characters in the Odyssey
Suicides in Greek mythology
Tutelary deities
Metamorphoses characters
Salaminian characters in Greek mythology
Characters in Greek mythology
|
62510458
|
https://en.wikipedia.org/wiki/Zafar%20Usmanov
|
Zafar Usmanov
|
Zafar Juraevich Usmanov (; 26 August 1937 – 13 October 2021) was a Soviet and Tajik mathematician, doctor of physical and mathematical sciences (1974), professor (1983), full member of the Academy of Sciences of the Republic of Tajikistan (1981), Honored scientist of the Republic of Tajikistan (1997), laureate of the State Prize of Tajikistan in the field of science and technology named after Abu Ali ibn Sino (2013).
Biography
Zafar Juraevich Usmanov was born on 26 August 1937 in Dushanbe. His father, Jura Usmanov, was a historian and journalist; his mother, Hamro (Asrorova) Usmanova, was a party and state worker. He was the brother of Pulat Usmanov, academician of the Academy of Sciences of the Republic of Tajikistan.
1954—1959 — studies at the Faculty of Mechanics and Mathematics of MSU.
1959—1962 — graduate student of the Department of Mechanics of Moscow State University,
1962—1970 — researcher in the mathematical group of the Academy of Sciences of the Republic of Tajikistan,
1970—1973 — head of the Computing Center, (deputy head) of the Department of Mathematics with the Computing Center of the Academy of Sciences of the Republic of Tajikistan,
1973—1976 — Deputy Director for Science of the Mathematical Institute with the Computing Center of the Academy of Sciences of the Republic of Tajikistan,
1976—1984 — Head of the Computing Center of the Academy of Sciences of the Republic of Tajikistan,
1984—1988 — academician-secretary of the Department of Physical, Mathematical, Chemical and Geological Sciences of the Academy of Sciences of the Republic of Tajikistan,
1988—1999 — Director of the Institute of Mathematics of the Academy of Sciences of the Republic of Tajikistan,
1999 onward — head of the Department of Mathematical Modeling, Institute of Mathematics, Academy of Sciences of the Republic of Tajikistan,
1976 — Corresponding Member of the Academy of Sciences of the Republic of Tajikistan,
1981 — Full member of the Academy of Sciences of the Republic of Tajikistan with a degree in mathematics,
1985—1990 — Deputy of the Supreme Council of the Tajik SSR,
1986—1991 — member of the Revision Commission of the Central Committee of the Communist Party of Tajikistan,
1997—2011 — professor at the Volzhsky branch of the Moscow Power Engineering Institute (Technical University), Volzhsky,
1999 — Professor, Department of Informatics, Technological University of Tajikistan, Head of the Department of «Natural Process Metrics» of the Virtual Institute of Interdisciplinary Time Studies, Moscow State University.
2018 — Chairman of the dissertation council 6D.KOA-032 at the Tajik Technical University named after academician M.S. Osimi
Scientific training
As the scientific organizer of systematic training at the Institute of Mathematics, he oversaw the preparation of about 30 candidates of physical and mathematical sciences working on modern problems of computer science. He prepared 18 candidates of sciences in the specialties of differential equations, geometry, computer science, hydromechanics, hydraulics and the history of mathematics, and 1 doctor of sciences in water problems.
Teaching
1959—1961 — Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow,
1965—1994, 2007—2009 — Faculty of Mechanics, Mathematics and Physics, Tajik National University, Dushanbe,
1966—1968 — Faculty of Mathematics, Tajik State Pedagogical University, Dushanbe,
1997—2011 — Department of Power Engineering, Moscow Power Engineering Institute (Technical University), Volzhsky Branch, Volzhsky,
1999 — IV Department of Mathematics, Graz University of Technology, (spring semester), Austria,
Faculty of Information Technologies, Technological University of Tajikistan, Dushanbe.
Scientific activity
Usmanov defended his Candidate of Sciences (Ph.D.) thesis, "Some boundary value problems for systems of differential equations with singular coefficients and their applications to bendings of surfaces with a singular point" (Tajik State National University, 1966), and his doctoral dissertation, "Study of the equations of the theory of infinitesimal bendings of surfaces of positive curvature with a flattening point" (Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, 1973). His scientific interests include:
Generalized Cauchy-Riemann systems with singularities at isolated points and on a closed line;
Deformation of surfaces with an isolated flattening point, a conical point, and a parabolic boundary;
Modeling the proper time of an arbitrary process;
Modeling of environmental, economic, industrial and technological processes;
Automation of information processing in Tajik.
In the field of theoretical mathematics
He developed the theory of generalized Cauchy-Riemann systems whose coefficients have a singular point of first or higher order, as well as a first-order singularity in the coefficients on the boundary circle; this theory is a natural generalization of the classical analytic apparatus developed by I. N. Vekua for the study of generalized analytic functions. Building on these fundamental results, he carried out in-depth studies of the effect of an isolated flattening point on infinitesimal and exact bendings of surfaces of positive curvature. Progress was also made on the generalized Christoffel problem of determining convex surfaces from a prescribed sum of conditional radii of curvature given on a convex surface with an isolated flattening point (together with A. Khakimov). For a wide class of natural processes described by ordinary and partial differential equations, he constructed natural metrics of Minkowski space-time type, on the basis of which he proposed a definition of the intrinsic time of a process and constructive methods for measuring it. Computational experiments established that this new concept can improve the prognostic properties of mathematical models.
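For orientation, such a system can be written down schematically; the following sketch assumes the standard Vekua notation and an illustrative first-order pole in the coefficients, and is not Usmanov's exact formulation:

    % Generalized Cauchy-Riemann (Vekua) system for w = u + iv,
    % with \partial_{\bar z} = \tfrac{1}{2}(\partial_x + i\,\partial_y).
    \[
      \frac{\partial w}{\partial \bar z} + a(z)\,w + b(z)\,\overline{w} = 0,
      \qquad z \in G \subset \mathbb{C}.
    \]
    % In the classical (regular) theory a and b are bounded; in the
    % singular-point theory the coefficients may have, for example, a
    % first-order pole at an isolated point z = 0:
    \[
      a(z) = \frac{a_0(z)}{z}, \qquad b(z) = \frac{b_0(z)}{z},
      \qquad a_0,\ b_0 \ \text{bounded near } z = 0.
    \]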
In the field of applied mathematics
A mathematical model was developed for the evolution of collection material of an arbitrary nature (together with T. I. Khaitov); a mathematical model describing the evolution of spiral shells, using gastropods as an example (together with M. R. Dzhalilov and O. P. Sapov); a mathematical model for determining the gradations of liver failure (together with H. Kh. Mansurov and others); and a mathematical model of the dynamics of the desert community of the Tigrovaya Balka nature reserve (together with G. N. Sapozhnikov et al.). Some of these results were noted in the reports of the Chief Scientific Secretary of the Presidium of the USSR Academy of Sciences among the most important achievements of the USSR Academy of Sciences in the field of theoretical mathematics in the 1970s and twice in the field of computer science in the 1980s.
In the field of informatization of the Tajik language
He created a scientific school of computational linguistics in Tajikistan and prepared 5 candidates of sciences in mathematical and statistical linguistics. As project leader and direct participant, together with his students, he carried out extensive research on the automation of information processing in the Tajik language.
He published over 280 scientific papers on theoretical and applied mathematics in scientific journals at home and abroad and registered 16 intellectual products with the National Patent Information Center of the Ministry of Economic Development and Trade of the Republic of Tajikistan.
Major monographs
Z. D. Usmanov, Generalized Cauchy-Riemann systems with a singular point, Pitman Monographs and Surveys in Pure and Applied Mathematics, 85, Longman, Harlow, 1997, 222 p., ISSN 0269-3666
Programming the states of a collection, Moscow: Nauka, 1983, 124 p. (Programmer's library; co-author T. I. Khaitov)
Periods, rhythms and cycles in nature, handbook, Dushanbe: Donish, 1990, 151 p. (co-authors Yu. I. Gorelov, L. I. Sapova)
Modeling time, Moscow: Knowledge, 1991, 48 p. (New in life, science, technology. Series Mathematics, Cybernetics)
Generalized Cauchy-Riemann systems with a singular point, Mathematical Institute with the Computing Center of the Academy of Sciences of the Republic of Tajikistan, Dushanbe, 1993, 244 p.
Generalized Cauchy-Riemann systems with a singular point, Addison Wesley Longman Ltd., Harlow, England, 1997, 222 p. (Pitman Monographs and Surveys in Pure and Applied Mathematics)
The experience of computer synthesis of Tajik speech from text, Technological University of Tajikistan, Dushanbe: Irfon, 2010, 146 p. (co-author H. A. Khudoiberdiev)
The problem of the layout of characters on a computer keyboard, Technological University of Tajikistan, Dushanbe: Irfon, 2010, 104 p. (co-author O. M. Soliev)
Formation of the base of the morphs of the Tajik language, Dushanbe, 2014, 110 p. (co-author G. M. Dovudov)
Morphological analysis of Tajik word forms, Dushanbe: Donish, 2015, 132 p. (co-author G. M. Dovudov)
Implementation of results
supervised the development and implementation of an automated system for the distribution of paired cocoons in cocoon winding machines for the Dushanbe silk-winding factory;
He supervised and directly participated in the development of the mathematical foundations for optimizing the extractant enrichment process in the countercurrent extraction technological chain with the implementation of the results for the practical extraction of sea buckthorn oil from pulp;
developed the mathematical basis for the automatic design of slotted grooves of winding drums for the Tajiktekstilmash plant;
led the development of a temporary standard for Tajik graphics for use in network technology; development was sent to the Moscow representative office of MICROSOFT for inclusion in the WINDOWS editor (approved by Decree of the Government of the Republic of Tajikistan on 2 August 2004, No. 330);
together with his graduate student O. Soliev, he implemented, through the Ministry of Communications of the Republic of Tajikistan, the driver they developed for the layout of Tajik letters on a computer keyboard, along with instructions for installing it for everyday use;
Together with his graduate student Kh. Khudoiberdiev, he developed and created a software and hardware complex for the automatic unstressed sounding of Tajik texts;
together with his students O. Soliev, Kh. Khudoiberdiev developed and created the Tajik computer text editor (Tajik Word);
together with S. D. Kholmatova, O. Soliev and H. Khudoyberdiev, developed and created:
— Tajik-Russian computer dictionary,
— Russian-Tajik computer dictionary;
— universal Russian-Tajik-Russian computer dictionary (MultiGanj);
together with L. A. Grashchenko and A. Yu. Fomin, he created a computer Tajik-Persian converter of graphic writing systems;
Together with students O. Soliev, H. Khudoiberdiev and G. Dovudov, he developed and created the Tajik language pack (spell checker) for OpenOffice.Org and Windows.
Usmanov Z. J. was:
Member of the Scientific Board of Advisers, American Biographical Society,
Head of the Department of "Natural Process Metrics" of the Virtual Institute of Interdisciplinary Study of Time, Moscow State University, Moscow,
Member of the Editorial Board of the Central Asian Journal of Mathematics (Central Asian J Math), Wilmington, USA (2005),
Lifetime member of ISAAC (2005),
Member of the International Editorial Board of the journal "Bulletin of the Samara State Technical University. Series: Physics and Mathematics" (2014),
Reviewer of articles submitted to the journal "Complex Variables & Elliptic Equations", University of Delaware, Newark, USA (regular reviewer of CV & EE) (2011),
as well as at international conferences:
International Multi-Conference on Society, Cybernetics and Informatics: IMSCI; International Conference on Complexity, Cybernetics, and Informing Science and Engineering: CCISE;
International Conference on Social and Organizational Informatics and Cybernetics: SOIC.
Awards
Jubilee medal "For Valiant Labor. In Commemoration of the Centenary of the Birth of V. I. Lenin" (1970),
Honorary badge of the All-Union Central Council of Trade Unions "Winner of Socialist Competition" (1973),
Honorary badge of the All-Union Knowledge Society "For Active Work",
Certificate of Honor of the Supreme Council of Tajikistan (1987),
Komsomol honorary badge in honor of the 70th anniversary of the Komsomol (1988),
Honorary badge of the Committee of Physical Culture and Sports under the Council of Ministers of the Tajik SSR "Veteran of Physical Education and Sports of the Tajik SSR" (1990),
Honored Scientist of the Republic of Tajikistan, (1997),
Veteran of Labor of the Russian Federation, (1998),
Order of the Lomonosov Committee of Public Awards of the Russian Federation, (2008),
Laureate of the State Prize of Tajikistan in the field of science and technology named after Abu Ali ibn Sino (2013).
See also
Tajik Academy of Sciences
References
External links
1937 births
2021 deaths
People from Dushanbe
Tajikistani mathematicians
Full Members of the USSR Academy of Sciences
Members of the Tajik Academy of Sciences
Tajik National University faculty
Moscow State University alumni
20th-century mathematicians
21st-century mathematicians
|
723581
|
https://en.wikipedia.org/wiki/High%20Level%20Assembly
|
High Level Assembly
|
High Level Assembly (HLA) is a high-level assembly language developed by Randall Hyde. It allows the use of higher-level language constructs to aid both beginners and advanced assembly developers. It fully supports advanced data types and object-oriented programming. It uses a syntax loosely based on several high-level programming languages (HLLs), such as Pascal, Ada, Modula-2, and C++, to allow creating readable assembly language programs, and to allow HLL programmers to learn HLA as fast as possible.
Origins and goals
HLA was originally conceived as a tool to teach assembly language programming at the college-university level. The goal is to leverage students' existing programming knowledge when learning assembly language to get them up to speed as fast as possible. Most students taking an assembly language programming course have already been introduced to high-level control flow structures, such as IF, WHILE, FOR, etc. HLA allows students to immediately apply that programming knowledge to assembly language coding early in their course, allowing them to master other prerequisite subjects in assembly before learning how to code low-level forms of these control structures. The book The Art of Assembly Language Programming by Randall Hyde uses HLA for this purpose.
High vs. low-level assembler
The HLA v2.x assembler supports the same low-level machine instructions as a regular, low-level assembler. The difference is that high-level assemblers, such as HLA, Microsoft Macro Assembler (MASM), or Turbo Assembler (TASM), on the Intel x86 processor family, also support high-level-language-like statements, such as IF, WHILE, and so on, and richer data declaration directives, such as structures/records, unions, and even classes.
Unlike most other assembler tools, the HLA compiler includes a Standard Library with thousands of functions, procedures, and macros that can be used to create full applications with the ease of a high-level language. While assembly language libraries are not new, a language that includes a large standardized library makes programmers far more likely to use such library code rather than simply writing their own library functions.
HLA supports all the same low-level machine instructions as other x86 assemblers. Further, HLA's high-level control structures are based on the ones found in MASM and TASM, whose HLL-like features predated the arrival of HLA by several years. In HLA, low-level assembly code can be written as easily as with any other assembler by simply ignoring the HLL-like control constructs. In contrast to HLLs such as Pascal and C(++), which require special inline asm statements to embed machine instructions, HLA lets low-level instructions and high-level constructs mix freely. Its HLL-like features are intended as a learning aid for beginning assembly programmers, smoothing the learning curve, with the assumption that they will discontinue using those statements once they master the low-level instruction set. In practice, many experienced programmers continue to use HLL-like statements in HLA, MASM, and TASM long after mastering the low-level instruction set, usually to improve readability.
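As a rough sketch of this dual nature (an illustrative fragment, not taken from the HLA documentation; the program and label names are invented for the example), the following counts from 0 to 9 twice, first with HLA's HLL-like WHILE statement and then with an explicit label, a CMP instruction, and a conditional jump:

program loopDemo;
#include( "stdlib.hhf" )

begin loopDemo;

    // High-level style: HLA's WHILE..ENDWHILE construct.
    mov( 0, eax );
    while( eax < 10 ) do
        stdout.put( "eax = ", eax, nl );
        inc( eax );
    endwhile;

    // Low-level style: the same loop written with an explicit label,
    // a comparison, and a conditional jump.
    mov( 0, eax );
    loopTop:
        stdout.put( "eax = ", eax, nl );
        inc( eax );
        cmp( eax, 10 );
        jb loopTop;

end loopDemo;

Both halves assemble to essentially the same machine code; the first simply lets HLA emit the label, comparison, and jump on the programmer's behalf.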
It is also possible to write high-level programs using HLA, avoiding much of the tedium of low-level assembly language programming. Some assembly language programmers reject HLA out of hand because it allows programmers to do this. However, supporting both high-level and low-level programming gives any language an expanded range of applicability. If one must write purely low-level code, that is possible; if one wants more readable code, higher-level statements are an option.
Distinguishing features
Two HLA features set it apart from other x86 assemblers: its powerful macro system (compile-time language) and the HLA Standard Library.
Macro system
HLA's compile-time language allows extending the language with ease, even to creating small domain-specific languages to help easily solve common programming problems. The stdout.put macro from the HLA Standard Library is a good example of a sophisticated macro that can simplify programming. Consider the following invocation of that macro:
stdout.put( "I=", i, " s=", s, " u=", u, " r=", r:10:2, nl );
The stdout.put macro processes each of the arguments to determine the argument's type and then calls an appropriate procedure in the HLA Standard library to handle the output of each of these operands.
Most assemblers provide some sort of macro ability: the advantage that HLA offers over other assemblers is that it can process macro arguments like r:10:2 using HLA's extensive compile-time string functions, and HLA's macro facilities can infer the types of variables and use that information to direct macro expansion.
HLA's macro language provides a special Context-Free macro facility. This feature allows easily writing macros that span other sections of code via a starting and terminating macro pair (along with optional intermediate macro invocations that are only available between the start–terminate macros). For example, one can write a fully recursive-nestable SWITCH–CASE–DEFAULT–ENDSWITCH statement using this macro facility.
Because of the HLA macro facility's context-free design, these switch..case..default..endswitch statements can be nested, and the nested statements' emitted code will not conflict with the outside statements.
Compile-time language
The HLA macro system is actually a subset of a larger feature known as the HLA Compile-Time Language (CTL). The HLA CTL is an interpreted language that is available in an HLA program source file. An interpreter executes HLA CTL statements during the compilation of an HLA source file; hence the name compile-time language.
The HLA CTL includes many control statements such as #IF, #WHILE, #FOR, #PRINT, an assignment statement and so on. One can also create compile-time variables and constants (including structured data types such as records and unions). The HLA CTL also provides hundreds of built-in functions (including a very rich set of string and pattern-matching functions). The HLA CTL allows programmers to create CTL programs that scan and parse strings, allowing those programmers to create embedded domain specific languages (EDSLs, also termed mini-languages). The stdout.put macro appearing earlier is an example of such an EDSL. The put macro (in the stdout namespace, hence the name stdout.put) parses its macro parameter list and emits the code that will print its operands.
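As a small, hedged sketch of what CTL code can look like (the program name and the compile-time constant count are invented for this example, and the exact directive spellings should be checked against the HLA reference manual), the following uses a compile-time assignment and a #WHILE loop to unroll five output statements while the source is being compiled:

program ctlDemo;
#include( "stdlib.hhf" )

begin ctlDemo;

    // "?" introduces a compile-time assignment; #while..#endwhile is
    // interpreted during compilation, so this loop emits five separate
    // stdout.put statements into the generated program.
    ?count := 0;
    #while( count < 5 )
        stdout.put( "unrolled iteration ", count, nl );
        ?count := count + 1;
    #endwhile

end ctlDemo;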
Standard library
The HLA Standard Library is an extensive set of prewritten routines and macros (like the stdout.put macro described above) that make life easier for programmers, saving them from reinventing the wheel every time they write a new application. Perhaps just as important, the HLA Standard Library allows programmers to write portable applications that run under Windows or Linux with nothing more than recompiling the source code. Like the C standard library for the programming language C, the HLA Standard Library allows abstracting away low-level operating system (OS) calls, so the same set of OS application programming interfaces (APIs) can serve for all operating systems that HLA supports. While an assembly language allows making any needed OS calls, where programs use the HLA Standard Library API set, writing OS-portable programs is easy.
The HLA Standard Library provides thousands of functions, procedures, and macros. While the list changes over time, as of mid-2010 for HLA v2.12, it included functions in these categories:
Command-line argument processing
Array (dynamic) declaration and manipulation
Bit manipulation
Blob (binary large object) manipulation
Character manipulation
Conversions
Character set manipulation
Date and time functions
Object-oriented file I/O
Standard file I/O
File system manipulation functions, e.g., delete, rename, change directory
HLA-related declarations and functions
The HLA Object Windows Library: object-oriented framework for Win32 programming
Linked list manipulation
Mathematical functions
Memory allocation and management
FreeBSD-specific APIs
Linux-specific APIs
MacOS-specific APIs
Win32-specific APIs
Text console functions
Coroutine support
Environment variable support
Exception handling support
Memory-mapped file support
Sockets and client–server object support
Thread and synchronization support
Timer functions
Pattern matching support for regular expressions and context-free languages
Random number generators
Remote procedure call support
Standard error output functions
Standard output functions
Standard input functions
String functions
Table (associative) support
Zero-terminated string functions
Design
The HLA v2.x language system is a command-line driven tool that consists of several components, including a shell program (e.g., hla.exe under Windows), the HLA language compiler (e.g., hlaparse.exe), a low-level translator (e.g., the HLABE, or HLA Back Engine), a linker (link.exe under Windows, ld under Linux), and other tools such as a resource compiler for Windows. Versions before 2.0 relied on an external assembler back end; versions 2.x and later of HLA use the built-in HLABE as the back-end object code formatter.
The HLA shell application processes command line parameters and routes appropriate files to each of the programs that make up the HLA system. It accepts as input .hla files (HLA source files), .asm files (source files for MASM, TASM, FASM, NASM, or Gas assemblers), .obj files for input to the linker, and .rc files (for use by a resource compiler).
Source code translation
Originally, the HLA v1.x tool compiled its source code into an intermediate source file that a back-end assembler such as MASM, TASM, flat assembler (FASM), Netwide Assembler (NASM), or GNU Assembler (Gas) would translate into the low-level object code file. As of HLA v2.0, HLA included its own HLA Back Engine (HLABE) that provided the low-level object code translation. However, via various command-line parameters, HLA v2.x still has the ability to translate an HLA source file into a source file that is compatible with one of these other assemblers.
HLA Back Engine
The HLA Back Engine (HLABE) is a compiler back end that translates an internal intermediate language into low-level Portable Executable (PE), Common Object File Format (COFF), Executable and Linkable Format (ELF), or Mach-O object code. An HLABE program mostly consists of data (byte) emission statements, 32-bit relocatable address statements, x86 control-transfer instructions, and various directives. In addition to translating the byte and relocatable address statements into the low-level object code format, HLABE also handles branch-displacement optimization (picking the shortest possible form of a branch instruction).
Although the HLABE is incorporated into the HLA v2.x compiler, it is actually a separate product. It is public domain and open source (hosted on SourceForge.net).
See also
Comparison of assemblers
Notes
References
Richard Blum, Professional assembly language, Wiley, 2005, , p. 42
Randall Hyde, Write Great Code: Understanding the machine, No Starch Press, 2004, , pp. 14–15 and used throughout the book
Randall Hyde, The Art of Assembly Language, 2nd Edition, No Starch Press, 2010, , used throughout the book
Further reading
Paul Panks (June 29, 2005), HLA: The High Level Assembly Programming Language, Linux Journal
External links
Downloads for Windows, macOS, and Linux
Assemblers
Assembly languages
Public-domain software
High-level programming languages
|
1464042
|
https://en.wikipedia.org/wiki/Open-source%20hardware
|
Open-source hardware
|
Open-source hardware (OSH) consists of physical artifacts of technology designed and offered by the open-design movement. Both free and open-source software (FOSS) and open-source hardware are created by this open-source culture movement and apply a similar concept to a variety of components. It is therefore sometimes referred to as FOSH (free and open-source hardware). The term usually means that information about the hardware is easily discerned so that others can make it, coupling it closely to the maker movement. Hardware design (i.e. mechanical drawings, schematics, bills of material, PCB layout data, HDL source code and integrated circuit layout data), in addition to the software that drives the hardware, are all released under free/libre terms. The original sharer gains feedback and potentially improvements on the design from the FOSH community. There is now significant evidence that such sharing can drive a high return on investment for the scientific community.
It is not enough to merely use an open-source license; an open source product or project will follow open source principles, such as modular design and community collaboration.
Since the rise of reconfigurable programmable logic devices, sharing of logic designs has been a form of open-source hardware. Instead of the schematics, hardware description language (HDL) code is shared. HDL descriptions are commonly used to set up system-on-a-chip systems either in field-programmable gate arrays (FPGA) or directly in application-specific integrated circuit (ASIC) designs. HDL modules, when distributed, are called semiconductor intellectual property cores, also known as IP cores.
Open-source hardware also helps alleviate the issue of proprietary device drivers for the free and open-source software community; however, it is not a prerequisite for it, and should not be confused with the concept of open documentation for proprietary hardware, which is already sufficient for writing FLOSS device drivers and complete operating systems.
The difference between the two concepts is that OSH includes both the instructions on how to replicate the hardware itself as well as the information on communication protocols that the software (usually in the form of device drivers) must use in order to communicate with the hardware (often called register documentation, or open documentation for hardware), whereas open-source-friendly proprietary hardware would only include the latter without including the former.
History
The first hardware-focused "open source" activities were started around 1997 by Bruce Perens, creator of the Open Source Definition, co-founder of the Open Source Initiative, and a ham radio operator. He launched the Open Hardware Certification Program, which had the goal of allowing hardware manufacturers to self-certify their products as open.
Shortly after the launch of the Open Hardware Certification Program, David Freeman announced the Open Hardware Specification Project (OHSpec), another attempt at licensing hardware components whose interfaces are available publicly and of creating an entirely new computing platform as an alternative to proprietary computing systems. In early 1999, Sepehr Kiani, Ryan Vallance and Samir Nayfeh joined efforts to apply the open-source philosophy to machine design applications. Together they established the Open Design Foundation (ODF) as a non-profit corporation and set out to develop an Open Design Definition. However, most of these activities faded out after a few years.
A "Free Hardware" organization, known as FreeIO, was started in the late 1990s by Diehl Martin, who also launched a FreeIO website in early 2000. In the early to mid 2000s, FreeIO was a focus of free/open hardware designs released under the GNU General Public License. The FreeIO project advocated the concept of Free Hardware and proposed four freedoms that such hardware provided to users, based on the similar freedoms provided by free software licenses. The designs gained some notoriety due to Martin's naming scheme in which each free hardware project was given the name of a breakfast food such as Donut, Flapjack, Toast, etc. Martin's projects attracted a variety of hardware and software developers as well as other volunteers. Development of new open hardware designs at FreeIO ended in 2007 when Martin died of pancreatic cancer but the existing designs remain available from the organization's website.
By the mid 2000s open-source hardware again became a hub of activity due to the emergence of several major open-source hardware projects and companies, such as OpenCores, RepRap (3D printing), Arduino, Adafruit and SparkFun. In 2007, Perens reactivated the openhardware.org website.
Following the Open Graphics Project, an effort to design, implement, and manufacture a free and open 3D graphics chip set and reference graphics card, Timothy Miller suggested the creation of an organization to safeguard the interests of the Open Graphics Project community. Thus, Patrick McNamara founded the Open Hardware Foundation (OHF) in 2007.
The Tucson Amateur Packet Radio Corporation (TAPR), founded in 1982 as a non-profit organization of amateur radio operators with the goals of supporting R&D efforts in the area of amateur digital communications, created in 2007 the first open hardware license, the TAPR Open Hardware License. The OSI president Eric S. Raymond expressed some concerns about certain aspects of the OHL and decided to not review the license.
Around 2010 in context of the Freedom Defined project, the Open Hardware Definition was created as collaborative work of many and is accepted as of 2016 by dozens of organizations and companies.
In July 2011, CERN (European Organization for Nuclear Research) released an open-source hardware license, CERN OHL. Javier Serrano, an engineer at CERN's Beams Department and the founder of the Open Hardware Repository, explained: "By sharing designs openly, CERN expects to improve the quality of designs through peer review and to guarantee their users – including commercial companies – the freedom to study, modify and manufacture them, leading to better hardware and less duplication of efforts". While initially drafted to address CERN-specific concerns, such as tracing the impact of the organization's research, in its current form it can be used by anyone developing open-source hardware.
Following the 2011 Open Hardware Summit, and after heated debates on licenses and what constitutes open-source hardware, Bruce Perens abandoned the OSHW Definition and the concerted efforts of those involved with it. Openhardware.org, led by Bruce Perens, promoted and identified practices that meet all the combined requirements of the Open Source Hardware Definition, the Open Source Definition, and the Four Freedoms of the Free Software Foundation. Since 2014, openhardware.org has not been online and seems to have ceased activity.
The Open Source Hardware Association (OSHWA) at oshwa.org acts as hub of open-source hardware activity of all genres, while cooperating with other entities such as TAPR, CERN, and OSI. The OSHWA was established as an organization in June 2012 in Delaware and filed for tax exemption status in July 2013. After some debates about trademark interferences with the OSI, in 2012 the OSHWA and the OSI signed a co-existence agreement.
FSF's Replicant project suggested in 2016 an alternative "free hardware" definition, derived from the FSF's four freedoms.
Forms of open-source hardware
The term hardware in open-source hardware has been historically used in opposition to the term software of open-source software; that is, to refer to the electronic hardware on which the software runs (see previous section). However, as more and more non-electronic hardware products are made open source (for example WikiHouse, OpenBeam or Hovalin), the term tends to be used in its broader sense of "physical product". The field of open-source hardware has been shown to go beyond electronic hardware and to cover a larger range of product categories such as machine tools, vehicles and medical equipment. In that sense, hardware refers to any form of tangible product, be it electronic hardware, mechanical hardware, textile or even construction hardware. The Open Source Hardware (OSHW) Definition 1.0 defines hardware as "tangible artifacts — machines, devices, or other physical things".
Computers
Due to a mixture of privacy, security, and environmental concerns, a number of projects have started that aim to deliver a variety of open-source computing devices. Examples include the EOMA68 (SBC in a PCMCIA form-factor, intended to be plugged into a laptop or desktop chassis), Novena (bare motherboard with optional laptop chassis), and GnuBee (series of Network Attached Storage devices).
Several retrocomputing hobby groups have created numerous recreations or adaptations of the early home computers of the 1970s and 80s, some of which include improved functionality and more modern components (such as surface-mount ICs and SD card readers). Some hobbyists have also developed add-on cards (such as drive controllers, memory expansion, and sound cards) to improve the functionality of older computers. Miniaturised recreations of vintage computers have also been created.
Electronics
Electronics is one of the most popular types of open-source hardware. There are many companies that provide large varieties of open-source electronics, such as SparkFun, Adafruit and Seeed. In addition, there are NPOs and companies that provide a specific open-source electronic component, such as the Arduino electronics prototyping platform. There are many examples of specialty open-source electronics, such as a low-cost open-source voltage and current monitor for GMAW-based 3-D printing and a robotics-assisted mass spectrometry assay platform. Open-source electronics finds various uses, including automation of chemical procedures.
Mecha(tro)nics
A large range of open-source mechatronic products have been developed, including mechanical components, machine tools, vehicles, musical instruments, and medical equipment.
Examples of open-source machine tools include 3D printers such as RepRap, Prusa, and Ultimaker, 3D printer filament extruders such as the polystruder XR3, as well as the laser cutter Lasersaur.
Open-source vehicles have also been developed including bicycles like XYZ Space Frame Vehicles and cars such as the Tabby OSVehicle.
Examples of open-source medical equipment include open-source ventilators, the echostethoscope echOpen, and a wide range of prosthetic hands listed in the review study by Ten Kate et al. (e.g. OpenBionics' Prosthetic Hands).
Other
Examples of open-source hardware products can also be found to a lesser extent in construction (Wikihouse), textile (Kit Zéro Kilomètres), and firearms (3D printed firearm, Defense Distributed).
Licenses
Rather than creating a new license, some open-source hardware projects use existing, free and open-source software licenses. These licenses may not accord well with patent law.
Later, several new licenses were proposed, designed to address issues specific to hardware design. In these licenses, many of the fundamental principles expressed in open-source software (OSS) licenses have been "ported" to their counterpart hardware projects. New hardware licenses are often explained as the "hardware equivalent" of a well-known OSS license, such as the GPL, LGPL, or BSD license.
Despite superficial similarities to software licenses, most hardware licenses are fundamentally different: by nature, they typically rely more heavily on patent law than on copyright law, as many hardware designs are not copyrightable. Whereas a copyright license may control the distribution of the source code or design documents, a patent license may control the use and manufacturing of the physical device built from the design documents. This distinction is explicitly mentioned in the preamble of the TAPR Open Hardware License:
Noteworthy licenses include:
The TAPR Open Hardware License: drafted by attorney John Ackermann, reviewed by OSS community leaders Bruce Perens and Eric S. Raymond, and discussed by hundreds of volunteers in an open community discussion
Balloon Open Hardware License: used by all projects in the Balloon Project
Although originally a software license, OpenCores encourages the LGPL
Hardware Design Public License: written by Graham Seaman, admin of Opencollector.org
In March 2011 CERN released the CERN Open Hardware License (OHL) intended for use with the Open Hardware Repository and other projects.
The Solderpad License is a version of the Apache License version 2.0, amended by lawyer Andrew Katz to render it more appropriate for hardware use.
The Open Source Hardware Association recommends seven licenses which follow their open-source hardware definition. From the general copyleft licenses the GNU General Public License (GPL) and Creative Commons Attribution-ShareAlike license, from the hardware-specific copyleft licenses the CERN Open Hardware License (OHL) and TAPR Open Hardware License (OHL) and from the permissive licenses the FreeBSD license, the MIT license, and the Creative Commons Attribution license. Openhardware.org recommended in 2012 the TAPR Open Hardware License, Creative Commons BY-SA 3.0 and GPL 3.0 license.
Organizations tend to rally around a shared license. For example, OpenCores prefers the LGPL or a Modified BSD License, FreeCores insists on the GPL, Open Hardware Foundation promotes "copyleft or other permissive licenses", the Open Graphics Project uses a variety of licenses, including the MIT license, GPL, and a proprietary license, and the Balloon Project wrote their own license.
Development
The adjective "open-source" not only refers to a specific set of freedoms applying to a product, but also generally presupposes that the product is the object or the result of a "process that relies on the contributions of geographically dispersed developers via the Internet." In practice, however, in both open-source hardware and open-source software, products may be the result of a development process performed either by a closed team in a private setting or by a community in a public environment, the first case being more frequent than the second, which is more challenging. Establishing a community-based product development process faces several challenges, such as finding appropriate product data management tools, documenting not only the product but also the development process itself, accepting the loss of tight control over the project, and ensuring continuity despite the intermittent participation of voluntary project members.
One of the major differences between developing open-source software and developing open-source hardware is that hardware results in tangible outputs, which cost money to prototype and manufacture. As a result, the phrase "free as in speech, not as in beer", more formally known as Gratis versus Libre, distinguishes between the idea of zero cost and the freedom to use and modify information. While open-source hardware faces challenges in minimizing cost and reducing financial risks for individual project developers, some community members have proposed models to address these needs. Given this, there are initiatives to develop sustainable community funding mechanisms, such as the Open Source Hardware Central Bank.
Extensive discussion has taken place on ways to make open-source hardware as accessible as open-source software. Providing clear and detailed product documentation is an essential factor facilitating product replication and collaboration in hardware development projects. Practical guides have been developed to help practitioners to do so. Another option is to design products so they are easy to replicate, as exemplified in the concept of open-source appropriate technology.
The process of developing open-source hardware in a community-based setting is alternatively called open design, open source development or open source product development. All these terms are examples of the open-source model applicable for the development of any product, including software, hardware, cultural and educational. See here for a delineation of these terms.
A major contributor to the production of open-source hardware product designs is the scientific community. There has been considerable work to produce open-source hardware for scientific hardware using a combination of open-source electronics and 3-D printing. Other sources of open-source hardware production are vendors of chips and other electronic components sponsoring contests with the provision that the participants and winners must share their designs. Circuit Cellar magazine organizes some of these contests.
Open-source labs
A guide, Open-Source Lab by Joshua Pearce, has been published on using open-source electronics and 3D printing to make open-source labs. Today, scientists are creating many such labs. Examples include:
Boston Open Source Science Laboratory, Somerville, Massachusetts
BYU Open Source Lab, Brigham Young University
Michigan Tech
National Tsing Hua University
OSU Open Source Lab, Oregon State University
Open Source Research Lab, University of Texas at El Paso
Business models
Open hardware companies are experimenting with business models. For example, littleBits implements open-source business models by making available the circuit designs in each littleBits module, in accordance with the CERN Open Hardware License Version 1.2. Another example is Arduino, which registered its name as a trademark; others may manufacture products from Arduino designs but cannot call the products Arduino products. There are many applicable business models for implementing some open-source hardware even in traditional firms. For example, to accelerate development and technical innovation, the photovoltaic industry has experimented with partnerships, franchises, secondary supplier and completely open-source models.
Recently, many open-source hardware projects have been funded via crowdfunding on Indiegogo or Kickstarter. Crowd Supply is especially popular for crowdfunding open hardware projects.
Reception and impact
Richard Stallman, the founder of the free software movement, was skeptical in 1999 about the idea and relevance of free hardware (his terminology for what is now known as open-source hardware). In a 2015 article in Wired Magazine, he modified this attitude: although he acknowledged the importance of free hardware, he still saw no ethical parallel with free software. Also, Stallman prefers the term free hardware design over open source hardware, a request which is consistent with his earlier rejection of the term open source software (see also Alternative terms for free software).
Other authors, such as Professor Joshua Pearce, have argued there is an ethical imperative for open-source hardware, specifically with respect to open-source appropriate technology for sustainable development. In 2014, he also wrote the book Open-Source Lab: How to Build Your Own Hardware and Reduce Research Costs, which details the development of free and open-source hardware primarily for scientists and university faculty. Pearce, in partnership with Elsevier, introduced the scientific journal HardwareX. It has featured many examples of applications of open-source hardware for scientific purposes.
See also
Computer numeric control (CNC)
Fab lab
Hardware backdoor
HardwareX
Open innovation
Open manufacturing
Open Source Ecology
Open-source robotics
Rapid prototyping
Reuse
RISC-V—an open-source computer instruction set architecture
Simputer
NVDLA
References
Further reading
Building Open Source Hardware: DIY Manufacturing for Hackers and Makers by Alicia Gibb, Addison Wesley, 7 Dec. 2014,
Open Source Hardware A Complete Guide by Gerardus Blokdyk, 5STARCooks, 15 Mar. 2021,
Open Source Hardware Technology Paperback by Fouad Soliman, Sanaa A. Kamh, Karima A. Mahmoud, Publisher : Lap Lambert Academic Publishing, 24 Mar. 2020,
Open-Source Lab: How to Build Your Own Hardware and Reduce Research Costs by Joshua M. Pearce, Elsevier, 17 Dec. 2013,
External links
Open Source Hardware Association
Proposed Open Source Hardware (OSHW) Statement of Principles and Definition v1.0
Repositories
OpenHardware.io
Open Hardware Repository
Free culture movement
Open-source economics
|
1128643
|
https://en.wikipedia.org/wiki/College%20Preparatory%20Center
|
College Preparatory Center
|
Saudi Aramco's College Preparatory Center (CPC) is where the College Preparatory Program (CPP) takes place. It is a prerequisite for entering the College Degree Program for Non-Employees (CDPNE), a highly selective program, established in 1985, run by the Saudi Arabian Oil Company (Saudi Aramco). The CPP is a 10-month program of study at the CPC (near the Industrial Training Center in Dhahran) for boys and at the Special Training Center (STC), located inside the seniors' camp, for girls. After those ten months, students move on to universities abroad in the US, UK, Canada, China, Korea, Japan, Australia, or New Zealand to complete a 4-year bachelor's degree under the CDPNE program.
The program was exclusive to Saudi male students until Saudi Aramco announced in summer 2006 that it would give scholarships to female students. For the academic year 2006–2007, 251 males and 56 females were accepted into the program. For the academic year 2009–2010, 182 males and 43 females were accepted.
In 2003–2004, the admission rate was 1.20%, 148 admitted from a pool of 12,300 qualified applicants. Each student is assigned to a Saudi Aramco department related to his concentration of study.
In 2013–2014, the admission rate was 2.2%, with 350 boys and 82 girls admitted from a pool of more than 20,000 students from all over the Kingdom of Saudi Arabia.
During the CPP year:
CPP students are divided into four tracks according to their Math and English placement tests:
CH: High level in both Math and English.
CR: Regular Math and C in English.
BR: Regular or High-level Math and B in English.
AR: Regular or High-level Math and A in English.
where C in English is the highest and A is the lowest.
The year in CPP is divided into three trimesters in which students take different subjects according to their level and major. All students take ESL courses, research courses, computer & internet skills courses, and library skills courses. All the students also take AP preparation courses which are of great importance to the administration and faculty. All students except business students take AP Calculus AB, or AP Calculus BC. Some students take AP Computer Science, AP Chemistry, AP Physics B, AP Physics C, AP Statistics, AP Macroeconomics, AP Microeconomics.
During the year, C-level students are offered two SAT tests, while students in level B or A are offered one. All students may take up to 3 IELTS tests if they do not achieve the required score the first time. In some cases, early departure is allowed for selected students to study in Far East countries such as China, Korea, and Japan, as well as Australia; this option started in 1998 but was discontinued in 2015. The CPP is divided into three units. The units are subdivided into departments as follows:
Chemistry & Geology Department (Chairman: Mr. Mohammad Fawwaz),
Computer Science Department (Chairman: Dr. Hanan Mohamed),
General Studies & English Department (Chairman: Mr. Ron Mortensen),
Physics Department (Chairman: Mr. Ihab K. Ashkar),
Mathematics Department (Chairman: Mr. Dennis Luy),
Business Studies Department. (Chairman: Ms. Ria Madlangsakay),
Personal Fitness & Sports (PE) Department. (Chairman: Mr. Musa Abdul-Aziz),
Most of the staff who work in the CPP are from English-speaking countries and are well qualified. The style of instruction mimics that of universities in the United States.
The requirements for applying to the CPC are as follows:
Be a recent high school graduate
A grade of 85 or higher on the Qiyas assessment test is required for admission to the program; a score above 90 is recommended.
A school GPA of at least 90% (3.60/4.00) is required in English, Mathematics, Computer, Physics, Biology, and Chemistry.
All students must submit a Tahsili score regardless of the result. Students who score lower than 90% in high school must have a grade of 80 or higher on the Tahsili test.
Passing the Math and English placement tests
The graduation requirements from the CPC are as follows:
an IELTS score of 6.5 or higher with each section no lower than 5.5 for CH and CR students, an IELTS score of 6.0 or higher with each section no lower than 5.5 for BR students, or an IELTS score of 5.5 or higher with each section no lower than 5 for AR students.
a CGPA (Cumulative grade point average) of 2.5/4 or higher.
a GPA of 2.5/4 or higher in the third trimester.
a grade of at least D in the Research Paper Writing English course (ENGL036).
The benefits for all students:
During those 10 months the company offers the students a free dorm room. The boys' dorm is located in Al-Munirah Camp, while the girls' dorm is located inside the seniors' camp.
A monthly salary of 3500 SR.
Free bus transportation from the CPC and STC to the dorms.
Students can book free airline tickets to Jeddah or Riyadh and even Yanbu through Aramco's airline.
The benefits for girls:
The girls' dorm has 3 floors and about 36 rooms; on each floor there is a lounge room with a microwave, a TV, and computers. There is a laundry room with washing machines, and an advisor to help the girls and make sure that the place is secure. There is a spacious garden with a study room and a playroom.
Living inside the camp, which is a secure place with many facilities and activities, such as the Dhahran Recreation Library, Al-Najjar Café, the Tandoori House restaurant, bowling, a cinema, the Duckpond playground, the Dhahran Dining Hall, and a supermarket.
Free bus transportation to and from Dhahran Mall and Al-Rashid Mall at three different times a day.
All students who pass their first year in the CPP are transferred to the CDPNE (College Degree Program for Non-Employees) and granted a full scholarship to one of the top overseas universities, mostly in the United States, Canada, the United Kingdom, the Republic of Ireland, and Australia, with all expenses covered. Each scholar is given the opportunity to continue graduate studies up to the PhD degree while receiving a full-time salary, after working at least 5 years in the company.
Students are allocated degree paths upon acceptance. Most of these degrees relate to engineering and the sciences; the offerings depend on company needs, and not all of these majors are offered every year.
The males’ majors:
Chemical Engineering
Petroleum Engineering
Computer Engineering
Computer Science
Microbiology
Industrial Engineering
Material Science & Engineering
Mechanical Engineering
Electrical Engineering
Electronics Engineering
Environmental Engineering
Environmental Sciences
Fire Protection Engineering
Safety Engineering
Software Engineering
Systems Engineering
Geology
Geophysics
Chemistry
Industrial Chemistry
Accounting
Business Administration
Criminal Justice
Finance
Human Resource Development
Human Resource Management
Supply Chain Management
Management Information Systems
Marketing
The females’ majors:
Computer Engineering
Computer Science
Chemical Engineering
Supply Chain Management
Petroleum Engineering
Mechanical Engineering
Electrical Engineering
Software Engineering
Geology
Geophysics
Chemistry
Accounting
Human Resource Development
Human Resource Management
Management Information Systems
References
Schools in Saudi Arabia
University-preparatory schools
|
4075021
|
https://en.wikipedia.org/wiki/Butler%20%28software%29
|
Butler (software)
|
Butler is an application launcher for macOS by Peter Maurer. It can learn common abbreviations for programs and which programs are used most frequently. Butler can play music in iTunes and copy and move files. It can be accessed via a menu or keyboard shortcut. Butler is similar to other launchers such as Quicksilver and LaunchBar.
Awards
Butler was named MacAddict's April 2006 Shareware pick of the month.
References
External links
Official website
Utilities for macOS
Application launchers
MacOS-only software
|
586357
|
https://en.wikipedia.org/wiki/Artificial%20general%20intelligence
|
Artificial general intelligence
|
Artificial general intelligence (AGI) is the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can.
It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI,
or general intelligent action (although some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness.)
In contrast to strong AI, weak AI or "narrow AI" is not intended to have general cognitive abilities; rather, weak AI is any program that is designed to solve exactly one problem. (Academic sources reserve "weak AI" for programs that do not experience consciousness or do not have a mind in the same sense people do.)
A 2020 survey identified 72 active AGI R&D projects spread across 37 countries.
Characteristics
Various criteria for intelligence have been proposed (most famously the Turing test) but to date, there is no definition that satisfies everyone.
Intelligence traits
However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following:
reason, use strategy, solve puzzles, and make judgments under uncertainty;
represent knowledge, including common sense knowledge;
plan;
learn;
communicate in natural language;
and integrate all these skills towards common goals. Other important capabilities include:
input as the ability to sense (e.g. see, hear, etc.), and
output as the ability to act (e.g. move and manipulate objects, change own location to explore, etc.)
in this world where intelligent behaviour is to be observed. This would include an ability to detect and respond to hazards. Many interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in) and autonomy.
Computer based systems that exhibit many of these capabilities do exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent), but no one has created an integrated system that excels at all these areas.
Tests for confirming human-level AGI
The following tests to confirm human-level AGI have been considered:
The Turing Test (Turing)
A machine and a human both converse unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.
The Coffee Test (Wozniak)
A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.
The Robot College Student Test (Goertzel)
A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.
The Employment Test (Nilsson)
A machine performs an economically important job at least as well as humans in the same job.
AI-complete problems
There are many individual problems that may require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance.
A problem is informally known as "AI-complete" or "AI-hard", if solving it is equivalent to the general aptitude of human intelligence, or strong AI, and is beyond the capabilities of a purpose-specific algorithm.
AI-complete problems are hypothesised to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.
AI-complete problems cannot be solved with current computer technology alone, and require human computation. This property could be useful, for example, to test for the presence of humans, as CAPTCHAs aim to do; and for computer security to repel brute-force attacks.
History
Classical AI
Modern AI research began in the mid-1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"
Several classical AI projects, such as Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project, were specifically directed at AGI.
However, in the early 1970s and then again in the early 1990s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". As the 1980s began, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money back into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all and avoided any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."
Narrow AI research
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where it could produce verifiable results and commercial applications, such as artificial neural networks and statistical machine learning. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Development in this field is still considered an emerging trend, with a mature stage expected to be more than a decade away.
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. Hans Moravec wrote in 1988: "I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: "The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."
Modern artificial general intelligence research
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers.
As of yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group contribute to a series of AGI conferences. The research is extremely diverse and often pioneering in nature.
Timescales: In the introduction to his 2006 book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the 2007 consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible. However, mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years for modern and historical predictions alike. It was later found that the dataset listed some experts as non-experts and vice versa.
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI, Apple's Siri and others. At most, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.
In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to classify as a narrow AI system.
In the same year Jason Rohrer used his GPT-3 account to develop a chatbot, and provided a chatbot-developing platform called "Project December". OpenAI asked for changes to the chatbot to comply with their safety guidelines; Rohrer disconnected Project December from the GPT-3 API.
Brain simulation
Whole brain emulation
A popular discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably. Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.
Early estimates
For low-level brain simulation, an extremely powerful computer would be required. The human brain has a huge number of synapses. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5×10^14 synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS). In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10^16 computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating point operation" – a measure used to rate current supercomputers – then 10^16 "computations" would be equivalent to 10 petaFLOPS, achieved in 2011). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
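The arithmetic behind these figures is easy to reproduce. The following short calculation is a purely illustrative sketch; every number is taken from the estimates quoted above. It recomputes the synapse count and converts Kurzweil's cps figure into petaFLOPS.

# Back-of-the-envelope reproduction of the brain-capacity estimates quoted above.
neurons = 1e11                 # ~10^11 neurons in the human brain
synapses_per_neuron = 7_000    # average synaptic connections per neuron

total_synapses = neurons * synapses_per_neuron
print(f"Estimated synapses: {total_synapses:.0e}")  # ~7e14, the same order of magnitude as the quoted adult range of 1e14 to 5e14

kurzweil_cps = 1e16              # Kurzweil's 1997 hardware estimate in computations per second
petaflops = kurzweil_cps / 1e15  # if one "computation" is taken as one floating-point operation
print(f"Equivalent to {petaflops:.0f} petaFLOPS")   # 10 petaFLOPS, reached by supercomputers in 2011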
Modelling the neurons in more detail
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for glial cells, which are known to play a role in cognitive processes.
Current research
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The Artificial Intelligence System project implemented non-real-time simulations of a "brain" (with 10^11 neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model. The Blue Brain project used one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to create a real time simulation of a single rat neocortical column consisting of approximately 10,000 neurons and 10^8 synapses in 2006. A longer-term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," Henry Markram, director of the Blue Brain Project said in 2009 at the TED conference in Oxford. There have also been controversial claims to have simulated a cat brain. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.
Hans Moravec addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?". He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.
The actual complexity of modeling biological neurons has been explored in the OpenWorm project, which aimed at a complete simulation of a worm that has only 302 neurons in its neural network (among about 1,000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot yet be called a total success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is clearly substantial.
Criticisms of simulation-based approaches
A fundamental criticism of the simulated brain approach derives from embodied cognition where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning. If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel proposes virtual embodiment (like in Second Life), but it is not yet known whether this would be sufficient.
Desktop computers using microprocessors capable of more than 10^9 cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest no such simulation exists. There are several reasons for this:
The neuron model seems to be oversimplified (see next section).
There is insufficient understanding of higher cognitive processes to establish accurately what the brain's neural activity (observed using techniques such as functional magnetic resonance imaging) correlates with.
Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.
The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. The Extended Mind thesis formalizes the philosophical concept, and research into cephalopods has demonstrated clear examples of a decentralized system.
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses. Another estimate is 86 billion neurons of which 16.3 billion are in the cerebral cortex and 69 billion in the cerebellum. Glial cell synapses are currently unquantified but are known to be extremely numerous.
Philosophical perspective
"Strong AI" as defined in philosophy
In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. He wanted to distinguish between two different hypotheses about artificial intelligence:
Strong AI hypothesis: An artificial intelligence system can "think", have "a mind" and "consciousness".
Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.
The first one he called "strong" because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test—the behavior of a "weak AI" machine would be precisely identical to a "strong AI" machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.
Mainstream AI is only interested in how a program behaves. According to Russell and Norvig, "as long as the program works, they don't care if you call it real or a simulation." If the program can behave as if it has a mind, then there's no need to know whether it actually has a mind—indeed, there would be no way to tell. For AI research, Searle's "weak AI hypothesis" is equivalent to the statement "artificial general intelligence is possible". Thus, according to Russell and Norvig, "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." For academic AI research, then, "Strong AI" and "AGI" are two very different things.
In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term "strong AI" to mean "human level artificial general intelligence". This is not the same as Searle's strong AI, unless you assume that consciousness is necessary for human-level AGI. Academic philosophers such as Searle don't believe that is the case, and artificial intelligence researchers don't care.
Consciousness
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in science fiction and the ethics of artificial intelligence:
consciousness: To have subjective experience and thought.
self-awareness: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.
sentience: The ability to "feel" perceptions or emotions subjectively.
sapience: The capacity for wisdom.
These traits have a moral dimension, because a machine with this form of strong AI may have rights, analogous to the rights of non-human animals. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.
Bill Joy, among others, argues a machine with these traits may be a threat to human life or dignity.
It remains to be shown whether any of these traits are necessary for strong AI. The role of consciousness is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the neural correlates of consciousness, would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, naturally emerge from a fully intelligent machine. It's also possible that it will become natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent.
Artificial consciousness research
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers regard research that investigates possibilities for implementing consciousness as vital. In an early effort Igor Aleksander argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.
Possible explanations for the slow progress of AI research
Since the launch of AI research in 1956, progress toward machines capable of intelligent action at the human level has been slower than early researchers expected. A possible explanation for this delay is that computers lack sufficient memory or processing power. In addition, the sheer complexity of the problems involved may itself limit the progress of AI research.
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like Hubert Dreyfus and Roger Penrose who deny the possibility of achieving strong AI. John McCarthy was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.
Conceptual limitations are another possible reason for the slowness in AI research. AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking (Moravec's paradox). A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent. However, the question of whether thoughts can be separated from the thinker who has them has also intrigued AI researchers.
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. Failed predictions made by AI researchers and the lack of a complete understanding of human behavior have diminished confidence in the idea of human-level AI. Although the progress of AI research has brought both improvement and disappointment, most investigators remain optimistic about potentially achieving the goal of AI in the 21st century.
Other possible reasons have been proposed for the slow progress toward strong AI. The intricacy of the scientific problems, and the need to fully understand the human brain through psychology and neurophysiology, have limited many researchers' ability to emulate the function of the human brain in computer hardware. Many researchers also tend to understate the uncertainty involved in predictions about the future of AI, and without taking those issues seriously, people can overlook solutions to problematic questions.
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment. When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning. Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.
The practice of abstraction, which researchers tend to redefine when working in a particular context, allows them to concentrate on just a few concepts. The most productive use of abstraction in AI research comes from planning and problem solving. Although the aim is to increase the speed of computation, the role of abstraction has raised questions about the involvement of abstraction operators.
Another possible reason for the slowness of AI relates to the acknowledgement by many AI researchers that heuristics remain an area with a significant gap between computer performance and human performance. The specific functions programmed into a computer may account for many of the requirements that would allow it to match human intelligence. These explanations are not guaranteed to be the fundamental causes of the delay in achieving strong AI, but they are widely accepted by numerous researchers.
Many AI researchers debate whether machines should be created with emotions. There are no emotions in typical models of AI, and some researchers say that programming emotions into machines would allow them to have a mind of their own. Emotion sums up the experiences of humans because it allows them to remember those experiences. David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion." This concern about emotion has posed problems for AI researchers, and it connects to the concept of strong AI as its research progresses.
Controversies and dangers
Feasibility
As of August 2020, AGI remains speculative as no such system has been demonstrated yet. Opinions vary both on whether and when artificial general intelligence will arrive, at all. At one extreme, AI pioneer Herbert A. Simon speculated in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further current AGI progress considerations can be found above under Tests for confirming human-level AGI.
Potential threat to human existence
The thesis that AI poses an existential risk for humans, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking. The most notable AI researcher to endorse the thesis is Stuart J. Russell. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial.
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way." Former Baidu Vice President and Chief Scientist Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."
See also
Artificial brain
AI control problem
Automated machine learning
BRAIN Initiative
China Brain Project
Future of Humanity Institute
General game playing
Human Brain Project
Intelligence amplification (IA)
Machine ethics
Multi-task learning
Outline of artificial intelligence
Outline of transhumanism
Synthetic intelligence
Transfer learning
Loebner Prize
Hardware for artificial intelligence
Notes
References
Sources
External links
The AGI portal maintained by Pei Wang
The Genesis Group at MIT's CSAIL – Modern research on the computations that underlay human intelligence
OpenCog – open source project to develop a human-level AI
Simulating logical human thought
What Do We Know about AI Timelines? – Literature review
Hypothetical technology
Artificial intelligence
Computational neuroscience
|
5767252
|
https://en.wikipedia.org/wiki/Pepper%20Pad
|
Pepper Pad
|
The Pepper Pad was a family of Linux-based mobile computers with Internet capability that also doubled as handheld game consoles and portable multimedia devices. The devices used Bluetooth and Wi-Fi for Internet connectivity. Pepper Pads are now obsolete and unsupported, and the parent company has ceased operations.
The original prototype Pepper Pad was built in 2003 with an ARM-based PXA255 processor running at 400 MHz, an 8-inch touchscreen in portrait mode, a split QWERTY keyboard, and Wi-Fi. Only 6 were made, and it was never offered for sale. The Pepper Pad was a 2004 Consumer Electronics Show (CES) Innovations Awards Honoree in the Computer Hardware category.
The Pepper Pad 2 was introduced in 2004 with a faster 624 MHz PXA270 processor and the screen was rotated to a landscape format. The Pepper Pad 2 was the first Pepper Pad offered for commercial sale. The Pepper Pad and Pepper Pad 2 both ran Pepper's proprietary Pepper Keeper application on top of a heavily-customized version of the Montavista Linux operating system.
The Pepper Pad 3 was announced in 2006 with an upgrade to a faster AMD Geode processor. The Pepper Pad 3 also used a smaller 7" screen for cost savings. Like previous versions, the Pepper Pad 3 had a split QWERTY button keyboard, built-in microphone, video camera, composite video output, stereo speakers, infrared receiver and transmitter, 800×480 7-inch LCD touchscreen (with stylus), SD/MMC flash memory slot, 20 or 30 GB hard disk, 256 MB RAM, 256 KB ROM, and both Wi-Fi (b/g) and Bluetooth 2.0. The Pepper Pad 3 used a heavily-customized version of the Fedora Linux operating system called Pepper Linux. Unlike the Pepper Pad 2, which was built and sold directly by Pepper, the Pepper Pad 3 was built and sold under license by Hanbit Electronics.
Support
Pepper Computer, Inc. has ceased operations and is no longer providing support or sales for Pepper Pad web computers or Pepper Linux.
Software
Pepper Pads ran Pepper's "Pepper Keeper" software and suite of applications. Pepper's software was designed to be easy to use, and offered many features later found in devices like the iPhone and Android. The Pepper Keeper's home screen provided large icons for launching applications including a web browser, mail client, chat client, photo viewer, music player, video player, games, and a scrapbooking application. Pepper offered an application store, automatic software updates, and a simple way to share photos, music, and files with friends.
The Pepper Keeper ran atop Pepper Linux, Pepper's custom version of the Linux operating system. Pepper Linux was ported to multiple devices including the One Laptop per Child.
Software ported to the Pepper Pad
FCE Ultra (NES emulator)
Adobe Systems/Macromedia Flash 7
Java
X11
GTK+
Mozilla Firefox
RealPlayer
Helix
Squeak
Hardware (Pepper Pad 3)
Mass: 2.1 pounds (985g)
Size: 29 cm x 14.9 cm x 2.3 cm (11.4" x 5.9" x 0.9")
Mainboard
AMD Geode CPU, 533 MHz clock speed, x86 instruction set with MMX and 3DNow extensions, integrated north bridge, graphics controller and PCI bridge
AMD CS5536 Companion device (south bridge), USB 2.0 / IDE / IR / SMBus / APM interface
Wolfson WM9713 AC97 Audio / Touchscreen interface
256MB DDR SDRAM (DDR-333 SO-DIMM)
256KB BIOS ROM
Chrontel CH7013B NTSC/PAL TV signal encoder
IrDA and TvIR emitters/receivers
Subsystems
Hitachi TravelStar 20GB 1.8" IDE disk drive
Atheros AR2413A-based mini-PCI 802.11b/g WiFi interface, with externally attached antenna (external to the card, internal to the Pepper)
Bluetooth 2.0
AU Optronics A070VW01 7.0" 800x480 TFT LCD
Integrated 62-key clicky keyboard, including 4-way cursor array and scroll wheel
3800mAh Lithium-Ion Battery
Stereo Speakers
Microphone
640x480 digital camera, fixed focus
External Ports
USB 2.0 host port
USB 2.0 device port
1/8" Stereo headphone out
1/8" Composite video out
1/8" Microphone In
Internal Ports
miniPCI (occupied by WiFi interface)
JTAG test-access port.
Serial port with console
See also
JooJoo
References
External links
DOWN Pepper Computer, Inc.
DOWN 2011-02-24 Hanbit America, hardware manufacturer website
Archive of Hanbit America Web archive of hardware manufacturer website
DOWN 2011-02-24 Pepper Pad Community Forums, official discussion forums
DOWN 2011-02-24 Pepper Wiki, Pepper Pad information repository
Web archive of wiki Archive
Pepper Linux 4.0 Application Launching Preview (Video)
Handheld game consoles
Microsoft Tablet PC
Mobile computers
Linux-based devices
|
2142
|
https://en.wikipedia.org/wiki/List%20of%20artificial%20intelligence%20projects
|
List of artificial intelligence projects
|
The following is a list of current and past, non-classified notable artificial intelligence projects.
Specialized projects
Brain-inspired
Blue Brain Project, an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level.
Google Brain, a deep learning project that is part of Google X and aims to achieve intelligence similar or equal to human level.
Human Brain Project
NuPIC, an open source implementation by Numenta of its cortical learning algorithm.
Cognitive architectures
4CAPS, developed at Carnegie Mellon University under Marcel A. Just
ACT-R, developed at Carnegie Mellon University under John R. Anderson.
AIXI, Universal Artificial Intelligence developed by Marcus Hutter at IDSIA and ANU.
CALO, a DARPA-funded, 25-institution effort to integrate many artificial intelligence approaches (natural language processing, speech recognition, machine vision, probabilistic logic, planning, reasoning, many forms of machine learning) into an AI assistant that learns to help manage your office environment.
CHREST, developed under Fernand Gobet at Brunel University and Peter C. Lane at the University of Hertfordshire.
CLARION, developed under Ron Sun at Rensselaer Polytechnic Institute and University of Missouri.
CoJACK, an ACT-R inspired extension to the JACK multi-agent system that adds a cognitive architecture to the agents for eliciting more realistic (human-like) behaviors in virtual environments.
Copycat, by Douglas Hofstadter and Melanie Mitchell at the Indiana University.
DUAL, developed at the New Bulgarian University under Boicho Kokinov.
FORR developed by Susan L. Epstein at The City University of New York.
IDA and LIDA, implementing Global Workspace Theory, developed under Stan Franklin at the University of Memphis.
OpenCog Prime, developed using the OpenCog Framework.
Procedural Reasoning System (PRS), developed by Michael Georgeff and Amy L. Lansky at SRI International.
Psi-Theory developed under Dietrich Dörner at the Otto-Friedrich University in Bamberg, Germany.
R-CAST, developed at the Pennsylvania State University.
Soar, developed under Allen Newell and John Laird at Carnegie Mellon University and the University of Michigan.
Society of mind and its successor the Emotion machine proposed by Marvin Minsky.
Subsumption architectures, developed e.g. by Rodney Brooks (though it could be argued whether they are cognitive).
Games
AlphaGo, software developed by Google that plays the Chinese board game Go.
Chinook, a computer program that plays English draughts; the first to win the world champion title in the competition against humans.
Deep Blue, a chess-playing computer developed by IBM which beat Garry Kasparov in 1997.
FreeHAL, a self-learning conversation simulator (chatterbot) which uses semantic nets to organize its knowledge in order to closely imitate human behavior in conversations.
Halite, an artificial intelligence programming competition created by Two Sigma.
Libratus, a poker AI that beat world-class poker players in 2017, intended to be generalisable to other applications.
Quick, Draw!, an online game developed by Google that challenges players to draw a picture of an object or idea and then uses a neural network to guess what the drawing is.
Stockfish AI, an open source chess engine currently ranked the highest in many computer chess rankings.
TD-Gammon, a program that learned to play world-class backgammon partly by playing against itself (temporal difference learning with neural networks).
Internet activism
Serenata de Amor, a project for the analysis of public expenditures and the detection of discrepancies.
Knowledge and reasoning
Braina, an intelligent personal assistant application with a voice interface for Windows OS.
Cyc, an attempt to assemble an ontology and database of everyday knowledge, enabling human-like reasoning.
Eurisko, a language by Douglas Lenat for solving problems which consists of heuristics, including some for how to use and change its heuristics.
Google Now, an intelligent personal assistant with a voice interface in Google's Android and Apple Inc.'s iOS, as well as Google Chrome web browser on personal computers.
Holmes, an AI created by Wipro.
Microsoft Cortana, an intelligent personal assistant with a voice interface in Microsoft's various Windows 10 editions.
Mycin, an early medical expert system.
Open Mind Common Sense, a project based at the MIT Media Lab to build a large common sense knowledge base from online contributions.
P.A.N., a publicly available text analyzer.
Siri, an intelligent personal assistant and knowledge navigator with a voice-interface in Apple Inc.'s iOS and macOS.
SNePS, simultaneously a logic-based, frame-based, and network-based knowledge representation, reasoning, and acting system.
Viv (software), a new AI by the creators of Siri.
Wolfram Alpha, an online service that answers queries by computing the answer from structured data.
Motion and manipulation
AIBO, the robot pet for the home, grew out of Sony's Computer Science Laboratory (CSL).
Cog, a robot developed by MIT to study theories of cognitive science and artificial intelligence, now discontinued.
Music
Melomics, a bioinspired technology for music composition and synthesis, in which computers develop their own style rather than mimicking musicians.
Natural language processing
AIML, an XML dialect for creating natural language software agents.
Apache Lucene, a high-performance, full-featured text search engine library written entirely in Java.
Apache OpenNLP, a machine learning based toolkit for the processing of natural language text. It supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking and parsing.
Artificial Linguistic Internet Computer Entity (A.L.I.C.E.), an award-winning natural language processing chatterbot.
Cleverbot, successor to Jabberwacky, now with 170m lines of conversation, Deep Context, fuzziness and parallel processing. Cleverbot learns from around 2 million user interactions per month.
ELIZA, a famous 1966 computer program by Joseph Weizenbaum, which parodied person-centered therapy.
GPT-3, a 2020 language model developed by OpenAI that can produce text difficult to distinguish from that written by a human.
Jabberwacky, a chatbot by Rollo Carpenter, aiming to simulate natural human chat.
Mycroft, a free and open-source intelligent personal assistant that uses a natural language user interface.
PARRY, another early chatterbot, written in 1972 by Kenneth Colby, attempting to simulate a paranoid schizophrenic.
SHRDLU, an early natural language processing computer program developed by Terry Winograd at MIT from 1968 to 1970.
SYSTRAN, a machine translation technology by the company of the same name, used by Yahoo!, AltaVista and Google, among others.
ASR, automatic speech recognition systems.
Other
1 the Road, the first novel marketed by an AI.
Synthetic Environment for Analysis and Simulations (SEAS), a model of the real world used by Homeland security and the United States Department of Defense that uses simulation and AI to predict and evaluate future events and courses of action.
Multipurpose projects
Software libraries
Apache Mahout, a library of scalable machine learning algorithms.
Deeplearning4j, an open-source, distributed deep learning framework written for the JVM.
Keras, a high level open-source software library for machine learning (works on top of other libraries).
Microsoft Cognitive Toolkit (previously known as CNTK), an open source toolkit for building artificial neural networks.
OpenNN, a comprehensive C++ library implementing neural networks.
PyTorch, an open-source Tensor and Dynamic neural network in Python.
TensorFlow, an open-source software library for machine learning.
Theano, a Python library and optimizing compiler for manipulating and evaluating mathematical expressions, especially matrix-valued ones.
GUI frameworks
Neural Designer, a commercial deep learning tool for predictive analytics.
Neuroph, a Java neural network framework.
OpenCog, a GPL-licensed framework for artificial intelligence written in C++, Python and Scheme.
PolyAnalyst: A commercial tool for data mining, text mining, and knowledge management.
RapidMiner, an environment for machine learning and data mining, now developed commercially.
Weka, a free implementation of many machine learning algorithms in Java.
Cloud services
Data Applied, a web based data mining environment.
Grok, a service that ingests data streams and creates actionable predictions in real time.
Watson, a pilot service by IBM to uncover and share data-driven insights, and to spur cognitive applications.
See also
Comparison of cognitive architectures
Comparison of deep-learning software
References
External links
AI projects on GitHub
AI projects on SourceForge
Artificial intelligence projects
|
19199768
|
https://en.wikipedia.org/wiki/HP%20X-Terminals
|
HP X-Terminals
|
HP X-Terminals are a line of X terminals from Hewlett Packard introduced in the early- to mid-1990s, including the 700/X and 700/RX, Envizex and Entria, and the Envizex II and Entria II. They were often sold alongside PA-RISC-based HP 9000 Unix systems. The primary use case was connecting several graphical consoles to a single server or workstation to allow multiple users to access the same (expensive) processing system from (less expensive) terminal systems. These X-Terminals all allowed high-resolution, color-graphics access to the main server from which they downloaded their operating system and necessary program files. All models featured limited expandability, in most cases additional I/O options for peripherals and memory for more programs or local storage. HP did not use its own PA-RISC platform for these systems: the first design used an Intel CISC processor, while all later systems used RISC platforms, first the Intel i960 and later the popular MIPS.
These 1990s X-Terminals, together with offerings from many other vendors from that time, were precursors to thin computing: the use of small dumb front-end systems for I/O and a larger processing system as back-end, shared by many concurrent users.
700/X
These were the first X-Terminals HP produced, featuring a case similar to that of some HP 9000/300 (Motorola 68000-based) workstations.
They were driven by an unusual CPU combination: an Intel 80186 with a Texas Instruments DSP as video coprocessor.
CPU: 16 MHz Intel 80186 with a 60 MHz Texas Instruments DSP as video processor
RAM: 1MB on board, 9MB maximum; one slot takes up to 8MB modules of unknown type
Video RAM: Unknown
Maximum video resolution/color-depth: 1024×768/8-bit
I/O connectors: RS-232 serial, HIL and two PS/2 for keyboard/mouse devices, AUI and BNC 10 Mbit Ethernet connectors and VGA video connector
Expansion: Unknown
700/RX
These are the direct successors to the 700/X line of X-Terminals and changed the architecture significantly.
They were the first in a line of terminals driven by an Intel i960 RISC CPU and introduced a case that was also used on later systems. They have a fan, albeit a very quiet one.
Several submodels were available, featuring different video-options:
16Ca: 1 MB video RAM, max. 1024×768 resolution, 8-bit color-depth
19Ca: 2 MB video RAM, max. 1280×1024 resolution, 8-bit color-depth
14Ci/16Ci/17Ci: 1 MB video RAM, max. 1024×768 resolution, 8-bit color-depth
19Mi: 0.2 MB video RAM, max. 1280×1024 resolution, monographics
All models have these base features in common:
CPU: 22 MHz Intel i960CA with 1KB instruction cache
RAM: 2 MB on board, 34 MB maximum; two slots each take up to 16 MB 72-pin non-parity SIMMs
I/O connectors: RS232 serial, HIL and two PS/2 for keyboard/mouse devices, parallel for printer, AUI and BNC 10 Mbit Ethernet and VGA video connector
Expansion: slot for a Boot-ROM cartridge
Entria
The Entrias were the low-cost line of X-Terminals, featuring the same architecture as the 700/RX terminals, but in a plastic case in the same style as the HP 9000/712 workstation. They are very small and quiet.
The Entrias were available in different video configurations, depending on the exact model:
0.6 MB video RAM: max. resolution of 1024×768 with grayscale graphics
1 MB video RAM: max. resolution of 1024×768 with 8-bit color depth
2 MB video RAM: max. resolution of 1280×1024 with 8-bit color depth
Common:
CPU: Intel i960CA with 1 KB instruction cache
RAM: 4 MB on board, 68 MB maximum; two slots each take up to 32 MB 72-pin non-parity SIMMs
I/O connectors: RS-232 serial, two PS/2 for keyboard/mouse devices, parallel for printer, TP and BNC 10 Mbit Ethernet and VGA video connector
Expansion: none
Envizex
The Envizex were the successors to the 700/RX terminals, featuring the same flat pizzabox case and a slightly modified architecture with a faster version of the Intel i960 RISC CPU.
They have a (very quiet) fan inside.
Three different series were available which featured different speeds of the CPU:
i SERIES: 25 MHz Intel i960CF with 4 KB instruction and 1 KB data cache
a SERIES: 28 MHz Intel i960CF with 4 KB instruction and 1 KB data cache
p SERIES: 33 MHz Intel i960CF with 4 KB instruction and 1 KB data cache
Common aspects:
RAM: a and i SERIES: 4 MB on board, 132 MB maximum; four slots each take up to 32 MB 72-pin non-parity SIMMs. p SERIES: 6 MB on board, 102 MB maximum; three slots each take up to 32 MB 72-pin non-parity SIMMs
Video RAM: 2 MB
Maximum video resolution/color-depth: 1280×1024 (i SERIES might do only 1024×768) 8-bit
I/O connectors: two RS-232 serial, HIL and two PS/2 for keyboard/mouse devices, parallel for printer, TP, AUI and BNC 10 Mbit Ethernet and VGA video connector
Expansion: They offer a range of expansion options:
3.5″ PC floppy drive
CD-quality audio support
Either one of the following three cards:
SCSI/ROM adapter card
Token Ring adapter
100VG AnyLan adapter (HP-proprietary 100 MBit networking)
They also have two PCMCIA sockets for:
Boot-ROM card
SRAM cards which contain fonts or a local copy of the X server (no network download necessary)
Entria II
These were the successors of the low-cost Entria X-Terminals, keeping their HP 9000/712-style small footprint plastic case.
The system architecture was changed completely and is shared with the later Envizex II terminals. It is based around a NEC R4300 CPU and PCI-based I/O devices.
CPU: 100 MHz NEC R4300
RAM: 64 MB maximum; two slots each take up to 32 MB 168-pin DIMMs (PC66/100/133 DIMMs in different sizes can be used, but only 8 MB of each module will be available; the larger modules (16 and 32 MB) were HP-proprietary)
Video RAM: 2 MB
Maximum video resolution/color-depth: 1280×1024/8-bit
I/O connectors: RS-232 serial, two PS/2 for keyboard/mouse devices, parallel for printer, TP Ethernet (probably 10 Mbit) connector and VGA video connector
Expansion: none
Envizex II
These are the bigger brothers of the Entria II X-Terminals, driven by the same R4300 MIPS CPU and PCI I/O architecture.
The case was redesigned, is very easy to open and does not have any fans, making the terminal rather quiet.
CPU: 133 MHz NEC R4300
RAM: 96 MB maximum; three slots each take up to 32 MB 168-pin DIMMs (PC66/100/133 DIMMs in different sizes can be used, but only 8 MB of each module will be available; the larger modules (16 and 32 MB) were also HP-proprietary)
Video RAM: 2 or 4 MB VSIMM
Video chipset: ATI Mach64
Maximum video resolution/color-depth: 1600×1200/16-bit
I/O connectors: two RS-232 serial, two PS/2 and USB for keyboard/mouse devices, TP Ethernet connector and EVC video connector (requires an adapter-cable to use standard VGA monitors)
Expansion:
3.5″ PC floppy drive
Audio Kit with telephone I/O
Flash DIMMs card for booting and storing configuration and font files
100VG AnyLan PCI card
100 Mbit Ethernet PCI card
Combined BNC and AUI card (expands the onboard NIC)
Software
These X-Terminals/stations run a proprietary operating system from HP — Netstation, formerly Enware, with some versions apparently based on VxWorks (probably those with RISC support).
This software can theoretically run on any Unix system; native support is available for HP-UX 10, HP-UX 11, IBM AIX and Solaris 2.x. A generic installation image is provided for other Unix flavors; it can be used to install the software via the provided installation shell script on, for instance, various Linux or BSD flavors.
Netstation Version 7.1
The older Enware/Netstation Version 7.1, HP product B.07.11, supports the following i960-based terminals:
700/RX
Entria
Envizex
It was downloadable from a public HP FTP service (hprc.external.hp.com/B.07.11/), which has apparently since been discontinued.
The included documentation and technical reference describe the installation procedure. Generally, a Unix server is needed from which the station can boot its kernel and load its X server.
This is done via TFTP; the station can be managed locally via a configuration screen or remotely on the server via customizable configuration files.
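Because the station pulls its kernel and X server over TFTP, the transfer can be exercised by hand to confirm that the boot server is responding. The following is a minimal sketch of a TFTP read request written against the Python standard library only; the server name and boot file path are hypothetical placeholders rather than the actual names used by the Netstation software.

import socket
import struct

def tftp_get(server: str, filename: str, port: int = 69) -> bytes:
    """Fetch a file from a TFTP server (RFC 1350 read request, octet mode)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5.0)
    # RRQ packet: opcode 1, filename, NUL, transfer mode, NUL
    sock.sendto(struct.pack("!H", 1) + filename.encode("ascii") + b"\0octet\0", (server, port))
    data = bytearray()
    expected_block = 1
    while True:
        packet, addr = sock.recvfrom(4 + 512)            # DATA packets carry at most 512 bytes of payload
        opcode, block = struct.unpack("!HH", packet[:4])
        if opcode == 5:                                   # ERROR packet
            raise RuntimeError(f"TFTP error {block}: {packet[4:-1].decode(errors='replace')}")
        if opcode == 3 and block == expected_block:       # DATA packet in sequence
            data.extend(packet[4:])
            sock.sendto(struct.pack("!HH", 4, block), addr)   # ACK back to the server's transfer port
            expected_block += 1
            if len(packet) - 4 < 512:                     # a short block ends the transfer
                break
    sock.close()
    return bytes(data)

if __name__ == "__main__":
    # Host and file names are placeholders; the real names are defined by the Netstation installation.
    image = tftp_get("bootserver.example.com", "netstation/kernel.img")
    print(f"received {len(image)} bytes")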
Netstation Version 9.0
The most current available Netstation version is 9.0, HP product B.09.11. This version supports the newer MIPS-based X-Terminals:
Entria II
Envizex II
As with the older Netstation software, version 9.0 was available from an HP FTP service, which was discontinued (see above).
The newer X-Terminals (the II models) can boot in different ways: over an NFS mount, an SMB share or plain TFTP.
Included in the Netstation software is a native Java environment which makes execution of local Java applets on the terminal possible.
References
Specific references:
General references:
http://www.openpa.net/
External links
700/RX and envizex Terminal password unlocking (Bart Dopheide: n.d. Accessed September 2008)
Setting up the Envizex (N.a.: January 2008. Link updated July 2013) Installation instructions for the HP X-Terminal software
Information and Software for HP Envizex, Entria, and 700/RX X-Terminals (Brian McElroy: April 2000. Access September 2008)
Updated HP Envizex files and basic config (James Baker: Posted July 2013)
X-Terminals
X Window System
|
28737625
|
https://en.wikipedia.org/wiki/ISO%209564
|
ISO 9564
|
ISO 9564 is an international standard for personal identification number (PIN) management and security in financial services.
The PIN is used to verify the identity of a customer (the user of a bank card) within an electronic funds transfer system, and (typically) to authorize the transfer or withdrawal of funds. Therefore, it is important to protect PINs against unauthorized disclosure or misuse. Modern banking systems require interoperability between a variety of PIN entry devices, smart cards, card readers, card issuers, acquiring banks and retailers – including transmission of PINs between those entities – so a common set of rules for handling and securing PINs is required, both to ensure technical compatibility and a mutually agreed level of security. ISO 9564 provides principles and techniques to meet these requirements.
ISO 9564 comprises three parts, under the general title of Financial services — Personal Identification Number (PIN) management and security.
Part 1: Basic principles and requirements for PINs in card-based systems
ISO 9564-1:2011 specifies the basic principles and techniques of secure PIN management. It includes both general principles and specific requirements.
Basic principles
The basic principles of PIN management include:
PIN management functions shall be implemented in software and hardware in such a way that the functionality cannot be modified without detection, and that the data cannot be obtained or misused.
Encrypting the same PIN with the same key but for a different bank account shall not predictably give the same cipher text.
Security of the PIN encryption shall depend on secrecy of the key, not secrecy of the algorithm.
The PIN must always be stored encrypted or physically secured.
Only the customer (i.e. the user of a card) and/or authorized card issuer staff shall be involved with PIN selection or issuing. Where card issuer staff are involved, appropriate strictly enforced procedures shall be used.
A stored encrypted PIN shall be protected from substitution.
A PIN shall be revoked if it is compromised, or suspected to be.
The card issuer shall be responsible for PIN verification.
The customer shall be advised of the importance of keeping the PIN secret.
PIN entry devices
The standard specifies some characteristics required or recommended of PIN entry devices (also known as PIN pads), i.e. the device into which the customer enters the PIN, including:
All PIN entry devices shall allow entry of the digits zero to nine. Numeric keys may also have letters printed on them, e.g. as per E.161. These letters are only for the customers' convenience; internally, the PIN entry device only handles digits. (E.g. the standard does not support multi-tap or similar.) The standard also recommends that customers should be warned that not all devices may have letters.
The PIN entry device shall be physically secured so that it is not feasible to modify its operation or extract PINs or encryption keys from it.
The PIN entry device should be designed or installed so as to prevent other people from observing the PIN as it is entered.
The keyboard layout should be standardized, with consistent and unambiguous labels for function keys, such as "enter", "clear" (this entry) and "cancel" (the transaction). The standard also recommends specific colours for function keys: green for "enter", yellow for "clear", red for "cancel".
Smart card readers
A PIN may be stored in a secure smart card, and verified offline by that card. The PIN entry device and the reader used for the card that will verify the PIN may be integrated into a single physically secure unit, but they do not need to be.
Additional requirements that apply to smart card readers include:
The card reader should be constructed in such a way as to prevent someone monitoring the communications to the card by inserting a monitoring device into the card slot.
If the PIN entry device and the card reader are not both part of an integrated secure unit, then the PIN shall be encrypted while it is transmitted from the PIN entry device to the card reader.
Other specific PIN control requirements
Other specific requirements include:
All hardware and software used for PIN processing shall be implemented such that:
Their correct functioning can be assured.
They cannot be modified or accessed without detection.
The data cannot be inappropriately accessed, modified or misused.
The PIN cannot be determined by a brute-force search.
The PIN shall not be communicated verbally. In particular bank personnel shall never ask the customer to disclose the PIN, nor recommend a PIN value.
PIN encryption keys should not be used for any other purpose.
PIN length
The standard specifies that PINs shall be from four to twelve digits long, noting that longer PINs are more secure but harder to use. It also suggests that the issuer should not assign PINs longer than six digits.
PIN selection
There are three accepted methods of selecting or generating a PIN:
assigned derived PIN: The card issuer generates the PIN by applying some cryptographic function to the account number or other value associated with the customer.
assigned random PIN: The card issuer generates a PIN value using a random number generator.
customer selected PIN: The customer selects the PIN value.
PIN issuance and delivery
The standard includes requirements for keeping the PIN secret while transmitting it, after generation, from the issuer to the customer. These include:
The PIN is never available to the card issuing staff.
The PIN can only be displayed or printed for the customer in an appropriately secure manner. One method is a PIN mailer, an envelope designed so that it can be printed without the PIN being visible (even at printing time) until the envelope is opened. A PIN mailer must also be constructed so that any prior opening will be obvious to the customer, who will then be aware that the PIN may have been disclosed.
The PIN shall never appear where it can be associated with a customer's account. For example, a PIN mailer must not include the account number, but only sufficient information for its physical delivery (e.g. name and address). The PIN and the associated card shall not be mailed together, nor at the same time.
PIN encryption
To protect the PIN during transmission from the PIN entry device to the verifier, the standard requires that the PIN be encrypted, and specifies several formats that may be used. In each case, the PIN is encoded into a PIN block, which is then encrypted by an "approved algorithm" (according to part 2 of the standard).
The PIN block formats are:
Format 0
The PIN block is constructed by XOR-ing two 64-bit fields: the plain text PIN field and the account number field, both of which comprise 16 four-bit nibbles.
The plain text PIN field is:
one nibble with the value of 0, which identifies this as a format 0 block
one nibble encoding the length N of the PIN
N nibbles, each encoding one PIN digit
14−N nibbles, each holding the "fill" value 15 (i.e. 1111 in binary)
The account number field is:
four nibbles with the value of zero
12 nibbles containing the right-most 12 digits of the primary account number (PAN), excluding the check digit
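As a concrete illustration, the following sketch assembles the two nibble fields of a format 0 block and XORs them. The PIN and PAN values are examples chosen for this sketch, not values taken from the standard.

def format0_pin_block(pin: str, pan: str) -> bytes:
    """Build an ISO 9564-1 format 0 PIN block (illustrative sketch only)."""
    if not (4 <= len(pin) <= 12 and pin.isdigit()):
        raise ValueError("PIN must be 4 to 12 decimal digits")
    # Plain text PIN field: control nibble 0, PIN length, PIN digits, fill nibbles of 0xF.
    pin_field = "0" + format(len(pin), "X") + pin + "F" * (14 - len(pin))
    # Account number field: four zero nibbles, then the right-most 12 PAN digits
    # excluding the check digit (left-padded with zeros if the PAN is short).
    pan_field = "0000" + pan[:-1][-12:].rjust(12, "0")
    # XOR the two 64-bit fields.
    return (int(pin_field, 16) ^ int(pan_field, 16)).to_bytes(8, "big")

# Example values, illustrative only:
print(format0_pin_block("1234", "43219876543210987").hex().upper())  # 0412AC89ABCDEF67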
Format 1
This format should be used where no PAN is available. The PIN block is constructed by concatenating the PIN with a transaction number thus:
one nibble with the value of 1, which identifies this as a format 1 block
one nibble encoding the length N of the PIN
N nibbles, each encoding one PIN digit
14−N nibbles encoding a unique value, which may be a transaction sequence number, time stamp or random number
Format 2
Format 2 is for local use with off-line systems only, e.g. smart cards. The PIN block is constructed by concatenating the PIN with a filler value thus:
one nibble with the value of 2, which identifies this as a format 2 block
one nibble encoding the length N of the PIN
N nibbles, each encoding one PIN digit
14−N nibbles, each holding the "fill" value 15 (i.e. 1111 in binary)
(Except for the format value in the first nibble, this is identical to the plain text PIN field of format 0.)
Format 3
Format 3 is the same as format 0, except that the "fill" digits are random values from 10 to 15, and the first nibble (which identifies the block format) has the value 3.
Extended PIN blocks
Formats 0 to 3 are all suitable for use with the Triple Data Encryption Algorithm, as they correspond to its 64-bit block size. However, the standard allows for other encryption algorithms with larger block sizes, e.g. the Advanced Encryption Standard has a block size of 128 bits. In such cases the PIN must be encoded into an extended PIN block, the format of which is defined in a 2015 amendment to ISO 9564-1.
Part 2: Approved algorithms for PIN encipherment
ISO 9564-2:2014 specifies which encryption algorithms may be used for encrypting PINs. The approved algorithms are:
Triple Data Encryption Algorithm
RSA
Advanced Encryption Standard
Part 3 (withdrawn)
ISO 9564-3 Part 3: Requirements for offline PIN handling in ATM and POS systems, most recently published in 2003, was withdrawn in 2011 and its contents merged into part 1.
Part 4: Requirements for PIN handling in eCommerce for Payment Transactions
ISO 9564-4:2016 defines minimum security requirements and practices for the use of PINs and PIN entry devices in electronic commerce.
Notes
References
External links
Complete list of PIN-blocks, with examples
Financial technology
|
23892304
|
https://en.wikipedia.org/wiki/MixRadio
|
MixRadio
|
MixRadio was an online music streaming service owned by Line Corporation. The service was first introduced by Nokia in 2011 as Nokia Music for Windows Phone, serving as a successor to Nokia's previous Nokia Music Store/Comes with Music/Ovi Music Store initiatives, which were based on the LoudEye/OD2 platform. After Microsoft's acquisition of Nokia's mobile phone business, the service was briefly maintained by Microsoft Mobile Oy before being sold to the Japanese internet company Line Corporation in 2015. Following the acquisition, MixRadio expanded to Android and iOS in May 2015.
On 16 February 2016, Line announced that MixRadio would be discontinued, citing "a careful assessment of the subsidiary's overall performance" and "the financial challenges posed by the music streaming market".
Availability
The service, in MixRadio form, was available as a free app for Android, iOS, Apple Watch, the Amazon Appstore, BlackBerry, Windows Phone, the Adidas miCoach Smart Run and the Harman Kardon Omni speaker range.
Nokia Music Store was available in 33 countries:
Australia, Austria, Brazil, Canada, China, Egypt, Finland, France, Germany, India, Indonesia, Ireland, Italy, Lebanon, Malaysia, Mexico, Netherlands, Norway, Pakistan, Poland, Portugal, Russia, Saudi Arabia, South Africa, Singapore, Spain, Sweden, Switzerland, Thailand, Turkey, United Arab Emirates, United Kingdom, United States and Vietnam.
France
In France, the Nokia Music Store went live on 23 April 2008.
UAE
The Nokia Music Store went live in the United Arab Emirates in November 2008.
Australia
The Nokia Music Store went live in Australia on 22 April 2008.
India
The Nokia Music Store launched in India in 2009.
Middle East
Comes with Music was launched across 11 Middle Eastern territories, namely: Egypt, Lebanon, Jordan, Palestinian Territories, Iraq, Saudi Arabia, the United Arab Emirates, Kuwait, Qatar, Bahrain, and Oman, on 11 February 2010.
South Africa
The Nokia Music Store was launched in South Africa on 24 April 2009. The Comes With Music product followed on 27 August 2009. The offerings were rebranded to align with Nokia's Ovi branding on 9 September 2010.
Spain
The service was announced for Spain on 28 September 2008.
History
2007-2011
The service was originally launched in 2007 when Nokia set up their Nokia Comes With Music service, in partnership with Universal Music Group International, Sony BMG, Warner Music Group, EMI, and hundreds of independent labels and music aggregators, to allow 12, 18, or 24 months of unlimited free-of-charge music downloads with the purchase of a Nokia Comes With Music edition phone. Files could be downloaded on mobile devices or personal computers, and kept permanently.
On 29 August 2007 Nokia launched the Ovi Music Store as part of Ovi, the services portal from Nokia. The original idea behind the store was to give all users of MP3-capable Nokia phones a music store on the phone as well as on the PC. The Ovi Music Store officially opened in the UK on 1 October 2007, offering music from Sony BMG, Universal Music, EMI and Warner Music Group, as well as others. The service had its own software to serve as the front end of the store on the PC and on the phones, called Nokia Ovi Player and later Nokia Music Player.
In October 2008, Nokia announced the Nokia 5800, a direct competitor to the iPhone, and with it the Comes With Music service, which offered a year of free music downloads included in the price of the phone. This service was optional for carriers.
The box of the phone contained a card with an ID that would be linked to the PC (MAC address) and to the mobile phone (IMEI), so that both the PC and the phone had unlimited music downloads for a year.
Until 2010 the service used DRM-protected files that could not be burned onto CDs, allowing playback only from mobile devices and the PC software. Market conditions encouraged a move to DRM-free files, as evidenced in the Brainstorm Magazine article "Music wants to be mobile...and DRM free". If a user wanted to burn a song, they had to buy it from the store. During the latter part of 2010 and into 2011, Nokia Music continued developing its app client for the MeeGo platform along with its existing Symbian platform.
In January 2011 Nokia withdrew this programme in 27 countries, due to its failure to gain traction; existing subscribers could continue to download until their contracts ended. The service continued to be offered until 2014 in China, India, Indonesia, Brazil, Turkey and South Africa where take-up was better.
Nokia Music launched for the first time on the Windows Phone platform with the Lumia 710 and Lumia 800 on 26 October 2011 in London.
2012-2013
With the launch of Windows Phone 8 in late 2012, Nokia Music came to the platform with an app optimised for the new operating system from Microsoft. During the following months, Nokia Music was also released to the Windows 8 and Windows RT app stores.
Nokia Music launched in the U.S. market on 15 September 2012 with a performance by Green Day at Irving Plaza, accompanied by heavy social media involvement from AT&T, Nokia, the band and Warner Bros.
On 20 November 2013 Nokia renamed the service to "Nokia MixRadio". This change also made its way to the Windows 8 and Windows RT app stores. The following day, Nokia MixRadio made its official global launch with a special event in New York City where Nile Rodgers played.
2014-2016
Nokia MixRadio began the year with the launch of the MixRadio app for the Nokia Asha and Nokia X platforms at GSMA Mobile World Congress in February 2014.
The service was again renamed to only "MixRadio" on 1 July 2014, to reflect the change of ownership from Nokia to Microsoft. On 11 September 2014, the MixRadio application was announced for the Sonos range of wireless speakers with a companion app. MixRadio further extended their reach on 27 November 2014, with the application being added to the adidas miCoach Smart Run touchscreen watch.
On 18 December 2014, after mulling a spin-off of the service, Microsoft announced that it would sell MixRadio to Line Corporation, a subsidiary of Naver Corporation, for an undisclosed amount.
On 17 March 2015, the transaction was completed. At this time, beta versions of the app were released for Android and iOS.
On 19 May 2015 MixRadio announced the launch of the commercial iOS and Android apps with simultaneous launch events in New York City and Singapore. MixRadio also announced their partnership with HTC at this event to integrate MixRadio into the BlinkFeed software of HTC smartphones. The HTC BlinkFeed integration with MixRadio went live on 9 June 2015.
During the third quarter of 2015, MixRadio further expanded its reach to other platforms, namely Apple Watch, Amazon Appstore and Tizen. In the first week of November 2015, MixRadio launched as a fully featured web browser client for the Windows and OS X operating systems, mirroring the look and functionality of its smartphone apps. Starting in late September 2015, MixRadio was made available to download through the Amazon Appstore (and consequently BlackBerry devices). The app was also preinstalled on the Samsung Z3, a smartphone running the Tizen operating system.
Discontinuation
On 16 February 2016, Line announced that MixRadio would be discontinued, citing "a careful assessment of the subsidiary's overall performance" and "the financial challenges posed by the music streaming market".
MixRadio was officially closed on 21 March 2016.
Features
Catalogue
As of June 2015, MixRadio had licensed a collection of over 36 million music tracks, collated from major labels, major independents and local music labels.
Mixes
MixRadio operated on the premise of playlists, named "mixes". Upon loading the app for the first time, the user was prompted to select some of their favourite genres and then to choose favourite artists; this selection became the user's "My mix". Users were also able to select pre-made mixes by theme or genre (for example 'Top 40 Australian charts' or 'Rock workout') or create their own mix purely from a selection of artists.
Optimised mixes
Users of MixRadio were able to like ('heart') or dislike ('broken heart') a song as it played; these personal listening tastes were saved, and new songs based on the user's preferences were played next in the mix. MixRadio had a team of staff who personally curated mixes tailored to the data collected about listeners' tastes and habits.
Offline mixes
A key feature of MixRadio was the ability to download mixes for offline listening, which enabled users to listen to their favourite mixes when not in range of a Wi-Fi or mobile data connection.
Unlimited downloads
In India, MixRadio was available for all Nokia Asha, Lumia and Nokia X phones. Users could download songs from MixRadio for free for the first three months after purchase of a Nokia Asha, Lumia or X-series phone, following which the subscription could be renewed for a fixed time period through the purchase of a voucher, either online via the Oxicash website or offline through Nokia Care outlets. For Nokia Asha phones, the subscription could also be renewed via carrier billing, with the supported carriers being Airtel, Vodafone and Idea. However, the vouchers were no longer issued from May 2014, and in November 2014 Microsoft announced that unlimited downloads from MixRadio would no longer be supported.
MixRadio for Android and iOS public beta
In late August 2015, the MixRadio beta for Android was opened to the general public to help test the app and contribute feedback. The public beta was expanded to the MixRadio client on iOS in early October 2015.
External links
References
Naver Corporation
Windows Phone software
Nokia services
Online music stores of Japan
Android (operating system) software
IOS software
Symbian software
|
9583554
|
https://en.wikipedia.org/wiki/Dvdisaster
|
Dvdisaster
|
dvdisaster is a computer program that aims to enhance data survivability on optical discs by creating error detection and correction data, which is used for data recovery. dvdisaster works exclusively at the image level. The program can be used either to generate error-correcting code (ECC) data from an existing medium or to augment an ISO image with ECC data before it is written onto a medium. dvdisaster is free software available under the GNU General Public License.
Recovery modes
When an optical disc is physically damaged (such as by scratching), or has begun to deteriorate, some parts of the data on the disc may become unreadable. By utilizing the ECC data previously generated by dvdisaster, damaged parts of the disc data can be recovered.
The two modes of ECC data generation in dvdisaster make use of Reed–Solomon codes. In RS01 mode, the generated data is created from a disc image and is stored in a separate file, which must be written on some other medium. Alternatively, in RS02 mode, the ECC data is appended to the end of the disc image before the image is burned to disc.
When a CD or DVD has been augmented in RS02 mode, the 'augmented' section of the data remains invisible to the normal user, and the disc remains fully readable on computers without dvdisaster installed. If such a disc is later damaged, it may be fully recoverable by installing the software, accessing the Reed–Solomon error-correcting code using dvdisaster and rebuilding the image (to hard disk).
dvdisaster can be helpful to recover the contents of a damaged disc even when no ECC data is available. The entire disc can be read into an image, skipping damaged parts. dvdisaster can then repeatedly rescan just the missing parts until all damaged areas have been filled in by correct data.
Difference with other Reed-Solomon implementations
dvdisaster applies an image-based approach to data recovery rather than a file-based one. Reading a defective medium at the file level means trying to read as much data as possible from each file, but this approach reaches its limits when the damaged sectors carry book-keeping functions of the file system: the list of files on the medium may be truncated, or the mapping of data sectors to files may be incomplete. Files, or parts of files, may then be lost even though the respective data sectors would still be readable by the hardware. In contrast, reading at the image level uses direct communication with the drive hardware to access the data sectors.
Each unit of ECC data that dvdisaster places at the end of the image is calculated from data sectors spread throughout the original image. Each group of original data sectors together with its added ECC sector(s) forms a "cluster". Any part of a cluster can be recovered as long as the amount of damage in that cluster is smaller than the amount of added ECC data for that cluster; the location of the ECC data on the disc therefore does not matter.
Clusters work differently in Parchive, where each file is treated as a single block: with dvdisaster, data loss begins when one of the clusters has more than about 15% errors (unlikely, but theoretically possible with a few KiB of damage), while Parchive can recover from any error, provided that the PAR2 files are intact and that the number of corrupted files (however badly corrupted) is smaller than the number of available ECC files. dvdisaster also has a mode with separate ECC files.
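The effect of this interleaving can be illustrated with a toy sketch. The Python code below is not dvdisaster's actual code or on-disc layout: it uses a single XOR parity sector per cluster, so each cluster can repair only one lost sector, whereas dvdisaster's Reed–Solomon codes generalise this to many ECC sectors per cluster. The sector size and function names are assumptions made for the illustration.

    SECTOR = 2048  # bytes per data sector (assumed for this sketch)

    def build_parity(image_sectors, clusters):
        """Append one XOR parity sector per cluster; cluster i contains the
        sectors i, i+clusters, i+2*clusters, ... so that a contiguous scratch
        touches at most a few sectors of any single cluster."""
        parity = []
        for i in range(clusters):
            p = bytes(SECTOR)
            for s in image_sectors[i::clusters]:  # interleaved cluster members
                p = bytes(a ^ b for a, b in zip(p, s))
            parity.append(p)
        return parity

    def recover_sector(image_sectors, parity, missing, clusters):
        """Rebuild one unreadable sector, provided it is the only loss in its cluster."""
        i = missing % clusters
        p = parity[i]
        for idx in range(i, len(image_sectors), clusters):
            if idx != missing:
                p = bytes(a ^ b for a, b in zip(p, image_sectors[idx]))
        return p

Because the members of each cluster are spread evenly across the image, a localized defect spanning many consecutive sectors is divided among many clusters, each of which still has enough redundancy to repair its share.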
See also
Data recovery
Error detection and correction
Optical disc authoring
Reed–Solomon error correction
Parchive
List of data recovery software
List of free and open-source software packages
References
Ben Martin (February 7, 2008)
Andrea Ghirardini, Gabriele Faggioli, Computer forensics: Guida completa, Apogeo Editore, 2007, , pp. 345–347
External links
Free data recovery software
Software that uses GTK
Free software programmed in C
Software using the GPL license
2004 software
|
676465
|
https://en.wikipedia.org/wiki/Bdale%20Garbee
|
Bdale Garbee
|
Bdale Garbee is an American computer specialist who works with Linux, particularly Debian. He is also an amateur radio hobbyist (KB0G), and a member of AMSAT, Tucson Amateur Packet Radio (former vice-president), and the American Radio Relay League.
Free Software
Garbee has been a Debian developer since October 1994, the earliest days of the project. He set up the original developer machine (named master.debian.org) in 1995. He served as a Debian Project Leader (DPL) for one year (2002–2003), and served as chairman of the Debian Technical Committee.
Garbee has served on the board of directors of Software in the Public Interest, the non-profit organization that collects donations for Debian and many other Free Software projects, since July 29, 2004, and was elected president on August 1, 2006.
Garbee was formerly on the board of directors of the Linux Foundation where he represented the interests of individual members and developers.
In September 2008, he received a "Lutèce d'Or" during the French event Paris Capitale du Libre as the FLOSS personality of the year.
In March 2011, Bdale Garbee agreed to join the FreedomBox Foundation's board of directors and chair its technical advisory committee.
He retired at the end of August 2012 from long service as the Open Source & Linux Chief Technologist at Hewlett-Packard. From September to December 2013 he was a part-time Senior Adviser to the Open Source Group at Samsung.
He was hired back by HP as a Fellow in the Office of the CTO in 2014, with the goal of helping to drive HP's open-source strategy. He retired for the second time in September 2016.
Personal
The name "Bdale" is an abbreviation of "Barksdale", given in honor of his maternal grandfather, Judge Alfred D. Barksdale.
At linux.conf.au 2009, Garbee's 27-year-old beard was removed by Linus Torvalds to raise funds for Tasmanian Devil facial tumour disease research. They raised between AU$35,000 and AU$40,000.
His house in Colorado was destroyed by the Black Forest Fire in June 2013.
References
External links
Bdale Garbee's homepage
Beyond Doing Business, podcast of Garbee's keynote at OSCON, 2004.
A conversation with Bdale Garbee, iTWire, January 2009
Debian Project leaders
Living people
Year of birth missing (living people)
Hewlett-Packard people
American computer scientists
Amateur radio people
|
14032417
|
https://en.wikipedia.org/wiki/History%20of%20Linux
|
History of Linux
|
Linux began in 1991 as a personal project by Finnish student Linus Torvalds: to create a new free operating system kernel. The resulting Linux kernel has been marked by constant growth throughout its history. Since the initial release of its source code in 1991, it has grown from a small number of C files under a license prohibiting commercial distribution to the 4.15 version in 2018 with more than 23.3 million lines of source code, not counting comments, under the GNU General Public License v2.
Events leading to creation
After AT&T had dropped out of the Multics project, the Unix operating system was conceived and implemented by Ken Thompson and Dennis Ritchie (both of AT&T Bell Laboratories) in 1969 and first released in 1970. Later they rewrote it in a new programming language, C, to make it portable. The availability and portability of Unix caused it to be widely adopted, copied and modified by academic institutions and businesses.
In 1977, the Berkeley Software Distribution (BSD) was developed by the Computer Systems Research Group (CSRG) from UC Berkeley, based on the 6th edition of Unix from AT&T. Since BSD contained Unix code that AT&T owned, AT&T filed a lawsuit (USL v. BSDi) in the early 1990s against the University of California. This strongly limited the development and adoption of BSD.
In 1983, Richard Stallman started the GNU project with the goal of creating a free UNIX-like operating system. As part of this work, he wrote the GNU General Public License (GPL). By the early 1990s, there was almost enough available software to create a full operating system. However, the GNU kernel, called Hurd, failed to attract enough development effort, leaving GNU incomplete.
In 1985, Intel released the 80386, the first x86 microprocessor with a 32-bit instruction set and a memory management unit with paging.
In 1986, Maurice J. Bach, of AT&T Bell Labs, published The Design of the UNIX Operating System. This definitive description principally covered the System V Release 2 kernel, with some new features from Release 3 and BSD.
In 1987, MINIX, a Unix-like system intended for academic use, was released by Andrew S. Tanenbaum to exemplify the principles conveyed in his textbook, Operating Systems: Design and Implementation. While source code for the system was available, modification and redistribution were restricted. In addition, MINIX's 16-bit design was not well adapted to the 32-bit features of the increasingly cheap and popular Intel 386 architecture for personal computers. In the early nineties a commercial UNIX operating system for Intel 386 PCs was too expensive for private users.
These factors and the lack of a widely adopted, free kernel provided the impetus for Torvalds' starting his project. He has stated that if either the GNU Hurd or 386BSD kernels had been available at the time, he likely would not have written his own.
The creation of Linux
In 1991, while studying computer science at University of Helsinki, Linus Torvalds began a project that later became the Linux kernel. He wrote the program specifically for the hardware he was using and independent of an operating system because he wanted to use the functions of his new PC with an 80386 processor. Development was done on MINIX using the GNU C Compiler.
As Torvalds wrote in his book Just for Fun, he eventually ended up writing an operating system kernel. On 25 August 1991, at age 21, he announced this system in a Usenet posting to the newsgroup "comp.os.minix".
According to Torvalds, Linux began to gain importance in 1992 after the X Window System was ported to Linux by Orest Zborowski, which allowed Linux to support a GUI for the first time.
Naming
Linus Torvalds had wanted to call his invention Freax, a portmanteau of "free", "freak", and "x" (as an allusion to Unix). During the start of his work on the system, he stored the files under the name "Freax" for about half of a year. Torvalds had already considered the name "Linux", but initially dismissed it as too egotistical.
In order to facilitate development, the files were uploaded to the FTP server (ftp.funet.fi) of FUNET in September 1991. Ari Lemmke at Helsinki University of Technology (HUT), who was one of the volunteer administrators for the FTP server at the time, did not think that "Freax" was a good name. So, he named the project "Linux" on the server without consulting Torvalds. Later, however, Torvalds consented to "Linux".
To demonstrate how the word "Linux" should be pronounced, Torvalds included an audio guide with the kernel source code.
Linux under the GNU GPL
Torvalds first published the Linux kernel under its own licence, which had a restriction on commercial activity.
The software to use with the kernel was software developed as part of the GNU project licensed under the GNU General Public License, a free software license. The first release of the Linux kernel, Linux 0.01, included a binary of GNU's Bash shell.
In the "Notes for linux release 0.01", Torvalds lists the GNU software that is required to run Linux:
In 1992, he suggested releasing the kernel under the GNU General Public License. He first announced this decision in the release notes of version 0.12. In the middle of December 1992 he published version 0.99 using the GNU GPL. Linux and GNU developers worked to integrate GNU components with Linux to make a fully functional and free operating system. Torvalds has stated, "making Linux GPLed was definitely the best thing I ever did."
Around 2000, Torvalds clarified that the Linux kernel uses the GPLv2 license, without the common "or later clause".
After years of draft discussions, the GPLv3 was released in 2007; however, Torvalds and the majority of kernel developers decided against adopting the new license.
GNU/Linux naming controversy
The designation "Linux" was initially used by Torvalds only for the Linux kernel. The kernel was, however, frequently used together with other software, especially that of the GNU project. This quickly became the most popular adoption of GNU software. In June 1994 in GNU's bulletin, Linux was referred to as a "free UNIX clone", and the Debian project began calling its product Debian GNU/Linux. In May 1996, Richard Stallman published the editor Emacs 19.31, in which the type of system was renamed from Linux to Lignux. This spelling was intended to refer specifically to the combination of GNU and Linux, but this was soon abandoned in favor of "GNU/Linux".
This name garnered varying reactions. The GNU and Debian projects use the name, although most people simply use the term "Linux" to refer to the combination.
Official mascot
Torvalds announced in 1996 that there would be a mascot for Linux, a penguin. This came about because, when the mascot was being chosen, Torvalds mentioned he had been bitten by a little penguin (Eudyptula minor) on a visit to the National Zoo & Aquarium in Canberra, Australia. Larry Ewing provided the original draft of today's well-known mascot based on this description. The name Tux was suggested by James Hughes as a derivative of Torvalds' UniX, along with being short for tuxedo, a type of suit with color similar to that of a penguin.
New development
Linux Community
The largest part of the work on Linux is performed by the community: the thousands of programmers around the world that use Linux and send their suggested improvements to the maintainers. Various companies have also helped not only with the development of the kernels, but also with the writing of the body of auxiliary software, which is distributed with Linux. As of February 2015, over 80% of Linux kernel developers are paid.
Linux is released both by organized community projects such as Debian, and by projects connected directly with companies, such as Fedora and openSUSE. The members of the respective projects meet at various conferences and fairs, in order to exchange ideas. One of the largest of these fairs is the LinuxTag in Germany, where about 10,000 people assemble annually to discuss Linux and the projects associated with it.
Open Source Development Lab and Linux Foundation
The Open Source Development Lab (OSDL) was created in the year 2000, and is an independent nonprofit organization which pursues the goal of optimizing Linux for employment in data centers and in the carrier range. It served as sponsored working premises for Linus Torvalds and also for Andrew Morton (until the middle of 2006 when Morton transferred to Google). Torvalds worked full-time on behalf of OSDL, developing the Linux kernels.
On 22 January 2007, OSDL and the Free Standards Group merged to form The Linux Foundation, narrowing their respective focuses to that of promoting Linux in competition with Microsoft Windows. As of 2015, Torvalds remains with the Linux Foundation as a Fellow.
Companies
Despite being freely available, companies profit from Linux. These companies, many of which are also members of the Linux Foundation, invest substantial resources into the advancement and development of Linux, in order to make it suited for various application areas. This includes hardware donations for driver developers, cash donations for people who develop Linux software, and the employment of Linux programmers at the company. Some examples are Dell, IBM and Hewlett-Packard, which validate, use and sell Linux on their own servers, and Red Hat (now part of IBM) and SUSE, which maintain their own enterprise distributions. Likewise, Digia supports Linux by the development and LGPL licensing of Qt, which makes the development of KDE possible, and by employing some of the X and KDE developers.
Desktop environments
KDE was the first advanced desktop environment (version 1.0 released in July 1998), but it was controversial due to the then-proprietary Qt toolkit used. GNOME was developed as an alternative because of these licensing questions. The two use different underlying toolkits and thus involve different programming, and are sponsored by two different groups: the German nonprofit KDE e.V. and the United States nonprofit GNOME Foundation.
As of April 2007, one journalist estimated that KDE had 65% of market share versus 26% for GNOME. In January 2008, KDE 4 was released prematurely with bugs, driving some users to GNOME. GNOME 3, released in April 2011, was called an "unholy mess" by Linus Torvalds due to its controversial design changes.
Dissatisfaction with GNOME 3 led to a fork, Cinnamon, which is developed primarily by Linux Mint developer Clement LeFebvre. This restores the more traditional desktop environment with marginal improvements.
The relatively well-funded distribution Ubuntu designed (and released in June 2011) another user interface called Unity, which is radically different from the conventional desktop environment and has been criticized as having various flaws and lacking configurability. The motivation was a single desktop environment for desktops and tablets, although as of November 2012 Unity had yet to be used widely on tablets. However, the smartphone and tablet version of Ubuntu and its Unity interface was unveiled by Canonical Ltd in January 2013. In April 2017, Canonical canceled the phone-based Ubuntu Touch project entirely in order to focus on IoT projects such as Ubuntu Core; it also dropped Unity and began to use GNOME for the Ubuntu releases from 17.10 onward.
"Linux is obsolete"
In 1992, Andrew S. Tanenbaum, recognized computer scientist and author of the Minix microkernel system, wrote a Usenet article on the newsgroup comp.os.minix with the title "Linux is obsolete", which marked the beginning of a famous debate about the structure of the then-recent Linux kernel. Among the most significant criticisms were that:
The kernel was monolithic and thus old-fashioned.
The lack of portability, due to the use of exclusive features of the Intel 386 processor. "Writing a new operating system that is closely tied to any particular piece of hardware, especially a weird one like the Intel line, is basically wrong."
There was no strict control of the source code by any individual person.
Linux employed a set of features which were useless (Tanenbaum believed that multithreaded file systems were simply a "performance hack").
Tanenbaum's prediction that Linux would become outdated within a few years and replaced by GNU Hurd (which he considered to be more modern) proved incorrect. Linux has been ported to all major platforms and its open development model has led to an exemplary pace of development. In contrast, GNU Hurd has not yet reached the level of stability that would allow it to be used on a production server. His dismissal of the Intel line of 386 processors as 'weird' has also proven short-sighted, as the x86 series of processors and the Intel Corporation would later become near ubiquitous in personal computers and servers.
In his unpublished book Samizdat, Kenneth Brown claims that Torvalds illegally copied code from MINIX. In May 2004, these claims were refuted by Tanenbaum, the author of MINIX.
The book's claims, methodology and references were seriously questioned and in the end it was never released and was delisted from the distributor's site.
Microsoft competition and collaboration
Although Torvalds has said that Microsoft's feeling threatened by Linux in the past was of no consequence to him, the Microsoft and Linux camps had a number of antagonistic interactions between 1997 and 2001. This became quite clear for the first time in 1998, when the first Halloween document was brought to light by Eric S. Raymond. This was a short essay by a Microsoft developer that sought to lay out the threats posed to Microsoft by free software and identified strategies to counter these perceived threats.
Competition entered a new phase in the beginning of 2004, when Microsoft published results from customer case studies evaluating the use of Windows vs. Linux under the name “Get the Facts” on its own web page. Based on inquiries, research analysts, and some Microsoft-sponsored investigations, the case studies claimed that enterprise use of Linux on servers compared unfavorably to the use of Windows in terms of reliability, security, and total cost of ownership.
In response, commercial Linux distributors produced their own studies, surveys and testimonials to counter Microsoft's campaign. Novell's web-based campaign at the end of 2004 was entitled “Unbending the truth” and sought to outline the advantages as well as dispelling the widely publicized legal liabilities of Linux deployment (particularly in light of the SCO v IBM case). Novell particularly referenced the Microsoft studies in many points. IBM also published a series of studies under the title “The Linux at IBM competitive advantage” to again parry Microsoft's campaign. Red Hat had a campaign called “Truth Happens” aimed at letting the performance of the product speak for itself, rather than advertising the product by studies.
In the autumn of 2006, Novell and Microsoft announced an agreement to co-operate on software interoperability and patent protection. This included an agreement that customers of either Novell or Microsoft may not be sued by the other company for patent infringement. This patent protection was also expanded to non-commercial free software developers. The last part was criticized because it only included non-commercial free software developers.
In July 2009, Microsoft submitted 22,000 lines of source code to the Linux kernel under the GPLv2 license, which were subsequently accepted. Although this has been referred to as "a historic move" and as a possible bellwether of an improvement in Microsoft's corporate attitudes toward Linux and open-source software, the decision was not altogether altruistic, as it promised to lead to significant competitive advantages for Microsoft and avoided legal action against the company. Microsoft was actually compelled to make the code contribution when Vyatta principal engineer and Linux contributor Stephen Hemminger discovered that Microsoft had incorporated a Hyper-V network driver, with GPL-licensed open source components, statically linked to closed-source binaries in contravention of the GPL licence. Microsoft contributed the drivers to rectify the licence violation, although the company attempted to portray it as a charitable act, rather than one to avoid legal action against it. In the past Microsoft had termed Linux a "cancer" and "communist".
By 2011, Microsoft had become the 17th largest contributor to the Linux kernel. As of February 2015, Microsoft was no longer among the top 30 contributing sponsor companies.
The Windows Azure project was announced in 2008 and renamed to Microsoft Azure. It incorporates Linux as part of its suite of server-based software applications. In August 2018, SUSE created a Linux kernel specifically tailored to the cloud computing applications under the Microsoft Azure project umbrella. Speaking about the kernel port, a Microsoft representative said "The new Azure-tuned kernel allows those customers to quickly take advantage of new Azure services such as Accelerated Networking with SR-IOV."
In recent years, Torvalds has expressed a neutral to friendly attitude towards Microsoft following the company's new embrace of open source software and collaboration with the Linux community. "The whole anti-Microsoft thing was sometimes funny as a joke, but not really." said Torvalds in an interview with ZDNet. "Today, they're actually much friendlier. I talk to Microsoft engineers at various conferences, and I feel like, yes, they have changed, and the engineers are happy. And they're like really happy working on Linux. So I completely dismissed all the anti-Microsoft stuff."
SCO
In March 2003, the SCO Group accused IBM of violating its copyright on UNIX by transferring code from UNIX to Linux. SCO claimed ownership of the copyrights on UNIX and filed a lawsuit against IBM. Red Hat countersued, and SCO has since filed other related lawsuits. At the same time as its lawsuit, SCO began selling Linux licenses to users who did not want to risk a possible complaint on the part of SCO. Since Novell also claimed the copyrights to UNIX, it filed suit against SCO.
In early 2007, SCO filed the specific details of a purported copyright infringement. Despite previous claims that it was the rightful copyright holder of 1 million lines of code, SCO specified only 326 lines of code, most of which were uncopyrightable. In August 2007, the court in the Novell case ruled that SCO did not actually hold the Unix copyrights to begin with, though the Tenth Circuit Court of Appeals ruled in August 2009 that the question of who held the copyright properly remained for a jury to answer. The jury case was decided on 30 March 2010 in Novell's favour.
SCO has since filed for bankruptcy.
Trademark rights
In 1994 and 1995, several people from different countries attempted to register the name "Linux" as a trademark. Thereupon requests for royalty payments were issued to several Linux companies, a step with which many developers and users of Linux did not agree. Linus Torvalds clamped down on these companies with help from Linux International and was granted the trademark to the name, which he transferred to Linux International. Protection of the trademark was later administered by a dedicated foundation, the non-profit Linux Mark Institute. In 2000, Linus Torvalds specified the basic rules for the assignment of the licenses. This means that anyone who offers a product or a service with the name Linux must possess a license for it, which can be obtained through a unique purchase.
In June 2005, a new controversy developed over the use of royalties generated from the use of the Linux trademark. The Linux Mark Institute, which represents Linus Torvalds' rights, announced a price increase from 500 to 5,000 dollars for the use of the name. This step was justified as being needed to cover the rising costs of trademark protection.
The community reacted to this increase with displeasure, so on 21 August 2005 Linus Torvalds made an announcement to dissolve the misunderstandings. In an e-mail he described the current situation as well as the background in detail, and also dealt with the question of who had to pay license costs.
The Linux Mark Institute has since begun to offer a free, perpetual worldwide sublicense.
Chronology
1991: The Linux kernel is publicly announced on 25 August by the 21-year-old Finnish student Linus Benedict Torvalds. Version 0.01 is released publicly on 17 September.
1992: The Linux kernel is relicensed under the GNU GPL. The first Linux distributions are created.
1993: Over 100 developers work on the Linux kernel. With their assistance the kernel is adapted to the GNU environment, which creates a large spectrum of application types for Linux. The oldest currently existing Linux distribution, Slackware, is released for the first time. Later in the same year, the Debian project is established. Today it is the largest community distribution.
1994: Torvalds judges all components of the kernel to be fully matured: he releases version 1.0 of Linux. The XFree86 project contributes a graphical user interface (GUI). Commercial Linux distribution makers Red Hat and SUSE publish version 1.0 of their Linux distributions.
1995: Linux is ported to the DEC Alpha and to the Sun SPARC. Over the following years it is ported to an ever-greater number of platforms.
1996: Version 2.0 of the Linux kernel is released. The kernel can now serve several processors at the same time using symmetric multiprocessing (SMP), and thereby becomes a serious alternative for many companies.
1998: Many major companies such as IBM, Compaq and Oracle announce their support for Linux. The Cathedral and the Bazaar is first published as an essay (later as a book), resulting in Netscape publicly releasing the source code to its Netscape Communicator web browser suite. Netscape's actions and crediting of the essay brings Linux's open source development model to the attention of the popular technical press. In addition a group of programmers begins developing the graphical user interface KDE.
1999: A group of developers begin work on the graphical environment GNOME, destined to become a free replacement for KDE, which at the time, depended on the then proprietary Qt toolkit. During the year IBM announces an extensive project for the support of Linux. Version 2.2 of the Linux kernel is released.
2000: Dell announces that it is now the No. 2 provider of Linux-based systems worldwide and the first major manufacturer to offer Linux across its full product line.
2001: Version 2.4 of the Linux kernel is released.
2002: The media reports that "Microsoft killed Dell Linux"
2003: Version 2.6 of the Linux kernel is released.
2004: The XFree86 team splits up and joins with the existing X standards body to form the X.Org Foundation, which results in a substantially faster development of the X server for Linux.
2005: The project openSUSE begins a free distribution from Novell's community. Also the project OpenOffice.org introduces version 2.0 that then started supporting OASIS OpenDocument standards.
2006: Oracle releases its own distribution of Red Hat Enterprise Linux. Novell and Microsoft announce cooperation for a better interoperability and mutual patent protection.
2007: Dell starts distributing laptops with Ubuntu pre-installed on them.
2009: Red Hat's market capitalization equals Sun's, interpreted as a symbolic moment for the "Linux-based economy".
2011: Version 3.0 of the Linux kernel is released.
2012: The aggregate Linux server market revenue exceeds that of the rest of the Unix market.
2013: Google's Linux-based Android claims 75% of the smartphone market share, in terms of the number of phones shipped.
2014: Ubuntu claims 22,000,000 users.
2015: Version 4.0 of the Linux kernel is released.
2019: Version 5.0 of the Linux kernel is released.
See also
History of free software
Linux kernel version history
References
External links
LINUX's History by Linus Torvalds
History of Linux by Ragib Hasan
Changes done in each Linux kernel release (since version 2.5.1)
Linux
Linux kernel
Linux
Linus Torvalds
|
22006015
|
https://en.wikipedia.org/wiki/Telos%20%28company%29
|
Telos (company)
|
Telos Corporation is an information technology (IT) and cybersecurity company located in Ashburn, Virginia. The company’s name is derived from the Greek word for “purpose” or “goal". Telos primarily serves government and enterprise clients, receiving a large number of its contracts from the United States Department of Defense (DoD).
History
Telos was founded in 1969 in Santa Monica, California and incorporated in Maryland in 1971.
John B. Wood joined the company in 1992 and became president and chief executive in 1994. Today, Telos is headquartered in Ashburn, Virginia.
On 16 June 2020, Telos Corporation reported its Automated Message Handling System (AMHS) service was granted an additional five years and $15.6 million contract by the Defense Information Systems Agency (DISA).
In November 2020, the company filed for an initial public offering.
Security areas
Telos is an information technology company. Customers are primarily military, intelligence and civilian agencies of the US government and NATO allies.
The company provides security in four major areas:
Secure wired and wireless networks for DoD and federal agencies.
Software products and consulting services to automate, streamline, and enforce IT security and risk management processes, including cybersecurity consulting services and Xacta brand cyber risk management.
Secure, automated, web-based software for distributing and managing organizational messaging traffic via the Automated Message Handling System.
Identity management, logical and physical network security, and identity vetting services.
Sales
Telos products are available through these contract vehicles:
Telos GSA Schedule
Telos Identity Management Solutions GSA Schedule
ADMC-2 Army Desktop and Mobile Computing
Air Force Network Centric Solutions (NETCENTS-2)
Alliant 2
Army Cloud Computing Enterprise Transformation (ACCENT)
Army Desktop and Mobile Computing-2 (ADMC-2)
Army Information Management Communications Services III (IMCS III)
Army Information Technology Enterprise Solutions – 3 Hardware (ITES-3H)
Defense Manpower Data Center (DMDC)
DHS EAGLE II
DoD ESI/GSA Smartbuy
NETCENTS-2 Air Force Network Operations and Infrastructure Solutions
Risk Management Framework (RMF) Services
Transportation Security Administration (TSA)
Advisory Board
On 20 May 2020, Telos named the retired four-star Army general and first commander of USCYBERCOM, Keith Alexander, as the inaugural leader of its advisory board.
References
1969 establishments in California
Defense companies of the United States
American companies established in 1969
Consulting firms established in 1969
Information technology consulting firms of the United States
Loudoun County, Virginia
2020 initial public offerings
Companies listed on the Nasdaq
Companies based in Virginia
|
23876064
|
https://en.wikipedia.org/wiki/Associate%20Director%20of%20National%20Intelligence%20and%20Chief%20Information%20Officer
|
Associate Director of National Intelligence and Chief Information Officer
|
In the United States the Associate Director of National Intelligence and Chief Information Officer (Intelligence Community CIO, ADNI/CIO or IC CIO) is charged with directing and managing activities relating to information technology for the Intelligence Community (IC) and the Office of the Director of National Intelligence (ODNI). The IC CIO reports directly to the Director of National Intelligence (DNI). As of January 20, 2021, Michael Waschull serves as the Acting IC Chief Information Officer.
Mission
The IC CIO has four primary areas of responsibility:
Manage activities relating to the information technology infrastructure and enterprise architecture of the Intelligence Community;
Exercise procurement approval authority over all information technology items related to the enterprise architecture of all Intelligence Community components;
Direct and manage all information technology-related procurement for the Intelligence Community; and
Ensure all expenditures for information technology and research and development activities are consistent with the Intelligence Community enterprise architecture and the strategy of the Director for such architecture.
History
The Office of the IC CIO was established by Intelligence Community Directive (ICD) 500, "Director of National Intelligence Chief Information Officer," effective August 7, 2008. ICD 500 superseded Director of Central Intelligence Directive (DCID) 1/6, "The Intelligence Community Chief Information Officer."
List of Associate Directors of National Intelligence and Chief Information Officers
Dale Meyerrose December 21, 2005 – September 2008
Patrick Gorman (acting) October 2008 – January 2009
Priscilla Guthrie May 26, 2009 – November 19, 2010
Charlene Leubecker (acting) November 19, 2010 – ?
Al Tarasiuk February 2011 – April 28, 2015
Dr. Raymond "Ray" Cook July 23, 2015 – January 20, 2017
Jennifer Kron (acting) January 20, 2017 – September 11, 2017
John Sherman September 11, 2017 – January 20, 2021
Michael Waschull (acting) January 20, 2021 – present
References
External links
CIA CIO To Head IT For Intelligence Community
Intelligence Community Directive 500: Director of National Intelligence Chief Information Officer
Office of the Director of National Intelligence
US Intelligence Community
An Overview of the United States Intelligence Community
United States intelligence agencies
|
32737852
|
https://en.wikipedia.org/wiki/Glenda%20Schroeder
|
Glenda Schroeder
|
Glenda Schroeder is an American software engineer noted for implementing the first command-line user interface shell and publishing one of the earliest research papers describing electronic mail systems while working as a member of the staff at the MIT Computation Center in 1965.
Biography
Early operating system command-line interfaces were implemented as part of resident monitor programs, and could not easily be replaced. In 1964, MIT Computation Center staff member Louis Pouzin developed the RUNCOM tool for executing command scripts while allowing argument substitution. Pouzin coined the term "shell" to describe the technique of using commands like a programming language, and wrote a paper describing how to implement the idea in the Multics operating system. Pouzin returned to his native France in 1965, and Schroeder developed the first Multics shell with the assistance of an unnamed man from General Electric. Schroeder's Multics shell was the predecessor to the Unix shell, which is still in use today.
Working with Pat Crisman and Louis Pouzin, she also described an early e-mail system called "MAIL" to allow users on the Compatible Time-Sharing System (CTSS) at MIT to send notifications to others about backups of files. Each user's messages would be added to a local file called "MAIL BOX", which would have a “private” mode so that only the owner could read or delete messages. The proposed uses of the proto-email system were for communication from CTSS to notify users that files had been backed up, discussion between authors of CTSS commands, and communication from command authors to the CTSS manual editor. The service only made it possible to leave messages for the other users on the same computer. The idea to allow users to send messages between computers was developed later by Ray Tomlinson in 1971.
References
Internet pioneers
Women Internet pioneers
Living people
American women engineers
American software engineers
Operating system people
Massachusetts Institute of Technology people
Multics people
American computer programmers
Year of birth missing (living people)
20th-century American women scientists
21st-century American women scientists
20th-century American engineers
21st-century American engineers
|
172980
|
https://en.wikipedia.org/wiki/Henry%20Spencer
|
Henry Spencer
|
Henry Spencer (born 1955) is a Canadian computer programmer and space enthusiast. He wrote "regex", a widely used software library for regular expressions, and co-wrote C News, a Usenet server program. He also wrote The Ten Commandments for C Programmers. He is coauthor, with David Lawrence, of the book Managing Usenet. While working at the University of Toronto he ran the first active Usenet site outside the U.S., starting in 1981. His records from that period were eventually acquired by Google to provide an archive of Usenet in the 1980s.
The first international Usenet site was run in Ottawa, in 1981; however, it is generally not remembered, as it served merely as a read-only medium. Later in 1981, Spencer acquired a Usenet feed from Duke University, and brought "utzoo" online; the earliest public archives of Usenet date from May 1981 as a result.
The small size of Usenet in its youthful days, and Spencer's early involvement, made him a well-recognised participant; this is commemorated in Vernor Vinge's 1992 novel A Fire Upon the Deep. The novel featured an interstellar communications medium remarkably similar to Usenet, down to the author including spurious message headers; one of the characters who appeared solely through postings to this was modeled on Spencer (and, slightly obliquely, named for him).
He is also credited with the claim that "Those who do not understand Unix are condemned to reinvent it, poorly."
Preserving Usenet
In mid-December 2001, Google unveiled its improved Usenet archives, which now go more than a decade deeper into the Internet's past than did the millions of posts that the company had originally acquired when it bought an existing archive called Deja News.
Between 1981 and 1991, while running the zoology department's computer system at the University of Toronto, Spencer copied more than 2 million Usenet messages onto magnetic tapes. The 141 tapes wound up at the University of Western Ontario, where Google's Michael Schmidt tracked them down and, with the help of David Wiseman and others, got them transferred onto disks and into Google's archives.
Free Software contributions
Henry Spencer helped Geoff Collyer write C News in 1987.
At around the same time he wrote a non-proprietary replacement for regex(3), the Unix library for handling regular expressions, and made it freely available; his API followed that of Eighth Edition Research Unix.
Spencer's library has been used in many software packages, including Tcl, MySQL, and PostgreSQL, as well as being adapted for others, including early versions of Perl. Circa 1993, Spencer donated a second version of his RE library to 4.4BSD, following the POSIX standard for regular expressions.
Spencer was technical lead on the FreeS/WAN project, implementing an IPsec cryptographic protocol stack for Linux.
He also wrote 'aaa' (Amazing Awk Assembler), which is one of the longest and most complex programs ever written in the awk programming language.
He also developed a 4 point font used by entomologists in labeling pinned insect specimens.
Space
Spencer is a founding member of the Canadian Space Society, and has served on its board of directors several times since 1984. He did mission analysis (planning of launch and orbits) for the CSS's Canadian Solar Sail project (now defunct), and was Software Architect for MOST, a Canadian science microsatellite dedicated to studying variable light from stars and extrasolar planets launched by Eurockot in 2003.
The asteroid 117329 Spencer is named in his honour.
He is a highly regarded space enthusiast, and is a familiar and respected presence on several space forums on Usenet and the Internet. From 1983 to 2007 Spencer posted over 34,000 messages to the sci.space.* newsgroups. His knowledge of space history and technology is such that the "I Corrected Henry Spencer" virtual T-shirt award was created as a reward for anyone who can catch him in an error of fact.
References
External links
Brief biography of Spencer at O'Reilly Media
Spencer presentation at the Apollo Lunar Surface Journal
Janet Wong, News@UofT, December 5, 2001
Asteroids 101 (6:33), The Dawn Mission (4:38), Early Days (6:38) – Moon and Back, three videos of interviews at SpaceAccess 2013 conference, April 2013.
The Ten Commandments for C Programmers (Annotated Edition) by Henry Spencer
— A paper he wrote with Geoff Collyer about software portability.
aaa - the Amazing Awk Assembler by Henry Spencer
awf - the Amazingly Workable Formatter by Henry Spencer
Living people
Unix people
Usenet people
Free software programmers
Duke University alumni
1955 births
|
16003010
|
https://en.wikipedia.org/wiki/Mocmex
|
Mocmex
|
Mocmex is a trojan horse that was found in a digital photo frame in February 2008. It was the first serious computer virus found on a digital photo frame. The virus was traced back to a group in China.
Overview
Mocmex collects passwords for online games. The virus is able to recognize and block antivirus protection from more than a hundred security companies and the Windows built-in firewall. Mocmex downloads files from remote locations and hides randomly named files on infected computers. Therefore, the virus is difficult to remove. Furthermore, it spreads to other portable storage devices that were plugged into an infected computer. Industry experts describe the writers of the Trojan Horse as professionals and describe Mocmex as a "nuclear bomb of malware".
Protection
Although Mocmex is a serious virus, protecting against it is not hard. First, it is important to keep antivirus software updated, as up-to-date antivirus software works unless the malware writers get ahead of the antivirus vendors (which is what happened with the new Mocmex). Other precautions are to check a digital photo frame for malware on a Macintosh or Linux machine before plugging it into a computer running Windows, or to disable autorun on Windows.
Effects
A large proportion of digital photo frames are manufactured in China, particularly in Shenzhen. The negative publicity that followed media reports of the virus was expected to have negative effects on Chinese manufacturers. Mocmex appeared just a few months after quality problems with toys manufactured in China had attracted the attention of Western countries, contributing to a low-quality image for Chinese products.
References
Digital photography
Display technology
Trojan horses
Hacking in the 2000s
|
65036403
|
https://en.wikipedia.org/wiki/Ciscogate
|
Ciscogate
|
Ciscogate, also known as the Black Hat Bug, is the name given to a legal incident that occurred at the Black Hat Briefings security conference in Las Vegas, Nevada, on July 27, 2005.
On the morning of the first day of the conference, July 26, 2005, some attendees noticed that 30 pages of text had been physically ripped out of the extensive conference presentation booklet the night before at the request of Cisco Systems, and that the CD-ROM with presentation slides was not included. It was determined that the pages covered a talk to be given by Michael Lynn, a security researcher with Atlanta-based IBM Internet Security Systems (ISS). Instead of the pages with the details, attendees found a photographed copy of a notice from Black Hat saying "Due to some last minute changes beyond Black Hat's control, and at the request of the presenter, the included materials aren't up to the standards Black Hat tries to meet. Black Hat will be the first to apologize. We hope the vendors involved will follow suit." According to Lynn's lawyer, his employer had approved the talk in the lead-up to the conference but changed its mind two days before the scheduled talk, forbidding him from presenting.
Lynn's original presentation was to cover a vulnerability in Cisco routers. The presentation was one of four scheduled to follow Jeff Moss' keynote address on the first day of the conference, titled "Cisco IOS Security Architecture". After being told by his employer that he could not present on the topic, Lynn chose an alternate topic. Cisco and ISS had offered to give a new joint presentation, but this was turned down by Black Hat because the original speaking slot was given to Lynn, not Cisco. Lynn's presentation began by covering security issues in services that allow users to make Voice over IP telephone calls. Shortly after beginning the presentation Lynn switched back to his original topic and began disclosing some technical details of the vulnerability he found in Cisco routers, stating that he would rather resign from his job at ISS than keep the details private.
Lawsuit
Shortly after Lynn concluded his talk he met Jennifer Granick, who would soon become his lawyer. During their initial meeting Lynn told Granick that he expected to be sued. Later in the evening Lynn heard that Cisco and ISS had filed a lawsuit and requested a temporary restraining order against Black Hat but not himself. A public relations representative from Black Hat told Granick that the lawsuit was against both Black Hat and Lynn and that the companies had scheduled an Ex parte hearing in San Francisco the next morning to request the restraining order. That night, Andrew Valentine, an attorney for ISS and Cisco, called Lynn, who directed him to Granick. During the conversation Valentine explained the claims and accusations against Lynn, which included three things: 1) ISS claimed copyright over the presentation that Lynn gave, 2) Cisco claimed copyright over the decompiled machine code obtained from the router which was included in the presentation, and 3) Cisco claimed the presentation contained trade secrets. These complaints were outlined in a civil complaint filed against both Lynn and Black Hat in the U.S. Northern District of California. According to Granick, she and Valentine were able to agree to an injunction to settle the case without court proceedings. This deal was almost called off due to an inadvertent mistake by Black Hat, which had restored Lynn's presentation on its web server. Black Hat, Granick, and the plaintiffs' lawyers were able to resolve this problem and the deal stood.
One condition of the settlement required Lynn to provide an image of all computer data he used in his research to be provided to a third party for forensic analysis before erasing his research and any Cisco data from his systems. The settlement also stipulated that Lynn was prohibited from talking about the vulnerability in the future.
FBI Investigation
Shortly after lawyers for Lynn and ISS / Cisco filed settlement papers, FBI agents from the Las Vegas office arrived at the conference to begin asking questions. According to Granick, the agents were there at the request of the Atlanta FBI office and were not themselves interested in Lynn. Granick asserted Fifth and Sixth Amendment rights on behalf of her client, Lynn, with the Las Vegas agents and with respect to the Atlanta office, and asked whether an arrest warrant had been issued for Lynn. Over the next 24 hours Granick was not able to ascertain the status of a warrant but ultimately determined that no warrant had been issued.
When the FBI was asked about the case by a journalist, spokesman Paul Bresson declined to discuss the case saying "Our policy is to not make any comment on anything that is ongoing. That's not to confirm that something is, because I really don't know". Granick would only confirm to journalists that the "investigation has to do with the presentation".
Response
Attendees
Attendees of Black Hat Briefings, as well as many who also attended DEF CON, were not happy with vendors threatening legal action over vulnerability disclosure. The term "Ciscogate" was quickly coined by an unknown person, and some attendees created shirts to commemorate the incident.
Cisco
Mojgan Khalili, a senior manager for corporate PR at Cisco, issued a statement to the press saying "It is important to note that the information Mr. Lynn presented was not a disclosure of a new vulnerability or a flaw with Cisco IOS software. Mr. Lynn's research explores possible ways to expand exploitations of existing security vulnerabilities impacting routers."
ISS
Kim Duffy, managing director of ISS Australia, was asked about ISS's response to the incident. Duffy responded that it was "business as usual" as the company handled the incident "strictly by the book". He gave a brief statement to ZDNet UK saying "ISS has published rules for disclosure and that is what we stick to. We didn't care to publish [the disclosure] because we were not ready. We had not completed the research to our satisfaction so it was not ready to be disclosed". ISS spokesperson Roger Fortier confirmed that Lynn was no longer employed with the company and that ISS was still working with Cisco on the matter. He gave a statement to the Washington Post saying "ISS and Cisco have been working on this in the background and didn't feel at this time that the material was ready for publication. The decision was made on Monday to pull the presentation because we wanted to make sure the research was fully baked."
References
Computer security
|
1454130
|
https://en.wikipedia.org/wiki/Slashed%20zero
|
Slashed zero
|
The slashed zero is a representation of the number "0" (zero), with a slash through it. The slashed zero glyph is often used to distinguish the digit "zero" ("0") from the Latin script letter "O" anywhere that the distinction needs emphasis, particularly in encoding systems, scientific and engineering applications, computer programming (such as software development), and telecommunications. It thus helps to differentiate characters that would otherwise be homoglyphs. It was commonly used during the punched card era, when programs were typically written out by hand, to avoid ambiguity when the character was later typed on a card punch.
Unlike the Scandinavian vowel "Ø", the "empty set" symbol "∅", and the diameter symbol "⌀", the slash of a slashed zero usually does not extend past the ellipse in most typographic designs. However, the slashed zero is sometimes approximated by overlaying zero and slash characters, producing a composite character such as "0̸".
Origins
The slashed zero long predates computers, and is known to have been used in the twelfth and thirteenth centuries. It is used in many Baudot teleprinter applications, specifically the keytop and typepallet that combines "P" and slashed zero. Additionally, the slashed zero is used in many ASCII graphic sets descended from the default typewheel on the Teletype Model 33.
Usage
The slashed zero is used in a number of fields in order to avoid confusion with the letter 'O'. It is used by computer programmers, in recording amateur radio call signs and in military radio, as logs of such contacts tend to contain both letters and numerals.
The slashed zero, sometimes called communications zero, was used on teleprinter circuits for weather applications.
The slashed zero can be used in stoichiometry to avoid confusion with the symbol for oxygen (capital O).
The slashed zero is also used in charting and documenting in the medical and healthcare fields to avoid confusion with the letter 'O'. It also denotes an absence of something (similar to the usage of an 'empty set' character), such as a sign or a symptom.
Along with the Westminster, MICR, and OCR-A fonts, the slashed zero became one of the things associated with hacker culture in the 1980s. Some cartoons depicted computer users talking in binary code with 1s and 0s using a slashed zero for the 0.
To generate a slashed zero on typewriters, typists would type a normal "O" or zero, backspace, and then hit the slash key to mark the zero.
The use of the slashed zero by many computer systems of the 1970s and 1980s inspired the 1980s space rock band Underground Zerø to use a heavy metal umlaut Scandinavian vowel ø in the band's name and as the band logo on all their album covers (see link below).
Slashed zeroes have been used in the Flash-based artwork of Young-Hae Chang Heavy Industries, notably in their 2003 work, Operation Nukorea. The reason for their use is unknown, but has been conjectured to be related to themes of 'negation, erasure, and absence'.
Slashed zeroes can also be used on cheques in order to prevent fraud, for example by making it harder to change a 0 into an 8.
Slashed zeros are used on New Zealand number plates.
Representation in Unicode and HTML
The treatment of slashed zero as a glyph is supported by any font whose designer chose the option. Successful display on any local system depends on having the font available there, either via the system's font files or via font embedding.
Unicode supports an explicit slashed zero, but only via a pair of combining characters, not as a distinct single character (or code point, in Unicode parlance). It is treated literally as "a zero that is slashed", and it is coded as two characters: the commonplace zero followed by the "combining long solidus overlay" (U+0338). The combining character overlays the preceding character, creating the composite glyph "0̸". This may be written in HTML as &#48;&#824;.
Unicode 9.0 introduced another method: appending Variation Selector 1 (U+FE00) to the zero requests a variant glyph with a short diagonal stroke, where the font supports it.
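These two encodings can be reproduced programmatically. The following minimal sketch in C, assuming a UTF-8 execution environment and a font that renders combining marks and variation sequences, prints a zero followed by the combining long solidus overlay and a zero followed by Variation Selector 1:

    /* Minimal sketch: composing slashed-zero forms from Unicode code points.
     * Assumes the terminal uses UTF-8 and that the font supports the
     * sequences; otherwise a plain "0" may be shown. */
    #include <stdio.h>

    int main(void)
    {
        /* U+0030 DIGIT ZERO followed by U+0338 COMBINING LONG SOLIDUS OVERLAY */
        printf("combining overlay:  0\u0338\n");
        /* U+0030 DIGIT ZERO followed by U+FE00 VARIATION SELECTOR-1, the
         * Unicode 9.0 short-diagonal-stroke variation sequence */
        printf("variation sequence: 0\uFE00\n");
        return 0;
    }

Whether the slash is actually drawn depends entirely on the font and text renderer in use.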
Similar symbols
The slashed zero has the disadvantage that it can be confused with several other symbols:
The slashed zero causes problems in all languages because it can be mistaken for 8, especially when lighting is dim, imaging or eyes are out of focus, or printing is small compared to the dot size.
It causes problems for some Scandinavian languages — Ø is used as a letter in the Danish, Faroese, and Norwegian alphabets, where it represents or .
It also resembles the Greek letters Theta and Phi in some fonts (although usually, the slash is horizontal or vertical, respectively).
The symbols Ø and "∅" (U+2205) are used in mathematics to refer to the empty set.
"⌀" — the diameter symbol (U+2300 in Unicode).
In German-speaking countries, Ø is also used as a symbol for average value: average in German is Durchschnitt, directly translated as cut-through.
Variations
Dotted zero
The zero with a dot in the center seems to have originated as an option on IBM 3270 display controllers. The dotted zero may appear similar to the Greek letter theta (particularly capital theta, Θ), but the two have different glyphs. In raster fonts, the theta usually has a horizontal line connecting, or nearly touching, the sides of an O, while the dotted zero simply has a dot in the middle. However, on a low-definition display, such a form can be confused with a numeral 8. In some fonts the IPA letter for a bilabial click (ʘ) looks similar to the dotted zero.
Alternatively, the dot can become a vertical trace, for example by adding a "combining short vertical line overlay" (U+20D3). It may be coded in HTML as &#48;&#8403;, giving "0⃓".
Slashed 'O'
IBM (and a few other early mainframe makers) used a convention in which the letter O had a slash and the digit 0 did not. This is even more problematic for Danes, Faroese, and Norwegians because it means two of their letters—the O and slashed O (Ø)—are visually similar.
This convention was later flipped: most mainframe chain or band printers used the opposite convention (the letter O printed as is, and the digit zero printed with a slash, Ø). This was the de facto standard from the 1970s to the 1990s. However, the current use of network laser printers with PC-style fonts caused the demise of the slashed zero in most companies; only a few configured their laser printers to use Ø.
Short slash
Use of the "combining short solidus overlay" (U+0337) produces a result where the slash is contained within the zero. This may be coded as 0̷ to yield 0̷.
Reversed slash
Some Burroughs/Unisys equipment displays a zero with a reversed slash, similar to the no symbol, ⃠.
Other
Yet another convention common on early line printers left the zero unornamented but added a tail or hook to the letter O so that it resembled an inverted Q (like U+213A ℺) or a cursive capital letter O.
In the Fixedsys typeface, the numeral 0 has two internal barbs along the lines of the slash. This appears much like a white "S" within the black borders of the zero.
In the FE-Schrift typeface, used on German car license plates, the zero is rectangular and has an "insinuated" slash: a diagonal crack just beneath the top right curve.
Typefaces
Typefaces commonly found on personal computers that use the slashed zero include:
Terminal in Microsoft's Windows line.
Consolas in Microsoft's Windows Vista, Windows 7, Microsoft Office 2007 and Microsoft Visual Studio 2010
Menlo in macOS
Monaco in macOS
SF Mono in macOS
The Fedora Linux distribution ships with a tweaked variant of the Liberation typeface which adds a slash to the zero; this is not present on most other Linux distributions.
ProFont
Roboto Mono
Dotted zero typefaces:
The DejaVu family of typefaces has a "DejaVu Sans Mono" variant with a dotted zero.
Andalé Mono has a dotted zero.
IBM Plex Mono uses a dotted zero.
Source Code Pro and its associated typefaces use a dotted zero.
See also
0 (number)
Symbols for zero
Names for the number 0 in English
Arabic numeral variations#Slashed zero
Regional handwriting variation#Arabic numerals
Footnotes
References
Further reading
; .
External links
.
Underground Zerø Album Cover Underground Zerø Band Logo
Typographical symbols
Numeral systems
0 (number)
|
1498086
|
https://en.wikipedia.org/wiki/Mesa%20%28computer%20graphics%29
|
Mesa (computer graphics)
|
Mesa, also called Mesa3D and The Mesa 3D Graphics Library, is an open-source software implementation of OpenGL, Vulkan, and other graphics API specifications. Mesa translates these specifications to vendor-specific graphics hardware drivers.
Its most important users are two graphics drivers mostly developed and funded by Intel and AMD for their respective hardware (AMD promotes their Mesa drivers Radeon and RadeonSI over the deprecated AMD Catalyst, and Intel has only supported the Mesa driver). Proprietary graphics drivers (e.g., Nvidia GeForce driver and Catalyst) replace all of Mesa, providing their own implementation of a graphics API. An open-source effort to write a Mesa Nvidia driver called Nouveau is mostly developed by the community.
Besides 3D applications such as games, modern display servers (X.org's Glamor or Wayland's Weston) use OpenGL/EGL; therefore all graphics typically go through Mesa.
Mesa is hosted by freedesktop.org and was initiated in August 1993 by Brian Paul, who is still active in the project. Mesa was subsequently widely adopted and now contains numerous contributions from various individuals and corporations worldwide, including from the graphics hardware manufacturers of the Khronos Group that administer the OpenGL specification. For Linux, development has also been partially driven by crowdfunding.
Overview
Implementations of rendering APIs
Mesa is best known for housing implementations of graphics APIs. Historically the main API that Mesa has implemented is OpenGL, along with other Khronos Group related specifications (such as OpenVG, OpenGL ES and, more recently, EGL). But Mesa can implement other APIs, and indeed it did so with Glide (deprecated) and, since July 2013, Direct3D 9. Mesa is also not specific to Unix-like operating systems: on Windows, for example, Mesa provides an OpenGL API over DirectX.
Mesa implements a translation layer between a graphics API such as OpenGL and the graphics hardware drivers in the operating system kernel. The supported version of the different graphics APIs depends on the driver, because each hardware driver has its own implementation (and therefore status). This is especially true for the "classic" drivers, while the Gallium3D drivers share common code that tends to homogenize the supported extensions and versions.
Mesa maintains a support matrix with the status of current OpenGL conformance, visualized at Mesamatrix. Mesa 10 complies with OpenGL 3.3 for Intel, AMD/ATI and Nvidia GPU hardware. Mesa 11 was announced with some drivers being OpenGL 4.1 compliant.
Mesa 12 contains OpenGL 4.2 and 4.3 and Intel Vulkan 1.0 support.
Mesa 13 brought Intel support for OpenGL 4.4 and 4.5 (all features supported for Intel Gen 8+, Radeon GCN and Nvidia Fermi and Kepler, but without the Khronos test for the 4.5 label) and experimental AMD Vulkan 1.0 support through the community driver RADV. OpenGL ES 3.2 is possible with Intel Skylake (Gen9).
The first stable version of 2017 is 17.0, following the new year-based version numbering. Completed features include certified OpenGL 4.5, OpenGL 4.5 for Intel Haswell, and OpenGL 4.3 for Nvidia Maxwell and Pascal (GM107+).
A large performance gain was measured with Maxwell 1 cards (GeForce GTX 750 Ti and other GM1xx parts). Maxwell 2 cards (GeForce GTX 980 and other GM2xx parts) remain underclocked, because Nvidia has not released the necessary information.
The Khronos CTS test suite for OpenGL 4.4, 4.5 and OpenGL ES 3.0+ has been open source since 24 January 2017, so all conformance tests for Mesa 13 and 17 can now be run without cost.
The second stable version of 2017, 17.1.0, came out on 10 May 2017 with some notable improvements. OpenGL 4.2+ for Intel Ivy Bridge and OpenGL 3.3+ for the Intel OpenSWR rasterizer are two of the highlights.
Note that due to the modularized nature of OpenGL, Mesa can actually support extensions from newer versions of OpenGL without claiming full support for such versions. For example, in July 2016, Mesa supported OpenGL ES 3.1 but also all OpenGL ES 3.2 extensions except for five, as well as a number of extensions not part of any OpenGL or OpenGL ES version.
An open question for Mesa and Linux is High Dynamic Range (HDR); many problems and open points remain in the pipeline before a clean, basic implementation is available.
The third version of 2017, 17.2, has been available since September 2017 with some new OpenGL 4.6 features and speed improvements in 3D for Intel and AMD. Only 1.4% of the OpenGL 4.5 tests fail for Nouveau on Kepler.
The fourth version of 2017, 17.3, has been available since December 2017. Many improvements are included across drivers. OpenGL 4.6 is nearly fully available (SPIR-V support is not ready). The AMD Vulkan driver RADV is now fully conformant in the Khronos test suite.
The first version of 2018, 18.0, has been available since March 2018, following the same scheme as 2017. Full OpenGL 4.6 support is not ready, but many features and improvements were successfully tested in RC3. 10-bit color support for Intel i965 is also a highlight. New is support for Intel Cannon Lake and AMD Vega with a current Linux version. AMD Evergreen chips (RV800 or R900) are close to OpenGL 4.5 support; older AMD R600 or RV700 chips can only support OpenGL 3.3 with some features of OpenGL 4.x. Freedreno, the driver for Adreno hardware, is close to OpenGL 3.3 support.
The second version of 2018, 18.1, has been available since May. The targets are Vulkan 1.1.72 in the Intel ANV and AMD RADV drivers, and OpenGL 4.6 with SPIR-V. Ongoing work covers the completion of features and the optimization of drivers for older hardware such as AMD R600/Evergreen, Nvidia Tesla and earlier, Fermi, Kepler, and Intel Sandy Bridge, Ivy Bridge, Haswell and Broadwell. The ARM drivers also made great improvements for Adreno 3xx/4xx/5xx and Broadcom VC4/VC5 for the Raspberry Pi, with OpenGL ES as the main target.
The third version of 2018, 18.2, became available on the stable calendar in September. OpenGL 4.6 with SPIR-V and Vulkan 1.1.80 are works in progress. The software driver for virtual machines, VIRGL, is ready for OpenGL 4.3 and OpenGL ES 3.2. RadeonSI is also ready for OpenGL ES 3.2. ASTC texture compression support and compatibility-profile support for OpenGL 4.4 (3.1 in 18.1) are other highlights in RadeonSI for AMD GCN cards. New Vulkan 1.1 features and more are available for Intel and AMD; see Mesamatrix for more details on Vulkan.
The fourth version of 2018, 18.3, was released as stable version 18.3.1 in December 2018. Support for newer hardware and many detailed features are the main parts. Full support of OpenGL 4.6 is not yet ready.
The first version of 2019, 19.0, was released in March. Full support of OpenGL 4.6 is not yet ready, but many improvements toward it are included in all drivers.
The second version of 2019 is 19.1. The transition from TGSI to NIR is one main feature here, on the way to OpenGL 4.6 with SPIR-V and more OpenCL. RadeonSI runs well with NIR in the development version.
The third version of 2019 is 19.2. OpenGL 4.6 is in beta for the new Intel Iris driver.
The fourth version of 2019 is 19.3. OpenGL 4.6 is ready for Intel i965 and optionally for the new Iris driver.
The first version of 2020 is 20.0. Vulkan 1.2 is ready for AMD RADV and Intel ANV. Intel Iris is the default for Intel Broadwell Gen 8+. The RadeonSI driver switched to using NIR by default, instead of TGSI.
The second version of 2020 is 20.1. Many improvements are included in many drivers. Zink is a new virtual driver for OpenGL over Vulkan.
The third version of 2020 is 20.2. OpenGL 3.0 for Zink is one new feature. LLVMpipe now supports OpenGL 4.3+ (4.5+ in 20.3). ARM Panfrost was improved across many modules. Shared virtual memory is possible for OpenCL in Nouveau with Pascal and higher.
The fourth version of 2020 is 20.3. v3d and v3dv are new drivers for OpenGL and Vulkan 1.0 on Broadcom hardware such as the Raspberry Pi 4. OpenCL 1.2 is fully supported in the Clover module. Zink supports OpenGL 3.3+. The LLVMpipe virtual driver now supports OpenGL 4.5+, with 4.6 in view. VALLIUM, the Vulkan tree of LLVMpipe, has been merged.
In Mesa 21.0, d3d12 was merged with OpenGL 3.0 to 3.3 support. Microsoft and Collabora are developing this d3d12 emulation for WSL2 on Windows 10 with Direct3D 12; OpenCL 1.2 is also a target for d3d12. An acceleration factor of 2 to 5 was measured in the SPECviewperf benchmark with improved OpenGL code, and many other Mesa 21.0 features improve performance. Release 21.0.0 has been public since 11 March 2021.
Mesa 21.1 is the second release of 2021. OpenGL 4.6+ and OpenGL ES 3.1+ are available for Zink. The AMD r600g driver can switch to NIR, with more possibilities for old Radeon HD 5000 and 6000 cards. Qualcomm Turnip reaches Vulkan 1.1+, as does the software emulation Lavapipe. Google's VirtIO GPU driver Venus, with Vulkan 1.2+, was merged into the Mesa main tree in an experimental state with low performance.
Mesa 21.2 is the third release of 2021. Google's virtual VirtIO GPU driver Venus is officially introduced with full Vulkan 1.2+ support (see Mesamatrix for more). For ARM Panfrost, OpenGL ES 3.1+ support is available, and panVK is the new Vulkan driver. Initial support started for the ARM-based Apple M1 with the new driver Asahi. 21.2 has been available since 4 August 2021.
An old plan is to split the old drivers into a classic tree, with many advantages in programming, support and bug fixing for the modern Gallium3D part.
One problem here is Intel i965, which supports popular old hardware up to Intel Haswell, also with Windows 10 support. A new Gallium3D driver, Crocus, for Intel Gen 4 graphics up to Haswell is in development to complete the Gallium3D area here, with a possible split later in 2021. Crocus is optionally available in 21.2.
Table of Rendering APIs
Vulkan
The Khronos Group officially announced the Vulkan API in March 2015, and officially released Vulkan 1.0 on 16 February 2016. Vulkan breaks compatibility with OpenGL and completely abandons its monolithic state machine concept. The developers of Gallium3D described Vulkan as being something along the lines of Gallium3D 2.0 – Gallium3D separates the code that implements the OpenGL state machine from the code that is specific to the hardware.
As Gallium3D ingests TGSI, Vulkan ingests SPIR-V (Standard Portable Intermediate Representation version "V" as in "Vulkan").
Intel released its implementation of a Vulkan driver for its hardware the day the specification was officially released, but it was only mainlined in April and so became part of Mesa 12.0, released in July 2016. Just as the existing i965 driver was not written according to the Gallium3D specifications, it makes even less sense to bolt the Vulkan driver on top of Gallium3D. Similarly, there is no technical reason to tie it to NIR, yet Intel's employees implemented their Vulkan driver that way.
AMD's own proprietary Vulkan driver, which was released in March and was announced to be released as free and open-source software in the future and mainlined into Mesa, is also expected to abandon Gallium3D.
RADV is a free driver for AMD hardware and has been available since Mesa 13. Conformance with the Khronos test suite came in version 17.3. Full support of Vulkan 1.0 and 1.1 has been available since Mesa 18.1.
Nvidia released its proprietary GeForce driver with Vulkan support on launch day, and Imagination Technologies (PowerVR), Qualcomm (Adreno) and ARM (Mali) have done the same or at least announced proprietary Vulkan drivers for Android and other operating systems. But when and whether additional free and open-source Vulkan implementations for these GPUs will show up remains to be seen.
The Mesa software driver VIRGL started Vulkan development in 2018 with GSoC projects for supporting virtual machines.
Lavapipe is a CPU-based software Vulkan driver and the counterpart of LLVMpipe. Since Mesa 21.1 it supports Vulkan 1.1+.
Google introduced the Venus Vulkan driver for virtual machines in Mesa 21.1 with full support for Vulkan 1.2+.
Qualcomm Turnip and Broadcom v3dv are new drivers for Qualcomm Adreno and Broadcom Raspberry Pi 4 hardware. Turnip is the Vulkan counterpart of freedreno for OpenGL. v3dv has supported Vulkan 1.0+ since Mesa 20.3, and in version 21.1 Turnip supports Vulkan 1.1+.
Explicit fencing
A kind of memory barrier that separates one buffer from the rest of the memory is called a fence. Fences are there to ensure that a buffer is not being overwritten before rendering and display operations have completed on it. Implicit fencing is used for synchronization between graphics drivers and the GPU hardware. The fence signals when a buffer is no longer being used by one component so it can be operated on or reused by another. In the past the Linux kernel had an implicit fencing mechanism, where a fence is directly attached to a buffer (cf. GEM handles and FDs), but userspace is unaware of this. Explicit fencing exposes fences to userspace, where userspace gets fences from both the Direct Rendering Manager (DRM) subsystem and from the GPU. Explicit fencing is required by Vulkan and offers advantages for tracing and debugging.
Linux kernel 4.9 added Android's synchronization framework to mainline.
Generic Buffer Management
Generic Buffer Management (GBM) is an API that provides a mechanism for allocating buffers for graphics rendering tied to Mesa. GBM is intended to be used as a native platform for EGL on DRM or openwfd. The handle it creates can be used to initialize EGL and to create render target buffers.
Mesa GBM is an abstraction of the graphics driver specific buffer management APIs (for instance the various libdrm_* libraries), implemented internally by calling into the Mesa GPU drivers.
For example, the Wayland compositor Weston does its rendering using OpenGL ES 2, which it initializes by calling EGL. Since the server runs on the "bare KMS driver", it uses the EGL DRM platform, which could really be called the GBM platform, since it relies on the Mesa GBM interface.
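As an illustration of the GBM API, the following minimal sketch allocates a single buffer object and prints its stride. It assumes a render node at /dev/dri/renderD128 (the exact path varies between systems) and a build linked against libgbm; it is only a sketch of the allocation path, not a complete rendering setup.

    /* Minimal sketch: allocate one buffer object through Mesa's GBM API.
     * Assumptions: a DRM render node at /dev/dri/renderD128 and libgbm
     * development files installed; build with: cc gbm_demo.c -lgbm */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <gbm.h>

    int main(void)
    {
        int fd = open("/dev/dri/renderD128", O_RDWR);
        if (fd < 0) { perror("open render node"); return 1; }

        struct gbm_device *gbm = gbm_create_device(fd);
        if (!gbm) { fprintf(stderr, "gbm_create_device failed\n"); close(fd); return 1; }

        /* A 256x256 XRGB8888 buffer usable as a render target. */
        struct gbm_bo *bo = gbm_bo_create(gbm, 256, 256,
                                          GBM_FORMAT_XRGB8888,
                                          GBM_BO_USE_RENDERING);
        if (bo) {
            printf("allocated buffer, stride = %u bytes\n", gbm_bo_get_stride(bo));
            gbm_bo_destroy(bo);
        }

        gbm_device_destroy(gbm);
        close(fd);
        return 0;
    }

In a real client, the resulting gbm_device is what would be handed to EGL to initialize the GBM/DRM platform described above.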
At XDC2014, Nvidia employee Andy Ritger proposed to enhance EGL in order to replace GBM. This was not taken positively by the community, and Nvidia eventually changed their mind, and took another approach.
Implementations of video acceleration APIs
There are three possible ways to do the calculations necessary for the encoding and decoding of video streams:
use a software implementation of a video compression or decompression algorithm (commonly called a CODEC) and execute this software on the CPU
use a software implementation of a video compression or decompression algorithm (commonly called a CODEC) and execute this software on the GPU (the 3D rendering engine)
use a complete (or partial) hardware implementation of a video compression or decompression algorithm; it has become very common to integrate such ASICs into the chip of the GPU/CPU/APU/SoC and therefore they are abundantly available; for marketing reasons companies have established brands for their ASICs, such as PureVideo (Nvidia), Unified Video Decoder (AMD), Video Coding Engine (AMD), Quick Sync Video (Intel), DaVinci (Texas Instruments), CedarX (Allwinner), Crystal HD (Broadcom); some ASICs are available for licensing as semiconductor intellectual property cores; usually different versions implement different video compression and/or video decompression algorithms; support for such ASICs usually belongs in the kernel driver, to initialize the hardware and do other low-level work. Mesa, which runs in user-space, houses the implementations of several APIs for software, e.g. VLC media player, GStreamer, HandBrake, etc., to conveniently access such ASICs:
Video Acceleration API (VAAPI) – the most common API for Linux, used by AMD and Intel
Video Decode and Presentation API for Unix (VDPAU) – used by Nvidia
DirectX Video Acceleration (DXVA) – Microsoft Windows-only
OpenMAX IL – designed by Khronos Group for video compression
Distributed Codec Engine (DCE) – designed by Texas Instruments
X-Video Bitstream Acceleration (XvBA) – extension to Xv - succeeded by VAAPI
X-Video Motion Compensation (XvMC) – extension to Xv - succeeded by VAAPI
For example, Nouveau, which has been developed as part of Mesa, but also includes a Linux kernel component, which is being developed as part of the Linux kernel, supports the PureVideo-branded ASICs and provides access to them through VDPAU and partly through XvMC.
The free radeon driver supports Unified Video Decoder and Video Coding Engine through VDPAU and OpenMAX.
Note that V4L2 is a kernel-to-user-space interface for video bit streams delivered by webcams or TV tuners.
Device drivers
The available free and open-source device drivers for graphics chipsets are "stewarded" by Mesa (because the existing free and open-source implementations of APIs are developed inside of Mesa). Currently there are two frameworks for writing graphics drivers: "classic" and Gallium3D. An overview of some (but not all) of the drivers available in Mesa is given in the Mesa documentation.
There are device drivers for AMD/ATI R100 to R800, Intel, and Nvidia cards with 3D acceleration. Previously drivers existed for the IBM/Toshiba/Sony Cell APU of the PlayStation 3, S3 Virge & Savage chipsets, VIA chipsets, Matrox G200 & G400, and more.
The free and open-source drivers compete with proprietary closed-source drivers. Depending on the availability of hardware documentation and man-power, the free and open-source drivers lag behind more or less in supporting 3D acceleration of new hardware. Also, 3D rendering performance was usually significantly slower, with some notable exceptions. Today this is still true for Nouveau for most NVIDIA GPUs, while on AMD's Radeon GPUs the open driver now mostly matches or exceeds the proprietary driver's performance.
Direct Rendering Infrastructure (DRI)
At the time 3D graphics cards became more mainstream for PCs, individuals partly supported by some companies began working on adding more support for hardware-accelerated 3D rendering to Mesa. The Direct Rendering Infrastructure (DRI) was one of these approaches to interface Mesa, OpenGL and other 3D rendering API libraries with the device drivers and hardware. After reaching a basic level of usability, DRI support was officially added to Mesa. This significantly broadened the available range of hardware support achievable when using the Mesa library.
With adapting to DRI, the Mesa library finally took over the role of the front end component of a full scale OpenGL framework with varying backend components that could offer different degrees of 3D hardware support while not dropping the full software rendering capability. The total system used many different software components.
While the design requires all these components to interact carefully, the interfaces between them are relatively fixed. Nonetheless, as most components interacting with the Mesa stack are open source, experimental work is often done through altering several components at once as well as the interfaces between them. If such experiments prove successful, they can be incorporated into the next major or minor release. That applies e.g. to the update of the DRI specification developed in the 2007-2008 timeframe. The result of this experimentation, DRI2, operates without locks and with improved back buffer support. For this, a special git branch of Mesa was created.
DRI3 is supported by the Intel driver since 2013 and is default in some Linux distributions since 2016 to enable Vulkan support and more. It is also default on AMD hardware since late 2016 (X.Org Server 1.18.3 and newer).
Software renderer
Mesa also contains an implementation of software rendering, called swrast, that allows shaders to run on the CPU as a fallback when no graphics hardware accelerators are present. The Gallium software rasterizer is known as softpipe or, when built with support for LLVM, llvmpipe, which generates CPU code at runtime. Since Mesa 10.x, OpenGL 3.3+ is supported for softpipe (10.3) and LLVMpipe (10.2). Currently about 80% of the features of OpenGL 4.x are implemented in Mesa 17.3 (see Mesamatrix).
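The software rasterizers can be selected at run time through Mesa's LIBGL_ALWAYS_SOFTWARE environment variable. The following minimal sketch, which assumes an X11 session and the glxinfo utility from the mesa-utils/mesa-demos package, forces the software path for a child process and prints the resulting renderer string:

    /* Minimal sketch: force Mesa's software renderer (e.g. llvmpipe) for a
     * child process and show the renderer string that applications would see.
     * Assumptions: an X11 session and the glxinfo tool are available. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Mesa's loader honours this variable and selects a software driver. */
        setenv("LIBGL_ALWAYS_SOFTWARE", "1", 1);

        /* glxinfo inherits the environment; the renderer line should now
         * report a software driver such as llvmpipe instead of the GPU. */
        int rc = system("glxinfo | grep -i 'OpenGL renderer'");
        return rc == -1 ? 1 : 0;
    }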
In Mesa 12.0 a new Intel rasterizer, OpenSWR, became available, with big advantages in clusters for large data sets. It is more focused on engineering visualisation than on game or art imagery and works only on x86 processors. On the other hand, OpenGL 3.1+ is now supported. Speed-ups by factors of 29 to 51 relative to LLVMpipe were measured in some examples.
OpenGL 3.3+ has been supported for OpenSWR since Mesa 17.1.
VirGL is a rasterizer for virtual machines, implemented in Mesa 11.1 in 2015 with OpenGL 3.3 support and shown in Mesamatrix since Mesa 18. In the current Mesa 18.2 it supports more than the others, with OpenGL 4.3 and OpenGL ES 3.2. About 80% of the OpenGL 4.4 and 4.5 features are also ready. Vulkan development started with GSoC 2018 projects.
d3d12 is a project of Microsoft for WSL2, emulating OpenGL 3.3+ and OpenCL 1.2+ on top of Direct3D 12. d3d12 was merged in Mesa 21.0.
Venus is a new Vulkan VirtIO GPU driver for GPUs in virtual machines, developed by Google. Venus was merged in Mesa 21.1 and introduced to the public in 21.2.
Mega drivers
The idea of bundling multiple drivers into a single "mega" driver was proposed by Emma Anholt. It allows for a single copy of the shared Mesa code to be used among multiple drivers (instead of it existing in each driver separately) and offering better performance than a separate shared library due to the removal of the internal library interface. The state trackers for VDPAU and XvMC have become separate libraries.
shader-db
shader-db is a collection of about 20,000 shaders gathered from various computer games and benchmarks as well as some scripts to compile these and collect some statistics. Shader-db is intended to help validate an optimization.
It was noticed that an unexpected number of shaders are not hand-written but generated. This means these shaders were originally written in HLSL and then translated into GLSL by some translator program, such as e.g. HLSL2GLSL. The problem is, that the generated code is often far from being optimal. Matt Turner said it was much easier to fix this in the translator program than having to make Mesa's compiler carry the burden of dealing with such bloated shaders.
shader-db cannot be considered free and open-source software. To use it legally, one must have a license for all the computer games that the shaders are part of.
Software architecture
The so-called "user-mode graphics device drivers" (UMD) in Mesa have very few commonalities with what is generally called a device driver. There are a couple of differences:
they are meant to work on top of additionally existing kernel-mode graphics device drivers, which are e.g. available as part of the Linux kernel, found in the source code under /drivers/gpu/drm/. Each UMD communicates with its kernel-mode counterpart with the help of a specific library, named libdrm_specific, and a generic one, named libdrm (a minimal libdrm example follows this list). This section looks solely at the user-mode part above libdrm
there is some implementation of the finite-state machine as specified by e.g. OpenGL; this implementation of the OpenGL state machine may be shared among multiple UMDs or not
they consist to a great part of some sort of compiler that ingests e.g. GLSL and eventually outputs machine code; parsers may be shared among multiple UMDs or be driver-specific
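As a small illustration of the libdrm layer mentioned in the first point above, the following sketch opens a DRM render node and prints the name and version of the kernel-mode driver behind it. It assumes a render node at /dev/dri/renderD128 and the libdrm development files (build with the flags from pkg-config --cflags --libs libdrm); it does not touch the Mesa UMDs themselves.

    /* Minimal sketch: query the kernel-mode DRM driver through libdrm.
     * Assumptions: /dev/dri/renderD128 exists and libdrm headers are
     * installed; build with: cc drm_demo.c $(pkg-config --cflags --libs libdrm) */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>

    int main(void)
    {
        int fd = open("/dev/dri/renderD128", O_RDWR);
        if (fd < 0) { perror("open render node"); return 1; }

        drmVersionPtr ver = drmGetVersion(fd);
        if (ver) {
            /* Prints e.g. "i915 1.6.0" or a similar name/version pair, i.e.
             * the kernel counterpart that the corresponding Mesa UMD talks to. */
            printf("%s %d.%d.%d\n", ver->name, ver->version_major,
                   ver->version_minor, ver->version_patchlevel);
            drmFreeVersion(ver);
        }

        close(fd);
        return 0;
    }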
Mesa's Intermediate Representations
One goal of Mesa is the optimization of code that is to be executed by the respective GPU. Another is the sharing of code. Instead of documenting the pieces of software that do this or that, this article looks at the Intermediate Representations used in the process of compiling and optimizing. See Abstract syntax tree (AST) and Static single assignment form (SSA form).
SPIR-V
SPIR-V is a certain version of the Standard Portable Intermediate Representation. The idea is that graphics applications output SPIR-V instead of GLSL. In contrast to the latter, SPIR-V is binary, to avoid implementation differences between the GLSL compiler frontends of different driver implementations, as this has been a major source of application incompatibilities and bugs. A SPIR-V binary has usually also been passed through some general optimizations. To some degree, SPIR-V's binary representation also offers a measure of obfuscation, which might appeal to some software vendors as a form of intellectual property protection; however, SPIR-V contains ample information for reflection, and tools exist that translate SPIR-V back into high-quality, human-readable high-level code. A UMD needs to apply only those optimizations that are specific to the supported hardware.
SPIR-V Specification (Provisional)
GLSL IR
cgit.freedesktop.org/mesa/mesa/tree/src/compiler/glsl/README
XDC2014, Matt Turner: GLSL compiler: Where we've been and where we're going
XDC2015, Matt Turner
Mesa IR
NIR
NIR (New Internal Representation) was introduced to overcome TGSI limitations. NIR was extended in recent releases as the basis of SPIR-V support and has been the main development area since 2016. LLVMpipe, i965, RadeonSI, Nouveau, freedreno and vc4 have been changed from TGSI to NIR. RADV, Zink and other new drivers started with NIR. All drivers with full OpenGL 4.6 support rely on NIR through their SPIR-V support. AMD r600 also has a fork with NIR for better support of the HD5000 and HD6000 series; this option for r600 has been the default since Mesa 21.0.
Connor Abbott - NIR, or moving beyond GLSL IR in Mesa XDC2014
(Mesa-dev) 2014-12-15 Reintroducing NIR, a new IR for mesa
cgit.freedesktop.org/mesa/mesa/tree/src/compiler/nir/README
fosdem.org/2016/schedule/event/i965_nir/attachments/slides/1113/export/events/attachments/i965_nir/slides/1113/nir_vec4_i965_fosdem_2016_rc1.pdf
NIR in RadeonSI
Nouveau update 2018
Nouveau with NIR in Mesa 19.3
TGSI
The Tungsten Graphics Shader Infrastructure (TGSI) was introduced in 2008 by Tungsten Graphics. All Gallium3D-style UMDs ingest TGSI.
NIR is now the main development area, so TGSI remains the default infrastructure only for older drivers such as r300g and will be deprecated in a few years.
LLVM IR
The UMDs radeonsi and llvmpipe do not output machine code, but instead LLVM IR. From there, LLVM performs optimizations and the compilation to machine code. This means that a certain minimum version of LLVM has to be installed as well.
RADV ACO IR
RADV ACO uses its own IR, which is close to NIR, for optimizing and generating the final binary code for Vulkan SPIR-V shaders on Radeon GPUs (GCN 1+, a.k.a. GFX6+). As of version 20.1.0, ACO is used only in RADV (the Vulkan driver) and not yet in RadeonSI.
Mesa's GLSL compiler
Mesa's GLSL compiler generates its own IR. Because each driver has very different requirements from a LIR, it differentiates between HIR (high-level IR) and LIR (low-level IR).
Gallium3D
Gallium3D is a set of interfaces and a collection of supporting libraries intended to ease the programming of device drivers for 3D graphics chipsets for multiple operating systems, rendering or video acceleration APIs.
A feature matrix is being provided at , and the efforts of writing free and open-source device drivers for graphics chips is being separately documented at Free and open-source graphics device driver.
The development of Gallium3D started in 2008 at Tungsten Graphics, and the implementation is available as free and open-source software as part of Mesa 3D hosted by freedesktop.org. The primary goals are making driver development easier, bundling otherwise duplicated code of several different drivers at a single point, and supporting modern hardware architectures. This is done by providing a better division of labor, for example, leaving memory management to the kernel DRI driver.
Gallium3D has been a part of Mesa since 2009 and is currently used by the free and open-source graphics driver for Nvidia (nouveau project), for AMD's R300–R900, Intel's 'Iris' driver for generation 8+ iGPUs and for other free and open-source GPU device drivers.
Software architecture
Gallium3D eases programming of device drivers by splitting the graphics device driver into three parts. This is accomplished by the introduction of two interfaces: Gallium3D State Tracker Interface and the Gallium3D WinSys Interface. The three components are called:
Gallium3D State Tracker
Each graphical API by which a device driver is being addressed has its own State Tracker, e.g. there is a Gallium3D State Tracker for OpenGL and a different one for Direct3D or GLX. Each State Tracker contains an implementation of the Gallium3D State Tracker Interface and is unique; this means it is shared by all existing Gallium3D device drivers.
Gallium3D hardware device driver
This is the actual code that is specific to the underlying 3D graphics accelerator, but only as far as the Gallium3D WinSys Interface allows. There is a unique Gallium3D hardware device driver for each available graphics chip, and each implements the Gallium3D State Tracker Interface as well as the Gallium3D WinSys Interface. The Gallium3D hardware device driver understands only TGSI (Tungsten Graphics Shader Infrastructure), an intermediate language for describing shaders. This code translates shaders, which have been translated from GLSL into TGSI, further into the instruction set implemented by the GPU.
Gallium3D WinSys
This is specific to the underlying kernel of the operating system and each one implements the Gallium3D WinSys Interface to interface with all available Gallium3D hardware device drivers.
Differences from classic graphics drivers
Gallium3D provides a unified API exposing standard hardware functions, such as shader units found on modern hardware. Thus, 3D APIs such as OpenGL 1.x/2.x, OpenGL 3.x, OpenVG, GPGPU infrastructure or even Direct3D (as found in the Wine compatibility layer) need only a single back-end, called a state tracker, targeting the Gallium3D API. By contrast, classic-style DRI device drivers require a different back-end for each hardware platform, and several other APIs need translation to OpenGL at the expense of code duplication. All vendor device drivers, due to their proprietary and closed-source nature, are written that way, meaning that, e.g., AMD Catalyst implements both OpenGL and Direct3D, and the vendor drivers for GeForce cards have their own implementations.
Under Gallium3D, Direct Rendering Manager (DRM) kernel drivers will manage the memory and Direct Rendering Interface (DRI2) drivers will be more GPU-processing oriented. During the transition period from userspace modesetting to kernelspace modesetting, some of the Mesa 3D drivers, such as the radeon driver or Intel's drivers, ended up supporting both DRI1 and DRI2 and used DRI2 if available on the system. Gallium3D additionally requires a level of shader support that is not available on older cards like, e.g., ATi r100-r200, so users of those cards need to keep using Mesa 3D with DRI2 for their 3D usage.
Tungsten Graphics Shader Infrastructure
Tungsten Graphics Shader Infrastructure (TGSI) is an intermediate representation, like the LLVM Intermediate Representation or the newer Standard Portable Intermediate Representation (SPIR) used by the Vulkan API and OpenCL 2.1. Shaders written in the OpenGL Shading Language are translated/compiled into TGSI, optimizations are made, and then the TGSI shaders are compiled into shaders for the instruction set of the GPU in use.
NIR is the newer intermediate representation in Mesa, with full SPIR-V support, and since 2019 it has been the main development area for all newer drivers with OpenGL 4.6 support.
LLVM usage
In addition, using the modular structure of Gallium3D, there is an effort underway to use the LLVM compiler suite and create a module to optimize shader code on the fly.
The library represents each shader program using an extensible binary intermediate representation called Tungsten Graphics Shader Infrastructure (TGSI), which LLVM then translates into GLSL shaders optimized for target hardware.
Adoption
Several free and open-source graphics device drivers, which have been, or are being, written based on information gained through clean-room reverse engineering, adopted the driver model provided by Gallium3D, e.g. nouveau and others (see Free and open-source graphics device driver for a complete list). The main reason may be that the Gallium3D driver model lessens the amount of code required to be written. Of course, being licensed under a free software license, this code can at any time be rewritten by anybody to implement the DRI driver model or some other driver model.
History
Original authors of Gallium3D were Keith Whitwell and Brian Paul at Tungsten Graphics (acquired by VMware in 2008).
Milestones
As of fall 2011, there were at least 10 known, mature and working Gallium3D drivers. The Nouveau team, for example, develops its open-source drivers for Nvidia graphics cards using the Gallium3D framework.
2008-07-13: Nouveau development is done exclusively for the Gallium framework. The old DRI driver was removed from the master branch of the Mesa repository on Freedesktop.org.
2009-02-11: The gallium-0.2 branch was merged into the mainline master branch of Mesa. Development is done in Mesa mainline.
2009-02-25: Gallium3D can run on Linux as well as FreeBSD kernels.
2009-05-01: Zack Rusin from Tungsten Graphics added the OpenVG state tracker to Mesa 3D, which enables Scalable Vector Graphics to be hardware-accelerated by any Gallium3D-based driver.
2009-07-17: Mesa3D 7.5 is released, the first version to include Gallium3D.
2010-09-10: Initial support for the Evergreen GPUs was added to the r600g driver.
2010-09-21: There are two Gallium3D drivers for ATI hardware known as r300g and r600g for R300-R500 and R600-Evergreen GPUs respectively.
2010-09-21: Major commits were made to the code to support Direct3D 10 and 11. In time, this might offer the ability to use recent Direct3D implementations on Linux systems.
2011-11-30: Intel 965g and Cell Gallium drivers were removed from the master branch of Mesa as unmaintained and broken.
2013-11-30: Mesa 10 with OpenGL 3.2, 3.3 and OpenCL 1.0+
2014-11-18: Major commits were made to the code to support Direct3D 9.
2015-09-15: Mesa 11 with OpenGL 4.0, 4.1 and OpenCL 1.2 (incomplete)
2015-12-15: Mesa 11.1 Driver VIRGL for virtual machines with OpenGL 3.3
2016-07-08: Mesa 12 with OpenGL 4.2, 4.3 and Vulkan 1.0 (Intel ANV and AMD RADV)
2016-11-01: Mesa 13 with OpenGL 4.4 and OpenGL ES 3.2
2017-02-13: Mesa 17.0 with OpenGL 4.5 and freedreno driver with OpenGL 3.0 and 3.1
2017-05-10: Mesa 17.1 with OpenGL 4.2+ for Intel Ivy Bridge (more than the Intel driver for Windows) and OpenGL 3.3+ for the Intel OpenSWR rasterizer (important for cluster computers running huge simulations)
2017-12-08: Mesa 17.3 AMD Vulkan Driver RADV full compliant in Khronos Test of Vulkan 1.0
2018-05-18: Mesa 18.1 with Vulkan 1.1 (Intel ANV and AMD RADV)
2018-09-07: Mesa 18.2 with OpenGL 4.3 for the software driver VIRGL (important for virtual machines in cloud cluster computers), OpenGL ES 3.1 for Freedreno with Adreno A5xx
2019-06-11: Mesa 19.1 released with Intel's next generation 'iris' graphics driver for generation 8+ iGPUs
2019-12-11: Mesa 19.3 released OpenGL 4.6 with Intel i965 with gen 7+ and optional Iris Gen 8+
2020-03-18: Mesa 20.0 released OpenGL 4.6 with AMD GCN and Vulkan 1.2 for Intel
2020-05-27: Mesa 20.1 released NIR vectorisation support and shared virtual memory support for OpenCL in Clover
2020-11-30: Mesa 20.3 full support of OpenCL 1.2 in Clover
2021-03-11: Mesa 21.0 with initial support of "D3D12" (Direct3D 12 for WSL2 in Windows 10) with OpenGL 3.3+, and ARM Freedreno: OpenGL 3.3+
2021-05-05: Mesa 21.1 with initial support of the Google VirtIO GPU driver "Venus" with Vulkan 1.2+; Zink: OpenGL 4.6+, OpenGL ES 3.1+; Qualcomm Turnip and Lavapipe: Vulkan 1.1+
2021-08-04: Mesa 21.2 with initial support of the new Intel Crocus OpenGL 4.6 driver, based on Gallium3D, for Intel Sandy Bridge through Haswell hardware previously served by i965, and the Vulkan driver panVK for ARM Panfrost
Performance
Performance comparison of free and open-source graphics device drivers
History
Project initiator Brian Paul was a graphics hobbyist. He thought it would be fun to implement a simple 3D graphics library using the OpenGL API, which he might then use instead of VOGL (very ordinary GL Like Library). Beginning in 1993, he spent eighteen months of part-time development before he released the software on the Internet in February 1995. The software was well received, and people began contributing to its development. Mesa started off by rendering all 3D computer graphics on the CPU. Despite this, the internal architecture of Mesa was designed to be open for attaching to graphics processor-accelerated 3D rendering. In this first phase, rendering was done indirectly in the display server, leaving some overhead and noticeable speed lagging behind the theoretical maximum. The Diamond Monster 3D, using the Voodoo Graphics chipset, was one of the first 3D hardware devices supported by Mesa.
The first true graphics hardware support was added to Mesa in 1997, based upon the Glide API for the then new 3dfx Voodoo I/II graphics cards and their successors. A major problem of using Glide as the acceleration layer was the habit of Glide to run full screen, which was only suitable for computer games. Further, Glide took the lock of the screen memory, and thus the display server was blocked from doing any other GUI tasks.
See also
Free and open-source graphics device driver
References
External links
External links for Gallium3D
1993 software
Direct Rendering Infrastructure
Free 3D graphics software
Free computer libraries
Free software programmed in C
Free system software
Freedesktop.org
Graphics libraries
OpenGL
OpenGL
Software that uses Meson
Software using the MIT license
Assembly language software
|
2261519
|
https://en.wikipedia.org/wiki/User%20interface%20design
|
User interface design
|
User interface (UI) design or user interface engineering is the design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. In computer or software design, user interface (UI) design is the process of building interfaces that are aesthetically pleasing. Designers aim to build interfaces that are easy and pleasant to use. UI design refers to graphical user interfaces and other forms of interface design. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals (user-centered design).
User interfaces are the points of interaction between users and designs. There are three types:
Graphical user interfaces (GUIs) - Users interact with visual representations on a computer's screen. The desktop is an example of a GUI.
Interfaces controlled through voice - Users interact with these through their voices. Most smart assistants, such as Siri on smartphones or Alexa on Amazon devices, use voice control.
Interactive interfaces utilizing gestures - Users interact with 3D design environments through their bodies, e.g., in virtual reality (VR) games.
Interface design is involved in a wide range of projects, from computer systems, to cars, to commercial planes; all of these projects involve much of the same basic human interactions yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered on their expertise, whether it is software design, user research, web design, or industrial design.
Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design and typography are utilized to support its usability, influencing how the user performs certain interactions and improving the aesthetic appeal of the design; design aesthetics may enhance or detract from the ability of users to use the functions of the interface. The design process must balance technical functionality and visual elements (e.g., mental model) to create a system that is not only operational but also usable and adaptable to changing user needs.
Compared to UX design
Compared to UX design, UI design is more about the surface and overall look of a design. User interface design is a craft in which designers perform an important function in creating the user experience. The term UX design, on the other hand, refers to the entire process of creating a user experience.
Don Norman and Jakob Nielsen said:
Processes
User interface design requires a good understanding of user needs. It mainly focuses on the needs of the platform and its user expectations. There are several phases and processes in user interface design, some of which are in more demand than others, depending on the project. (Note: for the remainder of this section, the word system is used to denote any project, whether it is a website, application, or device.)
Functionality requirements gathering – assembling a list of the functionality required by the system to accomplish the goals of the project and the potential needs of the users.
User and task analysis – a form of field research, it's the analysis of the potential users of the system by studying how they perform the tasks that the design must support, and conducting interviews to elaborate their goals. Typical questions involve:
What would the user want the system to do?
How would the system fit in with the user's normal workflow or daily activities?
How technically savvy is the user and what similar systems does the user already use?
What interface look & feel styles appeal to the user?
Information architecture – development of the process and/or information flow of the system (i.e. for phone tree systems, this would be an option tree flowchart and for web sites this would be a site flow that shows the hierarchy of the pages).
Prototyping – development of wire-frames, either in the form of paper prototypes or simple interactive screens. These prototypes are stripped of all look & feel elements and most content in order to concentrate on the interface.
Usability inspection – letting an evaluator inspect a user interface. This is generally considered to be cheaper to implement than usability testing (see step below), and can be used early on in the development process since it can be used to evaluate prototypes or specifications for the system, which usually cannot be tested on users. Some common usability inspection methods include cognitive walkthrough, which focuses the simplicity to accomplish tasks with the system for new users, heuristic evaluation, in which a set of heuristics are used to identify usability problems in the UI design, and pluralistic walkthrough, in which a selected group of people step through a task scenario and discuss usability issues.
Usability testing – testing of the prototypes on an actual user—often using a technique called think aloud protocol where you ask the user to talk about their thoughts during the experience. User interface design testing allows the designer to understand the reception of the design from the viewer's standpoint, and thus facilitates creating successful applications.
Graphical user interface design – actual look-and-feel design of the final graphical user interface (GUI). GUIs are the design's control panels and faces; voice-controlled interfaces involve oral-auditory interaction, while gesture-based interfaces have users engaging with 3D design spaces through bodily motions. The design may be based on the findings developed during the user research, and refined to fix any usability problems found through the results of testing. Depending on the type of interface being created, this process typically involves some computer programming in order to validate forms, establish links or perform a desired action.
Software maintenance – after the deployment of a new interface, occasional maintenance may be required to fix software bugs, change features, or completely upgrade the system. Once a decision is made to upgrade the interface, the legacy system will undergo another version of the design process, and will begin to repeat the stages of the interface life cycle.
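As a hedged illustration of the kind of small validation routine mentioned above, the following C sketch checks a single form field. It is not tied to any particular GUI toolkit, and the field semantics (an e-mail address) and length limit are illustrative assumptions only.

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Illustrative only: accepts a user-supplied "email" field when it is non-empty,
   within a length limit, and contains an '@' followed by a dotted domain part. */
static bool email_field_is_valid(const char *input, size_t max_len)
{
    if (input == NULL || input[0] == '\0')
        return false;                      /* empty field */
    if (strlen(input) > max_len)
        return false;                      /* exceeds the form's limit */
    const char *at = strchr(input, '@');
    if (at == NULL || at == input)
        return false;                      /* no '@', or nothing before it */
    const char *dot = strchr(at + 1, '.');
    if (dot == NULL || dot[1] == '\0')
        return false;                      /* no domain suffix after the '@' */
    return true;                           /* passes these basic checks */
}

A real interface would typically pair such a check with inline feedback to the user, in line with the error tolerance and self-descriptiveness principles described below.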
Requirements
The dynamic characteristics of a system are described in terms of the dialogue requirements contained in seven principles of part 10 of the ergonomics standard, the ISO 9241. This standard establishes a framework of ergonomic "principles" for the dialogue techniques with high-level definitions and illustrative applications and examples of the principles. The principles of the dialogue represent the dynamic aspects of the interface and can be mostly regarded as the "feel" of the interface.
The seven dialogue principles are:
Suitability for the task: the dialogue is suitable for a task when it supports the user in the effective and efficient completion of the task.
Self-descriptiveness: the dialogue is self-descriptive when each dialogue step is immediately comprehensible through feedback from the system or is explained to the user on request.
Controllability: the dialogue is controllable when the user is able to initiate and control the direction and pace of the interaction until the point at which the goal has been met.
Conformity with user expectations: the dialogue conforms with user expectations when it is consistent and corresponds to the user characteristics, such as task knowledge, education, experience, and to commonly accepted conventions.
Error tolerance: the dialogue is error-tolerant if, despite evident errors in input, the intended result may be achieved with either no or minimal action by the user.
Suitability for individualization: the dialogue is capable of individualization when the interface software can be modified to suit the task needs, individual preferences, and skills of the user.
Suitability for learning: the dialogue is suitable for learning when it supports and guides the user in learning to use the system.
The concept of usability is defined in the ISO 9241 standard by effectiveness, efficiency, and satisfaction of the user. Part 11 gives the following definition of usability:
Usability is measured by the extent to which the intended goals of use of the overall system are achieved (effectiveness).
The resources that have to be expended to achieve the intended goals (efficiency).
The extent to which the user finds the overall system acceptable (satisfaction).
Effectiveness, efficiency, and satisfaction can be seen as quality factors of usability. To evaluate these factors, they need to be decomposed into sub-factors, and finally, into usability measures.
The information presented is described in Part 12 of the ISO 9241 standard for the organization of information (arrangement, alignment, grouping, labels, location), for the display of graphical objects, and for the coding of information (abbreviation, colour, size, shape, visual cues) by seven attributes. The "attributes of presented information" represent the static aspects of the interface and can be generally regarded as the "look" of the interface. The attributes are detailed in the recommendations given in the standard. Each of the recommendations supports one or more of the seven attributes.
The seven presentation attributes are:
Clarity: the information content is conveyed quickly and accurately.
Discriminability: the displayed information can be distinguished accurately.
Conciseness: users are not overloaded with extraneous information.
Consistency: a unique design, conformity with user's expectation.
Detectability: the user's attention is directed towards information required.
Legibility: information is easy to read.
Comprehensibility: the meaning is clearly understandable, unambiguous, interpretable, and recognizable.
The user guidance in Part 13 of the ISO 9241 standard describes that the user guidance information should be readily distinguishable from other displayed information and should be specific for the current context of use. User guidance can be given by the following five means:
Prompts indicating explicitly (specific prompts) or implicitly (generic prompts) that the system is available for input.
Feedback informing the user about their input in a timely, perceptible, and non-intrusive way.
Status information indicating the continuing state of the application, the system's hardware and software components, and the user's activities.
Error management including error prevention, error correction, user support for error management, and error messages.
On-line help for system-initiated and user-initiated requests with specific information for the current context of use.
Research
User interface design has been a topic of considerable research, including on its aesthetics. Standards have been developed as far back as the 1980s for defining the usability of software products.
One of the structural bases has become the IFIP user interface reference model. The model proposes four dimensions to structure the user interface:
The input/output dimension (the look)
The dialogue dimension (the feel)
The technical or functional dimension (the access to tools and services)
The organizational dimension (the communication and co-operation support)
This model has greatly influenced the development of the international standard ISO 9241 describing the interface design requirements for usability.
The desire to understand application-specific UI issues early in software development, even as an application was being developed, led to research on GUI rapid prototyping tools that might offer convincing simulations of how an actual application might behave in production use. Some of this research has shown that a wide variety of programming tasks for GUI-based software can, in fact, be specified through means other than writing program code.
Research in recent years is strongly motivated by the increasing variety of devices that can, by virtue of Moore's law, host very complex interfaces.
See also
Chief experience officer (CXO)
Cognitive dimensions
Discoverability
Experience design
Gender HCI
Human interface guidelines
Human-computer interaction
Icon design
Information architecture
Interaction design
Interaction design pattern
Interaction Flow Modeling Language (IFML)
Interaction technique
Knowledge visualization
Look and feel
Mobile interaction
Natural mapping (interface design)
New Interfaces for Musical Expression
Participatory design
Principles of user interface design
Process-centered design
Progressive disclosure
User experience design
User-centered design
References
Usability
Design
Graphic design
Industrial design
Information architecture
Design
|
45280199
|
https://en.wikipedia.org/wiki/EyeVerify
|
EyeVerify
|
EyeVerify, Inc. is a biometric security technology company based in Kansas City, Missouri owned by Ant Group. Its chief product, Eyeprint ID, provides verification using eye veins and other micro-features in and around the eye. Images of the human eye are used to authenticate mobile device users. EyeVerify licenses its software for use in mobile banking applications, such as those offered by Tangerine Bank, NCR/Digital Insight and Wells Fargo.
About
EyeVerify is part of the Kansas City Crossroads neighborhood alongside several other tech companies. EyeVerify's flagship product is Eyeprint ID, a system that authenticates users by recognizing patterns of blood vessels that are visible in the sclera, the whites of the eyes, as well as other eye-based micro-features.
An independent assessment by iBeta determined that Eyeprint ID meets the requirements for inclusion as a built-in subsystem in an Electronic Prescription of Controlled Substance (EPCS) Application.
History
Entrepreneur Toby Rush founded the company in 2012, some months after visiting the lab of University of Missouri-Kansas City professor Reza Derakhshani, who developed the eye vein verification technology. Derakhshani holds the patent to Eyeprint ID and now serves as the company's chief science officer.
In September 2016, EyeVerify was acquired by Alibaba's payments arm, Ant Group for $100 million.
Investors
In September 2016, Ant Group, the financial services arm of Alibaba Group, acquired EyeVerify for an estimated $100M.
Prior to that, Wells Fargo, Sprint, Qihoo 360 and Samsung Electronics had invested more than $6 million in EyeVerify. Mid-America Angels and Nebraska Angels were also investors.
The company was an early participant in the Wells Fargo Startup Accelerator for innovators in mobile security.
Partners/customers
Financial services
In April 2016, Tangerine Bank became the first Canadian financial institution to offer Eyeprint ID. The same month, Wells Fargo discussed its summer 2016 implementation of Eyeprint ID with the Wall Street Journal. The large bank mentioned that it had "tested voice and facial recognition technologies but found that each was subject to vagaries in the environment," and that "Eyeprint ID works correctly more often and is more discrete."
In October 2015, RSA Security added Eyeprint ID to its Adaptive Authentication Software Development Kit, after extensive evaluation of the technology, to provide in-app biometric step up authentication for high risk login and transactions.
EyeVerify and Olcsan CAD announced a partnership to offer Eyeprint ID to institutions in Turkey and other European countries. Their first project was to integrate with Vodafone Turkey's mobile wallet, Vodafone Cep Cuzdan.
Mountain America Credit Union conducted a beta launch of Eyeprint ID as part of its dual biometric authentication system, making it the first financial institution to officially launch Eyeprint ID in the United States.
Digital Insight, a division of NCR, announced in February, 2015, that it would incorporate Eyeprint ID into its mobile banking platform. As of October 30, 2015, five Digital Insight Financial Institutions use Eyeprint ID in their mobile banking app: Service Credit Union, Arizona Federal Credit Union, Community America Credit Union, First Internet Bank and Evansville Teachers Federal Credit Union. By the end of 2016, dozens of Digital Insight credit unions had launched Eyeprint ID.
EyeVerify also has partnerships with other technology companies that serve the financial services industry, including Comarch, Hypr and BioConnect.
Spoofing and liveness detection
On 29 September 2015, a YouTube video was posted by an associate of eyeThenticate (at the time of posting) demonstrating a spoof of Eyeprint ID using version 2.3.6 of the demo application. The Play Store app description has been updated to clarify that the demo application "includes limited liveness detection only, and spoofing tests conducted on this app will not be relevant to the product sold through EyeVerify partners". The version of Eyeprint ID that is integrated by partners has liveness detection built in to prevent spoofing attempts.
Recognition
2013, Get in the Ring U.S. and international winner
2014, "Rookie of the Year" and "Technology Innovation," Compass Intelligence Mobility Awards
2015, 2015 "Cool Vendor" in Mobile and Wireless, Gartner
2016, "Best of Show" at FinovateEurope.
2016, "Best of Show at FinovateAsia.
References
Companies based in Kansas City, Missouri
Biometrics
Security
American companies established in 2012
|
29199
|
https://en.wikipedia.org/wiki/Simple%20DirectMedia%20Layer
|
Simple DirectMedia Layer
|
Simple DirectMedia Layer (SDL) is a cross-platform software development library designed to provide a hardware abstraction layer for computer multimedia hardware components. Software developers can use it to write high-performance computer games and other multimedia applications that can run on many operating systems such as Android, iOS, Linux, macOS, and Windows.
SDL manages video, audio, input devices, CD-ROM, threads, shared object loading, networking and timers. For 3D graphics, it can handle an OpenGL, Vulkan, Metal, or Direct3D11 (older Direct3D version 9 is also supported) context. A common misconception is that SDL is a game engine, but this is not true. However, the library is suited to building games directly, or is usable indirectly by engines built on top of it.
The library is internally written in C and possibly, depending on the target platform, C++ or Objective-C, and provides the application programming interface in C, with bindings to other languages available. It is free and open-source software subject to the requirements of the zlib License since version 2.0, and with prior versions subject to the GNU Lesser General Public License. Under the zlib License, SDL 2.0 is freely available for static linking in closed-source projects, unlike SDL 1.2. SDL 2.0, released in 2013, was a major departure from previous versions, offering more opportunity for 3D hardware acceleration, but breaking backwards-compatibility.
SDL is extensively used in the industry in both large and small projects. Over 700 games, 180 applications, and 120 demos have been posted on the library website.
History
Sam Lantinga created the library, first releasing it in early 1998, while working for Loki Software. He got the idea while porting a Windows application to Macintosh. He then used SDL to port Doom to BeOS (see Doom source ports). Several other free libraries were developed to work alongside SDL, such as SMPEG and OpenAL. He also founded Galaxy Gameworks in 2008 to help commercially support SDL, although the company's plans are currently on hold due to time constraints.
Soon after putting Galaxy Gameworks on hold, Lantinga announced that SDL 1.3 (which would later become SDL 2.0) would be licensed under the zlib License. Lantinga announced SDL 2.0 on 14 July 2012, the same day he announced that he was joining Valve. He announced the stable release of SDL 2.0.0 on 13 August 2013.
SDL 2.0 is a major update to the SDL 1.2 codebase with a different, not backwards-compatible API. It replaces several parts of the 1.2 API with more general support for multiple input and output options. Some feature additions include multiple window support, hardware-accelerated 2D graphics, and better Unicode support.
Support for Mir and Wayland was added in SDL 2.0.2 and enabled by default in SDL 2.0.4. Version 2.0.4 also provided better support for Android.
Software architecture
SDL is a wrapper around the operating-system-specific functions that the game needs to access. The only purpose of SDL is to provide a common framework for accessing these functions for multiple operating systems (cross-platform). SDL provides support for 2D pixel operations, sound, file access, event handling, timing and threading. It is often used to complement OpenGL by setting up the graphical output and providing mouse and keyboard input, since OpenGL comprises only rendering.
A game using the Simple DirectMedia Layer will not automatically run on every operating system; further adaptations must be applied. These are reduced to the minimum, since SDL also contains a few abstraction APIs for frequent functions offered by an operating system.
The syntax of SDL is function-based: all operations done in SDL are done by passing parameters to subroutines (functions). Special structures are also used to store the specific information SDL needs to handle. SDL functions are categorized under several different subsystems.
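To illustrate this function-based style, the following is a minimal sketch of an SDL 2.0 program in C that initializes the video subsystem, opens a window, and waits for the user to close it. The window title, size, and 16-millisecond delay are arbitrary choices for the example.

#include <SDL.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {           /* start the video subsystem */
        SDL_Log("SDL_Init failed: %s", SDL_GetError());
        return 1;
    }

    SDL_Window *window = SDL_CreateWindow("Example",
                                          SDL_WINDOWPOS_CENTERED,
                                          SDL_WINDOWPOS_CENTERED,
                                          640, 480, 0);
    if (window == NULL) {
        SDL_Log("SDL_CreateWindow failed: %s", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    SDL_Event event;
    int running = 1;
    while (running) {                               /* simple event loop */
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT)             /* window closed by the user */
                running = 0;
        }
        SDL_Delay(16);                              /* avoid spinning the CPU */
    }

    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}

The same pattern of initialization, event polling, and explicit shutdown carries over to the other subsystems listed below.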
Subsystems
SDL is divided into several subsystems:
Basics – Initialization and Shutdown, Configuration Variables, Error Handling, Log Handling
Video – Display and Window Management, surface functions, rendering acceleration, etc.
Input Events – Event handling, Support for Keyboard, Mouse, Joystick and Game controller
Force Feedback – SDL_haptic.h implements support for "Force Feedback"
Audio – SDL_audio.h implements Audio Device Management, Playing and Recording
Threads – multi-threading: Thread Management, Thread Synchronization Primitives, Atomic Operations
Timers – Timer Support
File Abstraction – Filesystem Paths, File I/O Abstraction
Shared Object Support – Shared Object Loading and Function Lookup
Platform and CPU Information – Platform Detection, CPU Feature Detection, Byte Order and Byte Swapping, Bit Manipulation
Power Management – Power Management Status
Additional – Platform-specific functionality
Besides this basic, low-level support, there also are a few separate official libraries that provide some more functions. These comprise the "standard library", and are provided on the official website and included in the official documentation:
SDL_image — support for multiple image formats (see the usage sketch after this list)
SDL_mixer — complex audio functions, mainly for sound mixing
SDL_net — networking support
SDL_ttf — TrueType font rendering support
SDL_rtf — simple Rich Text Format rendering
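As a brief, hedged example of using one of these extension libraries, the following C sketch loads a PNG file into an SDL_Surface with SDL_image; the file path is supplied by the caller and error handling is reduced to logging.

#include <SDL.h>
#include <SDL_image.h>

/* Sketch: decode a PNG file into an SDL_Surface using SDL_image. */
SDL_Surface *load_png(const char *path)
{
    if ((IMG_Init(IMG_INIT_PNG) & IMG_INIT_PNG) == 0) {   /* bring up PNG support */
        SDL_Log("IMG_Init failed: %s", IMG_GetError());
        return NULL;
    }
    SDL_Surface *surface = IMG_Load(path);                /* format detected from the file */
    if (surface == NULL)
        SDL_Log("IMG_Load failed: %s", IMG_GetError());
    return surface;                                       /* caller frees it with SDL_FreeSurface() */
}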
Other, non-standard libraries also exist. For example: SDL_Collide on SourceForge created by Amir Taaki.
Language bindings
The SDL 2.0 library has language bindings for:
Ada
Beef
C
C++
C#
D
Fortran
Genie
Go
Haskell
Java (e.g. JSDL)
Julia
Lua
Nim
OCaml
Pascal
Perl (via SDL)
PHP
Python (several, e.g. pygame_sdl2 and sdl2hl)
Raku
Ring
Rust
Vala
Common Lisp
Supported back-ends
Because of the way SDL is designed, much of its source code is split into separate modules for each operating system, to make calls to the underlying system. When SDL is compiled, the appropriate modules are selected for the target system. The following back-ends are available:
GDI back-end for Microsoft Windows.
DirectX back-end; older SDL 1.2 uses DirectX 7 by default, while 2.0 defaults to DirectX 9 and can access up to DirectX 11.
Quartz back-end for macOS (dropped in 2.0).
Metal back-end for macOS / iOS / tvOS since 2.0.8; older versions use OpenGL by default.
Xlib back-end for X11-based windowing system on various operating systems.
OpenGL contexts on various platforms.
EGL back-end, used in conjunction with Wayland-based windowing systems, the Raspberry Pi and other systems.
Vulkan contexts on platforms that support it.
sceGu back-end, a Sony OpenGL-like backend native to the PSP.
SDL 1.2 has support for RISC OS (dropped in 2.0).
An unofficial Sixel back-end is available for SDL 1.2.
The Rockbox MP3 player firmware also distributes a version of SDL 1.2, which is used to run games such as Quake.
Reception and adoption
Over the years SDL was used for many commercial and non-commercial video game projects. For instance, MobyGames listed 120 games using SDL in 2013, and the SDL website itself listed around 700 games in 2012. Important commercial examples are Angry Birds, Unreal Tournament, and games developed using Valve's Source Engine, which uses SDL extensively for cross-platform compatibility; ones from the open-source domain are OpenTTD, The Battle for Wesnoth or Freeciv.
The cross-platform game releases of the popular Humble Indie Bundles for Linux, Mac and Android are often SDL-based.
SDL is also often used for later ports on new platforms with legacy code. For instance, the PC game Homeworld was ported to the Pandora handheld and Jagged Alliance 2 for Android via SDL.
Also, several non-video-game programs use SDL; examples are emulators such as DOSBox, the FUSE ZX Spectrum emulator and VisualBoyAdvance.
There were several books written for development with SDL (see further readings).
SDL is used in university courses teaching multimedia and computer science, for instance, in a workshop about game programming using libSDL at the University of Cadiz in 2010, or a Game Design discipline at UTFPR (Ponta Grossa campus) in 2015.
Video game examples using SDL
See also
References
Further reading
Alberto García Serrano: Programación de videojuegos en SDL, Ediversitas, (Spanish)
Ernest Pazera: Focus On SDL, Muska & Lipman/Premier-Trade,
Ron Penton: Data Structures for Game Programmers, Muska & Lipman/Premier-Trade, (game programming examples with SDL)
John R. Hall: Programming Linux Games, No Starch, (First SDL book, by Loki Games, archived online version: , )
External links
Application programming interfaces
Audio libraries
C (programming language) libraries
Cross-platform software
Graphics libraries
Linux APIs
MacOS APIs
Software using the zlib license
Video game development
Video game development software for Linux
Windows APIs
|
30871197
|
https://en.wikipedia.org/wiki/Avira
|
Avira
|
Avira Operations GmbH & Co. KG is a German multinational computer security software company mainly known for their antivirus software Avira Free Security (formerly known as Avira Free Antivirus and Avira AntiVir). Avira was founded in 2006, but the antivirus application has been under active development since 1986, through its predecessor company H+BEDV Datentechnik GmbH. Avira is owned by American software company NortonLifeLock, after previously being owned by investment firm Investcorp.
The company also has offices in the US, China, Romania, and the Netherlands.
Technology
Virus definition
Avira periodically "cleans out" its virus definition files, replacing specific signatures with generic ones for a general increase in performance and scanning speed. A 15 MB database clean-out was made on 27 October 2008, causing problems for users of the Free edition because of its large size and Avira's slow Free edition servers. Avira responded by reducing the size of the individual update files, delivering less data in each update. Nowadays there are 32 smaller definition files that are updated regularly in order to avoid peaks in the download of the updates.
Its file-by-file scanning feature has jokingly been titled "Luke Filewalker" by the developers, as a reference to the Star Wars media franchise character "Luke Skywalker".
Advance heuristic
Avira products contain heuristics that can proactively uncover unknown malware, before a special virus signature to combat the damaging element has been created and before a virus guard update has been sent.
Heuristic virus detection involves extensive analysis and investigation of the affected codes for functions typical of malware.
If the code being scanned exhibits these characteristic features it is reported as being suspicious, although not necessarily malware; the user decides whether to act on or ignore the warning.
ProActiv
The ProActiv component uses rule sets developed by the Avira Malware Research Center to identify suspicious behavior. The rule sets are supplied by Avira databases. ProActiv sends information on suspicious programs to the Avira databases for logging.
Firewall
Avira removed their own firewall technology from 2014 onwards, with protection supplied instead by Windows Firewall (Windows 7 and after), because in Windows 8 and later the Microsoft Certification Program forces developers to use interfaces introduced in Windows Vista.
Protection Cloud
Avira Protection Cloud (APC) was first introduced in version 2013. It uses information available via the Internet (cloud computing) to improve detection and affect system performance less. This technology was implemented in all paid 2013 products.
APC was initially only used during a manual quick system scan; later it was extended to real-time protection. This improved Avira's score in the AV-Comparatives and Report from September 2013.
Partners
Avira offers its antivirus engine in the form of a software development kit to implement in complementary products. Strategic and technology partners of Avira include Canonical, CYAN Networks, IBM, intelligence AG, Microsoft, Novell, OPSWAT, Synergy Systems and others.
On 4 September 2014, Avira announced a partnership with Dropbox, to combine Avira's security with Dropbox's "sync and share" capabilities.
Tjark Auerbach, the founder of Avira, sold almost all of his stake in the company to the Investcorp Group of Manama (Bahrain) in April 2020. The stake was reportedly sold for 180 million dollars. The Investcorp Group has invested in several other firms from the cybersecurity sector in the past. The directors of the Investcorp Group belong to several royal families of Middle Eastern countries such as Kuwait, Bahrain and Saudi Arabia. However, 20% of its total ordinary and preferred shares have been owned by the Abu Dhabi-based Mubadala Group since 2017. The UAE also serves as the headquarters of a cybersecurity firm discredited for its involvement in human rights abuses against activists and dissidents arrested for criticizing the monarchy, for cyber offensives against FIFA officials and the ruler of Qatar, and for the surveillance of Jamal Khashoggi. The chairman of the Mubadala Group owns an institute called ECSSR, the Emirates Center for Strategic Studies & Research, which allegedly influenced German academics to gain soft power and shape policies in the interest of the UAE.
On December 7, 2020, NortonLifeLock announced acquisition of Avira for approximately US$360 million from Investcorp Technology Partners. The acquisition was closed in January 2021.
In February 2021, BullGuard joined Avira as part of NortonLifeLock.
Products
Windows
Avira offers the following security products and tools for Microsoft Windows:
Avira Free Antivirus: The free edition antivirus/anti-spyware, for non-commercial use, with promotional pop-ups.
Avira Antivirus Pro: The premium edition antivirus/anti-spyware.
Avira System Speedup Free: A free suite of PC tune-up tools.
Avira System Speedup Pro: The premium edition of the suite of PC tune-up tools.
Avira Internet Security Suite: Consists of Antivirus Pro + System Speedup + Firewall Manager.
Avira Ultimate Protection Suite: Consists of Internet Security Suite + additional PC maintenance tools (e.g. SuperEasy Driver Updater).
Avira Rescue System: A set of free tools that include a utility used to write a Linux-based bootable CD. It can be used to clean an unbootable PC, and is also able to find malware that hides when the host's operating system is active (e.g., some rootkits). The tool contains the antivirus program and the virus database current at the time of download. It boots the machine into the antivirus program, then scans for and removes malware, and restores normal boot and operation if necessary. It is updated frequently so that the most recent security updates are always available.
OS X
Avira Free Mac Security for Mac: Runs on OS X 10.9 and above.
Android and iOS
Avira offers the following security applications for mobile devices running Android and iOS:
Avira Antivirus Security for Android: Free application for Android, runs on versions 2.2 and above.
Avira Antivirus Security Pro for Android: Premium edition for Android, runs on versions 2.2 and above. Available as an upgrade from within the free application; it provides additional safe browsing, hourly updates and free tech support.
Avira Mobile Security for iOS: Free edition for iOS devices, such as iPhone and iPad.
Other products
Avira Phantom VPN: Avira's virtual private network software for Android, iOS, macOS and Windows.
Avira Prime: In April 2017 Avira launched a single-user, multi-device subscription-based product designed to provide a complete set of all Avira products available for the duration of the license along with premium support.
Avira Prime is compatible with Windows, OSX, iOS, and Android operating systems and related devices and is available to consumers in 5- and 25-device editions, dubbed "Avira Prime" and "Avira Prime Unlimited" respectively.
Subscriptions are in 30-day and 1-year increments.
Discontinued platforms
Avira formerly offered free antivirus software for Unix and Linux. That was discontinued in 2013, although updates were supplied until June 2016.
Security vulnerabilities
In 2005, Avira was hit by ACE archive buffer overflow vulnerability. A remote attacker could have exploited this vulnerability by crafting an ACE archive and delivering it via a malicious web page or e-mail. A buffer overflow could occur when Avira scanned the malicious archive. That would have allowed the attacker to execute arbitrary code on the affected system.
In 2010, Avira Management Console was hit by the use-after-free remote code execution vulnerability. The vulnerability allowed remote attackers to execute arbitrary code on vulnerable installations of Avira Management Console. Authentication was not required to exploit the vulnerability.
In 2013, Avira engines were hit by a 0-day vulnerability that allowed attackers to get access to a customer's PC. The bug was found in the avipbb.sys driver file and allowed privilege escalation.
Awards and reviews
In January 2008, Anti-Malware Test Lab gave Avira "gold" status for proactive virus detection and detection/removal of rootkits.
AV-Comparatives awarded Avira its "AV Product of the Year" award in its "Summary Report 2008."
In April 2009, PC Pro awarded Avira Premium Security Suite 9 the maximum six stars and a place on its A-list for Internet security software.
In August 2009, Avira performed at a 98.9% overall malware detection rate, and was the fastest for both on-demand scans and on-access scans conducted by PC World magazine, which ranked it first on its website.
Avira was among the first companies to receive OESIS OK Gold Certification, indicating that both the antispyware and antivirus components of several of its security products achieved the maximum compatibility score with widespread network technologies such as SSL/TLS VPN and Network Access Control from companies including Juniper Networks, Cisco Systems, and SonicWALL.
In February 2010 testing by the firm AV-TEST, Avira tied for first place (with another German company) in the "malware on demand" detection test and earned a 99% score in the "adware/spyware on demand" test.
AV-Comparatives gave Avira its Silver award (for 99.5% detection rate) in its "Summary Report 2010."
For 2012, AV-Comparatives awarded Avira with "gold" status for its 99.6% performance in the "On-Demand Malware Detection" category and classified Avira as a "Top Rated" product overall for that year.
In the AV-Comparatives August 2014 "Real-World Protection Test," with 669 total test cases tried against various security products, Avira tied for first place.
Avira has also received numerous VB100 awards.
AV-Comparatives awarded Avira its "AV Product of the Year" award in its "Summary Report 2016."
See also
Comparison of antivirus software
Comparison of firewalls
Comparison of virtual private network services
References
External links
Auerbach Stiftung (Foundation)
Antivirus software
Freeware
Software companies established in 1986
Computer security software
Computer security software companies
Software companies of Germany
Windows security software
Linux security software
MacOS security software
Android (operating system) software
1986 establishments in West Germany
2020 mergers and acquisitions
2021 mergers and acquisitions
NortonLifeLock acquisitions
Firewall software
German brands
|
36246987
|
https://en.wikipedia.org/wiki/NOS/VE
|
NOS/VE
|
NOS/VE (Network Operating System / Virtual Environment) is a discontinued operating system with time-sharing capabilities, written by Control Data Corporation in the 1980s. It is a virtual memory operating system, employing the 64-bit virtual mode of the CDC Cyber 180 series computers. NOS/VE replaced the earlier NOS and NOS/BE operating systems of the 1970s.
Commands
The command shell interface for NOS/VE is called the System Command Language, or SCL for short. In order to be callable from SCL, command programs must declare their parameters; this permits automatic usage summaries, passing of parameters by name or by position, and type checking on the parameter values. All standard NOS/VE commands further follow a particular naming convention, where the form of the command is verb{_adjective}_noun; these commands can be abbreviated with the first three characters of the verb followed by the first character(s) of all further words. For example, the create_connection command shown below abbreviates to crec.
The catalog separator is the dot, a convention inspired by the way structure members are addressed in various programming languages.
Subsystems like FTP integrate into the command shell. They change the prompt and add commands like get_file. Flow-control statements therefore stay the same, and subsystems can be mixed within procedures (scripts).
Parameters
Commands could take parameters such as the create_connection command:
crec telnet sd='10.1.2.3'
would connect to IP address 10.1.2.3 using the telnet service.
See also
NOS
CDC Kronos
NOS/BE
External links
User's Guide for NOS/VE on the CDC Cyber 960
Computer history - NOS/VE (Most of this information was extracted from a CDC NOS/VE information leaflet)
NOS/VE Operating System
NOS/VE Command Language Syntax
List of NOS/VE Commands
NOS/VE Utilities
CDC operating systems
Discontinued operating systems
Time-sharing operating systems
|
16476621
|
https://en.wikipedia.org/wiki/2893%20Peiroos
|
2893 Peiroos
|
2893 Peiroos is a large Jupiter trojan from the Trojan camp, approximately in diameter. It was discovered on 30 August 1975, by astronomers of the Felix Aguilar Observatory at the Leoncito Astronomical Complex in Argentina. The D-type asteroid has a rotation period of 8.96 hours and belongs to the 40 largest Jupiter trojans. It was named after Peiroos (Peirous) from Greek mythology.
Orbit and classification
Peiroos is a dark Jovian asteroid orbiting in the trailing Trojan camp at Jupiter's Lagrangian point, 60° behind its orbit in a 1:1 resonance (see Trojans in astronomy). It is a non-family asteroid in the Jovian background population. It orbits the Sun at a distance of 4.8–5.5 AU once every 11 years and 8 months (4,269 days; semi-major axis of 5.15 AU). Its orbit has an eccentricity of 0.08 and an inclination of 15° with respect to the ecliptic.
The body's observation arc begins with its first observation at Heidelberg Observatory in January 1933, more than 42 years prior to its official discovery observation at Leoncito.
Physical characteristics
In the Tholen classification, Peiroos has been characterized as a dark D-type asteroid.
Rotation period
In October 1989, a rotational lightcurve of Peiroos was obtained from photometric observations by German and Italian astronomers. Lightcurve analysis gave a well-defined rotation period of 8.96 hours with a brightness variation of 0.30 magnitude. Between 2015 and 2017, photometric observations by Robert Stephens and collaborators at the Center for Solar System Studies in Landers, California, gave two concurring periods of 8.951 and 8.99 hours, both with an amplitude of 0.31 magnitude.
Diameter and albedo
According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Peiroos measures between 86.76 and 87.46 kilometers in diameter and its surface has an albedo between 0.0469 and 0.048.
The Collaborative Asteroid Lightcurve Link derives an albedo of 0.0588 and a diameter of 87.67 kilometers based on an absolute magnitude of 8.98.
Naming
This minor planet was named after Peiroos (Peirous), a Thracian war leader from the city of Aenus and an ally of King Priam who fought courageously to defend Troy against the Greeks during the Trojan War. The official naming citation was published by the Minor Planet Center on 25 September 1988.
Notes
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
002893
002893
Minor planets named from Greek mythology
Named minor planets
002893
19750830
|
30022492
|
https://en.wikipedia.org/wiki/Ultracopier
|
Ultracopier
|
Ultracopier is file-copying software for Windows, macOS, and Linux. It supersedes SuperCopier.
Features
Main features include:
pause/resume transfers
dynamic speed limitation
on-error resume
error/collision management
data security
intelligent reorganization of transfer to optimize performance
plugins
Normal vs Ultimate version:
The source code is exactly the same, and under the same licence
The Ultimate version basically just includes some alternate plugins
All versions are without DRM (this is explicitly banned by the GPLv3 license) and can be redistributed freely.
The difference between SuperCopier and Ultracopier is the skin: SuperCopier is just a CSS skin for Ultracopier (and therefore uses slightly more CPU). Talking about SuperCopier usually refers to v3 or earlier, while talking about Ultracopier refers to SuperCopier v4 and later, which was renamed Ultracopier v1.4.
See also
List of file copying software
FastCopy
GS RichCopy 360
References
External links
https://alternativeto.net/software/ultracopier/
https://www.softpedia.com/reviews/windows/Ultracopier-Review-162037.shtml
http://www.ohloh.net/p/ultracopier
https://mac.softpedia.com/get/Utilities/Ultracopier.shtml
https://doc.ubuntu-fr.org/ultracopier
https://salsa.debian.org/debian/ultracopier.git
2010 software
File copy utilities
Cross-platform free software
Free multilingual software
Free utility software
Utilities for macOS
Software that uses Qt
|
28811
|
https://en.wikipedia.org/wiki/Static%20program%20analysis
|
Static program analysis
|
Static program analysis is the analysis of computer software performed without executing any programs, in contrast with dynamic analysis, which is performed on programs during their execution.
The term is usually applied to analysis performed by an automated tool, with human analysis typically being called "program understanding", program comprehension, or code review. In the last of these, software inspection and software walkthroughs are also used. In most cases the analysis is performed on some version of a program's source code, and, in other cases, on some form of its object code.
Rationale
The sophistication of the analysis performed by tools varies from those that only consider the behaviour of individual statements and declarations, to those that include the complete source code of a program in their analysis. The uses of the information obtained from the analysis vary from highlighting possible coding errors (e.g., the lint tool) to formal methods that mathematically prove properties about a given program (e.g., its behaviour matches that of its specification).
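As a simple, hypothetical illustration of the first kind of use, a lint-style checker can flag the following C fragment without running it, because an assignment appears where a comparison was almost certainly intended:

#include <stdio.h>

/* A lint-style static analyzer typically warns here: the condition uses '='
   (assignment) where '==' (comparison) was probably intended, so the condition
   always evaluates to false and 'count' is overwritten with 0. */
void report(int count)
{
    if (count = 0) {                 /* warning: assignment used as a truth value */
        printf("no items\n");
    } else {
        printf("%d items\n", count);
    }
}

More sophisticated tools extend the same idea across whole functions and programs, as described in the following sections.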
Software metrics and reverse engineering can be described as forms of static analysis. Deriving software metrics and static analysis are increasingly deployed together, especially in creation of embedded systems, by defining so-called software quality objectives.
A growing commercial use of static analysis is in the verification of properties of software used in safety-critical computer systems and
locating potentially vulnerable code. For example, the following industries have identified the use of static code analysis as a means of improving the quality of increasingly sophisticated and complex software:
Medical software: The US Food and Drug Administration (FDA) has identified the use of static analysis for medical devices.
Nuclear software: In the UK the Office for Nuclear Regulation (ONR) recommends the use of static analysis on reactor protection systems.
Aviation software (in combination with dynamic analysis)
Automotive & Machines (Functional safety features form an integral part of each automotive product development phase, ISO 26262, Sec 8.)
A study in 2012 by VDC Research reported that 28.7% of the embedded software engineers surveyed currently use static analysis tools and 39.7% expect to use them within 2 years.
A study from 2010 found that 60% of the interviewed developers in European research projects made use of at least the basic static analyzers built into their IDE. However, only about 10% employed an additional (and perhaps more advanced) analysis tool.
In the application security industry the name Static application security testing (SAST) is also used. SAST is an important part of Security Development Lifecycles (SDLs) such as the SDL defined by Microsoft and a common practice in software companies.
Tool types
The OMG (Object Management Group) published a study regarding the types of software analysis required for software quality measurement and assessment. This document on "How to Deliver Resilient, Secure, Efficient, and Easily Changed IT Systems in Line with CISQ Recommendations" describes three levels of software analysis.
Unit Level Analysis that takes place within a specific program or subroutine, without connecting to the context of that program.
Technology Level Analysis that takes into account interactions between unit programs to get a more holistic and semantic view of the overall program in order to find issues and avoid obvious false positives. For instance, it is possible to statically analyze the Android technology stack to find permission errors.
System Level Analysis that takes into account the interactions between unit programs, but without being limited to one specific technology or programming language.
A further level of software analysis can be defined.
Mission/Business Level Analysis that takes into account the business/mission layer terms, rules and processes that are implemented within the software system for its operation as part of enterprise or program/mission layer activities. These elements are implemented without being limited to one specific technology or programming language and in many cases are distributed across multiple languages, but are statically extracted and analyzed for system understanding for mission assurance.
Formal methods
Formal methods is the term applied to the analysis of software (and computer hardware) whose results are obtained purely through the use of rigorous mathematical methods. The mathematical techniques used include denotational semantics, axiomatic semantics, operational semantics, and abstract interpretation.
By a straightforward reduction to the halting problem, it is possible to prove that (for any Turing complete language), finding all possible run-time errors in an arbitrary program (or more generally any kind of violation of a specification on the final result of a program) is undecidable: there is no mechanical method that can always answer truthfully whether an arbitrary program may or may not exhibit runtime errors. This result dates from the works of Church, Gödel and Turing in the 1930s (see: Halting problem and Rice's theorem). As with many undecidable questions, one can still attempt to give useful approximate solutions.
Some of the implementation techniques of formal static analysis include:
Abstract interpretation, to model the effect that every statement has on the state of an abstract machine (i.e., it 'executes' the software based on the mathematical properties of each statement and declaration). This abstract machine over-approximates the behaviours of the system: the abstract system is thus made simpler to analyze, at the expense of incompleteness (not every property true of the original system is true of the abstract system). If properly done, though, abstract interpretation is sound (every property true of the abstract system can be mapped to a true property of the original system).
Data-flow analysis, a lattice-based technique for gathering information about the possible set of values (a small illustration follows this list);
Hoare logic, a formal system with a set of logical rules for reasoning rigorously about the correctness of computer programs. There is tool support for some programming languages (e.g., the SPARK programming language (a subset of Ada) and the Java Modeling Language—JML—using ESC/Java and ESC/Java2, Frama-C WP (weakest precondition) plugin for the C language extended with ACSL (ANSI/ISO C Specification Language) ).
Model checking, considers systems that have finite state or may be reduced to finite state by abstraction;
Symbolic execution, as used to derive mathematical expressions representing the value of mutated variables at particular points in the code.
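As a small, hypothetical illustration of the kind of fact data-flow analysis establishes, consider the following C function. Without executing it, the analysis tracks at each program point whether the local variable has definitely been assigned, and reports that one path reaches the return statement before any assignment:

#include <stdbool.h>

/* Data-flow analysis records, for every program point, whether 'value' has
   definitely been assigned. On the path where 'enabled' is false it has not,
   so the return statement may read an uninitialized variable. */
int select_mode(bool enabled, int requested)
{
    int value;                       /* no initializer */
    if (enabled)
        value = requested;
    return value;                    /* reported: 'value' may be used uninitialized */
}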
Data-driven static analysis
Data-driven static analysis uses large amounts of code to infer coding rules. For instance, one can use all Java open-source packages on GitHub to learn a good analysis strategy. The rule inference can use machine learning techniques. For instance, it has been shown that when one deviates too much in the way one uses an object-oriented API, it is likely to be a bug. It is also possible to learn from a large amount of past fixes and warnings.
See also
References
Further reading
"Abstract interpretation and static analysis," International Winter School on Semantics and Applications 2003, by David A. Schmidt
External links
Code Quality Improvement - Coding standards conformance checking (DDJ)
Competition on Software Verification (SV-COMP)
Episode 59: Static Code Analysis Interview (Podcast) at Software Engineering Radio
Implementing Automated Governance for Coding Standards Explains why and how to integrate static code analysis into the build process
Integrate static analysis into a software development process
The SAMATE Project, a resource for Automated Static Analysis tools
A hands-on introduction to static code analysis
Program analysis
Software review
Quality assurance
|
46459635
|
https://en.wikipedia.org/wiki/Rebel%20Code
|
Rebel Code
|
Rebel Code: Linux and the Open Source Revolution is a technology book by Glyn Moody published in 2001. It describes the evolution and significance of the free software and open source movements with many interviews with notable hackers.
In a review in The Guardian, Stephen Poole wrote that the open source movement might have the effect of reducing the price people are willing to pay for other products. He also highlighted the inconsistency between the free cost of open source and the price the publishers were asking for the book.
Chris Douce wrote that the book is an "important addition to the genre of writing that will undoubtedly become termed 'pop-computing'". He also wrote that the book raised interesting questions regarding the relationship between technology and culture, as a lot of the early design decisions about the Linux kernel were determined by microprocessors.
Sean Jewett wrote that "Rebel Code, despite some flaws, is a must read for those using Linux. It helps put into perspective the decisions that were made early on, and sheds light on the revolution to come."
References
Basic Books books
Free software
Books about free software
Books about Linux
Technology books
|
34181970
|
https://en.wikipedia.org/wiki/Jasig
|
Jasig
|
Jasig is a non-profit US organization founded by a group of university IT personnel in late 1999 with the stated goal of creating open source computer programs for use in higher education environments, mostly written in the Java programming language. Jasig, “a federation of higher ed institutions interested in open source”, is registered as a US 501(c)3 non-profit organization. The name Jasig is an acronym for Java in Administration Special Interest Group. The founders of Jasig included Carl Jacobson from University of Delaware, David Koehler from Princeton, Bernie Gleason from Boston College, Ted Dodds at the University of British Columbia, Jeffrey Gozdieski and Art Pasquinelli from Sun Microsystems.
Jasig developed uPortal, a portal framework for higher education; Bedework, an enterprise calendar system; CAS, an authentication system and single sign-on service; and “2-3-98” to help raise awareness and adoption of open-source.
Licensing policy
All the software sponsored by Jasig is open source, released under the Apache license.
Community model
Jasig utilizes a community model based on three classes of membership:
institutional members
partners
affiliates
Each type of membership assumes a different role in the organization. Institutional members tend to be colleges or universities that use Jasig commissioned software. Partners tend to be commercial entities who have some vested interest in Jasig software. Affiliates are similar to partners, but have a lower level of commitment to the organization.
Partners
Jasig has worked with a variety of commercial entities in the development and support of its various technologies and software.
Unicon:
Offers an Open Source Cooperative Support Program for Central Authentication Service (CAS).
Unicon is a Jasig contributor and offers services for uPortal in the areas of deployment, customization, integration, and general support for education organizations.
Software projects
Jasig sponsors four main software projects, and one community project:
uMobile: delivers educational content to mobile devices.
uPortal: an enterprise portal framework.
CAS (Central Authentication Service): allows students and faculty to sign into multiple websites with a single sign on.
Bedework: a calendaring system.
The 2-3-98 Project: a community project that assists college faculty and staff in moving proprietary systems to open source alternatives.
Funding
Jasig's primary means of funding are through membership fees, sponsorships, and donations. The organization also relies on volunteers to assist in other non-monetary ways such as writing computer programs, writing documentation etc.
Activities
Jasig holds an annual conference spotlighting open source in education. This annual event often coincides with other conferences dedicated to the development and adoption of not only open source applications, but technology generally.
Merger with Sakai Foundation
In 2010, Jasig entered into talks with the Sakai Foundation to merge the two organizations. The two organizations were consolidated as Apereo Foundation in December 2012.
References
Free and open-source software organizations
Organizations established in 1999
Organizations based in Denver
1999 establishments in Colorado
|
47323216
|
https://en.wikipedia.org/wiki/NDCRTC
|
NDCRTC
|
National Digital Crime Resource & Training Centre (NDCRTC) is a centre functioning under the IT wing of SVP National Police Academy (SVPNPA) with the objective of capacity building of Law Enforcement Agencies in Cyber Crime Investigation. The Centre provides training to officers from State Police, CPOs and other LEAs of the Central Government, Digital Forensic Experts of CFSL and State FSLs, Judges and Prosecutors. The main objective of the Centre is to build capacity in the field of Cyber Crime Investigation and to create Master Trainers in this field among all wings of the Criminal Justice System of the country, in association with all agencies of the State as well as the Central Government working in this field.
The Centre is governed by a Governing Body headed by the Director, NPA. The Governing Body is assisted by a Technical Advisory Body composed of domain experts from the Police, other LEAs, Forensic Science Departments and industry. The main focus of the Centre at present is on Digital Forensics. However, in future, it will also focus on the IT Act, Digital Evidence, Malware Analysis and Cyber Security. The long-term vision is to make the Centre a "Centre of Excellence" in the field of Cyber Crime Investigation training in the country.
Objective
The Vision of NDCRTC is to be a National Centre of Excellence for training in the field of Digital Crime investigation and Cyber Forensics. It intends to achieve this vision through the establishment of a governance structure designed specifically to provide for autonomy, flexibility, accountability and for building production interfaces with various police organisations, Judiciary, forensic laboratories, Academia, and the industry.
Achievements
The team has provided hands-on training on disk forensics, mobile forensics & CDR analysis, Windows forensics, Internet-based crimes, open-source intelligence and social media analysis. The team has developed some powerful tools for hashing, registry analysis and CDR analysis to ease up the process of Forensic analysis.
NDCRTC won the Excellence Award in the category of Law Enforcement Capacity Building by DSCI at the Annual Information Security Summit 2016 (AISS2016)
Trainings by NDCRTC in SVP National Police Academy
02 days Course on ‘Dark web for LEAs’
3-day Course on "Network & Cloud Forensics"
3 day Course on 'Dark web & Crypto Currency'
03-days course on "OSINT & Social Media Analysis"
03 Days Course on "Investigation of Digital Payments Frauds"
Three days Course on 'Cyber Crime & Cyber Law' for Judicial Officers & Public Prosecutors
5-day Course on 'Cyber Crime Investigation'
5 day Advance Course on "Cyber Crime, Dark Web and investigation of Crypto Currencies"
5 Day Course on ''Disc Forensics''
Trainings provided by NDCRTC on Cyber Crime Investigation & Digital Forensics to Officers from Foreign Nations
Sierra Leone Police
Sri Lankan Police
South Sudan Police
NDCRTC on Social Media
NDCRTC on Facebook
NDCRTC on Twitter
Sources
About NDCRTC
Article from The Hindu
Article from NDTV
Article from The New Indian Express
Article from Hans India
Article from United News of India
Article from Amar Ujala
Article from The Hindu
Article from New Indian Express
Article from The Times of India
Article from The Times of India
Article from Odisha TV
Article from Telegraph India
Article from The Pioneer
Article from Sambad
Article from News Live
See also
Ministry of Home Affairs (MHA)
Sardar Vallabhbhai Patel National Police Academy (SVPNPA)
References
E-government in India
Law enforcement in India
Criminology
Hyderabad, India
|
667624
|
https://en.wikipedia.org/wiki/Mindscape%20%28company%29
|
Mindscape (company)
|
Mindscape was a French (previously American) video game developer and publisher based in Boulogne-Billancourt. The company was founded by Roger Buoy in October 1983 in Northbrook, Illinois, originally as part of SFN Companies until a management buyout was completed in 1987. Mindscape went public in 1988 and was subsequently acquired in 1990 by The Software Toolworks, eyeing Mindscape's Nintendo license. When Toolworks was acquired by Pearson plc in 1994, Mindscape became the primary identity for the development group. Mindscape was then sold to The Learning Company in 1998 and bought out by Jean-Pierre Nordman in 2001. Following the poor performance of its products, Mindscape exited the video game industry in August 2011. Notable titles released by Mindscape include the MacVenture series, Balance of Power, Moonstone: A Hard Days Knight, Legend, Warhammer: Shadow of the Horned Rat, Warhammer: Dark Omen and Lego Island.
History
Early years (1983–1988)
Mindscape was founded in October 1983 as a wholly owned subsidiary of holding company SFN Companies. Mindscape's founder, Australian entrepreneur Roger Buoy, had previously been a computer analyst for Rolls-Royce Limited and later worked for the software division of Scholastic Inc., before being hired by SFN in October 1983 to set up Mindscape. For Mindscape, Buoy acted as president and chief executive officer (CEO). Mindscape released its first product in April 1984. Early games published by the company include Déjà Vu, Balance of Power, and Sub Mission: A Matter of Life and Death. In its early years, Mindscape lost about annually.
In July 1986, Mindscape acquired the assets of Scarborough Systems, a software company from Tarrytown, New York. Scarborough Systems continued its operations through Lifeboat Assoc., a subsidiary that was not acquired by Mindscape. In October, SFN announced that it would be selling or closing large parts of its business, including plans to liquidate Mindscape. On December 31, Mindscape also acquired the assets of Roslyn, New York-based company Learning Well. Because Mindscape was not liquidated by the end of 1986, it was assigned to SFN Partners L.P., a limited partnership company. A new corporation set up by Buoy and SFN's former president and chairman, John Purcell, subsequently acquired Mindscape from SFN Partners on January 16, 1987, for . Buoy retained his positions in the company, while Purcell became its chairman. At this point, Mindscape had 74 employees.
With sales of , Mindscape had become profitable for the first time in the fourth quarter of 1986; it was operating in the black by 1987. In March 1987, Mindscape acquired the software division of Holt, Rinehart and Winston, formerly known as CBS Interactive Learning, with all operations moved to Mindscape's Northbrook, Illinois, headquarters. By June 1988, Mindscape filed with the U.S. Securities and Exchange Commission to prepare an initial public offering (IPO) and become a public company. The move aimed at raising through sale of stock to reduce its bank loan debts of . The IPO was completed that same month, with the company commencing trading over-the-counter, and the first shares were issued by July. Bob Ingersoll and Dennis O'Malley were appointed vice president (VP) of marketing and VP of sales, respectively, in May 1987. In November, Mindscape signed a lease of of office space in Wheeling, Illinois, for . Robert A. Drell, formerly of Dresher Inc., became VP of finance and chief financial officer in October 1988.
Under The Software Toolworks and Pearson (1989–1997)
In December 1989, video game company The Software Toolworks reached an agreement to acquire Mindscape, exchanging every Mindscape share for 0.4375 of a share in newly issued Toolworks common stock. The deal was completed on March 13, 1990, and valued at . Mindscape had been one of the approximately forty companies licensed to develop for Nintendo video game platforms, which was a major reason for the acquisition. The two companies merged, and Buoy joined Les Crane on Toolworks' company board. Following the acquisition, Mindscape became Toolworks' division working exclusively on games for Nintendo platforms, which sharply increased Toolworks' earnings. Subsequently, in March 1994, Pearson plc agreed to acquire Toolworks for , with the deal closing on May 12, 1994.
Pearson was criticized for overpaying in the acquisition, and the acquired company lost in its early years under Pearson. By November 1994, Toolworks had assumed the Mindscape identity. The same year, Mindscape acquired video game developer Strategic Simulations. In September 1995, it acquired Micrologic Software from Emeryville, California, on undisclosed terms. In January 1996, John F. Moore became CEO after leaving the same position at Western Publishing. In November, it laid off twelve development staff as a cost reduction measure. In 1997, Mindscape acquired software company Multimedia Design. In its final year under Pearson, 1997, Mindscape became profitable again, generating .
Under The Learning Company and later years (1998–2011)
Pearson proceeded to sell Mindscape to The Learning Company (TLC) in March 1998 for in cash and stock. A waiting period was temporarily imposed by the Federal Trade Commission and subsequently terminated the same month. TLC expected that its stock would rise per share as a result of the acquisition, while Pearson lost around . Later that year, when TLC integrated its Broderbund division, Mindscape took over Broderbund's productivity, reference and entertainment brands. TLC was eventually acquired by Mattel in May 1999 and became a subsidiary of the company's Mattel Media division, later renamed Mattel Interactive. By then, Mattel occasionally used the Mindscape name for publishing.
TLC and Mattel Interactive's gaming assets were acquired by Gores Technology Group in 2000 and its game brands were reformed under a new entity, Game Studios, in January 2001. The same year, former TLC-Edusoft executive Jean-Pierre Nordman bought out Mindscape from TLC, installing it as a separate entity in Boulogne-Billancourt, a suburb of Paris, France, and assuming a managerial role.
In October 2005, French video game developer and publisher Coktel Vision was sold to Mindscape, wherein eleven Coktel employees were absorbed into Mindscape. The Coktel brand name, however, was retained by Mindscape for many years afterwards; its history officially ended in 2011 when Mindscape closed.
By December 2009, Thierry Bensoussan had become the managing director for Mindscape. The company opened an internal development studio, Punchers Impact, in Paris to develop multi-platform digital download games. The studio's managers, Guillaume Descamps and Jérôme Amouyal, left the studio less than a year later, in September 2010, to found a new studio, Birdies Road. Punchers Impact developed two games: Crasher, a racing game, and U-Sing, a music game. U-Sing performed well at retail, but the cost of music licenses for the game had a severe impact on its revenue, while Crasher underperformed in general. As a result, Mindscape announced on August 10, 2011, that it had closed Punchers Impact and laid off its forty employees, and that Mindscape itself would effectively exit the video game industry. Some regional subsidiaries, such as Mindscape Asia-Pacific in Sydney, Australia, continued operating in the video game business as entities independent from Mindscape.
Software developed and/or published
Balance of Power (1985)
Deja Vu (1985)
American Challenge: A Sailing Simulation (1986)
Harrier Combat Simulator (1986)
James Bond 007: Goldfinger (1986)
Uninvited (1986)
Shadowgate (1987)
Mavis Beacon Teaches Typing (1987)
Road Runner (Commodore 64, MS-DOS) (United States, Canada) (1987)
Visions of Aftermath: The Boomtown (PC) (1988)
Willow (Amiga, Atari ST, Commodore 64, MS-DOS) (1988)
The Colony (1988)
Indiana Jones and the Temple of Doom (NES) (1988)
Paperboy (NES, Game Boy) (1988, 1990)
Fiendish Freddy's Big Top O'Fun (Amiga, ZX Spectrum, Commodore 64, Amstrad CPC) (1989)
Prince of Persia (1989)
Captive (1990)
SimEarth (1990)
Mad Max (NES) (1990)
SimAnt (1991)
Moonstone: A Hard Days Knight (1991)
Knightmare (1991)
Captain America and The Avengers (SNES + Handheld games ver.) (1991)
Captain Planet and the Planeteers (1991)
Gods (1991)
D/Generation (1991)
Contraption Zack (1992)
SimLife (1992)
Outlander (1992)
The Terminator (NES) (1992)
Legend (aka The Four Crystals of Trazere) (1992)
Worlds of Legend: Son of the Empire (1993)
Prince of Persia 2: The Shadow and the Flame (1993)
Wing Commander (SNES) (1993)
Super Battleship (1993)
Star Wars Chess (1993)
Metal Marines (1993)
Dragon Lore: The Legend Begins (1994)
Liberation: Captive 2 (Amiga, Amiga CD32) (1994)
Aliens: A Comic Book Adventure (MS-DOS) (1995)
Cyberspeed (PC [unreleased], PlayStation) (1995)
Warhammer: Shadow of the Horned Rat (1995)
Pool Champion (1995)
Angel Devoid: Face of the Enemy (1996)
Azrael's Tear (1996)
Starwinder (1996)
Steel Harbinger (1996)
Counter Action (1997)
Lego Island (PC) (1997)
Aaron Vs. Ruth (1997)
John Saul's Blackstone Chronicles (1998)
Warhammer: Dark Omen (1998)
Prince of Persia 3D (1999)
Rat Attack! (1999)
Billy Hatcher and the Giant Egg (PC) (2006)
Golden Balls (2008)
References
External links
Mindscape at Giant Bomb
Mindscape at MobyGames
Mindscape at IGDB.co
Novato, California
Video game companies established in 1983
Video game companies disestablished in 2011
Defunct video game companies of the United States
Defunct video game companies of France
Video game development companies
Video game publishers
Mattel
1983 establishments in Illinois
|
54968454
|
https://en.wikipedia.org/wiki/ActivityPub
|
ActivityPub
|
ActivityPub is an open, decentralized social networking protocol based on Pump.io's ActivityPump protocol. It provides a client/server API for creating, updating, and deleting content, as well as a federated server-to-server API for delivering notifications and content.
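The shape of that API can be illustrated with a minimal, hypothetical client-to-server request. The sketch below (in Python, using only the standard library) posts a "Create" activity for a short note to an actor's outbox; the server, actor and token are placeholder values, and the activity vocabulary ("Create", "Note", the @context URL) comes from the ActivityStreams standard that ActivityPub builds on.

```python
import json
import urllib.request

# Hypothetical ActivityPub client-to-server request: submit a "Create" activity
# for a short note to the posting user's outbox. All endpoints and credentials
# below are placeholders, not real services.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://social.example/users/alice",
    "object": {
        "type": "Note",
        "content": "Hello, fediverse!",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}

request = urllib.request.Request(
    "https://social.example/users/alice/outbox",   # placeholder outbox endpoint
    data=json.dumps(activity).encode("utf-8"),
    headers={
        "Content-Type": "application/activity+json",
        "Authorization": "Bearer <access-token>",  # placeholder credential
    },
    method="POST",
)
# urllib.request.urlopen(request) would submit the activity; delivering it to
# followers on other servers is then handled by the server-to-server protocol.
```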
Project status
ActivityPub is an Internet standard developed in the Social Web Working Group of the World Wide Web Consortium (W3C). At an earlier stage, the name of the protocol was "ActivityPump", but it was felt that ActivityPub better indicated the cross-publishing purpose of the protocol. It drew on the experience gained with the older standard called OStatus.
In January 2018, the World Wide Web Consortium (W3C) published the ActivityPub standard as a Recommendation.
The W3C Social Community Group organizes a yearly free conference about the future of ActivityPub.
Former Diaspora community manager Sean Tilley wrote an article that suggests ActivityPub protocols may eventually provide a way to federate Internet platforms.
Notable implementations
Federated (server-to-server) protocol
Friendica, a piece of social networking software, implemented ActivityPub in version 2019.01.
Lemmy, a Reddit-like link aggregator social network
Mastodon, a piece of social networking software, implemented ActivityPub in version 1.6, released on 10 September 2017. It is intended that ActivityPub offers more security for private messages than the previous OStatus protocol does.
Nextcloud, a federated service for file hosting.
PeerTube, a federated service for video streaming.
Pleroma, a social networking software, implemented ActivityPub
See also
Activity Streams
Comparison of software and protocols for distributed social networking
Comparison of microblogging services
Fediverse
Micropub
References
External links
2018 introductions
Distributed computing
Microblogging software
Social software
Web applications
|
18166468
|
https://en.wikipedia.org/wiki/List%20of%20devices%20that%20run%20MontaVista%20Linux
|
List of devices that run MontaVista Linux
|
This is an incomplete list of embedded devices that run MontaVista Linux: electronic devices with limited internal computers whose main operating system is based on MontaVista's distribution of the open-source Linux operating system.
Digital Televisions
Philips Aurea and selected ambiLight models
Sony Bravia models from 2005 and earlier
selected models from Samsung, Panasonic, Sharp and Mitsubishi
Digital Video Recorders and Set-Top Boxes
Sony DHG-HDD250
Sony DHG-HDD500
eBook Readers
Sony LIBRIé EBR-1000
Sony PRS-505
Sony PRS-700
Sony PRS-300
Sony PRS-600
Sony PRS-900
VoIP Phones
D-Link DPH-125MS
Mobile Phones
Motorola A760
Motorola A768
Motorola A768i
Motorola A780
Motorola A910
Motorola E680
Motorola E680i
Motorola MING
Motorola RAZR2
Motorola ROKR E2
Motorola ROKR E6
Motorola RIZR Z6
Motorola ZN5
NEC N900iL
NEC N901iC
Panasonic P901i
Musical Instruments
Yamaha MOTIF XS music production synthesiser, Yamaha Motif-Rack XS tone module, and Yamaha S90XS synthesizer.
Network Attached Storage (NAS)
Seagate Central STCG2000100
Seagate Business Storage STBN8000200
SMC TigerStore SMCNAS02
SMC TigerStore SMCNAS04
Notebooks
Dell Latitude E4200
Dell Latitude E4300
Dell Latitude Z600
Routers
D-Link G604T Network Adaptor
D-Link G624T Router
D-Link G664T
D-Link G684T ADSL2+/WiFi
Linksys WAG200G ADSL2+/WiFi
ACORP Sprinter@ADSL LAN120M
Cable Modems
all DOCSIS/EuroDOCSIS 3.0 cable modems based on Intel Puma5 chipset
Traffic Signal Control
Peek Traffic PTC-1
Telecom Equipment
Alcatel-Lucent
Brocade SAN switches
Ericsson
Fujitsu
Iskratel SI 2000 call server
Microsemi SyncServer
Motorola WiMax CPE i775
NEC
Cisco Application Control Engine module
Cisco Nexus switches running NX-OS
Avaya Aura Session Border Controller (SBC)
Cyclades ACS
IP Cameras
Aviosys IP Kamera 9070 series (TI Davinci DM355 board)
Other
Spirent Testcenter
APC by Schneider Electric IP KVM - The AP5405 remote Internet Protocol keyboard/video/mouse controller allows 16 servers to be accessed over a TCP/IP network.
Philips iPronto remote controller
St. Jude Medical Merlin patient care system
Texas Instruments announced using MontaVista Linux as the supported operating system for their system on a chip platform, Texas Instruments DaVinci. MVL4 and MVL5 were used for the first and second software development kit series until TI decided for a less commercial approach with the third edition of their software development kit.
The terminals used for the National Lottery and EuroMillions games in the Republic of Ireland are based on MontaVista Linux and use a Java client, as do most other newer GTECH Corporation Altura terminals.
SEGA Lindbergh hardware for arcade gaming
Clarion NX603 multimedia head unit
Phenom (electron microscope) (first generation - later switched to other distributions)
British Telecom ITS.Netrix dealerboards
Canon imageRUNNER Advance C5051i multi-function printer (MFP)
Sony DVS, MVS & MVS-X Production Switchers
Hewlett-Packard Designjet large-format printers (z3200 series)
References
Embedded Linux
|
6903632
|
https://en.wikipedia.org/wiki/Adempiere
|
Adempiere
|
ADempiere is an Enterprise Resource Planning or ERP software package released under a free software license. The word adempiere in Italian means "to fulfill" or "to accomplish".
The software is licensed under the GNU General Public License.
History
The ADempiere project was created in September 2006. Disagreement between the open-source developer community that formed around the Compiere open-source ERP software and the project's corporate sponsor ultimately led to the creation of Adempiere as a fork of Compiere.
Within weeks of the fork, ADempiere reached the top five of the SourceForge.net rankings. This ranking provides a measure of both the size of its developer community and also its impact on the open source ERP software market.
The project name comes from an Italian word which means "satisfy", but with the additional senses of "complete, reach, practice, perform tasks, or release; also, give honor, respect", which were considered appropriate to what the project aimed to achieve.
Goals of this project
The goal of the Adempiere project is the creation of a community-developed and supported open source business solution. The Adempiere community follows the open-source model of the Bazaar described in Eric Raymond's article The Cathedral and the Bazaar.
Business functionality
The following business areas are addressed by the Adempiere application:
Enterprise Resource Planning (ERP)
Supply Chain Management (SCM)
Customer Relationship Management (CRM)
Financial Performance Analysis
Integrated Point of sale (POS) solution
Cost Engine for different Cost types
Two different production workflows (light and complex), which include order batch processing and Material Requirements Planning (or Manufacturing Resource Planning).
Project structure
All community members are entitled to their say in the project discussion forums. For practical purposes, the project is governed by a council of contributors. A leader is nominated from this council to act as overall project manager. The role of the Adempiere Council is to:
Support decisions of the leader.
Accept contributions.
Define the roadmap.
Review and approve specifications.
Vote for new functionalities.
Approve changes to core.
Technology
Adempiere is developed with Java EE technology, specifically utilizing Apache Tomcat and the JBoss application server. Currently database support is restricted to PostgreSQL and Oracle.
Architecture
Adempiere inherited the Data Dictionary from the Compiere project. This architecture extends the Data Dictionary concept into the application; thus the application's entities, their validation rules and screen layout can be controlled from within the application itself. In practice, this means that customization of the application can be done without new coding.
A Workflow Management Coalition and Object Management Group standards based workflow engine is utilized to provide Business Process Management. These features allow for the rapid customization of the application to a business's needs.
See also
Compiere, iDempiere, metasfresh, Openbravo (Compiere source code family)
List of ERP software packages
List of ERP vendors
List of free and open source software packages
Forks
iDempiere - modularized the code through the OSGi framework, allowing a plugin architecture.
metasfresh - originally based on ADempiere, developed in Germany.
References
Notes
Top Open Source ERPs
Heise Online -Technology News Portal
GudangLinux note
LinuxPR note
InfoWorld article
Full Open Source compliance and Database independence, one step closer with Adempiere first release
Compiere User Community Splits; Code Forks
External links
Official Community website
Free customer relationship management software
Free ERP software
Free software programmed in Java (programming language)
Software forks
Enterprise resource planning software for Linux
|
67814886
|
https://en.wikipedia.org/wiki/Chiswick%20Mall
|
Chiswick Mall
|
Chiswick Mall is a waterfront street on the north bank of the river Thames in the oldest part of Chiswick in West London, with a row of large houses from the Georgian and Victorian eras overlooking the street on the north side, and their gardens on the other side of the street beside the river and Chiswick Eyot.
While the area was once populated by fishermen, boatbuilders and other tradespeople associated with the river, since Early Modern times it has increasingly been a place where the wealthy built imposing houses in the riverside setting.
Many of the houses are older than they appear, as they were given new facades in the 18th or 19th centuries rather than being completely rebuilt; among them is the largest, Walpole House. St Nicholas Church, Chiswick lies at the western end; the eastern end reaches to Hammersmith. The street, which contains numerous listed buildings, partially floods at high water in spring tides.
The street has been represented in paintings by artists such as Lucien Pissarro and Walter Bayes; in literature, in Thackeray's novel Vanity Fair; and in film and television, including in the 1955 Breakaway, the 1961 Victim, and the 1992 Howards End.
History
Early origins
Chiswick grew as a village in Anglo-Saxon times from smaller settlements dating back to Mesolithic times in the prehistoric era. Roman roads running east-west along the lines of the modern Chiswick High Road and Wellesley Road met some 500 metres north of Chiswick Mall; the High Road was for centuries the main road westwards from London, while goods were carried along the river Thames.
St Nicholas Church, Chiswick was built in the twelfth century, and by 1181, the settlement of Chiswick had grown up "immediately east" of the church. The prebendal manor house belonging to the church was founded circa 1100 as a stone building; it was demolished around 1710, and is now the site of College House and other buildings. Local trades included farming, fishing, boatbuilding, and operating a ferry across the river.
Changing land use
The area was described by John Bowack, writing-master at Westminster School, in 1705; he wrote that "the greatest number of houses are stretched along the waterside from the Lyme kiln near Hammersmith to the Church, in which dwell several small traders, but for the most part fishermen and watermen who make up a considerable part of the inhabitants of this town."
The street was part of a rural riverside village until late in the nineteenth century; the 1865 Ordnance Survey map shows orchards to the north and west of Old Chiswick. The area to the north had become built up with streets of terraced housing by 1913, as shown on the Ordnance Survey map of that date. The Great West Road crossed Brentford in 1925 and Chiswick in the 1950s, passing immediately north of Old Chiswick and severing it from the newer commercial and residential centre around Chiswick High Road.
The naturalist Charles John Cornish, who lived in Orford House on Chiswick Mall, wrote in 1902 that the river bank beside Chiswick Eyot had once been a "famous fishery"; he recorded that "perhaps the last" salmon was caught between the eyot and Putney in 1812, and expressed the hope that if the "purification" of the river continued, the salmon might return.
Setting
Chiswick Mall is a waterfront street on the north bank of the river in the oldest part of Chiswick. It consists of a row of "grand houses" providing "Old Chiswick's main architectural distinction"; the street has changed relatively little since 1918. The houses, mainly from the Georgian and Victorian eras, overlook the street on the north side; their gardens are on the other side of the street, beside the river. St Nicholas Church, Chiswick lies at the western end; the eastern end reaches to Hammersmith. Just to the north of the row of grand houses is Fuller's Brewery, giving the area an industrial context. The street and the gardens partially flood at high water in spring tides. Chiswick Mall forms part of the Old Chiswick conservation area; the borough's appraisal of the conservation area describes it as "a remnant of a riverside village for wealthy landowners". The name "Mall" was most likely added in the early 19th century after the model of the fashionable Pall Mall in Westminster.
Grand houses
Medieval period
British History Online states that the prebendal manor house and its medieval neighbours must have been reached by a road that ran eastwards for an unknown distance beside the river from the ferry, and that this eventually became Chiswick Mall.
In 1470, Robert Stillington, Chancellor of England and bishop of Bath and Wells had a "hospice" with a "great chamber" by the Thames in Chiswick.
Early Modern period
The vicarage house at the corner of Church Street and Chiswick Mall had been built by 1589–90. The prebendal manor house was extended to accommodate Westminster School in around 1570. There appears to have been a group of "imposing" houses on Chiswick Mall in the Early Modern period, including a large house on the site of Walpole House, since by 1706 John Bowack wrote of "very ancient" houses beside the river at Chiswick. A house on the site of Bedford House was inhabited by the Russell family in around 1664; it and others nearby were later rebuilt.
17th century
The largest, one of the finest, and most complicated of the grand houses on Chiswick Mall is the Grade I Walpole House. Parts of it, behind the later facade, were according to the historian of buildings Nikolaus Pevsner constructed late in the Tudor era, whereas the visible parts are late 17th and early 18th century. It has three storeys, of brown bricks with red brick dressings. The front door is in a porch with Corinthian pilasters standing on plinths; above is an entablature. Its windows have double-hung sashes topped with flat arches. In front of the house is an elegant Grade II* listed screen and wrought iron gate; the gateposts are topped with globes. Its garden is listed in the English Heritage Register of Parks and Gardens.
Walpole House was the home of Barbara Villiers, Duchess of Cleveland, a mistress of King Charles II, until her death in 1709; it was later inherited by Thomas Walpole, for whom it is now named. From 1785 to 1794 it served as a boarding house; one of its lodgers was the Irish politician Daniel O'Connell. In the early 19th century it became a boys' school, its pupils including William Makepeace Thackeray. The actor-manager Herbert Beerbohm Tree owned the house at the start of the 20th century. It was then bought by the merchant banker Robin Benson; over several generations the Benson family designed and then restored the garden.
18th century
Morton House was built in 1726 of brown bricks. Its garden is listed in the English Heritage Register of Parks and Gardens. Its former owner, Sir Percy Harris, had a relief sculpture depicting the resurrection of the dead made by Edward Bainbridge Copnall for the garden in the 1920s; the sculpture now serves as his tombstone a short distance away in St Nicholas Churchyard.
The Grade II* Strawberry House was built early in the 18th century and given a new front of red bricks with red dressings around 1730. It has two main storeys with a brick attic above. The front doorway is round-headed; it has a door with six panels, topped with a fanlight decorated with complex tracery. The door is in a porch with cast iron columns; above the porch is a balcony of wrought iron. At the back of the house on the first floor is an oriel window. Its garden is listed in the English Heritage Register of Parks and Gardens. Arabella Lennox-Boyd suggested that the garden was used by the botanist Joseph Banks to grow the plant species he had discovered. The walled garden was remodelled in the 1920s by the house's then owner, Howard Wilkinson and his son the stage designer Norman Wilkinson.
The former Red Lion inn, now called Red Lion House, was built of brick; it was licensed as an inn by 1722 for Thomas Mawson's brewery just behind the row of houses, now the Griffin Brewery. It was conveniently placed to attract passing trade from thirsty workers from the Mall's draw dock, where boats unloaded goods including hops for the brewery and rope and timber for the local boatbuilders. Inside, it has a handsome staircase; its sitting room is equipped with two fireplaces and decorated above with a frieze in plasterwork showing bowls of punch. At a later date it was given a stucco facade with a six-panelled door under a fanlight, and double-hung sash windows with surrounds.
Of the same period is the Grade II pair of three-storey brown brick houses, Lingard House, with a dormer, and Thames View. Above their doors are doorhoods supported by brackets. Also early 18th century is the brown brick with red dressings Grade II Woodroffe House, which was at that time of two storeys; its third storey was added late in that century.
A large house, started in 1665 as the house of the Russell family, then the earls of Bedford, but with an 18th century front, is now divided into Eynham House and the Grade II* Bedford house. The latter has a Grade II gazebo in its garden. The actor Michael Redgrave lived in Bedford House from 1945 to 1954.
The Grade II Cedar House and Swan House were built late in the 18th century; both are three storey buildings of brown brick. Their windows are double-hung sashes with flat arches.
Two more three-storey Grade II 18th century houses are those named Thamescote and Magnolia; the latter has glazing bars on its windows, with iron balconettes on its second floor.
The house called The Osiers was built late in the 18th century but has a newer facade.
19th century
An early 19th century pair are the Grade II, three-storey Riverside House and Cygnet House. They are built of brown bricks and have porches with a trellis. Another Grade II house of the same era is the two-storey Oak Cottage; it has a stucco facade, a moulded cornice, and pineapple finials on its parapet.
Another pair of the same period are the Grade II Island House and Norfolk House. They are faced in stucco, and have three storeys and double-hung sash windows. Their basements have a rusticated facade, while the grand first and second storeys are adorned with large Corinthian pilasters. At the centre, they have paired Ionic columns supporting a balcony with a balustrade; to either side, the windows on the first floor are adorned with pilasters and topped with a pediment.
The houses called Orford House and The Tides are a Grade II pair; they were built by John Belcher in 1886. Orford House has timber framing in its gables, while The Tides has hanging tiling there. Belcher also designed Greenash in Arts and Crafts style, with tall chimneys and high gables, for the local shipbuilder, Sir John Thorneycroft.
The medieval prebendal manor house was replaced in 1875 with a row of houses. They are decorated with many architectural details such as fruity swags. They are not all in a line, but all are the same height with a balustrade along the parapet.
20th century
Dan Mason, owner of the Chiswick Soap Company, bought Rothbury House, at the eastern end of Chiswick Mall, in 1911 for the Chiswick Cottage Hospital. The house was used for staff quarters, administration, and kitchens. The main hospital block was built in the ¾ acre garden; it had two ten-person wards on the ground floor, one male, one female, and a ward for twelve children upstairs; the whole hospital was constructed and equipped at Mason's expense. A third building housed the outpatients department. The main entrance was on Netheravon Road (to the north), with a second entrance on the Mall. By 1936 the buildings were obsolete, and Mason's nephew, also called Dan Mason, laid the foundation stone for a more modern hospital on 29 February 1936; the new building was finished by 1940. In 1943 it was requisitioned by the Ministry of Health, and it became the Chiswick Maternity Hospital. This closed in 1975. It was then used for accommodation for Charing Cross Hospital, and as a film set, including for the BBC TV series Bergerac and Not the Nine O'Clock News. From 1986 it served as Chiswick Lodge, a nursing home for patients with dementia or motor neurone disease; it closed in 2006, and the building was demolished in 2010, to be replaced by housing.
In culture
In painting
The street has been depicted by a variety of artists. The Tate Gallery holds a 1974 intaglio print on paper by the artist Julian Trevelyan entitled Chiswick Mall. Around 1928, the musician James Brown made an oil painting entitled Chiswick Mall from Island House; he had been tutored in oil painting by the impressionist painter Lucien Pissarro, who lived for a time in Chiswick. The Victoria and Albert Museum has a 1940 pen and ink and watercolour painting by the London Group artist Walter Bayes with the same title. Mary Fedden, the first woman to teach painting at the Royal College of Art, made an oil on board painting called Chiswick Mall of a woman feeding geese just in front of Chiswick Eyot.
In literature
In English literature, the street features in the first chapter of Thackeray's 1847–48 novel Vanity Fair, which opens at Miss Pinkerton's academy for young ladies on Chiswick Mall.
In film
Henry Cass's 1955 detective thriller film Breakaway involves a houseboat on Chiswick Mall; the Rolls-Royce driven by 'Duke' Martin (Tom Conway) stops in front of the Mill Bakery, now Miller's Court. The 1961 thriller Victim, set in Chiswick, has its barrister protagonist, Melville Farr, played by Dirk Bogarde, living on Chiswick Mall; Melville walks through St Nicholas Churchyard, and meets his wife Laura (Sylvia Syms) in front of his house. William Nunez's 2021 The Laureate, about the war poet Robert Graves (Tom Hughes), features a barge on Chiswick Mall.
The scene in the 1992 Merchant Ivory film of E. M. Forster's Howards End, where Margaret (Emma Thompson) and Helen (Helena Bonham Carter) stroll with Henry (Anthony Hopkins) in the evening, was shot on Chiswick Mall.
Series One of the BBC's The Apprentice was filmed in the first-floor drawing room of the "Galleon Wing" extension of Sir Nigel Playfair's Said House; the room features a large curved plate-glass window giving views up and down the river.
Open Gardens
Some of the properties on Chiswick Mall, including Bedford, Eynham, and Woodroffe Houses, from time to time offer access to their private gardens on the National Garden Scheme "Open Gardens" days.
Notes
References
General sources
External links
Panorama of the Thames - view of Chiswick Mall, starting from St Nicholas Church
Streets in the London Borough of Hounslow
|
39475323
|
https://en.wikipedia.org/wiki/Tom%20Preston-Werner
|
Tom Preston-Werner
|
Thomas Preston-Werner (born October 28, 1979) is an American billionaire software developer and entrepreneur. He is an active contributor within the free and open-source software community, most prominently in the San Francisco Bay Area, where he lives.
He is best known as the founder and former CEO of GitHub, a Git repository web-based hosting service, which he co-founded in 2008 with Chris Wanstrath and P. J. Hyett. He resigned from GitHub in 2014 when an internal investigation concluded that he and his wife harassed an employee. Preston-Werner is also the creator of the avatar service Gravatar, the TOML configuration file format, the static site generator software Jekyll, and the Semantic Versioning Specification (SemVer).
Early life
Preston-Werner grew up in Dubuque, Iowa. His father died when he was a child. His mother was a teacher and his stepfather was an engineer.
He graduated high school at Dubuque Senior High School and attended Harvey Mudd College in Claremont, California for 2 years before dropping out to pursue other endeavours. He realized that he enjoyed programming far more than the math that was the core of his physics studies.
Influence
As an active contributor to the open-source developer and hacker culture, most prominently in areas involving the programming language Ruby, he has written articles regarding his philosophies and opinions on various issues. He has been featured as a guest on podcasts, including Rubyology and SitePoint, and he often speaks out about his conviction that developers should seek to collaborate more, and the measures which would promote such collaboration, such as writing better documentation and contributing to other people's projects.
Preston-Werner was one of the initial members of the San Francisco group IcanhazRuby or ICHR, after he became a regular member of the San Francisco Ruby Meetups. He continued until the meetings became overwhelmed by venture capital investors searching for talent; this prompted him to seek more private gatherings. On April 8, 2011, he also started a conference called CodeConf, by means of GitHub's influence in the coding community.
Preston-Werner is the creator of the TOML configuration file format.
Career
In an article published by Hacker Monthly in 2010, Preston-Werner wrote about his passion for ensuring that developers document the code they write so others can easily understand how it works.
In 2004, Preston-Werner founded Gravatar, a service for providing globally unique avatars that follow users from site to site. The company grew to about 32,000 users in 2007, when Preston-Werner sold the company to Automattic.
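The idea behind the service can be sketched in a few lines: an avatar URL is derived from a hash of the user's email address, so any site that knows the address can display the same image. The example below follows Gravatar's historically documented MD5-based scheme; the email address is a placeholder.

```python
import hashlib

def gravatar_url(email: str, size: int = 80) -> str:
    # Lowercase and trim the address, hash it, and append the hex digest to the
    # avatar base URL, following Gravatar's historically documented MD5 scheme.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{digest}?s={size}"

print(gravatar_url("someone@example.com"))  # placeholder address
```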
In 2005 he moved to San Francisco to work at Powerset, a natural language search engine. Powerset was acquired by Microsoft. Preston-Werner declined a $300,000 bonus and stock options from Microsoft so that he could focus on GitHub.
GitHub
Preston-Werner co-founded GitHub in 2008 with Chris Wanstrath, P. J. Hyett and Scott Chacon, as a place to share and collaborate on code.
Architects, musicians, city governments, builders and others are currently using GitHub to share and collaborate on projects beyond software code.
In 2010 Preston-Werner read a comment on Twitter insulting the quality of GitHub's search function. This prompted him to overhaul the service's search, drawing on his experience having worked at Powerset.
Resignation from GitHub
Julie Ann Horvath, a GitHub programmer, alleged in March 2014 that Tom Preston-Werner and his wife Theresa engaged in a pattern of harassment against her that led her to leave the company. GitHub initially denied Horvath's allegations, then following an internal investigation, confirmed some of the claims. Preston-Werner resigned. GitHub's new CEO Chris Wanstrath said the "investigation found Tom Preston-Werner in his capacity as GitHub's CEO acted inappropriately, including confrontational conduct, disregard of workplace complaints, insensitivity to the impact of his spouse's presence in the workplace, and failure to enforce an agreement that his spouse should not work in the office."
After GitHub
Following his resignation from GitHub, Preston-Werner sold his shares in the company to Microsoft. Along with a team of former GitHub co-founders and executives, Preston-Werner then cofounded Chatterbug, a language-learning software service. In 2018, Chatterbug cofounder Scott Chacon announced an 8 million series A funding round for the company, financed by himself and Preston-Werner. Preston-Werner, a hacker himself, has hosted AMA-style events for student hackers, such as for Hack Club, at the Def Hacks Virtual 2020 hackathon, and at DubHacks 2020.
Personal life
Preston-Werner lives in San Francisco with his wife Theresa and their sons.
His wife is a former graduate student in cultural anthropology known for her involvement in historical research and social subjects.
See also
Jekyll (software)
References
Living people
1979 births
People from Dubuque, Iowa
Businesspeople from Iowa
American technology chief executives
21st-century American businesspeople
GitHub people
American software engineers
American technology company founders
Businesspeople in software
|
403362
|
https://en.wikipedia.org/wiki/Radeon
|
Radeon
|
Radeon () is a brand of computer products, including graphics processing units, random-access memory, RAM disk software, and solid-state drives, produced by Radeon Technologies Group, a division of Advanced Micro Devices (AMD). The brand was launched in 2000 by ATI Technologies, which was acquired by AMD in 2006 for US$5.4 billion.
Radeon Graphics
Radeon Graphics is the successor to the Rage line. Three different families of microarchitectures can be roughly distinguished, the fixed-pipeline family, the unified shader model-families of TeraScale and Graphics Core Next. ATI/AMD have developed different technologies, such as TruForm, HyperMemory, HyperZ, XGP, Eyefinity for multi-monitor setups, PowerPlay for power-saving, CrossFire (for multi-GPU) or Hybrid Graphics. A range of SIP blocks is also to be found on certain models in the Radeon products line: Unified Video Decoder, Video Coding Engine and TrueAudio.
The brand was previously only known as "ATI Radeon" until August 2010, when it was renamed to increase AMD's brand awareness on a global scale. Products up to and including the HD 5000 series are branded as ATI Radeon, while the HD 6000 series and beyond use the new AMD Radeon branding.
On 11 September 2015, AMD's GPU business was split into a separate unit known as Radeon Technologies Group, with Raja Koduri as Senior Vice President and chief architect.
Radeon Graphics card brands
AMD does not distribute Radeon cards directly to consumers (though some exceptions can be found). Instead, it sells Radeon GPUs to third-party manufacturers, who build and sell the Radeon-based video cards to the OEM and retail channels. Manufacturers of the Radeon cards—some of whom also make motherboards—include ASRock, Asus, Biostar, Club 3D, Diamond, Force3D, Gainward, Gigabyte, HIS, MSI, PowerColor, Sapphire, VisionTek, and XFX.
Graphics processor generations
Early generations were identified with a number and major/minor alphabetic prefix. Later generations were assigned code names. New or heavily redesigned architectures have a prefix of R (e.g., R300 or R600) while slight modifications are indicated by the RV prefix (e.g., RV370 or RV635).
The first derivative architecture, RV200, did not follow the scheme used by later parts.
Fixed-pipeline family
R100/RV200
The Radeon, first introduced in 2000, was ATI's first graphics processor to be fully DirectX 7 compliant. R100 brought with it large gains in bandwidth and fill-rate efficiency through the new HyperZ technology.
The RV200 was a die-shrink of the former R100 with some core logic tweaks for clockspeed, introduced in 2002. The only release in this generation was the Radeon 7500, which introduced little in the way of new features but offered substantial performance improvements over its predecessors.
R200
ATI's second generation Radeon included a sophisticated pixel shader architecture. This chipset implemented Microsoft's pixel shader 1.4 specification for the first time.
Its performance relative to competitors was widely perceived as weak, and subsequent revisions of this generation were cancelled in order to focus on development of the next generation.
R300/R350
The R300 was the first GPU to fully support Microsoft's DirectX 9.0 technology upon its release in 2002. It incorporated fully programmable pixel and vertex shaders.
About a year later, the architecture was revised to allow for higher frequencies, more efficient memory access, and several other improvements in the R350 family. A budget line of RV350 products was based on this refreshed design with some elements disabled or removed.
Models using the new PCI Express interface were introduced in 2004. Using 110-nm and 130-nm manufacturing technologies under the X300 and X600 names, respectively, the RV370 and RV380 graphics processors were used extensively by consumer PC manufacturers.
R420
While heavily based upon the previous generation, this line included extensions to the Shader Model 2 feature-set. Shader Model 2b, the specification ATI and Microsoft defined with this generation, offered somewhat more shader program flexibility.
R520
ATI's DirectX 9.0c series of graphics cards, with complete shader Model 3.0 support. Launched in October 2005, this series brought a number of enhancements including the floating point render target technology necessary for HDR rendering with anti-aliasing.
TeraScale-family
R600
ATI's first series of GPUs to replace the old fixed-pipeline and implement unified shader model. Subsequent revisions tuned the design for higher performance and energy efficiency, resulting in the ATI Mobility Radeon HD series for mobile computers.
R700
Based on the R600 architecture. Mostly a bolstered R600 with many more stream processors, with improvements to power consumption and GDDR5 support for the high-end RV770 and RV740 (HD 4770) chips. It arrived in late June 2008. The HD 4850 and HD 4870 have 800 stream processors and GDDR3 and GDDR5 memory, respectively. The 4890 was a refresh of the 4870 with the same number of stream processors yet higher clock rates due to refinements. The 4870 X2 has 1600 stream processors and GDDR5 memory on an effective 512-bit memory bus with 230.4 GB/s of video memory bandwidth available.
Evergreen
The series was launched on 23 September 2009. It featured a 40 nm fabrication process for the entire product line (only the HD4770 (RV740) was built on this process previously), with more stream cores and compatibility with the next major version of the DirectX API, DirectX 11, which launched on 22 October 2009 along with Microsoft Windows 7. The Rxxx/RVxxx codename scheme was scrapped entirely. The initial launch consisted of only the 5870 and 5850 models. ATI released beta drivers that introduced full OpenGL 4.0 support on all variants of this series in March 2010.
Northern Islands
This is the first series to be marketed solely under the "AMD" brand. It features a 3rd generation 40 nm design, rebalancing the existing architecture with redesigned shaders to give it better performance. It was released first on October 22, 2010, in the form of the 6850 and 6870. 3D output is enabled with HDMI 1.4a and DisplayPort 1.2 outputs.
Graphics Core Next-family
Southern Islands
"Southern Islands" was the first series to feature the new compute microarchitecture known as "Graphics Core Next"(GCN). GCN was used among the higher end cards, while the VLIW5 architecture utilized in the previous generation was used in the lower end, OEM products. However, the Radeon HD 7790 uses GCN 2, and was the first product in the series to be released by AMD on 9 January 2012.
Sea Islands
The "Sea Islands" were OEM rebadges of the 7000 series, with only three products, code named Oland, available for general retail. The series, just like the "Southern Islands", used a mixture of VLIW5 models and GCN models for its desktop products.
Volcanic Islands
"Volcanic Islands" GPUs were introduced with the AMD Radeon Rx 200 Series, and were first released in late 2013. The Radeon Rx 200 line is mainly based on AMD's GCN architecture, with the lower end, OEM cards still using VLIW5. The majority of desktop products use GCN 1, while the R9 290x/290 & R7 260X/260 use GCN 2, and with only the R9 285 using the new GCN 3.
Caribbean Islands
GPUs codenamed "Caribbean Islands" were introduced with the AMD Radeon Rx 300 Series, released in 2015. This series was the first to solely use GCN based models, ranging from GCN 1st to GCN 3rd Gen, including the GCN 3-based Fiji-architecture models named Fury X, Fury, Nano and the Radeon Pro Duo.
Arctic Islands
GPUs codenamed "Arctic Islands" were first introduced with the Radeon RX 400 Series in June 2016 with the announcement of the RX 480. These cards were the first to use the new Polaris chips, which implement GCN 4th Gen on the 14 nm fab process. The RX 500 Series released in April 2017 also uses Polaris chips.
Vega
RDNA-family
RDNA 1
On 27 May 2019, at COMPUTEX 2019, AMD announced the new 'RDNA' graphics micro-architecture, which is to succeed the Graphics Core Next micro-architecture. This is the basis for the Radeon RX 5700-series graphics cards, the first to be built under the codename 'Navi'. These cards feature GDDR6 SGRAM and support for PCI Express 4.0.
RDNA 2
On March 5, 2020, AMD publicly announced its plan to release a "refresh" of the RDNA micro-architecture. Dubbed the RDNA 2 architecture, it was stated to succeed the first-gen RDNA micro-architecture and was initially scheduled for a release in Q4 2020. RDNA 2 was confirmed as the graphics microarchitecture featured in the Xbox Series X and Series S consoles from Microsoft, and the PlayStation 5 from Sony, with proprietary tweaks and different GPU configurations in each system's implementation.
AMD unveiled the Radeon RX 6000 series, its next-gen RDNA 2 graphics cards, at an online event on October 28, 2020. The lineup consists of the RX 6800, RX 6800 XT and RX 6900 XT. The RX 6800 and 6800 XT launched on November 18, 2020, with the RX 6900 XT being released on December 8, 2020. Further variants followed: a Radeon RX 6700 (XT) series, based on Navi 22, launched on March 18, 2021; a Radeon RX 6600 (XT) series, based on Navi 23, launched on August 11, 2021 (the RX 6600 XT release date; the RX 6600 launched on October 13, 2021); and a Radeon RX 6500 (XT), launched on January 19, 2022.
API overview
Some generations vary from their predecessors predominantly due to architectural improvements, while others were adapted primarily to new manufacturing processes with fewer functional changes. The table below summarizes the APIs supported in each Radeon generation. Also see AMD FireStream and AMD FirePro branded products.
Feature overview
Graphics device drivers
AMD's proprietary graphics device driver "Radeon Software" (Formerly Catalyst)
On 24 November 2015, AMD released a new version of their graphics driver following the formation of the Radeon Technologies Group (RTG) to provide extensive software support for their graphics cards. This driver, labelled Radeon Software Crimson Edition, overhauls the UI with Qt, resulting in better responsiveness from a design and system perspective. It includes a new interface featuring a game manager, clocking tools, and sections for different technologies.
Unofficial modifications such as Omega drivers and DNA drivers were available. These drivers typically consist of mixtures of various driver file versions with some registry variables altered and are advertised as offering superior performance or image quality. They are, of course, unsupported, and as such, are not guaranteed to function correctly. Some of them also provide modified system files for hardware enthusiasts to run specific graphics cards outside of their specifications.
On operating systems
Radeon Software is developed for Microsoft Windows and Linux; other operating systems are not officially supported. This may be different for the AMD FirePro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers.
ATI previously offered driver updates for their retail and integrated Macintosh video cards and chipsets. ATI stopped support for Mac OS 9 after the Radeon R200 cards, making the last officially supported card the Radeon 9250. The Radeon R100 cards up to the Radeon 7200 can still be used with even older classic Mac OS versions such as System 7, although not all features are taken advantage of by the older operating system.
Since ATI's acquisition by AMD, drivers for the classic Mac OS and macOS are no longer supplied or supported. macOS drivers can be downloaded from Apple's support website, while classic Mac OS drivers can be obtained from 3rd party websites that host the older drivers for users to download. ATI used to provide a preference panel for use in macOS called ATI Displays which can be used both with retail and OEM versions of its cards. Though it gives more control over advanced features of the graphics chipset, ATI Displays has limited functionality compared to Catalyst for Windows or Linux.
Free and open-source graphics device driver "Radeon"
The free and open-source driver for the Direct Rendering Infrastructure has been under constant development by the Linux kernel developers, by 3rd-party programming enthusiasts and by AMD employees. It is composed of five parts:
Linux kernel component DRM
this part received dynamic re-clocking support in Linux kernel version 3.12 and its performance has become comparable to that of AMD Catalyst
Linux kernel component KMS driver: basically the device driver for the display controller
user-space component libDRM
user-space component in Mesa 3D; currently most of these components are written conforming to the Gallium3D-specifications.
all drivers in Mesa 3D version 10.x (last 10.6.7) are, as of September 2014, limited to OpenGL 3.3 and OpenGL ES 3.0.
all drivers in Mesa 3D version 11.x (last 11.2.2) are, as of May 2016, limited to OpenGL 4.1 and OpenGL ES 3.0 or 3.1 (11.2+).
all drivers in Mesa 3D version 12.x (June 2016) can support OpenGL 4.3.
all drivers in Mesa 3D version 13.0.x (November 2016) can support OpenGL 4.4 and, unofficially, 4.5.
all drivers in Mesa 3D version 17.0.x (January 2017) can support OpenGL 4.5 and OpenGL ES 3.2.
Actual hardware support for different Mesa versions can be checked with glxinfo (a query sketch follows this list):
AMD R600/700 since Mesa 10.1: OpenGL 3.3+, OpenGL ES 3.0+ (+: some additional features of higher levels, depending on the Mesa version)
AMD R800/900 (Evergreen, Northern Islands): OpenGL 4.1+ (Mesa 13.0+), OpenGL ES 3.0+ (Mesa 10.3+)
AMD GCN (Southern/Sea Islands and newer): OpenGL 4.5+ (Mesa 17.0+), OpenGL ES 3.2+ (Mesa 18.0+), Vulkan 1.0 (Mesa 17.0+), Vulkan 1.1 (GCN 2nd Gen+, Mesa 18.1+)
a special and distinct 2D graphics device driver for X.Org Server, which is finally about to be replaced by Glamor
OpenCL support in GalliumCompute (previously Clover) is not fully developed: versions 1.0 and 1.1 are incomplete and only parts of 1.2 are implemented; some OpenCL conformance tests fail for 1.0 and 1.1, and most fail for 1.2. ROCm, developed by AMD as open source, fully supports OpenCL 1.2 with the OpenCL 2.0 language, but only on CPUs or GCN hardware with PCIe 3.0, so GCN 3rd Gen or higher is fully usable there for OpenCL 1.2 software.
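As a rough illustration of the glxinfo check mentioned above, the following Python sketch runs the utility and prints the renderer and version lines that identify which driver and OpenGL level are in use; it assumes a system where glxinfo (from the mesa-utils package on many distributions) is installed.

```python
import subprocess

# Run glxinfo and report which driver and OpenGL level the stack exposes.
# Assumes the glxinfo utility is installed and a GL-capable session is available.
output = subprocess.run(["glxinfo"], capture_output=True, text=True, check=True).stdout

for line in output.splitlines():
    stripped = line.strip()
    # These labels appear in typical glxinfo output.
    if stripped.startswith(("OpenGL renderer string", "OpenGL version string")):
        print(stripped)
```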
Supported features
The free and open-source driver supports many of the features available in Radeon-branded cards and APUs, such as multi-monitor or hybrid graphics.
Linux
The free and open-source drivers are primarily developed on Linux and for Linux.
Other operating systems
Being entirely free and open-source software, the drivers can be ported to any existing operating system. Whether they have been, and to what extent, depends entirely on the manpower available. Available support is referenced here.
FreeBSD adopted DRI, and since Mesa 3D is not specific to Linux, it should have identical support.
MorphOS supports 2D and 3D acceleration for Radeon R100, R200 and R300 chipsets.
AmigaOS 4 supports Radeon R100, R200, R300, R520 (X1000 Series), R700 (HD 4000 Series), HD 5000 (Evergreen) series, HD 6000 (Northern Islands) series and HD 7000 (Southern Islands) series. The RadeonHD AmigaOS 4 driver has been developed by Hans de Ruiter, funded and owned by A-EON Technology Ltd. The older R100 and R200 "ATIRadeon" driver for AmigaOS, originally developed by Forefront Technologies, was acquired by A-EON Technology Ltd in 2015.
In the past ATI provided hardware and technical documentation to the Haiku Project to produce drivers with full 2D and video in/out support on older Radeon chipsets (up to R500) for Haiku. A new Radeon HD driver was developed with the unofficial and indirect guidance of AMD open source engineers and currently exists in recent Haiku versions. The new Radeon HD driver supports native mode setting on R600 through Southern Islands GPUs.
Embedded GPU products
AMD (and its predecessor ATI) have released a series of embedded GPUs targeted toward medical, entertainment, and display devices.
Radeon Memory
In August 2011, AMD expanded the Radeon name to include random access memory modules under the AMD Memory line. The initial releases included 3 types of 2GiB DDR3 SDRAM modules: Entertainment (1333 MHz, CL9 9-9), UltraPro Gaming (1600 MHz, CL11 11-11) and Enterprise (specs to be determined).
On 2013-05-08, AMD announced the release of Radeon RG2133 Gamer Series Memory.
Radeon R9 2400 Gamer Series Memory was released on 2014-01-16.
Production
Dataram Corporation is manufacturing RAM for AMD.
Radeon RAMDisk
On 2012-09-06, Dataram Corporation announced that it had entered into a formal agreement with AMD to develop an AMD-branded version of Dataram's RAMDisk software under the name Radeon RAMDisk, targeting gaming enthusiasts seeking faster game load times and an enhanced gaming experience. The freeware version of the Radeon RAMDisk software supports Windows Vista and later with a minimum of 4 GiB of memory, and supports a RAM disk of up to 4 GiB (6 GiB if AMD Radeon Value, Entertainment or Performance Edition products are installed and Radeon RAMDisk is activated between 2012-10-10 and 2013-10-10). The retail version supports RAM disk sizes between 5 MiB and 64 GiB.
Version history
Version 4.1 was released on 2013-05-08.
Production
On 2014-04-02, Dataram Corporation announced that it had signed an agreement with Elysium Europe Ltd. to expand sales penetration in Europe, the Middle East and Africa. Under this agreement, Elysium is authorized to sell AMD Radeon RAMDisk software. Elysium is focusing on e-tailers, retailers, system builders and distributors.
Radeon SSD
AMD planned to enter the solid-state drive market with the introduction of R7 models powered by the Indilinx Barefoot 3 controller and Toshiba 19 nm MLC flash memory, initially available in 120 GB, 240 GB and 480 GB capacities. The R7 Series SSD was released on 2014-08-09 and included Toshiba's A19 MLC NAND flash memory and the Indilinx Barefoot 3 M00 controller, the same components as in the OCZ Vector 150 SSD.
See also
AMD FirePro – brand for professional product line based on Radeon GPUs up to the AMD Radeon Rx 300 series
AMD Radeon Pro – successor to AMD FirePro and launched alongside the AMD Radeon 400 series
AMD FireStream – brand for stream processing and GPGPU based on Radeon GPUs
AMD FireMV – brand for multi-monitor product line based on Radeon GPUs
References
External links
Radeon Technologies Group pages: Radeon Graphics Cards,
AMD Radeon pages: AMD Graphics, Radeon Memory, Radeon RAMDisk
X.Org driver for ATI/AMD Radeon
DRI Wiki: ATI Radeon
Rage3D: Support community for ATI hardware and drivers. News and discussion.
ATI Technologies
ATI Technologies products
Graphics cards
Products introduced in 2000
|
536697
|
https://en.wikipedia.org/wiki/Authentication%20protocol
|
Authentication protocol
|
An authentication protocol is a type of computer communications protocol or cryptographic protocol specifically designed for the transfer of authentication data between two entities. It allows the receiving entity to authenticate the connecting entity (e.g. a client connecting to a server) as well as authenticate itself to the connecting entity (the server to a client) by declaring the type of information needed for authentication as well as its syntax. It is the most important layer of protection needed for secure communication within computer networks.
Purpose
With the increasing amount of trustworthy information being made accessible over networks, the need to keep unauthorized persons from accessing this data emerged. Stealing someone's identity is easy in the computing world, so special verification methods had to be invented to find out whether the person or computer requesting data really is who it claims to be. The task of the authentication protocol is to specify the exact series of steps needed for execution of the authentication. It has to comply with the main protocol principles:
A protocol has to involve two or more parties, and everyone involved must know the protocol in advance.
All the included parties have to follow the protocol.
A protocol has to be unambiguous - each step must be defined precisely.
A protocol must be complete - must include a specified action for every possible situation.
An illustration of password-based authentication using simple authentication protocol:
Alice (an entity wishing to be verified) and Bob (an entity verifying Alice's identity) are both aware of the protocol they agreed on using. Bob has Alice's password stored in a database for comparison.
Alice sends Bob her password in a packet complying with the protocol rules.
Bob checks the received password against the one stored in his database. Then he sends a packet saying "Authentication successful" or "Authentication failed" based on the result.
This is an example of a very basic authentication protocol vulnerable to many threats such as eavesdropping, replay attack, man-in-the-middle attacks, dictionary attacks or brute-force attacks. Most authentication protocols are more complicated in order to be resilient against these attacks.
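A minimal sketch of the Alice-and-Bob exchange above, with an invented in-memory credential store, shows how little protection such a protocol offers: the password itself travels to the verifier in plain text.

```python
# Sketch of the naive password check described above; the credential store and
# the "packet" (plain function arguments) are invented for illustration only.
STORED_PASSWORDS = {"alice": "correct horse battery staple"}

def authenticate(username: str, password: str) -> str:
    expected = STORED_PASSWORDS.get(username)
    if expected is not None and expected == password:
        return "Authentication successful"
    return "Authentication failed"

# Alice sends her password in the clear; any eavesdropper on the connection
# learns it, which is why practical protocols avoid this design.
print(authenticate("alice", "correct horse battery staple"))
```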
Types
Authentication protocols developed for PPP (Point-to-Point Protocol)
Protocols are used mainly by Point-to-Point Protocol (PPP) servers to validate the identity of remote clients before granting them access to server data. Most of them use a password as the cornerstone of the authentication. In most cases, the password has to be shared between the communicating entities in advance.
PAP - Password Authentication Protocol
Password Authentication Protocol is one of the oldest authentication protocols. Authentication is initialized by the client sending a packet with credentials (username and password) at the beginning of the connection, with the client repeating the authentication request until acknowledgement is received. It is highly insecure because credentials are sent "in the clear" and repeatedly, making it vulnerable even to the simplest attacks like eavesdropping and man-in-the-middle based attacks. Although widely supported, it is specified that if an implementation offers a stronger authentication method, that method must be offered before PAP. Mixed authentication (e.g. the same client alternately using both PAP and CHAP) is also not expected, as the CHAP authentication would be compromised by PAP sending the password in plain text.
CHAP - Challenge-handshake authentication protocol
The authentication process in this protocol is always initialized by the server/host and can be performed at any time during the session, even repeatedly. The server sends a random string (usually 128 bytes long) as a challenge. The client uses its password and the received string as input to an MD5 hash function and then sends the result together with its username in plain text. The server uses the username to look up the shared password, applies the same function, and compares the calculated and received hashes; authentication succeeds if they match and fails otherwise.
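A minimal sketch of the challenge-response computation in Python. RFC 1994 specifies the MD5 input as the one-byte identifier, the shared secret and the challenge concatenated; the names and values below are illustrative only:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """MD5 over identifier + shared secret + challenge, as in RFC 1994."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secrets = {"alice": b"s3cret"}        # shared secret known to both ends
challenge = os.urandom(16)            # random challenge sent by the server
identifier = 1

# Client computes the hash and sends (username, response) in plain text.
response = chap_response(identifier, secrets["alice"], challenge)

# Server looks up the same secret, recomputes and compares.
expected = chap_response(identifier, secrets["alice"], challenge)
print("Authentication successful" if response == expected else "Authentication failed")
```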
EAP - Extensible Authentication Protocol
EAP was originally developed for PPP(Point-to-Point Protocol) but today is widely used in IEEE 802.3, IEEE 802.11(WiFi) or IEEE 802.16 as a part of IEEE 802.1x authentication framework. The latest version is standardized in RFC 5247. The advantage of EAP is that it is only a general authentication framework for client-server authentication - the specific way of authentication is defined in its many versions called EAP-methods. More than 40 EAP-methods exist, the most common are:
EAP-MD5
EAP-TLS
EAP-TTLS
EAP-FAST
EAP-PEAP
AAA architecture protocols (Authentication, Authorization, Accounting)
Complex protocols used in larger networks for verifying the user (Authentication), controlling access to server data (Authorization) and monitoring network resources and information needed for billing of services (Accounting).
TACACS, XTACACS and TACACS+
The oldest AAA protocol, using IP-based authentication without any encryption (usernames and passwords were transported as plain text). The later version XTACACS (Extended TACACS) added authorization and accounting. Both of these protocols were later replaced by TACACS+. TACACS+ separates the AAA components so that they can be handled on separate servers (it can even use another protocol for, e.g., authorization). It uses TCP (Transmission Control Protocol) for transport and encrypts the whole packet. TACACS+ is Cisco proprietary.
RADIUS
Remote Authentication Dial-In User Service (RADIUS) is a full AAA protocol commonly used by ISPs. Credentials are mostly based on a username-password combination; network access servers (NAS) act as RADIUS clients, and UDP is used for transport.
DIAMETER
Diameter (protocol) evolved from RADIUS and brings many improvements, such as the use of the more reliable TCP or SCTP transport protocols and higher security thanks to TLS.
Other
Kerberos (protocol)
Kerberos is a centralized network authentication system developed at MIT and available as a free implementation from MIT but also in many commercial products. It is the default authentication method in Windows 2000 and later. The authentication process itself is much more complicated than in the previous protocols - Kerberos uses symmetric key cryptography, requires a trusted third party and can use public-key cryptography during certain phases of authentication if need be.
List of various other authentication protocols
AKA
Basic access authentication
CAVE-based authentication
CRAM-MD5
Digest
Host Identity Protocol (HIP)
LAN Manager
NTLM, also known as NT LAN Manager
OpenID protocol
Password-authenticated key agreement protocols
Protocol for Carrying Authentication for Network Access (PANA)
Secure Remote Password protocol (SRP)
RFID-Authentication Protocols
Woo Lam 92 (protocol)
SAML
References
Computer access control protocols
|
1157398
|
https://en.wikipedia.org/wiki/Who%20%28Unix%29
|
Who (Unix)
|
The standard Unix command who displays a list of users who are currently logged into the computer.
The who command is related to the command w, which provides the same information but also displays additional data and statistics.
History
A command that displays the names of users logged in was first implemented within Multics. Later, it appeared in Version 1 Unix and became part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX.1 and the Single Unix Specification.
The version of who bundled in GNU coreutils was written by Joseph Arceneaux, David MacKenzie, and Michael Stone.
Specification
The Single UNIX Specification (SUS) specifies that who should list information about accessible users. The XSI extension also specifies display of the username, terminal, login time, process ID, and time elapsed since activity last occurred on the terminal; furthermore, an alternate system database used for user information can be specified as an optional argument to who.
The command can be invoked with the arguments am i or am I (so it is invoked as who am i or who am I), showing information about the current terminal only (see the -m option below, to which this invocation is equivalent).
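A small illustration of the behaviour described above, assuming a Unix-like system where who is on the PATH and the script is run from a terminal (the exact output format is implementation-defined):

```python
import subprocess

# Full listing of logged-in users.
print(subprocess.run(["who"], capture_output=True, text=True).stdout)

# Current terminal only; equivalent to `who -m`.
print(subprocess.run(["who", "am", "i"], capture_output=True, text=True).stdout)
```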
Usage
The SUS without extensions only specifies the following -m, -T, and -u options, all other options are specified in the XSI extension.
-a, process the system database used for user information with the -b, -d, -l, -p, -r, -t, -T and -u options turned on
-b, show time when system was last rebooted
-d, show zombie processes and details
-H, show column headers
-l, show terminals where a user can log in
-m, show information about the current terminal only
-p, show active processes
-q, quick format, show only names and the number of all users logged on, disables all other options; equivalent to users command line utility
-r, show runlevel of the init process.
-s, (default) show only name, terminal, and time details
-t, show when system clock was last changed
-T, show details of each terminal in a standard format (see note in Examples section)
-u, show idle time; XSI shows users logged in and displays information whether the terminal has been used recently or not
Other Unix and Unix-like operating systems may add extra options. GNU includes a -i option behaving similarly to -u and a -w option displaying whether the user listed accepts messages (the SUS displays this when -T is specified), yet GNU who and BSD who both omit a number of the above options (such as -a, -b, -d, and others); GNU who instead uses -l to perform DNS lookups on hostnames listed.
Output
The SUS without extensions specifies that the output format is to be "implementation-defined". The XSI extension specifies a format, but notes that it is not fully specified; delimiters and field lengths are not precisely specified. Thus, the format of the output differs considerably among Unix implementations.
See also
List of Unix commands
References
External links
who — manual page from GNU coreutils
Multics commands
Unix user management and support-related utilities
Standard Unix programs
Unix SUS2008 utilities
Plan 9 commands
|
20057931
|
https://en.wikipedia.org/wiki/IT%20infrastructure
|
IT infrastructure
|
Information technology infrastructure is defined broadly as a set of information technology (IT) components that are the foundation of an IT service; typically physical components (computer and networking hardware and facilities), but also various software and network components.
According to the ITIL Foundation Course Glossary, IT Infrastructure can also be termed as “All of the hardware, software, networks, facilities, etc., that are required to develop, test, deliver, monitor, control or support IT services. The term IT infrastructure includes all of the Information Technology but not the associated People, Processes and documentation.”
Overview
In IT infrastructure, the above technological components contribute to and drive business functions. Leaders and managers within the IT field are responsible for ensuring that both the physical hardware and the software networks and resources are working optimally. IT infrastructure can be looked at as the foundation of an organization's technology systems, thereby playing an integral part in driving its success. All organizations that rely on technology to do their business can benefit from having a robust, interconnected IT infrastructure. Given the current pace of technological change and the competitive nature of business, IT leaders have to ensure that their IT infrastructure is designed so that changes can be made quickly and without impacting business continuity. While companies traditionally relied on physical data centers or colocation facilities to support their IT infrastructure, cloud hosting has become more popular because it is easier to manage and scale. IT infrastructure can be managed by the company itself or outsourced to another company with the consulting expertise to develop robust infrastructures for an organization. With advances in online outreach and availability, it has become easier for end users to access technology. As a result, IT infrastructures have become more complex, and it is therefore harder for managers to oversee end-to-end operations. To mitigate this issue, strong IT infrastructures require employees with varying skill sets. The fields of IT management and IT service management rely on IT infrastructure, and the ITIL framework was developed as a set of best practices with regard to IT infrastructure. The ITIL framework assists companies in responding to technological market demands. Technology can often be thought of as an innovative product that can incur high production costs; the ITIL framework helps address these issues and allows a company to be more cost effective, which helps IT managers keep the IT infrastructure functioning.
Background
Even though the IT infrastructure has been around for over 60 years, there have been incredible advances in technology in the past 15 years.
Components of IT infrastructure
The primary components of an IT infrastructure are the physical systems such as hardware, storage, routers and switches, and the building itself, but also networks and software. In addition to these components, there is the need for IT infrastructure security. Security keeps the network and its devices safe in order to maintain integrity within the overall infrastructure of the organization.
Specifically, the first three layers of the OSI model are directly involved with IT infrastructure. The physical layer serves as the fundamental layer for hardware. The second and third layers (data link and network) are essential for communication to and from hardware devices; without them, networking is not possible and, in a sense, the internet itself would not be possible.
IT infrastructure types
Different types of technological tasks may require a tailored approach to the infrastructure. This can be achieved through a traditional, cloud or hyper-converged IT infrastructure.
Skills
There are many functioning parts that go into the health of an IT infrastructure. In order to contribute positively to the organization, employees can acquire abilities that benefit the company. These include key technical abilities such as cloud, network, and data administration skills, and soft abilities such as collaboration and communication skills.
Future
As data storage and management become more digitized, IT infrastructure is moving towards the cloud. Infrastructure as a service (IaaS) provides hosted computing resources over the Internet and is a foundational model for cloud computing.
See also
Converged infrastructure
Dynamic infrastructure
Hyper-converged infrastructure
Information infrastructure
Infrastructure as a service
Infrastructure as code
Software-defined infrastructure
References
Sources
Rouse, Margaret (2017). "A DevOps primer: Start, improve and extend your DevOps teams", "TechTarget Search Data Center". Retrieved 28 September 2019.
Information technology
Information technology management
|
63510139
|
https://en.wikipedia.org/wiki/KeeWeb
|
KeeWeb
|
KeeWeb is a free and open-source password manager compatible with KeePass, available as a web version and desktop apps. The underlying file format is KDBX (KeePass database file).
Technology
KeeWeb is written in JavaScript and uses WebCrypto and WebAssembly to process password files in the browser, without uploading them to a server. It can synchronize files with popular file hosting services, such as Dropbox, Google Drive, and OneDrive.
KeeWeb is also available as an Electron bundle which resembles a desktop app. The desktop version adds some features not available on web:
auto-typing passwords
ability to open and save local files
sync to WebDAV without CORS enabled
KeeWeb can also be deployed as a standalone server, or installed as a Nextcloud app.
Reception
KeeWeb was praised by Ghacks Technology News in 2016 as a "brand-new" application fixing the "shortcoming of a web-based version" of KeePass, and by Tech Advisor in 2020 as a "well-designed cross-platform password manager".
See also
List of password managers
Password manager
Cryptography
References
External links
Cryptographic software
Free password managers
Password managers
Android (operating system) software
IOS software
Linux software
MacOS software
Windows software
|
769088
|
https://en.wikipedia.org/wiki/Satellite%20Internet%20access
|
Satellite Internet access
|
Satellite Internet access is Internet access provided through communication satellites. Modern consumer grade satellite Internet service is typically provided to individual users through geostationary satellites that can offer relatively high data speeds, with newer satellites using the Ka band to achieve downstream data speeds of up to 506 Mbit/s. In addition, new satellite internet constellations are being developed in low Earth orbit to enable low-latency internet access from space.
History
Following the launch of the first satellite, Sputnik 1, by the Soviet Union in October 1957, the US successfully launched the Explorer 1 satellite in 1958. The first commercial communications satellite was Telstar 1, built by Bell Labs and launched in July 1962.
The idea of a geosynchronous satellite—one that could orbit the Earth above the equator and remain fixed by following the Earth's rotation—was first proposed by Herman Potočnik in 1928 and popularised by the science fiction author Arthur C. Clarke in a paper in Wireless World in 1945. The first satellite to successfully reach geostationary orbit was Syncom3, built by Hughes Aircraft for NASA and launched on August 19, 1963. Succeeding generations of communications satellites featuring larger capacities and improved performance characteristics were adopted for use in television delivery, military applications and telecommunications purposes. Following the invention of the Internet and the World Wide Web, geostationary satellites attracted interest as a potential means of providing Internet access.
A significant enabler of satellite-delivered Internet has been the opening up of the Ka band for satellites. In December 1993, Hughes Aircraft Co. filed with the Federal Communications Commission for a license to launch the first Ka-band satellite, Spaceway. In 1995, the FCC issued a call for more Ka-band satellite applications, attracting applications from 15 companies. Among those were EchoStar, Lockheed Martin, GE-Americom, Motorola and KaStar Satellite, which later became WildBlue.
Among prominent aspirants in the early-stage satellite Internet sector was Teledesic, an ambitious and ultimately failed project funded in part by Microsoft that ended up costing more than $9 billion. Teledesic's idea was to create a broadband satellite constellation of hundreds of low-orbiting satellites in the Ka-band frequency, providing inexpensive Internet access with download speeds of up to 720 Mbit/s. The project was abandoned in 2003. Teledesic's failure, coupled with the bankruptcy filings of the satellite communications providers Iridium Communications Inc. and Globalstar, dampened marketplace enthusiasm for satellite Internet development. It wasn't until September 2003 when the first Internet-ready satellite for consumers was launched by Eutelsat.
In 2004, with the launch of Anik F2, the first high throughput satellite, a class of next-generation satellites providing improved capacity and bandwidth became operational. More recently, high throughput satellites such as ViaSat's ViaSat-1 satellite in 2011 and HughesNet's Jupiter in 2012 have achieved further improvements, elevating downstream data rates from 1–3 Mbit/s up to 12–15 Mbit/s and beyond. Internet access services tied to these satellites are targeted largely to rural residents as an alternative to Internet service via dial-up, ADSL or classic FSSes.
In 2013 the first four satellites of the O3b constellation were launched into medium Earth orbit (MEO) to provide internet access to the "other three billion" people without stable internet access at that time. Over the next six years, 16 further satellites joined the constellation, now owned and operated by SES.
Since 2014, a rising number of companies announced working on internet access using satellite constellations in low Earth orbit. SpaceX, OneWeb and Amazon all plan to launch more than 1000 satellites each. OneWeb alone raised $1.7 billion by February 2017 for the project, and SpaceX raised over one billion in the first half of 2019 alone for their service called Starlink and expected more than $30 billion in revenue by 2025 from its satellite constellation. Many planned constellations employ laser communication for inter-satellite links to effectively create a space-based internet backbone.
In September 2017, SES announced the next generation of O3b satellites and service, named O3b mPOWER. The constellation of 11 MEO satellites will deliver 10 terabits of capacity globally through 30,000 spot beams for broadband internet services. The first three O3b mPOWER satellites are scheduled to launch in Q2 2022.
As of 2017, airlines such as Delta and American have been introducing satellite internet as a means of combating limited bandwidth on airplanes and offering passengers usable internet speeds.
Companies and market
United States
Companies providing home internet service in the United States of America include ViaSat, through its Exede brand, EchoStar, through subsidiary HughesNet, and Starlink.
United Kingdom
In the United Kingdom, companies providing satellite Internet access include Konnect, Broadband Everywhere and Freedomsat.
Function
Satellite Internet generally relies on three primary components: a satellite, historically in geostationary orbit (GEO) but now increasingly in low Earth orbit (LEO) or medium Earth orbit (MEO); a number of ground stations known as gateways that relay Internet data to and from the satellite via radio waves (microwave); and further ground stations to serve each subscriber, with a small antenna and transceiver. Other components of a satellite Internet system include a modem at the user end which links the user's network with the transceiver, and a centralized network operations centre (NOC) for monitoring the entire system. Working in concert with a broadband gateway, the satellite operates a star network topology where all network communication passes through the network's hub processor, which is at the centre of the star. With this configuration, the number of ground stations that can be connected to the hub is virtually limitless.
Satellite
Marketed as the centre of the new broadband satellite networks are a new generation of high-powered GEO satellites positioned above the equator, operating in Ka-band (18.3–30 GHz) mode. These new purpose-built satellites are designed and optimized for broadband applications, employing many narrow spot beams, which target a much smaller area than the broad beams used by earlier communication satellites. This spot beam technology allows satellites to reuse assigned bandwidth multiple times which can enable them to achieve much higher overall capacity than conventional broad beam satellites. The spot beams can also increase performance and consequential capacity by focusing more power and increased receiver sensitivity into defined concentrated areas. Spot beams are designated as one of two types: subscriber spot beams, which transmit to and from the subscriber-side terminal, and gateway spot beams, which transmit to/from a service provider ground station. Note that moving off the tight footprint of a spotbeam can degrade performance significantly. Also, spotbeams can make the use of other significant new technologies impossible, including 'Carrier in Carrier' modulation.
In conjunction with the satellite's spot-beam technology, a bent-pipe architecture has traditionally been employed in the network in which the satellite functions as a bridge in space, connecting two communication points on the ground. The term "bent-pipe" is used to describe the shape of the data path between sending and receiving antennas, with the satellite positioned at the point of the bend.
Simply put, the satellite's role in this network arrangement is to relay signals from the end user's terminal to the ISP's gateways, and back again without processing the signal at the satellite. The satellite receives, amplifies, and redirects a carrier on a specific radio frequency through a signal path called a transponder.
Some proposed satellite constellations in LEO such as Starlink and Telesat will employ laser communication equipment for high-throughput optical inter-satellite links. The interconnected satellites allow for direct routing of user data from satellite to satellite and effectively create a space-based optical mesh network that will enable seamless network management and continuity of service.
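At the routing level, such an inter-satellite mesh can be treated as a graph of laser links. The following sketch (satellite and gateway names entirely hypothetical) finds the fewest-hop path from a user's serving satellite to a ground gateway with a breadth-first search; real constellations use far more sophisticated, latency- and load-aware routing:

```python
from collections import deque

# Toy inter-satellite link graph: each node lists its laser-link neighbours.
links = {
    "sat-A": ["sat-B", "sat-D"],
    "sat-B": ["sat-A", "sat-C"],
    "sat-C": ["sat-B", "sat-D", "gateway"],
    "sat-D": ["sat-A", "sat-C"],
    "gateway": ["sat-C"],
}

def fewest_hops(start: str, goal: str) -> list:
    """Breadth-first search for the path with the fewest laser hops from start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(fewest_hops("sat-A", "gateway"))  # ['sat-A', 'sat-B', 'sat-C', 'gateway']
```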
The satellite has its own set of antennas to receive communication signals from Earth and to transmit signals to their target location. These antennas and transponders are part of the satellite's "payload", which is designed to receive and transmit signals to and from various places on Earth. What enables this transmission and reception in the payload transponders is a repeater subsystem (RF (radio frequency) equipment) used to change frequencies, filter, separate, amplify and group signals before routing them to their destination address on Earth. The satellite's high-gain receiving antenna passes the transmitted data to the transponder which filters, translates and amplifies them, then redirects them to the transmitting antenna on board. The signal is then routed to a specific ground location through a channel known as a carrier. Beside the payload, the other main component of a communications satellite is called the bus, which comprises all equipment required to move the satellite into position, supply power, regulate equipment temperatures, provide health and tracking information, and perform numerous other operational tasks.
Gateways
Along with dramatic advances in satellite technology over the past decade, ground equipment has similarly evolved, benefiting from higher levels of integration and increasing processing power, expanding both capacity and performance boundaries.
The Gateway—or Gateway Earth Station (its full name)—is also referred to as a ground station, teleport or hub. The term is sometimes used to describe just the antenna dish portion, or it can refer to the complete system with all associated components.
In short, the gateway receives radio wave signals from the satellite on the last leg of the return or upstream payload, carrying the request originating from the end-user's site. The satellite modem at the gateway location demodulates the incoming signal from the outdoor antenna into IP packets and sends the packets to the local network. Access server/gateways manage traffic transported to/from the Internet. Once the initial request has been processed by the gateway's servers, sent to and returned from the Internet, the requested information is sent back as a forward or downstream payload to the end-user via the satellite, which directs the signal to the subscriber terminal. Each Gateway provides the connection to the Internet backbone for the gateway beam(s) it serves.
The system of gateways comprising the satellite ground system provides all network services for satellite and corresponding terrestrial connectivity. Each gateway provides a multiservice access network for subscriber terminal connections to the Internet.
In the continental United States, because it is north of the equator, all gateway and subscriber dish antennas must have an unobstructed view of the southern sky. Because of the satellite's geostationary orbit, the gateway antenna can stay pointed at a fixed position.
Antenna dish and modem
For the customer-provided equipment (i.e. PC and router) to access the broadband satellite network, the customer must have additional physical components installed:
Outdoor unit (ODU)
At the far end of the outdoor unit is typically a small (2–3-foot, 60–90 cm diameter), reflective dish-type radio antenna. The VSAT antenna must also have an unobstructed view of the sky to allow for proper line-of-sight (L-O-S) to the satellite.
There are four physical characteristic settings used to ensure that the antenna is configured correctly at the satellite, which are: azimuth, elevation, polarization, and skew. The combination of these settings gives the outdoor unit a L-O-S to the chosen satellite and makes data transmission possible. These parameters are generally set at the time the equipment is installed, along with a beam assignment (Ka-band only); these steps must all be taken prior to the actual activation of service.
Transmit and receive components are typically mounted at the focal point of the antenna which receives/sends data from/to the satellite. The main parts are:
Feed – This assembly is part of the VSAT receive and transmit chain, which consists of several components with different functions, including the feed horn at the front of the unit, which resembles a funnel and has the task of focusing the satellite microwave signals across the surface of the dish reflector. The feed horn both receives signals reflected off the dish's surface and transmits outbound signals back to the satellite.
Block upconverter (BUC) – This unit sits behind the feed horn and may be part of the same unit, but a larger (higher wattage) BUC could be a separate piece attached to the base of the antenna. Its job is to convert the signal from the modem to a higher frequency and amplify it before it is reflected off the dish and towards the satellite.
Low-noise block downconverter (LNB) – This is the receiving element of the terminal. The LNB's job is to amplify the received satellite radio signal bouncing off the dish and filter out the noise, which is any signal not carrying valid information. The LNB passes the amplified, filtered signal to the satellite modem at the user's location.
Indoor unit (IDU)
The satellite modem serves as an interface between the outdoor unit and customer-provided equipment (i.e. PC, router) and controls satellite transmission and reception. From the sending device (computer, router, etc.) it receives an input bitstream and converts or modulates it into radio waves, reversing that order for incoming transmissions, which is called demodulation. It provides two types of connectivity:
Coaxial cable (COAX) connectivity to the satellite antenna. The cable carrying electromagnetic satellite signals between the modem and the antenna generally is limited to be no more than 150 feet in length.
Ethernet connectivity to the computer, carrying the customer's data packets to and from the Internet content servers.
Consumer grade satellite modems typically employ either the DOCSIS or WiMAX telecommunication standard to communicate with the assigned gateway.
Challenges and limitations
Signal latency
Latency (commonly referred to as "ping time") is the delay between requesting data and the receipt of a response, or in the case of one-way communication, between the actual moment of a signal's broadcast and the time it is received at its destination.
A radio signal takes about 120 milliseconds to reach a geostationary satellite and then 120 milliseconds to reach the ground station, so nearly 1/4 of a second overall. Typically, during perfect conditions, the physics involved in satellite communications account for approximately 550 milliseconds of latency round-trip time.
The longer latency is the primary difference between a standard terrestrial-based network and a geostationary satellite-based network. The round-trip latency of a geostationary satellite communications network can be more than 12 times that of a terrestrial based network.
Geostationary orbits
A geostationary orbit (or geostationary Earth orbit/GEO) is a geosynchronous orbit directly above the Earth's equator (0° latitude), with a period equal to the Earth's rotational period and an orbital eccentricity of approximately zero (i.e. a "circular orbit"). An object in a geostationary orbit appears motionless, at a fixed position in the sky, to ground observers. Launchers often place communications satellites and weather satellites in geostationary orbits, so that the satellite antennas that communicate with them do not have to move to track them, but can point permanently at the position in the sky where the satellites stay. Due to the constant 0° latitude and circularity of geostationary orbits, satellites in GEO differ in location by longitude only.
Compared to ground-based communication, all geostationary satellite communications experience higher latency due to the signal having to travel to a satellite in geostationary orbit and back to Earth again. Even at the speed of light (about 300,000 km/s or 186,000 miles per second), this delay can appear significant. If all other signaling delays could be eliminated, it still takes a radio signal about 250 milliseconds (ms), or about a quarter of a second, to travel to the satellite and back to the ground. The absolute minimum total amount of delay varies, due to the satellite staying in one place in the sky, while ground-based users can be directly below (with a roundtrip latency of 239.6 ms), or far to the side of the planet near the horizon (with a roundtrip latency of 279.0 ms).
For an Internet packet, that delay is doubled before a reply is received. That is the theoretical minimum. Factoring in other normal delays from network sources gives a typical one-way connection latency of 500–700 ms from the user to the ISP, or about 1,000–1,400 ms latency for the total round-trip time (RTT) back to the user. This is more than most dial-up users experience at typically 150–200 ms total latency, and much higher than the typical 15–40 ms latency experienced by users of other high-speed Internet services, such as cable or VDSL.
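A back-of-envelope check of these figures, using the orbital altitudes quoted elsewhere in this article and the vacuum speed of light; real paths are slant paths and add equipment and terrestrial-network delays, so measured latencies are higher:

```python
C_KM_PER_S = 299_792.458  # speed of light

altitudes_km = {
    "GEO (geostationary)": 35_786,
    "O3b MEO": 8_062,
    "Globalstar LEO": 1_420,
}

for name, altitude in altitudes_km.items():
    one_way_ms = 2 * altitude / C_KM_PER_S * 1000      # user -> satellite -> ground station
    print(f"{name}: {one_way_ms:.0f} ms up-and-down, "
          f"{2 * one_way_ms:.0f} ms before a reply can arrive")
```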
For geostationary satellites, there is no way to eliminate latency, but the problem can be somewhat mitigated in Internet communications with TCP acceleration features that shorten the apparent round trip time (RTT) per packet by splitting ("spoofing") the feedback loop between the sender and the receiver. Certain acceleration features are often present in recent technology developments embedded in satellite Internet equipment.
Latency also impacts the initiation of secure Internet connections such as SSL which require the exchange of numerous pieces of data between web server and web client. Although these pieces of data are small, the multiple round-trips involved in the handshake produce long delays compared to other forms of Internet connectivity, as documented by Stephen T. Cobb in a 2011 report published by the Rural Mobile and Broadband Alliance. This annoyance extends to entering and editing data using some Software as a Service or SaaS applications as well as in other forms of online work.
One should thoroughly test the functionality of live interactive access to a distant computer—such as virtual private networks. Many TCP protocols were not designed to work in high-latency environments.
Medium and Low Earth Orbits
Medium Earth orbit (MEO) and low Earth orbit (LEO) satellite constellations do not have such great delays, as the satellites are closer to the ground. For example:
The current LEO constellations of Globalstar and Iridium satellites have delays of less than 40 ms round trip, but their throughput is less than broadband at 64 kbit/s per channel. The Globalstar constellation orbits 1,420 km above the Earth and Iridium orbits at about 780 km altitude.
The O3b constellation orbits at 8,062 km, with RTT latency of approximately 125 ms. The network is also designed for much higher throughput with links well in excess of 1 Gbit/s (Gigabits per second). The forthcoming O3b mPOWER constellation shares the same orbit and will deliver from 50Mbps to multiple gigabits per second to a single user.
Unlike geostationary satellites, LEO and MEO satellites do not stay in a fixed position in the sky and from a lower altitude they can "see" a smaller area of the Earth, and so continuous widespread access requires a constellation of many satellites (low-Earth orbits needing more satellites than medium-Earth orbits) with complex constellation management to switch data transfer between satellites and keep the connection to a customer, and tracking by the ground stations.
MEO satellites require higher power transmissions than LEO to achieve the same signal strength at the ground station but their higher altitude also provides less orbital overcrowding, and their slower orbit speed reduces both Doppler shift and the size and complexity of the constellation required.
Tracking of the moving satellites is usually undertaken in one of three ways, using:
more diffuse or completely omnidirectional ground antennas capable of communicating with one or more satellites visible in the sky at the same time, but at significantly higher transmit power than fixed geostationary dish antennas (due to the lower gain), and with much poorer signal-to-noise ratios for receiving the signal
motorized antenna mounts with high-gain, narrow beam antennas tracking individual satellites
phased array antennas that can steer the beam electronically, together with software that can predict the path of each satellite in the constellation
Ultralight atmospheric aircraft as satellites
A proposed alternative to relay satellites is a special-purpose solar-powered ultralight aircraft, which would fly along a circular path above a fixed ground location, operating under autonomous computer control at a height of approximately 20,000 meters.
For example, the United States Defense Advanced Research Projects Agency Vulture project envisaged an ultralight aircraft capable of station-keeping over a fixed area for a period of up to five years, and able to provide both continuous surveillance to ground assets as well as to service extremely low-latency communications networks. This project was cancelled in 2012 before it became operational.
Onboard batteries would charge during daylight hours through solar panels covering the wings, and would provide power to the plane during night. Ground-based satellite internet dishes would relay signals to and from the aircraft, resulting in a greatly reduced round-trip signal latency of only 0.25 milliseconds. The planes could potentially run for long periods without refueling. Several such schemes involving various types of aircraft have been proposed in the past.
Interference
Satellite communications are affected by moisture and various forms of precipitation (such as rain or snow) in the signal path between end users or ground stations and the satellite being utilized. This interference with the signal is known as rain fade. The effects are less pronounced on the lower frequency 'L' and 'C' bands, but can become quite severe on the higher frequency 'Ku' and 'Ka' band. For satellite Internet services in tropical areas with heavy rain, use of the C band (4/6 GHz) with a circular polarisation satellite is popular. Satellite communications on the Ka band (19/29 GHz) can use special techniques such as large rain margins, adaptive uplink power control and reduced bit rates during precipitation.
Rain margins are the extra communication link requirements needed to account for signal degradations due to moisture and precipitation, and are of acute importance on all systems operating at frequencies over 10 GHz.
The amount of time during which service is lost can be reduced by increasing the size of the satellite communication dish so as to gather more of the satellite signal on the downlink and also to provide a stronger signal on the uplink. In other words, increasing antenna gain through the use of a larger parabolic reflector is one way of increasing the overall channel gain and, consequently, the signal-to-noise (S/N) ratio, which allows for greater signal loss due to rain fade without the S/N ratio dropping below its minimum threshold for successful communication.
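The standard aperture approximation G = η(πD/λ)² makes this size–gain trade-off concrete. The sketch below assumes a 20 GHz Ka-band downlink and 60% aperture efficiency (both illustrative) and compares a consumer-sized dish with the 3.7 m commercial size mentioned below:

```python
import math

def dish_gain_dbi(diameter_m: float, freq_hz: float, efficiency: float = 0.6) -> float:
    """Approximate parabolic-reflector gain: G = efficiency * (pi * D / wavelength)^2, in dBi."""
    wavelength = 299_792_458.0 / freq_hz
    return 10 * math.log10(efficiency * (math.pi * diameter_m / wavelength) ** 2)

for diameter in (0.75, 3.7):   # typical consumer dish vs. small commercial dish
    print(f"{diameter} m dish at 20 GHz: about {dish_gain_dbi(diameter, 20e9):.0f} dBi")
```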
Modern consumer-grade dish antennas tend to be fairly small, which reduces the rain margin or increases the required satellite downlink power and cost. However, it is often more economical to build a more expensive satellite and smaller, less expensive consumer antennas than to increase the consumer antenna size to reduce the satellite cost.
Large commercial dishes of 3.7 m to 13 m diameter can be used to achieve increased rain margins and also to reduce the cost per bit by allowing for more efficient modulation codes. Alternately, larger aperture antennae can require less power from the satellite to achieve acceptable performance. Satellites typically use photovoltaic solar power, so there is no expense for the energy itself, but a more powerful satellite will require larger, more powerful solar panels and electronics, often including a larger transmitting antenna. The larger satellite components not only increase materials costs but also increase the weight of the satellite, and in general, the cost to launch a satellite into an orbit is directly proportional to its weight. (In addition, since satellite launch vehicles [i.e. rockets] have specific payload size limits, making parts of the satellite larger may require either more complex folding mechanisms for parts of the satellite like solar panels and high-gain antennas, or upgrading to a more expensive launch vehicle that can handle a larger payload.)
Modulated carriers can be dynamically altered in response to rain problems or other link impairments using a process called adaptive coding and modulation, or "ACM". ACM allows the bit rates to be increased substantially during normal clear sky conditions, increasing the number of bits per Hz transmitted, and thus reducing overall cost per bit. Adaptive coding requires some sort of a return or feedback channel which can be via any available means, satellite or terrestrial.
Line of sight
Two objects are said to be within line of sight if a straight line between the objects can be connected without any interference, such as a mountain. An object beyond the horizon is below the line of sight and, therefore, can be difficult to communicate with.
Typically a completely clear line of sight between the dish and the satellite is required for the system to work optimally. In addition to the signal being susceptible to absorption and scattering by moisture, the signal is similarly impacted by the presence of trees and other vegetation in the path of the signal. As the radio frequency decreases, to below 900 MHz, penetration through vegetation increases, but most satellite communications operate above 2 GHz making them sensitive to even minor obstructions such as tree foliage. A dish installation in the winter must factor in plant foliage growth that will appear in the spring and summer.
Fresnel zone
Even if there is a direct line of sight between the transmitting and receiving antenna, reflections from objects near the path of the signal can decrease apparent signal power through phase cancellations. Whether and how much signal is lost from a reflection is determined by the location of the object in the Fresnel zone of the antennas.
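For illustration, the radius of the nth Fresnel zone at a point d1 from one antenna and d2 from the other is √(nλd1d2/(d1+d2)). The sketch below evaluates the first zone 100 m in front of a 20 GHz terminal on a geostationary path (figures illustrative); an obstruction within roughly a metre of the line of sight at that distance already intrudes on the zone:

```python
import math

def fresnel_radius_m(freq_hz: float, d1_m: float, d2_m: float, zone: int = 1) -> float:
    """Radius of the nth Fresnel zone at distances d1 and d2 from the two antennas."""
    wavelength = 299_792_458.0 / freq_hz
    return math.sqrt(zone * wavelength * d1_m * d2_m / (d1_m + d2_m))

# First Fresnel zone 100 m from a 20 GHz dish, with the satellite ~35,786 km away (d2 >> d1).
print(f"{fresnel_radius_m(20e9, 100, 35_786_000):.2f} m")   # roughly 1.2 m
```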
Two-way satellite-only communication
Home or consumer grade two-way satellite Internet service involves both sending and receiving data from a remote very-small-aperture terminal (VSAT) via satellite to a hub telecommunications port (teleport), which then relays data via the terrestrial Internet. The satellite dish at each location must be precisely pointed to avoid interference with other satellites. At each VSAT site the uplink frequency, bit rate and power must be accurately set, under control of the service provider hub.
There are several types of two way satellite Internet services, including time-division multiple access (TDMA) and single channel per carrier (SCPC). Two-way systems can be simple VSAT terminals with a 60–100 cm dish and output power of only a few watts intended for consumers and small business or larger systems which provide more bandwidth. Such systems are frequently marketed as "satellite broadband" and can cost two to three times as much per month as land-based systems such as ADSL. The modems required for this service are often proprietary, but some are compatible with several different providers. They are also expensive, costing in the range of US$600 to $2000.
The two-way "iLNB" used on the SES Broadband terminal dish has a transmitter and single-polarity receive LNB, both operating in the . Pricing for SES Broadband modems range from €299 to €350. These types of system are generally unsuitable for use on moving vehicles, although some dishes may be fitted to an automatic pan and tilt mechanism to continuously re-align the dish—but these are more expensive. The technology for SES Broadband was delivered by a Belgian company called Newtec.
Bandwidth
Consumer satellite Internet customers range from individual home users with one PC to large remote business sites with several hundred PCs.
Home users tend to use shared satellite capacity to reduce the cost, while still allowing high peak bit rates when congestion is absent. There are usually restrictive time-based bandwidth allowances so that each user gets their fair share, according to their payment. When a user exceeds their allowance, the company may slow down their access, deprioritise their traffic or charge for the excess bandwidth used. For consumer satellite Internet, the allowance can typically range from 200 MB per day to 25 GB per month. A shared download carrier may have a bit rate of 1 to 40 Mbit/s and be shared by up to 100 to 4,000 end users.
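The shared-capacity arithmetic behind those allowances is straightforward; the numbers below reuse the carrier size and subscriber count quoted above, with a 12 Mbit/s advertised peak assumed purely for illustration:

```python
carrier_mbit_s = 40      # shared download carrier
subscribers = 4000       # users sharing the carrier
peak_mbit_s = 12         # assumed advertised per-user peak rate

average_kbit_s = carrier_mbit_s * 1000 / subscribers
contention = subscribers * peak_mbit_s / carrier_mbit_s

print(f"average share if all users are active: {average_kbit_s:.0f} kbit/s per user")
print(f"contention ratio at the assumed peak rate: {contention:.0f}:1")
```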
The uplink direction for shared user customers is normally time-division multiple access (TDMA), which involves transmitting occasional short packet bursts in between other users (similar to how a cellular phone shares a cell tower).
Each remote location may also be equipped with a telephone modem; the connections for this are as with a conventional dial-up ISP. Two-way satellite systems may sometimes use the modem channel in both directions for data where latency is more important than bandwidth, reserving the satellite channel for download data where bandwidth is more important than latency, such as for file transfers.
In 2006, the European Commission sponsored the UNIC Project which aimed to develop an end-to-end scientific test bed for the distribution of new broadband interactive TV-centric services delivered over low-cost two-way satellite to actual end-users in the home. The UNIC architecture employs DVB-S2 standard for downlink and DVB-RCS standard for uplink.
Normal VSAT dishes (1.2–2.4 m diameter) are widely used for VoIP phone services. A voice call is sent by means of packets via the satellite and Internet. Using coding and compression techniques the bit rate needed per call is only 10.8 kbit/s each way.
Portable satellite Internet
Portable satellite modem
These usually come in the shape of a self-contained flat rectangular box that needs to be pointed in the general direction of the satellite—unlike VSAT the alignment need not be very precise, and the modems have built-in signal strength meters to help the user align the device properly. The modems have commonly used connectors such as Ethernet or Universal Serial Bus (USB). Some also have an integrated Bluetooth transceiver and double as a satellite phone. The modems also tend to have their own batteries so they can be connected to a laptop without draining its battery. The most common such system is INMARSAT's BGAN—these terminals are about the size of a briefcase and have near-symmetric connection speeds of around 350–500 kbit/s. Smaller modems exist, like those offered by Thuraya, but only connect at 444 kbit/s in a limited coverage area. INMARSAT now offers the IsatHub, a paperback-book-sized satellite modem working in conjunction with the user's mobile phone and other devices. The cost has been reduced to $3 per MB and the device itself is on sale for about $1,300.
Using such a modem is extremely expensive—data transfer costs between $5 and $7 per megabyte. The modems themselves are also expensive, usually costing between $1,000 and $5,000.
Internet via satellite phone
For many years satellite phones have been able to connect to the Internet. Bandwidth varies from about 2400 bit/s for Iridium network satellites and ACeS based phones to 15 kbit/s upstream and 60 kbit/s downstream for Thuraya handsets. Globalstar also provides Internet access at 9600 bit/s—like Iridium and ACeS a dial-up connection is required and is billed per minute, however both Globalstar and Iridium are planning to launch new satellites offering always-on data services at higher rates. With Thuraya phones the 9,600 bit/s dial-up connection is also possible, the 60 kbit/s service is always-on and the user is billed for data transferred (about $5 per megabyte). The phones can be connected to a laptop or other computer using a USB or RS-232 interface. Due to the low bandwidths involved it is extremely slow to browse the web with such a connection, but useful for sending email, Secure Shell data and using other low-bandwidth protocols. Since satellite phones tend to have omnidirectional antennas no alignment is required as long as there is a line of sight between the phone and the satellite.
One-way receive, with terrestrial transmit
One-way terrestrial return satellite Internet systems are used with conventional dial-up Internet access, with outbound (upstream) data traveling through a telephone modem, but downstream data sent via satellite at a higher rate. In the U.S., an FCC license is required for the uplink station only; no license is required for the users.
Another type of 1-way satellite Internet system uses General Packet Radio Service (GPRS) for the back-channel. Using standard GPRS or Enhanced Data Rates for GSM Evolution (EDGE), costs are reduced for higher effective rates if the upload volume is very low, and also because this service is not per-time charged, but charged by volume uploaded. GPRS as return improves mobility when the service is provided by a satellite that transmits in the field of 100-200 kW. Using a 33 cm wide satellite dish, a notebook and a normal GPRS equipped GSM phone, users can get mobile satellite broadband.
System components
The transmitting station has two components, consisting of a high speed Internet connection to serve many customers at once, and the satellite uplink to broadcast requested data to the customers. The ISP's routers connect to proxy servers which can enforce quality of service (QoS) bandwidth limits and guarantees for each customer's traffic.
Often, nonstandard IP stacks are used to address the latency and asymmetry problems of the satellite connection. As with one-way receive systems, data sent over the satellite link is generally also encrypted, as otherwise it would be accessible to anyone with a satellite receiver.
Many IP-over-satellite implementations use paired proxy servers at both endpoints so that certain communications between clients and servers need not accept the latency inherent in a satellite connection. For similar reasons, there exist special Virtual Private Network (VPN) implementations designed for use over satellite links, because standard VPN software cannot handle the long packet travel times.
Upload speeds are limited by the user's dial-up modem, while download speeds can be very fast compared to dial-up, using the modem only as the control channel for packet acknowledgement.
Latency is still high, although lower than full two-way geostationary satellite Internet, since only half of the data path is via satellite, the other half being via the terrestrial channel.
One-way broadcast, receive only
One-way broadcast satellite Internet systems are used for Internet Protocol (IP) broadcast-based data, audio and video distribution. In the U.S., a Federal Communications Commission (FCC) license is required only for the uplink station and no license is required for users. Note that most Internet protocols will not work correctly over one-way access, since they require a return channel. However, Internet content such as web pages can still be distributed over a one-way system by "pushing" them out to local storage at end user sites, though full interactivity is not possible. This is much like TV or radio content which offers little user interface.
The broadcast mechanism may include compression and error correction to help ensure the one-way broadcast is properly received. The data may also be rebroadcast periodically, so that receivers that did not previously succeed will have additional chances to try downloading again.
The data may also be encrypted, so that while anyone can receive the data, only certain destinations are able to actually decode and use the broadcast data. Authorized users only need to have possession of either a short decryption key or an automatic rolling code device that uses its own highly accurate independent timing mechanism to decrypt the data.
System hardware components
Similar to one-way terrestrial return, satellite Internet access may include interfaces to the public switched telephone network for squawk box applications. An Internet connection is not required, but many applications include a File Transfer Protocol (FTP) server to queue data for broadcast.
System software components
Most one-way broadcast applications require custom programming at the remote sites. The software at the remote site must filter, store, present a selection interface to and display the data. The software at the transmitting station must provide access control, priority queuing, sending, and encapsulating of the data.
Services
Emerging commercial services in this area include:
Outernet – Satellite constellation technology
Efficiency increases
2013 FCC report cites big jump in satellite performance
In its report released in February 2013, the Federal Communications Commission noted significant advances in satellite Internet performance. The FCC's Measuring Broadband America report also ranked the major ISPs by how close they came to delivering on advertised speeds. In this category, satellite Internet topped the list, with 90% of subscribers seeing speeds at 140% or better of what was advertised.
Reducing satellite latency
Much of the slowdown associated with satellite Internet is that for each request, many roundtrips must be completed before any useful data can be received by the requester. Special IP stacks and proxies can also reduce latency through lessening the number of roundtrips, or simplifying and reducing the length of protocol headers. Optimization technologies include TCP acceleration, HTTP pre-fetching and DNS caching among many others. See the Space Communications Protocol Specifications standard (SCPS), developed by NASA and adopted widely by commercial and military equipment and software providers in the market space.
Satellites launched
The WINDS satellite was launched on February 23, 2008. The WINDS satellite is used to provide broadband Internet services to Japan and locations across the Asia-Pacific region. The satellite provides a maximum speed of 155 Mbit/s down and 6 Mbit/s up to residences with a 45 cm aperture antenna, and a 1.2 Gbit/s connection to businesses with a 5-meter antenna. It has reached the end of its design life expectancy.
SkyTerra-1 was launched in mid-November 2010, providing North America, while Hylas-1 was launched in November 2010, targeting Europe.
On December 26, 2010, Eutelsat's KA-SAT was launched. It covers the European continent with 80 spot beams—focused signals that each cover an area a few hundred kilometers across Europe and the Mediterranean. Spot beams allow frequencies to be effectively reused in multiple regions without interference, which increases capacity. Each of the spot beams has an overall capacity of 900 Mbit/s and the entire satellite has a capacity of 70 Gbit/s.
ViaSat-1, the highest capacity communications satellite in the world, was launched Oct. 19, 2011 from Baikonur, Kazakhstan, offering 140 Gbit/s of total throughput capacity, through the Exede Internet service. Passengers aboard JetBlue Airways can use this service since 2015. The service has also been expanded to United Airlines, American Airlines, Scandinavian Airlines, Virgin America and Qantas.
The EchoStar XVII satellite was launched July 5, 2012 by Arianespace and was placed in its permanent geosynchronous orbital slot of 107.1° West longitude, servicing HughesNet. This Ka-band satellite has over 100 Gbit/s of throughput capacity.
Since 2013, the O3b satellite constellation claims an end-to-end round-trip latency of 238 ms for data services.
In 2015 and 2016, the Australian Government launched two satellites to provide internet to regional Australians and residents of External Territories, such as Norfolk Island and Christmas Island.
Low Earth orbit
As of 2020, around 700 satellites had been launched for Starlink and 74 for the OneWeb satellite constellation. Starlink has begun its private beta phase.
In oceanography and in seismology
Satellite communications are used for data transmission, remote instrument diagnostics, for physical satellite and oceanographic measurements from the sea surface (e.g. sea surface temperature and sea surface height) to the ocean floor, and for seismological analyses.
See also
Back-channel and return channel
DishNET (satellite Internet access in the United States)
HughesNet (formerly DIRECWAY)
IP over DVB
Lamit Company
NetHope#NetReliefKit
SES Broadband (satellite Internet access in Europe)
StarBand
Teledesic
Tooway
TS 2
Very small aperture terminal
Viasat Inc. (Excede Internet)
Wireless Internet Service Provider
References
External links
ViaSat/TIA Satellite Equipment Systems Standardization Efforts
Broadband
|
9536893
|
https://en.wikipedia.org/wiki/Armature%20%28computer%20animation%29
|
Armature (computer animation)
|
An armature is the kinematic chain used in computer animation to simulate the motions of virtual human or animal characters. In the context of animation, the inverse kinematics of the armature is the most relevant computational algorithm.
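A minimal worked example of inverse kinematics for a two-bone planar chain (for instance an upper arm and forearm), written in Python; the bone lengths and target point are illustrative:

```python
import math

def two_link_ik(x: float, y: float, l1: float, l2: float) -> tuple:
    """Return (shoulder, elbow) angles in radians that place the chain's end at (x, y)."""
    cos_elbow = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))   # clamp for unreachable targets
    elbow = math.acos(cos_elbow)                 # "elbow-down" solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Place the wrist of a 0.30 m + 0.25 m arm at the point (0.4, 0.2).
shoulder, elbow = two_link_ik(0.4, 0.2, 0.30, 0.25)
print(round(math.degrees(shoulder), 1), round(math.degrees(elbow), 1))  # about -5.5 and 71.5 degrees
```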
There are two types of digital armatures: Keyframing (stop-motion) armatures and real-time (puppeteering) armatures. Keyframing armatures were initially developed to assist in animating digital characters without basing the movement on a live performance. The animator poses a device manually for each keyframe, while the character in the animation is set up with a mechanical structure equivalent to the armature. The device is connected to the animation software through a driver program and each move is recorded for a particular frame in time. Real-time armatures are similar, but they are puppeteered by one or more people and captured in real-time.
Inside every stop-motion puppet character is an armature, which fills two main functions. First, it must move in a human-like manner in order to give life to character animation: the inner skeletal structure must be as close to human-like as possible, including joint and finger flexibility, and human movements also involve rotation at the elbows, knees, shoulders and hips, for example to bend down and pick something up from the ground. The second job of an armature is to not move: it must have some stiffness when positioned into place, because "you don't want to be in the middle of a shoot, where several minutes can pass by between each shot, and have the puppets slowly drooping between one shot and the next" (Justatinyamount, 2018). Built-in geometric movement can extend into the hands and feet, and even facial movements are available in some armature figures.
There are two types of digital armatures: keyframing (stop-motion) armatures and real-time (puppeteering) armatures. Both are digital counterparts of the physical armatures of traditional stop motion, which were covered with wax, clay or plaster: "Historically, armatures were made of chicken wire, wood and many pounds of clay, making them very heavy" (Computer Animation Complete, Parent, Rick).
Real-time puppeteering
Real-time puppeteering simply means that the character skeleton is controlled by a human animator through a "simple mapping between a hardware controller (such as a gamepad, a wiimote, a custom puppet controller, or even your body with a kinect-based system) and the character's 3D animation armature" (Three Real-Time Animation Methods, Hancock, Terry). Because of the nature of puppeteering devices there is no attempt to control "natural motion"; the animator simply manipulates the digital puppet to do what is wanted. As with physical puppeteering, it takes some skill: "You can take advantage of the puppeteer's skills, because these guys are masters. It's just amazing seeing them put their hand in a puppet and it just comes to life" ("Computer Animation Complete", Parent, Rick). The main difference of a digital real-time armature is that "It is controlled or we can say it is being puppets by people and the motions are captured in real time. This type of animation is used basically for 2D or 3D figures and objects created by using some 2D and 3D animation software. This type of animation armature is used for mainly filmmaking and Television production purposes. The film made using this technique is captured in real time by making the character move and capturing its live performance for the film" (Virtualschooldek.com). Motions are recorded with digital software such as Blender: "Blender is the free and open source 3D creation suite. It supports the entirety of the 3D pipeline—modeling, rigging, animation, simulation, rendering, compositing and motion tracking, video editing and 2D animation pipeline" (https://www.blender.org). This software allows the registering of data from digital armatures.
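A heavily simplified sketch of registering captured joint angles as keyframes through Blender's Python API (bpy); the armature object name, bone name and angle values are all hypothetical, and the script must be run inside Blender:

```python
import math
import bpy  # Blender's Python API; only available inside Blender

armature = bpy.data.objects["Armature"]   # hypothetical armature object
bone = armature.pose.bones["forearm"]     # hypothetical bone
bone.rotation_mode = 'XYZ'                # use Euler angles rather than quaternions

# Pretend these samples arrived from a hardware controller, one per frame.
captured_angles_deg = [0, 15, 30, 45, 60]

for frame, angle in enumerate(captured_angles_deg, start=1):
    bone.rotation_euler[0] = math.radians(angle)                   # rotate about the bone's local X axis
    bone.keyframe_insert(data_path="rotation_euler", frame=frame)  # record this pose on the timeline
```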
Types of armature builds
Action figures can also be converted into armatures; for instance, TV shows such as Robot Chicken use modified action figures as their puppets. Among the most common toys used are Lego figures: "It's easy to see why they look great, they stay where you put them and you can move them around a reasonable amount" (YouTube video "The 3 Main Types of Armature"). Their one setback is a limited range of motion, so they are often modified by inserting aluminium wire inside them, giving the joints a wider range of movement that feels organic to the character. In "The Lego Movie", the majority of the characters were digital armatures, and armatures were not the only method used: a hybrid of techniques was combined to achieve the final product. Director Chris Miller even used his own childhood Lego toy as the basis for the main character: "Chris Miller literally brought his broken space man toy which he had since he was a little kid … That's the level of detail and realism we wanna create" (film featurette).
Twisted-wire armatures are another cost-effective option for animators; the drawback of this method is that the joints are not strong and the figure will droop over time. Ball-and-socket armatures have been the traditional way figures are built; with this method the plates inside the puppet are easily adjustable for stiffness. Because ball-and-socket joints can be bulky, animators often build a hybrid of twisted wire and ball-and-socket joints so the character can move fluidly throughout the body. Ball-and-socket armature kits are available for purchase, but they are often expensive. One kit description reads: "The Standard Armature Kit comes complete with everything you need to make your own armature. It is easy assembled and has been designed to meet most character designs. As well as double jointed ball and socket joints" (Animation Supplies website).
How armatures work
"Puppeteering armatures are very similar to keyframing armatures, except the motion is captured in real time as performed by one or more puppeteers" (Parent, Computer Animation Complete). Through the use of tracking markers, sensors, hand controls, and animation software, "puppeteers" can produce an animated performance in real time. In a live performance, sensors and tracking markers attached to the performer translate their body movements into the actions and expressions that the 3D digital armature replicates as they happen. Rather than using an armature model with joints and built-in sensors, this kind of real-time animation relies on a live performer whose movements drive the 3D digital character. Alongside the body sensors, face trackers record the performer's facial expressions in real time as well. "The most popular are the real-time optical face trackers, consisting of a camera that is placed in a structure attached to the performer's head so that it moves with the performer. The device captures the motion of small markers placed in different areas of the face" (Menache, Understanding Motion Capture for Computer Animation). Along with the body sensors, these facial trackers allow the puppeteers to capture real-time expression and animate it digitally. However, real-time facial tracking has limits to its range of detail: "Unfortunately, these are 2D devices that cannot capture certain motions such as puckering of the lips, so the data is all in one plane and not very realistic. Three-dimensional facial motion data can be captured with an optical system using two or more cameras, yielding a much better result, but not in real time" (Menache). Aside from live performers, digital armatures made of joints and sensors can also be used to create animated movement in real time. "The last commercially available example of a digital armature was the Monkey 2 by Digital Image Design in New York City. It was modular, so it could be assembled in different joint configurations" (Menache). With modular capabilities built into the armature, puppeteers can maneuver the figure and transfer its actions to the digital character. "Both stop-motion and real time types consist of a series of rigid modules connected by joints whose rotations are measured by potentiometers or angular sensors. The sensors are usually analog devices, but they are called digital because the resulting readings are converted to digital signals to be processed by the computer system. These armatures are typically modular in order to accommodate different character designs" (Menache). The modular design lets these digital armature models simulate movements that, in an animated environment, resemble those of a human or animal. "The first-generation Monkey had a fixed configuration, which made it unusable for any non humanoid applications" (Menache). A modular armature overcomes this problem and provides a solid baseline capable of a wide variety of movements and extensions.
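As a rough illustration of the analog-to-digital conversion described above, the following sketch (plain Python; the joint names, channel numbers and read_adc function are hypothetical stand-ins for whatever driver a real armature ships with) turns raw potentiometer readings into joint angles that animation software could apply to a character rig.

JOINT_CHANNELS = {"shoulder": 0, "elbow": 1, "wrist": 2}  # assumed wiring
ADC_MAX = 1023        # a 10-bit analog-to-digital converter, for example
ANGLE_RANGE = 270.0   # degrees of travel on a typical rotary potentiometer

def read_adc(channel):
    """Placeholder for the armature driver's raw sensor read (0 .. ADC_MAX)."""
    raise NotImplementedError("supplied by the armature's driver software")

def sample_pose():
    """Convert each joint's raw analog reading into an angle in degrees."""
    pose = {}
    for joint, channel in JOINT_CHANNELS.items():
        raw = read_adc(channel)
        pose[joint] = (raw / ADC_MAX) * ANGLE_RANGE
    return pose

A keyframing armature would record one such pose per frame on demand; a real-time armature would call sample_pose repeatedly, many times per second, while the performance is under way.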
The price of a digital armature varies with the package contents. "The typical cost for a 39-joint Monkey 2 setup was approximately $15,000, which included all the necessary parts as well as driver software for most well known animation packages" (Menache). Both live-performance and digital-armature real-time methods convert the motion to digital formats, giving puppeteers different ways to produce an animated visual. Through real-time armatures, the studio Motus Digital brings its creations to life and builds interactive settings into its animations, for example with "Larry the Zombie", an interactive character used in its business ventures: "This year Larry will take our Real-Time technology to the next level by attending the show himself, in search of 'The Next Big Thing'. He will interview panelists, share his schedule and report what's going on and do whatever it takes to find the next big thing. Unlike Elf Hotline that takes place at the 'North Pole', Larry lives in the real world and is keen on pop culture and world events. Although the Real-Time interviews will be conducted via Skype, Larry's character will be composited into the shot giving Larry a real presence at this years South by SouthWest" (Motus Digital). Through real-time armatures the company shows how creative, interactive content can be created and shared with a larger audience.
Keyframing (stop motion)
Stop motion animation is captured one frame at a time; when the frames are played back, they create the illusion of movement. It is achieved by taking a picture of each individual pose and then editing the pictures together in whatever software the animator prefers. The more fluid the movement needs to look, the more frames are required. Stop motion works just like a flipbook, but in a digital format: frames are the equivalent of the pages in a flipbook, and the more pages there are, the more organic the movement looks. Stop motion can be integrated into live-action films or used for fully animated ones; live-action movies like "The Terminator" (1984) and "RoboCop" (1987) both had stop-motion sequences.
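To put numbers on the flipbook analogy, the arithmetic of a stop-motion shot is simple: frame rate multiplied by running time gives the number of photographs required. The small Python helper below assumes a frame rate of 24 frames per second and also shows the common practice of shooting "on twos" (holding each pose for two frames) to halve the number of poses.

def frames_needed(seconds, fps=24, shoot_on=1):
    """Photographs required for a stop-motion shot.

    shoot_on=1 means a new pose for every frame ("on ones");
    shoot_on=2 holds each pose for two frames ("on twos"),
    halving the number of poses at some cost in smoothness.
    """
    total_frames = int(seconds * fps)
    poses = total_frames // shoot_on
    return total_frames, poses

total, poses = frames_needed(seconds=90, fps=24, shoot_on=2)
print(f"{total} frames in the finished shot, {poses} separate poses to photograph")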
Tim Burton: Armature Influence
Tim Burton, a graduate of CalArts in Southern California, was well aware of the stop-motion techniques available to him. CalArts, founded by Walt Disney in 1961, went on to innovate in computer applications of armatures and other forms of computer design. Popular examples of stop motion are Burton's animated films, such as "The Nightmare Before Christmas" and "Corpse Bride", along with "Coraline" (directed by Burton's frequent collaborator Henry Selick). A lot goes into making a stop-motion film of that quality: everything seen on screen was made by someone on the crew. The sets were built, the puppets were handmade, and every single frame was an individually taken photograph. In a behind-the-scenes video for the movie "Frankenweenie", one of the crew members shows an exposure sheet, the stop-motion equivalent of a shot list. The frames are broken down into phonetics (the sounds letters make), which means it takes multiple frames to form even one word a character says. The crew member has to shape the puppet's mouth so that it matches every sound in every word the voice actor says in the dialogue. It is also mentioned in the video that even the smallest action, such as blinking, requires the puppet's eyelids to be replaced each time. In the behind-the-scenes video for the movie "Coraline", crew members show how movement can look so fluid in stop motion by explaining the process of rigging. Rigging in stop motion is when someone uses mechanics like winding or pulling to propel a puppet or set piece through space. For example, an animator may put an object on a piece of wire, move the wire into different positions, and then edit the wire out in post-production.
Tim Burton's love for the age-old technique of stop-motion animation, which has contributed a large part to its success, stems from his fascination with the simple idea that the technique could bring to life ideas that would otherwise be impossible. "The animation helped viewers identify with something that could not possibly exist." Burton's need to use stop-motion animation came from the idea that "it could bring something purely imagined to vivid life in a way that 2-D animation couldn't." This was amplified through his exploration of other works of art such as James Whale's Frankenstein, which sharpened his focus on monsters and his need to create them. There is no question that Burton did exactly that with his work on The Nightmare Before Christmas, which brought the technique back into the limelight. Although Burton worked with animators like John Lasseter, who made Toy Story happen with computer animation, he never lost focus on stop-motion animation. This helped him create films such as Corpse Bride and a feature-length remake of his 1984 short film Frankenweenie. Unsurprisingly, Burton's original commercial for MoMA's exhibition was made with the age-old technique of stop-motion animation. Although Tim Burton was not the originator of the technique, he was honored as part of the series for the influence his films have had on it. Creators who came before him in the field of stop-motion animation include Ray Harryhausen, "the subject of a MoMA retrospective—Ray Harryhausen: Special Effects (May 28–September 1, 1981)", who trained under Willis O'Brien, "creator of effects in King Kong (1933) and Mighty Joe Young (1949)".
Wes Anderson: Armature Animation Influence
Wes Anderson is a director who primarily makes live-action films, but he has made two stop-motion animated movies, "Fantastic Mr. Fox" and "Isle of Dogs". Although these films are animated, they still show Wes Anderson's signature style. A behind-the-scenes video for "Isle of Dogs" shows that the animators used actual movement as a reference: they had real dogs on set so that they could base their animation on something real, and animators would film themselves walking or performing whatever action they needed to animate, then refer to that footage when animating a human puppet. This helped them make the action look more fluid and realistic because they were using themselves as a reference.
History of Stop Motion
Stop motion started in the late 1800s. The first ever stop-motion film, "The Humpty Dumpty Circus", was released in 1898 and created by J. Stuart Blackton and Albert E. Smith. The technique was popularized by the animator Willis O'Brien, who mixed stop motion with live actors; his best-known work was "King Kong" (1933). In 1940, animator George Pal developed a technique called "replacement animation": instead of using clay or another malleable material to change the puppet, he created multiple wooden heads, each with a different facial expression, and swapped the heads to convey different emotions. In 1944 George Pal won an honorary Oscar for this technique. In 1955, Art Clokey, a pioneer in claymation (stop motion with clay figures), created "Gumby" for "The Howdy Doody Show". In 1961 he also created the children's show "Davey and Goliath", which was sponsored by the Lutheran Church.
Claymation
Claymation is a very popular type of stop motion animation used by many different creators. Pioneered by Art Clokey, as mentioned in the previous section, claymation is still used by modern-day animators. It appears in films associated with Tim Burton and Henry Selick, such as "The Nightmare Before Christmas" and "Coraline", as well as Laika's "ParaNorman". Director and animator Nick Park also used claymation in his "Chicken Run" and "Wallace and Gromit" movies. Rankin/Bass Productions Inc. used stop-motion puppets in Christmas specials such as "Rudolph the Red-Nosed Reindeer" (1964) and "Jack Frost" (1979), both of which aired on NBC. Although all of these films use stop motion animation, each has its own unique look and style. Claymation has evolved over the years because every creator has their own vision, and clay is so malleable that it allows endless styles of characters and scenery. In this technique, pieces of clay are moulded to create characters, and a story unfolds from the imagination of the animator. Both oil-based and water-based clays are available. Sometimes the clay is moulded into free forms; sometimes it is built up around a wire-like structure called an armature. The animated characters are placed in a set and moved in small increments while each frame is filmed. Famous examples of the technique being used include Missing Link by Chris Butler, Kubo and the Two Strings by Travis Knight, Coraline by Henry Selick, and The Boxtrolls by Graham Annable and Anthony Stacchi.
Other animation techniques
Stop motion: The illusion of movement achieved by moving a figure in tiny increments between each photographed frame
Object animation: moving or animating physical objects; a form of stop motion animation that uses real items, rather than drawn images, that are not meant to represent a human or other recognizable character.
Puppet animation: moving puppets, a unique phenomenon born out of the hard work of its pioneers and their followers to bring creative impulses to life by experimenting with new technologies. In cinema, a great example of the technique is the film King Kong (1933). The writer/director Tim Burton often uses puppet animation in his work, for example Corpse Bride (2005), a stop-motion animated horror musical.
See also
Linkages
Skeletal animation
References
Sources
https://www.techopedia.com/definition/109/stop-motion-animation
https://www.youtube.com/watch?v=VUFkjD9_QPQ
https://www.youtube.com/watch?v=jXqqd0ZBEMA
https://www.youtube.com/watch?v=mZcZfES8EEE
https://www.thewrap.com/a-timeline-of-stop-motion-animation-history-from-a-trip-to-the-moon-to-isle-of-dogs-photos/
Menache, Alberto. Understanding Motion Capture for Computer Animation. Morgan Kaufman, Elsevier Publication. Google Books, https://books.google.com/books?id=bgkzx4q88ScC&pg=PA30&lpg=PA30&dq=modular+armatures+for+real+time+animation&source=bl&ots=voG7seE378&sig=ACfU3U3YD_EYdfDPdJpR7yl1na0Govdnkw&hl=en&sa=X&ved=2ahUKEwjf9ODPg_joAhXYIDQIHWDwDDoQ6AEwAnoECAkQAQ#v=onepage&q=modular%20armatures%20for%20real%20time%20animation&f=false
“What is Real Time Animation”. Motus Digital, http://motusdigital.com/real-time-animation.htm
Parent, Rick, et al. Computer Animation Complete: All-in-One: Learn Motion Capture, Characteristic, Point-Based and Maya Winning Techniques. Morgan Kaufmann, Elsevier. Google Books, https://books.google.com/books?id=bgkzx4q88ScC&pg=PA30&lpg=PA30&dq=modular+armatures+for+real+time+animation&source=bl&ots=voG7seE378&sig=ACfU3U3YD_EYdfDPdJpR7yl1na0Govdnkw&hl=en&sa=X&ved=2ahUKEwjf9ODPg_joAhXYIDQIHWDwDDoQ6AEwAnoECAkQAQ#v=onepage&q=modular%20armatures%20for%20real%20time%20animation&f=false
Animation Supplies, www.animationsupplies.net/standard-armature.html.
https://www.moma.org/explore/inside_out/2010/04/07/crude-elegance-stop-motion-animation-and-tim-burton/
Parent, Rick. Computer Animation Complete: All-in-One: Learn Motion Capture, Characteristic, Point-Based, and Maya Winning Techniques. Morgan Kaufmann, an imprint of Elsevier, 2010.
Sharma, Deepali. "Animation, VFx, Creative Skills, Digital Marketing, Online Personality Development." 2 Nov. 2016, virtualschooldesk.com/armature-meaning-importance-animation/.
Hancock, Terry. “Three Real-Time Animation Methods.” Three Real-Time Animation Methods: Machinima, Digital Puppetry, and Motion Capture, freesoftwaremagazine.com/articles/three_real_time_animation_methods_machinima_digital_puppetry_and_motion_capture/.
Foundation, Blender. “Home of the Blender Project - Free and Open 3D Creation Software.” Blender.org, www.blender.org/.
Craig, G. [Just a Tiny Amount]. (2018, May 28). The Three Main Types of Armature [Video]. Youtube. https://www.youtube.com/watch?v=H6b3lRadcwE&t=1s
[Warner Bros. Pictures]. (2015, January 26). The Lego Movie Creating The Bricks [Video]. Youtube. https://www.youtube.com/watch?v=po0dmHhgsxU
3D graphics software
Computational physics
|
3107574
|
https://en.wikipedia.org/wiki/ADOS
|
ADOS
|
ADOS may refer to:
Computer operating systems
Atari DOS, an 8-bit disk operating system used in Atari computers
Arabic MS-DOS, from Microsoft
Advanced DOS, a project name for IBM and Microsoft's OS/2 1.0
Access DOS, assistive software shipped with Microsoft's MS-DOS 6.2x
ADOS (Russian operating system) (or АДОС), ca. 1989
Psychology
Autism Diagnostic Observation Schedule
Sociology
American Descendants of Slavery, a neologism describing African Americans of American Colonial-era chattel slave descent
Fiction
A Dream of Spring, a forthcoming novel in the A Song of Ice and Fire book series
See also
AOS (disambiguation)
DOS (disambiguation)
|
382535
|
https://en.wikipedia.org/wiki/Hal%20Abelson
|
Hal Abelson
|
Harold "Hal" Abelson (born April 26, 1947) is the Class of 1922 Professor of Computer Science and Engineering in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT), a fellow of the Institute of Electrical and Electronics Engineers (IEEE), and a founding director of both Creative Commons and the Free Software Foundation.
He directed the first implementation of the language Logo for the Apple II, which made the language widely available on personal computers starting in 1981; and published a widely selling book on Logo in 1982. Together with Gerald Jay Sussman, Abelson developed MIT's introductory computer science subject, The Structure and Interpretation of Computer Programs (called by the course number, 6.001), a subject organized around the idea that a computer language is primarily a formal medium for expressing ideas about methodology, rather than just a way to get a computer to perform operations. Abelson and Sussman also cooperate in codirecting the MIT Project on Mathematics and Computation. The MIT OpenCourseWare (OCW) project was spearheaded by Abelson and other MIT faculty.
Abelson led an internal investigation of the school's choices and role in the prosecution of Aaron Swartz by the Federal Bureau of Investigation (FBI), which concluded that MIT did nothing wrong legally, but recommended that MIT consider changing some of its internal policies.
Education
Abelson graduated with a B.A. in mathematics from Princeton University in 1969 after completing a senior thesis, titled "Actions with fixed-point set: a homology sphere", under the supervision of William Browder.
He later received a Ph.D. in mathematics from the Massachusetts Institute of Technology in 1973 after completing his doctoral dissertation, titled "Topologically distinct conjugate varieties with finite fundamental group", under the supervision of Dennis Sullivan.
Work
Computer science education
Abelson has a longstanding interest in using computation as a conceptual framework in teaching. He directed the first implementation of Logo for the Apple II, which made the language widely available on personal computers starting in 1981; and published a widely selling book on Logo in 1982. His book Turtle Geometry, written with Andrea diSessa in 1981, presented a computational approach to geometry which has been cited as "the first step in a revolutionary change in the entire teaching/learning process." In March 2015, a copy of Abelson's 1969 implementation of Turtle graphics was sold at The Algorithm Auction, the world’s first auction of computer algorithms.
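Turtle geometry describes shapes procedurally, as instructions to a "turtle" that walks forward and turns, rather than as sets of coordinates. The snippet below is not Logo and is not drawn from Abelson's book; it is a rough Python analogue using the standard-library turtle module, which descends from the same idea, showing the classic square-drawing procedure.

import turtle

def square(t, side):
    # A square is "go forward, turn right 90 degrees" repeated four times:
    # the shape is described by the turtle's local moves, not by coordinates.
    for _ in range(4):
        t.forward(side)
        t.right(90)

t = turtle.Turtle()
square(t, 100)
turtle.done()  # keep the drawing window open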
Together with Gerald Jay Sussman, Abelson developed MIT's introductory computer science subject, Structure and Interpretation of Computer Programs, a subject organized around the notion that a computer language is primarily a formal medium for expressing ideas about methodology, rather than just a way to get a computer to perform operations. This work, through the textbook of the same name, videotapes of their lectures, and the availability on personal computers of the Scheme dialect of Lisp (used in teaching the course), has had a worldwide impact on university computer science education.
He is a visiting faculty member at Google, where he was part of the App Inventor for Android team, an educational program aiming to make it easy for people with no programming background to write mobile phone applications and "explore whether this could change the nature of introductory computing". He is coauthor of the book App Inventor with David Wolber, Ellen Spertus, and Liz Looney, published by O'Reilly Media in 2011. After Google released App Inventor as open source software in late 2009 and provided seed funding to the MIT Media Lab in 2011, Abelson became codirector of the MIT Center for Mobile Learning to continue development of App Inventor.
Computing tools
Abelson and Sussman also cooperate in codirecting the MIT Project on Mathematics and Computation, a project of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), formerly a joint project of the MIT Artificial Intelligence Laboratory (AI Lab) and MIT Laboratory for Computer Science (LCS), CSAIL's components. The goal of the project is to create better computational tools for scientists and engineers. But even with powerful numerical computers, exploring complex physical systems still requires substantial human effort and human judgement to prepare simulations and to interpret numerical results.
Together with their students, Abelson and Sussman are combining methods from numerical computation, symbolic algebra, and heuristic programming to develop programs that not only perform massive numerical computations, but that also interpret these computations and discuss the results in qualitative terms. Programs such as these could form the basis for intelligent scientific instruments that monitor physical systems based upon high-level behavioral descriptions. More generally, they could lead to a new generation of computational tools that can autonomously explore complex physical systems, and which will play an important part in the future practice of science and engineering. At the same time, these programs incorporate computational formulations of scientific knowledge that can form the foundations of better ways to teach science and engineering.
Free software movement
Abelson and Sussman have also been a part of the free software movement (FSM), including serving on the board of directors of the Free Software Foundation (FSF).
Abelson is known to have been involved in publishing Andrew Huang's Hacking the Xbox and Keith Winstein's seven-line Perl DeCSS script (named qrpff), and Library Access to Music Project (LAMP), MIT's campus-wide music distribution system. The MIT OpenCourseWare (OCW) project was spearheaded by Hal Abelson and other MIT faculty.
Aaron Swartz investigation
In January 2013, open access activist Aaron Swartz died by suicide. He had been arrested near MIT and was facing up to 35 years imprisonment for the alleged crime of downloading Journal Storage (JSTOR) articles through MIT's open access campus network.
In response, MIT appointed professor Hal Abelson to lead an internal investigation of the school's choices and role in the prosecution of Aaron Swartz by the FBI. The report was delivered on July 26, 2013. It concluded that MIT did nothing wrong legally, but recommended that MIT consider changing some of its internal policies.
Other affiliations
Abelson is also a founding director of Creative Commons and Public Knowledge, and a director of the Center for Democracy and Technology, as stated in his own biography.
Awards and honors
Designated as one of MIT's six inaugural MacVicar Faculty Fellows, in 1992, in recognition of his significant and sustained contributions to teaching and undergraduate education
1992 Bose Award, the MIT School of Engineering teaching award
1995 Taylor L. Booth Education Award, given by IEEE Computer Society, cited for his continued contributions to the pedagogy and teaching of introductory computer science
2011 ACM Karl V. Karlstrom Outstanding Educator Award for "his contribution to computing education, through his innovative advances in curricula designed for students pursuing different kinds of computing expertise, and for his leadership in the movement for open educational resources"
2012 Association for Computing Machinery (ACM) SIGCSE Award for Outstanding Contribution to Computer Science Education
Writings
References
External links
, MIT
Creative Commons
Hal Abelson Playlist Appearance on WMBR's Dinnertime Sampler radio show May 7, 2003
Q&A with Professor Hal Abelson of MIT on Research at Google
This entry was initially based on an autobiography by Hal Abelson, posted on his website and used by permission.
American computer scientists
Artificial intelligence researchers
Lisp (programming language) people
Programming language designers
20th-century American mathematicians
21st-century American mathematicians
American electrical engineers
Princeton University alumni
Massachusetts Institute of Technology School of Science alumni
MIT School of Engineering faculty
Members of the Free Software Foundation board of directors
Fellow Members of the IEEE
Free software programmers
GNU people
Living people
Members of the Creative Commons board of directors
Google employees
1947 births
Computer science educators
Creative Commons-licensed authors
|
153364
|
https://en.wikipedia.org/wiki/Sum%2041
|
Sum 41
|
Sum 41 is a Canadian rock band from Ajax, Ontario. Originally called Kaspir, the band was formed in 1996 and currently consists of Deryck Whibley (lead vocals, guitars, keyboards), Dave Baksh (lead guitar, backing vocals), Jason "Cone" McCaslin (bass, backing vocals), Tom Thacker (guitars, keyboards, backing vocals), and Frank Zummo (drums, occasional backing vocals).
In 1999, Sum 41 signed an international record deal with Island Records and released its first EP, Half Hour of Power, in 2000. The band released its debut album, All Killer No Filler, in 2001. The album achieved mainstream success with its first single, "Fat Lip", which reached number one on the Billboard Modern Rock Tracks chart and remains the band's most successful single to date. The album's next singles "In Too Deep" and "Motivation" also achieved commercial success. All Killer No Filler was certified platinum in both the United States and the United Kingdom and triple platinum in Canada. In 2002, the band released Does This Look Infected?, which was also a commercial and critical success. The singles "The Hell Song" and "Still Waiting" both charted highly on the modern rock charts.
The band released its next album, Chuck, in 2004, led by singles "We're All to Blame" and "Pieces". The album proved successful, peaking at number 10 on the Billboard 200. In 2007, the band released Underclass Hero, which was met with a mixed reception, but became the band's highest charting album to date. It was also the band's last album on Aquarius Records. The band released the album Screaming Bloody Murder, on Island Records in 2011 to a generally positive reception, though it fell short of its predecessors' commercial success. The band's sixth studio album, 13 Voices was released in 2016. IMPALA awarded the album with a double gold award for 150,000 sold copies across Europe. The band's seventh studio album Order in Decline was released on July 19, 2019.
The band often performs more than 300 times each year and holds long global tours, most of which last more than a year. The group have been nominated for seven Juno Awards and won twice – Group of the Year in 2002, and Rock Album of the Year for Chuck in 2005. Sum 41 was nominated for a Grammy Award for Best Hard Rock/Metal Performance for the song "Blood in My Eyes". From their formation to 2016, Sum 41 were the 31st best-selling Canadian artist in Canada and among the top 10 best-selling Canadian bands in Canada.
History
1996–1998: Formative years
Sum 41 was formed by guitarist Deryck Whibley, drummer Steve Jocz, bassist Richard Roy and vocalist Jon Marshall. They were originally called Kaspir. The band is from Ajax, Ontario. The group members decided to change the band's name to Supernova while on tour on September 28, 1996, which happened to be the 41st day of their summer vacation.
Band manager and producer Greig Nori advised that Whibley become the vocalist, causing Marshall to leave. With Whibley moving to lead vocals and rhythm guitar, Dave Baksh joined as lead guitarist. Early on, the band was involved in a near-fatal car accident, resulting in Roy leaving the band. Mark Spicoluk briefly filled the position before Jason McCaslin was brought in on bass to complete the new line-up.
1998–2000: Half Hour of Power
In 1998, the band recorded a demo tape on compact cassette which they sent to record companies in the hope of getting a recording contract. The tapes are considered rarities.
From 1999 to 2000, the band recorded several new songs. The Introduction to Destruction and later the Cross The T's and Gouge Your I's DVDs both contain the self-recorded footage, which show the band performing a dance to "Makes No Difference" in front of a theatre.
After signing with Island Records in 1999, Sum 41's first EP, Half Hour of Power, was released on June 27, 2000. The first single released by the band was "Makes No Difference", which had two different music videos. The first video was put together using the video clips sent to the record label, and the second showed the band performing at a house party. The album was certified platinum in Canada.
2001–2003: All Killer No Filler and Does This Look Infected?
Sum 41's first full-length album, All Killer No Filler, was released on May 8, 2001, following an album release party at a record store in St. Louis, MO. The album was very successful; it was certified platinum by the Recording Industry Association of America in August 2001. "Fat Lip", the album's first single, achieved significant chart and commercial success; it topped the US Billboard Modern Rock Tracks chart as well as many other charts around the world. The song remains the band's most successful to date. After "Fat Lip", two more singles were released from the album: "In Too Deep" and "Motivation". "In Too Deep" peaked at number 10 on the Modern Rock Tracks chart, while "Motivation" peaked at number 24 on the same chart. The album peaked at number 13 on the Billboard 200 chart and at number nine on the Top Canadian Albums chart. The album was a commercial success, and was certified Platinum in the United States and the UK, and triple platinum in Canada. The album's name was taken from the initial reaction of Joe McGrath, an engineer working in the studio.
The success of the album brought the band touring offers with mainstream bands such as Blink-182 and The Offspring. The band spent much of 2001 touring; the group played over 300 concerts that year before returning to the studio to record another album.
On November 26, 2002, the group released its second album, Does This Look Infected? The special edition came with a DVD, Cross The T's and Gouge Your I's. Whibley said of the album: "We don't want to make another record that sounds like the last record, I hate when bands repeat albums." The album featured a harder and edgier sound, and the lyrics featured a more serious outlook. The album peaked at number 32 on the Billboard 200 chart and at number eight on the Top Canadian Albums chart. It was certified Platinum in Canada and gold in the United States, but was not as successful as its predecessor.
The first single released from the album was "Still Waiting", which peaked at number seven on the Modern Rock Tracks chart. The second single, "The Hell Song" peaked at number 13 on the chart. "The Hell Song"'s music video depicted the band members using dolls with their pictures on them and others, such as Korn, Kiss, AC/DC, Snoop Dogg, Destiny's Child, Ozzy Osbourne, Sharon Osbourne, and Pamela Anderson. The third single, "Over My Head (Better Off Dead)", had a video released exclusively in Canada and on the band's website, featuring live shots of the band. The video also appeared on the group's live DVD, Sake Bombs And Happy Endings (2003), as a bonus feature. The band again began a long tour to promote the album before recording the group's third studio album.
2004–2005: Chuck
In late May 2004, the band traveled to the Democratic Republic of Congo with War Child Canada, a branch of the British charity organization War Child, to document the country's civil war. Days after arriving, fighting broke out in Bukavu near the hotel where the band was staying. The band waited for the fighting to die down, but it did not. A UN peacekeeper, Charles "Chuck" Pelletier, called for armoured carriers to take the hotel's occupants out of the hot zone. After nearly twenty hours, the carriers arrived, and the band and forty other civilians were taken to safety.
In honour of Pelletier, Sum 41 named its next album Chuck; it was released on October 12, 2004. The album charted at number 10 on the Billboard 200 chart and on the Top Internet Albums chart. It also peaked at number two on the Canadian Albums chart and was the band's highest-charting album until it was surpassed by Underclass Hero. The album received positive reviews, and was certified Platinum in Canada and gold in the United States.
The first single from the album was "We're All To Blame", which peaked at number 10 on the Modern Rock Tracks chart. It was followed by "Pieces", a relatively soft song which reached the top of the charts in Canada. The next single was "Some Say", released only in Canada and Japan. The last single from the record was "No Reason", released at the same time as "Some Say", but with no music video. It was released only in Europe and the US, where it reached number 16 on the Billboard Modern Rock chart.
A documentary of the band's experience in Congo was made into a film called Rocked: Sum 41 in Congo and later aired on MTV. War Child released it on DVD on November 29, 2005, in the United States and Canada.
Following the album's release, the band went on a tour with Good Charlotte until 2006. On December 21, 2005, Sum 41 released a live album, Happy Live Surprise, in Japan. The CD contained a full concert recorded live in London, Ontario and was produced by Whibley. The same CD was released March 7, 2006, in Canada under the name Go Chuck Yourself. The band played videos before its set that were deemed "unsuitable for children". Controversy arose over some of the videos' violent content.
2006–2008: Underclass Hero
On May 10, 2006, Dave Baksh announced in a statement through his management company that he was leaving Sum 41 to work with his new band, Brown Brigade, which has a more "classic metal" sound. Baksh cited "creative differences" as the reason for his departure, but claimed that he was still on good terms with the band.
The next day, Whibley confirmed Baksh's departure and announced that the band would only replace him with a touring guitarist, who would not have any decision-making power in the band or be in videos, photo shoots, or albums. The band hired Gob frontman and guitarist Tom Thacker to replace Baksh.
Recording of the band's fourth studio album, Underclass Hero, began on November 8, 2006, and finished on March 14, 2007. On April 17, 2007, the band released a song on iTunes, "March of the Dogs". Although not a single, the band released it early because, according to Whibley, "the record [wouldn't] be out until the summer". Whibley was threatened with deportation for the song, because he metaphorically "killed the president" on it.
The album, backed by the first single and title track, "Underclass Hero", was released on July 24, 2007. Despite mixed reviews, the album was a commercial success, debuting at number seven on the Billboard 200 and at number one on the Billboard Rock Albums chart, the band's highest US chart position to date. It also peaked at number one on the Canadian Albums chart and on the Alternative Albums chart, a first for the band on both the charts. Two more singles were released from the album, "Walking Disaster" and "With Me". "With Me" especially found radio success by 2008. Underclass Hero was certified Platinum in Canada.
In October 2007, the band began the Strength in Numbers Tour, a tour of Canada with Canadian band Finger Eleven; Die Mannequin opened each of Sum 41's shows. During the tour, Whibley sustained a herniated disk. As a result, the group cancelled the rest of its shows.
After Whibley recovered from his injury, the band continued the Underclass Hero tour in March 2008 and toured until early July, when the group began preparation for its next album.
Sum 41 released a greatest hits album in Japan titled 8 Years of Blood, Sake and Tears on November 26, 2008. The album included a previously unreleased song, "Always", and a DVD, which contains each of the band's music videos. On March 17, the band released the worldwide version of the album titled All the Good Shit.
2009–2012: Screaming Bloody Murder
Drummer Steve Jocz confirmed that Tom Thacker was now an official member of Sum 41, and would take part in the writing and recording. On November 5, 2009, Whibley posted a blog on the band's MySpace page announcing Gil Norton as the producer of the band's upcoming album, also saying that 20 songs were already written for the album. In an interview with Tom Thacker, some working titles for songs for the new album were confirmed, including "Panic Attack", "Jessica Kill" and "Like Everyone Else". Pre-production for the new album took 13 days in December 2009, with the band officially entering the studio to begin recording at Perfect Sound Studios on January 26, 2010. The new studio album, titled Screaming Bloody Murder, was expected for a late 2010 release, but was delayed until early 2011. The band finished recording on June 24, 2010, just before joining the 2010 Warped Tour. While the group was on the tour, the new album entered the post-production stages of mixing and mastering. A new song called "Skumfuk" was leaked online on July 6, 2010. In an interview with Canoe.ca, Steve Jocz said that while producer Gil Norton was originally hired to engineer the new album, he was only around for a week and Sum 41 self-produced the record.
The first single from the album, "Screaming Bloody Murder", was released on February 7, 2011, in the United States. The song had its worldwide premiere on January 14, 2011, on Windsor, Ontario radio station 89X. The album Screaming Bloody Murder was released in Japan on April 6, 2011. On February 28, 2011, a stream of "Blood in My Eyes", another new song from the album, was released for free listening on Alternative Press.
On May 28, 2011, Sum 41 performed a live set for Guitar Center Sessions on DirecTV. The episode included an interview with program host Nic Harcourt.
"Baby You Don't Wanna Know" was released as the album's second single. The band shot a music video for the song during a day off in Germany. A music video was also produced for the first single, "Screaming Bloody Murder", but it was left unreleased due to its content and difficulties with the label.
On August 9, 2011, Sum 41 released the live album Live at the House of Blues, Cleveland 9.15.07 – a live recording of a show that took place on September 15, 2007, in Cleveland, Ohio, while the band was touring its previous album Underclass Hero. A week later when the band was touring the US as part of the Vans Warped Tour, making up for dates the group had to cancel on its 2010 tour, they were forced once again to cancel all remaining dates, when Whibley re-injured his back after playing three shows. It was announced on the band's official website that they would be postponing indefinitely all upcoming tour dates for 2011 while Whibley underwent treatment. In an interview with Jason McCaslin that took place in Oppikoppi, he said that "it's safe to say Sum 41 won't have another album out for at least the next two years". In 2011 Sum 41 was nominated for a Grammy Award for Best Hard Rock/Metal Performance for the song "Blood in My Eyes", but lost to the Foo Fighters.
In February 2012 the band shot a music video for the song "Blood in My Eyes", the third single from the album, with director Michael Maxxis in Los Angeles. Shooting took place on February 29 at the desert around the Los Angeles area; it was released a month later.
From November to December 2012 the band undertook the Does This Look Infected? 10th Anniversary Tour, touring the United States to celebrate the album's release in 2002.
On November 26, 2012, the band members revealed that they were taking a break from touring in 2013 to begin work on a new record.
2013–2018: Jocz's departure, return of Baksh, and 13 Voices
On April 18, 2013, drummer Jocz announced he would be leaving the band on his official Facebook page, leaving Whibley as the sole founding member of the band.
On May 16, 2014, Deryck Whibley posted on his website, explaining that he had liver and kidney failure due to excessive drinking. He also said that he had some ideas for new songs, and that the band would be soon starting to make a new album. On June 9, 2014, Whibley said on his Facebook page that he was working on new Sum 41 music out of his home studio to get ready to record some new tunes.
On July 9, 2015, the band launched a PledgeMusic campaign for its comeback album. On July 23, 2015, the band played its comeback show at the Alternative Press Awards, which featured former lead guitarist Dave Baksh, joining the band on stage nine years after his departure. The band's set also featured DMC as guest. It also introduced Frank Zummo from Street Drum Corps as the new drummer. Sum 41 confirmed Baksh's official return to the band on August 14, 2015. On December 26, 2015, Sum 41 teased two new songs on their Instagram profile.
The band performed on the 2016 Warped Tour. On May 11, 2016, the group announced its signing to Hopeless Records. The band announced on June 6, 2016 that their sixth album would be called 13 Voices and would be released on October 7, 2016. That same day, they also revealed album's track list and cover art. The first song from the upcoming album, "Fake My Own Death", was released on June 28, 2016, through Hopeless Records' official YouTube channel, along with a music video for the song. The song was performed on The Late Show with Stephen Colbert on October 3, 2016. The album's first official single, "War", was released on August 25, 2016. On September 28, 2016, the album's eighth track, "God Save Us All (Death to Pop)" was leaked online, before being officially released (along with a live music video) on September 29, 2016. The band invited fans to record a music video for "Goddamn I'm Dead Again" that was released on May 3, 2017.
On October 22, 2017, the band's Facebook page announced that Whibley had started writing new songs. The group embarked on a 15th anniversary tour of Does This Look Infected in 2018.
2019–present: Order in Decline
On April 22, 2019, the band announced via Twitter its return with new music. On April 24, they released the single, "Out for Blood" through Hopeless Records. The same day, the band also announced their seventh studio album, Order in Decline, with a set release date of July 19. The second single from the album "A Death in the Family" was released along with a music video on June 11. On June 18, "Never There" was released as the third single, along with a video. On July 8, the band released "45 (A Matter of Time)" as the fourth single, along with a video. On May 28, 2021, the band released a version of "Catching Fire" featuring Nothing,Nowhere, along with a music video. On February 22, 2022, the band announced a U.S. tour with Simple Plan called the Blame Canada tour. The tour is set to run from April to August 2022.
Side projects and collaborations
Before the release of Half Hour of Power, and up until the departures of Dave Baksh and Steve Jocz, Sum 41 occasionally played as an alter ego 1980s heavy metal band called Pain for Pleasure during shows. The band appeared in Sum 41's music videos for "Fat Lip" and "We're All to Blame" and had at least one song on each of the band's first three releases. The group's best known song under the Pain for Pleasure moniker is the song of the same name from All Killer No Filler, a track that remains the band's staple during live shows and features drummer Steve Jocz on lead vocals. During the Don't Call It a Sum-Back Tour in 2017, Pain for Pleasure appeared performing the song at the end of their show with guitarist Tom Thacker replacing Jocz as the vocalist.
Sum 41 has collaborated with many other artists, both live and in the studio, including: Tenacious D, Ludacris, Iggy Pop, Pennywise, The BurnOuts, Bowling for Soup, Unwritten Law, Treble Charger, Nelly, Gob, Tommy Lee, Rob Halford, Kerry King, Metallica, and Ja Rule.
Shortly after touring for Does This Look Infected?, Sum 41 was recruited by Iggy Pop for his album, Skull Ring. Whibley co-wrote the first single from the album, "Little Know It All", and joined Iggy on the Late Show with David Letterman to promote it. Following the band's show of September 11, 2005, in Quebec City, Quebec, the band went on a touring hiatus, although on April 17, 2006, Sum 41 played at a tribute to Iggy Pop, joining Iggy on stage for "Little Know It All" and "Lust For Life".
During the band's 2006 touring hiatus, Whibley focused on his producing career: he produced two songs for Avril Lavigne's album The Best Damn Thing. Jocz recorded his first video as director for a Canadian band, The Midway State, and McCaslin started a side project with Todd Morse of H2O and Juliette and the Licks. McCaslin's two-person band, named The Operation M.D., released its debut album, We Have an Emergency, in early 2007. As well as playing bass, keyboards, and acoustic guitar, McCaslin contributed backing vocals as well as leading vocals on three songs. The album was co-produced and mixed by Whibley. The group's video for its first single, "Sayonara", was directed by Jocz.
Musical style, influences and legacy
Sum 41 has been described as punk rock, pop punk, skate punk, alternative metal, alternative rock, melodic hardcore, thrash metal, heavy metal, punk metal, nu metal, arena rock, hard rock, and pop rock, with elements of hip hop.
In a November 2004 interview, Deryck Whibley said: "We don't even consider ourselves punk. We're just a rock band. We want to do something different. We want to do our own thing. That's how music has always been to us." Dave Baksh reiterated Whibley's claims, stating "We just call ourselves rock... It's easier to say than punk, especially around all these fuckin' kids that think they know what punk is. Something that was based on not having any rules has probably one of the strictest fucking rule books in the world."
The band's style has been disputed by fans because of the complex combination of different musical styles and the more mature, serious, and heavy sound on later albums. The band's EP Half Hour of Power is described as punk rock, skate punk and pop punk. All Killer No Filler was described as pop punk and skate punk (except for "Pain for Pleasure", which is purely heavy metal). Does This Look Infected? has been described as punk rock, pop punk and melodic hardcore. Chuck was getting heavier opting out the original pop punk sound with a heavy metal sound, but the band kept in touch with its punk rock and melodic hardcore roots, which created an even more mature sound than the group's previous effort. Critics have described Underclass Hero as a revival of the band's pop punk style. Screaming Bloody Murder and 13 Voices saw the band return to some alternative metal influences. Some of the band's songs contain political-social commentary; "Still Waiting" is an anti-George W. Bush and anti-Iraq War song, "The Jester" and "March of the Dogs" also are critical of Bush, "45 (A Matter of Time)" is critical of President Donald Trump, "Underclass Hero" is a song about class struggle, and "Dear Father" is about Whibley's absent father.
Sum 41's influences include Weezer, Slayer, The Police, Devo, Megadeth, Pennywise, Rancid, No Use for a Name, The Vandals, Anthrax, Carcass, Dio, Judas Priest, Foo Fighters, Green Day, NOFX, Lagwagon, Face to Face, Refused, Nirvana, The Beatles (including John Lennon's solo work), Elvis Costello, Beastie Boys, Rob Base and DJ E-Z Rock, Metallica, Guns N' Roses, and Iron Maiden.
Sum 41 has inspired modern artists such as 5 Seconds of Summer, Seaway, Dune Rats, Marshmello, PVRIS, Trash Boat, Neck Deep, The Vamps, Tonight Alive, Bully, Waterparks, and ROAM.
Awards and nominations
Sum 41 has been nominated for seven Juno Awards and has won twice. In 2001, the group was nominated for Best New Group at the Juno awards, but lost to Nickelback. The band was nominated for Best Group in the Juno Awards of 2002 but again lost to Nickelback. Also in 2001, The album All Killer No Filler was nominated for Best Album; however, it lost to The Look of Love by Diana Krall. In 2003, Sum 41 won a Juno Award for Group of the Year. In 2004, the group was nominated again, this time with Does This Look Infected? for Rock Album of the Year, but lost to Sam Roberts's We Were Born in a Flame. In 2005, the album Chuck won Rock Album of the Year; the group was also nominated for Group of The Year, but lost to Billy Talent. In 2008, the band's album Underclass Hero was nominated for the Juno Award Rock Album of the Year; however, the album lost to Finger Eleven's Them vs. You vs. Me.
The group also has been nominated for three different Canadian Independent Music Awards. In 2004, the band won a Woodie Award for The Good Woodie (Greatest Social Impact). The band was also nominated for a Kerrang! Award in 2003 for Best Live Act. On November 30, 2011, Sum 41 was nominated for a Grammy Award for Best Hard Rock/Metal Performance for the song "Blood in My Eyes"; however, on February 12, 2012, the Foo Fighters won.
Awards
A select list of Sum 41's awards and nominations.
|-
|rowspan="3"| 2001 || "Sum 41" || Juno Award – Best New Group ||
|-
|| "Makes No Difference" || MuchMusic Video Award – People's Choice: Favorite Canadian Group ||
|-
|| "Fat Lip" || MTV Video Music Award – Best New Artist in a Video ||
|-
|rowspan="3"| 2002 || "Sum 41" || Juno Award – Best Group ||
|-
|| "All Killer No Filler" || Juno Award – Best Album ||
|-
|| "In Too Deep" || MuchMusic Video Award – MuchLoud Best Rock Video ||
|-
|rowspan="2"| 2003 || "Sum 41" || Juno Award – Group of the Year ||
|-
|| "Sum 41" || Kerrang! Award – Best Live Act ||
|-
|rowspan="4"| 2004 || "Sum 41" || Canadian Independent Music Awards – Favorite Rock Artist/Group ||
|-
|| "Still Waiting" || Canadian Independent Music Awards – Favorite Single ||
|-
|| "Does This Look Infected?" || Juno Award – Rock Album of the Year ||
|-
|| "Sum 41" || Woodie Award – The Good Woodie (Greatest Social Impact) ||
|-
| rowspan="4"| 2005 || "Chuck" || Canadian Independent Music Awards – Favorite Album ||
|-
|| "Sum 41" || Juno Award – Group of the Year ||
|-
|| "Chuck" || Juno Award – Rock Album of the Year ||
|-
|| "Pieces" || MuchMusic Video Award – People's Choice: Favourite Canadian Group ||
|-
|rowspan="3"| 2008 || "With Me" || MuchMusic Video Award – MuchLOUD Best Rock Video ||
|-
|| "Underclass Hero" || Juno Award – Rock Album of the Year ||
|-
|| Underclass Hero || MTV Video Music Awards Japan – Best Group Video ||
|-
|2012 || "Blood in My Eyes" || Grammy Award for Best Hard Rock/Metal Performance ||
|-
|rowspan="2"| 2016 || "Sum 41" || Kerrang! Award – Best Live Act ||
|-
|| "Sum 41" || Kerrang! Award – Best Fanbase ||
|-
|rowspan="3"| 2017 || "Frank Zummo" || Alternative Press Music Awards – Best Drummer ||
|-
|| "Fake My Own Death" || Alternative Press Music Awards – Best Music Video ||
|-
|| "Sum 41" || Alternative Press Music Awards – Artist of the Year ||
|-
|2020
|| "Order in Decline" || Juno Award – Rock Album of the Year ||
Band members
Current members
Deryck Whibley – lead vocals, rhythm guitar (1997–present), keyboards (2004–present), lead guitar (1996–1997, studio 2006–2007), backing vocals (1996–1997), occasional drums (1997–2015)
Dave Baksh – lead guitar, backing vocals (1997–2006; 2015–present)
Jason McCaslin – bass, backing vocals (1999–present)
Tom Thacker – rhythm and lead guitar, keyboards, backing vocals (2007–present)
Frank Zummo – drums, percussion, occasional backing vocals (2015–present)
Touring musicians
Matt Whibley – keyboards (2011)
Darrin Pfeiffer – drums (2015)
Former members
Steve Jocz – drums, percussion, backing vocals, occasional lead and co-lead vocals (1996–2013)
Jon Marshall – lead vocals, rhythm guitar (1996–1997)
Richard Roy – bass (1996–1998)
Mark Spicoluk – bass (1998–1999)
Timeline
Discography
All Killer No Filler (2001)
Does This Look Infected? (2002)
Chuck (2004)
Underclass Hero (2007)
Screaming Bloody Murder (2011)
13 Voices (2016)
Order in Decline (2019)
Tours
Headlining
Tour of the Rising Sum
Sum Like it Loud Tour
Sum on Your Face Tour
Go Chuck Yourself Tour
European in Your Pants Tour
Screaming Bloody Murder Tour
Does This Look Infected?: 10th Anniversary Tour
20th Anniversary Tour
Don't Call It a Sum-Back Tour
Does This Look Infected?: 15th Anniversary Tour
No Personal Space Tour
Order In Decline World Tour
Co-headlining
2004 North American Tour
For One Night Only
Strength in Numbers Tour
2008 Australian Tour
Dead Silence Tour
2017 Canadian Tour
We Will Detonate Tour
Blame Canada Tour
Traveling festival
Warped Tour
Campus Invasion Tour
Kerrang! Tour
Opening act
Pay Attention Tour
Conspiracy of One Tour
1st Annual Honda Civic Tour
Carnival of Sins Tour
The Sufferer & the Witness Tour
Shit is Fucked Up Tour
One More Light World Tour
Notes
References
External links
CanadianBands.com entry
1996 establishments in Ontario
Canadian alternative metal musical groups
Canadian pop punk groups
Canadian punk rock groups
Skate punk groups
Juno Award for Group of the Year winners
Kerrang! Awards winners
Musical groups established in 1996
Musical groups from the Regional Municipality of Durham
Musical quartets
Ajax, Ontario
Articles which contain graphical timelines
Hopeless Records artists
Juno Award for Rock Album of the Year winners
|
9646594
|
https://en.wikipedia.org/wiki/ZoneAlarm%20Z100G
|
ZoneAlarm Z100G
|
ZoneAlarm Secure Wireless Router Z100G is a discontinued Unified Threat Management security router for the home and SOHO market.
The Z100G was developed by SofaWare Technologies, a Check Point Company. The hardware is similar to SofaWare's Safe@Office and VPN-1 Edge lines, and the software differs only in what features the license allows the user to access and to what degree.
Features
ZoneAlarm Z100G provides networking and security-related features, including:
Router with 4 Fast Ethernet LAN ports and one WAN port.
Wireless access point with 108 Mbit/s Super G and Extended Range (XR) technologies.
Stateful Inspection Firewall
Remote Access VPN for a single user at a time
Intrusion Prevention IPS
Gateway Antivirus
Web filtering
USB 2.0 Print Server
Security Reporting
Integrated ActiveX Remote Desktop client to connect to internal computers
Performance
Firewall Throughput - 70 Mbit/s
VPN Throughput - 5 Mbit/s (AES)
Concurrent Firewall Connections - 4,000
External links
ZoneAlarm Z100G Home Page
ZoneAlarm Z100G Technical Specifications
Firewall software
|
4299490
|
https://en.wikipedia.org/wiki/Opportunistic%20encryption
|
Opportunistic encryption
|
Opportunistic encryption (OE) refers to any system that, when connecting to another system, attempts to encrypt communications channels, otherwise falling back to unencrypted communications. This method requires no pre-arrangement between the two systems.
Opportunistic encryption can be used to combat passive wiretapping. (An active wiretapper, on the other hand, can disrupt encryption negotiation to either force an unencrypted channel or perform a man-in-the-middle attack on the encrypted link.) It does not provide a strong level of security, as authentication may be difficult to establish and secure communications are not mandatory. However, it does make the encryption of most Internet traffic easy to implement, which removes a significant impediment to the mass adoption of Internet traffic security.
Opportunistic encryption on the Internet is described in "Opportunistic Encryption using the Internet Key Exchange (IKE)", "Opportunistic Security: Some Protection Most of the Time", and in "Opportunistic Security for HTTP/2".
Routers
The FreeS/WAN project was one of the early proponents of OE. The effort is continued by the former FreeS/WAN developers now working on Libreswan. Libreswan aims to support different authentication hooks for opportunistic encryption with IPsec. Version 3.16, released in December 2015, has support for Opportunistic IPsec using AUTH-NULL, which is based on RFC 7619. The Libreswan Project is currently working on (forward) DNSSEC and Kerberos support for Opportunistic IPsec.
Openswan has also been ported to the OpenWrt project. Openswan used reverse DNS records to facilitate the key exchange between the systems.
It is possible to use OpenVPN and networking protocols to set up dynamic VPN links which act similarly to OE for specific domains.
Unix and unix-like systems
The FreeS/WAN and forks such as Openswan and strongSwan offer VPNs which can also operate in OE mode using IPsec based technology. Obfuscated TCP is another method of implementing OE.
Windows OS
Windows platforms have an implementation of OE installed by default. This method uses IPsec to secure the traffic and is simple to turn on. It is accessed via the MMC, using "IP Security Policies on Local Computer" and then editing the properties to assign the "(Request Security)" policy. This will turn on optional IPsec in a Kerberos environment.
In a non-Kerberos environment, a certificate from a certificate authority (CA) which is common to any system with which you communicate securely is required.
Many systems also have problems when either side is behind a NAT. This problem is addressed by NAT Traversal (NAT-T), which is enabled by adding a DWORD value of 2 to the registry at: HKLM\SYSTEM\CurrentControlSet\Services\IPsec\AssumeUDPEncapsulationContextOnSendRule
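As a purely illustrative sketch (not official Microsoft guidance), the same value can be set programmatically with Python's standard winreg module; the key path is the one quoted above, which applies to older Windows versions, and the change requires administrative rights:

import winreg

# Key path as cited above; on newer Windows releases the value may live under a
# different service key, so treat this path as an assumption.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\IPsec"
VALUE_NAME = "AssumeUDPEncapsulationContextOnSendRule"

# Create or open the key with write access and set the DWORD to 2, which allows
# UDP-encapsulated IPsec (NAT-T) associations in the situations described above.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 2)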
Using the filtering options provided in MMC, it is possible to tailor the networking to require, request or permit traffic to various domains and protocols to use encryption.
E-mail
Opportunistic encryption can also be used for specific traffic like e-mail using the SMTP STARTTLS extension for relaying messages across the Internet, or the Internet Message Access Protocol (IMAP) STARTTLS extension for reading e-mail. With this implementation, it is not necessary to obtain a certificate from a certificate authority, as a self-signed certificate can be used.
Using TLS with IMAP, POP3 and ACAP
SMTP Service Extension for Secure SMTP over TLS
STARTTLS and postfix
STARTTLS and Exchange
Many systems employ a variant with third-party add-ons to traditional email packages by first attempting to obtain an encryption key and if unsuccessful, then sending the email in the clear. PGP, p≡p, Hushmail, and Ciphire, among others can all be set up to work in this mode.
In practice, STARTTLS in SMTP is often deployed with self-signed certificates, which represents a minimal one-time task for a system administrator, and results in most email traffic being opportunistically encrypted.
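As a purely illustrative sketch of this opportunistic pattern using Python's standard smtplib (the relay host and addresses are placeholders, not real infrastructure), the client upgrades to TLS only when the server advertises STARTTLS and otherwise falls back to plaintext delivery:

import smtplib

# Hypothetical relay; a real client would look the host up via MX records.
with smtplib.SMTP("mail.example.org", 25) as smtp:
    smtp.ehlo()
    if smtp.has_extn("starttls"):
        smtp.starttls()  # upgrade the channel to TLS when the server offers it
        smtp.ehlo()      # re-issue EHLO over the now-encrypted channel
    # If STARTTLS is not offered, delivery simply continues in the clear.
    smtp.sendmail("alice@example.org", ["bob@example.net"],
                  "Subject: test\r\n\r\nHello, opportunistically.")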
VoIP
Some Voice over IP (VoIP) solutions provide for painless encryption of voice traffic when possible. Some versions of the Sipura and Linksys lines of analog telephony adapters (ATA) include a hardware implementation of SRTP with the installation of a certificate from Voxilla, a VoIP information site. When a call is placed, an attempt is made to use SRTP; if successful, a series of tones is played into the handset, and if not, the call proceeds without encryption. Skype and Amicima use only secure connections and Gizmo5 attempts a secure connection between its clients. Phil Zimmermann, Alan Johnston, and Jon Callas have proposed a new VoIP encryption protocol called ZRTP. They have an implementation of it called Zfone whose source and compiled binaries are available.
Websites
For encrypting WWW/HTTP connections, HTTPS is typically used, which requires strict encryption and has significant administrative costs, both in terms of initial setup and continued maintenance costs for the website operator. Most browsers verify the webserver's identity to make sure that an SSL certificate is signed by a trusted certificate authority and has not expired, usually requiring the website operator to manually change the certificate every one or two years. The easiest way to enable some sort of opportunistic website encryption is by using self-signed certificates, but this causes browsers to display a warning each time the website is visited unless the user manually marks the website's certificate as trusted. Because unencrypted websites do not currently display any such warnings, the use of self-signed certificates is not well received.
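As a purely illustrative sketch of the trade-off involved (encryption without authentication), Python's standard ssl and urllib modules can be told to accept any certificate, including a self-signed one; the URL is a placeholder, and this should never be done where man-in-the-middle attacks matter:

import ssl
import urllib.request

# Keep the connection encrypted but skip certificate and hostname verification.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# Hypothetical server presenting a self-signed certificate.
with urllib.request.urlopen("https://self-signed.example.org/", context=ctx) as resp:
    print(resp.status)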
In 2015, Mozilla started to roll out opportunistic encryption in Firefox version 37. This was quickly rolled back (in update 37.0.1) due to a serious vulnerability that could bypass SSL certificate verification.
Browser extensions like HTTPS Everywhere and HTTPSfinder find and automatically switch the connection to HTTPS when possible.
Several proposals were available for true, seamless opportunistic encryption of the HTTP/2 protocol. These proposals were later rejected. Poul-Henning Kamp, lead developer of Varnish and a senior FreeBSD kernel developer, has criticized the IETF for following a particular political agenda with HTTP/2 and for not implementing opportunistic encryption in the standard.
Weaknesses
STARTTLS implementations often used with SMTP are vulnerable to STRIPTLS attacks when subject to active wiretapping.
See also
John Gilmore
Multi-factor authentication
Security level
Security level management
Opportunistic TLS
Opportunistic Wireless Encryption (OWE)
tcpcrypt
References
External links
Enabling Email Confidentiality through the use of Opportunistic Encryption by Simson Garfinkel of the MIT Laboratory for Computer Science, May 2003
Windows KB article on NAT-T and DH2048
- Opportunistic Encryption using the Internet Key Exchange (IKE)
- Pervasive Monitoring Is an Attack
Cryptographic software
Internet Protocol based network software
Internet privacy
|
41120998
|
https://en.wikipedia.org/wiki/Jawanza%20Starling
|
Jawanza Starling
|
Jawanza Starling (born June 21, 1991) is an American football safety who is currently a free agent. He played college football for USC. He was signed by the Texans as an undrafted free agent in 2013. He has also been a member of the New York Giants.
Professional career
Houston Texans
Starling was signed by the Houston Texans after going unselected in the 2013 NFL Draft. He was released during final roster cuts before the start of the season.
New York Giants
The New York Giants signed Starling to their practice squad on September 3, 2013.
Second Stint with Texans
On November 14, 2013, the Texans signed Starling off the Giants' practice squad.
He was released on August 30, 2014.
References
External links
Houston Texans bio
NFL Combine Profile
USC Trojans football bio
USC Trojans baseball bio
1991 births
Living people
Players of American football from Tallahassee, Florida
American football safeties
USC Trojans football players
USC Trojans baseball players
Houston Texans players
New York Giants players
|
54383106
|
https://en.wikipedia.org/wiki/2005%20Troy%20Trojans%20football%20team
|
2005 Troy Trojans football team
|
The 2005 Troy Trojans football team represented Troy University in the 2005 NCAA Division I-A football season. The Trojans played their home games at Movie Gallery Stadium in Troy, Alabama and competed in the Sun Belt Conference.
Schedule
References
Troy
Troy Trojans football seasons
Troy Trojans football
|
217500
|
https://en.wikipedia.org/wiki/Dialog%20box
|
Dialog box
|
The dialog box (also called dialogue box (non-U.S. English) or simply dialog) is a graphical control element in the form of a small window that communicates information to the user and prompts them for a response.
Dialog boxes are classified as "modal" or "modeless", depending on whether they block interaction with the software that initiated the dialog. The type of dialog box displayed is dependent upon the desired user interaction.
The simplest type of dialog box is the alert, which displays a message and may require an acknowledgment that the message has been read, usually by clicking "OK", or a decision as to whether or not an action should proceed, by clicking "OK" or "Cancel". Alerts are also used to display a "termination notice"—sometimes requesting confirmation that the notice has been read—in the event of either an intentional closing or unintentional closing ("crash") of an application or the operating system. (E.g., "Gedit has encountered an error and must close.") Although this is a frequent interaction pattern for modal dialogs, it is also criticized by usability experts as being ineffective for its intended use, which is to protect against errors caused by destructive actions, and for which better alternatives exist.
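As a purely illustrative sketch (any GUI toolkit provides an equivalent), such an OK/Cancel confirmation can be produced with Python's standard tkinter toolkit:

import tkinter as tk
from tkinter import messagebox

root = tk.Tk()
root.withdraw()  # no main window is needed for this illustration

# Modal confirmation dialog: returns True for "OK" and False for "Cancel".
if messagebox.askokcancel("Delete file", "Delete report.txt permanently?"):
    print("User confirmed the action.")
else:
    print("User cancelled.")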
An example of a dialog box is the about box found in many software programs, which usually displays the name of the program, its version number, and may also include copyright information.
Modeless
Non-modal or modeless dialog boxes are used when the requested information is not essential to continue, and so the window can be left open while work continues elsewhere. A type of modeless dialog box is a toolbar which is either separate from the main application, or may be detached from the main application, and items in the toolbar can be used to select certain features or functions of the application.
In general, good software design calls for dialogs to be of this type where possible, since they do not force the user into a particular mode of operation. An example might be a dialog of settings for the current document, e.g. the background and text colors. The user can continue adding text to the main window whatever color it is, but can change it at any time using the dialog. (This isn't meant to be an example of the best possible interface for this; often the same functionality may be accomplished by toolbar buttons on the application's main window.)
System modal
System modal dialog boxes prevent interaction with any other window onscreen and prevent users from switching to another application or performing any other action until the issue presented in the dialog box is addressed. System modal dialogs were more commonly used in the past on single tasking systems where only one application could be running at any time. One current example is the shutdown screen of current Windows versions.
Application modal
Modal dialog boxes temporarily halt the program: the user cannot continue without closing the dialog; the program may require some additional information before it can continue, or may simply wish to confirm that the user wants to proceed with a potentially dangerous course of action (confirmation dialog box). Usability practitioners generally regard modal dialogs as bad design solutions, since they are prone to produce mode errors. Dangerous actions should be undoable wherever possible; a modal alert dialog that appears unexpectedly or which is dismissed automatically (because the user has developed a habit) will not protect from the dangerous action.
A modal dialog interrupts the main workflow. This effect has either been sought by the developer because it focuses on the completion of the task at hand or rejected because it prevents the user from changing to a different task when needed.
Document modal
The concept of a document modal dialog has recently been used, most notably in macOS and Opera Browser. In the first case, they are shown as sheets attached to a parent window. These dialogs block only that window until the user dismisses the dialog, permitting work in other windows to continue, even within the same application.
In macOS, dialogs appear to emanate from a slot in their parent window, and are shown with a reinforcing animation. This helps to let the user understand that the dialog is attached to the parent window, not just shown in front of it. No work can be done in the underlying document itself while the dialog is displayed, but the parent window can still be moved, re-sized, and minimized, and other windows can be brought in front so the user can work with them:
The same type of dialog box can be compared with the "standard" modal dialog boxes used in Windows and other operating systems.
Similarities include:
the parent window is frozen when the dialog box opens, and no work can be done with the underlying document in that window.
The differences are that
the dialog box may open anywhere in the parent window
depending on where the parent window is located, the dialog box may open virtually anywhere on screen
the dialog box may be moved (in almost all cases), in some cases may be resizable, but usually cannot be minimized, and
no changes to the parent window are possible (cannot be resized, moved or minimized) while the dialog box is open.
Both mechanisms have shortcomings:
The Windows dialog box locks the parent window which can hide other windows the user may need to refer to while interacting with the dialog, though this may be mitigated since other windows are available through the task bar.
The macOS dialog box blocks the parent window, preventing the user from referring to it while interacting with the dialog. This may require the user to close the dialog to access the necessary information, then re-open the dialog box to continue.
See also
Application posture
References
Graphical control elements
|
14676596
|
https://en.wikipedia.org/wiki/Openbravo
|
Openbravo
|
Openbravo is a Spanish cloud-based software provider specializing in retail and restaurants; formerly known as a horizontal open source ERP software vendor for different industries. The head office of Openbravo is located in Pamplona, Spain. Openbravo also has offices in Barcelona and Lille. The company's main product is Openbravo Commerce Cloud, a cloud-based omnichannel platform.
History
Openbravo's roots are in the development of business administration software, first developed by two employees of the University of Navarra, Nicolas Serrano and Ismael Ciordia, who in the mid-1990s were involved in developing the management systems of the university. They used emerging internet technologies in their work, and subsequently introduced a new approach for web applications. Their concept was realized in a new company called Tecnicia, founded in August 2001 by Serrano, Ciordia, and Aguinaga. In 2005, two management consultants, Manel Sarasa and Josep Mitjá, were asked by a venture capital company to evaluate Tecnicia and prepare a business plan for its evolution. In 2006, the two consultants joined Tecnicia as the CEO and COO, respectively. Around the same time the Spanish investment company Sodena invested US$6.4 million in the further development of the company.
In 2006 the company was renamed Openbravo and the first product launched was Openbravo ERP. The code was open-sourced in April that same year. In 2007, the company announced the acquisition of LibrePOS, a Java-based Point-of-Sale (POS) application for the retail and hospitality businesses. LibrePOS was rebranded as Openbravo POS (or Openbravo Java POS). In May 2008 Openbravo attracted three more investors, Amadeus (UK), GIMV (Belgium) and Adara (Spain) for a second investment round totalling US$12.5 million. This investment launched Openbravo as one of the leading open source companies with substantial resources to further develop its products and services.
In July 2012 Openbravo launched Openbravo for Retail, a vertical solution for the Retail industry including the Openbravo Web POS, a new POS solution that replaced the previous Openbravo Java POS. Openbravo Web POS is a web, mobile and responsive POS solution.
In March 2014, Openbravo ERP was renamed to Openbravo ERP Platform and Openbravo for Retail renamed to Openbravo Commerce Platform.
In May 2015, the Openbravo Commerce Platform and Openbravo ERP Platform were renamed to Openbravo Commerce Suite and Openbravo Business Suite, and Openbravo announced its strategic focus on retail. Openbravo also launched Openbravo Subscription Management and Recurring Billing, a specialized solution for revenue models based on recurring transactions.
In February 2016, Openbravo launched Openbravo Cloud, its official cloud offering, and started the distribution of Openbravo Commerce Cloud, a cloud-based and mobile-enabled omnichannel platform for midsize to large retail and restaurant chains.
In 2018, Openbravo announced a certified SAP connector to facilitate integration of Openbravo Commerce Cloud for clients running SAP as their central corporate system.
Business and markets
Openbravo today targets mid-sized to large retail and restaurant chains seeking a cloud-based platform to support their omnichannel operations, typically those with a physical network of 15 to 20 or more locations.
Current products
Openbravo currently distributes Openbravo Commerce Cloud, a mobile and cloud omnichannel platform targeting retail and restaurant chains and supporting their omnichannel operations. The functionality offered by the platform covers both front-office and back-office processes for the integration of all sales channels, including a web and mobile point of sale, an integrated OMS engine, CRM and clienteling functionality, and mobile warehouse and inventory management, among others. The company also offers connectors with external solutions such as ERP, eCommerce and payment systems.
The Openbravo platform is distributed under an annual subscription model, based mainly on the number of concurrent backoffice users and the number of points of sale (POS) subscribed. Additional costs may apply for subscriptions to additional commercial functionality, such as connectors with external systems.
Previous products (discontinued)
Since its appearance in the market in 2006, Openbravo has launched different products that help to describe the evolution of the company. The following information is shown for historical purposes only, since all these products are no longer offered.
Openbravo ERP
Openbravo ERP was the first product launched by Openbravo. It is a web-based Enterprise Resource Planning software for small and medium-sized companies that is released under the Openbravo Public License, based on the Mozilla Public License. The model for the program was originally based on the Compiere ERP program that is also open source, released under the GNU General Public License version 2. As of January 2008, the program was among the top ten most active projects of SourceForge.
With Openbravo ERP organizations can automate and register most common business processes, in the fields of Sales, Procurement, Manufacturing, Projects, Finance, MRP and more. Numerous commercial extensions are available on the Openbravo Exchange, which can be procured by users with a commercial edition of Openbravo ERP. This paid-for version offers additional functionality compared to the free Community Edition, including integrated administration tools, a non-technical tool for updates and upgrades, access to Openbravo Exchange and a Service Level Agreement. Characteristic of the Openbravo ERP application is the green web interface through which users maintain company data in a web browser. Openbravo can also create and export reports and data to several formats, such as PDF and Microsoft Excel.
Openbravo's Java-based architecture focuses on two development models:
model-driven development, in which developers describe the application in terms of models rather than code
model-view-controller, a well established design pattern in which the presentation logic and the business logic are kept isolated
These two models allow for integration with other programs and for a simple interface. Because of the application of open standards Openbravo ERP can be integrated with other open source applications like Magento webshop, Pentaho Business Intelligence, ProcessMaker BPM, Liferay Portal and SugarCRM.
In March 2014, Openbravo ERP was renamed to Openbravo ERP Platform, which was changed again to Openbravo Business Suite in May 2015. The latest version is 3.0.36902 released in April 2020.
Openbravo Java POS
Openbravo POS was the first POS solution offered by Openbravo. It is a Java Point-of-Sale (POS) application for retail and hospitality businesses. The application originally came into existence as TinaPOS. For legal reasons the application was renamed LibrePOS. In 2007 LibrePOS was acquired by Openbravo and has since been known by its current name. The program was completely integrated into Openbravo ERP. Through this integration it was possible to update stock levels, financial journals and customer data directly in the central database when a POS sale was executed in the stores. Openbravo POS could also be used with PDAs for order intake.
In July 2012 Openbravo launched its new POS solution, the Openbravo Web POS, included in the Openbravo Commerce Suite and which replaced the Openbravo Java POS. Openbravo Java POS has been discontinued.
Openbravo Business Suite
The Openbravo Business Suite was launched in May 2015, replacing the previous Openbravo ERP Platform. It is a global management solution built on top of the Openbravo Technology Platform including horizontal ERP, CRM and BI functionality for across industries.
Openbravo Commerce Suite
The Openbravo Commerce Suite is Openbravo's solution for retailers. It is a multi-channel retail management solution including a responsive web and mobile POS (Openbravo Web POS) backed by comprehensive functionality for Merchandise Management, Supply Chain Management and Enterprise Management.
Openbravo Subscription Management and Recurring Billing
A commercial solution for companies with recurring billing revenue models, including functionality from pricing definition to automatic revenue recognition and accounting.
Openbravo Commerce Cloud
Current version of the Openbravo software, providing a cloud-based and mobile-enabled omnichannel platform for midsize to large specialty retailers and restaurant chains. It is composed of Openbravo Commerce Central, Openbravo Store, Openbravo OMS, Openbravo WMS and Openbravo Reporting.
See also
Omnichannel
Cloud computing
Retail
Point of Sale
OMS
List of free and open source software packages
References
Business software companies
Point of sale companies
Retail point of sale systems
Cloud computing providers
Development software companies
Supply chain software companies
Companies based in Navarre
Software companies of Spain
|
3600079
|
https://en.wikipedia.org/wiki/Dynamic%20linker
|
Dynamic linker
|
In computing, a dynamic linker is the part of an operating system that loads and links the shared libraries needed by an executable when it is executed (at "run time"), by copying the content of libraries from persistent storage to RAM, filling jump tables and relocating pointers. The specific operating system and executable format determine how the dynamic linker functions and how it is implemented.
Linking is often referred to as a process that is performed when the executable is compiled, while a dynamic linker is a special part of an operating system that loads external shared libraries into a running process and then binds those shared libraries dynamically to the running process. This approach is also called dynamic linking or late linking.
Implementations
Microsoft Windows
Dynamic-link library, or DLL, is Microsoft's implementation of the shared library concept in the Microsoft Windows and OS/2 operating systems. These libraries usually have the file extension DLL, OCX (for libraries containing ActiveX controls), or DRV (for legacy system drivers). The file formats for DLLs are the same as for Windows EXE files; that is, Portable Executable (PE) for 32-bit and 64-bit Windows, and New Executable (NE) for 16-bit Windows. As with EXEs, DLLs can contain code, data, and resources, in any combination.
Data files with the same file format as a DLL, but with different file extensions and possibly containing only resource sections, can be called resource DLLs. Examples of such DLLs include icon libraries, sometimes having the extension ICL, and font files, having the extensions FON and FOT.
Unix-like systems using ELF, and Darwin-based systems
In most Unix-like systems, most of the machine code that makes up the dynamic linker is actually an external executable that the operating system kernel loads and executes first in a process address space newly constructed as a result of calling exec or posix_spawn functions. At link time, the path of the dynamic linker that should be used is embedded into the executable image.
When an executable file is loaded, the operating system kernel reads the path of the dynamic linker from it and then attempts to load and execute this other executable binary; if that attempt fails because, for example, there is no file with that path, the attempt to execute the original executable fails. The dynamic linker then loads the initial executable image and all the dynamically-linked libraries on which it depends and starts the executable. As a result, the pathname of the dynamic linker is part of the operating system's application binary interface.
Systems using ELF
In Unix-like systems that use ELF for executable images and dynamic libraries, such as Solaris, 64-bit versions of HP-UX, Linux, FreeBSD, NetBSD, OpenBSD, and DragonFly BSD, the path of the dynamic linker that should be used is embedded at link time into the .interp section of the executable's PT_INTERP segment. In those systems, dynamically loaded shared libraries can be identified by the filename suffix .so (shared object).
The dynamic linker can be influenced into modifying its behavior during either the program's execution or the program's linking, and the examples of this can be seen in the run-time linker manual pages for various Unix-like systems. A typical modification of this behavior is the use of LD_LIBRARY_PATH and LD_PRELOAD environment variables, which adjust the runtime linking process by searching for shared libraries at alternate locations and by forcibly loading and linking libraries that would otherwise not be, respectively. An example is zlibc, also known as uncompress.so, which facilitates transparent decompression when used through the LD_PRELOAD hack; consequently, it is possible to read pre-compressed (gzipped) file data on BSD and Linux systems as if the files were not compressed, essentially allowing a user to add transparent compression to the underlying filesystem, although with some caveats. The mechanism is flexible, allowing trivial adaptation of the same code to perform additional or alternate processing of data during the file read, prior to the provision of said data to the user process that has requested it.
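As a purely illustrative sketch (assuming a glibc-based Linux system where libm.so.6 is present), the same dlopen()/dlsym() machinery provided by the dynamic linker can be driven explicitly from Python's standard ctypes module to load a shared object and resolve a symbol at run time:

import ctypes

# dlopen(): ask the dynamic loader to map the maths library into this process.
libm = ctypes.CDLL("libm.so.6")

# dlsym(): resolve the cos symbol, then describe its C signature for ctypes.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # prints 1.0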
macOS and iOS
In the Apple Darwin operating system, and in the macOS and iOS operating systems built on top of it, the path of the dynamic linker that should be used is embedded at link time into one of the Mach-O load commands in the executable image. In those systems, dynamically loaded shared libraries can be identified either by the filename suffix .dylib or by their placement inside the bundle for a framework.
The dynamic linker not only links the target executable to the shared libraries but also places machine code functions at specific address points in memory that the target executable knows about at link time. When an executable wishes to interact with the dynamic linker, it simply executes the machine-specific call or jump instruction to one of those well-known address points. The executables on the macOS and iOS platforms often interact with the dynamic linker during the execution of the process; it is even known that an executable might interact with the dynamic linker, causing it to load more libraries and resolve more symbols, hours after it initially launches. The reason that a macOS or iOS program interacts with the dynamic linker so often is due both to Apple's Cocoa and Cocoa Touch APIs and Objective-C, the language in which they are implemented (see their main articles for more information).
The dynamic linker can be coerced into modifying some of its behavior; however, unlike other Unix-like operating systems, these modifications are hints that can be (and sometimes are) ignored by the dynamic linker. Examples of this can be seen in dyld's manual page. A typical modification of this behavior is the use of the DYLD_FRAMEWORK_PATH and DYLD_PRINT_LIBRARIES environment variables. The former of the previously-mentioned variables adjusts the executables' search path for the shared libraries, while the latter displays the names of the libraries as they are loaded and linked.
Apple's macOS dynamic linker is an open-source project released as part of Darwin and can be found in the Apple's open-source dyld project.
XCOFF-based Unix-like systems
In Unix-like operating systems using XCOFF, such as AIX, dynamically-loaded shared libraries use the filename suffix .a.
The dynamic linker can be influenced into modifying its behavior during either the program's execution or the program's linking.
A typical modification of this behavior is the use of the LIBPATH environment variable.
This variable adjusts the runtime linking process by searching for shared libraries at alternate locations.
OS/360 and successors
Dynamic linking from Assembler language programs in IBM OS/360 and its successors is typically done using a LINK macro instruction containing a Supervisor Call instruction that activates the operating system routines which make the library module to be linked available to the program. Library modules may reside in a "STEPLIB" or "JOBLIB" specified in control cards and only available to a specific execution of the program, in a library included in the LINKLIST in the PARMLIB (specified at system startup time), or in the "link pack area" where specific reentrant modules are loaded at system startup time.
Multics
In the Multics operating system all files, including executables, are segments. A call to a routine not part of the current segment will cause the system to find the referenced segment, in memory or on disk, and add it to the address space of the running process. Dynamic linking is the normal method of operation, and static linking (using the binder) is the exception.
Efficiency
Dynamic linking is generally slower (requires more CPU cycles) than linking during compilation time, as is the case for most processes executed at runtime. However, dynamic linking is often more space-efficient (on disk and in memory at runtime). When a library is linked statically, every process being run is linked with its own copy of the library functions being called upon. Therefore, if a library is called upon many times by different programs, the same functions in that library are duplicated in several places in the system's memory. Using shared, dynamic libraries means that, instead of linking each file to its own copy of a library at compilation time and potentially wasting memory space, only one copy of the library is ever stored in memory at a time, freeing up memory space to be used elsewhere. Additionally, in dynamic linking, a library is only loaded if it is actually being used.
See also
Direct binding
DLL Hell
Dynamic loading
Late binding
prelink
Dynamic dead code elimination
Notes
References
Further reading
External links
Dynamic Linking and Loading, IECC.com
Dynamic Linking in Linux and Windows, part one, Symantec.com
Anatomy of Linux dynamic libraries, IBM.com
Computer libraries
Computer security exploits
Compilers
|
237214
|
https://en.wikipedia.org/wiki/AspectJ
|
AspectJ
|
AspectJ is an aspect-oriented programming (AOP) extension created at PARC for the Java programming language. It is available in Eclipse Foundation open-source projects, both stand-alone and integrated into Eclipse. AspectJ has become a widely used de facto standard for AOP by emphasizing simplicity and usability for end users. It uses Java-like syntax and has included IDE integrations for displaying crosscutting structure since its initial public release in 2001.
Simple language description
All valid Java programs are also valid AspectJ programs, but AspectJ lets programmers define special constructs called aspects. Aspects can contain several entities unavailable to standard classes. These are:
Extension methods Allow a programmer to add methods, fields, or interfaces to existing classes from within the aspect. This example adds an acceptVisitor (see visitor pattern) method to the Point class:
aspect VisitAspect {
    void Point.acceptVisitor(Visitor v) {
        v.visit(this);
    }
}
Pointcuts Allow a programmer to specify join points (well-defined moments in the execution of a program, like method call, object instantiation, or variable access). All pointcuts are expressions (quantifications) that determine whether a given join point matches. For example, this point-cut matches the execution of any instance method in an object of type Point whose name begins with set:
pointcut set() : execution(* set*(..) ) && this(Point);
Advices Allow a programmer to specify code to run at a join point matched by a pointcut. The actions can be performed before, after, or around the specified join point. Here, the advice refreshes the display every time something on Point is set, using the pointcut declared above:
after () : set() {
    Display.update();
}
AspectJ also supports limited forms of pointcut-based static checking and aspect reuse (by inheritance). See the AspectJ Programming Guide for a more detailed description of the language.
AspectJ compatibility and implementations
AspectJ can be implemented in many ways, including source-weaving or bytecode-weaving, and directly in the virtual machine (VM). In all cases, the AspectJ program becomes a valid Java program that runs in a Java VM. Classes affected by aspects are binary-compatible with unaffected classes (to remain compatible with classes compiled with the unaffected originals). Supporting multiple implementations allows the language to grow as technology changes, and being Java-compatible ensures platform availability.
Key to its success has been engineering and language decisions that make the language usable and programs deployable. The original Xerox AspectJ implementation used source weaving, which required access to source code. When Xerox contributed the code to Eclipse, AspectJ was reimplemented using the Eclipse Java compiler and a bytecode weaver based on BCEL, so developers could write aspects for code in binary (.class) form. At this time the AspectJ language was restricted to support a per-class model essential for incremental compilation and load-time weaving. This made IDE integrations as responsive as their Java counterparts, and it let developers deploy aspects without altering the build process. This led to increased adoption, as AspectJ became usable for impatient Java programmers and enterprise-level deployments. Since then, the Eclipse team has increased performance and correctness, upgraded the AspectJ language to support Java 5 language features like generics and annotations, and integrated annotation-style pure-java aspects from AspectWerkz.
The Eclipse project supports both command-line and Ant interfaces. A related Eclipse project has steadily improved the Eclipse IDE support for AspectJ (called AspectJ Development Tools (AJDT)) and other providers of crosscutting structure. IDE support for emacs, NetBeans, and JBuilder foundered when Xerox put them into open source, but support for Oracle's JDeveloper did appear. IDE support has been key to Java programmers using AspectJ and understanding crosscutting concerns.
BEA has offered limited VM support for aspect-oriented extensions, but extensions supported in all Java VMs would require agreement through Sun's Java Community Process (see also the java.lang.instrument package available since Java SE 5, which is a common ground for JVM load-time instrumentation).
Academic interest in the semantics and implementation of aspect-oriented languages has surrounded AspectJ since its release. The leading research implementation of AspectJ is the AspectBench Compiler, or abc; it supports extensions for changing the syntax and semantics of the language and forms the basis for many AOP experiments that the AspectJ team can no longer support, given its broad user base.
Many programmers discover AspectJ as an enabling technology for other projects, most notably Spring AOP. A sister Spring project, Spring Roo, automatically maintains AspectJ inter-type declarations as its principal code generation output.
History and contributors
Gregor Kiczales started and led the Xerox PARC team that eventually developed AspectJ. He coined the term crosscutting. Fourth on the team, Chris Maeda coined the term aspect-oriented programming. Jim Hugunin and Erik Hilsdale (Xerox PARC team members 12 and 13) were the original compiler and weaver engineers, Mik Kersten implemented the IDE integration and started the Eclipse AJDT project with Adrian Colyer (current lead of the AspectJ project) and Andrew Clement (current compiler engineer).
The AspectBench Compiler was developed and is maintained as a joint effort of the Programming Tools Group at the Oxford University Computing Laboratory, the Sable Research Group at McGill University, and the Institute for Basic Research in Computer Science (BRICS).
AspectWerkz
AspectWerkz is a dynamic, lightweight and high-performance AOP/AOSD framework for Java. It has been merged with the AspectJ project, which supports AspectWerkz functionality since AspectJ 5.
Jonas Boner and Alex Vasseur engineered the AspectWerkz project, and later contributed to the AspectJ project when it merged in the AspectWerkz annotation style and load-time weaving support.
Unlike AspectJ prior to version 5, AspectWerkz did not add any new language constructs to Java, but instead supported declaration of aspects within Java annotations. It utilizes bytecode modification to weave classes at project build time, class load time, as well as runtime. It uses standardized JVM APIs. Aspects can be defined using either Java annotations (introduced with Java 5), Java 1.3/1.4 custom doclets or a simple XML definition file.
AspectWerkz provides an API to use the very same aspects for proxies, hence providing a transparent experience, allowing a smooth transition for users familiar with proxies.
AspectWerkz is free software. The LGPL-style license allows the use of AspectWerkz 2.0 in both commercial and open source projects.
See also
Aspect-oriented programming
Spring AOP (part of the Spring Framework)
Aspect-oriented software development
References
External links
AJDT
Aspect bench : https://web.archive.org/web/20170816093700/http://www.sable.mcgill.ca/abc/
AspectJ Home Page
AspectWerkz Project homepage
Improve modularity with aspect-oriented programming
Spring AOP and AspectJ Introduction
The AspectJ Programming Guide
Xerox holds patents for AOP/AspectJ, but published the AspectJ source code under the Common Public License, which grants some patent rights.
Programming languages
Aspect-oriented programming
Aspect-oriented software development
Cross-platform software
Eclipse (software)
Eclipse software
Eclipse technology
Java programming language family
Software distribution
Software using the Eclipse license
Programming languages created in 2001
2001 software
Cross-platform free software
High-level programming languages
|
612330
|
https://en.wikipedia.org/wiki/Last.fm
|
Last.fm
|
Last.fm is a music website founded in the United Kingdom in 2002. Using a music recommender system called "Audioscrobbler", Last.fm builds a detailed profile of each user's musical taste by recording details of the tracks the user listens to, either from Internet radio stations, or the user's computer or many portable music devices. This information is transferred ("scrobbled") to Last.fm's database either via the music player (including, among others, Spotify, Deezer, Tidal, MusicBee, and Anghami) or via a plug-in installed into the user's music player. The data is then displayed on the user's profile page and compiled to create reference pages for individual artists.
On 30 May 2007, it was acquired by CBS Interactive for £140 million (US$280 million).
The site formerly offered a radio streaming service, which was discontinued on 28 April 2014. The ability to access the large catalogue of music stored on the site was later removed entirely, replaced by links to YouTube and Spotify where available.
History
The current Last.fm website was developed from two separate sources, Last.fm and Audioscrobbler, which were merged in 2005. Audioscrobbler began as a computer science project of Richard Jones when he attended the University of Southampton School of Electronics and Computer Science in the United Kingdom, with the term scrobbling defined as the finding, processing, and distribution of information involving people, music, and other data. Jones developed the first plugins, and then opened an API to the community, after which many music players on different operating system platforms were supported. Audioscrobbler was limited to keeping track of which songs its users played on a registered computer, which allowed for charting and collaborative filtering.
Audioscrobbler and Last.fm (2002–2006)
Last.fm was founded in 2002 by Felix Miller, Martin Stiksel, Michael Breidenbruecker and Thomas Willomitzer, all of them from Germany or Austria, as an Internet radio station and music community site, using similar music profiles to generate dynamic playlists. The site name takes advantage of a domain hack using .fm, the top level domain of Micronesia, popular with FM radio related sites. The "love" and "ban" buttons allowed users to gradually customise their profiles. Last.fm won the Europrix 2002 and was nominated for the Prix Ars Electronica in 2003.
The Audioscrobbler and Last.fm teams began to work closely together, both teams moving into the same offices in Whitechapel, London, and by 2003 Last.fm was fully integrated with Audioscrobbler profiles. Input could come through an Audioscrobbler plugin or a Last.fm station. The sites also shared many community forums, although a few were unique to each site. The old Audioscrobbler site at the audioscrobbler.com domain name was wholly merged into the new Last.fm site on 9 August 2005. Audioscrobbler.net was launched as a separate development-oriented site on 5 September 2005. However, at the very bottom of each of the Last.fm pages there was an Audioscrobbler "slogan", which changes each time the page is refreshed. Based on well-known sayings or advertisements, these originally appeared at the top of the Audioscrobbler website pages and were all created and contributed by the original site members.
An update to the site was made on 14 July 2006, which included a new software application for playing Last.fm radio streams and for logging of tracks played with other media players. Other changes included the improvement of the friends system and updating it to require a two-way friendship, the addition of the Last.fm "Dashboard" where users can see on one page relevant information for their profile, expanded options for purchasing music from online retailers and a new visual design for the web site (including an optional black colour scheme). The site began expanding its language base on 15 July 2006, with a Japanese version. Currently, the site is available in German, Spanish, French, Italian, Polish, Portuguese, Swedish, Russian, Turkish and Simplified Chinese. In late 2006, the site won Best Community Music Site at the BT Digital Music Awards in October. Last.fm also teamed with EMI on Tuneglue-Audiomap. In January 2007 it was nominated for Best Website at the NME Awards.
CBS Acquisition and redesign (2007–2009)
At the end of April 2007, rumours of negotiations between CBS and Last.fm emerged, suggesting that CBS intended to purchase Last.fm for about £225 million ($449 million). In May 2007 it was announced that Channel 4 Radio was to broadcast a weekly show called Worldwide Chart reflecting what Last.fm users around the world were listening to. On 30 May 2007, it was announced that Last.fm had been bought by CBS for £140 million with Last.fm's current management team staying in place. In July 2008, the "new generation" Last.fm was launched featuring a completely new layout, color scheme, and several new features, as well as some old ones removed. This was, however, met with dissatisfaction amongst some users, who complained about the "ugly and non-user-friendly layout", bugs, and slowness. Still, a month after the redesign a CBS press release credited the redesign with generating a 20% growth in the site's traffic.
On 22 February 2009, Techcrunch claimed that "[the] RIAA asked social music service Last.fm for data about its users' listening habits to find people with unreleased tracks on their computers. And Last.fm, which is owned by CBS, allegedly handed the data over to the RIAA." This led to several public postings from both Last.fm and Techcrunch, with Last.fm denying passing any personal data to RIAA. The request was purportedly prompted by the leak of U2's then-unreleased album No Line On The Horizon, and its subsequent widespread distribution via peer-to-peer file sharing services such as BitTorrent.
Three months later, on 22 May 2009, Techcrunch claimed that it was CBS, the parent company of Last.fm, that handed over the data. Last.fm again denied that this was the case, saying that CBS could not have handed over the data without Last.fm's knowledge.
Changes to streaming and access on other platforms (2009–2011)
On 24 March 2009, Last.fm announced a change in free stream listening policy. According to the blog post "[...] In the United States, United Kingdom and Germany, nothing will change. In all other countries, listening to Last.fm Radio will soon require a subscription of €3.00 per month." The change went into effect on 22 April 2009. The announcement led to a wave of disappointment among users, resulting in users stopping submission of their data, refusing to change signatures/avatars and even deleting their accounts.
On 11 September 2009, CBS Radio announced that Last.fm programming would be available in four major market FM stations for the first time on their HD Radio multicasts. This includes KCBS-HD2 in Los Angeles; KITS-HD3 in San Francisco; WWFS-HD2 in New York City; and WXRT-HD3 in Chicago. The programming, which consisted mostly of music aggregated by Last.fm's user-generated weekly music charts as well as live performances and interviews from the Last.fm studios in New York City, debuted on 5 October.
On 12 April 2010, Last.fm announced that they would be removing the option to preview entire tracks, instead redirecting to sites such as the free Hype Machine and pay-to-listen MOG for this purpose. This provoked a large negative reaction from some in the Last.fm user community who perceived the removal as hindering the ability of lesser-known and unsigned artists to gain exposure for their music and general enjoyment of the site. A new "Play direct from artist" feature was introduced soon after, which allowed artists to select individual tracks for users to be able to stream in full.
The ability to listen to custom radio stations ("personal tag radio", "loved tracks radio") was withdrawn on 17 November 2010. This change provoked an angry response among users. Last.fm stated that the move was for licensing reasons. The change meant that a tag radio stream would include all music tagged as such, not just that tagged by each individual user, effectively widening the number of tracks that might be streamed under any one tag set.
Website and desktop application redesigns (2012–2013)
In March 2012, Last.fm was breached by hackers and more than 43 million user accounts were compromised. The full extent of the hack, and its connection to similar attacks against Tumblr, LinkedIn and MySpace in the same time frame, were not confirmed until August 2016. The passwords were stored as outdated, unsalted MD5 hashes. Last.fm made users aware of the attack in June 2012.
On 14 February 2012, Last.fm announced that a new beta desktop client had been launched for public testing. The new scrobbler was released for all users on 15 January 2013.
On 12 July 2012, Last.fm announced a new website redesign was also open to public beta and would rely on feedback from testing users. The site redesign went live for all users on 2 August 2012. While well received by technology websites, some of the site's users reacted negatively to the changes on the website's forum.
On 19 June 2012, Last.fm launched Last.fm Originals, a new website featuring exclusive performances and interviews from various musical artists.
On 13 December 2012, it was announced that Last.fm would discontinue radio service after January 2013 to subscribers in all countries except the United States, United Kingdom, Germany, Canada, Ireland, Australia, New Zealand and Brazil. Additionally, radio in the desktop client would require a subscription in the US, UK and Germany, although the website radio would remain free in those countries.
End of radio streaming and redesign (2014–present)
In January 2014, the website announced on-demand integration with Spotify and a new YouTube-powered radio player. Upon the introduction of the YouTube player, the standard radio service became a subscriber-only feature.
On 26 March 2014, Last.fm announced they would be discontinuing their streaming radio service on 28 April 2014. In a statement, the site said the decision was made in order to "focus on improving scrobbling and recommendations".
On 15 April 2015, Last.fm released a subscriber-exclusive beta of a new website redesign. Digital Spy described user reactions on the site's forums during the week of the redesign as "universally negative".
In 2016, Music Manager was discontinued and music uploaded to the site by musicians and record labels became inaccessible; after the Spotify integration, tracks could still be played and downloaded where the option was given, but following the change artists themselves were unable to access their songs in the Last.fm catalogue.
Funding and staff
Last.fm Ltd is funded from the sale of online advertising space and monthly user subscriptions.
Funding prior to acquisition
In 2004, the company received the first round of angel money, from Peter Gardner, an investment banker who was introduced to the founders as early as 2002. A second round was led by Stefan Glaenzer (joined by Joi Ito and Reid Hoffman), who bought into Michael Breidenbruecker's shares as well. In 2006 the company received the first round of venture capital funding from European investors Index Ventures, whose General Partners Neil Rimer and Danny Rimer also joined Last.fm's board of directors, consisting of Felix Miller, Martin Stiksel and Stefan Glaenzer (Chair).
Original founders Felix Miller, Martin Stiksel and Richard Jones left the company in summer 2009.
Features
User accounts
The free user account includes access to all the main features listed below. Registered Users are also able to send and receive private messages.
Profile
A Last.fm user can build a musical profile using any or all of several methods: by listening to their personal music collection on a music player application on a computer or an iPod with an Audioscrobbler plugin, or by listening to the Last.fm Internet radio service, either with the Last.fm client, or with the embedded player. All songs played are added to a log from which personal top artist/track bar charts and musical recommendations are calculated. This automatic track logging is called scrobbling.
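As a purely illustrative sketch of what a scrobble submission looks like over Last.fm's public web API (the endpoint and method name are taken from the public 2.0 API; the exact signing rule and the credentials below are assumptions and placeholders, not working values):

import hashlib
import time
import urllib.parse
import urllib.request

API_URL = "https://ws.audioscrobbler.com/2.0/"
API_KEY, API_SECRET, SESSION_KEY = "api_key", "secret", "session_key"  # placeholders

params = {
    "method": "track.scrobble",
    "artist": "Sum 41",
    "track": "Pieces",
    "timestamp": str(int(time.time())),
    "api_key": API_KEY,
    "sk": SESSION_KEY,
}

# Assumed signing rule: concatenate name/value pairs sorted by name, append the
# shared secret, and MD5-hash the result.
sig_base = "".join(k + params[k] for k in sorted(params)) + API_SECRET
params["api_sig"] = hashlib.md5(sig_base.encode("utf-8")).hexdigest()
params["format"] = "json"

request = urllib.request.Request(API_URL,
                                 data=urllib.parse.urlencode(params).encode())
print(urllib.request.urlopen(request).read())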
Last.fm automatically generates a profile page for every user which includes basic information such as their user name, avatar, date of registration and total number of tracks played. There is also a Shoutbox for public messages. Profile pages are visible to all, together with a list of top artists and tracks, and the 10 most recently played tracks (can be expanded). Each user's profile has a 'Taste-o-Meter' which gives a rating of how compatible the user's music taste is.
Recommendations
Last.fm features a personal recommendations page that is only visible to the user concerned and lists suggested new music and events, all tailored to the user's own preferences. Recommendations are calculated using a collaborative filtering algorithm so users can browse and hear previews of a list of artists not listed on their own profile but which appear on those of others with similar musical tastes.
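Last.fm's own recommendation algorithm is proprietary; as a purely illustrative sketch with made-up data, user-based collaborative filtering can be reduced to comparing play-count vectors and suggesting artists that the most similar listener plays:

from math import sqrt

# Toy play-count profiles (artist -> plays); entirely fictional data.
profiles = {
    "alice": {"Sum 41": 120, "Blink-182": 80, "Green Day": 60},
    "bob":   {"Sum 41": 90,  "Blink-182": 70, "The Offspring": 40},
    "carol": {"Miles Davis": 200, "John Coltrane": 150},
}

def cosine(a, b):
    """Cosine similarity between two sparse play-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in common)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(user):
    """Suggest artists played by the most similar listener but not by the user."""
    similarity, neighbour = max((cosine(profiles[user], profiles[o]), o)
                                for o in profiles if o != user)
    return [artist for artist in profiles[neighbour] if artist not in profiles[user]]

print(recommend("alice"))  # ['The Offspring']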
Artist pages
Once an artist has had a track or tracks "scrobbled" by at least one user, Last.fm automatically generates a main artist page. This page shows details of the total number of plays, the total number of listeners, the most popular weekly and overall tracks, the top weekly listeners, a list of similar artists, most popular tags and a shoutbox for messages. There are also links to events, additional album and individual track pages and similar artists radio. Official music videos and other videos imported from YouTube may also be viewed on the relevant artist and track pages.
Users may add relevant biographical details and other information to any artist's main page in the form of a wiki. Edits are regularly moderated to prevent vandalism. A photograph of the artist may also be added. If more than one is submitted, the most popular one is chosen by public vote. User submitted content is licensed for use under the Creative Commons Attribution Share-Alike License and GNU Free Documentation License.
Last.fm currently cannot disambiguate artists with the same name; a single artist profile is shared between valid artists with the same name. Also, Last.fm and its users currently do not differentiate between the composer and the artist of a piece of music, which leads to confusion in classical music genres.
Charts
One particular feature of Last.fm is the semi-automatic weekly generation and archiving of detailed personal music charts and statistics which are created as part of its profile building. Users have several different charts available, including Top Artists, Top Tracks, and Top Albums. Each of these charts is based on the actual number of people listening to the track, album or artist recorded either through an Audioscrobbler plugin or the Last.fm radio stream.
Additionally, charts are available for the top tracks by each artist in the Last.fm system as well as the top tracks for individual albums (when the tagging information of the audio file is available). Artist profiles also keep track of a short list of Top Fans, which is calculated by a formula meant to portray the importance of an artist in a fan's own profile, balancing out users who play hundreds of tracks overall versus those who play only a few.
As the information generated is largely compiled from the ID3 data from audio files "scrobbled" from users' own computers, and which may be incorrect or misspelled, there are many errors in the listings. Tracks with ambiguous punctuation are especially prone to separate listings, which can dilute the apparent popularity of a track. Artists or bands with the same name are not always differentiated. The system attempts to translate some different artist tags to a single artist profile, and has recently attempted to harmonise track names.
Global charts
Last.fm generates weekly "global" charts of the top 400 artists and tracks listened to by all Last.fm users.
The result is notably different from traditional commercial music charts provided by the UK Top 40, Billboard magazine, Soundscan and others, which are based on radio plays or sales. Last.fm charts are less volatile and a new album's release may be reflected in play data for many months or years after it drops out of commercial charts. For example, The Beatles have consistently been a top-five band at Last.fm, reflecting the continued popularity of the band's music irrespective of current album sales. Significant events, such as the release of a highly anticipated album or the death of an artist can have a large impact on the charts.
The Global Tag Chart shows the 100 most popular tags that have been used to describe artists, albums, and tracks. This is based on the total number of times the tag has been applied by Last.fm users since the tagging system was first introduced and does not necessarily reflect the number of users currently listening to any of the related "global tag radio" stations.
Radio stations
Last.fm previously offered customized virtual "radio stations" consisting of uninterrupted audio streams of individual tracks selected from the music files in the music library. This service was discontinued 28 April 2014.
Stations could be based on the user's personal profile, the user's "musical neighbours", or the user's "friends". Tags also had radio stations if enough music carried the same tag. Radio stations could also be created on the fly, and each artist page allowed selection of a "similar artists" or "artist fan" radio station. In May 2009, Last.fm introduced Visual Radio, an improved version of Last.fm radio. This brought features such as an artist slideshow and combo stations, which allowed listening to stations consisting of the common similar artists of up to either three artists or three tags.
Under the terms of the station's "radio" license, listeners could not select specific tracks (except as previews) or choose the order in which they were played, although any of the tracks played could be skipped or banned completely. The appropriate royalties were paid to the copyright holders of all streamed audio tracks according to the law in the UK. The radio stream was an MP3 stream encoded at 128 kbit/s, 44.1 kHz, which could be played using the in-page Flash player or the downloaded Last.fm client; other community-supported players were also available, as well as a proxy that allowed using a media player of choice.
On 24 March 2009, Last.fm announced that Last.fm Radio would require a subscription of €3.00 per month for users living outside the US, the UK, and Germany. This change was to take effect on 30 March, but was postponed until 22 April. The decision resulted in over 1,000 comments, most of them negative, on the Last.fm blog.
Streaming and radio services were discontinued by Last.fm on 28 April 2014, in order to "focus on its core product, the scrobbling experience". However, the website continues to generate recommendations based on a user's existing library.
Player
An "in-page" player is provided automatically for all listeners with HTML5-enabled browser or Adobe Flash installed on their computers. However, it is necessary to download and install the Last.fm client if a user also wishes information about played tracks from their own digital music collection to be included in their personal music profile.
Prior to August 2005, Last.fm generated an open stream that could be played in the user's music player of choice, with a browser-based player control panel. This proved difficult to support and has been officially discontinued. The Last.fm client is currently the only officially supported music player for playing customized Last.fm radio streams on desktop computers. The current version combines the functions of the music player with the plugin that transmits all track data to the Last.fm server, and effectively replaces the separate Last.fm Player and the standalone track submission plugins. It is also free software licensed under the GNU General Public License and available for Linux, Mac OS X and Microsoft Windows operating systems.
The player allows the user to enter the name of any artist or tag, which then gives a choice of a number of similar artist stations, or similar global tag stations. Alternatively, Recommendation radio or any of the user's personal radio stations may be played without the necessity to visit the website.
The player displays the name of the station and track currently playing, the song artist, title and track length as well as album details, the artist's photo and biographical details, album cover art when available, lists of similar artists and the most popular tags and top fans. There are several buttons, allowing the user to love, skip, or ban a song. The love button adds the song to the user's loved tracks list; the ban button ensures that the song will not be played again. Both features affect the user's profile. The skip button does not. Other buttons allow the user to tag or recommend the currently playing track. Other features offered by the application are: minor editing of the user's profile including removing recently played artists and songs from the loved, banned, or previously played track lists; lists of friends and neighbours, lists of tags and a list of previously played radio stations. Users can also open their full Last.fm profile page directly from the player.
The client also enables the user to install player plugins, which integrate with various standalone media players to allow the submission of tracks played in those programs.
In the latest version of the Last.fm Player application, the user can select to use an external player. When this is done, the Last.fm Player provides the user with a local URL, through which the Last.fm music stream is proxied. Users can then open the URL in their preferred media player.
A new version of the desktop client, which had been in beta since early 2012, was released on 15 January 2013. This version disabled the radio function for free users. To access that feature, a paid subscription is necessary.
Last.fm has also developed client software for mobile phones running iPhone OS, BlackBerry OS and Android. Last.fm has only released these apps in the United States, United Kingdom and Germany, stating for four years that it has been negotiating licenses to make streaming available in other countries.
Last.fm remained out of service for more than 22 hours on 10 June 2014, one of the longest outages the company had faced. The company, however, kept visitors informed through a status page.
Scrobbling
In addition to Last.fm automatically tracking music played via Last.fm's radio, users can also contribute (scrobble) listening data to their Last.fm profile from other streaming sites or by tracking music played locally on their own personal devices. Scrobbling is possible with music stored and played locally via software on devices such as PCs, mobile phones, tablets, and standalone (hardware) media players. Indeed, these were the only methods of scrobbling listening data both before and after the existence of the Last.fm radio service. Certain sites and media players have the ability to upload (scrobble) listening data built in; for others, users must download and install a plugin for their music player, which will automatically submit the artist and title of the song after either half the song or the first four minutes have played, whichever comes first. When the track is shorter than 30 seconds (31 seconds in iTunes) or the track lacks metadata (ID3, CDDB, etc.), the track is not submitted. To accommodate dial-up users or those listening to music while offline, caching of the data and submitting it in bulk is also possible.
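The submission rules described above can be summarised in a short sketch (an interpretation for illustration, not the official client code):

# Sketch of the scrobble-eligibility rules: submit once half the track or four
# minutes have elapsed, whichever comes first, provided the track is longer than
# 30 seconds and has metadata; plays can be cached while offline.
from dataclasses import dataclass

@dataclass
class Play:
    artist: str
    title: str
    duration_s: int   # track length in seconds
    listened_s: int   # how long the user actually listened

def is_scrobblable(p):
    if not p.artist or not p.title:          # missing metadata -> never submitted
        return False
    if p.duration_s <= 30:                   # too short -> never submitted
        return False
    threshold = min(p.duration_s / 2, 240)   # half the track or 4 minutes
    return p.listened_s >= threshold

offline_cache = []

def handle_play(p, online):
    if not is_scrobblable(p):
        return
    if online:
        print(f"scrobbling {p.artist} - {p.title}")
    else:
        offline_cache.append(p)              # submit in bulk when back online

handle_play(Play("Boards of Canada", "Roygbiv", duration_s=150, listened_s=80), online=False)
print(len(offline_cache))  # 1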
List of supported media players and streaming sites
The following services support sending service-specific recently played track feeds:
Rhobbler for Rhapsody (beta)
Other third party applications
Supported applications
Build Last.fm
In March 2008, the website added a section titled "Build" where third-party applications can be submitted for review and then posted to the page.
SXSW Band-Aid
Last.fm partnered with the SXSW festival by creating an application, embedded in the corresponding group page, that filters the artists performing at the festival by a user's listening statistics and then uses Last.fm's recommendation service to suggest other performing artists the user has not listened to.
Other applications
QuietScrob is a background scrobbler for iPhone, iPod Touch and iPad
iScrob is a full realtime backgrounding Scrobbler for iPhone and iPod touch
iScrobble is a Last.fm scrobbler with full realtime background scrobbling, as well as non-realtime scrobbling for when the app is not running; it is available for the iPhone, iPod Touch, and iPad, with a Mac version announced
Pocket Scrobbler for Windows Mobile enables scrobbling from supported media players as well as streaming radio from Last.fm
LastFmLib.net (LGPL) for using the Last.fm web services in VB.Net/C#
Last.fm Java bindings (BSD) for using the Last.fm web services in Java
Last.fm recent tracks widget for Mac OS X displays a user's most recently played tracks.
Last.fm widget for Opera also displays a user's most recently played tracks, but works on all platforms Opera runs on.
Last.fm dashboard widget for Mac OS X displays a user's last messages in his/her shoutbox.
last.tweet widget for Mac OS X displays the cover art of the recently played track, with Twitter integration.
FoxyTunes Firefox extension places Last.fm player controls and current song information on the browser status bar.
4u2Stream can play Last.fm content on UPnP equipped players.
ExitAhead finds music on eBay matching a Last.fm profile.
Last-Stats shows a user's stats and creates dynamic profile/chart images based on a user's Last.fm profile.
The Hype Machine can scrobble songs the user is listening to on the Hype Machine web-site.
SongStory is an App for the iPhone, which shows useful information about the currently playing track.
Tastebuds.fm is a free music-oriented dating site that imports a user's profile from Last.fm.
Web Scrobbler is a browser plugin with wide community support that offers scrobbling for many web based player applications, including the ability to favorite tracks and edit scrobble information.
See also
List of Internet radio stations
List of online music databases
References
External links
– official site
Audioscrobbler development site
The Old Last.fm
Free Last.fm Music Streamer Plugin for Chrome
Tiny webcaster Last.fm causes major online splash, Rockbites, 22 July 2003
Last.fm: Music to Listeners' Ears, Wired, 7 July 2003
The Musical Myware, Audio presentation by CEO Felix Miller, IT Conversations, 7 March 2006
Guardian Unlimited Interview, Guardian Unlimited Interview with Last.fm co-founder, Martin Stiksel, 4 November 2006
The Celestial Jukebox, New Statesman on the story of Last.fm, June 2009
Last.fm music charts widget
Last.fm for PC alternative download
Last.fm Down-Time Monitoring Tool
2002 establishments in the United Kingdom
Android (operating system) software
BlackBerry software
ViacomCBS subsidiaries
Domain hacks
Online music stores of the United Kingdom
Internet properties established in 2002
Internet radio in the United Kingdom
IOS software
Online music and lyrics databases
Recommender systems
Social cataloging applications
Software that uses Qt
Windows Phone software
2007 mergers and acquisitions
|
60291082
|
https://en.wikipedia.org/wiki/Phosh
|
Phosh
|
Phosh (portmanteau of phone and shell) is a graphical user interface designed for mobile and touch-based devices. It is the default shell used on several mobile Linux operating systems including PureOS, Mobian, and Fedora Mobility. It is also an option on postmarketOS, Manjaro, and openSUSE.
Development
In August 2017, Purism, a personal computing hardware vendor and the developer of PureOS, announced its intention to release a privacy-centric smartphone running a mobile-optimized version of its Linux-based operating system. With this announcement, Purism released mockups of Phosh that resembled a modified GNOME Shell. This phone eventually became known as the Librem 5.
In April 2018, Purism started to publicly release documentation that referenced Phosh with updated mockups, and hired GNOME UI/UX developer Tobias Bernard to directly contribute to the shell.
Despite the Librem 5 phone being delayed, Phosh received its first official release in October 2018, which was primarily focused on developer usage. The first official hardware for direct use with Phosh shipped several months later, in December, when Purism delivered hardware devkits. In July 2020, the PinePhone was released with a version of postmarketOS that featured the Phosh interface.
Since August 2021, Phosh's source code repository has been hosted by the GNOME Foundation, but Purism maintains a separate repository for changes prior to being merged into the development branch.
Features
Overview
The Phosh Overview screen is the primary method to interact with the shell. It contains the App Grid, which displays user applications that can be launched from icons. The App Grid is split into two sections. The top section is reserved for frequently-used applications, and is known as "Favorites". The bottom section is reserved for all other installed applications.
In addition, a search function allows users to type search terms to find specific applications. The Overview screen also contains the Activities view, which visualizes the currently open applications and provides a way to dismiss them.
Lock Screen
When the device's display is toggled from off to on, Phosh displays a Lock Screen with the time and date along with several indicator icons that illustrate the device's status of cellular network service, Wi-Fi, Bluetooth, and battery percentage. Upon sliding up from the bottom of the screen, the Lock Screen requests a predefined passcode to unlock and continue to the Overview screen.
Related technologies
Phosh is based on the GTK widget toolkit and descends from GNOME Shell as a mobile-specific fork. Like GNOME Shell, Phosh relies upon certain GNOME components to provide a fully featured mobile interface. Primary examples of this are its use of the GNOME Session Manager for session management and the GNOME Settings Daemon for storing application and shell settings. Phosh also makes use of some freedesktop.org system components such as Polkit, UPower, iio-sensor-proxy, NetworkManager and ModemManager.
It is both open source and libre software. Closely related technologies used in conjunction with Phosh, and also significantly developed by Purism, are Phoc (a Wayland compositor), Squeekboard (an on-screen virtual keyboard), feedbackd (a haptic feedback daemon) and portions of libadwaita, whose adaptive windowing allows otherwise desktop-centric apps to act and feel like true mobile apps.
Version history
The table illustrates major releases, and is not an exhaustive list of releases.
See also
Plasma Mobile
PureOS
Librem 5
References
2017 software
Free desktop environments
GNOME
Graphical shells that use GTK
Graphical user interfaces
Mobile/desktop convergence
Mobile Linux
Software using the GPL license
|
22697171
|
https://en.wikipedia.org/wiki/Cantor%27s%20first%20set%20theory%20article
|
Cantor's first set theory article
|
Cantor's first set theory article contains Georg Cantor's first theorems of transfinite set theory, which studies infinite sets and their properties. One of these theorems is his "revolutionary discovery" that the set of all real numbers is uncountably, rather than countably, infinite. This theorem is proved using Cantor's first uncountability proof, which differs from the more familiar proof using his diagonal argument. The title of the article, "On a Property of the Collection of All Real Algebraic Numbers" ("Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen"), refers to its first theorem: the set of real algebraic numbers is countable. Cantor's article was published in 1874. In 1879, he modified his uncountability proof by using the topological notion of a set being dense in an interval.
Cantor's article also contains a proof of the existence of transcendental numbers. Both constructive and non-constructive proofs have been presented as "Cantor's proof." The popularity of presenting a non-constructive proof has led to a misconception that Cantor's arguments are non-constructive. Since the proof that Cantor published either constructs transcendental numbers or does not, an analysis of his article can determine whether or not this proof is constructive. Cantor's correspondence with Richard Dedekind shows the development of his ideas and reveals that he had a choice between two proofs: a non-constructive proof that uses the uncountability of the real numbers and a constructive proof that does not use uncountability.
Historians of mathematics have examined Cantor's article and the circumstances in which it was written. For example, they have discovered that Cantor was advised to leave out his uncountability theorem in the article he submitted—he added it during proofreading. They have traced this and other facts about the article to the influence of Karl Weierstrass and Leopold Kronecker. Historians have also studied Dedekind's contributions to the article, including his contributions to the theorem on the countability of the real algebraic numbers. In addition, they have recognized the role played by the uncountability theorem and the concept of countability in the development of set theory, measure theory, and the Lebesgue integral.
The article
Cantor's article is short, less than four and a half pages. It begins with a discussion of the real algebraic numbers and a statement of his first theorem: The set of real algebraic numbers can be put into one-to-one correspondence with the set of positive integers. Cantor restates this theorem in terms more familiar to mathematicians of his time: The set of real algebraic numbers can be written as an infinite sequence in which each number appears only once.
Cantor's second theorem works with a closed interval [a, b], which is the set of real numbers ≥ a and ≤ b. The theorem states: Given any sequence of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence. Hence, there are infinitely many such numbers.
Cantor observes that combining his two theorems yields a new proof of Liouville's theorem that every interval [a, b] contains infinitely many transcendental numbers.
Cantor then remarks that his second theorem is:
This remark contains Cantor's uncountability theorem, which only states that an interval [a, b] cannot be put into one-to-one correspondence with the set of positive integers. It does not state that this interval is an infinite set of larger cardinality than the set of positive integers. Cardinality is defined in Cantor's next article, which was published in 1878.
Cantor only states his uncountability theorem. He does not use it in any proofs.
The proofs
First theorem
To prove that the set of real algebraic numbers is countable, define the height of a polynomial of degree n with integer coefficients as: n − 1 + |a0| + |a1| + ... + |an|, where a0, a1, ..., an are the coefficients of the polynomial. Order the polynomials by their height, and order the real roots of polynomials of the same height by numeric order. Since there are only a finite number of roots of polynomials of a given height, these orderings put the real algebraic numbers into a sequence. Cantor went a step further and produced a sequence in which each real algebraic number appears just once. He did this by only using polynomials that are irreducible over the integers. The following table contains the beginning of Cantor's enumeration.
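The enumeration can also be illustrated with a short program. The sketch below lists real roots of integer polynomials grouped by Cantor's height; unlike Cantor's procedure, it does not restrict itself to irreducible polynomials, so duplicate roots are removed numerically instead.

# A sketch (not Cantor's exact enumeration): list real roots of integer
# polynomials grouped by Cantor's height N = (n - 1) + |a0| + ... + |an|.
import itertools
import numpy as np

def polynomials_of_height(height):
    """Yield integer coefficient tuples (leading coefficient first) of the given height."""
    for degree in range(1, height + 1):
        budget = height - (degree - 1)          # remaining |a0| + ... + |an|
        for coeffs in itertools.product(range(-budget, budget + 1), repeat=degree + 1):
            if coeffs[0] != 0 and sum(abs(c) for c in coeffs) == budget:
                yield coeffs

def real_algebraic_numbers_up_to(height):
    """Return real roots found height by height, deduplicated numerically."""
    seen, ordered = set(), []
    for h in range(1, height + 1):
        roots_at_h = []
        for coeffs in polynomials_of_height(h):
            for r in np.roots(coeffs):
                if abs(r.imag) < 1e-9:           # keep (numerically) real roots only
                    roots_at_h.append(round(r.real, 9))
        for r in sorted(set(roots_at_h)):
            if r not in seen:
                seen.add(r)
                ordered.append(r)
    return ordered

print(real_algebraic_numbers_up_to(3))
# [0.0, -1.0, 1.0, -2.0, -0.5, 0.5, 2.0]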
Second theorem
Only the first part of Cantor's second theorem needs to be proved. It states: Given any sequence of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence.
To find a number in [a, b] that is not contained in the given sequence, construct two sequences of real numbers as follows: Find the first two numbers of the given sequence that are in the open interval (a, b). Denote the smaller of these two numbers by a1 and the larger by b1. Similarly, find the first two numbers of the given sequence that are in (a1, b1). Denote the smaller by a2 and the larger by b2. Continuing this procedure generates a sequence of intervals (a1, b1), (a2, b2), (a3, b3), ... such that each interval in the sequence contains all succeeding intervals—that is, it generates a sequence of nested intervals. This implies that the sequence a1, a2, a3, ... is increasing and the sequence b1, b2, b3, ... is decreasing.
Either the number of intervals generated is finite or infinite. If finite, let (aL, bL) be the last interval. If infinite, take the limits a∞ = limn → ∞ an and b∞ = limn → ∞ bn. Since an < bn for all n, either a∞ = b∞ or a∞ < b∞. Thus, there are three cases to consider:
Case 1: There is a last interval (aL, bL). Since at most one xn can be in this interval, every y in this interval except xn (if it exists) is not contained in the given sequence.
Case 2: a∞ = b∞. Then a∞ is not contained in the given sequence since for all n: a∞ belongs to the interval (an, bn) but xn does not belong to (an, bn). In symbols: a∞ ∈ (an, bn) but xn ∉ (an, bn).
Proof that xn ∉ (an, bn) for all n (this lemma is used by cases 2 and 3): It is implied by the stronger lemma: for all n, (an, bn) excludes x1, ..., x2n. This is proved by induction. Basis step: since the endpoints of (a1, b1) are x1 and x2 and an open interval excludes its endpoints, (a1, b1) excludes x1, x2. Inductive step: assume that (an, bn) excludes x1, ..., x2n. Since (an+1, bn+1) is a subset of (an, bn) and its endpoints are x2n+1 and x2n+2, (an+1, bn+1) excludes x1, ..., x2n and x2n+1, x2n+2. Hence, for all n, (an, bn) excludes x1, ..., x2n. Therefore, for all n, xn ∉ (an, bn).
Case 3: a∞ < b∞. Then every y in [a∞, b∞] is not contained in the given sequence since for all n: y belongs to (an, bn) but xn does not.
The proof is complete since, in all cases, at least one real number in [a, b] has been found that is not contained in the given sequence.
Cantor's proofs are constructive and have been used to write a computer program that generates the digits of a transcendental number. This program applies Cantor's construction to a sequence containing all the real algebraic numbers between 0 and 1. The article that discusses this program gives some of its output, which shows how the construction generates a transcendental.
Example of Cantor's construction
An example illustrates how Cantor's construction works. Consider the sequence: 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5, ... This sequence is obtained by ordering the rational numbers in (0, 1) by increasing denominators, ordering those with the same denominator by increasing numerators, and omitting reducible fractions. The table below shows the first five steps of the construction. The table's first column contains the intervals (an, bn). The second column lists the terms visited during the search for the first two terms in (an, bn). These two terms are in red.
Since the sequence contains all the rational numbers in (0, 1), the construction generates an irrational number, which turns out to be √2 − 1.
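The construction can also be carried out by a short program. The sketch below applies the nested-interval procedure of the second theorem to the sequence of rationals from this example, using exact arithmetic; the endpoints close in on √2 − 1, as noted above.

# Sketch of the nested-interval construction applied to the rationals in (0, 1),
# ordered as in the example above; exact arithmetic via Fraction.
from fractions import Fraction
from math import gcd

def rationals_in_unit_interval():
    """Yield 1/2, 1/3, 2/3, 1/4, 3/4, ... (reducible fractions omitted)."""
    q = 2
    while True:
        for p in range(1, q):
            if gcd(p, q) == 1:
                yield Fraction(p, q)
        q += 1

def cantor_interval(num_steps):
    """Run num_steps rounds of the construction starting from the interval (0, 1)."""
    a, b = Fraction(0), Fraction(1)
    seq = rationals_in_unit_interval()
    for _ in range(num_steps):
        inside = []
        while len(inside) < 2:               # first two sequence terms inside (a, b)
            x = next(seq)
            if a < x < b:
                inside.append(x)
        a, b = min(inside), max(inside)      # endpoints of the next nested interval
    return a, b

a, b = cantor_interval(4)
print(float(a), float(b))   # both close to sqrt(2) - 1 ≈ 0.41421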
Cantor's 1879 uncountability proof
Everywhere dense
In 1879, Cantor published a new uncountability proof that modifies his 1874 proof. He first defines the topological notion of a point set P being "everywhere dense in an interval":
If P lies partially or completely in the interval [α, β], then the remarkable case can happen that every interval [γ, δ] contained in [α, β], no matter how small, contains points of P. In such a case, we will say that P is everywhere dense in the interval [α, β].
In this discussion of Cantor's proof: a, b, c, d are used instead of α, β, γ, δ. Also, Cantor only uses his interval notation if the first endpoint is less than the second. For this discussion, this means that (a, b) implies a < b.
Since the discussion of Cantor's 1874 proof was simplified by using open intervals rather than closed intervals, the same simplification is used here. This requires an equivalent definition of everywhere dense: A set P is everywhere dense in the interval [a, b] if and only if every open subinterval (c, d) of [a, b] contains at least one point of P.
Cantor did not specify how many points of P an open subinterval (c, d) must contain. He did not need to specify this because the assumption that every open subinterval contains at least one point of P implies that every open subinterval contains infinitely many points of P.
Cantor's 1879 proof
Cantor modified his 1874 proof with a new proof of its second theorem: Given any sequence P of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in P. Cantor's new proof has only two cases. First, it handles the case of P not being dense in the interval, then it deals with the more difficult case of P being dense in the interval. This division into cases not only indicates which sequences are more difficult to handle, but it also reveals the important role denseness plays in the proof.
In the first case, P is not dense in [a, b]. By definition, P is dense in [a, b] if and only if for all subintervals (c, d) of [a, b], there is an x ∈ P such that x ∈ (c, d). Taking the negation of each side of the "if and only if" produces: P is not dense in [a, b] if and only if there exists a subinterval (c, d) of [a, b] such that for all x ∈ P: x ∉ (c, d). Therefore, every number in (c, d) is not contained in the sequence P. This case handles case 1 and case 3 of Cantor's 1874 proof.
In the second case, which handles case 2 of Cantor's 1874 proof, P is dense in [a, b]. The denseness of sequence P is used to recursively define a sequence of nested intervals that excludes all the numbers in P and whose intersection contains a single real number in [a, b]. The sequence of intervals starts with (a, b). Given an interval in the sequence, the next interval is obtained by finding the two numbers with the least indices that belong to P and to the current interval. These two numbers are the endpoints of the next open interval. Since an open interval excludes its endpoints, every nested interval eliminates two numbers from the front of sequence P, which implies that the intersection of the nested intervals excludes all the numbers in P. Details of this proof and a proof that this intersection contains a single real number in [a, b] are given below.
The development of Cantor's ideas
The development leading to Cantor's 1874 article appears in the correspondence between Cantor and Richard Dedekind. On November 29, 1873, Cantor asked Dedekind whether the collection of positive integers and the collection of positive real numbers "can be corresponded so that each individual of one collection corresponds to one and only one individual of the other?" Cantor added that collections having such a correspondence include the collection of positive rational numbers, and collections of the form (an1, n2, . . . , nν) where n1, n2, . . . , nν, and ν are positive integers.
Dedekind replied that he was unable to answer Cantor's question, and said that it "did not deserve too much effort because it has no particular practical interest." Dedekind also sent Cantor a proof that the set of algebraic numbers is countable.
On December 2, Cantor responded that his question does have interest: "It would be nice if it could be answered; for example, provided that it could be answered no, one would have a new proof of Liouville's theorem that there are transcendental numbers."
On December 7, Cantor sent Dedekind a proof by contradiction that the set of real numbers is uncountable. Cantor starts by assuming that the real numbers in can be written as a sequence. Then, he applies a construction to this sequence to produce a number in that is not in the sequence, thus contradicting his assumption. Together, the letters of December 2 and 7 provide a non-constructive proof of the existence of transcendental numbers. Also, the proof in Cantor's December 7th letter shows some of the reasoning that led to his discovery that the real numbers form an uncountable set.
Dedekind received Cantor's proof on December 8. On that same day, Dedekind simplified the proof and mailed his proof to Cantor. Cantor used Dedekind's proof in his article. The letter containing Cantor's December 7th proof was not published until 1937.
On December 9, Cantor announced the theorem that allowed him to construct transcendental numbers as well as prove the uncountability of the set of real numbers:
This is the second theorem in Cantor's article. It comes from realizing that his construction can be applied to any sequence, not just to sequences that supposedly enumerate the real numbers. So Cantor had a choice between two proofs that demonstrate the existence of transcendental numbers: one proof is constructive, but the other is not. These two proofs can be compared by starting with a sequence consisting of all the real algebraic numbers.
The constructive proof applies Cantor's construction to this sequence and the interval [a, b] to produce a transcendental number in this interval.
The non-constructive proof uses two proofs by contradiction:
The proof by contradiction used to prove the uncountability theorem (see Proof of Cantor's uncountability theorem).
The proof by contradiction used to prove the existence of transcendental numbers from the countability of the real algebraic numbers and the uncountability of real numbers. Cantor's December 2nd letter mentions this existence proof but does not contain it. Here is a proof: Assume that there are no transcendental numbers in [a, b]. Then all the numbers in [a, b] are algebraic. This implies that they form a subsequence of the sequence of all real algebraic numbers, which contradicts Cantor's uncountability theorem. Thus, the assumption that there are no transcendental numbers in [a, b] is false. Therefore, there is a transcendental number in [a, b].
Cantor chose to publish the constructive proof, which not only produces a transcendental number but is also shorter and avoids two proofs by contradiction. The non-constructive proof from Cantor's correspondence is simpler than the one above because it works with all the real numbers rather than the interval [a, b]. This eliminates the subsequence step and all occurrences of [a, b] in the second proof by contradiction.
A misconception about Cantor's work
Akihiro Kanamori, who specializes in set theory, stated that "Accounts of Cantor's work have mostly reversed the order for deducing the existence of transcendental numbers, establishing first the uncountability of the reals and only then drawing the existence conclusion from the countability of the algebraic numbers. In textbooks the inversion may be inevitable, but this has promoted the misconception that Cantor's arguments are non-constructive."
Cantor's published proof and the reverse-order proof both use the theorem: Given a sequence of reals, a real can be found that is not in the sequence. By applying this theorem to the sequence of real algebraic numbers, Cantor produced a transcendental number. He then proved that the reals are uncountable: Assume that there is a sequence containing all the reals. Applying the theorem to this sequence produces a real not in the sequence, contradicting the assumption that the sequence contains all the reals. Hence, the reals are uncountable. The reverse-order proof starts by first proving the reals are uncountable. It then proves that transcendental numbers exist: If there were no transcendental numbers, all the reals would be algebraic and hence countable, which contradicts what was just proved. This contradiction proves that transcendental numbers exist without constructing any.
The correspondence containing Cantor's non-constructive reasoning was published in 1937. By then, other mathematicians had rediscovered his non-constructive, reverse-order proof. As early as 1921, this proof was called "Cantor's proof" and criticized for not producing any transcendental numbers. In that year, Oskar Perron gave the reverse-order proof and then stated: "… Cantor's proof for the existence of transcendental numbers has, along with its simplicity and elegance, the great disadvantage that it is only an existence proof; it does not enable us to actually specify even a single transcendental number."
As early as 1930, mathematicians attempted to correct this misconception of Cantor's work. In that year, the set theorist Abraham Fraenkel stated that Cantor's method is "… a method that incidentally, contrary to a widespread interpretation, is fundamentally constructive and not merely existential." In 1972, Irving Kaplansky wrote: "It is often said that Cantor's proof is not 'constructive,' and so does not yield a tangible transcendental number. This remark is not justified. If we set up a definite listing of all algebraic numbers … and then apply the diagonal procedure …, we get a perfectly definite transcendental number (it could be computed to any number of decimal places)." Cantor's proof is not only constructive, it is also simpler than Perron's proof, which requires the detour of first proving that the set of all reals is uncountable.
Cantor's diagonal argument has often replaced his 1874 construction in expositions of his proof. The diagonal argument is constructive and produces a more efficient computer program than his 1874 construction. Using it, a computer program has been written that computes the digits of a transcendental number in polynomial time. The program that uses Cantor's 1874 construction requires at least sub-exponential time.
The presentation of the non-constructive proof without mentioning Cantor's constructive proof appears in some books that were quite successful as measured by the length of time new editions or reprints appeared—for example: Oskar Perron's Irrationalzahlen (1921; 1960, 4th edition), Eric Temple Bell's Men of Mathematics (1937; still being reprinted), Godfrey Hardy and E. M. Wright's An Introduction to the Theory of Numbers (1938; 2008 6th edition), Garrett Birkhoff and Saunders Mac Lane's A Survey of Modern Algebra (1941; 1997 5th edition), and Michael Spivak's Calculus (1967; 2008 4th edition). Since 2014, at least two books have appeared stating that Cantor's proof is constructive, and at least four have appeared stating that his proof does not construct any (or a single) transcendental.
Asserting that Cantor gave a non-constructive argument without mentioning the constructive proof he published can lead to erroneous statements about the history of mathematics. In A Survey of Modern Algebra, Birkhoff and Mac Lane state: "Cantor's argument for this result [Not every real number is algebraic] was at first rejected by many mathematicians, since it did not exhibit any specific transcendental number." The proof that Cantor published produces transcendental numbers, and there appears to be no evidence that his argument was rejected. Even Leopold Kronecker, who had strict views on what is acceptable in mathematics and who could have delayed publication of Cantor's article, did not delay it. In fact, applying Cantor's construction to the sequence of real algebraic numbers produces a limiting process that Kronecker accepted—namely, it determines a number to any required degree of accuracy.
The influence of Weierstrass and Kronecker on Cantor's article
Historians of mathematics have discovered the following facts about Cantor's article "On a Property of the Collection of All Real Algebraic Numbers":
Cantor's uncountability theorem was left out of the article he submitted. He added it during proofreading.
The article's title refers to the set of real algebraic numbers. The main topic in Cantor's correspondence was the set of real numbers.
The proof of Cantor's second theorem came from Dedekind. However, it omits Dedekind's explanation of why the limits a∞ and b∞ exist.
Cantor restricted his first theorem to the set of real algebraic numbers. The proof he was using demonstrates the countability of the set of all algebraic numbers.
To explain these facts, historians have pointed to the influence of Cantor's former professors, Karl Weierstrass and Leopold Kronecker. Cantor discussed his results with Weierstrass on December 23, 1873. Weierstrass was first amazed by the concept of countability, but then found the countability of the set of real algebraic numbers useful. Cantor did not want to publish yet, but Weierstrass felt that he must publish at least his results concerning the algebraic numbers.
From his correspondence, it appears that Cantor only discussed his article with Weierstrass. However, Cantor told Dedekind: "The restriction which I have imposed on the published version of my investigations is caused in part by local circumstances …" Cantor biographer Joseph Dauben believes that "local circumstances" refers to Kronecker who, as a member of the editorial board of Crelle's Journal, had delayed publication of an 1870 article by Eduard Heine, one of Cantor's colleagues. Cantor would submit his article to Crelle's Journal.
Weierstrass advised Cantor to leave his uncountability theorem out of the article he submitted, but Weierstrass also told Cantor that he could add it as a marginal note during proofreading, which he did. It appears in a
remark at the end of the article's introduction. The opinions of Kronecker and Weierstrass both played a role here. Kronecker did not accept infinite sets, and it seems that Weierstrass did not accept that two infinite sets could be so different, with one being countable and the other not. Weierstrass changed his opinion later. Without the uncountability theorem, the article needed a title that did not refer to this theorem. Cantor chose "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen" ("On a Property of the Collection of All Real Algebraic Numbers"), which refers to the countability of the set of real algebraic numbers, the result that Weierstrass found useful.
Kronecker's influence appears in the proof of Cantor's second theorem. Cantor used Dedekind's version of the proof except he left out why the limits a∞ = limn → ∞ an and
b∞ = limn → ∞ bn exist. Dedekind had used his "principle of continuity" to prove they exist. This principle (which is equivalent to the least upper bound property of the real numbers) comes from Dedekind's construction of the real numbers, a construction Kronecker did not accept.
Cantor restricted his first theorem to the set of real algebraic numbers even though Dedekind had sent him a proof that handled all algebraic numbers. Cantor did this for expository reasons and because of "local circumstances." This restriction simplifies the article because the second theorem works with real sequences. Hence, the construction in the second theorem can be applied directly to the enumeration of the real algebraic numbers to produce "an effective procedure for the calculation of transcendental numbers." This procedure would be acceptable to Weierstrass.
Dedekind's contributions to Cantor's article
Since 1856, Dedekind had developed theories involving infinitely many infinite sets—for example: ideals, which he used in algebraic number theory, and Dedekind cuts, which he used to construct the real numbers. This work enabled him to understand and contribute to Cantor's work.
Dedekind's first contribution concerns the theorem that the set of real algebraic numbers is countable. Cantor is usually given credit for this theorem, but the mathematical historian José Ferreirós calls it "Dedekind's theorem." Their correspondence reveals what each mathematician contributed to the theorem.
In his letter introducing the concept of countability, Cantor stated without proof that the set of positive rational numbers is countable, as are sets of the form (an1, n2, ..., nν) where n1, n2, ..., nν, and ν are positive integers. Cantor's second result uses an indexed family of numbers: a set of the form (an1, n2, ..., nν) is the range of a function from the ν indices to the set of real numbers. His second result implies his first: let ν = 2 and an1, n2 = n1/n2. The function can be quite general—for example, a function of five indices an1, n2, n3, n4, n5.
Dedekind replied with a proof of the theorem that the set of all algebraic numbers is countable. In his reply to Dedekind, Cantor did not claim to have proved Dedekind's result. He did indicate how he proved his theorem about indexed families of numbers: "Your proof that (n) [the set of positive integers] can be correlated one-to-one with the field of all algebraic numbers is approximately the same as the way I prove my contention in the last letter. I take n1² + n2² + ··· + nν² = and order the elements accordingly." However, Cantor's ordering is weaker than Dedekind's and cannot be extended to ν-tuples of integers that include zeros.
Dedekind's second contribution is his proof of Cantor's second theorem. Dedekind sent this proof in reply to Cantor's letter that contained the uncountability theorem, which Cantor proved using infinitely many sequences. Cantor next wrote that he had found a simpler proof that did not use infinitely many sequences. So Cantor had a choice of proofs and chose to publish Dedekind's.
Cantor thanked Dedekind privately for his help: "… your comments (which I value highly) and your manner of putting some of the points were of great assistance to me." However, he did not mention Dedekind's help in his article. In previous articles, he had acknowledged help received from Kronecker, Weierstrass, Heine, and Hermann Schwarz. Cantor's failure to mention Dedekind's contributions damaged his relationship with Dedekind. Dedekind stopped replying to his letters and did not resume the correspondence until October 1876.
The legacy of Cantor's article
Cantor's article introduced the uncountability theorem and the concept of countability. Both would lead to significant developments in mathematics. The uncountability theorem demonstrated that one-to-one correspondences can be used to analyze infinite sets. In 1878, Cantor used them to define and compare cardinalities. He also constructed one-to-one correspondences to prove that the n-dimensional spaces Rn (where R is the set of real numbers) and the set of irrational numbers have the same cardinality as R.
In 1883, Cantor extended the positive integers with his infinite ordinals. This extension was necessary for his work on the Cantor–Bendixson theorem. Cantor discovered other uses for the ordinals—for example, he used sets of ordinals to produce an infinity of sets having different infinite cardinalities. His work on infinite sets together with Dedekind's set-theoretical work created set theory.
The concept of countability led to countable operations and objects that are used in various areas of mathematics. For example, in 1878, Cantor introduced countable unions of sets. In the 1890s, Émile Borel used countable unions in his theory of measure, and René Baire used countable ordinals to define his classes of functions. Building on the work of Borel and Baire, Henri Lebesgue created his theories of measure and integration, which were published from 1899 to 1901.
Countable models are used in set theory. In 1922, Thoralf Skolem proved that if conventional axioms of set theory are consistent, then they have a countable model. Since this model is countable, its set of real numbers is countable. This consequence is called Skolem's paradox, and Skolem explained why it does not contradict Cantor's uncountability theorem: although there is a one-to-one correspondence between this set and the set of positive integers, no such one-to-one correspondence is a member of the model. Thus the model considers its set of real numbers to be uncountable, or more precisely, the first-order sentence that says the set of real numbers is uncountable is true within the model. In 1963, Paul Cohen used countable models to prove his independence theorems.
See also
Cantor's theorem
Notes
Note on Cantor's 1879 proof
References
Bibliography
History of mathematics
Set theory
Real analysis
Georg Cantor
|
21263381
|
https://en.wikipedia.org/wiki/SWAT%20and%20WADS%20conferences
|
SWAT and WADS conferences
|
WADS, the Algorithms and Data Structures Symposium, is an international academic conference in the field of computer science, focusing on algorithms and data structures. WADS is held every second year, usually in Canada and always in North America. It is held in alternation with its sister conference, the Scandinavian Symposium and Workshops on Algorithm Theory (SWAT), which is usually held in Scandinavia and always in Northern Europe. Historically, the proceedings of both conferences were published by Springer Verlag through their Lecture Notes in Computer Science series. Springer continues to publish WADS proceedings, but since 2016 SWAT proceedings have been published by Dagstuhl through their Leibniz International Proceedings in Informatics.
History
The first SWAT took place in 1988, in Halmstad, Sweden. The first WADS was organised one year later, in 1989, in Ottawa, Ontario, Canada. Until 2007, WADS was known as the Workshop on Algorithms and Data Structures, and until 2008, SWAT was known as the Scandinavian Workshop on Algorithm Theory.
See also
The list of computer science conferences contains other academic conferences in computer science.
Notes
References
. Also available as a Princeton University technical report TR-521-96. Section 13.2 mentions the following conferences (in this order) as examples of "major algorithms conferences" with "a large amount of geometry": SODA, ISAAC, ESA, WADS, SWAT.
. Section 7.3.2 mentions the following conferences (in this order) as examples of conferences that publish articles on pattern matching (in addition to more narrow conferences CPM, COCOON, RECOMB, SPIRE, ISMB): DCC, ESA, FOCS, FSTTCS, ICALP, ISAAC, MFCS, SODA, STACS, STOC, SWAT, WAE, WADS.
The 2007 Australian Ranking of ICT Conferences. Conferences on tier A ("... would add to the author's respect...") include SWAT and WADS.
External links
Bibliographic information about SWAT at DBLP
Bibliographic information about WADS at DBLP
Theoretical computer science conferences
|
29503316
|
https://en.wikipedia.org/wiki/Centrifugal%20micro-fluidic%20biochip
|
Centrifugal micro-fluidic biochip
|
The centrifugal micro-fluidic biochip or centrifugal micro-fluidic biodisk is a type of lab-on-a-chip technology, also known as lab-on-a-disc, that can be used to integrate processes such as separating, mixing, reacting and detecting nano-scale molecules on a single platform, such as a compact disc or DVD. This type of micro-fluidic biochip is based upon the principles of microfluidics: it takes advantage of non-inertial pumping for lab-on-a-chip devices, using non-inertial valves and switches under centrifugal force and the Coriolis effect to distribute fluids about the disks in a highly parallel order.
This biodisk integrates multiple technologies from different areas. The designer must be familiar with the biological testing process before designing the detailed micro-structures in the compact disc. Basic components such as valves, mixing units, and separating units are all used to complete the full testing process. The most basic principles applied in such micro-fluidic structures are centrifugal force, the Coriolis effect, and surface tension. Micromachining techniques, including patterning, photolithography, and etching, are used once the design is verified. Once the testing process is successful on the biodisk, the more complex detection step begins. Many detection methods have been proposed by scientists in this area; the most popular is the immunoassay, which is widely used in biological testing. The final step is reading data from the biodisk by means of a CD drive, with either software or hardware modified to achieve this function. A popular approach is to read data from the biodisk using a common CD drive with custom software, which has the advantage of low cost.
Once the centrifugal micro-fluidic biochip is developed well enough to be manufactured on a large scale, it will have a broad impact on industry as well as medical care, especially in developing countries where high-precision equipment is not available. People in developed countries who wish to perform such routine home-care tests can also benefit from this new technology.
History
The centrifugal microfluidic platform, including the chip and the device, has been a focus of academic and industrial research efforts for almost 40 years. Primarily targeting biomedical applications, a range of assays have been adapted to the system. The platform has found success as a research and clinical tool and has recently been further commercialised. This micro-fluidic lab-on-a-chip technology has experienced a breathtaking surge over the last 10–15 years, and new developments in centrifugal microfluidic technologies have the potential to establish widespread utilization of the platform. Different liquid-handling platforms have been developed to implement unit operations such as sample take-up, sample preconditioning, reagent supply, metering, aliquoting, valving, routing, mixing, incubation, washing, and analytical or preparative separations. The integration of sample preparation, incubation and analysis on a self-contained disc, in a device that controls the spinning automatically, enables sample-to-answer diagnosis on point-of-care biomedical platforms.
Dr. Marc Madou at UC Irvine is one of the leaders in centrifugal micro-fluidic biochip research. He has carried out several research projects in this area with considerable success, including pneumatic pumping in centrifugal microfluidic platforms, integration of 3D carbon-electrode dielectrophoresis, and serial siphon valving. His group members work on projects including cell lysis, PCR cards, DNA hybridization, anthrax diagnostics and respiratory virus detection (see external links). Dr. Hua-Zhong Yu at Simon Fraser University has also made notable progress in this area, proposing a new digitized molecular diagnostics readout method and a new DNA detection method on plastic CDs (see external links). Dr. Gang Logan Liu at UIUC is currently also focusing on this area (see external links).
Structure design
The structural design is based on the principles of microfluidics, and typical components are used in the platform. Many structures for centrifugal microfluidic biochips have been developed, with more yet to come. Madou's group invented the valve-chamber structure in 2004. In recent years, Saki Kondo introduced a vertical liquid transportation structure, which extended the design into a three-dimensional concept. Madou's group also invented a serial siphon valving structure, which makes flow control much easier, and Hong Chen created a spiral microchannel that allows parallel testing with more steps.
Principle
The operating principle of the centrifugal micro-fluidic biochip involves the basic forces acting on a particle as well as the principles of flow control.
For a particle in the flow, the basic forces are the centrifugal force, the Coriolis force, the Euler force and the viscous force.
The centrifugal force acts as the pump for the fluid flow: it drives the fluid from the inner radius of the CD toward the outer radius. The magnitude of the centrifugal force is determined by the radial position of the particle and the rotational speed. The centrifugal force density is: f = ρω²r,
where ρ is the mass density of the liquid, ω the angular frequency and r the (radial) distance between the particle and the center of the disk.
The Coriolis force density is: f = −2ρ(ω × u),
where u is the flow velocity.
The Coriolis force generates when the liquid has a velocity component along the radial direction. This force is generally smaller than the centrifugal force when the rotating speed is not high enough. When it comes to a high angular frequency, the Coriolis force makes a difference to the flow of liquid, which is often used to separate fluid flow in the separation unit.
Another basic force is the Euler force, which arises from the angular acceleration of the disk; for example, when the CD rotates at a constant speed, the Euler force vanishes. The Euler force density is: f = −ρ(dω/dt) × r.
As for a particle in the fluidic flow, the viscous force is:
v is the viscosity of the liquid.
As for the fluid flow as a whole, surface tension plays an important role in flow control. When the flow encounters a change in cross section, the surface tension can balance the centrifugal force and as a result block the flow of liquid. A higher rotation speed is then necessary for the liquid to enter the next chamber. In this way, thanks to surface tension, the flow process is divided into several steps, which makes flow control simpler to realize.
Typical component
There are various typical units in a centrifugal microfluidic structure, including valves, volume metering, mixing and flow switching. These types of units can make up structures that can be used in a variety of ways.
Valves
The principle of valves is the balance between centrifugal force and surface tension. When the centrifugal force is smaller than the surface tension, the liquid flow will be held in the original chamber; when the centrifugal force overbalances the surface tension due to a higher rotating speed, the liquid flow will break the valve and flow into the next chamber. This can be used to control the flow process simply by controlling the rotating speed of the disk.
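A first-order model commonly used in the centrifugal microfluidics literature balances the centrifugal pressure ρω²r̄Δr of the liquid plug against the capillary pressure 4σ|cos θ|/dh of the valve. The sketch below estimates the burst rotation speed from this balance; all numerical values are illustrative assumptions, not design data.

# First-order estimate of a capillary valve's burst speed (illustrative numbers,
# not a design rule): the valve opens when the centrifugal pressure
# rho * omega^2 * r_mean * delta_r exceeds the capillary pressure 4*sigma*|cos(theta)|/d_h.
import math

rho = 1000.0        # liquid density, kg/m^3 (water)
sigma = 0.072       # surface tension, N/m (water-air)
theta_deg = 120.0   # contact angle of a hydrophobic channel wall, degrees (assumed)
d_h = 100e-6        # hydraulic diameter of the channel, m (assumed)
r_mean = 0.03       # mean radial position of the liquid plug, m (assumed)
delta_r = 0.005     # radial length of the liquid plug, m (assumed)

p_capillary = 4 * sigma * abs(math.cos(math.radians(theta_deg))) / d_h
omega_burst = math.sqrt(p_capillary / (rho * r_mean * delta_r))   # rad/s
rpm_burst = omega_burst * 60 / (2 * math.pi)

print(f"capillary pressure ≈ {p_capillary:.0f} Pa, burst speed ≈ {rpm_burst:.0f} rpm")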
The most commonly used valves include the hydrophilic valve, the hydrophobic valve, the syphon valve and sacrificial valve.
For hydrophilic and hydrophobic valves, the generation of surface tension is almost the same: it is the sudden change in the cross section of the channel that generates the surface tension. The liquid flow is held in a hydrophilic channel when the cross section suddenly becomes large, while in a hydrophobic channel the flow is held when the cross section suddenly shrinks.
The siphon valve is simply based upon the siphon phenomenon. When the cross section of the channel is small enough, the liquid in the chamber is able to flow along the channel due to surface tension. Unlike hydrophilic or hydrophobic valves, surface tension acts as a pump in this model while centrifugal force acts as resistance instead.
The sacrificial valve is a new technique that is controlled by laser irradiation. These sacrificial valves are composed of iron oxide nanoparticles dispersed in paraffin wax. Upon excitation with a laser diode, iron oxide nanoparticles within the wax act as integrated nanoheaters, causing the wax to quickly melt at relatively low intensities of laser diode excitation. The valve operation is independent of the spin speed or the location of the valves and therefore allows for more complex biological assays integrated on the disk.
Volume metering
Volume metering is a typical function of centrifugal fluidics used to obtain a defined amount of liquid reagent. It can be achieved by simply connecting an overflow channel to the chamber. Once the liquid reaches the level of the overflow channel, the rest of the liquid is routed into the waste chamber connected to the overflow channel.
Mixing
Mixing is an important function in microfluidics, combining different reagents for downstream analysis. Because the fluid is confined to the microscale, the Reynolds number is low and the flow is laminar, so there is no convective mixing and the process relies on slow diffusion. This problem can be addressed in several ways; a typical approach is to rotate the disk alternately in opposite directions, i.e. clockwise and counter-clockwise.
Flow switching
Flow switching is necessary when routing reagents into different chambers. A common method for flow switching in a centrifugal device is to exploit the Coriolis force within a Y-shaped structure. When the rotation speed is low, the liquid follows the original path; when the rotation speed is high enough that the Coriolis force becomes comparable to the centrifugal force, the liquid is routed into the other branch.
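A rough criterion for when Coriolis switching becomes possible is the ratio of the Coriolis force density (2*rho*omega*u, with u the radial flow velocity) to the centrifugal force density (rho*omega^2*r), which reduces to 2u/(omega*r). The Python sketch below evaluates this ratio; the flow velocity, radius and spin rate are illustrative assumptions.

import math

def coriolis_to_centrifugal(u, omega, r):
    """Ratio of Coriolis (2*rho*omega*u) to centrifugal (rho*omega^2*r)
    force densities for radial flow velocity u (m/s) at radius r (m)."""
    return 2.0 * u / (omega * r)

omega = 2.0 * math.pi * 50.0      # assumed spin rate of 50 Hz
print(coriolis_to_centrifugal(u=0.1, omega=omega, r=0.03))  # ~0.02, Coriolis negligible
print(coriolis_to_centrifugal(u=1.0, omega=omega, r=0.03))  # ~0.2, Coriolis becomes significant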
Others
Other functions such as sedimentation are also used in microfluidic platforms when necessary. Because particles of different mass and radius settle at different velocities in a viscous medium, they can be separated under the centrifugal field, achieving sedimentation-based separation of different particles.
Materials
Many structures can be formed using the most common rapid prototyping technology, soft lithography with polydimethylsiloxane (PDMS). PDMS is an inexpensive, clear elastomeric polymer with rubbery mechanical properties at room temperature. In the laboratory, PDMS is mixed in small batches, poured onto moulds, for example poly(methyl methacrylate) (PMMA) moulds with microscale features, and cured at moderate temperatures for minutes to hours. Open PDMS channels are closed by adhering the channel-bearing component to a glass slide or a second, flat piece of PDMS. Inlets and outlets can be formed easily using punch tools. Although many surface modifications are not permanent on PDMS due to its relatively high chain mobility compared with other polymers, PDMS remains a relevant material for microfluidic applications.
Thermoplastics are also coming into use. The use of engineering thermoplastics has many advantages, although most of these advantages have not yet been realized. A few commodity plastics have emerged as suitable for medical microfluidic applications, including poly(methyl methacrylate) (PMMA), polystyrene, polycarbonate, and a variety of cyclic polyolefin materials. PMMA has good optical properties for fluorescence and UV detection modes, is relatively easy to seal to itself, and is available in grades suitable for both injection and compression molding. Polystyrene is a material well known for assay development. Polycarbonates have a high glass transition temperature but poor optical properties for fluorescent detection. The cyclic polyolefins appear to have the best combination of optical and mechanical properties.
Detection
Signal sending
Sample preparation
Before the target molecules react with the reagents, they must be prepared for the reactions. The most typical preparation step is separation by centrifugal force. In the case of blood, for example, the sedimentation of blood cells from plasma can be achieved by rotating the biodisk for some time. After separation, all molecular diagnostic assays require a cell/viral lysis step in order to release genomic and proteomic material for downstream processing. Typical lysis methods are chemical and physical. Chemical lysis, the simplest approach, uses chemical detergents or enzymes to break down membranes. Physical lysis can be achieved using a bead-beating system on the disk: lysis occurs through collisions and shearing between the beads and the cells and through friction shearing along the lysis chamber walls.
ELISA/FIA
ELISA (enzyme-linked immunosorbent assay) and FIA (fluorescent immunoassay) are two types of immunoassay. Immunoassays are standard tools in clinical diagnostics. These tests rely on the specific detection of either the antibody or the antigen, and are commonly performed by labeling the antibody/antigen of interest by various means, such as fluorescent or enzymatic labels. However, the washing, mixing, and incubation steps are time-consuming. When the assays are integrated on microfluidic biodisks, detection times become much shorter, so such tests can be widely used on this platform.
In the ELISA method, enzymes are used to produce a detectable signal from an antibody–antigen complex. In the first step, any antigen present binds to capture antibodies that have been coated on the channel's surface. Detecting antibodies are then added and bind to the antigen. The enzyme-linked secondary antibody follows the detecting antibodies and binds to them. Finally, when the substrate is added, it is converted by the enzyme to a detectable form. Based on this principle, Sergi Morais achieved multiplexed microimmunoassays on a digital versatile disk. This multiplexed assay achieved detection limits (IC10) of 0.06 μg/L and sensitivities (IC50) of 0.54 μg/L.
In addition to typical ELISA assays, fluorescent immunoassays (FIA) have also been implemented on centrifugal microfluidic devices. The principle of FIA is almost the same as that of ELISA; the most significant difference is that fluorescent labels are used instead of enzymes.
Nucleic acid analysis
Nucleic acid sensing, using gene-specific nucleic acid amplification with a fluorescent dye or probe, and nucleic acid microarrays, such as DNA microarrays, have become important tools for genetic analysis, gene expression profiling, and genetic-based diagnostics. In gene-specific nucleic acid amplification, standard PCR or isothermal amplification, such as loop-mediated isothermal amplification (LAMP), is used to amplify the target genetic marker, and a DNA-binding fluorescent dye or a sequence-specific probe is applied for signal generation. The fluorescence can be detected in a modified CD/DVD drive or a dedicated disc device.
In nucleic acid microarrays, the process of probe immobilization and signal amplification can be separated into five steps. The surface of the microchannel is first irradiated with UV light in the presence of ozone to produce a hydrophilic surface with a high density of carboxylic acid groups (step 1). Then, the probe molecules (biotin, DNA, or human plasma IgG) are covalently attached to the polycarbonate surface via amide coupling (step 2). Next, the target molecules are labeled with fluorescent tags and this biotin-labeled target DNA is hybridized with the probe DNA immobilized on the disk (step 3). Subsequently, gold nanoparticles are bound to the target via a streptavidin conjugate (step 4). Silver is then deposited onto the gold "seed" (step 5) to increase the particle size from a few to several hundred nanometers. The amplified fluorescence is then detected by the detection system in the CD drive.
Signal receiving
The detection system is completed by the signal-receiving component. There are roughly three types of systems that can be used for detection. The first is hardware and software modification, which means the CD/DVD drive must be modified and matching software developed at the same time. This approach requires extra labor and expense, and may not be practical in developing countries or remote areas. The second type is software modification with standard hardware, meaning that detection is achieved by developing custom software on platforms such as C++ without making any changes to the hardware. The third is standard hardware and existing software, meaning that detection can be realized simply with existing equipment. Manu Pallapa successfully described a new protocol to read and quantify biotin–streptavidin binding assays with a standard optical drive using existing CD-data analysis software (IsoBuster). The latter two approaches are both worth considering, depending on the situation.
No matter which type of detection system one uses, the reading method is an important factor. There are mainly two reading methods, which are AAS (acquired analog signals) and ERD (error reading detection). In the AAS method, to determine multianalytes on a DVD, the analog signals acquired directly from the photodiode of a CD/DVD drive correlate well with the optical density of the reaction products. The ERD method is based on the analysis of reading errors. It can use the same digital versatile disk and a standard DVD drive without any supplementary hardware.
ERD
In the ERD method, the position and level of the resulting reading error correspond to the physical location and the intensity of the bioassay signal, respectively. The errors are then compared with a perfectly recorded CD to identify the time at which a given error was read out. There are several free CD-quality diagnostic programs, such as PlexTools Professional, Kprobe, and CD-DVD Speed, which can be used to access the error-statistic information in a CD/DVD drive and to generate a plot displaying the variation of the block error rate as a function of playtime. In a typical 700-MB CD-R containing 79.7 minutes of audio data, for example, the radius at which an error occurs can be calculated from the following equation:
where t is the reading time and r is the radial location.
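Because a CD is read at nominally constant linear velocity along its spiral track, the playtime at which an error appears can be converted into a radial position on the disc. The Python sketch below performs this conversion using nominal CD parameters (inner program radius, track pitch and scanning velocity), which are assumptions for illustration rather than values from the cited work.

import math

def error_radius(t_seconds, r0=0.025, pitch=1.6e-6, v=1.3):
    """Radial position (m) of the data read after t_seconds of playtime.

    The spiral length read so far is v*t; equating it with the swept
    spiral area divided by the track pitch, pi*(r^2 - r0^2)/pitch = v*t,
    gives r = sqrt(r0^2 + pitch*v*t/pi).

    r0     inner radius of the program area (m), nominally 25 mm
    pitch  track pitch (m), nominally 1.6 micrometres
    v      scanning (linear) velocity (m/s), nominally 1.2-1.4 m/s
    """
    return math.sqrt(r0**2 + pitch * v * t_seconds / math.pi)

# An assay spot whose reading error appears 30 minutes into playback
print(error_radius(30 * 60) * 1000, "mm")   # roughly 43 mm from the disc centre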
AAS
In the AAS method, the set of servo systems (focus, tracking, sled, and spindle servos) keeps the laser beam focused on the spiral track, and allows disc rotation and laser head motion during the scan. The amplification/detection board (DAB) is integrated into the CD/DVD drive unit and incorporates a photosensor and electronic circuitry to amplify the RF signal extracted from the photodiode transducer. The photosensor generates a trigger signal when detecting the trigger mark. Both signals are fed to the USB 2.0 data acquisition board (DAQ) for digitization and quantification.
See also
lab on a chip
point-of-care testing
diagnostic testing
MEMS
Immunoassay
Notes
References
External links
Marc Madou's BioMEMS lab in UC Irvine
YKCho's FRUITS lab in UNIST
Hua-Zhong Yu's webpage in SFU
Gang Logan Liu's research webpage on biodisk
Lab-on-a-chip publishing
Cell culture techniques
Microfluidics
|
554716
|
https://en.wikipedia.org/wiki/Oblivious%20transfer
|
Oblivious transfer
|
In cryptography, an oblivious transfer (OT) protocol is a type of protocol in which a sender transfers one of potentially many pieces of information to a receiver, but remains oblivious as to what piece (if any) has been transferred.
The first form of oblivious transfer was introduced in 1981 by Michael O. Rabin. In this form, the sender sends a message to the receiver with probability 1/2, while the sender remains oblivious as to whether or not the receiver received the message. Rabin's oblivious transfer scheme is based on the RSA cryptosystem. A more useful form of oblivious transfer called 1–2 oblivious transfer or "1 out of 2 oblivious transfer", was developed later by Shimon Even, Oded Goldreich, and Abraham Lempel, in order to build protocols for secure multiparty computation. It is generalized to "1 out of n oblivious transfer" where the user gets exactly one database element without the server getting to know which element was queried, and without the user knowing anything about the other elements that were not retrieved. The latter notion of oblivious transfer is a strengthening of private information retrieval, in which the database is not kept private.
Claude Crépeau showed that Rabin's oblivious transfer is equivalent to 1–2 oblivious transfer.
Further work has revealed oblivious transfer to be a fundamental and important problem in cryptography. It is considered one of the critical problems in the field, because of the importance of the applications that can be built based on it. In particular, it is complete for secure multiparty computation: that is, given an implementation of oblivious transfer it is possible to securely evaluate any polynomial time computable function without any additional primitive.
Rabin's oblivious transfer protocol
In Rabin's oblivious transfer protocol, the sender generates an RSA public modulus N = pq, where p and q are large prime numbers, and an exponent e relatively prime to φ(N) = (p − 1)(q − 1). The sender encrypts the message m as m^e mod N.
The sender sends N, e, and m^e mod N to the receiver.
The receiver picks a random x modulo N and sends x^2 mod N to the sender. Note that gcd(x, N) = 1 with overwhelming probability, which ensures that there are 4 square roots of x^2 mod N.
The sender finds a square root y of x^2 mod N and sends y to the receiver.
If the receiver finds y is neither x nor −x modulo N, the receiver will be able to factor N and therefore decrypt m^e to recover m (see Rabin encryption for more details). However, if y is x or −x mod N, the receiver will have no information about m beyond the encryption of it. Since every quadratic residue modulo N has four square roots, the probability that the receiver learns m is 1/2.
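The square-root step at the heart of Rabin's protocol can be illustrated with the following Python sketch. The tiny primes, the restriction to primes congruent to 3 mod 4 so that square roots are easy to compute, and the helper functions are illustrative assumptions; the sketch shows only the arithmetic and offers no real security.

import math, random

# Toy parameters (a real deployment would use large primes)
p, q = 1019, 1187          # both congruent to 3 mod 4, so square roots are easy
N = p * q

def sqrt_mod(a, pr):
    """Square root of a quadratic residue a modulo a prime pr with pr % 4 == 3."""
    return pow(a, (pr + 1) // 4, pr)

def crt(rp, rq):
    """Combine residues mod p and mod q into a residue mod N."""
    inv_q = pow(q, -1, p)
    return (rq + q * ((rp - rq) * inv_q % p)) % N

# Receiver: pick a random x coprime to N and send x^2 mod N
x = random.randrange(2, N)
while math.gcd(x, N) != 1:
    x = random.randrange(2, N)
c = x * x % N

# Sender: return one of the four square roots of c, chosen at random
roots = [crt(sp, sq) for sp in (sqrt_mod(c, p), p - sqrt_mod(c, p))
                     for sq in (sqrt_mod(c, q), q - sqrt_mod(c, q))]
y = random.choice(roots)

# Receiver: with probability 1/2, y is neither x nor -x and reveals a factor of N
if y not in (x, N - x):
    print("receiver can factor N:", math.gcd(abs(x - y), N))
else:
    print("receiver learns nothing new")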
1–2 oblivious transfer
In a 1–2 oblivious transfer protocol, Alice the sender has two messages m_0 and m_1, and wants to ensure that the receiver only learns one. Bob, the receiver, has a bit b and wishes to receive m_b without Alice learning b.
The protocol of Even, Goldreich, and Lempel (which the authors attribute partially to Silvio Micali), is general, but can be instantiated using RSA encryption as follows.
Alice has two messages, m_0 and m_1, and wants to send exactly one of them to Bob. Bob does not want Alice to know which one he receives.
Alice generates an RSA key pair, comprising the modulus N, the public exponent e and the private exponent d.
She also generates two random values, x_0 and x_1, and sends them to Bob along with her public modulus and exponent.
Bob picks b to be either 0 or 1, and selects either the first or the second x_b.
He generates a random value k and blinds x_b by computing v = (x_b + k^e) mod N, which he sends to Alice.
Alice doesn't know (and hopefully cannot determine) which of x_0 and x_1 Bob chose. She applies both of her random values and comes up with two possible values for k: k_0 = (v − x_0)^d mod N and k_1 = (v − x_1)^d mod N. One of these will be equal to k and can be correctly decrypted by Bob (but not Alice), while the other will produce a meaningless random value that does not reveal any information about k.
She combines the two secret messages with each of the possible keys, m'_0 = m_0 + k_0 and m'_1 = m_1 + k_1, and sends them both to Bob.
Bob knows which of the two messages can be unblinded with k, so he is able to compute exactly one of the messages, m_b = m'_b − k.
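A toy rendering of this RSA-based 1–2 oblivious transfer in Python is sketched below. The textbook-sized key, the integer encoding of the messages and the variable names are illustrative assumptions; the code follows the blinding steps described above but is not a secure implementation.

import random

# Textbook RSA toy key (p=61, q=53); real use would require large keys
N, e, d = 3233, 17, 2753

# Alice's two secret messages, encoded as small integers < N
m0, m1 = 42, 99

# --- Alice: generate and send two random values ---
x0, x1 = random.randrange(N), random.randrange(N)

# --- Bob: choose b, blind the corresponding x with a random k ---
b = 1
k = random.randrange(N)
v = ([x0, x1][b] + pow(k, e, N)) % N

# --- Alice: derive both candidate keys and mask both messages ---
k0 = pow((v - x0) % N, d, N)
k1 = pow((v - x1) % N, d, N)
m0_prime = (m0 + k0) % N
m1_prime = (m1 + k1) % N

# --- Bob: only the chosen message unblinds correctly ---
mb = ([m0_prime, m1_prime][b] - k) % N
print(mb)  # prints 99 when b == 1; the other message stays hidden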
1-out-of-n oblivious transfer and k-out-of-n oblivious transfer
A 1-out-of-n oblivious transfer protocol can be defined as a natural generalization of a 1-out-of-2 oblivious transfer protocol. Specifically, a sender has n messages, and the receiver has an index i; the receiver wishes to receive the i-th of the sender's messages without the sender learning i, while the sender wants to ensure that the receiver receives only one of the n messages.
1-out-of-n oblivious transfer is incomparable to private information retrieval (PIR).
On the one hand, 1-out-of-n oblivious transfer imposes an additional privacy requirement for the database: namely, that the receiver learn at most one of the database entries. On the other hand, PIR requires communication sublinear in n, whereas 1-out-of-n oblivious transfer has no such requirement. However, the existence of single-server PIR is a sufficient assumption for constructing 1-out-of-2 oblivious transfer.
A 1-out-of-n oblivious transfer protocol with sublinear communication was first constructed (as a generalization of single-server PIR) by Eyal Kushilevitz and Rafail Ostrovsky. More efficient constructions were proposed by Moni Naor and Benny Pinkas, William Aiello, Yuval Ishai and Omer Reingold, and Sven Laur and Helger Lipmaa. In 2017, Kolesnikov et al. proposed an efficient 1-out-of-n oblivious transfer protocol which requires roughly four times the cost of 1-out-of-2 oblivious transfer in the amortized setting.
Brassard, Crépeau and Robert further generalized this notion to k-n oblivious transfer, wherein the receiver obtains a set of k messages from the n message collection. The set of k messages may be received simultaneously ("non-adaptively"), or they may be requested consecutively, with each request based on previous messages received.
Generalized oblivious transfer
k-n Oblivious transfer is a special case of generalized oblivious transfer, which was presented by Ishai and Kushilevitz. In that setting, the sender has a set U of n messages, and the transfer constraints are specified by a collection A of permissible subsets of U.
The receiver may obtain any subset of the messages in U that appears in the collection A. The sender should remain oblivious of the selection made by the receiver, while the receiver cannot learn the value of the messages outside the subset of messages that he chose to obtain. The collection A is monotone decreasing, in the sense that it is closed under containment (i.e., if a given subset B is in the collection A, so are all of the subsets of B).
The solution proposed by Ishai and Kushilevitz uses the parallel invocations of 1-2 oblivious transfer while making use of a special model of private protocols. Later on, other solutions that are based on secret sharing were published – one by Bhavani Shankar, Kannan Srinathan, and C. Pandu Rangan, and another by Tamir Tassa.
Origins
In the early seventies Stephen Wiesner introduced a primitive called multiplexing in his seminal paper "Conjugate Coding", which was the starting point of quantum cryptography. Unfortunately it took more than ten years to be published. Even though this primitive was equivalent to what was later called 1–2 oblivious transfer, Wiesner did not see its application to cryptography.
Quantum oblivious transfer
Protocols for oblivious transfer can be implemented with quantum systems. In contrast to other tasks in quantum cryptography, like quantum key distribution, it has been shown that quantum oblivious transfer cannot be implemented with unconditional security, i.e. the security of quantum oblivious transfer protocols cannot be guaranteed only from the laws of quantum physics.
See also
k-anonymity
Secure multi-party computation
Zero-knowledge proof
Private information retrieval
References
Stephen Wiesner, "Conjugate coding", Sigact News, vol. 15, no. 1, 1983, pp. 78–88; original manuscript written circa 1970.
Michael O. Rabin. "How to exchange secrets with oblivious transfer." Technical Report TR-81, Aiken Computation Laboratory, Harvard University, 1981. Scanned handwriting + typed version on eprint.iacr.org archive. Typed version available on Dousti's homepage.
S. Even, O. Goldreich, and A. Lempel, "A Randomized Protocol for Signing Contracts", Communications of the ACM, Volume 28, Issue 6, pg. 637–647, 1985. Paper at Catuscia Palamidessi's page
Claude Crépeau. "Equivalence between two flavours of oblivious transfer". In Advances in Cryptology: CRYPTO '87, volume 293 of Lecture Notes in Computer Science, pages 350–354. Springer, 1988
Joe Kilian. "Founding Cryptography on Oblivious Transfer", Proceedings, 20th Annual ACM Symposium on the Theory of Computation (STOC), 1988. Paper at ACM portal (subscription required)
Gilles Brassard, Claude Crépeau and Jean-Marc Robert. "All-or-nothing disclosure of secrets." In Advances in Cryptology: CRYPTO ’86, volume 263 of LNCS, pages 234–238. Springer, 1986.
Moni Naor and Benny Pinkas. "Oblivious transfer with adaptive queries." In Advances in Cryptology: CRYPTO ’99, volume 1666 of LNCS, pages 573–590. Springer, 1999.
Yuval Ishai and Eyal Kushilevitz. "Private simultaneous messages protocols with applications." In Proc. of ISTCS’97, IEEE Computer Society, pages 174–184, 1997.
Bhavani Shankar, Kannan Srinathan and C. Pandu Rangan. "Alternative protocols for generalized oblivious transfer". In Proc. of ICDCN’08, LNCS 4904, pages 304–309, 2008.
Tamir Tassa. "Generalized oblivious transfer by secret sharing". Designs, Codes and Cryptography, Volume 58:1, pages 11–21, January 2011. Paper at openu.ac.il
Moni Naor and Benny Pinkas (1990). Oblivious Polynomial Evaluation 31st STOC
William Aiello, Yuval Ishai and Omer Reingold (2001) Priced Oblivious Transfer: How to Sell Digital Goods EUROCRYPT '01 Proceedings of the International Conference on the Theory and Application of Cryptographic Techniques: Advances in Cryptology, pages 119–135
Sven Laur and Helger Lipmaa (2007). "A New Protocol for Conditional Disclosure of Secrets And Its Applications". In Jonathan Katz and Moti Yung, editors, ACNS, Lecture Notes in Computer Science 4521: 207–225. Springer, Heidelberg.
Vladimir Kolesnikov, Ranjit Kumaresan, Mike Rosulek, and Ni Trieu (2017). "Efficient batched oblivious prf with applications to private set intersection". In Edgar R.Weippl, Stefan Katzenbeisser, Christopher Kruegel, Andrew C. Myers, and Shai Halevi, editors, ACM CCS 16, pages 818–829. ACM Press, October 2016.
Giovanni Di Crescenzo, Tal Malkin, Rafail Ostrovsky: Single Database Private Information Retrieval Implies Oblivious Transfer. EUROCRYPT 2000: 122-138
Eyal Kushilevitz, Rafail Ostrovsky: Replication is NOT Needed: SINGLE Database, Computationally-Private Information Retrieval. FOCS 1997: 364-373
External links
Helger Lipmaa's collection of Web links on the topic
Cryptographic primitives
|
15381
|
https://en.wikipedia.org/wiki/NATO%20Integrated%20Air%20Defense%20System
|
NATO Integrated Air Defense System
|
The NATO Integrated Air Defense System (short: NATINADS) is a command and control network combining radars and other facilities spread throughout the NATO alliance's air defence forces. It was formed in the mid-1950s and became operational in 1962 as NADGE. It has been constantly upgraded since its formation, notably with the integration of Airborne Early Warning aircraft in the 1970s. The United Kingdom maintained its own network but has been fully integrated with NATINADS since the introduction of the Linesman/Mediator network in the 1970s. Similarly, the German network retained a degree of independence through GEADGE.
Development
Development was approved by the NATO Military Committee in December 1955. The system was to be based on four air defense regions (ADRs) coordinated by SACEUR (Supreme Allied Commander Europe). Starting from 1956 early warning coverage was extended across Western Europe using 18 radar stations. This part of the system was completed by 1962. Linked to existing national radar sites the coordinated system was called the NATO Air Defence Ground Environment (NADGE).
From 1960 NATO countries agreed to place all their air defence forces under the command of SACEUR in the event of war. These forces included command & control (C2) systems, radar installations, and Surface-to-Air (SAM) missile units as well as interceptor aircraft.
By 1972 NADGE was converted into NATINADS consisting of 84 radar sites and associated Control Reporting Centers (CRC) and in the 1980s the Airborne Early Warning / Ground Environment Integration Segment (AEGIS) upgraded the NATINADS with the possibility to integrate the AWACS radar picture and all of its information into its visual displays. (NOTE: This AEGIS is not to be confused with the U.S.Navy AEGIS, a shipboard fire control radar and weapons system.) AEGIS processed the information through Hughes H5118ME computers, which replaced the H3118M computers installed at NADGE sites in the late 1960s and early 1970s.
NATINADS' ability to handle data increased with faster clock rates. The H5118M computer had 1 megabyte of memory and could handle 1.2 million instructions per second, while the former model had only 256 kilobytes of memory and a speed of 150,000 instructions per second.
NATINADS/AEGIS were complemented, in West Germany by the German Air Defence Ground Environment (GEADGE), an updated radar network adding the southern part of Germany to the European system and Coastal Radar Integration System (CRIS), adding data links from Danish coastal radars.
To counter hardware obsolescence, NATO started the AEGIS Site Emulator (ASE) program during the mid-1990s, allowing the NATINADS/AEGIS sites to replace proprietary hardware (the 5118ME computer and the various operator consoles IDM-2, HMD-22 and IDM-80) with commercial-off-the-shelf (COTS) servers and workstations.
In the early 2000s, the initial ASE capability was expanded, thanks to the greater computing power of the new hardware, with the ability to run multiple site emulators on the same hardware, and the system was renamed the Multi-AEGIS Site Emulator (MASE). The NATO system designed to replace MASE in the near future is the Air Command and Control System (ACCS).
Because of changing politics, NATO enlargement and financial crises, most European NATO countries are trying to cut defence budgets; as a direct result, many obsolete and outdated NATINADS facilities are being phased out early. As of 2013, operational NATO radar sites in Europe are as follows:
Allied Air Command
Allied Air Command (AIRCOM) is the central command of all NATO air forces on the European continent. The command is based at Ramstein Air Base in Germany and has two subordinate commands in Germany and Spain. The Royal Canadian Air Force and United States Air Force fall under command of the Canadian/American North American Aerospace Defense Command.
Allied Air Command, at Ramstein Air Base, Germany
CAOC Torrejón, at Torrejón Air Base, Spain - responsible for the airspace South of the Alps
Albania: Air Surveillance Centre, at Tirana International Airport
Bulgaria: Air Sovereignty Operations Centre, in Sofia
Croatia: Airspace Surveillance Centre, in Podvornica
Greece: Air Operations Centre, at Larissa Air Base
Italy: Air Operations Centre, in Poggio Renatico
Montenegro: Air Surveillance and Reporting Centre, at Podgorica Airport
Portugal: Control and Reporting Centre, in Monsanto
Romania: Air Operations Center, in Bucharest
Slovenia: Airspace Surveillance and Control Centre, in Brnik
Spain: Air Operations Centre, in Torrejón
Central Command and Control Group, at Torrejón Air Base
Northern Command and Control Group, at Zaragoza Air Base
Turkey: Control and Reporting Centre, in Ahlatlıbel
CAOC Uedem, in Uedem, Germany - responsible for the airspace North of the Alps
Baltic Air Surveillance Network - Regional Airspace Surveillance Coordination Centre, in Karmėlava
Estonia: Air Operations Control Centre, at Ämari Air Base
Latvia: Air Operations Centre, at Lielvārde Air Base
Lithuania: Airspace Control Centre, in Karmėlava
Belgium: Control and Reporting Centre, at Beauvechain Air Base
Czech Republic: Control and Reporting Centre, in Hlavenec
Denmark: Control and Reporting Centre, at Karup Air Base
France: Control and Reporting Centre, at Mont Verdun Air Base
Germany: Air Operations Centre, in Uedem
Control and Reporting Centre 2, in Erndtebrück
Control and Reporting Centre 3, in Schönewalde
Hungary: Air Operations Centre, in Veszprém
Iceland: Control and Reporting Centre, at Keflavik Air Base
Luxembourg: airspace controlled by Belgium's Control and Reporting Centre, at Beauvechain Air Base
Netherlands: Control and Reporting Centre, in Nieuw-Milligen
Norway: Control and Reporting Centre, in Sørreisa
Poland: Air Operations Centre, in Warsaw-Pyry
22nd Command and Control Centre, in Osówiec
32nd Command and Control Centre, in Balice
Slovakia: Air Operations Centre, at Sliač Air Base
United Kingdom: Control and Reporting Centre, at RAF Boulmer
Radar stations
Albania
The Albanian Air Force operates Lockheed Martin AN/TPS-77 radars.
Belgium
The Belgian Air Component's Control and Reporting Centre was based at Glons, where its main radar was also located. The radar was deactivated in 2015 and the Centre moved to Beauvechain Air Base in 2020. The Belgian Control and Reporting Centre reports to CAOC Uedem in Germany and is also responsible for guarding the airspace of Luxembourg. At the new location the Control and Reporting Centre uses digital radar data from the civilian radars of Belgocontrol and from the Marconi S-723 radar of the Air Component's Air Traffic Control Centre in Semmerzake.
Bulgaria
The Bulgarian Air Force's Air Sovereignty Operations Centre is located in Sofia and reports to CAOC Torrejón. The Bulgarian Air Force fields three control and surveillance zones, which operate obsolete Soviet-era radars. The Bulgarian Air Force intends to replace these radars with fewer, but more capable Western 3-D radars as soon as possible. The future locations of the new radars are as of 2018 unknown.
Joint Forces Command, in Sofia
Air Sovereignty Operational Center (ASOC), in Sofia
Base Operative Center (part of 3rd Air Base), Graf Ignatievo Air Base, operational control of fighter aviation
Command, Control and Surveillance Base, in Sofia
1st Control and Surveillance Zone, in Bozhurishte, Sofia Province
2nd Control and Surveillance Zone, in Trud, Plovdiv Province
3rd Control and Surveillance Zone, in Bratovo, Burgas Province
Canada
The Royal Canadian Air Force's control centres and radar stations are part of the Canadian/American North American Aerospace Defense Command.
Croatia
The Croatian Air Force and Air Defense's Airspace Surveillance Centre is headquartered in Podvornica and reports to CAOC Torrejón.
Air Force and Air Defense Command
Airspace Surveillance and Control Battalion, at 91st Air Force Base (Zagreb - Pleso)
Airspace Surveillance Centre, in Podvornica
Sector Operational Centre, in Split
Mount Sljeme Radar Post, with AN/FPS-117(E)1T
Borinci Radar Post, with AN/FPS-117(E)1T
Papuk Radar Post, with AN/FPS-117(E)1T
Učka Radar Post, with AN/FPS-117(E)1T
Mount Rota, with AN/FPS-117(E)1T
Czech Republic
The Czech Air Force's Control and Reporting Centre is located in Hlavenec and reports to CAOC Uedem.
Air Force Command, in Prague
26th Air Command, Control and Surveillance Regiment, in Stará Boleslav
261st Control and Reporting Centre (CRC), in Hlavenec
262nd Radiotechnical Battalion, in Hlavenec
1st Radiotechnical Company, in Nepolisy, with RAT-31DL
4th Radiotechnical Company, in Sokolnice, with RAT-31DL
263rd Support Battalion, in Hlavenec
Reserve Control and Reporting Centre, in Větrušice
Denmark
The Royal Danish Air Force's Combined Air Operations Centre (CAOC 1) in Finderup was deactivated in 2008 and replaced at the same location by the Combined Air Operations Centre Finderup (CAOC F), which had responsibility for the airspaces of Iceland, Norway, Denmark and the United Kingdom. CAOC F was deactivated in 2013 and its responsibilities were transferred to CAOC Uedem in Germany. The national Danish Control and Reporting Centre is located at Karup Air Base and it reports to CAOC Uedem.
The Thule Air Base in Greenland is a United States Air Force installation and its radars are part of the North American Aerospace Defense Command.
Air Force Tactical Command, at Karup Air Base
Air Control Wing, at Karup Air Base
Control and Reporting Centre, at Karup Air Base
Radar Station Skagen, in Skagen, with RAT-31DL
Radar Station Skrydstrup, at Skrydstrup Air Base, with AN/TPS-77
Radar Station Bornholm, in Almindingen, with Marconi S-723
Estonia
The Estonian Air Force's Air Operations Control Centre is located at Ämari Air Base and reports to the Baltic Air Surveillance Network's Regional Airspace Surveillance Coordination Centre (RASCC) in Karmėlava, Lithuania, which in turn reports to CAOC Uedem.
Air Force Command, in Tallinn
Air Surveillance Wing, at Ämari Air Base
Air Operations Control Centre, at Ämari Air Base
Engineering and Technical Group, at Ämari Air Base
Radar Station, in Levalõpme, with GM 403
Radar Station, in Otepää, with GM 403
Radar Station, in Kellavere, with AN/TPS-77(V)
Airport Surveillance Radar at Ämari Air Base, with ASR-8
France
The French Air and Space Force's Air Operations Centre is located at Mont Verdun Air Base and reports to CAOC Uedem. Most French radar sites use the PALMIER radar, which is being taken out of service. By 2022 all PALMIER radars will have been replaced with new radar stations using the GM 403 radar.
Air Defense and Air Operations Command
Air Operations Brigade, at Mont Verdun Air Base
Air Operations Centre, at Mont Verdun Air Base
Control and Reporting Centre, at Mont-de-Marsan Air Base
Control and Reporting Centre, in Cinq-Mars-la-Pile
Mont Verdun Air Base radar, with GM GM 406
Élément Air Rattaché (EAR) 943, on Mont Agel, with GM 406
Additionally the French Air and Space Force fields a GM 406 radar at the Cayenne-Rochambeau Air Base in French Guiana to protect the Guiana Space Centre in Kourou.
Germany
The German Air Force's Combined Air Operations Centre (CAOC 2) in Uedem was deactivated in 2008 and reactivated as CAOC Uedem in 2013. CAOC Uedem is responsible for the NATO airspace North of the Alps. The HADR radars are a variant of the HR-3000 radar, while the RRP-117 radars are a variant of the AN/FPS-117.
Air Operations Centre (Zentrum Luftoperationen der Luftwaffe) (NATO CAOC Uedem), in Uedem
Control and Reporting Centre 2 (Einsatzführungsbereich 2), in Erndtebrück
Operations Squadron 21, in Erndtebrück
Operations Support Squadron 22, in Erndtebrück
Sensor Platoon I, in Lauda
Remote Radar Post 240 "Loneship", in Erndtebrück with GM 406F
Remote Radar Post 246 "Hardwheel", on Erbeskopf with HADR
Remote Radar Post 247 "Batman", in Lauda with GM 406F
Remote Radar Post 248 "Coldtrack", in Freising with GM 406F
Remote Radar Post 249 "Sweet Apple", in Meßstetten with HADR
Sensor Platoon II, in Auenhausen
Remote Radar Post 241 "Crabtree", in Marienbaum with HADR
Remote Radar Post 242 "Backwash", in Auenhausen with GM 406F
Remote Radar Post 243 "Silver Cork", in Visselhövede with GM 406F
Remote Radar Post 244 "Round up", in Brockzetel with HADR
Remote Radar Post 245 "Bugle", in Brekendorf with GM 406F
Control and Reporting Training Inspection 23, in Erndtebrück
Education and Training Centre, in Erndtebrück
Education, Test and Training Group, in Erndtebrück
Control and Reporting Centre 3 (Einsatzführungsbereich 3), in Schönewalde
Operations Squadron 31, in Schönewalde
Operations Support Squadron 32, in Schönewalde
Sensor Platoon III, in Cölpin
Remote Radar Post 351 "Matchpoint", in Putgarten with RRP-117
Remote Radar Post 352 "Mindreader", in Cölpin with RRP-117
Remote Radar Post 353 "Teddy Bear", in Tempelhof with RRP-117
Remote Radar Post 356 "", in Elmenhorst with RRP-117
Sensor Platoon IV, in Regen
Remote Radar Post 354 "Blackmoor", in Döbern with RRP-117
Remote Radar Post 355 "Royal Flash", in Gleina with RRP-117
Remote Radar Post 357 "", on Döbraberg with RRP-117
Remote Radar Post 358 "Snow Cap", on Großer Arber with RRP-117
Greece
1st Area Control Centre, inside Mount Chortiatis, with Marconi S-743D
2nd Area Control Centre, inside Mount Parnitha, with Marconi S-743D
9th Control and Warning Station Squadron, on Mount Pelion, with Marconi S-743D
10th Control and Warning Station Squadron, on Mount Chortiatis, with Marconi S-743D
The Hellenic Air Force's Combined Air Operations Centre (CAOC 7) at Larissa Air Base was deactivated in 2013 and its responsibilities transferred to the CAOC Torrejón in Spain. The Hellenic Air Force fields two HR-3000, four AR-327 and six Marconi S-743D radar systems, however as of 2018 the air force is in the process of replacing some of its older systems with three RAT-31DL radars.
Air Force Tactical Command, at Larissa Air Base
Air Operations Centre, at Larissa Air Base
1st Area Control Centre, inside Mount Chortiatis
2nd Area Control Centre, inside Mount Parnitha
1st Control and Warning Station Squadron, in Didymoteicho, with AR-327
2nd Control and Warning Station Squadron, on Mount Ismaros, with HR-3000
3rd Control and Warning Station Squadron, on Mount Vitsi, with Marconi S-743D
4th Control and Warning Station Squadron, on Mount Elati, with RAT-31DL
5th Control and Warning Station Squadron, in Kissamos, with Marconi S-743D
6th Control and Warning Station Squadron, on Mykonos, with AR-327
7th Control and Warning Station Squadron, on Mount Mela, with AR-327
8th Control and Warning Station Squadron, on Lemnos, with AR-327
9th Control and Warning Station Squadron, on Mount Pelion, with Marconi S-743D
10th Control and Warning Station Squadron, on Mount Chortiatis, with Marconi S-743D
11th Control and Warning Station Squadron, in Ziros, with HR-3000
Hungary
The Hungarian Air Force's Air Operations Centre is located in Veszprém and reports to CAOC Uedem. There are additional three radar companies with Soviet-era equipment subordinate to the 54th Radar Regiment "Veszprém", however it is unclear if they will remain in service once Hungary's newest radar at Medina reaches full operational capability.
Air Force Command, in Budapest
Air Operations Centre, in Veszprém
54th Radar Regiment "Veszprém", in Veszprém
1st Radar Data Centre, in Békéscsaba, with RAT-31DL
2nd Radar Data Centre, in Medina, with RAT-31DL
3rd Radar Data Centre, in Bánkút, with RAT-31DL
Iceland
The Iceland Air Defense System, which is part of the Icelandic Coast Guard, monitors Iceland's airspace. Air Defense is provided by fighter jets from NATO allies, which rotate units for the Icelandic Air Policing mission to Keflavik Air Base.
The Iceland Air Defense System's Control and Reporting Centre is at Keflavik Air Base and reports to CAOC Uedem in Germany.
Iceland Air Defense System, at Keflavik Air Base
Control and Reporting Centre, at Keflavik Air Base
H1 Radar Station, at Miðnesheiði, with AN/FPS-117(V)5
H2 Radar Station, on Mount Gunnolfsvík, with AN/FPS-117(V)5
H3 Radar Station, at Stokksnes, with AN/FPS-117(V)5
H4 Radar Station, on Mount Bolafjalli, with AN/FPS-117(V)5
Italy
The Italian Air Force's Combined Air Operations Centre (CAOC 5) in Poggio Renatico was deactivated in 2013 and replaced with the Mobile Command and Control Regiment (RMCC) at Bari Air Base, while the Centre's responsibilities were transferred to the CAOC Torrejón in Spain.
Air Operations Command (COA), in Poggio Renatico
Air Operations Centre, in Poggio Renatico
Integrated Missile Air-defense Regiment (Rep. DAMI), in Poggio Renatico
11th Integrated Missile Air-defense Squadron, in Poggio Renatico
22nd Air Force Radar Squadron (GrRAM), in Licola, with AN/FPS-117(V)
112th Remote Radar Station Flight, in Mortara, with RAT-31DL
113th Remote Radar Station Flight, in Lame di Concordia, with RAT-31DL
114th Remote Radar Station Flight, in Potenza Picena, with RAT-31DL
115th Remote Radar Station Flight, in Capo Mele, with RAT-31DL
121st Remote Radar Station Flight, in Poggio Ballone, with AN/FPS-117(V)
123rd Remote Radar Station Flight, in Capo Frasca, with AN/FPS-117(V)
131st Remote Radar Station Flight, in Jacotenente, with RAT-31DL
132nd Remote Radar Station Flight, in Capo Rizzuto, with RAT-31DL
133rd Remote Radar Station Flight, in San Giovanni Teatino, with AN/FPS-117(V)
134th Remote Radar Station Flight, in Lampedusa, with RAT-31DL
135th Remote Radar Station Flight, in Marsala, with RAT-31DL
136th Remote Radar Station Flight, in Otranto, with RAT-31DL
137th Remote Radar Station Flight, in Mezzogregorio, with RAT-31DL
Latvia
The Latvian Air Force's Air Operations Centre is located at Lielvārde Air Base and reports to the Baltic Air Surveillance Network's Regional Airspace Surveillance Coordination Centre (RASCC) in Karmėlava, Lithuania, which in turn reports to CAOC Uedem.
Air Force Headquarters, at Lielvārde Air Base
Air Surveillance Squadron, at Lielvārde Air Base
Air Operations Centre, at Lielvārde Air Base
1st Radiotechnical (Radar) Post, at Lielvārde Air Base, with AN/TPS-77(V)
2nd Radiotechnical (Radar) Post, in Audriņi, with AN/TPS-77(V)
3rd Radiotechnical (Radar) Post, in Čalas, with AN/TPS-77(V)
Mobile Radar Section, with TPS-77 MRR
Lithuania
The Lithuanian Air Force's Air Operations Control Centre is located in Karmėlava and reports to the Baltic Air Surveillance Network's Regional Airspace Surveillance Coordination Centre (RASCC) co-located in Karmėlava, which in turn reports to CAOC Uedem.
Lithuanian Air Force Headquarters, in Kaunas
Airspace Surveillance and Control Command, in Kaunas
Airspace Control Centre, in Karmėlava
1st Radar Post, in Antaveršis
3rd Radar Post, in Degučiai
4th Radar Post, in Ceikiškės
Luxembourg
Luxembourg's airspace is monitored and guarded by the Belgian Air Component's Control and Reporting Centre at Beauvechain Air Base.
Montenegro
The Armed Forces of Montenegro do not possess a modern air defense radar and the country's airspace is monitored by Italian Air Force radar sites. The Armed Forces Air Surveillance and Reporting Centre is located at Podgorica Airport in Golubovci and reports to CAOC Torrejón in Spain.
Netherlands
The Royal Netherlands Air Force's Air Operations Centre is located at Nieuw-Milligen and reports to CAOC Uedem. The air force's main radars are being replaced with two modern SMART-L GB radars.
Air Force Command, in The Hague
Air Operations Control Station, in Nieuw-Milligen
Control and Reporting Centre, in Nieuw-Milligen
Radar Station South, in Nieuw-Milligen, with SMART-L GB
Radar Station North, at Wier, with SMART-L GB
Norway
The Royal Norwegian Air Force's Combined Air Operations Centre (CAOC 3) in Reitan was deactivated in 2008 and its responsibilities were transferred to the Combined Air Operations Centre Finderup (CAOC F). After CAOC F was deactivated in 2013, the responsibility for the air defense of Norway was transferred to CAOC Uedem in Germany, to which the Royal Norwegian Air Force's Control and Reporting Centre in Sørreisa reports. Until 2016 the Royal Norwegian Air Force's radar installations were distributed between two CRCs. That year CRC Mågerø was disbanded; in its place a wartime mobilization back-up CRC was formed, with a reduction in personnel from around 170 active-duty personnel to about 50 air force home guardsmen. The SINDRE I radars are a variant of the HR-3000 radar, which is also used in the German HADR radars. The newer RAT-31SL/N radars are sometimes designated SINDRE II.
Armed Forces Operational Headquarters, Reitan near Bodø Main Air Station
131 Air Wing, in Sørreisa
Control and Reporting Centre Sørreisa
Radar Station Njunis, with RAT-31SL/N
Radar Station Senja, with RAT-31SL/N
Radar Station Honningsvåg, with RAT-31SL/N
Radar Station Vestvågøy, with SINDRE I
Radar Station Vågsøy, with SINDRE I
Radar Station Skykula, with SINDRE I
Poland
The Polish Armed Forces Operational Command's Air Operations Centre is located in the Warsaw-Pyry neighborhood and reports to CAOC Uedem. The 3rd Wrocław Radiotechnical Brigade is responsible for the operation of the Armed Forces' radar equipment. As of 2021 the Polish Air Force possesses three NUR-12M and three RAT-31DL long-range radars making up the BACKBONE system, which are listed below.
Armed Forces Operational Command, in Warsaw
Air Operations Centre - Air Component Command, in Warsaw-Pyry
Mobile Air Operations Command Unit, in Babki
22nd Command and Control Centre, in Osówiec
32nd Command and Control Centre, at Kraków-Balice Air Base
1st Air Operations Coordination Centre, in Gdynia
2nd Air Operations Coordination Centre, in Kraków
4th Air Operations Coordination Centre, in Szczecin
3rd Wrocław Radiotechnical Brigade, in Wrocław
3rd Sandomierz Radiotechnical Battalion, in Sandomierz
110th Long Range Radiolocating Post, in Łabunie, with RAT-31DL
360th Long Range Radiolocating Post, in Brzoskwinia, with NUR-12M
8th Szczycień Radiotechnical Battalion, in Lipowiec
144th Long Range Radiolocating Post, in Roskosz, with NUR-12M
184th Long Range Radiolocating Post, in Szypliszki, with RAT-31DL
211th Long Range Radiolocating Post, in Chruściel, with RAT-31DL
31st Lower Silesian Radiotechnical Battalion, in Wrocław
170th Long Range Radiolocating Post, in Wronowice, with NUR-12M
34th Chojnice Radiotechnical Battalion, in Chojnice
Portugal
The Portuguese Air Force's Combined Air Operations Centre (CAOC 10) in Lisbon was deactivated in 2013 and its responsibilities were transferred to CAOC Torrejón in Spain.
Air Command, in Lisbon
Control and Reporting Centre, in Monsanto
Radar Station 1, on Monte Fóia, with HR-3000
Radar Station 2, on Monte Pilar in Paços de Ferreira, with HR-3000
Radar Station 3, at Montejunto, with HR-3000
Radar Station 4, on Pico do Arieiro, on the island of Madeira, with LANZA 3-D
Romania
The Romanian Air Force's Air Operations Centre is headquartered in Bucharest and reports to CAOC Torrejón. The radar station in Bârnova is officially designated and operated as a civilian radar station, however its data is fed into the military air surveillance system.
Air Operations Centre, in Bucharest
2nd Airspace Surveillance Centre "North", at 71st Air Base, in Câmpia Turzii
Radar Station, in Ovidiu, with AN/FPS-117(V)
Radar Station, at Giarmata Airport, with AN/FPS-117(V)
Radar Station, in Suceava, with AN/FPS-117(V)
Radar Station, in Craiova, with AN/FPS-117(V)
Radar Station, on Muntele Mare, with AN/FPS-117(V)
Civil/Military Radar Station, in Bârnova, with AN/FPS-117(V)
Slovakia
The Slovak Air Force's Air Operations Centre is located at Sliač Air Base and reports to CAOC Uedem. The Slovak Air Force still operates obsolete Soviet-era radars, which it intends to replace with fewer, but more capable Western 3-D radars as soon as possible. The future locations of the new radars are as of 2018 unknown.
Air Force Command, at Sliač Air Base
Command, Control and Surveillance Wing, at Sliač Air Base
Air Operations Centre, at Sliač Air Base
Radar Surveillance Battalion, in Sliač Air Base
Slovenia
The Slovenian Air Force and Air Defense's Airspace Surveillance and Control Centre is headquartered in Brnik and reports to CAOC Torrejón.
The Italian Air Force's 4th Wing at Grosseto Air Base and 36th Wing at Gioia del Colle Air Base rotate a QRA flight of Eurofighter Typhoons to Istrana Air Base, which are responsible for the air defense of Northern Italy and Slovenia.
Forces Command, in Vrhnika
15th Military Aviation Regiment, at Cerklje ob Krki Air Base
16th Airspace Surveillance and Control Battalion in Brnik
Airspace Surveillance and Control Centre, in Brnik
1st Radar Station, in Vrhnika, with GM 403
2nd Radar Station, in Hočko Pohorje, with GM 403
Spain
The Spanish Air Force's Combined Air Operations Centre (CAOC 8) at Torrejón Air Base was deactivated in 2013 and replaced at the same location by CAOC Torrejón, which took over the functions of CAOC 5, CAOC 7, CAOC 8 and CAOC 10. CAOC Torrejón is responsible for the NATO airspace South of the Alps.
Combat Air Command, at Torrejón Air Base
Combat Air Command Headquarter (CGMACOM), at Torrejón Air Base
Air Operations Centre / NATO CAOC Torrejón
Command and Control Systems Headquarter (JSMC), at Torrejón Air Base
Central Command and Control Group (GRUCEMAC), at Torrejón Air Base
Northern Command and Control Group (GRUNOMAC), at Zaragoza Air Base
Mobile Air Control Group (GRUMOCA) at Tablada Air Base
1st Air Surveillance Squadron (EVA 1) radar station, at Air Station El Frasno, with LANZA 3-D
2nd Air Surveillance Squadron (EVA 2) radar station, at Air Station Villatobas, with RAT-31SL/T
3rd Air Surveillance Squadron (EVA 3) radar station, at Air Station Constantina, with LANZA 3-D
4th Air Surveillance Squadron (EVA 4) radar station, at Air Station Roses, with LANZA 3-D
5th Air Surveillance Squadron (EVA 5) radar station, at Air Station Aitana, with RAT-31SL/T
7th Air Surveillance Squadron (EVA 7) radar station, at Air Station Puig Major, with LANZA 3-D
9th Air Surveillance Squadron (EVA 9) radar station, at Air Station Motril, with RAT-31SL/T
10th Air Surveillance Squadron (EVA 10) radar station, at Air Station Barbanza, with LANZA 3-D
11th Air Surveillance Squadron (EVA 11) radar station, at Air Station Alcalá de los Gazules, with LANZA 3-D
12th Air Surveillance Squadron (EVA 12) radar station, at Air Station Espinosa de los Monteros, with RAT-31SL/T
13th Air Surveillance Squadron (EVA 13) radar station, at Air Station Sierra Espuña, with LANZA 3-D
21st Air Surveillance Squadron (EVA 21) radar station, at Vega de San Mateo on Gran Canaria, with LANZA 3-D
22nd Air Surveillance Squadron (EVA 22) radar station, in Haría on Lanzarote, with RAT-31SL/T
Turkey
The Turkish Air Force's Combined Air Operations Centre (CAOC 6) in Eskisehir was deactivated in 2013 and its responsibilities were transferred to CAOC Torrejón in Spain. Turkey's Air Force fields a mix of HR-3000, AN/FPS-117, RAT-31SL and RAT-31DL radars; however, the exact number of each of these radars and their locations in the Turkish radar system are unknown.
Air Force Command (COA), in
Control and Reporting Centre, in Ahlatlıbel
Aerial Surveillance Radar Post, in Ahlatlıbel, with
Aerial Surveillance Radar Post, in Körfez, with
Aerial Surveillance Radar Post, in Karabelen, with
Aerial Surveillance Radar Post, in Çanakkale, with
Aerial Surveillance Radar Post, in Erzurum, with
Aerial Surveillance Radar Post, in Datça, with
Aerial Surveillance Radar Post, in İnebolu, with
Aerial Surveillance Radar Post, in İskenderun, with
Aerial Surveillance Radar Post, in Rize, with
United Kingdom
The Royal Air Force's Combined Air Operations Centre (CAOC 9) at RAF High Wycombe was deactivated in 2008 and its responsibilities were transferred to the Combined Air Operations Centre Finderup (CAOC F). After CAOC F was deactivated in 2013 the responsibility for the air defense of the United Kingdom was transferred to CAOC Uedem in Germany. The Royal Air Force's Control and Reporting Centres report to it.
No. 1 Group RAF, at RAF High Wycombe
UK Air Surveillance and Control Systems, at RAF Boulmer
Control and Reporting Centre, at RAF Boulmer
No. 1 Air Control Centre, at RAF Scampton (National Air Control Centre)
RRH Benbecula, in North Uist, with AN/TPS-77
RRH Brizlee Wood, in Shipley, with AN/FPS-117
RRH Buchan, in Boddam, with AN/TPS-77
RRH Portreath, in Portreath, with AR-327
RRH Saxa Vord, in Unst, with AN/TPS-77
RRH Trimingham, in Trimingham, with AN/FPS-117 (satellite station of RRH Neatishead)
RRH Staxton Wold, in Scarborough, had an AN/TPS-77 radar, which was moved to RRH Saxa Vord in 2017; future plans for RRH Staxton Wold are, as of 2018, unknown.
United States
The United States Air Force's control centres and radar stations are part of the Canadian/American North American Aerospace Defense Command.
Non-NATO European air defense systems
Austria
Austrian Air Force - GOLDHAUBE system:
Command and Control Center "Basisraum", in St Johann im Pongau
Kolomansberg Radar Station
Großer Speikkogel Radar Station
Steinmandl Radar Station
Switzerland
Swiss Air Force - FLORAKO system:
Air Defence & Direction Center, at Dübendorf Air Base
Pilatus Radar Station
Scopi Radar Station
Weisshorn Radar Station
Weissfluh Radar Station
References
Integrated Air Defense System
Air defence radar networks
|
524425
|
https://en.wikipedia.org/wiki/Computer%20science%20and%20engineering
|
Computer science and engineering
|
Computer Science and Engineering (CSE) is an academic program at many universities which comprises scientific and engineering aspects of computing. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with various specializations.
Academic courses
Academic programs vary between colleges. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithm design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and game programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields such as image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most of these CSE areas require initial mathematical knowledge, so the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering and physics (field theory and electromagnetism).
Example universities with CSE majors and departments
Massachusetts Institute of Technology
University of Oxford
University of Asia Pacific
California Institute of Technology
Stanford University
North South University
Global University Bangladesh
University of Barisal
University of Chittagong
American University of Beirut
Santa Clara University
University of Michigan
University of New South Wales
University of Washington
Bucknell University
Indian Institute of Technology Kanpur
Indian Institute of Technology Bombay
Indian Institute of Technology Delhi
Indian Institute of Technology Madras
Amrita Vishwa Vidyapeetham
National University of Singapore
Ghent University
Lund University
University of Nevada
University of Notre Dame
Delft University of Technology
See also
Computer science
Computer graphics (computer science)
Bachelor of Technology
References
Computer science education
Computer engineering
Engineering academics
Engineering education
|
2581364
|
https://en.wikipedia.org/wiki/NetIQ%20eDirectory
|
NetIQ eDirectory
|
eDirectory is an X.500-compatible directory service software product from NetIQ. Previously owned by Novell, the product has also been known as Novell Directory Services (NDS) and has sometimes been referred to as NetWare Directory Services. NDS was initially released by Novell in 1993 for NetWare 4, replacing the NetWare bindery mechanism used in previous versions, to centrally manage access to resources on multiple servers and computers within a given network. eDirectory is a hierarchical, object-oriented database used to represent certain assets in an organization in a logical tree, including organizations, organizational units, people, positions, servers, volumes, workstations, applications, printers, services, and groups, to name just a few.
Features
eDirectory uses dynamic rights inheritance, which allows both global and specific access controls. Access rights to objects in the tree are determined at the time of the request and are determined by the rights assigned to the objects by virtue of their location in the tree, any security equivalences, and individual assignments. The software supports partitioning at any point in the tree, as well as replication of any partition to any number of servers. Replication between servers occurs periodically using deltas of the objects. Each server can act as a master of the information it holds (provided the replica is not read only). Additionally, replicas may be filtered to only include defined attributes to increase speed (for example, a replica may be configured to only include a name and phone number for use in a corporate address book, as opposed to the entire directory user profile).
The software supports referential integrity, multi-master replication, and has a modular authentication architecture. It can be accessed via LDAP, DSML, SOAP, ODBC, JDBC, JNDI, and ADSI.
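As an illustration of LDAP access to eDirectory, the following Python sketch uses the third-party ldap3 library to bind to a directory and search for user objects. The server address, credentials and container names are placeholder assumptions and would differ in any real tree.

from ldap3 import Server, Connection, ALL

# Placeholder connection details for an eDirectory LDAP listener
server = Server("edir.example.com", port=636, use_ssl=True, get_info=ALL)
conn = Connection(server,
                  user="cn=admin,o=example",   # assumed admin object
                  password="secret",           # placeholder credential
                  auto_bind=True)

# Search an assumed container for user objects and read two attributes
conn.search(search_base="ou=users,o=example",
            search_filter="(objectClass=inetOrgPerson)",
            attributes=["cn", "telephoneNumber"])

for entry in conn.entries:
    print(entry.cn, entry.telephoneNumber)

conn.unbind()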
Supported platforms
Windows 2000
Windows Server 2003
Windows Server 2008
Windows Server 2012
SUSE Linux Enterprise Server
Red Hat Enterprise Linux
Novell NetWare
Sun Solaris
IBM AIX
HP-UX
Network configuration stored in the directory
When Novell first designed their directory, they decided to store large amounts of their operational server data within the directory in addition to just user account information. As a result, a typical Novell directory contains a large pool of additional objects representing the servers themselves and any software services running on those servers, such as LDAP or email software.
Storage
Versions of eDirectory prior to version 8 (then called Novell Directory Services) used a record-based database management engine called Recman, which relied on the Transaction Tracking System built into the NetWare operating system. Since version 8, eDirectory (along with the GroupWise collaboration suite, starting with version 5) uses the FLAIM (FLexible Adaptable Information Management) database engine. FLAIM is an open source embeddable database engine developed by Novell and released under the GPL license in 2006. This change allowed eDirectory to be ported to other platforms such as Windows, Linux, and Unix.
Further reading
See also
List of LDAP software
References
External links
NetIQ eDirectory product page
eDirectory
EDirectory
Directory services
Proprietary software
|
28402437
|
https://en.wikipedia.org/wiki/ExoPC
|
ExoPC
|
The EXOPC is a Tablet PC, in slate form, that uses Windows 7 Home Premium as its operating system, and is designed by the company of the same name, based in Quebec, Canada. The EXOPC Slate is manufactured by Pegatron. The first EXOPC slate was launched in October 2010 directly from EXOPC Corp. on their website, and in Canada through the company Hypertechnologie Ciara. Hypertechnologie Ciara markets the slate under the name Ciara Vibe. Probitas markets the EXOPC as Mobi-One in Southern Europe and North Africa. RM Education markets the EXOPC in the UK as the RM Slate. Leader Computers markets the EXOPC in Australia. The EXOPC Slate is also currently available in the United States via the Microsoft Store, both online and in stores. Mustek markets it as the Mecer Lucid Slate in South Africa.
Hardware
The architecture is based on an Intel Atom-M Pineview N450 CPU that is clocked at 1.66 GHz, and includes 2 GB of DDR2 SDRAM and 32 GB of solid-state drive (SSD) storage in its basic version, with an alternative model having a larger 64 GB SSD.
The EXOPC is also equipped with an accelerometer, which lets the display switch between portrait and landscape mode when the slate is turned. Internally it has four mini-PCIe slots, three of which provide space for full-length cards and one for a half-length card. Three of these slots are in use; the fourth is available and intended for a WWAN card. The unit also provides a SIM card slot.
Display
The EXOPC has an 11.6-inch diagonal, capacitive multi-touch screen. The screen has a resolution of 1366 × 768 pixels (WXGA), a 16:9 ratio, and has 135 pixels per inch.
The screen's firmware currently allows detection of two points of simultaneous touch, but is technically capable of up to 10 points of touch.
A light sensor built into the front of the tablet automatically adjusts the display brightness to ambient condition.
It is also possible to use a capacitive stylus for precision work, such as hand-drawn art and graphic works.
Connectivity
The EXOPC offers connectivity equivalent to that of a standard laptop:
Wi-Fi IEEE 802.11b/IEEE 802.11g / IEEE 802.11n
Bluetooth 2.1 + EDR
Two USB 2.0 ports
Audio in/out SuperJack
Mini-HDMI for connecting to an external monitor or television, with a maximum output resolution of 1080p (upscaled from 1366 × 768)
Dock connector
External power supply
Recharging the battery is done through a standard external power supply:
Size:
Weight:
Input: 100–240 V
Output: 19 V, 2.1 amperes
Software features
Operating system
The EXOPC uses Microsoft Windows 7 as its operating system. The company has developed a GUI layer on top of the standard Windows 7 interface, nicknamed by the EXOPC community the "Connect Four" interface due to its full screen of interactive circles arranged in a grid pattern. A dedicated button on the touch-screen interface minimizes the EXOPC layer and reveals the Windows 7 desktop, allowing the EXOPC Slate to act as a standard Windows computer when needed.
Applications
Pre-installed applications
The EXOPC comes with the following pre-installed applications:
Microsoft Security Essentials
Microsoft .NET framework 4.0
Microsoft Silverlight runtime for IE
Adobe Flash Player 10.2 and Acrobat Reader for reading PDF files
EXOPC GUI Layer
Store-specific applications
An application library, similar to the Apple App Store or the Android Market, is available for the device and is accessible through the EXOPC UI.
Feedback
The tablet captured the attention of several blogs and websites in the summer of 2010, being heralded as a possible alternative to the iPad. However, early reviews criticized the weight and battery life of the final product, as well as many missing features, the interface itself, the sluggishness of the Internet browser, and the difficulty of using the on-screen keyboard.
See also
WeTab – a German version with the MeeGo OS and similar hardware
References
External links
Official company website
Manufacturer website
Website of Hypertechnologie Ciara, Inc.
EXOPC Microsoft Store
Computer companies of Canada
Tablet computers
|
692880
|
https://en.wikipedia.org/wiki/ALGOL%2068
|
ALGOL 68
|
ALGOL 68 (short for Algorithmic Language 1968) is an imperative programming language that was conceived as a successor to the ALGOL 60 programming language, designed with the goal of a much wider scope of application and more rigorously defined syntax and semantics.
The complexity of the language's definition, which runs to several hundred pages filled with non-standard terminology, made compiler implementation difficult and it was said it had "no implementations and no users". This was only partly true; ALGOL 68 did find use in several niche markets, notably in the United Kingdom where it was popular on International Computers Limited (ICL) machines, and in teaching roles. Outside these fields, use was relatively limited.
Nevertheless, the contributions of ALGOL 68 to the field of computer science have been deep, wide-ranging and enduring, although many of these contributions were only publicly identified when they had reappeared in subsequently developed programming languages. Many languages were developed specifically as a response to the perceived complexity of the language, the most notable being Pascal, or were reimplementations for specific roles, like Ada.
Many languages of the 1970s trace their design specifically to ALGOL 68, selecting some features while abandoning others that were considered too complex or out-of-scope for given roles. Among these is the language C, which was directly influenced by ALGOL 68, especially by its strong typing and structures. Most modern languages trace at least some of their syntax to either C or Pascal, and thus directly or indirectly to ALGOL 68.
Overview
ALGOL 68 features include expression-based syntax, user-declared types and structures/tagged-unions, a reference model of variables and reference parameters, string, array and matrix slicing, and concurrency.
ALGOL 68 was designed by the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi. On December 20, 1968, the language was formally adopted by the group, and then approved for publication by the General Assembly of IFIP.
ALGOL 68 was defined using a formalism, a two-level formal grammar, invented by Adriaan van Wijngaarden. Van Wijngaarden grammars use a context-free grammar to generate an infinite set of productions that will recognize a particular ALGOL 68 program; notably, they are able to express the kind of requirements that in many other programming language technical standards are labelled semantics, and must be expressed in ambiguity-prone natural language prose, and then implemented in compilers as ad hoc code attached to the formal language parser.
The main aims and principles of design of ALGOL 68:
Completeness and clarity of description
Orthogonality of design
Security
Efficiency:
Static mode checking
Mode-independent parsing
Independent compiling
Loop optimizing
Representations – in minimal & larger character sets
ALGOL 68 has been criticized, most prominently by some members of its design committee such as C. A. R. Hoare and Edsger Dijkstra, for abandoning the simplicity of ALGOL 60, becoming a vehicle for complex or overly general ideas, and doing little to make the compiler writer's task easier, in contrast to deliberately simple contemporaries (and competitors) such as C, S-algol and Pascal.
In 1970, ALGOL 68-R became the first working compiler for ALGOL 68.
In the 1973 revision, certain features – such as proceduring, gommas and formal bounds – were omitted; cf. the section The language of the unrevised report below.r0
Though European defence agencies (in Britain Royal Signals and Radar Establishment (RSRE)) promoted the use of ALGOL 68 for its expected security advantages, the American side of the NATO alliance decided to develop a different project, the language Ada, making its use obligatory for US defense contracts.
ALGOL 68 also had a notable influence in the Soviet Union, details of which can be found in Andrey Ershov's 2014 paper: "ALGOL 68 and Its Impact on the USSR and Russian Programming", and "Алгол 68 и его влияние на программирование в СССР и России".
Steve Bourne, who was on the ALGOL 68 revision committee, took some of its ideas to his Bourne shell (and thereby, to descendant Unix shells such as Bash) and to C (and thereby to descendants such as C++).
The complete history of the project can be found in C. H. Lindsey's A History of ALGOL 68.
For a full-length treatment of the language, see "Programming ALGOL 68 Made Easy" by Dr. Sian Mountbatten, or "Learning ALGOL 68 Genie" by Marcel van der Veer which includes the Revised Report.
History
Origins
ALGOL 68, as the name implies, is a follow-on to the ALGOL language that was first formalized in 1960. That same year the International Federation for Information Processing (IFIP) formed and started the Working Group on ALGOL, or WG2.1. This group released an updated ALGOL 60 specification in Rome in April 1962. At a follow-up meeting in March 1964, it was agreed that the group should begin work on two follow-on standards, ALGOL X which would be a redefinition of the language with some additions, and an ALGOL Y, which would have the ability to modify its own programs in the style of the language LISP.
Definition process
The first meeting of the ALGOL X group was held in Princeton University in May 1965. A report of the meeting noted two broadly supported themes, the introduction of strong typing and interest in Euler's concepts of 'trees' or 'lists' for handling collections.
At the second meeting in October in France, three formal proposals were presented, Niklaus Wirth's ALGOL W along with comments about record structures by C.A.R. (Tony) Hoare, a similar language by Gerhard Seegmüller, and a paper by Adriaan van Wijngaarden on "Orthogonal design and description of a formal language". The latter, written in almost indecipherable "W-Grammar", proved to be a decisive shift in the evolution of the language. The meeting closed with an agreement that van Wijngaarden would re-write the Wirth/Hoare submission using his W-Grammar.
This seemingly simple task ultimately proved more difficult than expected, and the follow-up meeting had to be delayed six months. When it met in April 1966 in Kootwijk, van Wijngaarden's draft remained incomplete and Wirth and Hoare presented a version using more traditional descriptions. It was generally agreed that their paper was "the right language in the wrong formalism". As these approaches were explored, it became clear there was a difference in the way parameters were described that would have real-world effects, and while Wirth and Hoare protested that further delays might become endless, the committee decided to wait for van Wijngaarden's version. Wirth then implemented their current definition as ALGOL W.
At the next meeting in Warsaw in October 1966, there was an initial report from the I/O Subcommittee, which had met at the Oak Ridge National Laboratory and the University of Illinois but had not yet made much progress. The two proposals from the previous meeting were again explored, and this time a new debate emerged about the use of pointers; ALGOL W used them only to refer to records, while van Wijngaarden's version could point to any object. To add confusion, John McCarthy presented a new proposal for operator overloading and the ability to string together AND and OR constructs, and Klaus Samelson wanted to allow anonymous functions. In the resulting confusion, there was some discussion of abandoning the entire effort. The confusion continued through what was supposed to be the ALGOL Y meeting in Zandvoort in May 1967.
Publication
A draft report was finally published in February 1968. This was met by "shock, horror and dissent", mostly due to the hundreds of pages of unreadable grammar and odd terminology. Charles H. Lindsey attempted to figure out what "language was hidden inside of it", a process that took six man-weeks of effort. The resulting paper, "ALGOL 68 with fewer tears", was widely circulated. At a wider information processing meeting in Zurich in May 1968, attendees complained that the language was being forced upon them and that IFIP was "the true villain of this unreasonable situation" as the meetings were mostly closed and there was no formal feedback mechanism. Wirth and Peter Naur formally resigned their authorship positions in WG2.1 at that time.
The next WG2.1 meeting took place in Tirrenia in June 1968. It was supposed to discuss the release of compilers and other issues, but instead devolved into a discussion of the language. Van Wijngaarden responded by saying (or threatening) that he would release only one more version of the report. By this point Naur, Hoare, and Wirth had left the effort, and several more members were threatening to do so. Several more meetings followed: North Berwick in August 1968 and Munich in December, which produced the release of the official Report in January 1969 but also resulted in a contentious Minority Report. Finally, at Banff, Alberta in September 1969, the project was generally considered complete and the discussion was primarily about errata and a greatly expanded Introduction to the Report.
The effort took five years, burned out many of the greatest names in computer science, and on several occasions became deadlocked over issues both in the definition and the group as a whole. Hoare released a "Critique of ALGOL 68" almost immediately, which has been widely referenced in many works. Wirth went on to further develop the ALGOL W concept and released this as Pascal in 1970.
Implementations
ALGOL 68-R
The first implementation of the standard, based on the late-1968 draft Report, was introduced by the Royal Radar Establishment in the UK as ALGOL 68-R in July 1970. This was, however, a subset of the full language, and Barry Mailloux, the final editor of the Report, joked that "It is a question of morality. We have a Bible and you are sinning!" This version nevertheless became very popular on the ICL machines, and became a widely-used language in military coding, especially in the UK.
Among the changes in 68-R was the requirement for all variables to be declared before their first use. This had the significant advantage of allowing the compiler to be one-pass, as space for the variables in the activation record was set aside before it was used. However, this change also had the side effect of demanding that PROCs be declared twice, once as a declaration of the types, and then again as the body of code. Another change was to eliminate the assumed VOID mode, an expression that returns no value (called a statement in other languages), demanding that the word VOID be added where it would otherwise have been assumed. Further, 68-R eliminated the explicit parallel processing commands based on PAR.
Others
The first full implementation of the language was introduced in 1974 by CDC Netherlands for the Control Data mainframe series. This saw limited use, mostly teaching in Germany and the Netherlands.
A version similar to 68-R was introduced from Carnegie Mellon University in 1976 as 68S, and was again a one-pass compiler based on various simplifications of the original and intended for use on smaller machines like the DEC PDP-11. It too was used mostly for teaching purposes.
A version for IBM mainframes did not become available until 1978, when one was released from Cambridge University. This was "nearly complete". Lindsey released a version for small machines including the IBM PC in 1984.
Three open source Algol 68 implementations are known:
a68g, GPLv3, written by Marcel van der Veer.
algol68toc, an open-source software port of ALGOL 68RS.
experimental Algol68 frontend for GCC, written by Jose E. Marchesi.
Timeline
"A Shorter History of Algol 68"
ALGOL 68 – 3rd generation ALGOL
The Algorithmic Language ALGOL 68 Reports and Working Group members
March 1968: Draft Report on the Algorithmic Language ALGOL 68 – Edited by: Adriaan van Wijngaarden, Barry J. Mailloux, John Peck and Cornelis H. A. Koster.
October 1968: Penultimate Draft Report on the Algorithmic Language ALGOL 68 – Chapters 1-9 Chapters 10-12 – Edited by: A. van Wijngaarden, B.J. Mailloux, J. E. L. Peck and C. H. A. Koster.
December 1968: Report on the Algorithmic Language ALGOL 68 – Offprint from Numerische Mathematik, 14, 79-218 (1969); Springer-Verlag. – Edited by: A. van Wijngaarden, B. J. Mailloux, J. E. L. Peck and C. H. A. Koster.
March 1970: Minority report, ALGOL Bulletin AB31.1.1 - signed by Edsger Dijkstra, Fraser Duncan, Jan Garwick, Tony Hoare, Brian Randell, Gerhard Seegmüller, Wlad Turski, and Mike Woodger.
September 1973: Revised Report on the Algorithmic Language Algol 68 – Springer-Verlag 1976 – Edited by: A. van Wijngaarden, B. Mailloux, J. Peck, K. Koster, M. Sintzoff, C. H. Lindsey, Lambert Meertens and Richard G. Fisker.
other WG 2.1 members active in ALGOL 68 design: Friedrich L. Bauer • Hans Bekic • Gerhard Goos • Peter Zilahy Ingerman • Peter Landin • Charles H. Lindsey • John McCarthy • Jack Merner • Peter Naur • Manfred Paul • Willem van der Poel • Doug Ross • Klaus Samelson • Michel Sintzoff • Niklaus Wirth • Nobuo Yoneda.
Timeline of standardization
1968: On 20 December 1968, the "Final Report" (MR 101) was adopted by the Working Group, then subsequently approved by the General Assembly of UNESCO's IFIP for publication. Translations of the standard were made for Russian, German, French and Bulgarian, and then later Japanese and Chinese. The standard was also made available in Braille.
1984: TC97 considered ALGOL 68 for standardisation as "New Work Item" TC97/N1642. West Germany, Belgium, the Netherlands, the USSR and Czechoslovakia were willing to participate in preparing the standard, but the USSR and Czechoslovakia "were not the right kinds of member of the right ISO committees" and Algol 68's ISO standardisation stalled.
1988: Subsequently ALGOL 68 became one of the GOST standards in Russia.
GOST 27974-88 Programming language ALGOL 68 – Язык программирования АЛГОЛ 68
GOST 27975-88 Programming language ALGOL 68 extended – Язык программирования АЛГОЛ 68 расширенный
Notable language elements
Bold symbols and reserved words
The standard language contains about sixty reserved words, typically bolded in print, and some with "brief symbol" equivalents:
MODE, OP, PRIO, PROC,
FLEX, HEAP, LOC, LONG, REF, SHORT,
BITS, BOOL, BYTES, CHAR, COMPL, INT, REAL, SEMA, STRING, VOID,
CHANNEL, FILE, FORMAT, STRUCT, UNION,
AT "@", EITHERr0, IS ":=:", ISNT is notr0 ":/=:" ":≠:", OF "→"r0, TRUE, FALSE, EMPTY, NIL "○", SKIP "~",
CO "¢", COMMENT "¢", PR, PRAGMAT,
CASE ~ IN ~ OUSE ~ IN ~ OUT ~ ESAC "( ~ | ~ |: ~ | ~ | ~ )",
FOR ~ FROM ~ TO ~ BY ~ WHILE ~ DO ~ OD,
IF ~ THEN ~ ELIF ~ THEN ~ ELSE ~ FI "( ~ | ~ |: ~ | ~ | ~ )",
PAR BEGIN ~ END "( ~ )", go to, GOTO, EXIT "."r0.
Units: Expressions
The basic language construct is the unit. A unit may be a formula, an enclosed clause, a routine text or one of several technically needed constructs (assignation, jump, skip, nihil). The technical term enclosed clause unifies some of the inherently bracketing constructs known as block, do statement, switch statement in other contemporary languages. When keywords are used, generally the reversed character sequence of the introducing keyword is used for terminating the enclosure, e.g. ( IF ~ THEN ~ ELSE ~ FI, CASE ~ IN ~ OUT ~ ESAC, FOR ~ WHILE ~ DO ~ OD ). This Guarded Command syntax was reused by Stephen Bourne in the common Unix Bourne shell. An expression may also yield a multiple value, which is constructed from other values by a collateral clause. This construct just looks like the parameter pack of a procedure call.
mode: Declarations
The basic data types (called modes in Algol 68 parlance) are real, int, compl (complex number), bool, char, bits and bytes. For example:
INT n = 2;
CO n is fixed as a constant of 2. CO
INT m := 3;
CO m is a newly created local variable whose value is initially set to 3. CO
CO This is short for ref int m = loc int := 3; CO
REAL avogadro = 6.0221415⏨23; CO Avogadro's number CO
LONG LONG REAL long long pi = 3.14159 26535 89793 23846 26433 83279 50288 41971 69399 37510;
COMPL square root of minus one = 0 ⊥ 1;
However, the declaration REAL x; is just syntactic sugar for REF REAL x = LOC REAL;. That is, x is really the constant identifier for a reference to a newly generated local REAL variable.
Furthermore, instead of defining both float and double, or int and long and short, etc., ALGOL 68 provides modifiers, so that the presently common double would be written as LONG REAL or LONG LONG REAL instead, for example. The prelude constants max real and min long int are provided to adapt programs to different implementations.
All variables need to be declared, but declaration does not have to precede the first use.
primitive-declarer: INT, REAL, COMPL, COMPLEXG, BOOL, CHAR, STRING, BITS, BYTES, FORMAT, FILE, PIPEG, CHANNEL, SEMA
BITS – a "packed vector" of BOOL.
BYTES – a "packed vector" of CHAR.
STRING – a FLEXible array of CHAR.
SEMA – a SEMAphore which can be initialised with the OPerator LEVEL.
Complex types can be created from simpler ones using various type constructors:
REF mode – a reference to a value of type mode, similar to & in C/C++ and REF in Pascal
STRUCT – used to build structures, like STRUCT in C/C++ and RECORD in Pascal
UNION – used to build unions, like in C/C++ and Pascal
PROC – used to specify procedures, like functions in C/C++ and procedures/functions in Pascal
For some examples, see Comparison of ALGOL 68 and C++.
Other declaration symbols include: FLEX, HEAP, LOC, REF, LONG, SHORT, EVENTS
FLEX – declare the array to be flexible, i.e. it can grow in length on demand.
HEAP – allocate variable some free space from the global heap.
LOC – allocate variable some free space of the local stack.
LONG – declare an INT, REAL or COMPL to be of a LONGer size.
SHORT – declare an INT, REAL or COMPL to be of a SHORTer size.
A name for a mode (type) can be declared using a MODE declaration,
which is similar to TYPEDEF in C/C++ and TYPE in Pascal:
INT max=99;
MODE newmode = [0:9][0:max]STRUCT (
LONG REAL a, b, c, SHORT INT i, j, k, REF REAL r
);
This is similar to the following C code:
const int max=99;
typedef struct {
double a, b, c; short i, j, k; float *r;
} newmode[9+1][max+1];
For ALGOL 68, only the newmode mode-indication appears to the left of the equals symbol, and, most notably, the construction is made, and can be read, from left to right without regard to priorities. Also, the lower bound of ALGOL 68 arrays is one by default, but can be any integer from -max int to max int.
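For instance, a declaration with an explicit non-default lower bound might read (a minimal sketch; the identifier is illustrative):
[-3:3] INT offsets;   ¢ an array of seven integers, indexed from -3 to 3 ¢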
Mode declarations allow types to be recursive: defined directly or indirectly in terms of themselves.
This is subject to some restrictions – for instance, these declarations are illegal:
MODE A = REF A
MODE A = STRUCT (A a, B b)
MODE A = PROC (A a) A
while these are valid:
MODE A = STRUCT (REF A a, B b)
MODE A = PROC (REF A a) REF A
Coercions: casting
The coercions produce a coercee from a coercend according to three criteria: the a priori mode of the coercend before the application of any coercion, the a posteriori mode of the coercee required after those coercions, and the syntactic position or "sort" of the coercee. Coercions may be cascaded.
The six possible coercions are termed deproceduring, dereferencing, uniting, widening, rowing, and voiding. Each coercion, except for uniting, prescribes a corresponding dynamic effect on the associated values. Hence, many primitive actions can be programmed implicitly by coercions.
Context strength – allowed coercions:
soft – deproceduring
weak – dereferencing or deproceduring, yielding a name
meek – dereferencing or deproceduring
firm – meek, followed by uniting
strong – firm, followed by widening, rowing or voiding
Coercion hierarchy with examples
ALGOL 68 has a hierarchy of contexts which determine the kind of coercions available at a particular point in the program; these are the soft, weak, meek, firm and strong positions listed above.
For more details about Primaries, Secondaries, Tertiary & Quaternaries refer to Operator precedence.
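As a brief sketch of some common coercions (the identifiers are illustrative, not taken from the Report):
INT n := 1;
REAL x := n;      ¢ strong position: n is dereferenced to INT, then widened to REAL ¢
PROC zero = INT: 0;
INT m := zero;    ¢ strong position: zero is deprocedured, yielding the INT 0 ¢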
pr & co: Pragmats and Comments
Pragmats are directives in the program, typically hints to the compiler; in newer languages these are called "pragmas" (no 't'). For example:
PRAGMAT heap=32 PRAGMAT
PR heap=32 PR
Comments can be inserted in a variety of ways:
¢ The original way of adding your 2 cents worth to a program ¢
COMMENT "bold" comment COMMENT
CO Style i comment CO
# Style ii comment #
£ This is a hash/pound comment for a UK keyboard £
Normally, comments cannot be nested in ALGOL 68. This restriction can be circumvented by using different comment delimiters (e.g. use hash only for temporary code deletions).
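For example, a fragment containing ¢-style comments can be temporarily disabled by enclosing it in a pair of # delimiters, provided the fragment itself contains no # (a minimal sketch):
# disabled for testing:
INT counter := 0; ¢ running total ¢
#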
Expressions and compound statements
ALGOL 68 being an expression-oriented programming language, the value returned by an assignment statement is a reference to the destination. Thus, the following is valid ALGOL 68 code:
REAL half pi, one pi; one pi := 2 * ( half pi := 2 * arc tan(1) )
This notion is present in C and Perl, among others. Note that as in earlier languages such as Algol 60 and FORTRAN, spaces are allowed in identifiers, so that half pi is a single identifier (thus avoiding the underscores versus camel case versus all lower-case issues).
As another example, to express the mathematical idea of a sum of f(i) from i=1 to n, the following ALGOL 68 integer expression suffices:
(INT sum := 0; FOR i TO n DO sum +:= f(i) OD; sum)
Note that, being an integer expression, the former block of code can be used in any context where an integer value can be used. A block of code returns the value of the last expression it evaluated; this idea is present in Lisp, among other languages.
Compound statements are all terminated by distinctive closing brackets:
IF choice clauses:
IF condition THEN statements [ ELSE statements ] FI
"brief" form: ( condition | statements | statements )
IF condition1 THEN statements ELIF condition2 THEN statements [ ELSE statements ] FI
"brief" form: ( condition1 | statements |: condition2 | statements | statements )
This scheme not only avoids the dangling else problem but also avoids having to use BEGIN and END in embedded statement sequences.
CASE choice clauses:
CASE switch IN statements, statements,... [ OUT statements ] ESAC
"brief" form: ( switch | statements,statements,... | statements )
CASE switch1 IN statements, statements,... OUSE switch2 IN statements, statements,... [ OUT statements ] ESAC
"brief" form of CASE statement: ( switch1 | statements,statements,... |: switch2 | statements,statements,... | statements )
Choice clause example with Brief symbols:
PROC days in month = (INT year, month)INT:
(month|
31,
(year÷×4=0 ∧ year÷×100≠0 ∨ year÷×400=0 | 29 | 28 ),
31, 30, 31, 30, 31, 31, 30, 31, 30, 31
);
Choice clause example with Bold symbols:
PROC days in month = (INT year, month)INT:
CASE month IN
31,
IF year MOD 4 EQ 0 AND year MOD 100 NE 0 OR year MOD 400 EQ 0 THEN 29 ELSE 28 FI,
31, 30, 31, 30, 31, 31, 30, 31, 30, 31
ESAC;
Choice clause example mixing Bold and Brief symbols:
PROC days in month = (INT year, month)INT:
CASE month IN
¢Jan¢ 31,
¢Feb¢ ( year MOD 4 = 0 AND year MOD 100 ≠ 0 OR year MOD 400 = 0 | 29 | 28 ),
¢Mar¢ 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 ¢ to Dec. ¢
ESAC;
ALGOL 68 allowed the switch to be of either type INT or (uniquely) UNION. The latter allows strong typing to be enforced on UNION variables; cf. the union example below.
do loop clause:
[ FOR index ] [ FROM first ] [ BY increment ] [ TO last ] [ WHILE condition ] DO statements OD
The minimum form of a "loop clause" is thus: DO statements OD
This was considered the "universal" loop, the full syntax is:
FOR i FROM 1 BY -22 TO -333 WHILE i×i≠4444 DO ~ OD
The construct has several unusual aspects:
only the DO ~ OD portion is compulsory, in which case the loop will iterate indefinitely.
thus the clause TO 100 DO ~ OD will iterate only 100 times.
the WHILE "syntactic element" allows a programmer to break from a FOR loop early, e.g.
INT sum sq:=0;
FOR i
WHILE
print(("So far:",i,newline));
sum sq≠70↑2
DO
sum sq+:=i↑2
OD
Subsequent "extensions" to the standard Algol68 allowed the TO syntactic element to be replaced with UPTO and DOWNTO to achieve a small optimisation. The same compilers also incorporated:
UNTIL(C) – for late loop termination.
FOREACH(S) – for working on arrays in parallel.
Further examples can be found in the code examples below.
struct, union & [:]: Structures, unions and arrays
ALGOL 68 supports arrays with any number of dimensions, and it allows for the slicing of whole or partial rows or columns.
MODE VECTOR = [1:3] REAL; # vector MODE declaration (typedef) #
MODE MATRIX = [1:3,1:3]REAL; # matrix MODE declaration (typedef) #
VECTOR v1 := (1,2,3); # array variable initially (1,2,3) #
[]REAL v2 = (4,5,6); # constant array, type equivalent to VECTOR, bounds are implied #
OP + = (VECTOR a,b) VECTOR: # binary OPerator definition #
(VECTOR out; FOR i FROM ⌊a TO ⌈a DO out[i] := a[i]+b[i] OD; out);
MATRIX m := (v1, v2, v1+v2);
print ((m[,2:])); # a slice of the 2nd and 3rd columns #
Matrices can be sliced either way, e.g.:
REF VECTOR row = m[2,]; # define a REF (pointer) to the 2nd row #
REF VECTOR col = m[,2]; # define a REF (pointer) to the 2nd column #
ALGOL 68 supports multiple field structures (STRUCT) and united modes. Reference variables may point to any MODE including array slices and structure fields.
For an example of all this, here is the traditional linked list declaration:
MODE NODE = UNION (VOID, REAL, INT, COMPL, STRING),
LIST = STRUCT (NODE val, REF LIST next);
Usage example for UNION CASE of NODE:
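A conformity clause can be used to inspect which mode a NODE value currently holds; the following is a minimal sketch (identifiers are illustrative):
NODE n := 3.14;
CASE n IN
  (REAL r):   print(("real: ", r)),
  (INT i):    print(("int: ", i)),
  (STRING s): print(("string: ", s))
  OUT print("some other mode")
ESAC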
proc: Procedures
Procedure (PROC) declarations require type specifications for both the parameters and the result (VOID if none):
PROC max of real = (REAL a, b) REAL:
IF a > b THEN a ELSE b FI;
or, using the "brief" form of the conditional statement:
PROC max of real = (REAL a, b) REAL: (a>b | a | b);
The return value of a proc is the value of the last expression evaluated in the procedure. References to procedures (ref proc) are also permitted. Call-by-reference parameters are provided by specifying references (such as ref real) in the formal argument list. The following example defines a procedure that applies a function (specified as a parameter) to each element of an array:
PROC apply = (REF [] REAL a, PROC (REAL) REAL f):
FOR i FROM LWB a TO UPB a DO a[i] := f(a[i]) OD
This simplicity of code was unachievable in ALGOL 68's predecessor ALGOL 60.
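A call of apply might then look like this (a minimal sketch with illustrative identifiers):
[1:3] REAL data := (1.0, 2.0, 3.0);
apply (data, (REAL x) REAL: x * x)   ¢ data becomes (1.0, 4.0, 9.0) ¢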
op: Operators
The programmer may define new operators and both those and the pre-defined ones may be overloaded and their priorities may be changed by the coder. The following example defines operator MAX with both dyadic and monadic versions (scanning across the elements of an array).
PRIO MAX = 9;
OP MAX = (INT a,b) INT: ( a>b | a | b );
OP MAX = (REAL a,b) REAL: ( a>b | a | b );
OP MAX = (COMPL a,b) COMPL: ( ABS a > ABS b | a | b );
OP MAX = ([]REAL a) REAL:
(REAL out := a[LWB a];
FOR i FROM LWB a + 1 TO UPB a DO ( a[i]>out | out:=a[i] ) OD;
out)
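With these definitions in scope, both the monadic and the dyadic forms can be used; a minimal sketch (identifiers are illustrative):
[1:4] REAL data := (3.14, -1.0, 2.72, 0.0);
REAL biggest = MAX data;   ¢ monadic MAX over the whole row, yields 3.14 ¢
INT larger = 3 MAX 4;      ¢ dyadic MAX, yields 4 ¢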
Array, Procedure, Dereference and coercion operations
These are technically not operators; rather, they are considered "units associated with names".
Monadic operators
Dyadic operators with associated priorities
Specific details:
Tertiaries include names NIL and ○.
LWS: In Algol68r0 the operators LWS and ⎩ ... both return TRUE if the lower state of the dimension of an array is fixed.
The UPS and ⎧ operators are similar on the upper state.
The LWB and UPB operators are automatically available on UNIONs of different orders (and MODEs) of arrays, e.g. UPB of UNION([]INT, [,]REAL, FLEX[,,,]CHAR).
Assignation and identity relations etc
These are technically not operators; rather, they are considered "units associated with names".
Note: Quaternaries include names SKIP and ~.
":=:" (alternatively "IS") tests if two pointers are equal; ":/=:" (alternatively "ISNT") tests if they are unequal.
Why :=: and :/=: are needed: Consider trying to compare two pointer values, such as the following variables, declared as pointers-to-integer:
REF INT ip, jp
Now consider how to decide whether these two are pointing to the same location, or whether one of them is pointing to NIL. The following expression
ip = jp
will dereference both pointers down to values of type INT, and compare those, since the "=" operator is defined for INT, but not REF INT. It is not legal to define "=" for operands of type REF INT and INT at the same time, because then calls become ambiguous, due to the implicit coercions that can be applied: should the operands be left as REF INT and that version of the operator called? Or should they be dereferenced further to INT and that version used instead? Therefore the following expression can never be made legal:
ip = NIL
Hence the need for separate constructs not subject to the normal coercion rules for operands to operators. But there is a gotcha. The following expressions:
ip :=: jp
ip :=: NIL
while legal, will probably not do what might be expected. They will always return FALSE, because they are comparing the actual addresses of the variables ip and jp, rather than what they point to. To achieve the right effect, one would have to write
ip :=: REF INT(jp)
ip :=: REF INT(NIL)
Special characters
Most of Algol's "special" characters (⊂, ≡, ␣, ×, ÷, ≤, ≥, ≠, ¬, ⊃, ≡, ∨, ∧, →, ↓, ↑, ⌊, ⌈, ⎩, ⎧, ⊥, ⏨, ¢, ○ and □) can be found on the IBM 2741 keyboard with the APL "golf-ball" print head inserted; these became available in the mid-1960s while ALGOL 68 was being drafted. These characters are also part of the Unicode standard and most of them are available in several popular fonts.
transput: Input and output
Transput is the term used to refer to ALGOL 68's input and output facilities. It includes pre-defined procedures for unformatted, formatted and binary transput. Files and other transput devices are handled in a consistent and machine-independent manner. The following example prints out some unformatted output to the standard output device:
print ((newpage, "Title", newline, "Value of i is ",
i, "and x[i] is ", x[i], newline))
Note the predefined procedures newpage and newline passed as arguments.
Books, channels and files
The TRANSPUT is considered to be of BOOKS, CHANNELS and FILES:
Books are made up of pages, lines and characters, and may be backed up by files.
A specific book can be located by name with a call to match.
CHANNELs correspond to physical devices. e.g. card punches and printers.
Three standard channels are distinguished: stand in channel, stand out channel, stand back channel.
A FILE is a means of communicating between a program and a book that has been opened via some channel.
The MOOD of a file may be read, write, char, bin, and opened.
transput procedures include: establish, create, open, associate, lock, close, scratch.
position enquires: char number, line number, page number.
layout routines include:
space, backspace, newline, newpage.
get good line, get good page, get good book, and PROC set=(REF FILE f, INT page,line,char)VOID:
A file has event routines. e.g. on logical file end, on physical file end, on page end, on line end, on format end, on value error, on char error.
formatted transput
"Formatted transput" in ALGOL 68's transput has its own syntax and patterns (functions), with FORMATs embedded between two $ characters.
Examples:
printf (($2l"The sum is:"x, g(0)$, m + n)); ¢ prints the same as: ¢
print ((new line, new line, "The sum is:", space, whole (m + n, 0)))
par: Parallel processing
ALGOL 68 supports programming of parallel processing. Using the keyword PAR, a collateral clause is converted to a parallel clause, where the synchronisation of actions is controlled using semaphores. In A68G the parallel actions are mapped to threads when available on the hosting operating system. In A68S a different paradigm of parallel processing was implemented (see below).
PROC
eat = VOID: ( muffins-:=1; print(("Yum!",new line))),
speak = VOID: ( words-:=1; print(("Yak...",new line)));
INT muffins := 4, words := 8;
SEMA mouth = LEVEL 1;
PAR BEGIN
WHILE muffins > 0 DO
DOWN mouth;
eat;
UP mouth
OD,
WHILE words > 0 DO
DOWN mouth;
speak;
UP mouth
OD
END
Examples of use
Code sample
This sample program implements the Sieve of Eratosthenes to find all the prime numbers that are less than 100. NIL is the ALGOL 68 analogue of the null pointer in other languages. The notation x OF y accesses a member x of a STRUCT y.
BEGIN # Algol-68 prime number sieve, functional style #
PROC error = (STRING s) VOID:
(print(( newline, " error: ", s, newline)); GOTO stop);
PROC one to = (INT n) LIST:
(PROC f = (INT m,n) LIST: (m>n | NIL | cons(m, f(m+1,n))); f(1,n));
MODE LIST = REF NODE;
MODE NODE = STRUCT (INT h, LIST t);
PROC cons = (INT n, LIST l) LIST: HEAP NODE := (n,l);
PROC hd = (LIST l) INT: ( l IS NIL | error("hd NIL"); SKIP | h OF l );
PROC tl = (LIST l) LIST: ( l IS NIL | error("tl NIL"); SKIP | t OF l );
PROC show = (LIST l) VOID: ( l ISNT NIL | print((" ",whole(hd(l),0))); show(tl(l)));
PROC filter = (PROC (INT) BOOL p, LIST l) LIST:
IF l IS NIL THEN NIL
ELIF p(hd(l)) THEN cons(hd(l), filter(p,tl(l)))
ELSE filter(p, tl(l))
FI;
PROC sieve = (LIST l) LIST:
IF l IS NIL THEN NIL
ELSE
PROC not multiple = (INT n) BOOL: n MOD hd(l) ≠ 0;
cons(hd(l), sieve( filter( not multiple, tl(l) )))
FI;
PROC primes = (INT n) LIST: sieve( tl( one to(n) ));
show( primes(100) )
END
Operating systems written in ALGOL 68
Cambridge CAP computer – All procedures constituting the operating system were written in ALGOL 68C, although several other closely associated protected procedures, such as a paginator, were written in BCPL.
Eldon 3 – Developed at Leeds University for the ICL 1900 was written in ALGOL 68-R.
Flex machine – The hardware was custom and microprogrammable, with an operating system, (modular) compiler, editor, garbage collector and filing system all written in ALGOL 68RS. The command shell Curt was designed to access typed data similar to Algol-68 modes.
VME – S3 was the implementation language of the operating system VME. S3 was based on ALGOL 68 but with data types and operators aligned to those offered by the ICL 2900 Series.
Note: The Soviet-era computers Эльбрус-1 (Elbrus-1) and Эльбрус-2 were created using the high-level language Эль-76 (AL-76), rather than traditional assembly. Эль-76 resembles ALGOL 68; the main difference is that Эль-76 supports dynamic binding of types at the hardware level. Эль-76 was used for application, job control, and system programming.
Applications
Both ALGOL 68C and ALGOL 68-R are written in ALGOL 68, effectively making ALGOL 68 an application of itself. Other applications include:
ELLA – a hardware description language and support toolset. Developed by the Royal Signals and Radar Establishment during the 1980s and 1990s.
RAF Strike Command System – "... 400K of error-free ALGOL 68-RT code was produced with three man-years of work. ..."
Libraries and APIs
NAG Numerical Libraries – a software library of numerical analysis routines. Supplied in ALGOL 68 during the 1980s.
TORRIX – a programming system for operations on vectors and matrices over arbitrary fields and of variable size by S. G. van der Meulen and M. Veldhorst.
Program representation
A feature of ALGOL 68, inherited from the ALGOL tradition, is its different representations. There is a representation language used to describe algorithms in printed work, a strict language (rigorously defined in the Report), and an official reference language intended to be used in compiler input. The examples contain BOLD typeface words; this is the strict language. ALGOL 68's reserved words are effectively in a different namespace from identifiers, and spaces are allowed in identifiers, so this next fragment is legal:
INT a real int = 3 ;
The programmer who writes executable code does not always have an option of BOLD typeface or underlining in the code as this may depend on hardware and cultural issues. Different methods to denote these identifiers have been devised. This is called a stropping regime. For example, all or some of the following may be available programming representations:
INT a real int = 3; # the STRICT language #
'INT'A REAL INT = 3; # QUOTE stropping style #
.INT A REAL INT = 3; # POINT stropping style #
INT a real int = 3; # UPPER stropping style #
int a_real_int = 3; # RES stropping style, there are 61 accepted reserved words #
All implementations must recognize at least POINT, UPPER and RES inside PRAGMAT sections. Of these, POINT and UPPER stropping are quite common, while RES stropping is a contradiction to the specification (as there are no reserved words). QUOTE (single apostrophe quoting) was the original recommendation, while matched apostrophe quoting, common in ALGOL 60, is not used much in ALGOL 68.
The following characters were recommended for portability, and termed "worthy characters" in the Report on the Standard Hardware Representation of Algol 68 :
Worthy Characters: ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 "#$%'()*+,-./:;<=>@[ ]_|
This reflected a problem in the 1960s where some hardware didn't support lower-case, nor some other non-ASCII characters, indeed in the 1973 report it was written: "Four worthy characters — "|", "_", "[", and "]" — are often coded differently, even at installations which nominally use the same character set."
Base characters: "Worthy characters" are a subset of "base characters".
Example of different program representations
ALGOL 68 allows every natural language to define its own set of ALGOL 68 keywords. As a result, programmers are able to write programs using keywords from their native language. Below is an example of a simple procedure that calculates "the day following"; the code is given in two languages, English and German.
# Next day date - English variant #
MODE DATE = STRUCT(INT day, STRING month, INT year);
PROC the day following = (DATE x) DATE:
IF day OF x < length of month (month OF x, year OF x)
THEN (day OF x + 1, month OF x, year OF x)
ELIF month OF x = "December"
THEN (1, "January", year OF x + 1)
ELSE (1, successor of month (month OF x), year OF x)
FI;
# Nachfolgetag - Deutsche Variante #
MENGE DATUM = TUPEL(GANZ tag, WORT monat, GANZ jahr);
FUNKTION naechster tag nach = (DATUM x) DATUM:
WENN tag VON x < monatslaenge(monat VON x, jahr VON x)
DANN (tag VON x + 1, monat VON x, jahr VON x)
WENNABER monat VON x = "Dezember"
DANN (1, "Januar", jahr VON x + 1)
ANSONSTEN (1, nachfolgemonat(monat VON x), jahr VON x)
ENDEWENN;
Russian/Soviet example:
In English, ALGOL 68's case statement reads CASE ~ IN ~ OUT ~ ESAC; in Cyrillic it reads выб ~ в ~ либо ~ быв.
Some Vanitas
For its technical intricacies, ALGOL 68 needs a cornucopia of methods to deny the existence of something:
SKIP, "~" or "?"C – an undefined value always syntactically valid,
EMPTY – the only value admissible to VOID, needed for selecting VOID in a UNION,
VOID – syntactically like a MODE, but not one,
NIL or "○" – a name not denoting anything, of an unspecified reference mode,
() or specifically [1:0]INT – a vacuum is an empty array (here specifically of MODE []INT).
undefined – a standards report procedure raising an exception in the runtime system.
ℵ – Used in the standards report to inhibit introspection of certain types. e.g. SEMA
c.f. below for other examples of ℵ.
The term NIL IS var always evaluates to TRUE for any variable (but see above for correct use of IS :/=:), whereas it is not known to which value a comparison x < SKIP evaluates for any integer x.
ALGOL 68 leaves intentionally undefined what happens in case of integer overflow, the integer bit representation, and the degree of numerical accuracy for floating point. In contrast, the language Java has been criticized for over-specifying the latter.
Both official reports included some advanced features that were not part of the standard language. These were indicated with an ℵ and considered effectively private. Examples include "≮" and "≯" for templates, the OUTTYPE/INTYPE for crude duck typing, and the STRAIGHTOUT and STRAIGHTIN operators for "straightening" nested arrays and structures.
Extract from the 1973 report:
§10.3.2.2. Transput modes
a) MODE ℵ SIMPLOUT = UNION (≮ℒ INT≯, ≮ℒ REAL≯, ≮ℒ COMPL≯, BOOL, ≮ℒ bits≯,
CHAR, [ ] CHAR);
b) MODE ℵ OUTTYPE = ¢ an actual – declarer specifying a mode united
from a sufficient set of modes none of which is 'void' or contains 'flexible',
'reference to', 'procedure' or 'union of' ¢;
c) MODE ℵ SIMPLIN = UNION (≮REF ℒ INT≯, ≮REF ℒ REAL≯, ≮REF ℒ COMPL≯, REF BOOL,
≮REF ℒ BITS≯, REF CHAR, REF [ ] CHAR, REF STRING);
d) MODE ℵ INTYPE = ¢ ... ¢;
§10.3.2.3. Straightening
a) OP ℵ STRAIGHTOUT = (OUTTYPE x) [ ] SIMPLOUT: ¢ the result of "straightening" 'x' ¢;
b) OP ℵ STRAIGHTIN = (INTYPE x) [ ] SIMPLIN: ¢ the result of straightening 'x' ¢;
Comparisons with other languages
1973 – Comparative Notes on Algol 68 and PL/I – S. H. Valentine – February 1973
1973 – B. R. Alexander and G. E. Hedrick. A Comparison of PL/1 and ALGOL 68. International Symposium on Computers and Chinese Input/Output Systems. pp. 359–368.
1976 – Evaluation of ALGOL 68, JOVIAL J3B, Pascal, Simula 67, and TACPOL Versus TINMAN – Requirements for a Common High Order Programming Language.
1976 – A Language Comparison – A Comparison of the Properties of the Programming Languages ALGOL 68, CAMAC-IML, Coral 66, PAS 1, PEARL, PL/1, PROCOL, RTL/2 in Relation to Real Time Programming – R. Roessler; K. Schenk – October 1976
1977 – Report to the High Order-Language Working Group (HOLWG) – Executive Summary – Language Evaluation Coordinating Committee – Evaluation of PL/I, Pascal, ALGOL 68, HAL/S, PEARL, SPL/I, PDL/2, LTR, CS-4, LIS, Euclid, ECL, Moral, RTL/2, Fortran, COBOL, ALGOL 60, TACPOL, CMS-2, Simula 67, JOVIAL J3B, JOVIAL J73 & Coral 66.
1977 – A comparison of PASCAL and ALGOL 68 – Andrew S. Tanenbaum – June 1977.
1980 – A Critical Comparison of Several Programming Language Implementations – Algol 60, FORTRAN, Pascal and Algol 68.
1993 – Five Little Languages and How They Grew – BLISS, Pascal, Algol 68, BCPL & C – Dennis M. Ritchie – April 1993.
1999 – On Orthogonality: Algol68, Pascal and C
2000 – A Comparison of Arrays in ALGOL 68 and BLISS – University of Virginia – Michael Walker – Spring 2000
2009 – On Go – oh, go on – How well will Google's Go stand up against Brand X programming language? – David Given – November 2009
2010 – Algol and Pascal from "Concepts in Programming Languages – Block-structured procedural languages" – by Marcelo Fiore
Comparison of ALGOL 68 and C++
Revisions
Except where noted (with a superscript), the language described above is that of the "Revised Report(r1)".
The language of the unrevised report
The original language (as per the "Final Report"r0) differs in the syntax of the mode cast, and it had the feature of proceduring, i.e. coercing the value of a term into a procedure which evaluates the term. Proceduring was intended to make evaluations lazy. The most useful application could have been the short-circuited evaluation of boolean operators. In:
OP ANDF = (BOOL a,PROC BOOL b)BOOL:(a | b | FALSE);
OP ORF = (BOOL a,PROC BOOL b)BOOL:(a | TRUE | b);
b is only evaluated if a is true.
As defined in ALGOL 68, it did not work as expected, for example in the code:
IF FALSE ANDF CO proc bool: CO ( print ("Should not be executed"); TRUE)
THEN ...
against the programmer's naïve expectations the print would be executed, as only the value of the elaborated enclosed clause after ANDF was procedured. Textual insertion of the commented-out PROC BOOL: makes it work.
Some implementations emulate the expected behaviour for this special case by extension of the language.
Before revision, the programmer could decide to have the arguments of a procedure evaluated serially instead of collaterally by using semicolons instead of commas (gommas).
For example in:
PROC test = (REAL a; REAL b) :...
...
test (x PLUS 1, x);
The first argument to test is guaranteed to be evaluated before the second, but in the usual:
PROC test = (REAL a, b) :...
...
test (x PLUS 1, x);
then the compiler could evaluate the arguments in whatever order it felt like.
Extension proposals from IFIP WG 2.1
After the revision of the report, some extensions to the language have been proposed to widen the applicability:
partial parametrisation (aka Currying): creation of functions (with fewer parameters) by specification of some, but not all parameters for a call, e.g. a function logarithm of two parameters, base and argument, could be specialised to natural, binary or decadic log,
module extension: for support of external linkage, two mechanisms were proposed: bottom-up definition modules, a more powerful version of the facilities from ALGOL 68-R, and top-down holes, similar to the ENVIRON and USING clauses from ALGOL 68C
mode parameters: for implementation of limited parametrical polymorphism (most operations on data structures like lists, trees or other data containers can be specified without touching the payload).
So far, only partial parametrisation has been implemented, in Algol 68 Genie.
True ALGOL 68s specification and implementation timeline
The S3 language that was used to write the ICL VME operating system and much other system software on the ICL 2900 Series was a direct derivative of Algol 68. However, it omitted many of the more complex features, and replaced the basic modes with a set of data types that mapped directly to the 2900 Series hardware architecture.
Implementation specific extensions
ALGOL 68R(R) from RRE was the first ALGOL 68 subset implementation, running on the ICL 1900. Based on the original language, the main subset restrictions were definition before use and no parallel processing. This compiler was popular in UK universities in the 1970s, where many computer science students learnt ALGOL 68 as their first programming language; the compiler was renowned for good error messages.
ALGOL 68RS(RS) from RSRE was a portable compiler system written in ALGOL 68RS (bootstrapped from ALGOL 68R), and implemented on a variety of systems including the ICL 2900/Series 39, Multics and DEC VAX/VMS. The language was based on the Revised Report, but with similar subset restrictions to ALGOL 68R. This compiler survives in the form of an Algol68-to-C compiler.
In ALGOL 68S(S) from Carnegie Mellon University the power of parallel processing was improved by adding an orthogonal extension, eventing. Any variable declaration containing keyword EVENT made assignments to this variable eligible for parallel evaluation, i.e. the right hand side was made into a procedure which was moved to one of the processors of the C.mmp multiprocessor system. Accesses to such variables were delayed after termination of the assignment.
Cambridge ALGOL 68C(C) was a portable compiler that implemented a subset of ALGOL 68, restricting operator definitions and omitting garbage collection, flexible rows and formatted transput.
Algol 68 Genie(G) by M. van der Veer is an ALGOL 68 implementation for today's computers and operating systems.
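A source file can typically be compiled and run in one step by passing it to the interpreter, for example (the file name is a placeholder):
a68g sieve.a68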
"Despite good intentions, a programmer may violate portability by inadvertently employing a local extension. To guard against this, each implementation should provide a PORTCHECK pragmat option. While this option is in force, the compiler prints a message for each construct that it recognizes as violating some portability constraint."
Quotes
... The scheme of type composition adopted by C owes considerable debt to Algol 68, although it did not, perhaps, emerge in a form that Algol's adherents would approve of. The central notion I captured from Algol was a type structure based on atomic types (including structures), composed into arrays, pointers (references), and functions (procedures). Algol 68's concept of unions and casts also had an influence that appeared later. Dennis Ritchie Apr 1993.
... C does not descend from Algol 68 is true, yet there was influence, much of it so subtle that it is hard to recover even when I think hard. In particular, the union type (a late addition to C) does owe to A68, not in any details, but in the idea of having such a type at all. More deeply, the type structure in general and even, in some strange way, the declaration syntax (the type-constructor part) was inspired by A68. And yes, of course, "long". Dennis Ritchie, 18 June 1988
"Congratulations, your Master has done it" – Niklaus Wirth
The more I see of it, the more unhappy I become – E. W. Dijkstra, 1968
[...] it was said that A68's popularity was inversely proportional to [...] the distance from Amsterdam – Guido van Rossum
[...] The best we could do was to send with it a minority report, stating our considered view that, "... as a tool for the reliable creation of sophisticated programs, the language was a failure." [...] – C. A. R. Hoare in his Oct 1980 Turing Award Lecture
"[...] More than ever it will be required from an adequate programming tool that it assists, by structure, the programmer in the most difficult aspects of his job, viz. in the reliable creation of sophisticated programs. In this respect we fail to see how the language proposed here is a significant step forward: on the contrary, we feel that its implicit view of the programmer's task is very much the same as, say, ten years ago. This forces upon us the conclusion that, regarded as a programming tool, the language must be regarded as obsolete. [...]" 1968 Working Group minority report on 23 December 1968.
See also
ALGOL 60
ALGOL Y
ALGOL N
ALGOL 68C
C (programming language)
C++
Bourne shell
Bash (Unix shell)
Steelman language requirements
Ada (programming language)
Python (programming language)
References
Citations
Works cited
Brailsford, D. F. and Walker, A. N., Introductory ALGOL 68 Programming, Ellis Horwood/Wiley, 1979
Lindsey, C. H. and van der Meulen, S. G., Informal Introduction to ALGOL 68, North-Holland, 1971
McGettrick, A. D., ALGOL 68, A First and Second Course, Cambridge Univ. Press, 1978
Peck, J. E. L., An ALGOL 68 Companion, Univ. of British Columbia, October 1971
Tanenbaum, A. S., A Tutorial on ALGOL 68, Computing Surveys 8, 155-190, June 1976 and 9, 255-256, September 1977,
Woodward, P. M. and Bond, S. G., ALGOL 68-R Users [sic] Guide, London, Her Majesty's Stationery Office, 1972
External links
Revised Report on the Algorithmic Language ALGOL 68 The official reference for users and implementors of the language (large pdf file, scanned from Algol Bulletin)
Revised Report on the Algorithmic Language ALGOL 68 Hyperlinked HTML version of the Revised Report
A Tutorial on Algol 68, by Andrew S. Tanenbaum, in Computing Surveys, Vol. 8, No. 2, June 1976, with Corrigenda (Vol. 9, No. 3, September 1977)
Algol 68 Genie – a GNU GPL Algol 68 compiler-interpreter
Open source ALGOL 68 implementations, on SourceForge
Algol68 Standard Hardware representation (.pdf)
Из истории создания компилятора с Алгол 68
Algol 68 – 25 Years in the USSR
Система программ динамической поддержки для транслятора с Алгол 68
C history with Algol68 heritage
McJones, Paul, "Algol 68 implementations and dialects", Software Preservation Group, Computer History Museum, 2011-07-05
Web enabled ALGOL 68 compiler for small experiments
Algol programming language family
Academic programming languages
Articles with example ALGOL 68 code
Computer-related introductions in 1968
Procedural programming languages
Programming languages created in 1968
Systems programming languages
Programming languages
|
18421368
|
https://en.wikipedia.org/wiki/David%20Watson%20%28coach%29
|
David Watson (coach)
|
David Watson (born August 16, 1976) is an American football coach. He was most recently the Defensive Line Coach for the University of Southern California (USC) Trojans.
Coaching career
Watson began his college coaching career at NCAA Division II Southwest Minnesota State. While there he served as a graduate assistant working with the defensive line in 2002, transitioning to a full-time assistant in 2003 handling the defensive line, linebackers and the front seven. In 2004, he served as a Defensive Graduate Assistant for Michigan State.
Watson joined the staff at USC in February 2005, serving his first year as an offensive line graduate assistant. He was promoted to full-time assistant working with the defensive line in February 2006. Among the players he coached at USC were Sedrick Ellis and Lawrence Jackson. After a shift in defensive coaching personnel in January 2009, Watson was no longer with the program.
College career
Watson played defensive end in college, beginning his career at Minnesota. As a freshman, he earned Academic All-Big Ten honors in 1994; his career was then beset by two season-ending injuries that caused him to take medical redshirt seasons for both 1995 and 1996. He then opted to transfer to Division I-AA Western Illinois where he played for three seasons (1997–1999), earning All-Gateway Conference selection all three years while setting WIU records for season (41) and career (72) tackles for loss. His injuries finally caused him to end his playing career in 1999. He graduated from WIU in 2001.
High school career
Watson began playing football at age 7, and prepped at Bloomington Jefferson High School in Bloomington, Minnesota. A three-sport athlete, he was the Minnesota Gatorade Player of the Year in football in 1993.
Personal
Watson is married with two children. During his first two seasons with USC, former high school teammate Lane Kiffin served as Offensive Coordinator for the Trojans. His players have nicknamed him "Coach Sweaty" due to his heavy perspiration during practices.
Watson successfully fought an addiction to prescription painkillers such as Vicodin and Soma: he had developed an early addiction due to injuries while playing college football, but after weaning himself off he relapsed after injuring himself while working in construction. On May 17, 2008, Watson was arrested on suspicion of driving under the influence of prescription drugs and underwent rehabilitation treatment after encouragement from USC Head Coach Pete Carroll. Watson and USC were named in a personal injury lawsuit arising from the accident.
References
External links
David Watson, USC Bio
Lawsuit Names Former USC Assistant Coach Dave Watson at LAist.com
1976 births
Living people
American football defensive ends
Sportspeople from Bloomington, Minnesota
Minnesota Golden Gophers football players
Western Illinois Leathernecks football players
Southwest Minnesota State Mustangs football coaches
Michigan State Spartans football coaches
USC Trojans football coaches
|
11080707
|
https://en.wikipedia.org/wiki/The%20Master%20Genealogist
|
The Master Genealogist
|
The Master Genealogist (TMG) is genealogy software originally created by Bob Velke for Microsoft DOS in 1993, with a version for Microsoft Windows released in 1996. Data entry was customized through the use of user-defined events, names, and relationship types. Official support for TMG ceased at the end of 2014. Informal support continues through a number of online user groups.
Features
Designed for both normal users and genealogy professionals
Flexibly displays information
Has elaborate database-oriented support for source and citation information
Supports the inclusion of media files
Supports DNA information
Allows the user to record conflicting evidence
Allows a "Surety" of a given piece of evidence to be recorded
Supports elaborate chart-making
Supports smart importing of genealogy files. Its GenBridge technology recognizes many common genealogy data formats from other programs and imports genealogical data directly into TMG. This minimizes the loss of data transferred from other software and avoids some of the problems caused by transferring files through the limited but universal GEDCOM format.
Source types
The default source types in the standard edition are based on Wholly Genes' interpretation of Elizabeth Shown Mills's Evidence! Citation & Analysis for the Family Historian. Source templates based upon Wholly Genes' interpretation of the source types in Richard S. Lackey's Cite Your Sources are also provided. The source templates in the UK edition are based on designs by Caroline Gurney for sources commonly encountered in the United Kingdom.
Platforms
From version 2 onwards TMG was designed to run on the Windows platform but can be run on Macintosh and Linux machines using a Windows emulator.
Limitations
TMG did not support Unicode, which limited data entry to the Western European (Latin) character set.
Before TMG version 8, reports generated on computers with 64-bit operating systems were limited to plain text, HTML, and PDF output, although popular word-processor reporting formats were supported on 32-bit platforms. The print routine was rewritten for the final version (v9.05) of the program, eliminating this restriction.
Some users have complained about the limitations of the program's multilingual support in narratives. The issue centres on the replacement of personal pronouns and other individual words, which can result in output with minor grammatical errors.
TMG version history
Migration from TMG
GEDCOM
TMG has elaborate and detailed support for sources in a database format in which a single source record can be referred to by any other record. In the GEDCOM specification, sources can likewise be attached to any number of facts belonging to any number of individuals or families. Exporting a TMG database, however, involves duplicating the source details into each place where a given source is used: all of the information is exported, but the structure of each source is lost permanently.
An example is a census or ship's record that lists many members of an extended family. TMG allows each individual's entry to refer to a common source record, which can itself have an elaborate description. GEDCOM also allows every fact in that census or ship record to cite a single shared source; it is simply a matter of tagging the facts with that source. This may differ from how TMG handles sources internally, which is arguably a consequence of TMG not adhering to a standard that was well established before the program was produced.
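The shared-source structure can be illustrated with a short sketch. The following Python fragment is a minimal illustration with invented names and an invented source title, not TMG's actual export code; it prints a GEDCOM 5.5-style fragment in which one source record is defined once and is cited from the census facts of two individuals.

# Minimal illustration of a shared GEDCOM source record; the names and
# source title below are invented, and this is not TMG's export code.

def gedcom_source(xref, title):
    # A level-0 SOUR record that other records can point to.
    return [f"0 {xref} SOUR", f"1 TITL {title}"]

def gedcom_individual(xref, name, source_xref, page):
    # An INDI record whose census event cites the shared source record.
    return [
        f"0 {xref} INDI",
        f"1 NAME {name}",
        "1 CENS",                        # census event
        f"2 SOUR {source_xref}",         # pointer to the shared source
        f"3 PAGE {page}",                # citation detail for this person
    ]

lines = []
lines += gedcom_source("@S1@", "1901 Census of Example County (hypothetical)")
lines += gedcom_individual("@I1@", "John /Example/", "@S1@", "folio 12")
lines += gedcom_individual("@I2@", "Mary /Example/", "@S1@", "folio 12")
print("\n".join(lines))

A lossless export keeps the single @S1@ record and the pointers to it; an export that flattens sources instead expands the source details under each citation, which is the kind of structural loss described above.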
Non GEDCOM exports of TMG
The following options allow for some form of direct transfer of TMG files, possibly limited, apart from GEDCOM.
tmg2gramps Converts TMG 6 datafiles to a GRAMPS v2.2.6 XML - by Anne Jessel.
Forays Into Genealogy Data Base Modelling - Details how Leif Biberg Kristensen moved his data out of TMG into a program of his own creation.
History Research Environment - This community project is creating a free and open source platform-independent application for the serious or professional historical researcher. It is designed to provide an onward path for genealogists who currently use the now-discontinued program The Master Genealogist (TMG).
RootsMagic - RootsMagic can now directly import files from TMG 7.04, 8.08, or later.
File format
TMG's underlying database engine is Visual FoxPro v9 and does not support Unicode.
File Structures for TMG v9 - Last updated July 2014
TMG File Structure - Applicable to TMG v3.x, v4.x, v5.x, v6.x, v7.x
Companion products
Several software developers have created companion products specifically for TMG that enhance its functionality. These products include:
Second Site, advanced web publishing and data review application for TMG - by John Cardinal
PathWiz!, TMG exhibit file management product - by BeeSoft
GedStar Pro, a display application for Android smartphones - by GHCS Software.
GenSmarts, research advisor that analyzes users' genealogical data and offers suggestions
TMG data output is compatible with a range of geographical mapping and genealogical reporting applications that support the GEDCOM format.
Significant freeware and shareware utility applications, as well as independently published user guides and manuals, also support TMG's installed user base.
References
External links
Wholly Genes
Lee Hoffman's TMG Tips
Terry's TMG Tips
Customizing TMG™ Using It My Way, On-Line Book by ©MJH
Installers for Older TMG Versions, Alternate location for downloads by The ROOTS Users Group of Arlington, VA.
Reviews
Version 6 Review
Version 7 Review
Version 7 Review
Windows-only genealogy software
1993 software
|
24111849
|
https://en.wikipedia.org/wiki/Type%207103%20DSRV
|
Type 7103 DSRV
|
The Type 7103 deep-submergence rescue vehicle (DSRV) is a submarine rescue submersible of the People's Liberation Army Navy (PLAN) of the People's Republic of China (PRC).
Origin
The Type 7103 DSRV was intended to rescue sailors trapped in a submarine lost at sea. When the decision was made to develop a DSRV, the project was assigned the number 7103, which was subsequently also used as the type designation. It was jointly developed by a team that included the 701st Research Institute of China Shipbuilding Industry Corporation, Shanghai Jiaotong University, Harbin Shipbuilding Engineering Institute (HSEI, later reorganized as Harbin Engineering University), Huazhong Institute of Technology (HIT, later reorganized as Huazhong University of Science and Technology), and the builder of the Osprey class submersibles in Chinese service, Wuchang Shipbuilding Factory (later reorganized as Wuhan Shipbuilding Industry Corporation, Inc.). The Type 7103 DSRV was the first manned submersible built in China.
Developmental history
In April 1971, the then 34-year-old assistant professor Zhu Jimao (朱继懋, February 1932 –) of HSEI was named by the 6th Ministry as the chief designer of the Type 7103 DSRV. A 1954 Shanghai Jiaotong University graduate, Professor Zhu had been in charge of some of the tests of the Type 091 submarine, the first Chinese nuclear submarine, in the 1960s, and had successfully developed several subsystems used on the SSN, such as the underwater autopilot, the reverse-thruster control system, and the resistance-measuring system. In his work, Professor Zhu suggested a new experimental methodology for determining the hull shape that would minimize drag, based on the theory of wave drag with the transverse-wave peak at a low Froude number, and this resulted in the successful completion of the water tank used for testing. Because of his past success and achievements, Professor Zhu was assigned as the general designer of the Type 7103 DSRV at a young age, and he immediately joined his colleagues at a small test facility on the bank of Lake Tai, sacrificing his personal life, as did the rest of the team. For example, Professor Zhu did not see his daughter for more than a year after she was born. Also in April 1971, a joint team was set up at Wuchang Shipbuilding Factory, with the 701st Research Institute of China Shipbuilding Industry Corporation as the general design team and HSEI as the deputy general design team.
China had never developed anything similar before, and at the beginning many suggested taking the prudent approach of repeating the common foreign practice of developing four generations of submersibles: first, an observation submersible; second, a submersible capable of working underwater; third, a submersible enabling divers to enter and exit underwater; and finally, a fourth-generation DSRV capable of performing underwater rescue operations. Professor Zhu felt that China did not have much time and thus could not afford to repeat the foreign experience. Instead, China had to develop its own way and should jump directly into developing the fourth-generation DSRV, despite the technological difficulties. After consulting with all the experts on the team, a scaled-down, radio-controlled model was used in experiments such as mating operations, as opposed to the common practice of building a 1:1 scale model as the US did, and the result was tens of millions of yuan saved, while the time needed was much shortened. Meanwhile, the methodology Professor Zhu developed for the Type 7103 DSRV, titled “Analysis Methodology of Density in the Designs of Deep Diving Submersibles”, has since become a standard text for Chinese submersible designers.
In 1972, another scholar, Xu Yuru (徐玉如), joined the design team of the Type 7103 DSRV. Xu, also a professor at Harbin Engineering University and an expert in fluid dynamics, was first put in charge of building the scaled model of the Type 7103 DSRV for simulation. The team, as well as the entire 7103 project, soon ran into difficulties when, after three years of experiments, little progress had been made. Some of the team members were reassigned, while others eventually emigrated, but Professor Xu stayed, believing that success would come once the correct methodologies were used. After overcoming difficulties such as inadequate funding and primitive equipment, as well as his own health problems (a peptic ulcer that resulted in half of his stomach being surgically removed), Professor Xu eventually succeeded, obtaining sixty-seven important fluid-dynamic parameters of the Type 7103 DSRV while at the same time completing the associated planar-motion mechanism used in the research, the first of its kind in China.
Construction of the Type 7103 DSRV began in 1976 at Wuchang Shipbuilding Factory, and in January 1980 it was launched together with its mother ship, the Type 925 Dajiang-class submarine rescue/salvage ship (ASR/ARS). The first stage of sea trials lasted from August 1983 through October 1984, with three objectives totaling forty-one trials. Test results from these trials revealed several issues that needed to be addressed, including the need to improve search, guidance and observation capabilities, the need to improve the reliability of onboard subsystems, particularly the reverse thruster and the silver–zinc batteries, and the need to improve the mating system of the skirt. Round-the-clock work immediately began on solving these problems, and after this intensive work the upgraded prototype was ready for the next stage of trials.
The second stage of trials began in 1985; in May 1985, sea trials were conducted in the South China Sea under the supervision of Professor Xu Yuru, during which three operators succeeded in mating with submarines three times in succession under different sea states. These tests provided the basis on which the mathematical model and other analyses for the Type 7103 DSRV were established. Based on the results of these trials, Professor Xu led the team in designing the simulation and testing water tank, and also conducted further tests on models in the Songhua River. The experience gained was instrumental in successfully developing the four-degree-of-freedom (DOF) dynamic positioning system needed for the Type 7103 DSRV.
From April 1986 through June 1986, tests were conducted for three functions: deep diving, wet rescue and dry rescue. On June 3, 1986, under the command of the general designer Zhu Jimao, the Type 7103 DSRV mated with a submarine and transferred seven of the submarine's crew members to the DSRV in less than three minutes during a dry rescue mission. This success made China the second country in the world, after the United States, to achieve underwater mating between a DSRV and a submarine. In the meantime, the Type 7103 also completed wet rescue tests and set the Chinese submarine depth record when it dived to 360 meters below the surface.
Design and structure
Shanghai 3rd Steel Factory (上钢三厂) and Shanghai 8th Steel Factory (上钢八厂) teamed up to jointly develop the Type 402 steel used to construct the pressure hull of the Type 7103 DSRV, while the Type 840-S welding rod of the 3Ni–Mn–Cr–Mo series was also developed for welding the pressure hull. HSEI developed the positioning and integrated display systems for rescue operations in fast-flowing ocean currents with low visibility; the system was the first of its type in China and was subsequently adopted for other applications in Chinese oceanic exploration.
After the conclusion of the first stage of the sea trials, it was obvious that operating the DSRV in such complex submarine rescue operations could not be done manually. Automation was no longer simply desirable; it was an absolute must. Harbin Engineering University professor Bian Xinqian (边信黔), who was among the first exchange scholars China sent abroad in the early 1980s to study computer systems, PCs in particular, recommended and subsequently led a team to develop a new computer system for the Type 7103 DSRV, because the computer systems China had at the time were simply too bulky to fit into the Type 7103 DSRV and a brand-new system was needed. This was a considerable challenge for China, considering that the IBM PC had appeared only a few years earlier and China had nothing similar at the time. Under the leadership of Professor Bian Xinqian, the team succeeded in developing a mini/microcomputer system in a short span of time and adopted the system for use on the Type 7103 DSRV. This computer system won a national award in 1985, even before the completion of the Type 7103 DSRV program.
In November 1987, the Type 7103 DSRV was formally handed over to the PLAN. The success of the Type 7103 DSRV is viewed in China as proof that the country had joined the ranks of those with advanced submarine rescue technologies. For his pioneering work on the Type 7103 DSRV and on submarine rescue technology in China, the general designer, Zhu Jimao, was awarded the rank of full professor in 1984 by direct order of the State Council of the People's Republic of China, before the completion of the project, and in 1990 he was awarded the national title of Middle-aged and Young Expert with Outstanding Contributions, one of twenty-one awards he received from 1977 onward. Professor Zhu Jimao was later named the general designer of the HR-01 ROUV. In 1990, the Type 7103 DSRV won first place in the national science and technology progress award, and the design team of one of the main contractors, the then Naval Engineering Institute of Harbin Engineering University, headed by Zhang Yongyao (张诵尧), was also honored. The computerized command and control system of the Type 7103 was one of the first applications of mini/microcomputers in China, and for that pioneering work the system won a national award in 1985, before the completion of the project.
7103 training submersible
As part of the 7103 project, training submersibles based on the Type 7103 DSRV were also built. The training submersible is similar in size and differs only slightly in external appearance from the Type 7103 DSRV: the small conning tower is absent on the training submersible; instead, there is an elevated flat-top ridge in its midsection, which is used to simulate the hatches of regular submarines. The training submersible is used to simulate a submarine in distress, and the DSRV practices docking with it and transferring crews during training missions. A total of two training submersibles were built, and they can be carried by the same mother ships that carry the Type 7103 DSRVs.
Service and deployment
The Type 7103 DSRV is usually carried by the Type 925 Dajiang-class submarine rescue/salvage ship (ASR/ARS) of the PLAN, which also carries the training submersible. A total of four Type 7103 DSRVs were built, but in general only two are readily available at any given time: while one pair is deployed on ships, the other pair remains at base for maintenance and for secondary shore-based training. Under emergency conditions, all four can be made available for deployment. While at sea, each Type 925 Dajiang-class ASR/ARS normally carries only one Type 7103 DSRV, while the slot for the second is used to carry the training submersible for training at sea. During rescue missions, the training submersible would be replaced by a second Type 7103 DSRV.
Although Type 7103 DSRV and its supporting equipment are designed to be air transportable like the American Mystic class deep submergence rescue vehicle, China lacks the heavy-lifting cargo airplanes such as the Antonov An-124 or C-5 Galaxy for rapid aerial deployment. This air transportable capability is no longer present in the successor of Type 7103 DSRV designed by Harbin Engineering University.
Modernization
Because the Type 7103 DSRV is a design of the 1970s, it has rather limited capability to meet the needs of a 21st-century environment. The 701st Institute and Wuchang Shipbuilding Factory therefore jointly launched a comprehensive modernization program for all Type 7103 DSRVs, which lasted from 1994 through 1996. The most significant upgrades included the installation of an upgraded 4-DOF positioning system and of a new integrated command, control and display system. The maximum rescue depth was increased by 20% to 360 meters, and some additional tasks can now also be performed at the maximum diving depth. The size of the Type 7103 DSRV increased slightly after its upgrade in the mid-1990s.
Despite the upgrade, the Type 7103 DSRV nonetheless has limited capability because of its inherently old 1970s design. The major limitation is that it cannot dock with submarines that are tilted at large angles. Furthermore, during the docking operation the maximum speed of the ocean current must not exceed 1.5 kt and the visibility must be greater than 0.5 meter. Realizing that the Type 7103 DSRV had reached its potential and that there was not much room for further significant improvement, Harbin Engineering University developed a successor to the Type 7103 DSRV to overcome these shortcomings.
Specifications
The dimensions of the Type 7103 increased slightly after the 1994–1996 modernization, though the differences are very minor:
Length: 14.88 meter pre modernization, > 15 meter post modernization
Width: 2.6 meter
Height: 4 meter
Speed: 4 kt
Displacement: 32 ton pre modernization, > 35 ton post modernization
Maximum diving depth: > 600 meter
Maximum rescue depth: 300 meter pre modernization, 360 meter post modernization
Rescue pressures: 5 bar
Crew: 4 (1 or 2 operators, a diver, and a doctor)
Propulsion: Silver–zinc battery powered
Maximum number of rescued submariners: 22 in docking operation, 6 to 10 in wet rescue operation
References
Chinese DSRV
Submarines of the People's Liberation Army Navy
Deep-submergence vehicles
Deep-submergence rescue vehicles
Lifeboats
Crewed submersibles
1980 ships
|
22565059
|
https://en.wikipedia.org/wiki/Yuri%20Rozhdestvensky
|
Yuri Rozhdestvensky
|
Yuri Rozhdestvensky (December 21, 1926 – October 24, 1999) was a Russian rhetorician, educator, linguist and philosopher. Rozhdestvensky began his scholarly career writing on Chinese grammar; his second Ph.D. involved the study and comparison of 2,000 grammars and established several language universals; he then moved on to a comparative study of the Chinese, Indian, Arabic and European rhetorical traditions, and then to the study of the general laws of culture. Rozhdestvensky's influence continues to be powerful. In his lifetime, he directed 112 dissertations. His students now teach culture, media ecology, linguistics and communication theory courses in leading colleges in Russia.
Accumulative approach to media
Similar to the field of media ecology which was developed in the West, Rozhdestvensky studied the role of communication media in society. Rozhdestvensky developed the theory of language in the information age. It says that language in society goes through the following stages:
the emergence of language, the stage of folklore and syncretic performance. Plato's Cratylus addresses the philosophy of language for that period;
formation of canonical texts, when the language of the religious canon is studied in schools and often creates diglossia. It is the stage of written language, and its philosophy is contained in the theories of divine origin of language;
national languages, which arise after the printing press. At that stage countries receive documents and classical texts in the vernacular, and the vernacular develops into a national language. The language philosophy of that stage is contained in the theory of the social contract;
the informational age, the stage of languages spilling beyond national borders and employing electronic means for recording verbal acts.
The following classification of texts reflects the stages of language development, showing the accumulation of genres with the introduction of each new medium. Oral Genres: Pre-literary (daily dialogue, rumor, folklore) and literary (oratory (forensic, consultative, ceremonial), homily (sermon, lecture, propaganda), theater). Written genres: sphragistics, numismatics, epigraphy, paleography (personal letters, documents, literature). Printed genres: fiction, scientific literature, journalism. Mass Communication: mass information (radio, TV, newspapers), advertising, computer languages. This classification is open-ended and is meant to be a living tool – new genres which appear with the invention of a new medium will be comfortably plugged in the chart as its next level. One of the key aspects of the theory of language in the information age is that old genres do not disappear or lose their importance. On the contrary, they become invigorated and grow with the help of new technologies.
Every society has the three pre-literary genres: oral dialogue, folklore and news. For thousands of years human societies lived comfortably with those genres, perfected them and crystallized the rules of their use. In folklore we find everything needed to govern human communication: rules prescribing to listen before speaking (god gave you one mouth and two ears!); rules prohibiting direct physical and emotional harm to the listener (don't talk about the rope in the house of the hanged) and rules prescribing thinking before action (think before you leap). Folklore becomes the repository of culture because it is a form of speech that every member of a society is required to accept and heed as many times as the folklore text is directed at the listener. The main rule of communication recorded in folklore – do not harm the listener – puts important and different checks on the content of each of the above genres. Folklore never contains direct denigration of society members, and if criticism is issued, the figures of folklore are metaphorical – animals act instead of people. News may not include defamation – if it does, it becomes rumor, a scorned form of tongue flapping. Oral dialogue may not contain messages harmful for immediate participants of the conversation, but may have content denigrating a third party – as long as the conversation content remains confidential.
Only slowly and relatively recently does writing seep into the communication chart, first as seals and inscriptions on things (sphragistics and epigraphy), then as written genres, literature being one of them - a fairly late one, coming after documents and letters. With the appearance of writing, old genres receive an influx of new energy: folklore can be recorded and stored, oral dialogue can involve the exchange of notes, and news can be recorded and spread confidentially. Public speeches may now be written down before they are pronounced, and there is an expectation of greater uniformity in grammar even in traditional oral genres.
With the invention of the printing press the number of genres grows. Again, the old genres do not disappear but become invigorated by the new technology – more of everything can be published now. For instance, scientific community can start a more rigorous exchange of ideas. For another instance, writing and publishing fiction becomes a major industry.
Electronic technology brings with it mass communication. The use of computers has influenced almost every genre on the chart (e.g. documents and oral dialogue – modified and enhanced by e-mail) and added new ones, like blogs and web sites, which were not on Rozhdestvensky's chart but fit in comfortably, like new elements in Mendeleev's table.
In the information age it is important to study new genres and the influence of new media on old genres. It is also important to understand that the explosion of new technologies has happened before: with the invention of writing, with printing press, telegraph and radio. Humankind has coped with the previous technological explosions and expansions of genres, and is now coping with another step on the same road. In the Western tradition, similar ideas have been expressed by Marshall McLuhan and Neil Postman.
The Study of Culture
Rozhdestvensky founded a vibrant school of culture studies in Moscow Lomonosov University, profoundly influencing Russian intellectuals through his writing and teaching.
To make culture amenable to organized study, Rozhdestvensky identifies, classifies and describes the domains of culture common to all human societies. He builds his blueprint on John Locke’s (1690) three parts of knowledge outlined in his An Essay Concerning Human Understanding. For Locke “all that can fall within the compass of human understanding” is of three sorts: physica (natural philosophy), practica (“the skill of right applying our own powers and actions, for the attainment of things good and useful”) and semeiotike (“the nature of signs the mind makes use of for the understanding of things, or conveying its knowledge to others”). Rozhdestvensky (1996) explains how three commonly noted domains of culture—physical, material and spiritual—may be interpreted as relationships between the vectors of Locke's three domains of knowledge (physica, practica and semeiotike).
In order to make the field amenable to study in its totality, Rozhdestvensky also introduces its division into the culture of a person, of an organization and of a whole society. The culture of a person exists inside that person and is available to other persons who are in contact with him or her: it is the person's skills and knowledge. The culture of a whole society is impersonal; it is preserved in archives, museums and libraries and is (or at least should be) accessible to all members of society. Their relationship forms a matrix.
Components of culture
According to Rozhdestvensky, physical culture contains hygiene, childbirth and birth control, games, rites, diet, safety, etc.; material culture contains animal breeds, plants, cultivated soils, buildings, tools, roads and transportation, and communication technology; spiritual culture contains morality (at the tribal, religious, professional, national and global levels), beauty (applied and non-applied art), and knowledge (information and wisdom, including religion and science). The categories of spiritual culture are correlated with the categories of philosophy.
Trained as a linguist, Rozhdestvensky developed a semiotic approach to the study of culture. In the Introduction to the Study of Culture he argues (ch. 2) that signs are the carriers of culture; e.g. nature becomes part of culture when it is studied by humans, which can be done by verbal description, by codifying soils or domestic animal breeds, etc. Sometimes the process is not formal; e.g. the personal physical culture of an individual becomes formally described or recorded only in exceptional cases. Rozhdestvensky demonstrates that it is impossible to record culture outside semiotics. He argues, based on archeological, folklore and ethnographic data, that all human societies possess sixteen semiotic systems. Of these systems, four (language, rites, games and counting) are of unifying, society-wide application, and the rest are specialized systems in which non-experts can participate, but only select individuals achieve master-level skill. They are the systems of prognosis (signs, omens, fortune-telling), non-applied art (dance, music, pictures), applied art (crafts, architecture, costume), and management (commands, measures, reference points). As societies become more complex and technology develops, no new semiotic systems are added, but the existing ones grow; e.g. weather forecasters and financial engineers use advanced computers and mathematical models to predict the behavior of weather fronts or stock markets, expanding the semiotics of prognosis.
The unique value of Rozhdestvensky's approach is that it systematizes the study of culture so that societies can apply it to further their spiritual and economic well-being. He argues that mastery of culture is the condition of proper application of capital to land, demonstrates how economic wealth interplays with other ideals of humankind like truth, honor, honesty, beauty, creativity, leisure and basic human health, and explores the "art of world-wide community". Thus his work is of use not only to humanities scholars and the general public, but also to policy-makers, because it can inform their economic and social decisions with an understanding of cultural processes.
As one example of the application of Rozhdestvensky's theory to the practical dilemmas of the contemporary world, consider his study of the levels of morality. These levels are tribal (justifying murder for the sake of protection of kin or tribal territory), religious (where the murder of a non-relative is as condemned as the murder of a relative), professional (often involving exceptions to religious morality, e.g. artists who are required to prefer beauty over truth), and ecological (the level that presumes to overcome tribal, religious or professional allegiances for the sake of global well-being). Rozhdestvensky demonstrates how all of these levels co-exist and complement each other within individuals in a world "struggling with globalist forces on the one hand and localist instincts on the other" (Leach, Bridging Cultures, 2009).
The Law of Non-Destruction and Accumulation of Culture
In his Introduction to the Study of Culture Rozhdestvensky defines culture as events, facts and artifacts that are relevant for future generations because they provide rules, precedents and best practices. In that sense culture includes patterns of daily activity that constitute rules, and examples of human achievement that constitute precedents and best practices. New artifacts or events appear; they are accepted or ignored by users, and critiqued and evaluated by experts; then they are included in museum or other appropriate collections; they become systematized and codified. Then they become part of culture. The process of selection, description, codification is the process of formation of culture. In that sense there is no “high” or “low” inside culture: if something is low quality, it does not become selected by users and experts and does not become culture. It remains on the level of daily exploits, vanity and handfuls of wind, and eventually sinks into oblivion. Once an event or work of art has become part of culture, it stays forever. This is the Law Of Accumulation And Non-Destruction Of Culture.
According to this law, new facts and artifacts do not cancel out other facts that are already included in culture; facts and artifacts belonging to one time period form a stratum; new strata enhance and invigorate old ones. For example, preliterary societies use animals as a source of power (horses, oxen, donkeys, mules, etc.); ancient civilizations add mechanisms (windmills, water mills) and keep and improve, through selection and breeding, the breeds of animals used as source of power; modern civilization adds electricity and nuclear power and keeps animals as a source of power, enhancing that old stratum through attributing entertainment value to it (e.g. hay rides and sleigh rides at $3 per adult at a historic farm). For another example, every preliterary society has oral speech, news and folklore; when writing is invented, old genres become invigorated and grow with the help of the new technology: folklore can be recorded and stored, oral dialogue can involve exchange of notes, news can be recorded and spread confidentially, public speeches now may be written down before they are pronounced, and there is an expectation of greater uniformity in grammar even in traditional oral genres. With the invention of the printing press manuscripts receive standard orthography, footnotes, tables of content, i.e. printing adds onto the achievements of the written stage; and certainly with the introduction of electronic means oral genres are not cancelled out but are enhanced (we can now talk on the phone or even a videophone), written genres are enhanced (documents, letters and memos receive additional formatting and can be exchanged faster) and printed genres are enhanced (e.g. many texts can be accessed easier, searched for specific expressions and abundantly commented).
An important task of society is to acculturate the young. The degree of cultural knowledge differentiates generations. It is confirmed in the initiation rites that the young ones pass. All peoples have initiation rites to mark a person's passage into the category of adults; all of those rites include a course of study and some testing that needs to be completed before the rites are administered. The new generation needs to be “assimilated” into the culture of their parents in order to be able to function. Clearly, each generation has a characteristic behavior; each generation re-assesses “old” culture.
The new generations almost never criticize or reject the physical culture of their society: they accept uncritically what coaches and teachers present to them; the innovations in physical culture come from the old generation – teachers and coaches.
Material culture is not so lucky: the new generations will re-assess existing agricultural practices, technologies, buildings, materials, etc. and try to approach them differently, or introduce new additions. However, the young ones are usually respectful of the old generation's material culture because they need to use it until they can invent something better.
The worst lot falls to spiritual culture: learning it is a long and dull process, so it is easier to start creating your own, anew, rejecting the “obsolete”. Every new generation goes through this cycle: they create their own new works of art and behavior precedents. For instance, modernism negated preceding culture and claimed to start a “new era”. They create a new style. In this sense, every new style is, to a degree, demonstration of ignorance. Often new styles are based on inventions of a new technology or on an innovation in division of labor.
There is a dialogical relationship between “culture” – things that have already been selected as rules and precedents – and current goings-on of creative work. They feed off each other: new items are born in the imagination and intuition of an artist; those new items are successful if they find their place in relation to the tradition; tradition is enriched by new items. It is the role of users and experts to determine what products of the new aesthetic become part of tradition, i.e. overall human culture (for the future generations to try to overthrow). It is the role of educators to include those items in the curriculum and to adjust curriculum accordingly. It is certainly the role of educators to preserve in the curriculum everything important that has been a part of culture.
Educational, usage-related and physical vandalism
Destruction of culture is called vandalism. Vandalism can be physical, when facts of culture are physically lost. More interesting are two other forms. Usage-related vandalism means that access to facts of culture becomes limited or hampered. Educational vandalism means that knowledge is not passed on, or education loses prestige, or schools are stagnated in their curriculum and methods. All three forms of vandalism will wreck a country's culture. Educational vandalism can be avoided if the curriculum includes the old achievements, timely includes new achievements and teaches old subjects through the prism of new stylistic interests.
Influence of mass media on culture and style
In chapter 4 of Philosophy of Language. Study of Culture and Didactics, Rozhdestvensky describes the cultural and stylistic processes caused by mass media. In the 20th century mass culture became the third field in human history (after the military and sports) to branch off into a self-contained pocket with its own rituals, language, management, prognosis, and so on. However, unlike the military and sports, mass culture is inexorably linked to mass information (being dependent on TV, newspapers and radio), which is by definition not a cultural but a transient text. This makes mass culture also a transient phenomenon oriented at the style of one particular generation (though it does not preclude the possibility of a timeless piece being created in that field).
Mass advertisement is, obviously, also dependent on mass information and is influenced by its collage and figurative structure. Its goal is to cause a desire in recipients. To cause a desire it is necessary to use semiotic signs to appeal to the rational, the emotional and the subconscious. This is why mass advertisement turns to research in animal psychology. It addresses all levels of zoological behavior in humans: tropism and taxis, which are behavior patterns common to all forms of life, e.g. viruses and bacteria moving to parts of the Petri dish that contain more broth; knee-jerk reflexes, present in all animals with a nervous system; instincts, i.e. innate complex behavior programs, like those determining reproductive or social behavior in insects; conditional reflexes, like the salivation of Pavlov's dogs; rational behavior demonstrated in an individual's learning, e.g. a mouse memorizing through trial and error the shortest way to food in a labyrinth; and finally conscious behavior, i.e. solving new problems in new situations, e.g. a cat rolling its toy under a closed door and walking around through a second door to reach the toy. Common to all advertisement is attracting attention through paradoxical images. Also common to all forms of advertisement is the change in the notion of “value”: from a philosophical and ideological notion it has changed into an object of desire. Advertisement creates values in the sense that it causes recipients to desire additional objects.
Mass games are lotteries, TV word-guessing games, trivia games, erudition competitions, and others. While games have a folk origin, mass games depend on mass information. The games include prizes, i.e. financial interests, not even excepting children's athletic competitions. This creates an atmosphere of gambling and chance, where taking a risk or subjecting oneself to public embarrassment may suddenly result in a windfall. Many such games are fairly plebeian, e.g. eating competitions or public undressing and dressing; all are based on a desire for a chance reward. Amplified by mass media they create an atmosphere of primitiveness and of possible luck.
Together, mass culture, mass advertisement and mass games produce the feeling of liberation and a state of mind in which success is necessary, is achieved through a gamble without effort, and can be achieved through a new gamble if this one didn't work out. This combination contrasts with the scary dark news of mass information. On the other side of the screen there are someone else's disasters like airplane crashes, famines, arms race, etc.; those gloomy events underscore the joy of entertainment, game, freedom and intuitive good guesses. Life, ideally, is shaped as a sequence of stages: carefree babyhood; studying for the sake of future earnings; earnings; and, thanks to the earnings, carefree idleness after retirement. People steeped in mass media are seized by a desire to acquire valuables and by the fear of losing those valuables as accidentally as they were acquired.
Beyond such state of mind there is real life with family, creativity, professional achievement. This serious life requires consistent work and real feelings; it has its foundation in real culture, i.e. in rules and precedents selected in history. Productive activity is impossible for individuals limited to mass media culture oriented at quick changes of fashion and not familiar with real culture. Of all new technologies, computer programs reflect real productive activity. Computer programs can participate in almost all semiotic systems that service human culture, e.g. computer design (applied arts), computer graphics and music (non-applied arts), computer games, computer simulation (prognosis). Two semiotic systems do not use computer programming: rites and dance. Those two are not eligible for computer help because their material carrier is the human body which so far cannot be blended with computer hardware. Thus there is an opposition between transient, fleeting products of mass media and real culture that forms the foundation of real life.
One may say that the aesthetic of the new generation and the modern development of culture have been influenced by the tragic mood of mass information and the euphoric mood of mass entertainment. Together they produce a few effects. Parenthetically, we should not count acquisitive impulses and other physiological effects among serious cultural shifts: they are better classified as curable diseases of mass media consumers; the cure lies in turning off the TV for a few days. Valid modern developments include these: heightened interest in religion as the bearer of more solid moral values (people need an anchor, after all); heightened interest in health and activity in adulthood and old age (valeology); heightened interest in games and winning (game-ology?); heightened interest in world culture, its logic and typology (culture studies). Rozhdestvensky calls the former three “stylistic interests”. The latter appears because it may be helpful in predicting future style changes. Rozhdestvensky offers the following chain of reasoning: ecological and valeological interests are often in contradiction with the game interests; the contradiction may be resolved through the study of style; even the most sophisticated mathematical models cannot predict future style changes; however, the systematization of culture and its typological and comparative study may give us tools to see the laws of style shift.
Related schools of thought
Rozhdestvensky's approach to the study of culture is culturology (). In the Eastern European tradition it is a widely developed field of study. In the Western tradition, a similar approach was proposed by Leslie White, though White developed it in a different direction and did not propose a comprehensive structure of human culture encompassing physical, material and spiritual as equal components. Cultural studies, the approach based on the work of Stuart Hall, Michel Foucault, Raymond Williams and others, calls for sensitivity to points of view and to the marginalization of "the other", especially considering that educators to a large extent are able to control students' perspectives. This approach is very different from the systematic approach to the underlying patterns of culture advocated and developed by Rozhdestvensky.
Books
Rozhdestvensky Yuri. Typology of the Word. Moscow: Vyshaya Shkola, 1969.
Amirova, Olhovikov, Rozhdestvensky. Essays in the History of Linguistics. Moscow: Science, 1975
Rozhdestvensky Yuri. Introduction to General Language Study. Moscow: Vyshaya Shkola, 1979
Rozhdestvensky, Yuri., Sychev O. General scientific lexicon in automated translation. International Forum on Information and Documentation, vol 9 #2 p. 23-27, 1984.
Volkov, Marchuk, Rozhdestvensky. Introduction to Applied Linguistics. Moscow: Moscow State Univ. Press, 1988
Rozhdestvensky Yuri. Lectures in General Linguistics. Moscow: Vyshaya Shkola, 1990
Rozhdestvensky Yuri. Introduction to the study of culture. Moscow, CheRo: 1996 http://www.eastwest.edu/wp-content/uploads/2015/01/Rozhdestvensky-Introduction-to-the-study-of-Culture-Intro-and-ch-1.pdf
Rozhdestvensky Yuri. General Language Study. Moscow, Fund New Millennium: 1996 http://www.eastwest.edu/wp-content/uploads/2015/01/Rozhdestvensky-Language-Theory-and-the-Problem-of-Language-Development.pdf
Rozhdestvensky, Yuri. Theory of Rhetoric, Moscow: Dobrosvet, 1997.
Rozhdestvensky Yuri. Principles of modern rhetoric. Moscow: Fund New Millennium, 1999
Rozhdestvensky Yuri. Philosophy of Language. Study of Culture and Didactics. Moscow: Grant, 2003
References
External links
Linguists from Russia
Linguists from the Soviet Union
20th-century linguists
Moscow State University faculty
Academicians of the Russian Academy of Education
1926 births
1999 deaths
|
49833168
|
https://en.wikipedia.org/wiki/Josh%20Hagins
|
Josh Hagins
|
Josh Hagins (born March 17, 1994) is an American professional basketball player who last played for Peristeri of the Greek Basket League. He played college basketball for the Little Rock Trojans.
Early life and high school
Hagins was born in Washington, D.C., the son of Janell and Michael Larkin. He comes from a military family that lived in Little Rock, Arkansas while stationed in the region, and he has an older brother, D.J. Berry. At Airline High School in Bossier City, Louisiana, his current hometown, he was named District, City and Parish MVP. He guided the Vikings to three district titles and was an All-State selection during that span, and led the Vikings to a 24–9 record (6–0 in district) as two-time district champions, reaching the state quarterfinals.
College career
Hagins enrolled at the University of Arkansas at Little Rock in 2012. He ended his freshman year third on the team in points while leading it in assists, playing in 31 games and starting 12.
As a sophomore, Hagins again led his team in assists while finishing second in points and third in rebounds. He ranked sixth in the conference in assists (3.7 apg), 10th in steals (1.2 spg), 12th in 3-pointers made per game (1.5), 13th in blocks (0.7 bpg) and seventh in assist-to-turnover ratio (1.6).
He had a breakout year as a junior, earning All-Sun Belt third-team honors. This time he led the team in both points and assists and was fourth in rebounds.
Among Sun Belt Conference players, he ranked 14th in scoring, eighth in assists, third in free-throw percentage, seventh in steals and fifth in assist-to-turnover ratio. He scored in double figures 20 times on the season, including 14 of the final 16 games, and led UALR with 93 assists and a career-high 52 steals.
His senior year was his best: as the focal point of the offense, he led the team in points and assists and was named to the All-Sun Belt First Team for the first time. He guided his team to the Sun Belt Tournament title, despite a quiet performance in the final, which Little Rock won 70–50 over UL Monroe.
Little Rock made it to the 2016 NCAA tournament, their first appearance since 2011, and faced the Big Ten's Purdue. Little Rock was down 65–52 with 3:33 left, and Hagins tied the game with a 3-pointer to send it to overtime. The Trojans eventually won 85–83 in two overtimes, their first tournament win since 1986. Hagins became the third player to score 30 or more points against Purdue in the NCAA tournament, after Isiah Thomas and Lew Alcindor.
Professional career
After going undrafted in the 2016 NBA draft, Hagins joined the Sacramento Kings for the 2016 NBA Summer League. On October 11, 2016, he signed with KK Bosna Royal of the Bosnian League. In 10 games, he averaged 10.4 points, 3.3 assists and 3.4 rebounds in 22.4 minutes. On January 26, he was acquired by the Maine Red Claws of the NBA Development League.
On July 31, 2020, he signed with Telekom Baskets Bonn of the Basketball Bundesliga (BBL).
On February 21, 2021, Hagins moved to Iraklis of the Greek Basket League. In 9 games, he averaged 14 points (44% from the field, 41% from beyond the arc, 85% from the free throw line), 3 rebounds, and 5 assists, in 29 minutes per contest. On July 25, 2021, he signed with fellow Greek club Peristeri. On December 2 of the same year, he mutually parted ways with Peristeri.
Personal life
Hagins has two brothers, D.J. Berry and Kerrick Larkin, and a sister, Kerita Larkin. He is single and majored in health science.
References
1994 births
Living people
African-American basketball players
American expatriate basketball people in Bosnia and Herzegovina
American expatriate basketball people in Cyprus
American expatriate basketball people in Germany
American expatriate basketball people in Greece
American men's basketball players
Basketball players from Louisiana
Guards (basketball)
Iraklis Thessaloniki B.C. players
Keravnos B.C. players
Little Rock Trojans men's basketball players
Maine Red Claws players
Peristeri B.C. players
Reno Bighorns players
Sportspeople from Bossier City, Louisiana
Telekom Baskets Bonn players
21st-century African-American sportspeople
|
34710720
|
https://en.wikipedia.org/wiki/Acer%20Iconia%206120
|
Acer Iconia 6120
|
The Acer Iconia Tab 6120 is a touch-screen tablet computer made by Acer, first unveiled at an Acer press conference in New York City on 23 November 2010.
The device was released in January 2011 in the United States, and earlier in Europe, though the exact dates are not known. In Europe it was priced at €1500 and £1500, while the price in the US had not been set at the time of its release.
Design and software
It is constructed from a pair of LCD screens attached with a hinge in the manner of a traditional laptop, but with a second screen replacing the keyboard. The device runs Windows 7 together with a proprietary Acer operating layer for the touchscreen interface. The Iconia also runs Acer programs for accessing multimedia and other content, including Alive, a program for downloading content such as music, videos and applications, and Clear.Fi, designed to enable content to be shared among multiple devices over the internet.
Specifications
The Acer Iconia is equipped with a 640 GB hard drive and four gigabytes of RAM. Its processor is an Intel Core i5-480M running at 2.67 GHz. There are two USB 2.0 ports, a single USB 3.0 port, and an HDMI-out port. A 1.3-megapixel webcam, 802.11n Wi-Fi and Bluetooth connectivity are also provided.
Reviews
Initial reactions to the device were mixed, with both CNET and Engadget commenting positively on Iconia's touch-screen software, though the keyboard was criticized and some features were considered to be "perhaps an unnecessary visual gloss." The screens were said to be glossy and prone to glare, though clear in good conditions.
Alternate Operating System
Linux
The integrated GPU is well supported starting with version 3.1 of the Linux kernel. Before that, one might need to disable kernel mode setting (KMS).
Proper support of the second screen was integrated in the 3.2-rc6 version of the Linux kernel, making it available for the 3.2 release in December 2011.
See also
Acer Tablet
ASUS Eee Pad Transformer
Motorola Xoom
T-Mobile G-Slate
Samsung Galaxy Tab 10.1
References
Iconia 6120
Tablet computers
|
37943112
|
https://en.wikipedia.org/wiki/Path%20protection
|
Path protection
|
Path protection in telecommunications is an end-to-end protection scheme used in connection-oriented circuits in different network architectures to protect against the inevitable failures on service providers' networks that might affect the services offered to end customers. Any failure occurring at any point along the path of a circuit will cause the end nodes to switch the traffic to a new route. Finding paths with protection, especially in elastic optical networks, was considered a difficult problem, but an efficient and optimal algorithm was proposed.
Other techniques to protect telecommunications networks against failures are: Channel Protection, Link Protection, Segment Protection, and P-cycle Protection.
Path protection in ring-based networks
In ring-based network topologies, where the Add-Drop Multiplexers (ADMs) form a closed loop, there is essentially one path-related ring protection scheme available: the Unidirectional Path-Switched Ring (UPSR) architecture. In SDH networks, the equivalent of UPSR is Sub-Network Connection Protection (SNCP). Note that SNCP does not assume a ring topology and can also be used in mesh topologies.
In UPSR, the data is transmitted in both directions, clockwise and counter-clockwise, from the source ADM. At the destination, both signals are compared and the better of the two is selected. If a failure occurs, the destination simply switches to the unaffected path.
Path protection in optical mesh network
Circuits in optical mesh networks can be unprotected, protected against a single failure, or protected against multiple failures. The end optical switches of protected circuits are in charge of detecting the failure, in some cases requesting cross-connections from digital cross-connects or optical cross-connects in intermediate devices, and switching the traffic to the backup path. When the primary and backup paths are calculated, it is important that they are at least link-diverse, so that a single link failure does not affect both of them at the same time. They can also be node-diverse, which offers more protection in case a node failure occurs; depending on the network, the primary and backup paths sometimes cannot be provisioned to be node-diverse at the edge (ingress and egress) nodes.
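The link-diversity requirement can be illustrated with a small sketch. The following Python code is a naive illustration over an invented topology, not a vendor routing algorithm: it computes a primary shortest path by hop count and then looks for a backup after removing the primary's links. Production tools generally use an optimal method such as Suurballe's algorithm, since this two-step approach can fail to find a disjoint pair on some topologies even when one exists.

from collections import deque

def shortest_path(links, src, dst):
    # Breadth-first search over an undirected link list; returns a node list or None.
    adjacency = {}
    for a, b in links:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for nxt in adjacency.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

def link_disjoint_pair(links, src, dst):
    # Primary = shortest path; backup = shortest path that avoids the primary's links.
    primary = shortest_path(links, src, dst)
    if primary is None:
        return None, None
    used = {frozenset(hop) for hop in zip(primary, primary[1:])}
    remaining = [link for link in links if frozenset(link) not in used]
    return primary, shortest_path(remaining, src, dst)

# Hypothetical six-node mesh between ingress A and egress Z.
topology = [("A", "B"), ("B", "Z"), ("A", "C"), ("C", "D"), ("D", "Z"), ("B", "C")]
primary, backup = link_disjoint_pair(topology, "A", "Z")
print("primary:", primary)   # ['A', 'B', 'Z']
print("backup: ", backup)    # ['A', 'C', 'D', 'Z']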
There are two types of path protection in Optical Mesh Networks: Dedicated Backup Path Protection and Shared Backup Path Protection
Dedicated backup path protection or DBPP (1+1)
In DBPP, both the primary and the backup path carry the traffic end to end, and it is up to the receiver to decide which of the two incoming signals it will use; this is the same concept as in ring-based path protection. Since the optics along both paths are already active, DBPP is the fastest protection scheme available, usually on the order of a few tens of milliseconds, because no signaling is involved between the ingress and egress nodes; the egress node only needs to detect the failure and switch the traffic over to the unaffected path. Being the fastest protection scheme also makes it the most expensive, normally using more than double the capacity provisioned for the primary, because the backup path is usually longer due to the link and/or node diversity requirement.
Shared backup path protection or SBPP
The concept behind this protection scheme is to share a backup channel among different link/node-diverse primary paths. In other words, one backup channel can be used to protect several primary paths; for example, a single backup link between nodes S and T can protect both the A–B and C–D primaries. Under normal operation, assuming no failure in the network, the traffic is carried on the primary paths only; the shared backup path is used only when one of those primary paths fails.
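The sharing rule can be illustrated with the following sketch (hypothetical class and method names, not any vendor's provisioning code): primaries may share a backup channel only when they are mutually link-disjoint, so a single link failure can displace at most one of them onto the shared backup.

```python
class SharedBackupPool:
    """Toy bookkeeping for shared backup path protection (SBPP)."""

    def __init__(self):
        self.protected = []   # (name, set_of_links) for primaries sharing the backup

    @staticmethod
    def links(path):
        return {frozenset(pair) for pair in zip(path, path[1:])}

    def can_share(self, new_path):
        new_links = self.links(new_path)
        return all(not (new_links & links) for _, links in self.protected)

    def add_primary(self, name, path):
        if not self.can_share(path):
            raise ValueError(f"{name} shares a link with an already protected primary")
        self.protected.append((name, self.links(path)))

pool = SharedBackupPool()
pool.add_primary("A-B", ["A", "B"])
pool.add_primary("C-D", ["C", "D"])        # disjoint from A-B: may share the backup
# pool.add_primary("A-D", ["A", "B", "D"])  # would raise: reuses link A-B
```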
There are two approaches to provisioning or reserving backup channels. The first is the failure-dependent approach, also known as restoration, in which the backup path is calculated in real time after the failure occurs. This technique was found in early versions of mesh networks; in today's optical mesh networks it can still be used as a re-provisioning technique to help recover from a second failure when the backup resources are already in use. The downside of restoration as a protection technique is that the recovery time is not fast enough.
The second approach is to have a predefined backup path computed before the failure. This approach is failure-independent and takes less processing time to recover than the failure-dependent approach, since the backup path is calculated together with the primary at provisioning time. Even though the backup path is calculated in advance, it is not assigned to a specific circuit before a failure occurs; cross-connect requests are initiated after the fact on a first-come, first-served basis. Since this approach can only protect against a single failure at a time, if a second primary path fails while at least a portion of its backup path is already in use, that path cannot recover unless a restoration technique is in place for such cases.
A general downside to both of the above approaches is that, when a link carrying several paths fails, each path on that link is recovered individually. The total time for the last path on that link to return to service over its secondary path is therefore the sum of all the other recovery times plus its own, which could affect the service level agreement (SLA) committed to the customer.
Path protection in MPLS networks
The Multi-Protocol Label Switching (MPLS) architecture is described in RFC 3031. It is a packet-based network technology that provides a framework for recovery through the creation of point-to-point paths called label switched paths (LSPs). An LSP is created between a head-end and a tail-end label switch router (LSR); the head-end is the ingress (input) router of the path and the tail-end is the egress (output) router.
There are a few protection techniques for MPLS that are very similar in general concept to those for optical mesh networks, such as link protection (e.g., MPLS local protection) and path protection. The path protection schemes for MPLS are as follows:
Packet protection scheme (1+1)
This protection scheme is similar to the ring-based path protection and Dedicated Backup Path Protection (DBPP) schemes described before. The same traffic is transmitted by the head-end LSR over two link- and/or node-disjoint LSPs, a primary and a backup. The tail-end LSR receives and compares both copies; when a failure occurs, the tail-end detects it and switches the traffic to the secondary LSP. As with DBPP in optical mesh networks, no signaling is involved in this protection scheme. This technique is the simplest and fastest of all, but because it reserves bandwidth and transmits packets on both LSPs, it takes away bandwidth that could be shared and used by other LSPs.
Global path protection (1:1)
In this protection scheme, a primary and a backup LSP are computed and set up at provisioning time, before any failure. The backup LSP does not necessarily need to have the same bandwidth constraint as the primary; it is possible to reserve less bandwidth on the backup LSP and still not incur packet loss when it is in use, because the bandwidth of a link is shared among the different LSPs; this is also the reason the previously explained scheme is not preferred. The backup LSP does not carry traffic unless the primary LSP fails. When this occurs, a fault indication signal (FIS) is sent back to the head-end LSR, which immediately switches the traffic to the backup LSP. The drawback of this protection scheme is that the longer the LSPs, the longer the recovery time, because of the travel time of the FIS notification.
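A minimal sketch of the head-end behaviour described above is shown below. The class and method names are illustrative, and real implementations carry failure notifications through the control plane rather than a direct method call.

```python
class HeadEndLSR:
    """Toy model of 1:1 global path protection at the head-end LSR.

    Traffic normally follows the primary LSP; when a fault indication signal
    (FIS) arrives for the primary, the head-end switches to the
    pre-established backup LSP.
    """

    def __init__(self, primary_lsp, backup_lsp):
        self.primary = primary_lsp
        self.backup = backup_lsp
        self.active = primary_lsp

    def on_fault_indication(self, failed_lsp):
        if failed_lsp == self.primary:
            self.active = self.backup   # protection switch
        # a fault reported on the idle backup needs no action

    def forward(self, packet):
        return (self.active, packet)    # label-switch onto the active LSP

lsr = HeadEndLSR(primary_lsp="LSP-1", backup_lsp="LSP-2")
print(lsr.forward("payload"))           # ('LSP-1', 'payload')
lsr.on_fault_indication("LSP-1")
print(lsr.forward("payload"))           # ('LSP-2', 'payload')
```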
See also
SONET
Add-drop Multiplexer (ADM)
Optical Mesh Networks
Shortest Path Problem
K Shortest Path Routing
Link Protection
Segment Protection
Shared Risk Resource Group
MPLS
Service Level Agreement
References
Further reading
An Overview of DWDM Networks
"Path Routing in Mesh Optical Networks", by Eric Bouillet, Georgios Ellinas, Jean-Francois Labourdette, and Ramu Ramamurthy , ,
"Network Recovery: Protection and Restoration of Optical, SONET-SDH, IP, and MPLS", by Jean-Philippe Vasseur, Mario Pickavet, and Piet Demeester
"Gmpls Technologies: Broadband Backbone Networks and Systems" by Naoaki Yamanaka, Kohei Shiomoto, and EIJI AUTOR OKI
Jean-Philippe Vasseur, Mario Pickavet, and Piet Demeester. Network Recovery, Protection and Restoration of Optical, SONET-SDH, IP, and MPLS. Morgan Kaufmann Publishers, 2004.
Addressing Transparency in DWDM mesh survivable networks by Sid Chaudhuri, Eric Bouillet, and Georgios Ellinas
Shared Path Protection in DWDM Mesh Networks
The Multiple Path Protection of DWDM Backbone Optical Networks
RFC-3031
G.841
Telecommunications
Network architecture
Network protocols
|
19763183
|
https://en.wikipedia.org/wiki/HTree
|
HTree
|
An HTree is a specialized tree data structure for directory indexing, similar to a B-tree. It has a constant depth of either one or two levels, a high fanout factor, uses a hash of the filename, and does not require balancing. The HTree algorithm is distinguished from standard B-tree methods by its treatment of hash collisions, which may overflow across multiple leaf and index blocks. HTree indexes are used in the ext3 and ext4 Linux filesystems, and were incorporated into the Linux kernel around version 2.5.40. HTree indexing improved the scalability of Linux ext2-based filesystems from a practical limit of a few thousand files to the range of tens of millions of files per directory.
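The core idea of a constant-depth, hash-keyed directory index can be sketched as follows. This is purely conceptual Python: the hash function, block size, and helper names are illustrative and do not match the ext3/ext4 on-disk format or its collision handling.

```python
import hashlib
from bisect import bisect_right

LEAF_CAPACITY = 4   # entries per leaf "block" (tiny, for demonstration only)

def name_hash(name):
    # Stand-in for the filesystem's directory hash function.
    return int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "big")

class HTreeSketch:
    def __init__(self):
        self.boundaries = [0]      # smallest hash handled by each leaf
        self.leaves = [dict()]     # leaf blocks: {filename: inode}

    def _leaf_index(self, h):
        return bisect_right(self.boundaries, h) - 1

    def insert(self, name, inode):
        i = self._leaf_index(name_hash(name))
        self.leaves[i][name] = inode
        if len(self.leaves[i]) > LEAF_CAPACITY:       # split the full leaf
            items = sorted(self.leaves[i].items(), key=lambda kv: name_hash(kv[0]))
            mid = len(items) // 2
            self.leaves[i] = dict(items[:mid])
            self.leaves.insert(i + 1, dict(items[mid:]))
            self.boundaries.insert(i + 1, name_hash(items[mid][0]))

    def lookup(self, name):
        # Constant-depth lookup: hash the name, pick the leaf, scan one block.
        return self.leaves[self._leaf_index(name_hash(name))].get(name)

d = HTreeSketch()
for n in range(20):
    d.insert(f"file{n}.txt", 1000 + n)
print(d.lookup("file7.txt"))   # -> 1007
```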
History
The HTree index data structure and algorithm were developed by Daniel Phillips in 2000 and implemented for the ext2 filesystem in February 2001. A port to the ext3 filesystem by Christopher Li and Andrew Morton in 2002 during the 2.5 kernel series added journal based crash consistency. With minor improvements, HTree continues to be used in ext4 in the Linux 3.x.x kernel series.
Use
ext2: HTree indexes were originally developed for ext2, but the patch never made it into the official branch. The dir_index feature can be enabled when creating an ext2 filesystem, but the ext2 code will not act on it.
ext3: HTree indexes are available in ext3 when the dir_index feature is enabled.
ext4: HTree indexes are turned on by default in ext4. This feature was implemented in Linux kernel 2.6.23. HTree indexes are also used for file extents when a file needs more than the 4 extents stored in the inode.
PHTree
PHTree (Physically stable HTree) is a derivation intended as a successor. It fixes all the known issues with HTree except for write multiplication. It is used in the Tux3 filesystem.
References
External links
A Directory Index for Ext2 (which describes the HTree data structure)
HTree
HPDD Wiki - Parallel Directory High Level Design
Disk file systems
B-tree
Linux
|
64771765
|
https://en.wikipedia.org/wiki/Slowly%20%28app%29
|
Slowly (app)
|
Slowly (stylized as SLOWLY) is a geosocial networking application that allows users to exchange delayed messages or "letters". The time taken by a message to be delivered depends on the distance between the sender and the recipient.
History
Slowly was released on iOS in 2017 and its version 2.0 on Android a year later. It was featured as App Store's "App of the Day" in over 30 regions worldwide. It was also awarded 2019's "Best Breakthrough App" by Google Play. By January 2019, it had reached 1 million users.
The Slowly web application was launched in version 5.0. In 2020, version 6.0 of the app introduced features such as a dark mode, the ability to exchange audio notes, and the ability to pass on a letter without it affecting the user's sent:received ratio. A paid membership called SLOWLY PLUS was also launched, allowing members to double their quotas for the number of friends, excluded topics and regions, and photo sharing.
Slowly can also be used as a web version by scanning a barcode generated through the app.
Operation
Users are required to create a nickname and an avatar to get started. They can either manually browse user profiles or be "auto-matched" by an algorithm, and can search for people using various filters, such as common interests or particular countries. The app enforces 'House Rules', a set of codes of conduct that users must accept at registration and abide by. The "letters" take anywhere from 30 minutes to 60 hours to "slowly" reach their destination, depending on how far apart the sender and the recipient live. Virtual stamps are collected and attached to the "letters" before mailing them; a variety of additional stamps can also be bought with coins. Messages may also include photos and audio notes with the recipient's consent.
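The exact delay formula is not published, so the sketch below is only an illustration of the distance-based mechanic: it pairs a great-circle distance calculation with an assumed "letter speed" and the 30-minute and 60-hour bounds mentioned above.

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def delivery_hours(distance_km, speed_km_per_h=300, minimum_h=0.5, maximum_h=60):
    """Hypothetical delay model: the 'letter speed' and bounds are assumptions
    chosen only to reproduce the 30-minute to 60-hour range described above."""
    return min(max(distance_km / speed_km_per_h, minimum_h), maximum_h)

# Example: a letter from Hong Kong (22.3, 114.2) to London (51.5, -0.1)
d = great_circle_km(22.3, 114.2, 51.5, -0.1)
print(f"{d:.0f} km, about {delivery_hours(d):.1f} hours in transit")
```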
See also
Pen pal
References
External links
Web App
Mobile applications
Android (operating system) software
IOS software
Cross-platform software
Communication software
2017 software
Software companies of Hong Kong
Computer-related introductions in 2017
Geosocial networking
Mobile social software
Proprietary cross-platform software
Social networking websites
Social networking mobile apps
Social networking services
|
1964845
|
https://en.wikipedia.org/wiki/SEAC%20%28computer%29
|
SEAC (computer)
|
SEAC (Standards Eastern Automatic Computer or Standards Electronic Automatic Computer) was a first-generation electronic computer, built in 1950 by the U.S. National Bureau of Standards (NBS). It was initially called the National Bureau of Standards Interim Computer because it was a small-scale computer designed to be built quickly and put into operation while the NBS waited for more powerful computers to be completed (the DYSEAC). The team that developed SEAC was organized by Samuel N. Alexander. SEAC was demonstrated in April 1950 and dedicated in June 1950; it is claimed to be the first fully operational stored-program electronic computer in the US.
Description
Based on EDVAC, SEAC used only 747 vacuum tubes (a small number for the time), eventually expanded to 1,500 tubes. It had 10,500 germanium diodes, which performed all of the logic functions (see the article diode–transistor logic for the working principles of diode logic), later expanded to 16,000 diodes. It was the first computer to do most of its logic with solid-state devices. The tubes were used for amplification, inversion, and storing information in dynamic flip-flops.
The machine used 64 acoustic delay lines to store 512 words of memory, with each word being 45 bits in size. The clock rate was kept low (1 MHz).
The computer's instruction set consisted of only 11 types of instructions: fixed-point addition, subtraction, multiplication, and division; comparison; and input and output. It was eventually expanded to 16 instructions.
The addition time was 864 microseconds and the multiplication time was 2,980 microseconds (i.e. close to 3 milliseconds).
Weight: (central machine).
Applications
On some occasions SEAC was operated from a remote teletype, making it one of the first computers to be used remotely. With many modifications, it was used until 1964. Some of the problems run on it dealt with:
digital imaging, led by Russell A. Kirsch
computer animation of a city traffic simulation
meteorology
linear programming
optical lenses
a program for Los Alamos National Laboratory
tables for LORAN navigation
statistical sampling plans
wave function of the helium atom
designing a proton synchrotron
See also
SWAC (Standards Western Automatic Computer)
List of vacuum-tube computers
Manchester Baby
References
Williams, Michael R. (1997). A History of Computing Technology. IEEE Computer Society.
Metropolis, N; Howlett, J.; Rota, Gian-Carlo (editors) (1980). A History of Computing in the Twentieth Century. Academic Press. (The chapter "Memories of the Bureau of Standards' SEAC", by Ralph J. Slutz.)
Astin, A. V. (1955), Computer Development (SEAC and DYSEAC) at the National Bureau of Standards, Washington D.C., National Bureau of Standards Circular 551, Issued January 25, 1955, U.S. Government Printing Office. Includes several papers describing SEAC, its technical details, and its operation. In particular, see "SEAC", by S. Greenwald, S. N. Alexander, and Ruth C. Haueter, on pp. 5–26, for an overview of the SEAC system.
Further reading
External links
SEAC and the Start of Image Processing at the National Bureau of Standards, (Archived) – At the NIST virtual museum
Margaret R. Fox Papers, 1935-1976, Charles Babbage Institute, University of Minnesota. The collection contains reports, including the original report on the ENIAC, UNIVAC, and many early in-house National Bureau of Standards (NBS) activity reports; memoranda on and histories of SEAC, SWAC, and DYSEAC; programming instructions for the UNIVAC, LARC, and MIDAC; patent evaluations and disclosures; system descriptions; speeches and articles written by Margaret Fox's colleagues; and correspondence of Samuel Alexander, Margaret Fox, and Samuel Williams. Boxes 6-8 of the Fox papers contain documents, reports, and analysis of the NBS's SEAC.
SEAC ("Standards Eastern Automatic Computer") (1950) (Archived), from History of Computing: An Encyclopedia of the People and Machines that Made Computer History, Lexikon Services Publishing
Timeline of Computer History at CHM
One-of-a-kind computers
Vacuum tube computers
1950s computers
Computer-related introductions in 1950
Serial computers
|
18208726
|
https://en.wikipedia.org/wiki/Fedora%20Media%20Writer
|
Fedora Media Writer
|
Fedora Media Writer is an open source tool designed to create live media for Fedora Linux.
Features
Cross-platform (available for Linux, macOS, and Windows)
Non-destructive installer (does not format the device)
Supports various Fedora Linux releases
Automatically detects all removable devices
Persistent storage creation, to save all documents created and modifications made to the system
SHA-1 checksum verification of known releases, to ensure there is no corruption when downloading (see the sketch after this list)
Not limited to Fedora Linux releases, supports custom images
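As a rough illustration of the checksum-verification feature listed above, the Python sketch below streams an image through a hash and compares it with a published digest. The function names, chunk size, and default algorithm are illustrative and are not the tool's actual implementation.

```python
import hashlib

def file_digest(path, algorithm="sha1", chunk_size=1 << 20):
    """Stream a (potentially multi-gigabyte) image through a hash in chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(path, expected_hexdigest, algorithm="sha1"):
    """Compare against the digest published for the release."""
    return file_digest(path, algorithm) == expected_hexdigest.lower()

# Hypothetical usage (file name and digest are placeholders):
# verify_image("Fedora-Workstation-Live.iso", "ab12...")
```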
See also
Fedora Linux
Fedora Project
List of tools to create live media systems
References
External links
fedoralinux.org
How to create and use live media
Cross-platform software
Live USB
Fedora Project
Linux installation software
|
13052312
|
https://en.wikipedia.org/wiki/Elliott%20Organick
|
Elliott Organick
|
Elliott Irving Organick (February 25, 1925 – December 21, 1985) was a computer scientist and pioneer in operating systems development and education. He was considered "the foremost expositor writer of computer science", and was instrumental in founding the ACM Special Interest Group for Computer Science Education.
Career
Organick described the Burroughs large systems in an ACM monograph of which he was the sole author, covering the work of Robert (Bob) Barton and others. He also wrote a monograph about the Multics timesharing operating system. By the mid 1970s he had become "the foremost expositor writer of computer science"; he published 19 books.
He was editor of ACM Computing Surveys (ISSN 0360-0300) between 1973 and 1976.
In 1985 he received the ACM Special Interest Group on Computer Science Education Award for Outstanding Contribution to Computer Science Education.
He died of leukemia on December 21, 1985.
He taught at the University of Utah, where a Memorial Lecture series was established in his name.
Publications
The Multics System: An Examination of its Structure. MIT Press, 1972, . Still available from the MIT Libraries as a digital reprint (Laser-printed copy or PDF file of a scanned version.)
Computer Systems Organization: The B5700/B6700. ACM Monograph Series, 1973. LCN: 72-88334
References
External links
1985 deaths
University of Utah faculty
1925 births
Manhattan Project people
Computer science educators
University of Michigan alumni
Deaths from leukemia
Massachusetts Institute of Technology people
|
992412
|
https://en.wikipedia.org/wiki/Wireless%20distribution%20system
|
Wireless distribution system
|
A wireless distribution system (WDS) is a system enabling the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the traditional requirement for a wired backbone to link them. The notable advantage of WDS over other solutions is that it preserves the MAC addresses of client frames across links between access points.
An access point can be either a main, relay, or remote base station.
A main base station is typically connected to the (wired) Ethernet.
A relay base station relays data between remote base stations, wireless clients, or other relay stations and either a main base station or another relay base station.
A remote base station accepts connections from wireless clients and passes them on to relay stations or to main stations. Connections between "clients" are made using MAC addresses.
All base stations in a wireless distribution system must be configured to use the same radio channel, method of encryption (none, WEP, WPA or WPA2) and the same encryption keys. They may be configured to different service set identifiers (SSIDs). WDS also requires every base station to be configured to forward to others in the system.
WDS may also be considered a repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging). However, with the repeater method, throughput is halved for all clients connected wirelessly. This is because Wi-Fi is an inherently half duplex medium and therefore any Wi-Fi device functioning as a repeater must use the Store and forward method of communication.
WDS may be incompatible between different products (even occasionally from the same vendor) since the IEEE 802.11-1999 standard does not define how to construct any such implementations or how stations interact to arrange for exchanging frames of this format. The IEEE 802.11-1999 standard merely defines the 4-address frame format that makes it possible.
Technical
WDS may provide two modes of access point-to-access point (AP-to-AP) connectivity:
Wireless bridging, in which WDS APs (AP-to-AP on local routers AP) communicate only with each other and don't allow wireless stations (STA, also known as wireless clients) to access them
Wireless repeating, in which APs (WDS on local routers) communicate with each other and with wireless STAs
Two disadvantages to using WDS are:
The maximum wireless effective throughput may be halved after the first retransmission (hop) is made. For example, in the case of two APs connected via WDS, with communication between a computer plugged into the Ethernet port of AP A and a laptop connected wirelessly to AP B, the throughput is halved because AP B has to retransmit the information during the communication between the two sides. However, in the case of communication between a computer plugged into the Ethernet port of AP A and a computer plugged into the Ethernet port of AP B, the throughput is not halved, since there is no need to retransmit the information. Dual-band/radio APs may avoid this problem by connecting to clients on one band/radio and making the WDS network link with the other.
Dynamically assigned and rotated encryption keys are usually not supported in a WDS connection. This means that dynamic Wi-Fi Protected Access (WPA) and other dynamic key assignment technologies in most cases cannot be used, though WPA using pre-shared keys is possible. This is due to the lack of standardization in this field, which may be resolved with the upcoming 802.11s standard. As a result, only static WEP or WPA keys may be used in a WDS connection, including on any STAs that associate to a WDS repeating AP.
OpenWRT, a universal third party router firmware, supports WDS with WPA-PSK, WPA2-PSK, WPA-PSK/WPA2-PSK Mixed-Mode encryption modes. Recent Apple base stations allow WDS with WPA, though in some cases firmware updates are required. Firmware for the Renasis SAP36g super access point and most third party firmware for the Linksys WRT54G(S)/GL support AES encryption using WPA2-PSK mixed-mode security, and TKIP encryption using WPA-PSK, while operating in WDS mode. However, this mode may not be compatible with other units running stock or alternate firmware.
Example
Suppose one has a Wi-Fi-capable game console. This device needs to send one packet to a WAN host, and receive one packet in reply.
Network 1: A wireless base station acting as a simple (non-WDS) wireless router. The packet leaves the game console, goes over-the-air to the router, which then transmits it across the WAN. One packet comes back, through the router, which transmits it wirelessly to the game console. Total packets sent over-the-air: 2.
Network 2: Two wireless base stations employing WDS: WAN connects to the master base station. The master base station connects over-the-air to the remote base station. The Remote base station connects over-the-air to the game console. The game console sends one packet over-the-air to the remote base station, which forwards it over-the-air to the master base station, which forwards it to the WAN. The reply packet comes from the WAN to the master base station, over-the-air to the remote, and then over-the-air again to the game console. Total packets sent over-the-air: 4.
Network 3: Two wireless base stations employing WDS, but this time the game console connects by Ethernet cable to the remote base station. One packet is sent from the game console over the Ethernet cable to the remote, from there by air to the master, and on to the WAN. Reply comes from WAN to master, over-the-air to remote, over cable to game console. Total packets sent over-the-air: 2.
Notice that network 1 (non-WDS) and network 3 (WDS) send the same number of packets over-the-air. The only slowdown is the potential halving due to the half-duplex nature of Wi-Fi.
Network 2 gets an additional halving because the remote base station uses double the air time because it is re-transmitting over-the-air packets that it has just received over-the-air. This is the halving that is usually attributed to WDS, but that halving only happens when the route through a base station uses over-the-air links on both sides of it. That does not always happen in a WDS, and can happen in non-WDS.
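The packet counts in the example above can be reproduced with a short sketch; the network descriptions and the simple one-unit-of-airtime-per-wireless-hop cost model are illustrative assumptions.

```python
def over_the_air_transmissions(hops):
    """Count over-the-air transmissions for one packet traversing 'hops',
    where each hop is marked 'air' or 'wire'."""
    return sum(1 for h in hops if h == "air")

def round_trip_air_packets(hops):
    # One request plus one reply; each wireless hop costs one transmission
    # on the shared half-duplex channel.
    return 2 * over_the_air_transmissions(hops)

network_1 = ["air"]              # console -> router (non-WDS)
network_2 = ["air", "air"]       # console -> remote AP -> master AP (WDS repeating)
network_3 = ["wire", "air"]      # console -> remote AP by cable -> master AP

for name, hops in [("1", network_1), ("2", network_2), ("3", network_3)]:
    print(f"network {name}: {round_trip_air_packets(hops)} packets over the air")
```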
Important Note: This "double hop" (one wireless hop from the main station to the remote station, and a second hop from the remote station to the wireless client [game console]) is not necessarily twice as slow. End to end latency introduced here is in the "store and forward" delay associated with the remote station forwarding packets. In order to accurately identify the true latency contribution of relaying through a wireless remote station vs. simply increasing the broadcast power of the main station, more comprehensive tests specific to the environment would be required.
See also
Ad hoc wireless network
Network bridge
Wireless intrusion detection system
Wireless mesh network
References
External links
Swallow-Wifi Wiki (WDS Network dashboard for DD-WRT devices)
Alternative Wireless Signal-repeating Scheme with DD-WRT and AutoAP
What is Third Generation Mesh? Review of three generation of mesh networking architectures.
Wi-Fi Range Extender Vs Mesh Network System Explanation how wifi extender and mesh network works.
How to Extend Your Wireless Network with Tomato-Powered Routers
Polarcloud.com (How Do I Use WDS)
IEEE 802.11
|
16700028
|
https://en.wikipedia.org/wiki/Fujitsu%27s%20Application
|
Fujitsu's Application
|
Fujitsu's Application [1997] EWCA Civ 1174 is a 6 March 1997 judgment by the Court of Appeal of England and Wales. The judges' decision was to confirm the refusal of a patent by the United Kingdom Patent Office and by Mr Justice Laddie in the High Court. Lord Justice Aldous heard the appeal before the Court of Appeal.
Facts
Fujitsu's claimed invention was a new tool for modelling crystal structures on a computer. A scientist wishing to investigate what would result if he made a new material consisting of a combination of two existing compounds would enter data representing those compounds and how they should be joined into the computer. The computer then automatically generated and displayed the new structure using the data supplied. Previously, the same effect could only have been achieved by assembling plastic models by hand - a time-consuming task.
Discussion
UK courts should look to the decisions of the European Patent Office for guidance in interpreting the exclusions.
A "technical contribution" is needed to make a potentially excluded thing patentable, proclaiming that this was a concept at the heart of patent law and referring to the European Patent Office's decision in T 208/84, VICOM.
There is a difficulty inherent in determining what is and is not "technical", such that each case should be decided on its own facts.
The substance of an invention should be used to assess whether or not a thing is patentable, not the form in which it is claimed. Thus a non-patentable method cannot be patented under the guise of an apparatus.
Judgment
The claimed invention was certainly a useful tool. However, as claimed, the invention was nothing more than a conventional computer which automatically displayed a crystal structure shown pictorially in a form that would in the past have been produced as a model. The only advance expressed in the claims was the computer program which enabled the combined structure to be portrayed more quickly. The new tool therefore provided nothing that went beyond the normal advantages that are obtained by the use of a computer program. Thus, there was no technical contribution and the application was rejected as being a computer program as such.
See also
List of judgments of the UK Courts relating to excluded subject matter
Software patents under United Kingdom patent law
Software patents under the European Patent Convention
References
External links
Software Patents After Fujitsu. New Directions or (another) Missed Opportunity?
Is the extension of the patent system to include software related inventions desirable?
A STEP FORWARD: EXCLUDING 'TECHNICAL' FROM THE TEST FOR PATENTABLE SUBJECT MATTER
Inherent Patentability as related to computer software written after High Court judgment but before Court of Appeal had issued their judgment
Software patent case law
United Kingdom patent case law
Court of Appeal (England and Wales) cases
1997 in case law
1997 in British law
Fujitsu
|
25340011
|
https://en.wikipedia.org/wiki/Semi-membership
|
Semi-membership
|
In mathematics and theoretical computer science, the semi-membership problem for a set is the problem of deciding which of two possible elements is logically more likely to belong to that set; alternatively, given two elements of which at least one is in the set, to distinguish the member from the non-member.
The semi-membership problem may be significantly easier than the membership problem. For example, consider the set S(x) of finite-length binary strings representing the dyadic rationals less than some fixed real number x. The semi-membership problem for a pair of strings is solved by taking the string representing the smaller dyadic rational, since if exactly one of the strings is an element, it must be the smaller, irrespective of the value of x. However, the language S(x) may not even be a recursive language, since there are uncountably many such x, but only countably many recursive languages.
A function f on ordered pairs (x,y) is a selector for a set S if f(x,y) is equal to either x or y and if f(x,y) is in S whenever at least one of x, y is in S. A set is semi-recursive if it has a recursive selector, and is P-selective or semi-feasible if it is semi-recursive with a polynomial time selector.
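A polynomial-time selector for the dyadic-rational example above can be written directly; the helper names below are illustrative.

```python
from fractions import Fraction

def dyadic_value(bits):
    """Interpret a binary string such as '1011' as the dyadic rational 0.1011 in base 2."""
    return sum(Fraction(int(b), 2 ** (i + 1)) for i, b in enumerate(bits))

def selector(x, y):
    """Selector for S(x0) = {strings whose dyadic value is below some fixed real x0}:
    whichever of the two strings represents the smaller rational is the safe
    choice, regardless of the (possibly uncomputable) value of x0."""
    return x if dyadic_value(x) <= dyadic_value(y) else y

print(selector("0111", "1000"))   # -> "0111" (7/16 < 8/16)
```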
Semi-feasible sets have small circuits; they are in the extended low hierarchy; and cannot be NP-complete unless P=NP.
References
Derek Denny-Brown, "Semi-membership algorithms: some recent advances", Technical report, University of Rochester Dept. of Computer Science, 1994
Lane A. Hemaspaandra, Mitsunori Ogihara, "The complexity theory companion", Texts in theoretical computer science, EATCS series, Springer, 2002, , page 294
Lane A. Hemaspaandra, Leen Torenvliet, "Theory of semi-feasible algorithms", Monographs in theoretical computer science, Springer, 2003, , page 1
Ker-I Ko, "Applying techniques of discrete complexity theory to numerical computation" in Ronald V. Book (ed.), "Studies in complexity theory", Research notes in theoretical computer science, Pitman, 1986, , p.40
Computational complexity theory
|
164379
|
https://en.wikipedia.org/wiki/UNIVAC%201101
|
UNIVAC 1101
|
The ERA 1101, later renamed UNIVAC 1101, was a computer system designed and built by Engineering Research Associates (ERA) in the early 1950s and continued to be sold by the Remington Rand corporation after that company later purchased ERA. Its (initial) military model, the ERA Atlas, was the first stored-program computer that was moved from its site of manufacture and successfully installed at a distant site. Remington Rand used the 1101's architecture as the basis for a series of machines into the 1960s.
History
Codebreaking
ERA was formed from a group of code-breakers working for the United States Navy during World War II. The team had built a number of code-breaking machines, similar to the more famous Colossus computer in England, but designed to attack Japanese codes. After the war the Navy was interested in keeping the team together even though they had to formally be turned out of Navy service. The result was ERA, which formed in St. Paul, Minnesota in the hangars of a former Chase Aircraft shadow factory.
After the war, the team continued to build codebreaking machines, targeted at specific codes. After one of these codes changed, making an expensive computer obsolete, the team convinced the Navy that the only way to make a system that would remain useful was to build a fully programmable computer. The Navy agreed, and in 1947 they funded development of a new system under "Task 13".
The resulting machines, known as "Atlas", used drum memory for main memory and featured a simple central processing unit built for integer math. The first Atlas machine was built, moved, and installed at the Army Security Agency by December 1950. A faster version using Williams tubes and drums was delivered to the NSA in 1953.
Commercialization
The company turned to the task of selling the systems commercially. Atlas was named after a character in the popular comic strip Barnaby, and they initially decided to name the commercial versions "Mabel". Jack Hill suggested "1101" instead; 1101 is the binary representation of the number 13. The ERA 1101 was publicly announced in December 1951. Atlas II, slightly modified became the ERA 1103, while a more heavily modified version with core memory and floating point math support became the UNIVAC 1103A.
At about this time the company became embroiled in a lengthy series of political maneuverings in Washington, D.C. Drew Pearson's Washington Merry-Go-Round claimed that the founding of ERA was a conflict of interest for Norris and Engstrom because they had used their war-time government connections to set up a company for their own profit. The resulting legal fight left the company drained, both financially and emotionally. In 1952 they were purchased by Remington Rand, largely as a result of these problems.
Remington Rand had recently purchased Eckert–Mauchly Computer Corporation, builders of the famed UNIVAC I, the first commercial computer in the US. Although ERA and UNIVAC were run separately within the company, looking to cash in on the UNIVAC's well known name, they renamed the machine to become the "UNIVAC 1101". A series of machines based on the same basic design followed, and were sold into the 1960s before being replaced by the similar-in-name-only UNIVAC 1100 family.
Description
This computer was long, wide, weighed about and used 2700 vacuum tubes for its logic circuits. Its drum memory was in diameter, rotated at 3500 rpm, had 200 read-write heads, and held 16,384 24-bit words (a memory size equivalent to 48 kB) with access time between 32 microseconds and 17 milliseconds.
Instructions were 24 bits long, with six bits for the opcode, four bits for the "skip" value (telling how many memory locations to skip to get to the next instruction in program sequence), and 14 bits for the memory address. Numbers were binary with negative values in ones' complement. The addition time was 96 microseconds and the multiplication time was 352 microseconds.
The single 48-bit accumulator was fundamentally subtractive, addition being carried out by subtracting the ones' complement of the number to be added. This may appear rather strange, but the subtractive adder reduces the chance of getting negative zero in normal operations.
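The equivalence the subtractive accumulator relies on, that adding a number gives the same result as subtracting its ones' complement (with end-around carry or borrow), can be checked with a short sketch. This is an arithmetic illustration only, not a model of the 1101's hardware, and a 24-bit width matching the machine's word length is used for brevity even though the real accumulator was 48 bits.

```python
WIDTH = 24                      # register width used for the illustration
MASK = (1 << WIDTH) - 1

def ones_complement(x):
    return x ^ MASK

def oc_add(a, b):
    """Ones'-complement addition with end-around carry."""
    s = a + b
    return (s + 1) & MASK if s > MASK else s

def oc_sub(a, b):
    """Ones'-complement subtraction with end-around borrow."""
    d = a - b
    return (d - 1) & MASK if d < 0 else d

# A subtractive adder computes a + b as a - (ones' complement of b):
a, b = 0b0000_0000_0000_0000_0000_0101, 0b0000_0000_0000_0000_0000_0011   # 5 and 3
assert oc_add(a, b) == oc_sub(a, ones_complement(b)) == 8
print(oc_add(a, b), oc_sub(a, ones_complement(b)))   # both print 8
```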
The machine had 38 instructions.
Instruction set
Conventions
y is memory box at address y
X = X-Register (24 bits)
( ) is interpreted as the contents of
Q = Q-Register (24 bits)
A = Accumulator (48 bits)
Arithmetic
Insert (y) in A
Insert complement of (y) in A
Insert (y) in A [multiple precision]
Insert complement of (y) in A [multiple precision]
Insert absolute value (y) in A
Insert complement of absolute value (y) in A
Add (y) to (A)
Subtract (y) from (A)
Add (y) to (A) [multiple precision]
Subtract (y) from (A) [multiple precision]
Add absolute value of (y) to (A)
Subtract absolute value of (y) from (A)
Insert (Q) in A
Clear right half of A
Add (Q) to (A)
Transmit (A) to Q
Insert [(y) + 1] in A
Multiply and divide
Form product (Q) * (y) in A
Add logical product (Q) * (y) to (A)
Form logical product (Q) * (y) in A
Divide (A) by (y), (quotient forms in Q, non-negative remainder left in A)
Add product (Q) * (y) to (A)
Logical and control flow
Store right half of (A) at y
Shift (A) left
Store (Q) at y
Shift (Q) left
Replace (y) with (A) using (Q) as operator
Take (y) as next order
Replace (y) with (A) [address portion only]
Take (y) as next order if (A) is not zero
Insert (y) in Q
Take (y) as next order if (A) is negative
Take (y) as next order if (Q) is negative
Input Output and control
Print right-hand 6 digits of (y)
Optional Stop
Print and punch right-hand 6 digits of (y)
Intermediate Stop
Final Stop
See also
List of UNIVAC products
History of computing hardware
References
External links
Introducing the ERA 1101: An operationally proven high-speed, electronic, general purpose digital computer, ERA, no-date. (8 pp)
Oral history interviews with ERA personnel on 1101, Charles Babbage Institute, University of Minnesota. Interviewees include Arnold A. Cohen; Arnold Dumey ; John Lindsay Hill; Frank Mullaney; and William C. Norris .
ERA 1101 Documents (archive) list of 44 scanned course notes on 1101 by H. C. Snyder USN
Summary of Characteristics Magnetic Drum Binary Computer, Engineering Research Associates Pub No. 25, 30 November 1948
1101
Early computers
Military computers
Computer-related introductions in 1950
24-bit computers
|
21169396
|
https://en.wikipedia.org/wiki/Ship%20gun%20fire-control%20system
|
Ship gun fire-control system
|
Ship gun fire-control systems (GFCS) are analogue fire-control systems that were used aboard naval warships prior to modern electronic computerized systems, to control targeting of guns against surface ships, aircraft, and shore targets, with either optical or radar sighting. Most US ships that are destroyers or larger (but not destroyer escorts, except the Brooke-class DEGs later designated FFGs, or escort carriers) employed gun fire-control systems for and larger guns, up to battleships, such as .
Beginning with ships built in the 1960s, warship guns were largely operated by computerized systems, i.e. systems that were controlled by electronic computers, which were integrated with the ship's missile fire-control systems and other ship sensors. As technology advanced, many of these functions were eventually handled fully by central electronic computers.
The major components of a gun fire-control system are a human-controlled director, along with or later replaced by radar or television camera, a computer, stabilizing device or gyro, and equipment in a plotting room.
For the US Navy, the most prevalent gunnery computer was the Ford Mark 1, later the Mark 1A Fire Control Computer, which was an electro-mechanical analog ballistic computer that provided accurate firing solutions and could automatically control one or more gun mounts against stationary or moving targets on the surface or in the air. This gave American forces a technological advantage in World War II against the Japanese, who did not develop remote power control for their guns; both the US Navy and Japanese Navy used visual correction of shots using shell splashes or air bursts, while the US Navy augmented visual spotting with radar. Digital computers would not be adopted for this purpose by the US until the mid-1970s; however, it must be emphasized that all analog anti-aircraft fire control systems had severe limitations, and even the US Navy's Mark 37 system required nearly 1000 rounds of mechanically fuzed ammunition per kill, even in late 1944.
The Mark 37 Gun Fire Control System incorporated the Mark 1 computer, the Mark 37 director, a gyroscopic stable element along with automatic gun control, and was the first US Navy dual-purpose GFCS to separate the computer from the director.
History of analogue fire control systems
Naval fire control resembles that of ground-based guns, but with no sharp distinction between direct and indirect fire. It is possible to control several guns of the same type on a single platform simultaneously, while both the firing platform and the target are moving.
Though a ship rolls and pitches at a slower rate than a tank does, gyroscopic stabilization is extremely desirable. Naval gun fire control potentially involves three levels of complexity:
Local control originated with primitive gun installations aimed by the individual gun crews.
The director system of fire control was incorporated first into battleship designs by the Royal Navy in 1912. All guns on a single ship were laid from a central position placed as high as possible above the bridge. The director became a design feature of battleships, with Japanese "Pagoda-style" masts designed to maximize the view of the director over long ranges. A fire control officer who ranged the salvos transmitted elevations and angles to individual guns.
Coordinated gunfire from a formation of ships at a single target was a focus of battleship fleet operations. An officer on the flagship would signal target information to other ships in the formation. This was necessary to exploit the tactical advantage when one fleet succeeded in crossing the T of the enemy fleet, but the difficulty of distinguishing the splashes made walking the rounds in on the target more difficult.
Corrections can be made for surface wind velocity, roll and pitch of the firing ship, powder magazine temperature, drift of rifled projectiles, individual gun bore diameter adjusted for shot-to-shot enlargement, and rate-of-change of range with additional modifications to the firing solution based upon the observation of preceding shots. More sophisticated fire control systems consider more of these factors rather than relying on simple correction of observed fall of shot. Differently colored dye markers were sometimes included with large shells so individual guns, or individual ships in formation, could distinguish their shell splashes during daylight. Early "computers" were people using numerical tables.
Pre-dreadnought director system
The Royal Navy had a proposal for salvo firing from a single fire control director on hand, but had not yet implemented it in 1904. The Royal Navy considered Russia a potential adversary through the Great Game, and sent Commander Walter Hugh Thring of the Navy Gunnery Division with an early example of the Dumaresq to Japan during the Russo-Japanese War. His mission was to guide and train the Japanese naval gunnery personnel in the latest technological developments, but more importantly for the Imperial Japanese Navy (IJN), he was aware of the proposal.
During the 10 August 1904 Battle of the Yellow Sea against the Russian Pacific Fleet, the British-built IJN battleship Asahi and her sister ship, the fleet flagship Mikasa, were equipped with the latest Barr and Stroud range finders on the bridge, but the ships were not designed for coordinated aiming and firing. Asahi's chief gunnery officer, Hiroharu Kato (later Commander of the Combined Fleet), experimented with the first director system of fire control, using speaking tube (voicepipe) and telephone communication from the spotters high on the mast to his position on the bridge, where he performed the range and deflection calculations, and from his position to the gun turrets forward and astern.
With the semi-synchronized salvo firing upon his voice command from the bridge, the spotters using stopwatches on the mast could identify the distant salvo of splashes created by the shells from their own ship more effectively than trying to identify a single shell splash among the many. Kato gave the firing order consistently at a particular moment in the rolling and pitching cycles of the ship, simplifying firing and correction duties formerly performed independently with varying accuracy using artificial horizon gauges in each turret.
Kato was transferred to Mikasa as the Chief Gunnery Officer, and his primitive director system was in fleet-wide operation by the time the Japanese fleet destroyed the Russian Baltic Fleet (renamed the 2nd and 3rd Pacific Fleet) in the Battle of Tsushima during 27–28 May 1905.
Central fire control and World War I
Centralized naval fire control systems were first developed around the time of World War I. Local control had been used up until that time, and remained in use on smaller warships and auxiliaries through World War II. Specifications of were finalized after the report on the Battle of Tsushima was submitted by the official observer to the IJN on board Asahi, Captain Pakenham (later Admiral), who observed how Kato's system worked first hand. From this design on, large warships had a main armament of one size of gun across a number of turrets (which made corrections simpler still), facilitating central fire control via electric triggering.
The UK built their first central system before the Great War. At the heart was an analogue computer designed by Commander (later Admiral Sir) Frederic Charles Dreyer that calculated range rate, the rate of change of range due to the relative motion between the firing and target ships. The Dreyer Table was to be improved and served into the interwar period at which point it was superseded in new and reconstructed ships by the Admiralty Fire Control Table.
The use of Director-controlled firing together with the fire control computer moved the control of the gun laying from the individual turrets to a central position (usually in a plotting room protected below armor), although individual gun mounts and multi-gun turrets could retain a local control option for use when battle damage prevented the director setting the guns. Guns could then be fired in planned salvos, with each gun giving a slightly different trajectory. Dispersion of shot caused by differences in individual guns, individual projectiles, powder ignition sequences, and transient distortion of ship structure was undesirably large at typical naval engagement ranges. Directors high on the superstructure had a better view of the enemy than a turret mounted sight, and the crew operating it were distant from the sound and shock of the guns.
Analogue computed fire control
Unmeasured and uncontrollable ballistic factors like high altitude temperature, humidity, barometric pressure, wind direction and velocity required final adjustment through observation of fall of shot. Visual range measurement (of both target and shell splashes) was difficult prior to availability of radar. The British favoured coincidence rangefinders, while the Germans and the US Navy favoured the stereoscopic type. The former were less able to range on an indistinct target but easier on the operator over a long period of use; with the latter, the reverse was true.
During the Battle of Jutland, while the British were thought by some to have the finest fire control system in the world at that time, only three percent of their shots actually struck their targets. At that time, the British primarily used a manual fire control system. This experience contributed to computing rangekeepers becoming standard issue.
The US Navy's first deployment of a rangekeeper was on in 1916. Because of the limitations of the technology at that time, the initial rangekeepers were crude. For example, during World War I the rangekeepers would generate the necessary angles automatically but sailors had to manually follow the directions of the rangekeepers. This task was called "pointer following" but the crews tended to make inadvertent errors when they became fatigued during extended battles. During World War II, servomechanisms (called "power drives" in the US Navy) were developed that allowed the guns to automatically steer to the rangekeeper's commands with no manual intervention, though pointers still worked even if automatic control was lost. The Mark 1 and Mark 1A computers contained approximately 20 servomechanisms, mostly position servos, to minimize torque load on the computing mechanisms.
Radar and World War II
During their long service life, rangekeepers were updated often as technology advanced and by World War II they were a critical part of an integrated fire control system. The incorporation of radar into the fire control system early in World War II provided ships with the ability to conduct effective gunfire operations at long range in poor weather and at night.
In a typical World War II British ship the fire control system connected the individual gun turrets to the director tower (where the sighting instruments were) and the analogue computer in the heart of the ship. In the director tower, operators trained their telescopes on the target; one telescope measured elevation and the other bearing. Rangefinder telescopes on a separate mounting measured the distance to the target. These measurements were converted by the Fire Control Table into bearings and elevations for the guns to fire on. In the turrets, the gunlayers adjusted the elevation of their guns to match an indicator which was the elevation transmitted from the Fire Control Table—a turret layer did the same for bearing. When the guns were on target they were centrally fired.
The Aichi Clock Company first produced the Type 92 Shagekiban low angle analog computer in 1932. The US Navy Rangekeeper and the Mark 38 GFCS had an edge over Imperial Japanese Navy systems in operability and flexibility: the US system allowed the plotting room team to quickly identify target motion changes and apply appropriate corrections. The newer Japanese systems such as the Type 98 Hoiban and Shagekiban on the were more up to date and eliminated the Sokutekiban, but still relied on seven operators.
In contrast to the US radar-aided system, the Japanese relied on averaging optical rangefinders, lacked gyros to sense the horizon, and required manual handling of follow-ups on the Sokutekiban, Shagekiban and Hoiban, as well as on the guns themselves. This could have played a role in the dismal performance of Center Force's battleships in the Battle off Samar in October 1944.
In that action, American destroyers pitted against the world's largest armored battleships and cruisers dodged shells for long enough to close to within torpedo firing range, while lobbing hundreds of accurate automatically aimed rounds on target. Cruisers did not land hits on splash-chasing escort carriers until after an hour of pursuit had reduced the range to . Although the Japanese pursued a doctrine of achieving superiority at long gun ranges, one cruiser fell victim to secondary explosions caused by hits from the carriers' single 5-inch guns. Eventually with the aid of hundreds of carrier based aircraft, a battered Center Force was turned back just before it could have finished off survivors of the lightly armed task force of screening escorts and escort carriers of Taffy 3. The earlier Battle of Surigao Strait had established the clear superiority of US radar-assisted systems at night.
The rangekeeper's target position prediction characteristics could be used to defeat the rangekeeper. For example, many captains under long range gun attack would make violent maneuvers to "chase salvos." A ship that is chasing salvos is maneuvering to the position of the last salvo splashes. Because the rangekeepers are constantly predicting new positions for the target, it is unlikely that subsequent salvos will strike the position of the previous salvo. The direction of the turn is unimportant, as long as it is not predicted by the enemy system. Since the aim of the next salvo depends on observation of the position and speed at the time the previous salvo hits, that is the optimal time to change direction. Practical rangekeepers had to assume that targets were moving in a straight-line path at a constant speed, to keep complexity to acceptable limits. A sonar rangekeeper was built to include a target circling at a constant radius of turn, but that function had been disabled.
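The straight-line, constant-speed assumption at the heart of a rangekeeper's prediction can be illustrated with a simple dead-reckoning sketch. This is not the Mark 1/1A's actual mechanism, which solved the problem with mechanical integrators and ballistic cams; the constant shell speed and the numbers below are illustrative assumptions.

```python
from math import hypot

def predict_intercept(target_pos, target_vel, shell_speed, iterations=10):
    """Dead-reckoning prediction: assume the target holds course and speed,
    and iterate because the shell's time of flight depends on the (future)
    range.  Positions in metres, velocities in m/s, own ship at the origin;
    a constant shell speed stands in for the real ballistic tables."""
    x, y = target_pos
    vx, vy = target_vel
    t = hypot(x, y) / shell_speed        # first guess: flight time to present position
    for _ in range(iterations):
        aim_x, aim_y = x + vx * t, y + vy * t
        t = hypot(aim_x, aim_y) / shell_speed
    return (x + vx * t, y + vy * t), t

# Target 15 km away, crossing at 15 m/s (about 30 knots); shell averages 500 m/s.
aim_point, time_of_flight = predict_intercept((15000.0, 0.0), (0.0, 15.0), 500.0)
print(aim_point, round(time_of_flight, 1))
```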
Only the RN and USN achieved 'blindfire' radar fire-control, with no need to visually acquire the opposing vessel. The Axis powers all lacked this capability. Classes such as the Iowa and South Dakota battleships could lob shells over the visual horizon, in darkness, through smoke or weather. American systems, in common with many contemporary major navies, had gyroscopic stable vertical elements, so they could keep a solution on a target even during maneuvers. By the start of World War II, British, German and American warships could both shoot and maneuver using sophisticated analog fire-control computers that incorporated gyrocompass and gyro level inputs. In the Battle of Cape Matapan, the British Mediterranean Fleet, using radar, ambushed and mauled an Italian fleet, although actual fire was under optical control using starshell illumination. At the Naval Battle of Guadalcanal, in complete darkness, inflicted fatal damage at close range on the battleship using a combination of optical and radar fire-control; comparisons between optical and radar tracking during the battle showed that radar tracking matched optical tracking in accuracy, while radar ranges were used throughout the battle.
The last combat action for the analog rangekeepers, at least for the US Navy, was in the 1991 Persian Gulf War when the rangekeepers on the s directed their last rounds in combat.
British Royal Navy systems
Dreyer Table
Arthur Pollen's Argo Clock
Admiralty Fire Control Table – from 1920s
HACS – A/A system from 1931
Fuze Keeping Clock – simplified HACS A/A system for destroyers from 1938
Pom-Pom Director – pioneered use of gyroscopic tachymetric fire-control for short range weapons – From 1940
Gyro Rate Unit – pioneered use of gyroscopic Tachymetric fire-control for medium calibre weapons – From 1940
Royal Navy radar – pioneered the use of radar for A/A fire-control and centimetric radar for surface fire-control – from 1939
Ferranti Computer Systems developed the GSA4 digital computerised gunnery fire control system that was deployed on HMS Amazon (Type 21 frigate commissioned in 1974) as part of the WAS4 (Weapon Systems Automation - 4) system.
BAE Systems' Sea Archer – computerised gunnery system. Royal Navy designation G SA.7 from 1980 and GSA.8 from 1985. Production completed for Royal Navy Type 23 frigates in 1999. Remains in active service on Type 23 (Duke class). Replaced in 2012 on Type 45 destroyers by Ultra Electronics Series 2500 Electro-Optical Gun Control System.
US Navy analogue Gun Fire Control Systems (GFCS)
Mark 33 GFCS
The Mark 33 GFCS was a power-driven fire control director, less advanced than the Mark 37. The Mark 33 GFCS used the Mark 10 Rangekeeper, an analog fire-control computer. The entire rangekeeper was mounted in an open director rather than in a separate plotting room as in the RN HACS or the later Mark 37 GFCS, and this made it difficult to upgrade the Mark 33 GFCS. It could compute firing solutions for targets moving at up to 320 knots, or 400 knots in a dive. Installations started in the late 1930s on destroyers, cruisers and aircraft carriers, with two Mark 33 directors mounted fore and aft of the island on carriers. They had no fire-control radar initially, and were aimed only by sight. After 1942, some of these directors were enclosed and had a Mark 4 fire-control radar added to the roof of the director, while others had a Mark 4 radar added over the open director. With the Mark 4, large aircraft could be targeted at up to 40,000 yards. It had less range against low-flying aircraft, and large surface ships had to be within 30,000 yards. With radar, targets could be seen and hit accurately at night and through weather. The Mark 33 and 37 systems used tachymetric target motion prediction. The USN never considered the Mark 33 to be a satisfactory system, but wartime production problems and the added weight and space requirements of the Mark 37 precluded phasing out the Mark 33.
The Mark 33 was used as the main director on some destroyers and as secondary battery / anti-aircraft director on larger ships (i.e. in the same role as the later Mark 37). The guns controlled by it were typically 5 inch weapons: the 5-inch/25 or 5-inch/38.
Deployment
destroyers (1 per vessel, total 48)
8 (launched ca. 1934)
18 (ca. 1935) (later rebuilt with Mk37)
4 (ca. 1937)
8 (ca. 1937)
10 (ca. 1938)
heavy cruisers
7 (launched ca. 1933): for the 5"/25 secondary battery
(1937): for the 5"/38 secondary battery
light cruisers
9 (launched ca. 1937): for the 5"/25 and 5"/38 secondary batteries
Mark 34 GFCS
The Mark 34 was used to control the main batteries of large gun ships. Its predecessors include the Mk18, Mk24, Mk27 and Mk31.
Deployment
2 large cruisers (2 per vessel)
heavy cruisers
(2x)
14 (2 per vessel, 28 total)
(as upgrade)
light cruisers
9
27
Mark 37 GFCS
According to the US Navy Bureau of Ordnance,
While the defects were not prohibitive and the Mark 33 remained in production until fairly late in World War II, the Bureau started the development of an improved director in 1936, only 2 years after the first installation of a Mark 33. The objective of weight reduction was not met, since the resulting director system actually weighed about more than the equipment it was slated to replace, but the Gun Director Mark 37 that emerged from the program possessed virtues that more than compensated for its extra weight. Though the gun orders it provided were the same as those of the Mark 33, it supplied them with greater reliability and gave generally improved performance with gun batteries, whether they were used for surface or antiaircraft use. Moreover, the stable element and computer, instead of being contained in the director housing were installed below deck where they were less vulnerable to attack and less of a jeopardy to a ship's stability. The design provided for the ultimate addition of radar, which later permitted blind firing with the director. In fact, the Mark 37 system was almost continually improved. By the end of 1945 the equipment had run through 92 modifications—almost twice the total number of directors of that type which were in the fleet on December 7, 1941. Procurement ultimately totalled 841 units, representing an investment of well over $148,000,000. Destroyers, cruisers, battleships, carriers, and many auxiliaries used the directors, with individual installations varying from one aboard destroyers to four on each battleship. The development of the Gun Directors Mark 33 and 37 provided the United States Fleet with good long range fire control against attacking planes. But while that had seemed the most pressing problem at the time the equipments were placed under development, it was but one part of the total problem of air defense. At close-in ranges the accuracy of the directors fell off sharply; even at intermediate ranges they left much to be desired. The weight and size of the equipments militated against rapid movement, making them difficult to shift from one target to another. Their efficiency was thus in inverse proportion to the proximity of danger.
The computer was completed as the Ford Mark 1 computer by 1935. Rate information for height changes enabled a complete solution for aircraft targets moving over . Destroyers starting with the employed one of these computers; battleships carried up to four. The system's effectiveness against aircraft diminished as planes became faster, but toward the end of World War II upgrades were made to the Mark 37 system, and it was made compatible with the VT (variable time) proximity fuze, which exploded when it was near a target rather than at a set time or altitude, greatly increasing the probability that any one shell would destroy a target.
Mark 37 Director
The function of the Mark 37 Director, which resembled a turret with "ears" rather than guns, was to track the target's present position in bearing, elevation, and range. To do this, it had optical sights (the rectangular windows or hatches on the front), an optical rangefinder (the tubes or "ears" protruding from each side), and, on later models, fire-control radar antennas. The rectangular antenna is for the Mark 12 FC radar, and the parabolic ("orange peel") antenna on the left is for the Mark 22 FC radar; they were part of an upgrade to improve the tracking of aircraft.
The director was manned by a crew of 6: Director Officer, Assistant Control Officer, Pointer, Trainer, Range Finder Operator and Radar Operator.
The Director Officer also had a slew sight used to quickly point the director towards a new target. Up to four Mark 37 Gun Fire Control Systems were installed on battleships. On a battleship, the director was protected by of armor and weighed 21 tons. The Mark 37 director aboard is protected by of armor plate and weighs 16 tons.
Stabilizing signals from the Stable Element kept the optical sight telescopes, rangefinder, and radar antenna free from the effects of deck tilt. The signal that kept the rangefinder's axis horizontal was called "crosslevel"; elevation stabilization was called simply "level". Although the stable element was below decks in Plot, next to the Mark 1/1A computer, its internal gimbals followed director motion in bearing and elevation so that it provided level and crosslevel data directly. To do so, accurately, when the fire control system was initially installed, a surveyor, working in several stages, transferred the position of the gun director into Plot so the stable element's own internal mechanism was properly aligned to the director.
Although the rangefinder had significant mass and inertia, the crosslevel servo normally was only lightly loaded, because the rangefinder's own inertia kept it essentially horizontal; the servo's task was usually simply to ensure that the rangefinder and sight telescopes remained horizontal.
Mark 37 director train (bearing) and elevation drives were by D.C. motors fed from Amplidyne rotary power-amplifying generators. Although the train Amplidyne was rated at several kilowatts maximum output, its input signal came from a pair of 6L6 audio beam tetrode vacuum tubes (valves, in the U.K.).
Plotting room
In battleships, the Secondary Battery Plotting Rooms were down below the waterline and inside the armor belt. They contained four complete sets of the fire control equipment needed to aim and shoot at four targets. Each set included a Mark 1A computer, a Mark 6 Stable Element, FC radar controls and displays, parallax correctors, a switchboard, and people to operate it all.
(In the early 20th century, successive range and/or bearing readings were probably plotted either by hand or by the fire control devices (or both). Humans were very good data filters, able to plot a useful trend line given somewhat-inconsistent readings. As well, the Mark 8 Rangekeeper included a plotter. The distinctive name for the fire-control equipment room took root, and persisted even when there were no plotters.)
Ford Mark 1A Fire Control Computer
The Mark 1A Fire Control Computer was an electro-mechanical analog ballistic computer. It was originally designated the Mark 1; design modifications were extensive enough to change the designation to "Mark 1A". The Mark 1A appeared after World War II and may have incorporated technology developed for the Bell Labs Mark 8 fire control computer. Sailors would stand around a box measuring . Even though built with extensive use of an aluminum alloy framework (including thick internal mechanism support plates) and computing mechanisms mostly made of aluminum alloy, it weighed as much as a car, about , with the Star Shell Computer Mark 1 adding another . It used 115-volt, 60 Hz, single-phase AC power and typically drew a few amperes or less. Under worst-case fault conditions, its synchros apparently could draw as much as 140 amperes, or 15,000 watts (roughly the load of three houses with their ovens running). Almost all of the computer's inputs and outputs were by synchro torque transmitters and receivers.
Its function was to automatically aim the guns so that a fired projectile would collide with the target. This is the same function as the main battery's Mark 8 Rangekeeper used in the Mark 38 GFCS, except that some of the targets the Mark 1A had to deal with also moved in elevation—and much faster. For a surface target, the secondary battery's fire control problem is the same as the main battery's, with the same types of inputs and outputs. The major difference between the two computers is their ballistics calculations: the gun elevation needed to project a 5-inch secondary-battery shell a given distance is very different from that needed to project a much heavier main-battery shell the same distance.
In operation, this computer received target range, bearing, and elevation from the gun director. As long as the director was on target, clutches in the computer were closed, and movement of the gun director (along with changes in range) made the computer converge its internal values of target motion to values matching those of the target. While converging, the computer fed aided-tracking ("generated") range, bearing, and elevation to the gun director. If the target remained on a straight-line course at constant speed (and, in the case of aircraft, at a constant rate of climb), the predictions became accurate and, with further computation, gave correct values for the gun lead angles and fuze setting.
Concisely, the target's movement was a vector, and if that vector did not change, the generated range, bearing, and elevation remained accurate for up to 30 seconds. Once the target's motion vector became stable, the computer operators told the gun director officer ("Solution Plot!"), who usually gave the command to commence firing. Unfortunately, this process of inferring the target motion vector typically required a few seconds, which could be too long.
The process of determining the target's motion vector was done primarily with an accurate constant-speed motor, disk-ball-roller integrators, nonlinear cams, mechanical resolvers, and differentials. Four special coordinate converters, each with a mechanism in part like that of a traditional computer mouse, converted the received corrections into target motion vector values. The Mark 1 computer attempted to do the coordinate conversion (in part) with a rectangular-to-polar converter, but that did not work as well as desired (sometimes trying to make target speed negative). Part of the design changes that defined the Mark 1A was a rethinking of how best to use these special coordinate converters; the coordinate converter ("vector solver") was eliminated.
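The convergence process can be illustrated with a short Python sketch. This is only an illustration of the principle, not the actual mechanism (which was wholly mechanical); the rectangular-coordinate formulation, the gain constant, and all names are assumptions made for the example.

```python
import math

def to_rect(rng, brg, elev):
    """Convert range, bearing, elevation (angles in radians) to east, north, up."""
    horiz = rng * math.cos(elev)
    return (horiz * math.sin(brg), horiz * math.cos(brg), rng * math.sin(elev))

class RateAidedTracker:
    """Toy analogue of the rangekeeper's target-motion stage (illustrative only)."""
    def __init__(self, first_obs, gain=0.2):
        self.pos = list(to_rect(*first_obs))   # generated (dead-reckoned) target position
        self.vel = [0.0, 0.0, 0.0]             # estimated target motion vector
        self.gain = gain                       # how strongly residuals adjust the estimate

    def update(self, obs, dt):
        generated = [p + v * dt for p, v in zip(self.pos, self.vel)]
        observed = to_rect(*obs)
        for i in range(3):
            residual = observed[i] - generated[i]      # correction forced by the director
            self.vel[i] += self.gain * residual / dt   # nudge the motion vector estimate
            self.pos[i] = generated[i] + self.gain * residual
        return self.pos, self.vel
```

Once the velocity estimate stops changing, the generated range, bearing, and elevation that it implies stay correct for as long as the target holds course and speed, which corresponds to the "Solution Plot" condition described above.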
The Stable Element, which in contemporary terminology would be called a vertical gyro, stabilized the sights in the director, and provided data to compute stabilizing corrections to the gun orders. Gun lead angles meant that gun-stabilizing commands differed from those needed to keep the director's sights stable. Ideal computation of gun stabilizing angles required an impractical number of terms in the mathematical expression, so the computation was approximate.
To compute lead angles and the time fuze setting, the target motion vector's components, together with its range and altitude, wind direction and speed, and own ship's motion, were combined to predict the target's location when the shell reached it. This computation was done primarily with mechanical resolvers ("component solvers"), multipliers, and differentials, but also with one of four three-dimensional cams.
Based on the predictions, the other three three-dimensional cams provided data on the ballistics of the gun and ammunition that the computer was designed for; it could not be used for a different size or type of gun except by a rebuild that could take weeks.
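A minimal sketch of the prediction step, assuming a simple fixed-point iteration and an invented stand-in for the ballistic data that the cams actually supplied (real ballistics also folded in wind, own-ship motion, air density, and the particulars of the gun and ammunition):

```python
import math

def crude_ballistics(pos):
    """Stand-in for a 3-D ballistic cam: returns (time of flight, fuze setting)
    in seconds for a point given in yards. Numbers are invented for illustration."""
    tof = math.dist((0.0, 0.0, 0.0), pos) / 500.0   # assume ~500 yd/s average shell speed
    return tof, tof                                  # fuze set to the flight time

def predict_intercept(target_pos, target_vel, ballistics=crude_ballistics,
                      tol=0.01, max_iter=50):
    """Advance the target along its motion vector by the guessed flight time,
    look up the flight time to that new point, and repeat until they agree."""
    tof = 0.0
    for _ in range(max_iter):
        predicted = tuple(p + v * tof for p, v in zip(target_pos, target_vel))
        new_tof, fuze = ballistics(predicted)
        if abs(new_tof - tof) < tol:
            break
        tof = new_tof
    return predicted, tof, fuze
```

The bearing and elevation of the predicted point give the lead angles, and the converged flight time gives the fuze setting.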
Servos in the computer boosted torque accurately to minimize loading on the outputs of computing mechanisms, thereby reducing errors, and also positioned the large synchros that transmitted gun orders (bearing and elevation, sight lead angles, and time fuze setting). These were electromechanical "bang-bang" servos, yet had excellent performance.
The anti-aircraft fire control problem was more complicated because it had the additional requirement of tracking the target in elevation and making target predictions in three dimensions. The outputs of the Mark 1A were the same (gun bearing and elevation), except fuze time was added. The fuze time was needed because the ideal of directly hitting the fast moving aircraft with the projectile was impractical. With fuze time set into the shell, it was hoped that it would explode near enough to the target to destroy it with the shock wave and shrapnel. Towards the end of World War II, the invention of the VT proximity fuze eliminated the need to use the fuze time calculation and its possible error. This greatly increased the odds of destroying an air target. Digital fire control computers were not introduced into service until the mid-1970s.
Central aiming from a gun director has a minor complication in that the guns are often far enough away from the director to require parallax correction so they aim correctly. In the Mark 37 GFCS, the Mark 1/1A sent parallax data to all gun mounts; each mount had its own scale factor (and "polarity") set inside the train (bearing) power drive (servo) receiver-regulator (controller).
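The geometry can be sketched as follows; this is an illustration for a mount displaced along the centerline, not the receiver-regulator's actual arithmetic, and the names are assumptions:

```python
import math

def parallax_corrected_bearing(target_range, rel_bearing_deg, mount_offset):
    """Bearing to the target as seen from a mount displaced mount_offset yards
    forward (+) or aft (-) of the director along the centerline.
    target_range is in yards; rel_bearing_deg is measured from the bow."""
    b = math.radians(rel_bearing_deg)
    athwart = target_range * math.sin(b)                  # component across the ship
    fore_aft = target_range * math.cos(b) - mount_offset  # fore-and-aft component, re-origined at the mount
    return math.degrees(math.atan2(athwart, fore_aft))

# For offsets much smaller than the range, the correction is approximately
# (mount_offset / target_range) * sin(relative bearing), consistent with each
# mount needing only its own scale factor and polarity, as described above.
```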
Twice in its history, internal scale factors were changed, presumably by changing gear ratios. Target speed had a hard upper limit, set by a mechanical stop. It was originally , and subsequently doubled in each rebuild.
These computers were built by the Ford Instrument Company, Long Island City, Queens, New York. The company was named after Hannibal C. Ford, a genius designer and a principal in the company. Special machine tools machined face-cam grooves and accurately duplicated 3-D ballistic cams.
Generally speaking, these computers were very well designed and built, very rugged, and almost trouble-free. Frequent tests included entering values via the handcranks and reading results on the dials, with the time motor stopped. These were static tests. Dynamic tests were done similarly, but used gentle manual acceleration of the "time line" (integrators) to prevent possible slippage errors when the time motor was switched on; the time motor was switched off before the run was complete, and the computer was allowed to coast down. Easy manual cranking of the time line brought the dynamic test to its desired end point, when the dials were read.
As was typical of such computers, flipping a lever on the handcrank's support casting enabled automatic reception of data and disengaged the handcrank gear. Flipped the other way, the gear engaged, and power was cut to the receiver's servo motor.
The mechanisms (including servos) in this computer are described superbly, with many excellent illustrations, in the Navy publication OP 1140.
There are photographs of the computer's interior in the National Archives; some are on Web pages, and some of those have been rotated a quarter turn.
Stable Element
The function of the Mark 6 Stable Element (pictured) in this fire control system is the same as the function of the Mark 41 Stable Vertical in the main battery system. It is a vertical seeking gyroscope ("vertical gyro", in today's terms) that supplies the system with a stable up direction on a rolling and pitching ship. In surface mode, it replaces the director's elevation signal. It also has the surface mode firing keys.
It is based on a gyroscope that erects so its spin axis is vertical. The housing for the gyro rotor rotates at a low speed, on the order of 18 rpm. On opposite sides of the housing are two small tanks, partially filled with mercury, and connected by a capillary tube. Mercury flows to the lower tank, but slowly (several seconds) because of the tube's restriction. If the gyro's spin axis is not vertical, the added weight in the lower tank would pull the housing over if it were not for the gyro and the housing's rotation. That rotational speed and rate of mercury flow combine to put the heavier tank in the best position to make the gyro precess toward the vertical.
When the ship changes course rapidly at speed, the acceleration due to the turn can be enough to confuse the gyro and make it deviate from true vertical. In such cases, the ship's gyrocompass sends a disabling signal that closes a solenoid valve to block mercury flow between the tanks. The gyro's drift is low enough not to matter for short periods of time; when the ship resumes more typical cruising, the erecting system corrects for any error.
The Earth's rotation is fast enough to need correcting. A small adjustable weight on a threaded rod, set against a latitude scale, makes the gyro precess at the Earth's equivalent angular rate at the given latitude. The weight, its scale, and frame are mounted on the shaft of a synchro torque receiver fed with ship's course data from the gyro compass, and compensated by a differential synchro driven by the housing-rotator motor. The little compensator in operation is geographically oriented, so the support rod for the weight points east and west.
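The size of the correction follows from the horizontal component of the Earth's rotation, which is the rate at which the local vertical tilts in inertial space; treating that as the compensated quantity is a plausible reading for illustration, not a statement of the Mark 6 design:

```python
import math

EARTH_RATE_DEG_PER_HR = 15.04  # Earth's sidereal rotation rate

def vertical_drift_rate(latitude_deg):
    """Rate, in degrees per hour, at which the local vertical tilts in inertial
    space at the given latitude; the precession a vertical gyro must be given
    to keep following it (assumed interpretation, for illustration)."""
    return EARTH_RATE_DEG_PER_HR * math.cos(math.radians(latitude_deg))

# Example: at 40 degrees latitude the required rate is about 11.5 degrees per hour.
```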
At the top of the gyro assembly, above the compensator, right on center, is an exciter coil fed with low-voltage AC. Above that is a shallow black-painted wooden bowl, inverted. Inlaid in its surface, in grooves, are two coils essentially like two figure 8s, but shaped more like a letter D and its mirror image, forming a circle with a diametral crossover. One coil is displaced by 90 degrees. If the bowl (called an "umbrella") is not centered above the exciter coil, either or both coils have an output that represents the offset. This voltage is phase-detected and amplified to drive two DC servo motors to position the umbrella in line with the coil.
The umbrella support gimbals rotate in bearing with the gun director, and the servo motors generate level and crosslevel stabilizing signals.
The Mark 1A's director bearing receiver servo drives the pickoff gimbal frame in the stable element through a shaft between the two devices, and the Stable Element's level and crosslevel servos feed those signals back to the computer via two more shafts.
(The sonar fire-control computer aboard some destroyers of the late 1950s required roll and pitch signals for stabilizing, so a coordinate converter containing synchros, resolvers, and servos calculated the latter from gun director bearing, level, and crosslevel.)
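For small angles, the conversion performed by such a converter reduces to a rotation through the director's train angle. The sketch below uses the small-angle approximation and illustrative sign conventions, which are assumptions rather than the equipment's actual conventions:

```python
import math

def level_crosslevel_from_roll_pitch(pitch_deg, roll_deg, train_deg):
    """What a director trained train_deg from the bow would measure (small angles).
    Level is deck tilt in the vertical plane of the line of sight;
    crosslevel is tilt about the line of sight."""
    t = math.radians(train_deg)
    level = pitch_deg * math.cos(t) + roll_deg * math.sin(t)
    crosslevel = -pitch_deg * math.sin(t) + roll_deg * math.cos(t)
    return level, crosslevel

def roll_pitch_from_level_crosslevel(level_deg, crosslevel_deg, train_deg):
    """Inverse conversion, of the kind the sonar fire-control installation needed."""
    t = math.radians(train_deg)
    pitch = level_deg * math.cos(t) - crosslevel_deg * math.sin(t)
    roll = level_deg * math.sin(t) + crosslevel_deg * math.cos(t)
    return pitch, roll
```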
Fire Control Radar
The fire-control radar used with the Mark 37 GFCS evolved over time. In the 1930s, the Mark 33 Director did not have a radar antenna. The Tizard Mission to the United States provided the USN with crucial data on UK and Royal Navy radar technology and fire-control radar systems. In September 1941, the first rectangular Mark 4 fire-control radar antenna was mounted on a Mark 37 Director, and the Mark 4 became a common feature on USN directors by mid-1942. As aircraft flew faster, the Mark 4 was replaced around 1944 by a combination of the Mark 12 (rectangular antenna) and Mark 22 "orange peel" (parabolic antenna) radars (pictured) to increase speed and accuracy. By the late 1950s, Mark 37 directors had Western Electric Mark 25 X-band conical-scan radars with round, perforated dishes. Finally, the circular SPG-25 antenna was mounted on top.
Deployment
destroyers (1 per vessel, 441 total)
2 rebuilt : ,
12 (launched ca. 1939)
30 (1939 - 1942)
66 (1940 - 1942)
175 (1942 - 1944)
58 (ca. 1944)
98 (ca. 1945)
light cruisers (62 total)
TBD: Atlanta, Fargo classes
3 (ca. 1945) (2 per vessel, 6 total)
27 (launched ca. 1942 - 1945) (2 per vessel, 54 total)
one Mk37 removed on during CLG conversion
2 (ca. 1947) (4 per vessel, 8 total)
heavy cruisers (46 total)
14 (ca. 1942 - 1945) (2 per vessel, 28 total)
3 (ca. 1945) (2 per vessel, 6 total)
3 (ca. 1947) (4 per vessel, 12 total)
2 large cruisers (ca. 1943) (2 per vessel, 4 total)
aircraft carriers (2 total)
TBD: Yorktown, Essex classes, Midway(?)
: 2xMk37 refitted by May 1942
battleships (16 total)
TBD: North Carolina, South Dakota classes, all the old ones that were upgraded with 5in/38(?)
4 (launched ca. 1942 - 1943) (4 per vessel)
Mark 38 GFCS
The Mark 38 Gun Fire Control System (GFCS) controlled the large main battery guns of Iowa-class battleships. The radar systems used by the Mark 38 GFCS were far more advanced than the primitive radar sets used by the Japanese in World War II. The major components were the director, plotting room, and interconnecting data transmission equipment. The two systems, forward and aft, were complete and independent. Their plotting rooms were isolated to protect against battle damage propagating from one to the other.
Director
The forward Mark 38 Director (pictured) was situated on top of the fire control tower. The director was equipped with optical sights, optical Mark 48 Rangefinder (the long thin boxes sticking out each side), and a Mark 13 Fire Control Radar antenna (the rectangular shape sitting on top). The purpose of the director was to track the target's present bearing and range. This could be done optically with the men inside using the sights and Rangefinder, or electronically with the radar. (The fire control radar was the preferred method.) The present position of the target was called the Line-Of-Sight (LOS), and it was continuously sent down to the plotting room by synchro motors. When not using the radar's display to determine Spots, the director was the optical spotting station.
Plotting room
The Forward Main Battery Plotting Room was located below the waterline and inside the armored belt. It housed the forward system's Mark 8 Rangekeeper, Mark 41 Stable Vertical, Mark 13 FC radar controls and displays, parallax correctors, fire control switchboard, battle telephone switchboard, battery status indicators, assistant Gunnery Officers, and Fire Controlmen (FCs) (between 1954 and 1982, FCs were designated Fire Control Technicians (FTs)).
The Mark 8 Rangekeeper was an electromechanical analog computer whose function was to continuously calculate the gun's bearing and elevation, the Line-Of-Fire (LOF), to hit a future position of the target. It did this by automatically receiving information from the director (LOS), the FC radar (range), the ship's gyrocompass (true ship's course), the ship's pitometer log (ship's speed), the Stable Vertical (ship's deck tilt, sensed as level and crosslevel), and the ship's anemometer (relative wind speed and direction). Also, before the surface action started, the FTs made manual inputs for the average initial velocity of the projectiles fired from the battery's gun barrels and for air density. With all this information, the rangekeeper calculated the relative motion between its ship and the target. It could then calculate an offset angle and change of range between the target's present position (LOS) and its future position at the end of the projectile's time of flight. To this bearing and range offset it added corrections for gravity, wind, the Magnus effect of the spinning projectile, stabilizing signals originating in the Stable Vertical, the Earth's curvature, and the Coriolis effect. The result was the turret's bearing and elevation orders (LOF). During the surface action, range and deflection spots and target altitude (not zero during gun fire support) were entered manually.
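The relative-motion part of that calculation can be sketched with the classic rangekeeping relations for range rate and bearing rate. The function below is a simplified illustration; the names are assumptions, and the many ballistic and stabilizing corrections listed above are omitted:

```python
import math

def relative_motion(target_range, target_bearing, own_course, own_speed,
                    target_course, target_speed):
    """Range rate and bearing rate of the target as seen from own ship.
    target_range in yards; bearings and courses in degrees true;
    speeds in yards per second. Returns (range rate in yd/s, bearing rate in deg/s)."""
    b = math.radians(target_bearing)
    co, ct = math.radians(own_course), math.radians(target_course)
    # Component of the relative velocity along the line of sight (opening rate).
    range_rate = target_speed * math.cos(ct - b) - own_speed * math.cos(co - b)
    # Component across the line of sight, divided by range, gives the bearing rate.
    bearing_rate = (target_speed * math.sin(ct - b) - own_speed * math.sin(co - b)) / target_range
    return range_rate, math.degrees(bearing_rate)
```

Roughly speaking, carrying these rates forward over the projectile's time of flight gives the change of range and the deflection offset between the line of sight and the line of fire.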
The Mark 41 Stable Vertical was a vertical seeking gyroscope, and its function was to tell the rest of the system which-way-is-up on a rolling and pitching ship. It also held the battery's firing keys.
The Mark 13 FC radar supplied present target range, and it showed the fall of shot around the target so the Gunnery Officer could correct the system's aim with range and deflection spots put into the rangekeeper. It could also automatically track the target by controlling the director's bearing power drive. Because of radar, fire control systems could track and fire at targets at greater range and with increased accuracy by day, by night, or in inclement weather. This was demonstrated in November 1942 when the battleship engaged the Imperial Japanese Navy battlecruiser at a range of at night. The engagement left Kirishima in flames, and she was ultimately scuttled by her crew. This gave the United States Navy a major advantage in World War II, as the Japanese did not develop radar or automated fire control to the level of the US Navy.
The parallax correctors were needed because the turrets were located hundreds of feet from the director. There was one for each turret, and each had the turret-to-director distance manually set in. They automatically received relative target bearing (bearing from own ship's bow) and target range, and they corrected the bearing order for each turret so that all rounds fired in a salvo converged on the same point.
The fire control switchboard configured the battery. With it, the Gunnery Officer could mix and match the three turrets to the two GFCSs. He could have the turrets all controlled by the forward system, all controlled by the aft system, or split the battery to shoot at two targets.
The assistant Gunnery Officers and Fire Control Technicians operated the equipment, talked to the turrets and ship's command by sound-powered telephone, and watched the Rangekeeper's dials and system status indicators for problems. If a problem arose, they could correct the problem, or reconfigure the system to mitigate its effect.
Mark 51 Fire Control System
The Bofors 40 mm gun was arguably the best light anti-aircraft weapon of World War II, employed on almost every major warship in the U.S. and UK fleets from about 1943 to 1945. It was most effective on ships as large as destroyer escorts or larger; coupled with electric-hydraulic drives for greater speed and the Mark 51 Director (pictured) for improved accuracy, the Bofors 40 mm gun became a fearsome adversary, accounting for roughly half of all Japanese aircraft shot down between 1 October 1944 and 1 February 1945.
Mark 56 GFCS
This GFCS was an intermediate-range anti-aircraft gun fire-control system designed for use against high-speed subsonic aircraft. It could also be used against surface targets. It was a dual-ballistic system, meaning it could simultaneously produce gun orders for two different gun types (e.g., 5"/38 cal and 3"/50 cal) against the same target. Its Mark 35 radar was capable of automatic tracking in bearing, elevation, and range that was as accurate as any optical tracking. The whole system could be controlled from the below-decks plotting room with or without the director being manned, allowing rapid target acquisition when a target was first detected and designated by the ship's air-search radar and was not yet visible from on deck. Its target solution time was less than two seconds after Mark 35 radar lock-on. It was designed toward the end of World War II, apparently in response to Japanese kamikaze aircraft attacks. It was conceived by Ivan Getting, mentioned near the end of his oral history, and its linkage computer was designed by Antonín Svoboda. Its gun director was not shaped like a box, and it had no optical rangefinder. The system was manned by a crew of four. On the left side of the director was the cockpit, where the Control Officer stood behind the seated Director Operator (also called the Director Pointer). Below decks in Plot was the Mark 4 radar console, where the Radar Operator and Radar Tracker sat. The director's movement in bearing was unlimited because it had slip rings in its pedestal. (The Mark 37 gun director had a cable connection to the hull and occasionally had to be "unwound".) Fig. 26E8 on this Web page shows the director in considerable detail.
The explanatory drawings of the system show how it works but differ greatly in physical appearance from the actual internal mechanisms, perhaps intentionally so. They omit any significant description of the linkage computer's mechanism. Even so, the chapter containing them is an excellent detailed reference that explains much of the system's design, which is quite ingenious and forward-thinking in several respects.
In the 1968 upgrade to for service off Vietnam, three Mark 56 Gun Fire Control Systems were installed: one on each side just forward of the aft stack, and one between the aft mast and the aft Mark 38 Director tower. This increased New Jersey's anti-aircraft capability, because the Mark 56 system could track and shoot at faster planes.
Mark 63 GFCS
The Mark 63 was introduced in 1953 for the twin QF 4-inch naval gun Mk XVI and Mk.33 twin 3"/50 cal guns. The GFCS consists of an AN/SPG-34 radar tracker and a Mark 29 gun sight.
Mark 68 GFCS
Introduced in the early 1950s, the Mark 68 was an upgrade from the Mark 37, effective against both air and surface targets. It combined a manned topside director, a conical-scan acquisition and tracking radar, an analog computer to compute ballistic solutions, and a gyro stabilization unit.
The gun director was mounted in a large yoke, and the whole director was stabilized in crosslevel (the yoke's pivot axis). That axis was in a vertical plane that included the line of sight.
At least as of 1958, the computer was the Mark 47, a hybrid electronic/electromechanical system. Somewhat akin to the Mark 1A, it had high-precision electrical resolvers instead of the mechanical ones of earlier machines, and it multiplied with precision linear potentiometers. However, it still had disc/roller integrators as well as shafting to interconnect the mechanical elements. Whereas access to much of the Mark 1A required time-consuming and careful disassembly (days in some instances, and possibly a week to gain access to deeply buried mechanisms), the Mark 47 was built on thick support plates mounted behind the front panels on slides that permitted its six major sections to be pulled out of its housing for easy access to any of its parts. (The sections, when pulled out, moved fore and aft; they were heavy and not counterbalanced. Typically, a ship rolls through a much larger angle than it pitches.) The Mark 47 probably had 3-D cams for ballistics, but information on it appears very difficult to obtain.
Mechanical connections between major sections were via shafts in the extreme rear, with couplings permitting disconnection without any attention, and probably relief springs to aid re-engagement. One might think that rotating an output shaft by hand in a pulled-out section would misalign the computer, but the type of data transmission of all such shafts did not represent magnitude; only the incremental rotation of such shafts conveyed data, and it was summed by differentials at the receiving end. One such kind of quantity is the output from the roller of a mechanical integrator; the position of the roller at any given time is immaterial; it is only the incrementing and decrementing that counts.
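The purely incremental nature of that data transmission can be illustrated in a few lines (an illustration of the principle, not a model of the actual shafting and differentials):

```python
class IncrementalReceiver:
    """Sums received increments, like a differential adding shaft rotations;
    the absolute shaft position at either end is immaterial."""
    def __init__(self, initial=0.0):
        self.total = initial

    def receive(self, delta):
        self.total += delta
        return self.total

# Turning a pulled-out section's shaft by hand changes its absolute position,
# but only the increments transmitted while connected affect the receiver's total.
```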
Whereas the Mark 1/1A computations for the stabilizing component of gun orders had to be approximations, they were theoretically exact in the Mark 47 computer, computed by an electrical resolver chain.
The design of the computer was based on a rethinking of the fire control problem, which it treated quite differently from its predecessors.
Production of this system lasted for over 25 years. A digital upgrade was available from 1975 to 1985, and it was in service into the 2000s. The digital upgrade was evolved for use in the s.
The AN/SPG-53 was a United States Navy gun fire-control radar used in conjunction with the Mark 68 gun fire-control system. It was used with the 5"/54 caliber Mark 42 gun system aboard s, s, s, Farragut-class destroyers, s, and s, as well as others.
US Navy computerized fire control systems
Mark 86 GFCS
The US Navy desired a digital computerized gun fire-control system in 1961 for more accurate shore bombardment. Lockheed Electronics produced a prototype with AN/SPQ-9 radar fire control in 1965. An air-defense requirement delayed production with the AN/SPG-60 until 1971. The Mark 86 did not enter service until the nuclear-powered missile cruiser was commissioned in February 1974, and it was subsequently installed on US cruisers and amphibious assault ships. The last US ship to receive the system was commissioned in July 1994.
The Mark 86 on Aegis-equipped ships controls the ship's 5"/54 caliber Mark 45 gun mounts and can engage up to two targets at a time. It also uses a Remote Optical Sighting system, which employs a TV camera with a telephoto zoom lens mounted on the mast and on each of the illuminating radars.
Mark 34 Gun Weapon System (GWS)
The Mark 34 Gun Weapon System comes in various versions. It is an integral part of the Aegis combat weapon system on guided missile destroyers and Modified s. It combines the Mark 45 5"/54 or 5"/62 caliber gun mount, the Mark 46 Optical Sight System or Mark 20 Electro-Optical Sight System, and the Mark 160 Mod 4–11 Gunfire Control System / Gun Computer System. Other versions of the Mark 34 GWS are used by foreign navies as well as the US Coast Guard, with each configuration having its own unique camera and/or gun system. It can be used against surface ships and close-in hostile aircraft, and for Naval Gunfire Support (NGFS) against shore targets.
Mark 92 Fire Control System (FCS)
The Mark 92 fire control system, an Americanized version of the WM-25 system designed in the Netherlands, was approved for service use in 1975. It is deployed on board the relatively small and austere to control the Mark 75 naval gun and the Mark 13 guided missile launching system (the missiles have since been removed, following the retirement of that version of the Standard missile). The Mod 1 system used in PHMs (retired) and the US Coast Guard's WMEC and WHEC ships can track one air or surface target using the monopulse tracker and two surface or shore targets. Oliver Hazard Perry-class frigates with the Mod 2 system can track an additional air or surface target using the Separate Track Illuminating Radar (STIR).
Mark 160 Gun Computing System
Used in the Mark 34 Gun Weapon System, the Mark 160 Gun Computing System (GCS) contains a gun console computer (GCC), a computer display console (CDC), a magnetic tape recorder-reproducer, a watertight cabinet housing the signal data converter and gun mount microprocessor, a gun mount control panel (GMCP), and a velocimeter.
See also
Close-in weapon system
Director (military)
Fire-control system – ground, sea, and air-based systems
Mathematical discussion of rangekeeping
Rangekeeper – shipboard analog fire-control computer
Notes
Citations
Bibliography
External links
The British High Angle Control System (HACS)
Best Battleship Fire control – Comparison of World War II battleship systems
Appendix one, Classification of Director Instruments
HACS III Operating manual Part 1
HACS III Operating manual Part 2
USS Enterprise Action Log
The RN Pocket Gunnery Book
Fire Control Fundamentals
Manual for the Mark 1 and Mark 1a Computer
Maintenance Manual for the Mark 1 Computer
Manual for the Mark 6 Stable Element
Gun Fire Control System Mark 37 Operating Instructions at ibiblio.org
Director section of Mark 1 Mod 1 computer operations at NavSource.org
Naval Ordnance and Gunnery, Vol. 2, Chapter 25, AA Fire Control Systems
Anti-aircraft artillery
Naval weapons of the United States
Naval artillery
fire control system
Artillery operation
World War II naval weapons
Artillery of the United States
Fire-control computers of World War II